Software Testing Concepts
Revision as of 04:53, 1 January 2022
Overview
Automated testing provides an effective mechanism for catching regressions, especially when combined with test-driven development. Well-written tests can also act as a form of system documentation, but if not used carefully, writing tests may produce meaningless boilerplate test cases. Automated testing is also valuable for covering corner cases that rarely arise in normal operations. Thinking about writing the tests improves the understanding of the problem and in some cases leads to better solutions.
Testing is ultimately about managing risks.
Test-Driven Development (TDD)
Test-Driven Development (TDD) is an agile software development technique for writing software that guides software development by writing tests. It consists of the following steps, applied repeatedly:
- Write a test for the next bit of functionality you want to add.
- Write the functional code until the test passes.
- Refactor both new and old code to make it well structured.
Writing the tests first and exercising them in the process of writing the code makes the code more modular, and also prevents it from breaking when changed later - the tests become part of the code base. This is referred to as "building quality in" instead of "testing quality in".
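The red-green-refactor cycle above can be sketched as follows. This is a hypothetical example - the class and method names are invented for illustration - showing a test written before the code it exercises.

```java
// Hypothetical TDD example (all names invented).
// Step 1: write the test first -- it fails because applyDiscount does not exist yet.
// Step 2: write just enough functional code to make the test pass.
// Step 3: refactor while the test stays green.
public class PriceCalculator {

    // Step 2: the minimal implementation that satisfies the test below.
    public static double applyDiscount(double price, double percent) {
        return price - price * percent / 100.0;
    }

    // Step 1: the test, conceptually written before the implementation above.
    public static void main(String[] args) {
        if (applyDiscount(100.0, 10.0) != 90.0)
            throw new AssertionError("10% off 100 should be 90");
        if (applyDiscount(50.0, 0.0) != 50.0)
            throw new AssertionError("0% discount leaves the price unchanged");
        System.out.println("all tests pass");
    }
}
```

Once the test passes, both the test and the implementation stay in the code base, guarding the behavior during later refactoring.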
Automated Test
An automated test verifies an assumption about the behavior of the system and provides a safety net that is exercised continuously, again and again, in an automated fashion, in most cases on each commit to the repository. The benefit of automated testing is that the software is continuously verified, maintaining its quality. Another benefit of tests is that they serve as documentation for the code. Yet another benefit is that they enable refactoring.
Continuous Testing
Continuous testing means the test suite is run as often as possible, ideally after each commit. The CI build and the CD pipeline should run immediately after someone pushes a change to the codebase. Continuous testing provides continuous feedback. Quick feedback, ideally immediately after a change was committed, allows developers to respond to the feedback with as little interruption to their flow as possible. Additionally, the more often the tests run, the smaller the scope of change examined on each run, so it is easier to figure out where problems were introduced, if any.
Test Categories
Unit Test
Unit testing is the testing of the smallest possible part of the software, such as a single method, a small set of related methods or a class. In reality we test logical units, which can extend to a method, a single class or multiple classes. A unit test has the following characteristics:
- It should be automated. Continually verified assumptions encourage refactoring. Code should not be refactored without proper automated test coverage.
- It should be reliable - it should fail if, and only if, the production code is broken. If the test starts failing for some other reason, for example because the internet connection is unavailable, that implies the test itself is broken.
- It should be fast - no more than a few milliseconds to finish execution.
- It should be self-contained and runnable in isolation, and then in any order as part of the test suite. The unit test should not depend on the result of another test or on the execution order, or external state (such as configuration files stored on the developer's machine). One should understand what is going on in a unit test without having to look at other parts of the code.
- It should not depend on database access, or any long-running task. If external dependencies are necessary to test the logic, they should be replaced with test doubles.
- It should be time (time of the day, timezone) and location independent.
- It should be meaningful. Getter or setter testing is not meaningful.
- It should be usable as documentation: readable and expressive.
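A test with these characteristics might look like the following sketch. The names are invented for illustration; the point is that the test is fast, deterministic, order-independent, and touches no external state.

```java
// Hypothetical sketch of a self-contained unit test (names invented).
// It is fast, repeatable, and independent of time, location, and external state.
public class LeapYearTest {

    // Code under test: a small logical unit with no external dependencies.
    static boolean isLeapYear(int year) {
        return (year % 4 == 0 && year % 100 != 0) || year % 400 == 0;
    }

    public static void main(String[] args) {
        check(isLeapYear(2020), "2020 is a leap year");
        check(!isLeapYear(1900), "1900 is not (divisible by 100 but not by 400)");
        check(isLeapYear(2000), "2000 is (divisible by 400)");
        System.out.println("all assertions hold");
    }

    static void check(boolean condition, String message) {
        if (!condition) throw new AssertionError(message);
    }
}
```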
External Dependency
The following are examples of situations when the test relies on external dependencies:
- The test acquires a database connection and fetches/updates data.
- The test connects to the internet and downloads files.
- The test interacts with a mail server to send an e-mail.
- The test looks up JNDI objects.
- The test invokes a web service.
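The usual way to keep such dependencies out of the unit test cycle is to hide them behind an interface the test can substitute. The following sketch uses invented names and takes the mail server case as an example: the production code depends only on the interface, so the test never touches the network.

```java
// Hypothetical sketch (all names invented): hiding an external dependency
// (a mail server) behind an interface so the unit test stays self-contained.
interface MailSender {
    void send(String to, String body);   // real implementation would use SMTP
}

// Production code depends only on the interface, not on a real mail client.
class WelcomeService {
    private final MailSender sender;
    WelcomeService(MailSender sender) { this.sender = sender; }

    void greet(String email) {
        sender.send(email, "Welcome!");
    }
}

public class WelcomeServiceTest {
    public static void main(String[] args) {
        // A test double stands in for the real mail server and captures the call.
        final String[] captured = new String[2];
        MailSender testSender = (to, body) -> { captured[0] = to; captured[1] = body; };

        new WelcomeService(testSender).greet("a@b.c");

        if (!"a@b.c".equals(captured[0])) throw new AssertionError("wrong recipient");
        if (!"Welcome!".equals(captured[1])) throw new AssertionError("wrong body");
        System.out.println("ok");
    }
}
```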
Integration Test
Integration tests depend on real external dependencies, such as databases, and are inherently slower than unit tests. Integration tests should be automated, but they should run outside the continuous unit test feedback loop.
UI-Driven Journey Test
Acceptance Test
Acceptance tests are written by analysts and other stakeholders, typically in a user story format.
Other Kinds of Tests
When people think about automated testing, they generally think about functional tests like unit tests, integration tests or UI-driven journey tests. However, the scope of risks is broader than functional defects, so the scope of validation should be broader as well. Constraints and requirements beyond the purely functional ones are called Non-Functional Requirements (NFR) or Cross-Functional Requirements (CFR) (see A Decade of Cross Functional Requirements (CFRs) by Sarah Taraporewalla):
Code Quality
Security
Compliance
Performance
Scalability
Automated tests can prove that scaling works correctly.
Availability
Automated tests can prove that failover works.
Test Suite
A test suite is a collection of automated tests that are run as a group.
Test Double
A test double is meant to replace a real external dependency in the unit test cycle, isolating the code under test from the real dependency and allowing the test to run in standalone mode. This may be necessary either because the external dependency is unavailable, or because the interaction with it is slow. The term was introduced by Gerard Meszaros in his xUnit Test Patterns book.
Dummy
A dummy object is passed as a mandatory parameter but is not directly used by the test code or the code under test. The dummy object is required for the creation of another object needed in the code under test. When implemented as a class, all of its methods should throw a "Not Implemented" runtime exception.
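A minimal sketch of a dummy, with invented names: the constructor demands a collaborator, but the behavior under test never touches it, so the dummy fails loudly if it is ever used.

```java
// Hypothetical dummy sketch (all names invented).
interface Logger {
    void log(String message);
}

class OrderService {
    private final Logger logger;                 // required by the constructor
    OrderService(Logger logger) { this.logger = logger; }

    // The behavior under test never uses the logger.
    int total(int unitPrice, int quantity) {
        return unitPrice * quantity;
    }
}

public class DummyExample {
    public static void main(String[] args) {
        // Dummy: only satisfies the signature; throws if accidentally used.
        Logger dummyLogger = message -> {
            throw new UnsupportedOperationException("Not implemented");
        };

        int total = new OrderService(dummyLogger).total(5, 3);
        if (total != 15) throw new AssertionError("expected 15, got " + total);
        System.out.println("ok");
    }
}
```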
Stub
A stub delivers indirect inputs to the caller when the stub's methods are invoked. Stubs are programmed only for the test scope. A stub's methods can be programmed to return hardcoded results or to throw specific exceptions, which makes stubs useful for impersonating error conditions in external dependencies. The same result can be achieved with a mock.
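A stub might look like the following sketch (names invented): it returns one hardcoded value and impersonates a failing external dependency for everything else.

```java
// Hypothetical stub sketch (all names invented): the stub feeds hardcoded
// indirect inputs to the code under test, including an error condition.
interface RateProvider {
    double rateFor(String currency);   // real version would invoke a web service
}

class Converter {
    private final RateProvider rates;
    Converter(RateProvider rates) { this.rates = rates; }

    double convert(double amount, String currency) {
        return amount * rates.rateFor(currency);
    }
}

public class StubExample {
    public static void main(String[] args) {
        // Stub programmed only for the test scope: one hardcoded rate,
        // everything else impersonates an unavailable external dependency.
        RateProvider stub = currency -> {
            if ("EUR".equals(currency)) return 2.0;
            throw new IllegalStateException("service unavailable");
        };

        Converter converter = new Converter(stub);
        if (converter.convert(10.0, "EUR") != 20.0) throw new AssertionError("bad rate");

        boolean threw = false;
        try {
            converter.convert(10.0, "JPY");
        } catch (IllegalStateException expected) {
            threw = true;
        }
        if (!threw) throw new AssertionError("error condition was not propagated");
        System.out.println("ok");
    }
}
```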
Spy
A spy is a variation of a stub but instead of only setting the expectation, a spy records the method calls made to the collaborator. A spy can act as an indirect output of the unit under test and can also act as an audit log.
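A spy can be as simple as a list that accumulates calls, as in this sketch with invented names: the test inspects the recorded calls as the unit's indirect output.

```java
// Hypothetical spy sketch (all names invented): it records every call made
// to the collaborator so the test can inspect the unit's indirect output.
import java.util.ArrayList;
import java.util.List;

interface AuditLog {
    void record(String entry);
}

class AccountService {
    private final AuditLog audit;
    AccountService(AuditLog audit) { this.audit = audit; }

    void withdraw(int amount) {
        audit.record("withdraw:" + amount);
    }
}

public class SpyExample {
    public static void main(String[] args) {
        List<String> calls = new ArrayList<>();
        AuditLog spy = calls::add;       // the spy simply records each call

        AccountService service = new AccountService(spy);
        service.withdraw(40);
        service.withdraw(60);

        if (calls.size() != 2) throw new AssertionError("expected 2 recorded calls");
        if (!"withdraw:40".equals(calls.get(0))) throw new AssertionError("wrong first call");
        System.out.println("ok");
    }
}
```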
Mock
A mock object is a combination of a stub and a spy. It stubs methods to return values and throw exceptions, like a stub, but it also acts as an indirect output for the code under test, like a spy. A mock object fails a test if an expected method is not invoked, or if the parameters of the method do not match expectations. The Mockito framework provides an API to mock objects.
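A framework like Mockito automates this pattern; the following hand-rolled sketch (all names invented) shows the idea: the mock both stubs a return value and fails the test if the expected interaction, with the expected parameters, never happens.

```java
// Hand-rolled mock sketch (all names invented); frameworks such as Mockito
// generate this behavior. The mock stubs a return value AND verifies that
// the expected call was made with the expected parameters.
interface PaymentGateway {
    boolean charge(String account, int cents);
}

class CheckoutService {
    private final PaymentGateway gateway;
    CheckoutService(PaymentGateway gateway) { this.gateway = gateway; }

    boolean checkout(String account, int cents) {
        return gateway.charge(account, cents);
    }
}

public class MockExample {
    public static void main(String[] args) {
        final boolean[] charged = {false};

        PaymentGateway mock = (account, cents) -> {
            if (!"acct-1".equals(account) || cents != 250) {
                throw new AssertionError("unexpected parameters");
            }
            charged[0] = true;
            return true;                           // stubbed return value
        };

        boolean ok = new CheckoutService(mock).checkout("acct-1", 250);

        if (!ok) throw new AssertionError("checkout should succeed");
        if (!charged[0]) throw new AssertionError("expected charge() was never invoked");
        System.out.println("ok");
    }
}
```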
Fake
A fake object is a test double with real logic, unlike a stub, but much more simplified and cheaper than the real external dependency. The external dependency of the unit is replaced with the fake so that the behavior of the dependent object can be controlled and observed from the test. A classic example is a database fake: an entirely in-memory, non-persistent database that is otherwise fully functional. By contrast, a database stub would simply return a fixed value.
Fake objects are extensively used in legacy code, in scenarios like:
- the real object cannot be instantiated (such as when the constructor reads a file or performs a JNDI lookup).
- the real object has slow methods - e.g. a calculate() method that invokes a load() method that reads from a database.
Fake objects are working implementations. A fake class extends the original class, but it usually performs some sort of hack that makes it unsuitable for production.
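An in-memory repository is a typical fake, sketched here with invented names: it has real, working logic, but keeps everything in memory and is therefore non-persistent and unsuitable for production.

```java
// Hypothetical fake sketch (all names invented): a working in-memory
// implementation standing in for a real database-backed repository.
import java.util.HashMap;
import java.util.Map;

interface UserRepository {
    void save(String id, String name);
    String findName(String id);        // real version would query a database
}

// Fake: real logic, but in-memory and non-persistent -- not for production.
class InMemoryUserRepository implements UserRepository {
    private final Map<String, String> rows = new HashMap<>();
    public void save(String id, String name) { rows.put(id, name); }
    public String findName(String id) { return rows.get(id); }
}

public class FakeExample {
    public static void main(String[] args) {
        UserRepository repo = new InMemoryUserRepository();
        repo.save("42", "Ada");

        if (!"Ada".equals(repo.findName("42"))) throw new AssertionError("lookup failed");
        if (repo.findName("7") != null) throw new AssertionError("unexpected row");
        System.out.println("ok");
    }
}
```

Unlike a stub, the fake behaves consistently across calls: whatever the test saves, it can read back.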