Testing Repository
Single-user Test Harness and Assertions
11/30/2022
1.0 Concept:
Test Type details
- Construction Tests:
These tests are implemented as an integral part of the implementation of a package's code. We add a few lines of code or a small function, then add a test to ensure that the new code functions as expected. If we need more than one simple test to verify the new code, then we aren't testing often enough. If a test fails, we know where to find the problem: in the last few lines of code. The test code may be part of the package's main function or may reside in a separate test package.
- Unit Tests:
The intent of unit testing is to ensure that tested code meets all its obligations in a robust manner. That entails testing every path through the code and testing boundary conditions, e.g., the beginning and end of the computational range, all cases the code may need to execute, and success or failure when executing operations that may fail, like opening streams or connecting a socket. Unit tests are labor intensive, so we may elect to unit test only those packages on which many other packages depend.
- Regression Tests:
Regression tests are typically conducted over a library or large subsystem during its implementation. Each regression test contains a set of test cases that are executed individually, usually in a predetermined sequence. It is very common to use a test harness to aggregate all the tests and apply them to the library or subsystem whenever there is a significant change. The idea is to discover, early, problems caused by changes in dependencies or in the platform on which the code executes.
- Performance Tests:
Performance testing attempts to construct tests that:
- Compare two processing streams satisfying the same obligations, to see which has higher throughput, lower latency, or better values of other performance metrics.
- Make testing overhead a negligible part of the complete test process, by pulling as much overhead as possible into initial and final activities that are not included in measured outputs.
- Run many times to amortize any remaining startup and shutdown costs, and to average over environmental effects that have nothing to do with the comparison but happen to occur during testing.
Test Properties details
- Tests should be repeatable, with the same results every time.
That implies that each test has a "setup" process that guarantees the testing environment is in a fixed state at the beginning of testing. We may choose to do that with an initialize function, or we may use a test class for each test that sets up the environment in its constructor.
- Test normal and abnormal conditions as completely as practical.
We do that by planning each test, defining input data that provides both expected and possible-but-unexpected conditions. It helps to define the following functions, where predicate is a boolean-valued operation on the test environment and/or code state:
- Requires(predicate) defines conditions that are expected to hold before an operation begins.
- Ensures(predicate) defines conditions that are expected to hold after an operation completes.
- Assert(predicate) defines conditions that should be true at specific places in an operation.
- Visualize operation results.
Evaluating all the conditions above often produces a lot of raw data about the environment and code states. We need a way to selectively display that to a test developer. That means we need a logging facility that can write to the console, to test data files, or both. We want to be able to select the level of display, so we get very little output when tests are running successfully, but much more detail when operations fail or results are not as expected.
2.0 Design:
- TestSequencer runs test sequences. Each TestClass of the sequence is registered with the TestSequencer. When started, it iterates over its TestClasses and, for each, passes the TestClass instance to an instance of Executor.
- Executor executes each test within a try-catch block and announces the result. The purpose of this class is to avoid littering test code with try-catch blocks for each test and with code to announce the results.
- TestClass is provided by the test developer. It is required to implement the ITest interface and to bind to the code to be tested. The Executor tests a single bool TestClass::test() function. Often that function will execute several lower-level test functions in a local Executor instance and return false unless all of the internal test functions pass.
- TestExecutive, created by the test developer, contains the main entry point for testing. It may be just that main, or it may have its own implementation class(es).
- The TestSequencer and Executor can also execute tests where each test is defined by a pointer to a test function, i.e., one with the function signature bool(*)().
- All of this is illustrated by code in this repository.
Each test provides:
- A name
- A brief test description, a.k.a. a test story
- A description of the required environment and dependencies
- Expected results
The demonstration code in this repository includes:
- The TestWidgetClass main method bool TestWidgetClass::test()
- Child test methods executed with an Executor instance in that function
- Comments that describe each of the tests