Testing Repository
11/30/2022

Single-user Test Harness and Assertions

Quick Status:
  Code functions correctly:   no known defects
  Demonstration code:         yes
  Documentation:              yes
  Test cases:                 yes
  Static library:             not yet
  Planned design changes:     none for now

1.0 Concept:

When a code project contains more than one or two packages, adequate testing is likely to require a sequence of test cases. There are four types of tests needed for code in this repository: Construction, Unit, Regression, and Performance tests.
Test Types:
  1. Construction Tests:
    These tests are implemented as an integral part of the implementation of a package's code. We add a few lines of code or a small function, then add a test to ensure that the new code functions as expected. If we need more than one simple test to verify the new code, then we aren't testing often enough. If a test fails, we know where to find the problem - in the last few lines of code. The test code may be part of the package's main function or may reside in a separate test package.
  2. Unit Tests:
    The intent of unit testing is to attempt to ensure that tested code meets all its obligations in a robust manner. That entails testing every path through the code and testing boundary conditions, e.g., beginning and end of the computational range, all cases that it may need to execute, and success or failure when executing operations that may fail, like opening streams or connecting a socket. Unit tests are labor intensive, and we may elect to unit test only those packages on which many other packages depend.
  3. Regression Tests:
    Regression tests are tests typically conducted over a library or large subsystem during its implementation. Each regression test contains a set of test cases that are executed individually, usually in a predetermined sequence. It is very common to use a test harness to aggregate all the tests and apply them to the library or subsystem whenever there is significant change. The idea is to discover problems early that are due to changes in dependencies or the platform on which the code executes.
  4. Performance Tests:
    Performance testing attempts to construct tests that:
    • Compare two processing streams satisfying the same obligations, to see which has higher throughput, lower latency, or other performance metrics.
    • Attempt to make testing overhead a negligible part of the complete test process, by pulling as much overhead as possible into initial and final activities that are not included in measured outputs.
    • Run many times to amortize any remaining startup and shutdown, and average over environmental effects that may have nothing to do with the comparison, but happen to occur during testing.
    Often, a single iteration of a test may run fast enough that it is not possible to accurately measure the time consumed, so running many iterations is also a way of improving measurement accuracy.
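As a sketch of the amortization idea (this helper is illustrative, not part of the repository's code), timing many iterations and dividing keeps per-call overhead out of the measurement:

```cpp
#include <chrono>
#include <cstddef>

// Illustrative helper: run op() `iterations` times and return the
// average cost per iteration in nanoseconds.  Setup and teardown
// happen before and after the timed region, and the loop amortizes
// whatever startup cost remains inside it.
template <typename Op>
double avgNanos(Op op, std::size_t iterations) {
  using clock = std::chrono::steady_clock;
  auto start = clock::now();
  for (std::size_t i = 0; i < iterations; ++i)
    op();                                // only the operation is timed
  auto stop = clock::now();
  auto total =
      std::chrono::duration_cast<std::chrono::nanoseconds>(stop - start);
  return static_cast<double>(total.count()) /
         static_cast<double>(iterations);
}
```

Comparing two processing streams then reduces to calling the helper once per candidate with the same iteration count.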
For construction tests, we provide simple tests that are quick to write and don't require a lot of analysis to build. For unit, regression, and performance tests we need to be more careful. These tests should satisfy three properties: they should be repeatable, test both normal and abnormal conditions, and present test output as information, not data.
Test Properties:
  1. Tests should be repeatable with the same results every time.
    That implies that each test has a "setup" process that guarantees the testing environment is in a fixed state at the beginning of testing. We may choose to do that with an initialize function, or we may use a test class for each test that sets up the environment in its constructor.
  2. Test normal and abnormal conditions as completely as practical.
    We do that by planning each test, defining input data to provide both expected and possible but unexpected conditions. It helps to define functions:
    • Requires(predicate)
      defines conditions that are expected to hold before an operation begins.
    • Ensures(predicate)
      defines conditions that are expected to hold after an operation completes.
    • Assert(predicate)
      defines conditions that should be true at specific places in an operation.
    where predicate is a boolean-valued operation on the test environment and/or code state.
  3. Visualize operation results.
    Evaluating all the conditions above often results in a lot of raw data about the environment and code states. We need a way to selectively display that to a test developer. That means we need a logging facility that can write to the console, to test data files, or both. We want to be able to select the levels of display, so we get very little output when the tests are running successfully, but with a lot more detail when operations fail or are not as expected.
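A minimal sketch of the three predicate helpers described above (the reporting behavior here is an assumption; the repository's versions may log or throw instead):

```cpp
#include <iostream>
#include <string>

// Each helper evaluates a predicate and reports a violation;
// returning the predicate's value lets callers combine results.
inline bool check(bool predicate, const char* kind, const std::string& msg) {
  if (!predicate)
    std::cout << "\n  " << kind << " violated: " << msg;
  return predicate;
}
inline bool Requires(bool predicate, const std::string& msg) {
  return check(predicate, "Requires", msg);   // pre-condition
}
inline bool Ensures(bool predicate, const std::string& msg) {
  return check(predicate, "Ensures", msg);    // post-condition
}
inline bool Assert(bool predicate, const std::string& msg) {
  return check(predicate, "Assert", msg);     // mid-operation condition
}
```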
For these thorough tests it is common to write a brief test specification which clearly defines the expected test results, initial setup, and any additional instructions for test developers that may be needed (ideally none). When unit, regression, and performance tests are concluded, a test report, generated by a logging facility, is saved in the appropriate code repository. This should have a summary of what passed and what failed, along with whatever data was logged during the final tests.

2.0 Design:

This repository contains code that implements the concept. The code structure is shown in Fig 1., below.

[Fig 1. Test Harness Classes]

Here's the way it works: the test developer creates a TestExecutive package that may consist of one or more TestClass instances bound to code to test. Each TestClass derives from ITest and so is required to implement bool TestClass::test(), which will be executed by the test harness. The TestExecutive main function registers each TestClass with the TestSequencer. The sequencer is then run with the method bool TestSequencer::doTests(). Each test is passed to the Executor to run in the context of a try-catch block, which announces its results. Should a test throw an exception, that test fails and testing continues: the Executor catches the exception, simply announces failure, then continues with the next test.

The main test method, bool TestClass::test(), is likely to contain a series of child test methods with the same signature, e.g., bool(TestClass::*)(), so the main method can create a local Executor instance in which to run each of these test functions.

Single-user Test Harness

The single-user test harness environment has four classes:
  1. TestSequencer runs test sequences. Each TestClass of the sequence is registered with the TestSequencer. When started, it iterates over its TestClasses and, for each, passes the TestClass instance to an instance of Executor [1].
  2. Executor executes each test within a try-catch block and annunciates the result. The purpose of this class is to avoid littering test code with try-catch blocks for each test and with code to announce the results.
  3. TestClass is provided by the test developer. It is required to implement the ITest interface and to bind to code to be tested. The Executor tests a single bool TestClass::test() function. Often that function will execute several lower-level test functions in a local Executor instance and return false unless all of the internal test functions pass.
  4. TestExecutive, created by the test developer, contains the main entry point for testing [2]. It may be just that main or may have its own implementation class(es).

  [1] The TestSequencer and Executor can also execute tests where each test is defined by a pointer to a test function, e.g., has the function signature bool(*)().
  [2] All of this is illustrated by code in this repository.
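The four-class structure above can be compressed into a sketch (simplified; the repository's actual declarations differ in detail):

```cpp
#include <exception>
#include <iostream>
#include <memory>
#include <string>
#include <vector>

// ITest: the interface every TestClass must implement.
struct ITest {
  virtual ~ITest() = default;
  virtual bool test() = 0;
  virtual std::string name() = 0;
};

// Executor: runs one test inside a try-catch block and announces the
// result, so test code isn't littered with try-catch blocks.
class Executor {
 public:
  bool execute(ITest& t) {
    bool passed = false;
    try {
      passed = t.test();
    } catch (const std::exception&) {
      std::cout << "\n  exception thrown";   // a throw means failure
    }
    std::cout << "\n  " << t.name() << (passed ? " passed" : " failed");
    return passed;
  }
};

// TestSequencer: registered TestClasses are executed in order; a
// failing or throwing test does not stop the sequence.
class TestSequencer {
 public:
  void reg(std::shared_ptr<ITest> pTest) { tests_.push_back(std::move(pTest)); }
  bool doTests() {
    bool allPassed = true;
    for (auto& pTest : tests_)
      allPassed = executor_.execute(*pTest) && allPassed;
    return allPassed;
  }
 private:
  std::vector<std::shared_ptr<ITest>> tests_;
  Executor executor_;
};
```

The TestExecutive is then just a main that registers concrete TestClasses and calls doTests().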
The code contained in this repository has a demonstration test class TestWidgetClass and demonstration tested code WidgetClass. We show, in the blocks below, partial code and results of running the code.

Test Case:

A test case contains, in comments:
  1. A name
  2. Brief test description a.k.a. test story
  3. Description of required environment and dependencies
  4. Expected results
The demo code block below illustrates:
  1. TestWidgetClass main method bool TestWidgetClass::test()
  2. Child test methods being executed with an Executor instance in that function.
  3. Comments that describe each of the tests.
These comments are relatively simple because this is demo code.
Test Output:

  Testing TestClass
  ===================
  Testing TestWidgetClass
    test1 passed
    test2 passed
    test3 passed
    exception thrown
    test4 failed
  at least one test failed

  Testing test functions
  ------------------------
  testing tester
    testTester passed
    alwaysFails failed

  Testing TestSequencer
  -----------------------
  testing tester
    testTester passed
    alwaysFails failed
  Testing TestWidgetClass
    test1 passed
    test2 passed
    test3 passed
    exception thrown
    test4 failed
  TestWidgetClass failed
  Widget testItem destroyed
  Widget testItem destroyed
Demo Sample Code:

/*---------------------------------------------------------
  Test Description:
  - Demonstrate testing using test harness TestSequencer
  - Can be run outside the TestSequencer
  Test Environment:
  - All code built with C++17 option
  Test Operation:
  - run each of the implementing tests: test1 to test4
*/
bool TestWidgetClass::test() {
  std::cout << "\n  Testing " << name();
  bool t1 = executor_.doTest(&TestWidgetClass::test1, this);
  executor_.showResult(t1, "test1");
  bool t2 = executor_.doTest(&TestWidgetClass::test2, this);
  executor_.showResult(t2, "test2");
  bool t3 = executor_.doTest(&TestWidgetClass::test3, this);
  executor_.showResult(t3, "test3");
  bool t4 = executor_.doTest(&TestWidgetClass::test4, this);
  executor_.showResult(t4, "test4");
  return t1 && t2 && t3 && t4;
}
/*---------------------------------------------------------
  Requirement #1 Widget Class
  - Widget is initialized with name = "unknown"
*/
bool TestWidgetClass::test1() {
  return (pWidget_->name() == "unknown");
}
/*---------------------------------------------------------
  Requirement #2 Widget Class
  - Widget::name(const std::string&) sets name_ member
  - Widget::name() returns value of name_ member
*/
bool TestWidgetClass::test2() {
  pWidget_->name("testItem");
  return (pWidget_->name() == "testItem");
}
/*---------------------------------------------------------
  Requirement #3 Widget Class
  - Widget::say() returns "hi from Widget instance " + name_
  - Requires test2() to run immediately before this test
*/
bool TestWidgetClass::test3() {
  std::string temp = pWidget_->say();
  return temp == "hi from Widget instance testItem";
}
/*---------------------------------------------------------
  Requirement #4 Executor
  - Tests Executor::doTest(), required to return false if
    exception is thrown during execution of test.
  - Also tests Executor::showResult(r, msg)
*/
bool TestWidgetClass::test4() {
  throw(std::exception());
  return true;
}
It is quite likely that a test developer will want to use a logger to provide selected output information, with some control over how much is displayed. The Logger Repository has a pair of loggers intended to work effectively with this test harness.

Status:

This test harness seems to deliver sequenced tests effectively. I haven't used it yet on a major project in this collection of repositories, but will start that soon. I will report observations from those first few uses on this page.