Performance testing attempts to construct tests that:
- Compare two processing streams satisfying the same obligations, to see which
has higher throughput, lower latency, or other performance metrics.
- Attempt to make testing overhead a negligible part of the complete test process,
by pulling as much overhead as possible into initial and final activities that
are not included in measured outputs.
- Run many times to amortize any remaining startup and shutdown costs, and to average
over environmental effects that may have nothing to do with the comparison but
happen to occur during testing.
Often, a single iteration of a test runs so quickly that the time consumed cannot be
measured accurately, so running many iterations is also a way of improving measurement
accuracy.
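The points above can be sketched as a minimal micro-benchmark harness. This is an illustrative Python sketch, not a production tool: the function names, workloads, and iteration counts are all invented for the example. It pulls warmup out of the measured region, repeats the timed loop to average over environmental noise, and divides by the iteration count to recover a per-call time small enough that a single call could not be measured directly.

```python
import time
import statistics

def bench(fn, *, warmup=1_000, iters=10_000, repeats=5):
    """Measure the median per-call time of fn in seconds.

    The warmup phase absorbs one-time startup costs (caching, JIT,
    allocator effects) and is excluded from the measured output.
    The timed loop is repeated several times so that transient
    environmental effects can be averaged out via the median.
    """
    for _ in range(warmup):              # startup overhead, not measured
        fn()
    samples = []
    for _ in range(repeats):             # repeat to smooth out noise
        start = time.perf_counter()
        for _ in range(iters):           # amortize timer overhead
            fn()
        samples.append((time.perf_counter() - start) / iters)
    return statistics.median(samples)

# Two hypothetical processing streams satisfying the same obligation:
# producing the list of squares of 0..99.
def with_loop():
    out = []
    for i in range(100):
        out.append(i * i)
    return out

def with_comprehension():
    return [i * i for i in range(100)]

t_loop = bench(with_loop)
t_comp = bench(with_comprehension)
print(f"loop: {t_loop:.3e} s/call, comprehension: {t_comp:.3e} s/call")
```

Note that the harness reports a median rather than a mean, so a single anomalous run (say, a scheduler preemption during one repeat) does not skew the comparison.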