If you always pass…

It is common sense that you want your software to pass its validation tests.  However, if your tests always pass, how do you know they can catch a failure?

Step 1: Validating test components

Ideally, when you write tests you are assembling them from validated components, i.e. you are not reinventing the wheel every time you write a test.  For a test component to be validated, the following must hold (a short sketch follows the list):

  • Fully defined interface: The inputs and output(s) must be fully specified, e.g. if the test only works for integer data, that restriction must be stated.
  • Invalid data handling: The test component must respond correctly to invalid data, e.g. provide a failure message.
  • Validated test cases (positive, negative, invalid data): The test component should have test cases that show its response to data that should pass, data that should fail, and invalid data.
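As a sketch of what a validated test component can look like, here is a small Python helper; the tolerance comparison, names, and thresholds are illustrative assumptions rather than something from a specific library.  The point is the three properties above: a fully specified interface, explicit invalid-data handling, and the component's own positive, negative, and invalid-data test cases.

```python
import math
import unittest


def within_tolerance(actual, expected, tolerance):
    """Reusable test component: pass/fail check that |actual - expected| <= tolerance.

    Fully defined interface: all arguments must be finite real numbers and the
    tolerance must be non-negative; anything else is rejected as invalid data.
    """
    for name, value in (("actual", actual), ("expected", expected), ("tolerance", tolerance)):
        if isinstance(value, bool) or not isinstance(value, (int, float)) or not math.isfinite(value):
            raise ValueError(f"{name} must be a finite real number, got {value!r}")
    if tolerance < 0:
        raise ValueError("tolerance must be non-negative")
    return abs(actual - expected) <= tolerance


class TestWithinTolerance(unittest.TestCase):
    """Validated test cases for the component itself: positive, negative, and invalid data."""

    def test_positive_case_passes(self):
        self.assertTrue(within_tolerance(1.001, 1.0, 0.01))

    def test_negative_case_fails(self):
        self.assertFalse(within_tolerance(2.0, 1.0, 0.01))

    def test_invalid_data_is_rejected(self):
        with self.assertRaises(ValueError):
            within_tolerance(float("nan"), 1.0, 0.01)
        with self.assertRaises(ValueError):
            within_tolerance(1.0, 1.0, -0.01)


if __name__ == "__main__":
    unittest.main()
```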

Step 2: Validate the order of execution

When tests are created, there is a series of steps taken to execute the test (a sketch of the sequence follows the list):

  1. Environment set up: In this stage the test environment is instantiated; running a test in the wrong environment will invalidate the test (for example, running a fixed-point test in a floating-point environment).
  2. Test set up: The Unit Under Test (UUT) is loaded and the testing tools are brought online.
  3. Stimulus: This is the test execution; data is fed into the system and the outputs are monitored.
    1. Analysis: In a subset of cases the evaluation takes place during execution.
  4. Evaluation: Now the data collected during execution is put through the test components to determine the pass / fail status.
  5. Cleanup: While not explicitly part of the test, the cleanup step gets the environment ready for the next test.

Failure to follow these steps can result in incomplete or invalid test results.
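Here is a minimal Python sketch of that sequence, assuming an xUnit-style framework; the first-order filter standing in for the UUT is made up for illustration.  The setUpClass, setUp, test body, and tearDown hooks correspond to environment set up, test set up, stimulus/evaluation, and cleanup, and reordering them would invalidate the run.

```python
import unittest


class FirstOrderFilter:
    """Stand-in Unit Under Test: a simple first-order low-pass filter."""

    def __init__(self, alpha=0.5):
        self.alpha = alpha
        self.state = 0.0

    def step(self, u):
        self.state += self.alpha * (u - self.state)
        return self.state

    def reset(self):
        self.state = 0.0


class StepOrderedTest(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        # 1. Environment set up: instantiate the test environment once per class.
        cls.environment = {"numeric_mode": "floating_point"}

    def setUp(self):
        # 2. Test set up: load the Unit Under Test and bring the tooling online.
        self.uut = FirstOrderFilter()
        self.outputs = []

    def test_step_response(self):
        # 3. Stimulus: feed data into the system and monitor the outputs.
        for sample in [1.0] * 10:
            self.outputs.append(self.uut.step(sample))
        # 4. Evaluation: put the collected data through the pass/fail check.
        self.assertAlmostEqual(self.outputs[-1], 1.0, delta=0.05)

    def tearDown(self):
        # 5. Cleanup: return the environment to a known state for the next test.
        self.uut.reset()


if __name__ == "__main__":
    unittest.main()
```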


Step 3: Validating the test

At its most basic, testing is about stimulus and response; the unit under test (UUT) is provided a stimulus, and the response of the unit is observed and compared to an expected behavior.  This is the “expected input / expected output” data most commonly discussed in testing.  In this context, validating the test implies the following (see the sketch after this list):

  • Exercising range of inputs: The test should exercise the unit across its full range of inputs and configurations.
  • Exercise invalid inputs: Unless the component has input checking, the unit should be validated against invalid inputs, e.g. what happens if you give it an infinite velocity?
  • Sanity check of outputs: When first developing the test, visual inspection of the outputs provides a quick sanity check that the test is behaving as intended.
  • Inspecting the outputs: The outputs passed from the unit to the test component need to be validated against the component's expected input format.
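To make those bullets concrete, here is a hedged Python sketch; the stopping-distance function is a made-up stand-in for the UUT, and the specific values are illustrative only.  One test sweeps a range of valid inputs against independently computed expected outputs, and the other exercises invalid inputs such as an infinite velocity.

```python
import math
import unittest


def stopping_distance(velocity, deceleration=10.0):
    """Stand-in unit under test: stopping distance d = v^2 / (2 * a)."""
    if not isinstance(velocity, (int, float)) or not math.isfinite(velocity) or velocity < 0:
        raise ValueError(f"velocity must be a finite non-negative number, got {velocity!r}")
    return velocity ** 2 / (2.0 * deceleration)


class TestStoppingDistance(unittest.TestCase):
    def test_range_of_inputs(self):
        # Exercise the unit across a range of inputs, not just one nominal point.
        known_pairs = [(0.0, 0.0), (10.0, 5.0), (20.0, 20.0), (30.0, 45.0)]
        for velocity, expected in known_pairs:
            with self.subTest(velocity=velocity):
                self.assertAlmostEqual(stopping_distance(velocity), expected, places=6)

    def test_invalid_inputs(self):
        # Exercise invalid inputs, e.g. an "infinite velocity", and confirm the
        # unit responds the way its interface says it should.
        for bad in [float("inf"), float("nan"), -1.0]:
            with self.subTest(velocity=bad):
                with self.assertRaises(ValueError):
                    stopping_distance(bad)


if __name__ == "__main__":
    unittest.main()
```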

Clearing up misconceptions

This blog is not saying that the UUT needs to “fail” the test to show that the testing is valid; rather, it must be possible for the test component to return a failed condition.  Here are my favorite “It never fails” examples (a concrete illustration of one of them follows the list):

  1. Checks for negative values… when the input is an unsigned integer
  2. Looks for errors lasting more than 30 seconds… in a 10-second test
  3. Checks for a function call… when the parent function is never called
  4. Checks that the difference is less than X tolerance… when the tolerance is an order of magnitude greater than the signal
  5. Checks for consecutive occurrences… when the sampled data is missing records
  6. The test code is commented out… yes, that happens
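As a concrete illustration of example 4 (the sine-wave data and the tolerance values here are made up), a tolerance an order of magnitude larger than the signal yields a check that cannot return a failed condition, even against a completely wrong reference:

```python
import math

signal = [math.sin(0.1 * k) for k in range(100)]   # signal magnitude never exceeds 1.0
reference = [0.0] * 100                            # deliberately wrong expected output

# "It never fails": the tolerance is an order of magnitude greater than the signal,
# so the comparison passes no matter what the unit produced.
always_passes = all(abs(s - r) < 10.0 for s, r in zip(signal, reference))
print(always_passes)  # True, even though signal and reference disagree almost everywhere

# A check scaled to the signal can actually fail, which is what makes it a valid test.
can_fail = all(abs(s - r) < 0.01 for s, r in zip(signal, reference))
print(can_fail)       # False: this version catches the mismatch
```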

