Model-Based Design

If you always pass…

It is common sense that you want your software to pass its validation tests.  However, if your tests always pass, how do you know they can catch a failure?

Step 1: Validating tests components

Ideally, when you write tests you assemble them from validated components; that is, you are not reinventing the wheel every time you write a test.  For a test component to be validated, the following must hold
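A minimal sketch of what a validated, reusable test component might look like. The component (a tolerance check) and its names are illustrative assumptions, not from the original post; the key point is that the component is itself tested against known-good and known-bad inputs before it is trusted inside a test.

```python
def within_tolerance(actual, expected, tolerance):
    """Reusable test component: True when |actual - expected| <= tolerance."""
    return abs(actual - expected) <= tolerance

# Validate the component before it is used in any real test:
# confirm it can return BOTH a pass and a fail on known inputs.
assert within_tolerance(1.001, 1.0, 0.01)       # known-good case passes
assert not within_tolerance(1.5, 1.0, 0.01)     # known-bad case is caught
```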

Step 2: Validate the order of execution

When a test is created, a series of steps is taken to execute it:

  1. Environment setup: In this stage the test environment is instantiated.  Running a test in the wrong environment will invalidate the test (for example, running a fixed-point test in a floating-point environment).
  2. Test set up: The Unit Under Test (UUT) is loaded and the testing tools are brought online.
  3. Stimulus: This is the test execution; data is fed into the system and the outputs are monitored.
    1. Analysis: In a subset of cases the evaluation takes place during execution.
  4. Evaluation: The data collected during execution is put through the test components to determine the pass/fail status.
  5. Cleanup: While not explicitly part of the test, the cleanup gets the environment ready for the next test…
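The five steps above can be sketched against Python's `unittest` lifecycle. The unit under test and the environment flag are hypothetical stand-ins; the mapping of setup, stimulus, evaluation, and cleanup onto the framework hooks is the point.

```python
import unittest

class UUTTest(unittest.TestCase):
    def setUp(self):
        # Steps 1-2: instantiate the environment and load the UUT.
        self.environment = {"arithmetic": "floating point"}  # hypothetical flag
        self.uut = lambda x: 2 * x                           # stand-in UUT

    def test_stimulus_and_evaluation(self):
        # Step 3: stimulus - feed data in and capture the response.
        response = self.uut(21)
        # Step 4: evaluation - compare against the expected output.
        self.assertEqual(response, 42)

    def tearDown(self):
        # Step 5: cleanup - leave the environment ready for the next test.
        self.environment.clear()

if __name__ == "__main__":
    unittest.main(exit=False)
```

Running the steps out of order (evaluating before the stimulus completes, or skipping cleanup between tests) is exactly what produces the incomplete or invalid results described below.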

Failure to follow these steps can result in incomplete or invalid test results.

Step 3: Validating the test

At its most basic, testing is about stimulus and response: the unit under test (UUT) is provided a stimulus, and its response is observed and compared to an expected behavior.  This is the “expected input / expected output” data most commonly discussed in testing.  In this context, validating the test implies
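One way to validate a test in this stimulus/response sense is fault injection: deliberately break the UUT and confirm the test actually fails. The doubling function and the helper names here are hypothetical, used only to make the idea concrete.

```python
def uut(x):
    return 2 * x          # the real unit under test (stand-in)

def broken_uut(x):
    return 2 * x + 1      # deliberately injected fault

def check(unit):
    """The test: expected input 21 should yield expected output 42."""
    return unit(21) == 42

assert check(uut)             # the real unit passes
assert not check(broken_uut)  # the faulty unit is caught: the test CAN fail
```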

Clearing up misconceptions

This blog is not saying that the UUT needs to “fail” the test to show that the testing is valid; rather, it must be possible for the test component to return a failed condition.  Here are my favorite “it never fails” examples:

  1. Checks for negative values… when the input is an unsigned integer
  2. Looks for errors lasting more than 30 seconds… in a 10-second test
  3. Checks for a function call… when the parent function is never called
  4. The difference is less than X tolerance… when the tolerance is an order of magnitude greater than the signal
  5. Checks for consecutive occurrences… when the sampled data is missing records
  6. The test code is commented out… yes, that happens
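Example 4 made concrete, with assumed illustrative numbers: when the tolerance is an order of magnitude larger than the signal, the check is vacuous and cannot fail, no matter how wrong the data is.

```python
expected = [0.1, 0.1, 0.1]
tolerance = 1.0   # signal magnitude is ~0.1; this tolerance swallows any error

# A reasonable-looking signal "passes"...
signal = [0.1, 0.12, 0.09]
vacuous = all(abs(a - e) <= tolerance for a, e in zip(signal, expected))
assert vacuous

# ...but so does a grossly wrong one - this test can never fail.
assert all(abs(a - e) <= tolerance for a, e in zip([0.9, -0.5, 0.0], expected))
```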
