It is common sense that you want your software to pass its validation tests. However, if your tests always pass, how do you know they can catch a failure?
Step 1: Validating test components
Ideally, when you write tests you are assembling them from validated components; that is, every time you write a test you are not reinventing the wheel. For a test component to be validated, the following must hold (a sketch of such a component follows this list):
- Fully defined interface: The inputs and output(s) must be fully specified; for example, if the test only works for integer data, that restriction must be documented.
- Invalid data handling: The test component must respond correctly to invalid data, e.g. by providing a failure message rather than silently passing.
- Validated test cases (positive, negative, invalid data): The test component should have its own test cases demonstrating its response to data that results in a pass, data that results in a failure, and invalid data.
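As a rough illustration of what such a component could look like, here is a minimal sketch in Python (the function name and checks are invented for illustration, not taken from any specific test framework): a tolerance-comparison component with a fully defined interface, invalid-data handling, and its own positive, negative, and invalid-data test cases.

```python
import math

def within_tolerance(actual, expected, tolerance):
    """Reusable test component: pass if |actual - expected| <= tolerance.

    Fully defined interface: all three inputs must be finite real numbers.
    Returns True (pass) or False (fail); raises ValueError on invalid data
    rather than silently passing.
    """
    for name, value in (("actual", actual), ("expected", expected),
                        ("tolerance", tolerance)):
        if not isinstance(value, (int, float)) or not math.isfinite(value):
            raise ValueError(f"invalid input {name!r}: {value!r}")
    if tolerance < 0:
        raise ValueError("tolerance must be non-negative")
    return abs(actual - expected) <= tolerance

# Test cases for the component itself: positive, negative, and invalid data.
assert within_tolerance(1.001, 1.0, 0.01) is True   # data that should pass
assert within_tolerance(1.5, 1.0, 0.01) is False    # data that should fail
try:
    within_tolerance(float("nan"), 1.0, 0.01)       # invalid data
except ValueError:
    pass  # correct response: reject the data, do not report a pass
```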
Step 2: Validate the order of execution
When tests are run, a series of steps is taken to execute the test:
- Environment set up: In this stage, the test environment is instantiated. Running a test in the wrong environment will invalidate the test (for example, running a fixed-point test in a floating-point environment).
- Test set up: The Unit Under Test (UUT) is loaded and the testing tools are brought online.
- Stimulus: This is the test execution; data is fed into the system and the outputs are monitored.
- Analysis: In a subset of cases, evaluation takes place during execution rather than after it.
- Evaluation: Now the data collected during execution is put through the test components to determine the pass / fail status.
- Cleanup: While not explicitly part of the test, the cleanup stage gets the environment ready for the next test.
Failure to follow these steps can result in incomplete or invalid test results.
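To make the ordering concrete, here is a minimal harness sketch in Python (all names here are illustrative assumptions, not a real framework API) that performs environment set up first and guarantees cleanup last, even when evaluation fails:

```python
def set_up_environment():
    # 1. Environment set up: e.g. record which numeric environment is active.
    return {"datatype": "float"}

def run_test(uut, stimulus_data, check):
    """Minimal harness enforcing the execution order described above."""
    env = set_up_environment()                     # 1. environment set up
    try:
        outputs = [uut(x) for x in stimulus_data]  # 2-3. load the UUT, apply stimulus
        return check(outputs)                      # 5. evaluation via a validated component
    finally:
        env.clear()                                # 6. cleanup, even if evaluation raised

# Usage: a trivial UUT (doubling) checked against its expected outputs.
assert run_test(lambda x: 2 * x, [1, 2, 3], lambda out: out == [2, 4, 6])
```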
Step 3: Validating the test
At its most basic, testing is about stimulus and response; the unit under test (UUT) is provided a stimulus, and the response of the unit is observed and compared to an expected behavior. This is the “expected input / expected output” data most commonly discussed in testing. In this context, validating the test implies:
- Exercising the range of inputs: The test should exercise the unit across its full range of inputs and configurations.
- Exercising invalid inputs: Unless the component has input checking, the unit should be validated against invalid inputs, e.g. what happens if you give it an infinite velocity?
- Sanity check of outputs: When first developing the test, visual inspection of the outputs provides a quick sanity check that the results are plausible.
- Inspecting the outputs: The outputs from the unit to the test component need to be validated against the component's expected input format.
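To illustrate the first two points, here is a hedged sketch (the saturation unit and its limit are hypothetical) that sweeps a unit across its input range, including the boundaries, and then feeds it invalid inputs such as an infinite velocity:

```python
import math

def saturate(velocity, limit=100.0):
    """Hypothetical unit under test: clamp velocity to +/- limit."""
    if not math.isfinite(velocity):
        raise ValueError("velocity must be finite")
    return max(-limit, min(limit, velocity))

# Exercise the full input range, including the boundary values...
for v, expected in [(-1e6, -100.0), (-100.0, -100.0), (0.0, 0.0),
                    (100.0, 100.0), (1e6, 100.0)]:
    assert saturate(v) == expected

# ...and the invalid inputs, e.g. the "infinite velocity" case above.
for bad in (float("inf"), float("-inf"), float("nan")):
    try:
        saturate(bad)
        raise AssertionError("invalid input was silently accepted")
    except ValueError:
        pass  # the unit rejected the invalid input, as required
```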
Clearing up misconceptions
This blog is not saying that the UUT needs to “fail” the test to show that the testing is valid; rather, it must be possible for the test component to return a failed condition. Here are my favorite “it never fails” examples:
- Checks for negative values… when the input is an unsigned integer
- Looks for errors lasting more than 30 seconds… in a 10-second test
- Checks for a function call… when the parent function is never called
- The difference is less than X tolerance… when the tolerance is an order of magnitude greater than the signal
- Checks for consecutive occurrences… when the sampled data is missing records
- The test code is commented out… yes, that happens
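The first example is easy to reproduce. In this sketch (using NumPy's unsigned integer type purely for illustration), the negative-value check is vacuous because the datatype cannot represent the condition the test is looking for:

```python
import numpy as np

# The signal under test is stored as an unsigned integer type...
signal = np.array([0, 5, 250, 3], dtype=np.uint8)

# ...so a "check for negative values" can never fail: the datatype
# cannot represent the condition the test was written to catch.
assert not np.any(signal < 0)   # always passes, and proves nothing

# A meaningful version would check the value before it is cast to uint8,
# or look for the wrap-around values that underflow actually produces.
```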
I recently read an article about identifying programmers based on their programming “style”. One thing that was interesting about the study cited in the article was that the algorithm for determining who wrote the code was more accurate for mature programmers, in that they had developed unique styles and were less likely to rely on pre-existing code “snippets”.
Time to reconsider
Starting any sort of work is a balance between applying known patterns and adopting new strategies. Adopting new strategies, or at least considering them, takes time, and therefore you only want to do so when warranted. The question is, “How do I honestly evaluate whether a new solution is required?” I would put forth that there are four questions you need to ask yourself honestly:
- Is this a new type of problem? If you are working on your 101st iteration of the same type of problem, chances are your validated solution can be applied.
- Is it new in whole or in part? If it is something new, start scoping how much of the problem is “new”.
- How well do I understand the new part? This is the hard part: do you understand, in outline at least, what is new in the system? Until you can say “yes” to functional understanding, you should not move forward.
- Can the new parts be treated in isolation? Once you understand what is new and what is “known”, you can determine the degree to which the new parts affect the behavior of the known parts.
Why ask What?
Life is too short to spend time reinventing the wheel for known problems; at the same time, the quality of your work depends on your understanding of your problems. By knowing when new solutions need to be applied, you can minimize the amount of needless reinvention of the wheel.
The questions listed above are intended to spark an “is this a wheel?” understanding. What are your metacognition questions?
Every once in a while it is good to take a look outside and see what other people are doing. With that in mind, here are a few blogs/articles that I have read recently:
- Continuous integration methods with Simulink projects
- Execution order inspection for Simulink Functions
- Using the Simulation test manager
- Model-Based Design for Safety Critical Automotive systems
- The challenges and benefits of Model-Based Testing
- Model-Based Design for game architecture
- Model-Based Design architecture for large systems
So tell me, what are some of the articles you have been reading?
In the past when I have written about metadata, I talked about how it can be used to help define a project or test. Today I want to briefly revisit the topic to discuss open-topic versus closed-topic metadata tagging.
In cases where you have open-topic metadata, users are free to create any metadata tags. During the initial discovery phase of a project, this is a powerful ability, allowing users to define the key descriptive elements of the work in progress. On most projects, as the development matures, the rate of creating new metadata tags declines. At some point, groups should consider moving to a closed-topic metadata tagging system.
Why closed? How much?
As a general rule of thumb, most objects support 5 to 10 metadata tags; beyond that, the information contained in the tags becomes redundant. Creating additional tags ends up being a burden in three ways:
- The meaning of the tags becomes unclear as the gradation between them becomes smaller.
- Redundant tags creep in (truth be told, my blog posts use both the #MBD and #ModelBasedDesign metadata tags).
- People search for the wrong thing; if I look in my database for #MBD and the post is tagged #ModelBasedDesign, I will miss that post.
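One way to move toward a closed-topic system is to normalize every tag against a fixed vocabulary. This sketch (the vocabulary and alias table are invented for illustration) folds #MBD into #ModelBasedDesign so that a search for either spelling finds the same posts:

```python
# Hypothetical closed-topic vocabulary: a small, fixed set of canonical tags,
# with known aliases folded into their canonical form.
CANONICAL_TAGS = {"#ModelBasedDesign", "#Testing", "#Metadata"}
ALIASES = {"#MBD": "#ModelBasedDesign"}

def normalize_tags(tags):
    """Map aliases to canonical tags; reject tags outside the vocabulary."""
    result = set()
    for tag in tags:
        tag = ALIASES.get(tag, tag)
        if tag not in CANONICAL_TAGS:
            raise ValueError(f"unknown tag {tag!r}: extend the vocabulary "
                             "deliberately rather than ad hoc")
        result.add(tag)
    return result

# Both spellings land on the same tag, so no post is missed in a search.
assert normalize_tags(["#MBD"]) == normalize_tags(["#ModelBasedDesign"])
```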
These comments on metadata, of course, apply across different applications. The use of categories quickly becomes useless when the categories become so narrow that you cannot find them or fit anything into them. #HopeYouLikedIt, #PleaseComment, #NoNeedForHashTags