This blog is a re-posting of early work from LinkedIn; I will be re-posting this week while I am at the Software Design for Medical Devices Europe conference in Munich.
Enabling the adoption of Model-Based Design
Test early, test often, test against requirements, and test using formal methods. This is the mantra that developers (hopefully) hear. But what does it mean in practice? How do you produce effective and maintainable tests? I will argue that the first step is to think of test development in the same light as software development. Good testing infrastructure has requirements, is built from reusable components, and is written in a clear fashion to facilitate extensions and debugging efforts.
Why should you care?
In my 20+ years working in software, 2/3 of it in a consultative role, the most common problem I am called in to work on is mushroom code.(1) Mushroom code is the end result of unstructured development: new algorithms are added on top of existing algorithms with little understanding of what they are feeding on. The result is an organic mess that is hard to sort out. This is prevalent in algorithmic development and even more common in testing, which is often done “late and under the gun.”
A fully developed testing infrastructure consists of five components: a manager, execution methods, harnesses, reporting methods, and evaluation methods.
1.) Evaluation methods: use the data created through the execution of the test to determine the pass / fail / percentage complete status of the test:
Example a.) For an MCDC test, the evaluation would determine the percentage of conditions taken.
Example b.) A regression test could compare the output values between the baseline version of the code and the current release.
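As a concrete illustration of example b, here is a minimal sketch of a regression-style evaluation method in Python. The function name, signature, and tolerance value are my own assumptions, not part of any particular tool; the point is only that an evaluation method consumes execution data (baseline and current outputs) and produces a pass/fail status.

```python
def evaluate_regression(baseline, current, tolerance=1e-6):
    """Compare two equal-length output signals sample by sample.

    Returns (passed, mismatches), where mismatches is a list of
    (index, baseline_value, current_value) tuples that exceeded
    the tolerance.
    """
    mismatches = [
        (i, b, c)
        for i, (b, c) in enumerate(zip(baseline, current))
        if abs(b - c) > tolerance
    ]
    return len(mismatches) == 0, mismatches
```

A coverage-style evaluation (example a) would have the same shape: take the data produced by the run, reduce it to a percentage, and compare against a threshold.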
2.) Reporting methods: take the data from the evaluation methods and generate both human-readable and history reports. The history reports are used to track overall trends in the software development process.(2)
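To make the two report types concrete, here is a hypothetical sketch in Python. The function and field names are illustrative assumptions; what matters is that one call produces both a human-readable summary and an appendable, timestamped record for trend tracking.

```python
import datetime
import json


def report(results, history_path=None):
    """results: {test_name: passed_bool}.

    Returns (summary, record): a human-readable summary string and a
    timestamped record. If history_path is given, the record is
    appended there (one JSON object per line) for trend tracking.
    """
    passed = sum(1 for ok in results.values() if ok)
    summary = f"{passed}/{len(results)} tests passed"
    record = {
        "timestamp": datetime.datetime.now().isoformat(),
        "passed": passed,
        "total": len(results),
    }
    if history_path:
        with open(history_path, "a") as fh:
            fh.write(json.dumps(record) + "\n")
    return summary, record
```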
3.) Harness: the harness provides a method for calling the unit under test (UUT) without modifying the UUT. Note that test harnesses facilitate black box testing, i.e., the internal states of the unit under test are not known. However, if internal states of the UUT are exposed as outputs at the root level of the model, then white box testing can be done using the unit under test.(3)
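Outside of a modeling environment, the same idea can be sketched in a few lines of Python. The wrapper below is a made-up stand-in, not any tool's API: the harness drives the UUT with stimulus data and logs its outputs, while the UUT itself is never modified.

```python
class Harness:
    """Drives a unit under test with stimulus data and logs outputs.

    The UUT is any callable; the harness wraps it without
    modifying it (black box testing).
    """

    def __init__(self, uut):
        self.uut = uut
        self.log = []

    def run(self, stimulus):
        for u in stimulus:
            self.log.append(self.uut(u))
        return self.log


def example_uut(u):
    # Stand-in for the unit under test.
    return 2 * u
```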
4.) Execution methods: how the test is run. This could be the simulation of a Simulink model, the execution of a .exe file, static testing (as with Polyspace), or the Real-Time execution(4) of the code.
As the name implies, there is more than one “execution method.” They should be developed as a general class that allows the same method (simulation) to be applied to multiple harnesses. Each instance of an execution method applied to a harness is considered a test case.
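The “general class” structure described above can be sketched as follows; the class names are my own assumptions for illustration. Each execution method implements a common interface, so the same method can be paired with many harnesses, and each (method, harness) pairing is one test case.

```python
from abc import ABC, abstractmethod


class ExecutionMethod(ABC):
    """Common interface: every execution method knows how to run a harness."""

    @abstractmethod
    def run(self, harness):
        ...


class Simulation(ExecutionMethod):
    """One concrete method: feed stimulus samples through the harness."""

    def __init__(self, stimulus):
        self.stimulus = stimulus

    def run(self, harness):
        return [harness(u) for u in self.stimulus]


class TestCase:
    """One execution method applied to one harness."""

    def __init__(self, method, harness):
        self.method = method
        self.harness = harness

    def execute(self):
        return self.method.run(self.harness)
```

Because `Simulation` only depends on the `ExecutionMethod` interface, the same stimulus set can be reused across multiple harnesses without duplicating any execution logic.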
5.) Test manager: is where all of these components come together. The test manager
- Holds a list of the test cases
- Automates the loading of associated test data
- Triggers the execution of the test
- Triggers the evaluation of the results
- Triggers the generation of the test report
Sadly it will not yet fetch you a cold beverage.
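Putting the pieces together, a test manager can be sketched as below. This is a minimal assumed design, not any tool's implementation: it holds the list of test cases and triggers execution, evaluation, and report generation in order (beverage retrieval remains unimplemented).

```python
class TestManager:
    """Holds test cases and orchestrates execute -> evaluate -> report."""

    def __init__(self):
        # Each case: (name, execute_fn, evaluate_fn)
        self.cases = []

    def add(self, name, execute_fn, evaluate_fn):
        self.cases.append((name, execute_fn, evaluate_fn))

    def run_all(self):
        results = {}
        for name, execute_fn, evaluate_fn in self.cases:
            data = execute_fn()                 # trigger the execution
            results[name] = evaluate_fn(data)   # trigger the evaluation
        # Generate the test report.
        report = "\n".join(
            f"{name}: {'PASS' if ok else 'FAIL'}"
            for name, ok in results.items()
        )
        return results, report
```

In a fuller implementation, `execute_fn` would be a `TestCase` bound to a harness and `evaluate_fn` one of the evaluation methods, with test data loading handled before execution.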
1.) Mushroom code and spaghetti code are similar in that both develop due to a lack of planning. Spaghetti code is characterized by convoluted calling structures; mushroom code is the accumulation of code on top of code.
2.) An interesting list of what should go into a report can be found here.
3.) Any model can be opened up to white box testing if global data is used. However, the use of global data potentially introduces additional failure cases.
4.) Yes, this blog retreads work from 6 months ago; however, it is good to review these issues.