I recently taught a class on testing fundamentals. In it, I made the comment that, by my estimate, there are over 8,000 lines of code in MATLAB dedicated to the simple (or simple-seeming) test
A == B
Why? What makes testing for equality so hard? Let’s break it down.
- Data types and numerical precision: depending on the selected data type, the resolution needed to determine “equal” may not be present. You can end up with both false positives and false negatives.
- Tolerances: You can take data type into account by adding tolerances into the comparison.
- Absolute tolerance: abs(A-B) < tol
- Relative tolerance: abs(A-B) < per_tol * abs(A)
- But what about zero: A relative (percentage) tolerance is good, but what do you do when the value is zero?
- Relative tolerance (mean): abs(A-B) < per_tol * mean(A)
- Relative tolerance (max): abs(A-B) < per_tol * max(A)
- Relative tolerance (moving average): abs(A-B) < per_tol * mean(A(i-N : i+N))
- What about noise: for measured data how do you handle the “junk data”?
- What about missing data: much like junk data what do you do with missing data points?
- What about data shifts (temporal or other): it is fairly common for comparison operations to take place with “shifted” data, where one signal is offset by some fixed amount in time.
- What about non-standard data formats: how do you handle the comparison of a structure of data? Do all elements in the structure have to match to “pass”? Do you apply the same standard of tolerances to all elements?
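The core tolerance ideas above can be combined into a single comparison. Here is a minimal sketch in Python (the snippets above are MATLAB, but the logic translates directly); the function name, defaults, and tolerance values are illustrative, not from any particular library:

```python
def approx_equal(a, b, abs_tol=1e-9, rel_tol=1e-6):
    """Compare two scalars using both an absolute and a relative tolerance.

    The absolute tolerance handles the near-zero case, where a relative
    tolerance collapses to nothing; the relative tolerance scales with
    the magnitude of the values being compared.
    """
    diff = abs(a - b)
    # Absolute check: catches values at or near zero.
    if diff < abs_tol:
        return True
    # Relative check: scale the tolerance by the larger magnitude,
    # not by A alone, so the comparison is symmetric in a and b.
    return diff < rel_tol * max(abs(a), abs(b))

print(approx_equal(0.0, 1e-12))      # True: absolute tolerance covers zero
print(approx_equal(1e6, 1e6 + 0.5))  # True: relative tolerance scales up
print(approx_equal(0.0, 1e-3))       # False: outside both tolerances
```

Noise, missing data, and shifted data each add more branches on top of this, which is how a “simple” equality check grows by thousands of lines.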
You can quickly see where my estimate of 8K lines of code comes from. Why, then, do I mention this? Two reasons:
- Start thinking about the complexity in “simple” tests
- Stop creating test operations when they already exist
This is written in the context of testing. Any sort of algorithmic or logical code will, of course, use comparison operations. For those cases, keep two simple rules in mind:
- Do not use floating-point equivalence operations
- Take into account the “else” conditions
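Both rules can be shown in a few lines. A minimal sketch in Python (the same behavior holds in MATLAB); the `classify` function and its thresholds are hypothetical examples, not from the original post:

```python
# Rule 1: floating-point == is unreliable on computed values.
x = 0.1 + 0.2
print(x == 0.3)             # False: accumulated rounding error
print(abs(x - 0.3) < 1e-9)  # True: compare with a tolerance instead

# Rule 2: handle the "else" condition explicitly.
def classify(speed_mph):
    if speed_mph > 0.1:
        return "moving"
    elif speed_mph >= 0.0:
        return "stopped"
    else:
        # Should be unreachable, but a negative reading from a bad
        # sensor ought to be flagged, not silently treated as "stopped".
        return "invalid"
```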
Recently I had a discussion about the meaning of words with my wife. Most of the time, most people play some degree of fast and loose with the definitions of words and the structure of their sentences. However, there are some aspects of life and work where that will not stand: doctor’s visits, political discussions, and the writing of requirements. With that in mind, here are a couple of simple “mad libs” for writing a clear requirement.
Form 1 Response Requirement: When <Subject> is <State> then <Action> shall happen to <Action object>.
In this form, the requirement specifies a response. For example, when my wife (subject) comes into the room (state) then I (action object) smile (action).
(Note: this should be fleshed out with definitions of “the room” and “smile”; e.g., how long after, and for how long. The good news is that this is a testable requirement. My wife enters rooms all the time, so I can test it out tonight!)
Form 2 State Check: When <Subject> is <State> then <Measured Object> shall have value <State>
This form enforces existing conditions; it can also be written in a “Before <Subject> is <State>…” form. An example is “Before the car <subject> is placed in park <state>, the vehicle <measured object> shall have a velocity less than 0.1 mph <state>.”
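A requirement written in Form 2 maps almost mechanically onto an automated check. A minimal sketch in Python, using the park example; the function name and interface are hypothetical, standing in for whatever harness queries the system under test:

```python
PARK_VELOCITY_LIMIT_MPH = 0.1  # threshold taken from the requirement text

def check_park_precondition(velocity_mph):
    """Form 2 state check: before the car is placed in park,
    the vehicle shall have a velocity less than 0.1 mph."""
    return velocity_mph < PARK_VELOCITY_LIMIT_MPH

assert check_park_precondition(0.05)     # nearly stopped: precondition holds
assert not check_park_precondition(2.0)  # still rolling: precondition violated
```

Because the requirement names a subject, a state, a measured object, and a value, there is exactly one thing to measure and one threshold to compare against.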
So what templates do you use?
Completion of a task is accomplished by performing a sequence of steps. The more steps in the sequence, the more likely you are to make a mistake, either by forgetting a step or by doing a step out of order. One method for reducing the likelihood of making a mistake is the creation of sub-tasks. This is where the analogy to dancing comes into play.
When you first learn to dance you learn basic steps: the waltz’s box step, the tango’s 8-count “Slow, Slow, Quick, Quick, Slow”… Once the basic step is mastered (and heaven help me, one day I will master the box step), additional “sub-tasks” can be learned. There are four virtues of sub-steps.
- Low chance of order mistakes: shorter tasks have a lower risk for errors due to their simplicity
- Low cost for errors: if a mistake is made in a sub-task it is often isolated to that sub-task, and the sub-task can be quickly re-run
- Decomposition: frequently, when broken into sub-tasks, the task can be distributed to multiple people
- Ability to chain together: sub-tasks can be combined into multiple “routines” and reused in multiple processes
In general, processes that have 3 ~ 5 steps are considered “easy” to remember and master. Going above 8 steps in a process increases the likelihood of human error.
In today’s post, I will address some commonly asked questions:
How long does it take to learn MATLAB and Simulink?
The answer to this question is dependent on a number of factors. First, do you already know a programming language? Are you familiar with control algorithms? Do you have supporting people who know the toolset already?
Assuming a basic level of programming and software knowledge, a controls background, and basic support, most people will start using Simulink to develop basic control models within 2 ~ 3 weeks. Over the course of 3 ~ 4 months, they will learn how to develop more complex systems. For most people that level of understanding is sufficient for all of their development needs.
Deeper mastery of the tools, as is required for people who develop a group’s modeling patterns and best practices, can be learned over the course of 3 to 5 years.
What is test coverage and why is it important?
Test coverage is a measure of how well the software is tested. It can refer to MC/DC coverage (the if/else checking), range coverage (e.g., did you hit every point in your tables?), and temporal coverage (do you account for temporal, e.g., integral, effects?). Test coverage then tells you whether you are sufficiently exercising the code base. One important thing to keep in mind: it is possible to have 100% coverage and still have incorrect behavior.
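That last point is worth a concrete illustration. A minimal sketch in Python with a deliberately buggy, hypothetical function: every branch is executed, so coverage reports 100%, yet the bug survives because the lower-clamp result is never checked:

```python
def saturate(x, lower, upper):
    """Clamp x into [lower, upper] -- with a deliberate bug."""
    if x > upper:
        return upper
    elif x < lower:
        return upper   # BUG: should return `lower`
    else:
        return x

# These three calls execute every branch (full decision coverage)...
assert saturate(15, 0, 10) == 10   # upper branch: value checked, passes
assert saturate(5, 0, 10) == 5     # pass-through branch: value checked, passes
saturate(-5, 0, 10)                # lower branch executed, result ignored
# ...yet the suite still passes. Coverage measures what ran,
# not what was verified.
```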
How do I transition from a C based to Model-Based Design environment?
The answer, in part, is dependent on the state of your existing code base. If it is well organized (e.g., encapsulated with independent modules), then the process is straightforward: individual modules can be replaced or extended on an as-needed basis.
More often than not, when the transition takes place, it is happening because the existing code base is “stale” or difficult to maintain. In these cases a strategic decision needs to be made: what part of the code base can be “walled off” or trusted while you work on new critical systems? Once that decision is made, the work begins in earnest. Ideally, the models are derived from the base requirements, not reverse-engineered from the existing flawed code base. Often this is when the lack of original requirements is uncovered.