Mapping Model-Based Design to Thanksgiving

The Thanksgiving meal and MBD have a lot in common: both are times when people come together, both involve many moving parts, and both are better with some forethought and planning. (No one can cook a turkey at the last minute.) With that in mind, here is how the courses of the Thanksgiving meal map onto the MBD process.

The salad: documentation


Let us be honest: when you think of Thanksgiving you generally don’t think of the salad, but you’re pretty happy to have something light given the heavy items in the meal. That is why it maps onto documentation in MBD. It is something you are happy to get from your tool providers, and something you hope exists a year from now when you come back to look at what you put together.
Bonus thought: Like salad, documentation is good for you.

Cranberries: requirements

For many people, cranberries enter their life at Thanksgiving and depart after Christmas. They know the cranberries are supposed to be part of the meal, but they don’t really know why they are having them. In the Thanksgiving meal, the cranberries provide the bitter/acid component to what is otherwise a sweet and fatty meal. With Model-Based Design, requirements provide the harsh acid of truth against which your design is measured.
Bonus thought: In the same way that cranberries grow in a bog, requirements grow out of the confusion of initial requests.

Mashed potatoes: testing


Buttery, starchy, with garlic or herbs, mashed potatoes could (in my humble opinion) form a meal in and of themselves. Once the gravy from the turkey is added, the analogy to testing becomes clear. Mashed potatoes bond with the turkey through gravy; testing bonds with the models by wrapping their outputs in loving bits of analysis.
Bonus thought: In the same way mashed potatoes can become too heavy if you don’t prep them well, testing can become burdensome if you don’t plan ahead.

Turkey: the model


In the same way that the turkey is the center of the meal, the model is the center of Model-Based Design. Much like a turkey, you can overcook it (poor architecture leading to hard-to-swallow models) or undercook it (poorly designed models with gaps or holes that lead to poisoned outcomes). However, with just a little bit of forethought, you can have a juicy, delicious bird.
Bonus thought: Much like deep-frying turkeys was a fad a few years ago, there are fads in software development. In the end, good development practices don’t change: be clear, be complete, and test; don’t fall for fads.

The dessert: certification


Certification is the dessert of the meal. It may come at the end of everything, but I bet you have been thinking about it during the rest of the meal.

Bonus thought: Certification does not mean you have a good product.  It just means you have followed a process.  Take your time to do it right and you can have your cake (pie) and eat it too.


Final thought:  

There are many parts to Model-Based Design; they all have their uses, and we all have our favorite parts. For those of you in the US, happy Thanksgiving.


Traceability

What is traceability, and why do we care about it? First, a definition:


Traceability is the ability to describe and follow the life of a requirement in both a forwards and backwards direction (i.e., from its origins, through its development and specification, to its subsequent deployment and use, and through periods of ongoing refinement).

There are three critical points to this definition:

  1. An independent documenting method: for an object to be traceable it must be documented. This has two parts:
    1. A unique identifier for the object
    2. A version history of the object
  2. Linkages between objects: the documenting (linking) method must include dependencies between objects. These can include:
    1. Created by (e.g., this model generates that code)
    2. Dependent on (e.g., this requirement is completed by its parent)
    3. Fulfills (e.g., this test case implementation fulfills the test requirement)
  3. Bi-directional: the objects can be traced in any direction. (Note: not all relationships are parent/child; there can be “associated” linkages.)
Traceability allows us to “connect the dots”
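
As a minimal sketch of what such records might look like (MATLAB, with hypothetical field names; no particular requirements tool is assumed), each object gets a unique ID and a version history, and typed links can be walked in either direction:

    % Hypothetical traceability records: unique ID plus version history
    req = struct('id', 'REQ-042', 'versions', {{'1.0', '1.1'}});
    mdl = struct('id', 'MDL-007', 'versions', {{'2.3'}});

    % A typed linkage between two objects ("created by", "fulfills", ...)
    links = struct('type', {'fulfills'}, 'from', {'MDL-007'}, 'to', {'REQ-042'});

    % Bi-directional tracing: the same link table answers both
    % "what fulfills REQ-042?" and "what does MDL-007 fulfill?"
    backwards = links(strcmp({links.to},   'REQ-042'));
    forwards  = links(strcmp({links.from}, 'MDL-007'));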

Why should we care?

Traceability is about organization: the ability to easily find information. For small systems, or personal projects, traceability may be an unneeded burden. However, when you have multiple people working on a project, the ability to trace between objects is how information is relayed.

Does A == B?

I recently taught a class on testing fundamentals. In it I made the comment that, by my estimate, there are over 8,000 lines of code in MATLAB dedicated to the seemingly simple test

A == B

Why is testing for equality so hard? Let’s break it down.

  1. Data types and numerical precision: depending on the selected data type, the resolution needed to determine “equal” may not be present. You can end up with false positives and false negatives.
  2. Tolerances: you can take data type into account by adding tolerances to the comparison.
    1. Absolute tolerance: abs(A-B) < tol
    2. Relative tolerance: abs(A-B) < per_tol * A
  3. But what about zero: percentage tolerance is good, but what do you do when the value is zero?
    1. Relative tolerance (mean): abs(A-B) < per_tol * mean(A)
    2. Relative tolerance (max): abs(A-B) < per_tol * max(A)
    3. Relative tolerance (moving average): abs(A-B) < per_tol * mean(A(i-N : i+N))
  4. What about noise: for measured data, how do you handle the “junk data”?
  5. What about missing data: much like junk data, what do you do with missing data points?
  6. What about data shifts (temporal or other): it is fairly common for comparison operations to take place with “shifted” data, where one signal is offset by some fixed amount in time.
  7. What about non-standard data formats: how do you handle the comparison of a structure of data? Do all elements in the structure have to match to “pass”? Do you apply the same standard of tolerances to all elements?
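
To give a feel for how quickly the first few items stack up, here is a minimal sketch in MATLAB (a hypothetical helper, not MATLAB’s actual implementation) that combines an absolute and a relative tolerance and side-steps the divide-by-zero problem at zero:

    function pass = approxEqual(A, B, absTol, relTol)
    % APPROXEQUAL  Element-wise comparison with absolute and relative tolerances.
    % Hypothetical sketch: covers items 1-3 above, nothing else.
        A = double(A);  B = double(B);       % normalize data types first
        absDiff = abs(A - B);
        scale   = max(abs(A), abs(B));       % relative scale; avoids 1/0 at zero
        ok      = (absDiff <= absTol) | (absDiff <= relTol .* scale);
        pass    = all(ok(:));
    end

Even this small sketch says nothing about noise, missing data, time shifts, or structured data; that is where the remaining thousands of lines go.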


You can quickly see where my estimate of 8K lines of code comes from. Why then do I mention this? Two reasons:

  1. Start thinking about the complexity in “simple” tests
  2. Stop creating test operations when they already exist


Endnote

This is written in the context of testing. Any sort of algorithmic or logical code will, of course, use comparison operations. For those cases, keep two simple rules in mind:

  1. Do not use floating-point equivalence operations
  2. Take into account the “else” conditions
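
A minimal MATLAB illustration of both rules:

    x = 0.1 + 0.2;

    % Rule 1: floating-point equivalence fails even in "obvious" cases
    x == 0.3               % false: x is actually 0.30000000000000004
    abs(x - 0.3) < 1e-12   % true: compare against a tolerance instead

    % Rule 2: account for the "else" conditions
    if x > 0.3
        % ... handle greater-than
    elseif x < 0.3
        % ... handle less-than
    else
        % equal (within floating point): do not let this case fall through silently
    end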

What do you mean when you say…

Recently I had a discussion with my wife about the meaning of words. Most of the time, most people play some degree of fast and loose with the definitions of words and the structure of their sentences. However, there are some aspects of life and work where that will not stand: doctor’s visits, political discussions, and the writing of requirements. With that in mind, here are a couple of simple “mad libs” for writing a clear requirement.

Form 1 Response Requirement: When <Subject> is <State> then <Action> shall happen to <Action object>.
In this form, the requirement specifies a response.  For example, when my wife (subject) comes into the room (state) then I (action object) smile (action).

(Note: this should be fleshed out with definitions of “the room” and “smile”; e.g., how long after, and for how long. The good news is that this is a testable requirement. My wife enters rooms all the time, so I can test it out tonight!)

Form 2 State Check: When <Subject> is <State> then <Measured Object> shall have value <State>

This form enforces existing conditions; it can also be written in a “Before <Subject> is <State>…” form. An example is “Before the car <subject> is placed in park <state>, the vehicle <measured object> shall have a velocity of less than 0.1 mph <state>.”

So what templates do you use?



Engineering and dancing…

Completion of a task is accomplished by performing a sequence of steps. The more steps in the sequence, the more likely you are to make a mistake, either by forgetting a step or doing a step out of order. One method for reducing the likelihood of making a mistake is the creation of sub-tasks. This is where the analogy to dancing comes into play.

When you first learn to dance you learn basic steps: the waltz’s box step, the tango’s 8-count “Slow, Slow, Quick Quick Slow”… Once the basic step is mastered (and heaven help me, one day I will master the box step), additional “sub-tasks” can be learned. There are four virtues of sub-steps.

  1. Low chance of order mistakes: shorter tasks have a lower risk of errors due to their simplicity.
  2. Low cost of errors: if a mistake is made in a sub-task it is often isolated to that sub-task, and the sub-task can be quickly re-run.
  3. Decomposition: when broken into sub-tasks, the work can frequently be distributed to multiple people.
  4. Ability to chain together: sub-tasks can be combined into “routines” and reused in multiple processes.

In general, processes that have 3 to 5 steps are considered “easy” to remember and master. Going above 8 steps in a process increases the possibility of human error.


Q&A: Questions answered

In today’s post, I will address some commonly asked questions:

How long does it take to learn MATLAB and Simulink?

The answer to this question depends on a number of factors. First, do you already know a programming language? Are you familiar with control algorithms? Do you have supporting people who already know the toolset?

Assuming a basic level of programming and software knowledge, a controls background, and basic support, most people will start using Simulink to develop basic control models within 2 to 3 weeks. Over the course of 3 to 4 months, they will learn how to develop more complex systems. For most people, that level of understanding is sufficient for all of their development needs.

Deeper mastery of the tools, as is required for people to develop a group’s modeling patterns and best practices, can be learned over the course of 3 to 5 years.


What is test coverage and why is it important?

Test coverage is a measure of how well the software is tested. This can apply to MC/DC coverage (the if/else checking), range coverage (e.g., did you hit every point in your tables), and temporal coverage (do you account for temporal, e.g., integral, effects). Test coverage then tells you if you are sufficiently exercising the code base; a sketch of what MC/DC asks for is shown below. One important thing to keep in mind: it is possible to have 100% coverage and still have incorrect behavior.
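
To make the MC/DC part concrete, here is a hypothetical two-condition decision (the names are illustrative) and the minimal set of test vectors MC/DC asks for, with each condition independently flipping the outcome:

    % Hypothetical guard: MC/DC requires each condition to independently
    % change the decision's outcome.
    speed = 80; limit = 65; brakeOff = true;    % one of the three vectors
    warningOn = (speed > limit) && brakeOff;

    % Minimal MC/DC set for a 2-condition && decision (3 vectors):
    %   speed > limit | brakeOff | warningOn
    %        T        |    T     |    T         baseline
    %        F        |    T     |    F         condition 1 flips the outcome
    %        T        |    F     |    F         condition 2 flips the outcome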

How do I transition from a C-based environment to a Model-Based Design environment?

The answer, in part, depends on the state of your existing code base. If it is well organized (e.g., encapsulated with independent modules) then the process is straightforward: individual modules can be replaced or extended on an as-needed basis.

More often than not, when the transition takes place it is because the existing code base is “stale” or difficult to maintain. In these cases a strategic decision needs to be made: what part of the code base can be “walled off” or trusted while you work on new critical systems? Once that decision is made, the work begins in earnest. Ideally, the models are derived from the base requirements, not reverse engineered from the existing flawed code base. Often this is when the lack of the original requirements is uncovered.



If you always pass…

It is a common-sense statement that you want your software to pass your validation tests. However, if you always pass your tests, how do you know your tests can catch a failure?

Step 1: Validating test components

Ideally, when you write tests you are assembling them from validated components, e.g., every time you write a test you are not reinventing the wheel. For a test component to be validated, the following must hold:

  • Fully defined interface: the inputs and output(s) must be fully specified; e.g., if the test only works for integer data, that must be specified.
  • Invalid data handling: the test component must respond correctly to invalid data, e.g., provide a failure message.
  • Validated test cases (positive, negative, invalid data): the test component should have test cases that show its response to data that results in a pass, data that results in a failure, and invalid data.
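
As a minimal sketch of those three bullets in MATLAB (a hypothetical component; the names are illustrative), the interface is pinned down by an arguments block and invalid data produces an explicit failure:

    function result = checkInRange(signal, lower, upper)
    % CHECKINRANGE  Hypothetical validated test component.
        arguments
            signal (1,:) double    % fully defined interface: row vector of doubles
            lower  (1,1) double
            upper  (1,1) double
        end
        if any(isnan(signal))      % invalid data handling: fail loudly
            error('checkInRange:invalidData', 'Input contains NaN values.');
        end
        result = all(signal >= lower & signal <= upper);
    end

The component would then ship with its own test cases: data that passes, data that fails, and invalid data that triggers the error.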

Step 2: Validate the order of execution

When tests are created, a series of steps is taken to execute the test:

  1. Environment setup: in this stage the test environment is instantiated; running a test in the wrong environment will invalidate the test (for example, running a fixed-point test in a floating-point environment).
  2. Test setup: the Unit Under Test (UUT) is loaded and the testing tools are brought online.
  3. Stimulus: this is the test execution; data is fed into the system and the outputs are monitored.
    1. Analysis: in a subset of cases, the evaluation takes place during execution.
  4. Evaluation: the data collected during execution is put through the test components to determine the pass/fail status.
  5. Cleanup: while not explicitly part of the test, the cleanup gets the environment ready for the next test…

Failure to follow these steps can result in incomplete or invalid test results; a sketch of how they map onto a test framework is shown below.
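
In MATLAB’s unit testing framework these steps map naturally onto setup and teardown methods; here is a hedged sketch (the unit under test is a stand-in):

    classdef tMyUnit < matlab.unittest.TestCase
        methods (TestClassSetup)
            function setupEnvironment(tc)
                % 1. Environment setup: instantiate the right environment
            end
        end
        methods (TestMethodSetup)
            function loadUUT(tc)
                % 2. Test setup: load the UUT, bring the tools online
            end
        end
        methods (Test)
            function stimulusAndEvaluation(tc)
                % 3 & 4. Stimulus and evaluation
                actual = 2 + 2;              % stand-in for the UUT call
                tc.verifyEqual(actual, 4);
            end
        end
        methods (TestMethodTeardown)
            function cleanup(tc)
                % 5. Cleanup: ready the environment for the next test
            end
        end
    end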


Step 3: Validating the test

At its most basic, testing is about stimulus and response: the unit under test (UUT) is provided a stimulus, and the response of the unit is observed and compared to an expected behavior. This is the “expected input / expected output” data most commonly discussed in testing. In this context, validating the test implies:

  • Exercising the range of inputs: the test should exercise the unit across its full range of inputs and configurations.
  • Exercising invalid inputs: unless the component has input checking, the unit should be validated against invalid inputs; e.g., what happens if you give an infinite velocity?
  • Sanity checking the outputs: when first developing the test, visual inspection of the outputs provides a quick check that the behavior is reasonable.
  • Inspecting the outputs: the outputs from the unit to the test component need to be validated against the component’s expected input format.

Clearing up misconceptions

This blog is not saying that the UUT needs to “fail” the test to show that the testing is valid; rather, it must be possible for the test component to return a failed condition. Here are my six favorite “it never fails” examples:

  1. Checks for negative values… when the input is an unsigned integer
  2. Looks for errors lasting more than 30 seconds… in a 10-second test
  3. Checks for a function call… when the parent function is never called
  4. Checks that the difference is less than a tolerance… when the tolerance is an order of magnitude greater than the signal
  5. Checks for consecutive occurrences… when the sampled data is missing records
  6. The test code is commented out… yes, that happens
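
The first example is easy to demonstrate at the MATLAB prompt: unsigned integer arithmetic saturates at zero, so a “negative value” check on a uint8 signal can never fire:

    u = uint8([3 0 250]) - uint8(10);   % underflow saturates at 0: u is [0 0 240]
    any(u < 0)                          % always false for any uint8 input,
                                        % so this check can never report a failure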


Metacognition and engineering design

I recently read an article about identifying programmers based on their programming “style”. One interesting thing about the study cited in the article was that the algorithm for determining who wrote the code was more accurate for mature programmers, in that they had developed unique styles and were less likely to rely on pre-existing code “snippets”.

Time to reconsider

Starting any sort of work is a balance between applying known patterns and adopting new strategies. Adopting new strategies, or at least considering them, takes time, and therefore you only want to do so when warranted. The question is, “How do I honestly evaluate whether a new solution is required?” I would put forth that there are four questions you need to ask yourself honestly:

  1. Is this a new type of problem? If you are working on your 101st iteration of the same type of problem, chances are your validated solution can be applied.
  2. Is it new in whole or in parts? If it is something new, start scoping how much of the problem is “new”.
  3. How well do I understand the new part? This is the hard part: do you understand, in outline at least, what is new in the system? Until you can say “yes” to functional understanding, you should not move forward.
  4. Can the new parts be treated in isolation? Once you understand what is new and what is “known”, you can determine the degree to which the new parts affect the behavior of the known parts.

Why ask What?

Life is too short to spend time reinventing the wheel for known problems; at the same time, the quality of your work depends on your understanding of those problems. By knowing when new solutions need to be applied, you can minimize the amount of needless reinvention.

The questions listed above are intended to spark an “is this a wheel?” understanding. What are your metacognition questions?


Recent reads…

Every once in a while it is good to take a look outside and see what other people are doing. With that in mind, here are a few blogs/articles that I have read recently:

  1. Continuous integration methods with Simulink projects
  2. Execution order inspection for Simulink Functions
  3. Using the Simulation test manager
  4. Model-Based Design for Safety Critical Automotive systems
  5. The challenges and benefits of Model-Based Testing
  6. Model-Based Design for game architecture
  7. Model-Based Design architecture for large systems

So tell me, what are some of the articles you have been reading?