Engineering and dancing…

Completing a task means performing a sequence of steps.  The more steps in the sequence, the more likely you are to make a mistake, either by forgetting a step or doing a step out of order.  One method for reducing the likelihood of a mistake is the creation of sub-tasks.  This is where the analogy to dancing comes into play.

When you first learn to dance you learn basic steps; the waltz’s box step, the tango’s 8-count “Slow, Slow, Quick, Quick, Slow”… Once the basic step is mastered (and heaven help me, one day I will master the box step) then additional “sub-tasks” can be learned.  There are four virtues of sub-steps.

  1. Low chance of order mistakes: shorter tasks carry a lower risk of errors because of their simplicity.
  2. Low cost for errors: if a mistake is made in a sub-task it is often isolated to that sub-task, and the sub-task can be quickly re-run.
  3. Decomposition: when broken into sub-tasks, the work can frequently be distributed to multiple people.
  4. Ability to chain together: the sub-tasks can be combined into multiple “routines” and reused in multiple processes.

In general, processes with 3 to 5 steps are considered “easy” to remember and master.  Going above 8 steps in a process increases the likelihood of human error.

 

 

Q&A: Questions answered

In today’s post, I will address some commonly asked questions:

How long does it take to learn MATLAB and Simulink?

The answer to this question depends on a number of factors.  First, do you already know a programming language?  Are you familiar with control algorithms?  Do you have supporting people who already know the toolset?

Assuming a basic level of programming and software knowledge, a controls background, and basic support, most people will start using Simulink to develop basic control models within 2 to 3 weeks.  Over the course of 3 to 4 months, they will learn how to develop more complex systems.  For most people that level of understanding is sufficient for all of their development needs.

Deeper mastery of the tools, as is required for the people who develop a group’s modeling patterns and best practices, is learned over the course of 3 to 5 years.


What is test coverage and why is it important?

Test coverage is a measure of how well the software is tested.  This can apply to MCDC coverage (the if/else checking), range coverage (e.g. did you hit every point in your tables), and temporal coverage (do you account for temporal, e.g. integral, effects).  Test coverage then tells you whether you are sufficiently exercising the code base.  One important thing to keep in mind: it is possible to have 100% coverage and still have incorrect behavior.
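To make the MCDC idea concrete, here is a minimal, hypothetical sketch; the fanRequest function and its two conditions are invented for illustration and are not from any particular model.

```matlab
% fanRequest.m -- hypothetical two-condition decision used to illustrate MCDC.
% MCDC (Modified Condition/Decision Coverage) requires showing that each
% condition can independently change the outcome of the decision.
function cmd = fanRequest(tempHigh, acOn)
    if tempHigh || acOn     % decision with two conditions
        cmd = true;         % request the fan on
    else
        cmd = false;        % fan stays off
    end
end

% Decision coverage needs only two vectors (one true outcome, one false):
%   fanRequest(true,  true)   -> true
%   fanRequest(false, false)  -> false
% MCDC additionally needs vectors where each condition acts alone:
%   fanRequest(true,  false)  -> true    (tempHigh drives the outcome)
%   fanRequest(false, true)   -> true    (acOn drives the outcome)
%   fanRequest(false, false)  -> false   (shared false case)
```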

How do I transition from a C based to Model-Based Design environment?

The answer, in part, depends on the state of your existing code base.  If it is well organized (e.g. encapsulated with independent modules) then the process is straightforward: individual modules can be replaced or extended on an as-needed basis.

More often than not, when the transition takes place it is because the existing code base is “stale” or difficult to maintain.  In these cases a strategic decision needs to be made: what part of the code base can be “walled off” or trusted while you work on new critical systems?  Once that decision is made, the work begins in earnest.  Ideally, the models are derived from the base requirements, not reverse engineered from the existing flawed code base.  This is often when the lack of the original requirements is uncovered.


 

If you always pass…

It is a common-sense statement that you want your software to pass your validation tests.  However, if you always pass your tests, how do you know your tests can catch a failure?

Step 1: Validating tests components

Ideally, when you write tests you are assembling them from validated components, e.g. every time you write a test you are not reinventing the wheel.  For a test component to be validated, the following must hold (a minimal sketch of such a component follows this list):

  • Fully defined interface: The inputs and output(s) must be fully specified, e.g. if the test only works for integer data, that must be stated.
  • Invalid data handling: The test component must respond correctly to invalid data, e.g. provide a failure message.
  • Validated test cases (positive, negative, invalid data): The test component should have test cases which show the response to data that results in a pass, failure, and response to invalid data.
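As a sketch of what such a component might look like (the function name, interface, and tolerance check are assumptions for illustration, not a prescribed implementation):

```matlab
function result = compareWithTolerance(expected, actual, tol)
% Hypothetical reusable test component.
% Fully defined interface: expected and actual are numeric vectors of equal
% length, tol is a positive scalar; returns 'pass', 'fail', or 'invalid'.

    % Invalid data handling: respond with a defined status rather than crashing
    if ~isnumeric(expected) || ~isnumeric(actual) || ...
            numel(expected) ~= numel(actual) || ...
            ~isscalar(tol) || tol <= 0 || ...
            any(isnan(expected(:))) || any(isnan(actual(:)))
        result = 'invalid';
        return
    end

    % Pass / fail evaluation
    if all(abs(expected(:) - actual(:)) <= tol)
        result = 'pass';
    else
        result = 'fail';
    end
end

% Validation cases for the component itself (positive, negative, invalid data):
%   compareWithTolerance([1 2 3], [1 2 3.001], 0.01)   % 'pass'
%   compareWithTolerance([1 2 3], [1 2 4],     0.01)   % 'fail'
%   compareWithTolerance([1 2 3], [1 NaN 3],   0.01)   % 'invalid'
```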

Step 2: Validate the order of execution

When tests are created, there is a series of steps taken to execute the test:

  1. Environment setup: In this stage the test environment is instantiated; running a test in the wrong environment will invalidate the results (for example, running a fixed-point test in a floating-point environment).
  2. Test setup: The Unit Under Test (UUT) is loaded and the testing tools are brought online.
  3. Stimulus: This is the test execution; data is fed into the system and the outputs are monitored.
    1. Analysis: In a subset of cases the evaluation takes place during execution.
  4. Evaluation: Now the data collected during execution is put through the test components to determine the pass / fail status.
  5. Cleanup: While not explicitly part of the test, the cleanup step gets the environment ready for the next test…

Failure to follow these steps can result in incomplete or invalid test results; the sketch below shows the ordering in a simple harness.
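A hedged sketch of that ordering for a Simulink unit test is shown below; ‘myModel’, the data file names, and the compareWithTolerance helper from the previous section are placeholders, so adjust them to your own environment.

```matlab
% 1. Environment setup: start every test from a known state
clear; bdclose('all');

% 2. Test setup: load the unit under test and its test data
load_system('myModel');
testData = load('testInputs.mat');      % stimulus and expected outputs
assignin('base', 'u', testData.u);      % make the stimulus visible to the model

% 3. Stimulus: execute the simulation
out = sim('myModel', 'StopTime', '10');

% 4. Evaluation: compare the logged response against the expected output
result = compareWithTolerance(testData.expected, out.yout, 0.01);
fprintf('Test result: %s\n', result);

% 5. Cleanup: leave the environment ready for the next test
close_system('myModel', 0);
clear u testData out result
```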


Step 3: Validating the test

At its most basic, testing is about stimulus and response; the unit under test (UUT) is provided a stimulus, and the response of the unit is observed and compared to an expected behavior.  This is the “expected input / expected output” data most commonly discussed in testing.  In this context, validating the test implies (see the sweep sketch after this list):

  • Exercising the range of inputs: The test should exercise the unit across its full range of inputs and configurations.
  • Exercising invalid inputs: Unless the component has input checking, the unit should be validated against invalid inputs, e.g. what happens if you give an infinite velocity?
  • Sanity checking the outputs: When first developing the test, visual inspection of the outputs provides a quick check that the behavior is reasonable.
  • Inspecting the outputs: The outputs from the unit to the test component need to be validated against the component’s expected input format.
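Below is a small, hypothetical sweep illustrating the first two points; the velocity range, the invalid values, and the evaluateUUT wrapper are all invented placeholders.

```matlab
% Exercise the nominal range, the exact boundaries, and invalid inputs.
vNominal  = linspace(0, 250, 26);    % assumed operating range, 0-250 kph
vBoundary = [0 250];                 % exact boundary values
vInvalid  = [-10 Inf NaN];           % out-of-range and invalid stimuli

for v = [vNominal vBoundary vInvalid]
    % Feed each value to the unit under test (placeholder call) and confirm
    % that invalid values produce a defined failure rather than a crash.
    y = evaluateUUT(v);              % hypothetical wrapper around the UUT
    fprintf('stimulus = %8.2f  ->  response = %g\n', v, y);
end
```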

Clearing up misconceptions

This blog is not saying that the UUT needs to “fail” the test to show that the testing is valid; rather, it must be possible for the test component to return a failed condition.  Here are my six favorite “It never fails” examples (the first is sketched in code after the list):

  1. Checks for negative values… when the input is an unsigned integer
  2. Looks for errors lasting more than 30 seconds… in a 10-second test
  3. Check for a function call… when the parent function is never called
  4. The difference less than X tolerance…. when the tolerance is an order of magnitude greater than the signal
  5. Check for consecutive occurrences…. when the data sampled is missing records
  6. The test code is commented out… yes, that happens
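As a concrete sketch of the first pitfall (the signal name and type are invented for illustration):

```matlab
% A "never fails" check: wheelSpeed is an unsigned integer, so the
% negative-value fault below can never trigger.
wheelSpeed = uint16(120);     % unsigned type -- cannot hold a negative value

if wheelSpeed < 0
    speedFault = true;        % unreachable: a uint16 is always >= 0
else
    speedFault = false;
end
```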


Metacognition and engineering design

I recently read an article about identifying programmers based on their programming “style“.  One interesting thing about the study cited in the article was that the algorithm for determining who wrote the code was more accurate for mature programmers, in that they had developed unique styles and were less likely to rely on pre-existing code “snippets”.

Time to reconsider

Starting any sort of work is a balance between applying known patterns and adopting new strategies.  Adopting new strategies, or at least considering them, takes time, and therefore you only want to do so when warranted.  The question is “How do I honestly evaluate whether a new solution is required?”  I would put forth that there are four questions you need to ask yourself honestly:

  1. Is this a new type of problem: If you are working on your 101st iteration of the same type of problem, chances are your validated solution can be applied.
  2. Is it new in whole or in parts: If it is something new, start scoping how much of the problem is “new”.
  3. How well do I understand the new part: This is the hard part; do you understand, in outline at least, what is new in the system?  Until you can say “yes” to functional understanding you should not move forward.
  4. Can the new parts be treated in isolation: Once you understand what is new and what is “known”, you can determine the degree to which the new parts affect the behavior of the known parts.

Why ask What?

Life is too short to spend time reinventing the wheel for known problems; at the same time, the quality of your work depends on your understanding of the problem.  By knowing when new solutions need to be applied you can minimize the amount of needless reinvention of the wheel.

The questions listed above are intended to spark an “is this a wheel?” understanding.  What are your metacognition questions?


Recent reads…

Every once in a while it is good to take a look outside and see what other people are doing.  With that in mind, here are a few blogs/articles that I have read recently:

  1. Continuous integration methods with Simulink projects
  2. Execution order inspection for Simulink Functions
  3. Using the Simulation test manager
  4. Model-Based Design for Safety Critical Automotive systems
  5. The challenges and benefits of Model-Based Testing
  6. Model-Based Design for game architecture
  7. Model-Based Design architecture for large systems

So tell me, what are some of the articles you have been reading?

Model-Based Design: Differencing best practices…

Difference reports are a staple of software development; they allow you to quickly see what has changed.  Model-Based Design, with its graphical environment, presents some unique issues in differencing; we will examine them in this post.

Differencing best practices

The following best practices hold regardless of the development environment:

  1. Reasonable diffing cadence: Finding the difference between two objects is easy if the number of changes is small; however, if large structural or multiple-point changes have occurred, difference operations become difficult if not impossible.  Perform differencing operations when no more than 5% of the object has changed.
  2. Diff functional changes / ignore visual changes: Visual changes, such as the use of indents in textual languages or block placement in graphical ones, do not impact the behavior of the code.  While these changes may violate modeling guidelines, they should be ignored during the diff operations.
  3. Include the authors: Have both the original author and the updating person take part in the review for complex issues.
  4. Test after diffing: After the difference operations have been run, the files should be run through their test suites to ensure that the changes did not have a negative impact.

Don’t judge a model by its cover

In text-based languages, the changes in the text are the whole story; in Model-Based development environments, it is the combination of the following:

  • Block connections: How the blocks are connected to each other determines execution order and data flow.
  • Block parameters: How the block is parameterized determines the functionality of the block.
  • Model configuration: The model can have parameters that influence the behavior of the system beyond the data set by the blocks.


Final comment

It is possible to diff the underlying text files that define graphical models; at one point in time this was the only method available.  However, doing this presents multiple problems and should be avoided.

  1. Determining what is significant: The text representations often encode information into parameters / structures.  It is often not obvious which changes are significant versus cosmetic.
  2. Block changes: The text representation can have the same functionality “shifted” to a new location in the code depending on where it shows up in the model; however, this is often not a functional change.
  3. Interpretation: When reviewing the text changes, the reviewers then have to interpret what the text means.

Table-it…

Table lookup algorithms provide a powerful tool for the modeling of systems.  The following are some basic tips for making your table lookups fast and accurate.

Pre-process your data

Table data often comes from real-world sources.  This presents two issues (a pre-processing sketch follows the list):

  1. Errors in the data: Validate that any anomalous readings are removed from the data.
  2. Non-uniform input axis: Real-world measurements frequently have jitter in the exact “place” of measurement.  Data should be renormalized to a uniform input axis.
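A minimal sketch of both steps, using invented measurement data:

```matlab
% Raw measurements: jittered input axis and one anomalous (missing) reading
xRaw = [0 0.9 2.1 2.9 4.2 5.0];
yRaw = [0 1.8 4.0 NaN 8.5 10.1];

% 1. Remove anomalous readings
good   = ~isnan(xRaw) & ~isnan(yRaw);
xClean = xRaw(good);
yClean = yRaw(good);

% 2. Renormalize onto a uniform input axis
xUniform = 0:1:5;                                       % evenly spaced breakpoints
yUniform = interp1(xClean, yClean, xUniform, 'linear');

% xUniform / yUniform are now suitable as table breakpoints and data
```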

Know your boundaries

With tables, there are always limits on the range of the input axes.   Understanding the behavior of the table outside of the known range is important.  There are two basic options:

  1. Hold last value: In this instance, the last value at the table boundary is used for all data outside of the axes.
  2. Interpolate data: In this case, the output data is extended beyond the known boundary.  For data near the boundaries this is generally acceptable; however, the accuracy can quickly break down as inputs move far outside the known range.

There are three approaches to handling this issue (see the sketch after this list):

  1. Expand the range of valid data: This is the ideal solution but is often not possible due to sampling reasons.
  2. Pre-interpolate the data: Create data outside of the range with “safe” values based on engineering knowledge.
  3. Limit the input data range: Create a “hard stop” at the data edge.
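The sketch below shows the two boundary behaviors, with “hold last value” implemented by clamping the input (which is also how a hard stop at the data edge behaves); the breakpoints and table values are invented for illustration.

```matlab
bp  = 0:1:5;                  % input axis (breakpoints)
tbl = [0 2 4 6 8 10];         % table data
xin = 7.5;                    % request beyond the known range

% Hold last value / hard stop: clamp the input to the axis limits
xClamped = min(max(xin, bp(1)), bp(end));
yHold    = interp1(bp, tbl, xClamped);                  % returns 10

% Interpolate (extrapolate) beyond the boundary
yExtrap  = interp1(bp, tbl, xin, 'linear', 'extrap');   % returns 15
```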

Reduce, reuse, recycle…

It is common for multiple tables to share the same input axis; in this case, sharing the index lookup across multiple tables is one method for reducing the total number of calculations required by the algorithm.
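In Simulink this is typically done with a Prelookup block feeding the downstream interpolation blocks; the MATLAB sketch below shows the same idea with invented data: the index search is done once and reused for two tables that share the axis.

```matlab
bp   = [0 10 20 30 40];      % shared input axis
tblA = [0 1 4 9 16];         % first table on this axis
tblB = [5 4 3 2 1];          % second table on this axis
xin  = 23;

% Perform the (comparatively expensive) index search once
k    = find(bp <= xin, 1, 'last');          % lower breakpoint index
k    = min(k, numel(bp) - 1);               % stay inside the last segment
frac = (xin - bp(k)) / (bp(k+1) - bp(k));   % interpolation fraction

% Reuse the index / fraction pair for every table on the axis
yA = tblA(k) + frac * (tblA(k+1) - tblA(k));    % 5.5
yB = tblB(k) + frac * (tblB(k+1) - tblB(k));    % 2.7
```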


Data types and discontinuities

Table lookup algorithms are not well suited for data with discontinuities.  When working with such data, either piecewise approaches are used or the region in which the discontinuity appears is given additional data points to handle the sharp change.

A similar issue is working with integer-based data and interpolation.  When the outputs from the table are integer values, either interpolation should not be enabled or all possible input coordinates should be part of the input axes.
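A small sketch of the integer issue, with invented gear-selection data:

```matlab
% Interpolating an integer-valued table produces outputs that are not
% members of the original set -- here a "gear" of 2.5.
speedBp = [0 20 40 60 80];       % vehicle speed breakpoints
gearTbl = [1 2 3 4 5];           % integer-valued output data

gearInterp = interp1(speedBp, gearTbl, 30);               % 2.5 -- not a valid gear
gearHold   = interp1(speedBp, gearTbl, 30, 'previous');   % 2   -- no interpolation
```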

So there you go, a few quick suggestions for working with tables!

 

 

Orthogonal redundancy…

There is an old saying, “measure twice, cut once”; wise words of caution before you take an irrevocable action.  However, if you have a faulty measuring tape then measuring twice will just produce the same error twice.

Orthogonal redundancy is an approach to safety-critical software where the same dependent variable is calculated using two or more methods.  In the woodworking example, this could be done by using a standard tape measure the first time and a laser guide the second.

Achieving software orthogonality

There are three basic approaches to software orthogonality, listed in order of “strength”:

  1. Unique algorithms
  2. Common algorithm: unique parameterization
  3. Common algorithm: unique inputs

Unique algorithms

Using a unique algorithm has the advantage that it removes the chance of a common point of failure, e.g. if one algorithm can overflow, the second doesn’t.  (Mind you, you should catch the overflow problem.)  The downside to this approach is that you need to create validation test cases for each unique algorithm.

Common algorithm: unique parameterization

In this case, the same algorithm is used; however, the parameterization is different for each instance.  This is commonly seen for hardware sensors, such as unique scaling on a set of analog input sensors.  For example, a simple linear equation (y = m*x + b) can be used to determine the throttle angle; however, the coefficients “m” and “b” are different for each sensor.
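A minimal sketch of this pattern; the slope/offset values and the voltages are invented calibration numbers, not real sensor data.

```matlab
throttleAngle = @(v, m, b) m .* v + b;   % shared algorithm: y = m*x + b

% Hypothetical calibrations for two redundant throttle sensors
m1 = 20.0;   b1 = -2.0;     % sensor 1: volts -> degrees
m2 = -20.0;  b2 = 98.0;     % sensor 2: inverted slope, different offset

v1 = 2.5;  v2 = 2.4;        % measured voltages
angle1 = throttleAngle(v1, m1, b1);      % 48 degrees
angle2 = throttleAngle(v2, m2, b2);      % 50 degrees

% Cross-check the two independent estimates against a tolerance
sensorsAgree = abs(angle1 - angle2) < 5; % tolerance value is illustrative
```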

Common algorithm: unique inputs

This final approach is used when the input source data is suspect or can fail.  The solution, in this case, is to create multiple input sources for the same data.  The throttle body example above, with its redundant sensors, is an example of this; a more robust example would be to have two different types of sensors.

When and how?

Using orthogonal algorithms requires additional execution steps and memory, both for the algorithms and for the validation of the results against each other.  Because of this, their use should be limited to safety-critical calculations.

The standard way to use multiple results is with triple redundancy.  The results are compared with each other; as long as they are in agreement (within tolerance) with each other, the result is passed on.  If only two of the three agree, that value is used.  If there is no agreement, the results are flagged as an error.
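A hedged sketch of such a voter (the function name, tolerance handling, and averaging choices are assumptions for illustration):

```matlab
function [value, status] = voteTwoOfThree(a, b, c, tol)
% Hypothetical 2-out-of-3 voter for triple-redundant results.
    agreeAB = abs(a - b) <= tol;
    agreeAC = abs(a - c) <= tol;
    agreeBC = abs(b - c) <= tol;

    if agreeAB && agreeAC && agreeBC
        value  = (a + b + c) / 3;    % full agreement: pass the average on
        status = 'ok';
    elseif agreeAB
        value  = (a + b) / 2;  status = 'twoOfThree';
    elseif agreeAC
        value  = (a + c) / 2;  status = 'twoOfThree';
    elseif agreeBC
        value  = (b + c) / 2;  status = 'twoOfThree';
    else
        value  = NaN;                % no agreement: flag as an error
        status = 'error';
    end
end
```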

Scenario-based testing

In scenario-based testing, the test author configures the input data to “move” the system through a set of desired states or maneuvers.  A simple example of this would be a “wide-open throttle” scenario for automobiles (encoded as data in the sketch after this list):

  1. Key set to crank
    1. Hold 0.1 seconds
  2. Key set to run
  3. Together
    1. Brake to 100%
    2. PRNDL to Drive
  4. Together
    1. Brake to 0%
    2. Throttle to 100%
      Measure time to 60 MPH
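One hedged way to encode this scenario as time-based input data in MATLAB (the times, signal ordering, and PRNDL encoding are all illustrative assumptions):

```matlab
%         time   key  brake  prndl  throttle
steps = [ 0.0     1    0      0      0        % key to crank
          0.1     2    0      0      0        % key to run
          0.2     2    100    4      0        % brake 100%, PRNDL to Drive (=4)
          1.0     2    0      4      100 ];   % brake off, throttle 100%

t = steps(:,1);
keyTs      = timeseries(steps(:,2), t);  keyTs.Name      = 'key';
brakeTs    = timeseries(steps(:,3), t);  brakeTs.Name    = 'brake';
prndlTs    = timeseries(steps(:,4), t);  prndlTs.Name    = 'prndl';
throttleTs = timeseries(steps(:,5), t);  throttleTs.Name = 'throttle';
% These timeseries can then be used as external inputs to the simulation,
% and the time to 60 MPH measured from the logged vehicle speed.
```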

Reusing scenarios

In scenario-based testing, there are often common “starting points”.  In our example above, the common starting point would be the first three steps.  In the example that follows, using a Test Sequence block, I implemented the common steps as a single, grouped step.

When the common setup is complete, the sequence block then moves on to another “grouped” step which implements the specific test.  Optionally, a second or third “common setup” sequence could be defined before the actual test begins.

Chained data

Another method of reusing scenarios is to “chain data” together.  In this case, a series of input data files are concatenated and played in sequence into the simulation.  To continue the example, the first three steps would be one data file, and then either the “WOT” or “idleWarmUp” data would contribute to the next data file.  (Please imagine an image of a fat cat named “Nate”: Con-Cat-Ton-Nate.)
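A minimal sketch of the chaining step; the file names and field layout are assumptions for illustration.

```matlab
seg1 = load('commonSetup.mat');    % assumed fields: t (time), u (inputs), as columns
seg2 = load('WOT.mat');            % second segment to chain on

% Offset the second segment's time vector so it starts where the first ends
tChained = [seg1.t;  seg2.t + seg1.t(end)];
uChained = [seg1.u;  seg2.u];

% tChained / uChained can now be played into the simulation as a single run
```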

Why perform scenario-based testing?

Scenario-based testing allows us to validate that models behave in the way we expect and the way we designed; if I “put the pedal to the metal,” the car accelerates quickly.  Most frequently I use scenarios to:

  • Validate state machine behavior
  • Exercise diagnostic code
  • Validate requirements
  • Debug models under development