11111100010 to 11111100011 Every bit makes a difference

Welcome to the year 2019. The first post of the year (or the last post of the year) is generally a place for retrospectives and forward-looking visions. I shall keep to the form.

Excited: 2018

There are multiple things to be excited about this year; the following are what I found most exciting:

  1. Simplified methods for integrating C with models: MATLAB and Simulink added new functionality to improve the integration of C code and models, either by including code in the model (the C Caller block) or by simplifying the function interface definition.
  2. Projects, data dictionaries, and testing: MathWorks continues to work on the integration between Simulink Projects and the supporting tool chain; these improvements, coupled with version control software integration, make the tool powerful for team-based development.
  3. String support: For better or for worse, strings are a part of control systems; after many years, Simulink has native support for them.
The corgi of excitement remembers the past

Looking forward: 2019

The lemur of happiness looks towards the future
  1. Machine learning: OK, sure, I just got started, but already there are a number of interesting questions (and lovely math) that I am enjoying.
  2. Systems of systems: The scope of Model-Based Design projects continues to expand; the analysis of systems of systems is an area I’m excited to move into.
  3. Statistical validation: As our systems become more complex, statistical analysis (e.g. design of experiments, DOE) becomes more and more important.

If you find this content useful, please consider subscribing

Apples and Oranges

There is an old saying that “you can’t compare apples to oranges”; but frequently, as engineers, one of our jobs is to compare apples to oranges.  Why, and what does that mean?

Frequently, when a system is being designed, the actual system does not yet exist.  As a result, we need to draw analogies between an existing system and the new one.  So, if we are going to compare apples and oranges, how do we “juice” them for the correct data?


The mapping between objects: first principles

One objective with these mappings is to reduce the volume of first-principles modeling that needs to be done; e.g. instead of building up a complex system of equations, use a data reduction of an existing system, modified for the new system.  However, to do that, we must first validate that the fundamental behaviors of the systems scale between each other.  For example, scaling a 4-cylinder 1.8L engine to a 2.1L engine is straightforward.  Going from a 4-cylinder to an 8-cylinder is also straightforward, if a slightly different problem.
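To make the scaling idea concrete, here is a minimal sketch in Python (the blog works in MATLAB/Simulink; the function name and the torque numbers below are made up for illustration). It scales an existing engine's torque data by the displacement ratio instead of rebuilding a first-principles model:

```python
# Illustrative sketch: reuse an existing engine's data for a new design
# by scaling with the displacement ratio. Torque values are invented.

def scale_torque(torque_nm, old_disp_l, new_disp_l):
    """Naively scale a torque table by the displacement ratio."""
    ratio = new_disp_l / old_disp_l
    return [t * ratio for t in torque_nm]

torque_1p8 = [120.0, 160.0, 170.0, 150.0]   # "measured" on the 1.8L engine
torque_2p1 = scale_torque(torque_1p8, 1.8, 2.1)
print(torque_2p1)
```

This only captures the aspects that scale linearly with displacement; anything that does not (vibration, heat transfer) needs its own validation, which is the point of the next section.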

What is the same?  What is different?

You selected the reference model because it has things in common with the new system.  At the same time, there will be things that are different.  In our example of the 4-cylinder to the 8-cylinder engine, there are differences in total displacement (an easy mapping), torque output, et cetera.  The thing that may not be easy to map is the change in the vibrational aspects of the engine.  The questions to ask are:

  1. What physical aspects of the system am I concerned with?
    1. Torque
    2. Fuel consumption
    3. Max rpm
  2. What aspects am I not concerned with?
    1. Vibration
    2. Heat transfer
    3. ….


The truth about regression…

So my first step into machine learning is examining supervised learning and my old friend, curve fitting.  This was a fun trip down memory lane, as my numerical mathematics background had me well prepared for the topic.  Further, in the 25 years since I graduated, the basic tools for performing regression analysis have greatly improved; I was able to use the built-in curve fitting tool in MATLAB to perform these operations directly.  At the same time, I quickly remembered the “gotchas” about regression…

Starting points

As a starting point, I thought I would see how well the tool would do with a known polynomial equation.  I defined a simple 3rd-order equation and added some noise to the signal.  I then ran the results through the curve fitting toolbox…

The data

As a sanity check, I first tried it out using the “real” X and Y data.  As would be expected, the tool spits back the coefficients that I expected.  When I ran it with the noisy data (telling it to use a 3rd-order polynomial) it returned coefficients fairly close to the “real” ones.

Curve fitting within the boundaries
[Table: Coefficient — Original, Regressed “Real”, Regressed “Noise”]

Within the defined range we have an R-squared value of 0.9925, pretty darn good.  But, the question quickly becomes, how does it hold up outside of the known range?
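The experiment above can be reproduced outside of the curve fitting toolbox as well. The following Python sketch (using NumPy as a stand-in for the MATLAB workflow; the coefficients and noise level are made up for illustration) fits a 3rd-order polynomial to noisy cubic data and computes the R-squared value:

```python
import numpy as np

rng = np.random.default_rng(0)

# "Real" 3rd-order polynomial: 2x^3 - x^2 + 0.5x + 3 (coefficients invented)
true_coeffs = [2.0, -1.0, 0.5, 3.0]
x = np.linspace(0, 5, 100)
y_real = np.polyval(true_coeffs, x)
y_noisy = y_real + rng.normal(0, 2.0, x.size)   # add measurement noise

# Fit a 3rd-order polynomial to the noisy data
fit_coeffs = np.polyfit(x, y_noisy, 3)

# R-squared within the fitted range
y_fit = np.polyval(fit_coeffs, x)
ss_res = np.sum((y_noisy - y_fit) ** 2)
ss_tot = np.sum((y_noisy - np.mean(y_noisy)) ** 2)
r_squared = 1 - ss_res / ss_tot
print(fit_coeffs, r_squared)
```

With noise of this magnitude the recovered coefficients land close to the "real" ones, matching the behavior described above.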

Pretty well actually?

So, depending on whether you look at it as an absolute error or a percentage error, you may be doing “O.K.” or not.  But what if I didn’t know what the underlying equation was?  An exponential function would seem like a reasonable choice.  I could try a first-order exponential equation

 y = a * exp(b * x)

or a second-order one:

y = a * exp(b * x) + c * exp(d * x)


 Within the original bounds, the R-squared value is close to that of the actual polynomial.  If we used the 1st-order equation just in that region, it executes faster than the 3rd-order polynomial, so we may want to go with a “fake fast fit” (say that three times fast).
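A rough sketch of this comparison, again in Python with invented coefficients: fitting y = a*exp(b*x) by linear regression on log(y) gives a respectable R-squared inside the sampled range, while the extrapolated value drifts far from the true polynomial:

```python
import numpy as np

# The same invented cubic as before; strictly positive over this range
true_coeffs = [2.0, -1.0, 0.5, 3.0]
x = np.linspace(0.5, 5, 100)
y = np.polyval(true_coeffs, x)

# Fit y = a*exp(b*x) via linear regression on log(y)
b, log_a = np.polyfit(x, np.log(y), 1)
a = np.exp(log_a)

# R-squared within the fitted range
y_fit = a * np.exp(b * x)
r2 = 1 - np.sum((y - y_fit) ** 2) / np.sum((y - y.mean()) ** 2)

# Extrapolate to x = 8 and compare against the true cubic
print(r2, a * np.exp(b * 8), np.polyval(true_coeffs, 8))
```

Inside the sampled range the exponential tracks the cubic reasonably well; at x = 8 it wildly overshoots, which is exactly the "outside the boundaries" gotcha.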

Why do we try to “fit-in”?

There are several reasons why regression is performed.  Sometimes the underlying equations are unknown; sometimes they are too computationally expensive; sometimes there simply isn’t an equation.  Regardless of the reason, when I am trying to perform a regression I take the following steps:

  1. What are the fundamental physics?  Use this to pick out which terms and functions to use.  (Note: the curve fitting toolbox has a wonderful “custom equation” option that lets you try these equation-based approaches out.)
  2. What are the boundaries?  I try to understand how the fit behaves outside of the known data set.  In my example, the exponential function does not perform well much outside of my known data.


Roughly 30 minutes after I finished writing this blog post, I stumbled across the post
“Explore Runge’s Polynomial Interpolation Phenomenon” by Cleve Moler.  I would be remiss not to include this link, for three reasons:

  1. Cleve’s post clearly articulates the mathematical underpinnings of interpolation
  2. As the creator of MATLAB, his work is why things are “greatly improved”.
  3. It’s a post about Runge, and since I had a cat named Runga-Kutta (RK) back in college well…

Warning: Bad aerospace engineering pun follows:

When I was in grad school and had that cute cat, I was working on a simple project: a device to determine the terminal angle (stall) for a kite.  For the most part, I worked on this project in my apartment, and Runga loved to attack the kite’s tail, jumping at it and batting it.  That was well and good in the apartment.

In the early testing stages I was flying the kite off of my back deck.  One day while I was flying, RK managed to get out.  In a leap of excitement she jumped onto the kite’s tail (knowing from experience she would land happily on the carpet).  Except, this time, the kite was 8 feet off the ground.  Needless to say, both the cat and the kite crashed to the ground, proving that in this case the Kutta condition did not hold.



Learning about machine learning…

For the last two years, I have written about Model-Based Design in the context of classical control algorithms and software verification processes.  Over the next year, I am broadening the scope of this blog as I investigate how machine learning can be applied to MBD in the context of safety-critical systems.

My questions

  1. What is the current state of the art in Machine learning?
    1. How can it be used for controls problems?
    2. When should it be used?  When should it not be used?
    3. Where is it the only solution?
  2. What are the building blocks of machine learning?
    1. Starting small, what are the easy things to do?
    2. Hitting the middle, what is a “real” project?
    3. Common “starting” pitfalls
  3. Testing and verification
    1. Can you do black box testing?
    2. How do you test an “open world” model?


So join me on this trip…

I will share my insights, my mistakes and look forward to hearing from you…



Levels of testing…

In English, there is a saying: “where there is smoke, there is fire.”  It is an interesting saying which you could take to mean multiple things:

  1. Smoke will let you find problems from a distance
  2. Smoke will show up before the fire

At The MathWorks, we call our most primitive tests “smoke tests”.  A smoke test is a basic “does it turn on” test.  The idea behind smoke tests is to validate that the unit under test performs in the most basic fashion and, therefore, is ready to run longer, more complex tests.  For example, a set of smoke tests for a model would look like this:

  1. Perform an update diagram (Order of a minute)
    if pass then…
  2. Check the model configuration parameters  (Order of a minute)
    if pass then…
  3. Check model for unallowed blocks (Simulink Check)  (Order of a minute)
    if pass then…
  4. Run your complex test suite… (Order of a ???)


For a large, complex set of tests, the setup, execution, and evaluation time can be on the order of hours.  The ability to “abort” these longer runs becomes important as your test suite grows; you don’t want to take up time on your CI system running tests on models that are not in the correct state.
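The abort-early pattern above can be sketched as follows. The check functions here are stand-ins (the real steps would drive Simulink), but the control flow is the point: run the cheap checks in order and never reach the expensive suite on a failure.

```python
# Hypothetical smoke-test chain; each check is a cheap, order-of-a-minute
# stand-in for the real step it is named after.

def update_diagram(model):
    return True   # stand-in: would perform the real "update diagram" step

def check_config(model):
    return True   # stand-in: would verify configuration parameters

def check_blocks(model):
    return True   # stand-in: would scan for unallowed blocks

def run_smoke_tests(model, checks):
    """Run cheap checks in order; abort before the expensive suite on failure."""
    for check in checks:
        if not check(model):
            print(f"Smoke test failed: {check.__name__}; aborting.")
            return False
    return True   # safe to launch the long-running test suite

ok = run_smoke_tests("my_model", [update_diagram, check_config, check_blocks])
print("run full suite" if ok else "fix the model first")
```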



Mapping Model-Based Design to Thanksgiving

The Thanksgiving meal and MBD have a lot in common; both are times when people come together, both involve many moving parts, and both are better with some forethought and planning.  (No one can cook a turkey at the last minute.)  With that in mind, here is how the courses in the Thanksgiving meal map onto the MBD process.

The salad: documentation


Let us be honest: when you think of Thanksgiving, you generally don’t think of a salad, but you’re pretty happy to have something light given the heavy items in the meal.  That is why it maps onto documentation in MBD.  Something you are happy to have from your tool providers, and something that you hope exists a year from now when you come back to look at what you put together.
Bonus thought: Like salad, documentation is good for you.

Cranberries: requirements

For many people, cranberries enter their life at Thanksgiving and depart after Christmas.  They know they are supposed to be part of the meal, but they don’t really know why they are having them.  In the Thanksgiving meal, the cranberries provide the bitter/acid component to what is otherwise a sweet and fatty meal.  With Model-Based Design, requirements provide the harsh acid of truth against which your design is measured.
Bonus thought: In the same way that cranberries grow in a bog, requirements grow out of the confusion of initial requests.

Mashed potatoes: testing


Buttery, starchy, with garlic or herbs, mashed potatoes could (in my humble opinion) form a meal in and of themselves.  Once the gravy from the turkey is added in, the analogy to testing becomes clear.  Mashed potatoes bond with the turkey through gravy; testing bonds with the models by wrapping their outputs in loving bits of analysis.
Bonus thought: In the same way mashed potatoes can become too heavy if you don’t prep them well, testing can become burdensome if you don’t plan ahead.

Turkey: the model


In the same way that the turkey is the center of the meal, the model is the center of Model-Based Design.  Much like a turkey, you can overcook a model (poor architecture leading to hard-to-swallow models) or undercook it (poorly designed models with gaps or holes that lead to poisoned outcomes).  However, with just a little bit of forethought, you can have a juicy, delicious bird.
Bonus thought: Much like deep frying turkeys was a fad a few years ago there are fads in software development.  In the end, good development practices don’t change; be clear, be complete and test; don’t fall for fads.

The dessert: certification


Certification is the dessert of the meal.  It may come at the end of everything but I bet you have been thinking about it during the rest of the meal.

Bonus thought: Certification does not mean you have a good product.  It just means you have followed a process.  Take your time to do it right and you can have your cake (pie) and eat it too.



Final thought:  

There are many parts of Model-Based Design, they all have their uses, and we all will have our favorite parts.  For those of you in the US, happy Thanksgiving.  



What is traceability, and why do we care about it?  First, a definition:

Traceability is the ability to describe and follow the life of a requirement in both a forwards and backwards direction (i.e., from its origins, through its development and specification, to its subsequent deployment and use, and through periods of ongoing refinement).

There are three critical aspects to this:

  1. An independent documenting method: for an object to be traceable, it must be documented. This has two parts:
    1. A unique identifier for the object
    2. A version history of the object
  2. Linkages between objects: the documenting (linking) method must include dependencies between objects.  These can include:
    1. Created by (e.g. this model generates that code)
    2. Dependent on (e.g. this requirement is completed by its parent)
    3. Fulfills (e.g. this test case implementation fulfills the test requirement)
  3. Bi-directional: the objects can be traced in either direction.  (Note: not all relationships are parent/child; there can be “associated” linkages.)
Traceability allows us to “connect the dots”
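As a concrete (and entirely hypothetical) sketch in Python, a minimal traceability store only needs unique IDs, versions, and typed links recorded so that they can be walked in both directions; every name below is invented for illustration:

```python
# Minimal sketch of a traceability store: unique IDs, versions, and
# typed links that can be traced forward and backward.

class TraceStore:
    def __init__(self):
        self.objects = {}           # id -> {"version": int}
        self.links = []             # (source, relation, destination)

    def add(self, obj_id, version=1):
        self.objects[obj_id] = {"version": version}

    def link(self, src, relation, dst):
        self.links.append((src, relation, dst))

    def trace_forward(self, obj_id):
        return [(rel, dst) for src, rel, dst in self.links if src == obj_id]

    def trace_backward(self, obj_id):
        return [(rel, src) for src, rel, dst in self.links if dst == obj_id]

store = TraceStore()
store.add("REQ-001")
store.add("model_A")
store.add("code_A.c")
store.link("model_A", "fulfills", "REQ-001")
store.link("model_A", "generates", "code_A.c")

print(store.trace_forward("model_A"))   # what model_A points at
print(store.trace_backward("REQ-001"))  # who satisfies REQ-001
```

Real requirements-management tools do far more, but the three critical aspects above are all visible even in this toy version.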

Why should we care?

Traceability is about organization: the ability to easily find information.  For small systems, or for personal projects, traceability may be an unneeded burden.  However, when you have multiple people working on a project, the ability to trace between objects is how information is relayed.


Does A == C?

I recently taught a class on testing fundamentals; in it, I made the comment that, by my estimate, there are over 8,000 lines of code in MATLAB dedicated to the simple (simple-seeming) test

A == B

Why?  Just what makes testing for equality so hard?  Let’s break it down.

  1. Data types and numerical precision: depending on the selected data type, the resolution needed to determine “equal” may not be present.  You can end up with false positives and false negatives.
  2. Tolerances: you can take data type into account by adding tolerances to the comparison.
    1. Absolute tolerance: abs(A-B) < tol
    2. Relative tolerance: abs(A-B) < per_tol * abs(A)
  3. But what about zero?  A relative tolerance is good, but what do you do when the value is zero?
    1. Relative tolerance (mean): abs(A-B) < per_tol * abs(mean(A))
    2. Relative tolerance (max): abs(A-B) < per_tol * max(abs(A))
    3. Relative tolerance (moving average): abs(A-B) < per_tol * abs(mean(A(i-N:i+N)))
  4. What about noise?  For measured data, how do you handle the “junk data”?
  5. What about missing data?  Much like junk data, what do you do with missing data points?
  6. What about data shifts (temporal or other)?  It is fairly common for comparison operations to take place with “shifted” data, where one signal is offset by some fixed amount in time.
  7. What about non-standard data formats?  How do you handle the comparison of a structure of data?  Do all elements in the structure have to match to “pass”?  Do you apply the same standard of tolerances to all elements?
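For the floating-point cases, a minimal sketch of a tolerant comparison in Python (the function and its default tolerances are illustrative, not MATLAB's implementation) might look like:

```python
# Sketch of the tolerance checks listed above; names and defaults invented.

def approx_equal(a, b, abs_tol=1e-9, rel_tol=1e-6, ref=None):
    """Pass if a and b agree within an absolute OR a relative tolerance.

    `ref` lets the caller choose the relative reference (e.g. the max or
    mean of the signal) to sidestep the divide-by-zero problem near zero.
    """
    if ref is None:
        ref = max(abs(a), abs(b))
    return abs(a - b) <= abs_tol or abs(a - b) <= rel_tol * abs(ref)

# Exact float equality fails after ordinary arithmetic...
print(0.1 + 0.2 == 0.3)              # False
# ...while a tolerant comparison passes
print(approx_equal(0.1 + 0.2, 0.3))  # True
```

Even this toy version has to make policy decisions (OR vs. AND of the two tolerances, what reference to use), which hints at where those 8,000 lines go.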

You can quickly see where my estimate of 8K lines of code comes from.  Why, then, do I mention this?  Two reasons:

  1. Start thinking about the complexity in “simple” tests
  2. Stop creating test operations when they already exist


This is written in the context of testing.  Any sort of algorithmic or logical code will, of course, use comparison operations.  For those cases, keep two simple rules in mind:

  1. Do not use floating-point equivalence operations
  2. Take into account the “else” conditions


What do you mean when you say…

Recently I had a discussion about the meaning of words with my wife.  Most of the time, most people play some degree of fast and loose with the definitions of words and the structure of their sentences.  However, there are some aspects of life and work where that will not stand: doctor’s visits, political discussions, and the writing of requirements.  With that in mind, here are a couple of simple “mad libs” for writing a clear requirement.

Form 1 Response Requirement: When <Subject> is <State> then <Action> shall happen to <Action object>.
In this form, the requirement specifies a response.  For example, when my wife (subject) comes into the room (state) then I (action object) smile (action).

(Note: this should be fleshed out with definitions of “the room” and “smile”; e.g. how long after, and for how long.  The good news is that this is a testable requirement.  My wife enters rooms all the time, so I can test it out tonight!)

Form 2 State Check: When <Subject> is <State> then <Measured Object> shall have value <State>

This form enforces existing conditions; it can also be written in a “Before <Subject> is <State>…” form.  An example is “Before the car <subject> is placed in park <state>, the vehicle <measured object> shall have a velocity less than 0.1 mph <state>.”
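As a toy sketch in Python, the two templates can even be treated as format strings; the field values filled in below are invented examples, not requirements from any real project:

```python
# The two "mad lib" forms above, expressed as format strings.

RESPONSE_FORM = "When {subject} is {state} then {action} shall happen to {action_object}."
STATE_CHECK_FORM = "When {subject} is {state} then {measured_object} shall have value {value}."

req1 = RESPONSE_FORM.format(
    subject="the brake pedal",
    state="pressed",
    action="illumination",
    action_object="the brake lights",
)
req2 = STATE_CHECK_FORM.format(
    subject="the car",
    state="placed in park",
    measured_object="the vehicle speed",
    value="less than 0.1 mph",
)
print(req1)
print(req2)
```

Treating the template as data makes it easy to audit a requirements set for sentences that do not fit one of the agreed forms.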

So what templates do you use?




Engineering and dancing…

Completion of a task is accomplished by performing a sequence of steps.  The more steps in the sequence, the more likely you are to make a mistake, either by forgetting a step or by doing a step out of order.  One method for reducing the likelihood of making a mistake is the creation of sub-tasks.  This is where the analogy to dancing comes into play.

When you first learn to dance, you learn basic steps: the waltz’s box step, the tango’s 8-count “slow, slow, quick, quick, slow”…  Once the basic step is mastered (and heaven help me, one day I will master the box step), additional “sub-tasks” can be learned.  There are four virtues of sub-tasks.

  1. Low chance of order mistakes: shorter tasks have a lower risk of errors due to their simplicity
  2. Low cost of errors: if a mistake is made in a sub-task it is, often, isolated to that sub-task, and the sub-task can be quickly re-run
  3. Decomposition: when broken into sub-tasks, the work can frequently be distributed to multiple people
  4. Ability to chain together: the sub-tasks can be combined into multiple “routines” and reused in multiple processes

In general, processes that have 3 to 5 steps are considered “easy” to remember and master.  Going above 8 steps in a process results in an increased possibility of human error.