Demos: Are you making college chili?

When I was in college I would, after swim meets (H2Okies), make up a batch of chili. It had all the right ingredients(1) and got the job done(2), but it was tuned for a very narrow audience. It wasn't until I started cooking for and with my wife Deborah that I really learned what it means to create a meal(3) for a wide group of people(4).

Michael's Chili Recipe (College and Adult versions)

College (Demo)

  • 1 lb ground beef
  • 32 oz canned kidney beans
  • 32 oz canned tomatoes
  • 1 onion
  • garlic powder
  • red pepper (a lot of it)
  • cayenne pepper (a lot of it)(5)
  • brown sugar (2 tbsp)

  1. Brown ground beef in pot
  2. "Cook" diced onions in beef fat
  3. Throw in the rest, cover with water
  4. Stir it every once in a while
  5. Add water to keep from scalding

Adult (Production)

  • 1 lb ground beef (96% lean)
  • 1 lb dry kidney beans, sorted and soaked
  • 16 oz canned tomatoes
  • 3 ~ 4 fresh tomatoes
  • 4 oz tomato paste
  • 1 red onion
  • 4 stalks celery
  • Spices, to taste and freshness: fresh garlic, pepper, salt, paprika, cumin

  1. Sauté onions, garlic and celery in olive oil
  2. Brown beef with the onions, garlic and celery
  3. Drain excess fat
  4. On low heat add fresh diced tomatoes; let sweat for 5 minutes
  5. Add in spices
  6. Add in kidney beans and canned tomatoes
  7. Add water to cover beans
  8. Simmer at low heat for 2 hours
An underspecified recipe

Demo versus production: What is the difference?

There are three differences. First, the dish is no longer dominated by a single note: heat(6). Second, time: the college recipe was great for someone who needed something fast, e.g. throw it in and walk away; the adult version requires an investment for a greater return. Finally, reliability: hidden in the simple phrase "spices: to taste and freshness" is a decade of lessons learned.

Should you (I) make a demo? Or a prototype?

When the winter months roll around and the desire for a good hearty soup arises, I can generalize my knowledge to a new soup. I don't need a demo because I learned from past experiences. When I am creating a new software item I first look to see if there is something I can reuse, or if it is a known domain. If there is, I don't create a demo. If I need to learn something, or I need to prove to a group that it can be done, then I create a demo.

If the "demo" is something that I think I will be able to use in the future then it becomes a "prototype". If I am prototyping I put more time into the demo's architecture, the creation of test cases, and the creation of supporting infrastructure. It may not be the final product but it will be drawn upon.

The false lessons of demos

One last comment on demos: they can teach you false lessons. When you are doing things "fast and dirty" you have problems that you do not have when you follow a mature process. When I was in college my chili was in a constant state of "it could burn"; it could burn because I was using a bad pot and a cheap stove with poor heat control. I haven't burned a chili in 20+ years.

The same issues can happen with software development. When you are in rapid prototyping mode it is easy to end up with spaghetti code(7). This should be viewed as a failure of the development process, not of software in general.


  1. It's hard to mess up beans, but, in all honesty, the ground beef could have been of higher quality.
  2. The job, in this case, being twofold: first, feeding very hungry college-age athletes and second, burning the roof of your mouth out.
  3. In college the meal was chili and corn bread; BYOB. I know I made salads but I don't think I ever served one.
  4. Of course when it is just Deborah and me the meal is perfectly tuned to us; which is another sort of perfected meal.
  5. It is a good thing that I did not know about ghost peppers back then; I would have used them, and I would have used way too many of them.
  6. Heat in chili is, now, an after-market feature. If you want it hot you can add many different condiments to add heat. I would recommend this compote.
  7. Mushroom code and spaghetti code are similar in that they both develop due to a lack of planning. Spaghetti code is characterized by convoluted calling structures; mushroom code is an accumulation of code on top of code.

If a tree falls… or the importance of documenting processes

There is an old engineering joke(1);

One day, a year after she(3) retires, Lucy receives a call from her former boss. "Lucy, the X-43q isn't working. I know it was your baby, um, could you come in and take a look at it?" Grumbling, she says "Yes, but I want 10K for the repair"; since the X-43q is key to everything they do and no one knew what to do(2), they quickly agree. The next day she comes in, listens to the machine for 5 seconds and says "The timing belt is slack, replace the tensioner spring. Check please."

In shock her former boss sputters, “You can’t expect 10K for 5 seconds of work!”

She replies, "The 10K isn't for the 5 seconds of work, it is for the 10 years where I learned what to do."(4)

Attributed to just about every senior engineer….

So to explain the title: if a tree falls in the forest it still makes a sound; however, if a process isn't written down it does not exist. So how do you write up a process?

Objectives, actions, triggers, conditions and rationale

The most basic process document has three parts

  1. Objective(s): the outcome of following the process
    • Positive: X shall occur
    • Negative: X shall not occur
  2. Actions: what you do to achieve the objective
    • Explicit: The following steps shall occur
    • Engineering judgement:(5) The following types of things shall be done
  3. Rationale: why you are taking an action, condition, or modification. There can be multiple rationales for a single process; more complex processes will have more.

It is rare for a process document to have just the Objectives and the Actions; the two other categories are

  1. Triggers: what starts the process in motion
    • Periodic: tasks that are performed on a temporal basis
    • Conditional Event: when X happens then the task starts
    • Process Event: when a milestone in the project is reached
  2. Conditions: modifications to the process based on environmental factors
    • Modifying: if Q happens then follow the "Q" path through the process
    • Aborting: if R happens then halt the process and start process "Oh crud, R happened"
    • Gating: before this process can start the following must happen

The distinction between conditional and process events is simple; process events are expected, conditional events may happen(6).

A short example process

Process for making 3-bean soup (objective: have a tasty meal)

When the number of soups in the freezer falls under 3 (conditional trigger), a soup shall be made. The type of soup shall depend on the last 2 types made (modifying condition) to prevent repetition of the soups (rationale).


  1. Verify you have 2 hours to monitor the soup (gating condition on the process)
  2. Verify you have the required ingredients (gating condition on the process)
  3. Chop, dice, mince and mix,
    do the things for soup to fix.(7)
  4. On-going: Monitor for soup overheating
    1. Add water or reduce heat (engineering (cook's) judgement)

With step 4 I introduce the notation "on-going". Within a process some steps are one-off and others are on-going or repeated. In context it should be obvious why something falls into a given category.
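The objective/action/trigger/condition/rationale structure above can be sketched as data. This is a minimal illustration, not a real tool: the `Process` class and all field names are invented, and the soup process is encoded from the example above.

```python
from dataclasses import dataclass, field

# A minimal sketch of a process document as data. The class and field
# names are hypothetical; they mirror the five categories described above.

@dataclass
class Process:
    objectives: list                 # the outcomes of following the process
    actions: list                    # ordered steps, some marked on-going
    rationale: list                  # why the actions/conditions exist
    triggers: list = field(default_factory=list)    # periodic / conditional / process
    conditions: list = field(default_factory=list)  # modifying / aborting / gating

soup = Process(
    objectives=["Have a tasty meal"],
    actions=["Chop, dice, mince and mix",
             "On-going: monitor for soup overheating"],
    rationale=["Prevent repetition of the soups"],
    triggers=[("conditional", "fewer than 3 soups in the freezer")],
    conditions=[("gating", "2 hours available to monitor the soup"),
                ("gating", "required ingredients on hand")],
)

# If it isn't written down, it doesn't exist:
print(len(soup.actions))  # 2
```

Writing the process down this way also makes the gaps obvious: an empty `rationale` list is a prompt to ask why a step exists at all.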


  1. Like most, it isn’t funny, it is bitter. Engineering jokes are like Aesop’s fables, they attempt to impart wisdom but we are often left wondering “if the mice have metal working technology to make a bell, why don’t they make some arms and weapons”(2)
  2. At least that is what I wondered.
  3. I’ve updated the old joke
  4. Of course if they had given her time to write up what to do she could have stayed retired
  5. There will always be some engineering judgement in processes, the goal is to guide that judgement
  6. We can use a simple cooking analogy. When you are making dinner you expect the food to finish cooking, that is a process trigger, e.g. set the table. You do not expect the food to catch on fire in the oven, that is a conditional trigger.
  7. One day I will start a blog "MBC: Michael Burke Cooks"; for now accept a short rhyming couplet for what goes into making a soup (or anything else)

Software developers: review like cooks, not accountants…

In this post I argue for performing simulation instead of model differencing to examine changes to a model. So take a deep breath(1) as I'm about to push this analogy fairly hard… When I'm in the kitchen creating a meal I take care with my ingredients, checking for freshness and the flavor balance(2), always aware of allergy issues(3). I start in a clean kitchen(4); I use a sharp knife and a cast iron pan on a gas stove: quality tools(5). When the dish is done I plate it and I judge it by how it looks and how it tastes. When it is done it is judged by what it is, not what went into it.


By contrast, when I balance my checkbook(6,7) each line item is inspected for correctness and the desirability of the transaction; to understand my finances I need to know each transaction.


A good model is rather like bouillabaisse(9), a complex harmony where the sum is greater than the parts; so then how do you “taste” a model? If you are working in a Model-Based Design environment(10) it means simulating the model and inspecting the outputs of the simulation. If the simulation shows the model is behaving in the expected way for a given scenario, then you know you have the “right” taste. This is a functional review.

"But wait", you may be saying(11), "what about allergies, a clean kitchen, and good tools?" These are addressed by processes and guidelines. Processes protect you from allergies by enforcing guidelines(12), test driven development(13) and development based on requirements(15). The clean kitchen, well, that comes from following good processes for an extended period of time; and as any good chef would tell you, "if you are not cooking you should be cleaning". Good tools, well, see The MathWorks.


"Ok, so I see why be a cook, but why not an accountant? Don't I need to double check things?" you may be pondering(16). Accounting comes into play if you don't pass the taste test and if those tests don't identify the root of the problem. Then performing a block-by-block (or line-by-line) review of the model is relevant. Until then, model or code differencing does not provide utility.

If I have an equation, say 2+2, and then you change it to 2+3, it is easy for me to see the difference in the outcome by comparing the text. However if this is my equation

The Navier-Stokes in 3-D

The effect of a small change is not obvious by inspection. Differencing text and differencing models is a methodology held over from a time before easy simulation when it was one of the few ways to figure out what was happening. This accounting approach is still valuable as a debugging tool but it is not and should not be your primary method for reviewing models.

Disclaimer: this may be the first post where the footnotes are longer than the article. I got carried away. I blame Terry Pratchett(17).


This is where I show just how much I can push this analogy:

  1. Deep breath: Note, if you were in the kitchen while I was cooking, that deep breath would be delightful; unless I'm pickling. Pickles taste great, but brine air is not fine air.
  2. Freshness & flavor balance: In software this is where I validate "am I using the most up-to-date methodologies" and whether those methodologies are correct for this project.
  3. Allergies: A stand-in for the known issues in development, either bugs or requirements that need to be met.
  4. A clean kitchen: clearly this is a plea to start your development process with a minimum of legacy issues.
  5. Quality tools: There is an old cooking saying, "The sharper the knife the safer the knife". Quality tools prevent some categories of errors and make it easier to prevent others.
  6. Checkbooks: For those under the age of 25, checkbooks are a primitive form of tracking payments in which the writer of a check recorded the amount of the promissory note (check) and subtracted that from a banking balance.(7)
  7. Checkbooks 2: Technically speaking, our online spreadsheet collates information for statistical analysis.
  8. Checks/Balance: This is not to be confused with the concept of checks and balances that is baked into the US constitution.
  9. Bouillabaisse: With the hint of saffron that pulls it together.
  10. MBD Environment: If you are not already working in this environment, hopefully this blog gives you reasons to do so.
  11. May be saying: I am assuming you are very invested in this blog post and can’t help but verbally exclaim your concerns as you read it. Please be aware of the people around you.
  12. Guidelines: Modeling guidelines are like cooking best practices; codified common knowledge.
  13. Test driven development: In the same way you press on a burger to know when it is done (14) test driven development makes sure your code is well-done.
  14. Press down on the burger: This would be considered black box testing since you are not cutting into the burger.
  15. Requirements: Requirements are like recipes; they are what you start with, and they may have to evolve over time. Rational evolution leads to tasty dishes; random substitution leads to the garbage disposal.
  16. Pondering: Doing this after 11, and noticing the people around you.
  17. Terry Pratchett (Sir)

Simulink: What is an Algebraic Loop?

If you have worked with Simulink for more than a few hours you have no doubt seen the error message: "Warning: world about to end, algebraic loop detected. You have 10 seconds to snip (insert unit delay) to prevent model from exploding."

Ok, so I exaggerated the error message; however, why there are algebraic loops in the system is often a source of confusion. If we look at the model above and "translate" it into the mathematical equation, we have

 y = y - 0.5 * y

This equation, written in C, is perfectly legal; however, there is a hidden assumption about the initial value of y. Let's look at how Simulink would like you to set this up and then look at what happens depending on how you break the algebraic loop.

We have three examples of breaking the algebraic loop. For the sake of argument let’s say that the input is a constant value of 1 and let us change it to a sum operation

        Form 1    Form 2    Form 3
T = 0   1.5       2         1
T = 1   1.75      2         1.5
T = 2   1.983     2         1.875

If the IC value for the unit delay is switched to 0 (zero), then forms 1 and 2 result in the same outputs, with output values of 1, then 1.5, while form 3 has an initial output of 1, then 1, then 1.5.


General best practice is to insert the unit delay block where the data is consumed; in this example it would be placed before the "Sum" block. By doing this we ensure that only the paths that need the last pass data receive last pass data. Form "3" in the example shows the effect of inserting the unit delay at the source.
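The effect of the unit delay can be sketched outside of Simulink. This is a minimal illustration, not the actual model (which is not shown here): with the input changed to a sum, the loop y = u + 0.5*y is algebraic, and inserting a unit delay in the feedback path turns it into the solvable recurrence y[n] = u[n] + 0.5*y[n-1].

```python
# Minimal sketch of breaking an algebraic loop with a unit delay:
# y[n] = u[n] + 0.5 * y[n-1]. The values illustrate the idea; they are
# not claimed to reproduce the table above exactly.

def simulate(u, ic=1.0, steps=3):
    y_prev = ic                 # the unit delay's initial condition (IC)
    outputs = []
    for _ in range(steps):
        y = u + 0.5 * y_prev    # the feedback path sees last-pass data only
        outputs.append(y)
        y_prev = y              # the unit delay stores y for the next step
    return outputs

print(simulate(1.0))  # [1.5, 1.75, 1.875], converging toward u / (1 - 0.5) = 2
```

Moving the delay (source vs. consumption point) changes which signal carries last-pass data, which is why the three forms above produce different initial transients while sharing the same steady state.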

A sample project: Model Unit Validation

With this post I'm going to walk through a sample project demonstrating how units in Simulink models can be validated, demonstrating each stage of the project's development and providing insight into how and what I was thinking at each step along the way.

Stages of project development

There are 5 stages to project development

  1. Identification of need: What is the problem you are trying to solve?
  2. Review of existing solutions: Is there an existing solution that meets needs?
  3. Task identification/estimation: What are the tasks / steps that need to be completed and what is the effort (risk) to complete them?
  4. Implementation and testing: Following a ‘test as you go’ strategy develop the modules that you need
  5. Validation of solution: Does what you did in 4 solve what you needed in 1?

Stage 1: Identification of need

As of release R2019a Simulink provides

  • Unit specification at the boundaries and through Simulink.Signal and Simulink.Parameter objects
  • Unit checking at the boundaries of models (inports and outports)

While this is very useful (and you should always define your boundary conditions), for large models knowing the units of interior signals would be beneficial.

Problem to be solved: determine the unit of all blocks internal to the model and use that information to validate the units at the boundaries / internal to the model.

Unit checking at the Subsystem Boundary

What I wanted to be able to do was specify the unit types on the inports and outports and propagate the units through the model.

At this stage, depending on the complexity of the project, the high level requirements should be written; derived requirements will be written at the start of task 3.

Stage 2: Review of existing solutions

There are two classes of "existing solutions". The first are solutions to the "base" problem you are trying to solve; the second are solutions to the sub-tasks in the problem. In this instance we have already identified the extent of the solution to the "base" problem: the ability to check units at the boundaries of the model. For what we want, this is insufficient.

Examples of “Sub-Tasks”

For the sub-tasks, the Simulink APIs provide the interface required to "decode" the model and propagate the information through it.

Stage 3: Task identification/estimation

Depending on the size of the project, task identification can be decomposed to a "function" based level or lower. For large projects the "function" may in fact be a collection of functions. As you start to identify the functions required, reuse of the components should be taken into account. My first pass (almost always on paper) is in an "operation required / issues" format.

  1. Propagate unit information in the model:
    1. How to determine data/unit flow
    2. How to handle data merging blocks (buses and mux blocks)
    3. How to handle subsystems
  2. Calculate output units based on inputs:
    1. Define categories of block transformations
      • Matching
      • Canceling
      • Integration / derivative
      • Null
    2. Handle effects of Parameter data types
  3. Apply information to the blocks
    1. Units (note: most blocks don't have a "unit" field)
    2. Status of block (assigned, unassigned, invalid)
  4. Check calculated units against assigned
    1. Outports
    2. Simulink Signal objects

Having identified the tasks I now think about what goes into each step and if I have experience or near experience in coding it; in this case all of the tasks involved are close to things I have done before so I have faith in my estimates…. (For the record, I estimated 8 hours and it ended up taking me 7)

Stage 4: Implementation and Testing

Step 4.1: Unit propagation

My first question was how to propagate the units through the model. I decided that a reasonable first requirement was that all of the inports must have units defined. If I enforced that, I could simply "walk forward" from the inports to the outports of the system.

Once that approach was selected the implementation became clear: create an array of inports and trace outward from them. As each new block is found it is tacked onto the end of the array, while the block just processed is removed from the front of the list.

(Note: Since this is a demo project I am not, currently, doing anything with Mux or Bus blocks).
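The "walk forward" described above is a breadth-first traversal, which can be sketched as follows. This is a hedged illustration: the function name, the hard-coded graph, and the block names are all invented; in the real script the connectivity would come from the Simulink APIs rather than a dictionary.

```python
from collections import deque

# Hypothetical sketch of walking forward from the inports, carrying units
# along the signal paths. The model graph here is hard-coded for illustration.

def propagate_units(inports, downstream, units):
    """inports: blocks whose units are already in `units`;
    downstream: dict mapping each block to the blocks it feeds."""
    queue = deque(inports)                 # the array of inports to start from
    while queue:
        block = queue.popleft()            # remove the block just processed
        for nxt in downstream.get(block, []):
            if nxt not in units:           # first visit: inherit the unit
                units[nxt] = units[block]
                queue.append(nxt)          # tack new blocks onto the end
    return units

units = propagate_units(
    inports=["In1"],
    downstream={"In1": ["Gain1"], "Gain1": ["Out1"]},
    units={"In1": "m/s"},                  # all inports must have units defined
)
print(units["Out1"])  # m/s
```

The "first visit" check is what keeps the walk from looping forever if the model graph contains a cycle.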

Step 4.2: Calculating units

The key to calculating the units is to realize that there are really only 3 fundamental operations.

  • Cancellation: through either multiplication or division units are canceled.
  • Matching: the operation (addition or subtraction) requires the input units to match
  • Integration / derivation: a “cancellation” of time.

As you look at the code you will see that I have created "num" and "den" variables; these are for the numerators and denominators of the inputs. Additionally, for the sake of a demo, I have restricted the number of block inputs to 2.


  • In hindsight I didn’t actually need to find the numerator and denominator since much of what happens is through string clean up. However, conceptually, it was a required task).
  • In this example I am not handling feedback loops through unit delays, however they could be treated as a special case of a “sum” or matching required block.
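The three fundamental operations above can be sketched with num/den lists. This is a hedged illustration, not the actual project code: Simulink stores units as strings (e.g. "m/s"), and here they are assumed to be already parsed into numerator and denominator lists; all function names are invented.

```python
# Sketch of the three unit operations: cancellation, matching, and
# integration/derivation. Units arrive as numerator/denominator lists.

def cancel(num, den):
    """Multiplication / division: cancel units common to both sides."""
    for u in list(num):          # iterate over a copy while mutating
        if u in den:
            num.remove(u)
            den.remove(u)
    return num, den

def match(unit_a, unit_b):
    """Addition / subtraction: the input units must be identical."""
    if unit_a != unit_b:
        raise ValueError(f"unit mismatch: {unit_a} vs {unit_b}")
    return unit_a

def integrate(num, den):
    """Integration: a 'cancellation' of time (multiply by seconds)."""
    return cancel(num + ["s"], den)

print(cancel(["m", "s"], ["s"]))   # (['m'], [])  i.e. m*s / s -> m
print(integrate(["m"], ["s"]))     # (['m'], [])  i.e. integrating m/s -> m
```

Compound units (N as kg*m/s^2) would, as noted below, need an expansion step before cancellation.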

As I developed the code I also wrote some basic test points to validate that the behavior was correct. The most basic tests were to determine if the units, stored as strings by Simulink, could be converted to the num/den strings.

In this case you can see that I tried to cover all of the ways in which data could be encoded. One thing to note, I did not implement “compound units”. E.g. if you put in N (for Newtons) I do not cancel that in the same way you would kg*m/s^2. To do that I would, I think, first expand all the compound units to their fundamental components and then cancel.

Cancellation at the root

The final step in this process will be to validate the units at the outports. To indicate passing / failing outports I color code them “green” for pass and “red” for failed.

Stage 5: Validation of the solution

The code, as implemented, provides a "taste" of the final solution. However, the way in which it was coded is modular, so as I add new capabilities (support for additional blocks, muxes, buses, subsystems) the extensions will be easy to add. If you are interested, I have included a copy of the code and sample model here.

In praise, in damnation of the simple example


When I was first learning German in high school, the first book(1) that we read was "Emil und die Detektive", a "young adult" book.

Wann ich ersten Deutsche studiere, ich lessen die Buch "Emil and the detectives" ("When I first studied German, I read the book 'Emil and the Detectives'")

The final book that I read was Goethe's Faust. Needless to say, the latter book was of more interest. I remember the ideas of Faust; I remember little of the Detectives. So why did I start with YA lit?(2) And why am I writing about it in a blog about Model-Based Design?

Learning fundamentals

When learning a language it is obvious that you should start with simple examples where you learn the fundamentals of the structure and expand your vocabulary. However, if you never move on to more complex subjects, your understanding of the language (and, in the case of literature, life itself) will be forever arrested at a low level.

The gain block: Hello world of Simulink

The cliché first program is the simple printout of "hello world". In Simulink the first model is "input, gain, output". It can be used to show

  1. How blocks are added to the model
  2. How blocks are connected
  3. How to parameterize the model
  4. How to simulate a model….

After learning: simple models


Finally on to the subject of this post; why do we create simple models once we have learned the “language”?

  1. To validate understanding: for blocks / constructs that are more complex, a simple model that exercises the range of behaviors helps us understand them under all modes of operation.
  2. Debugging: you are trying to isolate parts of the model to determine where an error in your system occurs.
  3. Instructional: you are using the model to introduce people to basic concepts.

Why models are not like books…

When I read a YA book in German I knew, through daily life, that the language of the book was targeted at younger readers. When I see an example model in a new language I don’t know if the pattern I’m seeing is the “adult” pattern or the YA pattern.

The most common “faults” of example models are(3)

  • Size: large models serve no purpose in showing basic concepts.
    • Mitigation: create example "large model" cases. Walk through the best practices for large models (and systems of models).
  • Efficiency: to illustrate one concept we will often make other parts of the model simpler.
    • Mitigation: explicitly state the area of the model that is expanded for instruction.
  • Problem specific: sometimes examples are intended to demonstrate how to solve a specific problem, but that solution may be incorrect in other situations.
    • Mitigation: explicitly state what the scope of the solution is.
  • Lack of domain knowledge: this last issue is the hardest to deal with; on occasion example models are created by people who are new to the language and, as a result, they come up with solutions that are inadvisable.
    • Mitigation: peer review, and look to industry sources for examples.


  1. The second book was "The Little Prince"; which in hindsight was an odd choice since it was originally written in French. Still, I could never forget the words for sunrise and sunset. (Sonnenaufgang und Sonnenuntergang; this makes singing songs from Fiddler on the Roof in German difficult)
  2. In truth, written in 1928, it provides a much more realistic view of life than modern Young Adult novels. Danke, dass Sie Goss vermissen.
  3. In all honesty I have been guilty of all of these things in my examples over the years; however I have tried to correct this through the mitigation recommendations I have listed.

Common/uncommon faults

“Z key holder”

Last week the "Z" key popped off of my laptop. As my standard language of communication is English this has not been a big deal; however, on Friday when I went to write an email in German things changed. More correctly, the task (typing) was the same but the mode (language) changed. Never one to let a random thing pass, I thought "wow, this is a great topic for a blog post".(1)

Testing in the “comfort zone”

Your comfort zone

It is a well-known problem with testing that we tend to test the things that we know best; as a result we end up with incomplete test coverage. There are two problems: first, the obvious one, that we are not covering everything. Second, we are wasting time by creating redundant tests in the "well known" areas.(2)

How to exit your comfort zone?

Let’s look at three types of models, mode based, continuous (e.g. equation), logical and hybrid(3).

  • Mode based: exiting the "CZ" with mode based models is simple; validate that you are testing every mode in the model.
    • It should be noted that with mode based models there is often a temporal component that must be validated.
    • Additionally, the "mode to mode" paths should be validated, as initial conditions can often be different depending on which mode you have transitioned from.
  • Continuous (equation): these models can be fully tested by exercising the full range of all inputs. For example, let's say you have two inputs U1 and U2 with ranges [0 to 10] and [-5 to 5]. The test vectors would cover a range [0,-5 : 10,5]. There are a few considerations.
    • Test spacing: depending on how sensitive the output is to changes in the input, the "steps" in the coverage need to be adjusted. E.g. should you "step" inputs of U1 in 0.1 or 0.5 increments?
    • Exclusivity: in some instances inputs are related, e.g. the temperature of your car's engine is never less than the outside temperature.(4) This can reduce the test range.
    • Temporal: another factor is "how long" you are at each data point.
  • Logical: these are similar to mode based tests, however they lack the state information that mode based testing implies. Like mode based testing, this is validated by exercising each logical path in the model. Tools like Simulink Design Verifier can be used to generate these test vectors.
  • Hybrid: e.g. 95% of all models. This is where design of experiments comes into play. For large systems it may not be practical to test "every point". However, that is not the goal; the objective of testing is to exercise every operational behavior.

The upshot

Testing = Hacking(6)

Good tests take time; both to develop and to execute. Assuming a library of basic test functionality and a well written requirement document for the component you can estimate the number of test “points” as a function of the number of modes, inputs and logical branches.
TP = 1.25 * Modes + (min(0,numInputs/2) * numOutputs)^1.5 + numLogical/2; (5)

This formula is empirical, and is derived from a review of test sets for well ordered models. The assumptions built into the formula are

  1. The “Mode-to-mode” connections are limited, e.g. not every mode can transition to every other mode
  2. There is a fair degree of mutual exclusivity in the input vectors.
  3. The number of tests is more sensitive to the number of outputs than number of inputs.
  4. Logical tests can often have redundant paths due to the lack of state information.

The final part of this equation is "time of construction". Time of construction refers to how long it takes to create each test point. Both mode based and logic based test vectors can be automatically generated, often achieving 100% coverage (or showing that 100% coverage is not possible and that there is an error in your logic). As a result I generally estimate the time to develop these tests as

t = (NumModes + NumLogic) * 1 minute;

The time assumes that some level of errors in modeling will be discovered and corrected. For the equation (continuous) testing, the time is dependent on the coverage of the requirements; e.g. the more of the input space the requirements cover, the lower the total testing time.

t = Num_Req * 45 minutes + (10 minutes * %notCovered * 100)

Again, this is an empirical formula based on the well ordered models and an existing testing infrastructure.


  1. This blog post will be forwarded to my manager to explain why there is a $23 repair bill sent in
  2. I was once proudly told by an engineer that they had over 100 tests for their system; the problem was that those 100 tests were all dedicated to 6 of the 27 functions in the system. We corrected that issue.
  3. That sentence should have been tested, as there are 4 "types" in there, not 3. This is what happens when you "design" a sentence with preconceived notions.
  4. Ok, that statement isn't absolutely true: if you had your car in a "cold box" and then drove it out into a warm day, for a short period of time the engine would be cooler than the outside air. At the same time, if you are storing your car in a "cold box" you
  5. Hmmm, ending with a “;” shows just how long I have been programing in MATLAB.
  6. Again, thank you


The Matrix(1)

In 1999 the movie The Matrix introduced(2) millions of people to the philosophical question "how do we know if we live in the real world or a simulation?" As an engineer working for a company that makes the "Matrix Laboratory", I have thought about this idea and its logical extension: can we trick a machine into thinking it is in the real world?

Is this the real life? (3)


Simulations of reality have existed for a long time: from wind tunnels, to lumped mass models, to complex finite element models, they have been the backbone of engineering design. As powerful as these models have been, they have been limited to simulations of physical properties, or perhaps a system of physical properties. The next generation of simulations attempts to simulate the complex nature of the real world, that is to say people and their semi-predictable behavior.

Humans as lumpy-mass models


The world is filled with humans, roughly 7.6 billion as of the writing of this post. We are out there driving cars, walking in the street, making phone calls, cooking dinner, talking about philosophy and sitcoms. We do a lot of different things. When moving in a mass we are largely predictable; that is to say, if you asked me to calculate how long it would take for 100,000 people to exit the University of Michigan's football stadium, I could give you a reasonably accurate answer for the total. If, however, you asked me how long it would take for a given person, well, then it becomes more difficult. It is the aggregate behavior that is predictable. The issue is that for controllers that interact with the real world the aggregate is not enough.

So how do you go about simulating a pod of people(4)? There are several basic methods.

  • Conway’s game of life: “humans” can be simulated by giving them a set of basic “rules”. Those rules (with weighted objective functions) determine how they operate. Note: your rules here can’t be too perfect, real humans make mistakes. (Note: this is often done using a cellular automata approach)
  • Genetic algorithms: The humans can be derived using genetic algorithms(5). In this case a set of baseline behaviors are defined as well as “mutations” or permutations on those behaviors.
  • Fluid dynamic analogies: Fluid dynamic models do a good job of modeling flow “restrictions” around doors and changing widths of the system.
  • Real world data (human in the loop): The most difficult to set up, but done well, mining the real world for data on how people act, and react, provides the most accurate models. The previous three suggestions can be considered reduced-form versions of the “RWD-HIL”.
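As a toy illustration of the first, rule-based approach, here is a hedged Python sketch (all the names and the mistake model are mine, not from any real crowd simulator): an “agent” walks one cell at a time toward an exit, occasionally dithering the way a real person does. The aggregate is predictable; any single run is not.

```python
import random

def step(position, exit_pos, mistake_rate=0.1):
    """Move one cell toward the exit; occasionally make a 'human' mistake."""
    if random.random() < mistake_rate:
        # Dither: step away, stall, or step forward at random
        return position + random.choice([-1, 0, 1])
    return position + (1 if position < exit_pos else -1)

def time_to_exit(start, exit_pos, mistake_rate=0.1, seed=0):
    """Count base ticks until a single agent reaches the exit."""
    random.seed(seed)
    pos, ticks = start, 0
    while pos != exit_pos:
        pos = step(pos, exit_pos, mistake_rate)
        ticks += 1
    return ticks
```

With `mistake_rate=0` an agent 10 cells from the exit takes exactly 10 ticks; with a non-zero mistake rate individual runs vary, while the average over many agents stays stable.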

Why do we want a matrix for our machine?

It is an oft-repeated question for self-driving cars: what happens if a child darts out in front of one? Because it is repeated so often it is now tested for heavily. But what about all the other foolish things kids (or adults) do? Ever drop something in the street and stop to pick it up? Ever order 100 tee-shirts instead of 1? Pull up on the throttle when you should have pushed down? In the end people are semi-random creatures. Creating realistic models of people allows us to create better control algorithms.


  1. The image of the “red pill” / “blue pill” should not be taken as an endorsement of A/B testing as the only validation methodology
  2. Note, it introduced people to the question, it does not mean that many people put much thought into it beyond “dude, how can you know?”
  3. Note, if we do it correctly we never have to sing “Mama, my controller just killed a man”
  4. A pod of people is of course a reference to “pod-people“; e.g. close enough to fool some people
  5. Real people are created from genetic algorithms, so this should work, right?

If you find benefit from this blog, consider subscribing

Stateflow scheduling: Examples

In my last post, Execution order and Simulink models, I promised a look at scheduling best practices using Stateflow; in this post I hope to deliver.

Simple periodic scheduling

In our first example we will look at simple periodic schedulers. Let’s assume we have a system with three rates: 0.01, 0.1, and 0.2 seconds. This can easily be implemented in Stateflow with the following chart.

If we look at the “activation” for each of the task sets we would see the following.

Yellow = 0.01, Blue = 0.1, Red = 0.2

In this case you can see that each of the tasks is being triggered at its given rate; at 0.1 seconds both the 0.01 and 0.1 tasks activate, and at 0.2 all three are active. In many cases this is fine; the order in which these tasks execute is set by the order in the chart (e.g. 0.01, 0.1, then 0.2). However, you may want to “space out” the activations. In that case a Stateflow chart like this would be the solution.
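The counter pattern behind such a chart can be sketched in Python (a sketch of the idea, not the Stateflow chart itself; the task names are illustrative): everything runs off the 0.01 s base rate, and the slower tasks fire on every 10th and 20th base tick.

```python
def schedule(num_base_steps, base_rate=0.01):
    """Return which task sets fire on each tick of the 0.01 s base rate."""
    log = []
    for k in range(1, num_base_steps + 1):
        t = round(k * base_rate, 10)
        active = ["task_0p01"]            # base rate fires on every tick
        if k % 10 == 0:
            active.append("task_0p1")     # every 10th tick -> 0.1 s
        if k % 20 == 0:
            active.append("task_0p2")     # every 20th tick -> 0.2 s
        log.append((t, active))
    return log
```

At t = 0.1 the first two tasks stack up, and at t = 0.2 all three land on the same base step, which is exactly the crowding the offsets below are meant to relieve.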

In this case three parallel states are created. The 0.1 and 0.2 rates have “offsets” so that they execute out of sync with each other, as shown in the resulting execution graph.

Offsets on the charts.
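The offset idea reduces to a one-line predicate (again a sketch; the tick counts are mine): a task fires when its base-tick counter, shifted by its offset, hits a multiple of its period.

```python
def fires(k, period_ticks, offset_ticks=0):
    """True when a task with the given period/offset should run on base tick k."""
    return k >= offset_ticks and (k - offset_ticks) % period_ticks == 0
```

With the 0.1 s task offset by one base tick and the 0.2 s task offset by two, the two slower tasks never activate on the same base step.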

Mode based scheduling

Beyond rate-based scheduling, mode-based scheduling is the next iteration of the scheduling examples.

Stateflow after dark…

In this example the scheduler is decomposed into three parallel states: the “Main” or rate-based state, a mode-based state, and an event-based state. The main state is similar to the two previous examples, so let’s look at Mode and Event.

Feeling the mode

There are three things of note here. First, within this state we start the system off in the “Initialization” state; this is a safe selection, as most systems start off in “Init”. Next, movement between the states is controlled by the input variable “Mode”. Use of the ‘hasChanged’ method gates the transitions between the different modes, allowing the user to switch from any of the modes without complex routing logic. Finally, the mode “Emergency” is for the non-critical scheduling responses to emergencies. Any actions that fall into the true emergency category should be event driven so their execution starts immediately.
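A minimal Python sketch of the Mode state (the mode names here are hypothetical, and the ‘hasChanged’ gating is emulated by remembering the previous input):

```python
class ModeScheduler:
    """Mode-based scheduler: transitions gate on a *change* in the Mode input."""

    def __init__(self):
        self.state = "Initialization"   # safe default: start in Init
        self._last_mode = None

    def step(self, mode):
        # Emulate Stateflow's hasChanged(): only react when Mode changes
        if mode != self._last_mode and mode in ("Run", "Shutdown", "Emergency"):
            self.state = mode
        self._last_mode = mode
        return self.state
```

Because the transition fires only on a change, holding Mode constant leaves the scheduler parked in its current state with no extra routing logic.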

The main event

Our final example is event-driven scheduling; within this chart we have the “React” states and a “Null” state. The Null state provides a “no-operation” mode when no events are active. Two things of note: first, in this example events are mutually exclusive; this does not need to be the case. Second, the current example exits the “React” states after one execution; the exits could be guarded to continue execution until the event is resolved.
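The React/Null pattern can be sketched like this (the event and state names are illustrative, and events are treated as mutually exclusive, as in the chart):

```python
def event_step(events):
    """One chart step: run the first active event's reaction, else no-op (Null)."""
    reactions = {"fault": "React_Fault", "request": "React_Request"}
    for name, state in reactions.items():
        if events.get(name):
            return state      # exit the React state after one execution
    return "Null"             # no events active -> no-operation
</test>```

Guarding the exit (e.g. staying in the React state until the event clears) would be a small change to the return logic.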

If you are enjoying what you are reading consider subscribing to the email version of this blog.

Execution order and Simulink models

In the last 15 years at the MathWorks I have been asked, about once per month, “How do I know what the execution order of my XXXX is?” If you come from a textual language, execution order is directly specified by the order in which you write the lines of code.

  1. My Function
    1. Do A
    2. Do B
    3. Do C

With multiple functions, a threaded OS, or event-driven interrupts this is more complex, but at its heart it is directly specified in the language. In contrast, Simulink determines execution order based on data flow; so…

Which executes first, Chicken or Egg?

The principle of data-flow-based execution is that calculations are performed once data is present. We will start with the simplest example possible: a single direct feed-through path.

In this example we have a data flow from left to right (Inport #1 to Outport #1). The calculation is
Output = Input * Chicken * Egg

In this example we have introduced a Unit Delay block. This changes the execution order:
Output = LP * Egg
LP = Input * Chicken

In this case the “Egg” calculation takes place first: the unit delay already holds the last-pass value (LP), so the output value can be calculated immediately.
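In Python terms, the unit delay behaves like stored state that is read before it is written (a sketch; LP stands for the last-pass value, as above):

```python
class UnitDelayModel:
    """Output = last-pass value * Egg; the delay state updates *after* output."""

    def __init__(self, initial=0.0):
        self.lp = initial           # unit delay state (last pass)

    def step(self, inp, chicken, egg):
        output = self.lp * egg      # computable first: needs only stored state
        self.lp = inp * chicken     # state update, consumed on the *next* pass
        return output
```

On the first pass the output is just the delay’s initial value times Egg; the Chicken path’s result only shows up one step later.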

In our next example the execution order is arbitrary; since there are no data dependencies between “Chicken” and “Egg”, output 1 or output 2 could be calculated first. (Note: Simulink provides a “sorted execution order”; these are the red numbers you see in the image. The lower numbers are executed first.)

In this case the “Egg” came first
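Conceptually, the sorted order is a topological sort of the block graph. A hedged sketch using Python’s standard graphlib (the connection table below is a made-up stand-in for the model, not anything Simulink exposes):

```python
from graphlib import TopologicalSorter

# Block connectivity: value flows from key block to each listed block
model = {
    "Inport1": ["Chicken"],
    "Inport2": ["Egg"],
    "Chicken": ["Outport1"],
    "Egg":     ["Outport2"],
}

def sorted_order(connections):
    """Return one valid execution order: every block after its data sources."""
    ts = TopologicalSorter()
    for src, dsts in connections.items():
        for dst in dsts:
            ts.add(dst, src)   # dst depends on src
    return list(ts.static_order())
```

Because “Chicken” and “Egg” have no dependency on each other, any tie-break between them is a valid order; the sorter just happens to pick one, the same way Simulink does.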

Controlling execution order (avoiding egg on your face)

In the last example we showed that with independent data flows the execution order is resolved by Simulink; however, there are instances where the execution order matters (1). The “obvious” and very wrong solution is to add a data dependency to the path.

Noooo never ever do this…(2)

By adding a “zero-value gain” and a Sum block to the “Egg path” I have forced the “Chicken path” to be executed first (4). For reasons that should be obvious this is a bad idea. Anyone who looked at the model would think “Why are they doing that?” and they would be correct in asking that question. The recommended approach is to make the execution order explicit using a function-call driven approach.

In this case the “Egg” executes second; this is known from the “B” number in the block: “ChickenPath” is B0, while “EggPath” is B1, and the lower number executes first. For more complex execution orders a Stateflow chart can be used to define the execution order.

Yes, I went for “eggs came first because chickens are evolved” as the solution to this issue…

In this “tongue in beak” example we see that “eggs” only execute during the “WeHaveEggs” and the “Dinosaur” states. Once we hit the “WeHaveBoth” state (after many eons) the Chicken executes first. In my next post I will give examples of best practices for controlling execution order with Stateflow charts.


  1. Within the subsystem the execution order does not matter. However there are several cases where it can matter.
    1. Time-limited execution: If the given function has a limited time allocated for execution, it is possible that not all of the calculations can be performed in that time period. In that case you would want to control the execution order.
    2. Consumption by a parallel process: If the data from one (or more) of the paths is used by a parallel process and that process needs its data “first”, then you want to control the execution order.
    3. Determinism: For some analyses, locking down the execution order will simplify the analysis task.
  2. I debated even showing this image, however I have seen many “cleaver”(3) engineers come up with this solution.
  3. Yeah, I realized “clever” was autocorrected to “cleaver”, as in something that cuts or chops. It was an error at first; then I realized that I liked it more. They are not really “clever”, rather they are just chopping apart a problem.
  4. This could be considered a form of “playing chicken”
I love Google image search…

Please consider subscribing for regular email updates