Some roses are red…(1)

“What’s in a name? That which we call a rose
By any other name would smell as sweet.”
Romeo and Juliet: Act 2 Scene 2

Names are labels we give to things to provide mental holding places for what a thing is; I am Michael, author of this blog, and you, dear folks, are my readers.(2) Humans are flexible: we can assign different names to the same thing and still know it is a rose. Computer programs, however…

Naming conventions

Many moons ago I wrote a post on naming conventions;(3) this is the first stage in the software “name game.” For software, names have to be descriptive (following naming conventions) and unique.

Where things go wrong: Scope

Error, stop, fault, or startup: look into almost any body of code and you will find these common variable names. If the variables are scoped to just the given module, things are okay, but often things “leak out.”

If you look at the interface variables, e.g., buses (structures in C), enumerations, and all global data, you will see that enumerations, given their “state” nature, commonly reuse the same names.
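
To make the collision concrete, here is a small, purely illustrative sketch: two Simulink integer enumerations that coexist happily in MATLAB, but whose members land in one shared namespace in generated C code, where “Startup,” “Run,” and “Fault” then collide.

    % Illustrative type and member names; MATLAB scopes the members to
    % their classes, but in C the enumerators share a single namespace.
    Simulink.defineIntEnumType('MotorState', ...
        {'Startup', 'Run', 'Fault'}, [0;1;2]);
    Simulink.defineIntEnumType('PumpState', ...
        {'Startup', 'Run', 'Fault'}, [0;1;2]);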

ICD and Data Dictionaries

First, a note on the image. Since this is a blog on Names, the use of acronyms is especially important. When I write “ICD” I am thinking of Interface Control Documents; but it can also mean “Implantable Cardioverter-Defibrillator.” But enough sidebars…

Our goal of unique names (at the project scope) is aided by two things: ICDs and Data Dictionaries. The objective here is to augment these base tools with integrity-checking tools to validate full systems.

  • Tag the scope of data in the data dictionary: Enable scope detection for the data in the system.
  • Enforce naming rules: For both root and sub-data names (see the sketch below).
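
A minimal sketch of such an integrity check, assuming a dictionary named “projectData.sldd” and a lowerCamelCase root-name rule (both the dictionary name and the rule are illustrative):

    % Walk the Design Data section and flag entries that break the rule.
    dd  = Simulink.data.dictionary.open('projectData.sldd');
    sec = getSection(dd, 'Design Data');
    entries = find(sec);                   % all entries in the section
    rule = '^[a-z][A-Za-z0-9]*$';          % illustrative naming rule
    for k = 1:numel(entries)
        if isempty(regexp(entries(k).Name, rule, 'once'))
            fprintf('Naming rule violation: %s\n', entries(k).Name);
        end
    end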

Footnotes

  1. It has always bothered me that the poem goes “Roses are red, violets are blue”; properly speaking, violets are by their very name the color violet.
  2. At some point in the future your role will change and you will have another name.
  3. The image to the right is exactly what a naming convention looks like; everyone comes together and picks names.

Ring! Ring! Event driven architectures(1)

Broadly speaking, control systems are broken down into event-driven and continuous-time(2) domains. Continuous systems are always taking in stimulus and putting out responses, while event-driven systems respond only when the event happens; the rest of the time they sit patiently, quite unlike an 8-year-old waiting for the gates of Disneyland to open.(3) A challenge arises when you start to mix the two types of systems together: how do you mix the chocolate of continuous time with the peanut butter of events to create a greater whole?(5)

Everything was going…
smoothly, then there was a break

Events interrupt the smooth execution of continuous-time systems. If the event preempts the execution of the continuous-time system, you are halting the logic and calculations of the system. The return to execution can pose challenges. Let’s take a look at a few…

Something’s changed

The most common problem is that when the interrupt event happens there is an overall change in the system state, and the data (pre- and post-interrupt) is no longer in correspondence.(6) This can result in incorrect calculations and/or commands.

Falling behind

In some instances the code associated with events can take a long time; as a result, the periodic update of the continuous-time system can “fall behind,” potentially missing multiple updates. When that happens, important data can be missed during the “time out.”

What to do?

As in many situations, there is no “one-size-fits-all” solution, but there are general rules of thumb.

  • When something has changed:
    • Buffered or protected data can be used to resume execution of the code with contiguous data.
    • A “return from event” handler can be used to determine if a system reset is required.
  • Falling behind:
    • For the “important” events, a buffer can be constructed within the event-driven code (see the sketch after this list).
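
Here is a minimal sketch of such a buffer as a MATLAB function with persistent state; the buffer depth, the event codes, and the handleEvent helper are all illustrative assumptions, not a definitive implementation.

    function processEvents(newEvent)
        % Ring buffer so "important" events survive until the periodic
        % task has time to service them (depth of 8 is illustrative).
        persistent buf head tail
        if isempty(buf)
            buf  = zeros(1, 8);
            head = 1; tail = 1;
        end
        if newEvent ~= 0                      % nonzero code = event arrived
            buf(head) = newEvent;
            head = mod(head, numel(buf)) + 1;
        end
        if tail ~= head                       % drain one event per step
            handleEvent(buf(tail));           % handleEvent is hypothetical
            tail = mod(tail, numel(buf)) + 1;
        end
    end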

Testing the wind

Testing interrupted code is a sticky problem because, by definition, when the interrupt will happen is unknown. Random interrupt generators provide one method for testing the code but cannot provide full coverage. The recommendation for this blog is to create “worst case” interrupts; i.e., look at your model and ask “what is the worst place where the execution could be interrupted?” and then interrupt it there…
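
One way to hunt for that worst place is to sweep the injection time of the event across the simulation and watch for failures. The sketch below assumes a model named “controllerModel” whose test harness reads an “eventTime” variable; all names and the pass/fail bound are illustrative.

    limit = 1.0;                                   % illustrative pass/fail bound
    for tInt = 0:0.001:0.1
        in  = Simulink.SimulationInput('controllerModel');
        in  = setVariable(in, 'eventTime', tInt);  % consumed by the harness
        out = sim(in);
        y   = out.yout{1}.Values.Data;             % logged controller output
        if any(abs(y) > limit)
            fprintf('Failure when interrupted at t = %g\n', tInt);
        end
    end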

Worst place for an emergency

  • In the middle of nested if/else-if/else logic: The branch you are in may no longer be valid.
  • Performing differentiation: When your “dt” changes, the diff can wiff.(7)
  • Fault / error detection: If the code’s job is to detect errors then, well, this is one area where you may want to protect against interrupts.(8)

Footnotes

  1. Back at the dawn of time “Ring-Ring” was the sound that a phone made, a common “event driven” experience for many people.
  2. I went back and forth 2 or 3 times on the “continuous time” driven domains. The world we live in is continuous time; it keeps on slipping into the future. However, most modern control algorithms are implemented on discrete-time hardware and require a discrete-time implementation.
  3. Like an 8 year old who can happily read a book while waiting for the gates(4) to open, the poor control algorithm just sits there waiting for the event to ring.
  4. That comment may or may not be autobiographical.
  5. I want to distinguish between states and events; technically the shift from Neutral to First gear in a car is an event; however, it operates within the set of data for the continuous controller. The types of events to be examined are often called “asynchronous” events.
  6. Back in the old days control algorithms would send long letters betwixt each other; now, with email, texts, and phones, correspondence is less common.
  7. Wiff: to miss, to have an unpleasant smell. Note: in general, integration operations are interrupt tolerant as they average values over time.
  8. Events and errors come in all levels of severity; setting the priority of the event and the given error handling code enables your OS to arbitrate between the two functions.

Let the knife do the work…

When you read about how to cook you will often read the phrase “when cutting, let the knife do the work.” There is, however, a single word missing that changes everything; it should read “let the sharp knife do the work.” One word, big difference.

The difference “a” word makes

The right tool, set up correctly

During this pandemic I have finally learned how to sharpen knives(1) by hand for when I cook something yummy for my wife and me; let’s talk about how to “sharpen your models.”(2)

Model configuration: every model in Simulink has a set of parameters that defines how it executes and how code is generated.(3) Fortunately there is a simple utility that allows you to configure your model parameters for your target behavior.

You must specify basic behavior such as time domain, step size, and the target hardware. Once that is done, the balance of your configuration is handled through the specification of your objectives.
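
If you prefer scripts to dialogs, the basics can also be set programmatically; the model name and values below are illustrative, not recommendations.

    model = 'myController';
    load_system(model);
    set_param(model, ...
        'SolverType',       'Fixed-step', ...               % time domain
        'FixedStep',        '0.01', ...                     % step size
        'ProdHWDeviceType', 'ARM Compatible->ARM Cortex');  % target hardware
    save_system(model);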

Data settings: from your parameters to your signals, configuring your data creates more efficient and more controlled generated code. Consider setting the data type, storage class, and possibly units.
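
For example, a parameter and a signal can each be given an explicit type, storage class, and unit in a few lines (the names and values are illustrative):

    Kp = Simulink.Parameter(2.5);                 % a gain used by the model
    Kp.DataType = 'single';
    Kp.CoderInfo.StorageClass = 'ExportedGlobal';

    flowRate = Simulink.Signal;                   % an interface signal
    flowRate.DataType = 'single';
    flowRate.Unit     = 'l/min';
    flowRate.CoderInfo.StorageClass = 'ExportedGlobal';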

Block selection: mixing model domains, such as discrete- and continuous-time blocks, can result in sub-optimal performance. If multiple domains are required, partition the domains off using atomic boundaries (see the sketch below).
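
Marking the subsystem that holds each domain as atomic is a one-liner; the block path here is an illustrative assumption.

    % Partition the discrete-time portion behind an atomic boundary so it
    % executes as a unit, separate from the continuous-time blocks.
    set_param('myController/DiscretePart', 'TreatAsAtomicUnit', 'on');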

Final thoughts

These are, of course, just a slice(4) of the type of configuration you should consider when setting up your model. Ideally there is a group that is responsible for the basic setup and maintenance of your working environment. This should extend to all of the tools in your development process. For more in-depth information on these tools, take a look at the CI and version control posts in this blog.

Footnotes

  1. Please note, this is not a product endorsement, it is a process endorsement. Also, we don’t have the “stropping block,” but I always thought it looked more fun to have the corded strop.
  2. Note, this isn’t a perfect analogy as when you sharpen your configuration you are done for the project, unlike knives that need regular re-sharpening.
  3. When we were first developing this tool I asked the question: How many configuration parameters does a Simulink model have: 113, 178, 238, or 312? The correct answer was “all of the above,” depending on the baseline configuration settings. As you can see, this tool is very useful.
  4. Okay, I couldn’t resist one last knife reference for the post.

Reuse is a State(flow) of mind…

I’ve written multiple posts on when and why you should reuse software components. Today I want to look at the specific use case of reusable Stateflow Atomic States.

Simple example

The most common example of reusable states I have worked with involves fault detection and management. In this simple example we start off with “NineOclock” and “all = well.” Based on an input “errorCon” we move to “all = soSo” and finally, if we don’t recover, to “all = endsWell.” In this example the transition conditions are inputs to the system (i.e., the values of “errorCon” and “recCon” are calculated outside of the system). This works for simple examples, but what if the condition logic is more complex? What should we do then?

Parameterized Transitions!

The answer is the use of parameterized function calls. In this example the function “errorCheck” takes 3 arguments: inst, arg1, and arg2. The argument “inst” controls which branch of the switch/case function is used to calculate the transition logic. (Note: you do not need to use all of the input arguments for any given case; however, they all must be passed in.)
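
A sketch of what such a helper could look like, mirroring the errorCheck(inst, arg1, arg2) pattern described above; the per-case conditions are made up for illustration.

    function tf = errorCheck(inst, arg1, arg2)
        % "inst" selects which transition condition this instance needs.
        switch inst
            case 1                        % e.g., a simple over-limit check
                tf = arg1 > 100;
            case 2                        % e.g., a combined two-signal check
                tf = (arg1 > 50) && (arg2 < 10);
            otherwise                     % unknown instance: never transition
                tf = false;
        end
    end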

Reuse is still reuse…

Reuse with Stateflow charts has the same limitations as any other reusable function; i.e., the data types for the inputs/parameters need to be consistent across each instance of the chart. Additionally, if you want to use the function-based method for transitions, the MATLAB functions need to be set as globally visible.

Finally, while global data can be used between instances of the chart, since it can be written to in multiple locations, care should be taken that the information is not overwritten. The most common solution to this issue is to use “bitand” functions to turn bits on or off (see the sketch below).
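
A minimal sketch of the bit-masking idea, with one bit assigned per chart instance so a write from one instance cannot clobber another’s (the mask names are illustrative):

    FAULT_A = uint8(1);                          % bit 0: chart instance A
    FAULT_B = uint8(2);                          % bit 1: chart instance B

    status = uint8(0);                           % shared status word
    status = bitor(status, FAULT_A);             % A raises its fault bit
    status = bitand(status, bitcmp(FAULT_B));    % B clears only its own bit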

Model-Based Design Walk through: Part 4: Data

This post is the fourth in a series of 8 video blogs walking through the fundamentals of Model-Based Design. When taken as a whole, these videos provide the foundation stones for understanding and implementing Model-Based Design workflows. I will be using a simple Home A/C system for my example; however, the principles apply to everything from Active suspensions to Zonal control.(1)

With this post I cover the basics of data management, both for the model and configuration settings.

  1. Requirements
    1. Requirements Management
    2. Writing clear requirements
    3. What I’m expecting: writing requirements
  2. System Architecture
    1. Modeling architecture: Fundamentals
    2. Model architecture decomposition for hardware and close loop testing
    3. Is your system architecture “Lego Legal”?
  3. Initial (shell) models
    1. Modeling architecture with room to grow
    2. The Model-Based Design Workflow…
    3. Defining your initial Model-Based Design workflow
    4. Plants resting on a table
  4. Defining and managing data
    1. Managing Data
    2. Understanding Data Usage in Model-Based Design Part I
    3. Understanding Data Usage in Model-Based Design Part II
    4. The Simulink Data Dictionary
  5. V&V
    1. The 8 commandments of V&V
    2. Levels of testing
    3. Modular testing environments
  6. Refining the models
    1. Defining your initial Model-Based Design workflow
    2. Best Practices for Establishing a Model-Based Design Culture
  7. Code generation
    1. https://www.mathworks.com/solutions/embedded-code-generation.html
  8. The grab bag…
    1. A road map for Model-Based Design
    2. The next generation of Model-Based Design

Footnotes

  1. Stay in the zone, even when you zone out!
  2. Take it with a grain of salt, but how you pronounce the word “Data” may depend on ST:TNG.

A few thoughts on the measurement of coffee?

I like a good cup of coffee, my wife Deborah is partial to tea, and we both enjoy sharing a cup of hot cocoa on cold winter nights.(1) Recently we needed to buy a new kettle for our hot brews, so we purchased one with a built-in thermometer. This enabled brewing at the proper temperature, but we couldn’t tell what difference it was really making.

In the field, or in the model?

For experimental data it is common to “over log”; that is, to log as many data channels as your system can handle. This ensures that if there is an unexpected event(2) you have the best chance of capturing and understanding it. By contrast, in simulation the environment(3) is controlled and repeatable, so unknowns are of low probability.(4) This means that “over logging” just slows down the simulation.

How to determine what to log?

In the ideal world you are able to understand the system based on first-principles physics.(5) However, it is often the case that the system as a whole has so many interconnected models that writing out the full system of equations cannot realistically be performed. In that case, how do you determine what to log?

The approach I recommend in this case is “self and nearest neighbors.” In other words, if you cannot determine the full set of equations that define your whole system, break the system down into components (perhaps at the model reference or lower level) and determine the inputs and outputs of those components. Take the inputs and outputs of your Unit Under Test (UUT) and of the units directly connected to the UUT, and use those to determine what to measure.
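
As a sketch, the UUT’s boundary signals can be marked for logging programmatically; the model and block names below are illustrative.

    model = 'acSystem';
    load_system(model);
    uut = [model '/Controller'];          % the unit under test
    ph  = get_param(uut, 'PortHandles');
    for p = ph.Outport                    % log every UUT output
        set_param(p, 'DataLogging', 'on');
    end
    lh = get_param(uut, 'LineHandles');
    for l = lh.Inport                     % log every signal feeding the UUT
        if l > 0
            src = get_param(l, 'SrcPortHandle');
            if src > 0
                set_param(src, 'DataLogging', 'on');
            end
        end
    end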

Back to coffee

I’ve started experimenting with coffee (okay, not as rigorously as in the milk-first/tea-first tea experiments) to determine the factors that provide the optimal cup. There are 4 primary factors in the outcome of a cup of coffee:

  • Water temperature
  • Rate of extraction
  • Coffee dose to water amount
  • Coffee quality

The question, then, is what relative weight to assign to each variable:

GoodCup = β(1)*WaterTemp + β(2)*dr/de + β(3)*CoffeeD + β(4)*CoffeeQ

Through a few simple experiments I learned that my personal weights lean heavily towards β(3) and β(4); i.e., the temperature effect was minimal.(6) In the same way, when designing models, think twice before measuring once.(7)
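
If you log the experiments, the weights fall out of a simple least-squares fit; the numbers below are made up purely for illustration.

    % Each row: water temp (C), extraction rate, dose, quality (1-10)
    X = [ 92  4.0  60  7
          96  3.5  65  8
          90  4.5  55  6
          94  4.2  62  9 ];
    y = [6; 8; 5; 9];        % "GoodCup" score for each brew
    beta = X \ y;            % relative weight of each factor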

Footnotes

  1. Perhaps one day we will buy this chocolate teapot for our hot cocoa
  2. The “unexpected” is what you most want to capture; expected data can often be calculated. It is when the dice roll snake eyes that you learn the most.
  3. As an interesting side note, I often make 2 “plant” models. One that models the real world (the environment) and one that models the device I am controlling (e.g. the road and the car, air and an airplane, human veins and the I.V. system).
  4. Unlike the real world where unknowns are random events, the unknowns in simulation arise from modeling errors, and when that occurs, adding in additional logging is important.
  5. I was amused that the image of the book cover read “Note: this is not the actual book cover”! The use of a classical cover for fundamental physics seemed spot-on.
  6. There is a side benefit to brewing at the correct temperature, fewer cases of “wow that’s hot” on the first sip of coffee
  7. Unless you are cutting wood, in which case it is design once, check your design, and measure twice, cut once.