Hindsight is 20/20: Looking back and forward

Most years the “year in review” post seems like a good time for a lighthearted look at what has been and what is to come; with the events of this year, some amount of gravity seems in order.

A safety critical life

To one of my earliest mentees I described the job of an engineer as “trying to find the minimum in Murphy’s equation and its first derivative”; that is to say, design for the best outcomes and protect against the worst. The blog posts this year have focused heavily on Verification and Validation methodologies (finding the minimum) and writing requirements (protecting against the worst). It is easy to see the parallels between these tasks for Model-Based Design and for life during COVID times.

The pendulum swings

With the introduction of COVID-19 vaccines there is an understandable desire to “go back to normal”; in fact, some studies suggest that after such events we swing in the opposite direction. Depending on how you look at it, this could be seen as embracing life or throwing caution to the wind. The key, as I see it, is to embrace life while remembering the lessons learned, because the pendulum will swing back.

Looking forward to 2021

If hindsight is 20/20 (and yes, I have been waiting all year to make that joke), then as a ratio (20/21, 20/22, … 20/99) our view into the future is less clear. In 2021 this blog will be moving to a twice-weekly update (Tuesday / Thursday) and I will be writing more “multi-part” posts to go into greater depth on topics of interest.

I want to wish everyone a safe and happy 2021; I look forward to your comments in the coming year.

Michael

Engineering Process: Fault Lines (P3)

Welcome to the home stretch! In today’s post we finish up the engineering process for fault insertion. We left the last post with three outstanding questions; in today’s post, I will walk through the design decisions that I made in implementing this solution.

  • How much of the tool should be automated?
  • What is the end user responsible for implementing?
  • How do we make this scalable?

Automation and scalability

The weak point in this solution is creating and maintaining the fault injection names; e.g., as the end user adds new “faults” into the system there is a need to create and define the global data, ensure that it is unique, and add it into the test harness.

Check and define

The first two tasks are accomplished using a masked subsystem and a setup script; when the user double-clicks on the fault subsystem they are prompted for a fault name. Closing the dialog then launches a script that checks and defines the fault data.

The first step is to check whether the variable already exists; this is done by evaluating the base workspace to see if the data is already defined. If it is, a warning dialog is generated.

If the data does not exist, we assign the names of our fault data to the data store read and data store writes inside the library subsystem.

Finally, we assign the data to the base workspace. As part of this task we inspect the incoming signals to determine the data type and dimension.
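Put together, the check-and-define flow might look something like this sketch. The helper name, the internal block names (`ActiveRead`, `ValueRead`), and the scalar defaults are all illustrative assumptions, not the actual tool code:

```matlab
function defineFaultData(faultName, faultBlock)
% Illustrative mask callback: check for, then define, the fault data.
% Naming follows the faultLineActive_<NAME> / faultLineValue_<NAME>
% convention from these posts; the block names here are hypothetical.

activeName = ['faultLineActive_' faultName];
valueName  = ['faultLineValue_'  faultName];

% 1) Check: is the data already defined in the base workspace?
if evalin('base', ['exist(''' activeName ''', ''var'')'])
    warndlg(['Fault data "' activeName '" already exists.']);
    return
end

% 2) Assign the names to the data store read/write blocks inside
%    the library subsystem.
set_param([faultBlock '/ActiveRead'], 'DataStoreName', activeName);
set_param([faultBlock '/ValueRead'],  'DataStoreName', valueName);

% 3) Define the data in the base workspace. In the real tool the
%    incoming signal would be inspected (e.g., its compiled data
%    type and dimensions) to size the data; scalars are used here.
assignin('base', activeName, false);
assignin('base', valueName,  0);
end
```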

But wait!

This code takes care of what happens when you add a block. However, in the long term you will simply be opening your model and expecting the data to already be in place. The next step is to provide a method for saving off the data and adding it to the test harness. In the design stage I tried out methods for auto-triggering this save-and-add step; however, all of them had potential data conflicts. In the end the solution was to provide a script that the user could run to generate the data for themselves.
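A minimal sketch of such a user-run script, assuming the faultLine naming convention; the MAT-file name is an assumption:

```matlab
% Collect every faultLine* variable from the base workspace and save
% them to a MAT file that the test harness can load at startup.
info  = evalin('base', 'whos(''faultLine*'')');
names = {info.name};
if isempty(names)
    error('No fault-line data found in the base workspace.');
end
for k = 1:numel(names)
    s.(names{k}) = evalin('base', names{k});   % copy each variable
end
save('faultLineData.mat', '-struct', 's');     % one field per variable
```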

Sometimes the code is simple…

What remains?

At this point, with this code, our end users have the basic functionality that is required. Additionally, because of how we implemented it, the fault blocks can be inserted anywhere in the model, not just between referenced models, which was one of our “stretch goals.” What remains now is the “error checking” or ruggedization work required to have a product out in the field standing on its own. That, however, will be a future set of posts.

Engineering Process: Fault Lines (P2)

Having ended the last post on a cliffhanger, I’m going to reward your return with answers to two questions that no doubt plagued your thoughts: “Why Fault Lines?” and “What changed in the problem statement?” The first answer is that I am inserting a “fault” on the Simulink “line.”(1)

Changes to the problem statement

At the end of the last post I had the problem statement:

For a system of models, create a method for simulating dropped
signals for models not directly connected to the root level I/O

As I started working through possible solutions to this process I hit on a further problem. How do we keep this method from impacting the code generation process for the full system?

The prototype solution

The normal method for creating tests that do not change the model’s released behavior is to create a test harness that “wraps” the unit under test. However, in this instance, since our objective is to work with models inside the system, this approach will not work. Fortunately, Simulink provides utilities to enable this.

Take control!

The Simulink “Environment Controller” block is the first key; we can have a default path (e.g., what is deployed) and a fault path (e.g., what is simulated). When code is generated, the blocks in the “faultStuff” subsystem are optimized out of the generated code.

Command from the top?

Having solved our first (new) problem,(2) how do we get the data from the driving test harness to the “faultStuff” subsystem? The solution, in this case, was to use global data.

While as a general practice the use of global data is not recommended, in this case it was considered an acceptable solution for three reasons:

  1. The global data would only exist in the simulation version of the model
  2. Creation of the global data would be automated and not available to the developer
  3. Isolation of the global data could be validated as part of the fault-line tool

Two new tasks and a way forward

In developing my rationale in support of global data I created two new tasks for myself (reasons 2 & 3).(3) However, for the prototype I just needed to create the example version of the faultLine.

There are three parts to this solution:

  • The fault subsystem: this subsystem consists of two data store read blocks and one data store write block. Based on the value of “faultLineActive_<NAME>”, either the model path data or the fault value (faultLineValue_<NAME>) is passed on to the simulation.
  • The data definition: to use the global data, the Data Store variables must be declared as part of a data dictionary. The full definition of the data (e.g., type and size) must be provided.
  • Excitation: the final part is the “driver” of the test; in this example, a simple test sequence that turns the fault on and off at a set time.
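For the data definition step, a sketch using the Simulink data dictionary API might look like the following; the dictionary file name and the “Speed” fault are made up for illustration:

```matlab
% Create a data dictionary (errors if the file already exists) and
% add the paired "active" flag and substitute "value" for one fault.
dd  = Simulink.data.dictionary.create('faultLines.sldd');
sec = getSection(dd, 'Design Data');

addEntry(sec, 'faultLineActive_Speed', false);  % is the fault on?
addEntry(sec, 'faultLineValue_Speed', 0);       % value injected when on

saveChanges(dd);
```

In a production version of the tool the entries would more likely be Simulink.Signal objects with DataType and Dimensions set explicitly, since data store memory needs the full definition.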

From prototype to release

Moving from a prototype to a released version of the tool asks a new set of questions.

  • How much of the tool should be automated?
  • What is the end user responsible for implementing?
  • How do we make this scalable?

As with the last post, I will leave you on a cliffhanger… What are the design decisions that take this from prototype to reality?

Footnotes

  1. Perhaps not as earthshaking as you hoped, but well, I do my best.
  2. The process of creating a final released product is often one of iterative problem detection and solution, with each subsequent problem being smaller than the previous one.
  3. And these new tasks fit in with the “iterative problem detection” description of footnote 2.

Engineering Process: Fault Lines (P1)

Whether you prefer fishing(1) or gardening(2) metaphors, for the next few posts I’m going to take you into the process I follow when solving a problem to show you how I fish.

The problem statement

As is often the case, I will pull a problem from work I have done across multiple customers. I do this for two reasons. First, I have vetted the solution “in the field,”(4) and second I know that this is an issue that is commonly faced.

How do we simulate dropped and corrupted signals inside our system?

The first step

The first step in designing a solution is to understand the problem. The questions that I have are…

  • Does a solution already exist?
  • What does “inside our system” mean?
  • In the deployment code where and how can signals be corrupted?

The critical question to answer is “where and how?” There are three primary ways that signals and data can be corrupted. The first is through incorrectly written code; since there are other methods for detecting this issue, it is usually dropped from the analysis. The second is through interrupt events (e.g., the execution of a function is put on hold while a higher priority function executes); the value of the data changes during the interrupt, resulting in corrupted data. This is of potential interest; however, it results in a very large testing scope, as the interrupt can occur at any point in the function. As a result, solutions to this problem are placed at a lower priority.(5)

The final method is through corruption of transmitted signals (e.g., messages sent over CAN / LIN / RS-232).(6) Thinking about this issue, we get the answer to our second question: what does “inside the system” mean? Transmitted errors occur at the boundaries of functions, and from there we have an answer to the first question: does a solution already exist…

What we already have…

For individual models we have a solution; test vectors can be created that simulate dropped and corrupted signals. However, for a system of models, Simulink does not have a built-in method for dropping signals for models on the “inside” of the system.

Problem Restatement

With this understanding I came up with a new problem statement, one that reflected the design I would build towards. As you will see in the upcoming posts, there is further iteration on the new problem statement as the life cycle issues are considered.(7)

For a system of models create a method for simulating dropped signals for models not directly connected to the root level I/O

Footnotes

  1. E.g., give a person a fish and they eat for a day, but teach a person to fish and they eat for life.
  2. A teacher is a gardener who plants the seeds of learning in your mind.(3)
  3. That both of these metaphors are about foundational tasks of humanity gives an idea how old this concept is.
  4. Since one of my customers was an agricultural company, it was literally validated in multiple fields.
  5. That we talk about priorities with interrupts is only appropriate, as the priority determines whether the interrupt can occur.
  6. No interface, no matter how ancient, ever fully goes away.
  7. This is my “cliff hanger” to bring you back to the next post.

Fault injection

Over the past few weeks I have been thinking about how to test systems while introducing faults at lower levels within the system. At the topmost level the problem is simple; faults can be treated as just a signal. At a lower level either the model would need to be instrumented or the computational chain would need to be understood.

Where are the faults?

Within controls design, a fault is defined as an interruption, delay, or “crossed” signal in the data flow. Our first question is to ask “where and how do faults occur?” Faults occur at the boundaries of systems, either through direct I/O, communication protocols (CAN/LIN/Ethernet…), or errors in global memory (pointer corruption).

Injection Blocks

In this example we wrap our UUT (unit under test) as a referenced model. In the test environment we point to a secondary referenced model that includes a fault-injection block. Taking this approach, we can instrument the UUT without changing the UUT itself.

The actual “injection” block can be built from almost anything. In this case I used a simple Test Sequence block to short the signal after 5 seconds.

Simple Fault
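The logic inside that injection is tiny; stripped of the Test Sequence block, it is essentially the following MATLAB stand-in (the 5-second threshold matches the example above; everything else is illustrative):

```matlab
function out = faultShort(in, t)
% Pass the signal through until t = 5 s, then "short" it to ground.
if t < 5
    out = in;   % nominal path: signal is untouched
else
    out = 0;    % fault path: signal shorted to zero
end
end
```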

When to fault test?

Fault-based testing is an edge case; not all systems require or merit fault testing. A general rule of thumb is…

  • If the system is safety critical
    and
  • The system directly communicates with external systems
    and
  • The system does not have redundant communication
    then…

…the system should have fault testing implemented.

Scratching the itch: Feedback loops

At the heart of every controls system is a feedback loop. An event occurs, it is measured, there is a response, and then the disturbances are measured again; continue until “controlled.” Every day, without thinking, we engage with feedback loops: someone says something to you, you respond. If your answer clarifies things you move closer to a “closed loop”; if not, an error is detected and you and your conversational partner resolve the difference.

Within Simulink models, the closing of a loop is realized using a state variable, often a unit delay that provides data from the last pass to the controller.
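As a plain-MATLAB sketch of that idea, the state variable carries the previous output back to the controller on each pass; the gain and target values here are arbitrary:

```matlab
% A discrete closed loop: the state y holds last pass's output,
% exactly the role a Unit Delay block plays in a Simulink model.
target = 10;    % setpoint
gain   = 0.5;   % proportional gain
y      = 0;     % state: output from the previous pass
for k = 1:20
    err = target - y;      % measure the error against the setpoint
    y   = y + gain * err;  % respond; y converges toward the target
end
```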

Being a controls engineer requires two things: understanding physically what a control loop is and programmatically how it is implemented. Today I will give examples of three everyday ways in which we engage in feedback loops…

Cruise control

As you drive along the road, you see the cars in front of you. As you get closer you either slow down or switch lanes to pass. In this example your speed is the controlled variable in response to the distance between you and the other cars.

AC system

In the heat of the summer you may open up your home at night to let the cool evening air in, both freshening up your dwelling and bringing the temperature down. As the sun rises you close up the windows and draw the blinds to keep the heat from making your home too hot. Opening and closing blinds and shades is your response to the heating input.

Scratching an Itch

There is nothing quite like the feeling when you hit the spot that itches: you scratch for a few moments and the issue goes away. However, this is a good example of a potential problem with undamped feedback; if you don’t stop, you end up doing more harm than good.

Now it is your chance. Drop me a line in the feedback section to help me “tune” this blog.

Five Model-Based Design best practices to get you started

While this blog as a whole is dedicated to Model-Based Design, today I want to cover at a high level five basic principles for effective software development using Model-Based Design. So without further ado,(1) let’s dive in.

Plan to reuse your model

As this Paul Smith paper points out, Model-Based Design is predicated on the idea of reusing models for multiple development tasks. To enable this, planning for reuse at the start is required. Designing interfaces for testing, and configuring for code generation, at the start of the process makes reuse simpler and more effective than “retrofitting” at the end.

Take the training

When using a new tool or process, consider taking a training course to become grounded in the basics. And beyond the basics, a training course gives you a chance to ask the questions specific to your project.

Define KPIs

To know if you are on the right track with your Model-Based Design process you need to define your Key Performance Indicators (KPIs). Measure them at the start of your project and at periodic points as you learn and develop; KPIs are your guide(2) to a better process.

Jack of all trades… Define roles

While an overview knowledge of the Model-Based Design process is desirable and should be a goal for everyone on the project, an effective development process requires people in specific roles. The three most important are algorithm development, algorithm verification, and algorithm release and deployment.

Allow time for learning

A final thought; allow time to learn and experiment. While there are common best practices each project has a set of unique challenges. As you adopt Model-Based Design look for those ways in which your systems can push the field forward. Please leave any observations in the comment section.

Footnotes

  1. “Without further ado” is an interesting phrase. Ado is defined as fuss or bother; it is the nagging question that keeps people from moving forward. But since the phrase says “without further ado,” it implies there is always some question before you start something big. Hence the complementary image of “jumping in feet first.”
  2. As well as this blog.

Big Data (the other kind)

Back when I swam competitively(1) there was a saying: “train like you compete”; which is to say, the habits you develop when training are the behaviors you exhibit when you compete. The same principle holds when developing a model; the block and data architecture you select during development carry into production. So… on to “big data.”

Three little bears

Like porridge or beds, we want our data to be “just right,” neither too big nor too small; here we are talking about how much RAM or ROM the data consumes. Basic data types use 8, 16, 32, or 64 bits. Depending on your processor, the minimum and maximum data sizes may be different.(3) In designing your algorithm, select the smallest data type that accurately represents your data and, using simulation, verify that this data type is sufficient for the full system.
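As a quick sketch of picking the smallest sufficient type: a signal that never exceeds a few hundred and needs roughly 0.01 resolution fits in a scaled 16-bit integer instead of a 64-bit double (the scaling and range here are made-up numbers for illustration):

```matlab
% Store an engineering value in a scaled int16 (0.01 per bit).
% int16 spans -32768..32767, so with this scaling the representable
% range is about +/-327, plenty for a value capped near 300.
raw  = 123.45;                    % engineering value
q    = int16(round(raw / 0.01));  % quantize: 2 bytes instead of 8
back = double(q) * 0.01;          % reconstruct to check the error
```

The final check belongs in simulation: run the full system with the quantized type and confirm the range and resolution hold.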

Let’s get real. Talking about complex data

When I write (about complex data) I am not being imaginary; that is a hole pole apart.(4) Rather, in this instance I am thinking of vectors and structures. The rule of thumb for vectors is simple: allocate enough entries for the data that needs to be stored. Structures require a rule of hand.(5)

Structures provide a method for grouping different variables and data types together into a single package. From a data organization perspective this provides a clear advantage, as all the related data can be found in one location. When thinking about the size of data, the question becomes “how closely are you related?”

A concrete example

If I were creating a physical model of a road, I would want to know the attributes of the concrete I was pouring. Attributes include tensile and compressive strength, porosity, density, thermal conductivity, cost per cubic foot, and many more. My full physical model will require all of those attributes, but not all parts of my model require all of the attributes.

  • Construction “strength” model: tensile & compressive coefficients & density
  • “Wear and tear” model: porosity, thermal conductivity
  • Lifetime cost model: cost per cubic foot…

Megastructures

If I created one “megastructure” including all the data, then I would pass the following (for this simple example):

  • 3 unneeded data items to “strength”
  • 4 unneeded data items to “wear and tear”
  • 5 unneeded data items to “cost”

Over time, and in more realistic examples, this unnecessary data quickly adds up, consuming processor memory.
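The split described above can be sketched as follows; the field names come from the concrete example, and the values are placeholders:

```matlab
% The "megastructure": every consumer gets every attribute.
concrete.tensile     = 1;  concrete.compressive = 2;
concrete.density     = 3;  concrete.porosity    = 4;
concrete.thermalCond = 5;  concrete.costPerFt3  = 6;

% Splitting by consumer passes only what each model needs.
strength = struct('tensile',     concrete.tensile, ...
                  'compressive', concrete.compressive, ...
                  'density',     concrete.density);
wearTear = struct('porosity',    concrete.porosity, ...
                  'thermalCond', concrete.thermalCond);
cost     = struct('costPerFt3',  concrete.costPerFt3);
```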

Footnotes

  1. H2Okies(2)
  2. The fact that the swim team name had a chemistry pun embedded endeared me to the team right away.
  3. If your smallest data type is 16 bits, creating an 8-bit variable does not save any memory and may in some cases result in different behaviors between simulation and the deployed code. Setting the target hardware configuration for your model addresses this issue.
  4. Complex data can also mean data with real and imaginary parts (e.g., A = 1 + 3i). More often than not it is plotted in polar graphs with “poles and holes” for the real and imaginary components.
  5. The most famous “rule of hand” would be the “right-hand rule.” In this case I am just thinking that, unlike rules of thumb, I have four more points…

Super step!

In today’s video blog I am going to cover the often overlooked Stateflow super step semantic and how it interacts with Stateflow function calls and Simulink subsystems. This is the first in a two-part series; in the second part I will show how I validate the behavior of the chart.