ECUs with room to grow

Selecting an ECU (electronic control unit) for your project is an investment of time and financial resources. Once the selection has been made for a given product line, that ECU will be used on average for 5 years. This means that even if the ECU meets all of your needs now, it may not in 3 years if you don't plan ahead.

Types of growth

There are three types of growth that need to be accounted for:

  1. Increases in functions: as new features are added, new functions are added, and those functions take up additional processing time.
  2. Increases in memory: Hand-in-hand with the new functions' processing needs come their memory needs.
  3. Increases in I/O: The trickiest of the lot. Sometimes it is just additional channels of existing I/O types, but in some cases it is a need for entirely new I/O types.

As a general rule of thumb, keeping processor and memory utilization at 80% to 85% at initial release provides a safe margin. For hardware, two spare channels of each I/O type are generally safe. In cases where new I/O types may be required there are two options: the first is to select a hardware device that has product family members with additional I/O types; the second is to select a board that supports external I/O expansion slots.
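
To make the rule of thumb concrete, here is a minimal MATLAB sketch of a headroom check; all of the budget and usage numbers are made up for illustration.

```matlab
% Sketch: checking the 80% - 85% rule of thumb for a candidate ECU.
% All numbers are illustrative, not from a real part.
cpuBudget_us   = 1000;   % time available per 1 ms task
cpuUsed_us     = 780;    % measured worst-case execution time
flashBudget_kB = 512;
flashUsed_kB   = 430;

cpuUtil   = cpuUsed_us   / cpuBudget_us;     % 0.78 -> inside the margin
flashUtil = flashUsed_kB / flashBudget_kB;   % 0.84 -> at the edge of the margin
if cpuUtil > 0.85 || flashUtil > 0.85
    warning('Not enough headroom for planned growth');
end
```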

Growth in the times of DevOps

Traditionally, updates to ECU software only happened when a new product was released, and they happened "in the factory." With the growth of "over-the-air" updates (one of the driving features of DevOps) the starting metrics need to change. The rule of thumb will need to take into account the anticipated features to be released and determine which of those will be pushed as updates. The type of features to be pushed will be heavily dependent on the product type, with some products receiving very few updates (e.g., medical devices with high integrity workflows) while others, such as consumer devices, may receive frequent updates.

Ring! Ring! Event driven architectures(1)

Broadly speaking, control systems are broken down into event driven and continuous(2) time driven domains. Continuous systems are always taking in stimuli and putting out responses, while event driven systems respond when the event happens; the rest of the time they sit patiently, quite unlike an 8 year old waiting for the gates of Disneyland to open.(3) A challenge arises when you start to mix the two types of systems together: how do you mix the chocolate of continuous time with the peanut butter of events to create a greater whole?(5)

Everything was going…
smoothly then there was a break

Events interrupt the smooth execution of continuous time systems. If the event preempts the execution of the continuous time system, you are halting the logic and calculations of the system. The return to the execution of the system can pose challenges. Let’s take a look at a few…

Something’s changed

The most common problem occurs when the interrupt event produces an overall change in the system state, so the data (pre- and post-interrupt) is no longer in correspondence.(6) This can result in incorrect calculations and/or commands.

Falling behind

In some instances the code associated with an event can take a long time to execute; as a result, the periodic update of the continuous time system can "fall behind," potentially missing multiple updates. In some instances important data is missed during that "time out."

What to do?

As in many situations, there is no "one-size-fits-all" solution, but there are general rules of thumb.

  • When something has changed:
    • Buffered or protected data can be used to resume execution of the code with contiguous data (a minimal sketch follows this list).
    • A “return from event” handler can be used to determine if a system reset is required.
  • Falling behind:
    • For the “important” events a buffer can be constructed within the event driven code.
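
As an illustration of the "buffered or protected data" bullet above, here is a minimal MATLAB Function sketch; the signal names and the latching condition are assumptions, not a prescription.

```matlab
function y = protectedStep(u1, u2, eventActive)
% Minimal sketch of "buffered / protected" data (hypothetical signal names).
% Inputs are latched into a local copy only when no event is being serviced,
% so the continuous-time calculation always works on one coherent data set.
persistent u1Buf u2Buf
if isempty(u1Buf)
    u1Buf = u1;
    u2Buf = u2;
end
if ~eventActive            % refresh the buffer only between events
    u1Buf = u1;
    u2Buf = u2;
end
y = u1Buf * u2Buf;         % placeholder for the real continuous-time logic
end
```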

Testing the wind

Testing interrupted code is a sticky problem because, by definition, when the interrupt happens is unknown. Random interrupt generators provide one method for testing the code but cannot provide full coverage. The recommendation for this blog is to create "worst case" interrupts; i.e., look at your model, ask "what is the worst place where the execution could be interrupted?" and then interrupt it there…

Worst place for an emergency

  • In the middle of nested if/else-if/else logic: The branch you are in may no longer be valid.
  • Performing differentiation: When your "dt" changes the diff can wiff(7) (see the sketch after this list)
  • Fault / error detection: If the code’s job is to detect errors then, well, this is one area where you may want to protect against interrupts.(8)
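
To illustrate the differentiation point, here is a minimal MATLAB sketch of a discrete derivative that uses the measured elapsed time instead of assuming a fixed nominal "dt"; the function and variable names are illustrative.

```matlab
function dudt = safeDerivative(u, t)
% Minimal sketch: a discrete derivative that uses the actual elapsed time,
% so a delayed update (e.g. after a long interrupt) does not produce a spike.
persistent uPrev tPrev
if isempty(uPrev)
    uPrev = u;  tPrev = t;
end
dt = max(t - tPrev, eps);   % measured elapsed time, guarded against zero
dudt = (u - uPrev) / dt;
uPrev = u;  tPrev = t;
end
```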

Footnotes

  1. Back at the dawn of time “Ring-Ring” was the sound that a phone made, a common “event driven” experience for many people.
  2. I went back and forth 2 or 3 times on the "continuous time" driven domains. The world we live in is continuous time; it keeps on slipping into the future. However, most modern control algorithms are implemented on discrete time hardware and require a discrete time implementation.
  3. Like an 8 year old who can happily read a book while waiting for the gates(4) to open, the poor control algorithm just sits there waiting for the event to ring.
  4. That comment may or may not be autobiographical.
  5. I want to distinguish between states and events; technically the shift from Neutral to First gear in a car is an event; however, it operates within the set of data for the continuous controller. The types of events to be examined are often called "asynchronous" events.
  6. Back in the old days control algorithms would send long letters betwixt each other; now, with email, texts and phones, correspondence is less common.
  7. Wiff: to miss, to have an unpleasant smell. Note: in general, integration operations are interrupt tolerant as they average values over time.
  8. Events and errors come in all levels of severity; setting the priority of the event and the given error handling code enables your OS to arbitrate between the two functions.

Let the knife do the work…

When you read about how to cook you will often read the phrase "when cutting, let the knife do the work." There is, however, a single word missing that changes everything; it should read "let the sharp knife do the work." One word, big difference.

The difference “a” word makes

The right tool, set up correctly

During this pandemic I have finally learned how to sharpen knives(1) by hand for when I cook something yummy for my wife and me; let's talk about how to "sharpen your models."(2)

Model configuration: every model in Simulink has a set of parameters that defines how it executes and how code is generated.(3) Fortunately there is a simple utility that allows you to configure your model parameters for your target behavior.

You must specify basic behavior such as the time domain, step size, and target hardware. But once that is done, the balance of your configuration is handled through the specification of your objectives.
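
As a hedged sketch of what that basic configuration might look like from the command line (the model name, step size, and hardware device are hypothetical choices, not recommendations):

```matlab
mdl = 'ACControl';                                  % hypothetical model name
load_system(mdl);
set_param(mdl, 'SolverType', 'Fixed-step', ...      % time domain
               'Solver', 'FixedStepDiscrete', ...
               'FixedStep', '0.01');                % 10 ms step size
set_param(mdl, 'SystemTargetFile', 'ert.tlc');      % embedded code generation target
set_param(mdl, 'ProdHWDeviceType', 'ARM Compatible->ARM Cortex');  % example target hardware
```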

Data settings: from your parameters to your signals, configuring your data creates more efficient and more controlled generated code. Consider setting the data type, storage class, and possibly units.
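
A minimal sketch of those data settings using data objects; the signal, parameter, and their attributes are assumptions for illustration:

```matlab
% Hypothetical signal and parameter used for illustration.
CabinTemp = Simulink.Signal;
CabinTemp.DataType     = 'single';
CabinTemp.Unit         = 'degC';
CabinTemp.StorageClass = 'ExportedGlobal';   % controlled placement in generated code

TempSetPoint = Simulink.Parameter(22);
TempSetPoint.DataType     = 'single';
TempSetPoint.StorageClass = 'ExportedGlobal';
```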

Block selection: mixing model domains such as discrete and continuous time blocks can result in sub-optimal performance. If multiple domains are required, partition the domains off using atomic boundaries.
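
And a one-line sketch of enforcing an atomic boundary on a subsystem (the block path is hypothetical; the model must already be loaded):

```matlab
% Mark the (hypothetical) plant section as an atomic unit so its domain
% is partitioned off from the discrete controller.
set_param('ACControl/PlantDynamics', 'TreatAsAtomicUnit', 'on');
```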

Final thoughts

These are, of course, just a slice(4) of the type of configuration you should consider when setting up your model. Ideally there is a group that is responsible for the basic set up and maintenance of your working environment. This should extend to all of the tools in your development process. For more in-depth information on these tools, take a look at the CI and version control posts in this blog.

Footnotes

  1. Please note, this is not a product endorsement, it is a process endorsement. Also, we don’t have the “stropping block” but I always thought it looks more fun to have the corded strop.
  2. Note, this isn’t a perfect analogy as when you sharpen your configuration you are done for the project, unlike knives that need regular re-sharpening.
  3. When we were first developing this tool I asked the question: How many configuration parameters does a Simulink model have, 113, 178, 238, 312? The correct answer was all of the above depending on the baseline configuration settings. As you can see, this tool is very useful.
  4. Okay, I couldn’t resist one last knife reference for the post.

Reuse is a State(flow) of mind…

I’ve written multiple posts on when and why you should reuse software components. Today I want to look at the specific use case of reusable Stateflow Atomic States.

Simple example

The most common example of reusable states I have worked with involves fault detection and management. In this simple example we start off with "NineOclock" and "all = well." Based on an input "errorCon" we can move to "all = soSo" and finally, if we don't recover, to "all = endsWell." In this example the transition conditions are inputs to the system (i.e., we are calculating the values of "errorCon" and "recCon" outside of the system). This works for simple examples, but what if the condition logic is more complex? What should we do then?

Parameterized Transitions!

The answer is the use of parameterized function calls. In this example the function "errorCheck" takes 3 arguments: inst, arg1, and arg2. The argument "inst" controls which branch of the switch/case function is used to calculate the transition logic. (Note: you do not need to use all of the input arguments for any given case; however, they all must be passed in.)
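
A minimal MATLAB sketch of what such a parameterized transition function could look like; only the signature follows the description above, while the case bodies and thresholds are illustrative assumptions:

```matlab
function tf = errorCheck(inst, arg1, arg2)
% Sketch of a parameterized transition function for a reusable chart.
% The thresholds and case contents are made up for illustration.
switch inst
    case 1                      % e.g. over-limit check for this instance
        tf = arg1 > arg2;
    case 2                      % e.g. difference check for another instance
        tf = abs(arg1 - arg2) > 5;
    otherwise
        tf = false;             % unknown instance: no transition
end
end
```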

Reuse is still reuse…

Reuse with Stateflow charts has the same limitations as any other reusable function; i.e., the data types of the inputs / parameters need to be consistent across each instance of the chart. Additionally, if you want to use the function-based method for transitions, the MATLAB functions need to be set as globally visible.

Finally, while global data can be used between instances of the chart, since it can be written to in multiple locations, care should be taken that the information is not overwritten. The most common solution to this issue is to use bitwise functions such as "bitand" to turn individual bits on or off.
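
A minimal sketch of the bitwise approach; the status word, masks, and bit assignments are illustrative assumptions:

```matlab
% Sketch: bitwise operations on a shared uint8 status word so each chart
% instance touches only its own bit (names and bit assignments assumed).
INST1_FAULT = uint8(1);          % bit 0
INST2_FAULT = uint8(2);          % bit 1

status = uint8(0);
status = bitor(status, INST1_FAULT);            % instance 1 sets its bit
status = bitand(status, bitcmp(INST2_FAULT));   % instance 2 clears its bit
inst1Active = bitand(status, INST1_FAULT) > 0;  % test a single bit
```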

Model-Based Design Walk through: Part 4: Data

This post is the fourth in a series of 8 video blogs walking through the fundamentals of Model-Based Design. When taken as a whole, these videos provide the foundation stones for understanding and implementing Model-Based Design workflows. I will be using a simple Home A/C system for my example; however, the principles apply to everything from Active suspensions to Zonal control.(1)

With this post I cover the basics of data management, both for the model and configuration settings.

  1. Requirements
    1. Requirements Management
    2. Writing clear requirements
    3. What I’m expecting: writing requirements
  2. System Architecture
    1. Modeling architecture: Fundamentals
    2. Model architecture decomposition for hardware and closed loop testing
    3. Is your system architecture “Lego Legal”?
  3. Initial (shell) models
    1. Modeling architecture with room to grow
    2. The Model-Based Design Workflow…
    3. Defining your initial Model-Based Design workflow
    4. Plants resting on a table
  4. Defining and managing data
    1. Managing Data
    2. Understanding Data Usage in Model-Based Design Part I
    3. Understanding Data Usage in Model-Based Design Part II
    4. The Simulink Data Dictionary
  5. V&V
    1. The 8 commandments of V&V
    2. Levels of testing
    3. Modular testing environments
  6. Refining the models
    1. Defining your initial Model-Based Design workflow
    2. Best Practices for Establishing a Model-Based Design Culture
  7. Code generation
    1. https://www.mathworks.com/solutions/embedded-code-generation.html
  8. The grab bag…
    1. A road map for Model-Based Design
    2. The next generation of Model-Based Design

Footnotes

  1. Stay in the zone, even when you zone out!
  2. Take it with a grain of salt, but how you pronounce the word "Data" may be dependent on ST-NG.

A few thoughts on the measurement of coffee?

I like a good cup of coffee, my wife Deborah is partial to tea, and we both enjoy sharing a cup of hot cocoa on cold winter nights.(1) Recently we needed to buy a new kettle for our hot brews, so we purchased one that had a thermometer built in. This enabled brewing at the proper temperature, but we couldn't tell what difference it was really making.

In the field, or in the model?

For experimental data it is common to "over log"; that is, to log as many data channels as your system can handle. This ensures that if there is an unexpected event(2) you have the best chance of capturing and understanding it. By contrast, in simulation the environment(3) is controlled and repeatable, so unknowns have a low probability.(4) This means that "over logging" just slows down the simulation.

How to determine what to log?

In the ideal world you are able to understand the system based on first principles physics.(5) However, it is often the case that the system as a whole has so many interconnected models that writing out the full system of equations cannot realistically be done. In that case, how do you determine what to log?

The approach I recommend in this case is "self and nearest neighbors." In other words, if you cannot determine the full set of equations that define your whole system, break the system down into components (perhaps at the model reference or a lower level) and determine the inputs and outputs of those components. Take the inputs and outputs of your Unit Under Test (UUT) and of the units directly connected to the UUT, and use those to determine what to measure.

Back to coffee

I've started experimenting with coffee (okay, not as rigorously as in the milk-first/tea-first tea experiments) to determine the factors that provide the optimal cup. There are four primary factors in the outcome of a cup of coffee:

  • Water temperature
  • Rate of extraction
  • Coffee dose to water amount
  • Coffee quality

The question then is, what is the relative weight to assign to each variable?

GoodCup = β(1)* WaterTemp + β(2) * dr/de + β(3) * CoffeeD + β(4) * CoffeeQ

Through a few simple experiments I learned that my personal weights lean heavily towards β(3) and β(4); i.e., the temperature effect was minimal.(6) In the same way, when designing models, think twice before measuring once.(7)
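
For the curious, here is a minimal MATLAB sketch of how such weights could be estimated by least squares; all of the tasting data is made up for illustration.

```matlab
% Each row: [WaterTemp, extractionRate, coffeeDose, coffeeQuality]; made-up data.
X = [92 2.0 18 7;
     96 2.2 18 7;
     92 2.0 22 9;
     88 1.8 22 9;
     94 2.1 20 8];
GoodCup = [6; 6.5; 8.5; 8; 7.5];    % (made-up) scores for each cup
beta = X \ GoodCup                  % least-squares estimate of the weights
```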

Footnotes

  1. Perhaps one day we will buy this chocolate teapot for our hot cocoa
  2. The "unexpected" is what you most want to capture; expected data can often be calculated. It is when the dice roll snake eyes that you learn the most.
  3. As an interesting side note, I often make 2 “plant” models. One that models the real world (the environment) and one that models the device I am controlling (e.g. the road and the car, air and an airplane, human veins and the I.V. system).
  4. Unlike the real world where unknowns are random events, the unknowns in simulation arise from modeling errors, and when that occurs, adding in additional logging is important.
  5. I was amused that the image of the book cover read “Note: this is not the actual book cover”! The use of a classical cover for fundamental physics seemed spot-on.
  6. There is a side benefit to brewing at the correct temperature: fewer cases of "wow, that's hot" on the first sip of coffee.
  7. Unless you are cutting wood in which case it is design once; check your design and measure twice, cut once.

Critical conditionals: initialization, termination and other conditional functions

Like many people, the COVID lockdown has given me time to practice skills; I have been spending time practicing my (written) German, so if you skip to the end you can see this post "auf Deutsch."

Back to the start

Stoplights provide us with information: Green = Go (Initialize), Red = Stop (Terminate), and Yellow, according to the movie Starman, means go very fast. A long-standing question within the Simulink environment has been, "what is the best way to perform initialization and termination operations?"

Old School: Direct Calls in C

Within the Simulink palette, the "Custom Code" blocks allow you to directly insert code for the Init and Termination functions. The code will show up exactly as typed in the block. The downside of this method is that the code does not run in simulation. (Note: this can also be done using direct calls to external C code. In these cases, getting the function to be called exactly when you want can be difficult.)

State School: Use of a Stateflow Diagram

A Stateflow chart can be used to define modes of operation; in this case, the mode of operation is switched either using a flag or an event. This approach allows you to call any code (through external function calls or direct functions) and allows for reset and other event driven modes of operation. The downside to this method is that you need to ensure that the Stateflow chart is the first block called within your model (this can be done by having a function caller explicitly call it first).

New School: A very economical way

The "Initialization," "Termination" (and "Reset") subsystems are the third, and recommended, method for performing these functions. The code for the Initialization and Termination variants will show up in the Init and Term sections of the generated code. Reset functions will show up in unique functions based on the reset event name.

Within these subsystems you can make direct calls to C code, invoke Simulink or MATLAB functions, and directly overwrite the state values of multiple blocks.
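
As a hedged sketch, a MATLAB Function placed inside such a subsystem could hand off to external C code along these lines; "hw_init" is a hypothetical routine, and the guard keeps the model simulatable when no C code is present.

```matlab
function callHwInit()  %#codegen
% Sketch: a MATLAB Function that could live inside an Initialization subsystem.
% 'hw_init' is a hypothetical external C routine registered with the build.
if ~coder.target('MATLAB')
    coder.ceval('hw_init');      % runs only in the generated code
else
    % simulation-only stand-in for the hardware start-up
end
end
```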

Best practices for Init and Term

MATLAB and Simulink have default initialization and termination functions for the model and the generated code. The defaults should only be overridden when the default behavior is incorrect for your model. There are four common reasons why custom Init / Term functions are required; if you don't fit into one of these, ask whether you should be using them at all.

  • Startup / Shutdown physical hardware: for embedded systems with direct connections to embedded hardware, the Init / Term functions are required. (Note: it is a best practice to try to have your hardware systems in models external to the control algorithms. This allows you to "re-target" your control algorithm to different boards easily.)
  • One time computations: Many systems have processor intensive computations that need to be performed prior to the execution of the “body” of the model.
  • External data: as part of the startup / shut down process, data may need to be saved to memory / drive.
  • You just read: a blog and you want to try things out… I’m glad you want to try it, but review the preceding 3 reasons.

Bonus content

As promised, the results of practicing my (written) German skills. Und so

Ampeln liefern uns Informationen: Grün = Los (Initialisieren), Rot = Halt (Beendigungsvorgänge) und Gelb bedeuten laut Film Starman, sehr schnell zu fahren. Eine langfristige Frage in der Simulink-Umgebung lautete: “Was ist am besten, um Initialisierungs- und Beendigungsvorgänge durchzuführen?”.

Old School: Direct Calls in C


Innerhalb der Simulink -Palette können Sie mit den Blöcken “Benutzerdefinierter Code” direkt Code für die Funktionen Init und Beendigungsvorgänge existiert. Der Code wird genau so angezeigt, wie er im Block eingegeben wurde. Das Problem bei dieser Methode ist, dass der Code in Simulation nicht ausgeführt wird. (Eine dinge: Dies kann auch durch direkte Aufrufe von externem C-Code erfolgen. In diesen Fällen kann es schwierig sein, die Funktion genau dann aufzurufen, wenn Sie möchten)

State School: Use of a Stateflow Diagram

Ein Stateflow Chart kann verwendet werden, um Betriebsmodi zu definieren. In diesem Fall wird der Betriebsmodus entweder mithilfe eines Flags oder eines Ereignisses umgeschaltet. Mit diesem Ansatz können Sie einen beliebigen Code aufrufen und zurücksetzen und andere ereignisgesteuerte Betriebsmodi ausführen. Die Einschränkung diesmal Methode ist, dass Sie sicherstellen müssen, dass das Stateflow-Diagramm der erste Block ist, mit dem in Ihrem Modell aufgerufen wird (dies kann erfolgen, indem ein Funktionsaufrufer es explizit zuerst aufruft).

New School: A very economical way


Die Modellblöcke “Initialisieren”, “Beendigungsvorgänge” (und “Zurücksetzen”) sind die endgültige Methode zur Ausführung dieser Funktionen. Der Code für die Initialisierungs- und Beendigungsvorgänge Optionen wird im Abschnitt “Init” und “Term” des generierten Codes angezeigt.
Rücksetzfunktionen werden in eindeutigen Funktionen angezeigt, die auf dem Namen des Rücksetzereignisses basieren.

Innerhalb dieses Subsystems können Sie C-Code direkt aufrufen, Simulink- oder MATLAB-Funktionen aufrufen und die Zustandsraum Werte für mehrere Blöcke direkt ersetzen.

Best practices for Init and Term

MATLAB und Simulink verfügen über standardmäßige Initialisierungs- und Beendigungsfunktionen für das Modell und den generierten Code. Es gibt vier häufige Gründe, warum benutzerdefinierte Init / Term-Funktionen erforderlich sind.


  • Physische Hardware starten / herunterfahren: Für eingebettete Systeme mit direkten Verbindungen zu eingebetteter Hardware sind die Init / Term-Funktionen erforderlich (Eine dinge: diese beste Vorgehensweise ist Ihre Hardwaresysteme außerhalb der Steuerungssysteme zu haben. Auf diese Weise konnen Sie Ihre Software schnell nue Hardware portieren.
  • Einmalige Berechnungen:Berechnungen erforderlich vor dem Start des Steueralgorithmus .
  • Externe Daten: Daten, die in externe Quellen geschrieben werden.

Testing your testing infrastructure

Ah, tests! Those silent protectors of development's integrity, always watching over us from the great continuous integration (CI) system in the clouds. Praise be to them and the eternal vigilance they provide; except… what happens to your test cases if your test infrastructure is incorrect?

Quis custodiet ipsos custodes(1)

There are four ways in which testing infrastructure can fail, from best to worst:

  1. Crashing: This is the best way your test infrastructure can fail. If this happens the test ends and you know it didn’t work.
  2. False failure: In this case, the developer will be sent a message saying “fix X”. The developer will look into it and say “your infrastructure is broken.”(2)
  3. Hanging: In this case the test never completes; eventually this will be flagged and you will get to the root of the problem.
  4. False pass: This is the bane of testing. The test passes so it is never checked out.

False passes

Prevention of false passes should be a primary objective in the creation of testing infrastructure; the question is “how do you do that?”

Design reviews are a critical part of preventing false passes. Remember, your testing infrastructure is one of the most heavily reused components you will ever create.

While it does not prevent false passes, adherence to standards and guidelines in the creation of test infrastructure will reduce common, known problems and make the infrastructure easier to review.

There are three primary types of "self test" (plus one failure mode you should not rely on):

  1. Golden data: the most common type of self test is to pass in known data that either passes or fails the test. This shows whether the infrastructure is behaving as expected but can miss edge cases(3) (a minimal sketch follows this list).
  2. Coverage testing: Use another tool to generate coverage tests. If this is done, then for each test vector provided by the tool provide the correct “pass or fail” result.
  3. Stress and concurrency testing: For software running in the cloud, verify that running in the cloud does not itself cause errors.(4)
  4. Time: Please, don't let this be the way you catch things… Eventually, because other things fail, false passes are found through root cause analysis.
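
A minimal sketch of a golden data self test; the limit checker and the data are hypothetical stand-ins for real test infrastructure.

```matlab
% Sketch of a "golden data" self test for a hypothetical limit checker used
% by the test infrastructure; known-good and known-bad data must produce the
% expected verdicts before the checker is trusted on real results.
checkLimits = @(signal, limit) all(abs(signal) <= limit);

goldenPass = 0.5 * ones(1, 100);      % should pass against a limit of 1
goldenFail = [0.5 * ones(1, 99), 2];  % one sample out of range: should fail

assert( checkLimits(goldenPass, 1), 'Self test: golden pass data was rejected');
assert(~checkLimits(goldenFail, 1), 'Self test: golden fail data was accepted');
```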

Final thoughts

In the same way that nobody(5) notices the waterworks until they fail, it is common to ignore testing infrastructure. Having a dedicated team supporting it is critical to having a smooth development process.

Footnotes

  1. In this I think we all need to take a note from Sir Samuel Vimes and watch ourselves.
  2. There is an issue here; frequently developers will blame the infrastructure before checking out what they did. Over time the infrastructure developers “tune out” the development engineers.
  3. Sometimes the “edge cases” that golden data tests miss are mainstream but since they were not reported in the test specification document, they are overlooked by the infrastructure developers.
  4. The type of errors seen here are normally multiple data read / writes to the same variable or licensing issues with tools in use.
  5. And if you look at it with only one eye, failures will slip past.

Why choose Model-Based Design?

Over the last 18 years, I’ve had a variation on this conversation on every project I have worked on. (Dramatized for a blog audience)

  • Talented Engineer (TE): This model based stuff is interesting and all, but I can write the same code in 1/2 the time and it is 10% more efficient.
  • Me (also a talented engineer): That is probably true, you write very good code. Do you enjoy writing code?

  • TE: Well no, I write code because that is what I need to do to implement my algorithm. But wait, you are admitting my code is better?
  • ME: Yes, yours is. How many other people in your group are as proficient in C? And if you don’t enjoy writing code, do you enjoy designing <MAGICAL SPACE WIDGETS>(1)?
  • TE: I went to school so I could work on MSWs(2); I love working at MSW Co on them; and really, maybe one out of the 20 can program as well as I do.
  • ME: Ok, well, how much time do you spend coding versus designing? Debugging versus testing?
  • TE: Tell me more about MBD stuff…

Realizing the benefit

The definition of Model-Based Design that I use is simple:

The use of a model(3) that is the single source of truth to 
execute two or more tasks in the design cycle 

I work for The MathWorks, but, for a minute I will be agnostic. The definition simply says “a model.” The “model” can be a physical prototype, an analog computer, C code or, I hope, a Simulink or Simscape model. The important part is that the same model, without changes (4), is used at multiple points in the design cycle.

By my estimate I have drawn the V diagram 1.3e5 times.(5)

If we think back to our TE in the opening section, what did they want to do? They wanted to design MSWs. They did not want to spend time creating test harnesses, writing test vectors, generating reports, and integrating with hardware. And why should they? TE was hired because he studied MSWs and knows how to design the best MSWs; why take him away from that task? Since MBD allows the same model to be used at multiple points, when our TE in design is done he can hand the model off to another TE in the testing group, who, when they finish, can hand it off to a TE in integration, who hands it off to a TE in release engineering. And why is this possible? Because it is much easier to find talented engineers who cover a given area very well (e.g., just test, or just release) than it is to find the magical unicorn(6) who can do all of the tasks well.

But wait! I can do all that in X

At some point down the line our TE comes back and says

  • TE: Wait, just two paragraphs ago you said a model could be C code, so why should I use this graphical language?
  • ME: Wait! I just wrote that, so how did you see it? But OK, depending on your application, you may use Simulink, Stateflow or the MATLAB (textual) environment. The key is the infrastructure built up around the environment that enables the "more than one use of the model."

Can and should are two different beasts.(7) Modern graphical modeling languages have supporting tools directly integrated into their environment, and the set-up and integration is reasonably straightforward. Textual languages, by their open nature, often have higher set-up and integration costs.

Making the transition / learning your way around

At first the transition to a graphical development environment(8) can seem daunting; Simulink's base palette has over 200 blocks,(9) and knowing which one is the correct one to use can be confusing at first. However, like learning any other language, you will quickly pick up the basics once you throw yourself in. Unlike when learning a new programming language, there are multiple transformation technologies you can apply directly to the model. When you start adopting Model-Based Design you should determine which "second task to execute" you want to adopt first. For more insights on this I would recommend viewing this roadmap.

Putting it all together

Ultimately the adoption of Model-Based Design isn't about the tools, it is about the process: how you use each tool at each step along the way to the best effect. I welcome you to continue to join me in this space as upcoming blog posts delve more into Model-Based Design processes.


Model-Based Design for the VP/CTO

In past blogs I have written and talked about the Return On Investment (ROI) of adopting Model-Based Design. This link, from The MathWorks, provides another good overview of the ROI question. I want to propose another reason for this migration / adoption. Finding an engineer / scientist who knows how to develop "magical space widgets" takes time; on-boarding them takes time. Losing them happens through frustration and boredom. This is one of the "hidden" drivers of ROI for MBD: when your people spend most of their time working on the things that interest them, in ways that use their abilities and knowledge, you have highly engaged employees, which leads to greater innovation and higher quality.

Footnotes

  1. MAGICAL SPACE WIDGETS is a generic term for a customer project. Sometimes it is a car or a plane, or sometimes an actual spacecraft.
  2. MSW Is the agreed upon TLA for Magical Space Widgets.
  3. In the actual MBD workflow it will be multiple models, but let’s start simply
  4. Without changes is a simplification. The model you start off with at the start of the design cycle will be elaborated as it is developed. The important point is that if you took that elaborated model back to the earlier stages of the process it should still function in that stage (at a higher level)
  5. The version that I like best of the V diagram reflects the iterative nature of design, that within each stage there are iterations moving forward and back. Much like a PID controller, a good process is self correcting to errors in the process.
  6. Magical unicorns do exist, just don’t count on your process depending on them.
  7. Or in the image’s case, T-Rex
  8. OK, I’m not trying to be subtle here; once you start seeing them as development environments where you don’t throw away your work at each step along the way, the benefits become clear.
  9. Honestly I'm not sure how to count the number of "basic" functions in a textual language like C; those 200+ blocks may seem like a lot at first, but once you realize they are targeted at the design of models you quickly pick them up.

This is “only” a test

In the last blog I introduced the best practices for designing scenario based tests. Today I am going to cover the non-Herculean(1) task of generating test vectors.

Good vector definitions have resolution down to the smallest time step

The “giddy” set-UP

Starting off happily, let's consider three things: the unit under test, the test harness, and the analysis method.

  • Unit Under Test (UUT): The UUT is what you are testing. For the test to be valid, the unit must be fully encapsulated by the test harness; i.e., all inputs and outputs to the UUT come through the test harness.(2)
  • The test harness:(3) Provides the interface to the UUT, supplying the inputs and reading/logging the outputs. Test harnesses can be black, white or grey box, and they can be dynamic or static.(4)
  • Analysis method: Dynamic or static; how the results of the test execution are evaluated.

Not to put the cart before the horse, but: we start with a test scenario. To run it we need test vectors; to have test vectors, we need a test harness; and to have a test harness, we need a well defined interface.(5)

Within the software testing domain (which includes MBD) a well defined interface means the following:

  • All the inputs and outputs of the system are known: Normally this is through a function interface (in C) or the root level inputs / outputs in a model
  • Type and timing are known: The execution rate (or trigger) for the UUT is known as are all of the data types and dimensions of the I/O.

Time to saddle up!

No more horsing around: once you have your interface designed, it is time to create your test harness. Given that we are working in the domain of Model-Based Design, the ideal objective is to automatically generate the test harness. (To all the neigh-sayers out there.)

A well defined interface!

Signal time!

There are four basic methods for creating signals:

  • Manually: Ah… good old fashioned hand-crafted test vectors. These take the most time but are where we normally start (a minimal sketch follows this list).
  • Automatically (general constraint): The next step up is to create test vectors using an auto generation tool. These tools generally allow for basic “types of tests” to be specified such as range, dead code, MCDC.
  • Automatically (constraints specified): The final approach is to use a test vector generation tool and apply constraints to the test vectors.
  • From device: Perhaps this is cheating, but a good percentage of input test vectors come from real world test data. These have all the pros and cons(6) of real data: noise and randomness; they may not hit exactly what you are looking for, but…
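
A minimal sketch of the manual method, driving a hypothetical model's root inport with a hand-crafted step vector (the model name and signal values are made up).

```matlab
% Sketch: a hand-crafted test vector (time and input columns) fed to a
% hypothetical model through its root inport.
t = (0:0.01:10)';                              % 10 s at a 10 ms step
u = [zeros(200,1); ones(length(t)-200,1)];     % step input after 2 s

mdl = 'ACControl';                             % hypothetical model name
load_system(mdl);
set_param(mdl, 'LoadExternalInput', 'on', 'ExternalInput', '[t u]');
simOut = sim(mdl, 'ReturnWorkspaceOutputs', 'on');   % run and capture outputs
```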

UUT and constraints

In this example we have the UUT and a "Test Assessment Block" as our method for imposing constraints. What we program into the Assessment Block is what we want to happen, not what we are checking against.(7) For example, we could specify that the input vectors for WheelSpeed, WheelTqCmd and SlipRationDetected are at given values and that the output vector is ABS_PWM. The automatic test vector generation would then create a set of tests that meet those conditions. You could then check for the cases where the ABS_Fault should be active.

COVID-19 Acceleration: issues with “from the device”

When you social distance from your co-workers you are, more often than not, social distancing from your physical hardware. This directly impacts the ability to gather "real world" test data. My prediction is that we will see four trends as a result.

  1. Greater use of existing real world data / public domain data sets: Let's be honest, there are times when data is gathered because it is easy to do so: go to the lab, run the widget, collect the data, and go. However, there is no doubt a wealth of existing data within your company, and within government and university databases, that will match what you need down to the 90% level.
  2. Increased automation of test data collection: To some extent, being in a lab or in a vehicle will always be required for collecting data; however, many of the processes around setup, data collection and data transmission can be automated to reduce both the time on site and the frequency of time on site.
  3. Improved physical models: I know what you are thinking, this is about collecting real world data! What sort of trick is this(8)! What I am suggesting is that the collection of physical data will be prioritized for the creation of better physical models, reducing the net time in the lab.
  4. In-use collection: The next step will be the transmission of data from existing objects in the field back to the manufacturer. The model "IC-2021" freezer in the field will, most likely, share 95% of the same hardware and software as the next product under development. This means you have a lab in the field.
The Lambert projection

All of these methods will be used going forward to supplement traditional real-world data collection. For the physical modeling approach, I am going to dive into how to select the data to collect so the models improve rapidly. For "in the field" collection, we will take our first look at big data methods.

Final thoughts

Test vectors are just one part of the overall testing infrastructure, but they are the necessary starting point. We are going to keep looking at all the points along the Verification and Validation process, both in depth and with an eye to the impact that COVID conditions continue to have.

Footnotes

  1. With the use of one last Greek hero of antiquity, I hope to build a metaphor for the 12 labors of Hercules as applied to testing (with far fewer labors)
  2. We will look at how large the UUT should be in another blog post. For now, we will give the ballpark that a UUT should be linked to 5 ~ 8 related requirements. Each requirement will have multiple tests associated with it.
  3. A good test harness should be like the harness for a horse: it provides a secure connection to the horse (software) enabling it to run fully, has the minimum number of attachment points (i.e., don't overload with test points), and connects without chafing (crashing or changing the behavior of the code).
  4. A dynamic test harness has the test validation software as part of the test harness, e.g. the UUT is evaluated as the test is run. A static test harness simply logs the data for post processing.
  5. Step 1 is to swallow a fly, today you will learn why!
  6. Noise is, and is not, a problem. Since it will exist in the real world, you should welcome noise into your test cases; it is what you will find once you deploy your product once and for all.
  7. As an example of what we want to happen: we may want to get a dessert (objective) but do not want one with coconut flavor (test).
  8. Not a very good trick, and 8! is 40,320.