ECUs with room to grow

Selecting an ECU (electronic control unit) for your project is an investment of both time and financial resources. Once the selection has been made for a given product line, that ECU will be used, on average, for 5 years. This means that even if the ECU meets all of your needs now, it may not in 3 years if you don’t plan ahead.

Types of growth

There are three types of growth that need to be accounted for:

  1. Increases in functions: as new features are added, new functions are added, and these functions take up additional processing time.
  2. Increases in memory: hand-in-hand with the new functions’ processing needs come memory needs.
  3. Increases in I/O: the trickiest of the lot. Sometimes it is just additional channels of existing I/O types, but in some cases it is a need for entirely new I/O types.

As a general rule of thumb, 80% to 85% memory and processor utilization at initial release provides a safe margin. For I/O hardware, two spare channels of each type are generally safe. In the case where new I/O types may be required, there are two options. The first is to select a hardware device that has product family members with additional I/O types. The second is to select a board that supports external I/O expansion slots.
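
As an illustration of applying these margins, here is a minimal MATLAB sketch of a release-readiness check; the measured utilization numbers and spare-channel counts are placeholders.

    % Rule-of-thumb release check; the measured values below are placeholders.
    maxUtilization = 0.85;                 % upper end of the 80%-85% guideline
    cpuUsed = 0.78;                        % measured worst-case CPU utilization
    ramUsed = 0.81;                        % measured memory utilization
    spareIO = struct('PWM', 3, 'ADC', 2, 'CAN', 1);   % spare channels by type

    if cpuUsed > maxUtilization || ramUsed > maxUtilization
        warning('Utilization exceeds the release margin; plan for a larger device.');
    end
    if any(structfun(@(n) n < 2, spareIO))
        warning('Fewer than two spare channels remain for at least one I/O type.');
    end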

Growth in the times of DevOps

Traditionally, updates to ECU software only happened when a new product was released, and they happened “in the factory.” With the growth of over-the-air updates (one of the driving features of DevOps), the starting metrics need to change. The rule of thumb will need to take into account the anticipated features to be released and determine which of those will be pushed as updates. The type of features to be pushed will be heavily dependent on the product type, with some products receiving very few updates (e.g., medical devices with high-integrity workflows) while others, such as consumer devices, may receive frequent updates.

Simulink has Class

Simulink is a graphical control design environment. However, it has the ability to include textual MATLAB algorithms as part of the model. For some design problems, this is the best approach. In today’s post, I’m going to review the three primary methods for including MATLAB code in your Simulink model.

Function / Class / System Object

There are three methods for including MATLAB code in Simulink models: MATLAB functions, MATLAB classes, or MATLAB System Objects.

Feature                                  Function   Class   System Object
Generates C code                            X         X           X
Simulate                                    X         X           X
Supports calls to external files            X         X           X
Object Oriented Code                                  X           X
Built-in I/O validation routines                                  X
Built-in state save / restore methods                             X

Features Summary

The Class and System Object approaches provide functionality that is not built into the Function method. However, both require additional knowledge of how to program: a MATLAB function can be written as a simple set of equations, while the object-oriented methods require some base level of object-oriented programming knowledge.
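
To make the comparison concrete, here is a minimal System Object sketch; the class name, property, and filtering algorithm are purely illustrative. It shows the built-in hooks listed in the table: input validation, a reset method, and a private property that is carried between time steps.

    % SimpleFilter.m -- illustrative System object
    classdef SimpleFilter < matlab.System
        properties (Access = private)
            prevValue = 0;            % held between time steps (becomes dWork data)
        end
        methods (Access = protected)
            function validateInputsImpl(~, u)
                % Built-in I/O validation hook
                validateattributes(u, {'double'}, {'scalar', 'finite'});
            end
            function y = stepImpl(obj, u)
                % Simple first-order smoothing as a stand-in algorithm
                y = 0.5 * obj.prevValue + 0.5 * u;
                obj.prevValue = y;
            end
            function resetImpl(obj)
                % Built-in state reset hook
                obj.prevValue = 0;
            end
        end
    end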

And state data: memory

For models that are targeting code generation there is an additional consideration: memory usage. State data, or dWork data in the generated code, is used for any variable that is required to be held in memory between time steps. With a MATLAB function, the user can explicitly define the variables that are state data by using the “persistent” keyword.
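
For example, a MATLAB Function block along these lines (the running-average algorithm is just illustrative) generates dWork data only for the variables declared persistent.

    function y = runningAverage(u)
    %#codegen
    persistent total count            % held in memory between time steps
    if isempty(total)
        total = 0;
        count = 0;
    end
    count = count + 1;
    total = total + u;
    y = total / count;
    end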

Example of calling a MATLAB class from within a MATLAB function block in Simulink
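
Below is a minimal sketch of that pattern; the Counter class, its value property, and the increment method are illustrative names. The persistent keyword keeps the object alive between time steps, while the class itself carries the data as a property.

    % MATLAB Function block body
    function y = accumulate(u)
    %#codegen
    persistent counterObj
    if isempty(counterObj)
        counterObj = Counter();       % illustrative handle class, defined below
    end
    y = counterObj.increment(u);
    end

    % Counter.m -- illustrative handle class
    classdef Counter < handle
        properties
            value = 0;                % stored as a dWork variable in generated code
        end
        methods
            function y = increment(obj, u)
                obj.value = obj.value + u;
                y = obj.value;
            end
        end
    end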

With a MATLAB class or MATLAB System Object, any data that is defined as a property will be stored as a dWork variable regardless of whether it needs to be a state variable. The number of state variables can be decreased in the class implementation; however, in doing so, much of the benefit of the class-based approach is lost.

Co-Simulation Fundamentals

Co-Simulation is when two or more modeling tools run concurrently, exchanging data between the tools. Co-Simulation is desirable when a single tool cannot achieve either the fidelity or execution speed required to model a given element.

Types of co-simulation

There are two primary types of co-simulation: imported and networked.

  • Imported: The “primary” tool incorporates the “secondary” models into its framework. The primary tool is responsible for the execution and timing of the secondary tool.
  • Networked: In the networked case, the tools execute independently, with a secondary program providing the data exchange layer between the tools. The data exchange layer is responsible for matching the time stamps of the tools.

Why Imported?

If your primary tool has a method for importing third party executables, like S-Functions in Simulink, then this is generally the easiest method for performing co-simulation. Once the functional interface is defined, the simulation engine of the primary tool provides the full execution context.

The downside of this approach is that it is generally less accurate than the networked option. This is most acute when either of the tools requires a variable-step solution, as the incorporated tools are most often run at a fixed step or at a variable step set by the primary tool.

Why Network?

In contrast, the networked approach allows you to run each tool with the optimal step size for its model; this results in higher accuracy but is, more often than not, much slower. The second issue with this approach is: how do you synchronize the tools?

In general, a 2nd- or 3rd-order spline should be used to match the data points between the tools at their different time steps. This means that the integration tool may need to store large amounts of data and perform significant calculations at each data exchange.
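
As a sketch of what the data exchange layer does, the snippet below resamples one tool’s logged signal onto another tool’s time grid with a cubic spline; the step sizes and the signal are illustrative.

    tA = 0:0.001:0.1;                      % tool A time stamps (1 ms steps)
    xA = sin(2*pi*30*tA);                  % tool A signal samples
    tB = 0:0.0007:0.1;                     % tool B time stamps (0.7 ms steps)

    xOnB = interp1(tA, xA, tB, 'spline');  % spline fit at each data exchange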

Before you start

Co-Simulation, regardless of the approach, is a significant investment of time. Before adopting this methodology, determine whether you truly need this level of fidelity or speed.

Classic Texts: UML Distilled

There are some texts that serve as a foundation stone for a field or technology: Kernighan & Ritchie’s “The C Programming Language” for C, Smith, Prabhu, and Friedman’s “Establishing A Model-Based Design Culture” for MBD, and, for UML, Martin Fowler’s “UML Distilled.” While the fields have moved beyond these three texts, they all act as the common starting point of the discussion. With that in mind, I want to talk about why reading UML Distilled will provide a significant boost to your system-level modeling abilities.(1)

The third, and latest, edition of UML Distilled was published in 2003. The usage of UML and the associated tool chains have evolved since then, but the core principles of the book hold up.

A definition that holds up to the test of time

The difficulty in talking about UML is that it is an open standard; as a result, different tools and different groups have variants on the implementation of the language. That is why this book is so valuable: it lays out the core nature of the major types of UML diagrams.

When to use!

Perhaps most importantly, the book lays out the cases for when to use each type of UML diagram, e.g. “Class Diagrams: When to Use” and “Sequence Diagrams: When to Use.” While, in my view, the book recommends the use of some diagrams where I do not think they are appropriate (specifically some of the recommendations for State Machine Diagrams and Communication Diagrams), it is admirable that Fowler provides the trade-offs between the different types of UML diagrams.

Object Oriented Classes and UML

One of the virtues of UML is the ability to graphically design object-oriented models using Class Diagrams. Combined with Sequence Diagrams, the basics of a system-level model can be defined. However, what is often missed is that the Class and Sequence diagrams need to be combined with Package and Deployment diagrams to fully implement a system-level model. Often only the first two are used in system design, and a less efficient implementation is created.(2)

Footnotes

  1. The primary issue surrounding UML is “when to stop.” UML diagrams are not intended for the final design of software but it is tempting to keep putting more information into the UML diagram. Fowler lays out a good case for how far to go.
  2. For smaller systems, Class and Sequence diagrams are sufficient; however, for larger system-of-systems the Package and Deployment diagrams are needed.

IOT: MBD

The early promise of the “Internet of Things” (IOT) offered refrigerators that would let us know when to buy more milk. But the reality often failed the C3 test,(1) being more hype than help. However, with the growth of big data analysis, IOT has come into its own.

Who is IOT for?

By its nature, IOT collects data on the end user. For consumer goods this can be an invasion of privacy; for industrial goods this can be a violation of production secrets. So the answer to “who is IOT for?” needs to be “for the customer.”

Predictive maintenance and IOT

I remember when I was 16, pulling into a friend’s driveway. His father, who was in the garage, looked up and said to me, “your timing belt is going to break in about 5,000 miles.”(2) He knew it by the sound, and he knew it by the data he had collected and analyzed. When a product ships, the designers know a subset of what they will know two, three, a dozen years in the future. IOT allows them to learn what the failure modes for the device are and then roll out detection of those failure modes to the end user.

IOT and MBD

Model-Based Design needs to interface with IOT in two ways: what to upload and how to update. As part of the design process, engineers now need to think about:

  • What data would be useful to improve performance?
  • What is the frequency of the data collection?
  • How much memory do I allocate for storing that data?

Put another way, your IOT strategy is the feeder to your DevOps workflows.
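
One way to make those questions concrete is to capture the collection plan as data; the sketch below uses placeholder signal names, rates, and buffer sizes to budget the logging memory.

    % Illustrative field-data collection plan (all values are placeholders).
    logPlan = struct( ...
        'signal',     {'motorTemp', 'faultCode', 'dutyCycle'}, ...
        'rateHz',     { 1,           0.1,         10        }, ...
        'bufferSize', { 3600,        256,         36000     });   % samples

    bytesPerSample = 4;                                % assume single precision
    totalBytes = sum([logPlan.bufferSize]) * bytesPerSample;
    fprintf('Logging buffer budget: %.1f kB\n', totalBytes/1024);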

Footnotes

  1. The C3 test, or “Chocolate Chip Cookie” test: when an operation is linear, a simple algorithm can determine when to order the next 1/2 gallon of milk. However, some things, like chocolate chip cookies that require “milk for dunking,” break the linear prediction (unless you have them all the time).
  2. He was off by about 300 miles.

Readers’ Question Response!

I had some great questions from the “Reader request” post. Today I will set about answering the first batch, the hardware questions.

Why is hardware hard? Does it need to be?

There was a set of questions connected to targeting specific hardware, e.g. how to generate the most efficient code for your hardware. Within the MATLAB / Simulink tool chain there are 3 primary tasks:

Configuring the target board

The first step is to configure the target hardware through the hardware configuration menu. This defines the word size and endianness of the target hardware.
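
The same settings can also be scripted; the sketch below assumes an ARM Cortex target, the model name is a placeholder, and the exact device-type string depends on the support packages you have installed.

    model = 'myController';                % placeholder model name
    load_system(model);
    % Selecting the device type sets the word sizes and byte ordering used
    % by the code generator (Hardware Implementation configuration pane).
    set_param(model, 'ProdHWDeviceType', 'ARM Compatible->ARM Cortex');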

Calling hardware devices

MATLAB and Simulink are not designed for the development of low-level device drivers for physical hardware. However, they are wonderful environments for integrating calls to physical hardware through defined APIs. The recommended best practice is to create interface blocks, often masked to configure the call to the API, which route the signal from the Simulink model to the low-level device driver.
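
A sketch of such an interface block is shown below: a MATLAB Function block, typically placed behind a mask, that routes the signal to a hypothetical C driver function dac_write() via coder.ceval. The call is made only in the generated code; simulation passes straight through.

    function writeDac(value)
    %#codegen
    if coder.target('Rtw')
        % Generated code: hand the value to the low-level device driver.
        coder.ceval('dac_write', value);   % hypothetical driver API
    end
    % During simulation no hardware call is made.
    end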

Speedgoat Interface block

Another best practice is to separate the I/O blocks from the algorithmic code, e.g. create Input subsystems and Output subsystems. This allows for simulation without the need to “dummy” or “stub” out the I/O blocks.

Make it go fast!

The final topic is deeper than the scope of this post: how to optimize the generated code. Assuming an appropriate configuration set has been selected, the next step is to use board-specific libraries. Hardware vendors often create highly optimized implementations for a subset of their mathematical functionality. Embedded Coder can leverage those libraries through the Code Replacement utilities. One thing of note: if this approach is taken, a PIL (processor-in-the-loop) test should be performed to verify that the simulated behavior of the mathematical operation and the replacement library match.

King of the H.I.L. welcomes you to the Court (Part 1)

I started out my career on the G.M. project SimuCar, a full-vehicle simulation for Hardware-in-the-Loop validation that models transmissions and HVAC systems. The project introduced me to the requirements and rigor of real-time simulation, as well as the practical limitations of hardware “bucks.” From there I transitioned to Applied Dynamics International (ADI), a Hardware-in-the-Loop vendor; in that role I developed a passion for rigorous simulation-based testing. Since that time I’ve had a chance to work with all of the major vendors: Speedgoat, dSpace, Opal-RT, and NI. Each vendor has unique strengths, but they all share common requirements in getting you “up that H.I.L.”(1)

Planning your summiting

A H.I.L. system has 5 principal components:

  • The target controller: the unit under test.
  • The physical H.I.L. system: the environment that provides signals to the controller.
  • The plant model: the simulated environment used to stimulate the controller.
  • The test runner: infrastructure for running the system, collecting data, and evaluating the results.
  • Wires and signal conditioning: the physical connections and mechanical/electrical conditioning used to “close the loop” between H.I.L. and controller.

Today I want to talk about how to plan out the first foundational stone of your H.I.L. system: the signal conditioning.

The wiring and signal conditioning

In my first year at SimuCar, by my estimates, my “initial” physical models would have destroyed close to 3 million dollars in GM prototype hardware if we hadn’t first validated the wiring and signal conditioning. So here is how to commission your H.I.L. system:

  • Run the loopback harness: most H.I.L. companies provide stand-alone loopback tests that enable initial validation of the hardware.
  • Beep your harness: the wiring harness connects the H.I.L. system to the controller. A wire connection test ensures that the H.I.L. and controller are correctly exchanging information.
  • Open-loop connection validation: connect the controller to the H.I.L. system with a no-controls version of the controller to validate that signals are correctly read on the controller.

What is a “no-controls” model?

The basic model architecture for a H.I.L. plant model decomposes the system into three top-level components: Inputs, Plant, and Outputs. Likewise, the controls model is decomposed into the OS, I/O, and control algorithms. The no-controls configuration pairs a H.I.L. model without the plant and a controls model without the control algorithms present. The test system then exercises the outputs and validates that they are correctly read into the controls system via the monitoring stack.

Once the signal connections and scaling are validated the same “no-controls” H.I.L. model can be reused in the testing environment.

Footnotes

  1. The initial commissioning of H.I.L. systems is critical for long-term success. This blog post aims at providing the tips for your base camp.

Head in the Cloud

The phrase “head in the clouds” describes a person who is dreaming and not focused on practical activities. But for software development where CI/CD activities run in the cloud, it is important to “get your head in the cloud.”

Develop, Distribute and Deploy

The cloud is generally used for one of three activities:

  • Development: The development workflow includes verification testing and build activities.
  • Distributed execution: In this case, the product runs in the cloud for faster execution.
  • Deployment: Both the release of software and the collection of data from the field for further updates.

In today’s post we are going to focus on what makes cloud-based development different from local, i.e. “desktop,” development.

Massively parallelized development

Deployment of development activities to the cloud means that the testing and code generation tasks can be executed in parallel. To take advantage of this:

  • Determine task dependencies: Most CI tasks have no dependencies on each other; however, for those that do, it is important that they execute in the same thread.
  • Determine halting conditions: The greater resources of cloud-based systems often lead people to ignore “halting” conditions, resulting in wasted execution cycles. The sooner an invalid CI activity is halted, the sooner the root-cause issue can be addressed.
  • Group by common setup: Some CI activities have expensive setup steps. These tasks should be grouped together, as in the sketch below.
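
A minimal MATLAB test-driver sketch illustrating these points; the folder names are placeholders, and the parallel option assumes the Parallel Computing Toolbox is available.

    % Run independent unit tests in parallel; halt before the expensive
    % code-generation tests if anything fails.
    resultsUnit = runtests('tests/unit', 'UseParallel', true);
    if any([resultsUnit.Failed])
        error('CI:halt', 'Unit tests failed; stopping the pipeline early.');
    end

    % Tests that share an expensive setup are grouped in one folder so the
    % setup cost is paid once.
    resultsCodegen = runtests('tests/codegen');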

Reporting and logging

When running in the cloud versus on the desktop, reporting and data logging become critical. In most cases, cloud-based runs execute in virtual machines, and when the task is done the VM no longer exists. Because of this, diagnostic messages and data logs are critical for debugging issues in the results.

Because it is there…(When to use your H.I.L.)

There is a famous quote from George Mallory; when asked why he was trying to climb Mount Everest he said, “Because it is there.”(1) Unfortunately, this is sometimes the rationale for using a Hardware-in-the-Loop (H.I.L.) system; it is used “because we have it.”

Integrating a model into a H.I.L. system requires additional work which is not used on the final product. As a result, use of the system should be restricted to when it is needed. In this post I will review the primary reasons for using a H.I.L. system.

Validation of a 3rd party controller

The simplest reason for using a H.I.L. system is when the unit under test (UUT), or part of the overall system, is a third-party controller for which you do not have access to the source code needed for simulation-based tests. Common tasks in this case are fault detection and boundary-condition testing.

Timing studies

Connecting a controller to the H.I.L. system enables complex timing studies of the controller, e.g. determining the mean, max, and statistical variation in timing for the algorithm.
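
For example, a logged vector of per-step execution times can be reduced to those statistics with a few lines; the log file name and the 1 ms budget below are placeholders.

    execTimes = readmatrix('task_times.csv');     % per-step execution times (s)
    fprintf('mean %.1f us, max %.1f us, std %.2f us\n', ...
            1e6*mean(execTimes), 1e6*max(execTimes), 1e6*std(execTimes));
    overruns = nnz(execTimes > 1e-3);             % steps over a 1 ms budget
    fprintf('%d step(s) exceeded the timing budget\n', overruns);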

Fault injection / Noise

While fault injection(2) and noise simulation can be performed in a simulated environment, doing so generally requires modifications to the control algorithm; as a result, it is preferable to perform these tasks on a H.I.L. system.

Difficulty in modeling physical systems

In some cases, the creation of a physical model has a high cost in development time (3) whereas the actual physical component(4) can be hooked into the H.I.L. system easily. In this case having the physical component enables development that would not be possible in simulation and would be difficult or impossible in the actual environment.

Footnotes

  1. George Mallory died in his attempt to climb Everest, with Sir Edmund Hillary being the first to reach the top of the mountain.
  2. On my first job we designed an automated Break Out Box which was marketed as Auto B.O.B (this was for the automotive market). The running joke when there were any problems on the project was “It is Bob’s fault.”
  3. High cost in this case means the total engineering development time.
  4. When a physical component is hooked into the system it is called a “buck.” The first time I encountered this was with an early A.B.S. system. The brake dynamics were too fast to model, and the tire/brake assembly was easy to install.

Reader Request Time

It is that time of the year: I am throwing the blog open to your questions about Model-Based Design and Model-Based Systems Engineering.