IoT: MBD

The early promise of the “Internet of Things” (IoT) offered refrigerators that would let us know when to buy more milk. But the reality often failed the C3 test,(1) being more hype than help. However, with the growth of big-data analysis, IoT has come into its own.

Who is IoT for?

By its nature, IoT collects data on the end user. For consumer goods this can be an invasion of privacy; for industrial goods it can be a violation of production secrets. So the answer to “who is IoT for?” needs to be “for the customer.”

Predictive maintenance and IoT

I remember, when I was 16, pulling into a friend’s driveway. His father, who was in the garage, looked up and said to me, “Your timing belt is going to break in about 5,000 miles.”(2) He knew it by the sound, and he knew it from the data he had collected and analyzed. When a product ships, the designers know a subset of what they will know two, three, or a dozen years in the future. IoT allows them to learn a device’s failure modes and then roll that knowledge out to the end user.

IoT and MBD

Model-Based Design needs to interface with IoT in two ways: what to upload and how to update. As part of the design process, engineers now need to think about:

  • What data would be useful to improve performance?
  • What is the frequency of the data collection?
  • How much memory do I allocate for storing that data?

Put another way, your IoT strategy is the feeder for your DevOps workflows.
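As a rough illustration, those three questions can be captured in a simple logging specification. This is a minimal MATLAB sketch; the signal name, rates, and sizes are all hypothetical:

    % Hypothetical logging specification: one entry per signal of interest
    logSpec(1).signal    = 'motorTemperature'; % what data to collect
    logSpec(1).rateHz    = 1;                  % collection frequency
    logSpec(1).wordBytes = 2;                  % bytes per sample
    logSpec(1).windowSec = 3600;               % retention window

    % Memory to allocate for the on-device buffer
    bytesNeeded = [logSpec.rateHz] .* [logSpec.wordBytes] .* [logSpec.windowSec];
    fprintf('Logging buffer budget: %d bytes\n', sum(bytesNeeded));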

Footnotes

  1. The C3, or “Chocolate Chip Cookie,” test: when consumption is linear, a simple algorithm can determine when to order the next half gallon of milk. However, some things, like chocolate chip cookies that demand “milk for dunking,” break the linear prediction (unless you have them all the time).
  2. He was off by about 300 miles.

Readers’ Question Response!

I had some great questions from the “Reader request” post. Today I will set about answering the first batch: the hardware questions.

Why is hardware hard? Does it need to be?

There was a set of questions connected to targeting specific hardware, e.g. how to generate the most efficient code for your hardware. Within the MATLAB / Simulink toolchain there are three primary tasks:

Configuring the target board

The first step is to configure the target hardware through the hardware configuration menu. This defines the word size and endianness of the target hardware.
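For those who script their configuration, the same settings can be applied with set_param. A minimal sketch, assuming a model named 'myModel'; the device type shown is just an example:

    load_system('myModel');
    % Selecting a device type drives the word sizes and byte ordering
    % used by the code generator
    set_param('myModel', 'ProdHWDeviceType', 'ARM Compatible->ARM Cortex');
    % Inspect the derived settings
    get_param('myModel', 'ProdEndianess')   % byte ordering
    get_param('myModel', 'ProdBitPerInt')   % native integer word size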

Calling hardware devices

MATLAB and Simulink are not designed for the development of low-level device drivers for physical hardware. However, they are wonderful environments for integrating calls to physical hardware through defined APIs. The recommended best practice is to create interface blocks, often masked to configure the call to the API, which route the signal from the Simulink model to the low-level device driver.
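Inside such an interface block, a MATLAB Function block can route the call to the driver with coder.ceval. A minimal sketch; 'adc_read' and 'adc_driver.h' are hypothetical driver artifacts:

    function y = adcRead(channel) %#codegen
    % Interface wrapper: calls the low-level driver in generated code,
    % returns a stub value during simulation
    y = int16(0);
    if coder.target('MATLAB')
        y = int16(512);                  % simulation stub value
    else
        coder.cinclude('adc_driver.h');  % hypothetical driver header
        y = coder.ceval('adc_read', channel);
    end
    end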

Speedgoat Interface block

Another best practice is to separate the I/O blocks from the algorithmic code, e.g. create Input Subsystems and Output Subsystems. This allows for simulation without the need to “dummy” or “stub” out the I/O blocks.

Make it go fast!

The final topic goes deeper than the scope of this post: how to optimize the generated code. Assuming an appropriate configuration set has been selected, the next step is to use board-specific libraries. Hardware vendors often create highly optimized implementations of a subset of their mathematical functionality. Embedded Coder can leverage those libraries through the Code Replacement utilities. One thing of note: if this approach is taken, a PIL test should be performed to verify that the simulated behavior of the mathematical operation and the replacement library match.
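As a hedged sketch of the mechanism, the entry below maps sqrt to a hypothetical vendor routine fast_sqrt in vendor_math.h; the argument list follows the documented RTW.TflTable pattern, but verify it against your release:

    hTable = RTW.TflTable;
    % Replace double-precision sqrt with a vendor-optimized implementation
    hTable.registerCFunctionEntry(100, 1, 'sqrt', 'double', ...
        'fast_sqrt', 'double', 'vendor_math.h', '', '');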

King of the H.I.L. welcomes you to the Court (Part 1)

I started out my career on the G.M. project SimuCar, a full-vehicle simulation for Hardware-in-the-Loop validation covering transmissions and HVAC systems. The project introduced me to the requirements and rigor of real-time simulation as well as the practical limitations of hardware “bucks.” From there I transitioned to Applied Dynamics International (ADI), a Hardware-in-the-Loop vendor; in that role I developed a passion for rigorous simulation-based testing. Since that time I’ve had a chance to work with all of the major vendors: Speedgoat, dSPACE, OPAL-RT, and NI. Each vendor has unique strengths, but they all share common requirements in getting you “up that H.I.L.”(1)

Planning your summiting

A H.I.L. system has five principal components:

  • The target controller: the unit under test.
  • The physical H.I.L. system: the environment that provides signals to the controller.
  • The plant model: the simulated environment used to stimulate the controller.
  • The test runner: infrastructure for running the system, collecting data, and evaluating the results.
  • Wires and signal conditioning: the physical connections and mechanical/electrical conditioning used to “close the loop” between H.I.L. and controller.

Today I want to talk about how to plan the first foundational stones of the H.I.L.: the wiring and signal conditioning.

The wiring and signal conditioning

In my first year on SimuCar, by my estimate, my “initial” physical models would have destroyed close to 3 million dollars in GM prototype hardware if we hadn’t first validated the wiring and signal conditioning. So here is how to commission your H.I.L. system (a loopback sketch follows the list):

  • Run the loopback harness: most H.I.L. companies provide stand-alone loopback tests that enable initial validation of the hardware.
  • Beep your harness: the wiring harness connects the H.I.L. system to the controller. A wire-connection test ensures that the H.I.L. and controller are correctly exchanging information.
  • Open-loop connection validation: connect the controller to the H.I.L. system with a no-controls version of the controller to validate that signals are correctly read on the controller.
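
As a hedged illustration of the loopback step, the sketch below drives an analog output and checks the paired input; writeChannel and readChannel stand in for your vendor’s I/O API and are hypothetical names:

    testLevels = [0.0 2.5 5.0];            % commanded voltages
    for v = testLevels
        writeChannel('AO1', v);            % hypothetical analog-out call
        pause(0.01);                       % let the signal settle
        measured = readChannel('AI1');     % hypothetical analog-in call
        assert(abs(measured - v) < 0.05, ...
            'Loopback failure at %.2f V', v);
    end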

What is a “no-controls” model?

The basic model architecture for a H.I.L. system plant model decomposes the system into three top-level components: Inputs, Plant, and Outputs. Likewise, the controls model is decomposed into the OS, I/O, and control algorithms. The no-controls configuration pairs a H.I.L. model without the plant and a controls model without the control algorithms present. The test system then exercises the outputs and validates that they are correctly read into the controls system via the monitoring stack.

Once the signal connections and scaling are validated the same “no-controls” H.I.L. model can be reused in the testing environment.

Footnotes

  1. The initial commissioning of H.I.L. systems is critical for long-term success. This blog post aims to provide the tips for your base camp.

Head in the Cloud

The phrase “head in the clouds” describes a person who is dreaming and not focused on practical activities. But for software development, where CI/CD activities run in the cloud, it is important to “get your head in the cloud.”

Develop, Distribute and Deploy

The cloud is generally used for one of three activities:

  • Development: The development workflow includes verification testing and build activities.
  • Distributed execution: In this case, the product runs in the cloud for faster execution.
  • Deployment: Both the release of software and the collection of data from the field for further updates.

In today’s post we are going to focus on what makes cloud-based development different from local, e.g. “desktop,” development.

Massively parallelized development

Deploying development activities to the cloud means that testing and code-generation tasks can execute in parallel. To make the most of that parallelism (a test-suite sketch follows the list):

  • Determine task dependencies: The minority of CI tasks have dependencies. However, for those that do, it is important that they execute in the same thread.
  • Determine halting conditions: The greater resources of cloud-based systems often lead people to ignore “halting” conditions, resulting in wasted execution cycles. The sooner an invalid CI activity is halted, the sooner the root-cause issue can be addressed.
  • Group by common setup: Some CI activities have expensive setup activities. These tasks should be grouped together.
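
The grouping and parallelization steps map naturally onto the MATLAB unit test framework. A minimal sketch, assuming a 'tests' folder and the Parallel Computing Toolbox:

    import matlab.unittest.TestSuite
    import matlab.unittest.TestRunner

    suite  = TestSuite.fromFolder('tests');
    suite  = sortByFixtures(suite);          % group tests sharing expensive setup
    runner = TestRunner.withTextOutput;
    results = runInParallel(runner, suite);  % distribute across workers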

Reporting and logging

When running in the cloud rather than on the desktop, reporting and data logging become critical. In most cases cloud-based runs execute in virtual machines, and when the task is done the VM no longer exists. Because of this, diagnostic messages and data logs are critical for debugging issues in the results.
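
Within the MATLAB test framework, report and log artifacts can be produced as part of the run so they survive the VM. A minimal sketch; the folder and file names are placeholders:

    import matlab.unittest.TestRunner
    import matlab.unittest.plugins.XMLPlugin
    import matlab.unittest.plugins.TestReportPlugin

    suite  = testsuite('tests');   % assumes a 'tests' folder
    runner = TestRunner.withTextOutput;
    runner.addPlugin(XMLPlugin.producingJUnitFormat('results.xml')); % machine-readable
    runner.addPlugin(TestReportPlugin.producingPDF('report.pdf'));   % human-readable
    results = runner.run(suite);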

Because it is there…(When to use your H.I.L.)

There is a famous quote from George Mallory: when asked why he was trying to climb Mount Everest, he said, “Because it is there.”(1) Unfortunately, this is sometimes the approach taken for when to use a Hardware-in-the-Loop (H.I.L.) system; it is used “because we have it.”

Integrating a model into a H.I.L. system requires additional work that is not used on the final product. As a result, use of the system should be restricted to when it is needed. In this post I will review the primary reasons for using a H.I.L. system.

Validation of a 3rd party controller

Some “Third Parties”

The simplest reason for using a H.I.L. system is when the unit under test (UUT), or part of the overall system, is a third-party controller for which you do not have access to the source code for simulation-based tests. Common tasks in this scenario are fault detection and boundary-condition testing.

Timing studies

Connecting a controller to the H.I.L. system enables complex timing studies of the controller, e.g. determining the mean, max, and statistical variation in timing for the algorithm.

Fault injection / Noise

While fault injection(2) and noise simulation can be performed in a simulated environment, doing so generally requires modifications to the control algorithm; as a result, it is preferable to perform these tasks on a H.I.L. system.

Difficulty in modeling physical systems

In some cases, the creation of a physical model has a high cost in development time,(3) whereas the actual physical component(4) can be hooked into the H.I.L. system easily. In this case, having the physical component enables development that would not be possible in simulation and would be difficult or impossible in the actual environment.

Footnotes

  1. George Mallory died in his attempt to climb Everest; Sir Edmund Hillary and Tenzing Norgay were the first confirmed to reach the summit.
  2. On my first job we designed an automated Break-Out Box, which was marketed as Auto B.O.B. (this was for the automotive market). The running joke whenever there was a problem on the project was “It is Bob’s fault.”
  3. High cost in this case means the total engineering development time.
  4. When a physical component is hooked into the system it is called a “buck.” The first time I encountered this was with an early A.B.S. system. The brake dynamics were too fast to model, and the tire/brake system was easy to install.

Statistically, Model-Based Design is the way to go…

One of the most common objections to adoption of Model-Based Design is “I can write that code more efficiently.” My answer to that statement has been the same for 20+ years:

  • An exceptional programmer can write more efficient code
    but
  • The average programmer will not
    and
  • There is an opportunity cost when your best programmers are working on “day-to-day” tasks.

On average, the generated code will be better than the handwritten code, and your controls and software engineers will have more time for the tasks beyond coding.

Consistency is the key

When I wrote the last header I had an “off-by-one error”;(1) my fingers were shifted over on the keyboard and I wrote: “cpsostemceu os ,pmarcj.” This error was easy to spot, but an actual “off-by-one” can be hard to see.

This is where Model-Based Design shines; automated code generation will not make typographical errors, and it provides a simulation environment in which to catch design errors.

Training and patterns (guidelines) for success

It is still possible to have design errors when working in the Model-Based Design environment; these arise from developers incorrectly using a block or having incorrect settings for the model. These errors can be avoided through the use of modeling guidelines (MAAB) and detected through the use of simulation. In the meantime, consider taking a course to better understand the MBD workflows.

Footnote

  1. Off-by-one errors explained below.
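
  A minimal MATLAB sketch of the classic off-by-one failure; the loop bound stops one element early:

      v = 1:10;
      total = 0;
      for i = 1:numel(v)-1   % off by one: the last element is never summed
          total = total + v(i);
      end
      % total is 45, not the expected 55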

Simulation for testing and design

At the heart of digital simulation for Model-Based Design and Model-Based Systems Engineering lie two fundamental approaches: simulation for design and simulation for testing. While each approach can support the other, they should not be confused.

Simulation based testing

Fundamentally, a test has three criteria:

  1. There are objective, measurable pass and fail criteria
  2. The test is bounded in the scope of functionality covered
  3. The test is repeatable under simulation

These three criteria are part of the “locking down” acceptance process and define when a software object is ready for integration or release.
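
A minimal sketch of a simulation-based test that meets all three criteria; 'myModel' and the overshoot bound are hypothetical, and the model is assumed to log its output as a Dataset:

    classdef tStepResponse < matlab.unittest.TestCase
        methods (Test)
            function overshootWithinBound(testCase)
                % Repeatable: fixed model, fixed stop time, no random inputs
                simOut = sim('myModel', 'StopTime', '10');
                y = simOut.yout{1}.Values.Data;
                % Objective and bounded: overshoot on this one output < 5%
                testCase.verifyLessThan(max(y), 1.05);
            end
        end
    end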

Simulation based design

Simulation for design is an exploration process; it is informed by the objective criteria of testing but is not bound by them. Simulation for design is an iterative process of refining the outcomes of simulation until the software object is ready for formal testing.

Transitioning between

Design-based simulation activities can often be transitioned to formal tests, or, barring a direct transformation, they can inform the formal tests. To facilitate this transition, a testing / design framework should be developed that allows the design engineer to simulate and evaluate the model within the testing framework while keeping the infrastructural costs of doing so low. The following basic infrastructural components aid in this transition (a fixture sketch follows the list).

  • Create a common “set up / tear down” method for models and data: The method for loading models and model data should be consistent for both the testers and the developers. A set up and tear down method enforces a common execution behavior for both groups.
  • Create basic evaluation methods for examining simulation results: While test developers are able to create complex evaluation methods for simulation results, the design engineer should not be burdened with the creation of these sorts of tools. In practice roughly 75% of initial testing can be encapsulated with standard modular evaluation methods.
  • Standardize requirement descriptions: While the design engineer is not required to meet the strict definition of the requirements document, they should be working toward that specification. A consistent requirements-description language will facilitate the design process.
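
A minimal sketch of a common set up / tear down method as a shared test class; 'myModel' and loadModelData are hypothetical names:

    classdef tWithModelFixture < matlab.unittest.TestCase
        methods (TestClassSetup)
            function loadModelAndData(testCase)
                % Same load path for both testers and designers
                load_system('myModel');
                testCase.addTeardown(@() close_system('myModel', 0));
                % Hypothetical project-specific data loader
                evalin('base', 'modelData = loadModelData();');
            end
        end
    end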

Why take training?

The company I work for suggests ~60 hours per year of training. In general I take a split of 2/3 new content and 1/3 known material. The rationale for new content is clear, but why do I refresh the “known”?

The world turns, technology evolves

The first obvious answer is that programs and technology change. The MathWorks releases a new version of MATLAB / Simulink every six months, and each iteration brings new features and capabilities. Taking training courses online or (in the pre/post-COVID world) in person gives me a chance to learn about these features.

Refresh your view

No matter how long you have worked in a field, there is both an opportunity to learn about parts of the field you have not worked in before and to refresh and change how you see and understand the field. (Think of the famous young woman / old woman optical illusion: the same line reads as a mouth or a necklace depending on how you look at it.)

In taking a course you learn both from going over the content and from hearing a person explain it in a way you may not have thought of before.

My training mix

And so, what will I be studying this year, and why?

  • Object-oriented programming refresher: As more of my customers are using C++ as their development language, running through a course on C++ class and object fundamentals will improve my integration suggestions.
  • Machine learning for predictive maintenance: The intersection of data analysis and computational methods is a logical extension of my existing skill set, as well as providing a clear focus for the machine-learning work.
  • Fuzzy logic controls systems: In all honesty, it is just an area that I have been interested in for a long time and decided this would be the year to study it.

What is your training mix?

Bite-sized MBD tips

Let me tell you all the ways…

The use of enumerated data types is often overlooked in the MATLAB / Simulink environment. Using enumerations offers two distinct advantages (a sketch follows the list):

  1. Clarity: the enumeration name tells you what it is.
  2. Boundedness: when you use enumerations, you can’t go outside the allowed set of values.
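
A minimal sketch of defining and using an enumeration; 'GearState' is a hypothetical type:

    % Define an integer-backed enumerated type for use in Simulink
    Simulink.defineIntEnumType('GearState', ...
        {'Park', 'Reverse', 'Neutral', 'Drive'}, 0:3);

    g = GearState.Drive;   % clarity: the name tells you what it is
    % Boundedness: GearState(7) errors because 7 is outside the allowed set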

For Ox only

The MUX block has been part of the Simulink palette since the beginning. However, for signal combinations, either use vectors (same data type) or buses (mixed data types).

A “Mux” Ox

Switch to…

The switch block is useful for routing basic signals. However, it can result in unnecessary calculations being performed, and when working with complex upstream calculations it can decrease the readability of the model.

If the switch logic depends on past values, this can also be an indication that a Stateflow chart would be a better option.