There is a famous quote from George Mallory; when asked why he was trying to climb Mount Everest, he said, “Because it is there.”(1) Unfortunately, this is sometimes the rationale for using a Hardware In the Loop (H.I.L.) system: it is used “because we have it.”
Integrating a model into a H.I.L. system requires additional work that does not carry over to the final product. As a result, use of the system should be restricted to when it is needed. In this post I will review the primary reasons for using a H.I.L. system.
Validation of a 3rd party controller
The simplest reason for using a H.I.L. system is when the unit under test (UUT), or part of the overall system, is a third-party controller for which you do not have access to the source code for simulation-based tests. Common tasks in this scenario are fault detection and boundary-condition testing.
Connecting a controller to the H.I.L. system enables complex timing studies of the controller, e.g. determining the mean, max, and statistical variation in timing for the algorithm.
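As a sketch of what such a timing study produces, the mean, max, and spread can be computed from the per-cycle execution times captured on the H.I.L. rig. The sample values and function name below are illustrative (shown in Python for brevity), not output from any real H.I.L. toolchain:

```python
import statistics

def timing_summary(samples_us):
    """Summarize per-cycle task execution times (microseconds)
    captured from a H.I.L. run (hypothetical instrumentation)."""
    return {
        "mean": statistics.mean(samples_us),
        "max": max(samples_us),
        "stdev": statistics.pstdev(samples_us),
    }

# Hypothetical execution times for six control-loop iterations
samples = [102.0, 98.0, 105.0, 97.0, 110.0, 99.0]
summary = timing_summary(samples)
print(summary)
```

In practice the interesting number is usually the max (worst-case execution time) and the tail of the distribution, not the mean.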
Fault injection / Noise
While fault injection(2) and noise simulation can be performed in a simulated environment, doing so generally requires modifications to the control algorithm; as a result, it is often preferable to perform these tasks on a H.I.L. system.
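To show why this touches the algorithm’s inputs, here is a minimal fault-model sketch in Python; the function and fault types are hypothetical illustrations of additive sensor noise and a stuck-at sensor fault:

```python
import random

def inject_fault(signal, fault="none", noise_sigma=0.0, stuck_value=0.0, seed=None):
    """Apply a simple fault model to a sampled sensor signal.

    fault: "none", "noise" (additive Gaussian), or "stuck" (sensor frozen).
    A seed makes the noisy run repeatable for regression testing.
    """
    rng = random.Random(seed)
    if fault == "noise":
        return [s + rng.gauss(0.0, noise_sigma) for s in signal]
    if fault == "stuck":
        return [stuck_value] * len(signal)
    return list(signal)

clean = [0.0, 1.0, 2.0, 3.0]
print(inject_fault(clean, fault="stuck", stuck_value=2.5))  # [2.5, 2.5, 2.5, 2.5]
```

On a H.I.L. system the same effect is achieved electrically (breaking or loading the sensor line), with no change to the deployed algorithm.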
Difficulty in modeling physical systems
In some cases, the creation of a physical model has a high cost in development time (3) whereas the actual physical component(4) can be hooked into the H.I.L. system easily. In this case having the physical component enables development that would not be possible in simulation and would be difficult or impossible in the actual environment.
George Mallory died in his attempt to climb Everest, with Sir Edmund Hillary being the first to reach the top of the mountain.
On my first job we designed an automated Break Out Box which was marketed as Auto B.O.B (this was for the automotive market). The running joke when there were any problems on the project was “It is Bob’s fault.”
High cost in this case means the total engineering development time.
When a physical component is hooked into the system it is called a “buck.” The first time I encountered this was with an early A.B.S. system. The braking dynamics were too fast to model, and the tire / brake system was easy to install.
One of the most common objections to adoption of Model-Based Design is “I can write that code more efficiently.” My answer to that statement has been the same for 20+ years:
An exceptional programmer can write more efficient code but
The average programmer will not and
There is an opportunity cost when your best programmers are working on “day-to-day” tasks.
On average, the generated code will be better than the handwritten code, and your controls and software engineers will have more time for the tasks beyond coding.
Consistency is the key
When I wrote the last header I had an “off-by-one error”;(1) my fingers were shifted over on the keyboard and I wrote: “cpsostemceu os ,pmarcj.” This error was easy to spot, but an actual “off-by-one” can be hard to see.
This is where Model-Based Design shines; automated code generation will not make typographical errors, and it provides a simulation environment in which to see design errors.
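For illustration, here is what an actual off-by-one looks like in hand-written code, sketched in Python with hypothetical function names; the buggy loop silently drops one sample and is easy to miss in review:

```python
def moving_sum(data, window):
    """Sum of the last `window` samples -- the intended behavior."""
    return sum(data[-window:])

def moving_sum_buggy(data, window):
    """A hand-written version with a classic off-by-one:
    range(1, window) visits window - 1 elements, not window."""
    total = 0
    for i in range(1, window):   # off-by-one: should be range(1, window + 1)
        total += data[-i]
    return total

data = [1, 2, 3, 4, 5]
print(moving_sum(data, 3))       # 12  (3 + 4 + 5)
print(moving_sum_buggy(data, 3)) # 9   (one sample short)
```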
Training and patterns (guidelines) for success
It is still possible to have design errors when working in the Model-Based Design environment; these arise from developers incorrectly using a block or having incorrect settings for the model. These errors can be avoided through the use of modeling guidelines (MAAB) and detected through the use of simulation. In the meantime, consider taking a course to better understand the MBD workflows.
At the heart of digital simulation for Model-Based Design and Model-Based Systems Engineering lie two fundamental approaches: simulation for design and simulation for testing. While each approach can support the other, they should not be confused.
Simulation based testing
Fundamentally a test has 3 criteria:
There are objective, measurable pass and fail criteria
The test is bounded in the scope of functionality covered
The test is repeatable under simulation
These three criteria are part of the “locking down” acceptance process and define when a software object is ready for integration or release.
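A minimal sketch (in Python, with a hypothetical overshoot metric and recorded trace) of a test that satisfies all three criteria: the pass/fail threshold is objective, the scope is bounded to a single metric, and using a recorded simulation trace makes it repeatable:

```python
def step_response_overshoot(response, setpoint):
    """Objective metric: peak overshoot of a step response."""
    return max(response) - setpoint

def test_overshoot_within_limit():
    # Repeatable: a fixed, recorded simulation trace, not a live run.
    response = [0.0, 0.6, 0.95, 1.08, 1.02, 1.0]   # hypothetical sim output
    # Objective, bounded criterion: overshoot must not exceed 10%.
    assert step_response_overshoot(response, setpoint=1.0) <= 0.10

test_overshoot_within_limit()
print("overshoot test passed")
```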
Simulation based design
Simulation for design is an exploration process; it is informed by the objective criteria of testing but it is not bound by the objectives. Simulation for design is an iterative process of refining the outcomes of simulation until the software object is ready for formal testing.
The design based simulation activities can often be transitioned to formal tests, or barring a direct transformation they can inform the formal test. To facilitate this transitioning, a testing / design framework should be developed that allows the design engineer to simulate and evaluate the model within the testing framework while keeping the infrastructural costs for doing so low. The following basic infrastructural components aid in this transition.
Create a common “set up / tear down” method for models and data: The method for loading models and model data should be consistent for both the testers and the developers. A set up and tear down method enforces a common execution behavior for both groups.
Create basic evaluation methods for examining simulation results: While test developers are able to create complex evaluation methods for simulation results, the design engineer should not be burdened with the creation of these sorts of tools. In practice roughly 75% of initial testing can be encapsulated with standard modular evaluation methods.
Standardize requirement descriptions: While not required to meet the strict definition of the requirement document, the design engineer should be working towards that specification. A consistent requirements description language will facilitate the design process.
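A minimal sketch of such a shared framework, written in Python with hypothetical names: a context manager provides the common set up / tear down, and a reusable tolerance check stands in for the standard modular evaluation methods:

```python
class SimHarness:
    """Common set up / tear down shared by designers and testers.
    A sketch: in a real framework, __enter__ would load the model
    and data, and __exit__ would close the model and clean up."""

    def __init__(self, model_name, data):
        self.model_name = model_name
        self.data = data
        self.workspace = None

    def __enter__(self):              # set up: load model and data
        self.workspace = dict(self.data)
        return self

    def __exit__(self, *exc):         # tear down: release everything
        self.workspace = None
        return False

def within_tolerance(actual, expected, tol):
    """A standard, reusable evaluation method for simulation results."""
    return all(abs(a - e) <= tol for a, e in zip(actual, expected))

with SimHarness("speed_controller", {"Kp": 2.0}) as h:
    result = [1.0, 1.01, 0.99]        # stand-in for a simulation run
    print(within_tolerance(result, [1.0, 1.0, 1.0], tol=0.05))  # True
```

Because both groups enter through the same harness, a design engineer’s exploratory run can later be promoted to a formal test with little rework.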
The company I work for suggests ~60 hours per year of training. In general I take a split of 2/3 new content and 1/3 known. The rationale for new content is clear but why do I refresh on the “known”?
The world turns, technology evolves
The first obvious answer is “programs and technology change.” The MathWorks releases a new version of MATLAB / Simulink every 6 months, and each release brings new features and capabilities. Taking training courses online or (in the pre/post-COVID world) in person gives me a chance to learn about these features.
Refresh your view
No matter how long you have worked in a field there is both an opportunity to learn about parts of the field you have not worked in before and to refresh and change how you see and understand the field. (Note in this famous image, do you see the young or old woman? The arrow holds the key; is it a mouth or a necklace?)
In taking a course you learn both from going over the content and from hearing a person explain it in a way you may not have thought of before.
My training mix
And so what will I be studying this year and why?
Object-oriented programming refresher: As more of my customers are using C++ as their development language, running through a refresher on C++ class and object fundamentals will improve my integration suggestions.
Machine learning for predictive maintenance: The intersection of data analysis and computational methods is a logical extension of my existing skill set, as well as providing a clear focus for the machine learning work.
Fuzzy logic controls systems: In all honesty, it is just an area that I have been interested in for a long time and decided this would be the year to study it.
The use of enumerated data types is often overlooked in the MATLAB / Simulink environment. Using enumerations offers two distinct advantages:
Clarity: the enumeration name tells you what it is
Boundary: When you use enumerations you can’t go outside the allowed set
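Python’s `enum` module illustrates both advantages (the same ideas apply to Simulink enumerated data types); the `GearState` values here are hypothetical:

```python
from enum import Enum

class GearState(Enum):
    # Clarity: the name tells you what the value means
    PARK = 0
    REVERSE = 1
    DRIVE = 2

def shift(state):
    if state is GearState.DRIVE:
        return "moving forward"
    return "not in drive"

print(shift(GearState.DRIVE))   # clarity: the name says what it is
try:
    GearState(7)                # boundary: 7 is outside the allowed set
except ValueError as err:
    print("rejected:", err)
```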
For Ox only
The MUX block has been part of the Simulink palette since the beginning. However, for signal combinations, either use vectors (same data types) or buses (mixed data types).
The Switch block is useful for routing basic signals. However, it can result in unnecessary calculations being performed, and when working with complex upstream calculations, it can decrease the readability of the model.
If the switch logic depends on past values, this can also be an indication that a state chart would be a better option.
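The computational point can be sketched outside Simulink; in the Python illustration below (function names hypothetical), the switch-block style evaluates both upstream calculations and then picks one result, while conditional execution runs only the branch that is needed:

```python
def expensive_a(x):
    # Stand-in for a complex upstream calculation
    return sum(i * x for i in range(1000))

def expensive_b(x):
    # Stand-in for the other upstream path
    return sum(i + x for i in range(1000))

# Switch-block style: both upstream paths are computed every step,
# then one result is selected -- the unused work is wasted.
def switch_style(cond, x):
    a = expensive_a(x)
    b = expensive_b(x)
    return a if cond else b

# Conditionally-executed style: only the needed branch runs.
def branch_style(cond, x):
    return expensive_a(x) if cond else expensive_b(x)

print(switch_style(True, 2) == branch_style(True, 2))  # True
```

In Simulink terms, conditional execution corresponds to enabled/if-action subsystems or conditional input branch execution, rather than always computing both inputs to a Switch.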
I first heard the question “which would you rather have, flight or invisibility?” a dozen-plus years ago. At the time it struck me as a slightly humorous question that could prompt people to think about what they would do with an ability. Before you make a choice — flight or invisibility — here are the ground rules:
Flight means the power to travel in the air up to 100,000 feet at a maximum velocity of 1,000 MPH. You don’t have any other powers. You’re not invincible and you don’t have super strength. Thus depending on your natural strength you probably can’t carry many people with you on your flights. Large pets or small children would be key candidates.
Invisibility means the power to make yourself unseen, as well as your clothes. But things you pick up are still visible. Food and drink are visible until digested.
Key Point: These powers are things in and of themselves; they don’t grant additional powers.(1) Unlike Math.
Math:(2) The superpower that is Additive(3)
Mathematics has several cardinal points: arithmetic, logical, and geometric. From those starting points,(4) set theory, calculus, and statistics can be derived.(5) Computers and computer science grow from the arithmetic and logical branches of mathematics.
Model-Based Design (and Model-Based Systems Engineering) is in turn based on computer science and controls theory (which in turn is…).(6)
Knowing the antecedent…
Knowing the background of MBD/MBSE shows us how to improve the outcomes of these tools; for example, it tells us where to look when there are issues of performance and how far to backtrack to determine the solution. When I am trying to resolve a performance issue, I ask myself: what is the root cause of the problem? Is it arithmetic, computational, or logical? Knowing that, I then know how to improve it.
The restriction that the power is “one thing in and of itself” is to prevent the Superman conundrum that you can do anything: super speed and strength and flight and laser-beam eyes. This got me thinking, though: could you have a superhero based on just one power that could accomplish many super-feats, and if so, what would it be? Would that superpower explain some of the problems with the character of Superman’s powers? I think the answer is “Yes,” and the power is teleportation. Teleportation would accomplish:
Flight: multiple small teleportations to give the appearance of flying (think like an animation)
Super strength: not actually lifting things, just teleporting them up (or bending or what not). This also resolves the issue of picking up things that are structurally unstable, e.g. they stick together because they are teleported as a whole
Super speed: I always wondered why when Superman did a “super fast cleaning” the plates didn’t explode due to friction. The answer: he just teleports the dirt off!
In this post I am talking about math, but in reality it is critical thinking that is the true base superpower. If we think of footnote 1, it is the teleportation of the mind that takes you to new places.
Pun fully intended
Hmm, thinking about this, with 2 starting points we can grow linearly, with three, quadratically…
Given different starting points, the same underlying mathematical theories have been derived in different fashions.
On February 18th, 2021, N.A.S.A. had its latest successful landing of an interplanetary probe. What makes this landing even more impressive is that like their last landing, they used a “first time” method for getting the rover to the surface.
This string of successes can be attributed to the systems engineering processes embodied in this guide. The NASA Systems Engineering Handbook has been on my MBD bookshelf for a long time, and I’m happy to say that in January of 2021 they updated the guide.
Thoughts on the updated guide
The last update to the guide was in 2007. In that time, systems engineering has grown considerably in both its use and the complexity of the systems it addresses. Today I want to focus on one section of the document, 5.0 Product Realization. It breaks the realization process into 5 sub-steps: Implementation, Integration, Verification, Validation, and Transition.
The “Acquire” section has increased considerably, in recognition that complex software and hardware are now more readily purchasable.
The reuse section dives into the additional work required to successfully revamp existing software for reuse. It provides key insights into what needs to be done for reintegration.
The key insight from this section is that integration is a series of integrations, from lower level components to the final product integration.
Verification and Validation
As always, NASA’s description of Verification and Validation steps is the clearest out there.
The outline of how to create a verification and validation plan should be considered the gold standard for V&V activities.
Transitioning is often neglected in large projects; it is assumed that a verified and validated product will be usable by the end team.
This document lays out the training, documentation and “on delivered team verification” required for success in delivery.