At the heart of digital simulation for Model-Based Design and Model-Based Systems Engineering lie two fundamental approaches: simulation for design and simulation for testing. While each approach can support the other, they should not be confused.
Simulation based testing
Fundamentally, a test has three criteria:
There are objective, measurable pass/fail criteria
The test is bounded in the scope of functionality covered
The test is repeatable under simulation
These three criteria are part of the “locking down” acceptance process and define when a software object is ready for integration or release.
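As a deliberately simplified sketch of the three criteria, here is a hypothetical test written in Python; the model under test is a stand-in first-order system (a real model would of course live in Simulink), and all names are illustrative:

```python
def simulate_step_response(gain, time_constant, t_end=5.0, dt=0.01):
    """Fixed-step simulation: repeatable under simulation (criterion 3)."""
    t, y, out = 0.0, 0.0, []
    while t <= t_end:
        y += dt * (gain - y) / time_constant
        out.append((t, y))
        t += dt
    return out

def test_settling_value():
    """Bounded scope (criterion 2): only the steady-state value is checked."""
    result = simulate_step_response(gain=1.0, time_constant=0.5)
    final_value = result[-1][1]
    # Objective, measurable pass/fail criterion (criterion 1)
    assert abs(final_value - 1.0) < 0.01

test_settling_value()
```

Note that the fixed solver step is what makes the run repeatable; a variable-step or hardware-in-the-loop run would need additional controls to satisfy criterion 3.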
Simulation based design
Simulation for design is an exploration process; it is informed by the objective criteria of testing but it is not bound by the objectives. Simulation for design is an iterative process of refining the outcomes of simulation until the software object is ready for formal testing.
Design-based simulation activities can often be transitioned into formal tests or, barring a direct transformation, can inform the formal tests. To facilitate this transition, a testing / design framework should be developed that allows the design engineer to simulate and evaluate the model within the testing framework while keeping the infrastructural costs of doing so low. The following basic infrastructural components aid in this transition.
Create a common “set up / tear down” method for models and data: The method for loading models and model data should be consistent for both the testers and the developers. A set up and tear down method enforces a common execution behavior for both groups.
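A minimal sketch of such a shared set up / tear down layer, written in Python for brevity (the `ModelSession` name and its fields are illustrative, not a real API):

```python
class ModelSession:
    """Common set up / tear down: designers and testers both load
    models and data through the same entry point."""

    def __init__(self, model_name, data):
        self.model_name = model_name   # model to load
        self.data = data               # parameter / input data
        self.loaded = False

    def __enter__(self):
        # Set up: load the model and its data in a consistent way
        self.loaded = True
        return self

    def __exit__(self, exc_type, exc, tb):
        # Tear down: release the model and clear workspace data
        self.loaded = False
        return False

with ModelSession("cruise_control", {"set_speed": 65}) as session:
    assert session.loaded    # simulate and evaluate here
```

Because both groups enter and exit through the same object, any change to the load/unload behavior is made once and applies to everyone.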
Create basic evaluation methods for examining simulation results: While test developers are able to create complex evaluation methods for simulation results, the design engineer should not be burdened with the creation of these sorts of tools. In practice roughly 75% of initial testing can be encapsulated with standard modular evaluation methods.
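A sketch of what standard, modular evaluation methods might look like (Python, with invented function names); each method answers one objective question about a logged signal:

```python
def within_bounds(signal, lo, hi):
    """Pass if every sample stays inside [lo, hi]."""
    return all(lo <= s <= hi for s in signal)

def final_value_near(signal, target, tol):
    """Pass if the last sample is within tol of target."""
    return abs(signal[-1] - target) <= tol

def monotonic_rising(signal):
    """Pass if the signal never decreases."""
    return all(a <= b for a, b in zip(signal, signal[1:]))

step_response = [0.0, 0.4, 0.7, 0.9, 1.0]
assert within_bounds(step_response, 0.0, 1.1)
assert final_value_near(step_response, 1.0, 0.05)
assert monotonic_rising(step_response)
```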
Standardize requirement descriptions: While the design engineer is not required to meet the strict definition of the requirements document, they should be working towards that specification. A consistent requirements-description language will facilitate the design process.
The company I work for suggests ~60 hours per year of training. In general I take a split of 2/3 new content and 1/3 known. The rationale for new content is clear, but why do I refresh on the “known”?
The world turns, technology evolves
The first obvious answer is “programs and technology change.” The MathWorks releases a new version of MATLAB / Simulink every 6 months, and each release brings new features and capabilities. Taking training courses online or (in the pre/post-COVID world) in person gives me a chance to learn about these features.
Refresh your view
No matter how long you have worked in a field, there is an opportunity both to learn about parts of the field you have not worked in before and to refresh and change how you see and understand the field. (Note: in the famous optical-illusion image, do you see the young woman or the old? The arrow holds the key; is it a mouth or a necklace?)
In taking a course you learn both from going over the content and from hearing a person explain it in a way you may not have thought of before.
My training mix
And so what will I be studying this year and why?
Object-oriented programming refresher: As more of my customers use C++ as their development language, running through a course on C++ class and object fundamentals will improve my integration suggestions.
Machine learning for predictive maintenance: The intersection of data analysis and computational methods is a logical extension of my existing skill set, as well as providing a clear application focus for the machine learning.
Fuzzy logic control systems: In all honesty, it is just an area that I have been interested in for a long time, and I decided this would be the year to study it.
The use of enumerated data types is often overlooked in the MATLAB / Simulink environment. Using enumerations offers two distinct advantages:
Clarity: the enumeration name tells you what it is
Boundary: When you use enumerations you can’t go outside the allowed set
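The same two advantages show up in any language's enumerations; here is a small Python sketch (the `VehicleState` type and its members are invented for illustration):

```python
from enum import Enum

class VehicleState(Enum):
    PARKED = 0
    DRIVING = 1
    CHARGING = 2

state = VehicleState.DRIVING        # Clarity: the name tells you what it is
assert state.name == "DRIVING"

try:
    VehicleState(99)                # Boundary: values outside the set fail
except ValueError:
    print("99 is not a valid VehicleState")
```

In Simulink the same boundary protection applies: an enumerated signal cannot take on a value outside its defined set, which turns a whole class of range errors into load-time or compile-time failures.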
The MUX block has been part of the Simulink palette since the beginning. However, for signal combinations, either use vectors (same data types) or buses (mixed data types).
The Switch block is useful for routing basic signals. However, it can result in unnecessary calculations being performed, and when working with complex upstream calculations it can decrease the readability of the model.
If the switch logic depends on past values, this can also be an indication that a state chart would be a better option.
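The “unnecessary calculation” concern can be sketched outside Simulink as well. In the hypothetical Python example below, the switch-style function evaluates both upstream calculations every step, while the conditional version (the analogue of a conditionally executed subsystem) evaluates only the branch it needs:

```python
calls = {"a": 0, "b": 0}

def expensive_a(x):
    calls["a"] += 1                 # stand-in for a complex upstream calc
    return x * 2

def expensive_b(x):
    calls["b"] += 1
    return x * 3

def switch_style(cond, x):
    # Switch-block analogue: both inputs computed, then one selected
    a, b = expensive_a(x), expensive_b(x)
    return a if cond else b

def conditional_style(cond, x):
    # Conditional-execution analogue: only the needed branch runs
    return expensive_a(x) if cond else expensive_b(x)

switch_style(True, 1.0)
assert calls == {"a": 1, "b": 1}    # wasted work on the unused branch

calls["a"] = calls["b"] = 0
conditional_style(True, 1.0)
assert calls == {"a": 1, "b": 0}    # only the selected branch ran
```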
I first heard the question “which would you rather have, flight or invisibility?” a dozen-plus years ago. At the time it struck me as a slightly humorous question that could prompt people to think about what they would do with an ability. Before you choose between flight and invisibility, here are the ground rules:
Flight means the power to travel in the air up to 100,000 feet at a maximum velocity of 1,000 MPH. You don’t have any other powers. You’re not invincible and you don’t have super strength. Thus, depending on your natural strength, you probably can’t carry many people with you on your flights. Large pets or small children would be key candidates.
Invisibility means the power to make yourself unseen, as well as your clothes. But things you pick up are still visible. Food and drink are visible until digested.
Key Point: These powers are things in and of themselves; they don’t grant additional powers.(1) Unlike Math.
Math:(2) The superpower that is Additive(3)
Mathematics has several cardinal points: arithmetic, logical, and geometric. From those starting points,(4) set theory, calculus, and statistics can be derived.(5) Computers and computer science grow from the arithmetic and logical branches of mathematics.
Model-Based Design (and Model-Based Systems Engineering) is in turn based on computer science and controls theory (which in turn is…).(6)
Knowing the antecedent…
Knowing the background of MBD/MBSE shows us how to improve the outcomes of these tools: for example, knowing where to look when there are performance issues, and how far to backtrack to find the solution. When I am trying to resolve a performance issue, I ask myself: what is the root cause of the problem? Is it arithmetic, computational, or logical? Knowing that, I know how to attack the problem.
(1) The restriction that the power is “one thing in and of itself” is to prevent the Superman conundrum that you can do anything: super speed and strength and flight and laser-beam eyes. This got me thinking, though: could you have a superhero based on just one power that could accomplish many super-feats, and if so, what would it be? Would that superpower explain some of the problems with the character of Superman’s powers? I think the answer is “yes,” and the power is teleportation. Teleportation would accomplish:
Flight: multiple small teleportations to give the appearance of flying (think like an animation)
Super strength: not actually lifting things, just teleporting them up (or bending them, or whatnot). This also resolves the issue of picking up things that are structurally unstable: they hold together because they are teleported as a whole
Super speed: I always wondered why when Superman did a “super fast cleaning” the plates didn’t explode due to friction. The answer: he just teleports the dirt off!
(2) In this post I am talking about math, but in reality it is critical thinking that is the true base superpower. If we think of footnote 1, it is the teleportation of the mind that takes you to new places.
(3) Pun fully intended.
(4) Hmm, thinking about this: with two starting points we can grow linearly; with three, quadratically…
(5) Given different starting points, the same underlying mathematical theories have been derived in different fashions.
On February 18th, 2021, NASA had its latest successful landing of an interplanetary probe. What makes this landing even more impressive is that, like their last landing, they used a “first time” method for getting the rover to the surface.
This string of successes can be attributed to the systems engineering processes embodied in this guide. The NASA Systems Engineering Handbook has been on my MBD bookshelf for a long time, and I’m happy to say that in January of 2021 they updated the guide.
Thoughts on the updated guide
The last update to the guide was in 2007; since then, both the use of systems engineering and the complexity of the systems it is applied to have grown considerably. Today I want to focus on one section of the document, 5.0 Product Realization. It breaks realization down into five sub-steps: Implementation, Integration, Verification, Validation, and Transition.
The “Acquire” section has grown considerably, in recognition that complex software and hardware are now more readily purchasable.
The reuse section dives into the additional work required to successfully revamp existing software for reuse. It provides key insights into what needs to be done for reintegration.
The key insight from this section is that integration is a series of integrations, from lower level components to the final product integration.
Verification and Validation
As always, NASA’s description of Verification and Validation steps is the clearest out there.
The outline of how to create a verification and validation plan should be considered the gold standard for V&V activities.
Transitioning is often neglected in large projects; it is assumed that a verified and validated product will be usable by the end team.
This document lays out the training, documentation, and “on delivered team verification” required for success in delivery.
Not all metrics are created equal. Often, in the rush to have metrics, the decision on what to collect is rushed, resulting in a glut of semi-useful information that makes it difficult to find the useful information. So what are the important Key Performance Indicators (KPIs) for a Model-Based Design DevOps workflow?
Data from test, data from field
In a traditional Model-Based Design workflow the data feedback ends after the product is released; the DevOps workflow brings that data back into the MBD world. How then do you make field data useful to the MBD workflow?
Central to a DevOps workflow is continual feedback: data flows bi-directionally, with updates from the product developer and error codes from the product in the field. With that in mind, we need to ask: from a simulation perspective, what information would we need to debug an issue?
Beyond error codes
Error codes are the minimal information required for debugging: “an error of type X1258Q1ar894 occurred.” This tells the end user very little and provides only slightly more information to the developer.
The next step up is the stack trace, i.e. the history of calls before the error occurred; this provides the developer with a first practical method for debugging the issue.
What is needed is the equivalent of an airplane’s “black box”: the history of the state information of the device when it failed. But a black box records everything, which would quickly overwhelm your development process. So how do you select the data you need?
A refrigerator is not an airplane
For most devices, a full data log of the device’s states is not warranted, given memory limitations and the criticality of the errors. Instead, what can be done is:
For each type of error code select a subset of data to be collected.
If the selected data is not sufficient to debug from the field, over-the-air updates can be pushed to increase the type and scope of the data collected.
Once the error is resolved, reduce the error logging for the error code.
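A hypothetical sketch of that three-step scheme; the error codes and signal names are invented for illustration:

```python
# Step 1: each error code maps to a small subset of logged signals
LOG_CONFIG = {
    "E_TEMP_HIGH": ["cabinet_temp", "compressor_duty"],
}

def expand_logging(code, extra_signals):
    """Step 2: an over-the-air update widens the data collected."""
    current = set(LOG_CONFIG.get(code, []))
    LOG_CONFIG[code] = sorted(current | set(extra_signals))

def reduce_logging(code, keep):
    """Step 3: once the error is resolved, trim the logging back down."""
    LOG_CONFIG[code] = [s for s in LOG_CONFIG[code] if s in keep]

expand_logging("E_TEMP_HIGH", ["door_open_time"])
assert "door_open_time" in LOG_CONFIG["E_TEMP_HIGH"]

reduce_logging("E_TEMP_HIGH", {"cabinet_temp"})
assert LOG_CONFIG["E_TEMP_HIGH"] == ["cabinet_temp"]
```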
The “new” KPIs
The new KPIs for a DevOps workflow stem from the central tenet of DevOps: constant feedback. Determine a metric for error-code severity and error-code frequency, and elevate errors for correction based on those two metrics. As the system is developed, allocate diagnostics to areas that were difficult to simulate and validate, in order to enable post-deployment validation.
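One possible (illustrative, not prescriptive) form of the severity/frequency metric, sketched in Python with invented error codes and weights:

```python
errors = [
    {"code": "E101", "severity": 3, "count": 40},
    {"code": "E202", "severity": 5, "count": 2},
    {"code": "E303", "severity": 1, "count": 500},
]

# Elevate errors for correction by severity x frequency
for e in errors:
    e["priority"] = e["severity"] * e["count"]

elevated = sorted(errors, key=lambda e: e["priority"], reverse=True)
assert [e["code"] for e in elevated] == ["E303", "E101", "E202"]
```

A simple product is only one choice; a real project might weight severity non-linearly so that a rare, safety-critical error always outranks a frequent nuisance error.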
Every model needs some TLC, but how do you make it easy to give the model the care it needs?
Maintenance starts with the foundation, and the foundation of a model consists of the guidelines that are followed. In the MATLAB / Simulink world that means using the MAAB style guidelines. The modeling guidelines focus on model readability and correctness; so what is the “first floor” in our house?
The “correct” size
Size is not an absolute number; rather, it needs to be viewed as part of a metric encompassing complexity, the number of associated requirements, and the commonality of the blocks and code.
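As an illustration only, here is a hypothetical composite score along those lines; the weights and the formula are invented for this sketch, not a standard metric:

```python
def model_size_score(complexity, n_requirements, reuse_fraction):
    """Higher score = harder to maintain; reuse discounts the score."""
    return complexity * (1 + n_requirements) * (1.0 - 0.5 * reuse_fraction)

small = model_size_score(complexity=10, n_requirements=1, reuse_fraction=0.8)
large = model_size_score(complexity=40, n_requirements=6, reuse_fraction=0.0)
assert small < large    # the composite view, not any single number, decides
```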
Common code: e.g. Reuse
The more you can reuse code across portions of your project (or projects), the simpler maintenance becomes (though you need the highest level of validation for reused code). For an overview of reuse, I will reuse this blog.
What is complexity?
Like reuse, complexity is a topic I have covered before. The key takeaway here is to link complexity to maintainability; lower complexity generally means easier-to-maintain models. However, there is an absolute complexity to any problem, e.g. you can’t make a problem simpler than it truly is. Arbitrary decomposition of a complex problem into smaller subproblems can increase the complexity of the system by making the data or concept flow difficult to understand. Which brings us to…
Requirements and maintainability
The ultimate metric that I recommend for ensuring that models are easy to maintain is linked to the number of requirements per model. Ideally each model will be associated with one or two top-level requirements.
Since maintenance of the model is associated with either changes to the requirements or bugs found in the field, the more top-level requirements associated with a model, the more frequently the model will need to be updated.
Updates to a model require revalidation of the model and its associated requirements. If you change a model with only one associated requirement, you only need to revalidate that requirement. If you change a model with multiple requirements, even if the change affects only one of them, you need to revalidate all of them. Every time you pull out a thread…
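The revalidation cost can be sketched as a simple mapping from models to their linked requirements (model and requirement names are illustrative):

```python
model_requirements = {
    "model_A": ["REQ-1"],                     # one requirement: cheap change
    "model_B": ["REQ-2", "REQ-3", "REQ-4"],   # three: every change costs 3x
}

def revalidation_scope(changed_model):
    """Every requirement linked to a changed model must be revalidated,
    even if only one of them drove the change."""
    return model_requirements[changed_model]

assert revalidation_scope("model_A") == ["REQ-1"]
assert len(revalidation_scope("model_B")) == 3
```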
Raise the roof!
To finish our “This Old House” metaphor, we will talk about the roof. In this case the roof is our test cases. In most, if not all, cases of model updates, the test cases will need to be recoded and revalidated. This is a secondary reason to limit the number of requirements associated with a given model: the more requirements, the more test cases that must be updated.
Built well and with proper care, your model can be a home for generations to come.