Over time, current best practices, if unexamined, run the risk of becoming merely “okay practices.” This is one of the reasons why the majority of MathWorks customers update their software roughly every 2.5 years.(1) While there are many metaphors for this process, the one I would choose for you depends on where you are…(2)
Bootcamp: Are you new to Model-Based Design? In this case the review is an intensive ‘hit all the major areas’ to bring you up to speed as quickly as possible.
Tune up: If you have been following a Model-Based Design process for a number of years, the review covers the major processes while focusing on any rough spots you may have detected.
Refresh: Like the tune up, you are experienced with Model-Based Design but you are starting a new project. This gives you a ‘clean slate’ to bring new processes into play.
Before you get started, define a set of objectives (the “must have” and the “nice to have” improvements) as well as a timeline(3) to complete the changes. From here, there are multiple paths forward.
Consult with people who have joined your group from other companies.(5)
Review internal issue logs: find out what problems you have been having and look for ways to address them.
Once you are done
Ok, really, once you are done
Before starting, you set baseline objectives. Once those have been achieved, work should return to the development of your product. Incremental improvements can be made over time, but the focus should be on your core product. Rest assured, when the next 2.5-year cycle comes around you can make your next round of improvements.
There are two primary reasons for software updates: getting access to the latest version of the tools (including new and improved features) and the chance to review existing workflows.
One of the aspects of working as a consultant is that you are in a constant state of learning; every new customer brings new challenges and gives you another chance to examine how and what you recommend.
Three months to complete the main work of a version migration is typical; there will be some lingering tasks involving the use of new technology, which could push the total time out to 6 months.
In reading this blog you are already doing this one!
Note: as a general best practice, when an outside person comes in, it is a great chance to learn what other companies are doing (though take everything with a grain of salt; the other company’s process may or may not have been good).
Broadly speaking, control systems are broken down into event-driven and continuous-time(2) domains. Continuous systems are always taking in stimulus and putting out responses, while event-driven systems respond when the event happens; the rest of the time they sit patiently, quite unlike an 8-year-old waiting for the gates of Disneyland to open.(3) A challenge arises when you start to mix the two types of systems together: how do you mix the chocolate of continuous time with the peanut butter of events to create a greater whole?(5)
Everything was going… smoothly then there was a break
Events interrupt the smooth execution of continuous time systems. If the event preempts the execution of the continuous time system, you are halting the logic and calculations of the system. The return to the execution of the system can pose challenges. Let’s take a look at a few…
The most common problem is that when the interrupt event happens there is an overall change in the system state, and the pre- and post-interrupt data are no longer in correspondence.(6) This can result in incorrect calculations and/or commands.
In some instances the code associated with events can take a long time; as a result, the periodic update of the continuous-time system can “fall behind,” potentially missing multiple updates. In some instances important data is missed during that “time out.”
What to do?
As with many situations, there is no “one-size-fits-all” solution, but there are general rules of thumb.
When something has changed:
Buffered or protected data can be used to resume execution of the code with contiguous data.
A “return from event” handler can be used to determine if a system reset is required.
For the “important” events a buffer can be constructed within the event driven code.
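As a sketch of the first rule of thumb, here is a minimal double buffer in Python (the class and field names are illustrative, not from any MathWorks tool): the periodic task reads only from a snapshot latched at the start of its step, so an event that writes new values mid-step cannot leave the task with a mix of pre- and post-interrupt data.

```python
class DoubleBuffer:
    """Keep two copies of shared data so a periodic task always
    reads a consistent snapshot, even if an event writes mid-step."""

    def __init__(self, initial):
        self._live = dict(initial)      # written by the event handler
        self._snapshot = dict(initial)  # read by the periodic task

    def write_from_event(self, **updates):
        # The event handler only ever touches the live copy.
        self._live.update(updates)

    def begin_step(self):
        # At a safe point (start of the periodic step) the live copy
        # is latched into the snapshot in a single operation.
        self._snapshot = dict(self._live)
        return self._snapshot


buf = DoubleBuffer({"speed": 10.0, "gear": 1})
snap = buf.begin_step()               # periodic task starts its step
buf.write_from_event(gear=2)          # an event fires mid-step
assert snap["gear"] == 1              # this step still sees contiguous data
assert buf.begin_step()["gear"] == 2  # the next step picks up the event
```

In generated C code the same idea is typically realized with two static structures and an atomic pointer or index swap; the Python dictionary copy stands in for that latch operation.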
Testing the wind
Testing interrupted code is a sticky problem because, by definition, when the interrupt happens is unknown. Random interrupt generators provide one method for testing the code but cannot provide full coverage. The recommendation for this blog is to create “worst case” interrupts; e.g., look at your model and ask “what is the worst place where the execution could be interrupted?” and then interrupt it there…
Worst place for an emergency
In the middle of nested if/else-if/else logic: The branch you are in may no longer be valid.
Performing differentiation: When your “dt” changes the diff can wiff(7)
Fault / error detection: If the code’s job is to detect errors then, well, this is one area where you may want to protect against interrupts.(8)
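The differentiation case above can be made concrete with a few lines of arithmetic. This sketch (all values hypothetical) shows a finite-difference derivative that assumes a fixed step size: when an interrupt delays a sample, the samples are farther apart in real time than the code believes, and the computed slope is wrong.

```python
# Finite-difference derivative of x(t) = 5*t (true slope = 5.0),
# computed assuming a fixed nominal step of 0.01 s.
DT_NOMINAL = 0.01

def derivative(x_now, x_prev, dt_assumed=DT_NOMINAL):
    # Divides by the *assumed* dt, as fixed-step generated code would.
    return (x_now - x_prev) / dt_assumed

x = lambda t: 5.0 * t  # the signal being differentiated

# Normal step: samples really are 0.01 s apart -> slope ~5.0.
d_ok = derivative(x(0.02), x(0.01))

# An interrupt delays the next sample by one full period, so the
# samples are actually 0.02 s apart, but the code still divides
# by 0.01 -> slope ~10.0, double the true value. The diff wiffed.
d_bad = derivative(x(0.03), x(0.01))
```

A worst-case interrupt test targets exactly this spot: inject the delay between the two samples feeding the derivative and assert that the downstream logic tolerates (or detects) the error.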
Back at the dawn of time “Ring-Ring” was the sound that a phone made, a common “event driven” experience for many people.
I went back and forth 2 or 3 times on the “continuous time” driven domains. The world we live in is continuous time; it keeps on slipping into the future. However, most modern control algorithms are implemented on discrete-time hardware and require a discrete-time implementation.
Like an 8 year old who can happily read a book while waiting for the gates(4) to open, the poor control algorithm just sits there waiting for the event to ring.
That comment may or may not be autobiographical.
I want to distinguish between states and events; technically the shift from Neutral to First gear in a car is an event; however, it operates within the set of data for the continuous controller. The types of events examined here are often called “asynchronous” events.
Back in the old days control algorithms would send long letters betwixt each other; now, with email, texts, and phones, correspondence is less common.
Wiff: to miss, to have an unpleasant smell. Note: in general, integration operations are interrupt tolerant as they average values over time.
Events and errors come in all levels of severity; setting the priority of the event and the given error handling code enables your OS to arbitrate between the two functions.
When you read about how to cook you will often read the phrase “when cutting, let the knife do the work.” There is, however, a single word missing that changes everything; it should read “let the sharp knife do the work.” One word, big difference.
The right tool, set up correctly
During this pandemic I have finally learned how to sharpen knives(1) by hand for when I cook something yummy for my wife and me; let’s talk about how to “sharpen your models.”(2)
Model configuration: every model in Simulink has a set of parameters that defines how it executes and how code is generated.(3) Fortunately there is a simple utility that allows you to configure your model parameters for your target behavior.
You must specify basic behaviors such as the time domain, step size, and target hardware. But once that is done, the balance of your configuration is handled through the specification of your objectives.
Data settings: from your parameters to your signals, configuring your data creates more efficient and more controlled generated code. Consider setting the data type, storage class, and possibly units.
Block selection: mixing model domains such as discrete- and continuous-time blocks can result in sub-optimal performance. If multiple domains are required, partition the domains off using atomic boundaries.
These are, of course, just a slice(4) of the types of configuration you should consider when setting up your model. Ideally there is a group responsible for the basic setup and maintenance of your working environment. This should extend to all of the tools in your development process. For more in-depth information on these tools, take a look at the CI and version control posts in this blog.
Please note, this is not a product endorsement; it is a process endorsement. Also, we don’t have the “stropping block,” but I always thought it looked more fun to have the corded strop.
Note, this isn’t a perfect analogy as when you sharpen your configuration you are done for the project, unlike knives that need regular re-sharpening.
When we were first developing this tool I asked the question: How many configuration parameters does a Simulink model have, 113, 178, 238, 312? The correct answer was all of the above depending on the baseline configuration settings. As you can see, this tool is very useful.
Okay, I couldn’t resist one last knife reference for the post.
For every cliché there is a grain of truth and a seed of doubt.(1) First, ask why this advice is given and, knowing that, when you can go against it. Now, let’s talk about what in this example is “the road.”
Where the rubber meets the road
When I am developing a product, I am developing that product. Any work done on tools and infrastructure beyond it takes away from my core objective. Following this analogy, the “road” is a common infrastructure developed and maintained by many; meaning that it will be a fit environment for running our tires along. We can focus on making the best tire, not worrying about potholes.
When the road doesn’t go where you need…
In 98%(2) of software development practices, the existing infrastructure will take you all the way to your goal.(3) The primary objective here should be to triage the tool chain. Ask what functionality…
… is sufficient: use these portions as-is
… is near: see if existing workarounds and extensions exist
… is missing: determine if the functionality is truly needed and if so implement extensions to the existing tools
Joy and job satisfaction come from those times when, for good and just reasons, you do re-invent the wheel. Today I’m here to say, let us do it one wheel at a time until the world turns around new.
You could say that in doubting this you are going against the grain.
In the 2% case, you should milk it for all it is worth.
Mind you, I look at this photo and think “If I were the road runner I would paint something on here and run right through.”
Sometimes when you are trying to solve a problem the model is just too big to figure out where the problem lies. You may know the subsystem where it occurs, but the problem is, how do you test that subsystem apart from the whole?
Starting around 2015, Simulink provided a methodology for doing this (e.g., creating test harnesses from an atomic unit within the full model).
Set up the inputs and outputs to the subsystem under “test” for logging.
Simulate the full model through one of the scenarios that causes the issue.
Use the logged data as inputs to the test harness.
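The three steps above can be sketched in plain Python (Simulink’s test-harness tooling automates this; the subsystem and data here are illustrative stand-ins): run the scenario once with logging on, then replay the logged inputs into the isolated unit and confirm the baseline reproduces.

```python
def subsystem(u):
    # Stand-in for the subsystem under "test": a simple gain.
    return 2.0 * u

# Steps 1-2: run the "full model" scenario and log the subsystem's I/O.
logged_inputs = [0.0, 1.0, 2.5]
logged_outputs = [subsystem(u) for u in logged_inputs]

# Step 3: feed the logged inputs into the isolated harness and compare.
replayed = [subsystem(u) for u in logged_inputs]
assert replayed == logged_outputs  # baseline reproduced in isolation

# Root-cause exploration: perturb the logged inputs one at a time and
# watch how the output changes relative to the baseline.
perturbed = [subsystem(u + 0.1) for u in logged_inputs]
```

Once the perturbation that reproduces the fault is found, that input set becomes the pass/fail scenario for the saved regression test.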
The logged data provides the baseline for your exploration; e.g., it provides a basic, valid set of inputs. The objective now is to vary those inputs to determine the root cause of your error and, once that is accomplished, to validate your solution. (Note: the accompanying image shows that getting to the root is how you keep the “tree” alive.)
Save the test
A pass/fail criterion should be defined for the test harness. While the test harness may have been created as part of a debugging exercise, it should be saved as part of the regression testing suite.
The most common example of reusable states I have worked with involves fault detection and management. In this simple example we start off with “NineOclock” and “all = well.” Based on an input “errorCon” we can move to “all = soSo” and finally, if we don’t recover, “all = endsWell.” In this example the transition conditions are inputs to the system (e.g., we are calculating the values of “errorCon” and “recCon” outside of the system). This works for simple examples, but what if the condition logic is more complex? What should we do then?
The answer is the use of parameterized function calls. In this example the function “errorCheck” takes 3 arguments, inst, arg1, and arg2. The argument “inst” controls which branch of the switch/case function is used to calculate the transition logic. (Note: you do not need to use all of the input arguments for any given case, however they all must be passed in).
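The shape of that function can be sketched in Python (the Stateflow version would be a MATLAB function with a switch/case; the branch conditions and thresholds here are hypothetical): “inst” selects the branch, and, as the text notes, not every branch needs every argument even though all are passed in.

```python
def error_check(inst, arg1, arg2):
    """Transition-condition logic selected by 'inst'; mirrors a
    switch/case where each case computes one transition condition."""
    if inst == 1:
        # Error condition: threshold on arg1 only (arg2 unused here).
        return arg1 > 100.0
    elif inst == 2:
        # Recovery condition: uses both arguments.
        return arg1 < 50.0 and arg2 < 10.0
    else:
        # Default case: no transition.
        return False


assert error_check(1, 150.0, 0.0) is True   # error transition fires
assert error_check(2, 40.0, 5.0) is True    # recovery transition fires
assert error_check(3, 0.0, 0.0) is False    # unknown inst: stay put
```

Each transition in the chart then reduces to a single readable call such as `error_check(1, sensorTemp, 0)`, keeping the chart itself uncluttered.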
Reuse is still reuse…
Reuse with Stateflow charts has the same limitation as any other reusable function; e.g., the data types for the inputs/parameters need to be consistent across each instance of the chart. Additionally, if you want to use the function-based method for transitions, the MATLAB functions need to be set as globally visible.
Finally, while global data can be used between instances of the chart, since it can be written to in multiple locations, care should be taken that the information is not overwritten. The most common solution to this issue is to use “bitand” functions to turn individual bits on or off.
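The bit-flag idea looks like this in Python (MATLAB would use `bitand`/`bitor`; the fault names and bit assignments are illustrative): because each writer touches only its own bit, writers in different locations cannot overwrite each other’s information.

```python
# Each fault source owns one bit of the shared status word.
FAULT_SENSOR = 0b001
FAULT_COMMS  = 0b010
FAULT_POWER  = 0b100

status = 0
status |= FAULT_SENSOR    # chart instance A reports a sensor fault
status |= FAULT_POWER     # chart instance B reports a power fault
status &= ~FAULT_SENSOR   # sensor fault clears; the power bit survives

assert status & FAULT_POWER          # power fault still latched
assert not (status & FAULT_SENSOR)   # sensor fault cleared
assert not (status & FAULT_COMMS)    # never set
```

Reading a flag is a masked test (`status & FAULT_POWER`), so each instance can check only the bits it cares about.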
This post is the last (number 8) in a series of 8 video blogs walking through the fundamentals of Model-Based Design. Taken as a whole, these videos provide the foundation stones for understanding and implementing Model-Based Design workflows. I will be using a simple home A/C system for my example; however, the principles apply to everything from Acting cue generation to Zip-Zap-Zop entertainment.
A number of years ago my wife and I had a chance to try out a friend’s augmented reality / virtual reality (AR/VR) system. Deborah proved to be graceful in the artificial world.(1) I, on the other hand, had a dramatic fall when “running” down a virtual mountain.(2) In hindsight this is an example of a problem arising from an open-loop system.
Closing the virtual loop
Feedback is not enough; the feedback needs to be synced to the environment. This means the models of the person and the physical world must be accurate to a degree that fools our highly tuned senses. As a starting point, games have given us accurate physics models of the real world;(3) however, they fall short as they do not close the loop on the person in question.
A bespoke AR/VR suite
How do we seriously Taylor(4) the AR/VR to the individual? This is where an adaptive deep learning system can come into play. Given the person’s “overshoot” as input, the system can learn to provide the correct feedback for any situation, “increasing” the force needed to lift an object or making the ground come up at the proper rate.
Avoiding “God Mode”
Video games often have a concept of “God Mode.” In this mode the player has unlimited powers, can’t be hurt, and can run 1,000 km/hour. This is why an observer is needed for the deep learning system: to prevent feedback from going in the wrong direction. Here traditional “bounded” values can be enforced for any and all objects in the virtual world; e.g., the “force” to lift a 1 kg object will always fall between X and Y, with the final value tuned for each user.
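The observer reduces to a clamp on each learned output. A minimal sketch in Python (the bounds and values are hypothetical; the text’s X and Y are deliberately left unspecified, so placeholder numbers are used here purely for illustration): the learned feedback can be tuned anywhere inside the physical bounds, but a runaway “God Mode” value is rejected at the boundary.

```python
def bounded(value, lo, hi):
    """Observer: clamp a learned feedback value into known physical bounds."""
    return max(lo, min(hi, value))


# Placeholder bounds standing in for the X and Y in the text:
# the force to lift the object must stay inside [LIFT_LO, LIFT_HI].
LIFT_LO, LIFT_HI = 9.0, 12.0

assert bounded(10.3, LIFT_LO, LIFT_HI) == 10.3   # per-user tuning passes through
assert bounded(250.0, LIFT_LO, LIFT_HI) == 12.0  # "God Mode" output rejected
assert bounded(-5.0, LIFT_LO, LIFT_HI) == 9.0    # wrong-direction feedback rejected
```

In a full system one such bound would exist per object and per feedback channel, with the deep learning output always routed through the observer before it reaches the user.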
Learn the guitar!
When I was a child learning to play guitar, my instructor tapped her finger on my shoulder to help me learn the rhythm of the song, saying the name of each note to play as it came down the staff. As I got better she tapped less and said less. I can now imagine an AR/VR/DL/CL(5) system that
Watches my eyes see the notes
Gives light pressure to the fingers to guide towards the string
Learns when to step back…
Helping me relearn the guitar (or any other physical skill) much more quickly…
As she is in the real world.
The world was real enough that I “knew” I needed to jump and that, since I was going downhill, I would have 1/10 of a second more before I landed.
Accuracy going up if the model is of something being shot or blown up; the “seeing water flowing past you as you tube down a river” simulations are a bit further behind.
You can curve fit the first parts (e.g., a Taylor series) but the fine-tuning requires…
Going for the longest set of abbreviations I could: Augmented Reality, Virtual Reality, Deep Learning, Closed Loop system.
This post is number 7 in a series of 8 video blogs walking through the fundamentals of Model-Based Design. Taken as a whole, these videos provide the foundation stones for understanding and implementing Model-Based Design workflows. I will be using a simple home A/C system for my example; however, the principles apply to everything from animal control to animal containment (e.g., zoos).(1)
When I first started working in the area of Model-Based Design, the topic of code generation dominated the work, e.g., the need for custom TLC(2) and storage classes to configure the generated code to match your requirements. However, in the last 10 years, with few exceptions, the tools have evolved such that the required interfaces can be generated using standard built-in functions. This means that engineers can spend more time focusing on their project and less on the tool.
Zoos and their role in animal management have evolved considerably over the years; perhaps my favorite “zoo-like” place is the Duke Lemur Center, which my wife Deborah surprised me with for my birthday one year. It was so much fun!
TLC (Target Language Compiler) is a programming language used by MathWorks to customize the generated code. Over the years, when I have been asked what TLC stands for, my default answer has been “Truly Lovely Code,” as that was the desired outcome.
This post is number 6 in a series of 8 video blogs walking through the fundamentals of Model-Based Design. Taken as a whole, these videos provide the foundation stones for understanding and implementing Model-Based Design workflows. I will be using a simple home A/C system for my example; however, the principles apply to everything from chase-avoidance controllers to Zig and Zag dodging.(1)
Having a clear refining and elaboration process is key to an organized development process. The “test-as-you-go” methodology (also known as test-driven development) that I describe here provides a natural framework for ensuring that the system is in alignment with requirements throughout the development process.