Last week the “Z” key popped off of my laptop. As my standard language of communication is English this has not been a big deal; however, on Friday when I went to write an email in German things changed. More correctly, the task (typing) was the same but the mode (language) changed. Never one to let a random thing pass, I thought “wow, this is a great topic for a blog post”.(1)
Testing in the “comfort zone”
It is a well-known problem with testing that we tend to test the things we know best; as a result we end up with incomplete test coverage. There are two problems: first, the obvious one, that we are not covering everything; second, that we are wasting time by creating redundant tests in the “well known” areas.(2)
How to exit your comfort zone?
Let’s look at three types of models: mode based, continuous (e.g. equation based), logical, and hybrid(3).
- Mode based: exiting the “CZ” with mode based models is simple; validate that you are testing every mode in the model.
- It should be noted that with mode based models there is often a temporal component that must be validated.
- Additionally, the “mode to mode” paths should be validated, as initial conditions can differ depending on which mode you have transitioned from.
- Continuous (equation): these models can be fully tested by exercising the full range of all inputs. For example, let’s say you have two inputs U1 and U2 with ranges [0 to 10] and [-5 to 5]. The test vectors would cover the range [0,-5 : 10,5]. There are a few considerations.
- Test spacing: depending on how sensitive the output is to changes in the input, the “steps” in the coverage need to be adjusted. E.g. should you “step” input U1 in 0.1 or 0.5 increments?
- Exclusivity: in some instances inputs are related; e.g. the temperature of your car’s engine is never less than the outside temperature.(4) This can reduce the test range.
- Temporal: another factor is “how long” are you at each data point?
- Logical: these are similar to mode based models; however, they lack the state information that mode based testing implies. Like mode based testing, this is validated by exercising each logical path in the model. Tools like Simulink Design Verifier can be used to generate these test vectors.
- Hybrid: e.g. 95% of all models. This is where design of experiments comes into play. For large systems it may not be practical to test “every point”; however, that is not the goal of testing. The objective of testing is to cover every operational behavior.
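For the continuous case, “exercising the full range of all inputs” amounts to generating a grid of test points over the input ranges. A minimal sketch (the 0.5 step size stands in for the tunable “test spacing” discussed above):

```python
import itertools

def grid_test_vectors(ranges, steps):
    """Build exhaustive test points over the given input ranges.

    ranges: list of (lo, hi) pairs, one per input.
    steps:  step size per input (the "test spacing").
    Returns a list of tuples covering the full input space.
    """
    axes = []
    for (lo, hi), step in zip(ranges, steps):
        n = int(round((hi - lo) / step)) + 1
        axes.append([lo + i * step for i in range(n)])
    return list(itertools.product(*axes))

# U1 in [0, 10] and U2 in [-5, 5], both stepped in 0.5 increments
points = grid_test_vectors([(0, 10), (-5, 5)], [0.5, 0.5])
```

Even at 0.5 spacing this produces 441 points for just two inputs; exclusivity constraints (like the engine temperature example) are what prune this list down.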
Good tests take time, both to develop and to execute. Assuming a library of basic test functionality and a well written requirements document for the component, you can estimate the number of test “points” as a function of the number of modes, inputs and logical branches.
TP = 1.25 * Modes + (max(1,numInputs/2) * numOutputs)^1.5 + numLogical/2; (5)
This formula is empirical, and is derived from a review of test sets for well ordered models. The assumptions built into the formula are:
- The “Mode-to-mode” connections are limited, e.g. not every mode can transition to every other mode
- There is a fair degree of mutual exclusivity in the input vectors.
- The number of tests is more sensitive to the number of outputs than number of inputs.
- Logical tests can often have redundant paths due to the lack of state information.
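As a sketch, the estimate translates directly into code; note the middle term is written here as max(1, numInputs/2), since min(0, ·) would zero out the term for any positive input count.

```python
def estimate_test_points(num_modes, num_inputs, num_outputs, num_logical):
    """Empirical test-point estimate for a well ordered model.

    The input/output term uses max(1, num_inputs / 2); a min(0, ...)
    there would vanish for any positive input count.
    """
    return (1.25 * num_modes
            + (max(1, num_inputs / 2) * num_outputs) ** 1.5
            + num_logical / 2)

# e.g. 4 modes, 6 inputs, 2 outputs, 10 logical branches
tp = estimate_test_points(4, 6, 2, 10)
```

The 1.5 exponent is what makes the output count dominate, per the third assumption above.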
The final part of this equation is “time of construction”, which refers to how long it takes to create each test point. Both mode based and logic based test vectors can be automatically generated, often achieving 100% coverage (or showing that 100% coverage is not possible and that there is an error in your logic). As a result I generally estimate the time to develop these tests as
t = (NumModes + NumLogic) * 1 minute;
The time assumes that some level of errors in modeling will be discovered and corrected. For the equation (continuous) testing, the time depends on how well the requirements cover the input space; the more of the input space the requirements cover, the lower the total testing time.
t = Num_Req * 45 minutes + (10 minutes * %notCovered * 100)
Again, this is an empirical formula based on well ordered models and an existing testing infrastructure.
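Both empirical time estimates are easy to keep around as helper functions; %notCovered is read here as a fraction in [0, 1], an assumption about the units in the post.

```python
def auto_test_minutes(num_modes, num_logic):
    """Mode and logic vectors are auto-generated; roughly 1 minute
    each, mostly spent reviewing and correcting modeling errors."""
    return (num_modes + num_logic) * 1

def equation_test_minutes(num_req, frac_not_covered):
    """45 minutes per requirement, plus a penalty for the share of
    the input space the requirements leave uncovered (a fraction
    in [0, 1] here: an assumption of this sketch)."""
    return num_req * 45 + 10 * frac_not_covered * 100
```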
- This blog post will be forwarded to my manager to explain the $23 repair bill I submitted.
- I was once proudly told by an engineer that they had over 100 tests for their system; the problem was that those 100 tests were all dedicated to 6 of the 27 functions in the system. We corrected that issue.
- That sentence should have been tested, as there are 4 “types” in there, not 3. This is what happens when you “design” a sentence with preconceived notions.
- Ok, that statement isn’t absolutely true; if you had your car in a “cold box” and then drove it out into a warm day, for a short period of time the engine would be colder than the outside air. At the same time, if you are storing your car in a “cold box” you
- Hmmm, ending with a “;” shows just how long I have been programming in MATLAB.
- Again, thank you https://smbc-comics.com/
Please consider subscribing for regular updates
In 1999 the movie The Matrix introduced(2) millions of people to the philosophical question “how do we know if we live in the real world or a simulation?” As an engineer working for the company that makes the “Matrix Laboratory,” I have thought about this idea and its logical extension: can we trick a machine into thinking it is in the real world?
Is this the real life? (3)
Simulations of reality have existed for a long time, from wind tunnels, to lumped mass models, to complex finite element models; they have been the backbone of engineering design. As powerful as these models are, they have been limited to simulations of physical properties, or perhaps a system of physical properties. The next generation of simulations attempts to simulate the complex nature of the real world, that is to say people and their semi-predictable behavior.
Humans as lumpy-mass models
The world is filled with humans, roughly 7.6 billion as of the writing of this post. We are out there driving cars, walking in the street, making phone calls, cooking dinner, talking about philosophy and sitcoms. We do a lot of different things. When moving in a mass we are largely predictable; that is to say, if you asked me to calculate how long it would take for 100,000 people to exit University of Michigan’s football stadium, I could give you a reasonably accurate answer for the total. If, however, you asked me how long it would take a given person, then it becomes more difficult. It is the aggregate behavior that is predictable. The issue is that for controllers that interact with the real world the aggregate is not enough.
So how do you go about simulating a pod of people? There are several basic methods.
- Conway’s game of life: “humans” can be simulated by giving them a set of basic “rules”. Those rules (with weighted objective functions) determine how they operate. Note: your rules can’t be too perfect; real humans make mistakes. (This is often done using a cellular automata approach.)
- Genetic algorithms: The humans can be derived using genetic algorithms(5). In this case a set of baseline behaviors are defined as well as “mutations” or permutations on those behaviors.
- Fluid dynamic analogies: fluid dynamic models do a good job of modeling flow “restrictions” around doors and changing widths in the system.
- Real world data (human in the loop): the most difficult to set up, but done well, mining the real world for data on how people act, and react, provides the most accurate models. The previous three suggestions can be considered reduced-form versions of the “RWD-HIL”.
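To give a flavor of the rule-based (Conway-style) approach, here is a toy agent that follows a simple “move toward the goal, avoid obstacles” rule and occasionally makes a mistake. Every rule, rate and name here is illustrative, not from a real crowd model.

```python
import random

def step_agent(pos, goal, obstacle_cells, rng, mistake_rate=0.05):
    """One rule-based update on a grid: step one cell toward the
    goal, avoid obstacles, and occasionally freeze in place;
    real humans are not perfect rule-followers."""
    if rng.random() < mistake_rate:
        return pos  # the "human error" rule
    x, y = pos
    gx, gy = goal
    dx = (gx > x) - (gx < x)  # -1, 0 or 1, toward the goal
    dy = (gy > y) - (gy < y)
    candidate = (x + dx, y + dy)
    return pos if candidate in obstacle_cells else candidate

rng = random.Random(42)
pos = (0, 0)
for _ in range(20):
    pos = step_agent(pos, (5, 5), set(), rng)
```

Run over thousands of agents, rules like these reproduce the aggregate (stadium-exit) behavior while individual paths stay noisy.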
Why do we want a matrix for our machine?
It is an oft-repeated question for self-driving cars: what happens if a child darts out in front of one? Because it is repeated so often it is now tested for heavily. But what about all the other things kids (or adults) do that are foolish? Ever drop something in the street and stop to pick it up? Ever order 100 tee-shirts instead of 1? Pull up on the throttle when you should have pushed down? In the end people are semi-random creatures. Creating realistic models of people allows us to create better control algorithms.
- The image of the “red pill” / “blue pill” should not be taken as an endorsement of A/B testing as the only validation methodology
- Note, it introduced people to the question, it does not mean that many people put much thought into it beyond “dude, how can you know?”
- Note, if we do it correctly we never have to sing “Mama, my controller just killed a man”
- A pod of people is of course a reference to “pod-people“; e.g. close enough to fool some people
- Real people are created from generic algorithms so this should work, right?
If you find benefit from this blog, consider subscribing
In my last post, Execution order and Simulink models, I promised a look at scheduling best practices using Stateflow; in this post I hope to deliver.
Simple periodic scheduling
In our first example we will look at simple periodic schedulers. Let’s assume we have a system with three rates, 0.01 sec, 0.1 and 0.2 seconds. This can easily be implemented in Stateflow with the following chart.
If we look at the “activation” for each of the task sets we would see the following.
In this case you can see that each of the tasks is being triggered at its given rate; at 0.1 seconds both the 0.01 and 0.1 tasks activate, and at 0.2 all three are active. In many cases this is fine; the order in which these tasks execute is set by the order in the chart (e.g. 0.01, 0.1, then 0.2). However, you may want to “space out” the activations. In that case a Stateflow chart like this would be the solution.
In this case three parallel states are created. The 0.1 and 0.2 rates have “offsets” so that they execute out of sync with each other, as shown in the resulting execution graph.
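The same offset pattern can be sketched outside Simulink to see the activation times; the rates and offsets below mirror the example, though the helper itself is illustrative.

```python
def activations(base_rate, tasks, t_end):
    """Activation times for a set of periodic tasks.

    tasks maps a name to (period, offset), all in seconds; a task
    fires whenever (t - offset) is an integer multiple of period.
    """
    schedule = {name: [] for name in tasks}
    steps = int(round(t_end / base_rate))
    for k in range(steps + 1):
        t = k * base_rate
        for name, (period, offset) in tasks.items():
            n = round((t - offset) / period)
            if t >= offset and abs((t - offset) - n * period) < 1e-9:
                schedule[name].append(round(t, 4))
    return schedule

# 0.01 s base rate; the 0.1 s and 0.2 s tasks carry small offsets
sched = activations(0.01, {"fast": (0.01, 0.0),
                           "mid": (0.1, 0.01),
                           "slow": (0.2, 0.03)}, 0.5)
```

With the offsets in place, “mid” and “slow” never fire on the same base step, which is exactly the “spacing out” the second chart achieves.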
Mode based scheduling
Beyond rate based scheduling, mode based scheduling is the next iteration of the scheduling examples.
In this example the scheduler is decomposed into three parallel states: the “Main” or rate based state, a mode based state, and an event based state. The main state is similar to the two previous examples, so let’s look at Mode and Event.
There are three things of note here. First, within this state we start the system off in the “Initialization” state; this is a safe selection, as most systems start off in “Init”. Next, movement between the states is controlled by the input variable “Mode”; use of the ‘hasChanged’ method gates the transitions between the different modes, allowing the user to switch from any of the modes without complex routing logic. Finally, the mode “Emergency” is for the non-critical scheduling responses to emergencies; any actions that fall into the true emergency mode should be event driven so their execution starts immediately.
Our final example is event driven scheduling; within this chart we have the “React” states and a “Null” state. The null state provides a “no-operation” mode when no events are active. Two things of note: first, in this example events are mutually exclusive; this does not need to be the case. Second, the current example exits the “React” states after one execution; the exits could be guarded to continue execution until the event is resolved.
If you are enjoying what you are reading consider subscribing to the email version of this blog.
When you program in a textual language, execution order is directly specified by the order in which you write the lines of code.
- My Function
- Do A
- Do B
- Do C
With multiple functions, a threaded OS, or event driven interrupts this is more complex, but at its heart it is directly specified in the language. In contrast, Simulink determines execution order based on data flow.
Which executes first, Chicken or Egg?
The principle of data flow based execution is that calculations are performed once data is present. We will start with the simplest example possible: one path, direct feed-through.
In this example we have a data flow from left to right (Inport #1 to Outport #1). The calculation is
Output = Input * Chicken * Egg
In this example we have introduced a Unit Delay block. This changes the execution order:
Output = LP * Egg
LP = Input * Chicken
In this case the “Egg” calculation takes place first, since the unit delay provides existing information allowing calculation of the output value.
In our next example the execution order is arbitrary; since there are no data dependencies between “Chicken” and “Egg”, output 1 or output 2 could be calculated first. (Note: Simulink provides a “sorted execution order”; these are the red numbers you see in the image. The lower numbers are executed first.)
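Under the hood this sorting is a topological sort of the block dependency graph. A sketch using Python’s graphlib, with block names mirroring the example (the dependency sets themselves are illustrative):

```python
from graphlib import TopologicalSorter

# An entry "A: {B}" means A consumes B's output. The unit delay has
# no same-step dependency: its output is state from the previous
# step, which is what breaks the chicken-and-egg tie.
deps = {
    "Chicken": {"Inport1"},
    "UnitDelay": set(),
    "Egg": {"UnitDelay"},
    "Outport1": {"Egg"},
}
order = list(TopologicalSorter(deps).static_order())
```

Any order consistent with the dependencies is legal; blocks with no path between them (like “Chicken” and “Egg” here) can be sorted either way, which is why Simulink’s red numbers are needed to make the choice visible.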
Controlling execution order (avoiding egg on your face)
In the last example we showed that with independent data flows execution order is resolved by Simulink; however, there are instances where the execution order matters (1). The “obvious” and very wrong solution is to add a data dependency to the path.
By adding a “zero value gain” and sum block to the “Egg path” I have forced the “Chicken path” to be executed first (4). For reasons that should be obvious this is a bad idea; anyone who looked at the model would think “why are they doing that?” and they would be correct in asking that question. The recommended approach is to make the execution order explicit using a function driven approach.
In this case the “Egg” executes second; this is known from the “B” number in the block: “ChickenPath” is B0, while “EggPath” is B1, and the lower number executes first. For more complex execution orders a Stateflow chart can be used to define the execution order.
In this “tongue in beak” example we see that “eggs” only execute during the “WeHaveEggs” and “Dinosaur” states. Once we hit the “WeHaveBoth” state (after many eons) the chicken executes first. In my next post I will give examples of best practices for controlling execution order with Stateflow charts.
- Within the subsystem the execution order does not matter. However, there are several cases where it can matter:
- Time limited execution: if the given function has a limited execution time allocated, it is possible that not all of the calculations can be performed in that period. In that case you would want to control the execution order.
- Consumption by parallel process: If the data from one (or more) of the paths is used by a parallel process and that process needs data “first” then you want to control the execution process.
- Determinism: For some analysis locking down execution order will simplify the execution task.
- I debated even showing this image, however I have seen many “cleaver”(3) engineers come up with this solution.
- Yeah, I realized I was autocorrected to “cleaver”, as in something that cuts or chops. It was an error at first. Then I realized that I liked it more; they are not really “clever”, rather they are just chopping apart a problem.
- This could be considered a form of “playing chicken”
Please consider subscribing for regular email updates
Newton’s method is one of the foundational concepts in numerical mathematics; it is a method for finding the minimum of a real-valued function through a series of successive approximations. While the approach has limitations (it can be “trapped” in a local minimum, it can be slow, …) it is the gateway algorithm(1).
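As a refresher, for minimization the update is x ← x − f′(x)/f″(x), repeated until the step (equivalently, the gradient) is small. A minimal sketch, with an illustrative quadratic:

```python
def newton_minimize(df, d2f, x0, tol=1e-10, max_iter=50):
    """Newton's method for a 1-D minimum: repeatedly step by
    -f'(x)/f''(x). It needs f'' > 0 near the solution, and it will
    happily settle into a local minimum if one is nearby."""
    x = x0
    for _ in range(max_iter):
        step = df(x) / d2f(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# f(x) = (x - 3)^2 + 1: df = 2(x - 3), d2f = 2, minimum at x = 3
x_min = newton_minimize(lambda x: 2 * (x - 3), lambda x: 2.0, x0=0.0)
```

For a quadratic the method lands on the minimum in a single step, which is part of its pedagogical charm.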
Newton and Model-Based Design
Aside from the equations of motion, integration, and the theory of gravity (to start a long list), the children of Newton’s method are found in optimization problems. Optimization problems seek to minimize (or maximize) a value for a set of equations (or input data) based on the system input. Let’s consider an example: my Chevy Volt, and optimizing energy use during my daily commute to work.
- Distance to work
- Range of battery
- Effect of starting battery temperature
- Driving conditions (highway / surface streets)
- Cabin climate control (heating / cooling)
- Cost of
- Charging the vehicle (at home / at work)
- Gas (in the Volt the gas engine charges the battery)
- Carbon emissions from the different energy sources.
When I frame this equation I set the commute as a “from-work-to-home, from-home-to-work” route. Here I must thank MathWorks: there are free charging stations at my office, therefore any charging there is “free” (2).
- cost = C1 + C2 + C3
- C1 = Cost_per_Kilowatt_Work * Charge + E1
- Since charging at work is free, Cost_per_Kilowatt_Work = 0, so C1 = E1 * Charge
- C2 = Cost_per_Kilowatt_Home * Charge + E2
- C2 = (0.12 + E2) * Charge
- C3 = Cost_Per_Gallon * Gallons_Per_Kilowatt * Charge + E3
- C3 = (3.50 / 30 + E3) * Charge
In this case E1, E2 and E3 are the external costs of power use. I don’t directly pay these costs(3), but from an ethical standpoint let us remember them.
The total charge is a function of distance, driving conditions and environmental conditions (do I run the heat or AC).
- Charge = F(Distance,DriveCond) + F(Cabin,Environment)
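Pulling the cost terms together as code (a sketch, not an optimization framework): the gas rate here reads the 30 as kilowatts per gallon, so the per-kilowatt cost is 3.50 / 30; that reading, and the function name, are assumptions of this sketch.

```python
def commute_cost(charge_work, charge_home, charge_gas,
                 e1=0.0, e2=0.0, e3=0.0):
    """Total cost = C1 + C2 + C3 with externalities E1..E3.

    Each charge_* argument is the charge drawn from that source.
    Work power is free, home power is $0.12 per kilowatt, and gas
    is $3.50 per gallon at an assumed 30 kilowatts per gallon.
    """
    c1 = (0.0 + e1) * charge_work       # C1: free charging at work
    c2 = (0.12 + e2) * charge_home      # C2: home electricity
    c3 = (3.50 / 30 + e3) * charge_gas  # C3: engine-generated charge
    return c1 + c2 + c3
```

With the E terms zeroed, 10 units of charge cost $1.20 from home and about $1.17 from the engine; it is the externality terms that break near-ties like that.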
Skipping from the analysis to the conclusions: since heating the car from the battery is very energy intensive, and since driving above 40 mph hurts fuel economy, the “target” for short drives is to pre-heat the car (while at work) and to drive in a most sedate fashion. For medium drives the key is to use the engine to heat the car and battery at the start of the drive…
The “costs” of our decisions:
In the equations above there are “E” terms assigned to each of the costs, the “externalities” in economic terms. These “E” terms can be used to “weight” the optimization function to push (or pull) the outcome toward a given solution. In some cases the value of the weights can be calculated; other times they are assigned based on a desired outcome. For example, if I were creating an optimization equation for the “ultimate chocolate bar” I would have a heavy penalty against coconut; there is nothing inherently wrong with coconut, I just can’t stand it.
Integrating content in this post (4)
This post was written due to a few simple observations:
- Complex problems can be first understood using “base” methods: real world optimization routines rarely use Newton’s method due to efficiency issues; however, for understanding optimization the simplicity of Newton’s method can’t be beat.
- Concepts are Queen / King: the concept behind Newton’s method, that successive approximations can lead to a numerical solution, underpins multiple fields: optimization, feedback loops for controls, noise reduction…
- Reviewing “base” methods can lead to new understanding: in writing this post, and reviewing information on optimization and the basis of calculus(5), I figured out solutions to a few problems that have been plaguing me; those, of course, will appear in a future post.
- The gateway numerical method: next thing you know you will be diving into implicit and explicit solutions to P.D.E.s.
- There is still an external cost for charging at work; that energy is produced somewhere, so some amount of greenhouse gas is being produced. This sort of “local” or self optimization can be seen as a leading cause of global climate change.
- This isn’t completely true; since I breathe, I do pay a direct cost to my health due to air pollution.
- What post featuring Newton would be complete without integration
- Calculus: from Latin, literally ‘small pebble’, with enough “small pebbles” you can “rock” the mathematical world.
If you find value in these posts, consider subscribing
Sometimes, when you wander the web, you come across a story that makes you think about your job in a new way. This one, about “legal Lego builds” did just that.
Lambrecht describes “the model that forever changed LEGO,” an Audi TT that was difficult to put together, required the user to deform components for them to fit, and came with no instructions.
The article (or more accurately the linked PDF) is interesting for three reasons.
- Change came after failure: LEGO is a famous brand and, having been around for 80+ years, you would assume they have their bricks together(1). However, entering the brave new world of “build sets” they found that they needed to adopt standard building rules.
- Unit test to find system problems: Some integration issues can only be detected in the full system, however upfront consideration of interfaces and tolerances can prevent large scale issues.
- Legal but on the border: the PDF shows legal, illegal and “border” cases. Sometimes the “border” is the only solution to the problem; but when you find yourself in an “it’s the only way” case, spend some time to figure out if that is really the only way.
With Model-Based Design what are the “small rules” that I would recommend following?
- Adopt a style guide: For MATLAB and Simulink Models consider the use of the M.A.A.B. Style Guidelines.
- Speed counts: a slow small function slows down your system. Each additional slow system (or repeated instances of the same system) adds up to a slow integration.
- Self contained: Models should be able to execute on their own, e.g. not requiring external infrastructure to execute. This is the distinction between a functional and an integration model.
- Swiss army knife: when I’m out hiking, a Swiss army knife is a reasonably lightweight tool to bring for unexpected issues. Models should serve one purpose, not 100; that is why we have systems.
- No apologies for the bad play on words.
- Google image search can return things you would never expect
- Yes this is a real
If you find this material interesting, please subscribe to the blog for regular email updates
The area under the curve, infinitesimally small slices: the integral is integral to control algorithms. With that in mind I wanted to point out a few “edge cases” in their use within the Simulink environment.
Case 1: Standard execution
The standard execution would be inside of a continuously called subsystem. From calculus, remember that the integral of sine is −cosine + C; which is what we see when we plot the results. So far so good.
Case 2: Execution context
Let’s look now at what happens when your execution context is not contiguous, in this case a conditionally called subsystem. Here I will use an enabled subsystem to periodically call the integrator.
For this first example I reduced the sample rate by 50% (e.g. the enable signal toggled between 0 and 1). As a result, while the shape of the output is correct (a cosine) the amplitude is 1/2 the original value. I could correct this by adding a gain factor on the output (or in the integral).
Note: when sampling data it is critical that the sampling frequency is at least 2X the frequency of the signal. This is known as the Nyquist rate.
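The halved amplitude is easy to reproduce numerically: integrate sin(t), but only accumulate on every other step, as the enabled subsystem does. A plain forward-Euler sketch (not the Simulink solver):

```python
import math

def integrate_sin(dt, n_steps, enabled_every=1):
    """Accumulate sin(t) * dt, but only on steps where the enable
    condition holds; the output is held between enabled steps."""
    acc = 0.0
    out = []
    for k in range(n_steps):
        if k % enabled_every == 0:
            acc += math.sin(k * dt) * dt
        out.append(acc)
    return out

full = integrate_sin(0.001, 10000)                   # ~ 1 - cos(t)
half = integrate_sin(0.001, 10000, enabled_every=2)  # half amplitude
```

The shape is still 1 − cos(t); only the amplitude drops by the skipped fraction, which is why a simple gain can compensate.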
Case 3: Non-periodic
In some cases sampling is event based. Here it is critical that the events occur more frequently than the Nyquist rate. However, since they cannot be assumed to occur at a simple periodic frequency, you cannot use a gain factor to correct for the sampling bias.
The simplest solution is to increase the frequency of the samples; the higher the frequency, the more accurate the results.
Barring that, a custom integrator can be applied that interpolates off of the last N data points and the temporal gaps; however, this approach will not work if the temporal gaps are large or if the data is rapidly changing…
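The simplest such custom integrator is a trapezoidal rule over the raw event timestamps, interpolating linearly between consecutive samples. A sketch (the sample times are illustrative):

```python
def trapezoid_irregular(times, values):
    """Trapezoidal integral over non-uniform (event based) samples:
    each slice uses the actual gap between events, so no constant
    gain factor is needed to correct for the sampling."""
    total = 0.0
    for i in range(1, len(times)):
        dt = times[i] - times[i - 1]
        total += 0.5 * (values[i] + values[i - 1]) * dt
    return total

# integral of 2t on [0, 1] is exactly 1, even with uneven events
ts = [0.0, 0.1, 0.35, 0.5, 0.81, 1.0]
area = trapezoid_irregular(ts, [2 * t for t in ts])
```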
Case 4: Function called integrators
The final case to consider is when the integrator is part of a function call system. In most cases this will act like the periodic instance shown above; there is, however, a special case: when the same function is called more than once in a single time step. In this instance only the first signal passed into the function will be “integrated”. Why?
Remember that integration can be expressed as a sum of slices multiplied by the time step. For the first call to the function the delta in time is the time step. For subsequent calls the time delta is zero.
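That “sum of slices” view can be sketched directly; note how a second call inside the same time step sees a zero time delta and leaves the accumulator untouched.

```python
class Integrator:
    """Discrete integrator keyed to simulation time: each call adds
    u * (time since the last call), so repeated calls at the same
    simulation time contribute nothing."""
    def __init__(self):
        self.acc = 0.0
        self.last_t = None

    def step(self, u, t):
        dt = 0.0 if self.last_t is None else t - self.last_t
        self.acc += u * dt
        self.last_t = t
        return self.acc

integ = Integrator()
integ.step(1.0, 0.0)
first = integ.step(1.0, 0.1)   # dt = 0.1: accumulates
second = integ.step(5.0, 0.1)  # same time step: dt = 0, no effect
```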
If you need this functionality consider passing in the DT value as an external variable or performing the sum of two different function calls.
If you are enjoying these posts, please consider subscribing to them.
Handing a model off between developers, or from developer to user, is one of the most common tasks in Model-Based Design. So what steps should you follow to ensure that the hand-off is successful?
Step 0: Agree upon the requirements
A few weeks ago I made muffins, lemon poppy-seed; while my wife was happy to receive the muffins, she had requested chocolate-chip muffins: a classic requirements error.
In the much simpler world of model hand-off the following items need to be defined:
- The functional interface: inputs, outputs and rate of execution
- Behavioral characteristics: what behaviors does the model cover; what are the “corner cases” that it does not cover.
- Supporting files: most models require other models, libraries and data. For parameterized models the same “model” will act differently with different data.
- Acceptance criteria: a set of defined metrics that define what is required; these should be derived from the behavioral characteristics.
Step 1: Model validation
Assuming you have acceptance criteria the model validation is the process of validating the model against the criteria. Ideally the methods for validating the model are established at the start of your project and are run routinely as the model is developed.
Step 2: Wrap it up!
Delivery of the model is important; there is nothing more frustrating than getting your shiny new model only to find out you are missing a library or a data file. There are several methods for addressing this:
- Version control software: If the model is checked in as part of a project the end user can check out the full repository (note: this can result in file bloat)
- Use of Simulink Projects: A tool from the MathWorks that allows for the definition of model projects. It will analyze the required files for you and create a package for distribution.
If you find this content useful, please consider subscribing
If you work in the controls design space then the PID (Proportional, Integral, Derivative) control element is an old friend. For those of you who do not work in the domain here is a quick overview.
Observe and correct errors
A PID controller can be viewed as an optimization function with three terms. The system attempts to minimize the error between the observed and desired values.
- P term: The greater the relative error the stronger the command
- I term: The longer the error goes on the stronger the command
- D term: The greater the change in the error the more the command changes
All wound up
Integral windup refers to the situation in a PID feedback controller where a large change in setpoint occurs (say a positive change) and the integral term accumulates a significant error during the rise. This results in overshooting the target. Over time the error (now negative) will drive the integral back to zero; however, this results in an extended period of time in error. There are several solutions, including this one as documented by The MathWorks.
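The MathWorks example uses its own scheme; as one common alternative, here is a minimal discrete PID with conditional integration (“clamping”): the integrator is frozen while the command is saturated and the error would push it further out. All gains and limits below are illustrative.

```python
class PID:
    """Minimal discrete PID with a clamping anti-windup guard."""
    def __init__(self, kp, ki, kd, dt, u_min=-1.0, u_max=1.0):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.u_min, self.u_max = u_min, u_max
        self.i = 0.0         # integral state
        self.prev_err = 0.0  # for the derivative term

    def step(self, setpoint, measured):
        err = setpoint - measured
        d = (err - self.prev_err) / self.dt
        self.prev_err = err
        u = self.kp * err + self.i + self.kd * d
        saturated = u > self.u_max or u < self.u_min
        if not (saturated and err * u > 0):  # freeze while winding up
            self.i += self.ki * err * self.dt
        return min(self.u_max, max(self.u_min, u))
```

Driving a simple plant with this controller, the integrator stops growing the moment the actuator saturates, so the post-rise overshoot is far smaller than with an unguarded integral.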
The derivative term in the PID controller “dampens” the rate of change of the error term. However, due to difficulties in tuning systems with derivative terms, it is often left out of the control algorithm.
Overview of all terms
- Rise time: How long it takes the system to reach the target value
- Overshoot: How much the model goes “past” the target value”
- Settling time: How long it takes the system to zero out the error
- Steady state error: What is the final error (can it reach zero?)
- Stability: Effect of noise on the system
As you increase the value of each term…

| Term | Rise time | Overshoot | Settling time | Steady state error | Stability |
|---|---|---|---|---|---|
| P | Decrease | Increase | Small change | Decrease | Degrade |
| I | Decrease | Increase | Increase | Eliminate | Degrade |
| D | Minor change | Decrease | Decrease | No effect | Improve if D is small |
If you find this content useful, please consider subscribing
Recently I was talking with a co-worker about an image detection model and they mentioned the now “classic” question of how you deal with weather obfuscation of data.
We quickly moved from static obstruction (the snow-covered stop sign) to the dynamic obstruction of rain and snow.
Starting from first principles you can quickly outline the factors that contribute to the model:
- Rain / Snow density (drops / flakes per m^3): This is a measure of storm “intensity”
- Size: How large are the snow flakes / rain drops
- Wind speed: this affects how the flakes / drops move; it is complicated by wind gusts.
- Camera velocity: Is your camera moving?
- Depth: How far away is the object that you are viewing
In some cases it may be necessary to fully model the behavior. In others a simplified characteristic model can be used. Let us look at what we determined was important.
What really matters…
Given that objects, in general, don’t change their shape(1), it is possible to filter out the noise of rain or snow. What we need to understand is how rain or snow obscures objects.
- Interposition: Every drop of rain and every flake of snow acts as a “point barrier” between the camera and the object.
Take a simple example: for an object 10 meters away, the extent of “obscuring” is a function of the rain density and the droplet size; further, the distance between the camera and the object (depth) determines the extent of the obstruction. What then should we do with the wind? How should we model the movement?
It turns out, not much. When I compared the effect of random movement of the patterns against the wind models, the difference in obstruction was negligible for the vision application.
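A toy version of the interposition idea, then, drops the wind entirely: treat each drop as an opaque disc along the line of sight. The Beer-Lambert form and every number here are illustrative assumptions, not the model from the discussion.

```python
import math

def occluded_fraction(density, droplet_radius, depth):
    """Expected fraction of a sight line blocked by precipitation.

    density: drops per m^3; droplet_radius: m; depth: camera-to-
    object distance in m. Each drop is an opaque disc; overlap is
    handled with the Beer-Lambert form 1 - exp(-n * A * d).
    """
    area = math.pi * droplet_radius ** 2
    return 1.0 - math.exp(-density * area * depth)

light_rain = occluded_fraction(1000, 0.001, 10)  # sparse, small drops
heavy_snow = occluded_fraction(5000, 0.002, 10)  # dense, large flakes
```

Density, size and depth each enter the exponent directly, matching the intuition above: a deeper scene in heavier precipitation is exponentially harder to see through.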
The moral of the story…
- Modeling systems is a question of what is required for your analysis. Knowing this, I can create the minimal model required to perform my tests.
- Developing simplified models starts with an understanding of the “real” system.
Footnote: 1: The profile of the object may change as the relative angle is changed.
If you find this content useful, please consider subscribing