A sample project: Model Unit Validation

With this post I'm going to walk through a sample project demonstrating how units in Simulink models can be validated. I will walk through each stage of the project's development, providing insight into how and what I was thinking at each step along the way.

Stages of project development

There are five stages to project development:

  1. Identification of need: What is the problem you are trying to solve?
  2. Review of existing solutions: Is there an existing solution that meets your needs?
  3. Task identification/estimation: What are the tasks / steps that need to be completed and what is the effort (risk) to complete them?
  4. Implementation and testing: Following a ‘test as you go’ strategy develop the modules that you need
  5. Validation of solution: Does what you did in 4 solve what you needed in 1?

Stage 1: Identification of need

As of release R2019a, Simulink provides:

  • Unit specification at the boundaries and through Simulink.Signal and Simulink.Parameter objects
  • Unit checking at the boundaries of models (inports and outports)

While this is very useful (and you should always define your boundary conditions), for large models knowing the units of the interior signals would be beneficial.

Problem to be solved: determine the units of all blocks internal to the model and use that information to validate the units both at the boundaries and internal to the model.

Unit checking at the Subsystem Boundary

What I wanted to be able to do was specify the unit types on the inports and outports and propagate the units through the model.

At this stage, depending on the complexity of the project, the high-level requirements should be written; derived requirements will be written at the start of stage 3.

Stage 2: Review of existing solutions

There are two classes of "existing solutions". The first are solutions to the "base" problem you are trying to solve; the second are solutions to the sub-tasks within the problem. In this instance we have already identified the extent of the solution to the "base" problem, the ability to check units at the boundaries of the model; for what we want this is insufficient.

Examples of “Sub-Tasks”

For the sub-tasks, the Simulink APIs provide the interface required to "decode" the model and propagate the information through it.

Stage 3: Task identification/estimation

Depending on the size of the project, task identification can be decomposed to a "function" based level or lower. For large projects the "function" may in fact be a collection of functions. As you start to identify the functions required, reuse of the components should be taken into account. My first pass (almost always on paper) is in an "operation required / issues" format.

  1. Propagate unit information in the model:
    1. How to determine data/unit flow
    2. How to handle data merging blocks (buses and mux blocks)
    3. How to handle subsystems
  2. Calculate output units based on inputs:
    1. Define categories of block transformations
      • Matching
      • Canceling
      • Integration / derivative
      • Null
    2. Handle effects of Parameter data types
  3. Apply information to the blocks
    1. Units (note: most blocks don't have a "unit" field)
    2. Status of block (assigned, unassigned, invalid)
  4. Check calculated units against assigned
    1. Outports
    2. Simulink Signal objects

Having identified the tasks, I now think about what goes into each step and whether I have experience, or near experience, in coding it; in this case all of the tasks involved are close to things I have done before, so I have faith in my estimates. (For the record, I estimated 8 hours and it ended up taking me 7.)

Stage 4: Implementation and Testing

Stage 4.1: Unit propagation

My first question was how to propagate the units through the model. I decided that a first, reasonable requirement was that all of the inports must have units defined. If I enforced that, I could simply "walk forward" from the inports to the outports of the system.

Once that approach was selected, the implementation becomes clear: create an array of inports and trace outward from them. As each new block is found, tack it onto the end of the array, removing each block from the list once it has been processed.

(Note: Since this is a demo project I am not, currently, doing anything with Mux or Bus blocks).
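A minimal MATLAB sketch of that "walk forward" loop, using the standard Simulink block APIs, might look like the following. The function name and the unit bookkeeping placeholder are mine, not the project code, and it ignores feedback loops, matching the demo's scope:

function propagateUnits(mdl)
    % Seed the walk with the root-level inports (all of which must have units defined)
    toVisit = find_system(mdl, 'SearchDepth', 1, 'BlockType', 'Inport');
    while ~isempty(toVisit)
        blk = toVisit{1};
        toVisit(1) = [];                              % remove the block being processed
        pc = get_param(blk, 'PortConnectivity');      % connection information for the block
        for idx = 1:numel(pc)
            for dst = reshape(pc(idx).DstBlock, 1, [])
                if dst > 0
                    % <calculate and record the units for the destination block here>
                    toVisit{end+1} = getfullname(dst);   %#ok<AGROW> tack it onto the end of the array
                end
            end
        end
    end
end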

Stage 4.2: Calculating units

The key to calculating the units is to realize that there are really only three fundamental operations.

  • Cancellation: through either multiplication or division units are canceled.
  • Matching: the operation (addition or subtraction) requires the input units to match
  • Integration / derivation: a “cancellation” of time.

As you look at the code you will see that I have created "num" and "den" variables; these hold the numerators and denominators of the inputs. Additionally, for the sake of a demo, I have restricted the number of block inputs to 2.
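As a rough sketch (the helper names splitUnit, cancelCommon, and joinUnit are illustrative, not the names used in the project), the cancellation case could be handled along these lines:

function outUnit = calcProductUnit(unitA, unitB)
    % Units of a product: multiply the unit strings, then cancel common terms
    [numA, denA] = splitUnit(unitA);
    [numB, denB] = splitUnit(unitB);
    [num, den] = cancelCommon([numA, numB], [denA, denB]);
    outUnit = joinUnit(num, den);
end

function [num, den] = splitUnit(u)
    % 'm/s' -> num = {'m'}, den = {'s'}; compound units (e.g. N) are not expanded
    parts = strsplit(u, '/');
    num = strsplit(parts{1}, '*');
    if numel(parts) > 1, den = strsplit(parts{2}, '*'); else, den = {}; end
end

function [num, den] = cancelCommon(num, den)
    % Remove any unit that appears in both the numerator and denominator lists
    for k = numel(num):-1:1
        hit = find(strcmp(den, num{k}), 1);
        if ~isempty(hit), num(k) = []; den(hit) = []; end
    end
end

function u = joinUnit(num, den)
    if isempty(num), num = {'1'}; end
    u = strjoin(num, '*');
    if ~isempty(den), u = [u '/' strjoin(den, '*')]; end
end

For example, calcProductUnit('m/s', 's') returns 'm', while the "matching" case is just a string comparison of the two inputs.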

Notes:

  • In hindsight I didn't actually need to find the numerator and denominator, since much of what happens is string clean-up. However, conceptually, it was a required task.
  • In this example I am not handling feedback loops through unit delays, however they could be treated as a special case of a “sum” or matching required block.

As I developed the code I also wrote some basic test points to validate that the behavior was correct. The most basic tests were to determine if the units, stored as strings by Simulink, could be converted to the num/den strings.

In this case you can see that I tried to cover all of the ways in which data could be encoded. One thing to note: I did not implement "compound units". E.g. if you put in N (for newtons) I do not cancel that in the same way you would kg*m/s^2. To do that I would, I think, first expand all the compound units into their fundamental components and then cancel.
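A couple of the most basic test points, written against the splitUnit helper sketched above (the expected values here are hypothetical, not the original test set), could be as simple as:

cases = { 'm/s',    {'m'},      {'s'};
          'm*kg/s', {'m','kg'}, {'s'};
          'm',      {'m'},      {} };
for k = 1:size(cases, 1)
    [num, den] = splitUnit(cases{k, 1});
    assert(isequal(num, cases{k, 2}) && isequal(den, cases{k, 3}), ...
           'Unit string "%s" did not split as expected', cases{k, 1});
end
disp('All unit-string test points passed');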

Cancellation at the root

The final step in this process will be to validate the units at the outports. To indicate passing / failing outports I color code them “green” for pass and “red” for failed.
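The color coding itself is a one-liner on the block's BackgroundColor parameter; in this sketch, calculatedUnit, assignedUnit, and outportBlock are placeholder variable names:

if strcmp(calculatedUnit, assignedUnit)
    set_param(outportBlock, 'BackgroundColor', 'green');   % units agree: pass
else
    set_param(outportBlock, 'BackgroundColor', 'red');     % units conflict: fail
end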

Stage 5: Validation of the solution

The code, as implemented, provides a "taste" of the final solution. However, the way in which it was coded is modular, so as I want to add new capabilities (support for additional blocks, muxes, buses, subsystems), the extensions will be easy to make. If you are interested, I have included a copy of the code and sample model here.

In praise, in damnation of the simple example

Faust & Mephistopheles

When I was first learning the German language in high school, the first book(1) that we read was "Emil und die Detektive", a "young adult" book.

When I first studied German, I read the book "Emil and the Detectives."

The final book that I read was Goethe's Faust. Needless to say, the latter book was of more interest. I remember the ideas of Faust; I remember little of the Detectives. So why did I start with YA lit?(2) And why am I writing about it in a blog about Model-Based Design?

Learning fundamentals

When learning a language it is obvious that you should start with simple examples where you learn the fundamentals of the structure and expand your vocabulary. However, if you never move on to more complex subjects, your understanding of the language (and, in the case of literature, life itself) will be forever arrested at a low level.

The gain block: Hello world of Simulink

The cliché first program is often a simple printout of "hello world". In Simulink the first model is "input, gain, output". It can be used to show (see the sketch after this list):

  1. How blocks are added to the model
  2. How blocks are connected
  3. How to parameterize the model
  4. How to simulate a model….
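As a sketch, the whole "hello world" model can even be built and run from the MATLAB command line; the model name hello_gain and the gain value are arbitrary choices:

mdl = 'hello_gain';
new_system(mdl); open_system(mdl);
% Add the three blocks (step 1) ...
add_block('simulink/Sources/In1',          [mdl '/Input'],  'Position', [100 100 130 115]);
add_block('simulink/Math Operations/Gain', [mdl '/K'],      'Gain', '2', 'Position', [200 95 230 120]);
add_block('simulink/Sinks/Out1',           [mdl '/Output'], 'Position', [300 100 330 115]);
% ... connect them (step 2); the gain value above is the parameterization (step 3)
add_line(mdl, 'Input/1', 'K/1');
add_line(mdl, 'K/1', 'Output/1');
% ... and simulate (step 4)
out = sim(mdl);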

After learning: simple models


Finally on to the subject of this post; why do we create simple models once we have learned the “language”?

  1. To validate understanding: for blocks / constructs that are more complex, a simple model that exercises the range of behaviors helps us understand it under all modes of operation.
  2. Debugging: you are trying to isolate parts of the model to determine where an error in your system occurs.
  3. Instructional: You are using the model to introduce people to basic concepts

Why models are not like books…

When I read a YA book in German I knew, through daily life, that the language of the book was targeted at younger readers. When I see an example model in a new language I don’t know if the pattern I’m seeing is the “adult” pattern or the YA pattern.

The most common “faults” of example models are(3)

  • Size: large models serve no purpose in showing basic concepts.
    • Mitigation: create example "large model" cases. Walk through the best practices for large models (and systems of models).
  • Efficiency: to illustrate one concept we will often make other parts of the model simpler.  
    • Mitigation: explicitly state the area of the model that is expanded for instruction
  • Problem specific: sometimes examples are intended to demonstrate how to solve a specific problem, but that solution may be incorrect in other situations.
    • Mitigation: explicitly state what the scope of the solution is
  • Lack of domain knowledge: this last issue is the hardest to deal with; on occasion example models are created by people who are new to the language and, as a result, they come up with solutions that are inadvisable.
    • Mitigation: peer review and look to industry sources for examples.

Footnotes

  1. The second book was "The Little Prince", which in hindsight was an odd choice since it was originally written in French. Still, I could never forget the words for sunrise and sunset (Sonnenaufgang und Sonnenuntergang; this makes singing songs from Fiddler on the Roof in German difficult).
  2. In truth, written in 1928, it provides a much more realistic view of life than modern Young Adult novels. Danke, dass Sie Goss vermissen.
  3. In all honesty I have been guilty of all of these things in my examples over the years; however I have tried to correct this through the mitigation recommendations I have listed.

Common/uncommon faults

“Z key holder”

Last week the "Z" key popped off of my laptop. As my standard language of communication is English this has not been a big deal; however, on Friday when I went to write an email in German, things changed. More correctly, the task (typing) was the same but the mode (language) changed. Never one to let a random thing pass, I thought "wow, this is a great topic for a blog post".(1)

Testing in the “comfort zone”

Your comfort zone

It is a well-known problem with testing that we tend to test the things that we know best; as a result we end up with incomplete test coverage. There are two problems: first, the obvious one, we are not covering everything; second, we are wasting time by creating redundant tests in the "well known" areas.(2)

How to exit your comfort zone?

Let’s look at three types of models, mode based, continuous (e.g. equation), logical and hybrid(3).

  • Mode based: exiting the "CZ" with mode-based models is simple; validate that you are testing every mode in the model.
    • It should be noted that with mode-based models there is often a temporal component that must be validated.
    • Additionally, the "mode to mode" paths should be validated, as initial conditions can often be different depending on which mode you have transitioned from.
  • Continuous (equation): these models can be fully tested by exercising the full range of all inputs. For example, let's say you have two inputs U1 and U2 with ranges [0 to 10] and [-5 to 5]. The test vectors would cover the range from [0, -5] to [10, 5] (a gridding sketch follows this list). There are a few considerations.
    • Test spacing: depending on how sensitive the output is to changes in the input, the "steps" in the coverage need to be adjusted. E.g. should you "step" inputs of U1 in 0.1 or 0.5 increments?
    • Exclusivity: in some instances inputs are related; e.g. the temperature of your car's engine is never less than the outside temperature.(4) This can reduce the test range.
    • Temporal: another factor is “how long” are you at each data point?
  • Logical: these are similar to mode-based models; however, they lack the state information that mode-based testing implies. Like mode-based testing, this is validated by exercising each logical path in the model. Tools like Simulink Design Verifier can be used to generate these test vectors.
  • Hybrid: e.g. 95% of all models. This is where design of experiments comes into play. For large systems it may not be practical to test "every point". However, that is not the goal of testing; the objective is to exercise every operational behavior.
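For the continuous example above, gridding the input space is a few lines of MATLAB; the 0.5 step and the exclusivity rule below are illustrative choices, not requirements from a real system:

step = 0.5;                                  % test spacing: adjust to the output's sensitivity
[U1, U2] = ndgrid(0:step:10, -5:step:5);     % full range of both inputs
testVectors = [U1(:), U2(:)];                % one row per test point
% Exclusivity: drop combinations that cannot occur (hypothetical rule: U2 never exceeds U1)
testVectors(testVectors(:, 2) > testVectors(:, 1), :) = [];
fprintf('%d test points at a %.1f step size\n', size(testVectors, 1), step);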

The upshot

Testing = Hacking(6)

Good tests take time, both to develop and to execute. Assuming a library of basic test functionality and a well-written requirements document for the component, you can estimate the number of test "points" as a function of the number of modes, inputs, and logical branches.
TP = 1.25 * Modes + (min(0,numInputs/2) * numOutputs)^1.5 + numLogical/2; (5)

This formula is empirical, derived from a review of test sets for well-ordered models. The assumptions built into the formula are:

  1. The “Mode-to-mode” connections are limited, e.g. not every mode can transition to every other mode
  2. There is a fair degree of mutual exclusivity in the input vectors.
  3. The number of tests is more sensitive to the number of outputs than number of inputs.
  4. Logical tests can often have redundant paths due to the lack of state information.

The final part of this equation is "time of construction", which refers to how long it takes to create each test point. Both mode-based and logic-based test vectors can be automatically generated, often achieving 100% coverage (or showing that 100% coverage is not possible and that there is an error in your logic). As a result I generally estimate the time to develop these tests as

t = (NumModes + NumLogic) * 1 minute;

The time assumes that some level of errors in modeling will be discovered and corrected. For the equation (continuous) testing, the time depends on the coverage of the requirements; e.g. the more of the input space the requirements cover, the lower the total testing time.

t = Num_Req * 45 minutes + (10 minutes * %notCovered * 100)

Again, this is an empirical formula based on well-ordered models and an existing testing infrastructure.
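Wrapped up as a MATLAB helper, the estimates above look like this; the coefficients are the empirical values from this post, not a published standard, and pctNotCovered is assumed to be a fraction between 0 and 1:

function [nTestPoints, tLogicMinutes, tContMinutes] = estimateTestEffort(nModes, nInputs, nOutputs, nLogical, nReq, pctNotCovered)
    % Test-point count and construction-time estimates from the formulas above
    nTestPoints   = 1.25*nModes + (min(0, nInputs/2) * nOutputs)^1.5 + nLogical/2;
    tLogicMinutes = (nModes + nLogical) * 1;               % auto-generated mode/logic vectors
    tContMinutes  = nReq * 45 + 10 * pctNotCovered * 100;  % continuous (equation) tests
end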

Footnotes

  1. This blog post will be forwarded to my manager to explain the $23 repair bill I sent in.
  2. I was once proudly told by an engineer that they had over 100 tests for their system; the problem was that those 100 tests were all dedicated to 6 of the 27 functions in the system. We corrected that issue.
  3. That sentence should have been tested, as there are 4 “types” in there, not 3. This is what happens when you “design” a sentence with conceived notions.
  4. Ok, that statement isn't absolutely true: if you had your car in a "cold box" and then drove it out into a warm day, for a short period of time the engine would be colder than the outside air. At the same time, if you are storing your car in a "cold box" you…
  5. Hmmm, ending with a ";" shows just how long I have been programming in MATLAB.
  6. Again, thank you https://smbc-comics.com/

Please consider subscribing for regular updates

The Matrix(1)

In 1999 the movie The Matrix introduced(2) millions of people to the philosophical question "how do we know if we live in the real world or a simulation?" As an engineer working for a company that makes the "Matrix Laboratory", I have thought about this idea and its logical extension: can we trick a machine into thinking it is in the real world?

Is this the real life? (3)

No comment on the movie

Simulations of reality have existed for a long time, from wind tunnels, to lumped-mass models, to complex finite element models; they have been the backbone of engineering design. As powerful as these models are, they have been limited to simulations of physical properties, or perhaps a system of physical properties. The next generation of simulations attempts to capture the complex nature of the real world, that is to say people and their semi-predictable behavior.

Humans as lumpy-mass models

Ok, first I’m going to need to write a lot of text here to make it all the way down this comic strip…

The world is filled with humans, roughly 7.6 billion as of the writing of this post. We are out there driving cars, walking in the street, making phone calls, cooking dinner, talking about philosophy and sitcoms. We do a lot of different things. When moving in a mass we are largely predictable; that is to say, if you asked me to calculate how long it would take for 100,000 people to exit the University of Michigan's football stadium, I could give you a reasonably accurate answer for the total. If, however, you asked me how long it would take for a given person, then it becomes more difficult. It is the aggregate behavior that is predictable. The issue is that for controllers that interact with the real world, the aggregate is not enough.

So how do you go about simulating a pod of people? There are several basic methods.

  • Conway's Game of Life: "humans" can be simulated by giving them a set of basic "rules". Those rules (with weighted objective functions) determine how they operate. Note: your rules here can't be too perfect; real humans make mistakes. (This is often done using a cellular automaton approach; see the sketch after this list.)
  • Genetic algorithms: The humans can be derived using genetic algorithms(5). In this case a set of baseline behaviors are defined as well as “mutations” or permutations on those behaviors.
  • Fluid dynamic analogies: Fluid dynamic models do a good job with modeling flow “restrictions” around doors and changing widths of the system.
  • Real world data (human in the loop): the most difficult to set up, but done well, mining the real world for data on how people act, and react, provides the most accurate models. The previous three suggestions can be considered reduced-form versions of the "RWD-HIL".
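A toy version of the first approach, a rule-based crowd stepping toward an exit on a grid, fits in a few lines of MATLAB; the grid size, occupancy, and movement rule are all arbitrary illustration values:

rng(0);
occ = rand(30, 30) < 0.2;             % true = a person occupies that cell
exitCol = 30;                         % everyone heads for the right-hand edge
for t = 1:200
    next = false(size(occ));
    [r, c] = find(occ);
    for k = randperm(numel(r))        % random order ~ imperfect, human-like behavior
        tgt = c(k) + 1;
        if tgt >= exitCol
            continue;                 % reached the exit: leave the grid
        elseif ~next(r(k), tgt)
            next(r(k), tgt) = true;   % step toward the exit
        else
            next(r(k), c(k)) = true;  % blocked by a neighbor: wait a step
        end
    end
    occ = next;
    if ~any(occ(:)), break; end
end
fprintf('Toy evacuation finished after %d steps\n', t);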

Why do we want a matrix for our machine?

It is an oft-repeated question for self-driving cars: what happens if a child darts out in front of it? Because it is repeated so often it is now tested for heavily. But what about all the other things kids (or adults) do that are foolish? Ever drop something in the street and stop to pick it up? Ever order 100 tee-shirts instead of 1? Pull up on the throttle when you should have pushed down? In the end people are semi-random creatures. Creating realistic models of people allows us to create better control algorithms.

Footnotes

  1. The image of the “red pill” / “blue pill” should not be taken as an endorsement of A/B testing as the only validation methodology
  2. Note, it introduced people to the question, it does not mean that many people put much thought into it beyond “dude, how can you know?”
  3. Note, if we do it correctly we never have to sing “mamma, my controller just killed a man”
  4. A pod of people is of course a reference to “pod-people“; e.g. close enough to fool some people
  5. Real people are created from genetic algorithms, so this should work, right?

If you find benefit from this blog, consider subscribing

Stateflow scheduling: Examples

In my last post, Execution order and Simulink models, I promised a look at scheduling best practices using Stateflow; in this post I hope to deliver.

Simple periodic scheduling

In our first example we will look at a simple periodic scheduler. Let's assume we have a system with three rates: 0.01, 0.1, and 0.2 seconds. This can easily be implemented in Stateflow with the following chart.

If we look at the “activation” for each of the task sets we would see the following.

Yellow = 0.01, Blue = 0.1, Red = 0.2

In this case you can see that each of the tasks is triggered at its given rate: at 0.1 seconds both the 0.01 and 0.1 tasks activate; at 0.2 all three are active. In many cases this is fine; the order in which these tasks execute is set by the order in the chart (e.g. 0.01, 0.1, then 0.2). However, you may want to "space out" the activations. In that case a Stateflow chart like this would be the solution.

In this case three parallel states are created. The 0.1 and 0.2 rates have "offsets" so that they execute out of sync with the other tasks, as shown in the resulting execution graph.

Offsets on the charts.
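To make the offsets concrete, here is the same three-rate scheduler written as plain MATLAB counters, which is effectively what the chart's temporal logic works out to; the offsets of 3 and 7 ticks are illustrative, not the values from the chart above:

baseRate = 0.01;                               % chart sample time in seconds
for tick = 0:99                                % one second of execution
    fprintf('t=%.2f  0.01 s task\n', tick*baseRate);
    if mod(tick, 10) == 3                      % 0.1 s task, offset by 3 ticks
        fprintf('t=%.2f  0.1 s task\n', tick*baseRate);
    end
    if mod(tick, 20) == 7                      % 0.2 s task, offset by 7 ticks
        fprintf('t=%.2f  0.2 s task\n', tick*baseRate);
    end
end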

Mode based scheduling

Beyond rate-based scheduling, mode-based scheduling is the next iteration of the scheduling examples.

Stateflow after dark…

In this example the scheduler is decomposed into three parallel states: the "Main" or rate-based state, a mode-based state, and an event-based state. The main state is similar to the two previous examples, so let's look at Mode and Event.

Feeling the mode

There are three things of note here. First, within this state we start the system off in the "Initialization" state; this is a safe selection, as most systems start off in "Init". Next, movement between the states is controlled by the input variable "Mode"; use of the 'hasChanged' method gates the transitions between the different modes, allowing the user to switch from any of the modes without complex routing logic. Finally, the mode "Emergency" is for the non-critical scheduling responses to emergencies; any actions that fall into the true emergency category should be event driven so their execution starts immediately.

The main event

Our final example here is event-driven scheduling; within this chart we have the "React" states and a "Null" state. The null state is present to provide a "no-operation" mode when the events are not active. Two things of note: first, in this example the events are mutually exclusive; this does not need to be the case. Second, the current example exits the "React" states after one execution; the exits could be guarded to continue execution until the event is resolved.

If you are enjoying what you are reading consider subscribing to the email version of this blog.

Execution order and Simulink models

When you program in a textual language, execution order is directly specified by the order in which you write the lines of code.

  1. My Function
    1. Do A
    2. Do B
    3. Do C

With multiple functions, a threaded OS, or event-driven interrupts this is more complex, but at its heart it is directly specified in the language. In contrast, Simulink determines execution order based on data flow.

Which executes first, Chicken or Egg?

The principle of data-flow-based execution is that calculations are performed once their data is present. We will start with the simplest example possible: one path, direct feed-through.

In this example we have a data flow from left to right (Inport #1 to Outport #1). The calculation is
Output = Input * Chicken * Egg

In this example we have introduced a Unit Delay block. This changes the execution order:
Output = LP * Egg
LP = Input * Chicken

In this case the “Egg” calculation takes place first since we have existing information from the unit delay allowing calculation of the output value.

In our next example the execution order is arbitrary; since there are no data dependencies between "Chicken" and "Egg", output 1 or output 2 could be calculated first. (Note: Simulink provides a "sorted execution order"; these are the red numbers you see in the image. The lower numbers are executed first.)

In this case the “Egg” came first

Controlling execution order (avoiding egg on your face)

In the last example we showed that with independent data flows the execution order is resolved by Simulink; however, there are instances where the execution order will matter.(1) The "obvious", and very wrong, solution is to add a data dependency to the path.

Noooo never ever do this…(2)

By adding a "zero value gain" and a sum block to the "Egg path" I have forced the "Chicken path" to be executed first.(4) For reasons that should be obvious this is a bad idea; anyone who looked at the model would think "why are they doing that?", and they would be correct in asking that question. The recommended approach is to make the execution order explicit using a function-call driven approach.

In this case the "Egg" is executing second; this is known from the "B" number in the block: "ChickenPath" is B0, while "EggPath" is B1, and the lower number executes first. For more complex execution orders a Stateflow chart can be used to define the execution order.

Yes, I went with "eggs came first because chickens evolved" as the solution to this issue…

In this “tongue in beak” example we see that “eggs” only execute during the “WeHaveEggs” and the “Dinosaur” states. Once we hit the “WeHaveBoth” state (after many eons) the Chicken executes first. In my next post I will give examples of best practices for controlling execution order with Stateflow charts.

Footnotes:

  1. Within the subsystem the execution order does not matter. However there are several cases where it can matter.
    1. Time-limited execution: if the given function has a limited execution time allocated, it is possible that all of the calculations cannot be performed in that time period. In that case you would want to control the execution order.
    2. Consumption by parallel process: If the data from one (or more) of the paths is used by a parallel process and that process needs data “first” then you want to control the execution process.
    3. Determinism: For some analysis locking down execution order will simplify the execution task.
  2. I debated even showing this image, however I have seen many “cleaver”(3) engineers come up with this solution.
  3. Yeah, I realized "clever" was autocorrected to "cleaver", as in something that cuts or chops. It was an error at first. Then I realized that I liked it more; they are not really "clever", rather they are just chopping apart a problem.
  4. This could be considered a form of “playing chicken”
I love Google image search…

Please consider subscribing for regular email updates

Newton’s Method

Newton's method is one of the foundational concepts in numerical mathematics; it is a method for finding the roots of a real-valued function (and, applied to the function's derivative, its minima) through a series of successive approximations. While the approach has limitations (it can be "trapped" in a local minimum, it can be slow, …) it is the gateway algorithm.(1)

Newton in Action
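For reference, here is a bare-bones Newton's method in MATLAB minimizing a one-dimensional function; the function is an arbitrary convex example chosen for illustration:

f   = @(x) exp(x) - 2*x;          % function to minimize (true minimum at x = log(2))
df  = @(x) exp(x) - 2;            % first derivative
d2f = @(x) exp(x);                % second derivative
x = 0;                            % starting guess
for k = 1:50
    step = df(x) / d2f(x);        % the Newton step: one more successive approximation
    x = x - step;
    if abs(step) < 1e-10, break; end
end
fprintf('Minimum near x = %.6f (log(2) = %.6f) after %d iterations\n', x, log(2), k);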

Newton and Model-Based Design

Aside from the equations of motion, integration, and the theory of gravity (to start a long list), the children of Newton's method are found in optimization problems. Optimization problems seek to minimize (or maximize) a value for a set of equations (or input data) based on the system inputs. Let's consider an example: my Chevy Volt and optimizing energy use during my daily commute to work.

Apparently you too can have a "Newton's apple tree"
  • Distance to work
  • Range of battery
    • Effect of starting battery temperature
    • Driving conditions (highway / surface streets)
    • Cabin climate control (heating / cooling)
  • Cost of
    • Charging the vehicle (at home / at work)
    • Gas (in the Volt the gas engine charges the battery)
  • Externalities
    • Carbon emissions from the different energy sources.

When I frame this equation I set the commute as a "from-work-to-home, from-home-to-work" route. This matters (thank you, MathWorks) because there are free charging stations at my office; therefore any charging there is "free".(2)

  • cost = C1 + C2 + C3
    • C1 = Cost_per_Kilowatt_Work * Charge + E1
      • C1 is at work, so Cost_per_Kilowatt_Work = 0 and C1 = E1 * Charge
    • C2 = Cost_per_Kilowatt_Home * Charge + E2
      • C2 = (0.12 + E2) * Charge
    • C3 = Cost_Per_Gallon * Gallons_Per_Kilowatt * Charge + E3
      • C3 = (3.50 / 30 + E3) * Charge (assuming roughly 30 kilowatt-hours per gallon of gas)

In this case the E1, E2 and E3 are the external costs for use of power. I don’t directly pay these costs(3) but from an ethical standpoint let us remember them.
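Putting numbers to the cost terms in MATLAB (the externality values E1–E3 and the charge amounts below are placeholders I made up for illustration; only the 0.12 $/kWh home rate and the $3.50 gas price come from the equations above):

E1 = 0.02; E2 = 0.02; E3 = 0.05;               % $/kWh externality costs (illustrative)
chargeWork = 6; chargeHome = 4; chargeGas = 2; % kWh drawn from each source (illustrative)
C1 = (0    + E1) * chargeWork;                 % work charging is "free" to me
C2 = (0.12 + E2) * chargeHome;                 % home electricity rate
C3 = (3.50/30 + E3) * chargeGas;               % engine charging, assuming ~30 kWh per gallon
cost = C1 + C2 + C3;
fprintf('Commute energy cost: $%.2f\n', cost);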

Wrong type of charge

The total charge is a function of distance, driving conditions and environmental conditions (do I run the heat or AC).

  • Charge = F(Distance,DriveCond) + F(Cabin,Environment)

Skipping the analysis here and jumping to the conclusions: since heating the car from the battery is very energy intensive, and since driving above 40 mph takes a hit on fuel economy, the "target" for short drives is to pre-heat the car (while at work) and to drive in a most sedate fashion. For medium drives the key is to use the engine to heat the car and battery up at the start of the drive…

The “costs” of our decisions:

In the equations above there are "E" terms assigned to each of the costs, the "externalities" in economic terms. These "E" terms can be used to "weight" the optimization function to push (or pull) the outcome towards a given solution. In some cases the value of the weights can be calculated; other times they are assigned based on a desired outcome. For example, if I were creating an optimization equation for the "ultimate chocolate bar" I would have a heavy penalty against coconut; there is nothing inherently wrong with coconut, I just can't stand it.

Integrating content in this post (4)

This post was written due to three simple observations

  • Complex problems can be first understood using "base" methods: real-world optimization routines rarely use Newton's method due to efficiency issues; however, for understanding optimization the simplicity of Newton's method can't be beat.
  • Concepts are Queen / King: the concept behind Newton's method, that successive approximations can lead to a numerical solution, underpins multiple fields: optimization, feedback loops for controls, noise reduction…
  • Reviewing "base" methods can lead to new understanding: in writing this post, and reviewing information on optimization and the basis of calculus,(5) I have figured out solutions to a few problems that are plaguing me now; those, of course, will be in a future post.
I wish it was still called the “science of fluxions”

Footnotes

  1. The gateway numerical method: next thing you know you will be diving into implicit and explicit solutions to P.D.E.s
  2. There is still an external cost for charging at work; that energy is produced somewhere, so some amount of greenhouse gas is being produced. This sort of "local" or self optimization can be seen as a leading cause of global climate change.
  3. This isn't completely true; since I breathe, I do pay a direct cost to my health due to air pollution.
  4. What post featuring Newton would be complete without integration
  5. Calculus: from Latin, literally ‘small pebble’, with enough “small pebbles” you can “rock” the mathematical world.

If you find value in these posts, consider subscribing

Is your system architecture “Lego Legal”?

Sometimes, when you wander the web, you come across a story that makes you think about your job in a new way. This one, about “legal Lego builds” did just that.

Lambrecht describes “the model that forever changed LEGO,” an Audi TT that was difficult to put together, required the user to deform components for them to fit, and came with no instructions.

The article (or more accurately the linked PDF) is interesting for three reasons.

  1. Change came after failure: Lego is a famous brand and, having been around for 80+ years, you would assume they have their bricks together.(1) However, entering the brave new world of "build sets", they found that they needed to adopt standard building rules.
  2. Unit test to find system problems: some integration issues can only be detected in the full system; however, upfront consideration of interfaces and tolerances can prevent large-scale issues.
  3. Legal but on the border: the PDF shows legal, illegal, and "border" cases. Sometimes the "border" case is the only solution to the problem; but when you find yourself in an "it's the only way" case, spend some time to figure out if that is really the only way.
A large scale Lego build (2)

Small rules

With Model-Based Design what are the “small rules” that I would recommend following?

  1. Adopt a style guide: For MATLAB and Simulink Models consider the use of the M.A.A.B. Style Guidelines.
  2. Speed counts: a slow small function slows down your system. Each additional slow system (or repeated instances of the same one) adds up to a slow integration.
  3. Self contained: Models should be able to execute on their own, e.g. not requiring external infrastructure to execute. This is the distinction between a functional and an integration model.
  4. Swiss army knife: when I'm out hiking, a Swiss army knife is a reasonable, lightweight tool for handling unexpected issues. Models should serve a purpose, not 100 of them; that is why we have systems.
The 16999 Swiss Army Knife (3)

Footnotes

  1. No apologies for the bad play on words.
  2. Google image search can return things you would never expect
  3. Yes, this is a real product.

If you find this material interesting, please subscribe to the blog for regular email updates

Everything is our sum

The area under the curve, infinitesimally small slices: the integral is integral to control algorithms. With that in mind I wanted to point out a few "edge cases" in its use within the Simulink environment.

Case 1: Standard execution

The standard execution would be inside of a continuously called subsystem. From calculus, remember that the integral of sine is negative cosine plus a constant; with the integrator's zero initial condition the output is 1 - cos(t), which is what we see when we plot the results. So far so good.
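You can reproduce this case outside of Simulink with a quick numerical check in MATLAB:

t = 0:0.001:10;
y = cumtrapz(t, sin(t));                    % running integral of sin(t), zero initial condition
maxErr = max(abs(y - (1 - cos(t))));        % compare against the analytic result 1 - cos(t)
fprintf('Max error vs 1 - cos(t): %.2e\n', maxErr);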

Case 2: Execution context

Let's look now at what happens when your execution context is not contiguous, in this case a conditionally called subsystem; here I will use an enabled subsystem to periodically call the integrator.

50% sample rate…

For this first example I reduced the sample rate by 50% (i.e. the enable signal toggled between 0 and 1). As a result, while the shape of the output is correct (a cosine), the amplitude is 1/2 the original value. I could correct this by adding a gain factor on the output (or in the integral).
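The halving is easy to see numerically; in this sketch the integrator is approximated with a running sum and the enable signal removes every other sample:

dt = 0.01;  t = 0:dt:10;
full  = cumsum(sin(t)) * dt;                 % integrator called on every step
gate  = mod(0:numel(t)-1, 2) == 0;           % enabled only 50% of the time
gated = cumsum(sin(t) .* gate) * dt;         % integrator only accumulates when enabled
fprintf('Peak full: %.2f   peak gated: %.2f (about half)\n', max(full), max(gated));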

Note: when sampling data it is critical that the sampling frequency is at least 2X the frequency of the signal being sampled. This is known as the Nyquist rate.

Case 3: Non-periodic

In some cases sampling is event based. Here it is critical that the events occur more frequently than the Nyquist rate. However, since they cannot be assumed to occur at a simple periodic frequency, you cannot use a gain factor to correct for the sampling bias.

The blue line is the continuous integral

The simplest solution is to increase the frequency of the sample, the higher the frequency the more accurate the results.

Green: 100%, yellow 80%, blue 50%, red 20%

Barring that, a custom integrator can be applied that interpolates off of the last N data points and the temporal gaps; however, this approach will not work if the temporal gaps are large or if the data is rapidly changing…

Case 4: Function called integrators

The final case to consider is when the integrator is part of a function-call system. In most cases this will act like the periodic instance shown above; there is, however, a special case: when the same function is called more than once in a single time step. In that instance only the first signal passed into the function will be "integrated". Why?

Remember that integration can be expressed as a sum of slices multiplied by the time step. For the first call to the function the delta in time is the time step. For subsequent calls the time delta is zero.

If you need this functionality consider passing in the DT value as an external variable or performing the sum of two different function calls.
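One way to implement the first suggestion is a MATLAB Function block (or plain MATLAB function) that takes dt as an explicit input, so repeated calls within one time step still accumulate the intended area; this is a sketch, not the only way to do it:

function y = integrateWithDt(u, dt)
    % Rectangular (Euler) accumulation with an explicitly supplied time delta
    persistent acc
    if isempty(acc), acc = 0; end
    acc = acc + u * dt;
    y = acc;
end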

Integration by Parts
In the end numerical integration means we never have to remember these formulas…

If you are enjoying these posts, please consider subscribing to them.

Best practices for Model Handoffs

Handing a model off between developers, or from developer to user, is one of the most common tasks in Model-Based Design. So what steps should you follow to ensure that the hand-off is successful?


Step 0: Agree upon the requirements

A few weeks ago I made muffins, lemon poppy-seed; while my wife was happy to receive the muffins, she had requested chocolate-chip: a classic requirements error.


In the much simpler world of model hand-offs, the following items need to be defined:

  • The functional interface: inputs, outputs and rate of execution
  • Behavioral characteristics: what behaviors does the model cover; what are the “corner cases” that it does not cover.
  • Supporting files: most models require referenced models, libraries, and data. For parameterized models the same "model" will act differently with different data.
  • Acceptance criteria: a set of defined metrics that define what is required; these should be derived from the behavioral characteristics.

Step 1: Model validation

Assuming you have acceptance criteria the model validation is the process of validating the model against the criteria.  Ideally the methods for validating the model are established at the start of your project and are run routinely as the model is developed.

Note: You can be a coffee person and a validation engineer.

Step 2: Wrap it up!

Delivery of the model is important; there is nothing more frustrating than getting your shiny new model only to find out you are missing a library or a data file. There are several methods for addressing this (a dependency-analysis sketch follows the list):

  1. Version control software:  If the model is checked in as part of a project the end user can check out the full repository (note: this can result in file bloat)
  2. Use of Simulink Projects: a tool from the MathWorks that allows for the definition of model projects. It will analyze the required files for you and create a package for distribution.
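If you are not using Simulink Projects, the dependency analysis itself is available from MATLAB; in this sketch the model name my_controller.slx and the zip file name are placeholders:

[files, products] = matlab.codetools.requiredFilesAndProducts('my_controller.slx');
fprintf('Files to ship:\n');                fprintf('  %s\n', files{:});
fprintf('Toolboxes the receiver needs:\n'); fprintf('  %s\n', products.Name);
zip('my_controller_handoff.zip', files);    % package the dependency set for the hand-off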

If you find this content useful, please consider subscribing