Prior to the start of sheltering in place, with COVID-19 dominating the news, I saw a marked uptick in the number of paper towels showing up in the bathroom at work, often on the floor due to overflowing bins. While on the one (cleaner) hand I was happy to see an increase in hand washing, on the other hand it drew attention to the fact that people had not been washing their hands regularly. This is a common human behavior; we respond to a problem by starting what we should have been doing all along, e.g. "Dentist" flossing.(1)
Emergency!!! Start testing
Frequently, testing starts in response to a major issue, the "oh #@$#%!" moment. Then, a few weeks or months later, support for testing dies down once the problem has passed. There are problems with this:
Testing done after an emergency tends to test against the emergency condition only; there isn't a generic push to improve test coverage across the board.(2)
It is created by inexperienced test developers; writing good tests requires knowledge of testing best practices.
It is done without supporting infrastructure; as a result the tests are both more complicated to write and more likely to fail.
It is too late now! Or is it?
Even companies with existing test automation are seeing their infrastructure stress tested by the increased demands placed on it by remote workers. There are three things that can be done to improve results in the short term and "harden" your infrastructure for the long term:
Create basic test patterns (hooks) for end users: if everyone writes their own testing patterns (which is common), the system cannot integrate all the different methods used (a minimal sketch of such a hook follows this list).
Keep the interface simple for the end user
Give them flexibility for what is inside the test
Provide the error handling (crash protection) for them.
Set up a test review group: There should be both automatic checking that the tests submitted conform to the standard and a review group that validates that the tests are covering what is described.
Leverage existing technologies and workflows for cloud-based testing: using Jenkins or MATLAB unit tests takes advantage of the wealth of existing tools and processes for testing in the Model-Based Design environment.
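To make the first point concrete, here is a minimal sketch of what such a hook might look like in MATLAB; the function names (runTestHook, myModelTest) are illustrative placeholders, not an established API.

```matlab
% A minimal test "hook": the end user supplies a model name and a test
% function handle; the harness keeps the interface simple and provides the
% error handling (crash protection) for them.
function result = runTestHook(modelName, testFcn)
    result = struct('model', modelName, 'passed', false, 'message', '');
    try
        % The user has full flexibility inside their own test function
        result.passed = testFcn(modelName);
        result.message = 'Test completed';
    catch err
        % A failing or crashing test is reported, not fatal to the test run
        result.message = err.message;
    end
end
```

A user would then call runTestHook('myModel', @myModelTest) and get back a uniform result structure that the review group and reporting tools can consume.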
Final thoughts
The best testing is formalized, repeatable, and easy for the end user to understand and use. Most importantly, over time, the use of the testing infrastructure becomes a habit, and that is how we keep "bugs" from spreading.
Footnotes
Dentist flossing: people who don’t floss normally but start flossing furiously the week before a visit to the dentist.
A narrow focus on fixing a bug may not get to the root cause of the problem. While the goal should be to fix the existing issue, the time spent root-causing major issues is almost always worth spending.
How do you measure a Program? In Lines of Code, in variables, in functions, or schedules calling.(1) The real question, how do you measure the size of a model, needs to be taken back one step: what do you want to get out of your measurements?
Measuring the model will help us to understand five things:
How hard it will be to test
What it takes to develop, how much effort to maintain
How long it will take to execute
Is it complete against requirements
How easy will it be for others to interpret the model
Likewise, there is a set of standard measurements that we make:
Lines of Code/Blocks in a Model: the literal counting of the number of functional blocks or lines of code. This can be improved by measuring the LOC per function or the blocks per atomic subsystem.
Cyclomatic complexity: A measure of the total number of independent paths through a model, e.g. how many choices can you make?(2)
Floating point operations (FLOPS): This is a count of the total number of floating point operations which can be used to estimate execution time.(3)
Interface complexity: a measurement of how complex the interface to the model is, e.g. how many inputs and outputs there are and how related they are.(4)
Requirements coverage: a measure of how the model maps onto requirements.(5)
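As a rough illustration of the first measurement, here is a sketch of counting blocks in a Simulink model; 'myModel' is a placeholder for your own model name.

```matlab
% Count total blocks in a model and blocks per atomic subsystem
model = 'myModel';          % placeholder model name
load_system(model);

allBlocks = find_system(model, 'Type', 'Block');
fprintf('Total blocks: %d\n', numel(allBlocks));

% Blocks per atomic subsystem give a finer-grained "size" measurement
atomicSubs = find_system(model, 'BlockType', 'SubSystem', 'TreatAsAtomicUnit', 'on');
for k = 1:numel(atomicSubs)
    nBlocks = numel(find_system(atomicSubs{k}, 'Type', 'Block'));
    fprintf('%s : %d blocks\n', atomicSubs{k}, nBlocks);
end
```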
                 Test    Develop/Maintain    Execute    Requirements    Understand
LOC               x             x                                           x
Cyclomatic        x             x                                           x
FLOPS                                           x
Interface         x             x                                           x
Requirements      x                                          x              x
Mapping what we measure to what we want to understand
In the table above, the x's correspond to the strong links between the measurement and the objective; in truth, all of the measurements provide some insight into each of the objectives.
Focus on development
The use of metrics often comes late in the development cycle, closer to release; however, they are best employed from the start of the project. There are three reasons:
Time to completion: over time, the use of metrics can allow you to predict when the project will be completed (see the sketch after this list).
Error reduction: There is a strong correlation between metrics and bugs in code
Modularization: Metrics should be used to determine when models should be split into multiple models for development.
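For the first point, here is a trivial sketch of what "predicting completion from a metric" can look like; the weekly numbers are invented purely for illustration.

```matlab
% Fit a linear trend to a cumulative metric and project the completion point
week = 1:8;
requirementsDone  = [3 7 12 15 21 24 30 33];   % invented cumulative counts
totalRequirements = 60;

p = polyfit(week, requirementsDone, 1);         % done ~= p(1)*week + p(2)
predictedWeek = (totalRequirements - p(2)) / p(1);
fprintf('Projected completion around week %.1f\n', predictedWeek);
```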
Modularization can drive you off your rocker
Who you are,(6) that is, the role you play in the workflow, will influence the desired size of the models; there are a Quadrophenia(7) of roles. In general, the desire for smaller models ranges from the tester ("smallest") to the release engineer ("largest").
Tester
Developer
Integrator
Release
So looping round, how do you square the circle and use the core metrics to create a model development workflow that satisfies the conflicting desires? The short answer is, you cannot. As a result, you need to prioritize the members of the development team that have the greatest impact on overall product development time and quality… the test engineers and developers. This may result in models that are smaller than the integration engineers desire. However, since the highest proportion of software bugs are at the unit level, priority should be given to unit-level development and test. To make it easier for the integration and release engineers to develop, consider using a "shell model" approach to speed up the initial development workflow.
Coda
Before adopting any metrics you should be clear what you intend to measure(8) and what you want to do with that information. One company I consulted with had a "number of cups of coffee per line of code" metric; in the end it only showed that they had an espresso machine. On that note, remember to pick your metrics well.
Footnotes
Perhaps one day I will release the album “The greatest hits from MBD”, vocals by my lovely wife Deborah.
If I hadn’t used the “Choose your own adventure” image in a recent blog post I would have used that instead of the ruler for this section. Darn-it, I need to plan ahead!
This is a tricky metric, as a single function can have multiple FLOP counts depending on the conditional path the model executes for any given cycle. Because of this, FLOPS are generally reported as min, average, and max.
By related inputs I mean the following: if you have four inputs such as right front tire pressure, left front tire pressure, etc., those are related. However, the price of coffee, inches of rain in Spain, and the number of ants at a picnic are not related.
Requirement coverage answers three questions; first, are you meeting all the requirements for your model? Second, are you testing all your requirements? Third, and often missed, are you putting too many requirements into too small of a space? In other words, when I see a model that has 15 requirements associated with it, I start to question the validity of the requirements and the reality of the implementation.
Yes the lyrics are reversed here, Who, are you, Who Who Who Who?
For those of you who have seen the movie you know that in reality the four roles Jimmy plays are really one; this is much like software development where many people play multiple roles.
Even "simple" metrics like lines of code can be difficult to fully specify. Is a line of code all of the code between the assignment (far left) and the semicolon on the "far right"? What do you do with code that spans multiple lines? What about comments (inline or otherwise)?
What is the Model-Based Design Cartographer and why do you need one? A guide, like the Fylgja, mapping out the changes to the domain of Model-Based Design. A painter illustrating how the established footpaths transform into superhighways,(1) a bard chronicling the frontier territories' transformation into states.(2) I'm inventing the maps of Model-Based Design, pushing out the borders into where the dragons used to lie.(3)(4)
The Three Opportunities
The reward for dragon slaying is a hoard of gold;(5) the reward for solving engineering problems is the understanding that leads to better tools and processes.(6) Currently Model-Based Design faces three “dragons.” Today I peer out with one eye (7) looking down the tree of possibilities.
Branching out in understanding
Dragon 1: Effective application of AI/DL/ML to controls problems (and more)
AI/DL/ML, call it what you will, this is the rough and tumble frontier town of Model-Based Design. AI/DL/ML and Model-Based Design have a lot to offer each other. AI/DL/ML can solve control problems that traditional methods are not able to resolve. Model-Based Design can provide the experimental infrastructure to develop these algorithms. What is missing is the confidence in the algorithms allowing for deployment in the field. How can you validate a black box?
AI/DL/ML systems are "black box" systems. Unlike traditional black box systems, for which there are mathematical approaches for determining the limits of the system,(8) AI/DL/ML systems do not easily lend themselves to analysis. So what can we do?
This is where Model-Based Design, coupled with Design of Experiments, comes into play. With a sufficiently detailed plant model, statistical analysis can be performed on the system to establish the boundaries of its behavior. Further, if the system is perturbed with external noise, you can establish the boundaries of operation with even higher confidence.
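As a sketch of the idea (not a full Design of Experiments), here is a simple Monte Carlo run in MATLAB; the first-order plant is a stand-in assumption, and in practice the loop body would be a call to sim() on your plant model.

```matlab
% Perturb plant parameters with external noise and estimate a behavioral boundary
nRuns = 500;
peakResponse = zeros(nRuns, 1);
t = 0:0.01:10;
for k = 1:nRuns
    gain = 1 + 0.1*randn;            % perturbed plant gain
    tau  = 0.5 + 0.05*randn;         % perturbed time constant
    y = gain * (1 - exp(-t/tau));    % step response of the stand-in plant
    peakResponse(k) = max(y);
end
sortedPeaks = sort(peakResponse);
fprintf('Estimated 95%% boundary on peak response: %.3f\n', ...
    sortedPeaks(round(0.95*nRuns)));
```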
Dragon 2: Understanding Event-Driven and Distributed Systems
On the other side of the world, in another frontier town, we have event-driven modeling; with event-driven modeling, the effect of an action upstream often has chaotic downstream impacts.(9) Modeling the initial driving actions allows for prediction of the final resolution.
Many modern electronic devices operate in an event-driven environment. These "events" are often orders of magnitude more complex than the traditional "a button was pushed."(10) Coupled with the highly distributed nature of event-driven environments, this results in complex architectures.
Again, the ability to simulate (using models) comes to the front of the stage.(11) Using the correct modeling language to define the event-driven simulation, coupled with appropriate workflows to verify the algorithms, is of paramount importance. Given that, one approach to validation is similar to existing coverage analysis methodologies, where the highly temporal nature of the events is the key variable of analysis.
Dragon 3: Bounding Exponential Growth
Modern control algorithms have grown exponentially, to the point where established Model-Based Design approaches have difficulty encompassing the full development cycle. The three primary challenges are the increase in "systems of systems," the inability to simulate massive systems of systems, and the management of information throughout the design process.(12)
Complexity can be like the Hydra, without fire to cauterize the cut.
The answer lies in the redefinition of the boundaries between Model-Based Systems Engineering (MBSE) and Model-Based Design. Using the metadata-rich nature of systems engineering, with its high levels of abstraction and traceability, while leveraging the rich simulation and validation methodologies of Model-Based Design, we have the ability to start to untangle the knot of growth.
From Art to Engineering (by way of Science): Drawing a new map
There is Art, there is Science, and there is Engineering. When a field is in the Art stage, things are accomplished by masterful people who have a feel for what needs to be done. There are some astonishing results, but they are rarely reproducible.
We are in the middle of the Science Stage of the “Next generation of Model-Based Design.” This is the point where measurements are taken and the field is defined. Scientists are explorers, drawing the map.
The final stage is engineering, where the map is drawn and the roads (those paths that anyone can travel down) are paved.
I am your Model-Based Design Cartographer. My hope is that we are entering a "Renaissance" for Model-Based Design, a time when Artist, Scientist, and Engineer are all in one.
Footnotes
In this analogy “footpaths” are the initial rough workflows that individual groups work out, the “superhighway” is the final industry established best practice workflows.
The territories cover a wide range of terrain, what we are doing is “taming the wild west.”
One could say that the change from the existing state of Model-Based Design to the next state is "how the Wurm turns."
In Der Ring des Nibelungen, Fafner turns himself into a dragon to protect the hoard of gold. In much the same way, some of our "problems" arise from earlier solutions.
Most dragons moved away from the Gold Standard in the 20th century. Unfortunately, paper currency and fire-breathing lizards do not mix. This is the real cause of the Great Depression.
With engineering, processes are the “real gold.” A good tool is just that, good; how to best use the tool is priceless.
Wotan gave up one eye so that he could see more clearly; I hope today this blog can serve a similar purpose.
Not all systems can be analyzed with this method; however, even in the worst cases it is possible to create reduced-order models which can provide data on the limits of the system.
Event driven systems do not lend themselves to traditional mathematical analysis; hence the “chaotic” nature of their responses. However their boundaries and the effects of actions can be discovered. And truth be told, when you first read Hamlet did you think that everyone would be dead at the end? Only Horatio knows the truth, and an unseen actor (Fortinbras) becomes King.
The event may still be triggered by a button push but the data associated with that push (the button ecosystem) is much richer.
All the world's a stage, and all the controllers and networks merely players; they have their exits (conditions) and their entrances (initialization), and one ECU in its time sends many a signal.
I selected the word "information" here because the types of "data" that exist for system-of-systems design are so much richer than in the single- or multi-controller examples and often encompass things beyond the traditional controls engineering definitions.
A few weeks ago I wrote a post and translated it into German; when I asked one of my German co-workers, he said the quality was overall good, but some of the phrasing did not quite work. This is what happens when you(1) translate something versus writing it; literal translations can "miss the mark."
Hitting the target, but not the bullseye
From Text to Graphical (2)
In another previous post I wrote about selecting the correct modeling language. This post has a similar theme: even if you have picked the correct modeling language, direct translations between languages still have five possible problems:
Repeating old mistakes: when you copy from one language into another, mistakes get translated over too.(3)
Missing out on the "cool" features: every language has capabilities that differentiate it from the others; it is those features that often drive adoption.
Losing the old “cool” feature: sometimes when you migrate away from an old language you lose out on the “cool” feature of that language; trying to reproduce it in the new environment leads to problems.(4)
You miss the chance to make it better: When you port things into a new language it should be a time to figure out how to make it into a better version of itself.
You are not learning the new language: The act of writing/thinking in a language helps you learn the language.
Und jetzt auf Deutsch
Vor zwei Wochen hat ich einen Blogeintrag geschrieben und es das ins Deusche ubersetze. Um die Qualitat der Deusche Blogeintrag zu uberprufen, ich hat mit einem Deutsche Kollegen gesprochen. (Vielen Danke Stephan!) Er sagte es war “Gut, aber das sind Dinge, die nicht direckt ubersetze wurden”. Das ist die Problem mit ubersetze ein Blogeintrag, nicht es schriben. Wenn Sie in einer Sprache denken, vermissen Sie der Begriff der anderen.(5)
Und zu, das ist die Problem…
Von Text zur Grafik
Vor diesem Blogeintrag hat ich uber die Auswahl der richtigen Grafiksprache fur Irhen Projekt gerschriben. Wenn Sie das richtige Grafiksprache haben, gibt es immer noch funf mogliche Probleme.
Wieder den gleichen Fehler machen: Wenn Sie ubersetze mit kein uberprufen, die Probleme von die alte Programmiersprache Ausfuhrung immer mit sich sind.
Sie bekommst kein “Gut neue Dinge” : Jeden Programmiersprache habt Funktionen und Vorteile das einzigartig zu es sind. Dass sind warum Sie entschieden, es zu verwenden.es zu verwenden.
Sie bekommst kein “Gut alte Dinge”: Die letze punkt ist richtic auch fur die alte Programmiersprache. Bevor Sie sich andern, uberlegen Sie, was Sie verliern konten.
Wenn Sie nur ubersetz, haben Sie keine Gelegenheit, sich zu verbessern: (6) Immer mit die neu Programmiersprache denkst “Was kann ich besser machen?”
Denkst imndie sprache zu die sprache erlernen: Zu lernen “Mach es einfach!” (7)
Final thoughts
Interestingly, writing this the second time in German I found myself going back and updating the English version. Rethinking it in a new language made me reconsider the original, hopefully making it a stronger version of itself.
Or perhaps gained?
Footnotes
In this case I am assuming that “you” are not a professional translator; if you are, you would already know the point of the post, translate the spirit and not the body.
Since the Model-Based Design environment also includes text based languages (e.g. MATLAB in our case) sometimes it is from Text to Text.
There is always the chance to introduce new mistakes.
Not all work should be “ported” into the new environment; for instance low level drivers are often best written in C (or even assembly).
Requirements writing is one of the "classics" of Model-Based Design,(1,2) and if you are, even in passing, part of the software development process, then the message in the next image is something you have heard 1e9 times.
If the “cost” of finding a bug goes up the later we find it, why are there still so many “bugs” in the requirements phase?
Now, in the times of COVID, because informal communication has declined, clear requirements take on even greater weight.
What is a requirement bug? A taxonomy
Before we talk about what needs to be in a requirements document, I want to explain the common ways in which we write requirements incorrectly.
Pretty bugs (meet the beetle)
These are not "true" bugs;(3) rather, they are feature requests that do not have an impact on the product. For example, I worked on an infusion pump project about 7 years ago. One of the requirements was that the housing of the pump should be "gun metal gray." Since I was helping with the software portion of this project, the color of the pump was not a relevant requirement.(4)
What made this requirement worse is that there are some instances where the color of something is critical. For instance, on that same pump there was the following requirement (roughly):
The emergency shut-off button shall be red in color. No other buttons on the device shall be red. The button shall be raised 1/4 inch from the panel and shall be in the lower right hand corner of the device. No other buttons shall be within 1 inch of the emergency shut off button.
Everything about that requirement was to ensure that when someone needed to “stop” the device right then and there, they would have the best chance to do it. By having another “color” requirement mixed into the engineering spec you give people the wrong impression.
Stale bugs
First, an aside: when I did an image search on "stale bread," I was shocked to see how many posts there were on "never throw away stale bread."(5)
Second, what is a stale bug? A stale bug is any requirement that was not updated as the project changed; these could also be called "orphaned" bugs.
In some cases, the stale bugs represent parts of the project that have been moved past. In this case they should be archived(6) and moved out of the general review process. In other cases, the maintenance portion of the requirements writing process has simply not been followed.
Communication breakdown bugs
At the edges of your kingdom,(7) where you trade your precious knowledge, one word misspoken can lead to war.(8) These bugs are the result of different groups not communicating what their component needs and what it will deliver. They can be thought of as the "everyone knows that" bug. The one I remember best involved a transmission group who "knew" that rotational speed was in radians per second and an engine group who "knew" that it was in revolutions per second.
Wishy washy bugs (9)
A wishy washy bug is when, well you know, when the person who is, well the group, or you know the way in which it is put down, or not put down….
The above is an example of "wishy washy." It is a requirement that does not meet the fundamental aspects of a requirement:
Unambiguous: the requirement can only be interpreted in one way
Scoped: the requirement pertains to one aspect of the project
Testable: the requirement can be verified to be achieved
What I’m expecting
I have written before about requirements; today I want to take a new focus: thinking about who will be using what you write.(10)
Before you write
As you set out to write a requirement, ask yourself the following questions (you will need to answer them all).
Is this requirement needed: Not every task needs a requirement. Some “requirements” are simply notes of things that need to be done.
Does this requirement depend on other requirements: If so, you need to reference them.
Can this be written without project-specific jargon: If not, do you have a "dictionary" to explain the jargon?(11)
Is this one requirement or multiple: Don’t try and fit everything in one requirement! (At the same time don’t make every detail its own requirement.)
What supporting material would make the requirement clear: A picture can be worth 1,000 words; a free body diagram or plot is worth 1,000,000.
As you write
As you write you need to keep thinking “am I answering the questions from step 1?” Beyond that, are you
Sticking to the scope of your requirement?
Writing in an unambiguous format?
Writing in a testable fashion?
Avoiding the soap suds of wishy-washy writing?(12)
After you write: feedback and sign-off
Before a requirement is accepted into the workflow there should be a formal sign-off step. If the person signing off looks at it for a minute and says "yeah, that makes sense," either they are a subject matter expert or they have not really reviewed the requirement. The latter happens far too often.
When you select your reviewer, they should:
Have knowledge of the area of the requirement and (preferably) the areas impacted by the requirement
Have a level of responsibility for the impact if the requirement is incorrect
Have an understanding of the testability of the area
The last item is key. A requirement can be written in such a fashion that to an outsider it appears testable, but someone who works on the project would know it could not be tested as written. For example: I worked on a bioreactor hardware-in-the-loop project a few years ago. One critical requirement was "the core temperature of the bioreactor shall never exceed 87 deg C." When I read this I thought, great, this is a clear requirement; except there was no measurement device to check the core temperature.(13)
Wrapping it up
Writing good requirements takes time, however think of them as a gift to the developer and yourself; or perhaps think of it as a gift to the developer and a “gift wish list” to yourself.
Good requirements are a gift
Footnotes
In reality all areas of work have “requirements documents.” In recent years I have started thinking of requirements documents as “contracts” that we negotiate between developers and the end users.
For the image “100 best classic movies” I find the statement problematic; above a given level each “movie” is a distinct thing, e.g. you could say “the best murder mystery” or “the best road trip movie” but “best classics” seems redundant.
Beetles are members of the order Coleoptera, which does fall within the class Insecta; so yes, they (beetles) are bugs, but pretty bugs are not bugs.
On the other hand, the fabrication spec would have been the logical location for this requirement.
On stale bread, if you were using a good quality bread to start with then I can see some uses of the stale bread: bread crumbs for “breading,” croutons, bread pudding or possibly French toast.
Never get rid of old requirements; sometimes a "stale" requirement comes back to life, and if you have to reinvent it, that is redundant work.
In this case “Kingdom” equals component
And in this case “war” means really long arguments about whose fault it is when the integration fails.
I honestly have no idea what could be going on in that world, on the one hand singing cows look like they would be fun, on the other hand I think the goose would be the better horn player.
In about 50% of the cases, the requirements are written by the same person who will implement the requirement (often after they have created the object).
I should be clear, often using project specific terminology is a good thing, e.g. it simplifies the writing process; just be sure that everyone agrees on the terminology before you use it.
Why yes, this is a repeat of the list from earlier on, these tests cannot be repeated enough.
Keep in scope
Write unambiguously
Write testable requirements (12)
We were able to take measurements at the boundaries of the container and we then used heat transfer coefficients to determine (approximately) what the core temperature would have been. It was a fun example of both thermodynamics and boundary layer theory.
I develop models in Simulink,(1) which actually means I create models in one of 4+ languages: Simulink, Stateflow, MATLAB, or Simscape. Selecting the "correct" language for the task makes the difference between an easy implementation and a complex realization.
The short answer
Each language has areas in which it excels;(2) in short they are:
Sim(XXX): The SimMechanics, SimFluids,… set of tools is intended for medium / high fidelity physical models (3)
Stateflow: Modeling of state based events and flow chart logic
MATLAB: Complex mathematical equations, interfacing with external C code.
Simulink: Data flow and controls logic
The, longer, real answer
Looking to the right, it will take you a few seconds to translate the diagram to C = sqrt(A^2 + B^2), i.e. the Pythagorean theorem, but you were able to do so without undue difficulty. This could very easily be put into a MATLAB Function block, with the code looking exactly as written above.(4) However, at this level it may not be worth doing so, due to the jarring transition between modeling languages.
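For reference, the MATLAB Function block body would be nothing more than the following; the function and signal names are illustrative.

```matlab
function C = hypotenuse(A, B)
% Pythagorean theorem, matching the Simulink diagram
C = sqrt(A^2 + B^2);
end
```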
When to switch? (5)
As a rule of thumb I ask the following questions
What is the total number of blocks / lines of code / states involved in the current language: If the answer is less than 10 blocks, 10 lines of code, or 3 states, then I stick with my current language (assuming it is less in the other language).
Am I putting it into a subsystem: If I am grouping the portion into a unique subsystem then I will always switch to the “best” language; at that point the “transition jarring” isn’t an issue.
Am I replicating a built-in function: If the operation you are doing exists as a built-in function in a different language, then switch now!(6)
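A trivial illustration of the last point: the hand-rolled loop below is exactly what the built-in max() already provides, only slower to write and easier to get wrong.

```matlab
x = [3 -1 7 2 5];

% Hand-rolled version (the kind of thing that gets ported over verbatim)
biggest = x(1);
for k = 2:numel(x)
    if x(k) > biggest
        biggest = x(k);
    end
end

% Built-in version: clearer, already tested, and optimized for you
biggestBuiltIn = max(x);
```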
Final thoughts
There are two driving factors behind modeling language selection: clarity and efficiency. Happily, in this case, the clearest representation is almost always the most efficient implementation.(7,8)
Footnotes
In reality I now advise people on how to develop models; much like I am doing now.
In the same way, Excel is a very good spreadsheet, but it should not be used as a programming tool.
We will look at the dividing line between medium/high and high fidelity models in a future post. More often than not this is as far as you need to go for developing your control algorithm.
If I did need to perform this calculation often, I would do just that.
Knowing when to switch is much easier than the wiring diagram for the 3-way switch.
MATLAB and Simulink have powerful and efficient implementations of many basic algorithmic functions; re-creating them is both a time sink and a potential source of errors.
In my Pythagorean example, the generated code would have been the same in either case.
The reason the clearest representation is most often the most efficient is because of the global optimizations built into the selected language; e.g. it is less likely to use unneeded states, variables, or operations.
Model decomposition is the process of dividing a model into logical or functional units. In today's video, I walk through the basic, high-level model decomposition process.
Before this blog existed, I wrote a LinkedIn blog post, "What's on your whiteboard." Whiteboards, as a rapid-iteration development tool, are a favorite of co-workers around the world; so what can we do, now that we are working remotely, to replicate the "whiteboard experience"?
Gather round the whiteboard…
Before we figure out how to replicate a whiteboard remotely,(1) we need to talk about why we whiteboard.
Architecture & Stereotypes: (2) Outlining at a conceptual level how the product works.
Free-body diagrams: (3) Showing the interactions between forces
Note-taking: Sometimes the white board serves as a running checklist of what has been discussed and what has been agreed to.
Timelines: Writing out timelines for the project development.
Your virtual whiteboard
In many posts I have written "with Model-Based Design, a single model is evolved throughout the design process." Let's take a step back and think now about the system. Can we start using and evolving a system throughout the design process?
Model-Based Design has an answer to the Architecture & Stereotype use case; the collaborative (4) use of tools like System Composer or UML diagrams. What is more, once this collaborative use is established the whiteboard isn’t “erased” at the end of the session, it becomes part of the product.
One thing of note
For mature organizations with an in-place requirements and bug tracking process, "notes" can be added directly to the requirements or bug tracking infrastructure. While not as "easy" as the earlier whiteboarding example, capturing these actions during the meeting reduces the possibility of "transcription errors."
Transcription errors: A cancer in your development process
Footnotes
I do know that many video conference tools offer “whiteboards” but with few exceptions I have not found those to be good environments to rapid prototype in.
Half the time when I think “stereotypes” I think of Cambridge Audio, Sony and Panasonic.
Other fields have their equivalent to FBDs, for now I am using that as a generic term.
The key word, though, is collaborative; if it is just one person "sketching" the diagram without feedback, then, while still a powerful tool, it is not a "whiteboard" experience.
Like many people, I have used the COVID lockdown as time to practice skills; I have been spending time on my (written) German, so if you skip to the end you can see this post "auf Deutsch."
Back to the start
Stoplights provide us with information: Green = Go (Initialize), Red = Stop (Terminate), and Yellow, according to the movie Starman, means go very fast. A long-standing question within the Simulink environment has been, "What is the best way to perform initialization and termination operations?"
Old School: Direct Calls in C
Within the Simulink palette, the "Custom Code" blocks allow you to directly insert code for the Init and Terminate functions. The code will show up exactly as typed in the block. The downside of this method is that the code does not run in simulation. (Note: this can also be done using direct calls to external C code; in those cases, getting the function to be called exactly when you want can be difficult.)
State School: Use of a Stateflow Diagram
A Stateflow chart can be used to define modes of operation; in this case, the mode of operation is switched using either a flag or an event. This approach allows you to call any code (through external function calls or direct functions) and allows for reset and other event-driven modes of operation. The downside to this method is that you need to ensure that the Stateflow chart is the first block called within your model (this can be done by having a function caller explicitly call it first).
New School: A very economical way
The "Initialization," "Termination" (and "Reset") subsystems are the final, and recommended, method for performing these functions. The code for the Initialization and Termination variants will show up in the Init and Term sections of the generated code. Reset functions will show up in unique functions based on the reset event name.
Within these subsystems you can make direct calls to C code, invoke Simulink or MATLAB functions, and directly overwrite the state values for multiple blocks.
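As a hedged sketch, a MATLAB Function placed inside one of these initialization subsystems could call an external C startup routine like this; 'hw_startup' is a placeholder for your own C function, made available through your custom code settings.

```matlab
function initHardware()  %#codegen
% Intended to run from within an initialization subsystem
if ~coder.target('MATLAB')
    % In generated code, call the external C startup routine directly
    coder.ceval('hw_startup');
end
% In simulation nothing is called, so the model still runs on the desktop
end
```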
Best practices for Init and Term
MATLAB and Simulink have default initialization and termination functions for the model and the generated code. The defaults should only be overridden when the default behavior is incorrect for your model. There are four common reasons why custom Init / Term functions are required; if you don't fit into one of these, ask whether you should be using them at all.
Startup / Shutdown physical hardware: for embedded systems with direct connections to embedded hardware, the Init / Term functions are required. (Note: it is a best practice to have your hardware systems in models external to the control algorithms; this allows you to "re-target" your control algorithm to different boards easily.)
One time computations: Many systems have processor intensive computations that need to be performed prior to the execution of the “body” of the model.
External data: as part of the startup / shutdown process, data may need to be saved to memory or to a drive.
You just read a blog and you want to try things out… I'm glad you want to try it, but review the preceding three reasons first.
Bonus content
As promised, the results of practicing my (written) German skills. Und so
Ampeln liefern uns Informationen: Grün = Los (Initialisieren), Rot = Halt (Beendigungsvorgänge) und Gelb bedeuten laut Film Starman, sehr schnell zu fahren. Eine langfristige Frage in der Simulink-Umgebung lautete: “Was ist am besten, um Initialisierungs- und Beendigungsvorgänge durchzuführen?”.
Old School: Direct Calls in C
Innerhalb der Simulink -Palette können Sie mit den Blöcken “Benutzerdefinierter Code” direkt Code für die Funktionen Init und Beendigungsvorgänge existiert. Der Code wird genau so angezeigt, wie er im Block eingegeben wurde. Das Problem bei dieser Methode ist, dass der Code in Simulation nicht ausgeführt wird. (Eine dinge: Dies kann auch durch direkte Aufrufe von externem C-Code erfolgen. In diesen Fällen kann es schwierig sein, die Funktion genau dann aufzurufen, wenn Sie möchten)
State School: Use of a Stateflow Diagram
Ein Stateflow Chart kann verwendet werden, um Betriebsmodi zu definieren. In diesem Fall wird der Betriebsmodus entweder mithilfe eines Flags oder eines Ereignisses umgeschaltet. Mit diesem Ansatz können Sie einen beliebigen Code aufrufen und zurücksetzen und andere ereignisgesteuerte Betriebsmodi ausführen. Die Einschränkung diesmal Methode ist, dass Sie sicherstellen müssen, dass das Stateflow-Diagramm der erste Block ist, mit dem in Ihrem Modell aufgerufen wird (dies kann erfolgen, indem ein Funktionsaufrufer es explizit zuerst aufruft).
New School: A very economical way
Die Modellblöcke “Initialisieren”, “Beendigungsvorgänge” (und “Zurücksetzen”) sind die endgültige Methode zur Ausführung dieser Funktionen. Der Code für die Initialisierungs- und Beendigungsvorgänge Optionen wird im Abschnitt “Init” und “Term” des generierten Codes angezeigt. Rücksetzfunktionen werden in eindeutigen Funktionen angezeigt, die auf dem Namen des Rücksetzereignisses basieren.
Innerhalb dieses Subsystems können Sie C-Code direkt aufrufen, Simulink- oder MATLAB-Funktionen aufrufen und die Zustandsraum Werte für mehrere Blöcke direkt ersetzen.
Best practices for Init and Term
MATLAB und Simulink verfügen über standardmäßige Initialisierungs- und Beendigungsfunktionen für das Modell und den generierten Code. Es gibt vier häufige Gründe, warum benutzerdefinierte Init / Term-Funktionen erforderlich sind.
Physische Hardware starten / herunterfahren: Für eingebettete Systeme mit direkten Verbindungen zu eingebetteter Hardware sind die Init / Term-Funktionen erforderlich (Eine dinge: diese beste Vorgehensweise ist Ihre Hardwaresysteme außerhalb der Steuerungssysteme zu haben. Auf diese Weise konnen Sie Ihre Software schnell nue Hardware portieren.
Einmalige Berechnungen:Berechnungen erforderlich vor dem Start des Steueralgorithmus .
Externe Daten: Daten, die in externe Quellen geschrieben werden.
I have drawn the software design “V” roughly 1e04 times.(1) Over time, the scope of what is in the V for Model-Based Design has increased. Sometimes I think there should be a Model-Based Design equivalent to Moore’s Law (4); perhaps the “delta V”.(5)
In the arms of the V
Often, when asked "what's next?", the answer is "deeper": improve the product, improve the process, increase (or decrease) the outputs. The less obvious answer is what can be brought into the embrace of the V.
As the scope of Model-Based Design has expanded, it has done so in a technology-first, workflow-second hodgepodge.(6) What is unique now is the embrace of Model-Based Systems Engineering, which focuses on workflow first and tools second.
What is a system?
Definitions of systems are generally unsatisfying, as there is no consensus on what comprises a system; however, the definition I like best is…
Open and Closed Systems: A system is commonly defined as a group of interacting units or elements that have a common purpose. … The boundaries of open systems, because they interact with other systems or environments, are more flexible than those of closed systems, which are rigid, and largely impenetrable
Come together, right now, over???
I doubt the Beatles were thinking of software systems when they wrote this song (7) but it gets to the heart of why systems are important. They give people a place to “come together.” No model is, or should be, an island.(8) But how do we come together? The answer is through abstraction.
How does the centipede walk? (9)
The objective of a systems integration environment is to provide an abstracted language so that software, hardware, controls, physical modelers… can all exchange “information” without having to understand the heart of each other’s domain.
At a minimum, a system acts as both a "living" ICD (Interface Control Document) and an ICD verification tool. When working in the metadata-rich Model-Based Design environment, a system integration tool can do much more.
Used well, metadata is an asset; used poorly, it can sink your ship.(10) Use of a systems integration tool provides a natural "organizational" layer on top of the metadata. System interactions, data dependencies, and system-wide requirement coverage can all be accessed from within a system-level model.
The Worm Ouroboros: Model-Based Design or Model Based System Engineering
I've been asked, "What is the boundary between MBD and MBSE?" Right now, where it lies is an open question(11) and not, at present, a fruitful one. Five years ago there was a clear answer, and five years from now we will again have clarity; but right now we are in a land rush(12) where developers can explore new ideas and stake out new domains. What is next? A better understanding of what is.
Footnotes
I came to this number using a Fermi estimation methodology: years in the MBD community ~ 20, customers per year ~ 50, drawings per customer ~ 10; years mentoring in MBD ~ 10, mentored per year ~ 4, drawings per mentee ~ 20.(2) Total = 20 * 50 * 10 + 10 * 4 * 20 = 10,800.(3)
In the end, the mentored end up drawing it and having their own way of talking about the design V.
The Fermi estimate may make you think that the 800 from mentoring is not important; however, if I count the number of times the people I taught in turn draw this, then I may hit another order of magnitude.
It always seemed that Moore’s law should be called “Moore’s extrapolation.”
You could think of each software release as an instantaneous acceleration.
Technology first, workflow second: a tool for solving a specific problem is developed; how that tool fits into the overall workflow comes later (along with refinements to the tool).
And I doubt it was an endorsement of open source software.
No man is an island entire of itself; every man is a piece of the continent, a part of the main;
No block is a model entire of itself; every block is a piece of a subsystem, a call from the main();
The Centipede's Dilemma is something organizations hit when they start to examine how they do something; much like the Coyote, who can run on air until he looks down.
Though I suppose in the case of the Titanic they had the data to go by; it was a structural problem.
The image of an “open system” was chosen to lay the ground work for this section.
For those of you less familiar with United States history, a “land rush” refers to a time period in American history when settlers could “ride and claim” a parcel of land. When I was a child this was taught as a positive moment in history; as an adult, it is easy to see the casualties and costs to Native Americans.