Rabbit Holes: Constraining your model(s)…

“How big should my model be?” is an evergreen Model-Based Design question. Too small and time is wasted on build, test, and review tasks. Too large and the model is difficult to debug and maintain. The recommended size has always been a range, with one critical factor: the ease of communicating information from one person to another. But in the COVID remote-working environment, has the sweet spot shifted, and if so, what can Model-Based Design do to address this?

Going down the rabbit hole

The English idiom “going down the rabbit hole” comes from Lewis Carroll’s ‘Alice in Wonderland’ and refers to any pursuit where action is taken but the outcomes are ‘never ending and nonsensical’. Remote work can easily lead people down rabbit holes due to a lack of communication. In the office, sitting next to Sid & Beatrice, Marie & Pierre, I have opportunities for regular, small sanity checks. How can we foster that same culture from behind a webcam?

Bring down the wall

When working remotely, there are two keys to effective Model-Based Design reviews:

  1. Select the focus of your review: Are you performing an architectural review or a functional review?
    • Architectural: Provide the ICD and the execution flow of the atomic units
    • Functional: Provide the simulation results and the requirement documentation
  2. Leverage integration models: Here is the MBD key: the use of shell (child) and integration models gives you a natural decomposition between the architectural review (integration models) and the functional reviews (shell/child models)

How this keeps you whole

Rabbit holes are best avoided through timely reviews. By leveraging the Integration / Shell concepts, a natural break point in the review process exists.

Shell models map to between two and four high-level requirements. When you have completed the work on any of those requirements, call for a review.

Integration models map onto system-level requirements. When any of these change, call for a review.

Reflecting back to the start

In the same spirit of avoiding rabbit holes, we want to prevent the mirror problem of too many reviews. Unless the review is part of an Agile stand-up, 45 minutes to an hour is an appropriate amount of time. If reviews are routinely taking less than 30 minutes, reconsider the tempo of the meetings. While time spent in reviews is important, if you spend too much time in them you end up with the Red Queen’s Dilemma.

In praise of the offset…

I am now waltzing into a short 1-2-3 posting on scheduling. Multi-rate systems exist for two reasons: either the calculations do not need to update frequently, or the calculations take too long to fit them all into one rate. In either case the “offset” is a valuable tool.

Let’s take the hypothetical case of a three-task system:

  • Task 1: Runs at 0.2 sec rate
  • Task 2: Runs at 0.4 sec rate
  • Task 3: Runs at 0.4 sec rate

The first way of arranging these would be to run all of the tasks starting at the same time step (a very simple dance with a high probability of stepping on your partner’s toes).

All together now! No “toe stepping,” but close.

If the total execution time of T1, T2, and T3 is less than your base rate (0.1 seconds here), you are fine; if not, you have overruns. The next option is to introduce an offset (and in this version of the waltz, you have no chance of toe stepping).

In this case, each task runs at a different time and the order of execution is the same (1, then 2, then 3). Everything is good, right?
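For readers who like to see the schedule written out, here is a minimal MATLAB sketch of the offset arrangement described above. The base rate, periods, and offsets match the hypothetical three-task example; the scheduling loop itself is purely illustrative (on a real target this job belongs to the RTOS or the rates generated from the model).

    % Illustrative base-rate scheduler: tasks are defined by {period, offset}.
    % Task names and timing values are hypothetical, matching the example above.
    baseRate = 0.1;                         % seconds
    tasks = struct('name',   {'T1', 'T2', 'T3'}, ...
                   'period', {0.2,  0.4,  0.4}, ...
                   'offset', {0.0,  0.1,  0.3});

    for tick = 0:7                          % first 0.8 seconds
        t = tick * baseRate;
        for k = 1:numel(tasks)
            % A task fires when (t - offset) is a non-negative multiple of its period
            if t >= tasks(k).offset && ...
               mod(round((t - tasks(k).offset)/baseRate), ...
                   round(tasks(k).period/baseRate)) == 0
                fprintf('t = %.1f s: run %s\n', t, tasks(k).name);
            end
        end
    end

Running this prints the execution order T1, T2, T1, T3, T1, T2, … which is exactly the “no toe stepping” waltz described above.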

What about incoming data?

When everything runs at the same time step, everything is using data from the same time step, e.g. T1, T2, and T3 all use the T(0) input data, then T(0.2), and so on. In the offset case, the first few executions look like this:

  • T1 uses T(0) data
  • T2 uses T(0.1) data
  • T1 uses T(0.2) data
  • T3 uses T(0.3) data…

In many (if not most) cases, using the freshest data will not cause an issue. However, if there are synchronization issues, then a “sample and hold” approach to the data may be required.
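If you do need the synchronization, a “sample and hold” can be as simple as latching the input only on the task’s own execution instants. The sketch below is a hypothetical MATLAB function (the tick arithmetic assumes the 0.1 s base rate and T2’s 0.1 s offset from the example); in Simulink, Rate Transition blocks normally handle this for you.

    function y = sampleAndHoldT2(u, tick)
    % Hypothetical sample-and-hold for T2's input: downstream consumers only
    % see a value that updates on T2's own schedule (period = 4 base-rate
    % ticks, offset = 1 tick), not the freshest base-rate sample.
    persistent uHeld
    if isempty(uHeld) || mod(tick - 1, 4) == 0
        uHeld = u;        % latch a fresh sample on T2's execution instants
    end
    y = uHeld;            % hold the latched value between executions
    end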

The ABC’s of Testing Interfaces

What is a testing interface and how is it different from a test?

  • A test is: A formalized measurement and objective comparison to validate the behavior of a system
  • A measurement is: A repeatable operation in which a quantifiable result is returned
  • An objective comparison is: An operation in which a pass / fail result can be returned
  • A test interface is: A generic method for calling a test(s)…
It isn’t enough to have connections; you need to know how to connect.

Good versus ok interfaces, USB-C and USB-B…

USB-B is a pretty good interface: it allows multiple devices to be connected and it is robust (e.g. you can jam it in a thousand times and it still works). Yet it is only okay, due to the “how many times before I get it in correctly” issue. USB-C, in addition to supporting faster data transfer,(1) solves the real issue: human usability.

It is for the developer

Good interfaces are designed with the end user in mind, i.e. the person who is not the expert on how to do the thing you want them to do. So what does that mean for how the interface should be implemented?

  • Consistency: You will have multiple test interfaces; keep the calling format consistent between them. For example, if the unit under test is always an input argument, don’t have it as the first argument in some and the last in others.
  • Error handling: If there are common errors, the interface should check for and, if possible, correct them.
  • Fast: Introducing an interface has some overhead; however, done properly, there can be an overall reduction in execution time.
  • Informative: The testing interface (and the infrastructure behind it) can implement error handling and messaging that would be a burden for an end user to implement.
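To make the consistency and error-handling points concrete, here is a sketch of what a MATLAB test interface wrapper could look like. The function name, argument order, and options are assumptions for illustration, not a prescribed API; the point is that the unit under test always comes first and the common error checks live in one place.

    function result = runUnitTest(unitUnderTest, testCase, varargin)
    % Hypothetical test interface: the unit under test is always the first
    % argument, the test case is second, and options follow as name/value pairs.
    p = inputParser;
    p.addRequired('unitUnderTest', @(x) ischar(x) || isstring(x));
    p.addRequired('testCase',      @(x) ischar(x) || isstring(x));
    p.addParameter('Tolerance', 1e-6, @isnumeric);   % example of a shared option
    p.parse(unitUnderTest, testCase, varargin{:});

    % Common error handling lives in the interface, not in every test.
    if isempty(which(char(unitUnderTest)))
        error('testInterface:missingUnit', ...
              'Unit under test "%s" is not on the MATLAB path.', char(unitUnderTest));
    end

    % Measurement + objective comparison: run the test and report pass/fail.
    result = runtests(char(testCase));
    fprintf('%s vs. %s: %d passed, %d failed\n', char(testCase), ...
            char(unitUnderTest), nnz([result.Passed]), nnz([result.Failed]));
    end

Every test interface in the suite follows the same calling pattern, so a developer who has used one has effectively used them all.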

Footnotes

  1. The USB-C data transfer rate is even greater than most people realize, e.g. the 15 seconds lost each time you try to connect a USB-B plug is time you are not transferring data.

My 15 Year MathWorks Anniversary…

On August 23rd, 2020, I woke up to a voice mail from Jack Little congratulating me on 15 years with The MathWorks. Honestly, it seems like yesterday that I started there. I thought today I would take some time to reflect on what I have learned in the last 15 years.

A slightly longer timeline

The Workflow Engineer

When I joined MathWorks I was the first “Workflow Engineer” that they hired. My job was to examine how customers used MathWorks products, understand what the limitations were, and make recommendations on how to make the tools and processes better. Some things have never changed.

M.A.A.B. Starting off with Style

Not to be puckish, but holding court for the MAAB was perhaps the foundation stone of my wider understanding of how MathWorks customers used, and wanted to use, Model-Based Design workflows. Further, this brought me into the world of safety-critical guidelines.

Making MathWorks MISRA-able

Understanding the MISRA-C guidelines and contributing to the 2012 guidelines was my proving ground (not to be confused with the GM proving grounds where I used to work) for software best practices. I very much enjoyed the challenge of formulating guidelines to enforce MISRA compliance and getting to the root cause of code generation issues.

Product development

I did a brief stint in the MathWorks product development group, working on what is now called Simulink Check and the Requirements Traceability tool (Simulink Check still looks a lot like it did when I worked on it, though the requirements tool has greatly evolved). It was during this time that my connection to software verification deepened. Over time, I began to understand the difference between verifying software versus control algorithms; the root difference is in the constraints on control algorithms: no one ever passes a string into your velocity controller.

Onward: Consulting

The last 9 years have seen me in a consulting role, driven by a desire to directly help varied customers while expanding my own knowledge. During this time I branched out from my Automotive background into Industrial Automation, Aerospace, and Medical Devices. About 5 years ago, the “itch to teach” also sprang back up and this blog was born.

Next 15 years?

If the past is any indicator, 15 years from now, I will be writing about the new, new best practices for Model-Based Design and helping to define what those boundaries are.

Last Thoughts

I hope you will forgive a more “personal” blog post. I will return to the normal content on Wednesday. To all of those I have worked with and learned from over the years: thank you, and I look forward to working with you again.

Michael

Don’t plan for 101 Variations

Somewhere between the famous Henry Ford quote, “Any customer can have a car painted any color that he wants so long as it is black,” and the near-infinite clothing options of the video game world lies the reality of most finished(1) software projects: half a dozen main variants with three to four sub-variants, some of which are mutually exclusive.

However, one common mistake in the software development process is the tendency to plan for all outcomes, or in other words, to have 101(2) variants.

How do we get here?

The proliferation of variants happens under two basic scenarios: “scope creep” and a failure to define requirements up front.(3) In either of these cases engineers start creating “what if” scenarios. “What if the car needs to drive faster than 90 mph?” “What if the XXX?” “What if my refrigerator needs to run on Mars?”(4)

Each of these scenarios is “possible,” and for the most part they come from engineers trying to ensure that the resulting code is robust; but the proliferation also comes from a lack of leadership.

Hold on to your requirements

This is where requirements traceability shines. If engineers, as they work, have a clear definition of what they need to be working on, and that definition is always present, then the outcome is predictable.

What happens when you let go?

Adding additional variants adds costs in three ways:

  1. Additional testing required: Not only does the additional variant need testing in a stand-alone mode, it also needs to be tested in the integrated system.
  2. Adding complexity to interfaces: Unique variants often require unique interfaces. The more interfaces that are added, the more likely it is that there will be integration issues.
  3. Speed!: Each additional variant adds overhead to each step of the development process, from testing to code generation.

Footnotes

  1. “Finished” with software can be a problematic concept; there can always be patches and updates. For the purpose of this article, finished refers to the first shipment of the product.
  2. When I write “101” here I am not saying “0b101”.
  3. Often “scope creep” occurs when there are poorly defined requirements, but it can also happen when the requirements are not well understood or enforced.
  4. These questions are in decreasing order of possible variants, unless you work for NASA, in which case it is something you should consider.

Not to Digress but it’s Time to Regress!

“What is the best way to solve the problem?” This is, of course, a very context-dependent question. The “best” depends on what you need out of your solution. Often, in the embedded context, best means fast, low memory, and “accurate enough.” This is where regression comes into play.

Regressions ~= Approximations

I have been doing a lot of coordinate transformation problems over the last couple of months: sine- and cosine-heavy operations, neither of which are efficient on my target hardware. My first thought was to use a Taylor series approximation (cos(x) = 1 – x^2/2! + x^4/4! – x^6/6! …); however, even with precomputation of the factorials (and leveraging the earlier powers to get the higher powers), the performance was still too slow.

My solution was to use a piecewise quadratic regression. While I paid a cost in determining which region I was in, the total execution time was 1/5 that of the direct cosine call and 1/2 that of the Taylor series.
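As a rough illustration of the trade-off (the breakpoints, region count, and input range below are assumptions for the sketch, not the values from the actual project), here is how a piecewise quadratic fit can be built and compared against the truncated Taylor series in MATLAB:

    % Illustrative comparison of cosine approximations over [0, pi/2].
    x = linspace(0, pi/2, 1000);

    % Truncated Taylor series (four terms, factorials precomputed)
    cosTaylor = 1 - x.^2/2 + x.^4/24 - x.^6/720;

    % Piecewise quadratic regression on three regions
    edges = [0, pi/6, pi/3, pi/2];
    cosPW = zeros(size(x));
    for k = 1:numel(edges)-1
        idx = x >= edges(k) & x <= edges(k+1);
        c = polyfit(x(idx), cos(x(idx)), 2);   % fit a quadratic to this region
        cosPW(idx) = polyval(c, x(idx));
    end

    fprintf('Max error, Taylor:              %.2e\n', max(abs(cosTaylor - cos(x))));
    fprintf('Max error, piecewise quadratic: %.2e\n', max(abs(cosPW - cos(x))));

On a desktop, of course, the built-in cosine wins on speed; the point of the regression is the embedded target, where the trigonometric call is the expensive part.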

Simple illustrations

As this last simple example shows, you don’t need the highest resolution to represent your target. Regressions only need to be “good enough.” But how do you know if a regression is good enough?

Ask the following questions…

  • What range do I need to cover: The larger the range of the regression, the more terms are required to keep the error level down. Understand the range of your inputs when designing the regression.
  • What resolution do I need: Hitting 99.999% accuracy requires a more complex regression than requiring 98% accuracy.
  • Are there spikes: Validate that there are no spikes (i.e. large errors) in the range under consideration (a minimal acceptance check is sketched after this list).
  • Are errors cumulative: One trick to speed up regressions is to use last-pass data; 99.999% of the time this does not introduce errors, but look out for the 0.001%.
  • How expensive is my regression: Finally, make sure the regression you come up with is less computationally expensive than the equations you had.
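The range, resolution, and spike questions can all be folded into an automated acceptance check that runs whenever the regression is regenerated. A minimal sketch, using an assumed single-region quadratic fit of cosine and assumed error limits:

    % Hypothetical acceptance check for a regression: verify the worst-case
    % error (spike check) and an accuracy target over the full expected range.
    xFit   = linspace(0, pi/6, 50);
    c      = polyfit(xFit, cos(xFit), 2);      % the regression under evaluation
    xDense = linspace(0, pi/6, 1e5);           % sample the whole covered range
    err    = abs(polyval(c, xDense) - cos(xDense));

    assert(max(err) < 5e-3, ...
           'Spike check failed: worst-case error is %.2e', max(err));
    assert(max(err) / max(abs(cos(xDense))) < 0.02, ...
           'Accuracy target (98%%) not met');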

Connecting the dots: the power of Physics

When performing a regression for control systems, more often than not the basic underlying equations are understood. Because of this, the correct form for the regression can be determined up front. For more information on performing regressions, I would recommend this link from MathWorks.

What’s Lost in Translation: OEM to Vendor, Vendor to OEM

As the title may hint, this post will feature another of my German Language Attempts. (1) Today I want to address a thorny issue: how does company A work with company B in a Model-Based Design context? How do you share and protect intellectual property while taking advantage of all of the benefits of Model-Based Design?

Requirements, ICD, and Models

In a traditional development environment, an OEM would provide their vendor with a set of requirement documents,(2) which may or may not include an ICD. The difference with Model-Based Design is the models.(3)

There are three ways that models impact the OEM / Vendor relationship. (4)

  1. Model as test harness: The OEM can provide plant models and test harnesses to validate the behavior of the vendor’s model.
  2. Model as requirement:(5) The model can act as a second layer of requirements, the executable spec.
  3. Model for integration: The model can be used to provide an integration harness with the larger system.

Each of these three items enables faster development by reducing mistakes in the hand-off and enabling early verification.

What is “new” with Models

There is additional information required when exchanging models instead of code(6) or object files: the associated metadata of the models needs to be exchanged. These are the “new” things that need to be specified up front.

  1. What are the model configuration settings? (A minimal check is sketched after this list.)
  2. How do you store data (parameters)?
  3. What is your architectural approach?
  4. Modeling standards…
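To make the first item concrete, here is a sketch of the kind of up-front check the two parties can agree on: confirm that a handful of configuration parameters match between the OEM’s integration model and the vendor’s delivered model. The model names and the parameter list are placeholders for whatever the ICD actually specifies.

    % Illustrative configuration comparison; model names and parameters are placeholders.
    models = {'oemIntegrationModel', 'vendorComponentModel'};
    params = {'Solver', 'FixedStep', 'ProdHWDeviceType'};

    load_system(models{1});
    load_system(models{2});
    for k = 1:numel(params)
        v1 = get_param(models{1}, params{k});
        v2 = get_param(models{2}, params{k});
        if ~isequal(v1, v2)
            fprintf('Mismatch in %s: "%s" vs. "%s"\n', params{k}, v1, v2);
        end
    end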

I have a secret

The final question with OEM / Vendor relationships is IP; how do you exchange models without giving away the secret sauce? Within the MATLAB / Simulink environment there is the ability to create protected models that can be used to generate code and simulate the model without giving away the “sauce”.
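Protected models are created from the command line (or from the model’s context menu); a minimal sketch with a placeholder model name is shown below. Options on Simulink.ModelReference.protect control whether the protected model supports code generation, whether a report is produced, and whether the contents are encrypted, so check the documentation for the combination your exchange requires.

    % Create a protected model (.slxp) that a partner can reference and simulate
    % without seeing the implementation; the model name is a placeholder.
    load_system('vendorComponentModel');
    Simulink.ModelReference.protect('vendorComponentModel');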

Auf Deutsch!

Ja, es ist Zeit für eine deutschsprachige Version des Blogs. Die Frage von heute ist: “Wie arbeitet Büro “A” mit Büro “B” und Model-Based Design?” Wie teilen Sie Modelle und schützen gleichzeitig das geistige Eigentum? Können Sie dies tun, ohne die Vorteile des MBD zu verlieren?

Anforderungen, ICD, and Modelle

In einer traditionellen Entwicklungsumgebung würde ein OEM seinem Lieferanten eine Reihe von Anforderungsdokumenten zur Verfügung stellen, die ein ICD enthalten können oder auch nicht. Der Unterschied zum modellbasierten Design sind die Modelle.

Es gibt drei Möglichkeiten, wie sich Modelle auf die Beziehung zwischen OEM und Lieferant auswirken.

  1. Modell als Test-Harness: Der OEM kann Streckenmodelle und Test-Harnesses zur Verfügung stellen, um das Verhalten des Modells des Lieferanten zu validieren.
  2. Modell als Anforderungen: Das Modell kann als zweite Anforderungsschicht, die ausführbare Spezifikation, fungieren.
  3. Modell für Integration: Das Modell kann verwendet werden, um eine Integrationsumgebung für das größere System bereitzustellen.

Jeder dieser drei Punkte ermöglicht eine schnellere Entwicklung, da es weniger Fehler bei der Übergabe gibt und eine frühzeitige Überprüfung möglich ist.

Was ist “neu” an Modellen?

Beim Austausch von Modellen anstelle von Code- oder Objektdateien sind zusätzliche Informationen erforderlich. Es sind die zugehörigen Metadaten der Modelle, die ausgetauscht werden müssen. Dies sind die “neuen” Dinge, die im Vorfeld spezifiziert werden müssen.

  1. Was sind die Einstellungen der Modellkonfiguration
  2. Wie speichern Sie Daten (Parameter)
  3. Was ist Ihr architektonischer Ansatz?
  4. Modellierungsstandards…

Ich habe ein Geheimnis

Die letzte Frage bei den Beziehungen zwischen OEM und Lieferanten ist das geistige Eigentum; wie tauscht man Modelle aus, ohne die geheime Soße zu verraten? Innerhalb der MATLAB/Simulink-Umgebung gibt es die Möglichkeit, geschützte Modelle zu erstellen, die zur Codegenerierung und Simulation des Modells verwendet werden können, ohne die “Soße” zu verraten.

Footnotes

  1. As I think about “translation,” for languages or code, I ponder what is lost in translation through miscommunication, transcription errors, and “missing content.” The value of the “model-centric” approach to development, without the chance for those errors, becomes clear.
  2. More often than not, when requirement documents are handed off it is an iterative process with several back and forths before the requirements are finalized.
  3. I highly recommend taking a look at the NASA site on model rockets, not only is it a very good primer on rocketry in general, it is also filled with a joy of understanding.
  4. I remember playing “3 or more” in middle school, eventually we realized it was a solved game and had to invent our own rules.
  5. The model should not be treated as the only requirement document; rather, it acts as a supplement or derived requirements document. It is possible to use the model as the only requirement document; however, in that case the requirements model is not the model you use for production.
  6. It may be more accurate to say there is different information required; when exchanging Code you need to define coding standards, calling rates….

A short, sharp shock to your testing cycle(1)

The following dramatization is for education purposes only:

  • Me (as Actor 1): “did you run the test suite before you checked in your code?”
  • Actor 2: “No, it takes too long to run.”

Cue dramatic music…

I understand where Actor 2 was coming from, despite the day I lost to debugging their issue; running the full test suite on their local machine (which would have exposed the issue) would have taken 2 hours, during which they would not have been able to perform other tasks. At the same time, the CI process, due to heavy loading, would have taken roughly the same amount of time. They took the common approach of verifying that their component worked without examining its impact on the rest of the system; for them as an individual, the productivity cost was too high. This could be seen as a case of an unaccounted-for externality.

Identify your bottleneck: Moore’s Law is not enough(2)

Often the first response when tests are running too slowly is to “throw more computational power” at the problem. While this may be part of the solution, it is not sufficient to address a large organization’s testing woes. Rather, identifying which parts of the process are running slowly, or unnecessarily, should be your first step.

Bottleneck #1

The bottleneck with perhaps the greatest impact is the failure to optimize testing routines. While the correctness and robustness of test routines must be the first priority, speed of execution needs to be a close second. For example, several years ago I was working with a company on their test suite, and they had a comparison function that was used in 70% of all of their tests. Their implementation used a for loop as part of the comparison operation, which was ~6% slower than the vector-based operation in MATLAB.

When I pointed out the speed difference, they expected to see a 6% speed improvement in their test cases by making the change; in reality they saw close to an 11% improvement, due to the way the same function was used in multiple locations. If you are using MATLAB as your development environment, I would recommend using the Profiler to identify where you have unnecessarily(3) slow code.
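The comparison-function story looks roughly like the sketch below (the data sizes and tolerance are made up for illustration); the profiler commands at the end are how I would go hunting for the next bottleneck.

    % Illustrative loop vs. vectorized comparison (sizes and tolerance assumed).
    expected = rand(1, 1e6);
    actual   = expected + 1e-9*randn(1, 1e6);
    tol      = 1e-6;

    tic                                   % loop-based comparison
    passLoop = true;
    for k = 1:numel(expected)
        if abs(actual(k) - expected(k)) > tol
            passLoop = false;
            break
        end
    end
    tLoop = toc;

    tic                                   % vectorized comparison
    passVec = all(abs(actual - expected) <= tol);
    tVec = toc;

    fprintf('Agree: %d. Loop: %.4f s, vectorized: %.4f s\n', ...
            isequal(passLoop, passVec), tLoop, tVec);

    % profile on; <run your test suite here>; profile viewer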

Bottleneck #2

Before jumping in the shower, I stick my hand under the water to test the temperature.(4) I do not look at the weather forecast, check my calendar, or see if the stove is still on. In other words, I run the test that is required for what I am currently doing. There are two complementary approaches to reducing the testing load: dependency analysis and meta data tagging.

  • Dependency analysis: In this instance, tests are associated with a model and then the model’s dependencies are determined. If any of the files that the model depends on change, then the associated tests are run.
  • Meta data: In this case, a test is given a “tag/type”; tests of a given type are run as a batch. The tags can be related to a system (such as engine or battery) or a common task (like standards checking).
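Both approaches are scriptable in MATLAB. In the sketch below, the file, folder, and tag names are placeholders; the first half lists what a test file depends on (so you can compare it against a change set), and the second half runs only the tests carrying a given tag.

    % 1) Dependency analysis: list what a test (and its unit under test)
    %    depends on, then only re-run it when one of those files changes.
    deps = matlab.codetools.requiredFilesAndProducts('batteryControllerTest.m');
    disp(deps')

    % 2) Meta data tagging: run only the tests labelled with a given tag.
    results = runtests('tests', 'IncludeSubfolders', true, 'Tag', 'battery');
    table(results)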

Bottleneck #3

The final bottleneck is unneeded or redundant tests: sometimes tests remain in the system for projects that are no longer active; sometimes two tests cover the same thing. Regular pruning of the test suite takes care of this issue.

Footnotes

  1. One day as a self challenge, I am going to try to post Model-Based Design using a Patter Song format. “I write the intro Models for the modern MBDer”
  2. Moore’s law should really be called Moore’s Observation.
  3. Unnecessarily is an important word. Remember, if you have a choice between robust & correct versus fast, then choose robust & correct. Or better yet, find the cases where you can take the fast path and those you cannot and run the correct routine for the situation.
  4. This should be considered a safety critical test; too hot and you burn yourself, too cold and well then, your shower isn’t good.

“Old” Methodologies… and Model-Based Design

In my second semester of graduate school, fresh off the introductory Computational Fluid Dynamics (CFD) course, I was given the following problem by the experimental methods professor:

Assume you have a wall with 8 layers, each made of a different insulating material with heat transfer coefficient h_i. At the start of the experiment, the temperatures at the boundaries are T_lower and T_upper. Assuming the experiment runs for 1 hour, what are the final temperatures at each layer, assuming no additional heat input?

Most of the class members proceeded to implement a “simple”(1) CFD algorithm to solve this problem. On the day of submission, the professor demonstrated his solution: a simple series of differential equations solved using a Laplace transform, something we had all learned as undergraduates.

What he demonstrated simply and easily is that often basic methodologies provide simple and robust solutions to problems.

Bridging the old to the new

In a blog dedicated to Model-Based Design, a “new” technology, why would I take the time to write about “old” technologies?

  1. Provide a map between old and new: When you are learning a new area, understanding how it maps onto previous domains speeds up the learning process.
  2. Old and new are really the same: This primarily relates to design patterns and development workflows. While the tools to do the work may change, the task remains the same.
  3. The new is an evolution of the old: In software development, it is frequently the case that the new solution is really an evolution of the old solution. As a result, the approaches that you learned from the old can be used in the new.
  4. Old methodologies provide insight into the fundamental nature of the problem: It is common that, after multiple iterations, fundamental issues inherent to the problem are understood and encoded in the solution.
  5. Sometimes old is better: There are times when reinventing the wheel should be done;(3) but in other cases, well, you have a fine wheel.

Old methods, new domain: what has changed?

The primary change that we see in the migration from traditional to Model-Based Design processes is the degree to which processes can be automated natively in the design toolchain. Let’s look at the highlights:

  • Requirements: Like all software design processes, everything starts with the requirements document. With Model-Based Design it is possible to fully automate requirements traceability and checking within the development process.
  • Development: One of the primary advantages of Model-Based Design is the ability to perform experiments (simulations) natively in the development environment. This greatly simplifies the design process, as the developer can easily generate stimulus / response information.
  • Implementation: In the MBD environment, the target software is created from the development models through the code generation process.
  • Verification: Since the implementation comes from the development model, the verification process is simplified: the tests on the development model provide the baseline for the release tests.
  • Release: Here, the integrity checks that would have been manual can be offloaded to built-in automation (a minimal sketch follows this list).
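As a small example of the release-step automation (the folder name is a placeholder, and a real gate would normally also include standards and Model Advisor checks), a CI job can be as simple as:

    % Minimal release-gate sketch: run the full test suite and fail the job
    % if anything does not pass. The 'tests' folder name is a placeholder.
    import matlab.unittest.TestSuite
    import matlab.unittest.TestRunner

    suite   = TestSuite.fromFolder('tests', 'IncludingSubfolders', true);
    runner  = TestRunner.withTextOutput;
    results = runner.run(suite);

    assert(all([results.Passed]), '%d test(s) failed; release gate not passed.', ...
           nnz([results.Failed]));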

Footnotes

  1. A “simple” CFD algorithm meaning about 2,000 lines of code, not including calls to LINPACK.(2)
  2. Take a look at the third contributor to LINPACK: Cleve Moler, the inventor of MATLAB. Little did I know at the time that his would be the company I would end up working for.
  3. The phrase “Don’t reinvent the wheel” misses the fact that there are times when your existing wheel isn’t doing everything you need. When that is the case, it is time to reinvent it.