Readers’ questions! Your chance to ask!(1)

Do you have a question about Model-Based Design? About juggling, swimming, or how Model-Based Design could make your juggling routines go swimmingly? Here is your chance to ask me, and I will respond in an upcoming post!

While you are at it, sign up and get this blog by email.

Footnotes

  1. If I included spam I would have questions like:
    • “Do you want to be a millionaire?” Yes, and a socially responsible one.
    • “Your car warranty is running out! Do you want to renew?” While they got the car right, it was one that my wife Deborah and I sold 20 years ago.
    • “How much wood would a wood chuck chuck if…”

When differences matter: debugging

If you are chopping up vegetables for a salad (carrots, peppers, onions…), the order in which you chop and add them to the bowl does not matter. On the other hand, if you are sautéing those same vegetables, the order in which they are added to the pan matters a great deal.(1)

Mathematically,
X = A + B
and
X = B + A
are the same;(2) however, if those were lines of code, a basic differencing tool would flag this as a change. The same problem holds true when differencing a model.
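As a tiny illustration (a minimal MATLAB sketch with arbitrary values, not tied to any particular differencing tool): the two expressions produce identical results, yet a character-level comparison of the source text still reports a change.

```matlab
% The two assignments are numerically identical...
A = 3;  B = 4;
x1 = A + B;
x2 = B + A;
isequal(x1, x2)                     % true: addition is commutative
% ...but a naive text comparison of the source lines still flags a difference.
strcmp('X = A + B', 'X = B + A')    % false: a character-level diff sees a change
```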

When creating models, the transformations (throwing into a bowl or into a hot pan) applied to the data (your vegetables) matter when determining whether two things are equivalent.

Why automated differencing fails

Automated differencing detects structural changes; in some instances that is significant,(3) but in others it is, like our salad example, something trivial. Differencing tools lack the ability to determine context. So when and why should we perform differencing?

Differencing to debug

Debugging in 4 steps

  1. Detect an issue: Ideally the issue is caught through the use of regression testing.
  2. Determine if it is a bug: As you evolve your algorithm, outputs may change. Determine if the change represents a deviation from your requirements.(5)
  3. Determine the portion of the model responsible (tracking): In general this means tracking down where the variable in question is calculated and then backtracking to the change.
  4. Implement the patch: Once you have determined the source of the problem, implement the solution.(6)

I use differencing as a tool when I am debugging a problem; this is the tracking stage of debugging. For complex models it is a time saver in determining which changes may have created the bug.

Filter!

While there are many types of filters, they all perform a separation function: letting what you want through and catching the rest. When setting up your differencing tool, you want to turn on the “ignore formatting changes” option.

For a text differencing tool, formatting changes include things like spaces, tabs, and line breaks. For a graphical differencing tool they include things like block positions or names; in other words, you are filtering out non-transformative changes.
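Here is a minimal sketch of what an “ignore formatting changes” filter does for text, assuming the two lines differ only in whitespace (the normalization rule is illustrative, not any particular tool’s behavior):

```matlab
% Two source lines that differ only in formatting (extra spaces, missing newline).
lineA = sprintf('x = a + b;\n');
lineB = 'x   =  a + b;';
strcmp(lineA, lineB)                        % false: a naive comparison flags a change
% Filter: trim the ends and collapse internal whitespace before comparing.
normalize = @(s) regexprep(strtrim(s), '\s+', ' ');
strcmp(normalize(lineA), normalize(lineB))  % true: the formatting change is filtered out
```

A graphical differencing tool applies the same idea to block positions and names rather than whitespace.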

Final thoughts: differencing for reviewing?

In general I do not find model differencing a useful tool during model reviews. It distracts the reviewers from understanding what changed functionally by having them focus on what changed structurally. Unless you are performing an architectural review, I recommend reviewing functional simulation results rather than the model diagram.(7)

Footnotes

  1. Carrots or onions first? It depends on what you want to achieve. If you are going for caramelized onions then those go in first, but if you want the carrots to caramelize then they need to go in first. Either way the peppers will go in near the end.
  2. This is the commutative property.
  3. In this example the second image has a flower in it. This is significant as it gives the bees a source of food. This allows them to produce honey which in turn leads to the bears bee-ing (4) happier (hence the two holding hands).
  4. Of course the bees may not be happier with the bears taking their honey.
  5. There are generally two types of tests: requirements-based and baseline. In requirements-based tests, a failure should always indicate a problem (unless you wrote the test with tolerances that were too tight). With baseline (regression) tests, it is possible that a change in the output does not impact the requirements.
  6. Note: if the change was made by someone other than yourself (e.g. a developer of another module), consult with them to understand the reason for the change. It may be that the requirements have shifted.
  7. Google Image search returns some wonderful and sometimes odd results. In this case the search term was “structural versus functional changes.”

Wacky from WebEx: Better meetings with MBD

The following is a two-week running average:(1)

  1. Can you hear me? (8 times per day)
  2. I can’t hear you! (4 times per day)
  3. Hold on, let me restart that (3 times per day)
  4. I love Video Calls (sarcastically: 5 times per day; honestly… still waiting)

The use of WebEx, Teams, or morose code is the central fact of life during the COVID shelter in place as is the general irked response to these meeting malfunctions. There are good general tips on Video Conference best practices. What they are all trying to address is how to make the most of an artificial interaction environment.(2)

The good news is that Model-Based Design offers a singularly powerful platform in which to collaborate.

What is new for MBD WebEx meetings? What to do?

What then is different about meetings over WebEx and what makes it unique for Model-Based Design?

  • Attention span is lower in video conference meetings: Homework is key
    • But by its nature, MBD lends itself well to the generation of artifacts that support online reviews.
  • Focus on the “item” is key: knowing what you want to share and how best to present it is critical
    • The graphical nature of models makes them easier to review online than text-based languages. When presenting validation results, graphs (e.g. plots with error bars) should be leveraged.

How to make it better with Model-Based Design

OK, first off, let’s say it… many people during meetings are following links instead of following the meeting. What can Model-Based Design do to support people “being there” and staying engaged during the meeting? Assuming you and your audience have done your homework,(3) it becomes a question of “what types of things should I be reviewing and what do I need to present?”

These recommendations are useful for any design process. Note: the “Homework” items are the areas where the automation and simulation aspects of Model-Based Design greatly improve the process…

  • Interfaces: when reviewing interfaces the ICD is your best friend
    • Objective(s):
      1. Sign off / agreement on the proposed interface
      2. Validation that the generated interface matches the agreed upon version
      3. Modification to an agreed upon interface
    • Homework:
      1. Document who produces and consumes the information
      2. Generated document of the realized and proposed ICD(4)
      3. Again, document who produces and consumes the information as well as the impact on downstream modules.
  • Requirements: For now assume that the requirements are written and you are performing validation steps.
    • Objective(s):
      1. Ensure that everyone understands the requirements in the same way
      2. Demonstrate that all of the requirements have been validated
      3. Hold an exception or modification meeting
      4. Demonstrate traceability of the requirement
    • Homework:
      1. Consider using the model to demonstrate the requirement
      2. Provide the test cases that demonstrate that the requirement has been met
      3. Provide the technical data that shows why the requirement cannot be met (this can be done through the use of closed loop simulation)
      4. Generate the report showing the full requirement’s traceability
  • Standards: Conformance to standards allows for easier communication
    • Objective(s):
      1. Validation of standards that require “engineering judgment”(5)
    • Homework:
      1. Provide the referenced guidelines as well as any commentary on the rationale for the guideline. In safety critical conditions, provide validation that the compliance adheres to safety standard requirements.
  • Bugs: when you are trying to get to the root cause of an issue
    • Objective(s):
      1. Understand what is causing the problem and what the required outcome is
      2. Understand what is allowable under the requirements for the feature
    • Homework:
      1. Provide information on what has been tried to debug the issue (and be willing to revisit it)
      2. Provide simulation data that demonstrates the bug “in action”
      3. Isolate the bug to the “smallest” module possible
What is engineering judgement?

Footnotes

  1. These statistics represent 2 ~ 4 meetings per day, 5 days a week. Everyone has said items 1 or 2 at least once.
  2. In an average meeting much of the information is communicated informally through body language, side conversations and the ability of people to react to how everyone in the room is responding.
  3. What is “meeting homework”? Three things:
    1. Define what you want to cover in the meeting; information to present and questions to be answered.
    2. Provide the audience with information beforehand so they can respond to your questions/presentation.
    3. Have supporting information ready to present during the meeting; it’s important to avoid item 3 above, “let me restart that,” happening more than once in a meeting.
  4. There are multiple tools to generate the ICD document from a Simulink model; the comparison to the proposed interface can likewise be automated.
  5. Engineering judgement isn’t about having an opinion; it is about knowing the impact of your decision if you are wrong.

Stop Cursing at Recursion

A recursive function(1) is a function that calls itself until an exit condition is met(2) or until a bounded number of iterations has passed.(3) This is extremely useful when the problem involves searching nested branches with an unknown number of branches.

Meanwhile(4) in Model-Based Design…

In general, safety standards(5) frown upon recursive functions due to their unbounded nature. Determining if recursion can be used in your algorithm is a 2-step process (with some recursion involved).

  1. Determine the memory use and execution time of the recursive function
  2. Determine the “worst case to solution” input data
    • Calculate the time and memory usage
    • If this is acceptable for your system, then exit
    • Otherwise, go back to step 2…

Note: in this example, worst case means that you are looking at how many iterations are required to reach an acceptable solution. Acceptable is not the same as fully converged; you want a solution that is close enough to correct for your controller’s requirements.

Comic credit: SMBC Comics.
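To make the bounded-recursion idea concrete, here is a minimal MATLAB sketch (the function name, tolerance, and depth limit are illustrative assumptions, not a standard): a recursive bisection search that stops on either an exit condition or a hard depth bound, and reports how deep it went so you can feed that number into the worst-case time and memory estimate.

```matlab
function [x, depth] = boundedBisection(f, lo, hi, tol, maxDepth, depth)
    % Recursive search for a zero crossing of f on [lo, hi], bounded in depth.
    if nargin < 6, depth = 0; end
    x = (lo + hi) / 2;
    % Exit condition: close enough, or the bounded number of iterations reached.
    if abs(f(x)) < tol || depth >= maxDepth
        return
    end
    if sign(f(lo)) == sign(f(x))
        [x, depth] = boundedBisection(f, x, hi, tol, maxDepth, depth + 1);  % root is in the upper half
    else
        [x, depth] = boundedBisection(f, lo, x, tol, maxDepth, depth + 1);  % root is in the lower half
    end
end
```

For example, [x, n] = boundedBisection(@(x) x^2 - 2, 0, 2, 1e-3, 20) returns x ≈ 1.414 with n well under the depth limit; sweeping the “worst case to solution” inputs and recording the largest n gives the bound you need.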

When?

Most traditional control problems do not require recursion to solve; however, with the onset of adaptive controls and, more importantly, networked controllers, the need to exhaustively cover all branches in the solution comes into play. Again, it comes down to 3 questions:

  1. Is there a closed-form solution: Can you mathematically solve the problem either directly or in a known number of steps?
    If “no,” continue on
  2. Is there an unknown depth or branch issue: Is the final path known ahead of time?
    If “no,” continue
  3. Do you need it now: In some cases you can perform recursion the slow way, i.e. the same function is called once per time step, maintaining state information (see the sketch after this list).
    If you “need it now,” you need recursion!
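Here is a hedged sketch of the “slow way” (the names and the bisection example are illustrative assumptions): the same function is called once per time step and the search interval is carried as state between calls, so no single step blows the timing budget.

```matlab
function [x, done] = stepwiseBisection(f, reset, lo0, hi0, tol)
    % One bisection step per call; the interval [lo, hi] persists between time steps.
    persistent lo hi
    if reset || isempty(lo)
        lo = lo0;  hi = hi0;         % (re)initialize the search interval
    end
    x = (lo + hi) / 2;
    done = abs(f(x)) < tol;          % exit condition, checked once per time step
    if ~done
        if sign(f(lo)) == sign(f(x))
            lo = x;                  % root lies in the upper half
        else
            hi = x;                  % root lies in the lower half
        end
    end
end
```

The trade-off is latency: the answer arrives several time steps later, which is exactly why “do you need it now” is the deciding question.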

Footnotes

  1. I had multiple typos (and misleading auto-corrections) when writing this
    • re-cursive: a note about handwriting
    • re-cur-son: a note about a dog’s puppy
  2. When you meet an exit condition do you say hello or goodbye? Perhaps Aloha, or Ciao, or Shalom, or Annyeong or Salām is the correct word to use.
  3. This is critical in an embedded system where consistent timing and memory limitations are critical.
  4. While loops are similar to recursion in that they execute until an exit condition is reached, they do not enable branching determination.
  5. By default, recursive code generation is prohibited in Simulink for this reason. It can be enabled as shown here; the rest of this blog is about when to consider using it.

Standing up for Model-Based Design: Agile Workflows

For the past 8 years I have been helping customers adopt agile workflows as part of their Model-Based Design process. As acceptance of agile processes grows I have seen a corresponding growth in MBD / Agile best practices. With 5 points, I will illustrate how Model-Based Design and Agile Development workflows fit together.

Continuous Integration: The opposable thumb

Model-Based Design and Continuous Integration (CI) go together hand in glove; the central nature of continuous testing to agile development processes is why I consider this the “thumb” of MBD / Agile workflows. And why is Model-Based Design with Simulink so powerful for CI systems? Built-in access to simulation and therefore to testing.

Rapid iterations / system integration: Pinky

One of the goals of Agile workflows is to allow engineers to work independently while being able to collaborate seamlessly. The key to this is the use of integration models or system architecture tools. The individual engineer works on their component while having an integration framework to validate against.

Why pinky: Because testing (thumb) and integration are the boundaries of Agile.

Making the most out of Stand Ups: Ring

Stand-up meetings, or check-ins, are intended to be quick reviews of outstanding issues. Having effective tools to aid in the review is critical. The graphical nature of Model-Based Design (and the ability to easily generate plots and graphs) means that the required information can easily be conveyed. (Note: bad stand-up meetings can seem like they last for 24 hours. I would recommend this site for best practices.)

Why ring: In stand ups we all stand around the table

Cross group collaboration: Middle

Working across groups is one of the higher-order objectives of Agile workflows. Model-Based Design, with the use of architectural models, allows groups to design not just the control algorithms but also the system-level architecture, the physical requirements, and the hardware interfaces.

Why middle: Because this is the center of all the work.

One model, many uses: Reuse: Index

Choose the “right” one

In an ideal Agile workflow, your work product is iterated on throughout the design cycle. In a standard Model-Based Design workflow, you reuse the same model from start to finish.

Why index: Because the one concept indexes into the other.

Final thoughts / lack of footnotes

In writing a post about “the hand in glove,” footnotes didn’t seem right, so the final thoughts will have to do. Agile workflows, like waterfall, test-driven development (TDD), or one of many others, are good practices to follow, assuming that the work environment follows rational design processes. Know when to bend and when to hold fast to the process. For another view on Model-Based Design and Agile workflows, I highly recommend this post.

Rabbit Holes: Constraining your model(s)…

The question of “how big should my model be” is an evergreen Model-Based Design question. Too small and time is wasted on build, test, and review tasks. Too large and the model is difficult to debug and maintain. The recommended size has always been a range with one critical factor: the ease of communicating information from one person to another. But in the COVID remote working environment, has the sweet spot shifted, and if so, what can Model-Based Design do to address it?

Going down the rabbit hole

The English idiom “going down the rabbit hole” is a reference to Lewis Carroll’s ‘Alice in Wonderland’ and refers to any pursuit where action is taken but the outcomes are ‘never ending and nonsensical’. Remote work can easily lead people down rabbit holes due to a lack of communication. In the office, sitting next to Sid & Beatrice or Marie & Pierre, I have opportunities for regular, small sanity checks. How can we foster that same culture from behind a webcam?

Bring down the wall

When working remotely, there are 2 keys to effective Model-Based Design reviews:

  1. Select the focus of your review: Are you performing an architectural review or a functional review?
    • Architectural: provide the ICD and the execution flow of the atomic units
    • Functional: Provide the simulation results and the requirement documentation
  2. Leverage integration models: Here is the MBD key: the use of shell (child) and integration models gives you a natural decomposition between the architectural reviews (integration models) and the functional reviews (shell/child models)

How this keeps you whole

Rabbit holes are best avoided through timely reviews. By leveraging the Integration / Shell concepts, a natural break point in the review process exists.

Shell models map onto 2 ~ 4 high-level requirements. When you have completed the work on any of those requirements, call for a review.

Integration models map onto system-level requirements. When any of these change, call for a review.

Reflecting back to the start

In the same spirit of avoiding rabbit holes, we want to prevent the mirror problem of too many reviews. Unless the review is part of an Agile stand-up, 45 minutes to an hour is an appropriate amount of time. If reviews are routinely taking less than 30 minutes, reconsider the tempo of the meetings. While time spent in reviews is important, if you spend too much time in them you end up with the Red Queen’s Dilemma.

In praise of the offset…

I am now waltzing into a short 1-2-3 posting on scheduling. Multi-rate systems exist for two reasons: either the calculations do not need to update frequently, or the calculations take too long to fit them all into one rate. In either case, the “offset” is a valuable tool.

Let’s take the hypothetical case of a 3-task system:

  • Task 1: Runs at 0.2 sec rate
  • Task 2: Runs at 0.4 sec rate
  • Task 3: Runs at 0.4 sec rate

The first way of arranging these would be to have every task start on the same time step (a very simple dance with a high probability of stepping on your partner’s toes).

All together now! No “toe stepping” but close

If the total execution time of T1, T2, and T3 is less than your base rate (0.1 seconds here), you are fine; if not, you have overruns. The next option is to introduce an offset (and in this version of the waltz, you have no chance of toe stepping).

In this case, each task runs at a different time and the order of execution is the same (1, then 2, then 3). Everything is good, right?
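A minimal sketch of that offset schedule (base rate of 0.1 s; T1 at a 0.2 s rate with no offset, T2 at a 0.4 s rate with a 0.1 s offset, T3 at a 0.4 s rate with a 0.3 s offset; the tick arithmetic is the point, and the print statements stand in for the real tasks):

```matlab
baseRate = 0.1;                       % seconds per base-rate tick
for k = 0:7                           % eight base-rate ticks = 0.8 s of the schedule
    t = k * baseRate;
    if mod(k, 2) == 0, fprintf('t=%.1f  run T1\n', t); end   % 0.2 s rate, no offset
    if mod(k, 4) == 1, fprintf('t=%.1f  run T2\n', t); end   % 0.4 s rate, 0.1 s offset
    if mod(k, 4) == 3, fprintf('t=%.1f  run T3\n', t); end   % 0.4 s rate, 0.3 s offset
end
```

With these offsets, no two tasks ever share a base-rate tick, which is why the toe stepping disappears.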

What about incoming data?

When everything runs at the same time step, everything is using data from the same time step, e.g. T1, T2, and T3 all use the T(0) input data, then T(0.2), and so on. In the offset case, the first executions look like this:

  • T1 uses T(0) data
  • T2 uses T(0.1) data
  • T1 uses T(0.2) data
  • T3 uses T(0.3) data…

In many (if not most) cases, using the freshest data will not cause an issue. However, if there are synchronization issues, then using a “sample and hold” approach to the data may be required; a minimal sketch follows.
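Here is a hedged sample-and-hold sketch (the function and flag names are illustrative assumptions): the shared input is latched once per frame, and tasks that run at an offset read the held value instead of the freshest one.

```matlab
function y = sampleAndHold(u, latchNow)
    % Return the value of u captured the last time latchNow was true.
    persistent held
    if isempty(held) || latchNow
        held = u;                    % latch a new sample at the start of the frame
    end
    y = held;                        % offset tasks read the held value, not the freshest u
end
```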

The ABC’s of Testing Interfaces

What is a testing interface and how is it different from a test?

  • A test is: A formalized measurement and objective comparison to validate the behavior of a system
  • A measurement is: A repeatable operation in which a quantifiable result is returned
  • An objective comparison is: An operation in which a pass / fail result can be returned
  • A test interface is: A generic method for calling a test(s)…
It isn’t enough to have connections; you need to know how to connect

Good versus OK interfaces: USB-C and USB-B…

USB-B is a pretty good interface: it allows multiple devices to be connected and it is robust (e.g. you can jam it in a thousand times and it still works), and yet it is only okay due to the “how many tries before I get it in correctly” issue. USB-C, in addition to supporting faster data transfer,(1) solves the real issue: human usability.

It is for the developer

Good interfaces are designed with the end user in mind, i.e. the person who is not the expert on how to do the thing you want them to do. So what does that mean for how a test interface should be implemented? (A minimal sketch follows the list below.)

  • Consistency: you will have multiple test interfaces; keep the calling format consistent between them. For example, if the unit under test is always an input argument, don’t have it as the first argument in some interfaces and the last in others.
  • Error handling: If there are common errors, then the interface should check for them and, if possible, correct them.
  • Fast: Introducing an interface has some overhead; however, done properly, it can yield an overall reduction in execution time.
  • Informative: The testing interface (and the infrastructure behind it) can implement error handling and messaging that would be a burden for an end user to implement.
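As a hedged sketch of those four points (the function name, the struct fields, and the tolerance-based comparison are assumptions, not a prescribed API): the unit under test is always the first argument, a common error is checked up front, and an informative pass/fail result is returned.

```matlab
function result = runTest(unitUnderTest, testCase)
    % Consistent calling format: unit under test first, test definition second.
    result = struct('passed', false, 'message', '');
    % Error handling: catch the common "unit not on the path" mistake early.
    if isempty(which(unitUnderTest))
        result.message = sprintf('Unit under test "%s" was not found on the path.', unitUnderTest);
        return
    end
    % Measurement: run the unit and capture a quantifiable result.
    actual = feval(unitUnderTest, testCase.input);
    % Objective comparison: pass/fail against the expected value and tolerance.
    result.passed  = abs(actual - testCase.expected) <= testCase.tolerance;
    result.message = sprintf('actual = %g, expected = %g (tolerance %g)', ...
                             actual, testCase.expected, testCase.tolerance);
end
```

A call might look like runTest('myVelocityController', struct('input', 10, 'expected', 9.8, 'tolerance', 0.5)) (names are placeholders); every test interface in the suite keeps that same shape, which is what makes it fast to learn and hard to misuse.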

Footnotes

  1. The USB-C data transfer rate is even greater than most people realize; e.g. the 15 seconds lost each time you try to connect a USB-B port is time you are not transferring data.

My 15 Year MathWorks Anniversary…

On August 23rd, 2020, I woke up to a voicemail from Jack Little congratulating me on 15 years with The MathWorks. Honestly, it seems like yesterday that I started there. I thought today I would take some time to reflect on what I have learned in the last 15 years.

A slightly longer timeline

The Workflow Engineer

When I joined MathWorks I was the first “Workflow Engineer” that they hired. My job was to examine how customers used MathWorks products, understand what the limitations were, and make recommendations on how to make the tools and processes better. Some things have never changed.

M.A.A.B. Starting off with Style

Not to be puckish, but holding court for MAAB was perhaps the foundation stone of my wider understanding of how MathWorks customers used, and wanted to use, Model-Based Design workflows. Further, this brought me into the world of safety-critical guidelines.

Making MathWorks MISRA-able

Understanding the MISRA-C guidelines and contributing to the 2012 guidelines was my proving ground (not to be confused with the GM proving grounds where I used to work) for software best practices. I very much enjoyed the challenge of formulating guidelines to enforce MISRA compliance and getting to the root cause of code generation issues.

Product development

I did a brief stint in the MathWorks product development group, working on what is now called Simulink Check and the Requirements Traceability tool (Simulink Check still looks a lot like it did when I worked on it, though the requirements tool has greatly evolved). It was during this time that my connection to software verification deepened. Over time, I began to understand the difference between verifying software versus control algorithms; the root difference is in the constraints on control algorithms: no one ever passes a string into your velocity controller.

Onward: Consulting

The last 9 years have seen me in a consulting role, driven by a desire to directly help varied customers while expanding my own knowledge. During this time I branched out from my Automotive background into Industrial Automation, Aerospace, and Medical Devices. About 5 years ago, the “itch to teach” sprang back up and this blog was born.

Next 15 years?

If the past is any indicator, 15 years from now, I will be writing about the new, new best practices for Model-Based Design and helping to define what those boundaries are.

Last Thoughts

I hope you will forgive a more “personal” blog post. I will return to the normal content on Wednesday. To all of you whom I have worked with and learned from over the years: thank you, and I look forward to working with you again.

Michael

Don’t plan for 101 Variations

Somewhere between the famous Henry Ford quote, “Any customer can have a car painted any color that he wants so long as it is black,” and the near-infinite clothing options of the video game world lies the reality of most finished(1) software projects: half a dozen main variants with three to four sub-variants, some of which are mutually exclusive.

However, one common mistake in the software development process is the tendency to plan for all outcomes, or in other words, to have 101(2) variants.

How do we get here?

The proliferation of variants happens under two basic scenarios: “scope creep” and a failure to define requirements up front.(3) In either of these cases, engineers start creating “what if” scenarios. “What if the car needs to drive faster than 90 mph?” “What if the XXX?” “What if my refrigerator needs to run on Mars?”(4)

Each of these scenarios is “possible,” and for the most part they come from engineers trying to ensure that the resulting code is robust; but the proliferation also comes from a lack of leadership.

Hold on to your requirements

This is where requirements traceability shines. If engineers, as they work, have a clear definition of what they need to be working on, and that definition is always present, then the outcome is predictable.

What happens when you let go?

Adding variants adds costs in 3 ways:

  1. Additional testing required: Not only does the additional variant need testing in a stand-alone mode, it also needs to be tested in the integrated system.
  2. Added complexity in interfaces: Unique variants often require unique interfaces. The more interfaces that are added, the more likely it is that there will be integration issues.
  3. Speed!: Each additional variant adds overhead to each step of the development process, from testing to code generation.

Footnotes

  1. “Finished” with software can be a problematic concept; there can always be patches and updates. For the purpose of this article, finished refers to the first shipment of the product.
  2. When I write “101” here I am not saying “0b101”.
  3. Often “scope creep” occurs when there are poorly defined requirements, but it can also happen when the requirements are not well understood or enforced.
  4. These questions are in decreasing order of possible variants, unless you work for NASA, in which case it is something you should consider.