Code from a simpler time…

Legacy code, that is to say code written before you adopted your Model-Based Design environment, can act either like an eagle, letting you soar higher, or like an albatross, a weight around your neck. Like the ship's crew in the old poem, think twice about your code and what you shoot for.

Migrate or encapsulate?

The decision on how to treat existing code is predicated on a series of questions.

  1. In its current form, is it performing all the required functions? If the code is complete as is, then consider calling it in the “as is” fashion.
  2. Does the code need to be expanded or changed? If major changes are anticipated, then migration to the Model-Based Design environment should be considered.
  3. Is the code encapsulated? If the code is not well encapsulated, then calling it from an external environment will be difficult. The existing code should be refactored into smaller portions and then re-evaluated.
  4. Does the code perform functions not well suited to the Model-Based Design environment? There are some functions, such as device drivers, that are best written in textual languages.

En-cap-so-early…

If the decision is made to bring legacy code into the Model-Based Design environment, then efforts should be made to do so as early in the development process as possible to facilitate testing of the encapsulated code. There are 3 types of tests that should be performed.

  1. Simulation based tests: Does the legacy code provide meaningful data during simulation?
  2. Code generation: Is the call to the legacy code correct in the generated code?
  3. Encapsulation: Is the legacy code fully encapsulated, or does it require additional legacy code? Often this is an iterative process to pare down to the minimum code.

Simulation time stubs

In some cases the legacy code is collecting data from outside the system, e.g. when the code talks to hardware or handles communication. When testing such code there are two options: static and dynamic “stubbing.”

In a static stub scenario, the return value(s) from the legacy code are set to a valid value that the code could provide.

In the dynamic stubbing scenario, a model of the external code is created to simulate the values that the code could provide. To fully exercise the system, the simulated code should provide not only the expected values but also the potential error values the external system could produce.
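As an illustration, a dynamic stub can be a small MATLAB function that generates plausible readings over time and occasionally injects a fault. This is a minimal sketch; the signal shape and fault value are assumptions, not a prescription.

    function temp = temperatureSensorStub(t)
    % Dynamic stub for an external temperature sensor (illustrative).
    % Returns plausible readings over time and occasionally injects an
    % out-of-range value so the system's error handling is exercised.
    temp = 20 + 5 * sin(2 * pi * t / 3600);  % nominal slow variation
    if mod(floor(t), 100) == 0
        temp = -999;                         % injected fault reading
    end
    end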

Making the call

There are multiple ways that legacy code can be called from within the Model-Based Design environment. My current recommendation is to use the coder.ceval methodology.
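Below is a minimal sketch of the coder.ceval pattern, assuming a legacy C function double legacy_gain(double u) declared in legacy_gain.h; both names are placeholders. The coder.target branch doubles as a static simulation-time stub.

    function y = legacyGain(u) %#codegen
    % Wrapper around an assumed legacy C function, callable from a
    % MATLAB Function block. The simulation branch acts as a stub.
    if coder.target('MATLAB')
        y = 2 * u;                         % static stub for simulation
    else
        y = 0;                             % pre-assign output for codegen
        coder.cinclude('legacy_gain.h');
        y = coder.ceval('legacy_gain', u);
    end
    end

Placed in a MATLAB Function block, one wrapper gives you both a simulation path and a generated-code path.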

Model-Based Design Walk-through: Part 1: Requirements

This post is the first in a series of 8 video blogs, walking through the fundamentals of Model-Based Design. When taken as a whole, these videos provide the foundation stones for understanding and implementing Model-Based Design workflows. I will be using a simple Home A/C system for my example, however the principles apply to everything from Airplanes to Zeppelins.(1)

  1. Requirements
    1. Requirements Management
    2. Writing clear requirements
    3. What I’m expecting: writing requirements
  2. System Architecture
    1. Modeling architecture: Fundamentals
    2. Model architecture decomposition for hardware and closed loop testing
    3. Is your system architecture “Lego Legal”?
  3. Initial (shell) models
    1. Modeling architecture with room to grow
    2. The Model-Based Design Workflow…
    3. Defining your initial Model-Based Design workflow
    4. Plants resting on a table
  4. Defining and managing data
    1. Managing Data
    2. Understanding Data Usage in Model-Based Design Part I
    3. Understanding Data Usage in Model-Based Design Part II
    4. The Simulink Data Dictionary
  5. V&V
    1. The 8 commandments of V&V
    2. Levels of testing
    3. Modular testing environments
  6. Refining the models
    1. Defining your initial Model-Based Design workflow
    2. Best Practices for Establishing a Model-Based Design Culture
  7. Code generation
    1. https://www.mathworks.com/solutions/embedded-code-generation.html
  8. The grab bag…
    1. A road map for Model-Based Design
    2. The next generation of Model-Based Design

Footnotes (and video comments)

  1. There is a joke here about Lead/Lag controls for Zeppelins, but I am not buying a stairway to better humor.
  2. I considered calling the “Fresh Air” requirement the “Next Personal Requirement.”
  3. In this example, I am going to use a simple control and plant model; my objective is to show the Model-Based Design process.

Honing the edge: cutting into Refinement

When I start a new project I go through the following basic stages:

  1. Identify what I am trying to solve: Review the requirements and ensure I know what it is I need to do.
  2. Review methods that I know about on how to solve it: Most problems have multiple solutions; review the pros and cons.
  3. Select method that fits within project constraints: Based on project constraints and the pros/cons, select the method.
  4. Implement: Do the work.(1)
  5. Refine: Based on feedback and time left, refine the approach.(2)

Today I want to talk about the refinement stage: what it is, when, and how to do it.

Before you run a marathon, have a good running form

Refinement starts once the baseline version of the project is completed, that is to say you have

  • Met all the baseline requirements
  • Have unit tests in place for all the baseline requirements
  • Have completed integration into the larger system

Refining!

If you do it poorly you will pay penalties over and over,(3) but done well, there are great benefits. So what should you refine? In design there are 4 things to optimize:(4)

  1. Clarity: Improving the design to make it easier to understand and maintain.
  2. Speed: Reducing the total FLOPs for both mean and edge cases!
  3. Memory usage: Reducing the RAM/ROM usage for both mean and edge cases!
  4. Re-usability: Create methods for reusing the design for multiple instances.

What is most important to your project?

Items 1 through 4 are often in competition; it is easier to go faster if you use more memory, and clarity may suffer when a design is generalized for multiple uses. So the question becomes, “What is most important to your project as a whole?” Take into account how often the code is called and determine its priority within the execution context.

Clarity

The first rule of clarity is to document your model. Documentation should be done at a level that aids understanding. Putting comments next to each block (“this is a table look-up for calculating the XXX based on YYY”) provides too much information; comment on the function of a group of blocks instead. The second rule is to use common patterns; do not have 6 different ways of performing the same operation.

Speed

Before you can go fast you need to know what is going slow. The first step in optimization, then, is to run a profiler on your system. There are two types of speed you want to examine: average and worst case. Depending on your overall system needs you may want to focus on one over the other; worst-case paths can often be handled with conditional code that only executes under edge cases.
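As a sketch of that first step, the MATLAB profiler can wrap a simulation run; the model name here is a placeholder.

    % Find what is actually slow before optimizing anything.
    profile on
    sim('thermostatModel');   % placeholder model name
    profile viewer            % inspect average vs. worst-case time per call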

Memory

First off, elephants are a horrible storage medium; sure, you pay them in yummy peanuts but the retrieval process is a real three-ring circus. Memory can be reduced in two primary ways:

  1. Reduce what you pass in: Look at the data you are passing into your function. Is all of it needed? Frequently in the development process we “overpack.”
  2. Remove intermediate variables: This is less of an issue with automatic code generation; however, if you find yourself creating variables just to monitor the code, this will lead to memory bloat.
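As a small illustration of “reduce what you pass in,” consider a function that needs only two signals; passing in the whole sensor structure would be overpacking. The names here are hypothetical.

    function sp = computeSetpoint(cabinTemp, ambientTemp)
    % Takes exactly the two scalars it uses rather than a full sensor
    % structure, keeping the interface small and the data flow clear.
    sp = 0.8 * cabinTemp + 0.2 * ambientTemp;
    end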

Reusability

Rather than write something again, I will reuse these posts on reuse… (Yes, I am reusing this joke.)

Final thoughts

Remember as you refine your code, regression tests are your best friend. Consider adding performance tests as well if you are working on memory and speed issues. Follow this advice and “go for the gold!”

Footnotes

  1. While doing the work may take the most time, steps 1-3 are what ensure it takes the least amount of time.
  2. For projects that are well defined, the “refinement” may take place while in stage 4 (implementation). However, when doing something new I always make sure I have something working and solid before I start the refining stage.
  3. I’m not saying that is why it is re-fine, e.g. a penalty paid but…
  4. In computer programming this is known as “doing your bit”; if you “byte off more than you can chew.”

Polar bears in a snowstorm discussing philosophy (e.g. black box testing for DL/ML systems)

A recent article announced “Deep Learning Machine Beats Humans in IQ Test.” It is an interesting read but it raises a question on many minds these last few years; how do you test a device that is an unbounded black box?(1)

Testing DL/ML programs is a unique challenge. Unlike conventional software testing, the algorithm you are testing does not have a viewable code base (e.g. you can’t inspect for overflow, underflow, or pointers). So what do you test?

The first question is “what are you protecting against?” This is not simply the inverse of what you are trying to achieve. Let’s take a hypothetical case: you have developed an ML/DL system that controls a robot to give full body massages.(2)

The objective of the robot is to relieve pain and tension in the body. What we are protecting against is damage to the body, such as bruising or muscle tears.

Another way of thinking about this is the “law of unintended consequences.” Reading Asimov’s Robot stories you would see that the three laws were not enough to keep things from going off the rails. So what can we do?

Fencing the problem

In some cases the solution is simple: put an observer in place and fence off the dangerous areas. In our massaging robot example, the “fence” could include things such as the limits below (a minimal sketch follows the list)

  • Maximum force
  • Maximum compression
  • Maximum speed
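One way to realize the fence is a thin observer between the ML controller and the actuators that clamps commands to safe bounds. This is an illustrative sketch; the field names and limit structure are assumptions.

    function cmd = safetyFence(cmd, limits)
    % Observer that clamps controller commands to the fenced-off region.
    % Field and limit names are illustrative.
    cmd.force       = min(cmd.force,       limits.maxForce);
    cmd.compression = min(cmd.compression, limits.maxCompression);
    cmd.speed       = min(cmd.speed,       limits.maxSpeed);
    end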

In some cases fencing isn’t a solution, so how do we remain “en garde”?

Black swans

There is no way to dance around this one; AI/ML/DL systems in the real world have to deal with real world problems. That is to say they need to “expect the unexpected.”(3)

AI/ML/DL systems work based on data models; any event outside the data model is still responded to, interpreted as something that is known. Since it is not possible to know all the unknowns up front, the question becomes “are there meta-unknowns?” Can you come up with classes of things that you have not trained for, and give rules for how to respond to them?

Sticking with our massaging robot, someone with scar tissue from surgery may have a different underlying musculature layout. This is a different problem from someone who has a temporary strain that can be corrected.

Fail Safe

To “fail safe” we need to first know what “safe” is. Fail safe for an engine when you are parked would be to shut down, whereas while driving it may be to reduce power. AI/ML/DL systems can employ scenario-based fail-safe approaches to determine the correct action when a black swan or fence condition occurs.
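A scenario-based fail safe can be as plain as a mode switch; the modes and actions below are illustrative, mirroring the engine example above.

    function action = failSafeAction(mode)
    % Pick the safe action for the current operating scenario.
    switch mode
        case 'parked'
            action = 'shutDown';
        case 'driving'
            action = 'reducePower';
        otherwise
            action = 'shutDown';  % default to the most conservative action
    end
    end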

0.3048 Meter notes(4)

  1. I’m describing this as an unbounded black box since the actual range of inputs cannot be defined as in a traditional system. For example, you could say a voltage signal goes from 0 to 42 volts, but how do you define the information encoded in an image when you don’t know what the system has determined to be salient features?
  2. I’m not saying that sitting at a desk much of the day has made me interested in creating such a machine but…
  3. For many years the “No one expects the Spanish Inquisition” was a fine joke; however given time, everyone expects the Spanish Inquisition and it is no longer an edge case.
  4. 1 foot is 0.3048 meters, hence these can be translated into “foot” notes.

Readers questions! Your chance to ask! (1)

Do you have a question about Model-Based Design? About Juggling, swimming, how Model-Based Design could make your Juggling Routines go Swimmingly? Here is your chance to ask me, and I will respond in an upcoming post!

While you are at it, sign up and get this blog by email.

Footnotes

  1. If I included spam I would have questions like:
    • “Do you want to be a millionaire?” Yes, and a socially responsible one.
    • “Your car warranty is running out! Do you want to renew?” While they got the car right, it was one that my wife Deborah and I sold 20 years ago.
    • “How much wood would a wood chuck chuck if…”

When differences matter: debugging

If you are chopping up vegetables for a salad (carrots, peppers, onions…), the order in which you chop and add these to the bowl does not matter. On the other hand, if you are sautéing those same vegetables, the order added to the pan matters a great deal.(1)

Mathematically,
X = A + B
and
X = B + A
are the same;(2) however, if those were lines of code, a basic differencing tool would flag this as a change. The same problem holds true when differencing a model.

When creating models the transformations (throwing into a bowl or in a hot pan) applied to data (your vegetables) matters when determining if two things are equivalent.

Why automated differencing fails

Automated differencing detects structural changes; in some instances that is significant,(3) but in others, as in our salad example, it is trivial. Differencing tools lack the ability to determine context. So when and why should we perform differencing?

Differencing to debug

Debugging in 4 steps

  1. Detect an issue: Ideally the issue is caught through the use of regression testing.
  2. Determine if it is a bug: As you evolve your algorithm, outputs may change. Determine if the change represents a deviation from your requirements.(5)
  3. Determine the portion of the model responsible (tracking): In general this means tracking down where the variable in question is calculated and then backtracking to the change.
  4. Implement the patch: Once you have determined the source of the problem, implement the solution.(6)

I use differencing as a tool when I am debugging a problem; this is the tracking stage of debugging. For complex models it is a time saver in determining which changes may have created the bug.

Filter!

While there are many types of filters, they all perform a separation function: letting through what you want and catching the rest. When setting up your differencing tool, you want to turn on the “ignore formatting changes” option.

For a text differencing tool, formatting changes would include things like spaces, tabs, and line breaks. For a graphical differencing tool this would include things like block positions or names, e.g. you are filtering out non-transformative changes.

Final thoughts: differencing for reviewing?

In general I do not find model differencing a useful tool during model reviews. It distracts the reviewers from understanding what changed functionally by having them focus on what changed structurally. Unless you are performing an architectural review I recommend reviewing functional simulation results versus the model diagram.(7)

Footnotes

  1. Carrots or onions first? It depends on what you want to achieve. If you are going for caramelized onions then those go in first, but if you want the carrots to caramelize then they need to go in first. Either way the peppers will go in near the end.
  2. This is the commutative property.
  3. In this example the second image has a flower in it. This is significant as it gives the bees a source of food. This allows them to produce honey which in turn leads to the bears bee-ing (4) happier (hence the two holding hands).
  4. Of course the bees may not be happier with the bears taking their honey.
  5. There are generally 2 types of tests: requirements based and baseline. In requirements based tests, a failure should always indicate a problem (unless you wrote the test with tolerances that were too tight). With regression testing, it is possible that a change in the output does not impact the requirements.
  6. Note: if the change was made by someone other than yourself (e.g. a developer of another module), consult with them to understand the reason for the change. It may be that the requirements have shifted.
  7. Google Image search returns some wonderful and sometimes odd results. In this case the search term was “structural versus functional changes.”

Wacky from WebEx: Better meetings with MBD

The following is a two-week running average:(1)

  1. Can you hear me? (8 times per day)
  2. I can’t hear you! (4 times per day)
  3. Hold on, let me restart that (3 times per day)
  4. I love Video Calls (sarcastically: 5 times per day; honestly… still waiting)

The use of WebEx, Teams, or morose code is the central fact of life during the COVID shelter in place, as is the general irked response to these meeting malfunctions. There are good general tips on video conference best practices; what they are all trying to address is how to make the most of an artificial interaction environment.(2)

The good news is that Model-Based Design offers a singularly powerful platform in which to collaborate.


What is new for MBD Webex meetings? What to do?

What then is different about meetings over WebEx and what makes it unique for Model-Based Design?

  • Attention span is lower in Video Conference meetings: Homework is key
    • But by its nature, MBD lends itself well to the generation of artifacts that support online reviews.
  • Focus on the “item” is key: knowing what you want to share and how to best present it is critical
    • The graphical nature of models makes them simpler to review online than text-based languages. When presenting validation results, graphs (i.e. plots with error bars) should be leveraged.

How to make it better with Model-Based Design

Ok, first off let’s say it… many people during meetings are following links instead of following the meeting. What can Model-Based Design do to support people “being there” and engaged during the meeting? Assuming you and your audience have done your homework,(3) then it becomes a question of “what types of things should I be reviewing and what do I need to present?”

These recommendations are useful for any design process. Note: the “Homework” items are areas where the automation and simulation aspects of Model-Based Design greatly improve the process…

  • Interfaces: when reviewing interfaces the ICD is your best friend
    • Objective(s):
      1. Sign off / agreement on the proposed interface
      2. Validation that the generated interface matches the agreed upon version
      3. Modification to an agreed upon interface
    • Homework:
      1. Document who produces and consumes the information
      2. Generated document of the realized and proposed ICD (4)
      3. Again, document who produces and consumes the information as well as the impact on downstream modules.
  • Requirements: For now assume that the requirements are written and you are performing validation steps.
    • Objective(s):
      1. Ensure that everyone understands the requirements in the same way
      2. Demonstrate that all of the requirements have been validated
      3. Hold an exception or modification meeting
      4. Demonstrate traceability of the requirement
    • Homework:
      1. Consider using the model to demonstrate the requirement
      2. Provide the test cases that demonstrate that the requirement has been met
      3. Provide the technical data that shows why the requirement cannot be met (this can be done through the use of closed loop simulation)
      4. Generate the report showing the full requirement’s traceability
  • Standards: Conformance to standards allows for easier communication
    • Objective(s):
      1. Validation of standards that require “engineering judgment”(5)
    • Homework:
      1. Provide the referenced guidelines as well as any commentary on the rationale for the guideline. In safety critical conditions, provide validation that the compliance adheres to safety standard requirements.
  • Bugs: when you are trying to get to the root cause of an issue
    • Objective(s):
      1. Understand what is causing the problem and what the required outcome is
      2. Understand what is allowable under the requirements for the feature
    • Homework:
      1. Provide information on what has been tried to debug the issue (and be willing to revisit it)
      2. Provide simulation data that demonstrates the bug “in action”
      3. Isolate the bug to the “smallest” module possible

Footnotes

  1. These statistics represent 2 ~ 4 meetings per day, 5 days a week. Everyone has said items 1 or 2 at least once.
  2. In an average meeting much of the information is communicated informally through body language, side conversations and the ability of people to react to how everyone in the room is responding.
  3. What is “meeting homework”? Three things:
    1. Define what you want to cover in the meeting; information to present and questions to be answered.
    2. Provide the audience with information beforehand so they can respond to your questions/presentation.
    3. Have supporting information ready to present during the meeting; it’s important to avoid saying number 3 (“hold on, let me restart that”) more than once in a meeting.
  4. There are multiple tools to generate the ICD document from a Simulink model; the comparison to the proposed interface can likewise be automated.
  5. Engineering judgement isn’t about having an opinion, it is about knowing the impact of your decision if you are wrong.

Stop Cursing at Recursion

A recursive function(1) is a function that calls itself until an exit condition is met(2) or until a bounded number of iterations has passed.(3) This is extremely useful when the problem involves searching a nested branch structure with an unknown number of branches.

Meanwhile(4) in Model-Based Design…

In general, safety standards(5) frown upon recursive functions due to their unbounded nature. Determining if recursion can be used in your algorithm is a 2-step process (with some recursion involved).

  1. Determine the memory use and execution time of the recursive function
  2. Determine the “worst case to solution” input data
    • Calculate time and memory usage
    • If this is acceptable for your system, then exit; otherwise
    • Go back to step 2…

Note: in this example, worst case means that you are looking at how many iterations are required to reach an acceptable solution. Acceptable is not the same as fully converged; you want a solution that is close enough to correct for your controller’s requirements.
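As a sketch of “acceptable, not fully converged,” here is a recursive bisection with an explicit tolerance and depth bound, so the worst-case iteration count and stack use are known up front. The function, tolerance, and bound are placeholders.

    function x = boundedBisect(f, lo, hi, tol, depth, maxDepth)
    % Recursive bisection that exits on "close enough" (tol) or when
    % the iteration budget (maxDepth) is spent; both bounds keep the
    % worst-case time and memory analyzable.
    mid = (lo + hi) / 2;
    if (hi - lo) < tol || depth >= maxDepth
        x = mid;              % acceptable, not necessarily converged
        return
    end
    if sign(f(lo)) == sign(f(mid))
        x = boundedBisect(f, mid, hi, tol, depth + 1, maxDepth);
    else
        x = boundedBisect(f, lo, mid, tol, depth + 1, maxDepth);
    end
    end

For example, boundedBisect(@(x) x^2 - 2, 0, 2, 1e-3, 0, 20) approximates the square root of 2 within the stated tolerance and budget.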

[Comic credit: SMBC Comics]

When?

Most traditional control problems do not require recursion to solve; however, with the onset of adaptive controls and, more importantly, networked controllers, the need to exhaustively cover all branches in the solution comes into play. Again, it comes down to 3 questions:

  1. Is there a closed form solution: Can you mathematically solve the problem in either a known number of steps or directly?
    If “no” continue on
  2. Is there an unknown depth or branch issue: Is the final path known ahead of time?
    If “no” continue
  3. Do you need it now: In some cases you can perform recursion the slow way, e.g. the same function is called once per time step, maintaining state information (see the sketch after this list).
    If you “need it now” you need recursion!
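Here is a minimal sketch of the “slow way”: one candidate is examined per time step, with the position held in persistent state instead of on the call stack. The search itself is deliberately trivial; the pattern is the point.

    function [found, done] = searchOneStep(target, values)
    % Examine one element per call; a scheduler invoking this at a
    % fixed rate spreads the search across time steps with a bounded
    % cost per step.
    persistent idx
    if isempty(idx)
        idx = 1;
    end
    found = false;
    done  = false;
    if idx <= numel(values)
        found = (values(idx) == target);
        idx   = idx + 1;
    end
    if found || idx > numel(values)
        done = true;
        idx  = 1;             % reset for the next search request
    end
    end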

Footnotes

  1. I had multiple typos (and misleading auto-corrections) when writing this
    • re-cursive: a note about handwriting
    • re-cur-son: a note about a dog’s puppy
  2. When you meet an exit condition do you say hello or goodbye? Perhaps Aloha, or Ciao, or Shalom, or Annyeong or Salām is the correct word to use.
  3. This is critical in an embedded system where consistent timing and memory limitations are critical.
  4. While loops are similar to recursion in that they execute until an exit condition is reached, they do not enable branching determination.
  5. By default in Simulink the recursive code generation is prohibited for this reason. It can be enabled as shown here; the rest of this blog is about when to consider using it.

Standing up for Model-Based Design: Agile Workflows

For the past 8 years I have been helping customers adopt agile workflows as part of their Model-Based Design process. As acceptance of agile processes grows I have seen a corresponding growth in MBD / Agile best practices. With 5 points, I will illustrate how Model-Based Design and Agile Development workflows fit together.

Continuous Integration: The opposable thumb

Model-Based Design and Continuous Integration (CI) go together hand in glove; the central nature of continuous testing to agile development processes is why I consider this the “thumb” of MBD / Agile workflows. And as for why Model-Based Design with Simulink is so powerful for CI systems? Built-in access to simulation, and therefore testing.

Rapid iterations / system integration: Pinky

One of the goals of Agile workflows is to allow engineers to work independently while being able to collaborate seamlessly. The key to this is the use of integration models or system architecture tools. The individual engineer works on their component while having an integration framework to validate against.

Why pinky: Because testing (thumb) and integration are the boundaries of Agile.

Making the most out of Stand Ups: Ring

Stand up meetings, or check-ins are intended to be quick reviews of outstanding issues. Having effective tools to aid in the review is critical. The graphical nature of Model-Based Design (and the ability to easily generate plots / graphs) means that the information required can easily be conveyed. (Note: bad stand up meetings can seem like they last for 24 hours. I would recommend this site for best practices)

Why ring: In stand ups we all stand around the table

Cross group collaboration: Middle

Working across groups is one of the higher order objectives of Agile workflows. Model-Based Design, with the use of architectural models, allows groups to not just design the control algorithms but the system level architecture, the physical requirements and hardware interfaces as well.

Why middle: Because this is the center of all the work.

One model, many uses: Reuse: Index

Choose the “right” one

In an ideal Agile workflow your work product is iterated on throughout the design cycle. In a standard Model-Based Design workflow, you reuse the same model from start to finish.

Why index: because the one concept indexes into the other.

Final thoughts / lack of footnotes

In writing a post about “the hand in glove,” footnotes didn’t seem right, so the final thoughts will have to do. Agile workflows, like waterfall, test driven development (TDD), or one of many others, are good practices to follow assuming that the work environment follows rational design processes. Know when to bend and when to hold fast to the process. For another view on Model-Based Design and Agile workflows I highly recommend this post.

Rabbit Holes: Constraining your model(s)…

The question of “how big should my model be” is an evergreen Model-Based Design question. Too small and time is wasted on build, test and review tasks. Too large and the model is difficult to debug and maintain. The recommended size has always been a range with one critical factor: the ease of communicating information from one person to another. But in the COVID remote working environment, has the sweet spot shifted and if so, what can Model-Based Design do to address this?

Going down the rabbit hole

The English idiom “going down the rabbit hole” is a reference to Lewis Carroll’s ‘Alice in Wonderland’ and refers to any pursuit where action is taken but the outcomes are “never ending and nonsensical.” Remote work can easily lead people down rabbit holes due to a lack of communication. In the office, sitting next to Sid & Beatrice or Marie & Pierre, I have opportunities for regular, small sanity checks. How can we foster that same culture from behind a webcam?

Bring down the wall

When working remotely there are 2 keys to effective Model-Based Design reviews

  1. Select the focus of your review: Are you performing an architectural review or a functional review?
    • Architectural: Provide the ICD and the execution flow of the atomic units
    • Functional: Provide the simulation results and the requirement documentation
  2. Leverage integration models: Here is the MBD key: the use of shell (child) and integration models gives you a natural decomposition between the architectural review (integration models) and the functional reviews (shell/child)

How this keeps you whole

Rabbit holes are best avoided through timely reviews. By leveraging the Integration / Shell concepts, a natural break point in the review process exists.

Shell models map to between 2 and 4 high-level requirements. When you have completed the work on any of those requirements, call for a review.

Integration models map onto system level requirements. When any of these change, call for a review.

Reflecting back to the start

In the same spirit of avoiding rabbit holes, we want to prevent the mirror problem of too many reviews. Unless the review is part of an Agile stand up, 45 minutes to an hour is an appropriate amount of time. If reviews are routinely taking less than 30 minutes, reconsider the tempo of the meetings. While time spent in reviews is important, if you spend too much time in them you end up with the Red Queen’s Dilemma.