In today’s video post I talk through the basic considerations of scheduler design.
Flying or Invisibility or Math?

I first heard the question “which would you rather have, flight or invisibility?” a dozen-plus years ago. At the time it struck me as a slightly humorous question that could prompt people to think about what they would do with an ability. Before you make a choice — flight or invisibility — here are the ground rules:
Flight means the power to travel in the air up to 100,000 feet at a maximum velocity of 1,000 MPH. You don’t have any other powers. You’re not invincible and you don’t have super strength. Thus depending on your natural strength you probably can’t carry many people with you on your flights. Large pets or small children would be key candidates.
Invisibility means the power to make yourself unseen, as well as your clothes. But things you pick up are still visible. Food and drink are visible until digested.
Key Point: These powers are things in and of themselves; they don’t grant additional powers.(1) Unlike Math.
Math:(2) The superpower that is Additive(3)

Mathematics has several cardinal points: arithmetic, logic, and geometry. From those starting points,(4) set theory, calculus, and statistics can be derived.(5) Computers and computer science grow from the arithmetic and logical branches of mathematics.
Model-Based Design (and Model-Based Systems Engineering) is in turn based on computer science and controls theory (which in turn is…).(6)
Knowing the antecedent…

Knowing the background of MBD/MBSE shows us how to improve the outcomes of these tools, e.g., where to look when there are performance issues and how far to backtrack to find the solution. When I am trying to resolve a performance issue, I ask myself: what is the root cause of the problem? Is it arithmetic, computational, or logical? Knowing that, I know how to attack the problem.
Footnotes
- The restriction that the power is “one thing in and of itself” is to prevent the Superman conundrum of being able to do anything: super speed and strength and flight and laser-beam eyes. This got me thinking, though: could you have a superman based on just one power that could accomplish many super-feats, and if so, what would it be? Would that superpower explain some of the problems with the character of Superman’s powers? I think the answer is “Yes,” and the power is teleportation. Teleportation would accomplish:
- Flight: multiple small teleportations to give the appearance of flying (think like an animation)
- Super strength: not actually lifting things, just teleporting them up (or bending or what not). This also resolves the issue of picking up things that are structurally unstable, e.g. they stick together because they are teleported as a whole
- Super speed: I always wondered why when Superman did a “super fast cleaning” the plates didn’t explode due to friction. The answer: he just teleports the dirt off!
- …
- In this post I am talking about math, but in reality it is critical thinking that is the true base superpower. If we think of footnote 1, it is the teleportation of the mind that takes you to new places.
- Pun fully intended
- Hmm, thinking about this, with 2 starting points we can grow linearly, with three, quadratically…
- Given different starting points, the same underlying mathematical theories have been derived in different fashions.
- The “X is derived from Y” all the way down back to first principles modeling
Mode switching in Simulink
In today’s video blog I show how to create smooth transitions between modes in the Simulink environment.
How NASA is able to land on Mars

On February 18th, 2021, NASA had its latest successful landing of an interplanetary probe. What makes this landing even more impressive is that, like their last landing, they used a “first time” method for getting the rover to the surface.
This string of successes can be attributed to the systems engineering processes embodied in this guide. The NASA Systems Engineering Handbook has been on my MBD bookshelf for a long time, and I’m happy to say that in January of 2021 they updated the guide.

Thoughts on the updated guide
The last update to the guide was in 2007. The use of systems engineering, and the complexity of the systems it is applied to, have grown considerably in that time. Today I want to focus on one section of the document, 5.0 Product Realization. It breaks the realization process into 5 sub-steps: Implementation, Integration, Verification, Validation, and Transition.

- Implementation:
- The “Acquire” section has increased considerably, in recognition that complex software and hardware are now more readily purchasable.
- The reuse section dives into the additional work required to successfully revamp existing software for reuse. It provides key insights into what needs to be done for reintegration.
- Integration
- The key insight from this section is that integration is a series of integrations, from lower level components to the final product integration.

- Verification and Validation
- As always, NASA’s description of Verification and Validation steps is the clearest out there.
- The outline of how to create a verification and validation plan should be considered the gold standard for V&V activities.

- Transition
- Transitioning is often neglected in large projects; it is assumed that a verified and validated product will be usable by the end team.
- This document lays out the training, documentation and “on delivered team verification” required for success in delivery.
MBD: DevOps KPI

Not all metrics are created equal. Often, in the rush to have metrics, the decisions on what to collect are rushed, resulting in a glut of semi-useful information that makes it difficult to find the truly useful information. So what are the important Key Performance Indicators (KPIs) for a Model-Based Design DevOps workflow?

Data from test, data from field

In a traditional Model-Based Design workflow the data feedback ends after the product is released; the DevOps workflow brings that data back into the MBD world. How then do you make field data useful to the MBD workflow?
Central to a DevOps workflow is continual feedback, e.g. data can flow bi-directionally with updates from the product developer and error codes from the product in the field. With that in mind we need to ask, from a simulation perspective what information would we need to debug the issue?
Beyond error codes

Error codes are the minimal information required for debugging; an error of type X1258Q1ar894 occurred. It tells the end user very little and provides only slightly more information to the developer.
The next step up is the stack trace, i.e., the history of calls before the error occurred; this provides the developer with the first practical method for debugging the issue.

What is needed is the equivalent of an airplane’s “black box,” i.e., the history of the state information of the device when it failed. But a black box records everything, which would quickly overwhelm your development process. So how do you select the data you need?
A refrigerator is not an airplane

For most devices a full data log of the item’s states is not warranted, due to memory limitations and the criticality of errors. Instead, what can be done is:
- For each type of error code select a subset of data to be collected.
- If the selected data is not sufficient to debug from the field, over-air updates can be pushed to increase the type and scope of the data collected.
- Once the error is resolved, reduce the error logging for the error code.
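The three steps above can be sketched as a small configuration table keyed by error code. This is a hedged illustration: the error codes, signal names, and the `apply_ota_update` helper are all invented for the example, not an existing API.

```python
# Hypothetical sketch of per-error-code data logging; error codes,
# signal names, and the update mechanism are illustrative assumptions.

# Baseline: each error code maps to the subset of signals worth capturing.
LOG_CONFIG = {
    "E_COMP_OVERTEMP": ["compressor_temp", "fan_rpm", "ambient_temp"],
    "E_DOOR_SENSOR":   ["door_switch", "cabin_temp"],
}

def apply_ota_update(config, error_code, extra_signals):
    """Simulate an over-the-air update that widens the logged data
    for one error code while it is being debugged."""
    config[error_code] = sorted(set(config[error_code]) | set(extra_signals))
    return config

def snapshot(config, error_code, device_state):
    """Capture only the configured subset of the device state."""
    return {sig: device_state.get(sig) for sig in config.get(error_code, [])}

state = {"compressor_temp": 92.5, "fan_rpm": 1400,
         "ambient_temp": 21.0, "door_switch": 0, "cabin_temp": 4.2}

# Step 1: log only the configured subset for this error code.
log = snapshot(LOG_CONFIG, "E_COMP_OVERTEMP", state)

# Step 2: if that proves insufficient, widen the scope over the air.
LOG_CONFIG = apply_ota_update(LOG_CONFIG, "E_DOOR_SENSOR", ["defrost_heater"])
```

Once the error is resolved, step 3 is simply the reverse of `apply_ota_update`: shrink the signal list back to its baseline.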
The “new” KPIs
The new KPIs for a DevOps workflow stem from the central tenet of DevOps: constant feedback. Determine a metric for error code severity and a metric for error code frequency; elevate errors for correction based on those two metrics. As the system is developed, allocate diagnostics to areas that were difficult to simulate and validate, in order to enable post-deployment validation.
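As a rough sketch, the severity and frequency metrics can be combined into a single priority score. The weights and the example error codes below are assumptions for illustration; a real project would calibrate them against its own failure data.

```python
# Illustrative ranking of error codes for correction using the two
# KPIs named above: severity and frequency. Weights are assumptions.

def priority(severity, frequency, w_sev=0.7, w_freq=0.3):
    """Combine normalized severity (0-1) and frequency (0-1) into one score."""
    return w_sev * severity + w_freq * frequency

error_codes = {
    "E101": {"severity": 0.9, "frequency": 0.1},  # rare but critical
    "E202": {"severity": 0.3, "frequency": 0.8},  # common annoyance
    "E303": {"severity": 0.2, "frequency": 0.2},
}

# Highest-priority errors first.
ranked = sorted(error_codes,
                key=lambda c: priority(**error_codes[c]),
                reverse=True)
```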

How to maintain a model…
Every model needs some TLC, but how do you make it easy to give the model the care it needs?

Maintenance starts with the foundation. The foundation of a model consists of the guidelines that are followed; in the MATLAB / Simulink world that means using the MAAB style guidelines. The modeling guidelines focus on model readability and correctness; so what is the “first floor” in our house?
The “correct” size
Size is not an absolute number; rather, it needs to be viewed through a metric encompassing complexity, the number of associated requirements, and the commonality of the blocks and code.
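A minimal sketch of such a composite metric is below. The formula, the weights, and the `size_score` name are invented for illustration; they are not an established measure, only a way of showing how the three factors might combine.

```python
# Hypothetical composite "size" metric for a model, per the text: raw
# size alone is not the measure; complexity, requirement count, and
# block/code commonality all factor in. Weights are assumptions.

def size_score(cyclomatic, n_requirements, reuse_fraction):
    """Higher score = harder to maintain. reuse_fraction in [0, 1]
    discounts the score, since common blocks are validated once."""
    return (cyclomatic * n_requirements) * (1.0 - 0.5 * reuse_fraction)

# Two models of similar block count can score very differently:
monolith = size_score(cyclomatic=30, n_requirements=6, reuse_fraction=0.1)
modular  = size_score(cyclomatic=12, n_requirements=2, reuse_fraction=0.6)
```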

Common code, i.e., reuse
The more you can reuse code across portions of your project (or projects) the simpler maintenance becomes (though you need to have the highest level of validation for reused code). For an overview of reuse I will reuse this blog.
What is complexity?
Like reuse, complexity is a topic I have covered before. The key takeaway here is to link complexity to maintainability; lower complexity generally means easier-to-maintain models. However, there is an absolute complexity to any problem, e.g., you can’t make a problem simpler than it truly is. Arbitrary decomposition of a complex problem into smaller subproblems can increase the complexity of the system by making the data or concept flow difficult to understand. Which brings us to…
Requirements and maintainability

The ultimate metric that I recommend for ensuring that models are easy to maintain is linked to the number of requirements per model. Ideally each model will be associated with one or two top-level requirements.
Since maintenance of the model is associated with either changes to the requirements or bugs found in the field, the more top level requirements associated with a model, the more frequently the model will need to be updated.
Updates to a model require revalidation of the model and its associated requirements. If you change a model with only one associated requirement, you only need to revalidate that requirement. If you change a model with multiple requirements, even if the change concerns only one of them, you need to revalidate all of them. Every time you pull out a thread…

Raise the roof!

To finish our “This Old House” metaphor, we will talk about the roof. In this case the roof is our test cases. In most if not all cases of model updates the test cases will need to be recoded and revalidated. This is a secondary reason to limit the number of requirements associated with a given model; the more requirements the more test cases that must be updated.
Built well and with proper care, your model can be a home for generations to come.


Consulting Ethics:
There is a story I tell to new consultants: on my first consulting contract, 20+ years ago, I had a customer request an automation of one of their data dictionary tasks. Over the course of 3 months I iterated on their requirements and delivered the final product. While the customer was very happy, 6 months later I learned they were not using what I had delivered. In hindsight I recognize my mistake; I worked on what I was asked to work on before I had established the root issues.
Despite the customer’s satisfaction, this reveals an ethical issue: my responsibility as an expert to understand what the underlying process and technical challenges are and provide my advice on what should be done for the best outcome regardless of other stakeholders’ positions.
- The world first: I will not take on projects that negatively impact the world or unfairly disadvantage any group.
- Never overstate what can be done: Provide honest assessments of the problems and the abilities of myself, my coworkers and the tools that I am working with.
- Teach and learn: Every contract is a chance to both teach and to learn.
Leadership in software development: MBD/MBSE

Complex systems do not “fall into place” of their own volition; if one domino is out of place(1) the chain of blocks will halt. When projects are incremental updates then “day-to-day” leadership is all that is required. But when you are shooting for the moon, new processes are required and understanding orbital mechanics is critical…
Adoption and migration

As a consultant my job is to enable moonshots as I guide companies on the adoption of Model-Based Design software processes. At every company I’ve guided, with the exception of some startups,(2) there is a contingent that says a version of Admiral Hopper’s(3) dangerous phrase. The objections decompose into three categories…
- The way we do it now is <positive adjective>.(4) The objection here is generally on a technical basis and can be addressed.
- It will take too long to adopt: In the short run, this objection is accurate. But companies that operate in the short run do not have long runs.
- But then what would I do? Adopting new processes necessitates changes to organizations; in some cases this does mean that some roles are changed or are eliminated.(5) But more often this is a chance for people to take on new more dynamic roles.
What leaders offer

Most people want their work to mean something. Leading the MBD/MBSE adoption process makes it easy to offer each individual something(6) that will facilitate their work. Defining with them the current limitations of their processes and enumerating the benefits along the way to change is the first step in transformation.
The <positive adjective> case

Groups raise objections to Model-Based Design based on the perceived technical merits of their current processes. The essence of these arguments can be distilled to the contrast between a master craftsperson and an assembly line. In their objections they select the work of the most skilled team member and compare it to the automated products.
The quality of a master craftsperson’s work will almost always be higher, but it takes longer to produce. I shift the question to: where should a master craftsperson be spending their time? On the creation of screws, or on the design of the watch? Model-Based Design automates low-level processes, allowing masters to invest in the high-level vision aspects of the project.

A second <positive adjective> case exists due to the tension between software and controls engineers. Each has a unique skill set that is critical to the overall product; each has part of their task that is dependent on the other’s work. Unguided, this is a source of conflict.(7) MBD / MBSE provides a common ground where the two sides work together; controls engineers can design to their imagination’s limit while the software engineers can integrate, safe in the knowledge of functional interfaces. Guided, MBD / MBSE enables the two groups to work together while focusing on their domains.
Timing

Establishing a new workflow takes time. But if the development timeline is longer than 6 months the downstream ROI is significant.
Migration steps
- Need identification
- Training
- Initial rollout
- Workflow augmentation / tooling
- “Final” rollout
Most companies I have worked with come to me at step 3, initial rollout.(8) From the initial to the final rollout stage there is generally a 2-year process, with the first benefits realized at the 3-month mark.

The key to maintaining forward motion on a project is to define functional interfaces such that work products can be utilized at any(9) stage in the migration process. Having a road map of the adoption process enables you to see this with clarity.
Security

The truth is that often resistance to change comes from perceived job or group security. In 10+ years of guiding groups towards new processes the only time I have seen groups lose “territory” was when they resisted change arbitrarily.(10)
In the end, everyone in a company is there because they have skills and insights that will benefit the project; the objective of a MBD cartographer is to show them the way. In the end it comes down to showing them they are the engineer, not the hammer.(11)
Are you ready to go on a journey?
Most modern organizations are in a state of constant incremental change. This sort of unguided updating results in inconsistent and incomplete processes. Deciding the journey you need to take is the first step; let’s talk about where you want to go.

Footnotes
- Proper planning however means that no critical path should be dependent on a single domino. The planner has redundancy built into the system.
- In startups there is often the inverse problem: a rejection of all traditional development practices, thus “throwing the baby out with the bathwater.”
- For those not familiar with Admiral Hopper, I recommend this link to see her foundational impact on computer science.
- If you haven’t played “MadLibs” this site gives you a rough idea.
- While some roles are eliminated it is rare that I have seen people laid off due to migration to Model-Based Design / Model-Based Systems Engineering. Also see (10)
- People within an organization are aware in a visceral sense of the problems they face with existing workflows. Tied to their daily tasks they don’t have the freedom to imagine and enact change; leadership illuminates the path forward.
- In an idealized workflow the controls engineer, who has a CS background, writes the algorithms in a highly efficient, modular fashion that is simply integrated into the network topology defined by the software group.
- Part of my job as a consultant is to identify if they have sufficiently covered steps 1 and 2. When you are new to a process it is easy to overlook benefits and pitfalls of the process and make plans that do not reflect best practices.
- In practice there will be some “throw away” work; however this can be minimized with proper planning.
- The groups that raised issues and facilitated the process grew in size and responsibilities.
- A short version of it can be found here.
First Principles of Physical Model Verification

Chicken or egg, physical model or prototype hardware? Physical models are used in Model-Based Design to create closed loop simulations to validate control algorithms. To close the loop with confidence the physical model needs to be validated against the real world. And there is the rub; how do you validate a physical model when you may or may not have access to the hardware?
The real world, or something like it…

It is rare that a product is truly “new,” and so we can often start off with an atlas(1) that provides initial directions. From there, validating the physical model takes the following series of steps:
- Determine which variables influence your outputs
- Determine which of those variables can directly and safely(2) be measured
- Determine the accuracy of those measurements
- Determine the operating conditions of your model(3)
- Find the “boundary” effects, such as hard stops or friction / stiction interactions.
C3:(4) Collect, clean, correlate!

The first two C’s, collect and clean, are performed on the prototype hardware or taken from existing research. Once collected, outlying data is discarded and the process of correlating the physical model to the real-world data starts.
In an ideal world the inputs to the physical hardware can be directly fed to the physical model and the outputs compared. The model is then “tuned” to reach the highest level of correlation.
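The tuning step can be sketched as a parameter sweep against recorded data. Everything here is an assumption for illustration: the first-order model, the single parameter `k`, and the coarse sweep; a real workflow would use an optimizer or dedicated parameter-estimation tooling.

```python
# Minimal sketch of correlation tuning: feed recorded hardware inputs
# to a simple physical model and adjust one parameter to best match
# the recorded outputs. The first-order model is an assumption.

def simulate(u, k, dt=0.1, x0=0.0):
    """First-order model x' = k * (u - x), integrated with forward Euler."""
    x, out = x0, []
    for ui in u:
        x += dt * k * (ui - x)
        out.append(x)
    return out

def sse(a, b):
    """Sum of squared errors between two signals."""
    return sum((ai - bi) ** 2 for ai, bi in zip(a, b))

# Recorded input and "measured" output (here generated with k = 0.8).
u = [1.0] * 50
measured = simulate(u, k=0.8)

# Coarse parameter sweep for the best-correlating k.
best_k = min((k / 100 for k in range(10, 200)),
             key=lambda k: sse(simulate(u, k), measured))
```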
Building up from simpler models…

Sometimes the full physical device is not available for data collection. In that case your physical model architecture is built from correlated “submodules,” and the accuracy of the calculations becomes even more critical. For the simple case where the accuracy is consistent across the full operating domain, it can be calculated as
acc(tot) = acc(1) * acc(2) * … * acc(n)
However, since it is more common that the accuracy is dependent on where in the operating envelope you exercise the model it should be calculated as
acc(tot, r1) = acc(1, r1) * acc(2, r1) * … * acc(n, r1)
acc(tot, r2) = acc(1, r2) * acc(2, r2) * … * acc(n, r2)
…
acc(tot, rm) = acc(1, rm) * acc(2, rm) * … * acc(n, rm)
In operating regions where sub-models have lower accuracy it is important to maintain high levels of accuracy in the other sub-models.
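The region-wise calculation above can be sketched directly; the sub-model names, region labels, and accuracy numbers below are illustrative assumptions.

```python
# Sketch of the region-wise accuracy calculation: total accuracy in
# each operating region is the product of the sub-model accuracies in
# that region. Names and numbers are illustrative.
from math import prod

# acc[submodel][region], as fractions (1.0 = perfect).
acc = {
    "motor":   {"r1": 0.99, "r2": 0.95},
    "gearbox": {"r1": 0.98, "r2": 0.97},
    "load":    {"r1": 0.99, "r2": 0.90},  # weak in region r2
}

acc_total = {
    region: prod(sub[region] for sub in acc.values())
    for region in ["r1", "r2"]
}
```

Note how the weak `load` sub-model drags down the total in region r2; to hold the line there, the other sub-models must stay highly accurate.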
Footnote
- This is both a reference to the image and a chance to say, image searches on the phrase “the real world” bring up the MTV show and almost nothing else.
- Remember, one of the reasons for physical models is to take the place of tests in unsafe operating conditions!
- Sometimes multiple models are required to handle different operating conditions.
- The “3” here was in the mathematical sense, C to the third power. The 4 in parentheses is the footnote.
Testing requirements: Part 5: Test runners!
In part five of this testing series, I look at the use of test runners; the tool that enables the execution and evaluation of your tests.
Footnotes
- As promised, the MATLAB Unit test documentation
- Not promised, but here for you anyway, Simulink Test Manager.





