I think I’ve written this before: Revisiting Reuse

I’ve seen the question many times: “Why do you care so much about reuse?” Giving my reusable answer, I say “when you do something from scratch you have a new chance to make the same mistakes.” (1) If you look at your daily work, you will see we already reuse more than we realize.

When I go looking for images for “reuse,” what I find most of the time are clever projects where a used plastic bottle is made into a planter, or an egg carton becomes a place to start seeds.(2) What I want to talk about today is reuse for the same purpose, e.g. reuse like a hammer: a tool that you use to pound and shape the environment over and over.(3)

Hammer time(4)

Why do I care about reuse? Reuse is a company’s greatest asset; it is the accumulated knowledge over time. No one talks about “reusing a wheel,” but that is what we are doing: we are reusing a highly successful concept.

So how do we get into the wheel house?(5) The first step is to identify a need, something that you (or ideally many people) need to use / do regularly.

When writing tests I frequently need to get my model to a given “state” before the test can begin. Creating the test vectors to do that manually is time consuming and error prone.

Once you have done that, think about whether there is a way that the task can be automated.

The solution I found was to leverage an existing tool, Simulink Design Verifier, and use the “objective” blocks to define my starting state. The tool then finds my initial test vectors to get me to where I want to be.
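For the command-line curious, here is a minimal sketch of what driving that analysis looks like. The model name is a placeholder, the goal state itself lives in the Test Objective / Test Condition blocks placed inside the model, and option names may vary slightly between releases.

    % Hedged sketch: ask Simulink Design Verifier to generate test vectors that
    % reach the state defined by the objective blocks in the model.
    % 'myController' is a placeholder model name.
    opts = sldvoptions;
    opts.Mode = 'TestGeneration';      % generate test vectors
    opts.SaveHarnessModel = 'on';      % build a harness populated with the found vectors

    [status, fileNames] = sldvrun('myController', opts);
    % fileNames.DataFile holds the generated vectors that drive the model
    % from its default state to the desired starting state.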

As described right now, this is a “proto-wheel.” It is a design process that I use (and have taught customers to use) but it is not fully reusable (yet). Why is that?(6)

Horseshoes and hand grenades(7)

This fails the “wheel test” in two fundamental ways:

  1. It isn’t universal: every time I use it I need to recreate the interface, e.g. manually define the goal state
  2. It may not work: there are some conditions for which this approach will not find a solution.

Becoming a wheelwright: doing it right

If you want this to become “wheel like” you need to address the ways in which it fails. Here is how I plan to do that.

Create a specification language: by creating a specification language I, and my customers, will be able to quickly define the target state. Further, the specification language will ensure that errors in specification do not enter into the design
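To make that concrete, here is a hypothetical sketch of what such a specification could look like; every field name and value is invented for illustration, not an existing language.

    % Hypothetical "goal state" specification; all field names are illustrative.
    goal.state.TransmissionMode = 'Drive';      % named state the test must start from
    goal.state.VehicleSpeed     = 35;           % signal value at the start of the test
    goal.tolerance.VehicleSpeed = 0.5;          % acceptable wiggle room on that value
    goal.forbidden = {'FaultActive == true'};   % conditions that may not occur on the way

    % A small translator would turn a structure like this into the Test Objective /
    % Test Condition blocks that the test generation tool consumes.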

Analyze the design space: when a tool doesn’t work there is a reason; in some cases it can be deduced through mathematical analysis, in others, through analysis of failure cases. I am currently “tuning the spokes” on this wheel.

But will it roll? What is my (your) role (8) in making it happen?

In the end, a good idea and strong execution are not enough. The key to widespread reuse is getting it used by people outside the original (in this case testing) community. Until you do that it is only a specialty hammer.(9)

Getting those outside people to adopt a new tool or method is about getting people to care about the problem and the solution.(11)

Picking up a “hammer” is a 4 step process

  1. Know you have a problem: sometimes when you have been doing something one way for a long time you don’t even realize there is an issue.
  2. Know there is a solution: if you don’t know hammers exist you will keep hitting things with rocks. It gets things done but your hand hurts.
  3. Have time to try out the solution: even the best hammer can be slower than your trusty rock the first few times you use it.
  4. Give the hammer maker time to make you a better hammer: chances are even the best tool will need refinement after the first few users.

Final comments: Why now?

Reuse reduces the introduction of errors into the system. Remember, “when you do something from scratch you have a new chance to make the same mistakes.”(12) And when working remotely during COVID, the chance to do so increases. Start looking at the tasks you do regularly and ask the need and automation questions.

Footnotes

  1. My wife Deborah always raves about my strawberry rhubarb crisps. But even after making over 100 of them over our 25 years together, I still can get the sugar to corn starch to lemon ratios a bit wrong if I don’t watch what I am doing.
  2. From a total energy usage standpoint, I do question if we would be better off just recycling the bottle and egg carton.
  3. I started to think of the song “If I had a Hammer.” As it is a Pete Seeger song, it isn’t surprising that it is about civil and social rights. As a kid, the line “I’d hammer out love” seemed odd to me; hammers were blunt tools. When I got older I saw the other uses of hammers, to bend and shape things, to knock things into place. When you write software, write it like it’s a tool that can do all the things you need it to do.
  4. As a child of the 80’s “U Can’t Touch This” (hammer time) was at one time on nearly constant replay on the radio.
  5. First, make sure you are developing in an area that you know well so you know what needs to be done over and over.
  6. When I started this blog, I referenced the serial novels of the 19th Century. There are times I am tempted to end a blog post on a cliff hanger, but not today.
  7. When I first wrote the section title I thought “this must be a modern phrase,” as while horseshoes have a long history, hand grenades are relatively new (roughly 200 years old?). I was wrong. The earliest versions can be traced back to Greek fire (or earlier).
  8. As a lover of puns, I have always loved homonyms.
  9. If you thought I was done with the hammer metaphor you were wrong, I’m bringing it back at the end to drive my point home(10)
  10. Because that is what you do with hammers
  11. A blog post about this methodology would be one way of getting people to know about it.
  12. And in the spirit of reuse, I reused this from the start

Baskin Robbins: 31 flavors of models

Perhaps it is a coincidence, but when I looked up the definition of “variant” online, the example sentence was about an illness.(1) The concept behind variants is appealing; within one model hierarchy, include multiple configurations of your target. But from a testing perspective, well….

If you go to Baskin Robbins ice cream, home of 31 flavors, and ordered a 2-scoop cone you would have 31!/(31-2)! = 930 patterns, meaning it would take you almost 18 years to try them all if you did this once per week.(2) So if you have an integration model with 8 referenced models, each model having 3 variants; well, just how many weeks do you want to test for?
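If you want to check the arithmetic (or scare yourself with your own numbers), the counting is simple; the 8-model, 3-variant figures below are just the example from the paragraph above.

    % Ordered two-scoop cones from 31 flavors, and the variant example above
    cones = 31 * 30                    % 31!/(31-2)! = 930 cone patterns
    yearsNeeded = cones / 52           % ~17.9 years at one cone per week

    nModels   = 8;                     % referenced models in the integration model
    nVariants = 3;                     % variants per model
    configs   = nVariants ^ nModels    % 3^8 = 6561 configurations to cover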

From age 8 ~12 Farrell’s was the “hot spot” for birthday parties…

Related and unrelated variants

To simplify the design and testing process the variants in a model should be related, e.g. if you have a variant for your car, manual or automatic transmission, then a related variant could be for the wave plate (auto) versus clutch (manual) models. An unrelated variant could be for the HVAC system.

Please note, the ice-cream references were supposed to end in the last paragraph, but the “which of these don’t belong” image I found had an ice-cream cone. That was not my intention but it is too late now.

Defining your inclusion matrix

What this matrix shows us (3) is which model variants

  • Are allowed with other variants
  • Are required by other variants
  • Are not impacted by other variants
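As a hypothetical example (the component and variant names below are borrowed from the transmission discussion above and are purely illustrative), the inclusion matrix can be captured as a small table that your build and test scripts can read:

    % Hypothetical inclusion matrix; one row per variant-to-variant relationship.
    inclusion = {
        'ManualTrans',  'ClutchModel',    'requires'
        'ManualTrans',  'WavePlateModel', 'excludes'
        'AutoTrans',    'WavePlateModel', 'requires'
        'AutoTrans',    'ClutchModel',    'excludes'
        'ManualTrans',  'HVAC',           'independent'
        };
    inclusionTable = cell2table(inclusion, ...
        'VariableNames', {'Variant', 'RelatedVariant', 'Relationship'})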

Validate the matrix

There are two primary methods for validating the inclusion matrix: inlined and pre-computed. With the inlined version, the full set of conditions for a variant to be selected is coded into the variant selection process. With the pre-computed version, the variant logic exists external to the component and only the final value is evaluated in the component.

As you would expect, there are pros and cons to both approaches.(4) Having the logic in the component makes it easier for the developer of the module to understand what is going on; however, it makes it more difficult for a system developer to have a global view of the variants. On balance, the external computation is more likely to provide a robust process.
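As a sketch of the pre-computed style (the variable names and conditions are placeholders, and Simulink.Variant objects are just one way to bind the result to the model), the variant logic is resolved once, outside the component, and the component only ever sees the final value:

    % Placeholder variant control values, resolved external to the component
    TRANS_MANUAL = 1;
    TRANS_AUTO   = 2;
    TRANS_TYPE   = TRANS_AUTO;                          % selected once for the build

    USE_WAVE_PLATE = double(TRANS_TYPE == TRANS_AUTO);  % the only value the component sees

    % One possible binding to variant subsystems in the model
    V_WAVEPLATE = Simulink.Variant('USE_WAVE_PLATE == 1');
    V_CLUTCH    = Simulink.Variant('USE_WAVE_PLATE == 0');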

Variants versus reuse

In our first example, an automatic versus manual transmission, this was clearly a variant; the models would be significantly different. But what if we were talking about a 4 speed versus a 5 speed transmission? Could that model, with some refactoring and change of data, be reused?

The key thing to keep in mind is that while a re-used model will require additional test cases, many of the “basic” tests would already exist and could be modified for the reused data. On the other hand, a model variant will require a whole (or nearly whole) new set of tests.

Why now?

Why is this blog post part of the “impact of COVID” collection, when this is good advice at any time? The answer is simple: as we social distance, the informal communication that makes poor design (5) tolerable is less prevalent.

This example COVID drives
Adopt this and thrive. (6)

Smokey’s look had changed by 1951; I’m not sure why he is wearing pants and none of the others have them on.

Footnotes

  1. The jest here is that the overuse of variants can lead to multiple bugs in your software due to the increased complexity.
  2. In reality this would take far less time as there are some combinations that should never be considered; for instance, anything with coconut.
  3. The Matrix (the movie) also shows us that Hollywood had (and has) an ill informed conception of what computer programming / hacking really looks like.
  4. All design is about pros and cons; there are few things that are 100% “good”
  5. Informal communication makes that poor design tolerable, but it doesn’t mean that anyone is happy.
  6. I’m not sure if this is an example of “doggerel” or “bearerel” poetry. Hopefully you can bear with me.

Testing your testing infrastructure

Ah tests! Those silent protectors of development’s integrity, always watching over us on the great continuous integration (CI) system in the clouds. Praise be to them and the eternal vigilance they provide; except… what happens to your test cases if your test infrastructure is incorrect?

Quis custodiet ipsos custodes(1)

There are 4 ways in which testing infrastructure can fail, from best to worst:

  1. Crashing: This is the best way your test infrastructure can fail. If this happens the test ends and you know it didn’t work.
  2. False failure: In this case, the developer will be sent a message saying “fix X”. The developer will look into it and say “your infrastructure is broken.”(2)
  3. Hanging: In this case the test never completes; eventually this will be flagged and you will get to the root of the problem
  4. False pass: This is the bane of testing. The test passes so it is never checked out.

False passes

Prevention of false passes should be a primary objective in the creation of testing infrastructure; the question is “how do you do that?”

Design reviews are a critical part of preventing false passes. Remember, your testing infrastructure is one of the most heavily reused components you will ever create.

While not preventing false passes on its own, adherence to standards and guidelines in the creation of test infrastructure will reduce common known problems and make the infrastructure easier to review.

There are 3 primary types of “self test” (plus one catch-all you should not rely on):

  1. Golden data: the most common type of self test is to pass known data that either passes or fails the test. This shows if it is behaving as expected but can miss edge cases(3) (a minimal sketch follows this list)
  2. Coverage testing: Use another tool to generate coverage tests. If this is done, then for each test vector provided by the tool provide the correct “pass or fail” result.
  3. Stress and concurrency testing: For software running in the cloud, verify that running in the cloud does not itself cause errors(4)
  4. Time: Please, don’t let this be the way you catch things… Eventually, because other things fail, false passes are found through root cause analysis.
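To make the golden-data case (item 1 above) concrete, a self test for a piece of test infrastructure can be as small as the sketch below, written with the MATLAB unit test framework; the checker function evaluateResult and the data file names are placeholders.

    % Hedged sketch: golden-data self test for a hypothetical checker function
    % "evaluateResult". Save as tCheckerSelfTest.m and run with runtests.
    classdef tCheckerSelfTest < matlab.unittest.TestCase
        methods (Test)
            function knownPassingData(testCase)
                data = load('golden_pass.mat');          % known-good logged results
                verdict = evaluateResult(data.logged);   % infrastructure under test
                testCase.verifyTrue(verdict, 'Known-good data must pass');
            end
            function knownFailingData(testCase)
                data = load('golden_fail.mat');          % known-bad logged results
                verdict = evaluateResult(data.logged);
                testCase.verifyFalse(verdict, 'Known-bad data must fail');
            end
        end
    end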

Final thoughts

In the same way that nobody(5) notices the waterworks until they fail, it is common to ignore testing infrastructure. Having a dedicated team in support is critical to having a smooth development process.

Footnotes

  1. In this I think we all need to take a note from Sir Samuel Vimes and watch ourselves.
  2. There is an issue here; frequently developers will blame the infrastructure before checking out what they did. Over time the infrastructure developers “tune out” the development engineers.
  3. Sometimes the “edge cases” that golden data tests miss are mainstream but since they were not reported in the test specification document, they are overlooked by the infrastructure developers.
  4. The type of errors seen here are normally multiple data read / writes to the same variable or licensing issues with tools in use.
  5. And if you look at it with only one eye, failures will slip past.

The next generation of Model-Based Design

I predict that in just one year the state of Model-Based Design could (1) see a jump equivalent to 7 years of normal progress; COVID-19 has brought to the forefront a needed set of transformations that will reshape the processes and infrastructure that define Model-Based Design. It gives us an opportunity to realize a new vision both for these times and beyond.

In a painting, every brush stroke(2) matters; but it is only in the collection of the strokes that the full image is revealed. Fortunately, in the software development process, you do not need all of the “strokes” to see the full picture; each improvement stroke provides a return on investment. By intelligently clustering the strokes together you see a multiplicative effect.

The objective of this new series of blogs is to provide the strokes and to define the clusters so the order of adoption can be optimized.

The obvious change due to COVID-19 is working remotely; this change exposes multiple areas where Model-Based Design should be improved. Cultural needs feed into process changes, which then mandate enhanced automation; these are the “strokes” that we will look at.

Example grouping of CPA into Clusters

The clusters

I want to introduce a few of the clusters I have already identified as “ready” for transformation. As this blog continues, this list will be expanded and refined. For now…


The review process

The current review process depends on two things: informal communication before the actual review, and the highly interactive nature of in-person reviews; both of these suffer in the remote working environment. To create a better review process there are several “strokes” that are needed: changes in architectural style to make review easier, up-front communication through the use of ICDs, and automation to validate prior to the meeting.

Creation and validation of physical models

At first glance, the creation of physical models should not be impacted by COVID-19. If we are building our models from first principles then those principles are the same if we are in the same room or not (3). However, in practice, first principle models are not practical and simplifying assumptions need to be made, which in turn means that the model needs to be validated against real world data(4). How do you collect that data when you need to social distance? How do you validate it?

Requirements life cycle

The requirements life cycle will perhaps see the most important changes. Requirements act as the primary source of truth in the development process; as a result, having a robust, understandable requirements life cycle is critical. We will need to see improvements in the way requirements are written, tested and maintained.

The testing life cycle

Testing should be like breathing, something you do automatically to keep you alive (5). The testing life cycle is impacted by COVID in a number of ways. First, there is the stress on testing infrastructure (tests need to be moved to continuous integration (CI) systems). Next, there is an impact on the development of tests: when the developer and the test engineer don’t sit next to each other,(6) there is less of the informal communication that provides bullet resistant (7) tests.

The release process

The smallest mistake early in your development process can have a butterfly effect (8) on the downstream process. The use of automation at all stages of the release process will need to change to prevent the small flaps early on leading to large problems down stream. If we follow the automation upstream we will see that there need to be cultural changes that support people in the use of automation.

The strokes

The strokes are updates to classical Model-Based Design topics; areas where the existing shortfalls are exposed by the current working conditions.

  • Cultural
    • Improving formal documentation
    • Enhancing and simplifying informal communication
    • Meeting your meeting responsibilities
    • Welcome aboard, on-boarding at a distance
  • Process & “Style”
    • Workflows
      • One of these things is not like the others: Version controls
      • What I’m expecting: writing requirements
      • Follow the leader, improving the traceability process
      • Get your MBD license: Certification time!
    • Architectural changes
      • Mega fauna models: “Right sizing” your models
      • Come together, right now: model integration
      • They have a word for that in… Selecting the correct modeling language
      • Multi-generation code development: integrating legacy code
      • Put on your model reorg boots!
      • Baskin Robbins 31 flavors of models
    • Development changes
      • Workout routine for physical models
      • How do you know what you know? Validation methodologies
      • Polymorphic functionality
      • I think I’ve written this before! Revisiting reuse
    • Testing changes
      • A shock to your testing cycle
      • Send in the robots: test automation
      • The ABCs of testing interfaces
      • No bubbles: standardized testing
  • Automation
    • Look at this cool thing I wrote: when and how to automate
    • Compound interest: return on investment for automation
    • The ice cream problem: bullet proof automation

I will be posting blogs on these topics about once per week.

Footnotes

  1. I write “could” because all changes are dependent on taking action; now is the time to start.
  2. I have often wondered to what degree the Pointillism school of art influenced early computer graphics which were sprite based; I also have wondered if the term “sprite” is in part, due to the number of early fantasy computer games that included sprites.
  3. I tried for a long time to think of a “Spooky action at a distance” joke that would fit in here but wasn’t able to. Perhaps you could say after working as part of a team for long enough you know how everyone thinks, so you are “developing at a distance.”
  4. Even when you don’t need to simplify the model, real world validation is often recommended for complex systems.
  5. We could push this analogy pretty far; under stress you breathe/test more heavily. If you train your systems you can run much harder before you are out of breath
  6. In the best cases, organizations have separate development and testing roles. When they are combined into one, the developer is the tester; you are sitting next to yourself and sitting alone, which can lead to developer bias in the creation of tests.
  7. I write “bullet resistant” not “bullet proof” in recognition that to get to “bullet proof” is part of the process of validating your tests (see this on developing testing)
  8. The more common use of the butterfly effect relates to chaos theory, e.g. a butterfly flaps its wings and triggers a tornado. However, when I first learned of it, it came from a Ray Bradbury story, “A Sound of Thunder.”

Don’t ISO-late

When it comes to safety standards, such as ISO 26262, the old adage “better late than never” can be both dangerous (1) and costly. The simplest, somewhat humorous, description of a safety critical standard that I have read is the following:

  • Say what you are going to do
  • Do what you said you would do
  • Verify that what you did matches what you said
  • Generate reports

While it is fictitious in its simplicity, it gets to the heart of the matter. Safety critical processes are about being able to show that, at each step along the development path, you both plan out what you intend to do and verify that what you did matches your intentions (plan, do, show).

If that is all

you have to do, how hard can it be? Well first, let’s talk about what it takes to “show” that you did what you said. First you need to be able to show traceability between all artifacts. This means you need a robust tool that will show the links between

  • Requirements to the model
  • Requirements to tests
  • Test results to the model
  • Test results to the requirements
  • The model to the generated code
  • Integration tests to requirements
  • And between all the other components

Furthermore, those links need to take into account the version control state of all the units under test. Setting up the “hooks”(2) for all of these components is a task that needs to be done at the start of the project. There is a reason for the old joke about airplane development: “For every pound of plane you have 20 pounds of paper.”(3)
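What does a “hook” actually hold? At its simplest, each link is just a record that connects two versioned artifacts; the sketch below is a hypothetical structure, not any specific tool’s format.

    % Hypothetical traceability record; all field names are illustrative.
    link.from.artifact = 'REQ-0042';                     % requirement ID
    link.from.version  = '3.1';
    link.to.artifact   = 'Controller/SpeedLimiter';      % model element
    link.to.version    = 'a1b2c3d';                      % version-control revision
    link.type          = 'implements';                   % implements / verifies / derives-from
    link.lastChecked   = datetime('now');                % when the link was last validated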

Just like in the movie, being a “tracer” is not an easy job

Now that you have your “outline,” what now?(4)

Tracing the steps is just the “first” step; next, you need to validate the behavior of every tool (including your tracing tools) along the way. There are 5 basic steps for validating a software tool:

  1. Create a validation plan: Define what it is you will be testing, under what conditions, what is the environment, who will perform the validation…
  2. Define system requirements: The creation of a system requirements specification (SRS) document breaks down into two parts: infrastructural and functional. During this time a system risk analysis document is created along with mitigation strategies.
  3. Create a validation protocol and test specs: Definition of both the test plan (how you will test) as well as the specific test cases. A traceability matrix linking the plan to the test cases is also created.
  4. Perform the testing: Execution of the tests defined in step 3
  5. Review and update: Collect the results of step 4; for any issues that failed the validation plan, determine if they fall under the mitigation strategies or if the plan / tool needs to be updated.

Getting your ducks in a row

Assuming you get all your ducks in a row (6), what next? The next step is to roll out how you will use those tools to your end users. Part of the software validation process is specifying how the tool is used; this can take the form of modeling guidelines (MAAB Style guide), defined test frameworks or other workflow tools.

The only time is NOW

As these tasks start mounting up you can see why the “better late than never” will not work for a safety critical workflow; by the time “late” comes along, you have already been developing algorithms without the guidelines, creating artifacts without traceability and using tools that may, or may not, be certifiable.

There is, of course, good news. Processes learned in one project can be reused (as can verification artifacts), so much of what you face is a one-time upfront cost. If this is your company’s first time, you can also leverage industry best practices such as the IEC Certification Kit.

Footnotes

  1. People often talk about things being dangerous that are not, in fact, dangerous. However when it comes to safety standards, the failure to start early can result in critical mistakes entering into the project which can lead to injury and even death.
  2. Hooks is a term used to describe the infrastructure to connect different components together.
  3. A joke about “paper airplanes” would make sense about now.
  4. Since an “outline” is a “tracing”
  5. The longer you work in the safety critical world, the more often you will hear the term “safety critical”
  6. And for a software development project there will be multiple “ducks” to validate from Test vector generators, code generators, compilers, test harness builders….

Why choose Model-Based Design?

Over the last 18 years, I’ve had a variation on this conversation on every project I have worked on. (Dramatized for a blog audience)

  • Talented Engineer (TE): This model based stuff is interesting and all, but I can write the same code in 1/2 the time and it is 10% more efficient.
  • Me (also a talented engineer): That is probably true, you write very good code. Do you enjoy writing code?

  • TE: Well no, I write code because that is what I need to do to implement my algorithm. But wait, you are admitting my code is better?
  • ME: Yes, yours is. How many other people in your group are as proficient in C? And if you don’t enjoy writing code, do you enjoy designing <MAGICAL SPACE WIDGETS>(1)?
  • TE: I went to school so I could work on MSWs(2), I love working at MSW Co on them; and really, maybe one out of the 20 can program as well as I do.
  • ME: Ok, well, how much time do you spend coding versus designing? Debugging versus testing?
  • TE: Tell me more about MBD stuff…

Realizing the benefit

The definition of Model-Based Design that I use is simple:

The use of a model(3) that is the single source of truth to 
execute two or more tasks in the design cycle 

I work for The MathWorks, but, for a minute I will be agnostic. The definition simply says “a model.” The “model” can be a physical prototype, an analog computer, C code or, I hope, a Simulink or Simscape model. The important part is that the same model, without changes (4), is used at multiple points in the design cycle.

By my estimate I have drawn the V diagrams 1.3e5 times.(5)

If we think back to our TE in the opening section, what did they want to do? They wanted to design MSWs. They did not want to spend time creating test harnesses, writing test vectors, generating reports, and integrating with hardware. And why should they? TE was hired because he studied MSWs and knows how to design the best MSWs; why take him away from that task? Because MBD allows users to use the same model at multiple points, when our TE in design is done he can hand the model off to another TE in the testing group, who when they finish can hand it off to a TE in integration, who hands it off to a TE in release engineering. And why is this possible? Because it is much easier to find talented engineers who cover a given area very well (e.g. just test, or just release) than it is to find the magical unicorn(6) who can do all of the tasks well.

But wait! I can do all that in X

At some point down the line our TE comes back and says

  • TE: wait, just two paragraphs ago you said a Model could be C code, why should I use this graphical language?
  • ME: Wait! I just wrote that, so how did you see that? But OK, depending on your application, you may use Simulink, Stateflow or the MATLAB (textual) environment. The key is the infrastructure built up around the environment that enables the “more than one uses of the model.”

Can and should are two different beasts(7). Modern graphical modeling languages have supporting tools directly integrated into their environment. The set-up and integration is reasonably straightforward. Textual languages, by their open nature, often have higher set-up and integration costs.

Making the transition / learning your way around

At first the transition to a graphical development environment (8) can seem daunting; Simulink’s base palette has over 200 blocks,(9) and knowing at first which one is the correct one to use can be confusing. However, like learning any other language, you will quickly pick up the basics once you throw yourself in. Unlike learning a new programming language, there are multiple transformation technologies you can apply directly to the model. When you start adopting Model-Based Design you should determine what “second task to execute” you want to adopt first. For more insights on this I would recommend viewing this roadmap.

Putting it all together

Ultimately the adoption of Model-Based Design isn’t about the tools, it is about the process: how you use each tool at each step along the way to the best effect. I welcome you to continue to join me in this space as upcoming blog posts delve more into Model-Based Design processes.

Ah the splash page image!

Model-Based Design for the VP/CTO

In past blogs I have written and talked about the Return On Investment (ROI) for adopting Model-Based Design. This link, from The MathWorks, provides another good overview on the ROI question. I want to propose another reason for this migration / adoption. Finding an engineer / scientist who knows how to develop “magical space widgets” takes time; on-boarding them takes time. Losing them happens from frustration and boredom. This is one of the “hidden” drivers of ROI for MBD: when your people spend most of their time working on the things that interest them, in ways that use their abilities and knowledge, you have highly engaged employees, which leads to greater innovation and higher quality.

Footnotes

  1. MAGICAL SPACE WIDGETS is a generic term for a customer project. Sometimes it is a car or a plane, or sometimes an actual spacecraft.
  2. MSW Is the agreed upon TLA for Magical Space Widgets.
  3. In the actual MBD workflow it will be multiple models, but let’s start simply
  4. Without changes is a simplification. The model you start off with at the start of the design cycle will be elaborated as it is developed. The important point is that if you took that elaborated model back to the earlier stages of the process it should still function in that stage (at a higher level)
  5. The version that I like best of the V diagram reflects the iterative nature of design, that within each stage there are iterations moving forward and back. Much like a PID controller, a good process is self correcting to errors in the process.
  6. Magical unicorns do exist, just don’t count on your process depending on them.
  7. Or in the image’s case, T-Rex
  8. OK, I’m not trying to be subtle here; once you start seeing them as development environments where you don’t throw away your work at each step along the way, the benefits become clear.
  9. Honestly I’m not sure how to count the number of “basic” functions in a textual language like C; those 200+ blocks may at first seem like a lot, but once you realize they are targeted at the design of models you quickly pick them up.

This is “only” a test

In the last blog I introduced the best practices for designing scenario based tests. Today I am going to cover the non-Herculean(1) task of generating test vectors.

Good vector definitions have resolution down to the smallest time step

The “giddy” set-UP

Starting off happily, let’s consider 3 things: the unit under test, the test harness and the analysis method.

  • Unit Under Test (UUT): The UUT is what you are testing. For the test to be valid, the unit must be fully encapsulated by the test harness. E.g. all inputs and outputs to the UUT come through the test harness.(2)
  • The test harness:(3) Provides the interface to the UUT, providing the inputs and reading/logging the outputs. Test harnesses can be black, white or grey box. Test harnesses can be dynamic or static.(4)
  • Analysis method: Dynamic or static; how the results of the test execution are evaluated.

Not to put the cart before the horse, but: we start with a test scenario. To exercise the scenario we need test vectors. To have test vectors, we need a test harness. To have a test harness we need a well defined interface.(5)

Within the software testing domain (which includes MBD) a well defined interface means the following:

  • All the inputs and outputs of the system are known: Normally this is through a function interface (in C) or the root level inputs / outputs in a model
  • Type and timing are known: The execution rate (or trigger) for the UUT is known as are all of the data types and dimensions of the I/O.
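Written down, such an interface is small; as a sketch, the signal names below are reused from the ABS example later in this post and the layout itself is hypothetical.

    % Hypothetical interface description for a UUT; layout is illustrative only.
    uut.name       = 'ABS_Controller';
    uut.sampleTime = 0.01;             % execution rate in seconds
    uut.inputs  = struct('name', {'WheelSpeed', 'WheelTqCmd', 'SlipRationDetected'}, ...
                         'type', {'single', 'single', 'boolean'}, ...
                         'dims', {1, 1, 1});
    uut.outputs = struct('name', {'ABS_PWM', 'ABS_Fault'}, ...
                         'type', {'single', 'boolean'}, ...
                         'dims', {1, 1});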

Time to saddle up!

No more horsing around, once you have your interface designed, it is time to create your test harness. Given that we are working in the domain of Model-Based Design, the ideal objective is to automatically generate a test harness. (To all the neigh sayers out there)

A well defined interface!

Signal time!

There are four basic methods for creating signals

  • Manually: Ah…good old fashioned hand crafted test vectors. These take the most time but are where we normally start (see the sketch after this list).
  • Automatically (general constraint): The next step up is to create test vectors using an auto generation tool. These tools generally allow for basic “types of tests” to be specified such as range, dead code, MCDC.
  • Automatically (constraints specified): A further refinement is to use a test vector generation tool and apply constraints to the test vectors.
  • From device: Perhaps this is cheating, but a good percentage of input test vectors come from real world test data. They have all the pros and cons(6); noise and random data; they may not get what you are looking for but…
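For the manual case, a hand-crafted vector is nothing more than data against time; here is a minimal sketch (signal names reused from the ABS example below, values invented).

    % Hand-crafted test vectors: a time base plus one signal per input.
    dt = 0.01;                          % match the UUT execution rate
    t  = (0:dt:5)';                     % 5 seconds of test time

    WheelSpeed = 30 * ones(size(t));    % steady wheel speed...
    WheelSpeed(t >= 2) = 5;             % ...with a sudden drop at t = 2 s
    WheelTqCmd = 100 * ones(size(t));   % constant torque command

    tsSpeed = timeseries(WheelSpeed, t);  tsSpeed.Name = 'WheelSpeed';
    tsTq    = timeseries(WheelTqCmd,  t); tsTq.Name    = 'WheelTqCmd';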

UUT and constraints

In this example we have the UUT and a “Test Assessment Block” as our method for imposing constraints. What we program into the Assessment Block is what we want to happen, not what we are checking against.(7) For example, we could specify that the input vectors for WheelSpeed, WheelTqCmd and SlipRationDetected are at a given value and that the output vector is ABS_PWM. The automatic test vector generation would then create a set of tests that met that condition. You could then check for the cases where the ABS_Fault should be active.

COVID-19 Acceleration: issues with “from the device”

When you social distance from your co-workers you are, more often than not, social distancing from your physical hardware. This directly impacts the ability to gather “real world” test data. My prediction is that we will see 4 trends as a result.

  1. Greater use of existing real world data / public domain data sets: Let’s be honest, there are times that data is gathered because it is easy to do so; go to the lab, run the widget, collect the data and go. However there is, no doubt, a wealth of existing data within your company and within government and university databases that will match what you need down to the 90% level.
  2. Increased automation of test data collection: To some extent being in a lab or in a vehicle will always be required for collecting data, however many of the processes around setup, data collection and data transmission can be automated to reduce both the time on site and the frequency of the time on site.
  3. Improved physical models: I know what you are thinking, this is about collecting real world data! What sort of trick is this(8)! What I am suggesting is that collection of physical data will be prioritized for the creation of better physical models to reduce the net time in lab.
  4. In use collection: The next step will be the transmission of data from existing objects in the field back to the manufacturer. The model “IC-2021” freezer in the field will, most likely, share 95% of its hardware and software with the units you are still developing. This means you have a lab in the field.
The Lambert projection

All of these methods will be used going forward to supplement traditional real-world data collection methods. With the physical modeling approach I am going to dive into how to select data to collect to rapidly improve the models. With the “in the field” approach we will take our first look at big data methods.

Final thoughts

Test vectors are just one part of the overall testing infrastructure; the necessary starting point. We are going to keep looking at all the points along the Verification and Validation process; both in depth and at the impact that COVID conditions continue to have.

Footnotes

  1. With the use of one last Greek hero of antiquity, I hope to build a metaphor for the 12 labors of Hercules as applied to testing (with far fewer labors)
  2. We will look at how large the UUT should be in another blog post. For now, we will give the ballpark that a UUT should be linked to 5 ~ 8 related requirements. Each requirement will have multiple tests associated with it.
  3. A good test harness should be like the harness for a horse, e.g. provides a secure connection to the horse (software) enabling it to run fully, have the minimum number of attachment points (e.g. don’t overload with test points) and connect without chafing (crashing or changing the behavior of the code).
  4. A dynamic test harness has the test validation software as part of the test harness, e.g. the UUT is evaluated as the test is run. A static test harness simply logs the data for post processing.
  5. Step 1 is to swallow a fly, today you will learn why!
  6. Noise is, and is not, a problem. Since it will exist in the real world you should welcome noise into your test cases, since that is what you will find once you deploy your product once and for all.
  7. As an example of what we want to happen, we may want to get a dessert (objective) but do not want one with coconut flavor (test).
  8. Not a very good trick, and 8! is 40,320.

Your Thread in the Labyrinth

I come to you now with white sails unfurled. For the last two years I have walked the twists and turns of the Model-Based Design labyrinth, working in depth with a single customer; all the while marking the walls with chalk and unfurling my ball of twine so that, having slain the minotaur of process,(1) I could return to recount my deeds.(2)

Two thirds of my way into the maze a shift occurred, one that happened for all of us; the onset of the COVID-19 virus. Like an earthquake, the effects of the virus had an impact on the “shape” of the maze. Some passages had small changes, some massive deadfalls. Being deep in the heart of the maze when it happened has given me insights into how Model-Based Design has, and needs, to shift in response. (Changes which I have found to be both of use now and of long term benefit to the development cycle)

To sail beyond the sunset

Two years ago when I set out on this odyssey,(3) I had laid out a sea chart to guide people through the boundary waters of Model-Based Design. These two years have given me a chance to see both the Kraken and the Treasure in the depths of that sea. Two years ago I wrote an introductory post on scenario based testing, in which I laid out the rationale for the testing and a basic methodology for developing these tests; let us go deeper.

The adventure starts now: the scenario

A scenario based test(5) should be described in two parts

  1. What happens: (description) e.g. what are the steps that take you from point A to point B.
  2. What cannot happen: (proscribed) As you go from point A to point B, if C happens then your test fails.

A, B, C, simple as 1, 2, 3, right? Well yes, if it ever was just A, B and not C.

The high C’s

Let’s continue thinking about a state machine where our objective is to get from State A to B. For many years my observation has been that the average number of state transitions to get between two “states of interest” ranges between 6 and 8. If each state visited along the way has 2 exit points, and if there are multiple ways to get from A to B, then the total number of described transitions is on the order of 6 to 8 for each route, and each transition can have multiple required conditions.

The seven deadly sins(6,8)

Ok, not really 7 but…

  1. What is proscribed can change: The allowed (and proscribed) behavior or event often changes from state to state.(7) Often the proscribed value of a variable in your starting state is what you need to make a later transition.
    • The mistake: Setting a test that fails whenever “variable X == 1,” even in states where that value is allowed.
      • Recommendation: Evaluate the scope of each proscribed behavior and assign it to only the active state. (Note this has the side benefit of faster running tests!)
  2. Synchronicity of events: A common conditional logic that you will encounter is “(input1 == 1) && (input2 == 0)”. On the surface this seems reasonable; but what if input1 and input2 are discrete events and only occur for one time step?
    • The mistake: The test is written in such a way that you have input1 and input2 hitting the correct value at the same time, but often in the “real world” there is some “jitter” in the signals.
      • Recommendation: If you have signals that have “jitter,” consider having a temporary buffer variable that holds the value for a set number of cycles.
    • Note from the “labyrinth”: these sorts of “bugs” take a long time to track down since the “test” passed but the actual device failed.
      • Recommendation: For all event based inputs add a notation to the Interface Control Document (ICD). This should be used as part of the test structure to determine if you have accounted for jitter.
  3. Not an exact match: For floating point numbers the scenario detection needs to use “fuzzy logic,” e.g. if the scenario calls for the vehicle to accelerate to 88 kph then the test should read “when VehSpeed >= 88 && VehSpeed <= 88 + delta”, i.e. give some “wiggle room” in the event (see the sketch after this list).
    1. The mistake: This is a common violation, so much so that we have MISRA 13.3 to cover it.
      • Recommendations: write “check checkers” to check your checks to validate they are valid(9)
    2. Notes from the “Labyrinth”: Surprisingly to me, this error has been more common in test infrastructure than in the units under test. From what I have seen this is a function of not running the same analysis tests on the test code as on the production code. (Remember, it does not matter where the bug comes in.)
  4. Not all routes are the same: This is an issue of under-specified use cases; think of the difference between getting from the ground floor to the top floor of a 10 story building: option 1, take the stairs; option 2, take an elevator; both get you there but one is better for your heart.
    • The mistake: When there are multiple routes, a “shake up” of the routes can occur where parts of one route are melded onto another.
      • Recommendation: For multi-route systems create one test case for each route.
    • Notes from the “Labyrinth”: The existence of multiple routes is, often, not taken into account in the design specification. The multiple routes are found when analysis tools (Such as Simulink Design Verifier) are used.
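To make sins 2 and 3 concrete, here is a small post-processing sketch of a jitter-tolerant event check and a toleranced value check; the sample data, the 0.5 kph tolerance and the 3-step hold window are all invented for illustration.

    % Invented sample data standing in for logged test signals
    VehSpeed = [86 87 88.2 88.4 90]';   % kph
    input1   = [0 1 0 0 0]';            % one-step discrete event
    input2   = [0 0 1 0 0]';            % second event, jittered one step later

    % Sin 3: floating point needs wiggle room, not equality (see MISRA 13.3)
    delta   = 0.5;                                       % invented tolerance, kph
    atSpeed = (VehSpeed >= 88) & (VehSpeed <= 88 + delta);

    % Sin 2: buffer a one-step event for a few cycles before checking coincidence
    holdSteps  = 3;                                      % invented hold window
    input1Held = movmax(input1, [holdSteps 0]) > 0;      % true if input1 fired recently
    coincident = input1Held & (input2 == 0);             % jitter-tolerant (input1 == 1) && (input2 == 0)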

Next Steps

Having covered the specification portion of the test case, in the next blog post we will cover the best practices for generating the test vectors while reducing the human input. In that post we will cover how to do this and why it is different when working in the post-COVID-19 world.

Footnotes

  1. It is the minotaur of process since process is about the journey between two, or more, states. Having successfully navigated the maze once, future trips through are no longer mysteries.
  2. In reality I will be recounting both the deeds of my own work and the many wonderful people I have been working with over the past 2 years.
  3. If learning about Model-Based Design can be thought of as a hero’s journey then let me be your wise elderly mentor(4)
  4. Except, please, I don’t want to be the mentor who is killed off in the 4th act
  5. Scenario based tests can be derived from use cases.
  6. Branching out from Greek myths to medieval concepts.
  7. Living in California now, I think complaining about the weather should be proscribed behavior. Even after two years we can’t get over how wonderful it is year round.
  8. Ok, too many bullet points may be one of the “new” deadly sins.
  9. When I worked on the original version of the Model Advisor for The MathWorks we had more discussion about “what is a check (guideline) versus check (test function)” than I can possibly remember.

Best practices for model cleanup

In this blog I have written a lot about “mushroom” and “spaghetti” code; today I’m going to write about the best practices for updating and cleaning up those models.

Should I update?

Before you start you should ask yourself three questions:

  1. Beyond cleanup are there additional modifications needed to the model? (No)
  2. Is the model, as written, performing its intended function? (Yes)
  3. Do I have test cases that cover the full range of behavior of the model? (Yes)

If you answered as indicated (no, yes, yes) then stop. Spend time on another part of your code that does not meet those criteria.(1) Otherwise let’s start…

Baselining the model

The first step in cleaning up code is baselining the model. This activity consists of four steps:

  1. Back up the model’s current state: Ideally this is already handled by your version control software but….
  2. Generate baseline test vectors: To the degree possible create baseline tests, these could be auto-generated.
  3. Generate baseline metrics: Generate the baseline metrics for the model, ram / rom usage, execution time, model coverage…
  4. Create the “Difference Harness”: The difference harness compares the original model to the updated model by passing in the initial test vectors and comparing the outputs (sketched below).
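Step 4 can be as simple as running both models against the baseline vectors and comparing the logged outputs; a minimal sketch, assuming both models log their root outputs to yout as arrays (the model names and the tolerance are placeholders).

    % Hedged sketch of a difference harness: same inputs, compare the outputs.
    outOrig = sim('controller_orig');          % baseline model
    outNew  = sim('controller_refactored');    % cleaned-up model

    yOrig = outOrig.yout;                      % assumes root output logging to arrays
    yNew  = outNew.yout;

    tol = 1e-6;                                % acceptable numerical difference
    if max(abs(yNew - yOrig), [], 'all') > tol
        warning('Refactored model diverges from the baseline.');
    end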

What is different about today?

The next question to ask in your refactoring is “do I re-factor or do I redo?” Depending on the state of the model there are times when simply re-doing the model from scratch is the better choice. This is often the case when the model was created before requirements existed and, as a result, does not meet them; that would make for a very short article though, so let us assume that you are refactoring. First figure out what needs to change and what should change. To do that, ask the following questions.

  • Review the requirements: what parts of the requirements are met, which are incorrect and which are missing?
    • Prioritize missing and incorrect requirements
  • Is it possible to decompose the model into sub-components? In most cases the answer is no, or yes but it is tangled. It wouldn’t be mushroom code if you could.
    • Create partitioning to enable step-based modifications
  • Identify global data and complex routing: Minimization of global data should be an objective of the update; complex routing is an indication that the model is not “conceptually” well decomposed.
    • Move sections of the model to minimize signal routing and use of global data
  • Identify the “problem” portions of the model: Which sections of the model most frequently have bugs?
    • Squash them.

Once you have asked these questions you understand your priorities in updating the model

Begin modification

First understand the intent of the section of the model, either through inspection or through review of the requirements. Once you understand what the intention is you can start to simplify and clarify.

  • Simplifying logical statements / state charts
    • Run a tool such as Simulink Design Verifier to check for dead branches; trim or fix them
    • Look for redundant logical checks (multiple transitions all using the same “root” condition check)
    • Look for redundant states (multiple states exist all with the same entry and exit conditions)
  • Mathematical equations
    • Did they create blocks to replicate built-in blocks? (Tables, sine, transfer functions)
      • Replace them with built-in blocks
    • Are complex equations being modeled as Simulink blocks?
      • Replace them with a MATLAB function
  • Size (too big or too small)
  • Partitioning rationale

Footnotes

  1. With mushroom code it is highly unlikely that you have test cases that cover the full range of behavior of the model; model (or code) coverage should not be confused with full behavioral coverage since it is possible to auto-generate test cases that would cover the full model without ever understanding what that coverage means
  2. One advantage of having this blog for 3+ years is I can mine back articles for information. Hopefully you will as well. What I mine is yours, nuggets of MBD wisdom.

Interface control documents and data dictionaries

Interface control documents (ICDs) and data dictionaries are two parts of a mature MBD infrastructure. The question I often hear is “what is the boundary between the two artifacts?” First, a high-level refresher:

  • The Data Dictionary: an artifact used to share a set of common data definitions external to the model and codebase.
    • Objective: provide common and consistent data definition between developers
  • The ICD: an artifact used to share interface information between components external to the model and codebase; often derived from or part of the requirements document set.
    • Objective: provide a common interface definition to simplify the integration of components when multiple people are working on a project.

An example of an ICD spec is

function name:                myIncredibleFunction
function prototype:           (double mintGum, single *thought, something *else)
Call rate:                    event-driven
Multi-thread interruptible:   yes

Function information:

  Variable     Type        Dimension   Pass by
  mintGum      double      1           value
  thought      single      4           reference
  something    structure   10          reference

And here is where the boundary question comes up. In specifying the data type and dimension in the ICD I am duplicating information that exists in the data dictionary, violating the single source of truth objective.

Duplication can be dangerous

So what is the flow of information here? I would suggest something like this…

  • The ICD document is created as part of the initial requirement specifications
  • The data interface request is used to inform the initial creation of data in the data dictionary
  • Once created the data is owned by the data dictionary

Infrastructure: making your artifacts work for you

Data dictionaries serve an obvious purpose: they are a repository for your data. On the other hand, interface control documents can seem like a burdensome overhead, which they will be without proper supporting infrastructure. If you remember the objective of the ICD, to simplify integration, then the need for tool support becomes obvious. When a developer checks in a new component it should be

  • Checked against its own ICD
  • Checked against the ICD for functions it calls and is called by
  • Its ICD should be checked against the data dictionary to validate the interface definition

With those three checks in place, invalid interfaces will be detected early and integration issues can easily be avoided.
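A minimal sketch of the first of those checks is below; the ICD structure and the “actual” interface are typed by hand here, where in a real project both would be generated by your requirements and model query tooling.

    % Hypothetical check of a component's actual interface against its ICD,
    % using the signal names from the example ICD above.
    icd.mintGum = struct('type', 'double', 'dims', 1);      % from the ICD
    icd.thought = struct('type', 'single', 'dims', 4);

    actual.mintGum = struct('type', 'double', 'dims', 1);   % from the checked-in model
    actual.thought = struct('type', 'single', 'dims', 4);

    names = fieldnames(icd);
    for k = 1:numel(names)
        if ~isequal(icd.(names{k}), actual.(names{k}))
            error('Interface mismatch for signal "%s"', names{k});
        end
    end
    disp('Component interface matches its ICD.')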

ICDs and the MATLAB / Simulink environment

Recently MathWorks released the System Composer tool. While I have not had a chance to try it out yet, it offers some of the functionality desired above. I would be interested to learn of anyone’s experience with the tool.