Thanksgiving: a resource-constrained and concurrent execution problem

NOTE: This post is recycled from my LinkedIn blogging days.  Happy holidays to my readers in the US.
This November 24th, at 4 P.M. sharp, all my requirements must be met; both the high-level requirements (enough food to overfeed 8 people) and the derived requirements (a selection of traditional Thanksgiving food, plus vegetarian and lactose/gluten-free options).  Further, I will be operating under constraints of both time (do I really want to get up at midnight to start the sous-vide turkey?) and space (my kitchen is great for a small meal but lacks the resources to really multi-task).(1)  In other words, this sounds like 85% of the engineering projects I have seen near release.

Part 1: Reuse

Reuse of proven recipes (not reuse of food; that happens the week after).  While there are always one or two new “flashy” dishes, for the most part I work against a known timeline of known dishes.  This ensures the quality of the proven dishes and allows me time to deal with any “issues” in the new ones.

Part 2: Resource allocation/scheduling

Over 6 lbs of onions will be chopped on Thursday for 6 different dishes (soup, salad, stuffing, caramelized onions, green beans, cold bean salad).  I could chop them each time I came to a new recipe; however, that would lead to unneeded tears.  Instead, knowing what my recipes call for, I will “pre-chop.”

Part 3: Proper tools

The last things I will do before going to sleep Wednesday night are to sharpen my knives and run all my prep bowls and pans through the wash.  I will pull out my pressure cooker and immersion blender, but I will not get out the food processor.  Having the proper tools does not mean having every tool, and again, my counter space is limited.

Part 4: Music

Honestly, I can’t cook without music playing. Try chopping to a dub-step beat, or caramelizing to Carmen.(2)

Part 5: Compromise

In a limited-resource kitchen, it is not possible to get everything to the state (crispness, juiciness, temperature) you want at the moment you need it.  Prioritize and restructure.(3)  There are reasons I am having a cold bean salad, e.g. I can make it the night before, it doesn’t take up valuable stove/oven space, and it is very tasty.

Linking it back to software development

With the possible exception of part 4 (Music), I think we can all see the direct links to software development.  Whether working in a single-person kitchen or in a 3-star Michelin restaurant, on a hobbyist home quadcopter or on the next-generation pacemaker, there will always be requirements, there will always be resource limitations, and there will always be deadlines.(4)

In the end, with proper planning, you can create a meal (product) that you can be proud of and that, hopefully, you will enjoy developing (making) as much as I will enjoy my cooking.

Updates

First, here is an image of the dinner in progress.  Second, yes, that is a turkey on my shirt; I am a happy H2Okie alumnus.  Next, if I may push one last analogy: I am of the cooking school that “cleans as you go.”  We could liken this to the “test as you go” methodologies of development.  (Note: while the link is to an agile development method, it applies to any design methodology.)

Notes:

1.) Just to be clear, I really enjoy cooking these large meals.

2.) Take care when you first start “rhythm chopping”; you should start and finish with 10 fingers.

3.) In many a meal, and in many a project, at some point you will realize that what you want to do/make is not possible; normally due to a lack of onions/time.

4.) I dislike the use of the phrases “time pressure” or “deadlines” given their connotation of stress in delivery.  With proper planning and resources, the stress in development should be minimized.

BIO

Michael Burke is a consultant with The MathWorks and former coordinator for the MathWorks Automotive Advisory Board (MAAB).  He currently focuses on Model-Based Design process adoption and establishment projects.  Views expressed in this article do not represent the views of The MathWorks.

What is the measure of a model?

In past posts, I have written about creating understandable models (Simulink and Stateflow).  With this post, I want to address measures of clarity (or its inverse, complexity).  Note: in this post, I will be focusing specifically on Simulink metrics as contrasted with C-based metrics.

Measurements in the C world

So what should be measured, and how do you evaluate the measurements?  In traditional C-based development there are multiple metrics, such as…

  1. Lines of code (LOC):  A simple measure of overall project “size”.
    Note: A sub-metric is lines of code per function (LOC/Func)
  2. Coding standard compliance: A set of guidelines for how code should be formatted and structured (e.g. MISRA)
  3. Cyclomatic complexity: A measure of the number of logical paths through the program
  4. Depth of inheritance: A C++ measure of how deep the class definition extends to the “root” class.  Can be applied to function call depth as well.
  5. Reuse: The degree to which code is reused in the project.
    Note: a better measure is the degree of reuse across projects but this is more difficult to capture with automated tools.
  6. Coupling/Cohesion: Measures of the direct dependencies of modules on other modules.  Loose coupling supports modular programming.
  7. Much more… : A list of some additional code metrics can be found here:  Software Metrics : Wikipedia

[Figure: cyclomatic complexity example]
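As a quick illustration of how cyclomatic complexity is counted, consider the small MATLAB function below (the function and its thresholds are made up for illustration).  Each decision point adds one independent path, so with three decisions the complexity is 3 + 1 = 4.

    function mode = selectMode(temp, rpm)
    % Illustrative only: count the decision points to get cyclomatic complexity.
    % Decisions: if (1), elseif (2), if (3)  ->  complexity = 3 + 1 = 4
    if temp > 110
        mode = 'shutdown';
    elseif temp > 95
        mode = 'derate';
    else
        mode = 'normal';
    end
    if rpm > 6000
        mode = 'derate';
    end
    end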

Model-Based Design metrics

Within the Model-Based Design world, there are both direct and analogous versions of the C-based metrics.

  1. Total block count (TBC): The total block count maps onto the LOC metric.  Likewise, a Blocks per Atomic Subsystem can be compared to the LOC/Function metric.
  2. Guideline compliance: Modeling guidelines, such as the MAAB, map on to C based guidelines.
  3. Model complexity: Maps onto cyclomatic complexity.  It should be noted that the model complexity and cyclomatic complexity of the generated code will be close but not exact.
  4. Subsystem/reference depth: A measure of how many layers of hierarchy exist in the model
  5. Reuse: The use of libraries and referenced models that can be directly reused within and across projects.
  6. Coupling: Simulink models do not have an analogous metric for coupling.  By their nature, they are atomic units without coupling.
  7. Much more….

Evaluating measurements

There is no such thing as a perfect model or a perfect bit of C code.  In the end, all of the metrics above are measured against pass/fail thresholds.  For example, common thresholds for the model metrics include:

  1. Blocks per atomic subsystem < 50
  2. Guideline compliance > 90%
    Note: some guidelines must be passed regardless of the overall compliance percentage.


Measuring the model

With models, as with text-based development environments, there are a variety of tools for collecting metrics.  Within the Simulink environment, it is possible to write stand-alone scripts to automate this process or to use the Model Metrics tool to collect this information automatically.
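For the stand-alone script route, a minimal sketch might look like the following (the model name is a placeholder, and the counts are rough analogs of LOC and LOC/function, not the official Model Metrics report):

    mdl = 'myController';            % placeholder model name
    load_system(mdl);

    % Total block count (rough analog of lines of code)
    blocks = find_system(mdl, 'LookUnderMasks', 'all', 'Type', 'Block');
    fprintf('Total block count: %d\n', numel(blocks));

    % Blocks per atomic subsystem (rough analog of LOC per function)
    subs = find_system(mdl, 'BlockType', 'SubSystem', 'TreatAsAtomicUnit', 'on');
    for k = 1:numel(subs)
        inner = find_system(subs{k}, 'Type', 'Block');   % includes the subsystem block itself
        fprintf('%-40s %d blocks\n', get_param(subs{k}, 'Name'), numel(inner));
    end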

Plants resting on a table

In a previous post I wrote about fidelity in plant models.  For this post I want to focus, briefly, on three fundamental plant modeling tools that can be used to create simple and fast plant models.

  1. Table-lookup functions and adaptive tables
  2. Regression functions
  3. Transfer functions

Lookup tables

Lookup tables form the backbone of many plant models.  From thermodynamic property lookups to engine torque and power curves, they allow users to quickly access multi-input / single-output data.

The efficiency and memory usage of tables can be dramatically improved when the conditions of their use are well understood.  Common methods include:

  1. Index search from last index:  For slow-moving data it is faster to start indexing into the table from the last time step's index (a minimal sketch follows this list).
  2. Reuse of index across multiple tables:  In many instances the same X or Y axis will be used across multiple tables.  The calculation cost of finding the index can be decreased through a pre-lookup function.
  3. Choose the correct interpolation function:  The efficiency of tables is dependent on the interpolation function.
  4. Pick and condition your data:  Frequently, data from real-world measurements is not evenly spaced.  Preconditioning the data (e.g. smoothing it and providing even spacing for the axes) makes the lookup both faster and more predictable.
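As a minimal sketch of the "index search from last index" idea (the function and variable names are illustrative; production table lookups would normally use the built-in lookup table blocks):

    function [y, idx] = lut1dFromLast(xAxis, yData, x, idxLast)
    % 1-D interpolation that starts its index search from the previous
    % time step's index (idxLast in the range 1..numel(xAxis)-1).
    idx = idxLast;
    while idx < numel(xAxis) - 1 && x > xAxis(idx + 1)
        idx = idx + 1;                       % walk up the axis
    end
    while idx > 1 && x < xAxis(idx)
        idx = idx - 1;                       % walk down the axis
    end
    frac = (x - xAxis(idx)) / (xAxis(idx + 1) - xAxis(idx));
    frac = min(max(frac, 0), 1);             % clip to the table range
    y = yData(idx) + frac * (yData(idx + 1) - yData(idx));
    end

For slowly moving inputs the search rarely moves more than one index per step, which is where the savings come from.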

Regression functions

Regression functions allow for modeling of non-linear, multi-variable functions.  Two cautions apply to regression functions.  The first is ensuring that the data is valid throughout the input variable range and that the accuracy is sufficient at all points in the range.


The second caution is to validate that the regression equation does not become computationally burdensome.
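A minimal example of both cautions, using a simple polynomial fit (the data here is synthetic and purely illustrative):

    rpm    = 800:200:6000;                                       % operating points
    torque = 300*sin(pi*(rpm - 800)/5200) + 5*randn(size(rpm));  % synthetic "measured" data
    p      = polyfit(rpm, torque, 3);                            % 3rd-order regression
    err    = torque - polyval(p, rpm);                           % residual at each point
    fprintf('Worst-case fit error: %g N*m\n', max(abs(err)));    % check accuracy over the full range

Keeping the polynomial order low also keeps the evaluation cost (a handful of multiplies and adds) predictable.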

Transfer functions

The final “simple” plant model that I want to cover is the transfer function.  Transfer functions can be used to model a system when the mathematical equations describing the system are known and involve state (e.g. derivative) information.  (Note: for multi-input, multi-output systems, state-space equations can be used.)

Transfer functions have the advantage over table lookups and regression functions that they represent the actual underlying physical system from a theoretically correct perspective.  In some instances the function may need to be “tuned” to take into account imperfections in real-world systems.
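As a small sketch (this assumes the Control System Toolbox is available; the time constant is illustrative):

    tau = 4.0;                    % illustrative thermal time constant [s]
    G   = tf(1, [tau 1]);         % first-order lag, G(s) = 1/(tau*s + 1)
    step(G);                      % inspect the step response before using it as a plant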

Final thoughts

In the end there are many different methods for creating plant models.  Frequently the “correct” approach is to use a combination of “simple” and “advanced” modeling methods.  Take the time to think about what you require from your plant model, and the correct combination can be selected.


Interface control documents and Model-Based Design

An Interface Control Document (ICD) can be defined as

document that describes the interface(s) to a system or subsystem. It may describe the inputs and outputs of a single system or the interface between two systems or subsystems.

Within the traditional text-based development process, an ICD was either a text document, a UML-type diagram, or a combination of the two.  Within the MBD development process, it is not entirely clear whether additional supporting documentation is required or whether the model can serve as the ICD.

With most topics that I write about, I have reached a firm conclusion on what the accepted best practice is.  In this area, I still have open questions.  In this post, I lay out the pros and cons of using models as ICDs.

Why models are sufficient:

The simplest argument as to why models are sufficient is that models can be used in place of UML diagrams provided the interface has sufficient markup.  For example, in the image below the Simulink Interface View provides the data types and rates of all the inputs and outputs to the system.

[Figure: Simulink Interface View showing the model's inputs and outputs]

When the model is part of a model hierarchy, the calling structure can be derived from the model (Simulink Model Dependency View).

Why models are lacking:

While the two views above are good, they lack information that is commonly found in ICD documents: the function interface (e.g. the C or C++ calling methods) and the data interface.  The models contain and use this information; however, they do not natively display it.  Note: this is a limitation of UML diagrams as well.

The next issue with models as an ICD is a question of “push and pull.”  With the model serving as both a development artifact and the ICD, you need to implement a change request process to control when interface changes are made.

What can be done?

Use of automatic report generation can augment the information provided natively by the model.  Doing this could, in fact, generate a “standard” text-based ICD, with the advantage that the model stays the single source of truth.
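As a rough sketch of what such a script could extract (the model name is a placeholder, and a real report generator would add the function and data interfaces as well):

    mdl = 'myController';                                  % placeholder model name
    load_system(mdl);
    inports = find_system(mdl, 'SearchDepth', 1, 'BlockType', 'Inport');
    fprintf('Inputs for %s:\n', mdl);
    for k = 1:numel(inports)
        name  = get_param(inports{k}, 'Name');
        dtype = get_param(inports{k}, 'OutDataTypeStr');   % declared data type
        fprintf('  %-20s %s\n', name, dtype);
    end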

As with most issues in development where there is not a native tool, it is the implementation of a process that helps to bridge the gap.  People already have change request processes in place for text-based ICDs.  The question with an MBD approach is who implements the change at the model level and at the system level.

As always, please feel free to share your thoughts in the comment section.


Understanding the merge block

In Simulink, the Merge block documentation states:

“The Merge block combines its inputs into a single output line
whose value at any time is equal to the most recently computed
output of its driving blocks.”

As clear as that statement is, there are still questions about the behavior of the Merge block.  This post attempts to clarify those questions.

Quiet night

The first and most common question is “what happens when none of my subsystems are enabled/triggered?”  In this example, we have 3 subsystems, “A”, “B”, and “C”, which are enabled when the driving signal is equal to their enumerated namesake.  The output from each subsystem is a simple enumerated constant equal to the subsystem’s name, e.g. subsystem A outputs a value of “A”.

[Figure: three enabled subsystems feeding a Merge block]

However, the driving signal, as I have configured it, includes an enumerated value of “abcs.D” in addition to the A, B, and C values.
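For reference, the driving enumeration could be defined along these lines (a sketch; the class used in the actual model may differ):

    classdef abcs < Simulink.IntEnumType
        % Enumerated type used to drive the enabled subsystems.
        enumeration
            A(0)
            B(1)
            C(2)
            D(3)   % no subsystem is enabled for this value
        end
    end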

[Figure: Merge block output over time]

In this example, when the value of the driving signal is equal to abcs.D, none of the subsystems are enabled.  In this case, the Merge block simply outputs the last value input into the system.

Default merge subsystems

In the example above there is unpredictable behavior due to the lack of a “default” subsystem.


[Figure: Merge block with a "Default" subsystem added]

The “Default” subsystem should execute on every time step when the other subsystems are not running.  In this example, it is enforced through simple logical operations.
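Expressed as a MATLAB-style condition purely for illustration (in the model this is built from logic blocks), the default enable signal is simply:

    % The Default subsystem runs whenever none of A, B, or C is selected.
    runDefault = ~(sig == abcs.A || sig == abcs.B || sig == abcs.C);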

Multiple execution

In versions of Simulink from R2015a on (it could be earlier, but that is the latest I had installed on my laptop), if you try to execute more than one subsystem during the same time step you will get an error message.

In this case, I tried driving my two subsystems “LiveCat” and “DeadCat” with a “Schrodinger Waveform”™ vector.  When the vector was in the “Both” mode, both subsystems were active and the error message was triggered.

[Figure: the "Schrodinger Waveform" driving signal]


My consulting 7 commandments

As a consultant, there are seven rules that I hold for myself and my customers.

1: Thou shall ask questions

At all stages of a consulting engagement, including the pre-engagement work, asking clarifying questions is critical to a project’s success.  I have seen projects fail because consultants did not want to ask questions (e.g. they don’t want to look like they don’t understand) and because customers did not want to answer questions (they want to keep their information “private”).


2: Thou shall know the limits of your ability

I love learning new technologies and sciences.  I will take on projects where I am stretching myself; I will not take on projects where I am pushing beyond my abilities.  In those cases I will recommend a co-worker or another company.

3: Thou shall provide honest estimates

An honest estimate takes into account the information provided by the customer and the industry/domain knowledge of the consultant.  The estimate should provide a list of assumptions baked into the proposal, the expected deliverables, and the limitations of the estimate.

4: Thou shall communicate regularly

Once a project has started, regular communication with the customer is essential to guarantee that the project remains on track and that the customer’s needs have not changed.

5: Thou shall teach

As I work with a customer I am always teaching them what I am doing, both the how and the why.  If at the end of a consulting engagement my customer does not understand what I did, then I consider that a failure.

6: Thou shall be around afterward

After a project is completed, even after the budget has run out, I am still available to answer questions that arise.  I do this for three reasons.  First, there are always issues that arise 2 ~ 3 months down the road.  Second, documentation, no matter how good, can always be clarified.  Third, it is just polite.


7: Thou shall get to know the client

On most projects, I will work with the client for 200+ hours.  Getting to know you, my client, makes for more enjoyable working conditions for everyone involved.


Clarity of communication: handoffs

In a traditional software design process, there are multiple handoff points where artifacts are passed from person to person and group to group.  These handoffs are places where errors can be introduced into the product.

In general, there are two types of handoffs: transformative and non-transformative.(1)  With a transformative handoff, the artifact is changed, either through updating of existing material or by translating it from one form to another (for example, taking textual requirements and writing C code).

Each handoff introduces a potential error point where mistakes in translation can occur.  The most common errors occur during translation handoffs, but they are common even in update events.

Why do handoff errors occur?

Errors are introduced in the development cycle due to imprecise communication.  This miscommunication can be compared to the errors introduced in the party game “Telephone.”(2)  Even the best intentions cannot prevent them.

[Cartoon from SMBC; see note 2]

How do you minimize handoff errors?

If handoff errors cannot be fully prevented how then do you minimize them?

  1. Minimize translation handoffs:  As covered in previous posts, the use of models enables handoffs between roles using a single source of truth.
  2. Build verification activities into transformative handoffs:  Verify the design intent between transformative handoffs through the use of test suites and verification against requirements.  (Note: this requires that the requirements are written in a clear fashion.)
    Note: Regression testing can be used for update handoffs.
  3. Minimize handoffs:  Through the use of models, the total number of handoffs in the development process can be reduced.
  4. Stress clarity in communication:  Clear communication begins with well-written requirement documents and continues with documented models.  Send engineers to classes on requirements writing and reading, and enforce coding and modeling standards that promote understandable models.

Final thoughts

Communication errors will occur during the design process; our objective is to minimize them.  They range from the famous, like the Mars orbiter metric/English unit mix-up, to the more prosaic, like recipes that say “salt to taste.”(3)  By stressing the 4 ways to minimize handoff errors, the total development time and costs can be minimized.

Notes

  1. Non-transformative handoffs include things such as design reviews or migration of code into a compiler chain.
  2. The cartoon image is from http://www.smbc-comics.com/.  Thank you to the author, Zach Weinersmith, for his permission to include it.
  3. The problem with “salt-to-taste” is that many modern cooks avoid the use of salt and, as a result, end up with bland food.

Empowering engineers to adopt MBD

What does it mean to empower engineers?  A base definition, found through the ever-handy Google, is:

[Image: dictionary definition of "empower"]

Empowering engineers to adopt Model-Based Design fits into this base definition like a hand in a glove.  But how do we extend beyond the base definition?  What can we do to tailor the adoption empowerment?

Support phase 1

As already covered, the first phase of the Model-Based Design roadmap is an initial research and proof-of-concept phase.  How is this supported?  There are three methods of support.

  1. Education:  Both formally through training and informally through readings such as this blog.
  2. Time: The initial research stage takes between 1 and 3 months depending on the existing level of knowledge.  Successful adoption of MBD processes requires dedicated time by the establishment team.
  3. Failure: The scope of the initial adoption phase should not be on a critical path.  The establishment team needs the leeway to make mistakes during their initial investigation.

Ongoing support

The ongoing support consists of 3 factors

  1. Specialization of tasks: engineers and software architects should be allowed to work in their domain.  Requiring everyone to learn all tools and steps in the workflow creates an unnecessary burden.
  2. Provide the required tools: Not every engineer needs every tool.  However, identifying the tools required and providing them to the engineers will enable them to quickly do their required work.
  3. Automate: Where possible, automate common tasks.  Nothing is more demotivating than the requirement to perform repetitive tasks.


Final thoughts

Empowering engineers to adopt and use Model-Based Design is little different from empowering them in any other process.  The central difference is the initial adoption phase, where the Education, Time, and Failure requirements exist.

Notes:

I have no connection to Santa Cruz college.  I just couldn’t resist the “slug support team” for part of this post.

Fault analysis

By some estimates, fault detection and the subsequent error handling average between 30% and 50% of the algorithmic code in embedded systems.  However, despite the high percentage of code devoted to fault detection, the literature devoted to this topic is less commonly read.

In a previous video post, Fault Detection, I looked at common patterns for fault detection algorithms and decomposition between fault detection and control algorithms.  In this post, I will cover the validation of fault detection algorithms.

Requirements: the validation starting point

To begin, the fault detection algorithm should have an independent requirement specifying what conditions constitute a fault.

[Figure: requirements for the engine temperature fault monitoring system]

The example above shows the requirements for an engine temperature fault monitoring system.  It defines what is monitored and the severity of the faults.  Importantly, it does not define how the fault detection system will be implemented.

Fault system implementation

Once the requirements are written and validated for correctness, the fault system can be implemented.

[Figure: Stateflow chart implementing the engine temperature fault detection]

In this case, I implemented the fault detection algorithm as a Stateflow chart.  Noise in the signal was handled by using a debounce variable, “delta”, to prevent bouncing between the InitETM and MoveToFault modes.
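The debounce idea, stripped of the Stateflow structure, looks roughly like this (the function, names, and the 10-sample threshold are illustrative, not the actual chart logic):

    function [fault, cnt] = overTempDebounce(engTemp, maxEngTemp, delta, cnt)
    % The temperature must exceed the limit by more than 'delta' for
    % several consecutive samples before a fault is declared.
    if engTemp > maxEngTemp + delta
        cnt = cnt + 1;          % accumulate consecutive over-temperature samples
    else
        cnt = 0;                % any in-range sample resets the count
    end
    fault = cnt >= 10;          % declare the fault after 10 consecutive samples
    end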

Fault system validation

The next step is to write the test cases that will validate the requirements document.  From the technical description, 6 test conditions (or cases) can be defined.

  1. Engine operating in safe temperature range: Maps to TD.1
  2. Engine operating above critical temperature range: Maps to TD.2
  3. Engine stays in ENGCRITTEMP state after entering ENGCRITTEMP state: Maps to TD.2.a
  4. Engine operating in unsafe temperature range for less than 5 seconds: Maps to TD.3.a
  5. Engine operating in unsafe temperature range for more than 5 seconds: Maps to TD.3.a
  6. After entering ENGOVERHEAT state engine temperature is less than maxEngTemp for more than 10 seconds: Maps to TD.3.b

In the act of writing the test cases, it is discovered that the requirements were underspecified.  The requirement reads “noise in the signal shall be accounted for” but it does not specify the level of noise.  At this point, the requirement should be updated to include information on the level of noise in the signal.

[Figure: notes on the signal noise level]

Final thoughts

Fundamentally, the process of validating fault detection systems is the same as validating any other software construct.  In addition to manual methods of defining tests, software tools such as Simulink Design Verifier can be used to verify coverage of the model.


6 Tips for readable Stateflow charts

On the heels of the popular “5 tips for readable Simulink models” I am following up with a companion post.  While much of this material can be found in the “Stateflow best practices” document in this site’s reference section, these are the 6 I find most critical.

Background

First, a few background concepts.  The concept of “levels” in a Stateflow chart is a measure of how many nested states exist within a state.  The count starts at the highest-level state and increments for each substate.

[Figure: counting state levels]

Stateflow includes the modeling construct of a “subchart.”  Like subsystems in Simulink, subcharts encapsulate all the states and transitions within a state into a single “masked state.”

When counting levels a subcharted state counts as one state regardless of how many states exist within the subchart.

#1 Consistency

There are two main aspects to consistency in Stateflow charts: decomposition of transition information and placement of transition information.

Transition information consists of both the transition condition (or event) and the action.

[Figure: four methods of placing transition conditions and actions]

The image above shows 4 methods for decomposing the transition condition and action.  In general, I recommend placing the condition and the action on separate transition segments.  This is for two reasons.  First, for complex conditions, the length of the text can make it difficult to read; adding additional text in the form of the action just aggravates this issue.  Second, by placing the action on a second segment, it is possible for multiple transitions to use the same action.

A slight modification to the previous image shows the importance of consistent placement.  If the placement of information is inconsistent, e.g. in some cases above and some below, or left and right, it becomes difficult to associate the transition information with a given transition.

#2 Maximum states per level & maximum levels per chart

For any given state I recommend a maximum of 3 levels of depth; if more than 3 levels are required, consider creating a subchart for the part of the state requiring greater depth.

Likewise, I recommend an absolute maximum depth of 5 levels for the chart as a whole.  The first recommendation promotes readability at any given level within the chart.  The second recommendation promotes understanding of the chart as a whole.

#3 Number of States

As a general rule of thumb, I limit the number of states on any given level to between 30 and 50.  When I find the number of states at a level exceeding that value, I repartition the chart using subcharts.

#4 Resize!

Even more than in Simulink, resizing states can dramatically improve the readability of the chart.  There are 3 things to consider in the size of a state:

  1. Is the state large enough to hold all the text in the state?
    [Figure: state too small to hold its text]
  2. Is the state large enough to have all of the transitions reasonably spaced out (e.g. will the text on the output transitions be readable)?
    [Figure: state sized so the transition labels are readable]
  3. Is the state larger than it needs to be?  When states are larger than required they take up valuable screen space.
    [Figure: state larger than it needs to be]

#5 Straight lines, please

In the majority of cases, the use of straight lines, with junctions to help with routing, provides the clearest diagram appearance.  The exception to this recommendation is “self-loop-back” transitions such as resets.

[Figure: self-loop-back transition]

#6 Temporal logic!

Use of temporal logic, instead of self-defined counters, ensures that the duration intent is clear in the chart.

[Figure: temporal logic transition vs. counter-based transition]

In this example, if the time step is equal to 0.01 seconds, then the two transitions result in the same action (transitioning after 1 second).  However, if the time step is something other than 0.01 seconds, the evaluation would be different.  Because of this, when the intention is to transition after a set amount of time, temporal logic is always preferable.
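As a sketch of the two styles (the labels are illustrative; the counter version is only correct for a 0.01 second time step):

    % Counter-based transition, with "cnt = cnt + 1" maintained elsewhere in the chart:
    %   [cnt >= 100] { cnt = 0; }
    % Temporal-logic transition, independent of the time step:
    %   after(1, sec)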

Final thoughts

Again these are just a few tips on how to make your Stateflow charts more readable.  I would, as always, be happy to hear your suggestions.