6 Tips for readable Stateflow charts

On the heels of the popular “5 tips for readable Simulink models,” I am following up with a companion post.  While much of this material can be found in the “Stateflow best practices” document in this site's reference section, these are the 6 tips I find most critical.

Background

First, a few background concepts. The concept of “levels” in a Stateflow chart is a measure of how deeply states are nested within other states.  The count starts at the highest-level state and increments for each substate.


Stateflow includes the model construct of a “Subchart.”  Like subsystems in Simulink, subcharts encapsulate all the states and transitions in the state into a single “masked state.”

When counting levels, a subcharted state counts as one state regardless of how many states exist within the subchart.

#1 Consistency

There are two main aspects to consistency in Stateflow charts: decomposition of transition information and placement of transition information.

Transition information consists of both the transition condition (or event) and the action.

(Image: four methods of decomposing transition conditions and actions)

The image above shows 4 methods for decomposing the transition condition and action.  In general, I recommend a separate transition segment for the condition and for the action, for two reasons.  First, for complex conditions, the length of the text can make the transition difficult to read; adding in additional text in the form of the action just aggravates this issue.  Second, by placing the action on a separate segment, it is possible for multiple transitions to use the same action.
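As a minimal sketch of the recommended style (the signal and counter names here are illustrative, not from the original image), a Stateflow transition label places the condition in brackets and the condition action in braces.  Combined on one transition:

  [speed > limit] {overSpeedCnt = overSpeedCnt + 1;}

versus split across two segments joined by a junction:

  [speed > limit]
  {overSpeedCnt = overSpeedCnt + 1;}

With the split form, a second transition with a different condition can route through the same junction and reuse the action segment.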

A slight modification to the previous image can show the importance of consistent placement.  If the placement of information is inconsistent, e.g. in some cases above the transition and in some cases below, or to the left or right, it becomes difficult to associate the transition information with a given transition.

#2 Maximum states per level & maximum levels per chart

For any given state I recommend a maximum of 3 levels of depth; if more than 3 levels are required, consider creating a subchart for the part of the state requiring greater depth.

Likewise, I recommend an absolute maximum depth of 5 levels for the chart as a whole.  The first recommendation promotes readability at any given level within the chart.  The second recommendation promotes understanding of the chart as a whole.

#3 Number of States

As a general rule of thumb, I limit the number of states that I have on any given level to between 30 and 50.  When I find the number of states at a level exceeding that value, I repartition the chart using subcharts.

#4 Resize!

Even more than in Simulink, resizing states can dramatically improve the readability of the chart.  There are 3 things to consider in the size of a state:

  1. Is the state large enough to hold all of the text in the state?
  2. Is the state large enough to have all of the transitions reasonably spaced out (e.g. will the text on the output transitions be readable)?
  3. Is the state larger than it needs to be?  When states are larger than required they take up valuable screen space.

#5 Straight lines, please

In the majority of cases, the use of straight lines, with junctions to help with routing, provides the clearest diagram appearance.  The exception to this recommendation is for “self-loop back” transitions such as resets.


#6 Temporal logic!

Use of temporal logic, instead of self-defined counters, ensures that the duration intent is clear in the chart.

In this example, if the time step is equal to 0.01 seconds, then the two transitions result in the same action (transitioning after 1 second).  However, if the time step is something other than 0.01 seconds, the evaluation would differ.  Because of this, when the intention is to transition after a set time, temporal logic is always preferable.
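As a minimal sketch of the two styles (the counter name and sample time are illustrative), the counter-based transition only means “1 second” when the chart happens to run at a 0.01-second time step:

  [cnt >= 100]    % with cnt = cnt + 1; in the state's during action

while the temporal-logic version states the intent directly, independent of the sample time:

  after(1, sec)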

Final thoughts

Again, these are just a few tips on how to make your Stateflow charts more readable.  I would, as always, be happy to hear your suggestions.

Writing clear requirements

This past week I have been reviewing the NASA Systems Engineering Handbook.  As I read through it I was struck by their description of the requirements writing process.  With this post, I will share a few thoughts.

Requirement inputs

In their diagram of the requirements writing process, they call out four inputs.  Two of the four are frequently missed in the development of requirements: “Baselined Enabling Support Strategies” and “Measures of Effectiveness.”

From the NASA document:

Baselined Enabling Support Strategies: These describe the enabling products that were identified in the Stakeholder Expectations Definition Process as needed to develop, test, produce, operate, or dispose of the end product. They also include descriptions of how the end product will be supported throughout the life cycle.

What makes this unique?  By looking at the enabling products, it expands the requirement writing to the system level; e.g. a requirement is not limited to an individual part, it can leverage existing infrastructure.

Measures of Effectiveness: These MOEs were identified during the Stakeholder Expectations Definition Process as measures that the stakeholders deemed necessary to meet in order for the project to be considered a success (i.e., to meet success criteria).

What makes this unique?  Writing test criteria is standard for requirements (or at least it should be).  What is unique is explicitly calling out the stakeholders' measures of success in a defined and agreed-upon document.  Note, the stakeholder requirements are not how the requirements document will present the requirements.

Requirement metadata

The metadata associated with requirements is often overlooked (or ignored in the tools that support it).  The intent behind the metadata is to:

  1. Make maintenance easier
  2. Support traceability
  3. Support project planning

(Table: requirements metadata from the NASA handbook)

The table above provides insight into how these objectives can be met.  First, by providing ownership information, it supports the maintenance objective by removing the question of whom to contact when there are questions about a requirement.  Second, traceability is explicitly called out in the table.

Finally, the project planning aspect: by explicitly specifying the verification method, lead, and level, better estimates of the time required for validation can be assigned.

Validation of requirements

The final aspect I want to comment on, though the whole document is worth reading, is the validation of requirements.


The NASA document defines 6 aspects of validating requirements.  What is significant is that they explicitly define validation of requirements as a step in the development process.  Multiple times in this Model-Based Design blog I have stressed the importance of using simulation to quickly detect design errors.  However, no amount of simulation can find an error if the requirements are not correctly defined.

Final thoughts

Go take a look at the NASA systems engineering document.  It is well worth the read.

5 tips for more readable Simulink models

With this post, I will share some of the methods I have used over the years to make my Simulink models more readable.

Resize subsystems and reorder ports

Resizing subsystems is a common suggestion for making diagrams more readable.  A step beyond that is to reorder the ports so that the connections between multiple subsystems are easier to follow.

With this Before/After example, I have done three things:

  1. Resized the source and destination blocks
  2. Changed the port order on the source block
  3. Offset the two destination blocks

MATLAB functions for equations

When I am entering an equation into Simulink, I ask myself 2 questions:

1. Is there a built-in block for this equation (e.g., integrators, transfer functions, table lookups)?  If so, use the built-in block.
  2. Is the equation more than 3 operations long?  If so, use a MATLAB function.

In text form, the Pythagorean theorem is quickly recognized and understood.  When written out as a Simulink equation it takes slightly longer to understand.

Note: I do have a caveat: if the mathematical operations are a series of gains, for instance when converting from one unit to another, then keeping the calculations in Simulink is fine.
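To illustrate the second rule, here is a minimal sketch of a MATLAB function for the Pythagorean theorem (the function name is illustrative):

  function C = hypotenuse(A, B)
  % Pythagorean theorem: immediately recognizable in text form
  C = sqrt(A^2 + B^2);

The same calculation built from Simulink product, sum, and square-root blocks takes noticeably longer to parse by eye.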

Use of virtual buses for conditionally executed subsystems

Merging data from conditionally executed subsystems requires the use of the Merge block.  When the subsystems have multiple output ports, routing the data quickly becomes cumbersome.  This can be addressed by creating a virtual bus to pack and then unpack the signals.


Note: Using a virtual bus will allow Simulink/Embedded Coder to optimize the memory placement of the signals.  If a structure is desired, then a bus object should be defined and used.
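As a minimal sketch of the second case, a bus object can be defined from the MATLAB command line (the element names here are illustrative):

  elems(1) = Simulink.BusElement;
  elems(1).Name = 'cmd';
  elems(2) = Simulink.BusElement;
  elems(2).Name = 'status';
  CtrlBus = Simulink.Bus;
  CtrlBus.Elements = elems;

Setting CtrlBus as the data type of the output port yields a structure in the generated code instead of individually optimized signals.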

The rule of 40, 5 and 2

When I create a Simulink subsystem, I aim to have a limited number of active blocks in the subsystem (40), a limited number of used inputs (5), and a limited number of calculated outputs (2).

1. Active blocks:  Any block that performs an operation.  This count does not include inports, outports, mux, demux, bus…
    1. Why: When the total number of blocks goes above 40, the ability to understand what is going on in the subsystem decreases.
  2. Used inputs: Bus signals can enter the subsystem while only one signal off of the bus is used.  A “used signal” is one that is actively used as part of the subsystem's calculations.
    1. Why: The number of used inputs is a good metric for how “focused” the subsystem is on addressing a specific issue.
  3. The number of outputs: This directly relates to the first two metrics, e.g., how focused the subsystem is on a specific issue.

Note: Subsystems that are integration subsystems (see this Model Architecture post) can and should break this rule.

Stay out of the deep end, beware of breadth

As a general rule of thumb, I recommend that models have a “depth” of 3.  Navigating up and down the hierarchy can quickly lose a reviewer.  Likewise, for a given model, I recommend between 30 and 60 subsystems in total.


This recommendation holds for a single model.  For integration models, each “child” model should be treated as a single unit.

Final thoughts

These are just a few of the recommendations that I have hit upon in the past 18 years.  I would be curious to hear your thoughts and recommendations.

Interfacing with hardware in a Model-Based Design context

Interfaces between low-level device drivers and algorithmic software have multiple unique issues.  These issues exist in traditional text-based development processes and in MBD workflows.  Let’s review the challenges and the methods for meeting them.

Hardware challenge

I decompose the hardware challenges into two categories: conceptual and technical.

Conceptual challenges

For software engineers, the concepts behind hardware interfaces are frequently a source of error.

1. Finite resolution:  Physical hardware has a finite resolution.  A 12-bit A/D converter will not provide data at a higher resolution than 12 bits.
  2. Temporal non-determinism:  Readings from hardware, unless specifically configured otherwise, are not assured to be from the same iteration of the algorithm.
  3. Corrupted data: Data from hardware sources can be corrupted in multiple ways.  The software needs to handle these corruptions in a robust fashion.

Technical challenges

The technical challenges are standard component-to-component interface issues.

  1. Data conversion: Information comes from the hardware in units of counts or encoded information.  This data needs to be converted into engineering units for use in the system.
  2. Hardware/Software interface architecture:  The method for interfacing the hardware and software components requires a stricter encapsulation than software-to-software architectural components.
3. Component triggering: Hardware components can be triggered using one of three basic methods: schedule-based, event-based, or interrupt-based triggers.

Addressing the hardware challenges

With an understanding of the hardware challenges, we can now address them.  The conceptual challenges are addressed through education.

Conceptual challenges

  1. Finite resolution: Analog-to-Digital Converter Testing
    Kent H. Lundberg (MIT)
  2. Temporal non-determinism: The temporal logic of programs
  3. Corrupted data: Removing spikes from signals


Technical challenges

Technical challenges are handled with education and patterns.

• Data conversion: Data conversion is done through any number of simple algorithms, from y = m*x + b equations to table lookups or polynomials (see the sketch after this list).
  • Hardware/Software interface architecture:  Interfaces to the hardware run through a Hardware Abstraction Layer (HAL).  The HAL functions can be directly called from within the Model-Based Design environment.
    Because the HAL is a discrete function, the call to the hardware should be encapsulated on a per-function basis.  (Note: multiple calls can be made to the function if it is reentrant; however, this tends to be less efficient.)
    (Image: hardware connection and scaling sub-components)
    The connection and scaling of the hardware is broken into the 3 sub-components shown above:

    • Access to the low-level device drivers
    • Data filtering
    • Data scaling
      The top-level model architecture then interfaces these sub-components with the algorithmic models.
• Component triggering: Hardware components can be triggered using one of three basic methods: schedule-based, event-based, or interrupt-based triggers.  Information on how to trigger components can be found here.
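As a minimal sketch of the filtering and scaling sub-components (the HAL call readAdcChannel, the 12-bit resolution, the 0-5 V range, and the calibration coefficients are all illustrative assumptions):

  rawCounts  = readAdcChannel(3);                          % hypothetical low-level driver call
  filtCounts = filtCounts + 0.2*(rawCounts - filtCounts);  % data filtering: first-order low-pass (filtCounts persists between steps)
  volts      = filtCounts*(5.0/4095);                      % data scaling: 12-bit counts to volts
  tempDegC   = 20.0*volts - 10.0;                          % y = m*x + b conversion to engineering units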

Final thoughts

Well-defined interfaces between hardware and software provide clarity in communicating design intent.  The model architecture can be developed from the basic architecture proposed here, with the hardware inputs and outputs being a top-level integration system.

On the road again…

Interacting with customers is the way I learn; each time I go on the road I have the chance to interact with my customers, see the challenges they face, and see the ways in which they attempt to solve them.  So, with some reflection, what are the top 3 things I have learned from customers?

Number 1: Clear is only clear in hindsight

Model-Based Design processes involve many design patterns (small work objects) and workflows (multiple patterns executed in a logical sequence).  Customer patterns and workflows, both the good and the bad, evolved over time in response to challenges they faced.  Often the work that I am brought in to do is to help my customers analyze, then simplify and improve, their existing patterns and workflows.  This analysis allows me both to learn from the customer and to share the lessons I have learned over time from other customers.


Number 2: Teams matter

The development process is only as strong as the team working on the project.  Ideal teams have a coordinated objective and a unified understanding of how the Model-Based Design process should proceed.  Communication of both the lessons learned and the obstacles encountered is a major key to succeeding.


Number 3: Deadlines can shift …

At the start of an MBD adoption process, deadlines are set, often with a limited understanding of the full set of tasks involved in migration.  They are estimates.  The way customers evaluate and update deadlines should be based on the following rationale:

1. Unexpected efficiencies found post-migration
  2. Additional tasks required to validate the migration
  3. Increased or decreased scope of the project
  4. Externally mandated deadlines: some deadlines are attached to other projects

Final thoughts

One of the great joys of being a consultant is having the chance to work with a wide range of individuals, each of whom brings a unique insight into the software development process.  I look forward to my next 20 years of interactions.

Why Adopt?

These blog posts have focused on the adoption of Model-Based Design.  The choice of the word “adoption” was intentional.  When I visit a customer, I tell them the following:

“80% of what I will recommend is generic best practices, common
across all Model-Based Design.  The next 10% is a selection of common
patterns in use relevant to your industry and regulatory needs.
The last 10% is the unique part of your development; your intellectual property”

Why do I say this?  Model-Based Design, from an architecture, data, and V&V perspective, is now a mature field.  In a mature field, time should be spent on developing the IP aspects of the design, not infrastructural components.  To that end, there is a significant body of best practices available for companies to reference.  (See the reference page for a small subset.)

How to succeed at adoption?

As this blog has discussed on a number of occasions, adoption is a process.  To succeed, there are 5 key activities that need to be performed:


1. Take background training
  2. Get educated on existing MBD frameworks (see references)
  3. Identify non-conforming cases (your 10% IP)
  4. Validate the MBD approach for the non-conforming cases
  5. Utilize external resources

External resources: final thoughts

Success often comes from knowing when to ask for outside help, either from other groups within your company who have already blazed a trail or from outside support groups (such as training and consulting).  Utilizing support early in the adoption process enables a faster rate of adoption with fewer implementation issues.

Translate: essence, not the content

Very few projects start off with a clean slate; the majority have some body of existing text-based code (C/C++/Assembler) which needs to be either translated or wrapped into the Model-Based Design environment.  For the cases where translation is the desired path, the objective should be the translation of the essence (e.g. the requirements), not the content.

Why translate essence, not content?

First, every programming language has unique constructs which may or may not be directly replicable in other languages.  Because of this, a common failure mode is to try to directly replicate coding patterns in the MBD environment.


Second, when you translate based on the requirements, you have the opportunity to improve upon the existing code.

There is a function for that…

It is common in text-based algorithms to hand-implement basic functions such as table lookups, integrators, etcetera.  While in some edge cases the text-based implementation is more efficient, this is less common with the growing maturity of Model-Based Design tools.


Further, the small efficiency gains from the existing implementation are frequently less important than the clarity gained by using built-in blocks.

It is really a…

In text-based languages, truth tables and state machines are implemented as either a series of if/then/else or switch/case statements.  Within the MBD environment, both truth tables and state machines have direct implementations.
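For instance, decision logic like the following (the mode values and condition names are illustrative) is a natural candidate for a Truth Table block rather than a line-by-line re-coding of the branches:

  if isOverTemp && isHighLoad
      mode = 2;   % derate
  elseif isOverTemp
      mode = 1;   % warn
  else
      mode = 0;   % normal
  end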

Pushing things too far…

A final note: there are some areas where text-based modeling makes the most sense.  Generally, this is in the area of long, complex equations.  While they can be rendered in block form, they are more easily read in text form.  With that in mind, I recommend using MATLAB blocks for longer equations.

(Image: the Pythagorean theorem rendered as Simulink blocks)

The image above, the Pythagorean theorem, is relatively simple.  Yet even it would be more easily read as

C = sqrt(A^2 + B^2)

Final thoughts (part 1)

When translation occurs, it is important that the new implementation is validated against the behavior of the existing code.  Failure to do so can result in larger system-level errors.

Final thoughts (part 2)

In my translation example, I used the simple phrase “I have the cutest cat in the world.”  I submit the following images to back up that claim.

Clean code: clean models?

In a recent customer conversation, I was asked if there was a mapping of Model-Based Design (MBD) constructs to the concepts of “smells” found in Robert C. Martin's Clean Code. Martin's book was written targeting Java; however, many of the concepts have a direct mapping onto other programming languages.  The mapping onto other object-oriented text-based languages is easily seen.  Hence the question of how OO smells could be mapped onto MBD.

Note: I am reviewing this in the context of developing embedded code using Simulink and Embedded Coder. Clean Code was written with object-oriented, user-interfacing code in mind (e.g. web pages, spreadsheets, …).

Martin's book defines 7 primary sources of smells:

  1. Comments:  Covers lack of comments, obsolete comments or out of date comments
  2. Environment: Covers the automation of build and test operations
  3. Functions: Covers definition of function interfaces and dead code
  4. General: 36 guidelines that cover a range of coding best practices
5. Java: Issues specific to the Java programming language
  6. Names: Covers fundamentals of naming conventions
  7. Tests: Covers best practices for testing


Quick overview

For some of the smells, there is a direct and easy mapping.

Comments: Models are, to an extent, self-documenting.  Additional documentation should be added as needed.  Martin's recommendations on keeping comments concise and up-to-date are directly mappable.

Environment: The smells in this section deal with automation of the build and test steps. There is a direct mapping for these smells, and the recommendations for automation are standard for MBD environments.

Naming conventions: For information on naming conventions (from an earlier MathWorks blog I wrote): A few thoughts on Naming Conventions

Tests: The smells for tests are standard recommendations for testing.  Earlier blog posts on testing can provide the mapping onto these smells.


Deeper look

Of the 7 smells, I want to spend more time looking at both the “Functions” and “General” groupings.


Functions

Martin has 4 smells related to functions; of the 4, MBD conforms to 1.5 of them.

1. Too many arguments:  Control of physical systems requires multiple pieces of information, so Martin's recommendation of 0-3 input arguments doesn't hold.  (It is also the object-oriented programming paradigm that allows for that rule.)  At the same time, validate that the inputs to the function are required.  The MBD smell for function arguments is found in non-required inputs.
  2. Output arguments:  Like input arguments, modeling control systems requires output arguments to continue the control flow.
  3. Flag arguments: Control algorithms frequently depend on modal conditions.  It is preferable to use these conditions to control enabled-type subsystems or modal logic within a Stateflow chart.  Hence the 0.5 agreement on this one.
  4. Dead functions: Full agreement here; avoid dead functions at all cost.


General

The general category includes 36 different code smells; I have subcategorized them into 6 themes:

  1. Clear functionality: The behavior and function of the model should be clear from its construction.  Further, all functionality of the model should be implemented.
  2. Readability: The model should be easy to read.  The MAAB style guidelines have a direct mapping for this category.
  3. Encapsulation: The model’s functionality should exist within a scope of the model; e.g. there should not be calculations dependent on methods outside of the model.  I wrote about these issues in the Model Architecture section of this blog.
4. Scope of variables: With the exception of user-defined I/O and parameters, MBD tools automatically define the scope of variables, making these rules, for the most part, irrelevant.
  5. Reuse: Smells dealing with the reuse, and failure to reuse.  These rules are directly mappable looking at the use of Referenced Models and Libraries to achieve the aim.
  6. Object-oriented issues: Not applicable

Final thoughts

The concepts put forth in the book Clean Code represent a useful set of guidelines for understanding coding best practices.  To the extent that models map onto code, the concepts behind Clean Code apply.  However, MBD abstracts many concepts behind coding into a higher-level language, placing the clarity and encapsulation of the actual code into the hands of the code generation tool.

Side note:

Late in my conversation with the customer, I realized that I was talking to someone from Denmark about smells.  I regret that I did not take the opportunity to make a reference to Hamlet.  (Something is rotten in the state of Denmark)
