Know your history: The Stateflow history junction block

The history junction block in Stateflow™ is a simple, powerful, and often misunderstood entity. In this blog post, I will try to explain the use of the history junction through a (hopefully) humorous example.

The Model

For this example, I have built two versions of the same model, one with the history junction and one without. By doing so I can illustrate the effect of the block.

[Figure: HistoryJunction — the chart with and without the history junction]

In this example, the first state is “MortgageStatus”; it has three substates: “Start of Loan,” “Paying Off Loan,” and “Own Your Home.” If all progresses normally, you will be in the final state after 30 iterations of the chart.

However, there is a second top-level state, “BankFraudOccured.” For this model, I have configured it to be true every 10th cycle. So let’s look at the results with and without the history junction.

The results

The function of the history junction is to allow users to return to the last active substate within a superstate; without it, the user returns to the default substate. If we look at the two graphs, we will see the effect of the history junction.

[Figure: mortgageResults — mortgage state over time, with and without the history junction]

In the left-hand graph, the instance with the history junction, the “bank fraud event” does not impact the payoff of the mortgage. However, in the right-hand graph, the bank fraud “resets” the mortgage to the “NEW” state and keeps the borrower paying indefinitely. This can be clearly seen by looking at the Sequence Viewer.

[Figure: sequenceMortgage — Sequence Viewer output for the two charts]

The red boxes here show where the model transitions to and from “BankFraud.” With the history junction in place, you go right back to paying. Without it, you follow the default transition back to the starting substate.
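To make the difference concrete, here is a minimal MATLAB sketch (not the actual Stateflow chart) that mimics the two charts; the substate names and the paymentsMade counter are illustrative stand-ins for the chart’s substates and local data.

  % Minimal approximation of the two charts (illustrative only)
  % useHistory = true  ~ chart with a history junction
  % useHistory = false ~ chart that re-enters through the default transition
  useHistory = false;                 % toggle to compare the two behaviors
  state = "StartOfLoan";              % active substate of MortgageStatus
  paymentsMade = 0;                   % hypothetical local counter
  for k = 1:60
      if mod(k, 10) == 0              % BankFraudOccured fires every 10th cycle
          if ~useHistory
              state = "StartOfLoan";  % default transition: start the loan over
              paymentsMade = 0;
          end                         % with history: resume the prior substate
      end
      switch state
          case "StartOfLoan"
              state = "PayingOffLoan";
          case "PayingOffLoan"
              paymentsMade = paymentsMade + 1;
              if paymentsMade >= 30
                  state = "OwnYourHome";
              end
      end
  end
  fprintf('Final state after 60 cycles: %s\n', state);

With useHistory set to true, the script ends in “OwnYourHome”; with it set to false, the fraud event keeps restarting the loan and the final state is still “PayingOffLoan.”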

Words of caution

So in this first example, we had just a single state connected to our main state. Now let us take a look at a multi-state example.

[Figure: refi — the multi-state refinance example]

In this example, the history junction would result in the state returning to “Paying Off” without going through the “Start of Loan” state. The history junction applies to all transitions into and out of the parent state. To correct for this, the transition from the “Refinance” state needs to connect directly to the “Start of Loan” substate.

[Figure: refiRight — the corrected refinance transition]

Final thoughts

The first iteration of the MAAB style guidelines recommended against the use of the history junction. With version 2.0, this recommendation was removed, which opened up new modeling patterns for end users; the history junction enables multiple design patterns that are difficult to create without it. Hopefully, this post sheds some light on its proper use.

Resources in support of adoption

One of the most common questions asked about adopting Model-Based Design is “What sort of resources do I need to succeed?” Since it is a common question, I have a ready answer: there are three things that you need.

  1. Support from management: Managerial support for initial projects can be at the local level; however, for full adoption across a company, VP-level support is required.
  2. A correctly scoped project: As written in earlier posts, identifying the correct initial project is critical for success.
  3. Engineering resources: The engineering resources are the people who will be doing the work on the project. They are the subject of the current post.

Engineering resources

In the initial stage, my recommendation is that you have at least 3 engineers with:

  • 5+ years with the company
    (Preferably one with at least 10)
  • 80% of their time dedicated to the project
  • Exposure to multiple stages of the software development process

Why these recommendations? First, with respect to experience, there are two aspects. You need someone who understands the complexity of the existing project, past problems, and past successes so they can accurately judge the MBD processes. Second, people with experience know who to talk to when there are issues that are outside of their scope.

The next aspect is the percentage of “on-project” allocation. Adopting any new process requires dedicated time to study, learn, try, fail, and adapt. If resources are split between multiple projects, the time required to digest the new information will be lost.

The reason for multi-stage exposure should be self-evident: the engineers need to understand how their suggested changes to the development process affect people across the organization, not just in their local domain. At a minimum, these experienced engineers should know when to pull in outside resources for consultation.

Final thoughts: Why 3?

The rationale behind having a minimum team of three engineers is to provide diverse viewpoints on the decisions required as part of the adoption process.   Following these recommendations greatly increases the probability of initial projects succeeding.


Education of an organization…

Successful deployment of a Model-Based Design process is dependent on the education of the development group and then, in turn, the rest of the engineers. The training requirements for the development group and the engineers have some overlap; however, by its nature, the development group will have higher-level requirements.

Core and supporting areas for training

Adopting Model-Based Design requires learning new skills and adapting existing skills to a new environment.


There are 4 core areas for training and an additional 3 supporting areas.

Core

  1. Model and system architecture: The model and system architecture provides guidelines for model construction and system integration.
  2. Data management: Data management enables data-driven development workflows and allows for design of experiments within the MBD context.
  3. Requirements workflow: Software development processes should begin with well-written requirements documents.  Model-Based Design simplifies the traceability of requirements from creation to final deployment.
  4. Verification and validation: The verification and validation processes are greatly accelerated in the Model-Based Design environment due to the simplification of early simulation of the software artifacts.

Supporting

  1. Version control processes: Use of version control software continues from traditional software development processes. The key lies with the artifacts under management.
  2. Documentation processes: Documentation covers both the creation of training material and the generation of tracking and V&V documentation.
  3. Bug tracking software: Like version control, bug tracking software carries over from traditional software development processes.

Methods of training

When adopting Model-Based Design, training consists of both formal and informal training. The first line of training is tool-specific training, i.e., how to use the individual tools; this provides the grounding in their use.

Once a baseline understanding of the tools has been acquired, training on the company-specific workflow needs to take place. For the development team, this training takes place either through reading papers on best practices for Model-Based Design adoption or through hiring external resources.

Once the development team has defined the MBD workflow, they should document the process and provide training to other members of the engineering team.

Final thoughts

Resources invested in training pay a quick return on investment, enabling engineers and software developers to speak the same language and use the same concepts. The importance of developing custom training and workflows cannot be stressed enough: while 90% of Model-Based Design workflows will be common between companies, the long-term success of any project depends on training that directly addresses the unique challenges of your organization.

Safety critical systems and MBD

A safety-critical system can be defined as follows:

A safety-critical system or life-critical system is a system whose failure or malfunction may result in one (or more) of the following outcomes:

  • death or serious injury to people
  • loss or severe damage to equipment/property

The design of these systems places the highest burden on the team of engineers, knowing that their actions may directly impact another person’s life.  So what should an engineer do?

Process and standards

To help in the development of safety-critical software, multiple standards documents have been developed:

  • DO-178C: Software Considerations in Airborne Systems and Equipment Certification
  • ISO 26262: An international standard for functional safety of electrical and/or electronic systems in production automobiles
  • IEC 61508: A basic functional safety standard applicable to all kinds of industry. It defines functional safety as “part of the overall safety relating to the EUC (Equipment Under Control) and the EUC control system which depends on the correct functioning of the E/E/PE safety-related systems, other technology safety-related systems and external risk reduction facilities.”

The standards documents are one part of what is required to implement a safety-critical system. The other part is a process that embodies the guidelines of the standards document.
In general, there are 4 parts of a standard's guidelines that must be addressed in the software development process:

  1. Validation of tool behavior
  2. Creation and traceability of requirements
  3. Compliance with software development best practices
  4. Adherence to verification and validation processes

Tool validation

Tool validation consists of two steps

  1. Develop and execute a validation plan to ensure the software tool (i.e., MATLAB and add-on products) is working as anticipated and producing the right results. (Exhaustive testing at this stage isn’t expected.)
  2. Validate and ensure your algorithm is working as you expect. Is it producing the right results based on your requirements?

There are essentially three main steps to creating a software tool validation plan:

  1. Create a tool validation plan: Identify risks, define contexts of use, and perform validation activities to reduce risk to an acceptable level. Typical items to document include hazard assessment, tool role in the development process, standard operating procedures, validation approaches, resources, and schedule.
  2. Develop a validation protocol: This includes test cases, expected results, and assumptions.
  3. Execute that validation protocol: Run test cases, and create a final tool validation report to document the validation activity.

Use of requirements

Creation of safety-critical software starts with the development of testable requirements. The high-level requirements and derived requirements are then mapped onto the artifacts in the development process.

Once the requirements are mapped to the artifacts, they need to be analyzed for both coverage and correctness. The correctness aspect is covered in the verification and validation step.

There are two types of coverage, requirements coverage and artifact coverage; one hundred percent coverage should be achieved for both (a short illustration follows the list below).

  1. Requirements coverage: validation that every requirement is linked to an artifact in the system
  2. Artifact coverage: The percentage of artifacts that have a requirement associated with them.  In this case, an “artifact” may be resolved down to a single line of code for some systems.
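As that short illustration, the sketch below computes both percentages from a hypothetical set of requirement IDs, artifact names, and link pairs; none of these names come from a real project.

  % Hypothetical requirement-to-artifact links (illustrative data only)
  requirements = ["REQ_001" "REQ_002" "REQ_003" "REQ_004"];
  artifacts    = ["ctrl/Throttle" "ctrl/Brake" "ctrl/Gear"];
  links = ["REQ_001" "ctrl/Throttle";
           "REQ_002" "ctrl/Brake";
           "REQ_003" "ctrl/Brake"];
  % Requirements coverage: requirements linked to at least one artifact
  reqCoverage = 100 * numel(intersect(requirements, links(:,1))) / numel(requirements);
  % Artifact coverage: artifacts linked to at least one requirement
  artCoverage = 100 * numel(intersect(artifacts, links(:,2))) / numel(artifacts);
  fprintf('Requirements coverage: %.0f%%\nArtifact coverage: %.0f%%\n', ...
          reqCoverage, artCoverage);

In this toy data set, requirements coverage is 75% (REQ_004 is unlinked) and artifact coverage is 67% (ctrl/Gear has no requirement), so neither metric meets the 100% target.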

The final part of the requirements workflow is the tracing of requirements through the development cycle.  Tracing requirements is the process of mapping the requirement onto specific artifacts and validating the behavior of the requirements through each step of the development process.


Verification and validation

The V&V portion of the development process serves 3 ends.

  1. Validation of the tools in use
  2. Verification of requirements
  3. Enforcement of development standards

Of the three tasks, the first two have been previously covered, so let’s look at the third: enforcement of development standards. Software languages have coding standards; C and C++ have the MISRA C standard, while Simulink has the MAAB guidelines. Validation tools can ensure that the code or models are in compliance with the standard.

Software development best practices

Like the development standards, there are existing documents of best practices for software development. Selection of, and adherence to, such practices is required for safety-critical workflows. The reference section of this blog includes some best-practice workflows for MBD.

Over the HIL (Hardware in the Loop)

Over the years my most common projects for hardware in the loop (HIL) systems have been “plant simulator” modes.   With this post, I will take a deeper look at these types of HIL systems.

[Figure: typesOfHill — types of HIL systems]

Elements of a plant simulator

The plant simulator environment consists of 4 elements, plus the testing infrastructure that supports them:

  1. The combined environment and plant model: Running on the HIL system are plant and environment models.
  2. The HIL system and physical connections: The HIL system simulates the plant system and communicates to the control hardware over a set of defined physical I/O.
  3. The control hardware: The control hardware is the limiting device in this system.  It has both processing limitations (RAM/ROM, fixed or floating point) as well as hardware limitations (I/O resolution, CPU clock resolution)
  4. The control algorithm: The control algorithm, deployed to the hardware, is the unit under test (UUT).
  5. Testing infrastructure: While not part of the plant simulator, testing infrastructure is part of every HIL system.

 

[Figure: Hill_With_Plant — HIL system running the plant and environment models]

Limitations of hardware: why you use a HIL system

The recommended best practice for developing software with Model-Based Design is to perform Model-in-the-Loop (MIL) testing as part of the early development process. Upon completion of the initial development process, the HIL system is used to validate processor-specific behavior. The two most common issues are I/O update frequency and I/O resolution.

Let’s take the example of a throttle body; the device has a range of 2.5 degrees (hard stop, closed) to 90 degrees (full open). It has three linearly encoded voltage signals that represent the position of the throttle using a standard y = mx + b formula. If we look at a 0 to 5 volt potentiometer sensor and assume the signal spans the full range, then the formula becomes

V1 = m1 * alpha
where m1 = (5 - 0) / (90 - 2.5) = 0.0571 volts per degree
V1 = 0.0571 * alpha

If we now look at the other end of the system, the hardware-in-the-loop system, it is standard to have 12-bit resolution for analog-to-digital cards. This means that over the 5 volt range we have a resolution of

bit resolution  = 5 volts / 2^12 = 0.0012 volts

Combining these two formulas we find that the HIL system can resolve data at

Degree resolution = 0.0012 / 0.0571 = 0.021 degrees

This resolution is, most likely, more than sufficient for the plant model. In fact, it is probably a finer resolution than the potentiometer on the throttle body provides. The act of performing these calculations is done to ensure that the resolution is sufficient for the system’s needs. Let’s look at a simple change and assume that the roles are reversed: the HIL system is sending a 0 to 5-volt signal and the embedded device is receiving the data. Further, assume the analog-to-digital card changes from 12-bit to 8-bit resolution:

Bit resolution = 5 volts / 2^8 = 0.0195 volts
Degree resolution = 0.0195 / 0.0571 = 0.342 degrees

Most likely this would not be sufficient for the performance of the system.
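The back-of-the-envelope math above is easy to script so that it can be rerun whenever the sensor range or the ADC changes; the sketch below simply repeats the calculation for both card resolutions, using the values assumed in this example.

  % Resolution check for the throttle-position signal (values from this example)
  voltRange = 5;                          % potentiometer output span, volts
  degRange  = 90 - 2.5;                   % throttle travel, degrees
  m1 = voltRange / degRange;              % volts per degree (~0.0571)
  for nBits = [12 8]
      voltPerCount = voltRange / 2^nBits; % ADC resolution, volts
      degPerCount  = voltPerCount / m1;   % smallest resolvable throttle angle
      fprintf('%2d-bit card: %.4f V/count -> %.3f deg/count\n', ...
              nBits, voltPerCount, degPerCount);
  end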

Final thoughts

HIL systems are used for a number of tasks; one critical task is the “shake out” of the hardware interfaces. As with all HIL tasks, it has the advantage of moving tests off of expensive (or non-existent) prototype hardware and offering a level of repeatability that cannot be achieved in the field. For more information on the plant model aspect of HIL testing, you can view my blog post on plant modeling.

A call for topics…

In this post, I would like to put out a request for topics related to Model-Based Design that are open questions for you, either with respect to establishing an MBD culture or use of specific workflows.

Feel free to either leave the questions in the comments of this blog or through the LinkedIn page.

A three-body problem: Juggling and Model-Based Design Process Adoption

When I was 15 I made a bet with a friend to see who could learn to juggle first; as a future engineer I set the following measurable specifications:

  • 3 balls in the air
  • For at least 1 minute
  • Be able to stop without dropping any balls

With these objectives, we both went off to learn to juggle. My friend jumped into it immediately, working with 3 balls. I, on the other hand, started out simply practicing throwing one ball back and forth until I had that down pat, taking my time to build up; in 3 weeks I won a Snickers bar. For an example of juggling, I have the following video:

[Video: juggling example]

Model-Based Design Adoption

So how does this connect to adopting a Model-Based Design process? The analogy should hold true: a fully fleshed-out MBD process has multiple tools and technologies that can be adopted. Master the basics before you move on to the next technology.

Phased tool adoption

There are core, secondary, and process-dependent tools. An example of a core tool/process is Simulink and the Modeling Guidelines: develop your model in a readable, efficient format. Secondary tools include formal methods tools (Simulink Verification and Validation) or physical modeling. Process-dependent tools include the requirements traceability functionality required by some high-integrity processes.

Working your core

So what do I call my core?

  • Simulink, Stateflow, MATLAB:  Tools for algorithm design
  • Data dictionary: Create data driven models
  • Simulink Test: Start testing with your basic unit tests
  • Simulink Projects and Model Reference: Develop components from your model to enable working in a group.

Rollout!

The validation, group, and departmental phases each have a rollout component. This post covers the group and departmental rollouts, which share common requirements, differing only in the level of formality of the tasks. Successful completion of the rollout is dependent on three things: a defined process, an education program, and supporting staff.

Defined process

Upon completion of the validation phase, the Model-Based Design workflow is defined. This process needs to be documented and, to the degree possible, automated.


Education program

Rolling out a new process to staff requires an educational program.  In general for a Model-Based Design process, this consists of 3 types of instruction.

    1. Base tool training: Most MBD workflows utilize supporting tools such as Simulink™ and Embedded Coder®.  For smaller companies, this training is normally provided by the tool vendor.  For large companies, in-house training on these tools may be developed.(1)
    2. Workflow training: This training instructs the end users on how the parts of the development process operate and how they fit together.(2)
    3. Migration training: The transition to a new process requires a migration of artifacts between the two systems.  A basic overview of how artifacts are converted needs to be presented.


In the validation phase, the workflow training should be informal; the final formalized version of the training is developed at the end of the validation phase.  The validation phase will not have a migration training component.

For the group rollout, it is critical that all of the processes are fully documented to ensure that employees can reference the information after completing training.

Supporting staff

The rollout is dependent on the existence of the supporting staff. This staff draws from the process adoption team. This group is responsible for creating the workflow documentation, answering questions from the adopters, and developing any custom tools required by the end users.


Final thoughts

Successfully rolling out a new process is dependent on gathering feedback from the end users. This feedback should be used to improve both the training and the actual processes. Remember, all processes can be improved through the use of feedback and KPI monitoring.

Footnotes

(1) There is a trade-off to developing in-house training. While it allows the company to customize its training, it requires in-house resources and may miss features or capabilities that the in-house staff are not aware of.
(2)Generally, there are multiple “workflow trainings.”  One for each role that a person may play.  Each training focuses on the tasks required for the specific role while still providing information on the other tasks within the workflow.

Managing data

In previous posts, I have covered data attributes and data usage. In this post, I cover data management. Within the Model-Based Design workflow, as in traditional hand-coding environments, there is a concept of model-scoped and common data. This post will use Simulink-specific concepts for data dictionaries to show how scoped data can be achieved.

What is in common?

Deciding what goes into the common versus the model-specific data dictionary is the primary question that needs to be asked, both at the start of the project and throughout the model elaboration process. There is always a temptation to “dump” data into the common data dictionary to “simplify” data access. While in the short run it simplifies access, in the long run doing so creates unmanageable data repositories. So, again, the question is “what goes in there?”

Common data type specification

The common data types consist of four primary entries, each of which is created as a separate sub-dictionary.

  • Structure definitions
  • Enumerated data types
  • Data type aliases
  • Model configurations

In all 4 cases, these pieces of information should be used in a global scope; for example, structures used as an interface definition between two models, or an enumerated data type that is used for modal control across multiple models. In contrast, structures that are local to a single model should not be part of the common data types sub-dictionary.
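As a hedged illustration, the snippets below show the two kinds of entries mentioned above as they might be defined before being imported into the common data types sub-dictionary; the names GearState and EngineIO are invented for this example. The enumerated type lives in its own file:

  % GearState.m — an enumerated type shared across models (hypothetical name)
  classdef GearState < Simulink.IntEnumType
      enumeration
          Park(0)
          Reverse(1)
          Neutral(2)
          Drive(3)
      end
  end

The structure (bus) definition that serves as an interface between two models can then reference it:

  % EngineIO — a bus used as an interface definition between two models
  elems(1) = Simulink.BusElement;
  elems(1).Name = 'throttlePct';
  elems(2) = Simulink.BusElement;
  elems(2).Name = 'gear';
  elems(2).DataType = 'Enum: GearState';
  EngineIO = Simulink.Bus;
  EngineIO.Elements = elems;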

Common data

Like the common data types, the common data consists of sub-dictionaries. In this case, there are three:

  • Physical constants
  • Conversion factors
  • Common parameters

The first two are simple to understand; instead of having the engineer type 9.81 (m/s^2) for each instance of the acceleration due to gravity, a physical constant (accelGravMetric) can be defined. Likewise, instead of hard coding 0.51444 you could have a parameter Knots_to_meter_p_sec. (Note: in the first case, 9.81 is a value that most engineers know off the top of their head. In the second case, most people will not recognize the value, and it results in “magic numbers” in the code. This is compounded when people “compact” multiple conversion factors into a single conversion calculation and the information is lost.)
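As a hedged example, the constants mentioned above could be defined as Simulink.Parameter objects and then saved into the common data sub-dictionary; the property choices here (value and description only) are deliberately minimal, and your own data management conventions may add units, data types, and storage classes.

  % Named constants destined for the common data sub-dictionary
  accelGravMetric = Simulink.Parameter(9.81);
  accelGravMetric.Description = 'Acceleration due to gravity, m/s^2';
  Knots_to_meter_p_sec = Simulink.Parameter(0.51444);
  Knots_to_meter_p_sec.Description = 'Conversion factor: knots to meters per second';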

The final sub-dictionary, common parameters, is the most difficult to scope. Ideally, it should be limited to parameters that are used in more than one model, or in more than one integration model. To prevent the “mushroom growth” of data in the common parameter data dictionary, regular pruning should be applied.

Pruning your data

Pruning data is the process of examining entries in a data dictionary and determining if they are needed in the common data dictionary or in a model-specific dictionary. Within the Simulink environment, this can be accomplished using the Model Explorer or programmatically.
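Below is a minimal sketch of the programmatic route, assuming a common dictionary named CommonData.sldd and a placeholder list of entries known to be shared; the file name, the list, and the choice to look only at Simulink.Parameter entries are all assumptions for this example.

  % Prune parameter entries from the common dictionary that are no longer shared
  dd      = Simulink.data.dictionary.open('CommonData.sldd');
  section = getSection(dd, 'Design Data');
  entries = find(section, '-value', '-class', 'Simulink.Parameter');
  sharedNames = ["accelGravMetric" "Knots_to_meter_p_sec"];   % placeholder list
  for k = 1:numel(entries)
      if ~any(strcmp(entries(k).Name, sharedNames))
          deleteEntry(section, entries(k).Name);  % relocate to a model dictionary instead
      end
  end
  saveChanges(dd);

In practice, the “shared” list would come from an analysis of which models actually reference each entry, not from a hard-coded array.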


Model and integration model data dictionaries

In the section on model architecture, we discussed the concept of “integration models.”  An integration model consists of multiple sub-models, which, in turn, may contain sub-models.

[Figure: IntegrationModelDD — integration model data dictionary hierarchy]

The pattern for the integration model data dictionary mirrors the pattern that was shown in the initial diagram; the “twig” of the model tree references the branches, which in turn reference all the way back to the root.
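In Simulink, this referencing is set up with data dictionary “data sources”; the short sketch below points one model-level (twig) dictionary back at the common (root) dictionary, with both file names invented for the example.

  % Make the model-level dictionary reference the common dictionary
  modelDD = Simulink.data.dictionary.open('ThrottleControl.sldd');  % hypothetical file
  addDataSource(modelDD, 'CommonData.sldd');   % entries in CommonData become visible
  saveChanges(modelDD);

Repeating this pattern at each level of the tree gives every model access to the data it needs while keeping ownership of each entry in a single dictionary.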

[Figure: dataDictonary — data dictionary reference pattern]

Final thoughts

The use of scoped data dictionaries allows users to logically organize their data while minimizing the amount of work that individual contributors need to do to maintain the data. This approach does not eliminate the need for data maintenance; however, it does provide tools to aid in the work.