Over the HIL (Hardware in the Loop)

Over the years, my most common projects for hardware-in-the-loop (HIL) systems have been "plant simulator" setups. With this post, I will take a deeper look at these types of HIL systems.

[Figure: types of HIL systems]

Elements of a plant simulator

The plant simulator environment consists of the following elements:

  1. The combined environment and plant model: Running on the HIL system are plant and environment models.
  2. The HIL system and physical connections: The HIL system simulates the plant system and communicates to the control hardware over a set of defined physical I/O.
  3. The control hardware: The control hardware is the limiting device in this system.  It has both processing limitations (RAM/ROM, fixed- or floating-point support) and hardware limitations (I/O resolution, CPU clock resolution).
  4. The control algorithm: The control algorithm, deployed to the hardware, is the unit under test (UUT).
  5. Testing infrastructure: While not part of the plant simulator, the testing infrastructure is part of every HIL system.

 

[Figure: HIL system with plant model]

Limitations of hardware: why you use a HIL system

The recommended best practice for developing software with Model-Based Design is to perform Model-in-the-Loop (MIL) testing as part of the early development process. Upon completion of the initial development process, the HIL system is used to validate processor-specific behavior.  The two most common issues are I/O update frequency and I/O resolution.

Let’s take the example of a throttle body; the device has a range of 2.5 degrees (hard stop closed) to 90 degrees (full open).  It has three linearly encoded voltage signals that represent the position of the throttle using a standard y = mx + b formula.  If we look at a 0 to 5 volt potentiometer sensor and assume the signal covers the full range, then the formula becomes

V1 = m1 * alpha
where m1 = (5-0)/((90-2.5)*pi) = 0.0182
V1 = 0.0182 * alpha

If we now look at the other end of the system, the hardware-in-the-loop system, it is standard to have 12-bit resolution on analog-to-digital cards.  This means that over the 5 volt range we have a resolution of

bit resolution  = 5 volts / 2^12 = 0.0012 volts

Combining these two formulas we find that the HIL system can resolve data at

Degree resolution = 0.0012 / 0.0182 = 0.0671 degrees

This resolution is, most likely, more than sufficient for the plant model.  In fact, it is probably a finer resolution than the potentiometer on the throttle body provides.  Performing these calculations ensures that the resolution is sufficient for the system's needs.  Now let's look at a simple change and assume that the roles are reversed: the HIL system is sending a 0 to 5-volt signal and the embedded device is receiving the data.  Further, assume a simple change to the analog-to-digital card from 12-bit to 8-bit resolution:

Bit resolution = 5 volts / 2^8 = 0.0195 volts
Degree resolution = 0.0195 / 0.0182 = 1.0738 degrees

Most likely this would not be sufficient for the performance of the system.
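
As a rough sketch, the same resolution check can be scripted in MATLAB; the values below simply mirror the hypothetical throttle example above.

% Resolution check for the throttle position signal (hypothetical values)
vRange    = 5;                      % sensor voltage range (volts)
m1        = 0.0182;                 % volts per degree, from the formula above
nBits     = [12 8];                 % candidate analog-to-digital resolutions
bitRes    = vRange ./ 2.^nBits;     % volts per count
degreeRes = bitRes ./ m1;           % degrees per count
fprintf('%2d-bit: %.4f volts, %.4f degrees per count\n', [nBits; bitRes; degreeRes]);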

Final thoughts

HIL systems are used for a number of tasks; one critical task is the “shake out” of the hardware interfaces.  As with all HIL tasks, it has the advantage of moving tests off of expensive (or non-existent) prototype hardware and offering a level of repeatability that cannot be achieved in the field.  For more information on the plant model aspect of HIL testing, you can view my blog post on plant modeling.

 

 

A three-body problem: Juggling and Model-Based Design Process Adoption

When I was 15 I made a bet with a friend to see who could learn to juggle first; as a future engineer I set the following measurable specifications:

  • 3 balls in the air
  • For at least 1 minute
  • Be able to stop without dropping any balls

With these objectives, we both went off to learn to juggle. My friend jumped into it immediately, working with 3 balls. On the other hand, I started out simply practicing throwing one ball back and forth until I had that down pat, taking my time to build up. In 3 weeks I won a Snickers bar.

Model-Based Design Adoption

So how does this connect to adopting a Model-Based Design process? The analogy holds true; a fully fleshed-out MBD process has multiple tools and technologies that can be adopted. Master the basics before you move on to the next technology.

Phased tool adoption

There are core, secondary, and process-dependent tools. An example of a core tool/process is Simulink with modeling guidelines: develop your model in a readable, efficient format. Secondary tools include formal-methods tools (Simulink Verification and Validation) or physical modeling. Process-dependent tools include the requirements traceability functionality required by some high-integrity processes.

Working your core

So what do I call my core?

  • Simulink, Stateflow, MATLAB:  Tools for algorithm design
  • Data dictionary: Create data driven models
  • Simulink Test: Start testing with your basic unit tests
  • Simulink Projects and Model Reference: Develop components from your model to enable working in a group.

Rollout!

The validation, group, and departmental phases each have a rollout component. This post covers the group and departmental rollouts, which share common requirements, differing only in the level of formality of the tasks.  Successful completion of the rollout is dependent on three things: a defined process, an education program, and supporting staff.

Defined process

Upon completion of the validation phase, the Model-Based Design workflow is defined.  This process needs to be documented and, to the degree possible, automated.

[Figure: defined process model]

Education program

Rolling out a new process to staff requires an educational program.  In general, for a Model-Based Design process this consists of three types of instruction.

    1. Base tool training: Most MBD workflows utilize supporting tools such as Simulink™ and Embedded Coder®.  For smaller companies, this training is normally provided by the tool vendor.  For large companies, in-house training on these tools may be developed.(1)
    2. Workflow training: This training instructs the end users on how the parts of the development process operate and how they fit together.(2)
    3. Migration training: The transition to a new process requires a migration of artifacts between the two systems.  A basic overview of how artifacts are converted needs to be presented.


In the validation phase, the workflow training should be informal; the final formalized version of the training is developed at the end of the validation phase.  The validation phase will not have a migration training component.

For the group rollout, it is critical that all of the processes are fully documented to ensure that employees can reference the information after completion of training.

Supporting staff

The rollout is dependent on the existence of the supporting staff.  This staff draws from the process adoption team.  This group is responsible for creating the workflow documentation, answering questions from the adopters, and developing any custom tools required by the end users.


Final thoughts

Successfully rolling out a new process is dependent on gathering feedback from the end users.  This feedback should be used to improve both the training and the actual processes.  Remember, all processes can be improved through the use of feedback and KPI monitoring.

Footnotes

(1) There is a trade-off to developing in-house training.  While it allows the company to customize their training, it requires in-house resources and may miss features or capabilities that the in-house staff are not aware of.
(2) Generally, there are multiple “workflow trainings,” one for each role that a person may play.  Each training focuses on the tasks required for the specific role while still providing information on the other tasks within the workflow.

Managing data

In previous posts, I have covered data attributes and data usage.  In this post, I cover data management.  Within the Model-Based Design workflow, and in traditional hand-coding environments, there is a concept of model-scoped and common data.  This blog post will use Simulink-specific concepts for data dictionaries to show how scoped data can be achieved.

What is in common?

Deciding what goes into the common versus the model-specific data dictionary is the primary question that needs to be asked at both the start of the project and throughout the model elaboration process.  There is always a temptation to “dump” data into the common data dictionary to “simplify” data access.  While in the short run it simplifies access, in the long run doing so creates unmanageable data repositories.  So, again, the question is “what goes in there?”

Common data type specification

The common data types consist of four primary entries, each of which is created as a separate sub-dictionary.

  • Structure definitions
  • Enumerated data types
  • Data type aliases
  • Model configurations

In all four cases, this information should be used in a global scope.  For example, structures used as an interface definition between two models, or an enumerated data type that is used for modal control across multiple models.  In contrast, structures that are local to a single model should not be part of the common data types sub-dictionary.
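
As an illustrative sketch, an enumerated type intended for global use, and a data type alias stored in the common data types dictionary, might be defined as follows (the type, entry, and file names are hypothetical):

% Define an enumeration used for modal control across multiple models
Simulink.defineIntEnumType('VehicleMode', ...
    {'Off', 'Startup', 'Run', 'Shutdown'}, [0 1 2 3]);

% Add a data type alias entry to the common data types dictionary
dd      = Simulink.data.dictionary.open('commonDataTypes.sldd');
section = getSection(dd, 'Design Data');
addEntry(section, 'ThrottleCount_T', Simulink.AliasType('uint16'));
saveChanges(dd);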

Common data

Like the common data types, the common data consists of sub-dictionaries.  In this case, there are three.

  • Physical constants
  • Conversion factors
  • Common parameters

The first two are simple to understand; instead of having the engineer put in 9.81 (m/s^2) for each instance of the acceleration due to gravity, a physical constant (accelGravMetric) can be defined.  Likewise, instead of hard coding 0.51444 you could have a parameter Knots_to_meter_p_sec.  (Note: in the first case, 9.81 is a value that most engineers would know off the top of their head.  In the second case most people will not recognize the value, and it results in “magic numbers” in the code.  This is compounded when people “compact” multiple conversion factors into a single conversion calculation and the information is lost.)
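
A minimal sketch of creating these entries as Simulink.Parameter objects in the common data dictionary (the file name is hypothetical; the values are those discussed above):

% Physical constant: acceleration due to gravity
accelGravMetric = Simulink.Parameter(9.81);
accelGravMetric.Unit = 'm/s^2';
accelGravMetric.Description = 'Acceleration due to gravity';

% Conversion factor: knots to meters per second
Knots_to_meter_p_sec = Simulink.Parameter(0.51444);
Knots_to_meter_p_sec.Description = 'Conversion factor: knots to m/s';

% Add both entries to the common data dictionary
dd      = Simulink.data.dictionary.open('commonData.sldd');
section = getSection(dd, 'Design Data');
addEntry(section, 'accelGravMetric', accelGravMetric);
addEntry(section, 'Knots_to_meter_p_sec', Knots_to_meter_p_sec);
saveChanges(dd);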

The final sub-dictionary, common parameters, is the most difficult to scope.  Ideally, it should be limited to parameters that are used in more than one model, or in more than one integration model.  To prevent the “mushroom growth” of data in the common parameter data dictionary, regular pruning should be applied.

Pruning your data

Pruning data is the process of examining entries in a data dictionary and determining whether they belong in the common data dictionary or in a model-specific dictionary.  Within the Simulink environment, this can be accomplished using the Model Explorer or programmatically.
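
As a sketch of the programmatic route, the entries in the common dictionary can be listed and a stale entry removed; the entry and file names here are hypothetical:

% Open the common data dictionary and list its entries
dd      = Simulink.data.dictionary.open('commonData.sldd');
section = getSection(dd, 'Design Data');
entries = find(section);          % all entries in the Design Data section
disp({entries.Name}');            % review what is currently treated as "common"

% Remove an entry that is only used by one model (after relocating it to that model's dictionary)
deleteEntry(section, 'legacyGainValue');
saveChanges(dd);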


Model and integration model data dictionaries

In the section on model architecture, we discussed the concept of “integration models.”  An integration model consists of multiple sub-models, which, in turn, may contain sub-models.

[Figure: integration model data dictionary]

The pattern for the integration model data dictionary mirrors the pattern that was shown in the initial diagram; the “twig” of the model tree references the branches, which in turn reference all the way back to the root.

[Figure: data dictionary reference hierarchy]

Final thoughts

The use of scoped data dictionaries allows users to logically organize their data while minimizing the amount of work that individual contributors need to do to maintain the data.  This approach does not eliminate the need for data maintenance; however, it does provide tools to aid in the work.

 

 

 

Defining your initial Model-Based Design workflow

Upon completion of the validation stage of Model-Based Design adoption, your group will be ready to define an initial MBD workflow.  The defined process should include steps to cover each stage of the software development cycle from requirements gathering all the way through acceptance testing.

[Figure: software development life cycle V-model]

Critical workflows

While there should be an overall Model-Based Design workflow, there are three primary sub-workflows that are both critical and tightly linked to each other.  As a result, development of these sub-workflows takes priority.  They are the requirements, model elaboration, and testing sub-workflows.

Requirements sub-workflow

Model-Based Design embraces methodologies for requirements-driven development.  The primary objective of the requirements workflow is to establish links from both the models and the tests back to the requirements.  As the diagram shows, the initial requirements links flow down through the development and testing procedures.

The feedback to the requirements comes from both the models (at the elaboration stage) and the tests (at the test execution stage).  Results from each stage of the development process are fed into coverage analysis tools to produce human-readable reports which track the level of compliance with the requirements.


Model Elaboration sub-workflow

The model elaboration workflow supports the development of both the system-level and component-level models.  The process depends on regular synchronization between the individual developers (D1, D2, …, DN) and the system integrator.

[Figure: model elaboration sub-workflow]

A second point to remember is that the system-level arbitration takes place between the system-level model and all of the components under development by the other developers.

Testing sub-workflow

The testing sub-workflow examines how tests are defined, when they are executed, and how information is presented to users.  For a given model, the tests assigned to that model consist of the common testing requirements for the project and the specific tests dictated by the model-specific requirements.

[Figure: testing sub-workflow]

Critical to the testing workflow is the feedback and update of test suites.  As the model is elaborated and as new issues are discovered, the initial test suite should be updated to reflect the changes in the model.  Additional changes may need to be fed back to the requirements if the results warrant it.
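
A minimal sketch of running a model's assigned tests programmatically, assuming the Simulink Test test manager API and a hypothetical test file name:

% Run the tests assigned to the unit under test
sltest.testmanager.clear();                               % start from a clean Test Manager session
sltest.testmanager.load('throttleControl_tests.mldatx');  % load the test file for the model
results = sltest.testmanager.run();                       % run all loaded test files
disp(results);                                            % inspect the returned result set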

Final thoughts

The overall Model-Based Design workflow consists of multiple sub-workflows.  The top level workflow is built up from the example sub-workflows and additional workflows as needed (such as version control and documentation workflows).

Automation of the workflows is critical for success in adoption. At each stage in the creation of the workflows the questions of “what can be automated” and “how can people view the results” should be asked.

Model architecture decomposition for hardware and closed-loop testing

Developing a new product of any kind is an iterative process.  The speed of the process is increased by using a closed-loop testing methodology.  In a closed-loop system, the outputs of the unit under test (UUT) are connected to the plant model.  The plant model outputs are, in turn, connected to the inputs of the UUT.  Utilizing the closed-loop setup, both the UUT and the plant respond in a “real world” fashion.

The logical question then is: how do we decompose the model’s interface and functions to support closed-loop testing while allowing the final model to be deployed to the final target environment?

One model, many paths…

One of the primary objectives of using a Model-Based Design workflow is the ability to use a single model, or group of models, through the full development cycle.  Selecting an appropriate top-level architecture simplifies this process.

[Figure: closed-loop model with plant and environment models]

The model above is a closed-loop system with both plant and environment models.  The plant section of the overall model reacts to the feedback from the controller, while the parameters from the environment model, such as outside temperature or pressure, are independent of that feedback.

Since the model is intended to run in simulation, the hardware input and hardware output subsystems will either be pass-throughs or signal-routing systems.  Control over what is in those subsystems is accomplished through the use of a variant subsystem.  In general, it is a best practice to have a single “target” variant parameter that controls the variants for all of the top-level models.

This decomposition, using variants, allows for the control algorithms to be reused throughout the development process.
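
A minimal sketch of a single “target” variant control driving the top-level variant subsystems; the variable and condition names are assumptions for illustration:

% One variable selects the configuration for every top-level variant subsystem
TARGET = 1;   % 1 = closed-loop simulation, 2 = deployment to target hardware

% Variant objects used as the variant controls of the Hardware Input/Output
% and Plant/Environment subsystems
SIM_VARIANT = Simulink.Variant('TARGET == 1');   % pass-through I/O, active plant and environment models
HW_VARIANT  = Simulink.Variant('TARGET == 2');   % device-driver I/O, plant and environment passed through

Switching TARGET between the two values reconfigures all of the top-level variant subsystems at once.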

The hardware model

[Figure: control algorithm with hardware input and output]

In the second example, the model is configured for deployment to the target hardware.  Blocks inside the “Hardware Input” and “Hardware Output” models would include links to the device drivers on the board.  Again, model variants can be used to reconfigure the hardware input and output subsystems.

As a side note, it is possible to reuse the model from “One model, many paths…”  In that case, the Plant and Environment models would be pass-throughs selected through the variant parameter.

Scaling

Those with sharp eyes will notice that the models are called “hardware input and scaling.”  The scaling component converts the digital signals into engineering units.  At a high level, the decomposition looks like the following image.  On the right (blue), “raw” hardware inputs are read from the sensors.  The data is then passed into the green scaling subsystems.

[Figure: hardware input and scaling, high-level view]

Looking into the analog scaling subsystem, we see a combination of a simple linear scaling (output = m * input + b) and a Stateflow chart which arbitrates between the redundant signals.

Note that, in this case, a design decision was made to place the sensor fault handling in the hardware systems.  This was done since the three throttle sensor signals were being arbitrated and combined into a single throttle position output locally.
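
As an illustrative sketch of the scaling stage, the linear scaling and a simple median-based arbitration might look like the following; the gain, offset, and arbitration rule are assumptions, not the exact logic in the Stateflow chart:

function throttleDeg = scaleThrottle(counts1, counts2, counts3)
% Convert three redundant 12-bit throttle sensor readings into degrees
% and arbitrate to a single throttle position output.
m = (5 / 4096) / 0.0182;              % degrees per A/D count, from the earlier example
b = 2.5;                              % assumed offset: hard-stop closed position in degrees
positions = m .* [counts1 counts2 counts3] + b;
throttleDeg = median(positions);      % simple arbitration: take the middle value
end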

Final thoughts

 

There are multiple approaches to defining the top-level architecture; the key is to define the architecture early in the development process so that integration testing can begin early in the development cycle.