Managing data

In previous posts, I covered data attributes and data usage.  In this post, I cover data management.  Within the Model-Based Design workflow, as in traditional hand-coding environments, there is a concept of model-scoped and common data.  This blog post will use Simulink-specific concepts for Data Dictionaries to show how scoped data can be achieved.

What is in common?

Deciding what goes into the common versus the model-specific data dictionary is the primary question that needs to be asked, both at the start of the project and throughout the model elaboration process.  There is always a temptation to “dump” data into the common data dictionary to “simplify” data access.  While in the short run this simplifies access, in the long run it creates unmanageable data repositories.  So, again, the question is “what goes in there?”

Common data type specification

The common data types consist of four primary entries, each of which is created as a separate sub-dictionary.

  • Structure definitions
  • Enumerated data types
  • Data type aliases
  • Model configurations

In all four cases, this information should be used in a global scope.  For example, structures used as an interface definition between two models, or an enumerated data type that is used for modal control across multiple models.  In contrast, structures that are local to a single model should not be part of the common data types sub-dictionary.
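
Within Simulink, this layout can be set up programmatically.  Below is a minimal sketch, assuming hypothetical dictionary file names; the addDataSource method creates the reference from the parent dictionary to each sub-dictionary.

```matlab
% Minimal sketch: create the common data types dictionary and
% reference four sub-dictionaries (all file names are hypothetical).
subDicts = {'structDefinitions.sldd', 'enumDataTypes.sldd', ...
            'dataTypeAliases.sldd', 'modelConfigurations.sldd'};
for k = 1:numel(subDicts)
    Simulink.data.dictionary.create(subDicts{k});
end

commonDD = Simulink.data.dictionary.create('commonDataTypes.sldd');
for k = 1:numel(subDicts)
    addDataSource(commonDD, subDicts{k});  % link sub-dictionary as a reference
end
saveChanges(commonDD);
```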

Common data

Like the common data types, the common data consists of sub-dictionaries.  In this case, there are three.

  • Physical constants
  • Conversion factors
  • Common parameters

The first two are simple to understand; instead of having the engineer put in 9.81 (m/s²) for each instance of the acceleration due to gravity, a physical constant (accelGravMetric) can be defined.  Likewise, instead of hard coding 0.51444, you could have a parameter Knots_to_meter_p_sec.  (Note: in the first case, 9.81 is a value that most engineers know off the top of their heads.  In the second case, most people will not recognize the value, and it results in “magic numbers” in the code.  This is compounded when people “compact” multiple conversion factors into a single conversion calculation and the information is lost.)
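
As a sketch of how these entries might be created (the dictionary file name is hypothetical; Simulink.Parameter objects are used so descriptions travel with the data):

```matlab
% Sketch: define a physical constant and a conversion factor in the
% common data dictionary (the file name is hypothetical).
dd   = Simulink.data.dictionary.open('commonData.sldd');
sect = getSection(dd, 'Design Data');

accelGravMetric = Simulink.Parameter(9.81);
accelGravMetric.Description = 'Acceleration due to gravity, m/s^2';
addEntry(sect, 'accelGravMetric', accelGravMetric);

Knots_to_meter_p_sec = Simulink.Parameter(0.51444);
Knots_to_meter_p_sec.Description = 'Conversion factor: knots to m/s';
addEntry(sect, 'Knots_to_meter_p_sec', Knots_to_meter_p_sec);

saveChanges(dd);
```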

The final sub-dictionary, common parameters, is the most difficult to scope.  Ideally, it should be limited to parameters that are used in more than one model or more than one integration model.  To prevent the “mushroom growth” of data in the common parameter data dictionary, regular pruning should be applied.

Pruning your data

Pruning data is the process of examining entries in a data dictionary and determining whether they belong in the common data dictionary or in a model-specific dictionary.  Within the Simulink environment, this can be accomplished using the Model Explorer or programmatically.
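
A minimal programmatic sketch, with hypothetical model, dictionary, and entry names: Simulink.findVars lists the variables each model actually uses, and any common-dictionary entry used by none (or only one) of the models is a candidate to move into a model-specific dictionary.

```matlab
% Sketch: flag common-dictionary entries not used by any listed model
% (model, dictionary, and entry names are hypothetical).
models     = {'controllerA', 'controllerB'};
candidates = {'legacyGain', 'oldBreakpoints'};

usedNames = {};
for k = 1:numel(models)
    load_system(models{k});
    vars      = Simulink.findVars(models{k});  % variables the model uses
    usedNames = [usedNames, {vars.Name}];      %#ok<AGROW>
end

dd   = Simulink.data.dictionary.open('commonData.sldd');
sect = getSection(dd, 'Design Data');
for k = 1:numel(candidates)
    if ~ismember(candidates{k}, usedNames)
        fprintf('Pruning candidate: %s\n', candidates{k});
        % Once reviewed: deleteEntry(getEntry(sect, candidates{k}))
        % and re-create the entry in the model-specific dictionary.
    end
end
```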


Model and integration model data dictionaries

In the section on model architecture, we discussed the concept of “integration models.”  An integration model consists of multiple sub-models, which, in turn, may contain sub-models.

[Figure: integration model data dictionary]

The pattern for the integration model data dictionary mirrors the pattern that was shown in the initial diagram; the “twig” of the model tree references the branches, which in turn reference all the way back to the root.

[Figure: data dictionary reference tree]

Final thoughts

The use of scoped data dictionaries allows users to logically organize their data while minimizing the amount of work that individual contributors need to do to maintain the data.  This approach does not eliminate the need for data maintenance; however, it does provide tools to aid in the work.

Software versus Engineering decomposition requirements

In this video blog, I provide a few brief thoughts on handling software versus engineering decomposition of models.  Following a Model-Based Design approach, the requirements of both disciplines can be met.

Defining your initial Model-Based Design workflow

Upon completion of the validation stage of Model-Based Design adoption, your group will be ready to define an initial MBD workflow.  The defined process should include steps covering each stage of the software development cycle, from requirements gathering all the way through acceptance testing.

[Figure: software development life cycle V-model]

Critical workflows

While there should be an overall Model-Based Design workflow, there are three primary sub-workflows that are both critical and tightly linked to each other.  As a result, development of these sub-workflows takes priority.  They are the requirements, model elaboration, and testing sub-workflows.

Requirements sub-workflow

Model-Based Design embraces methodologies for requirements-driven development.  The primary objective of the requirements workflow is to establish links from both the models and the tests back to the requirements.  As the diagram below shows, the initial requirements links flow down through the development and testing procedures.

[Figure: requirements linkage workflow]

The feedback to the requirements comes from both the models (at the elaboration stage) and the tests (at the test execution stage).  Results from each stage of the development process are fed into coverage analysis tools to produce human-readable reports which track the level of compliance with the requirements.
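
As a sketch of that automation (assuming Simulink Coverage and a hypothetical model name), coverage can be collected during a test run and rendered as an HTML report:

```matlab
% Sketch: record model coverage during a test run and produce a
% human-readable HTML report (model name is hypothetical).
testObj = cvtest('throttleControl');        % coverage test specification
covData = cvsim(testObj);                   % simulate and collect coverage
cvhtml('throttleControl_coverage', covData);
```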

Model Elaboration sub-workflow

The model elaboration workflow supports the development of both the system-level and component-level models.  The process depends on regular synchronization between the individual developers (D1, D2, …, DN) and the system integrator.

[Figure: model elaboration workflow]

The second point to remember is that the system-level arbitration takes place between the system-level model and all of the components under development by the individual developers.
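
One lightweight aid for that synchronization is model comparison; as a sketch (file names are hypothetical), the integrator can diff a developer’s working copy against the integration baseline before accepting changes:

```matlab
% Sketch: compare a developer's model against the integration baseline
% (file names are hypothetical).
visdiff('throttleControl_dev.slx', 'throttleControl_baseline.slx');
```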

Testing sub-workflow

The testing sub-workflow examines how tests are defined, when they are executed, and how information is presented to users.  For a given model, the assigned tests consist of the common testing requirements for the project and the specific tests dictated by the model-specific requirements.
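
A sketch of that assembly using the MATLAB unit testing framework (the folder layout and model name are assumptions):

```matlab
% Sketch: a model's full suite = common project tests + model-specific
% tests (folder names are hypothetical).
import matlab.unittest.TestSuite

commonSuite = TestSuite.fromFolder('tests/common');
modelSuite  = TestSuite.fromFolder('tests/throttleControl');

results = run([commonSuite, modelSuite]);
disp(table(results))   % pass/fail summary for the combined suite
```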

[Figure: testing workflow]

Critical to the testing workflow is the feedback and update of test suites.  As the model is elaborated and as new issues are discovered, the initial test suite should be updated to reflect the changes in the model.  Additional changes may need to be fed back to the requirements if the results warrant it.

Final thoughts

The overall Model-Based Design workflow consists of multiple sub-workflows.  The top-level workflow is built up from the example sub-workflows plus additional workflows as needed (such as version control and documentation workflows).

Automation of the workflows is critical for successful adoption.  At each stage in the creation of the workflows, the questions “what can be automated?” and “how can people view the results?” should be asked.

Model architecture decomposition for hardware and closed-loop testing

Developing a new product of any kind is an iterative process.  The speed of the process is increased by using a closed-loop testing methodology.  In a closed-loop system, the outputs of the unit under test (UUT) are connected to the plant model.  The plant model outputs are, in turn, connected to the inputs of the UUT.  Using the closed-loop setup, both the UUT and the plant respond in a “real world” fashion.

The logical question then is: how do we decompose the model’s interfaces and functions to support closed-loop testing while still allowing the final model to be deployed to the final target environment?

One model, many paths…

One of the primary objectives of using a Model-Based Design workflow is the ability to use a single model, or group of models, through the full development cycle.  Selecting an appropriate top-level architecture simplifies this process.

[Figure: closed-loop simulation model]

The model above is a closed-loop system with both plant and environment models.  The plant section of the overall model reacts to the feedback from the controller, while the parameters from the environment model, such as outside temperature or pressure, are independent of the feedback from the controller.

Since the model is intended to run in simulation, the hardware input and hardware output blocks will be either pass-through subsystems or signal-routing subsystems.  Control over what is in each subsystem is accomplished through the use of a variant subsystem.  In general, it is a best practice to have a single “target” variant parameter that controls the variants for all of the top-level models.
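
A minimal sketch of that convention, using Simulink.Variant objects driven by a single hypothetical TARGET variable (the name and encoding are an assumed convention):

```matlab
% Sketch: a single 'target' parameter drives all variant subsystems.
TARGET = 0;   % 0 = desktop simulation, 1 = target hardware

% Variant controls referenced by the hardware I/O (and plant) subsystems
simVariant = Simulink.Variant('TARGET == 0');  % pass-through I/O, plant active
hwVariant  = Simulink.Variant('TARGET == 1');  % device-driver I/O
```

Setting TARGET = 1 then reconfigures every top-level model for deployment in one step.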

This decomposition, using variants, allows for the control algorithms to be reused throughout the development process.

The hardware model

[Figure: controller model configured with hardware I/O]

In the second example, the model is configured for deployment to the target hardware.  Blocks inside the “Hardware Input” and “Hardware Output” models would include links to the device drivers on the board.  Again, model variants could be used to reconfigure the hardware input and output subsystems.

As a side note, it is possible to reuse the model from “One model, many paths…”  In that case, the plant and environment models would be pass-throughs selected through the variant parameter.

Scaling

Those with sharp eyes will notice that the models are called “hardware input and scaling.”  The scaling component converts the digital signals into engineering units.  At a high level, the decomposition looks like the following image.  On the right (blue), “raw” hardware inputs are read from the sensors.  The data is then passed into the green scaling subsystems.

[Figure: high-level hardware input and scaling decomposition]

Looking into the analog scaling subsystem, we see a combination of simple linear scaling (output = m * input + b) and a Stateflow chart which arbitrates between the redundant signals.

[Figure: analog scaling subsystem]

Note that in this case a design decision was made to place sensor fault handling in the hardware systems.  This was done since the arbitration between the three throttle sensors was being combined into a single throttle position output locally.
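
As an illustrative sketch in plain MATLAB (the model itself uses a Stateflow chart, and the calibration values here are hypothetical), the scaling and arbitration stage might look like this:

```matlab
function throttlePos = scaleAndArbitrate(rawCounts)
% Sketch: convert three redundant throttle sensor readings to
% engineering units, then arbitrate to a single output.
% Calibration gains/offsets are hypothetical values.
m = [0.022, 0.021, 0.022];            % counts -> percent throttle
b = [-1.1,  -0.9,  -1.0];

scaled = m .* double(rawCounts) + b;  % output = m * input + b

% Median vote tolerates a single out-of-range or failed sensor
throttlePos = median(scaled);
end
```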

Final thoughts

There are multiple approaches to defining the top-level architecture; the key is to define the architecture early in the development process, to enable integration testing early in the development cycle.

Collecting feedback…

Please forgive the early post…

When developing a control system, feedback is critical; when creating a company-wide software process, feedback (from your employees) is even more important.  What is the best way to gather that information, and what information should you be collecting?


What did your bug reports tell you?

Bug tracking systems serve as the “first pass” information source.  When developing the software process, a category of “workflow issues” should be included in the tracking software.  These workflow bugs will show problems related to:

  • Poor documentation: The primary way users learn about the Model-Based Design workflow is through the documentation.
  • Architecture interfaces: Poor interfaces, either for model or data integration, will emerge as new design patterns are explored by new groups.  The process adoption team must determine whether the interface should be extended or a new interface defined for the group-specific requirements.
  • Test failures:
    • Modeling guidelines: Failures in modeling guidelines will show where users have difficulty in conforming to modeling standards.
    • Regression test failures: These can indicate an improperly defined regression test system.  During the initial development of the test environment, it is common for there to be errors in the system.


Direct feedback / viewing

At the one-, two-, and six-month marks, groups new to the process should be brought in for a formal process review meeting.  During the meeting, the following activities should take place:

  • Design reviews:  The models, tests, and data management files should be reviewed to ensure that best practices are followed.
  • Pain points: Request feedback from the teams to capture existing pain points.

Final thoughts

Collecting feedback from new teams is critical to understanding where processes can be improved.  Development is, as always, an iterative process requiring input from teams outside the initial “core” team.

Integration with existing software

In 90% of cases, Model-Based Design software is integrated with an existing software base.  The primary question is “who integrates into whom?”  The answer is either that the MBD code integrates into the existing code or the existing code integrates into the MBD code.(1)

The ability to integrate is dependent on the defined interfaces.  There are three interfaces of interest:

  • Schedule: The calling method (e.g. timing) must be defined.
  • Function: The function signature must be well defined.
  • Data: Encapsulation of data simplifies the integration.

Assuming that the interfaces are well defined, either the MBD environment or the existing software can easily incorporate the other’s software entities.

A into B or B into A?

The question of what integrates into what can be easy or difficult.  When one environment, say the existing code, consists of utility functions, then those functions should be integrated into the MBD environment.  If both environments consist of a large body of functions and modules, then the question becomes more difficult.


The first option is to have both code bodies sit “side by side.”  In this case, the interfaces are defined at the top level of the code bases, and they communicate only through this interface.  This is a “clean” approach, but it can create a bottleneck for the data.

The second option is to decompose one of the existing code bases and integrate it into the other environment.  This option takes more upfront work, but it is more flexible and robust in the long run.

Integration methods within Simulink

Simulink provides multiple methods for integrating C and C++ code into MBD models.  These methods assume that the schedule, function, and data interfaces are well defined, as described above.
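
One such method is the Legacy Code Tool, which wraps an existing C function in an S-function.  A minimal sketch, assuming a hypothetical myFilter function with a well-defined signature:

```matlab
% Sketch: wrap an existing C function as an S-function block
% (function and file names are hypothetical).
def = legacy_code('initialize');
def.SFunctionName = 'sfun_myFilter';
def.OutputFcnSpec = 'double y1 = myFilter(double u1)';  % function interface
def.HeaderFiles   = {'myFilter.h'};
def.SourceFiles   = {'myFilter.c'};

legacy_code('sfcn_cmex_generate', def);  % generate the S-function source
legacy_code('compile', def);             % build the MEX-file
legacy_code('slblock_generate', def);    % create a block to drop into a model
```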

Final thoughts

This post, by its nature, is shorter than most.  Information on what “well defined” and “encapsulated data” mean is covered in other blog posts.

Footnotes

(1) There is, of course, the third case: the “Klein bottle integration.”


Creating documentation!

The title above is one of the few times you will see an exclamation point after the word “documentation.”  However, you should see it more often.  Document generation, as part of an overall Model-Based Design workflow, enables developers to improve their models more quickly and more efficiently.  For the most part, the required documentation can be automatically generated as part of the Model-Based Design workflow.
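
As a sketch of that automation using the Simulink Report Generator API (the model and report names are hypothetical):

```matlab
% Sketch: auto-generate a design document for a model
% (requires Simulink Report Generator; names are hypothetical).
import mlreportgen.report.TitlePage
import mlreportgen.report.TableOfContents
import mlreportgen.report.Chapter
import slreportgen.report.Report
import slreportgen.report.Diagram

load_system('throttleControl');
rpt = Report('throttleControl_design', 'pdf');
add(rpt, TitlePage('Title', 'Throttle Control Design Description'));
add(rpt, TableOfContents);

ch = Chapter('Model Diagrams');
add(ch, Diagram('throttleControl'));  % snapshot of the block diagram
add(rpt, ch);

close(rpt);    % write the document to disk
rptview(rpt);  % open it for review
```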

Types of documentation

Test and verification activity reports

There are two primary types of test and verification reports; instance and trending.

  • Instance report:  This report provides information on the results of a given “instance” of one or more tests.
  • Trending report: Trending reports collect high-level statistics, such as the total number of tests passed/failed.

[Figure: coverage and Polyspace report examples]

Traceability reports

Traceability reports provide information on the coverage of the requirements.  These reports should show the following information:

  • Percentage of requirements with links to models
  • Requirements with tests
  • Requirement test status

Unlike test and verification reports, traceability reports are always trending reports.

“Process” and “documentation” reports

The final type of report tracks adherence to a defined process; this type of documentation is most frequently seen in safety-critical systems.  A process document tracks how models move through the development process.  It is used to verify that all defined stages in the development cycle are completed.

[Figure: process tracking report]

Final thoughts

Documentation serves a vital purpose in the development of software.  It allows developers to understand the state of their product (tracking documents).  For companies following safety-critical processes, both the traceability and process documents are artifacts that are used in support of the development cycle.
