Engineering Process: Fault Lines (P2)

Having ended the last post on a cliffhanger, I’m going to reward your return with answers to the two questions that no doubt plagued your thoughts: “Why Fault Lines?” and “What changed in the problem statement?” The answer to the first is that I am inserting a “fault” on a Simulink “line.”(1)

Changes to the problem statement

At the end of the last post, I had the following problem statement:

For a system of models, create a method for simulating dropped
signals for models not directly connected to the root level I/O

As I started working through possible solutions, I hit on a further problem: how do we keep this method from impacting the code generation process for the full system?

The prototype solution

The normal method for creating tests that do not change a model’s released behavior is to create a test harness that “wraps” the unit under test. In this instance, however, our objective is to work with models inside the system, so that approach will not work on its own. Fortunately, Simulink provides utilities that make an in-place approach possible.
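
For reference, the conventional wrapping approach usually looks something like the sketch below, assuming Simulink Test is available; the model and subsystem names are placeholders.

    % Sketch of the conventional approach: a Simulink Test harness that
    % wraps the unit under test without modifying the released model.
    % 'mySystem/myUnit' and the harness name are illustrative placeholders.
    sltest.harness.create('mySystem/myUnit', 'Name', 'faultLine_harness');
    sltest.harness.open('mySystem/myUnit', 'faultLine_harness');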

Take control!

The Simulink “Environment Controller” block is the first key: it lets us route between a default path (i.e., what is deployed) and a fault path (i.e., what is simulated). When code is generated, the blocks in the “faultStuff” subsystem are optimized out.
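
As a rough sketch of how that routing might be assembled programmatically (the model, subsystem, and block names such as mySystem and faultStuff are placeholders, and the Sim/Coder port order should be checked against the block’s labels):

    % Minimal sketch: route a signal through an Environment Controller so
    % the fault path only exists in simulation. Names are placeholders.
    mdl = 'mySystem';
    new_system(mdl); open_system(mdl);
    sub = [mdl '/faultStuff'];

    add_block('built-in/Subsystem', sub);
    add_block('built-in/Inport',  [sub '/modelPath']);   % deployed (default) signal
    add_block('built-in/Outport', [sub '/out']);
    add_block('simulink/Signal Routing/Environment Controller', [sub '/envCtrl']);
    % Stand-in for the fault-selection logic described later in the post.
    add_block('simulink/Sources/Constant', [sub '/faultPath'], 'Value', '0');

    % Assumed port order: input 1 = Sim (simulation only), input 2 = Coder.
    % The Sim branch is optimized out of generated code.
    add_line(sub, 'faultPath/1', 'envCtrl/1');   % fault path, simulation only
    add_line(sub, 'modelPath/1', 'envCtrl/2');   % default path, deployed
    add_line(sub, 'envCtrl/1',   'out/1');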

Command from the top?

Having solved our first (new) problem,(2) how do we get the data from the driving test harness to the “faultStuff” subsystem? The solution, in this case, was to use global data.

While the use of global data is not recommended as a general practice, in this case it was considered an acceptable solution for three reasons:

  1. The global data would only exist in the simulation version of the model
  2. Creation of the global data would be automated and not available to the developer
  3. Isolation of the global data could be validated as part of the fault-line tool (a quick check is sketched after this list)
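
As a hedged illustration of what that isolation check might look like (the faultLine naming pattern and the faultStuff subsystem name are assumptions carried over from this prototype):

    % Sketch: confirm that faultLine* data store accesses appear only inside
    % "faultStuff" subsystems. Naming conventions here are assumptions.
    mdl = 'mySystem';   % placeholder model name
    dsBlocks = [find_system(mdl, 'BlockType', 'DataStoreRead'); ...
                find_system(mdl, 'BlockType', 'DataStoreWrite')];

    for k = 1:numel(dsBlocks)
        dsName = get_param(dsBlocks{k}, 'DataStoreName');
        if startsWith(dsName, 'faultLine') && ...
                ~contains(get_param(dsBlocks{k}, 'Parent'), 'faultStuff')
            warning('faultLine data store "%s" used outside faultStuff: %s', ...
                    dsName, dsBlocks{k});
        end
    end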

Two new tasks and a way forward

In developing my rationale in support of global data, I created two new tasks for myself (reasons 2 & 3).(3) However, for the prototype I just needed to create the example version of the faultLine.

There are three parts to this solution:

  • The fault subsystem: this subsystem consists of two data store read blocks and one data store write block. Based on the value of “faultLineActive_<NAME>”, either the model path data or the fault value (“faultLineValue_<NAME>”) is passed on to the simulation (a wiring sketch follows this list).
  • The data definition: to use the global data, the Data Store Variables must be declared as part of a data dictionary, and the full definition of the data (e.g., type and size) must be provided (see the dictionary sketch below).
  • Excitation: the final part is the “driver” of the test; in this example, a simple test sequence that turns the fault on and off at a set time (a minimal stand-in is sketched below).
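
First, a rough sketch of the fault subsystem for one signal. The exact wiring is simplified here: a Switch block stands in for the selection logic, and the <NAME> suffix (MySignal), block names, and model name are placeholders.

    % Sketch: fault-injection selection for one signal.
    % "mySystem", "MySignal", and block names are illustrative placeholders.
    mdl = 'mySystem';
    sub = [mdl '/faultInject_MySignal'];

    add_block('built-in/Subsystem', sub);
    add_block('built-in/Inport',  [sub '/modelPath']);
    add_block('built-in/Outport', [sub '/out']);

    add_block('simulink/Signal Routing/Data Store Read', [sub '/readActive'], ...
              'DataStoreName', 'faultLineActive_MySignal');
    add_block('simulink/Signal Routing/Data Store Read', [sub '/readValue'], ...
              'DataStoreName', 'faultLineValue_MySignal');
    add_block('simulink/Signal Routing/Switch', [sub '/select'], ...
              'Criteria', 'u2 ~= 0');

    % Pass the fault value when faultLineActive_MySignal is nonzero,
    % otherwise pass the original model path data.
    add_line(sub, 'readValue/1',  'select/1');   % fault value
    add_line(sub, 'readActive/1', 'select/2');   % control: fault active flag
    add_line(sub, 'modelPath/1',  'select/3');   % default: model path data
    add_line(sub, 'select/1',     'out/1');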
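
Second, a sketch of the data definition. The dictionary name, entry names, and the double/scalar choice are assumptions; the type and size must match the signal being faulted.

    % Sketch: declare the global data stores in a data dictionary.
    dd  = Simulink.data.dictionary.create('FaultLineData.sldd');
    sec = getSection(dd, 'Design Data');

    faultActive = Simulink.Signal;
    faultActive.DataType     = 'boolean';
    faultActive.Dimensions   = 1;
    faultActive.InitialValue = 'false';

    faultValue = Simulink.Signal;
    faultValue.DataType     = 'double';   % must match the faulted signal's type
    faultValue.Dimensions   = 1;          % ...and its size
    faultValue.InitialValue = '0';

    addEntry(sec, 'faultLineActive_MySignal', faultActive);
    addEntry(sec, 'faultLineValue_MySignal',  faultValue);
    saveChanges(dd);

    % Link the dictionary so the Data Store Read/Write blocks can resolve.
    set_param('mySystem', 'DataDictionary', 'FaultLineData.sldd');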
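
Third, a minimal stand-in for the excitation. The prototype uses a test sequence; as a hedged sketch, a Step block plus Data Store Write blocks in the driving harness can raise the fault at a set time (times, values, and names are illustrative).

    % Sketch: drive the fault from the test harness. A Step block raises
    % faultLineActive_MySignal at t = 5 s (a second Step or a Pulse Generator
    % could clear it again); a Constant supplies the fault value.
    harness = 'mySystem';   % placeholder for the driving harness/model
    add_block('simulink/Sources/Step', [harness '/faultTrigger'], ...
              'Time', '5', 'Before', '0', 'After', '1');
    add_block('simulink/Signal Attributes/Data Type Conversion', ...
              [harness '/toBool'], 'OutDataTypeStr', 'boolean');
    add_block('simulink/Signal Routing/Data Store Write', [harness '/writeActive'], ...
              'DataStoreName', 'faultLineActive_MySignal');

    add_block('simulink/Sources/Constant', [harness '/faultVal'], 'Value', '0');
    add_block('simulink/Signal Routing/Data Store Write', [harness '/writeValue'], ...
              'DataStoreName', 'faultLineValue_MySignal');

    add_line(harness, 'faultTrigger/1', 'toBool/1');
    add_line(harness, 'toBool/1',       'writeActive/1');
    add_line(harness, 'faultVal/1',     'writeValue/1');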

From prototype to release

Moving from a prototype to a released version of the tool raises a new set of questions.

  • How much of the tool should be automated?
  • What is the end user responsible for implementing?
  • How do we make this scalable?

As with the last post, I will leave you on a cliffhanger… What are the design decisions that take this from prototype to reality?

Footnotes

  1. Perhaps not as earthshaking as you hoped, but well, I do my best.
  2. The process of creating a final released product is often one of iterative problem detection and solution, with each subsequent problem being smaller than the previous one.
  3. And these new tasks fit in with the “iterative problem detection” description of footnote 2.
