User-friendly testing environments: Analysis and testing

Within a software development organization, whether for embedded code or a desktop application, there are distinct roles: the controls engineer, the system architect, and the quality engineer.  Depending on the size of the development team, some of these roles may be filled by a single person.

Analysis versus testing

During the development phase of a project, the controls engineer should perform analysis tasks on the model.  These analysis tasks enable the controls engineer to determine whether the algorithm they are developing is functionally correct and in compliance with the requirements.

Analysis tasks are commonly performed in an informal fashion; engineers simulate a model and then view graphs of the outputs to determine whether they have correctly implemented the algorithm.

The differentiating word in this description is informal.  When comparing analysis with testing we see that testing (either verification or validation) requires a formalized and “locked down” framework.  How then can the informal analysis be used during the formal testing?

Transitioning from analysis to testing

Ideally, the transition from informal analysis to formal testing would flow seamlessly.  However, it is often the case that the work done in the analysis phase is thrown away in the transition to the testing phase.  This is understandable in a non-MBD environment, but with the single-truth approach of MBD, the analysis results should not be thrown away.  This is where the idea of “golden data” comes into use.  Golden data is a set of data, both inputs and outputs, that an experienced engineer verifies as meeting the requirements of the algorithm.

Enabling the use of golden data to create test cases

The easiest way to enable the use of golden data is to provide controls engineers with a simple interface in which they can provide the analysis data set and the information that transforms it into a testing data set.

Analysis data is transformed into test data by providing a method for “locking down” the results.  To lock down the data, the controls engineer needs to provide the test engineer with information on what is expected from the analysis data.  This information could include the following types of golden data tests (a code sketch follows the list).

  • Strict:  The output data from testing must match the golden output data exactly.  This is normally done for Boolean or integer outputs.
  • Tolerance:  The output data from testing must match the golden output data within some bounded tolerance.  Tolerances can be absolute or percentage-based.  Note that special care needs to be taken with values around zero for percentage-based tolerances.
  • Temporal: The output data from testing must match the golden output data within some time duration.  These tests can also include tolerance and strict conditions.
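As a sketch, a golden-data check of this kind might be expressed as the following MATLAB function (the function name, field names, and the absolute-tolerance approach are illustrative, not an established API):

```matlab
function passed = checkGoldenData(testOut, goldenOut, spec)
% CHECKGOLDENDATA  Compare test output against golden output data.
%   spec.type     - 'strict', 'tolerance', or 'temporal'
%   spec.absTol   - absolute tolerance (tolerance/temporal tests);
%                   a percentage tolerance would need care near zero
%   spec.maxDelay - allowed shift, in samples (temporal tests only)
switch spec.type
    case 'strict'      % Boolean / integer outputs: exact match required
        passed = isequal(testOut, goldenOut);
    case 'tolerance'   % match within a bounded absolute tolerance
        passed = all(abs(testOut(:) - goldenOut(:)) <= spec.absTol);
    case 'temporal'    % match within a tolerance after a bounded time shift
        passed = false;
        for shift = 0:spec.maxDelay
            delta = testOut(1+shift:end) - goldenOut(1:end-shift);
            if all(abs(delta) <= spec.absTol)
                passed = true;
                break
            end
        end
end
end
```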

In addition to the type of golden data test to run, the controls engineer should include information on which requirements the test maps onto.

Formal tests in support of development

In the same way that golden data can support testing, formal tests can support the controls engineers by informing them of the constraints that the requirements place on their design.  This can only be achieved if the tests are easy for the controls engineers to run.

What is “user-friendly?”

User-friendly interfaces for testing are defined by the following characteristics:

  1. Data is accepted in “natural” format:  Any formatting or interpolation of the data is performed by the testing environment.
  2. Test results are presented in “human readable” format:  The results from the tests should be provided both in a summary format (pass/fail) and with detailed data, such as graphs and tabular data.
  3. Selection and execution of tests should be simple: Tests should be launchable from a user interface that provides a list of the valid tests and enables the running of tests in either single or batch modes.
  4. Test files should be automatically associated:  The management of test data (inputs and results) should be handled by the test manager.


Final thoughts

This blog post has described how information should be shared and how tests should be run.  In an upcoming post, I will cover the basics of modular test design.

If / elseif / else: Why didn’t you ask me that in the first place?

Conditional logic, such as if/else, switch/case constructs, truth tables, and state machines, is common to most programming languages.  There are two aspects of conditional logic that I want to address in today’s post: independence, and efficiency versus clarity.

Independence

Let’s compare two simple examples: in the first we have a single variable “A”; in the second we have three variables “A”, “B”, and “C”.

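(The original code screenshot is not reproduced here.  Based on the description below and on footnote (3), a minimal reconstruction of the two cases might look like this; the call…Function names are illustrative.)

```matlab
% Case 1: a single variable -- the two conditions are mutually exclusive
if A == 1
    callOnesFunction();
elseif A == 2
    callTwosFunction();
end

% Case 2: three variables -- the conditions may overlap
if A
    callAsFunction();
elseif B
    callBsFunction();   % only reached when A is false
elseif C
    callCsFunction();   % only reached when A and B are both false
end
```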

If we consider the first case, the order of the comparisons (A==1) and (A==2) can be changed with no impact on the functional behavior of the code.  This is because the two conditions are mutually exclusive(1).  However, in the second case, without formal analysis, we cannot say whether the order affects the functional behavior.  That is to say, if more than one of A, B, and C can be true at the same time, then the order of evaluation impacts the resulting functional behavior(2).

Frequently, people are not aware of this possibility and as a result have difficulty in evaluating the functional behavior of their code(3).

Efficiency versus clarity

For most people, the standard gear shift in cars is well known, with the order of states PRNDL (Park, Reverse, Neutral, Drive, Low) as a standard.  Therefore you commonly see conditional logic (or state logic) that represents the PRNDL like this…

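(The original figure is not reproduced here; a sketch of the conventional ordering it likely showed, using hypothetical handler names:)

```matlab
% Conventional ordering: comparisons follow the PRNDL lever layout
if strcmp(gear, 'Park')
    handlePark();
elseif strcmp(gear, 'Reverse')
    handleReverse();
elseif strcmp(gear, 'Neutral')
    handleNeutral();
elseif strcmp(gear, 'Drive')
    handleDrive();
else                    % 'Low'
    handleLow();
end
```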

However, from an efficiency point of view, this ordering is poor.  The majority of the time, the transitions you will see are from park to reverse, park to drive, drive to park, or reverse to park.  This suggests the following organization…

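(Again the figure is missing; one possible frequency-based reordering, using the same hypothetical handlers:)

```matlab
% Frequency-based ordering: the most common end states (park, drive,
% reverse) are tested first, so typical transitions need fewer comparisons
if strcmp(gear, 'Park')
    handlePark();
elseif strcmp(gear, 'Drive')
    handleDrive();
elseif strcmp(gear, 'Reverse')
    handleReverse();
elseif strcmp(gear, 'Neutral')
    handleNeutral();
else                    % 'Low'
    handleLow();
end
```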

Why is this more efficient?  Simply put, it reduces the average number of comparison operations required to reach the desired end state.

Compound conditions

The previous example was fairly straightforward; there is a single variable with independent states.  It is more common to find compound conditions defining the if/then/else logic.  Let’s look at our friends A, B, and C again.

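(The figure is not reproduced; a classic compound if statement over A, B, and C, in the spirit of what it showed; the called functions are hypothetical:)

```matlab
% Compound conditions: each branch tests a full combination on one line
if A && B && C
    callAllTrueFunction();
elseif A && B && ~C
    callABFunction();
elseif A && ~B && C
    callACFunction();
else
    callDefaultFunction();
end
```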

This first example is a classic compound if statement.  In this example, the logic is fairly straightforward.
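(The second figure showed the same logic as a deeply indented tree of single binary tests; a reconstruction:)

```matlab
% Nested binary tests: one condition per level, so the indentation
% mirrors the decision tree (logically equivalent to the compound form)
if A
    if B
        if C
            callAllTrueFunction();
        else
            callABFunction();
        end
    else
        if C
            callACFunction();
        else
            callDefaultFunction();
        end
    end
else
    callDefaultFunction();
end
```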

So which of these is more readable?  This is a difficult question to answer.  In this case, the indented format of the code makes the binary nature of the if/else-if/else conditions clear.  However, if I introduced more complex evaluations, with non-binary operations, then perhaps a hybrid of the two formats would be preferable.

Rules of thumb

  1. Keep the number of comparisons on a single line to 3 or fewer:  Exceeding 3 comparisons per line makes validation of the coverage of the case difficult.
  2. Keep the depth of if/if/if trees to 3 or fewer:  Traceability back to the top of the tree decreases at depths greater than 3.
  3. Place the binary comparisons at the top level of your if/if/if trees:   Placing the binary operations at the top of the tree reduces the overall number of if/else branches required.

Final thoughts

Formatting and density of information are frequently “hot topics” of debate.  The rules of thumb listed above should not be considered a final verdict but rather a starting point for discussion.  As always I welcome all comments.

Footnotes

(1) Note this may not be true if you move to a quantum computer, where a single bit could maintain multiple states.  Until then, this statement holds true.

(2) The second screenshot has a common error: lack of documentation.  The code should be commented to state whether the branches can be reordered.

(3) The common scenario has a person perplexed as to why “callBsFunction” is not called when B is true, not realizing that it is gated by the preceding “A is true” if statement.

Testing is software

This blog is a re-posting of early work from LinkedIn; I will be re-posting this week while I am at the Software Design for Medical Devices Europe conference in Munich.

Enabling the adoption of Model-Based Design

Test early, test often, test against requirements, and test using formal methods. This is the mantra that developers (hopefully) hear. But what does it mean in practice? How do you produce effective and maintainable tests? I will argue that the first step is to think of test development in the same light as software development. Good testing infrastructure has requirements, is built from reusable components, and is written in a clear fashion to facilitate extension and debugging efforts.

Why should you care?

In my 20+ years working in software, two-thirds of it in a consultative role, the most common problem I am called in to work on is mushroom code(1). Mushroom code is the end result of unstructured development: new algorithms are added on top of existing algorithms with little understanding of what they are feeding on. The result is an organic mess that is hard to sort out. This is prevalent in algorithmic development and even more common in testing, which is often done “late and under the gun.”

Testing components

A fully developed testing infrastructure consists of five components: a manager, execution methods, harnesses, reporting methods, and evaluation methods.

1.) Evaluation methods: use the data created through the execution of the test to determine the pass / fail / percentage-complete status of the test.

Example a.) For an MCDC test, the evaluation would determine the percentage of conditions exercised.

Example b.) A regression test could compare the output values between the baseline version of the code and the current release.
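A minimal sketch of such a regression evaluation method, assuming the outputs are simple numeric arrays (the names and tolerance handling are illustrative):

```matlab
function result = evaluateRegression(baselineOut, currentOut, tol)
% EVALUATEREGRESSION  Compare current outputs against a stored baseline.
maxError        = max(abs(currentOut(:) - baselineOut(:)));
result.passed   = maxError <= tol;   % pass / fail status
result.maxError = maxError;          % detail for the report
end
```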

2.) Reporting methods: take the data from the evaluation methods and generate both human-readable and history reports. The history reports are used to track overall trends in the software development process.(2)

3.) Harness: the harness provides a method for calling the unit under test (UUT) without modifying the UUT. Note that test harnesses facilitate black-box testing, i.e. the internal states of the unit under test are not known. However, if internal states of the UUT are outputs at the root level of the model, then white-box testing can be done using the unit under test.(3)

4.) Execution methods: how the test is run. This could be the simulation of a Simulink model, the execution of a .exe file, static testing (as with Polyspace), or the real-time execution(4) of the code.

As the name implies, there is more than one “execution method.” They should be developed as a general class that allows the same method (simulation) to be applied to multiple harnesses. Each instance of an execution method applied to a harness is considered a test case.
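As a sketch of that idea, using a MATLAB function handle in place of a full class (the harness names are hypothetical):

```matlab
% One execution method, bound to different harnesses; each binding
% is a test case
simulate  = @(harnessName) sim(harnessName);   % simulation execution method
testCase1 = struct('exec', simulate, 'harness', 'cruiseControl_Harness');
testCase2 = struct('exec', simulate, 'harness', 'fuelSystem_Harness');

simOut = testCase1.exec(testCase1.harness);    % run a single test case
```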

5.) Test manager: where all of these components come together. The test manager:

  • Holds a list of the test cases
  • Automates the loading of associated test data
  • Triggers the execution of the test
  • Triggers the evaluation of the results
  • Triggers the generation of the test report

Sadly it will not yet fetch you a cold beverage.
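A bare-bones sketch of a manager loop performing those steps, building on the test cases and evaluation method sketched above (the data-file naming and the logged output yout are assumptions):

```matlab
testCases = {testCase1, testCase2};              % the list of test cases
status    = {'FAIL', 'PASS'};
for k = 1:numel(testCases)
    tc     = testCases{k};
    data   = load([tc.harness '_data.mat']);     % load associated test data
    simOut = tc.exec(tc.harness);                % trigger execution
    res    = evaluateRegression(data.baseline, ...
                                simOut.yout, 0.01);  % evaluate results
    fprintf('%s: %s (max error %g)\n', ...       % generate a simple report
            tc.harness, status{res.passed + 1}, res.maxError);
end
```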

Notes

1.) Mushroom code and spaghetti code are similar in that they develop due to a lack of planning. Spaghetti code is characterized by convoluted calling structures; mushroom code is the accumulation of code on top of code.

2.) An interesting list of what should go into a report can be found here.

3.) Any model can be opened up to white-box testing if global data is used. However, the use of global data potentially introduces additional failure cases.

4.) Yes, this blog retreads work from 6 months ago; however, it is good to review these issues.

Four people Adopting Model-Based Design

In the past few blogs we have looked at the tasks associated with adopting MBD.  In this post I want to address how to work with the people involved in these changes.  To start, there are 4 typical types of people involved:

  1. Enthusiastic adopters:  Excited by the opportunity to use modeling techniques in their everyday activities.  They do not require motivation to adopt these new processes.
  2. Evidence-based adopters:  Strong supporters of the process once sufficient(1) information has been provided to show the benefits of adopting Model-Based Design.
  3. Schedule-based blockers: Object to the adoption due to concerns about impacts on deliverables.
  4. Security-based blockers: Object based on concerns about job security due to changes in processes and the required knowledge base.

As a consultant I have found working with type 2 and 3 people is the most rewarding for both the initial start to adoption and the long term outcomes.

Working with enthusiastic adopters

Enthusiastic adopters tend to be non-critical adopters.  As a result, when they talk to other people who are taking blocking positions they do not know how to articulate the “why” behind the use of MBD; this can lead to frustrating interactions.  At the same time, the energy they bring to the adoption process should not be discounted, as it will help to drive the project forward.

Keys to working with them

  • Help them to use their energy to motivate other people past pain points.
  • Help them to see where and why there are valid “blocking” questions.

Working with evidence based adopters

Evidence-based adopters are, to a degree, the polar opposite of the enthusiastic adopters.  They are people who ask questions about how MBD will be used within their process.  For them the exploration phase, with the associated background research, is critical.  Once they understand how the new tools and processes benefit them and their company, they can explain it to other people who have questions.

Keys to working with them

  • Provide background papers from SAE, IEEE, AIAA,…
  • Work with them to create demonstration models to evaluate performance based on their actual work
  • Review their current workflow to understand where improvements can be introduced
  • Acknowledge pain points that exist with adoption of MBD into their framework.  Look for ways, with them, to mitigate those issues.

Working with schedule based blockers

Schedule-based blockers are often the managerial equivalent of the evidence-based adopters.  The oft-quoted phrase applies: “Changing processes during an active project is like changing the tires on a moving car.”  Schedule-based blockers often have legitimate reasons for why changing processes at a given point in time is not correct.  However, as with evidence-based adopters, showing them the decreased development time that can be reached utilizing MBD methodologies can remove the blocks.

Keys to working with them

  • Discuss the long term(2) scheduling benefits of Model-Based Design
  • Show how a phased adoption approach can mitigate the scheduling impact
  • Acknowledge difficulties in transitioning during an active project and look for ways to mitigate those issues.

Working with security based blockers

There is a small percentage of employees who perceive new technologies as a threat to their job security.  While Model-Based Design can be used to reduce a workforce, it is normally used to empower groups.  When working with them it is important to explain the limitations of any tool; the best saw will not build a house on its own; it still needs a hand to guide it.

Keys to working with them

  • Talk about their domain expertise and how it maps onto the new tools(3)
  • Show them how new methodologies (automation) can remove some drudge work tasks

Final thoughts: #1

People’s motivations stem from rational points of view; understanding their point of view allows you to communicate with them and address the issues that they bring to the table.  It is only through meaningful consideration of all points of view that consensus can be reached.

Final thoughts: #2

About 5 to 6 years ago I was working in a coffee shop when a young girl came up to me; pointing at the laptop screen she said, “Is that an airplane?”

I told her, “Well, it is the model of an airplane that I am making.”

After a pause she responded “Cool…. does it fly?”

With the push of a button I showed her this demo, the NASA lifting body example.


She watched for a good 2 to 3 minutes as I tweaked parameters, making the plane rise, fall, and eventually crash, before she gushed out, “You draw pictures and you make planes fly? That is incredible!”

I never forget how incredible what I can do is.

Footnotes

(1) The line between evidence-based adopters and security-based blockers is clear when you see how they respond to information that addresses their questions.  An evidence-based adopter will move towards adoption as their issues are addressed.  Security-based blockers will continue to add new issues until you focus on the keys to working with them.

(2) Long term in this context implies 6 months to 1 year; i.e. enough time to move into a new phase in product development.

(3) There is sometimes a false perception that tools take away the need for domain knowledge.  More often than not, showing how new tools empower people to do more with their knowledge is enough to move them from blocker to backer.

Software Design for Medical Devices Europe

I am happy to say that next week, Feb 21st through Feb 23rd, I will be attending the Software Design for Medical Devices Europe conference in Munich.  I will be presenting a seminar on the adoption of Model-Based Design and supporting one of my customers in his presentation on his experience with implementing MBD at his company.

Projects of interest (II): Listening for success

What is success? How do you define it, and how do you measure it?  With software projects, it is easy to say when a project is complete, but a complete project is not always a successful project.

So here, at an abstract level, is my definition of a successful project.  The project…

  • solves the underlying (“big-picture”) requirements:  It is possible, even common, in translating the initial (or user) requirements into derived requirements, for the “big-picture” objectives to be lost.  You see this reflected in tools and products that are functionally correct but are more difficult to use or fail to provide the experience the user wants.
  • informs future work: A successful project can inform future work in two ways.  The first is direct: the work done on one project may be reused in subsequent projects.  The second is through the acquisition of new knowledge(1).
  • mentors junior people on the project: Every project is an opportunity for junior people to develop new skills and a deeper understanding of what they are working on(2).

Background

Seventeen years ago, on my first project as a very green consultant, I made the mistake of doing exactly what the customer asked me to do.  Their request was to help them automate the routing of signals around a complex multi-level model.

I did what they asked, and I learned a lot: efficient recursive programming, how to handle complex regular expressions, error handling for ill-defined systems.  The customer received a tool that did exactly what they asked for, and they used it for the next 3 years.

So how was this project completed yet not successful?  First, I didn’t step back to ask “what do they need?”  The customer thought they needed a way to route signals; in truth they needed a better model architecture.  The reason that they stopped using the tool 3 years later is that they realized this for themselves and developed a better model decomposition.

The second way in which this project failed is that it did not inform future work.  By its very nature it was a dead-end tool, keeping them trapped in an inefficient model architecture.  While I learned things, that knowledge was not applicable for my customer.

How to start succeeding?

Between the title of this post and my measures of success, the answer should be clear.  At the start of my engagement I should have talked with and listened to my clients; that would have led to the understanding that their architecture was in poor shape, and I would have understood their underlying(3) requirements.


Once you have the true objectives in mind, make sure to review them periodically to ensure that the project has not drifted away from them.  Think about how the current project can inform future work, either directly through reuse or through education.  If it is for reuse, budget the time in development to enable future reuse(4).

Footnotes

(1) There is a practical balance to be struck when learning new things on a project.  The time spent on learning new methods / tools should not slow down the progress of the project.  Ideally the knowledge would be gained organically while working on the project.

(2) Mentorship on projects is often informal; even the basic act of discussing with junior colleagues what design decisions have been made, and why, will aid in their development.

(3) I am using “underlying” and “base” requirements to refer to the “big-picture” requirements from which all others are derived.  Given that the term for these big-picture requirements varies from field to field, I hope that this will still be clear.

(4) Enabling reuse requires additional design and testing time.  A general rule of thumb is to allocate an additional 10%~15% of the development time.  I will write more about reuse in a future blog post.

Selecting the initial project

When setting the scope of the initial project it is critical to remember the objectives of the initial phase as outlined in earlier posts.  They are:

  1. Understand how artifacts from models integrate with existing artifacts
  2. Establish baseline testing activities
  3. Implement version control for new modeling artifacts
  4. Identify initial model and data architecture

The motivation behind these objectives is to determine how the models fit into your overall process.  To do this, a system of appropriate complexity needs to be selected.

Integration

Within the software community, the term “spaghetti code”(1) is used to describe software that is poorly partitioned and lacking in well-defined interfaces.  As the name implies, spaghetti code is difficult to break apart and difficult to integrate with in a “clean” fashion.

For the first project a section of code should be selected with a reasonably clean interface.  The primary thing to avoid is a section of code that is heavily dependent on global data.  For the initial system, you are almost always integrating the new component into the existing system.  You should target components with a limited number of I/O and no or limited dependence on global data.

(Note: this section has assumed that you are working on a project with existing code that must be integrated.)

Questions to ask

Will the model need to…

  1. call existing supporting functions?
  2. access global data?
  3. have context dependent execution?

The objective is to select a model that minimizes those requirements.

V&V

The verification(2) and validation(3) tasks play a key part in determining the scope of the initial project.  A good module would include:

  1. Modal systems:  State charts with event-based logic; allows for verification of context-based execution.
  2. Conditional execution: If/then/else logic and enabled/disabled subsystems; allows for testing of model coverage.
  3. Dynamic systems: Requires closed-loop simulation with stimulus; validates the response characteristics of the system.


For each of these three items the expected outcome should be fairly well known; the objective is not to find major errors, though this sometimes happens during initial projects, but to understand how you will do these activities.

Questions to ask

Will the model…

  1. produce measurable outputs?
  2. have multiple modes of operation?
  3. have dynamic responses?

The objective is to select a model that maximizes these requirements.

Version control

While one of the objectives of the initial adoption is to understand the artifacts that will be placed under version control, this objective has little impact on setting the scope of the system.  Version control will be covered in a later post.  For now you can read one of my earlier LinkedIn blog posts on version control.

Model and data architecture

The selected component should enable you to validate your selected architecture as outlined in the model architecture and data sections of this blog.

Questions to ask

Will the model need to have…

  1. parameterizable data calculations?
  2. a functional interface definition?
  3. conditional execution interfaces?

The objective is to select a model that maximizes these requirements.

Complexity

The final metric to consider is algorithmic complexity.  While Model-Based Design allows people to develop more complex algorithms, for the initial project an algorithm that is well understood should be selected.  The objective is to examine how models fit into your development workflow, not to validate the capabilities of the model(4).

Questions to ask

The algorithm should be:

  1. an existing algorithm, or an extension of an existing algorithm
  2. part of the project that will first deploy the Model-Based Design process.

The objective is to select a model that maximizes these requirements.

Footnotes

(1) I prefer the term “mushroom code” as it better explains how this code comes about.  It is generally code that has grown on top of well-written code but, due to organic decay, is now difficult to take apart.

(2) Verification is intended to check that a product meets a set of design specifications.

(3) Validation is intended to ensure that a product meets the operational needs of the user.

(4) The capabilities of the modeling environment should be assessed; however this is a separate task from developing your Model-Based Design workflow.


Initial Adoption: Leveraging existing processes

A common misconception around adopting Model-Based Design is the degree to which things are “new“.  For most software / engineering groups, adopting Model-Based Design is an exercise that consists of three main activities:

  1. Redefining the development workflow: Model-Based Design allows you to perform multiple operations earlier in the development cycle than traditional C-based development workflows.
  2. Refactoring existing processes: Model-Based Design allows automation of some development tasks and processes.
  3. Re-targeting process tools: The tools used by the existing processes will change to reflect the automation and simulation tools that Model-Based Design provides.

While Model-Based Design adds additional processes, it has been my experience that customers have kept 70% or more of their existing processes for items 1 and 2.  Further, the automation gained from MBD simplifies many of their existing processes.

Redefining the development workflow

Model-Based Design has been used in conjunction with multiple design workflows; most commonly I reference a “double V feedback” workflow.  Regardless of the development workflow that you follow, the key development concept is the movement of simulation-based design and verification into the front end of development.

Figure: Delta V’s(1)

One of the key arguments in favor of Model-Based Design is the ability to simulate models early in the development process to improve the design of the system through system analysis; or, as we said back in my automotive days, “models, not metal.”

In the same way that early implementation of functional and system-level design reduces development time, the ability to test models early in the development process both reduces development time and improves the likelihood of detecting bugs early in the process.

Refactoring existing processes

Following the same principles as redefining the development workflow, refactoring existing processes considers three things:

  1. Is the existing process still required?
  2. Can the existing process be automated?
  3. Are there new process requirements or opportunities?

Is the existing process still required

By moving to Model-Based Design, there are some processes that are no longer required due to automation within the tool.  For example, in Model-Based Design processes that include a code generation step, a manual code review process is generally not required.

Can the existing process be automated

New methods for automating processes are routinely created; even without moving to a Model-Based Design workflow, a yearly review of processes that could be automated is a good best practice.  MBD workflows tend to have high levels of automation due to their built-in integration with automation tools such as Jenkins (continuous integration) or Rational DOORS (requirements management)(2).

Are there new process requirements or opportunities

Transitioning to a Model-Based Design workflow will introduce new processes.  This happens for three primary reasons.  First, the movement to MBD may be driven by outside standards, and the standard (such as DO-178C) requires a process to be performed.  Second, the transition allows for a new process that will improve the final product, such as design-based simulation.  The final reason new processes are adopted is in support of the interface between existing and new project development tasks and objects.

Re-targeting tools used by processes

The tools used by the existing processes will change to reflect the automation and simulation tools that Model-Based Design provides.  This blog goes into more detail in the “The role of supporting tools in Model-Based Design” and “Model-Based Design Building Blocks” posts(3).


Footnotes

(1) In just a casual search I found over 50 graphical representations of the “software design V.”  As with all things, make sure there is clarity around what you are talking about.
(2) While both DOORS and Jenkins are solid tools in their categories, their selection here is not an endorsement for that role.
(3) Currently these two sections are still a work in progress.  More detail will be filled in as this blog continues developing.