Addressing challenges of team-based development

In one of my first consulting engagements, over 15 years ago, a customer said something to me about working in a group that has stuck with me to this day.

“Nobody knows my mind like I do, and even I don’t always know it.” (anon)

What he was joking about are the issues that arise when working as part of a group.


Benefits of team-based development

Before we address the challenges of team-based development, let us talk about the benefits.  Broadly speaking, there are three primary benefits (plus one bonus).

  1. Faster development time: By distributing the work over multiple people the project can be completed more quickly.
  2. Multiple areas of expertise: Additional people can bring domain-specific knowledge to a project.
  3. Error reduction: Having multiple people work on a project can reduce the chance of “developer blindness” where you do not see your own mistakes.
  4. Chance for “team” lunches: When you work as part of a group you can have group celebrations.  When you work by yourself it is just lunch.


What are the challenges?

There are three primary types of challenges for team-based development.  They are:

  1. Communication:  Both ensuring that all information required is communicated and that it is clearly expressed.
  2. Blocking: When more than one person requires direct access to a set of files for their work, or their work is dependent on another person.
  3. Standards: Every developer has different ways of solving problems; in some instances, these approaches will be in conflict.

 

Note: I am not addressing personalities with this post.

 

Mitigating these challenges

As the title of this section states, these challenges can be mitigated, but never fully eliminated.  The following recommendations will help reduce these challenges.

Challenge 1: Communication

Good communication starts with a commitment to good communication; the team needs to recognize the need for some form of formal transfer of knowledge.  Often this takes the form of a requirements document.  However, it is not enough just to have a requirements document; it needs to be used.  Use of a requirements document implies the following:

  1. Referenced: The requirements document is referenced during the creation of artifacts.
  2. Tested: Test cases are derived from the documented requirements.
  3. Traced: The use of the requirements is validated throughout the development cycle.
  4. Living: The document is updated as changes are required.

Failure to follow these steps will lead to communication breakdown.
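To make "referenced" and "traced" more concrete, here is a minimal sketch using the Simulink Requirements (slreq) API to link a requirement to the block that implements it.  The requirement set, requirement ID, and model names are illustrative, not from a real project.

```matlab
% Sketch: linking a requirement to the block that implements it, assuming
% the Simulink Requirements (slreq) API. File, ID, and model names are
% illustrative.
rs = slreq.load('design_reqs.slreqx');              % load the requirement set
req = find(rs, 'Type', 'Requirement', 'Id', 'R1');  % locate requirement R1
load_system('controller');
implBlock = 'controller/RateLimiter';               % block implementing R1
slreq.createLink(req, implBlock);                   % create the trace link
```

Once links like this exist, traceability reports can be generated against them, which is what makes the "traced" and "tested" steps above checkable rather than aspirational.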

Comic credit: https://www.smbc-comics.com/

Challenge 2: Blocking

Blocking is addressed through architectural constructs and version control methodologies.  Models can be architected to allow each person to work on individual components while still facilitating integration into a larger system-level model.  In instances where two people need to work on the same model and it cannot be subdivided, version control software can be used to create a branch for each person to work on; their changes are then merged once they have completed their work.

It is of the highest importance to validate the models’ behavior after the merge to ensure that the functionality added by each person still works in the merged model and that the baseline functionality has not changed.
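As one way to automate that post-merge check, here is a minimal sketch that re-runs the model and compares the results against a baseline run using the Simulation Data Inspector.  It assumes signal logging is enabled and that a pre-merge baseline run was recorded; the model name and run indices are illustrative.

```matlab
% Sketch: post-merge validation against a recorded baseline, assuming the
% Simulation Data Inspector holds a pre-merge baseline run. Names are
% illustrative.
load_system('controller');
sim('controller');                          % post-merge run (logged to SDI)
runIDs = Simulink.sdi.getAllRunIDs;
preMergeID  = runIDs(1);                    % baseline run, recorded pre-merge
postMergeID = runIDs(end);                  % run just produced
diffRes = Simulink.sdi.compareRuns(preMergeID, postMergeID);
for k = 1:diffRes.Count
    sigRes = getResultByIndex(diffRes, k);
    if ~sigRes.Match
        sig = Simulink.sdi.getSignal(sigRes.SignalID1);
        warning('Signal "%s" changed after the merge.', sig.Name);
    end
end
```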


Challenge 3: Standards

While standards may be more or less complete, there is no single "right" standard.  The key is complying consistently with the standard you have chosen.

A “complete” standard is often a series of standards addressing

  1. Each stage in the development:
    1. How to write requirements
    2. How to write tests
  2. The handoff between stages:
    1. What artifacts are created when you start developing a model
    2. What artifacts are created when you run a test


 

Final thoughts

In this post, I have not specifically addressed Model-Based Design.  The recommendations for mitigation can be directly linked to earlier posts I have made on topics such as modeling standards, version control, and model architecture.  Finally, with models acting as the “single source of truth” during the development cycle many of the handoffs and blocking issues of team-based development can be avoided.

 

Software Design for Medical Devices

I am happy to write that, for the second time, I will be presenting at the Software Design for Medical Devices conference in Munich, Germany, February 19th and 20th.  I will be in Munich for the balance of the week answering questions about Model-Based Design, both for the medical industry and in general.  If you are based in or around Munich, please feel free to contact me.

Mit freundlichsten Grüßen, Michael

Verification: Phased lockdown

When I release a model, it will:

  • Reach 100% requirements coverage for the model
  • Reach 90% test coverage of requirements
    • With 100% passing
  • Be in full compliance with 70 Modeling Guidelines
    • Reach 90% compliance with an additional 7
  • Achieve 95% MISRA compliance
    •  100% with exception rationale

However, if I asked anyone to reach these levels early on in the development process then I would both slow down the process and increase the frustration of the developers.


What is a phased approach to verification?

The phased approach to verification imposes increasing levels of verification compliance as the model progresses from the research phase to the final release.

The following recommendations are rough guidelines for how the verification rigor is applied at each phase.

Research phase

The research phase has the lowest level of rigor.  The model and data from this phase may or may not be reused in later phases.  The model should meet the functional requirements within a predetermined tolerance.   Modeling guidelines, requirements coverage, and other verification tasks should not be applied at this phase.

Initial phase

The initial phase produces the first model that will be developed into the released model. With this in mind, the following verification tasks should be performed:

  • Verify the interface against the specification:  The model's interface should be locked down at the start of development.  This allows the model to be integrated into the system-level environment (a sketch of such a check follows this list).
  • Comply with model architecture guidelines:  Starting the model development with a compliant architecture prevents the need to rearchitect the model later in development.
  • Create links to high-level requirements:  The high-level requirements should be established with the initial model.
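Here is a minimal sketch of that interface check, comparing a model's root-level ports against the names listed in an ICD.  The ICD here is reduced to a simple cell array; in practice it would likely be read from a spreadsheet or data dictionary.  All names are illustrative.

```matlab
% Sketch: checking a model's root-level inports against an ICD, assuming
% the ICD reduces to lists of expected port names. Names are illustrative;
% outports would be checked the same way.
model = 'controller';
load_system(model);
icdInports = {'WheelSpeed', 'BrakeCmd'};    % expected inputs per the ICD

inBlocks = find_system(model, 'SearchDepth', 1, 'BlockType', 'Inport');
inNames  = get_param(inBlocks, 'Name');

missing = setdiff(icdInports, inNames);
extra   = setdiff(inNames, icdInports);
if ~isempty(missing) || ~isempty(extra)
    error('Interface mismatch. Missing: [%s]  Unexpected: [%s]', ...
          strjoin(missing, ', '), strjoin(extra, ', '));
end
```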

Development phase

The development phase is an iterative process.  Because of this, the level of verification compliance will increase as the model is developed.  As would be expected, requirements coverage will increase as the requirements are implemented, and the verification of each requirement should directly follow its implementation.


With respect to increasing the modeling guideline and MISRA compliance, in general I recommend the following ramp (a sketch encoding it as data follows the list).

  • 50% guideline compliance/MISRA at the start of the development phase
  • 70% guideline compliance/MISRA when 50% of the requirements are implemented
  • 90% guideline compliance/MISRA when 80%  of the requirements are implemented
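One way to make the ramp enforceable is to encode it as data that a build or CI script reads.  This is only a sketch: checkCompliance is a hypothetical helper standing in for however you compute a compliance percentage (e.g., from a Model Advisor run), and the phase names are illustrative.

```matlab
% Sketch: the compliance ramp as data a CI job can consume. Phase names,
% thresholds, and the checkCompliance helper are illustrative/hypothetical.
thresholds = containers.Map( ...
    {'dev-start', 'reqs-50pct', 'reqs-80pct', 'release'}, ...
    {50, 70, 90, 95});

phase    = 'reqs-50pct';                  % current project phase
required = thresholds(phase);
actual   = checkCompliance('controller'); % hypothetical: percent compliant
if actual < required
    error('Compliance is %.0f%%; phase "%s" requires %d%%.', ...
          actual, phase, required);
end
```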

Release phase

With the release phase, I finally hit the targets I initially described.  Entering this phase from development, all of the functional requirements should be complete. The main task of the release phase is the final verification of requirements and compliance with guidelines (model and code).

Additionally, the release phase may include a "targeting" component, where the model, which was designed for a generic environment, is configured for one or more types of target hardware.  In this case, the functionality of the component should be verified for each target.

Final thoughts

Ramping up compliance with verification tasks is a standard workflow.  The suggested levels of compliance during the development phase should be adjusted based on a number of factors including

  • Reuse of components:  When components are reused the compliance should be higher from the start of development.
  • Compliance requirements: If you are following a safety critical workflow, such as DO-178C or IEC-61508, then the compliance should be higher from the start of development.
  • Group size:  The more a model is shared among multiple people, the sooner the model should be brought into compliance with modeling guidelines.  This facilitates understanding of the model under development.

 

Model-Based Design: The handoff between companies…

Outsourcing design is common in most industries, from simple sub-components to full systems.  In both cases, the process by which the handoff between companies happens is critical.

It starts before you start (pre-project work)

Before the first model is exchanged, before the first requirement is written, the two companies need to agree on the following items:

  1. What materials are delivered
    1. Models, protected models?
    2. Data dictionary?
    3. Test models, test cases?
    4. Generated code?
    5. Requirements?
    6. Interface control document (ICD)?
  2. How are the materials delivered?
    1. Using Simulink Projects?
    2. Binaries?
  3. What level of model customization is enabled?
    1. Parameter tuning?
    2. Variant tuning?
  4. How will the requirements be validated?
    1. Requirements tracking through traceability matrix?
    2. Requirements based testing?

Regardless of what is specified, the required information needs to be clearly defined up front.  For model deliveries in particular, protected models are a common option; a sketch of creating one follows.
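As a concrete example of the "protected models" option from item 1, here is a minimal sketch of protecting a model for delivery.  The model name is illustrative, and the exact options available depend on your release and licensing.

```matlab
% Sketch: packaging a model as a protected model (.slxp) so the providing
% company's IP stays hidden while the requesting company can still simulate
% it. The model name is illustrative.
load_system('supplier_component');
Simulink.ModelReference.protect('supplier_component', ...
    'Mode',   'Accelerator', ...  % allow simulation of the protected model
    'Report', true);              % generate a report describing the contents
% The resulting supplier_component.slxp is delivered alongside the ICD.
```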

Stages of development

An additional factor to consider is the stage of the development.  The materials that are handed over during the initial versus final stages of development will be different.  Normally during the early stages of development, the level of compliance will be lower.  As the development matures the rigor for compliance increases.


Recommendations for delivery

The following are recommendations for three stages of a "Company-to-Company" project.  The stages I will look at are "initial specification," "functional review," and "final delivery."


Initial specification

In the initial specification phase, the Requesting Company (RC) is providing requirements to the Providing Company (PC).  At a minimum, they should provide the following information.

  1. Functional requirement document: A formal document describing how the software should perform
  2. Required level of testing for acceptance: A description of the level, or class, of testing to be performed.  This may include some existing acceptance tests.
  3. Interface control document (ICD):  Description of the software interface, including I/O, rates and tolerances
  4. "Real-world data":  Any specification sheets for the unit under design and/or any existing performance data from the unit.


Functional review

In most cases, the functional review stage is an iterative process with the Providing Company providing updates to the Requesting Company.


At a minimum, the Providing Company should provide the following artifacts:

  1. Simulatable model:  The model could be delivered in a protected mode, keeping the PC's intellectual property protected.
  2. Requirements traceability report:  A report on the current status of requirements implementation.  Note: at this stage, not all of the requirements may have been implemented.
  3. Verification results:  Related to the requirements report, the verification results demonstrate compliance with the requirements.  Note: at this stage, not all of the tests may have been implemented or passing.
  4. Change requests: During the functional reviews, the PC should provide change and clarification requests.

Depending on what was decided in the pre-work phase the providing company may provide the test environment and test cases.

In response to these artifacts, the requesting company should provide, at a minimum, the following documents.

  1. Change request response: The requesting company should respond with approval or clarifying information.
  2. Change request: The requesting company should also formally provide any change requests of its own to the providing company.

Final Deliverable

With the final deliverables, the providing company should provide the same materials as in functional reviews.  The difference is that in the final review requirements traceability and verification results should be completed.  Any requirements that could not be met should have been addressed in the final functional review change request.


Final thoughts

The process of handing off artifacts between companies in the Model-Based Design environment is nearly identical to that of the traditional text-based environment.  The primary difference is that MBD enables simulation of the model, allowing the requesting company to easily verify the requirements.

Likewise, the specification of which artifacts, and in what format, will be exchanged in the pre-work phase is critical to the success of work between companies.

 

 

 

Things I learned in 2017: MBD edition

With 2017 behind us and 97 blog posts under my belt, it seemed like a good time for some reflection on the state of Model-Based Design.

  1. New industries adopt, old industries expand
    In the past 3 years, the medical device industry has feverishly embraced the core aspects of Model-Based Design.  At the same time, existing strong users, such as aerospace and automotive, have expanded the tool suite they use to include things such as big data and image processing.
  2. Growth of continuous integration 
    The use of CI systems for model and system level validation continues to grow.  This is aided by both the growing ease of use for CI systems and…
  3. Improvements in testing infrastructure
    Testing infrastructure, from test managers to test reports, continues to mature, making it easier for end users to develop reusable, scalable testing environments.  Further, it lowers the bar for developing tests, allowing software and systems engineers to both create and run tests.

    Simulink Test Manager
  4. Puppies!
    Seriously, of all the desktop backgrounds I have used during presentations this one, of my wife and a photo-bombing dog, was the most liked.

    My wife at Epping Forest (not our dog)
  5. Training
    2017 was a lean year for many customers, and in an effort to save on costs they cut back on training.  As a result, the start of many of my engagements involved basic training.  Fortunately, this is a trend that is already changing.
  6. Things get real (time)
    2017 featured a large increase in the number of Hardware In the Loop (HIL) projects that I worked on.  This came about due to three things:

    • Improvements in the Simulink Real-Time API
    • Lower cost of Hardware In the Loop systems
    • Improved testing support for Hardware in the Loop systems (see item 3)

       

Final thoughts

2017 was a great year for Model-Based Design projects.  I expect an increase in both the number and depth of these projects in 2018.  I look forward to continuing this blog and the eventual conversion to the book.

Automation do’s and don’ts

As an engineer, automation is part of my day-to-day work: the 'start.m' function that runs when MATLAB starts, the Excel formulas that smooth pivot tables, or the Git macro that allows me to merge two branches.  These are automation functions that other people have already created.  Some of these automations are so "common" that I forget they are, in fact, automation.  What, then, leads me to automate a task?

The 6 questions

Before I automate a process I ask myself the following questions

  1. How often do I perform the task?
    Once a day? Once a week? Once a quarter?
  2. How long does the task take?
    How long does the task take, both for myself and for the system running the process?
  3. Do others perform this task?
    Do they follow the same process?  Does variance in the process cause problems?  Do you have a way to push the automation out to others?
  4. How many decision points are there in the process?
    Decision points are a measure of the complexity of the process.
  5. Is the process static?
    Is the process still evolving?  If so how often does it change?
  6. Is it already automated?
    Oddly enough if you found it worthwhile to automate someone else may have already done the work.

When to and not to automate

If the process is already automated, or if the process is changing frequently, it is obvious that work should not be put into the automation.  In general, I look at a threshold in terms of person-hours per week, normalized by the number of people working on the project, compared to the effort to implement the automation.

If the person-hours saved per week are low (under 1) or the time to recoup the automation effort is long (over 6 months), then I do not consider the automation a worthwhile investment.
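Here is one reading of that rule as a quick back-of-the-envelope calculation; all of the input estimates are illustrative.

```matlab
% Sketch: break-even check for an automation candidate. All inputs are
% illustrative estimates.
taskMinutes  = 15;   % time the manual task takes per occurrence
timesPerWeek = 4;    % how often each person performs the task
nPeople      = 3;    % people who would use the automation
buildHours   = 40;   % estimated effort to implement the automation

savedHoursPerWeek = (taskMinutes / 60) * timesPerWeek * nPeople;
paybackWeeks      = buildHours / savedHoursPerWeek;

if savedHoursPerWeek < 1 || paybackWeeks > 26   % roughly 6 months
    fprintf('Skip it: %.1f h/week saved, %.0f weeks to pay back.\n', ...
            savedHoursPerWeek, paybackWeeks);
else
    fprintf('Automate it: pays back in %.0f weeks.\n', paybackWeeks);
end
```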

Final thoughts

Automation, when done right, saves us time and allows us to perform other, more interesting tasks.  However, it is easy to get stuck in "automation for automation's sake" development.  I leave you with two humorists' takes on automation.

 

XKCD: Automation

 

 

SMBC: Predictions

 

Model-Based Design and high integrity workflows

First off, what qualifies as "high integrity software"?  The base "reference" document that I use is NIST Special Publication 500-204: High Integrity Software Standards and Guidelines.


Originally written to support the nuclear power industry, it provides valuable insight into what it means to be "safety critical."


In short, the software must function dependably (in a measurable and definable fashion) for critical functions; critical functions are defined as those whose failure modes could cause serious injury, loss of life, or loss of property.

Model-Based Design and safety-critical software

When considering software design using MBD methodologies for safety-critical software, everything starts with the requirements and the validation that those requirements are correctly implemented (this is true for all software).  I consider four primary factors:

  1. Enhanced understanding of requirements
  2. Enhanced traceability
  3. Handoff error reduction
  4. Automated code generation

Enhanced understanding of requirements

Model-Based Design improves the understanding of requirements in three ways.  First, in general, models are easier to interpret than code.  Second, models allow you to easily simulate and visualize their behavior, simplifying the understanding of the requirements.  Finally, the ability to link requirements to sections of a model, and to have those requirements show up in test results, improves the chance that the requirements will be correctly implemented.

Enhanced traceability

Traceability refers to the act of following the implementation, modification, and validation of requirements.  Model-Based Design improves this process since a single model can be used as the design artifact at multiple stages in the development, meaning that once the link between the requirement and the model is made, it is maintained.


Handoff error reduction

The handoff of software artifacts between people and roles (e.g., software developer to software integrator to software test engineer) is a well-known point for the introduction of errors.  With Model-Based Design, the same model is used at each stage, preventing handoff errors.

Automated code generation

The use of automatically generated code prevents syntactical errors to which people are prone.  Many standards now allow you to claim credit for the use of auto code in the prevention of these errors.

Final thoughts

Developing safety critical systems for any industry requires following common best practices and established guidelines.  Following a Model-Based Design approach helps with the automation and validation of many of these steps while avoiding critical handoff errors.

Getting more out of your test reports

Did I pass or did I fail?  Yes or No?  What more do I need to know?  Putting aside the failure case, where knowing how you failed is important, let’s start by talking about what information you can know and why you would care about it.

Passing information

First, remember that there are multiple ways in which a test can "pass."  Just like in school, there can be A, B, and C passing grades.  The way the grade is determined is, in part, related to the test type.

  • Coverage:  Pass is determined by reaching a given level of coverage (see the sketch after this list).
  • Standards compliance: Passing is determined by being under a given level of errors and not violating any “critical” standards.
  • Baseline: Executing within a given fault tolerance.
  • Performance: Execution of the process under a maximum and average time.
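For the coverage case, here is a minimal sketch of turning a coverage level into a pass/fail criterion, assuming Simulink Coverage (the cvtest/cvsim/decisioninfo functions) and a model with coverage recording enabled.  The model name and the 90% threshold are illustrative.

```matlab
% Sketch: coverage as a pass/fail criterion, assuming Simulink Coverage and
% a model with coverage recording enabled. Names/thresholds are illustrative.
model   = 'controller';
covData = cvsim(cvtest(model));          % simulate and collect coverage
info    = decisioninfo(covData, model);  % [achieved, total] decision outcomes
pct     = 100 * info(1) / info(2);
if pct >= 90
    fprintf('PASS: %.1f%% decision coverage.\n', pct);
else
    fprintf('FAIL: %.1f%% decision coverage (90%% required).\n', pct);
end
```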

File churn

Another metric of interest to system and testing engineers is the file "churn rate."  From the testing perspective, there are two types of churn rate: first, how often the file is updated; second, how often the file is referenced by updated files.

Files with high “self-churn” are under development and, in general, should have test cases added as the development process matures.  Files with high “reference churn” are, in contrast, generally mature files that are referenced as utilities or as data.  These files should be “fully” locked down with test cases.
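As one way to measure self-churn, here is a sketch that counts, per file, how many recent commits have touched it, by shelling out to Git from MATLAB.  It assumes the current directory is inside a Git repository; the 90-day window is illustrative.

```matlab
% Sketch: per-file "self-churn" (commit counts) from Git history. Assumes
% the working directory is inside a Git repository; the window is illustrative.
[status, log] = system('git log --since="90 days ago" --name-only --pretty=format:');
assert(status == 0, 'git log failed');
files = strsplit(strtrim(log), newline);
files = files(~cellfun(@isempty, files));
[names, ~, idx] = unique(files);
counts = accumarray(idx(:), 1);          % commits touching each file
[counts, order] = sort(counts, 'descend');
names = names(order);
for k = 1:min(10, numel(names))          % ten highest-churn files
    fprintf('%4d  %s\n', counts(k), names{k});
end
```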

Failure is an option

Just as with passing, there are multiple types of failure, corresponding to the types of "passing."  The main question is: what sort of information do we bring out from the tests?


  • Cause of failure:  There are 4 primary causes of failure
    • Test did not meet explicit criteria
    • Test failed to run (test harness bug)
    • Dependency failure (supporting files not present or failing their tests)
    • Performance failure

For each type of failure different “rich” information is desired.

Explicit criteria

For the explicit criteria case, the cause of failure, as defined by the test, should be provided, along with any relevant plots, error diagnostics (e.g., the line of code or block in the model), and the expected results.

Failure to run

In a subset of cases, the failure will be in the testing infrastructure.  In this case, the location of the test infrastructure failure should be reported.  To prevent these types of errors, test cases for the testing infrastructure should be created when it is developed.

Dependency failure

A dependency failure is a case of the "explicit criteria" failure, e.g., when working with a system of models, one or more of the dependent models or pieces of data has an error.  Dependency errors can occur in one of two ways.

  1. The dependent model changed and errors were introduced
  2. The interface between the parent and dependent model changed (in the parent) causing errors

If the error is of the first type then the report is the same as in the explicit error case.  For the second case, an interface report should be created detailing the change in the interface.

Infrastructure performance

The final note for this post is a few thoughts on infrastructure performance.  Over time, as additional tests are added to a system, the total time to execute the tests will increase.  Therefore, monitoring the execution time of both individual tests and reusable test components is critical.


When profiling the performance of individual test components, having "tests for the tests" is important: you want to make sure that when you improve the performance of a test component, you do not change its behavior.  A sketch of a simple runtime guard follows.
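This is a minimal sketch; run_component_tests is a hypothetical entry point for a reusable test component, and the baseline and tolerance values are illustrative.

```matlab
% Sketch: guarding a reusable test component's runtime against a recorded
% baseline. run_component_tests is hypothetical; values are illustrative.
baselineSeconds = 12.0;   % recorded the last time the component was profiled
tolerance       = 1.2;    % allow a 20% slowdown before flagging

tStart  = tic;
run_component_tests('controller_suite');   % hypothetical test entry point
elapsed = toc(tStart);

if elapsed > baselineSeconds * tolerance
    warning('Test component slowed down: %.1fs vs %.1fs baseline.', ...
            elapsed, baselineSeconds);
end
```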