From Munich

This week I had the pleasure of attending the SDMD conference in Munich.  In addition to presenting two papers, I had the opportunity to meet with people from the medical device industry, both at the conference and afterward.  What follows are my observations.

Memories of past work

I began my work with MBD roughly 25 years ago with General Motors, in the automotive industry.  At that time, MBD was new and, frankly, the tools were less mature.  People saw a need for improvements, but it was not at all clear how to proceed.  It was the “Wild West.”

Over time, best practices developed out of industry experience.  By the early 2000s, modern processes were in place.

This period for the medical device industry reminds me of that time in the automotive industry (but with mature tools).

Old problems, old questions, new answers

The medical device industry is seeing the same problems the automotive industry saw 20 years ago; in addition, it faces regulatory questions.  The medical device industry is, naturally, cautious about adopting new processes.  Because of this, “use cases” from other industries are required for validation of the process.

Final thoughts

Three things: First, for everything that is clear in my German writing, I have my colleagues to thank; for everything that is not, I offer my apologies.  Second, Munich is a city with beautiful buildings and wonderful people.  Last, I have learned to eat pretzels for breakfast, and that was very good.

Agile development and Model-Based Design

If the heart of agile development can be seen in the concepts of quick iterations, test points for quality assurance, and close team-based collaboration, then Model-Based Design is the veins and blood that compose the body of your work.

Agile is a concept and a process; how that concept is implemented is up to the development team.

If we review the key concepts behind Model-Based Design and Agile Development, the mapping between them is obvious.

Use models for architectural decomposition:  Models are used to break down large problems into smaller components.  These smaller components can easily be integrated into larger system-level models created by other people in the development team.  The use of models and a modeling architecture strongly supports close team-based collaboration.

Use of simulation: Simulation is the younger brother of testing.  Using models, developers can quickly and easily exercise their designs to determine the functional correctness of the system under test.  Once the initial models are “correct,” they can be locked down with a set of formal tests.  Those formal tests are often derived directly from the simulations used during design.
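As a minimal sketch of what this “lock down” can look like in MATLAB (the model name, the logged signal, the output-logging setup, and the 0.01 tolerance are hypothetical examples, not a prescribed workflow):

    % Re-run the design simulation and compare against a saved baseline.
    out      = sim('cruise_control', 'StopTime', '10');  % exercise the model
    speed    = out.yout{1}.Values.Data;   % logged output (assumes Dataset logging)
    baseline = load('baseline_speed.mat');               % locked-down result
    assert(max(abs(speed - baseline.speed)) < 0.01, ...
        'Simulation deviates from the locked-down baseline');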

Model as the single truth: When we look at the elaboration process that a Model-Based Design process follows, it is clear that the iterative nature of an agile process is a close fit.  Models provide tight integration with requirements while allowing for the fast evolution of those requirements.  In fact, the use of simulation as part of the development process allows developers to quickly find issues with their requirements.

Final thoughts

Agile design processes are only as good as the people who commit to them.  A good understanding of what is and what is not part of the agile development process is important to the success of the project.  (This is, of course, true of any product development.)  For another perspective on Agile development and Model-Based Design, this link provides a good overview.



Reusable software components…

This post is a companion to “Automation do’s and don’ts.”  Here I will examine organizational hurdles that stall the creation of reusable components.

The reuse of software is a common objective stated by most companies, but, with the exception of a narrow set of cases, most companies struggle to meet this objective.  In my experience, there are 6 reasons for this struggle:

  1. Lack of ownership:  There isn’t a group (or person for smaller companies) who has the responsibility and authority to ensure this task succeeds.
    Note: often the lack of authority on the person’s or group’s part is the larger part of the problem.
  2. Failure to allocate time:  Turning a component into a reusable component can add between 10% and 15% to the development time.  If time is not budgeted for the additional development, a “buggy” reusable component is released.
  3. Lack of awareness/documentation: The greatest software tool is useless if no one knows about it or it is poorly documented.
  4. Narrow use case: The component is created, but its use is so limited that only a few people will ever use it.
  5. Wide use cases: Wide use cases often lead to complex reuse components that either do nothing well or become so bloated that they are difficult to configure and maintain.
  6. Bugs: Every time a person uses a “reusable component” and it fails to do what it is supposed to do, it discourages people from using reusable components in the future.

So how do you avoid those pitfalls?


What type of reuse?

I break down reuse into two categories: formal and informal.  Informal reuse is common for individuals and within small groups.  It is when a component is regularly used to perform a task by people who know how to use it well or are able to work with its “quirks.”

Informal reuse is a good practice; however, it should not be confused with formal reuse, which is the topic of this post.  With formal reuse, the component is used by people who are not experts on the underlying assumptions and methods of the object.  Because of this, they are not tolerant of “quirks” and need a solution that is dependable and documented.

It should be noted that many “failed” reuse attempts arise out of taking informal reusable components and treating them like formal reusable components.

Deciding when to reuse

Before I automate a process, I ask myself the following questions to prevent the “too narrow” and “too wide” blocking issues.

  1. How often do I perform the task?
    Once a day? Once a week? Once a quarter?
  2. How long does the task take?
    How long does the task take, both for myself and for the system running the process?
  3. Do others perform this task?
    Do they follow the same process?  Does variance in the process cause problems?  Do you have a way to push the automation out to others?
  4. How many decision points are there in the process?
    Decision points are a measure of the complexity of the process.
  5. Is the process static?
    Is the process still evolving?  If so how often does it change?
  6. Is it already automated?
    Oddly enough if you found it worthwhile to automate someone else may have already done the work.


Taking responsibility

Issues 1 (lack of ownership), 3 (lack of awareness/documentation), and 6 (bugs) can be addressed by having a person or group who has the task of creating and maintaining the components.

The maintenance of the component has three primary tasks.  First, the creation of test cases to ensure the component continues to work as expected.  Second, updating the component to support new use cases.  Third, knowing when to “branch” components to keep them from becoming too complicated.
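A minimal sketch of the first task, using the MATLAB unit-test framework (the component reusableFilter and its expected behavior are hypothetical):

    classdef tReusableFilter < matlab.unittest.TestCase
        % Regression tests that lock down a reusable component's behavior.
        methods (Test)
            function constantInputIsUnchanged(testCase)
                u = ones(1, 100);
                y = reusableFilter(u);  % hypothetical component under test
                % A smoothing filter should leave a constant signal unchanged.
                testCase.verifyEqual(y(end), 1, 'AbsTol', 1e-6);
            end
        end
    end

Each new use case (the second task) adds a test here; when tests for different use cases start pulling the component in conflicting directions, that is the signal to “branch” it (the third task).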


For some organizations, allocating time to the development process can be the greatest hurdle to creating reusable components.  The time invested does not show an immediate return on investment, and there are pressing deadlines.  However, if the rules of thumb in “deciding when to reuse” are followed, the long-term benefits will outweigh the short-term cost.


Final thoughts

The final topic is how to encourage engineers to actually reuse the components.  This is, in part, dependent on how well the components are documented and how easy they are to access.  In the end, engineers need to understand how reuse benefits them; e.g., less time spent “reinventing the wheel” and more time to work on their actual projects.




Anyone who has worked with software for more than 3 years knows that migration between software releases is a fact of life; having that process be smooth and easy is not always a fact of life (does anyone remember the Windows ME pains?).

Making migration easy(er)

One of my early swimming coaches was famous for saying “you win the race by how you train.”  I have found this advice to be true in most aspects of my life.  Projects succeed or fail based on the preparation you do as much as your execution.

(Image credit: XKCD)

Preparing for migration

In preparing for migration, I start by asking 3 questions:

  1. What things are we doing now that are working well?
  2. What things are we doing now that are hard to do?
  3. What things do we want to do that we can’t do now?

The first question focuses on maintaining current functionality.  The second and third look at how to make things better.  Improvements can be made either through refactoring existing processes (or creating new ones) or through the adoption of new tools.
One of the critical things to keep in mind with software upgrades is that it is not just about changing tools.  It is, or should be, about changing processes.  [Note: for minor migrations of a single tool, the associated processes may or may not require updates.]

A few thoughts on type two problems

The “type 2” problems, “what things are we doing now that are hard to do?”, can be broken down into a few components.

  • The process runs slowly:  Frequently, but not always, upgrades in software can provide an increase in speed.  Additionally, process changes may provide speed improvements.
  • The process is complicated to execute:  Complex processes can be difficult to execute.  Often, complex processes were developed due to limitations in the tools available when they were initially created.
  • The process has bugs: Before upgrading, verify that the bugs you have encountered in the software have been resolved in the new version.

The more things change the more they stay the same…

When you upgrade you still want some things to be static: your results.  The best method for ensuring that your results (deliverables, code,…) remain the same is by developing test cases that “lock down” your deliverables.

When comparing test results between different tools, there are a couple of things to keep in mind.  First, for every test an “acceptable” change should be defined, as there may be small deviations that have no effect on the overall system’s performance (though for some tests no change will be allowed).  Second, in some cases testing in newer versions of the software may uncover bugs that were not detected before.
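A minimal sketch of such a comparison (the result files and the per-test tolerances are hypothetical); note that each test carries its own definition of “acceptable” change:

    % Compare results from the upgraded tools against the stored baseline.
    baseline = load('results_old_version.mat');  % locked-down results
    current  = load('results_new_version.mat');  % results from the new tools
    tol      = struct('stepResponse', 1e-3, 'cpuLoad', 0);  % 0 = no change allowed
    tests    = fieldnames(tol);
    for k = 1:numel(tests)
        name  = tests{k};
        delta = max(abs(current.(name)(:) - baseline.(name)(:)));
        assert(delta <= tol.(name), 'Test %s deviates by %g', name, delta);
    end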

(Image credit: Bill Watterson)

Testing the testing environment

As a last note: if, as part of your migration, you are updating your testing environment, then you need to validate the behavior of the testing environment itself.  This is generally done through manual inspection of a subset of the full test suite.  The key factor is to have a subset that covers all of the types of tests performed by the testing environment.



Addressing challenges of team-based development

In one of my first consulting engagements, over 15 years ago, a customer said something to me about working in a group that has stuck with me to this day.

“Nobody knows my mind like I do, and even I don’t always know it.” (anon)

What he was joking about were the issues that arise when working as part of a group.


Benefits of team-based development

Before we address the challenges of team-based development, let us talk about the benefits.  Broadly speaking, there are 3 primary types of benefits (plus one bonus).

  1. Faster development time: By distributing the work over multiple people the project can be completed more quickly.
  2. Multiple areas of expertise: Additional people can bring domain-specific knowledge to a project.
  3. Error reduction: Having multiple people work on a project can reduce the chance of “developer blindness” where you do not see your own mistakes.
  4. Chance for “team” lunches: When you work as part of a group you can have group celebrations.  When you work by yourself it is just lunch.


What are the challenges?

There are three primary types of challenges for team-based development.  They are:

  1. Communication:  Both ensuring that all information required is communicated and that it is clearly expressed.
  2. Blocking: When more than one person requires direct access to a set of files for their work, or their work is dependent on another person’s.
  3. Standards: Every developer has different ways of solving problems; in some instances, these approaches will be in conflict.


Note: I am not addressing personalities with this post.


Mitigating these challenges

As the title of this section states, these challenges can be mitigated, but never fully eliminated.  The following recommendations will help reduce these challenges.

Challenge 1: Communication

Good communication starts with a commitment to good communication; the team needs to recognize the need for some form of formal transfer of knowledge.  Often this takes the form of a Requirement Document.  However, it is not enough just to have a requirement document; it needs to be used.  Use of a requirement document implies the following:

  1. Referenced: The requirement document is referenced during the creation of artifacts.
  2. Tested: Test cases are derived from the requirements document.
  3. Traced: The use of the requirements is validated throughout the development cycle.
  4. Living: The document is updated as changes are required.

Failure to follow these steps will lead to communication breakdown.


Challenge 2: Blocking

Blocking is addressed through architectural constructs and version control methodologies.  Models can be architected to allow each person to work on individual components while still facilitating integration into a larger system-level model.  In the instances where two people need to work on the same model, and it cannot be subdivided, version control software can be used to create a branch for each person to work on; their changes are then merged once they have completed their work.

It is of the highest importance to validate the model’s behavior after the merge to ensure that the functionality added by each person still works in the merged model and that the baseline functionality has not changed.


Challenge 3: Standards

While standards may be complete or incomplete, there is no “right” standard.  The key is complying with those standards.

A “complete” standard is often a series of standards addressing

  1. Each stage in the development:
    1. How to write requirements
    2. How to write tests
  2. The handoff between stages:
    1. What artifacts are created when you start developing a model
    2. What artifacts are created when you run a test



Final thoughts

In this post, I have not specifically addressed Model-Based Design.  The recommendations for mitigation can be directly linked to earlier posts I have made on topics such as modeling standards, version control, and model architecture.  Finally, with models acting as the “single source of truth” during the development cycle, many of the handoff and blocking issues of team-based development can be avoided.


Software Design for Medical Devices

I am happy to write that, for the second time, I will be presenting at the Software Design for Medical Devices conference in Munich, Germany, February 19th and 20th.  I will be in Munich the balance of the week answering questions about Model-Based Design, both for the medical industry and in general.  If you are based in or around Munich, please feel free to contact me.

Mit freundlichsten Grüßen, Michael

Verification: Phased lockdown

When I release a model, it will:

  • Reach 100% requirements coverage for the model
  • Reach 90% test coverage of requirements
    • With 100% passing
  • Be in full compliance with 70 Modeling Guidelines
    • Reach 90% compliance with an additional 7
  • Achieve 95% MISRA compliance
    •  100% with exception rationale

However, if I asked anyone to reach these levels early on in the development process, I would both slow down the process and increase the frustration of the developers.


What is a phased approach to verification?

The phased approach to verification imposes increasing levels of verification compliance as the model progresses from the research phase to the final release.

The following recommendations are rough guidelines for how the verification rigor is applied at each phase.

Research phase

The research phase has the lowest level of rigor.  The model and data from this phase may or may not be reused in later phases.  The model should meet the functional requirements within a predetermined tolerance.   Modeling guidelines, requirements coverage, and other verification tasks should not be applied at this phase.

Initial phase

With the model in the initial phase, we have the first model that will be developed into the released model.  With this in mind, the following verification tasks should be followed:

  • Verify the interface against the specification:  The model’s interface should be locked down at the start of development.  This allows the model to be integrated into the system-level environment.  (A minimal sketch of this check follows this list.)
  • Comply with model architecture guidelines:  Starting the model development with a compliant architecture prevents the need to rearchitect the model later in development.
  • Create links to high-level requirements:  The high-level requirements should be established with the initial model.
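A minimal sketch of the interface check in MATLAB (the model name and the ICD signal names are hypothetical):

    % Compare the model's root-level inports against the agreed ICD.
    icd    = {'wheelSpeed'; 'brakeCmd'};  % input names taken from the ICD
    blocks = find_system('cruise_control', 'SearchDepth', 1, ...
        'BlockType', 'Inport');
    names  = get_param(blocks, 'Name');
    assert(isequal(sort(names), sort(icd)), ...
        'Model inports do not match the ICD');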

Development phase

The development phase is an iterative process.  Because of this, the level of verification compliance will increase as the model is developed.  As would be expected, requirements coverage will increase as the requirements are implemented.  The verification of each requirement should directly follow its implementation.


With respect to increasing modeling guideline and MISRA compliance, in general I recommend the following (a sketch of this ramp as a gating function follows the list):

  • 50% guideline compliance/MISRA at the start of the development phase
  • 70% guideline compliance/MISRA when 50% of the requirements are implemented
  • 90% guideline compliance/MISRA when 80%  of the requirements are implemented
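As a sketch, this ramp can be encoded as a simple gating function for an automated check (the function name and the exact thresholds are my illustration of the rule of thumb above):

    function target = complianceTarget(pctRequirementsImplemented)
    % COMPLIANCETARGET Recommended guideline/MISRA compliance level (in
    % percent) for the current development progress.
        if pctRequirementsImplemented >= 80
            target = 90;
        elseif pctRequirementsImplemented >= 50
            target = 70;
        else
            target = 50;  % start of the development phase
        end
    end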

Release phase

With the release phase, I finally hit the targets I initially described.  Entering this phase from development, all of the functional requirements should be complete.  The main task of the release phase is the final verification of requirements and compliance with guidelines (model and code).

Additionally, the release phase may include a “targeting” component, where the model, which was designed for a generic environment, is configured for one or more types of target hardware.  In this case, the functionality of the component should be verified on each target.

Final thoughts

Ramping up compliance with verification tasks is a standard workflow.  The suggested levels of compliance during the development phase should be adjusted based on a number of factors, including:

  • Reuse of components:  When components are reused the compliance should be higher from the start of development.
  • Compliance requirements: If you are following a safety critical workflow, such as DO-178C or IEC-61508, then the compliance should be higher from the start of development.
  • Group size:  The more a model is shared among multiple people, the sooner the model should be brought into compliance with modeling guidelines.  This facilitates understanding of the model under development.


Model-Based Design: The handoff between companies…

Outsourcing design is common in most industries, from simple sub-components to full systems.  In all cases, the process by which the handoff between companies happens is critical.

It starts before you start (pre-project work)

Before the first model is exchanged, before the first requirement is written, the two companies need to agree on the following items:

  1. What materials are delivered?
    1. Models, protected models?
    2. Data dictionary?
    3. Test models, test cases?
    4. Generated code?
    5. Requirements?
    6. Interface control document (ICD)?
  2. How are the materials delivered?
    1. Using Simulink Projects?
    2. Binaries?
  3. What level of model customization is enabled?
    1. Parameter tuning?
    2. Variant tuning?
  4. How will the requirements be validated?
    1. Requirements tracking through traceability matrix?
    2. Requirements based testing?

Regardless of what is specified, the required information needs to be clearly defined.

Stages of development

An additional factor to consider is the stage of the development.  The materials that are handed over during the initial versus final stages of development will be different.  Normally, during the early stages of development, the level of compliance will be lower.  As the development matures, the rigor for compliance increases.


Recommendations for delivery

The following are recommendations for three stages of a “Company-to-Company” project.  The stages I will look at are “initial specification,” “functional review,” and “final delivery.”


Initial specification

In the initial specification phase, the Requesting Company (RC) is providing requirements to the Providing Company (PC).  At a minimum, they should provide the following information.

  1. Functional requirement document: A formal document describing how the software should perform
  2. Required level of testing for acceptance: A description of the level, or class, of testing to be performed.  This may include some existing acceptance tests.
  3. Interface control document (ICD):  Description of the software interface, including I/O, rates and tolerances
  4. “Real-world data”:  Any specification sheets for the unit under design and/or any existing performance data from the unit.


Functional review

In most cases, the functional review stage is an iterative process with the Providing Company providing updates to the Requesting Company.


At a minimum, the providing company should provide the following artifacts

  1. Simulatable model:  The model could be delivered in a protected mode, keeping the PC’s intellectual property protected.
  2. Requirements traceability report:  A report on the current status of requirements implementation.  Note: at this stage, not all of the requirements may have been implemented.
  3. Verification results:  Related to the requirements report, the verification results demonstrate compliance with the requirements.  Note: at this stage, not all of the tests may have been implemented or be passing.
  4. Change requests: During the functional reviews, the PC should provide change and clarification requests.

Depending on what was decided in the pre-work phase, the providing company may provide the test environment and test cases.

In response to these artifacts, the requesting company should provide, at a minimum, the following documents.

  1. Change request response: The requesting company should respond with approval or clarifying information.
  2. Change request: The requesting company should also formally provide any change requests to the providing company.

Final Deliverable

With the final deliverables, the providing company should provide the same materials as in the functional reviews.  The difference is that in the final review, the requirements traceability and verification results should be complete.  Any requirements that could not be met should have been addressed in the final functional review change request.


Final thoughts

The process of handing off artifacts between companies in the Model-Based Design environment is nearly identical to that of the traditional text-based environment.  The primary difference is that MBD enables simulation of the model, allowing the requesting company to easily verify the requirements.

Likewise, the specification of which artifacts, and in what format, will be exchanged in the pre-work phase is critical to the success of work between companies.




Things I learned in 2017: MBD edition

With 2017 behind us and 97 blog posts under my belt, it seemed like a good time for some reflection on the state of Model-Based Design.

  1. New industries adopt, old industries expand
    In the past 3 years, the Medical device industry has feverishly embraced the core aspects of Model-Based Design.  At the same time, existing strong users, such as Aerospace and Automotive, have expanded the tool suites they use to include things such as big data and image processing.
  2. Growth of continuous integration 
    The use of CI systems for model and system level validation continues to grow.  This is aided by both the growing ease of use for CI systems and…
  3. Improvements in testing infrastructure
    Testing infrastructure, from test managers to test reports, continues to mature, making it easier for end users to develop reusable, scalable testing environments.  Further, it lowers the bar for developing tests, allowing software and systems engineers to both create and run tests.

    (Image: Simulink Test Manager)
  4. Puppies!
    Seriously, of all the desktop backgrounds I have used during presentations this one, of my wife and a photo-bombing dog, was the most liked.

    My wife at Epping Forest (not our dog)
  5. Training
    2017 was a lean year for many customers, and in an effort to save on costs they cut back on training.  As a result, the start of many of my engagements involved basic training.  Fortunately, this is a trend that is already changing.
    Image result for eye of the tiger
  6. Things get real (time)
    2017 featured a large increase in the number of Hardware In the Loop (HIL) projects that I worked on.  This came about due to three things:

    • Improvements in the Simulink Real-Time API
    • Lower cost of Hardware In the Loop systems
    • Improved testing support for Hardware in the Loop systems (see item 3)


Final thoughts

2017 was a great year for Model-Based Design projects.  I expect an increase in both the number and depth of these projects in 2018.  I look forward to continuing this blog and its eventual conversion into a book.

Automation do’s and don’ts

As an engineer, automation is part of my day-to-day work: from the “start.m” function that runs when MATLAB starts, to the Excel formulas that smooth pivot tables, to the Git macro that allows me to merge two branches.  These are automation functions that other people have already created.  Some of these automations are so “common” that I forget they are, in fact, automation.  What, then, leads me to automate a task?

The 6 questions

Before I automate a process, I ask myself the following questions:

  1. How often do I perform the task?
    Once a day? Once a week? Once a quarter?
  2. How long does the task take?
    How long does the task take, both for myself and for the system running the process?
  3. Do others perform this task?
    Do they follow the same process?  Does variance in the process cause problems?  Do you have a way to push the automation out to others?
  4. How many decision points are there in the process?
    Decision points are a measure of the complexity of the process.
  5. Is the process static?
    Is the process still evolving?  If so how often does it change?
  6. Is it already automated?
    Oddly enough if you found it worthwhile to automate someone else may have already done the work.

When to and not to automate

If the process is already automated, or if the process is changing frequently, it is obvious that work should not be put into the automation.  In general, I look at a threshold in terms of person-hours per week saved, normalized by the number of people working on the project, compared to the effort to implement the automation.

If the person-hours saved per week are low (under 1) or the payback period is long (over 6 months), then I do not consider the automation a worthwhile investment.
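A minimal sketch of this rule of thumb in MATLAB (the function name and the exact thresholds are my own illustration of the guidance above):

    function worthwhile = shouldAutomate(hoursSavedPerWeek, nPeople, effortHours)
    % SHOULDAUTOMATE Rough payback check for a proposed automation.
    % Thresholds: at least 1 person-hour saved per week (normalized by
    % team size) and payback in under ~6 months (26 weeks).
        perPerson    = hoursSavedPerWeek / nPeople;      % normalized savings
        paybackWeeks = effortHours / hoursSavedPerWeek;  % time to break even
        worthwhile   = (perPerson >= 1) && (paybackWeeks <= 26);
    end

For example, a task saving 3 hours per week across a 2-person team that takes roughly 40 hours to automate pays back in about 13 weeks, so shouldAutomate(3, 2, 40) returns true.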

Final thoughts

Automation, when done right, saves us time and allows us to perform other, more interesting tasks.  However, it is easy to get stuck in “automation for automation’s sake” development.  I leave you with two humorists’ takes on automation.


XKCD: Automation



SMBC: Predictions