If the heart of agile development can be seen in the concepts of quick iterations, leveraging test points for quality assurance, and close team-based collaboration, then Model-Based Design is the veins and arteries of the process.
In one of my first consulting engagements, over 15 years ago, a customer said something to me about working in a group that has stuck with me to this day.
“Nobody knows my mind like I do, and even I don’t always know it.” (anon)
What he was joking about were the issues that arise when working as part of a group.
Benefits of team-based development
Before we address the challenges of team-based development, let us talk about the benefits. Broadly speaking, there are four primary benefits.
- Faster development time: By distributing the work over multiple people the project can be completed more quickly.
- Multiple areas of expertise: Additional people can bring domain-specific knowledge to a project.
- Error reduction: Having multiple people work on a project can reduce the chance of “developer blindness” where you do not see your own mistakes.
- Chance for “team” lunches: When you work as part of a group you can have group celebrations. When you work by yourself it is just lunch.
What are the challenges?
There are three primary types of challenges for team-based development. They are:
- Communication: Both ensuring that all information required is communicated and that it is clearly expressed.
- Blocking: When more than one person requires direct access to a set of files for their work, or their work is dependent on another person's.
- Standards: Every developer has different ways of solving problems; in some instances, these approaches will be in conflict.
Mitigating these challenges
As the title of this section states, these challenges can be mitigated, but never fully eliminated. The following recommendations will help reduce these challenges.
Challenge 1: Communication
Good communication starts with a commitment to good communication; the team needs to recognize the need for some form of formal transfer of knowledge. Often this takes the form of a requirements document. However, it is not enough just to have a requirements document; it needs to be used. Use of a requirements document implies the following:
- Referenced: The requirement document is referenced during the creation of artifacts
- Tested: Test cases are derived from the documented requirements
- Traced: The use of the requirements is validated throughout the development cycle
- Living: The document is updated as changes are required.
Failure to follow these steps will lead to communication breakdown.
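The "referenced / tested / traced" checks above lend themselves to simple tooling. As a rough sketch (the requirement IDs and the link structures are hypothetical, not any particular tool's format), a script can flag requirements that no artifact references or tests:

```python
def untraced_requirements(requirements, model_links, test_links):
    """Return requirement IDs that are not referenced and/or not tested."""
    missing = {}
    for req in requirements:
        gaps = []
        if req not in model_links:
            gaps.append("not referenced by any model artifact")
        if req not in test_links:
            gaps.append("no test case derived")
        if gaps:
            missing[req] = gaps
    return missing

# Hypothetical link data: requirement ID -> artifacts that reference it
reqs = ["REQ-001", "REQ-002", "REQ-003"]
model_links = {"REQ-001": ["ctrl_model"], "REQ-002": ["ctrl_model"]}
test_links = {"REQ-001": ["test_ctrl_step"]}
gaps = untraced_requirements(reqs, model_links, test_links)
```

Running a check like this as part of the nightly build helps keep the requirements document "living" rather than decorative.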
Challenge 2: Blocking
Blocking is addressed through architectural constructs and version control methodologies. Models can be architected so that each person works on individual components while still facilitating integration into a larger system-level model. In instances where two people need to work on the same model and it cannot be subdivided, version control software can be used to create a branch for each person; their changes are then merged once they have completed their work.
It is of the highest importance to validate the models’ behavior after the merge to ensure that the functionality added by each person still works in the merged model and that the baseline functionality has not changed.
Challenge 3: Standards
While standards may be complete or incomplete, there is no “right” standard. The key is complying with those standards.
A “complete” standard is often a series of standards addressing
- Each stage in the development:
- How to write requirements
- How to write tests
- The handoff between stages:
- What artifacts are created when you start developing a model
- What artifacts are created when you run a test
In this post, I have not specifically addressed Model-Based Design. The recommendations for mitigation can be directly linked to earlier posts I have made on topics such as modeling standards, version control, and model architecture. Finally, with models acting as the “single source of truth” during the development cycle many of the handoffs and blocking issues of team-based development can be avoided.
I am happy to write that for the second time I will be presenting at the Software Design for Medical Devices conference in Munich Germany Feb 19th and 20th. I will be in Munich the balance of the week answering questions about Model-Based Design both for the medical industry and in general. If you are based in or around Munich please feel free to contact me.
Mit freundlichsten Grüßen, Michael
When I release a model it will
- Reach 100% requirements coverage for the model
- Reach 90% test coverage of requirements
- With 100% passing
- Be in full compliance with 70 Modeling Guidelines
- Reach 90% compliance with an additional 7
- Achieve 95% MISRA compliance
- 100% with exception rationale
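One way to make release criteria like these mechanical is a simple gate check. The sketch below assumes the metrics have already been collected as percentages; the metric names are illustrative, not any real tool's API:

```python
def ready_for_release(metrics):
    """Return (release_ok, list of failed gates) for the given percentages."""
    gates = {
        "requirements_coverage": metrics["requirements_coverage"] >= 100,
        "requirements_test_coverage": metrics["requirements_test_coverage"] >= 90,
        "tests_passing": metrics["tests_passing"] >= 100,
        "misra_compliance": metrics["misra_compliance"] >= 95,
    }
    failed = [name for name, ok in gates.items() if not ok]
    return len(failed) == 0, failed

ok, failed = ready_for_release({
    "requirements_coverage": 100,
    "requirements_test_coverage": 92,
    "tests_passing": 100,
    "misra_compliance": 93,  # below the 95% target
})
```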
However, if I asked anyone to reach these levels early on in the development process then I would both slow down the process and increase the frustration of the developers.
What is a phased approach to verification?
The phased approach to verification imposes increasing levels of verification compliance as the model progresses from the research phase to the final release.
The following recommendations are rough guidelines for how the verification rigor is applied at each phase.
The research phase has the lowest level of rigor. The model and data from this phase may or may not be reused in later phases. The model should meet the functional requirements within a predetermined tolerance. Modeling guidelines, requirements coverage, and other verification tasks should not be applied at this phase.
The initial phase produces the first model that will be developed into the released model. With this in mind, the following verification tasks should be performed:
- Verify the interface against the specification: The model’s interface should be locked down at the start of development. This allows the model to be integrated into the system level environment.
- Comply with model architecture guidelines: Starting the model development with a compliant architecture prevents the need to rearchitect the model later in development.
- Create links to high-level requirements: The high-level requirements should be established with the initial model.
The development phase is an iterative process. Because of this, the level of verification compliance will increase as the model is developed. As would be expected, requirements coverage will increase as the requirements are implemented. The verification of each requirement should directly follow its implementation.
With respect to increasing modeling guideline and MISRA compliance, in general I recommend the following:
- 50% guideline compliance/MISRA at the start of the development phase
- 70% guideline compliance/MISRA when 50% of the requirements are implemented
- 90% guideline compliance/MISRA when 80% of the requirements are implemented
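The ramp above can be captured as a small lookup, which is handy when a CI system needs to decide what compliance level to enforce for a given check-in. This sketch simply encodes the tiers recommended above:

```python
def required_compliance(requirements_implemented_pct):
    """Guideline/MISRA compliance (%) to enforce at a given requirements level."""
    if requirements_implemented_pct >= 80:
        return 90
    if requirements_implemented_pct >= 50:
        return 70
    return 50  # baseline compliance at the start of the development phase
```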
With the release phase, I finally hit the targets I initially described. Entering this phase from development, all of the functional requirements should be complete. The main task of the release phase is the final verification of requirements and compliance with guidelines (model and code).
Additionally, the release phase may include a “targeting” component; where the model which was designed for a generic environment is configured for one or more types of target hardware. In this case the functionality of the component should be verified for each target.
Ramping up compliance with verification tasks is a standard workflow. The suggested levels of compliance during the development phase should be adjusted based on a number of factors including
- Reuse of components: When components are reused the compliance should be higher from the start of development.
- Compliance requirements: If you are following a safety critical workflow, such as DO-178C or IEC-61508, then the compliance should be higher from the start of development.
- Group size: The more a model is shared among multiple people the sooner the model should be brought into compliance with modeling guidelines. This facilitates understanding of the model under development
Outsourcing design is common in most industries, from simple sub-components to full systems. In either case, the process by which the handoff between companies happens is critical.
It starts before you start (pre-project work)
Before the first model is exchanged, before the first requirement is written, the two companies need to agree on the following items:
- What materials are delivered
- Models, protected models?
- Data dictionary?
- Test models, test cases?
- Generated code?
- Interface control document (ICD)?
- How are the materials delivered?
- Using Simulink Projects?
- What level of model customization is enabled?
- Parameter tuning?
- Variant tuning?
- How will the requirements be validated?
- Requirements tracking through traceability matrix?
- Requirements based testing?
Regardless of what is specified the information that is required needs to be clearly defined.
Stages of development
An additional factor to consider is the stage of the development. The materials that are handed over during the initial versus final stages of development will be different. Normally during the early stages of development, the level of compliance will be lower. As the development matures the rigor for compliance increases.
Recommendations for delivery
The following are recommendations for three stages of a “Company-to-Company” project. The stages I will look at are “initial specification”, “functional review”, and “final delivery”.
In the initial specification phase, the Requesting Company (RC) provides requirements to the Providing Company (PC). At a minimum, the RC should provide the following information.
- Functional requirement document: A formal document describing how the software should perform
- Required level of testing for acceptance: A description of the level, or class, of testing to be performed. This may include some existing acceptance tests.
- Interface control document (ICD): Description of the software interface, including I/O, rates and tolerances
- “Real-world data”: Any specification sheets for the unit under design and/or any existing performance data from the unit
In most cases, the functional review stage is an iterative process with the Providing Company providing updates to the Requesting Company.
At a minimum, the providing company should provide the following artifacts
- Simulatable model: The model could be delivered in a protected mode, keeping the PC's intellectual property protected.
- Requirements traceability report: A report on the current status of requirements implementation. Note: at this stage, not all of the requirements may have been implemented.
- Verification results: Related to the requirements report, the verification results demonstrate compliance with the requirements. Note: at this stage, not all of the tests may have been implemented or passing.
- Change requests: During the functional reviews the PC should provide change and clarification requests
Depending on what was decided in the pre-work phase the providing company may provide the test environment and test cases.
In response to these artifacts, the requesting company should provide, at a minimum, the following documents.
- Change request response: The requesting company should respond with approval or clarifying information.
- Change request: The requesting company should also formally provide any new change requests to the providing company.
With the final deliverables, the providing company should provide the same materials as in functional reviews. The difference is that in the final review requirements traceability and verification results should be completed. Any requirements that could not be met should have been addressed in the final functional review change request.
The process of handing off artifacts between companies in the Model-Based Design environment is nearly identical to that of the traditional text-based environment. The primary difference is that MBD enables simulation of the model, allowing the requesting company to easily verify the requirements.
Likewise, the specification of which artifacts, and in what format, will be exchanged in the pre-work phase is critical to the success of work between companies.
With 2017 behind us and 97 blog posts under my belt, it seemed like a good time for some reflection on the state of Model-Based Design.
- New industries adopt, old industries expand
In the past three years, the medical device industry has enthusiastically embraced the core aspects of Model-Based Design. At the same time, existing strong users such as aerospace and automotive have expanded the tool suites they use to include areas such as big data and image processing.
- Growth of continuous integration
The use of CI systems for model and system level validation continues to grow. This is aided by both the growing ease of use for CI systems and…
- Improvements in testing infrastructure
Testing infrastructure, from test managers to test reports, continues to mature, making it easier for end users to develop reusable, scalable testing environments. Further, it lowers the bar for developing tests, allowing software and systems engineers to both create and run tests.
Seriously, of all the desktop backgrounds I have used during presentations this one, of my wife and a photo-bombing dog, was the most liked.
2017 was a lean year for many customers, and in an effort to save on costs they cut back on training. As a result, the start of many of my engagements involved basic training. Fortunately, this is a trend that is already changing.
- Things get real (time)
2017 featured a large increase in the number of Hardware In the Loop (HIL) projects that I worked on. This came about due to three things
- Improvements in the Simulink Real-Time API
- Lower cost of Hardware In the Loop systems
- Improved testing support for Hardware in the Loop systems (see item 3)
2017 was a great year for Model-Based Design projects. I expect an increase in both the number and depth of these projects in 2018. I look forward to continuing this blog and the eventual conversion to the book.
As an engineer, automation is part of my day-to-day work: from the “start.m” function that runs when MATLAB starts, to the Excel formulas that smooth pivot tables, to the Git macro that allows me to merge two branches. These are automation functions that other people have already created. Some of these automations are so “common” that I forget they are, in fact, automation. What, then, leads me to automate a task?
The 6 questions
Before I automate a process I ask myself the following questions
- How often do I perform the task?
Once a day? Once a week? Once a quarter?
- How long does the task take?
How long does the task take, both for myself and for the system running the process?
- Do others perform this task?
Do they follow the same process? Does variance in the process cause problems? Do you have a way to push the automation out to others?
- How many decision points are there in the process?
Decision points are a measure of the complexity of the process.
- Is the process static?
Is the process still evolving? If so how often does it change?
- Is it already automated?
Oddly enough if you found it worthwhile to automate someone else may have already done the work.
When to and not to automate
If the process is already automated, or if the process is changing frequently, it is obvious that work should not be put into automation. In general, I look at a threshold in terms of person-hours saved per week, normalized by the number of people working on the project, compared with the effort to implement the automation.
If the person-hours saved per week are low (under 1) or the payback period for the automation is long (over 6 months), then I do not consider the automation a worthwhile investment.
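As a sketch of this decision rule (the interpretation of "normalized by the number of people" as scaling the per-person savings by team size is mine):

```python
def worth_automating(hours_saved_per_week, team_size, implementation_hours):
    """Rough go/no-go check for automating a task."""
    weekly_savings = hours_saved_per_week * team_size  # person-hours per week
    if weekly_savings < 1:
        return False            # too little time saved to bother
    payback_weeks = implementation_hours / weekly_savings
    return payback_weeks <= 26  # roughly a six-month payback horizon
```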
Automation, when done right, saves us time and allows us to perform other, more interesting tasks. However, it is easy to get stuck in “automation for automation's sake” development. I leave you with two humorists' takes on automation.
First off what qualifies as “High Integrity Software?” The base “reference” document that I use is the “NIST Special Publication 500-204: High Integrity Software Standards and Guidelines”
Originally written to support the nuclear power industry it provides a valuable insight into what it means to be “safety critical”
In short, the software must function dependably (in a measurable and definable fashion) for critical functions. Critical functions are defined as those whose failure modes could cause serious injury, loss of life, or loss of property.
Model-Based Design and safety-critical software
When considering software design using MBD methodologies for safety-critical software, everything starts with the requirements and the validation that those requirements are correctly implemented (this is true for all software). I consider four primary factors:
- Enhanced understanding of requirements
- Enhanced traceability
- Handoff error reduction
- Automated code generation
Enhanced understanding of requirements
Model-Based Design improves the understanding of requirements in three ways. First, in general, models are easier to interpret than code. Second, models allow you to easily simulate and visualize their behavior, simplifying the understanding of the requirements. Finally, the ability to link requirements to sections of a model, and have those requirements show up in test results, improves the chance that the requirements will be correctly implemented.
Enhanced traceability
Traceability refers to the act of following the implementation, modification, and validation of requirements. Model-Based Design improves this process since a single model can be used as the design artifact at multiple stages in the development, meaning that once the link between the requirement and the model is made, it is maintained.
Handoff error reduction
The handoff of software artifacts between people and roles (e.g. software developer to software integrator to software test engineer) is a well-known point for the introduction of errors. With Model-Based Design, the same model is used at each stage, preventing hand-off errors.
Automated code generation
The use of automatically generated code prevents syntactical errors to which people are prone. Many standards now allow you to claim credit for the use of auto code in the prevention of these errors.
Developing safety critical systems for any industry requires following common best practices and established guidelines. Following a Model-Based Design approach helps with the automation and validation of many of these steps while avoiding critical handoff errors.
Did I pass or did I fail? Yes or No? What more do I need to know? Putting aside the failure case, where knowing how you failed is important, let’s start by talking about what information you can know and why you would care about it.
First, remember that there are multiple ways in which a test can “pass.” Just like in school there can be A, B and C passing grades. The way the grade is determined is, in part, related to the test type.
- Coverage: Pass is determined by reaching a given level of coverage.
- Standards compliance: Passing is determined by being under a given level of errors and not violating any “critical” standards.
- Baseline: Executing within a given fault tolerance.
- Performance: Execution of the process under maximum and average time limits.
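For the baseline case, the idea of graded passes can be sketched as follows; the letter-grade cut-offs here are my own illustration, not a standard:

```python
def grade_baseline(actual, expected, tolerance):
    """Fail outside the tolerance; otherwise grade by margin of safety."""
    worst = max(abs(a - e) for a, e in zip(actual, expected))
    if worst > tolerance:
        return "F"
    ratio = worst / tolerance
    if ratio <= 0.25:
        return "A"  # well inside the tolerance band
    if ratio <= 0.50:
        return "B"
    return "C"      # passed, but close to the limit
```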
Another metric of interest to system and testing engineers is the file “churn rate”. From the testing perspective, there are two types of churn rate: first, how often the file is updated; second, how often the file is referenced by updated files.
Files with high “self-churn” are under development and, in general, should have test cases added as the development process matures. Files with high “reference churn” are, in contrast, generally mature files that are referenced as utilities or as data. These files should be “fully” locked down with test cases.
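Both churn measures can be computed from version control history. The sketch below takes a list of commits (each a set of changed files) and an assumed dependency map; it is illustrative rather than tied to any particular VCS API:

```python
def churn_rates(commits, dependents):
    """commits: list of sets of changed files; dependents[f]: files referencing f."""
    self_churn, reference_churn = {}, {}
    for changed in commits:
        for f in changed:
            self_churn[f] = self_churn.get(f, 0) + 1       # self-churn
        for f, refs in dependents.items():
            if changed & set(refs):
                reference_churn[f] = reference_churn.get(f, 0) + 1
    return self_churn, reference_churn

commits = [{"a.m"}, {"a.m", "b.m"}, {"b.m"}]   # three check-ins
dependents = {"util.m": ["a.m", "b.m"]}        # util.m is referenced by both
self_c, ref_c = churn_rates(commits, dependents)
```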
Failure is an option
Just like with passing there are multiple types of failures corresponding to the types of “passing.” The main question is what sort of information do we bring out from the tests?
- Cause of failure: There are 4 primary causes of failure
- Test did not meet explicit criteria
- Test failed to run (test harness bug)
- Dependency failure (supporting files not present or failing their tests)
- Performance failure
For each type of failure different “rich” information is desired.
For the explicit-criteria case, the cause of failure, as defined by the test, should be provided, along with any relevant plots, error diagnostics (e.g. line of code or block in the model), and the expected results.
Failure to run
In a subset of cases, the failure will be in the testing infrastructure. In this case, the location of the test infrastructure failure should be reported. To prevent these types of errors, test cases for the testing infrastructure itself should be created when it is developed.
A dependency failure is a special case of the “explicit criteria” failure, e.g. when working with a system of models, one or more of the dependent models or pieces of data has an error. Dependency errors can occur in one of two ways.
- The dependent model changed and errors were introduced
- The interface between the parent and dependent model changed (in the parent) causing errors
If the error is of the first type then the report is the same as in the explicit error case. For the second case, an interface report should be created detailing the change in the interface.
The final note for this post is a few thoughts on the infrastructure performance. Over time as additional tests are added to a system the total time to execute tests will increase. Therefore monitoring both the execution time of individual tests as well as reusable test components is critical.
When profiling the performance of individual test components having “tests for the tests” is important as you want to make sure that when you improve the performance of a test component you do not change its behavior.
How do you organize your bookshelf at home? By the author (works well for fiction), by topic (generally good for non-fiction), by size (let’s face it shelf space can be an issue in home libraries.) Any of these approaches work fine for small libraries, but when the total number of books starts getting large additional information is required to help organize your library.
Metadata and organization
If you have ever used Twitter (disclaimer: I never have), you have experienced metadata in the form of hashtags such as #ILikeCats or #MyPoliticianIsGreatYoursIsTheSpawnOfSatan. Metadata is a tag on an object that allows you to augment the information about the object.
Properties versus metadata
Metadata should not be confused with properties. Properties are something inherent to the object. If we extend our book metaphor then we would see
- Book title: Nine Princes in Amber
- Author: Roger Zelazny
- Instance properties
- Type: softcover
- Condition: average
Properties and metadata for tests
Given that properties and metadata allow you to organize tests, how do I categorize them? Equally important, what do I not include?
- Model name: Name of the unit under test
- Test harness: Link to the test harness for the unit under test
- Test name: Short descriptive name
- Description: Longer descriptive text that summarizes the test
- Requirement linkages: Links to any requirements covered by the test
- Data: Input, and expected outputs
- Supported projects: To support model reuse, tests should be tagged with the project(s) for which they are valid.
- Supported targets: A list of the targets on which the test is supported. Such as SIL, HIL or PIL testing.
- Test level: An indication of the frequency with which the test is run, e.g. check-in, nightly, weekly, build. More complex tests that take longer to run should have a higher level.
What not to include
- Model and data dependencies: These dependencies can be programmatically determined. Specification of these dependencies will, over time, become out of date.
- Version history: The version history should be included in the version control software.
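The properties and metadata above can be captured in a simple record. This sketch uses a Python dataclass with illustrative field names (not any real test manager's schema) and shows how the metadata enables selecting the right tests for a given project, target, and level:

```python
from dataclasses import dataclass

@dataclass
class TestRecord:
    model_name: str            # unit under test
    test_name: str
    description: str
    requirement_links: list
    supported_projects: list   # metadata: projects that reuse this test
    supported_targets: list    # e.g. "SIL", "HIL", "PIL"
    test_level: str            # "check-in", "nightly", "weekly", "build"

def select_tests(tests, project, target, level):
    """Pick the tests valid for one project/target at a given test level."""
    return [t for t in tests
            if project in t.supported_projects
            and target in t.supported_targets
            and t.test_level == level]

smoke = TestRecord("ctrl_model", "test_ctrl_step", "Step response check",
                   ["REQ-001"], ["projA"], ["SIL"], "check-in")
full = TestRecord("ctrl_model", "test_ctrl_profile", "Full drive profile",
                  ["REQ-002"], ["projA", "projB"], ["SIL", "HIL"], "nightly")
checkin_suite = select_tests([smoke, full], "projA", "SIL", "check-in")
```

Note that model and data dependencies are deliberately absent from the record; as stated above, those are determined programmatically.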
Why do we care?
In the course of normal development, there is both a local testing step and a global testing step. At the local level, developers run tests against their updated models. However, the developers are not expected to know the full scope of the model they are working with; hence the use of metadata and test properties to allow the test environment to fully exercise the model once it is checked in.
By leveraging the test properties and metadata we can more easily reuse tests. Absent the metadata tests would need to be duplicated across multiple test suites; duplication which can lead to the introduction of errors in the test environment.
Hot air rises; cold air settles. This is a fundamental law of nature, and yet most multi-level homes are not set up to handle this challenge (including mine). Having recently tried a “smart thermostat” and been very disappointed in its performance, I have started to conceptually design my own home solution.
Trial 1: Zone alone
My first trial implementation was a simple manual “zone control” system; e.g. in the winter, the top floors are shut off so the heat rises, in summer the bottom is shut off.
This approach worked somewhat, but since I have a three-story house and the thermostat is on the middle floor, I always had a temperature gradient that was greater than I liked.
Trial 2: Internet of things (IOT) and active zone control
We are now entering into the thought experiment part of the post.
In the diagram above, S1 ~ S3 are simple temperature sensors that would connect over wifi to the master controller (hence the IOT part of this project). The “Master Controller” would open and close the ducts for each zone and control the fan and the heating and cooling elements.
So far so standard…
What changes this from a standard system into an interesting problem (for me at least) are the optimization constraints that I am putting on the system:
OBJ 1: Minimize the heat differential across the floors
OBJ 2: Minimize the time to achieve target heat
OBJ 3: Minimize energy usage
OBJ 4: Prevent temperature over/undershoot (e.g. don’t let the top floor overheat in winter, don’t let the basement become an ice house in summer)
To meet these objectives I was going to need a plant model; one that would allow me to model
- Multiple zones
- Heat flow between the zones
- Heat input based on damper status
- Changing heat flux from the outside of the house (e.g. day/night cycles)
As often is the case I was able to start with a basic model from The MathWorks.
Winter's bane: the cold
Let's start with the wintertime problem: heat rises in the house from the lowest floor out through the attic. Fundamentally, the equation can be expressed as
q = -k * Δu
Basic heat flow due to a temperature differential.
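A toy discrete-time version of this equation for three stacked zones might look like the following; the conductance value and time step are made-up numbers, not measurements of my house:

```python
def step(temps, k, heat_in, dt=1.0):
    """One time step for [basement, middle, top]; q = -k * du between zones."""
    q12 = k * (temps[0] - temps[1])  # heat flow basement -> middle
    q23 = k * (temps[1] - temps[2])  # heat flow middle -> top
    return [
        temps[0] + dt * (heat_in[0] - q12),
        temps[1] + dt * (heat_in[1] + q12 - q23),
        temps[2] + dt * (heat_in[2] + q23),
    ]

# With no furnace input, heat conducts from the warm basement toward the top
temps = step([20.0, 18.0, 16.0], 0.1, [0.0, 0.0, 0.0])
```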
The second equation, the cost function, I expressed in the following fashion:
TotalCost = α1 * fobj_1 + α2 * fobj_2 + α3 * fobj_3 + α4 * fobj_4
For example, the first objective function can be written as
fobj_1= abs(T1 – T2) + abs(T2 – T3)
Setting the alpha weights
The cost function uses a set of α coefficients to set the weights for each objective. To set the value of those coefficients, two things need to be understood:
- The normalized value of the function
- The “priority” of the objective
If you told me only the priorities of the objectives (say, 4 for the temperature differential, 3 for the time to target, and 1 each for energy usage and overshoot), that would not be enough to set the alpha coefficients. For example, the first objective function has a maximum value of roughly 20 degrees while the second objective has a maximum value of 600 seconds (the last two had values of 45 and 3). Therefore my weighted objective functions become
α1 = 4 × (600/20) = 120
α2 = 3 × (600/600) = 3
α3 = 1 × (600/45) ≈ 13.3
α4 = 1 × (600/3) = 200
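The weight-setting step above can be reproduced in a few lines: each alpha is the objective's priority scaled by the ratio of the largest normalization value (600) to that objective's own maximum value:

```python
def alpha_weights(priorities, max_values):
    """alpha_i = priority_i * (largest normalization value / max_value_i)."""
    base = max(max_values)  # 600, the largest of the objective maxima
    return [p * base / m for p, m in zip(priorities, max_values)]

# Priorities and maxima quoted above for the four objectives
alphas = alpha_weights([4, 3, 1, 1], [20, 600, 45, 3])
```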
Augmenting the model
The final step in this project, before running the simulations and optimizations, is to augment the model to support the multiple zones. Since the duct controllers do not have position feedback, I have only on/off control over each zone. This was added to the model and the optimizations started.
I modeled heat loss in each of the zones, with the maximum heat loss in Zone 3, the highest floor. The heat loss on each floor had an impact on the optimal control results, but the general pattern was the same.
After the initial heating (note the basement started out colder), there is a repeated pattern in the vent control. The basement, due to heat convection, activates first; the top floor also activates. The middle floor, due to the convection from the basement and lower heat loss than the top floor, never activates.
The model I developed for the house is based on many assumptions; the implementation of the control algorithm allows me to have optimal outcomes regardless of the actual heat convection and heat loss properties of my house. If I am able to implement this in my house I will let you know how the model compared to the actual results.