Unlike many posts, this one is Simulink-centric and deals with the question of global signal data within Simulink models. First, what is "signal data?" Broadly speaking, within a Simulink model, data elements are broken into parameters (fixed) and signals (things that change). Signals are either calculated or come in from the root level.
Within the model, signal data is "scoped" to the line it is attached to or, in the case of a Stateflow chart or MATLAB Function block, to the scope of the chart/function.
Within Simulink, the exception to the rule is the Data Store. With Data Store Read and Write blocks, data can be shared between different parts of a model without the use of connecting signal lines. Further, data stores can be shared with Stateflow charts and MATLAB functions.
In addition to acting as global data, Data Stores have the unique ability to be written to in multiple locations within a Simulink diagram. Because of this ability, they must be fully defined with the data type, dimensions, and complexity when they are first created.
Global data bad…
Global data is easy to work with; it allows you to quickly share information between functions and to reduce interfaces. At the same time, it makes debugging code more difficult (where was X set?) and reduces the reusability of code by expanding the dependencies of a function. But… there are times when global data is the correct solution.
When to use global data
So, with these downsides, when should global data be used? As a general rule of thumb, I advocate for three uses:
- Error/Fault detection: By their nature error flags can be set by multiple causes. Because of this, the ability to write to an error flag in multiple locations is a valid rationale. Additionally, since the error flags may be needed in multiple places in the model (more so than normal data) the ability to pass this without routing is important.
- Mode data: A system should respond to mode changes all within the same execution step. Like error flags, Mode Data is shared across the full scope of a model.
- Reset flags: Reset flags are used to reset state behavior of integrators and transfer functions.
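The error-flag use case can be illustrated outside of Simulink as well. Below is a minimal Python sketch of a globally visible fault record that is written from multiple locations, analogous to a Data Store with multiple Write blocks; the class, function, and limit names are hypothetical, chosen only for illustration.

```python
# Hypothetical sketch: a global fault record written from multiple
# locations, analogous to a Simulink Data Store shared across a model.
class FaultFlags:
    """Global fault state shared by every check (names are illustrative)."""
    def __init__(self):
        self.over_temp = False
        self.over_voltage = False

    def any_fault(self):
        # The flag is read here without routing it through interfaces.
        return self.over_temp or self.over_voltage

faults = FaultFlags()  # single, globally visible instance

def check_temperature(temp_c, limit_c=85.0):
    if temp_c > limit_c:
        faults.over_temp = True    # write location 1

def check_voltage(volts, limit_v=12.6):
    if volts > limit_v:
        faults.over_voltage = True  # write location 2

check_temperature(90.0)   # latches the over-temperature fault
check_voltage(12.0)       # within limits, no fault
print(faults.any_fault())  # → True
```

The convenience is real, and so is the downside noted above: to answer "where was `over_temp` set?" you must search every write location.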
As a final note, the global property of data in Simulink models should not be confused with the scope of the data in the generated code. The scope of the data in the generated code (for both parameters and signals) can either be determined automatically by Embedded Coder or controlled through Data Objects. This will be covered in a future post.
At some point in the software development cycle, the question of a single- or multi-threading environment will come up. With multi-core processors now more common in embedded devices, this is a more frequent issue. Let's take a look at some of the trade-offs between single- and multi-threaded environments. For additional information, I recommend the following links.
Single threading
It just works: the program runs from start to finish in a set order, and you know when everything happens relative to everything else. However, it may be slower than it needs to be if some of the operations could take place in parallel. If you do not have timing constraints, this is a fine option to take.
If single threading can be described as "just working," then multi-threading needs to be characterized in a different fashion. We will start with a basic understanding of threads. A thread is the smallest unit of execution that an OS can instantiate; threads are either event-based or periodic (temporal). Threaded operating systems can be either preemptive (threads can be interrupted) or non-preemptive.
Packaging your threads
Each thread should exhibit a high degree of independence from other threads, meaning the operations of "Thread A" should have minimal dependence on the data from "Thread B." The key word here, of course, is "should." In the end, the threads will need to exchange data, and that is one of the complications of multi-threaded environments.
Data locking and synchronization
In a multi-threaded environment, a lock (or mutex) is a method for ensuring that a memory resource is not in use by multiple threads at the same time. For example, if you have a shared memory space, you do not want two threads writing to it at the same time (or one reading while the other is writing).
Locks provide a way of synchronizing data between threads; however, they slow down the process, since a thread cannot continue until the data is unlocked. When the operation of one thread depends on the outputs of another, and the locking and data synchronization are not handled correctly, a race condition can occur.
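A minimal sketch of a lock protecting shared data, using Python's standard `threading` module. The classic failure it prevents is a lost update: without the lock, two threads can interleave the read-modify-write of the shared counter.

```python
import threading

counter = 0
counter_lock = threading.Lock()

def add_many(n):
    global counter
    for _ in range(n):
        # Without the lock, the read-modify-write below can interleave
        # between threads and updates can be lost (a race condition).
        with counter_lock:
            counter += 1

threads = [threading.Thread(target=add_many, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # → 400000; without the lock, often less
```

Note the cost mentioned above: each increment now waits for the lock, so the locked version is slower than a single-threaded loop doing the same work.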
Debugging multithreaded environments
Bugs in multithreaded programs generally occur when the actual order of execution does not match the intended order of execution. This can be due to:
- A thread failing to start
- A data synchronization failing
- A thread taking longer than expected and preventing another thread from running
Use of a debugger to "walk through" the code is often required to get to the root cause of the issue. However, if the bug is due to an overrun, the debugger may not catch the error, because in debugging mode you are not subject to the timing limitations. In this case, either a trace log or even an oscilloscope can be employed.
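A trace log can be as simple as a timestamped, thread-tagged event buffer. The sketch below (function and thread names are illustrative) shows the idea: each thread records when it starts and finishes, and the merged trace reveals the actual order of execution without the distortion of stepping through a debugger.

```python
import threading
import time

trace = []                      # shared trace buffer
trace_lock = threading.Lock()

def log(event):
    # Timestamped, thread-tagged entry; cheap enough to leave in
    # timing-sensitive code where a debugger would mask the bug.
    with trace_lock:
        trace.append((time.monotonic(), threading.current_thread().name, event))

def worker():
    log("start")
    time.sleep(0.01)            # stand-in for real work
    log("end")

threads = [threading.Thread(target=worker, name=f"T{i}") for i in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

for ts, name, event in trace:
    print(f"{ts:.6f} {name} {event}")
```

Reading the merged timeline makes ordering bugs (a thread that never started, or one that ran far longer than expected) visible at a glance.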
For more information on debugging multithreaded environments, I suggest these links
- MSVC Debugging multithreaded issues
- Dr Dobbs: Debugging multithreading
- Stack overflow forum discussion
- Using Polyspace for detecting race conditions
Everyone knows that best practice tells us to put comments into our code/models; some of us even do it. The question for today's blog is: "what makes a good comment?"
The good, the bad and the just simply useless
NOTE: all examples of “bad comments” are taken from real-world experience.
- Do not repeat the information in the model:
For the given line of code:
output = input * 2;
- Bad comment: /* The output is equal to 2 times the input */
- Good comment: /* Investment pays two times the deposit */
- Useless: /* Multiplication operation */
- Explain why something is done in a given way:
For the given line of code:
bound = min(upper,max(lower,length));
- Bad comment: /* Run min/max functions on length */
- Good comment: /* Limit output between upper and lower bounds */
- Useless: /* Bound value is bound */
- Are clearly written: Comments should be written in the native language, following standard grammatical rules. Avoid slang and abbreviations.
- Are “as long as they need to be”: Comments do not take the place of requirements. They are in place to explain part of the model/code. As such they should be written so they explain the concept and no more.
Why do we comment? What comments can't do…
Comments aid people in understanding what the model or code is intended to do. Comments need to be maintained as the source is updated; there is nothing worse than a three-year-old comment that has no relationship to the current object. Finally, comments cannot take the place of well-written code. (In all honesty, I have no idea what the screenshot that follows does….)
Memories of past work
I began my work with MBD nearly 25 years ago with General Motors, in the automotive industry. At that time MBD was new and, honestly, the tools were less mature. People saw the need for improvements, but it was not at all clear how to proceed. It was the "Wild West."
Over time, best practices developed out of industry experience. By the early 2000s, modern processes were in place.
This period in the medical device industry reminds me of that time in the automotive industry (but with mature tools).
Old problems, old questions, new answers
The medical device industry is seeing the same problems the automotive industry saw 20 years ago; in addition, it faces regulatory questions. The medical device industry is, quite naturally, cautious when adopting new processes. Because of this, "use cases" from other industries are required for validation of the process.
Three things: first, for everything that is clear in my German writing, I have my colleagues to thank; for everything that is not, I offer my apologies. Second, Munich is a city with beautiful buildings and wonderful people. Last, I have learned to eat pretzels for breakfast, and that was very good.
If the heart of agile development can be seen in the concepts of quick iterations and leveraging test points for quality assurance, coupled with close team-based collaboration, then Model-Based Design is the veins and blood that compose the body of your work.
Agile is a concept and a process; how that concept is implemented is up to the development team.
If we review the key concepts behind Model-Based Design and Agile Development, the mapping between them is obvious.
Use models for architectural decomposition: Models are used to break down large problems into smaller components. These smaller components can easily be integrated into larger system-level models created by other people in the development team. The use of models and a modeling architecture strongly supports close team-based collaboration.
Use of simulation: Simulation is the younger brother of testing. Using models, developers can quickly and easily exercise their designs to determine the functional correctness of the system under test. Once the initial models are "correct," they can be locked down with a set of formal tests. Those formal tests are often derived directly from the simulations used for design.
Model as the single truth: When we look at the elaboration process that a Model-Based Design process follows it is clear that the iterative nature of an agile process is a close fit. Models can both provide a tight integration with requirements while allowing for the fast evolution of those requirements. In fact, the use of simulation as part of the development process allows developers to quickly find issues with their requirements.
Agile design processes are as good as the people who commit to them. A good understanding of what is and what is not part of the agile development process is important to the success of the project. (This is, of course, true of any product development.) For another perspective on Agile development and Model-Based Design, this link provided a good overview.
This post is a companion post to the “Automation do’s and don’ts”. Here I will examine organizational hurdles that stall the creation of reusable components.
The reuse of software is a common objective stated by most companies, but, with the exception of a narrow set of cases, most companies struggle with this objective. In my experience, there are six reasons for this struggle:
- Lack of ownership: There isn’t a group (or person for smaller companies) who has the responsibility and authority to ensure this task succeeds.
Note: often the lack of authority on the person's/group's part is the larger part of the problem.
- Failure to allocate time: Turning a component into a reusable component can add between 10% to 15% to the development time. If time is not budgeted for the additional development a “buggy” reusable component is released.
- Lack of awareness/documentation: The greatest software tool is useless if no one knows about it or it is poorly documented.
- Narrow use case: The component is created, but its use is so limited that only a few people will ever use it.
- Wide use cases: Wide use cases often lead to complex reuse components that either do nothing well or become so bloated that they are difficult to configure and maintain.
- Bugs: Every time a person uses a "reusable component" and it fails to do what it is supposed to do, it discourages people from looking at reusable components again.
So how do you avoid those pitfalls?
What type of reuse?
I break reuse down into two categories: formal and informal. Informal reuse is common for individuals and within small groups. It is when a component is regularly used to perform a task by people who know how to use it well or are able to work with its "quirks."
Informal reuse is a good practice; however, it should not be confused with formal reuse, which is the topic of this post. With formal reuse, the component is used by people who are not experts on the underlying assumptions and methods of the object. Because of this, they are not tolerant of "quirks" and need a solution that is dependable and documented.
It should be noted that many “failed” reuse attempts arise out of taking informal reusable components and treating them like formal reusable components.
Deciding when to reuse
Before I automate a process, I ask myself the following questions to prevent the "too narrow" and "too wide" blocking issues.
- How often do I perform the task?
Once a day? Once a week? Once a quarter?
- How long does the task take?
How long does the task take, both for myself and for the system running the process?
- Do others perform this task?
Do they follow the same process? Does variance in the process cause problems? Do you have a way to push the automation out to others?
- How many decision points are there in the process?
Decision points are a measure of the complexity of the process.
- Is the process static?
Is the process still evolving? If so how often does it change?
- Is it already automated?
Oddly enough, if you found it worthwhile to automate, someone else may have already done the work.
Issues 1 (lack of ownership), 3 (lack of awareness/documentation), and 6 (bugs) can be addressed by having a person or group whose task is creating and maintaining components.
Maintenance of a component has three primary tasks. First, the creation of test cases to ensure the component continues to work as expected. Second, updating the component to support new use cases. Third, knowing when to "branch" components to keep them from becoming too complicated.
For some organizations allocating time to the development process can be the greatest hurdle to creating reusable components. The time invested does not show an immediate return on investment and there are pressing deadlines. However, if the rules of thumb in “deciding when to reuse” are followed the long-term benefits will outweigh the short-term cost.
The final topic is how to encourage engineers to actually reuse the components. This is, in part, dependent on how well the components are documented and how easy they are to access. In the end, engineers need to understand how reuse benefits them; e.g., less time spent "reinventing the wheel" and more time to work on their actual projects.
Anyone who has worked with software for more than three years knows that migration between software releases is a fact of life; having that process be smooth and easy is not always a fact of life (anyone remember the Windows ME pains?).
Making migration easy(er)
One of my early swimming coaches was famous for saying “you win the race by how you train.” I have found this advice to be true in most aspects of my life. Projects succeed or fail based on the preparation you do as much as your execution.
Preparing for migration
In preparing for migration I start by asking 3 questions
- What things are we doing now that are working well?
- What things are we doing now that are hard to do?
- What things do we want to do that we can’t do now?
The first question focuses on maintaining current functionality. The second and third look at how to make things better. Improvements to processes can be made either through refactoring of existing processes (or creating new processes) or through the adoption of new tools.
One of the critical things to keep in mind with software upgrades is that it is not just changing tools. It is, or should be, about changing processes. [Note: for minor migrations of a single tool the associated processes may or may not require updates.]
A few thoughts on type two problems
The "type 2" problems ("what things are we doing now that are hard to do?") can be broken down further.
- The process runs slowly: Frequently, but not always, software upgrades can provide an increase in speed. Additionally, process changes may provide speed improvements.
- The process is complicated to execute: Complex processes can be difficult to execute. Often a complex process came about due to limitations in the tools at the time it was initially developed.
- The process has bugs: Before upgrading, validate that the bugs in the current software have been resolved in the new version.
The more things change the more they stay the same…
When you upgrade you still want some things to be static: your results. The best method for ensuring that your results (deliverables, code,…) remain the same is by developing test cases that “lock down” your deliverables.
When comparing test results between different tools, there are a couple of things to keep in mind. First, for every test an "acceptable" change should be defined, as there may be small deviations that have no effect on the overall system's performance (though for some tests no change will be allowed). Second, in some cases testing in newer versions of the software may uncover bugs that were not detected before.
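The "lock down" comparison can be sketched in a few lines of Python. The function and data below are hypothetical: each test gets a per-test tolerance (zero for tests where no change is allowed), and any deviation beyond it is reported.

```python
# Sketch of "locking down" results across a tool migration: compare new
# results against a stored baseline with a per-test acceptable tolerance
# (a tolerance of 0 for tests where no change is allowed).
def compare_results(baseline, candidate, tolerance=0.0):
    """Return a list of (index, baseline, candidate) mismatches."""
    mismatches = []
    for i, (b, c) in enumerate(zip(baseline, candidate)):
        if abs(b - c) > tolerance:
            mismatches.append((i, b, c))
    return mismatches

old_run = [1.000, 2.500, 3.1400]
new_run = [1.0001, 2.500, 3.1405]  # small numeric drift after the upgrade

print(compare_results(old_run, new_run, tolerance=1e-3))  # → [] (drift accepted)
print(compare_results(old_run, new_run, tolerance=0.0))   # reports indices 0 and 2
```

The design point is that the tolerance is an explicit, reviewed number per test, not an implicit "looks close enough" judgment made during the migration.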
Testing the testing environment
As a last note: if, as part of your migration, you are updating your testing environment, then you need to validate the behavior of the testing environment itself. This is generally done through manual inspection of a subset of the full test suite. The key factor is to have a subset that contains every type of test performed by the testing environment.
In one of my first consulting engagements, over 15 years ago, a customer said something to me about working in a group that sticks with me to this day.
“Nobody knows my mind like I do, and even I don’t always know it.” (anon)
What he was joking about are the issues that arise when working as part of a group.
Benefits of team-based development
Before we address the challenges of team-based development let us talk about the benefits. Broadly speaking there are 3 primary types of benefits.
- Faster development time: By distributing the work over multiple people the project can be completed more quickly.
- Multiple areas of expertise: Additional people can bring domain-specific knowledge to a project.
- Error reduction: Having multiple people work on a project can reduce the chance of “developer blindness” where you do not see your own mistakes.
- Chance for “team” lunches: When you work as part of a group you can have group celebrations. When you work by yourself it is just lunch.
What are the challenges?
There are three primary types of challenges for team-based development. They are
- Communication: Both ensuring that all information required is communicated and that it is clearly expressed.
- Blocking: When more than one person requires direct access to a set of files for their work, or their work is dependent on another person.
- Standards: Every developer has different ways of solving problems; in some instances, these approaches will be in conflict.
Mitigating the challenges
As the title of this section states, these challenges can be mitigated, but never fully eliminated. The following recommendations will help reduce these challenges.
Challenge 1: Communication
Good communication starts with a commitment to good communication; the team needs to recognize the need for some form of formal transfer of knowledge. Often this takes the form of a requirements document. However, it is not enough just to have a requirements document; it needs to be used. Use of a requirements document implies the following:
- Referenced: The requirement document is referenced during the creation of artifacts
- Tested: Test cases are derived from the requirements documented
- Traced: The use of the requirements is validated throughout the development cycle
- Living: The document is updated as changes are required.
Failure to follow these steps will lead to communication breakdown.
Challenge 2: Blocking
Blocking is addressed through architectural constructs and version control methodologies. Models can be architected to allow each person to work on individual components while still facilitating integration into a larger system-level model. In instances where two people need to work on the same model and it cannot be subdivided, version control software can be used to create a branch for each person to work on, merging their changes once they have completed their work.
It is of the highest importance to validate the model's behavior after the merge, to ensure both that the functionality added by each person still works in the merged model and that the baseline functionality has not changed.
Challenge 3: Standards
While standards may be complete or incomplete, there is no “right” standard. The key is complying with those standards.
A “complete” standard is often a series of standards addressing
- Each stage in the development:
- How to write requirements
- How to write tests
- The handoff between stages:
- What artifacts are created when you start developing a model
- What artifacts are created when you run a test
In this post, I have not specifically addressed Model-Based Design. The recommendations for mitigation can be directly linked to earlier posts I have made on topics such as modeling standards, version control, and model architecture. Finally, with models acting as the “single source of truth” during the development cycle many of the handoffs and blocking issues of team-based development can be avoided.
I am happy to write that for the second time I will be presenting at the Software Design for Medical Devices conference in Munich, Germany, Feb 19th and 20th. I will be in Munich for the balance of the week answering questions about Model-Based Design, both for the medical industry and in general. If you are based in or around Munich, please feel free to contact me.
Mit freundlichsten Grüßen, Michael
When I release a model, it will:
- Reach 100% requirements coverage for the model
- Reach 90% test coverage of requirements
- With 100% passing
- Be in full compliance with 70 Modeling Guidelines
- Reach 90% compliance with an additional 7
- Achieve 95% MISRA compliance
- 100% with exception rationale
However, if I asked anyone to reach these levels early on in the development process then I would both slow down the process and increase the frustration of the developers.
What is a phased approach to verification?
The phased approach to verification imposes increasing levels of verification compliance as the model progresses from the research phase to the final release.
The following recommendations are rough guidelines for how the verification rigor is applied at each phase.
The research phase has the lowest level of rigor. The model and data from this phase may or may not be reused in later phases. The model should meet the functional requirements within a predetermined tolerance. Modeling guidelines, requirements coverage, and other verification tasks should not be applied at this phase.
With the model in the initial phase, we have the first model that will be developed into the released model. With this in mind, the following verification tasks should be followed
- Verify the interface against the specification: The model’s interface should be locked down at the start of development. This allows the model to be integrated into the system level environment.
- Comply with model architecture guidelines: Starting model development with a compliant architecture prevents the need to rearchitect the model later in development.
- Create links to high-level requirements: The high-level requirements should be established with the initial model.
The development phase is an iterative process. Because of this, the level of verification compliance will increase as the model is developed. As would be expected, requirements coverage will increase as the requirements are implemented. The verification of the requirements should directly follow their implementation.
With respect to increasing modeling guideline and MISRA compliance, in general I recommend the following:
- 50% guideline compliance/MISRA at the start of the development phase
- 70% guideline compliance/MISRA when 50% of the requirements are implemented
- 90% guideline compliance/MISRA when 80% of the requirements are implemented
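The ramp above is just a threshold lookup, and expressing it as one makes the policy easy to review and adjust. The function below is a hypothetical sketch using the three thresholds from this post:

```python
def target_compliance(requirements_done_pct):
    """Suggested guideline/MISRA compliance target (percent) during the
    development phase, using the 50/70/90 thresholds from this post."""
    if requirements_done_pct >= 80:
        return 90
    if requirements_done_pct >= 50:
        return 70
    return 50

print(target_compliance(10))  # → 50
print(target_compliance(60))  # → 70
print(target_compliance(85))  # → 90
```

Projects with stricter needs (safety-critical workflows, reused components) would simply raise the returned targets, as discussed below.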
With the release phase, I finally hit the targets I initially described. Entering this phase from development all of the functional requirements should be complete. The main task of the release phase is the final verification of requirements and compliance with guidelines (model and code).
Additionally, the release phase may include a “targeting” component; where the model which was designed for a generic environment is configured for one or more types of target hardware. In this case the functionality of the component should be verified for each target.
Ramping up compliance with verification tasks is a standard workflow. The suggested levels of compliance during the development phase should be adjusted based on a number of factors including
- Reuse of components: When components are reused the compliance should be higher from the start of development.
- Compliance requirements: If you are following a safety critical workflow, such as DO-178C or IEC-61508, then the compliance should be higher from the start of development.
- Group size: The more a model is shared among multiple people, the sooner the model should be brought into compliance with modeling guidelines. This facilitates understanding of the model under development.