The following is an idealized Model-Based Design workflow, from initial requirements to product release. The workflow assumes a multi-person team with the resources to support multiple roles. It all starts with requirements.
As I have written in previous posts, I recommend the use of reusable test utilities. In the text-based MATLAB environment, creating reusable utilities is straightforward: they are simply MATLAB functions. Within the graphical Simulink Test environment, however, the path may be less clear.
Libraries and Functions
Fortunately, there is a solution; if there wasn't, there would be no post today. Within the Simulink Test environment, calls can be made to functions. These functions can either return a value (or values) or directly set an assert or verify flag.
The functions are imported from a Simulink library and can be constructed from MATLAB or Simulink Function blocks.
In the case of MATLAB functions, they are placed in a Stateflow chart with the chart-level function export option selected.
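Outside of Simulink, the same idea can be shown in plain code: a reusable test utility is simply a function that every test calls instead of re-implementing the same check inline. A minimal, language-agnostic sketch in Python (the function name and tolerances are hypothetical, not part of Simulink Test):

```python
def verify_within_tolerance(actual, expected, tol):
    """Reusable test utility: return True when |actual - expected| <= tol."""
    return abs(actual - expected) <= tol

# Two different "tests" reusing the same utility instead of each
# re-implementing the tolerance check.
step_response_ok = verify_within_tolerance(10.02, 10.0, 0.05)
ramp_response_ok = verify_within_tolerance(9.0, 10.0, 0.5)
```

The Simulink library of MATLAB/Simulink Functions plays the same role as this shared function: one definition, many call sites.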
So there you have it, a simple solution to reusable test utilities within the Simulink Test environment.
Are you in or near Huntsville, AL? Would you like to meet me and learn more about MathWorks and MathWorks Consulting? Then come out to the MATLAB Aerospace and Defense Smart Systems Tech Briefing.
One of the rationales for adopting Model-Based Design is an expected Return On Investment (ROI). This raises three natural questions:
- What is the expected ROI?
- What is the timeframe for realizing the ROI?
- What is necessary to realize the ROI?
Unpacking the ROI questions
The first thing to recognize is that the ROI depends on the "level" of adoption of Model-Based Design. The more Model-Based Design processes that are used, the greater the ROI; however, there is a corresponding delay in realizing that ROI (see reference 1).
Further, the ROI depends on having a defined implementation plan. A full MBD process includes multiple tools and tasks; without a well-defined implementation plan, the dependencies between those tasks become muddled.
Assuming a well-defined implementation plan, most companies will start to see a return on investment after 9 months to 1 year. The majority of the ROI is generally realized after 3 years.
Hidden or “Negative” ROI
One aspect of Model-Based Design makes measuring ROI difficult: model-based approaches allow for the development of systems that are impossible (or at least extremely difficult) to develop using traditional approaches. In cases where MBD is used to create systems of high complexity, the measured ROI may understate the actual ROI because of the inherent complexity of the system.
Finally, what is the expected ROI? From industry examples, ROIs as high as 80% are known to be possible (see reference 2), with ROIs of 30-40% considered common. Again, these results depend on having a good implementation plan. Hopefully, this blog, or MathWorks, will help you develop that plan.
- "What is the benefit of a model-based design of embedded software systems in the car industry?" by Manfred Broy, Technical University of Munich, Germany
- "Measuring Return on Investment of Model-Based Design" by Joy Lin, MathWorks
- "Model-Based Design in Practice: A Survey of Outcomes for Engineers and Business Leaders" by Dr. Jerry Krasner, Chief Analyst, Embedded Market Forecasters
Short answer: the real world is noisy. If you write tests that assume clean input data you are not exercising the system in a real environment. So let us talk about noise.
Types of noise and sources of noise
For this article, I will define noise as signal data entering the system from outside the system. Sources of noise include:
- Resolution limits: All measuring devices have a limit to their resolution. A ruler with 1/8th-inch markings cannot accurately measure at 1/16th-inch resolution.
- External interference: Frequently, there are secondary effects that change the measurement. For example, when measuring a voltage it is common to pick up noise from other wires running nearby (which is why shielded cables are used for sensitive measurements).
- Dynamic properties: In some instances, the value of the property being measured is changing rapidly; any given measurement may be an outlier.
- Human error: For devices with human operators, well, we make mistakes in how we enter information…
The types of noise generally map onto the sources of noise.
- Quantization (resolution): Characterized by "jumps" in the value. Dynamic systems must be tolerant of these jumps; for static cases (e.g., post-run analysis), the jumps can be "smoothed" using filtering functions.
- White (external): Characterized by random values around the “actual” signal. Generally can be filtered using standard transfer functions.
- Outlier (dynamic): Characterized by occasional values outside the trending values. If the “standard” range is known then these outlier values can be ignored.
- Systematic (human): Characterized by systems being executed in a non-standard order. Systems need to be made recoverable from non-standard execution order.
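For the static (post-run) case mentioned above, "smoothing" can be as simple as a moving average. A minimal sketch in Python, not tied to any Simulink block (the window size and signal values are illustrative):

```python
def moving_average(signal, window):
    """Smooth quantization 'jumps' with a trailing moving average."""
    out = []
    for i in range(len(signal)):
        lo = max(0, i - window + 1)      # shrink the window at the start
        chunk = signal[lo:i + 1]
        out.append(sum(chunk) / len(chunk))
    return out

# A quantized signal that steps in 0.125 increments, smoothed post-run.
quantized = [0.0, 0.0, 0.125, 0.125, 0.25, 0.25]
smoothed = moving_average(quantized, 3)
```

A trailing window is only one choice; centered windows or standard low-pass filters serve the same purpose for white noise.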
Testing with wild noise
The basic strategy for testing with noise is to "inject noise" into the system under test. How we inject noise can again be mapped back to our four types of noise:
- Floor functions (quantization): Use a floor function to resolve signals to the nearest value of the input's resolution.
- White noise generator (white): White noise generators are common functions. One important note: if the same "seed" is used for the white noise in all runs, the test has an inherent flaw, since every run exercises an identical "random" sequence.
- White noise generator (outlier): There is a special case of the white noise generator where signals are more episodic and generally of larger magnitude. In these cases, a statistical model of the outlier signals is helpful in creating the generator.
- Decision tree analysis (human): Creating test cases for human error can be the most difficult. For state logic, it is possible to analyze the system to determine all possible paths.
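The first three injection strategies above can be sketched as plain functions. A minimal Python illustration (the resolutions, amplitudes, and probabilities are made-up values, not a Simulink implementation):

```python
import random

def quantize(x, resolution):
    """Floor the signal to the nearest multiple of the input resolution."""
    return resolution * (x // resolution)

def add_white_noise(x, amplitude, rng):
    """Add zero-mean white noise; use a fresh seed per run (see note above)."""
    return x + rng.uniform(-amplitude, amplitude)

def add_outlier(x, magnitude, probability, rng):
    """Occasionally inject a large-magnitude, episodic outlier."""
    if rng.random() < probability:
        return x + magnitude
    return x

# Fresh (time-based) seed each run, avoiding the fixed-seed flaw.
rng = random.Random()
noisy = [add_white_noise(quantize(v, 0.125), 0.01, rng) for v in (0.3, 0.6)]
```

Feeding `noisy` instead of the clean values into the system under test is the "inject noise" step.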
In the end, including noise in your tests will result in more robust systems.
This post, unlike many, is Simulink-centric and deals with the question of global signal data within Simulink models. So, first, what is "signal data"? Broadly speaking, within a Simulink model, data elements are broken into parameters (fixed values) and signals (values that change during execution). Signals are either calculated within the model or come in from the root level.
Within the model, the signal data is “scoped” to the line it is attached to, or in the case of a Stateflow chart or MATLAB function block, the scope of the chart/function.
Within Simulink, the exception to this rule is the Data Store. With Data Store Read and Write blocks, data can be shared between different parts of a model without connecting signal lines. Further, data stores can be shared with Stateflow charts and MATLAB functions.
In addition to acting as global data, Data Stores have the unique ability to be written to in multiple locations within a Simulink diagram. Because of this ability, they must be fully defined (data type, dimensions, and complexity) when they are first created.
Global data bad…
Global data is easy to work with; it allows you to quickly share information between functions and reduces interfaces. At the same time, it makes debugging more difficult (where was X set?) and reduces the reusability of code by expanding a function's dependencies. But there are times when global data is the correct solution.
When to use global data
So, with these downsides, when should global data be used? As a general rule of thumb, I advocate three uses:
- Error/Fault detection: By their nature error flags can be set by multiple causes. Because of this, the ability to write to an error flag in multiple locations is a valid rationale. Additionally, since the error flags may be needed in multiple places in the model (more so than normal data) the ability to pass this without routing is important.
- Mode data: A system should respond to mode changes all within the same execution step. Like error flags, Mode Data is shared across the full scope of a model.
- Reset flags: Reset flags are used to reset state behavior of integrators and transfer functions.
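The error-flag case above can be illustrated outside of Simulink with a module-level flag written from multiple places, playing the role of a Data Store with several Write blocks. A Python sketch (the check functions and limits are hypothetical):

```python
# Module-level "data store": one shared error flag, writable from
# multiple detection functions, readable anywhere in the module.
fault_detected = False

def check_over_temperature(temp_c, limit=100.0):
    """One possible cause: over-temperature trips the shared flag."""
    global fault_detected
    if temp_c > limit:
        fault_detected = True

def check_over_voltage(volts, limit=14.0):
    """Another cause, in a different part of the 'model'."""
    global fault_detected
    if volts > limit:
        fault_detected = True

check_over_temperature(85.0)   # within limits, flag untouched
check_over_voltage(15.2)       # trips the shared flag
```

The trade-off described above is visible here: either check can set the flag, which is exactly why "where was this set?" becomes a debugging question.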
As a final note, the global property of data in Simulink models should not be confused with the scope of the data in the generated code. The scope of the data in the generated code (for both parameters and signals) can either be determined automatically by Embedded Coder or controlled through Data Objects. This will be covered in a future post.
At some point in the software development cycle, the question of a single- or multi-threaded environment will come up. With multi-core processors now common in embedded devices, this is an increasingly frequent issue. Let's take a look at some of the trade-offs between single- and multi-threaded environments.
Single threading
It just works: the program runs from start to finish in a set order, and you know when everything happens relative to everything else. However, it may be slower than it needs to be if some of the operations could take place in parallel. If you do not have timing constraints, this is a fine option to take.
Multi-threading
If single threading can be described as "just working," then multi-threading needs to be characterized in a different fashion. We will start with a basic understanding of threads. A thread is the smallest unit of execution that an OS can instantiate; threads are either event-based or periodic (temporal). Threaded operating systems can be either preemptive or non-preemptive.
Packaging your threads
Each thread should exhibit a high degree of independence from other threads; meaning the operations of “Thread A” should have a minimum dependence on the data from “Thread B.” The key word here of course is “should.” In the end the threads will need to exchange data and that is one of the complications of multi-threaded environments.
Data locking and synchronization
In a multi-threaded environment, a lock (or mutex) is a method for ensuring that a memory resource is not in use by multiple threads at the same time. For example, if you have a shared memory space, you do not want two threads writing to it at the same time (or one reading while the other is writing).
Locks provide a way of synchronizing data between threads; however, they slow down the process, since a thread cannot continue until the data is unlocked. When the operation of one thread depends on the outputs of another, and the locking and data synchronization are not handled correctly, a race condition can occur.
Debugging multithreaded environments
Bugs in multithreaded programs generally occur when the actual order of execution does not match the intended order of execution. This can be due to:
- A thread failing to start
- A data synchronization failing
- A thread taking longer than expected and preventing another thread from running
Use of a debugger to "walk through" the code is often required to get to the root cause of the issue. However, if the bug is due to an overrun, the debugger may not catch the error, because in debugging mode you are not subject to the same timing limitations. In this case, either a trace log or even an oscilloscope can be employed.
For more information on debugging multithreaded environments, I suggest these links
- MSVC Debugging multithreaded issues
- Dr Dobbs: Debugging multithreading
- Stack overflow forum discussion
- Using Polyspace for detecting race conditions
Everyone knows that best practice tells us to put comments into our code and models; some of us even do it. The question for today's blog is: what makes a good comment?
The good, the bad and the just simply useless
NOTE: all examples of “bad comments” are taken from real-world experience.
- Do not repeat the information in the model:
For the given line of code:
output = input * 2;
- Bad comment: /* The output is equal to 2 times the input */
- Good comment: /* Investment pays two times the deposit */
- Useless: /* Multiplication operation */
- Explain why something is done in a given way:
For the given line of code:
bound = min(upper,max(lower,length));
- Bad comment: /* Run min/max functions on length */
- Good comment: /* Limit output between upper and lower bounds */
- Useless: /* Bound value is bound */
- Are clearly written: Comments should be written in the native language, following standard grammatical rules. Avoid slang and abbreviations.
- Are "as long as they need to be": Comments do not take the place of requirements. They are in place to explain part of the model or code. As such, they should be written so they explain the concept and no more.
Why do we comment? What comments can't do…
Comments aid people in understanding what the model or code is intended to do. Comments need to be maintained as the source is updated; there is nothing worse than a 3-year-old comment that has no relationship to the current object. Finally, comments cannot take the place of well-written code. (In all honesty, I have no idea what the screenshot that follows does….)
Memories of past work
I began my work with MBD nearly 25 years ago at General Motors, in the auto industry. At the time, MBD was new and, honestly, the tools were less mature. People felt a need for improvements, but it was not at all clear how to proceed. It was the "Wild West."
Over time, best practices developed out of industry experience. By the early 2000s, modern processes were in place.
This moment for the medical device industry reminds me of that time in the auto industry (but with mature tools).
Old problems, old questions, new answers
The medical device industry is seeing the same problems the auto industry saw 20 years ago; in addition, it faces regulatory questions. The medical device industry is, quite naturally, cautious when adopting new processes. Because of this, "use cases" from other industries are required for validation of the process.
Three things: First, for everything that was clear in my German writing, I have my colleagues to thank; for everything that was not clear, I offer my apologies. Second, Munich is a city with beautiful buildings and wonderful people. Last, I have learned to eat pretzels for breakfast, and that was very good.
If the heart of agile development can be seen in the concepts of quick iterations, leveraging test points for quality assurance, and close team-based collaboration, then Model-Based Design is the veins and blood that compose the body of your work.
Agile is a concept and a process; how that concept is implemented is up to the development team.
If we review the key concepts behind Model-Based Design and agile development, the mapping between them is obvious.
Use models for architectural decomposition: Models are used to break down large problems into smaller components. These smaller components can easily be integrated into larger system-level models created by other people in the development team. The use of models and a modeling architecture strongly supports close team-based collaboration.
Use of simulation: Simulation is the younger brother of testing. Using models, developers can quickly and easily exercise their designs to determine the functional correctness of the system under test. Once the initial models are "correct," they can be locked down with a set of formal tests. Those formal tests are often derived directly from the simulations used during design.
Model as the single truth: When we look at the elaboration process that a Model-Based Design process follows it is clear that the iterative nature of an agile process is a close fit. Models can both provide a tight integration with requirements while allowing for the fast evolution of those requirements. In fact, the use of simulation as part of the development process allows developers to quickly find issues with their requirements.
Agile design processes are only as good as the people who commit to them. A good understanding of what is and is not part of the agile development process is important to the success of the project. (This is, of course, true of any product development.) For another perspective on Agile development and Model-Based Design, this link provides a good overview.
This post is a companion to the "Automation do's and don'ts" post. Here I will examine the organizational hurdles that stall the creation of reusable components.
The reuse of software is a common objective stated by most companies, but, with the exception of a narrow set of cases, most companies struggle with it. In my experience, there are six reasons for this struggle:
- Lack of ownership: There isn’t a group (or person for smaller companies) who has the responsibility and authority to ensure this task succeeds.
Note: Often the lack of authority on the person's or group's part is the larger part of the problem.
- Failure to allocate time: Turning a component into a reusable component can add between 10% to 15% to the development time. If time is not budgeted for the additional development a “buggy” reusable component is released.
- Lack of awareness/documentation: The greatest software tool is useless if no one knows about it or it is poorly documented.
- Narrow use case: The component is created, but its use is so limited that only a few people will ever need it.
- Wide use cases: Wide use cases often lead to complex reusable components that either do nothing well or become so bloated that they are difficult to configure and maintain.
- Bugs: Every time a person uses a "reusable component" and it fails to do what it is supposed to do, it discourages people from reaching for reusable components in the future.
So how do you avoid those pitfalls?
What type of reuse?
I break reuse down into two categories: formal and informal. Informal reuse is common for individuals and within small groups. It is when a component is regularly used to perform a task by people who know how to use it well or are able to work with its "quirks."
Informal reuse is a good practice; however, it should not be confused with formal reuse, which is the topic of this post. With formal reuse, the component is used by people who are not experts in the underlying assumptions and methods of the object. Because of this, they are not tolerant of "quirks" and need a solution that is dependable and documented.
It should be noted that many “failed” reuse attempts arise out of taking informal reusable components and treating them like formal reusable components.
Deciding when to reuse
Before I automate a process, I ask myself the following questions to prevent the "too narrow" and "too wide" blocking issues.
- How often do I perform the task?
Once a day? Once a week? Once a quarter?
- How long does the task take?
How long does the task take, both for myself and for the system running the process?
- Do others perform this task?
Do they follow the same process? Does variance in the process cause problems? Do you have a way to push the automation out to others?
- How many decision points are there in the process?
Decision points are a measure of the complexity of the process.
- Is the process static?
Is the process still evolving? If so how often does it change?
- Is it already automated?
Oddly enough if you found it worthwhile to automate someone else may have already done the work.
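The first two questions above can be turned into a rough break-even calculation: time saved per month versus the time invested in building the automation. A back-of-the-envelope Python sketch (the function name and the assumption that automation reduces the task to zero time are mine, for illustration):

```python
def automation_break_even(task_minutes, runs_per_month, build_hours):
    """Months until the time saved pays back the automation effort.

    Assumes the automation reduces the manual task to (roughly) zero time.
    """
    saved_per_month = task_minutes * runs_per_month / 60.0  # hours/month
    return build_hours / saved_per_month

# A 15-minute task performed 20 times a month, automated in 10 hours,
# pays for itself in 2 months.
months = automation_break_even(15, 20, 10)
```

Anything that pays back within a release cycle is usually worth automating; a multi-year payback suggests the "too narrow" case.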
Issues 1 (lack of ownership), 3 (lack of awareness/documentation), and 6 (bugs) can be addressed by having a person or group whose task is creating and maintaining components.
The maintenance of a component has three primary tasks. First, the creation of test cases to ensure the component continues to work as expected. Second, updating the component to support new use cases. Third, knowing when to "branch" components to keep them from becoming too complicated.
For some organizations allocating time to the development process can be the greatest hurdle to creating reusable components. The time invested does not show an immediate return on investment and there are pressing deadlines. However, if the rules of thumb in “deciding when to reuse” are followed the long-term benefits will outweigh the short-term cost.
The final topic is how to encourage engineers to actually reuse the components. This is, in part, dependent on how well the components are documented and how easy they are to access. In the end, engineers need to understand how reuse benefits them; e.g., less time spent "reinventing the wheel" and more time to work on their actual projects.