In past posts, I have written about writing understandable models (Simulink and Stateflow). With this post, I want to address measures of clarity (or its inverse, complexity). Note: in this post, I will be focusing specifically on Simulink metrics as contrasted with C-based metrics.
Measurements in the C world
So what should be measured, and how do you evaluate the measurements? In traditional C-based development there are multiple metrics, such as…
- Lines of code (LOC): A simple measure of overall project “size”.
Note: A sub-metric is lines of code per function (LOC/Func); a rough counting sketch follows this list.
- Coding standard compliance: A set of guidelines for how code should be formatted and structured (e.g. MISRA)
- Cyclomatic complexity: A measure of the number of logical paths through the program
- Depth of inheritance: A C++ measure of how deep the class definition extends to the “root” class. Can be applied to function call depth as well.
- Reuse: The degree to which code is reused in the project.
Note: a better measure is the degree of reuse across projects, but this is more difficult to capture with automated tools.
- Coupling/Cohesion: Measures of the direct dependencies of modules on other modules. Loose coupling supports modular programming
- Much more…: A list of some additional code metrics can be found here: Software Metrics : Wikipedia
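As a rough illustration of the simplest of these metrics, the sketch below counts physical lines of code for a folder of C source files from within MATLAB. The folder name is a placeholder, and skipping only blank lines and whole-line // comments is a simplifying assumption; a real LOC tool would also handle block comments and report LOC per function.

```matlab
% Rough LOC count over a folder of C source files (illustration only).
% srcDir is a placeholder; blank lines and whole-line // comments are
% skipped, block comments and per-function LOC are not handled.
srcDir   = 'mySourceFolder';
files    = dir(fullfile(srcDir, '*.c'));
totalLOC = 0;
for k = 1:numel(files)
    txt   = fileread(fullfile(srcDir, files(k).name));
    lines = strtrim(strsplit(txt, newline));
    keep  = ~cellfun(@isempty, lines) & ~strncmp(lines, '//', 2);
    totalLOC = totalLOC + sum(keep);
end
fprintf('Approximate LOC across %d files: %d\n', numel(files), totalLOC);
```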
Model-Based Design metrics
Within the Model-Based Design world, there are metrics that map directly onto, or are analogous to, their C-based counterparts.
- Total block count (TBC): The total block count maps onto the LOC metric. Likewise, a Blocks per Atomic Subsystem metric can be compared to the LOC/Function metric (see the sketch after this list).
- Guideline compliance: Modeling guidelines, such as the MAAB guidelines, map onto the C-based coding guidelines.
- Model complexity: Maps onto cyclomatic complexity. It should be noted that the model complexity and the cyclomatic complexity of the generated code will be close, but not an exact match.
- Subsystem/reference depth: A measure of how many layers of hierarchy exist in the model
- Reuse: The use of libraries and referenced models maps directly onto the code reuse metric.
- Coupling: Simulink models do not have an analogous metric for coupling. By their nature, they are atomic units without coupling.
- Much more….
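As an example of how the first of these metrics can be gathered, the sketch below uses find_system to collect the total block count and a blocks-per-atomic-subsystem figure. The model name is a placeholder, and the counting convention (everything below each atomic subsystem, excluding the subsystem block itself, without looking under masks or into library links) is a simplifying assumption; the Model Metrics tooling may count slightly differently.

```matlab
% Sketch: total block count and blocks per atomic subsystem.
% 'myModel' is a placeholder model name.
mdl = 'myModel';
load_system(mdl);

% Total block count (the analog of LOC)
allBlocks = find_system(mdl, 'Type', 'block');
fprintf('Total block count: %d\n', numel(allBlocks));

% Blocks per atomic subsystem (the analog of LOC per function)
atomicSubs = find_system(mdl, 'BlockType', 'SubSystem', ...
                         'TreatAsAtomicUnit', 'on');
for k = 1:numel(atomicSubs)
    % Count everything below the subsystem; subtract 1 to exclude
    % the subsystem block itself.
    nBlocks = numel(find_system(atomicSubs{k}, 'Type', 'block')) - 1;
    fprintf('%s : %d blocks\n', atomicSubs{k}, nBlocks);
end
```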
Evaluating measurements
There is no such thing as a perfect model or a perfect bit of C code. In the end, all of the metrics above are measured against pass/fail thresholds. For example, common thresholds for the Model Metrics include the following (a simple threshold check is sketched after the list):
- Blocks per atomic subsystem < 50
- Guideline compliance > 90%
Note: some guidelines must be passed regardless of the overall percentage.
- …
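A minimal sketch of that pass/fail evaluation, assuming the metric values have already been collected (the numbers below are placeholders), might look like this:

```matlab
% Sketch: check collected metrics against pass/fail thresholds.
% The values are placeholders standing in for numbers produced by
% your metric-collection scripts or the Model Metrics tool.
blocksPerAtomicSubsystem = [32 47 61 18];   % one entry per atomic subsystem
guidelineCompliance      = 0.93;            % fraction of guideline checks passed

passBlockCount = all(blocksPerAtomicSubsystem < 50);
passGuidelines = guidelineCompliance > 0.90;

if passBlockCount && passGuidelines
    disp('Model metrics: PASS');
else
    disp('Model metrics: FAIL');
    % Report which subsystems exceed the block-count threshold
    disp(find(blocksPerAtomicSubsystem >= 50));
end
```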
Measuring the model
With models, as with text-based development environments, there are a variety of tools for collecting metrics. Within the Simulink environment, it is possible to write stand-alone scripts to automate this process or to use the Model Metrics tool to collect this information automatically.
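As a sketch of the stand-alone-script approach, the snippet below loops over a list of models (the names are placeholders), records two of the metrics discussed above, and saves the results for reporting; the Model Metrics tool can produce these and other metrics without custom scripting.

```matlab
% Sketch: automate metric collection over several models and save
% the results for reporting. The model names are placeholders.
models  = {'controllerModel', 'plantModel'};
results = struct('model', {}, 'blockCount', {}, 'atomicSubsystems', {});

for k = 1:numel(models)
    load_system(models{k});
    results(k).model            = models{k};
    results(k).blockCount       = numel(find_system(models{k}, 'Type', 'block'));
    results(k).atomicSubsystems = numel(find_system(models{k}, ...
        'BlockType', 'SubSystem', 'TreatAsAtomicUnit', 'on'));
    close_system(models{k}, 0);   % close without saving
end

save('modelMetrics.mat', 'results');
struct2table(results)   % quick summary in the command window
```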