Recently a client posed a question I have heard a number of times: “How many states can I have in my model before there are problems?” On the surface this seems like a reasonable question; however, when we dig in a little we see the assumptions inherent in it.
Size matters, after a fashion
As a raw metric, the number of states in a model is meaningless; it is akin to asking “how many lines of code before there are problems?” If someone said they had a program with one function and 100,000 lines of code, you would assume it was problematically complex. On the other hand, if they said those same 100,000 lines were spread across 100 functions, you would think the program was well architected. Going to the other extreme, if the program had 1,000 functions you might think they had created architectural problems of a different kind, with the complexity coming from the sheer number of pieces.
No one builds a house with a Swiss army knife
Models are tools; they perform functions in response to inputs. It is possible to build a single model that performs 1,000 different functions, but that is rarely the correct way to go.
Rather, each model should be viewed as a specialized tool that performs a function or set of related functions. Again, this relates to the “100 or 1,000” functions for 100,000 lines of code. I generally consider something a “related function” if it:
- Uses the same inputs: e.g., the function does not need to import additional data.
- Is used at the same time: e.g., the information is used in the same larger problem you are trying to solve.
For example, calculating the wheel speed and wheel torque in the same ABS braking function makes sense, as they use the same input data (generally a PWM encoder) and are used at the same time (to determine the brake pulse width). However, calculating mileage, which can also be derived from the wheel speed, does not belong in that function, as it is not part of the same problem you are trying to solve.
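To make the grouping concrete, here is a minimal MATLAB sketch of the idea. The function name, encoder resolution, and signal names are illustrative assumptions, not part of any particular ABS design: wheel speed and wheel torque are computed together because they share the same encoder input, while mileage would live elsewhere.

```matlab
% Sketch only: wheel speed and torque share one input (the encoder count),
% so they live in one function; mileage accumulation is a separate concern.
function [wheelSpeed, wheelTorque] = wheelDynamics(encoderCounts, dt, inertia, prevSpeed)
    countsPerRev = 48;                                          % assumed encoder resolution
    wheelSpeed   = (encoderCounts / countsPerRev) * 2*pi / dt;  % rad/s
    wheelTorque  = inertia * (wheelSpeed - prevSpeed) / dt;     % N*m, from change in speed
end
```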
Keeping it in memory…
In this instance, I am talking about the developer’s memory. Above a given size and complexity it becomes difficult for a developer to remember how all the parts of a function operate.
As a general rule of thumb, I try to stick to a “depth of 3” limit: no subsystems or nested states more than three levels deep. If there is a need for greater depth, I look to see if there is a way to decompose the model or chart into referenced models and charts. One note: when measuring “depth,” the count stops when a referenced model or chart is encountered, as these are assumed to be atomic systems developed independently from the parent.
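For Simulink models, a few lines of MATLAB can flag candidates for decomposition. This is only a sketch under assumptions: the model name is hypothetical, and counting the ‘/’ separators in each block path is a rough proxy for nesting depth (block names containing escaped slashes would throw the count off).

```matlab
% Sketch: approximate the nesting depth of each subsystem by counting the
% '/' separators in its full block path (the model root counts as depth 0).
mdl = 'myModel';                                    % assumed model name
load_system(mdl);
subsystems = find_system(mdl, 'BlockType', 'SubSystem');
depths  = cellfun(@(p) sum(p == '/'), subsystems);
tooDeep = subsystems(depths > 3);                   % candidates for model referencing
```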
Benefits of decomposition
The following benefits are realized through the decomposition of models:
- Simplified testing: Large functions have a large number of inputs, outputs, and possible responses; smaller models have a correspondingly smaller test space (see the sketch after this list).
- Simplified requirements linking: Generally, a well-decomposed model aligns with the requirements by not clumping disparate functionality together.
- Improved reusability: Smaller functions are more likely to be generic or easily customizable.
- Improved readability: A smaller model can be reviewed and analyzed more quickly than a larger model.
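As a small illustration of the testing benefit, a focused function like the wheelDynamics sketch above can be exercised with a very short MATLAB function-based unit test. The test values and tolerance here are arbitrary assumptions.

```matlab
% Sketch of a focused unit test: one revolution in one second at steady speed
% should give 2*pi rad/s and zero torque.
function tests = wheelDynamicsTest
    tests = functiontests(localfunctions);
end

function testSteadySpeed(testCase)
    [speed, torque] = wheelDynamics(48, 1, 0.5, 2*pi);  % 48 counts = one rev (assumed)
    verifyEqual(testCase, speed, 2*pi, 'AbsTol', 1e-9);
    verifyEqual(testCase, torque, 0, 'AbsTol', 1e-9);
end
```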
What is the correct question?
There are two questions I would ask:
- How do I make the model functionally correct?
- How do I make the model readable?
For guidelines on those topics, you can read my Stateflow Best Practices document.