Not all metrics are created equal. Often, in the rush to have metrics, the decisions on what to collect are made hastily, resulting in a glut of semi-useful information that makes it difficult to find the data that actually matters. So what are the important Key Performance Indicators (KPIs) for a Model-Based Design DevOps workflow?
Data from test, data from field
In a traditional Model-Based Design workflow, data feedback ends after the product is released; the DevOps workflow brings that data back into the MBD world. How, then, do you make field data useful to the MBD workflow?
Central to a DevOps workflow is continual feedback: data flows bi-directionally, with updates going out from the product developer and error codes coming back from the product in the field. With that in mind we need to ask: from a simulation perspective, what information would we need to debug an issue?
Beyond error codes
Error codes are the minimal information required for debugging: an error of type X1258Q1ar894 occurred. This tells the end user very little and provides only slightly more information to the developer.
The next step up is the stack trace, i.e., the history of calls leading up to the error; this provides the developer with a first real means of debugging the issue.
What is needed is the equivalent of an airplane’s “black box”, i.e., the history of the device’s state information when it failed. But a true black box records everything, which would quickly overwhelm your development process. So how do you select the data you need?
A refrigerator is not an airplane
For most devices a full data log of the device’s states is not warranted, given memory limitations and the criticality of the errors. Instead, what can be done is:
- For each type of error code select a subset of data to be collected.
- If the selected data is not sufficient to debug the issue from the field, over-the-air updates can be pushed to increase the type and scope of the data collected.
- Once the error is resolved, reduce the error logging for the error code.
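The escalate-then-dial-back strategy above can be sketched in code. The following Python sketch is illustrative only; the class, error codes, and signal names are hypothetical, and a real device would apply the escalation via its own over-the-air update mechanism:

```python
# Hypothetical sketch: a per-error-code diagnostic configuration that can be
# escalated (widened via an over-the-air update) while an error is under
# investigation, then reduced once the error is resolved.

class DiagnosticConfig:
    def __init__(self):
        # Each error code maps to the subset of signals logged when it fires.
        self.signals_per_error = {
            "E_COMP_OVERTEMP": ["compressor_temp", "ambient_temp"],
        }

    def signals_for(self, error_code):
        """Signals to capture when this error code occurs."""
        return self.signals_per_error.get(error_code, [])

    def escalate(self, error_code, extra_signals):
        """OTA update: widen the data collected for an unresolved error."""
        current = self.signals_per_error.setdefault(error_code, [])
        current.extend(s for s in extra_signals if s not in current)

    def resolve(self, error_code, baseline_signals):
        """Error fixed: drop back to a minimal logging footprint."""
        self.signals_per_error[error_code] = list(baseline_signals)


cfg = DiagnosticConfig()
cfg.escalate("E_COMP_OVERTEMP", ["fan_rpm", "duty_cycle"])
print(cfg.signals_for("E_COMP_OVERTEMP"))   # four signals now logged
cfg.resolve("E_COMP_OVERTEMP", ["compressor_temp"])
print(cfg.signals_for("E_COMP_OVERTEMP"))   # back to the minimal set
```

The key design point is that the logging configuration is data, not code: widening or narrowing what is collected does not require reflashing the application itself.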
The “new” KPIs
The new KPIs for a DevOps workflow stem from the central tenet of DevOps: constant feedback. Define metrics for error code severity and error code frequency, and elevate errors for correction based on those two metrics. As the system is developed, allocate diagnostics to the areas that were difficult to simulate and validate, enabling post-deployment validation.
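As a minimal illustration of combining the two metrics above, one simple (and entirely assumed; the weighting scheme is not a standard) priority score is the product of severity and field frequency:

```python
# Hypothetical sketch: rank error codes for correction by combining a
# severity rating (1 = cosmetic, 5 = critical) with observed field frequency.

def priority(severity, occurrences_per_1000_units):
    """Priority score: severity weighted by how often the error is seen
    in the field. Higher scores get corrected first."""
    return severity * occurrences_per_1000_units

# Illustrative field data: (severity, occurrences per 1000 units).
field_errors = {
    "E_COMP_OVERTEMP": (4, 12.0),
    "E_DOOR_SENSOR":   (2, 55.0),
    "E_UI_GLITCH":     (1, 3.0),
}

ranked = sorted(field_errors,
                key=lambda code: priority(*field_errors[code]),
                reverse=True)
print(ranked)
# → ['E_DOOR_SENSOR', 'E_COMP_OVERTEMP', 'E_UI_GLITCH']
```

Note how a moderate-severity error that is very frequent can outrank a more severe but rarer one; a real deployment would tune the weighting (e.g., flooring critical errors at the top of the queue regardless of frequency).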