There is a story I tell new consultants. On my first consulting contract, 20+ years ago, a customer asked me to automate one of their data dictionary tasks. Over the course of 3 months I iterated on their requirements and delivered the final product. The customer was very happy, but 6 months later I learned they were not using what I had delivered. In hindsight I recognize my mistake: I worked on what I was asked to work on before I had established the root issues.
Despite the customer’s satisfaction, this reveals an ethical issue: as an expert, my responsibility is to understand the underlying process and technical challenges and to advise on what should be done for the best outcome, regardless of other stakeholders’ positions.
The world first: I will not take on projects that negatively impact the world or unfairly disadvantage any group.
Never overstate what can be done: Provide honest assessments of the problems and the abilities of myself, my coworkers and the tools that I am working with.
Teach and learn: Every contract is a chance to both teach and to learn.
Complex systems do not “fall into place” of their own volition; if one domino is out of place,(1) the chain will halt. When projects are incremental updates, “day-to-day” leadership is all that is required. But when you are shooting for the moon, new processes are required and understanding orbital mechanics is critical…
Adoption and migration
As a consultant, my job is to enable moonshots as I guide companies through the adoption of Model-Based Design software processes. At every company I’ve guided, with the exception of some startups,(2) there is a contingent that offers a version of Admiral Hopper’s(3) dangerous phrase: “We’ve always done it this way.” The objections decompose into three categories…
The way we do it now is <positive adjective>:(4) This objection is generally made on a technical basis and can be addressed.
It will take too long to adopt: In the short run, this objection is accurate. But companies that operate in the short run do not have long runs.
But then what would I do? Adopting new processes necessitates organizational change; in some cases this does mean that roles are changed or eliminated.(5) But more often it is a chance for people to take on new, more dynamic roles.
What leaders offer
Most people want their work to mean something. Leading the MBD/MBSE adoption process makes it easy to offer each individual something(6) that will facilitate their work. Defining with them the current limitations of their processes, and enumerating the benefits along the way, is the first step in transformation.
The <positive adjective> case
Groups raise objections to Model-Based Design based on the perceived technical merits of their current processes. The essence of these arguments can be distilled to the contrast between a master craftsperson and an assembly line. In their objections they select the work of the most skilled team member and compare it to the automated products.
The quality of a master craftsperson’s work will almost always be higher, but it takes longer to produce. I shift the question to: where should a master craftsperson spend their time? On the creation of screws, or on the design of the watch? Model-Based Design automates low-level processes, allowing masters to invest in the high-level vision of the project.
A second <positive adjective> case exists due to the tension between software and controls engineers. Each has a unique skill set that is critical to the overall product; each has part of their task that depends on the other’s work. Unguided, this is a source of conflict.(7) Guided, MBD / MBSE provides a common ground where the two groups work together while focusing on their own domains: controls engineers can design to their imagination’s limit, while software engineers integrate, safe in the knowledge of functional interfaces.
Establishing a new workflow takes time. But if the development timeline is longer than 6 months the downstream ROI is significant.
Workflow augmentation / tooling
Most companies I have worked with come to me at step 3, the initial rollout.(8) From the initial to the final rollout stage there is generally a 2-year process, with the first benefits realized at the 3-month mark.
The key to maintaining forward motion on a project is to define functional interfaces such that work products can be utilized at any(9) stage in the migration process. Having a road map of the adoption process enables you to see this with clarity.
The truth is that resistance to change often comes from perceived job or group security. In 10+ years of guiding groups toward new processes, the only time I have seen groups lose “territory” was when they resisted change arbitrarily.(10) Everyone in a company is there because they have skills and insights that will benefit the project; the objective of an MBD cartographer is to show them the way. In the end it comes down to showing them they are the engineer, not the hammer.(11)
Are you ready to go on a journey?
Most modern organizations are in a state of constant incremental change. This sort of unguided updating results in inconsistent and incomplete processes. Deciding the journey you need to take is the first step; let’s talk about where you want to go.
Proper planning, however, means that no critical path should depend on a single domino; the planner builds redundancy into the system.
In startups there is often the inverse problem: a rejection of all traditional development practices, “throwing the baby out with the bathwater.”
For those not familiar with Admiral Hopper, I recommend this link to see her foundational impact on computer science.
If you haven’t played “MadLibs” this site gives you a rough idea.
While some roles are eliminated, it is rare that I have seen people laid off due to migration to Model-Based Design / Model-Based Systems Engineering. Also see (10).
People within an organization are viscerally aware of the problems they face with existing workflows. Tied to their daily tasks, they don’t have the freedom to imagine and enact change; leadership illuminates the path forward.
In an idealized workflow the controls engineer, who has a CS background, writes the algorithms in a highly efficient, modular fashion that is simply integrated into the network topology defined by the software group.
Part of my job as a consultant is to identify if they have sufficiently covered steps 1 and 2. When you are new to a process it is easy to overlook benefits and pitfalls of the process and make plans that do not reflect best practices.
In practice there will be some “throw away” work; however this can be minimized with proper planning.
The groups that raised issues and facilitated the process grew in size and responsibilities.
Chicken or egg: physical model or prototype hardware? Physical models are used in Model-Based Design to create closed-loop simulations that validate control algorithms. To close the loop with confidence, the physical model must be validated against the real world. And there is the rub: how do you validate a physical model when you may or may not have access to the hardware?
The real world, or something like it…
It is rare that a product is truly “new,” so we can often start off with an atlas(1) that provides initial directions. From there, validating the physical model takes the following series of steps:
Determine which variables influence your outputs
Determine which of those variables can directly and safely(2) be measured
Determine the accuracy of those measurements
Determine the operating conditions of your model(3)
Find the “boundary” effects, such as hard stops or friction / stiction interactions.
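The first step above, determining which variables influence your outputs, can be approximated with a one-at-a-time sensitivity sweep. A minimal sketch, assuming a hypothetical `plant()` stand-in for the physical model and an illustrative perturbation size:

```python
# One-at-a-time sensitivity check: perturb each input and rank
# its influence on the model output. plant() is a toy stand-in
# for a real physical model.

def plant(inputs):
    """Toy model: output depends strongly on 'mass', weakly on
    'drag', and not at all on 'color_code'."""
    return 10.0 * inputs["mass"] + 0.1 * inputs["drag"] + 0.0 * inputs["color_code"]

def sensitivity(model, nominal, rel_step=0.01):
    """Return {variable: |output change|} for a small relative perturbation."""
    base = model(nominal)
    result = {}
    for name, value in nominal.items():
        perturbed = dict(nominal)
        perturbed[name] = value * (1.0 + rel_step)
        result[name] = abs(model(perturbed) - base)
    return result

nominal = {"mass": 2.0, "drag": 5.0, "color_code": 3.0}
ranked = sorted(sensitivity(plant, nominal).items(), key=lambda kv: -kv[1])
print(ranked)  # 'mass' dominates; 'color_code' contributes nothing
```

In practice a design-of-experiments or variance-based method would replace this sweep, but the ranking it produces is the same information the first two steps ask for.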
C3:(4) Collect, clean, correlate!
The first two C’s, collect and clean, are performed on the prototype hardware or taken from existing research. Once collected, outlying data is discarded and the process of correlating the physical model to the real-world data begins.
In an ideal world the inputs to the physical hardware can be directly fed to the physical model and the outputs compared. The model is then “tuned” to reach the highest level of correlation.
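That tuning loop can be sketched as a simple parameter search. Here a hypothetical one-gain static model is fit to invented input/output data by minimizing squared error; a least-squares routine would normally replace the grid search:

```python
# Correlating a model to measured data: sweep a model parameter and
# keep the value that minimizes the squared output error.
# The model form and data below are illustrative, not from a real rig.

measured_u = [0.0, 1.0, 2.0, 3.0, 4.0]   # recorded inputs
measured_y = [0.0, 2.1, 3.9, 6.2, 7.8]   # recorded outputs

def model(u, gain):
    return gain * u                       # hypothetical static model

def sse(gain):
    """Sum of squared errors between model and measurement."""
    return sum((model(u, gain) - y) ** 2
               for u, y in zip(measured_u, measured_y))

# coarse grid search over candidate gains 1.00 .. 3.00
candidates = [g / 100.0 for g in range(100, 301)]
best_gain = min(candidates, key=sse)
print(best_gain)
```

The "highest level of correlation" in a real workflow is usually quantified with a fit metric (e.g., normalized RMSE) rather than raw squared error, but the tune-compare-repeat structure is the same.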
Building up from simpler models…
Sometimes the full physical device is not available for data collection. In that case your physical model architecture is built up from correlated “submodules,” and the accuracy of calculations becomes even more critical. For the simple case where accuracy is consistent across the full operating domain, it can be calculated as
acc(tot) = acc(1) * acc(2) * … * acc(n)
However, since it is more common that accuracy depends on where in the operating envelope you exercise the model, it should be calculated per operating point x:

acc(tot, x) = acc(1, x) * acc(2, x) * … * acc(n, x)
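As a sketch, the composed system accuracy can be computed from per-submodule accuracies; the numbers and operating-region names below are invented for illustration:

```python
# Composing submodule accuracies into a system accuracy.
# Accuracies are fractions (0.98 = 98%); all values illustrative.

from math import prod

# Case 1: accuracy constant across the operating domain
submodule_acc = [0.98, 0.95, 0.99]
total_acc = prod(submodule_acc)
print(round(total_acc, 4))  # product of the three submodule accuracies

# Case 2: accuracy depends on the operating region
acc_by_region = {
    "idle":      [0.99, 0.97, 0.99],
    "full_load": [0.96, 0.92, 0.98],
}
total_by_region = {r: prod(a) for r, a in acc_by_region.items()}
worst_case = min(total_by_region.values())
print(total_by_region, round(worst_case, 4))
```

Note how quickly the product erodes: three submodules in the high 90s already drop the composed accuracy below any single one of them, which is why per-region bookkeeping matters when submodules replace full-device data.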
Simulation of system level models is the holy grail(1) of Model-Based Systems Engineering, giving the ability to validate system level functional requirements in a virtual environment. So what is the current state of the system-level simulation? I would say it is looking for its Galahad.
The past, the present
What should you simulate at the system level, and what level of detail (fidelity) do you need? Fidelity limitations constrain the answer to the first part; presently the majority of system level simulations run at low fidelity, both to enable reasonable simulation times(2) and because it is difficult to validate high fidelity, full system level physical models.(3) As a result, scenario based testing(4) comprises the majority of system level testing.
A call to the future
When I first started working, some engineers thought Moore’s Law would solve the simulation fidelity issue. However, as fast as processing power has increased, model complexity has grown as fast or faster: the Red Queen’s problem. Fortunately there are other solutions.
Divide, smooth, and conquer
Not all of your physical model needs to be high fidelity in the system level simulation; components of different fidelity can be joined together, with “smoothing” operations performed on the exchanged data where there are rate differences or discontinuous behaviors. The full suite of system level models can be composed of multiple sets of models of different fidelity, allowing for specific types of system level tests.
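The “smoothing” step can be as simple as a first-order (exponential) filter on the signal exchanged between a fast, high-fidelity model and a slower, low-fidelity one. A minimal sketch; the rates, filter constant, and step signal are illustrative:

```python
# Rate transition with first-order smoothing: a high-rate signal
# from a detailed model is filtered before being handed to a
# low-fidelity model running at a slower rate. Constants illustrative.

def smooth(samples, alpha=0.3):
    """Exponential smoothing: y[k] = y[k-1] + alpha * (u[k] - y[k-1])."""
    y = samples[0]
    out = []
    for u in samples:
        y = y + alpha * (u - y)
        out.append(y)
    return out

def downsample(samples, factor):
    """Keep every 'factor'-th sample for the slower-rate model."""
    return samples[::factor]

# high-rate signal with a step discontinuity at sample 5
high_rate = [0.0] * 5 + [1.0] * 10
to_slow_model = downsample(smooth(high_rate), factor=5)
print(to_slow_model)  # the step reaches the slow model gradually
```

Filtering before downsampling prevents the slow model from seeing the raw discontinuity (or aliasing it), at the cost of a small lag; tools like Simulink provide dedicated rate-transition blocks for exactly this job.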
However, the greatest risk to system level simulation is not technical, it is political. Since system level simulation is new to most organizations, there may not be an internal champion. Unlike many process adoptions, system level simulation by its nature requires cooperation across multiple groups. How do you inspire an organization to believe in the holy grail?
Start small: While the ultimate objective may be the full system level simulation, errors and improvements can still be found at the subsystem integration level.
Identify past integration failures: This is a two-step process; first, identify past system integration failures; second, demonstrate(6) how system level simulation could have detected the issue prior to deployment.
Case studies: While the domain is maturing there is a long history of existing case studies across multiple domains.
Ideally, system level simulation starts at the beginning of the project, moving issue detection upstream. When it is started late in the process, it can serve a secondary purpose: providing a tool for root-cause analysis of system level bugs. Regardless of where you are in your process, it is time to get started.
Like the grail quest, it is never complete but always transformative in a positive fashion.
In general, simulation times of 1/10 of real time (i.e., 10× faster than real time) are reasonable. For some scenarios, slower-than-real-time simulation is accepted to validate critical systems.
Difficulties in validating high fidelity system level models
Practical: Lack of full system interaction data (e.g., how does the AC pump respond to high torque engine loads and high electrical loads?)
Pragmatic: Length of time required to simulate the full model.
Political: Lack of a key stakeholder willing to fund the work required.
Scenario based testing is an excellent method for detecting system level faults in intermittent messaging, e.g., what are the effects of system latency in the communication of critical data?
The last image was not an endorsement of the board game Risk.
System level failures are easy to show when you have full test logs of existing products “in the field.” A static playback of the data can be provided to stand in for the model.
Before we dive in, I will paraphrase my wife’s late grandmother who lived to 102. When asked why she seemed to have so few “disagreements” with people she answered “I make sure I know what they are asking before I answer.”
The last ten years of helping companies adopt processes have shown me that 90% of all successful processes are anchored in “we understand what the other person is asking, so we understand what they need.”
From the Software Design V to….
Over the last 25 years the Software Design V has been the default graphical metaphor for explaining the software release process: down one side from requirements to implementation, then up through testing, finishing at release. The early V was strongly linked to the waterfall development process. As criticisms of waterfall emerged, the V adapted by adding “feedback loops” between its parts. As agile emerged, the V continued to evolve; agile’s small, fast development cycles required smaller, faster Vs, and we entered ΔV territory. In the last 5 years, with the increasing prevalence of over-the-air updates and development in the cloud, developers’ needs “broke” the V.
New shape, same objective
If you read a list of the benefits from adopting DevOps you could be excused for thinking they were the benefits of an Agile, Design V, or even a Waterfall development process.
Simplified & improved collaboration…
The fact that there is so much overlap is not a criticism of DevOps; it is a recognition of the underlying purpose of all processes: promoting clear communication between the stakeholders in the process.
So why do we need DevOps?
There are more lines of code in a modern dishwasher than in the entirety of my first car.
But it isn’t the size of the code that drives the need for DevOps.
The mathematics associated with a LIDAR system outstrips the complexity of the Apollo moonshot by multiple orders of magnitude.
But it isn’t mathematical complexity that slings us round towards DevOps.
The simulated hours of testing for a robotic vacuum cleaner is greater than the total time in pig models for the first artificial heart.
But it’s not the number of hours spent in simulation that primes the pump in favor of DevOps.
So why do we need DevOps?
It is about clarity of communication. In all of the cases listed above it was “like talking to like”: controls engineers working with controls engineers, AI people with AI people, testers with testers. The need for DevOps is driven by the interaction of groups that have not historically worked together. DevOps becomes the common language and process bridging those groups.
Feedback loops and continuous integration
The brilliance of DevOps is that it found a point of intersection between the deployment and controls groups. The process connects the concepts of Feedback Loops (controls) and Continuous Integration (development): the “X” point of the DevOps cycle. It is at that point that a common ground and common understanding is easiest to achieve.
This brings us back to the start of this blog: when you are trying to solve a problem, you need to understand what the other person is saying. As software release and update processes become more interconnected across multiple groups, the way we solve them is through the implementation of a common process and vocabulary.
In this post I want to do something different; a collection of a few “basic” tips for system software, brought to you in folk wisdom format (alternating, bird, craft, bird, craft, ending on horse, of course).
Six of one, half a dozen of the other
How data is specified makes a difference. In this case “six of one” is a direct specification of the number, while “half a dozen of the other” is a calculated value that requires machine operations. When possible, use direct specification of data.
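A toy illustration of the difference; the constant names are hypothetical, and in compiled languages a constant expression is often folded away, so the stronger argument for direct specification is traceability in calibration data and data dictionaries:

```python
# "Six of one": the value is specified directly.
MAX_RETRIES = 6

# "Half a dozen of the other": the value is derived at run time.
DOZEN = 12
def max_retries():
    return DOZEN // 2   # extra operation, extra place for intent to hide

# Same number, but the direct form states its meaning in one place.
print(MAX_RETRIES, max_retries())
```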
A stitch in time saves 9
This is a tricky one: preventive maintenance will almost always have downstream benefits. At the same time, if you spend your time hunting for “single stitches” to correct, you take time away from the core work that needs to be done. Context switching between tasks needs to be taken into account when performing corrective tasks: finish what you start, and don’t ignore small problems.
A bird in the hand is worth two in the bush
Project planning is about project completion; “feature creep” is a common problem when the focus isn’t on getting (and keeping) the “bird in hand.” A bird in the hand is wonderful; a bird at home, letting you get ready to go get more birds, is best.
Measure twice, cut once
With Model-Based Design or any software development process, the mantra should be measure as you go, work to refine. Software development can be seen as a series of tasks from roughing it out with a saw to fine tuning with sandpaper. You should always be measuring to know where you are.
Never look a gift horse in the mouth
Legacy software can be seen as a gift horse: it may let you ride into town, but if it is knackered you will need to become the knacker, adjusting and replacing it. Always review your legacy software; you need to “look it in the mouth.”
For the 1/4 of my readership in Germany, I am going to try something new: a video blog in German. For my non-German readers, I provide the original transcript. Upcoming posts will dive into the systems engineering concepts.