My background…

Recently I have had a few questions about my background.  While there is a bio page, I will use this post to give a little more detail about myself.

Education

I did both my undergraduate and master's degrees in Aerospace Engineering at Virginia Tech.  During my undergraduate years, I worked in their wind tunnel, and while in grad school I focused on computational mathematics through studying CFD.  Additionally, while I was there I was part of the H2okies, Virginia Tech's swim team (distance and I.M. for the curious).

Work experience: early years, Hardware in the Loop

My first position out of college was working for General Motors on their Hardware in the Loop (HIL) simulator, SimuCar.  In that position, I created first-principles models for transmissions and engines, optimizing them to run in a strict real-time environment.  Additionally, it was in this role that I was first introduced to formal testing methods.

After departing GM I took a position at Applied Dynamics International (ADI) as an application engineer.  ADI is a Hardware in the Loop vendor and a supplier of services for developing HIL test environments.

It was also during this time that I met and married my lovely wife of now 20 years, Deborah.

Work experience: System modeling

After 3 years at ADI, I departed to take a position at Ford's Research Center in Dearborn, MI.  While there I created physical models in Simulink of A/C systems, engines, and complete drivetrains, intended for fuel economy analysis.

In addition to the physical modeling, I helped develop a modeling framework for full vehicle simulations.  This work is what would lead me to my next role, as a consultant.

At the time I joined New Eagle there were 3 other employees: the two owners and a supporting salesperson.  With New Eagle, I helped companies solve their Model-Based Design problems.  At the time, 2001, the two main challenges facing MBD users were targeting hardware and testing software.  To address these, we developed custom solutions, both code and architectural.

Work experience: MathWorks

Since 2004 I have been with The MathWorks.  Originally I was brought on as The MathWorks' workflow engineer.  In that position, I helped identify the common development processes that our customers used.  My work focused on production code generation and testing methodologies.  Along the way, I helped produce releases 2.0 and 3.0 of the MathWorks Automotive Advisory Board (MAAB) style guidelines.  Additionally, I helped develop the Simulink MISRA C compliance documentation and advised on the latest release of the MISRA C standard.

Six years ago I moved back into a consulting role, both to refresh my knowledge of industry practice and to directly put the workflows I had documented into practice.

Beyond work

Outside of work I'm part of a masters swimming group and spend evenings on long walks with my wife.  I've done improv at Go-Comedy in Ferndale, MI, having graduated from Second City Detroit.  I write short stories and can be found outdoors rock climbing, hiking, or kayaking.  On any given weekend you are likely to find me making up a batch of soup, reading, at a play, or walking and talking with my lovely wife.

Defining objectives for phase II: The validation project

As outlined in the roadmap and in the posts defining the initial Model-Based Design workflow, the validation phase begins upon the completion of the initial phase.  To review: with the completion of the initial phase, the basic concepts have been explored and a base level of mastery established.

What is next?

In the Initial Adoption phase, a small, well-defined system is selected for the evaluation of the process.  In this, the second phase, both the scope and the nature of the algorithm(s) and Model-Based Design processes are expanded.

Expanding scope

What does it mean to expand the scope of an algorithm?  Basically, there are three components.

  1. Expand the working group: The initial project consisted of a small working group, generally drawn from a few groups.  The validation project should include developers from new groups to ensure that issues unique to each group are exposed.
  2. Increase the complexity: The initial project, intentionally, used a simplified test model.  The objective was to validate the Model-Based Design fundamentals, not to solve complex control problems.  With the second phase, more complex problems should be tackled; this exposes both issues in the MBD process and limits in the team's understanding of the tools.
  3. Expand the process:  The initial phase focused on the core technologies.  The validation phase provides an opportunity to expand the scope of tools used in the Model-Based Design process.  Note: caution should still be taken not to overload on tools at this early phase of the adoption process.

Expanding nature

If three things define the scope of the algorithms, what then defines the nature of your algorithm and Model-Based Design process?

  1. Systems, not models: With the validation project, the designers start designing systems composed of multiple models, integrating the generated code with existing text-based code (either incorporating the text-based code into the model or the code generated from the model into the text-based code).
  2. Automation of processes: In the initial phase most processes were performed manually.  In the validation phase, the working group should look at methods for automating verification and validation processes (a minimal sketch follows).  This is critical as new developers onboard.
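As a concrete illustration, here is a minimal sketch of what that automation could look like in MATLAB's unit test framework; the "tests" folder name is a hypothetical placeholder rather than anything from this post.

    % Hypothetical entry point for an automated V&V run.
    % Assumes the project's test classes live in a "tests" folder.
    import matlab.unittest.TestSuite
    import matlab.unittest.TestRunner

    suite   = TestSuite.fromFolder('tests');   % gather every test
    runner  = TestRunner.withTextOutput;       % human-readable progress
    results = runner.run(suite);

    % Fail the automation job loudly if any test did not pass.
    assert(all([results.Passed]), 'Validation test suite failed.')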

Final thoughts

This post is an introduction to the validation phase; in subsequent posts, I will go into depth on the recommended expanded processes and architectural recommendations for this phase.

The 8 commandments of V&V

In this blog post, I will review my core V&V philosophy.  The base guidelines are common to both Model-Based Design and traditional text-based development workflows.

Thou shall know what it is supposed to do…

V&V activities should be driven by the requirements for the product.  Without validating against the requirements, there is no assurance that the final deliverable will meet the customer's needs.

Thou shall start testing at the start of development…

While not all tests can be run at the start of development, the sooner test points can be locked down, the less likely it is that issues will enter into the development.  The most common types of early testing include:

  1. Interface
  2. Coverage
  3. Range and overflow
  4. Style guidelines

The advantage of performing coverage, range, and overflow testing at the start of development is that doing so simplifies these tests later on.  They can be viewed early on as preventive testing.
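As an illustration of early range-and-overflow testing, here is a minimal sketch in MATLAB's unit test framework; the unit under test, scaleSensor, and its engineering limits are hypothetical names invented for this example.

    % In file tScaleSensorRange.m
    % Hypothetical range/overflow test: verifies a scaling routine
    % saturates at its engineering limits instead of wrapping around.
    classdef tScaleSensorRange < matlab.unittest.TestCase
        methods (Test)
            function saturatesAtLimits(testCase)
                % scaleSensor is the hypothetical unit under test.
                testCase.verifyLessThanOrEqual( ...
                    scaleSensor(intmax('int16')), 100);  % engineering max
                testCase.verifyGreaterThanOrEqual( ...
                    scaleSensor(intmin('int16')), 0);    % engineering min
            end
        end
    end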

Thou shall develop reusable testing components…

As I have written about earlier, in 'testing as software' and 'modular testing', the development of reusable components is critical to the existence of a robust development environment.  The more you can reuse your test infrastructure, the less likely it is that test-based errors will exist.

Thou shall test to a tempo…

Your defined test suite should be run at a regular interval or on a predefined trigger.  While not all tests will be run at each interval, every test in the suite should, at a minimum, be run before a version control branch is made.  Common intervals and triggers include the following; a tag-based selection sketch follows the list.

  • Nightly: these tend to be shorter, faster-running tests that provide "sanity" checks
  • Weekly: these are the longer-running tests, often run over the weekend when processing power is freed up
  • Check-in: when an engineer checks in a file, tests associated with that file are run
  • Version control branch: prior to making a branch or revision point, the full test suite should be run.  The branch or revision point should not be made until any issues are sorted out.
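One way to encode that tempo is with test tags, as in the sketch below; the tag names Nightly and Weekly are assumptions for illustration, not a convention from this post.

    % Hypothetical tempo-based selection: tests are tagged when
    % written, and each scheduled job picks up only its slice.
    import matlab.unittest.TestSuite
    import matlab.unittest.selectors.HasTag

    nightly = TestSuite.fromFolder('tests', HasTag('Nightly'));  % fast sanity checks
    weekly  = TestSuite.fromFolder('tests', HasTag('Weekly'));   % long-running tests

    run(nightly);   % invoked by the nightly scheduler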

Thou shall make it easy to test…

If it is difficult for developers to run tests, they won't.  As a result, they may spend weeks developing only to find out they had a simple bug when they check software in.

Easy to test means three things (the first is sketched below the list):

  1. The developer does not need to know complex commands to execute the tests
  2. The test should provide meaningful feedback to the developer
  3. The test should run in a reasonable amount of time
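Here is a minimal sketch of such a wrapper; the function name runMyTests and the "tests" folder are hypothetical.

    function results = runMyTests()
    % RUNMYTESTS  One-command test entry point (hypothetical name).
    % Hides suite construction so a developer only needs to
    % remember a single, argument-free command.
        suite   = matlab.unittest.TestSuite.fromFolder('tests');
        results = run(suite);
        % Meaningful feedback: summarize rather than dump raw logs.
        fprintf('%d passed, %d failed, %d incomplete\n', ...
            nnz([results.Passed]), nnz([results.Failed]), ...
            nnz([results.Incomplete]));
    end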

Thou shall provide reports…

Test reports provide the information that the developers, system architects, and project planners use to track the progress of a project.  There should be a minimum of three types of reports (a generation sketch follows the list):

  1. Detailed reports: These reports are read by the developer and provide direct feedback on the work they are doing
  2. Summary reports: These reports are used by the system architect and the project planner.  They provide information on the current status of all the components
  3. Trend reports: These reports provide information on the trends in tests, e.g., is the total number of issues increasing or decreasing over time?
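Both the human- and machine-readable outputs can come from a single run, as in this minimal sketch using two stock plugins of MATLAB's unit test framework; the file names are placeholders.

    % Hypothetical reporting setup: one run, two report formats.
    import matlab.unittest.TestSuite
    import matlab.unittest.TestRunner
    import matlab.unittest.plugins.TestReportPlugin
    import matlab.unittest.plugins.XMLPlugin

    runner = TestRunner.withTextOutput;
    runner.addPlugin(TestReportPlugin.producingPDF('detailed.pdf'));  % human readable
    runner.addPlugin(XMLPlugin.producingJUnitFormat('results.xml')); % machine readable
    runner.run(TestSuite.fromFolder('tests'));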

Thou shall define passing…

For some tests there are clearly defined pass/fail criteria: my car either starts on a cold winter's morning or it doesn't.  For other tests, such as coverage or style guidelines, thresholds are defined.
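A threshold-style criterion reduces to a simple comparison, as in this sketch; the 80% figure is an illustrative assumption, and coveragePercent stands in for a number your coverage tool would report.

    % Hypothetical threshold check for a coverage metric.
    coverageThreshold = 80;   % percent; illustrative value only
    if coveragePercent < coverageThreshold
        error('Coverage %.1f%% is below the %d%% threshold.', ...
              coveragePercent, coverageThreshold);
    end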

Thou shall automate…

The final commandment of V&V is to automate wherever possible.  Automation eliminates a source of human error and increases the likelihood that tests will be run at a regular tempo.

Final thoughts

These 8 commandments of V&V are things that most engineers already know as common best practices.  The primary hurdles center on the initial implementation of the processes and their continued maintenance.  While there is an upfront and ongoing cost, the overall cost savings from rugged V&V processes are significant.

It is in this area where MBD shines the most; by providing an infrastructure that is well suited to automation and "test as you grow" methodologies, both large and small companies can implement effective V&V processes.

No new worlds (MBD domains) to conquer…?

Within the Model-Based Design environment, there is a set of "standard" or "classic" domains in which controls engineers work: discrete and continuous time control, event- and state-based control, and physical modeling (for closed-loop simulation).

Twenty years ago the use of neural networks, adaptive controls, and fuzzy logic entered into common application in practical control systems.  Now controls engineering is starting to use the tools of deep learning.
(Note: as this post will show, I am just starting to use machine learning and deep learning algorithms.  I am honestly excited about having a new domain to work in!  This post is not intended to provide information on how to develop a deep learning or machine learning algorithm; rather, it is to remind us that there are always new fields to explore.)

The game changes, the goal remains the same…

The controls community (and well beyond) has started to adopt deep learning algorithms for a simple reason: the problems we are trying to solve now are too complex for the classical control domains.  (Note: care needs to be taken here; don't use a new tool just because it is new, as there is still a lot of life in classical controls.)  Developing control systems that leverage deep learning methodologies requires a different mindset.

It is like a puppy…

Not really, but a statement that stuck with me is, "with deep learning you have to train on the dataset and you have to clean the dataset."  The cleaning is much like the traditional cleaning done for regression analysis.  Once the data is cleaned, it is broken into subsets to train the algorithm.  The remaining subsets are used to validate the behavior of the "trained" algorithm.  Once the required model fidelity is reached, the algorithm can be parameterized and placed in the field.
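Here is a minimal sketch of that clean-then-split step in MATLAB; the sensorLog table and the 80/20 split are illustrative assumptions (cvpartition ships with the Statistics and Machine Learning Toolbox).

    % Hypothetical clean-and-split step before training.
    data = rmmissing(sensorLog);                       % drop incomplete records
    c    = cvpartition(height(data), 'HoldOut', 0.2);  % 80% train / 20% validate
    trainSet = data(training(c), :);                   % feeds the learning algorithm
    validSet = data(test(c), :);                       % checks the trained behavior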

Again it is like a puppy…

In that small puppies can grow into large dogs.  Because of this, not all of these algorithms will be suitable for deployment onto a hardware device.  In many cases, these algorithms

  1. Run in a non-real-time environment
  2. Run "off chip" to free up on-chip processing power
  3. Require special or more expensive hardware to run

From a controls engineering perspective, all three of these issues need to be taken into consideration.  Any of them could make real-time application of a deep learning algorithm impractical.  However, selecting the correct framework and optimizing the algorithm (on the most critical output parameters) should enable you to deploy most of these algorithms to silicon.

Final thoughts

As I wrote at the top of this post, I am still learning.  Some of the links I have found useful include

Model-Based Design Process Adoption (Video)

A frequent question I get is "How would you go about establishing a Model-Based Design process?"  This video attempts to answer that question.

Key takeaways

  • Provide grounding in the basic tools and concepts of Model-Based Design
  • Provide a roadmap for adopting the tools and workflows (e.g. when you should add a tool into your process)
  • Provide guidance on how to work as a group following Model-Based Design workflows

Modular testing environments

Foundations define and limit the structures we create; this is as true in Model-Based Design as it is in architecture.  With that in mind, I want to use this post to discuss the concept of modular testing environments (MTE).  First, I will point to an earlier blog post “Testing is software“, before I drill deeper into the concept of MTE.

What is a modular testing environment?

A modular testing environment consists of five parts:

  1. Test manager: the test manager provides the framework for running, evaluating, and reporting on one or more test cases.  Further, the test manager provides a single hook for the automation process.
  2. Test harnesses: a test harness is the software construct that "wraps" the unit under test.  Ideally, the test harness does not change the unit under test in any fashion; e.g. it allows 'black box' testing.
  3. Evaluation primitives: the evaluation primitives are a set of routines that are commonly used to evaluate the results of a test.  Evaluation primitives range from a simple comparison against an expected value to complex evaluations of a sequence of events (see the sketch after this list).
  4. Reporting: there are two types of reports, human and machine readable.  The human-readable reports are used as part of the qualification and review process.  Machine-readable reports are used for tracking data across the project development.
  5. Data management: testing requires multiple types of data: inputs, outputs, parameters, and expected results.
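As an illustration of an evaluation primitive, here is a minimal sketch of a reusable tolerance comparison; the function name and its default tolerance are assumptions made for this example.

    function passed = verifyWithinTolerance(actual, expected, absTol)
    % VERIFYWITHINTOLERANCE  Reusable evaluation primitive (hypothetical).
    % Written and reviewed once, then shared by every test case so the
    % same comparison is never re-implemented (and re-broken).
        if nargin < 3
            absTol = 1e-6;   % illustrative default tolerance
        end
        passed = all(abs(actual(:) - expected(:)) <= absTol);
    end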

Why is a modular testing environment important?

Having helped hundreds of customers develop testing environments, the five most common issues that I have encountered are:

  1. Reinventing the wheel, wrong:  Even the simplest evaluation primitive can have unexpected complexities.  When people rewrite the same evaluation multiple times mistakes are bound to occur.
  2. Tell me what happened:  When tests are pulled together in an individual fashion it is common for there to be limited or inconsistent reporting methods.
  3. Fragile tests: A fragile test is one where if the inputs change in a significant fashion the test has to be completely rewritten.
  4. “Bob” has left the company:  Often tests are written by an individual and when that person leaves the information required to maintain those tests leaves with them.
  5. It takes too much time:  When engineers have to build up tests from scratch, versus assembling from components, it does take more time to create a test.  Hence, tests are not written.

Final thoughts

Verification and validation activities are central to any software development project, Model-Based Design or otherwise.  The easier you make those systems to use, the more your developers will embrace them.

Empowering engineers with Model-Based Design Workflow

I am highly experienced with physical system modeling, hardware-in-the-loop V&V, system processes, and Model-Based Design workflow design and implementation.  I am an average C programmer and an average author of test cases and requirements specifications.  I am, honestly, a poor debugger.  Model-Based Design supports me in the areas of my weakness and makes my strengths even easier to apply.  I suspect this is true for many engineers out there.

How MBD empowers engineers

Few people go into engineering with a desire to write and trace requirements.  Yet that task must be done for projects to be successful.  Likewise for the process of debugging C or, shudder, assembly-level instructions.  Yet again, at times, debugging of the system needs to take place.  So how do you take those required tasks and transform them?  The answer lies in moving them "up" a conceptual level.

Debugging in models through simulation: Empowerment example

Debugging a simulation is fundamentally different from debugging textual code; when debugging a simulation, the user is validating the functional behavior of the model.  In contrast, while debugging C code may involve validating the functional behavior, much of the work is spent in the correction of syntactical mistakes.  If your engineers are domain experts (e.g. controls, physical modeling, discrete events) rather than C code experts, then having them debug in a modeled simulation utilizes their strengths.
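Here is a minimal sketch of that model-level debug loop, using Simulink's programmatic simulation interface; the model name, variable, and logged signal are hypothetical, and signal logging is assumed to be enabled in the model.

    % Hypothetical simulation-level debugging: run the model and
    % inspect logged functional behavior, not generated C code.
    in  = Simulink.SimulationInput('cruise_control');  % hypothetical model
    in  = in.setVariable('setSpeed', 65);              % test condition
    out = sim(in);

    speed = out.logsout.get('vehicleSpeed').Values;    % logged signal
    plot(speed.Time, speed.Data)                       % inspect the behavior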

Requirements to “move tasks up”

In order to abstract a task to a higher level, there are two requirements.  First, the mapping from the higher level to the lower level must be consistent.  For example, to enable debugging through simulation instead of debugging C code, the model must translate correctly into the generated C code.

The second requirement is the automation of processes; the translation between levels should be handled automatically.  For example, the tracing of requirements, from the documentation, to the model, to test cases, to the end report, should all be processed in an automated fashion.

Final thoughts

As one of my co-workers once put it, “You don’t hire Michelangelo to paint your walls, why do you hire engineers to debug source code?”

The ceiling of the Sistine Chapel

Note: You may pay him to paint the ceiling though…

What is Model-Based Design?

For the past 5 months, I have worked on this blog without giving a definition of Model-Based Design; while definitions of Model-Based Design vary, there are a few common concepts.  (Note: for this blog post I will only be considering graphical modeling languages such as Simulink.  It is possible to follow a Model-Based Design approach using textual languages; however, there are additional burdens in doing so.)

One truth – many uses

Core to all Model-Based Design workflows is the concept of a "model object" which is used in multiple phases of the design process.  That model object, or collection of objects, is elaborated during the design process, e.g. transforming an initial "shell" model into a fully elaborated model at the deployment phase, and then using the deployment-phase version of the model during the validation phases.

By maintaining a "one truth – many uses" model, the number of handoffs between people and roles is reduced, thereby minimizing the introduction of errors.

Model represents reality

The model both represents and helps your understanding of the real world.  The representation is inherent: you design the model to encompass the physical and event-driven phenomena of the system.

The aspect of understanding comes in for complex systems, where it is difficult, or even impossible, to predict how the system will react to a given input.  (Note: "complex" is a relative term; even a "simple" double pendulum is not easy to visualize.)
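To make that concrete, here is a minimal double pendulum simulation sketch using one common textbook form of the equations of motion; the masses, lengths, and initial conditions are illustrative.

    % Double pendulum: state y = [theta1; omega1; theta2; omega2],
    % with angles measured from vertical.  Parameters are illustrative.
    g = 9.81; m1 = 1; m2 = 1; L1 = 1; L2 = 1;

    dydt = @(t, y) [ ...
        y(2); ...
        (-g*(2*m1+m2)*sin(y(1)) - m2*g*sin(y(1)-2*y(3)) ...
         - 2*sin(y(1)-y(3))*m2*(y(4)^2*L2 + y(2)^2*L1*cos(y(1)-y(3)))) ...
         / (L1*(2*m1 + m2 - m2*cos(2*y(1)-2*y(3)))); ...
        y(4); ...
        (2*sin(y(1)-y(3))*(y(2)^2*L1*(m1+m2) + g*(m1+m2)*cos(y(1)) ...
         + y(4)^2*L2*m2*cos(y(1)-y(3)))) ...
         / (L2*(2*m1 + m2 - m2*cos(2*y(1)-2*y(3)))) ];

    [t, y] = ode45(dydt, [0 10], [pi/2; 0; pi/2; 0]);  % both arms horizontal
    plot(t, y(:, [1 3]))  % the angle traces quickly defy intuition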

Abstraction

Finally, the goal of MBD is to abstract the design away from implementation details.  Modeling languages aggregate concepts into representative blocks.  By removing the implementation details, engineers can focus on functionality and safety, while software architects can focus on system-level composition.

Mondrian, "Composition No. 10"

Final thoughts

This short introduction, by its very nature, cannot go into detail on how Model-Based Design is applied.  However, my hope is that by keeping in mind the three key ideas of MBD, "one truth, many uses", "model represents reality", and "abstraction", you will have a better understanding of Model-Based Design.