Collecting feedback…

Please forgive the early post…

When developing a control system, feedback is critical; when creating a company-wide software process, feedback (from your employees) is even more important.  What is the best way to gather that information, and what information should you be collecting?


What did your bug reports tell you?

Bug tracking systems serve as the “first pass” information source.  When developing the software process, a “workflow issues” category should be included in the tracking software.  These workflow bugs will surface problems related to

  • Poor documentation: The primary way users learn about the Model-Based Design workflow is through the documentation.
  • Architecture interfaces: Poor interfaces, for either model or data integration, will emerge as new design patterns are explored by new groups.  The process adoption team must determine whether the interface should be extended or a new interface defined for the group-specific requirements.
  • Test failures:
    • Modeling guidelines: Failures in modeling guidelines will show where users have difficulty in conforming to modeling standards.
    • Regression test failures: These can indicate an improperly defined regression test system.  During the initial development of the test environment it is common for there to be errors in the system.
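One lightweight way to make these categories actionable is to encode them as an explicit tagging scheme in the tracker so workflow bugs can be filtered and counted. The sketch below is purely illustrative; the enum and lookup function are assumptions, not any specific tracker's schema — the category names simply mirror the list above.

```c
#include <assert.h>
#include <string.h>

/* Hypothetical tagging scheme for "workflow issue" bug reports. */
typedef enum {
    WF_DOCUMENTATION,       /* poor or missing documentation       */
    WF_ARCH_INTERFACE,      /* model / data integration interfaces */
    WF_GUIDELINE_FAILURE,   /* modeling-guideline check failures   */
    WF_REGRESSION_FAILURE   /* regression-test environment errors  */
} WorkflowBugCategory;

/* Map a category to the tag string stored in the tracker. */
static const char *wf_category_name(WorkflowBugCategory c)
{
    switch (c) {
    case WF_DOCUMENTATION:      return "documentation";
    case WF_ARCH_INTERFACE:     return "architecture-interface";
    case WF_GUIDELINE_FAILURE:  return "modeling-guideline";
    case WF_REGRESSION_FAILURE: return "regression-test";
    }
    return "unknown";
}
```

With consistent tags in place, a periodic query per category gives the adoption team a rough signal of where the process is causing the most friction.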


Direct feedback / viewing

At the one-, two-, and six-month marks, groups new to the process should be brought in for a formal process review meeting.  During the meeting the following activities should take place.

  • Design reviews:  The models, tests and data management files should be reviewed to ensure that best practices are followed.
  • Pain points: Request feedback from the teams to capture existing pain points.

Final thoughts

Collecting feedback from new teams is critical to understanding where processes can be improved.  Development is, as always, an iterative process requiring input from teams outside the initial “core” team.

Projects of interest (II): Listening for success

What is success? How do you define it, how do you measure it?  With software projects, it is easy to say when a project is complete but a complete project is not always a successful project.

So here, at an abstract level, is my definition of a successful project.  The project…

  • solves the underlying (“big-picture”) requirements:  It is possible, even common, when translating the initial (or user) requirements into derived requirements, for the “big-picture” objectives to be lost.  You see this reflected in tools and products that are functionally correct but more difficult to use, or that fail to provide the experience the user wants.
  • informs future work: A successful project can inform future work in two ways.  First directly, the work done on one project may be reused in subsequent projects.  The second is through acquisition of new knowledge(1) .
  • mentors junior people on the project: Every project is an opportunity for junior people to develop new skills and a deeper understanding of what they are working on(2).

Background

17 years ago, on my first project as a very green consultant, I made the mistake of doing exactly what the customer asked me to do.  Their request was to help them automate the routing of signals around a complex multi-level model.

I did what they asked, and I learned a lot: efficient recursive programming, how to handle complex regular expressions, error handling for ill-defined systems.  The customer received a tool that did exactly what they asked for, and they used it for the next 3 years.

So how was this project completed yet not successful?  First, I didn’t step back to ask “what do they need?”  The customer thought they needed a way to route signals; in truth, they needed a better model architecture.  The reason they stopped using the tool 3 years later is that they realized this for themselves and developed a better model decomposition.

The second way in which this project failed is that it did not inform future work.  By its very nature it was a dead-end tool, keeping them trapped in an inefficient model architecture.  While I learned things, that information was not applicable for my customer.

How to start succeeding?

Between the title of this post and my measures of success, the answer should be clear.  At the start of my engagement I should have talked with and listened to my clients; that would have led to the understanding that their architecture was in poor shape, and I would have understood their underlying(3) requirements.


Once you have the true objectives in mind, make sure to review them periodically to ensure that the project has not drifted away from them.  Think about how the current project can inform future work, either directly through reuse or through education.  If it is for reuse, budget the time in development to enable future reuse(4).

Footnotes

(1) There is a practical balance to be struck when learning new things on a project.  The time spent on learning new methods / tools should not slow down the progress of the project.  Ideally the knowledge would be gained organically while working on the project.

(2) Mentorship on projects is often informal; even the basic act of discussing what design decisions have been made, and why, with junior colleagues will aid in their development.

(3) I am using “underlying” and “base” requirements to refer to the “big-picture” requirements from which all others are derived.  Given that the term for these big-picture requirements varies from field to field, I hope that this will still be clear.

(4) Enabling reuse requires additional design and testing time.  A general rule of thumb is to allocate an additional 10%~15% of the development time.  I will write more about reuse in a future blog post.

Initial adoption: objectives and metrics

Objectives and metrics

Based on the information collected from the process adoption team, the objectives for the initial adoption phase should be set.  While the specifics for any given organization will differ, the following outline is a standard view.


  1. Technical 
    1. Complete 1 or 2 “trial” models
      1. Identify the initial model architecture
      2. Identify the initial data architecture
      3. Establish baseline analysis methods
      4. Establish baseline testing methods
    2. Understand how artifacts from models integrate with existing artifacts
    3. Implement version control for new modeling artifacts
  2. Managerial
    1. Review methods for measuring model key performance indicators (KPIs)
    2. Review resources required during initial adoption phase

The technical metrics

Completion of the trial models

In a future post we will examine how to select your trial model, but for now let’s answer the question “what does it mean to complete a trial model?”  This decomposes into the four tasks outlined above.  The model and data architectures are covered in some depth in previous posts, so let us talk about analysis and testing.

Within the Simulink domain, a fundamental aspect of a model is the ability to simulate the behavior of the plant or the control algorithm.  The simulation is used during the early stage of development to analyze the model and determine its functional behavior.  The developer elaborates the model until the behavior matches the requirements; this is verified through simulation.  Once the model meets the requirements, the functionality can be “locked down” through the use of formal tests, again using simulation.

It is worth noting that some requirements will be met before others; they should be formally locked down under test as they are achieved.
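In a Simulink workflow the lock-down tests are simulation-based, but the idea of pinning a requirement as soon as it is met can be sketched in plain C.  Everything below is hypothetical: the requirement ID, the function, and the limits are invented for illustration only.

```c
#include <assert.h>

/* Hypothetical requirement REQ-042: the commanded throttle must be
   saturated to the range [0, 100] percent.  Once the model meets this
   requirement, a test like the one in the usage note below "locks it
   down" so later changes cannot silently break it. */
static double saturate_throttle(double cmd)
{
    if (cmd > 100.0) return 100.0;
    if (cmd < 0.0)   return 0.0;
    return cmd;
}
```

A lock-down test would then exercise the boundaries: values above, below, and inside the range, each checked against the required output.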

Integration with existing artifacts

For most companies, unless they are starting from a clean sheet, there will be existing software components that need to be integrated with those created by the Model-Based Design process.  There are three types of integration

  1. Bringing existing software into the Model-Based Design framework
  2. Bringing Model-Based Design artifacts into the existing architecture
  3. A combination of 1 and 2

The topic of integration will be covered in greater detail in an upcoming post.  However, the fundamental guidelines for integration (in either direction) are the following.

  • Create software objects with well-defined interfaces (encapsulation)
  • Limit dependencies of the software objects on external objects
  • Minimize the use of “glue code”(1).
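The first two guidelines can be illustrated with a small sketch.  The names here (`LegacyFilter`, `legacy_filter_*`) are hypothetical, not from any real code base: the point is that all state lives in one struct behind a well-defined interface, so the component depends on no external globals and needs no glue code to integrate.

```c
#include <assert.h>

/* Sketch of a well-encapsulated legacy component.  All state is held
   in the struct; callers interact only through init/step functions. */
typedef struct {
    double state;   /* internal filter state                 */
    double alpha;   /* smoothing coefficient, 0 < alpha <= 1 */
} LegacyFilter;

void legacy_filter_init(LegacyFilter *f, double alpha)
{
    f->state = 0.0;
    f->alpha = alpha;
}

double legacy_filter_step(LegacyFilter *f, double input)
{
    /* first-order low-pass: state += alpha * (input - state) */
    f->state += f->alpha * (input - f->state);
    return f->state;
}
```

Because the interface is explicit and self-contained, the same function can be wrapped for use inside a model or called from generated code with no modification.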

Version control of objects

Version control processes use tools to enable team-based development while maintaining multiple “releases” or “branches”.  During the initial phase of the project, which software objects will be placed under control, and how a “release” will be defined, should be determined.  This initial definition will be refined as the Model-Based Design process is elaborated.  This blog will go into detail on this in a future post.  The basic questions are

  • Do you include derived objects in the version control software: .c, .h, test reports…
  • How do you arbitrate check-in conflicts: How do engineers resolve conflicts in their model interfaces?
  • How do you share data / models across projects: What methodology will facilitate reuse of objects across multiple projects with minimal configuration impact?

Managerial metrics

The initial adoption phase by its nature will be slower than later phases, as people are still learning new capabilities and processes.  The primary objectives during this phase are

  1. Learn what bottlenecks exist with the new process
  2. Understand existing issues uncovered by the transition
  3. Determine the level of resources required for the next stage

The discovery of existing issues (Objective 2) often surprises people.  The act of transitioning to a new process forces the evaluation of existing processes and, more often than not, illuminates existing shortcomings.  Extra care should be taken to ensure that the new process addresses those shortcomings.

In the next stage, the validation project, the team should expand beyond the initial “core” team.  Ideally, people from outside the initial project scope should be brought in to identify developmental pain points that did not exist in the “core group” processes.

Footnotes

(1): “Glue code” is a software object created solely for the connection of two other software objects.

Adoption Time Line: Exploration Phase Part II

Continuing from an earlier post we now look at how you set the objectives for the initial adoption phase.

We need the champions

Before we proceed in setting objectives we need to talk about resources.  There are three resources required for an adoption process to succeed; they are

  1. Champions:  Technical and managerial support for the adoption process.  Without active advocates change will not happen.
  2. Time:  The champions need to have time allocated to working on the process change.  Ideally the technical champions will have 100% of their effort allocated to the adoption of the new process.  When resources are allocated at less than 80%, the change is likely to fail.
  3. Experience:  The people working on the project need to understand the current workflow so they can address its shortcomings and speak to the people outside of the adoption group.

An earlier blog post from LinkedIn provides additional details.

Setting goals

Based on the information collected from the process adoption team, the objectives for the initial adoption phase should be set.  While the specifics for any given organization will differ, the following outline is a fairly standard starting point.

  1. Prior to start of initial adoption phase
    1. Allocate resources to the process adoption team in support of project
    2. Process adoption team completes identified required training
    3. Review reference materials to understand current industry best practices
  2. By completion  of initial adoption phase (1)
    1. Technical 
      1. Understand how artifacts from models integrate with existing artifacts
      2. Establish baseline testing activities
      3. Implement version control for new modeling artifacts
      4. Identify initial model and data architecture
    2. Managerial
      1. Review methods for measuring model key performance indicators (KPIs)
      2. Review resources required during initial adoption phase (2)

Bounding the problem

A word of caution: Model-Based Design offers multiple tools and methods as part of the development workflow.  A common pitfall when establishing any new process is to “overreach” by utilizing multiple new tools all at once; the resulting dilution of attention introduces errors of misunderstanding and results in a slower adoption of the process.  In the initial adoption phase posts, I will discuss the normal building blocks for Model-Based Design.

Next post

The next series of posts will cover model architecture and data management.  These topics will help in understanding the next phases of the adoption and establishment processes.

Footnotes

(1) The term “adoption” reflects the fact that there are existing resources to guide companies in adopting workflows.  I always encourage people to leverage existing information rather than creating new workflows from whole cloth.  This is critically important when working in a safety critical environment.
(2) Identifying the resources required for future phases should be based on the KPI information gathered from the initial adoption phase.  It should also take into account the “cost of learning” associated with starting a new process.

Model-Based Design: Preventing Factors

The decision to adopt a new process can be viewed from two directions: what drives companies to embrace new methodologies and what prevents them from doing so. In last week’s post we looked at the driving factors; this week we will talk about the preventive factors.

Core preventive factors

The motivation behind preventive factors is easy to understand; staying with the status quo is free of immediate risk.  At first glance, it facilitates project planning based on known development costs.  However, time and labor estimates are only accurate in situations of incremental change, such as going from version 10.3 to version 10.4 of a stand-alone electronic throttle controller (a 20+ year old technology with minor software updates), and that is rarely the case.  The growing complexity of software systems exacerbates the problem of time and labor estimates.

Additional preventive factors

Product development risk is the factor most often cited by managers; the technical staff (controls and systems engineers), on the other hand, raise the following issues.

  • Loss of code efficiency
  • Lack of ability to customize
  • Concern about interfacing with existing code
  • Need to train on new tools
  • Not accepted for safety critical workflows

How Model-Based Design addresses these factors

In my previous post, Model-Based Design: Driving Factors, I discussed the issues around increasing complexity, decreasing cycle time, and the drive for cost reduction.  In today’s post we will look at the engineers’ concerns, starting with the loss of code efficiency.

Code efficiency: Code efficiency refers to the memory usage (RAM / ROM) and the execution speed required to execute a given algorithm.  Currently, automatic code generators do not produce code for single functions that is equal to that of the best C/C++ programmers.  However, they produce better code than the average C/C++ programmer(1) and, for large systems, can find optimizations that humans may overlook.  Further, any time a controls engineer spends becoming a better C programmer is time they are not spending becoming a better controls engineer.

In future sections of this blog, we will examine which areas are best developed using automated code generation tools and which should be done using hand coded methodologies.

Lack of ability to customize: Modern code generation tools provide users with the ability to fully customize the

  • Function interface
  • Data scope / data type
  • Function partitioning
  • Execution rates
  • Code formatting
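To make the list above concrete, here is a sketch of what a customized generated-code interface might look like.  The names, struct layouts, and gain are assumptions for illustration, not actual code-generator output; the point is that the function signature, data scope, and formatting can all be made to match an in-house interface rule.

```c
#include <assert.h>

/* Hypothetical customized interface: inputs and outputs are grouped
   into structs and passed by reference rather than held in globals,
   matching an assumed house style for function interfaces. */
typedef struct {
    float speed_cmd;    /* input: commanded speed (m/s) */
    float speed_meas;   /* input: measured speed (m/s)  */
} CtrlInputs;

typedef struct {
    float torque_cmd;   /* output: commanded torque (Nm) */
} CtrlOutputs;

void ctrl_step(const CtrlInputs *in, CtrlOutputs *out)
{
    const float kp = 2.0f;  /* proportional gain (assumed value) */
    out->torque_cmd = kp * (in->speed_cmd - in->speed_meas);
}
```

Once this pattern is configured, the generator emits it identically for every function, every build.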

What is more, once the customization has been defined, the tool will consistently follow the pattern, unlike human programmers, who can easily make formatting and functional mistakes.

Ability to interface with existing code: When transitioning to a model-centered workflow, there will be existing C code that will continue to be used.  The difficulty of continuing to use these functions depends on 4 primary factors

  1. Is the function well partitioned?
  2. Is the function's intended behavior well defined?
  3. Is the code well documented?
  4. Are there test cases for the function?
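When factor 4 is missing, a pragmatic first step is to write characterization tests that record the existing function's current behavior before bringing it into the model.  The function below (`clamp_u8`) is a hypothetical stand-in for legacy code; the checks simply pin down what it does today so any regression surfaces after integration.

```c
#include <assert.h>

/* Hypothetical legacy function: clamp a value to the 8-bit range. */
static int clamp_u8(int v)
{
    if (v < 0)   return 0;
    if (v > 255) return 255;
    return v;
}

/* Characterization tests: record current behavior at the boundaries
   and in the pass-through region before integration. */
static void characterize_clamp_u8(void)
{
    assert(clamp_u8(-10) == 0);    /* lower bound */
    assert(clamp_u8(300) == 255);  /* upper bound */
    assert(clamp_u8(128) == 128);  /* pass-through */
}
```

Running such tests both before and after integration gives confidence that the wrapped function behaves identically inside the Model-Based Design framework.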

As you may have surmised, the ability to integrate existing code into a Model-Based Design workflow is similar to the ability to reuse functionality between projects.  Modern Model-Based Design tools allow users to easily integrate code, assuming that the code is well partitioned.

Training on new tools: From a strict interpretation this critique is valid; however, it assumes a workforce that is already fully trained on existing C-based design workflows.  As this link shows, C is no longer the most common starting programming language.  Further, C, unlike model-centered designs, does not directly map onto the engineering notation used by controls engineers.  With this in mind, training can be viewed as a one-time additional cost.

Next week’s blog

In next week’s blog I will examine a typical timeline for adopting Model-Based Design within a group and across your organization.