What really matters…

Recently I was talking with a co-worker about an image detection model, and they raised the now “classic” question: how do you deal with weather obscuring your data?

You know it says “Stop” but does your algorithm?

We quickly moved from static obstruction (the snow-covered stop sign) to the dynamic obstruction of rain and snow.

Starting from first principles you can quickly outline the factors that contribute to the model:

  • Rain / snow density (drops / flakes per m^3): This is a measure of the storm’s “intensity”.
  • Size: How large are the snowflakes / raindrops?
  • Wind speed: This affects how the flakes / drops move; it is further complicated by wind gusts.
  • Camera velocity: Is your camera moving?
  • Depth: How far away is the object that you are viewing?

There already exist a number of interesting papers that examine these parameters in full. My real question is “Do I want to model the rain, or the effect of the rain on my algorithms?”

In some cases it may be necessary to fully model the behavior. In others a simplified characteristic model can be used. Let us look at what we determined was important.

What really matters…

Given that objects, in general, don’t change their shape(1), it is possible to filter out the noise of rain or snow. What we need to understand is how rain or snow obscure objects.

  • Interposition: Every drop of rain and every flake of snow acts as a “point barrier” between the camera and the object.
What happens

Take the simple example: for an object 10 meters away, the extent of “obscuring” is a function of the rain density and the droplet size; further, the distance between the camera and the object (depth) determines the extent of the obstruction. What then should we do with the wind? How should we model the movement?


It turns out, not much. When I compared the effect on obstruction of random droplet / flake movement against wind-driven movement models, the differences were negligible for the vision application.
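To make this concrete, below is a minimal MATLAB sketch of the kind of simplified characteristic model I am describing. It assumes drops are uniformly distributed and that occlusion follows a Beer-Lambert style exponential law; all parameter values are illustrative, not measured.

% Simplified "characteristic" occlusion model (a sketch, not a validated model)
density  = 500;        % drops per m^3 (storm "intensity")
dropDiam = 2e-3;       % mean drop diameter in meters
depth    = 10;         % distance from camera to object in meters

dropArea = pi * (dropDiam/2)^2;                   % projected area of one drop (m^2)
occluded = 1 - exp(-density * dropArea * depth);  % fraction of the view blocked

fprintf('Approximate fraction of the object occluded: %.3f\n', occluded);

Note that wind does not appear anywhere in this sketch; that is the point of the simplification.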

The moral of the story…

  • Modeling systems is a question about what is required for your analysis. Knowing this, I can create the minimal model that is required to perform my tests.
  • Developing simplified models starts with an understanding of the “real” system.

Footnote: 1: The profile of the object may change as the relative angle is changed.

If you find this content useful, please consider subscribing

Consulting ethics:

For most of the last 25 years I have worked in a consultative capacity; during that time I have come to a set of principles that define what I consider an ethical framework as a consultant.

Honesty: Time, Talent and Tools

In all interactions with a client honesty is the foremost byword. This shows up in three areas.


Time: Projects take time. Estimating how long a project takes is a skill that develops with experience. When providing an estimate to a client, always use your best judgement as to how long the project will take, enumerating the tasks to be completed and the potential unknowns. If an accurate estimate is not possible due to a lack of understanding, you should consider whether this is the correct type of project for you to be working on.


Talent: When I speak about talent I am thinking about the sum of the things that you, or the people you work with, have a strong understanding of. When starting a project my rule of thumb is that my team should understand at least 80% of the project scope. Why 80% and not 100%? My assumption is that for any project, outside of turn-key implementations, there is project-specific knowledge that my customer has that my group will need to learn to best help out. A critical early part of any project is coming to terms with that unknown 20%.

Tools: There is an old saying, “when all you have is a hammer, everything looks like a nail”. Selecting the correct tool for any job is important; not forcing a tool just because your company sells it, or because it is what you are familiar with, is part of that ethics. In the end, select tools being aware of the cost (both of the tool and of the time spent working with it) versus the functionality (what percentage of the issue the tool addresses).


Understand what the customer needs…


A story I have often told is about the first consulting contract I worked on over 20 years ago. The customer requirements were well written, scoping things out fully. I had the required skill set to implement what they requested and the tools were available “in-house” for the customer. After 200 hours of work (out of an estimated 220) I handed over the finished product and the customer went away happily. However, in hindsight what the customer asked me to do wasn’t what they needed me to do; I solved the symptom, not the problem.

There are two things to take from this. First, during the initial interview it is critical to get to a solid understanding of the root issue the customer is trying to solve. Second, it is important to be able to convey to the customer why a proposed solution may not address the underlying problem. (If the customer already knows exactly how to solve the problem, then they are hiring a temporary employee, not a consultant.)

Stand behind what you deliver

For most consulting projects the work is completed within a handful of months. The customer will continue to use what you taught them and delivered to them for years. In the 25 years I have been working I have always been ready to speak with old customers about what I did, to help them with the unexpected curves they hit.

If you find this content useful, please consider subscribing

Math! Solving problems, making friends…

A few months ago there was an interesting article about AlphaGo and deep learning.  What made the solution interesting is that AlphaGo determined a new strategy for playing Go; i.e. it wasn’t just replicating the best of current players, it was creating something new.


So this got me thinking about sensor placement. Let’s say you are building an autonomous vehicle; you give it radar, LIDAR, cameras, heck, even a person walking 5 feet in front of the car with a red flag. But how do you know what the best configuration of those sensors is? This is where math and deep learning can come to the rescue.

It’s research, damn the budget!

With research vehicles it is common to go high end with the sensors and processors. Often the number of sensors is far in excess of what the final vehicle will have; the question then becomes “If you train your controller using 10 sensors and the final vehicle will only have 3, do you have to retrain?”

So here is the insight: in regression there is the concept of a synthetic variable, e.g. you can take the raw data X and put X^2 into the equation. What if, for sensor data, you optimized over sensor position in the same way?


Step 1: Train your model using the full set of sensors. Train it until you reach the highest confidence level you can.

Step 2: Define the set of equations that take your original sensor data and use them to create “artificial” sensors at arbitrary locations (see the sketch after step 4). For example, if you have 10 LIDAR sensors, it could be that
AS_1 = func(X1,Y1,Z1) = func(L1,L2,L4,L6)
AS_2 = func(X2,Y2,Z2) = func(L3,L4,L7)
AS_3 = func(X3,Y3,Z3) = func(L2,L6,L9,L10)

Step 3: Using the already-existing training data, train the model with the new sensor array

Step 4: Optimize the models as a function of the sensor placement and number of sensors
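Here is a minimal MATLAB sketch of what step 2’s “artificial” sensor could look like. It assumes each LIDAR return has been reduced to a range measurement to a common target and that a simple inverse-distance weighting stands in for the real geometric projection; the variable names and values are illustrative, not from an existing toolchain.

% Create an "artificial" sensor reading from real sensor readings
sensorPos     = [0 0 0; 1 0 0; 0 1 0; 1 1 0];   % x,y,z of 4 real sensors (m)
ranges        = [10.2; 10.4; 10.1; 10.5];       % measured ranges to the target (m)
artificialPos = [0.25 0.75 0];                  % location of the synthetic sensor (m)

d = vecnorm(sensorPos - artificialPos, 2, 2);   % distance from each real sensor
w = (1 ./ d) / sum(1 ./ d);                     % normalized inverse-distance weights

artificialRange = w' * ranges;                  % synthetic sensor reading
fprintf('Artificial sensor range estimate: %.2f m\n', artificialRange);

In the full approach this mapping would be parameterized, so that step 4 can optimize the artificial sensor locations along with the model.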

Sensor fusion before the sensors…

I think of this approach as “sensor fusion before the sensors”. What is interesting about this approach is that it is possible we could discover a combination of sensors (and yes, this should be done with multiple sensors) that has higher accuracy and lower cost than we expect.


If you find this content useful, please consider subscribing

11111100010 to 11111100011: Every bit makes a difference

Welcome to the year 2019. The first post of the year (or the last post of the previous year) is generally a place for retrospectives and forward-looking visions. I shall keep to the form.

Excited: 2018

There were multiple things to be excited about in 2018; the following are what I found most exciting:

  1. Simplified methods for integrating C with models: MATLAB and Simulink added new functionality to improve the integration of C code and models, either by including code in the model (C-Caller block) or by simplifying the function interface definition.
  2. Projects, data dictionaries and testing: MathWorks continues to work on the integration between Simulink Projects and the supporting tool chain; these improvements, coupled with version control software integration, make the tool powerful for team-based development.
  3. String support: For better or for worse, strings are a part of control systems; after many years Simulink has native support.

Looking forward: 2019

  1. Machine learning: Ok, sure, I just got started, but already there are a number of interesting questions (and lovely math) that I am enjoying.
  2. Systems of systems: The scope of Model-Based Design projects continues to expand; the analysis of systems of systems is an area I’m excited to move into.
  3. Statistical validation: As our systems become more complex, statistical analysis (DOE) becomes more and more important.

If you find this content useful, please consider subscribing

Apples and Oranges

There is an old saying that “You can’t compare apples to oranges”; but frequently as an engineer, one of our jobs is to compare apples to oranges.  Why and what does that mean?

Frequently, when a new system is being built the actual system does not yet exist.  As a result, we need to draw analogies between the new system and an existing one.  So if we are going to compare apples and oranges, how do we “juice” them for the correct data?


The mapping between objects: first principles

One objective with these mappings is to reduce the volume of first-principles modeling that needs to be done; e.g. instead of building up a complex system of equations, use a data reduction of an existing system, modified for the new system.  However, to do that, we must first validate that the fundamental behaviors of the systems scale between each other.  For example, scaling a 4-cylinder 1.8L engine to a 2.1L engine is straightforward.  To go from a 4-cylinder to an 8-cylinder is also a straightforward, if slightly different, problem.
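As a simple illustration, here is a minimal MATLAB sketch of that kind of mapping, assuming (as a first approximation) that torque scales linearly with displacement; the torque values are illustrative, not from a real engine.

% Scale an existing engine's torque map to a new displacement
rpm        = [1000 2000 3000 4000 5000 6000];   % engine speed breakpoints
torque_18L = [ 120  155  170  168  150  120];   % 1.8L torque data (N*m)

scale      = 2.1 / 1.8;                         % displacement ratio
torque_21L = scale * torque_18L;                % first-cut 2.1L torque map

plot(rpm, torque_18L, 'o-', rpm, torque_21L, 's-');
legend('1.8L (measured)', '2.1L (scaled)'); xlabel('RPM'); ylabel('Torque (N*m)');

Whether that linear assumption holds is exactly the validation step described above.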

What is the same?  What is different?

You select the reference model because there are things the two systems share in common.  At the same time, there will be things that are different.  In our example of the 4-cylinder to the 8-cylinder engine, there are differences in total displacement (an easy mapping), torque output, et cetera.  The thing that may not be easy to map is the change in the vibrational behavior of the engine.  The questions to ask are:

  1. What physical aspects of the system am I concerned with?
    1. Torque
    2. Fuel consumption
    3. Max rpm
  2. What aspects am I not concerned with?
    1. Vibration
    2. Heat transfer
    3. ….

If you find this content useful, please consider subscribing

The truth about regression…

So my first step into machine learning is examining supervised learning and my old friend curve fitting.  This was a fun trip down memory lane, as my numerical mathematics background had me well prepared for this topic.  Further, in the 25 years since I graduated, the basic tools for performing regression analysis have greatly improved; I was able to use the built-in curve fitting tool in MATLAB to perform these operations directly.  At the same time, I quickly remembered the “gotchas” about regression…

Starting points

As a starting point, I thought I would see how well the tool would do with a known polynomial equation.  I defined a simple 3rd order equation and added some noise into the signal.  I then ran the results through the curve fitting toolbox…
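For reference, here is a minimal sketch of that experiment using base MATLAB’s polyfit rather than the Curve Fitting Toolbox app; the coefficients and noise level are illustrative, not the exact values used in this post.

% Fit a known 3rd order polynomial, with and without noise
x       = linspace(-2, 2, 200)';
p_true  = [1.5 1.2 -0.5 0];              % 1.5x^3 + 1.2x^2 - 0.5x + 0
y       = polyval(p_true, x);
y_noisy = y + 0.25 * randn(size(y));     % add Gaussian noise

p_clean = polyfit(x, y, 3);              % fit the "real" data
p_noisy = polyfit(x, y_noisy, 3);        % fit the noisy data

disp([p_true; p_clean; p_noisy]);        % compare coefficients row by row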

The data

As a sanity check, I first tried it out using the “real” X and Y data.  As would be expected, the tool spit back the coefficients that I expected.  When I ran it with the noisy data (telling it to use a 3rd order polynomial) it returned coefficients fairly close to the “real” things.

Curve fitting within the boundaries
Coefficient   Original   Regressed “Real”   Regressed “Noise”
P4             0.0       -3.79e-14          -0.745
P3             1.5        1.5                3.05
P2             1.2        1.2                0.8709
P1            -0.5       -0.5               -0.522
R-Square       NA         1.0                0.9939

Within the defined range we have an R-squared value of 0.9925, pretty darn good.  But, the question quickly becomes, how does it hold up outside of the known range?

Pretty well actually?

So depending on how you want to look at it, as an absolute error or a percentage error, you may be doing “O.k.” or not.  But what if I didn’t know what the underlying equation was?  An exponential function would seem like a reasonable guess.  If I try a first order exponential equation

 y = a * exp(b * x)
or second
y = a * exp(b * x) + c * exp(d * x);
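A sketch of how those two fits can be produced with the Curve Fitting Toolbox is shown below; ‘exp1’ and ‘exp2’ are the toolbox’s built-in one- and two-term exponential models, and x and y_noisy are assumed to come from the polynomial sketch above (the explicit start points are illustrative, included to keep the solver well behaved).

% Fit one- and two-term exponential models (requires Curve Fitting Toolbox)
[f1, gof1] = fit(x, y_noisy, 'exp1', 'StartPoint', [1 0.5]);          % a*exp(b*x)
[f2, gof2] = fit(x, y_noisy, 'exp2', 'StartPoint', [1 0.5 1 -0.5]);   % a*exp(b*x) + c*exp(d*x)

fprintf('1st order R-square: %.4f\n', gof1.rsquare);
fprintf('2nd order R-square: %.4f\n', gof2.rsquare);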

Order   R-square
1st     0.9852
2nd     0.9961

Within the original bounds, the R-squared value is close to that of the actual polynomial.  If we only needed the fit in that region, the 1st order equation executes faster than the 3rd order polynomial, so we may want to go with a “fake fast fit” (say that three times fast).

Why do we try to “fit-in”?

There are several reasons why regression is performed.  Sometimes the underlying equations are unknown, sometimes they are too computationally expensive, and sometimes there simply isn’t an equation.  Regardless of the reason, when I am trying to perform a regression I take the following steps:

  1. What are the fundamental physics?  Use this to pick out which terms and functions to use.  (Note: The Curve Fitting Toolbox has a wonderful “custom equation” option that lets you try these equation-based approaches out; a short sketch follows this list.)
  2. Boundaries?  I try to understand how the fit behaves outside of the known data set.  In my example the exponential function does not perform well far outside of my known data.
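Here is a minimal sketch of that “custom equation” option, expressing the same 3rd order form as a custom fittype; x and y_noisy are assumed to come from the earlier sketch and the start points are illustrative.

% Fit a user-defined equation (requires Curve Fitting Toolbox)
ft = fittype('a*x^3 + b*x^2 + c*x + d');
[fc, gofc] = fit(x, y_noisy, ft, 'StartPoint', [1 1 1 0]);
fprintf('Custom equation R-square: %.4f\n', gofc.rsquare);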

Disclaimer:

Roughly 30 minutes after I finished writing this blog post I stumbled across the post “Explore Runge’s Polynomial Interpolation Phenomenon” by Cleve Moler.  I would be remiss not to include this link, for three reasons:

  1. Cleve’s post clearly articulates the mathematical underpinnings of interpolation
  2. As the creator of MATLAB, his work is why things are “greatly improved”.
  3. It’s a post about Runge, and since I had a cat named Runga-Kutta (RK) back in college well…

Warning: Bad aerospace engineering pun follows:

When I was in grad school and had that cute cat, I was working on a simple project: a device to determine the terminal angle (stall) for a kite.  For the most part, I worked on this project in my apartment, and Runga loved to attack the kite’s tail, jumping at it and attacking it.  That was well and good in the apartment.

Well, at the early testing stages I was flying the kite off of my back deck.  One day while I was flying it, RK managed to get out.  In a leap of excitement she jumped onto the kite’s tail (knowing from experience she would land happily on the carpet).  Except, this time, the kite was 8 feet off the ground.  Needless to say, both the cat and the kite crashed to the ground, proving that for this case the Kutta condition did not hold.


If you find this content useful, please consider subscribing

Learning about machine learning…

For the last two years, I have written about Model-Based Design in the context of classical control algorithms and software verification processes.  Over the next year, I am broadening the scope of this blog as I investigate how machine learning can be applied to MBD in the context of safety critical systems.

My questions

  1. What is the current state of the art in Machine learning?
    1. How can it be used for controls problems?
    2. When should it be used?  When should it not be used?
    3. Where is it the only solution?
  2. What are the building blocks of machine learning?
    1. Starting small, what are the easy things to do?
    2. Hitting the middle, what is a “real” project?
    3. Common “starting” pitfalls
  3. Testing and verification
    1. Can you do black box testing?
    2. How do you test an “open world” model?


So join me on this trip…

I will share my insights, my mistakes and look forward to hearing from you…


Levels of testing…

In English, there is a saying: “where there is smoke there is fire.”  It is an interesting saying which you could take to mean multiple things:

  1. Smoke will let you find problems from a distance
  2. Smoke will show up before the fire

At The MathWorks, we call our most primitive tests “smoke tests”.  A smoke test is a basic “does it turn on” test.  The idea behind smoke tests is to validate that the unit under test performs in the most basic fashion and is therefore ready to run longer, more complex tests.  For example, a set of smoke tests for a model would look like the following (a scripted sketch follows the list):

  1. Perform an update diagram (Order of a minute)
    if pass then…
  2. Check the model configuration parameters  (Order of a minute)
    if pass then…
  3. Check model for unallowed blocks (Simulink Check)  (Order of a minute)
    if pass then…
  4. Run your complex test suite… (Order of a ???)
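Here is a minimal MATLAB sketch of that sequence; ‘myModel’ is a placeholder model name, the solver check is just one example of a configuration parameter check, and find_system stands in for a full Simulink Check analysis.

% Smoke tests: fail fast before handing off to the long-running suite
mdl = 'myModel';
load_system(mdl);

% 1. Perform an update diagram; this errors out if the model cannot compile
set_param(mdl, 'SimulationCommand', 'update');

% 2. Check a model configuration parameter
assert(strcmp(get_param(mdl, 'SolverType'), 'Fixed-step'), ...
       'Model must use a fixed-step solver');

% 3. Check the model for unallowed blocks (here, Data Store Memory blocks)
badBlocks = find_system(mdl, 'BlockType', 'DataStoreMemory');
assert(isempty(badBlocks), 'Unallowed blocks found');

% 4. Only now kick off the complex test suite…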

Why?

For larger, more complex tests, the setup, execution and evaluation time can be on the order of hours.  The ability to “abort” before running these longer tests is important as your test suite grows; you don’t want to take up time on your CI system running tests on models that are not in the correct state.


If you find this content useful, please consider subscribing

Mapping Model-Based Design to Thanksgiving

The Thanksgiving meal and MBD have a lot in common; both are times when people come together, both involve many moving parts, and both are better with some forethought and planning.  (No one can cook a turkey at the last minute.)  With that in mind, here is how the courses in the Thanksgiving meal map onto the MBD process.

The salad: documentation


Let us be honest, when you think of Thanksgiving you generally don’t think of the salad, but you’re pretty happy to have something light given the heavy items in the meal.  That is why it maps onto documentation in MBD: something you are happy to have from your tool providers, and something that you hope exists a year from now when you come back to look at what you put together.
Bonus thought: Like salad, documentation is good for you.

Cranberries: requirements

For many people, cranberries enter their life at Thanksgiving and depart after Christmas.  They know cranberries are supposed to be part of the meal but they don’t really know why they are having them.  In the Thanksgiving meal, the cranberries provide the bitter / acid component to what is otherwise a sweet and fatty meal.  With Model-Based Design, requirements provide the harsh acid of truth against which your design is measured.
Bonus thought: In the same way that cranberries grow in a bog, requirements grow out of the confusion of initial requests.

Mashed potatoes: testing


Buttery, starchy, with garlic or herbs, mashed potatoes could (in my humble opinion) form a meal in and of themselves.  Once the gravy from the turkey is added in, the analogy to testing becomes clear.  Mashed potatoes bond with the turkey through gravy; testing bonds with the models by wrapping their outputs in loving bits of analysis.
Bonus thought: In the same way mashed potatoes can become too heavy if you don’t prep them well, testing can become burdensome if you don’t plan ahead.

Turkey: the model


In the same way that the turkey is the center of the meal, the model is the center of Model-Based Design.  Much like a turkey, you can overcook them (poor architecture leading to hard-to-swallow models) or undercook them (poorly designed models with gaps or holes that lead to poisoned outcomes).  However, with just a little bit of forethought, you can have a juicy, delicious bird.
Bonus thought: Much like deep-frying turkeys was a fad a few years ago, there are fads in software development.  In the end, good development practices don’t change: be clear, be complete, and test; don’t fall for fads.

The dessert: certification


Certification is the dessert of the meal.  It may come at the end of everything but I bet you have been thinking about it during the rest of the meal.

Bonus thought: Certification does not mean you have a good product.  It just means you have followed a process.  Take your time to do it right and you can have your cake (pie) and eat it too.

 

 

Final thought:  

There are many parts of Model-Based Design, they all have their uses, and we all will have our favorite parts.  For those of you in the US, happy Thanksgiving.  


Traceability

What is traceability and why do we care about it?  First, a definition:


Traceability is the ability to describe and follow the life of a requirement in both a forwards and backwards direction (i.e., from its origins, through its development and specification, to its subsequent deployment and use, and through periods of ongoing refinement).

There are three critical aspects to this:

  1. An independent documenting method: for an object to be traceable it must be documented. This has 2 parts
    1. A unique identifier for the object
    2. A version history of the object
  2. Linkages between objects: the documenting (linking) method must include dependencies between objects.   This can include
    1. Created by (e.g. this model generates that code)
    2. Dependent on (e.g. this requirement is completed by its parent)
    3. Fulfills (e.g. this test case implementation fulfills the test requirement)
  3. Bi-directional: The objects can be traced in any direction.  (Note: not all relationships are parent/child: there can be “associated” linkages)
Traceability allows us to “connect the dots”
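To make the linkage idea concrete, here is a minimal sketch of what one traceability record could look like, using a MATLAB struct; the field names and IDs are illustrative, not from any particular requirements tool.

% One traceability record for a single requirement
link.id            = 'REQ-042';                  % unique identifier
link.version       = 3;                          % current revision (history kept elsewhere)
link.createdFrom   = 'SRS-007';                  % parent system requirement
link.fulfilledBy   = {'TEST-101', 'TEST-102'};   % test cases that verify it
link.implementedIn = 'Model: cruiseControl';     % model that realizes the requirement

Bi-directional tracing means the reverse question (“which requirements does TEST-101 verify?”) is answered from the same set of records.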

Why should we care?

Traceability is about organization: the ability to easily find information.  For small systems, or for personal projects, traceability may be an unneeded burden.  However, when you have multiple people working on a project, the ability to trace between objects is how information is relayed.

If you find this content useful, please consider subscribing