The truth about regression…

So my first step into machine learning is examining supervised learning and my old friend, curve fitting.  This was a fun trip down memory lane, as my numerical mathematics background had me well prepared for the topic.  Further, in the 25 years since I graduated, the basic tools for performing regression analysis have greatly improved; I was able to use MATLAB's built-in Curve Fitting Toolbox to perform these operations directly.  At the same time, I quickly remembered the “gotchas” about regression…

Starting points

As a starting point, I thought I would see how well the tool would do with a known polynomial equation.  I defined a simple 3rd order equation and added some noise to the signal.  I then ran the results through the Curve Fitting Toolbox…

The data

As a sanity check, I first tried it out using the “real” X and Y data.  As would be expected, the tool spits back out the coefficients that I expected.  When I ran it with the noisy data (telling it to use a 3rd order polynomial), it returned coefficients fairly close to the “real” ones.
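I don't have the exact toolbox session to share, but the experiment is easy to reproduce.  Here is a minimal Python/NumPy sketch of the same idea (the post used MATLAB's Curve Fitting Toolbox; the x range and noise level below are my assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# "Real" 3rd order polynomial matching the coefficients in the post:
# y = -0.5*x^3 + 1.2*x^2 + 1.5*x + 0.0  (highest power first, NumPy convention)
true_coeffs = np.array([-0.5, 1.2, 1.5, 0.0])

x = np.linspace(-2.0, 2.0, 200)                          # assumed range
y_real = np.polyval(true_coeffs, x)
y_noise = y_real + rng.normal(scale=0.25, size=x.shape)  # assumed noise level

# Fit a 3rd order polynomial to the clean and the noisy signals
fit_real = np.polyfit(x, y_real, 3)    # recovers the true coefficients almost exactly
fit_noise = np.polyfit(x, y_noise, 3)  # close to the true values, perturbed by noise

print(fit_real)
print(fit_noise)
```

On the clean data the recovered coefficients match to numerical precision; on the noisy data they come back "fairly close," just as the table shows.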

Curve fitting within the boundaries

Coefficient | Original | Regressed “Real” | Regressed “Noise”
P4          |  0.0     | -3.79e-14        | -0.745
P3          |  1.5     |  1.5             |  3.05
P2          |  1.2     |  1.2             |  0.8709
P1          | -0.5     | -0.5             | -0.522
R-Square    |  NA      |  1.0             |  0.9939

Within the defined range we have an R-squared value of 0.9939, pretty darn good.  But the question quickly becomes: how does it hold up outside of the known range?
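For reference, R-squared is just the fraction of the data's variance that the fit explains.  A minimal sketch of the calculation (in Python, since I'm not showing toolbox code here):

```python
import numpy as np

def r_squared(y_actual, y_fit):
    """R^2 = 1 - SS_res / SS_tot: the fraction of variance explained by the fit."""
    y_actual = np.asarray(y_actual, dtype=float)
    y_fit = np.asarray(y_fit, dtype=float)
    ss_res = np.sum((y_actual - y_fit) ** 2)            # residual sum of squares
    ss_tot = np.sum((y_actual - y_actual.mean()) ** 2)  # total sum of squares
    return 1.0 - ss_res / ss_tot
```

A perfect fit scores 1.0; a "fit" that only predicts the mean scores 0.0.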

Pretty well actually?

So, depending on whether you look at it as an absolute error or a percentage error, you may be doing “O.k.” or not.  But what if I didn’t know what the underlying equation was?  An exponential function would seem like a reasonable guess.  If I try a first order exponential equation

y = a * exp(b * x)

or a second order

y = a * exp(b * x) + c * exp(d * x)

Order | R-square
1st   | 0.9852
2nd   | 0.9961

Within the original bounds, the R-squared value is close to the actual polynomial's.  And since the 1st order equation executes faster than the 3rd order polynomial, if we only needed that region we might want to go with a “fake fast fit” (say that three times fast).
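To see the extrapolation danger concretely, here is a sketch with SciPy standing in for the toolbox's exp1 fit (the fitting range is an assumption): both fits track the data in-range, but the exponential goes badly wrong outside it.

```python
import numpy as np
from scipy.optimize import curve_fit

def exp1(x, a, b):
    # MATLAB's "exp1" library model form: y = a*exp(b*x)
    return a * np.exp(b * x)

true_coeffs = [-0.5, 1.2, 1.5, 0.0]    # the cubic from earlier
x = np.linspace(0.1, 2.0, 100)         # assumed fitting range
y = np.polyval(true_coeffs, x)         # noiseless, for clarity

(a, b), _ = curve_fit(exp1, x, y, p0=(1.0, 0.5), maxfev=5000)
poly_fit = np.polyfit(x, y, 3)

# Both fits look fine in-range; now step well outside the known range
x_out = 5.0
y_true_out = np.polyval(true_coeffs, x_out)  # the cubic gives -25.0 here
err_exp = abs(exp1(x_out, a, b) - y_true_out)
err_poly = abs(np.polyval(poly_fit, x_out) - y_true_out)
print(err_exp, err_poly)  # the exponential's extrapolation error is far larger
```

The cubic heads downward outside the range while any a*exp(b*x) with positive a stays positive, so the exponential's extrapolation error is enormous even though its in-range R-squared looked respectable.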

Why do we try to “fit-in”?

There are several reasons why regression is performed.  Sometimes the underlying equations are unknown; sometimes they are too computationally expensive; sometimes there simply is no equation.  Regardless of the reason, when I am trying to perform a regression I take the following steps:

  1. What are the fundamental physics?  Use this to pick out which terms and functions to use.  (Note: the Curve Fitting Toolbox has a wonderful “custom equation” option that lets you try these equation-based approaches out.)
  2. What are the boundaries?  I try to understand how the fit behaves outside of the known data set.  In my example, the exponential function does not perform well much outside of my known data.
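Step 1 is what the "custom equation" option enables: you supply the functional form the physics suggests and let the tool find the coefficients.  The same idea sketched with SciPy (the model form and data here are hypothetical):

```python
import numpy as np
from scipy.optimize import curve_fit

# Suppose the physics suggests a cubic stiffness term plus a linear term,
# so we fit only those terms instead of a generic polynomial.
def physics_model(x, a, b):
    return a * x**3 + b * x

# Hypothetical "measured" data generated from known parameters a=2.0, b=-1.0
x = np.linspace(-1.0, 1.0, 50)
y = physics_model(x, 2.0, -1.0)

params, _ = curve_fit(physics_model, x, y)
print(params)  # recovers [2.0, -1.0] on this clean data
```

Constraining the fit to physically meaningful terms is also the best defense on the boundaries question: a model built from the right terms has a fighting chance outside the data, while a generic fit usually does not.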

Disclaimer:

Roughly 30 minutes after I finished writing this blog post I stumbled across this post
“Explore Runge’s Polynomial Interpolation Phenomenon” by Cleve Moler.  I would be remiss not to include this link, for three reasons:

  1. Cleve’s post clearly articulates the mathematical underpinnings of interpolation
  2. As the creator of MATLAB, his work is why things are “greatly improved”.
  3. It’s a post about Runge, and since I had a cat named Runga-Kutta (RK) back in college well…

Warning: Bad aerospace engineering pun follows:

When I was in grad school and had that cute cat, I was working on a simple project: a device to determine the terminal angle (stall) for a kite.  For the most part, I worked on this project in my apartment, and Runga loved to attack the kite’s tail, jumping at it and pouncing on it.  That was well and good in the apartment.

Well, in the early testing stages I was flying the kite off of my back deck.  One day while I was flying, RK managed to get out.  In a leap of excitement she jumped onto the kite’s tail (knowing from experience she would land happily on the carpet).  Except, this time, the kite was 8 feet off the ground.  Needless to say, both the cat and the kite crashed to the ground, proving that for this case the Kutta condition did not hold.


Learning about machine learning…

For the last two years, I have written about Model-Based Design in the context of classical control algorithms and software verification processes.  Over the next year, I am broadening the scope of this blog as I investigate how machine learning can be applied to MBD in the context of safety critical systems.

My questions

  1. What is the current state of the art in Machine learning?
    1. How can it be used for controls problems?
    2. When should it be used?  When should it not be used?
    3. Where is it the only solution?
  2. What are building blocks of machine learning?
    1. Starting small, what are the easy things to do?
    2. Hitting the middle, what is a “real” project?
    3. Common “starting” pitfalls
  3. Testing and verification
    1. Can you do black box testing?
    2. How do you test an “open world” model?


So join me on this trip…

I will share my insights, my mistakes and look forward to hearing from you…


Levels of testing…

In English, there is a saying: “where there is smoke, there is fire.”  It is an interesting saying that you could take to mean multiple things:

  1. Smoke will let you find problems from a distance
  2. Smoke will show up before the fire

At The MathWorks, we call our most primitive tests “smoke tests”.  A smoke test is a basic “does it turn on” test.  The idea behind smoke tests is to validate that the unit under test performs in the most basic fashion and is therefore ready to run longer, more complex tests.  For example, a set of smoke tests for a model would look like this:

  1. Perform an update diagram (Order of a minute)
    if pass then…
  2. Check the model configuration parameters  (Order of a minute)
    if pass then…
  3. Check model for unallowed blocks (Simulink Check)  (Order of a minute)
    if pass then…
  4. Run your complex test suite… (Order of a ???)
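The gating logic above can be sketched as a simple staged runner (the stage names are placeholders; a real setup would call the Simulink and CI tooling at each step):

```python
# Each stage is (name, callable returning True on pass); stages run in order,
# and the first failure aborts everything after it - including the long suite.
def run_stages(stages):
    for name, check in stages:
        print(f"Running: {name}")
        if not check():
            print(f"FAILED: {name} - aborting remaining stages")
            return False
    return True

# Hypothetical stand-ins for the real checks
stages = [
    ("update diagram",        lambda: True),
    ("configuration check",   lambda: True),
    ("unallowed-block check", lambda: False),  # pretend this one fails
    ("complex test suite",    lambda: True),   # never reached
]

result = run_stages(stages)
print(result)  # False: the expensive suite was skipped
```

The point is the ordering: each cheap, minute-scale check earns the right to run the next, so a broken model never consumes hours on the expensive suite.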

Why?

For larger, more complex test suites, the setup, execution, and evaluation time can be on the order of hours.  The ability to “abort” these longer runs becomes important as your test suite grows.  You don’t want to take up time on your CI system running tests on models that are not in the correct state.
