No new worlds (MBD domains) to conquer…?

Within the Model-Based Design environment, there is a set of “standard” or “classic” domains in which control engineers work: discrete and continuous time control, event and state based controls, and physical modeling (for closed loop simulation).

Twenty years ago, neural networks, adaptive controls, and fuzzy logic controls entered common use in practical control systems.  Now controls engineering is starting to adopt the tools of deep learning.
(Note: as this post will show, I am just starting to use machine learning and deep learning algorithms.  I am honestly excited about having a new domain to work in!  This post is not intended to explain how to develop a deep learning or machine learning algorithm; rather, it is a reminder that there are always new fields to explore.)


The game changes, the goal remains the same…

The controls community (and well beyond) has started to adopt deep learning algorithms for a simple reason: the problems we are trying to solve now are too complex for the classical control domains. (Note: some care needs to be taken here; don’t use a new tool just because it is new, as there is still a lot of life in classical controls.)  Developing control systems that leverage deep learning methodologies requires a different mindset.

It is like a puppy…

Not really, but a statement that stuck with me is “with deep learning you have to train on the data set and you have to clean the data set.”  The cleaning is much like the traditional cleaning done for regression analysis.  Once the data is cleaned, it is broken into subsets to train the algorithm.  Held-out subsets are then used to validate the behavior of the “trained” algorithm.  Once the required model fidelity is reached, the algorithm can be parameterized and placed in the field.
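As a concrete illustration of that workflow, here is a minimal sketch in Python.  The library choice (scikit-learn), the file name, and the column names are my own assumptions for illustration, not anything from this workflow specifically: clean the raw data, split it into training and validation subsets, train, and check fidelity on the held-out data.

```python
# Minimal sketch of the clean -> split -> train -> validate workflow.
# Library choice (scikit-learn) and all names are illustrative assumptions.
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

# Load raw plant data (hypothetical CSV with sensor inputs and a measured output).
raw = pd.read_csv("plant_log.csv")

# "Clean" the data set, much like you would before a regression analysis:
# drop missing samples and remove obvious outliers.
clean = raw.dropna()
clean = clean[np.abs(clean["output"] - clean["output"].mean())
              <= 3 * clean["output"].std()]

X = clean[["input_1", "input_2", "input_3"]].values
y = clean["output"].values

# Break the data into subsets: one to train the algorithm,
# one held back to validate the "trained" behavior.
X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.3, random_state=0)

# Train a small neural network and check the model fidelity on the
# validation subset before considering it ready for the field.
model = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=2000, random_state=0)
model.fit(X_train, y_train)
print("Validation R^2:", model.score(X_val, y_val))
```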

Again it is like a puppy…

In that small puppies can grow into large dogs.  Because of this, not all of these algorithms will be suitable for deployment onto a hardware device.  In many cases, these algorithms:

  1. Run in a non-real-time environment
  2. Run “off chip” to free up on-chip processing power
  3. Require special / more expensive hardware to run

From a controls engineering perspective, all three of these issues need to be taken into consideration.  Any of them could make real-time application of a deep learning algorithm impractical.  However, selecting the correct framework and optimizing the algorithm (on the most critical output parameters) should enable you to deploy most of these algorithms to silicon.
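One common way to make such an algorithm fit on silicon is to quantize or otherwise shrink the trained network before deployment.  The sketch below uses TensorFlow Lite as one example framework; the placeholder architecture, file name, and conversion options are my own assumptions, not a recommendation from this post.

```python
# Illustrative sketch: shrinking a trained network for an embedded target
# using TensorFlow Lite post-training quantization. Names are hypothetical;
# other embedded deep learning frameworks offer the same idea.
import tensorflow as tf

# Placeholder for a previously trained network (architecture is illustrative).
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(3,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1),
])

# Convert with default optimizations (weight quantization), trading a small
# amount of accuracy on the critical outputs for a smaller, faster model.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

# Write the flat buffer that the on-target runtime would load.
with open("controller_quantized.tflite", "wb") as f:
    f.write(tflite_model)
```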

Final thoughts

As I wrote at the top of this post, I am still learning.  Some of the links I have found useful include:
