When every pico-second counts…

With embedded software, speed(1) often matters; however, function always(2) matters. One common approach to improving execution speed is to take the training wheels off; that is, to remove boundary checking code.(3) Again we have an often-versus-always situation: frequently that is okay, but not always. So how do we make it always?
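To make footnote 3 concrete, here is a minimal C sketch of the kind of boundary check the training wheels represent. The lookup table, its size, and both function names are made up for illustration:

```c
#include <stdint.h>

#define TABLE_SIZE 4
static const int16_t lookup[TABLE_SIZE] = { 10, 20, 30, 40 };

/* Training wheels on: clamp the index before every access. */
int16_t lookup_checked(int32_t idx)
{
    if (idx < 0)           idx = 0;
    if (idx >= TABLE_SIZE) idx = TABLE_SIZE - 1;
    return lookup[idx];
}

/* Training wheels off: the caller must guarantee 0 <= idx < TABLE_SIZE,
   or this reads out of bounds. */
int16_t lookup_unchecked(int32_t idx)
{
    return lookup[idx];
}
```

Removing the two comparisons saves a few cycles per call; the rest of this post is about proving you can afford to.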

The baseline

Before we start trying to make things faster, we need to perform a code profiling step (e.g., execute the code in both the standard and worst-case scenarios).(4) Next, determine whether you are trying to improve the standard or the worst-case operation. As you gather the data, make sure to log both the performance (execution time) and the accuracy.

Training wheels off: are we stable?

Note: this post focuses on keeping the code safe as you make it faster. Speed itself will be covered in other posts…

The first thing to check is whether the system is stable under normal operating conditions (e.g., no overflows, no out-of-bounds errors, no integrator wind-up). For most systems this will be the case.(5) Next, it is time to check the corners.
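As one example of the wind-up case, here is a sketch of a clamped fixed-point integrator; the limits and the widening to 64 bits are illustrative choices, not a prescription:

```c
#include <stdint.h>

#define I_MAX  10000
#define I_MIN -10000

/* Clamped integrator: the saturation is the "training wheel" that
   prevents wind-up while the actuator sits pinned at a limit, and the
   64-bit intermediate prevents a silent 32-bit overflow. */
int32_t integrate(int32_t state, int32_t error)
{
    int64_t next = (int64_t)state + error;
    if (next > I_MAX) next = I_MAX;
    if (next < I_MIN) next = I_MIN;
    return (int32_t)next;
}
```

If logging shows the state never approaches I_MAX or I_MIN under normal operation, this clamp is a candidate for removal; the corner-case tests below tell you whether that removal is actually safe.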

There are three types of corner case tests that should be performed:

  1. Held: The system is driven to the corner case and the inputs are held there.
  2. In/Out: The system cycles in and out of a corner case.
  3. Sweep: Assuming there are multiple corners, the system sweeps between the different corners.
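The three test types above can be sketched as a small C harness. Everything here is hypothetical scaffolding: system_stable stands in for stepping the real system and checking its invariants, and the input range is invented for the example:

```c
/* Hypothetical system under test: steps once and reports stability. */
#define IN_MIN 0
#define IN_MAX 255
static int system_stable(int input)
{
    return input >= IN_MIN && input <= IN_MAX;
}

/* 1. Held: drive the input to the corner and hold it there. */
static int test_held(int corner, int cycles)
{
    for (int i = 0; i < cycles; i++)
        if (!system_stable(corner)) return 0;
    return 1;
}

/* 2. In/Out: cycle between a nominal value and the corner. */
static int test_in_out(int nominal, int corner, int cycles)
{
    for (int i = 0; i < cycles; i++)
        if (!system_stable((i % 2) ? corner : nominal)) return 0;
    return 1;
}

/* 3. Sweep: walk the input between the corners. */
static int test_sweep(int lo, int hi)
{
    for (int v = lo; v <= hi; v++)
        if (!system_stable(v)) return 0;
    return 1;
}
```

In practice each of these would log the same performance and accuracy data as the baseline runs, so the before/after comparison covers the corners too.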


When the system has instabilities, the question becomes “Where do I insert controls while minimally impacting the system performance?” The key here is to trace back to the first offending block.

Often(6) the block that causes the issue is not where the issue appears (i.e., the cause is upstream of the symptom). By modifying that upstream block instead, you can often insert a fix with a lower impact on the system behavior.
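A toy illustration of that trace-back, with both block names and the numbers invented for the example: the failure appears as a divide-by-zero downstream, but the root cause is an upstream bias removal that can emit zero, so the guard goes there:

```c
#include <stdint.h>

/* Downstream block where the failure *appears*: divide-by-zero. */
int32_t downstream_gain(int32_t x)
{
    return 1000 / x;
}

/* Upstream block where the problem *originates*: the bias removal
   can reach zero. Clamping here keeps the guard at the root cause
   and leaves the downstream path untouched. */
int32_t upstream_bias(int32_t raw)
{
    int32_t out = raw - 100;
    if (out < 1) out = 1;   /* guard inserted at the source */
    return out;
}
```

Guarding the divide itself would also work, but it would execute on every downstream call; clamping at the source only pays the cost where the bad value is actually produced.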

Final notes

As I pointed out near the start of this blog, the first step is to collect timing data on the system “as-is.” Once the changes have been made and the new data has been collected, the system as a whole should be evaluated.

  • Is there a significant performance improvement?
  • Have the changes impacted the accuracy of the system?
  • Have the changes made the system harder to test?
  • Have the changes reduced the clarity of the system?

Once you have asked yourself these four questions you can then decide if the modifications should become the new version of the code.


  1. Just like in the movie, you don’t want your “Speed” to end up in a crash.
  2. The difference between often and always is how you prioritize your design. Take care of always first, then often will follow.
  3. Boundary checking code (e.g., looking for data type overflows, out of range pointers, or out of range input data).
  4. Optimization for worst case scenarios may not be the same as the standard operating conditions.
  5. In the subset of cases where the standard operating conditions cause errors, this is generally an indication of a poor design. I would strongly recommend stopping here and refactoring the base design first.
  6. I didn’t realize how often I write the word “often” until I called it out in this blog. But that is often the case, we don’t see things until we look at them.
