3 small tips for better performance

Small, smart habits in modeling add up; a single local efficiency may not seem like much, but compounded over the full design these habits produce surprisingly large improvements.

Data types, bit fields, enums & #defines

Selecting a data type for a variable directly impacts the amount of memory your program requires. In general, choose the smallest data type that covers the variable's range. But that is just the first step in saving memory.
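
As a rough sketch in C terms (the signal names and ranges below are made up for illustration), fixed-width types from <stdint.h> make the size choice explicit:

#include <stdint.h>

/* Illustrative sketch: pick the smallest fixed-width type that covers
 * each signal's range. The names and ranges are assumptions. */

/* Range 0..100 (percent): fits in 8 unsigned bits. */
uint8_t throttlePercent;

/* Range -40..215 (deg C): needs a sign and more than 8 bits. */
int16_t coolantTempC;

/* Defaulting both to a 32-bit int would cost 8 bytes instead of 3
 * (before any padding or alignment the compiler may add). */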

When you have multiple bits of On/Off state data, packing them into a single variable saves memory. For example, five On/Off flags stored as five Booleans take one byte each (5 bytes, or 40 bits, in total), whereas all five fit into a single unsigned 8-bit integer (1 byte) with three bits to spare.
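
A minimal C sketch of the idea, assuming five hypothetical flags:

#include <stdint.h>

/* Sketch: five On/Off flags packed into a single byte using C bit fields.
 * The flag names are invented; uint8_t bit fields are a widely supported
 * compiler extension (strictly portable code would use unsigned int). */
typedef struct {
    uint8_t lampOn      : 1;
    uint8_t doorOpen    : 1;
    uint8_t fanEnabled  : 1;
    uint8_t faultActive : 1;
    uint8_t heaterOn    : 1;
    uint8_t             : 3;   /* unused padding bits */
} StatusFlags;

The same packing can be done by hand with shift-and-mask operations on a single uint8_t; either way, the five flags occupy one byte instead of five.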

The final tip has to do with constant values. If a value needs to be tunable, it has to be declared as a parameter. For non-tunable constants, however, use #defines or enumerated data types; like parameters, they keep "magic number" literals out of the logic, but they are inlined in the generated code and so take no storage.
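
In C terms the distinction looks roughly like this (the gain, constant, and mode names are invented for illustration):

/* Sketch: a tunable value stays a real variable so it can be changed at
 * run time, which costs storage; non-tunable constants are #defines or
 * enum members, which are inlined as literals with no storage. */
float Kp_gain = 0.8f;            /* tunable parameter: occupies memory    */

#define MAX_RETRIES (3u)         /* non-tunable: inlined, no storage      */

typedef enum {                   /* non-tunable mode values, also inlined */
    MODE_OFF     = 0,
    MODE_STANDBY = 1,
    MODE_RUN     = 2
} OperatingMode;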

Exit early

When creating if-elseif-else logic, order the branches by likelihood of occurrence (i.e., put the most likely condition first). The more often the first condition is true, the fewer comparisons have to be evaluated on a typical pass.
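
A small C sketch of the ordering idea, with assumed likelihoods and illustrative state names:

typedef enum { STATE_NOMINAL, STATE_DEGRADED, STATE_FAULT } SystemState;

/* Sketch: order the branches by likelihood (the percentages are assumptions).
 * Because the nominal case is tested first, the rarer comparisons are
 * skipped on most passes. */
static int selectPath(SystemState state)
{
    if (state == STATE_NOMINAL) {          /* e.g. ~95% of passes */
        return 0;
    } else if (state == STATE_DEGRADED) {  /* e.g. ~4% of passes  */
        return 1;
    } else {                               /* rare fault handling */
        return 2;
    }
}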

Additionally, consider inlining calculations into the if/elseif branches when only a subset of the branches actually uses them.

% Wasteful pattern: resOf is computed on every pass, even though it is
% only needed when the first two conditions are false.
resOf = highlyComplexMatheMaticalFunction();
if (lightIsOn)
    % handle the "on" case
elseif (lightIsOff)
    % handle the "off" case
elseif (resOf)
    % resOf is only needed here
end

The “highlyComplexMatheMaticalFunction” in this example is only used in one branch, so calculating it up front for every path is wasted work. In contrast, if a complex calculation is used by multiple branches, pre-calculating the result once is worthwhile.
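
A C rendering of the corrected pattern might look like the following; the variable and function names are carried over from the pseudocode above, and the nonzero test is an assumption:

#include <stdbool.h>

extern bool lightIsOn;
extern bool lightIsOff;
extern double highlyComplexMatheMaticalFunction(void);

/* Sketch: the expensive call is moved inside the only branch that needs
 * it, so the common cases skip it entirely. */
static void update(void)
{
    if (lightIsOn) {
        /* handle the "on" case */
    } else if (lightIsOff) {
        /* handle the "off" case */
    } else if (highlyComplexMatheMaticalFunction() != 0.0) {
        /* the expensive result is computed only when control reaches here */
    }
}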

The final suggestion here is to provide “early exits.” In for/while loops, add a “break” once the desired result has been reached rather than running the loop to completion; in if/then/else logic and functions, an early “return” serves the same purpose.
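
For example, a simple linear search in C can stop as soon as the target is found (the function and names are illustrative):

#include <stddef.h>

/* Sketch: exit the search loop as soon as a match is found instead of
 * scanning the rest of the array. */
static int findFirstMatch(const int *values, size_t count, int target)
{
    int foundIndex = -1;
    for (size_t i = 0; i < count; ++i) {
        if (values[i] == target) {
            foundIndex = (int)i;
            break;              /* early exit: no need to check the rest */
        }
    }
    return foundIndex;
}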

Resample your data

Real-world data is messy: it is often sampled at inconsistent intervals, over-sampled in some regions and under-sampled in others. For lookup-table data, resample onto evenly spaced breakpoints that cover the full range while maintaining the required accuracy. If sections of the data are “flat,” consider widening the spacing in those sections and tightening it where the data changes rapidly.
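
One possible sketch, in C, of resampling irregular table data onto evenly spaced breakpoints with linear interpolation; it assumes the input breakpoints are sorted and strictly increasing, and that at least two output points are requested (all names are illustrative):

#include <stddef.h>

/* Sketch: resample irregularly spaced (x, y) table data onto nOut evenly
 * spaced breakpoints between xIn[0] and xIn[nIn-1] using linear
 * interpolation. Assumes xIn is strictly increasing, nIn >= 2, nOut >= 2. */
static void resampleUniform(const double *xIn, const double *yIn, size_t nIn,
                            double *xOut, double *yOut, size_t nOut)
{
    const double step = (xIn[nIn - 1] - xIn[0]) / (double)(nOut - 1);
    size_t seg = 0;  /* index of the input segment containing the new point */

    for (size_t i = 0; i < nOut; ++i) {
        const double x = xIn[0] + step * (double)i;

        /* Advance to the input segment [xIn[seg], xIn[seg+1]] that holds x. */
        while (seg + 2 < nIn && xIn[seg + 1] < x) {
            ++seg;
        }

        const double t = (x - xIn[seg]) / (xIn[seg + 1] - xIn[seg]);
        xOut[i] = x;
        yOut[i] = yIn[seg] + t * (yIn[seg + 1] - yIn[seg]);
    }
}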
