At some point in the software development cycle, the question of a single- or multi-threaded environment will come up.  With multi-core processors now common in embedded devices, this is an increasingly frequent issue.  Let's take a look at some of the trade-offs between single- and multi-threaded environments.

Single threaded

It just works: the program runs from start to finish in a set order, and you know when each operation happens relative to everything else.  However, it may be slower than it needs to be if some of the operations could take place in parallel.  If you do not have timing constraints, this is a fine option to take.



Multi-threaded

If single threading can be described as "just working," then multi-threading needs to be characterized in a different fashion.  We will start with a basic understanding of threads.  A thread is the smallest unit of execution that an OS can schedule; threads are either event-driven or periodic (temporal).  Threaded operating systems can be either non-preemptive or preemptive.


Packaging your threads

Each thread should exhibit a high degree of independence from the others; that is, the operations of "Thread A" should depend as little as possible on data from "Thread B."  The key word here, of course, is "should."  In the end the threads will need to exchange data, and that is one of the complications of a multi-threaded environment.

Data locking and synchronization

In a multi-threaded environment, a lock (or mutex, for "mutual exclusion") is a mechanism for ensuring that a memory resource is not in use by multiple threads at the same time.  For example, if you have a shared memory space, you do not want two threads writing to it at the same time (or one reading while the other is writing).

Locks provide a way of synchronizing data between threads; however, they slow the program down, since a thread cannot continue until the lock it is waiting on is released.  In some instances, when the operation of one thread depends on the outputs of another and the locking and data synchronization are not handled correctly, a race condition can occur.

Debugging multithreaded environments

Bugs in multithreaded programs generally occur when the actual order of execution does not match the intended order of execution.  This can be due to:

  • A thread failing to start
  • A data synchronization failing
  • A thread taking longer than expected and preventing another thread from running

Image result for debugging multithreaded applications

Using a debugger to "walk through" the code is often required to get to the root cause of the issue.  However, if the bug is due to an overrun, the debugger may not catch it, because in debug mode the code is not subject to its normal timing limitations.  In that case, either a trace log or even an oscilloscope can be employed.


