We define a thread of execution to mean an abstract ``program counter'', i.e. a mechanism to evaluate expressions and follow the control flow of a program, together with a name space of variables and code segments in an application. Without getting too technical, a thread represents the activity of one processor (or process) in a typical MP system. For example, in a conventional uniprocessor environment each program executes with only one thread, i.e. the one that starts with the invocation of the main() function and sequentially steps through the code. That thread can see and modify the global variables declared outside of main() as well as the variables on the procedure call stack as it executes.
There are many different ways in which multiple threads can be used to speed the execution of a program. In a master/worker ``multi-threaded'' execution model, we use one control thread, called the main, master, or scheduler thread, which starts with main(), and a set of ``worker'' threads which help out on the parallel part of the program. It is the job of the master to signal the workers where to begin execution and where to find the data they should work on. This is accomplished by identifying loops in the program that can be executed in parallel and then distributing iterations to the worker threads as tasks. The rules for how name spaces of variables and data interact between different threads vary from model to model. In some systems, there is a shared name space of variables and all threads can see and modify these variables and, of course, suffer the consequences when two threads try to modify the same variable without proper synchronization.
Another model is the Single Program Multiple Data, or SPMD, mode of execution. In this case, a set of threads each start executing the program from the beginning and follow it all the way through. The only thing that differs in the execution from one thread to another is that different threads operate on different data sets and different subsets of the parallel loop iteration space. In the SPMD model it is assumed that the name spaces of variables are all distinct. That is, if X is a name in a program, then each thread sees a different X. Consequently, sequential work is duplicated but parallel work is shared, and some sort of communication protocol is used to share information.
In a pC++ program, there is a distinguished sequential main thread, invoked from a function called Processor_Main() (its name in Version 1.0; it will be known as main() in future versions). This thread is the main control thread and the scheduler of parallel operations. To define additional computational resources beyond the main thread, the pC++ programmer creates processor objects, which define a set of threads.
Parallel tasks are defined in terms of the threads associated with a processor object, which specifies both a set of threads and their binding to processors, i.e. physical computing resources.
One declares an array of processor objects with a statement of the form
Processors P(m,n)

This defines P to be a two-dimensional array of ``virtual processor objects'', i.e. execution threads, of size m by n. In general, processor objects can be arrays of any dimension and any size. However, there is a serious and disappointing limit on what is possible in Version 1.0 of pC++. As mentioned above, current compiler, operating system, and hardware technology allows only one virtual processor object per physical processor, and this set of threads is declared as
Processors P;

The reason for this limitation is that on current distributed memory machines there is no support for communication mechanisms that operate at anything other than the physical process level. In general, software multitasking or special hardware can allow more than one thread of execution per physical processor, and future versions of pC++ will support this extension.
To make a thread operate in a C/C++ world, one also needs a way to define program variables that are visible to a thread as well as variables that may be modified by that thread. Note that we make a distinction between variables that are visible and those that can be modified. Just as a const variable in C++ may be initialized but not modified, a global (file static) variable in a pC++ program can be seen and modified by the main thread, but only seen by the processor object threads. In other words, if a variable is a global static or external variable for the main thread, it is a const variable for the processor object threads. The reasons for this limitation will be described later.
In addition to rules for variable visibility for virtual processor objects, we need a way for the main thread to identify and invoke sections of code for the worker threads to execute. This is accomplished through Thread Environment Classes.