A mutual exclusion algorithm ensures that while one task executes its critical section, no other concurrently executing task can enter a competing critical section.
One way to synchronize access to shared resources is to use a client-server model, in which a central entity called a resource server is responsible for synchronization: tasks send their access requests to the resource server, which grants access to one requestor at a time.
While this model simplifies resource synchronization, the resource server is a bottleneck. Synchronization primitives, such as semaphores and mutexes, and other methods introduced in a later section of this chapter, allow developers to implement complex mutual exclusion algorithms. These algorithms in turn allow dynamic coordination among competing tasks without intervention from a third party.
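As a minimal sketch of the latter approach, the following fragment uses POSIX threads (an RTOS kernel would provide its own equivalent calls) to show competing tasks coordinating through a mutex alone, with no resource server in the access path; the shared counter is a hypothetical resource used only for illustration.

#include <pthread.h>

/* Hypothetical shared resource: a counter updated by several tasks. */
static long shared_count = 0;
static pthread_mutex_t count_lock = PTHREAD_MUTEX_INITIALIZER;

/* Each task calls this function to enter its critical section. The
 * mutex guarantees that at most one task updates shared_count at a
 * time; the tasks coordinate directly, without a third party. */
void increment_shared_count(void)
{
    pthread_mutex_lock(&count_lock);
    shared_count++;                   /* critical section */
    pthread_mutex_unlock(&count_lock);
}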
15.2.2 Activity Synchronization
In general, a task must synchronize its activity with other tasks to execute a multithreaded program properly.
One representative of activity synchronization methods is barrier synchronization. Often a larger computation is divided among a group of tasks, each of which produces a partial result.
The point at which the partial results are collected and the final computation is carried out is a barrier: no task may proceed past this point until every participating task has reached it.
Barrier synchronization comprises three actions:
· a task posts its arrival at the barrier,
· the task waits for other participating tasks to reach the barrier, and
· the task receives notification to proceed beyond the barrier.
A later section of this chapter shows how to implement barrier synchronization using mutex locks and condition variables.
As shown in Figure 15.2, a group of five tasks participates in barrier synchronization. Tasks in the group complete their partial execution and reach the barrier at various times; however, each task in the group must wait at the barrier until all other tasks have reached it. The last task to reach the barrier (in this example, task T5) broadcasts a notification to the other tasks, and all tasks then cross the barrier at the same time, conceptually. We say 'conceptually' because in a uniprocessor environment only one task can execute at any given time: even though all five tasks have crossed the barrier and may continue execution, the task with the highest priority executes next.
Figure 15.2: Visualization of barrier synchronization.
Another representative of activity synchronization mechanisms is rendezvous synchronization, a synchronization point between two tasks.
In rendezvous synchronization, a synchronization and communication point called an entry is defined: one task issues an entry call on another task, and the call completes only when the second task accepts it.
Rendezvous synchronization is similar to synchronization using event registers, which Chapter 8 introduces, in that both are synchronous. The issuer of the entry call is blocked if that call is not yet accepted; similarly, the task that accepts an entry call is blocked when no other task has issued one. Rendezvous differs from the event register in that bidirectional data movement (input parameters and output results) is possible.
A derivative form of rendezvous synchronization, called simple rendezvous, uses kernel primitives such as binary semaphores instead of entry calls to achieve synchronization, and it can take place without passing any data.
Figure 15.3: Simple rendezvous without data passing.
Both binary semaphores are initialized to 0. When task #1 reaches the rendezvous, it gives semaphore #2 and then gets semaphore #1. When task #2 reaches the rendezvous, it gives semaphore #1 and then gets semaphore #2. Task #1 must wait on semaphore #1 if task #2 has not yet arrived, and vice versa, thus achieving rendezvous synchronization.
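A minimal sketch of this simple rendezvous in C, using POSIX semaphores (sem_post and sem_wait standing in for the generic give and get operations; the function names are illustrative only):

#include <semaphore.h>

static sem_t sem1;   /* semaphore #1: given by task #2, taken by task #1 */
static sem_t sem2;   /* semaphore #2: given by task #1, taken by task #2 */

/* Called once at startup: both semaphores start at 0 ("empty"). */
void rendezvous_init(void)
{
    sem_init(&sem1, 0, 0);
    sem_init(&sem2, 0, 0);
}

void task1_rendezvous_point(void)
{
    sem_post(&sem2);   /* give semaphore #2: announce task #1's arrival */
    sem_wait(&sem1);   /* get semaphore #1: block until task #2 arrives */
    /* both tasks have now passed the rendezvous point */
}

void task2_rendezvous_point(void)
{
    sem_post(&sem1);   /* give semaphore #1: announce task #2's arrival */
    sem_wait(&sem2);   /* get semaphore #2: block until task #1 arrives */
    /* both tasks have now passed the rendezvous point */
}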
15.2.3 Implementing Barriers
Barrier synchronization is used for activity synchronization. Listing 15.1 shows how to implement a barrier-synchronization mechanism using a mutex and a condition variable.
Listing 15.1: Pseudo code for barrier synchronization.
typedef struct {
    mutex_t br_lock;      /* guarding mutex */
    cond_t  br_cond;      /* condition variable */
    int     br_count;     /* number of tasks currently at the barrier */
    int     br_n_threads; /* number of tasks participating in the barrier synchronization */
} barrier_t;

void barrier(barrier_t *br)
{
    mutex_lock(&br->br_lock);
    br->br_count++;
    if (br->br_count < br->br_n_threads)
        cond_wait(&br->br_cond, &br->br_lock);
    else {
        br->br_count = 0;
        cond_broadcast(&br->br_cond);
    }
    mutex_unlock(&br->br_lock);
}
Each participating task invokes the function barrier for barrier synchronization. The guarding mutex for br_count ensures that only one task at a time updates the arrival count.
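As a usage sketch, written in the same pseudocode style as Listing 15.1, each of the five participating tasks from Figure 15.2 might be structured as follows; mutex_init, cond_init, compute_partial_result, and use_combined_result are placeholders for the kernel's actual creation calls and for the application's own computation.

#define NUM_WORKERS 5

static barrier_t br;   /* shared by all participating tasks */

/* Performed once at startup, before any worker task runs. */
void barrier_setup(void)
{
    mutex_init(&br.br_lock);
    cond_init(&br.br_cond);
    br.br_count     = 0;
    br.br_n_threads = NUM_WORKERS;
}

/* Body executed by each of the five participating tasks. */
void worker_task(void *arg)
{
    compute_partial_result(arg);  /* per-task partial computation */
    barrier(&br);                 /* post arrival, wait for the others */
    /* execution resumes here only after all NUM_WORKERS tasks arrive */
    use_combined_result(arg);     /* post-barrier final computation */
}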