The one-way condition creation in this design implies either that there are more writer tasks than reader tasks, or that messages are produced faster than they are consumed.
15.7.5 Sending High Priority Data between Tasks
In many situations, the communication between tasks can carry urgent data. Urgent data must be processed in a timely fashion and must be distinguished from normal data. This process is accomplished by using signals and an urgent data message queue, as shown in Figure 15.21. For the sake of this example, the reader should assume the message queues shown in Figure 15.21 do not support a priority message delivery mechanism.
Figure 15.21: Using signals for urgent data communication.
As Chapter 8 describes, one task uses a signal to notify another of the arrival of urgent data. When the signal arrives, the receiving task diverts from its normal execution path and goes directly to the urgent data message queue. The task processes messages from this queue ahead of messages from other queues because the urgent data queue has the highest priority. To receive the signal, the task must install an asynchronous signal handler for it. A signal is used for urgent data notification because the task otherwise does not know that urgent data has arrived unless it happens to be waiting on that message queue already.
The producer of the urgent data, which can be either a task or an ISR, inserts the urgent messages into the predefined urgent data message queue. The source signals the recipient of the urgent data. The signal interrupts the normal execution path of the recipient task, and the installed signal handler is invoked. Inside this signal handler, urgent messages are read and processed.
In this design pattern, urgent data is maintained in a separate message queue even though most RTOS-supplied message queues support priority messages. With a separate message queue for urgent data, the receiver can control how much urgent data it is willing to accept and process, i.e., it has a flow control mechanism.
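To make the pattern concrete, the following sketch shows one possible shape of the producer and consumer code. The names used here (msgq_t, message_t, task_t, msgq_send, msgq_receive, install_signal_handler, send_signal, SIG_URGENT_DATA, NO_WAIT, WAIT_FOREVER, OK, and the process_* routines) are hypothetical and stand in for whatever the RTOS and application actually provide; the sketch illustrates the structure of the pattern, not a particular API.

/* A minimal sketch of the urgent data pattern under hypothetical RTOS names. */

#define SIG_URGENT_DATA  17                /* hypothetical signal number */

static msgq_t normal_queue;                /* carries normal data */
static msgq_t urgent_queue;                /* carries urgent data only */

/* Asynchronous signal handler installed by the receiving task. */
static void urgent_data_handler(int sig)
{
    message_t msg;

    /* Drain the urgent data queue ahead of any normal messages. */
    while (msgq_receive(&urgent_queue, &msg, NO_WAIT) == OK)
        process_urgent_message(&msg);
}

/* Producer side (task or ISR): post the message, then signal the recipient. */
void post_urgent(task_t recipient, message_t *msg)
{
    msgq_send(&urgent_queue, msg);
    send_signal(recipient, SIG_URGENT_DATA);
}

/* Consumer side: install the handler once, then block on the normal queue. */
void consumer_task(void)
{
    message_t msg;

    install_signal_handler(SIG_URGENT_DATA, urgent_data_handler);

    for (;;) {
        msgq_receive(&normal_queue, &msg, WAIT_FOREVER);
        process_normal_message(&msg);
    }
}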
15.7.6 Implementing Reader-Writer Locks Using Condition Variables
This section presents another example of the use of condition variables. The code shown in Listings 15.7, 15.8, and 15.9 is written in the C programming language.
Consider a shared memory region that both readers and writers can access. The example reader-writer lock design has the following properties: multiple readers can read the memory content simultaneously, but only one writer is allowed to write data into the shared memory at any one time. The writer can begin writing to the shared memory only when that memory region is not being accessed by any task (reader or writer). Readers take precedence over writers, that is, readers have priority over writers in accessing the shared memory region.
The implementation that follows can be adapted to other types of synchronization scenarios when prioritized access to shared resources is desired, as shown in Listings 15.7, 15.8, and 15.9.
The following assumptions are made in the program listings:
1. The mutex_t data type represents a mutex object and condvar_t represents a condition variable object; both are provided by the RTOS.
2. lock_mutex, unlock_mutex, wait_cond, signal_cond, and broadcast_cond are functions provided by the RTOS. lock_mutex and unlock_mutex operate on the mutex object. wait_cond, signal_cond, and broadcast_cond operate on the condition variable object.
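Restated in C, the assumed primitives have prototypes along the following lines; the void return types are an assumption made for this example, and an actual RTOS may return status codes instead.

/* Assumed RTOS primitives, as used in Listings 15.7 through 15.9. */
void lock_mutex(mutex_t *mutex);
void unlock_mutex(mutex_t *mutex);

/* Atomically releases the mutex, blocks on the condition variable, and
   reacquires the mutex before returning to the caller. */
void wait_cond(condvar_t *condvar, mutex_t *mutex);

/* Wake one waiter, or wake all waiters, on the condition variable. */
void signal_cond(condvar_t *condvar, mutex_t *mutex);
void broadcast_cond(condvar_t *condvar, mutex_t *mutex);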
Listing 15.7 shows the data structure needed to implement the reader-writer lock.
Listing 15.7: Data structure for implementing reader-writer locks.
typedef struct {
    mutex_t   guard_mutex;     /* protects the fields below */
    condvar_t read_condvar;    /* readers wait here while a writer is active */
    condvar_t write_condvar;   /* writers wait here until the region is free */
    int       rw_count;        /* number of active readers; -1 indicates a writer is active */
    int       r_waiting;       /* nonzero when readers are waiting */
} rwlock_t;
Listing 15.8 shows the code that the writer task invokes to acquire and to release the lock.
Listing 15.8: Code called by the writer task to acquire and release locks.
void acquire_write(rwlock_t *rwlock) {
    lock_mutex(&rwlock->guard_mutex);

    /* Wait until no reader or writer is active. */
    while (rwlock->rw_count != 0)
        wait_cond(&rwlock->write_condvar, &rwlock->guard_mutex);

    rwlock->rw_count = -1;          /* mark a writer as active */
    unlock_mutex(&rwlock->guard_mutex);
}

void release_write(rwlock_t *rwlock) {
    lock_mutex(&rwlock->guard_mutex);
    rwlock->rw_count = 0;           /* the writer is finished */

    /* Readers have priority: wake all waiting readers if any exist;
       otherwise, wake one waiting writer. */
    if (rwlock->r_waiting)
        broadcast_cond(&rwlock->read_condvar, &rwlock->guard_mutex);
    else
        signal_cond(&rwlock->write_condvar, &rwlock->guard_mutex);

    unlock_mutex(&rwlock->guard_mutex);
}
Listing 15.9 shows the code that the reader task invokes to acquire and release the lock.
Listing 15.9: Code called by the reader task to acquire and release locks.
void acquire_read(rwlock_t *rwlock) {
    lock_mutex(&rwlock->guard_mutex);
    rwlock->r_waiting++;

    /* Wait only while a writer is active; waiting writers do not block readers. */
    while (rwlock->rw_count < 0)
        wait_cond(&rwlock->read_condvar, &rwlock->guard_mutex);

    rwlock->r_waiting = 0;          /* the waiting readers have been released */
    rwlock->rw_count++;             /* one more active reader */
    unlock_mutex(&rwlock->guard_mutex);
}

void release_read(rwlock_t *rwlock) {
    lock_mutex(&rwlock->guard_mutex);
    rwlock->rw_count--;             /* one fewer active reader */

    /* The last reader to leave wakes a waiting writer, if any. */
    if (rwlock->rw_count == 0)
        signal_cond(&rwlock->write_condvar, &rwlock->guard_mutex);

    unlock_mutex(&rwlock->guard_mutex);
}
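The sketch below shows how a reader task and a writer task might use the lock. The rwlock_init helper, the shared_lock and shared_data variables, and the task bodies are hypothetical illustrations; the actual creation calls for the mutex and the two condition variables depend on the RTOS.

/* Hypothetical initialization helper; the mutex and condition variable
   creation calls depend on the RTOS and are omitted here. */
void rwlock_init(rwlock_t *rwlock)
{
    /* create rwlock->guard_mutex, rwlock->read_condvar, and
       rwlock->write_condvar with the RTOS creation calls */
    rwlock->rw_count  = 0;
    rwlock->r_waiting = 0;
}

static rwlock_t shared_lock;
static int      shared_data;       /* the protected shared memory region */

void reader_task(void)
{
    int value;

    acquire_read(&shared_lock);
    value = shared_data;            /* many readers may execute here concurrently */
    release_read(&shared_lock);
    /* ... use value ... */
}

void writer_task(void)
{
    acquire_write(&shared_lock);
    shared_data++;                  /* only one writer, and no readers, here */
    release_write(&shared_lock);
}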