recursion within a mutex can be automatically built into the mutex, or it might need to be enabled explicitly when the mutex is first created.
The mutex with recursive locking is called a recursive mutex.
As shown in Figure 6.4, when a recursive mutex is first locked, the kernel registers the task that locked it as the owner of the mutex. On successive attempts, the kernel uses an internal lock count associated with the mutex to track the number of times that the task currently owning the mutex has recursively acquired it. To properly unlock the mutex, it must be released the same number of times.
In this example, a lock count tracks the two states of the mutex (0 for unlocked and 1 for locked), as well as the number of times it has been recursively locked (lock count greater than 1).
Do not confuse the counting facility for a locked mutex with the counting facility for a counting semaphore. The count used for the mutex tracks the recursion depth: the number of times the owning task has locked the mutex without a matching unlock. The count used for the counting semaphore tracks the number of tokens currently available; it is decremented when any task acquires the semaphore and incremented when any task releases it. Additionally, the count for the mutex is always unbounded, which allows an arbitrary number of recursive acquisitions.
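The kernel calls discussed in this chapter are generic; as one concrete stand-in, POSIX threads provide the same behavior through the PTHREAD_MUTEX_RECURSIVE attribute. The following is a minimal sketch, assuming a POSIX system, of recursion being enabled explicitly when the mutex is created and of the lock count requiring a matching unlock for every lock:

```c
#include <pthread.h>

/* A recursive mutex created explicitly with the PTHREAD_MUTEX_RECURSIVE
 * attribute. The owning thread may lock it again; the internal lock
 * count must return to zero before any other thread can acquire it. */
static pthread_mutex_t lock;

static void inner(void)
{
    pthread_mutex_lock(&lock);    /* lock count: 2 (recursive acquisition) */
    /* ... also uses the protected resource ... */
    pthread_mutex_unlock(&lock);  /* lock count: 1 -- still owned          */
}

static void outer(void)
{
    pthread_mutex_lock(&lock);    /* lock count: 1, caller becomes owner   */
    inner();
    pthread_mutex_unlock(&lock);  /* lock count: 0 -- mutex released       */
}

int main(void)
{
    pthread_mutexattr_t attr;

    pthread_mutexattr_init(&attr);
    pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_RECURSIVE);
    pthread_mutex_init(&lock, &attr);
    pthread_mutexattr_destroy(&attr);

    outer();
    pthread_mutex_destroy(&lock);
    return 0;
}
```

Without the recursive attribute, the second lock in inner() would deadlock or fail, which is exactly the situation recursive locking is meant to handle.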
Some mutex implementations also have built-in task deletion safety, which prevents a task from being deleted while it owns the mutex; where supported, this capability is typically enabled when the mutex is created.
Priority inversion occurs when a higher priority task is blocked waiting for a resource held by a lower priority task, effectively forcing the higher priority task to progress at the lower task's pace. It commonly happens in poorly designed real-time embedded applications.
Enabling certain protocols that are typically built into mutexes can help avoid priority inversion. Two common protocols, both shown in the sketch after this list, are:
· priority inheritance protocol - ensures that the priority of the lower priority task that has acquired the mutex is raised to that of the higher priority task requesting the mutex when inversion happens. The task's priority is lowered back to its original value after it releases the mutex that the higher priority task requires.
· ceiling priority protocol - ensures that the priority of the task acquiring the mutex is automatically raised, for as long as it holds the mutex, to the highest priority of all possible tasks that might request that mutex (the ceiling priority). When the mutex is released, the task's priority is lowered back to its original value.
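POSIX threads expose both protocols through mutex attributes and serve here only as one possible illustration; the ceiling value of 90 below is an assumed value that must be at least as high as the priority of any task that can request the mutex. A minimal sketch, assuming a POSIX system that supports the priority-protocol options:

```c
#include <pthread.h>

/* Two ways to enable priority-inversion avoidance on a POSIX mutex.
 * The protocol is chosen once, when the mutex is created. */

static pthread_mutex_t inherit_lock;   /* priority inheritance protocol */
static pthread_mutex_t ceiling_lock;   /* ceiling priority protocol     */

int create_locks(void)
{
    pthread_mutexattr_t attr;
    int err;

    /* Priority inheritance: a low-priority owner is boosted to the
     * priority of the highest-priority waiter only when contention
     * actually occurs. */
    pthread_mutexattr_init(&attr);
    pthread_mutexattr_setprotocol(&attr, PTHREAD_PRIO_INHERIT);
    err = pthread_mutex_init(&inherit_lock, &attr);
    pthread_mutexattr_destroy(&attr);
    if (err != 0)
        return err;

    /* Ceiling priority: every owner runs at the ceiling (90 here, an
     * assumed value) for as long as it holds the mutex. */
    pthread_mutexattr_init(&attr);
    pthread_mutexattr_setprotocol(&attr, PTHREAD_PRIO_PROTECT);
    pthread_mutexattr_setprioceiling(&attr, 90);
    err = pthread_mutex_init(&ceiling_lock, &attr);
    pthread_mutexattr_destroy(&attr);
    return err;
}
```

Priority inheritance boosts the owner only when contention actually occurs, while the ceiling protocol pays the boost on every acquisition but is simpler to analyze.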
Chapter 16 discusses priority inversion and both the priority inheritance and ceiling priority protocols in more detail. For now, remember that a mutex supports ownership, recursive locking, task deletion safety, and priority inversion avoidance protocols; binary and counting semaphores do not.
6.3 Typical Semaphore Operations
Typical operations that developers might want to perform with the semaphores in an application include:
· creating and deleting semaphores,
· acquiring and releasing semaphores,
· clearing a semaphore’s task-waiting list, and
· getting semaphore information.
Each operation is discussed next.
6.3.1 Creating and Deleting Semaphores
Table 6.1 identifies the operations used to create and delete semaphores.
Table 6.1: Semaphore creation and deletion operations.
| Operation | Description |
|---|---|
| Create | Creates a semaphore |
| Delete | Deletes a semaphore |
Several things must be considered, however, when creating and deleting semaphores. If a kernel supports different types of semaphores, different calls might be used for creating binary, counting, and mutex semaphores, as follows (a creation sketch appears after the list):
· binary - specify the initial semaphore state and the task-waiting order.
· counting - specify the initial semaphore count and the task-waiting order.
· mutex - specify the task-waiting order and enable task deletion safety, recursion, and priority-inversion avoidance protocols, if supported.
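As one concrete illustration, the sketch below uses the POSIX semaphore and mutex APIs as stand-ins for these creation calls. The initial count of 5 is an arbitrary assumption, and POSIX does not expose a task-waiting-order option or task deletion safety, so those differences are noted only in comments.

```c
#include <pthread.h>
#include <semaphore.h>

/* Creating the three semaphore types with POSIX APIs. Options such as
 * task-waiting order (FIFO vs. priority) and task deletion safety are
 * fixed or absent here, whereas the kernels described above let the
 * creator choose them. */

static sem_t           binary_sem;    /* binary: initial state = available */
static sem_t           counting_sem;  /* counting: initial count = 5       */
static pthread_mutex_t mutex_sem;     /* mutex: recursion + inheritance    */

int create_semaphores(void)
{
    pthread_mutexattr_t attr;
    int err;

    /* Binary semaphore: an initial value of 1 means "available". */
    if (sem_init(&binary_sem, 0, 1) != 0)
        return -1;

    /* Counting semaphore: initial count of 5 tokens (assumed value). */
    if (sem_init(&counting_sem, 0, 5) != 0)
        return -1;

    /* Mutex: enable recursive locking and the priority inheritance
     * protocol at creation time. */
    pthread_mutexattr_init(&attr);
    pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_RECURSIVE);
    pthread_mutexattr_setprotocol(&attr, PTHREAD_PRIO_INHERIT);
    err = pthread_mutex_init(&mutex_sem, &attr);
    pthread_mutexattr_destroy(&attr);
    return err;
}
```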
Semaphores can be deleted from within any task by specifying their IDs and making semaphore-deletion calls. Deleting a semaphore is not the same as releasing it. When a semaphore is deleted, blocked tasks in its task-waiting list are unblocked and moved either to the ready state or to the running state (if the unblocked task has the highest priority). Any tasks, however, that try to acquire the deleted semaphore return with an error because the semaphore no longer exists.
Additionally, do not delete a semaphore while it is in use (e.g., acquired). This action might result in data corruption or other serious problems if the semaphore is protecting a shared resource or a critical section of code.
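POSIX, used here only as a stand-in, does not unblock waiters with an error the way the kernels described above can; destroying a semaphore that tasks are still blocked on is simply undefined behavior. The sketch below therefore shows the conservative pattern implied by the warning: delete (destroy) the semaphore only after every task that uses it has finished with it.

```c
#include <pthread.h>
#include <semaphore.h>

static sem_t token;

static void *worker(void *arg)
{
    (void)arg;
    sem_wait(&token);          /* acquire the semaphore        */
    /* ... use the shared resource ... */
    sem_post(&token);          /* release the semaphore        */
    return NULL;
}

int main(void)
{
    pthread_t tid;

    sem_init(&token, 0, 1);    /* binary semaphore, initially available */
    pthread_create(&tid, NULL, worker, NULL);

    /* Destroy the semaphore only after every task that uses it has
     * finished: destroying it while a thread is blocked on it is
     * undefined behavior in POSIX, mirroring the warning above. */
    pthread_join(tid, NULL);
    sem_destroy(&token);
    return 0;
}
```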