The remainder of this section demonstrates how to implement a blocking memory allocation function.
As shown in Figure 13.7, a blocking memory allocation function can be implemented using both a counting semaphore and a mutex lock. These synchronization primitives are created for each memory pool and are kept in its control structure. The counting semaphore is initialized with the total number of available memory blocks when the memory pool is created. Memory blocks are allocated and freed from the head of the free-blocks list.
Figure 13.7: Implementing a blocking allocation function using a mutex and a counting semaphore.
Multiple tasks can access the free-blocks list of the memory pool, and the control structure is updated each time an allocation or a deallocation occurs. Therefore, a mutex lock is used to guarantee a task exclusive access to both the free-blocks list and the control structure. A counting semaphore is used so that a task can wait for a block to become available, acquire the block, and then continue its execution.
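The control structure might look like the following C sketch. The type and field names (mem_pool_t, block_header_t, and so on) are hypothetical, and POSIX primitives (pthread_mutex_t, sem_t) stand in for an RTOS's native mutex and counting semaphore.

#include <stddef.h>
#include <pthread.h>
#include <semaphore.h>

/* Hypothetical per-pool control structure. */
typedef struct block_header {
    struct block_header *next;          /* link in the free-blocks list */
} block_header_t;

typedef struct mem_pool {
    block_header_t *free_list;          /* head of the free-blocks list */
    size_t          block_size;         /* size of each fixed-size block */
    sem_t           blocks_available;   /* counting semaphore, initialized to block count */
    pthread_mutex_t lock;               /* mutex guarding the list and this structure */
} mem_pool_t;

/* Initialize a pool over caller-provided storage holding n blocks. */
int mem_pool_init(mem_pool_t *pool, void *storage, size_t block_size, size_t n)
{
    pool->free_list  = NULL;
    pool->block_size = block_size;      /* must be at least sizeof(block_header_t) */
    for (size_t i = 0; i < n; i++) {    /* thread every block onto the free list */
        block_header_t *b = (block_header_t *)((char *)storage + i * block_size);
        b->next = pool->free_list;
        pool->free_list = b;
    }
    if (sem_init(&pool->blocks_available, 0, (unsigned)n) != 0)
        return -1;                      /* semaphore count = total free blocks */
    return pthread_mutex_init(&pool->lock, NULL);
}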
For an allocation request to succeed, the task must first successfully acquire the counting semaphore, followed by a successful acquisition of the mutex lock.
The successful acquisition of the counting semaphore reserves one of the available blocks in the pool. A task first tries to acquire the counting semaphore. If no blocks are available, the task blocks on the counting semaphore, assuming the task is prepared to wait for one. If a block is available, the task acquires the counting semaphore successfully, and the semaphore's count is now one less than it was. At this point, the task has reserved an available block but has yet to obtain it.
Next, the task tries to lock the mutex. If another task is currently taking a block out of the memory pool or freeing a block back into it, the mutex is locked, and the task blocks until the mutex is unlocked. After locking the mutex, the task retrieves the block from the free-blocks list.
The counting semaphore is released when the task finishes using the memory block.
The pseudo code for memory allocation using a counting semaphore and mutex lock is provided in Listing 13.1.
Listing 13.1: Pseudo code for memory allocation.
Acquire(Counting_Semaphore)
Lock(mutex)
Retrieve the memory block from the pool
Unlock(mutex)
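Continuing the earlier sketch, a C version of Listing 13.1 might look like the following. The function name is hypothetical, and the POSIX calls sem_wait and pthread_mutex_lock stand in for the Acquire and Lock primitives of the pseudo code.

/* Blocking allocation: acquire the semaphore first to reserve a
 * block, then take the mutex to retrieve it from the free list. */
void *mem_pool_alloc(mem_pool_t *pool)
{
    sem_wait(&pool->blocks_available);     /* Acquire(Counting_Semaphore) */
    pthread_mutex_lock(&pool->lock);       /* Lock(mutex) */
    block_header_t *b = pool->free_list;   /* non-NULL: the semaphore reserved a block */
    pool->free_list = b->next;             /* retrieve block from head of list */
    pthread_mutex_unlock(&pool->lock);     /* Unlock(mutex) */
    return b;
}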
The pseudo code for memory deallocation using a mutex lock and counting semaphore is provided in Listing 13.2.
Listing 13.2: Pseudo code for memory deallocation.
Lock(mutex)
Release the memory block back into the pool
Unlock(mutex)
Release(Counting_Semaphore)
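A matching C version of Listing 13.2, under the same assumptions:

/* Deallocation: return the block under the mutex, then release
 * the counting semaphore to wake one waiting task, if any. */
void mem_pool_free(mem_pool_t *pool, void *block)
{
    block_header_t *b = (block_header_t *)block;
    pthread_mutex_lock(&pool->lock);       /* Lock(mutex) */
    b->next = pool->free_list;             /* put block back at head of free list */
    pool->free_list = b;
    pthread_mutex_unlock(&pool->lock);     /* Unlock(mutex) */
    sem_post(&pool->blocks_available);     /* Release(Counting_Semaphore) */
}

Note that the semaphore is released last, so by the time a waiting task is awakened, the block is already back on the free list.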
The implementation shown in Listings 13.1 and 13.2 makes the memory allocation and deallocation functions safe for multitasking. Deploying the counting semaphore and the mutex lock eliminates the priority inversion problem when blocking memory allocation is enabled with these synchronization primitives. Chapter 6 discusses semaphores and mutexes. Chapter 16 discusses priority inversion.
13.5 Hardware Memory Management Units
Thus far, the discussion on memory management has focused on the management of physical memory. Another topic is the management of virtual memory. Virtual memory is a technique in which mass storage (for example, a hard disk) is made to appear to an application as if the mass storage were RAM. The virtual memory address space (also called virtual address space) can therefore be larger than the physical memory actually present. A hardware component, the memory management unit (MMU), makes this possible by translating virtual addresses into physical addresses.
The address translation function differs from one MMU design to another. Many commercial RTOSes do not support virtual addressing, so this chapter does not discuss address translation. Instead, the chapter discusses the MMU's memory protection feature, as many RTOSes do support it.
If an MMU is enabled on an embedded system, the physical memory is typically divided into pages. A set of attributes is associated with each page. These attributes can include:
· whether the page contains code (i.e., executable instructions) or data,
· whether the page is readable, writable, executable, or a combination of these, and
· whether the page can be accessed when the CPU is not in privileged execution mode, accessed only when the CPU is in privileged mode, or both.
All memory accesses go through the MMU when it is enabled. Therefore, the hardware enforces memory access according to the page attributes. For example, if a task tries to write to a memory region that allows only read access, the operation is illegal, and the MMU does not allow it; the attempted write triggers a memory access exception instead.
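To make the enforcement concrete, here is a minimal C sketch. The attribute names, the flat bitmask encoding, and the access_permitted function are all hypothetical; a real MMU holds such attributes in hardware page-table entries and performs this check in hardware on every access.

/* Hypothetical bitmask encoding of the per-page attributes listed above. */
enum page_attr {
    PAGE_CODE      = 1u << 0,   /* page contains executable instructions */
    PAGE_DATA      = 1u << 1,   /* page contains data */
    PAGE_READ      = 1u << 2,   /* page is readable */
    PAGE_WRITE     = 1u << 3,   /* page is writable */
    PAGE_EXEC      = 1u << 4,   /* page is executable */
    PAGE_USER_MODE = 1u << 5    /* accessible when CPU is not in privileged mode */
};

/* The kind of check the MMU performs: a write to a page without
 * PAGE_WRITE is illegal and raises a memory access exception. */
int access_permitted(unsigned attrs, int is_write, int privileged)
{
    if (!privileged && !(attrs & PAGE_USER_MODE))
        return 0;                          /* page is privileged-mode only */
    if (is_write)
        return (attrs & PAGE_WRITE) != 0;  /* write requires write permission */
    return (attrs & PAGE_READ) != 0;       /* read requires read permission */
}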
13.6 Points to Remember
Some points to remember include the following:
· Dynamic memory allocation in embedded systems can be built using a fixed-size blocks approach.
· Memory fragmentation can be classified into either external memory fragmentation or internal memory fragmentation.
· Memory compaction is generally not performed in real-time embedded systems.
· Management based on memory pools is commonly found in networking-related code.
· A well-designed memory allocation function should allow for blocking allocation.
· A blocking memory allocation function can be designed using both a counting semaphore and a mutex.
· Many embedded RTOSes do not implement virtual addressing even when an MMU is present.
· Many of these RTOSes do take advantage of the memory protection feature of the MMU.
Chapter 14: Modularizing An Application For Concurrency
14.1 Introduction
Many activities need to be completed when designing applications for real-time systems. One group of activities requires identifying certain elements. Some of the more important elements to identify include:
1. system requirements,