is 18ms, from task #3. Applying the numbers to Equation 14.2, the result is below the utility bound of 100% for task #1. Hence, task #1 is schedulable.
Looking at the second equation, task #2 can be blocked only by task #3, so the blocking factor is the blocking time induced by task #3. Applying the numbers again, task #2 is also schedulable as the result is below the utility bound for two tasks. Now looking at the last equation, note that task #3 is the lowest-priority task and cannot be blocked by any other task in the set, so its blocking factor is zero.
Again, the result is below the utility bound for three tasks; therefore, all of the tasks are schedulable.
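For reference, the test applied in each step above is the utilization bound extended with a blocking term. Written out (in notation that may differ slightly from Equation 14.2 as printed in this chapter, using C_k for execution time), it is:

\[
\frac{B_i}{T_i} + \sum_{k=1}^{i} \frac{C_k}{T_k} \;\le\; i\left(2^{1/i} - 1\right)
\]

where C_k and T_k are the execution time and period of task k, and B_i is the longest blocking time that lower-priority tasks can impose on task i. For i = 1 the right-hand side evaluates to 1, which is the 100% utility bound cited for task #1; for the lowest-priority task, B_i is zero.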
Other extensions to basic RMA address its remaining assumptions, such as accounting for aperiodic tasks in real-time systems. Consult the listed references for additional reading on RMA and related material.
14.5 Points to Remember
Some points to remember include the following:
· An outside-in approach can be used to decompose applications at the top level.
· Device dependencies can be used to decompose applications.
· Event dependencies can be used to decompose applications.
· Timing dependencies can be used to decompose applications.
· Levels of criticality of workload involved can be used to decompose applications.
· Functional cohesion, temporal cohesion, or sequential cohesion can be used either to form a task or to combine tasks.
· Rate Monotonic Scheduling can be summarized by stating that a task's priority depends on its period: the shorter the period, the higher the priority (a minimal priority-assignment sketch follows this list). RMS, when implemented appropriately, produces stable and predictable performance.
· Schedulability analysis addresses only a system's temporal requirements, not its functional requirements.
· Six assumptions are associated with the basic RMA:
o all of the tasks are periodic,
o the tasks are independent of each other, and no interactions occur among them,
o a task's deadline is the beginning of its next period,
o each task has a constant execution time that does not vary over time,
o all of the tasks have the same level of criticality, and
o aperiodic tasks are limited to initialization and failure-recovery work, and these aperiodic tasks do not have hard deadlines.
· Basic RMA does not account for task synchronization and aperiodic tasks.
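The following sketch makes the rate-monotonic rule concrete by assigning priorities strictly by period. The task descriptor, the task names, the periods, and the numeric priority convention (1 is highest) are illustrative assumptions, not an API or task set taken from this text.

#include <stdio.h>
#include <stdlib.h>

/* Illustrative task descriptor: period and execution time in ms. */
typedef struct {
    const char *name;
    unsigned    period_ms;     /* shorter period => higher priority */
    unsigned    exec_time_ms;
    unsigned    priority;      /* assigned below; 1 is highest      */
} task_t;

/* qsort comparator: order tasks by ascending period. */
static int by_period(const void *a, const void *b)
{
    const task_t *x = a, *y = b;
    return (int)x->period_ms - (int)y->period_ms;
}

int main(void)
{
    /* Hypothetical task set; real numbers come from the application. */
    task_t tasks[] = {
        { "task A", 300, 60, 0 },
        { "task B", 100, 20, 0 },
        { "task C", 150, 40, 0 },
    };
    const size_t n = sizeof tasks / sizeof tasks[0];

    /* Rate-monotonic assignment: sort by period, then number priorities. */
    qsort(tasks, n, sizeof tasks[0], by_period);
    for (size_t i = 0; i < n; i++) {
        tasks[i].priority = (unsigned)(i + 1);
        printf("%s: period=%ums priority=%u\n",
               tasks[i].name, tasks[i].period_ms, tasks[i].priority);
    }
    return 0;
}

Running the sketch prints the tasks in priority order, with the shortest-period task first; the scheduling itself is left to the RTOS.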
Chapter 15: Synchronization and Communication
15.1 Introduction
Software applications for real-time embedded systems use concurrency to maximize efficiency. As a result, an application's design typically involves multiple concurrent threads, tasks, or processes. Coordinating these activities requires inter-task synchronization and communication.
This chapter focuses on:
· resource synchronization,
· activity synchronization,
· inter-task communication, and
· ready-to-use embedded design patterns.
15.2 Synchronization
Synchronization is classified into two categories: resource synchronization and activity synchronization.
15.2.1 Resource Synchronization
Access by multiple tasks must be synchronized to maintain the integrity of a shared resource. This process is called resource synchronization.
As an example, consider two tasks trying to access shared memory. One task (the sensor task) periodically receives data from a sensor and writes the data to shared memory. Meanwhile, a second task (the display task) periodically reads from shared memory and sends the data to a display. The common design pattern of using shared memory is illustrated in Figure 15.1.
Figure 15.1: Multiple tasks accessing shared memory.
Problems arise if access to the shared memory is not exclusive, and multiple tasks can simultaneously access it. For example, if the sensor task has not completed writing data to the shared memory area before the display task tries to display the data, the display would contain a mixture of data extracted at different times, leading to erroneous data interpretation.
The section of code in the sensor task that writes input data to the shared memory is a critical section of the sensor task. The section of code in the display task that reads data from the shared memory is a critical section of the display task. These two critical sections are called competing critical sections because they operate on the same shared resource.
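As a concrete illustration of the sensor task and display task sharing memory, the following sketch uses a mutex to make each critical section exclusive. POSIX threads stand in here for whatever RTOS primitives an actual design would use, and the sensor data, periods, and loop counts are placeholders; the synchronization mechanisms themselves are covered later in this chapter.

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

/* Shared memory written by the sensor task and read by the display task. */
static int             shared_sample;
static pthread_mutex_t shared_lock = PTHREAD_MUTEX_INITIALIZER;

/* Sensor task: periodically writes new data into shared memory. */
static void *sensor_task(void *arg)
{
    (void)arg;
    for (int value = 0; value < 5; value++) {
        pthread_mutex_lock(&shared_lock);     /* enter critical section  */
        shared_sample = value;                /* write shared memory     */
        pthread_mutex_unlock(&shared_lock);   /* leave critical section  */
        usleep(100 * 1000);                   /* stand-in for the period */
    }
    return NULL;
}

/* Display task: periodically reads shared memory and "displays" it. */
static void *display_task(void *arg)
{
    (void)arg;
    for (int i = 0; i < 5; i++) {
        pthread_mutex_lock(&shared_lock);     /* enter critical section  */
        int snapshot = shared_sample;         /* read shared memory      */
        pthread_mutex_unlock(&shared_lock);   /* leave critical section  */
        printf("display: %d\n", snapshot);
        usleep(100 * 1000);
    }
    return NULL;
}

int main(void)
{
    pthread_t sensor, display;
    pthread_create(&sensor, NULL, sensor_task, NULL);
    pthread_create(&display, NULL, display_task, NULL);
    pthread_join(sensor, NULL);
    pthread_join(display, NULL);
    return 0;
}

Because each task holds the mutex only long enough to copy the data, the display task can never observe a partially written sample, which is exactly the failure described above.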