    /*
     * Set up a repeating timer using signal number SIGRTMIN,
     * set to occur every 2 seconds.
     */
    sig_event.sigev_value.sival_int = 0;
    sig_event.sigev_signo = SIGRTMIN;
    sig_event.sigev_notify = SIGEV_SIGNAL;
    if (timer_create (CLOCK_REALTIME, &sig_event, &timer_id) == -1)
        errno_abort ("Create timer");
    sigemptyset (&sig_mask);
    sigaddset (&sig_mask, SIGRTMIN);
    sig_action.sa_handler = signal_catcher;
    sig_action.sa_mask = sig_mask;
    sig_action.sa_flags = 0;
    if (sigaction (SIGRTMIN, &sig_action, NULL) == -1)
        errno_abort ("Set signal action");
    timer_val.it_interval.tv_sec = 2;
    timer_val.it_interval.tv_nsec = 0;
    timer_val.it_value.tv_sec = 2;
    timer_val.it_value.tv_nsec = 0;
    if (timer_settime (timer_id, 0, &timer_val, NULL) == -1)
        errno_abort ("Set timer");

    /*
     * Wait for all threads to complete.
     */
    for (thread_count = 0; thread_count < 5; thread_count++) {
        status = pthread_join (sem_waiters[thread_count], NULL);
        if (status != 0)
            err_abort (status, "Join thread");
    }
    return 0;
#endif
}

7 "Real code"

'When we were still little,' the Mock Turtle went on at last, more calmly, though still sobbing a little now and then, 'we went to school in the sea. The master was an old Turtle — we used to call him Tortoise—'

'Why did you call him Tortoise, if he wasn't one?' Alice asked.

'We called him Tortoise because he taught us,' said the Mock Turtle angrily.

Lewis Carroll, Alice's Adventures in Wonderland

This section builds on most of the earlier sections of the book, but principally on the mutex and condition variable sections. You should already understand how to create both types of synchronization objects and how they work. I will demonstrate the design and construction of barrier and read/write lock synchronization mechanisms that are built from mutexes, condition variables, and a dash of data. Both barriers and read/write locks are in common use, and have been proposed for standardization in the near future. I will follow up with a queue server that lets you parcel out tasks to a pool of threads.

The purpose of all this is to teach you more about the subtleties of using these new threaded programming tools (that is, mutexes, condition variables, and threads). The library packages may be useful to you as is or as templates. Primarily, though, they are here to give me something to talk about in this section, and I have omitted some complications that may be valuable in real code. The error detection and recovery code, for example, is fairly primitive.

7.1 Extended synchronization

Mutexes and condition variables are flexible and efficient synchronization tools. You can build just about any form of synchronization you need using those two things. But you shouldn't build them from scratch every time you need them. It is nice to start with a general, modular implementation that doesn't need to be debugged every time. This section shows some common and useful tools that you won't have to redesign every time you write an application that needs them.

First we'll build a barrier. The function of a barrier is about what you might guess — it stops threads. A barrier is initialized to stop a certain number of threads — when the required number of threads have reached the barrier, all are allowed to continue.
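The essential mechanism is small enough to sketch here. What follows is not the barrier package this chapter goes on to develop; the names (sketch_barrier_t, sketch_barrier_init, sketch_barrier_wait) are mine, and error checking is omitted. It is just a counter protected by a mutex, plus a condition variable on which arriving threads wait until the counter runs out.

/*
 * A minimal counting barrier built from one mutex and one condition
 * variable. The "cycle" counter lets the barrier be reused: waiters
 * sleep until the cycle changes, so a fast thread looping back into
 * the next cycle cannot be released early.
 */
#include <pthread.h>

typedef struct sketch_barrier_tag {
    pthread_mutex_t     mutex;      /* protects counter and cycle */
    pthread_cond_t      cv;         /* waiters block here */
    int                 threshold;  /* number of threads required */
    int                 counter;    /* threads still expected this cycle */
    unsigned long       cycle;      /* bumped each time the barrier trips */
} sketch_barrier_t;

int sketch_barrier_init (sketch_barrier_t *barrier, int count)
{
    barrier->threshold = barrier->counter = count;
    barrier->cycle = 0;
    pthread_mutex_init (&barrier->mutex, NULL);
    pthread_cond_init (&barrier->cv, NULL);
    return 0;
}

int sketch_barrier_wait (sketch_barrier_t *barrier)
{
    unsigned long cycle;

    pthread_mutex_lock (&barrier->mutex);
    cycle = barrier->cycle;
    if (--barrier->counter == 0) {
        /* Last thread in: start a new cycle and release everyone. */
        barrier->cycle++;
        barrier->counter = barrier->threshold;
        pthread_cond_broadcast (&barrier->cv);
    } else {
        /* Wait for the cycle to change, not for the counter itself. */
        while (cycle == barrier->cycle)
            pthread_cond_wait (&barrier->cv, &barrier->mutex);
    }
    pthread_mutex_unlock (&barrier->mutex);
    return 0;
}

Waiting for the cycle number to change, rather than watching the counter, is what makes the barrier reusable; a full version would also report initialization errors and tell one caller that it was the last thread through.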

Then we'll build something called a read/write lock. A read/write lock allows multiple threads to read data simultaneously, but prevents any thread from modifying data that is being read or modified by another thread.
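The same ingredients, one mutex and a pair of condition variables, are enough to sketch this as well. Again, the names (sketch_rwlock_t and its functions) and the details are mine, not the read/write lock package built later in this chapter; this version gives waiting writers preference over newly arriving readers so that a continuous stream of readers cannot starve a writer, and it omits error checking and destruction.

/*
 * A minimal read/write lock: readers share the lock, a writer gets
 * exclusive access, and waiting writers are preferred over new readers.
 */
#include <pthread.h>

typedef struct sketch_rwlock_tag {
    pthread_mutex_t mutex;              /* protects the counts below */
    pthread_cond_t  read;               /* blocked readers wait here */
    pthread_cond_t  write;              /* blocked writers wait here */
    int             active_readers;
    int             active_writer;      /* 0 or 1 */
    int             waiting_writers;
} sketch_rwlock_t;

void sketch_rwlock_init (sketch_rwlock_t *rwl)
{
    pthread_mutex_init (&rwl->mutex, NULL);
    pthread_cond_init (&rwl->read, NULL);
    pthread_cond_init (&rwl->write, NULL);
    rwl->active_readers = rwl->active_writer = rwl->waiting_writers = 0;
}

void sketch_rwlock_rdlock (sketch_rwlock_t *rwl)
{
    pthread_mutex_lock (&rwl->mutex);
    while (rwl->active_writer || rwl->waiting_writers > 0)
        pthread_cond_wait (&rwl->read, &rwl->mutex);
    rwl->active_readers++;
    pthread_mutex_unlock (&rwl->mutex);
}

void sketch_rwlock_rdunlock (sketch_rwlock_t *rwl)
{
    pthread_mutex_lock (&rwl->mutex);
    if (--rwl->active_readers == 0 && rwl->waiting_writers > 0)
        pthread_cond_signal (&rwl->write);
    pthread_mutex_unlock (&rwl->mutex);
}

void sketch_rwlock_wrlock (sketch_rwlock_t *rwl)
{
    pthread_mutex_lock (&rwl->mutex);
    rwl->waiting_writers++;
    while (rwl->active_writer || rwl->active_readers > 0)
        pthread_cond_wait (&rwl->write, &rwl->mutex);
    rwl->waiting_writers--;
    rwl->active_writer = 1;
    pthread_mutex_unlock (&rwl->mutex);
}

void sketch_rwlock_wrunlock (sketch_rwlock_t *rwl)
{
    pthread_mutex_lock (&rwl->mutex);
    rwl->active_writer = 0;
    if (rwl->waiting_writers > 0)
        pthread_cond_signal (&rwl->write);      /* hand off to one writer */
    else
        pthread_cond_broadcast (&rwl->read);    /* release all readers */
    pthread_mutex_unlock (&rwl->mutex);
}

Whether to prefer readers or writers is a policy decision; a real package makes that choice explicit and adds the error handling this sketch leaves out.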

7.1.1 Barriers

A barrier is a way to keep the members of a group together. If our intrepid 'bailing programmers' washed up on a deserted island, for example, and they ventured into the jungle to explore, they would want to remain together, for the illusion of safety in numbers, if for no other reason (Figure 7.1). Any exploring programmer finding himself very far in front of the others would therefore wait for them before continuing.

FIGURE 7.1 Barrier analogy

A barrier is usually employed to ensure that all threads cooperating in some parallel algorithm reach a specific point in that algorithm before any can pass. This is especially common in code that has been decomposed automatically by creating fine-grained parallelism within compiled source code. All threads may execute the same code, with threads processing separate portions of a shared data set (such as an array) in some areas and processing private data in parallel in other areas. Still other areas must be executed by only one thread, such as setup or cleanup for the parallel regions. The boundaries between these areas are often implemented using barriers. Thus, threads completing a matrix computation may wait at a barrier until all have finished. One may then perform setup for the next parallel segment while the others skip ahead to another barrier. When the setup thread reaches that barrier, all threads begin the next parallel region.
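To make that phase structure concrete, here is a sketch of the pattern using the pthread_barrier_t interface that POSIX.1-2001 later standardized (at the time this chapter was written it had only been proposed, which is why the chapter builds its own barrier). The thread that receives PTHREAD_BARRIER_SERIAL_THREAD from pthread_barrier_wait plays the setup role; the others skip ahead to the second barrier and wait. The worker function, the data array, and the constants are mine, chosen only for illustration.

#include <pthread.h>
#include <stdio.h>

#define THREADS 4
#define PHASES  3
#define SLICE   8

static double data[THREADS * SLICE];        /* shared data set */
static pthread_barrier_t barrier;

static void *worker (void *arg)
{
    long me = (long)arg;
    int phase, i, status;

    for (phase = 0; phase < PHASES; phase++) {
        /* Parallel region: each thread processes its own slice. */
        for (i = 0; i < SLICE; i++)
            data[me * SLICE + i] += 1.0;

        /* Wait until every thread has finished this phase. */
        status = pthread_barrier_wait (&barrier);
        if (status == PTHREAD_BARRIER_SERIAL_THREAD)
            printf ("phase %d: serial setup done by thread %ld\n",
                phase, me);

        /* The other threads skip ahead to this barrier and wait
           for the setup thread to catch up before the next region. */
        pthread_barrier_wait (&barrier);
    }
    return NULL;
}

int main (void)
{
    pthread_t threads[THREADS];
    long t;

    pthread_barrier_init (&barrier, NULL, THREADS);
    for (t = 0; t < THREADS; t++)
        pthread_create (&threads[t], NULL, worker, (void *)t);
    for (t = 0; t < THREADS; t++)
        pthread_join (threads[t], NULL);
    pthread_barrier_destroy (&barrier);
    return 0;
}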
