int pthread_spin_trylock (pthread_spinlock_t *lock);

int pthread_spin_unlock (pthread_spinlock_t *lock);

Thread abort:

int pthread_abort (pthread_t thread);

The same POSIX working group that developed POSIX.1b and Pthreads has developed a new set of extensions for realtime and threaded programming. Most of the extensions relevant to threads (and to this book) are the result of proposals developed by the POSIX 1003.14 profile group, which specialized in 'tuning' the existing POSIX standards for multiprocessor systems.

POSIX.1j adds some thread synchronization mechanisms that have been common in a wide range of multiprocessor and thread programming, but that had been omitted from the original Pthreads standard. Barriers and spinlocks are primarily useful for fine-grained parallelism, for example, in systems that automatically generate parallel code from program loops. Read/write locks are useful in shared data algorithms where many threads are allowed to read simultaneously, but only one thread can be allowed to update data.

10.2.1 Barriers

'Barriers' are a form of synchronization most commonly used in parallel decomposition of loops. They're almost never used except in code designed to run only on multiprocessor systems. A barrier is a 'meeting place' for a group of associated threads, where each will wait until all have reached the barrier. When the last one waits on the barrier, all the participating threads are released.
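
As a concrete illustration, here is a minimal sketch of a barrier used to separate two phases of a parallel computation. The pthread_barrier_* names follow the interface as it was eventually standardized, which may differ in detail from the POSIX.1j draft, and the worker's 'phases' are purely hypothetical placeholders.

#include <pthread.h>
#include <stdio.h>

#define THREADS 4

pthread_barrier_t barrier;              /* the threads' shared "meeting place" */

void *worker (void *arg)
{
    int status;

    /* ... phase 1: compute this thread's share of the data ... */

    status = pthread_barrier_wait (&barrier);
    if (status != 0 && status != PTHREAD_BARRIER_SERIAL_THREAD)
        fprintf (stderr, "barrier wait failed: %d\n", status);

    /* ... phase 2: every thread's phase 1 results are now complete ... */
    return NULL;
}

int main (void)
{
    pthread_t threads[THREADS];
    int i;

    pthread_barrier_init (&barrier, NULL, THREADS);
    for (i = 0; i < THREADS; i++)
        pthread_create (&threads[i], NULL, worker, NULL);
    for (i = 0; i < THREADS; i++)
        pthread_join (threads[i], NULL);
    pthread_barrier_destroy (&barrier);
    return 0;
}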

See Section 7.1.1 for details of barrier behavior and for an example showing how to implement a barrier using standard Pthreads synchronization. (Note that the behavior of this example is not precisely the same as that proposed by POSIX.1j.)

10.2.2 Read/write locks

A read/write lock (also sometimes known as 'reader/writer lock') allows one thread to exclusively lock some shared data to write or modify that data, but also allows multiple threads to simultaneously lock the data for read access. UNIX98 specifies 'read/write locks' very similar to POSIX.1j reader/writer locks. Although X/Open intends that the two specifications will be functionally identical, the names are different to avoid conflict should the POSIX standard change before approval.

If your code relies on a data structure that is frequently referenced, but only occasionally updated, you should consider using a read/write lock rather than a mutex to access that data. Most threads will be able to read the data without waiting; they'll need to block only when some thread is in the process of modifying the data. (Similarly, a thread that desires to write the data will be blocked if any threads are reading the data.)
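
For example, a table that is read constantly but updated only occasionally might be protected like this. This is only a sketch, using the pthread_rwlock_* names from UNIX98 (XSH5); the table and its access functions are hypothetical.

#include <pthread.h>

static pthread_rwlock_t table_lock = PTHREAD_RWLOCK_INITIALIZER;
static int table[100];                      /* hypothetical, frequently read data */

int table_read (int index)
{
    int value;

    pthread_rwlock_rdlock (&table_lock);    /* many readers may hold the lock at once */
    value = table[index];
    pthread_rwlock_unlock (&table_lock);
    return value;
}

void table_update (int index, int value)
{
    pthread_rwlock_wrlock (&table_lock);    /* a writer gets exclusive access */
    table[index] = value;
    pthread_rwlock_unlock (&table_lock);
}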

See Section 7.1.2 for details of read/write lock behavior and for an example showing how to implement a read/write lock using standard Pthreads synchronization. (Note that the behavior of this example is not precisely the same as that proposed by POSIX.1j.)

*The POSIX working group is considering the possibility of adopting the XSH5 read/write lock definition and abandoning the original POSIX.1j names, but the decision hasn't yet been made.

10.2.3 Spinlocks

Spinlocks are much like mutexes. There's been a lot of discussion about whether it even makes sense to standardize on a spinlock interface — since POSIX specifies only a source level API, there's very little POSIX.1j says about them that distinguishes them from mutexes. The essential idea is that a spinlock is the most primitive and fastest synchronization mechanism available on a given hardware architecture. On some systems, that may be a single 'test and set' instruction — on others, it may be a substantial sequence of 'load locked, test, store conditional, memory barrier' instructions.

The critical distinction is that a thread trying to lock a spinlock does not necessarily block when the spinlock is already held by another thread. The intent is that the thread will 'spin,' retrying the lock rapidly until it succeeds in locking the spinlock. (This is one of the 'iffy' spots — on a uniprocessor it had better block, or it'll do nothing but spin out the rest of its timeslice . . . or spin to eternity if it isn't timesliced.)

Spinlocks are great for fine-grained parallelism, when the code is intended to run only on a multiprocessor, carefully tuned to hold the spinlock for only a few instructions, and getting ultimate performance is more important than sharing the system resources cordially with other processes. To be effective, a spinlock must never be locked for as long as it takes to 'context switch' from one thread to another. If it does take as long or longer, you'll get better overall performance by blocking and allowing some other thread to do useful work.
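
In use, a spinlock looks almost exactly like a mutex; the difference is the discipline of keeping the critical section down to a handful of instructions. The following sketch assumes the pthread_spin_init form that takes a process-shared flag, as in the eventually standardized interface (the POSIX.1j draft, as described below, uses separate function families instead); the counter is hypothetical.

#include <pthread.h>

static pthread_spinlock_t counter_lock;
static long counter;

void counter_setup (void)
{
    /* PTHREAD_PROCESS_PRIVATE: used only by threads within this process */
    pthread_spin_init (&counter_lock, PTHREAD_PROCESS_PRIVATE);
}

void counter_bump (void)
{
    pthread_spin_lock (&counter_lock);      /* may spin briefly if another CPU holds it */
    counter++;                              /* critical section: a few instructions only */
    pthread_spin_unlock (&counter_lock);
}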

POSIX.1j contains two sets of spinlock functions: one set with a spin_ prefix, which allows spinlock synchronization between processes; and the other set with a pthread_ prefix, allowing spinlock synchronization between threads within a process. This, you will notice, is very different from the model used for mutexes, condition variables, and read/write locks, where the same functions were used and the pshared attribute specifies whether the resulting synchronization object can be shared between processes.

The rationale for this is that spinlocks are intended to be very fast, and should not be subject to any possible overhead as a result of needing to decide, at run time, how to behave. It is, in fact, unlikely that the implementation of spin_lock and pthread_spin_lock will differ on most systems, but the standard allows them to be different.

10.2.4 Condition variable wait clock

Pthreads condition variables support only 'absolute time' timeouts. That is, the thread specifies that it is willing to wait until 'Jan 1 00:00:00 GMT 2001,' rather than being able to specify that it wants to wait for '1 hour, 10 minutes.' The reason for this is that a condition variable wait is subject to wakeups for various reasons that are beyond your control or not easy to control. When you wake early from a '1 hour, 10 minute' wait it is difficult to determine how much of that time is left. But when you wake early from the absolute wait, your target time is still 'Jan 1 00:00:00 GMT 2001.' (The reasons for early wakeup are discussed in Section 3.3.2.)

Despite all this excellent reasoning, 'relative time' waits are useful. One important advantage is that absolute system time is subject to external changes. It might be modified to correct for an inaccurate clock chip, or brought up-to-date with a network time server, or adjusted for any number of other reasons. Both relative time waits and absolute time waits remain correct across that adjustment, but a relative time wait expressed as if it were an absolute time wait cannot. That is, when you want to wait for '1 hour, 10 minutes,' but the best you can do is add that interval to the current clock and wait until that clock time, the system can't adjust the absolute timeout for you when the system time is changed.
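
For example, the best you can do with a standard condition variable is a sketch like the following (the mutex, condition variable, and predicate are hypothetical). If the system clock is set back an hour after clock_gettime returns, this wait will last an hour longer than the 70 minutes intended.

#include <pthread.h>
#include <time.h>

pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;
pthread_cond_t cond = PTHREAD_COND_INITIALIZER;
int ready = 0;                                  /* hypothetical predicate */

int wait_70_minutes (void)
{
    struct timespec timeout;
    int status = 0;

    clock_gettime (CLOCK_REALTIME, &timeout);   /* current absolute (wall clock) time */
    timeout.tv_sec += 4200;                     /* + 1 hour, 10 minutes */

    pthread_mutex_lock (&mutex);
    while (!ready && status == 0)
        status = pthread_cond_timedwait (&cond, &mutex, &timeout);
    pthread_mutex_unlock (&mutex);
    return status;                              /* ETIMEDOUT when the clock reaches the target */
}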

POSIX.1j addresses this issue as part of a substantial and pervasive 'cleanup' of POSIX time services. The standard (building on top of POSIX.1b, which introduced the realtime clock functions, and the CLOCK_REALTIME clock) introduces a new system clock called CLOCK_MONOTONIC. This new clock isn't a 'relative timer' in the traditional sense, but it is never decreased, and it is never modified by date or time changes on the system. It increases at a constant rate. A 'relative time' wait is nothing more than taking the current absolute value of the CLOCK_MONOTONIC clock, adding some fixed offset (4200 seconds for a wait of 1 hour and 10 minutes), and waiting until that value of the clock is reached.
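
In concrete terms, the same 70 minute wait can be rewritten against CLOCK_MONOTONIC. This sketch assumes the clock selection interface (pthread_condattr_setclock) that grew out of the POSIX.1j proposal; the mutex, condition variable, and predicate are again hypothetical.

#include <pthread.h>
#include <time.h>

pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;
pthread_cond_t cond;
int ready = 0;                                  /* hypothetical predicate */

void cond_setup (void)
{
    pthread_condattr_t attr;

    pthread_condattr_init (&attr);
    pthread_condattr_setclock (&attr, CLOCK_MONOTONIC);  /* timeouts measured on the monotonic clock */
    pthread_cond_init (&cond, &attr);
    pthread_condattr_destroy (&attr);
}

int wait_70_minutes (void)
{
    struct timespec timeout;
    int status = 0;

    clock_gettime (CLOCK_MONOTONIC, &timeout);  /* current monotonic clock value */
    timeout.tv_sec += 4200;                     /* + 1 hour, 10 minutes */

    pthread_mutex_lock (&mutex);
    while (!ready && status == 0)
        status = pthread_cond_timedwait (&cond, &mutex, &timeout);
    pthread_mutex_unlock (&mutex);
    return status;
}

Because the monotonic clock is never set backward or forward, this wait ends after roughly 70 minutes of elapsed time no matter how the system's wall clock is adjusted in the meantime.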
