diff --git a/en_US.ISO8859-1/books/arch-handbook/smp/chapter.sgml b/en_US.ISO8859-1/books/arch-handbook/smp/chapter.sgml
index aec79065c0..84c219ac37 100644
--- a/en_US.ISO8859-1/books/arch-handbook/smp/chapter.sgml
+++ b/en_US.ISO8859-1/books/arch-handbook/smp/chapter.sgml
@@ -19,7 +19,7 @@
     2002
-    2003
+    2004
     John Baldwin
     Robert Watson
@@ -257,6 +257,11 @@
     Kernel Preemption and Critical Sections

+    Please note that full kernel preemption as described below
+    is not currently implemented in the CVS tree.  An implementation
+    similar to that described below has been implemented before in a
+    prototype tree.
+
     Kernel Preemption in a Nutshell
@@ -305,8 +310,8 @@
     In order to minimize latency, preemptions inside of a
     critical section are deferred rather than dropped.  If a
-    thread is made runnable that would normally be preempted to
-    outside of a critical section, then a per-thread flag is set
+    thread that would normally be preempted to is made runnable while the current thread is in a critical section,
+    then a per-thread flag is set
     to indicate that there is a pending preemption.  When the
     outermost critical section is exited, the flag is checked.
     If the flag is set, then the current thread is preempted to
@@ -329,8 +334,7 @@
     to split out the MD API from the MI API and only use it in
     conjunction with the MI API in the spin mutex implementation.
     If this approach is taken, then the MD API
-    likely would need a rename to show that it is a separate API
-    now.
+    likely would need a rename to show that it is a separate API.
@@ -366,8 +370,8 @@
     run another non-realtime kernel thread, the kernel may switch
     out the executing thread just before it is about to sleep or
     execute.  The cache on the CPU must then adjust to
-    the new thread.  When the kernel returns to the interrupted
-    CPU, it must refill all the cache information that was lost.
+    the new thread.  When the kernel returns to the preempted
+    thread, it must refill all the cache information that was lost.
     In addition, two extra context switches are performed that
     could be avoided if the kernel deferred the preemption until
     the first thread blocked or returned to userland.  Thus, by
@@ -410,6 +414,8 @@
     thread is preempted it should not migrate to another CPU.

+    Need to describe the thread pinning API that Jeff implemented here instead.
+
     One possible implementation is to use a per-thread nesting
     count td_pinnest along with a td_pincpu which is
     updated to the current
@@ -471,7 +477,7 @@
     code is careful to leave the list in a consistent state
     while releasing the mutex.  If DIAGNOSTIC is
     enabled, then the time taken to execute each function is
-    measured, and a warning generated if it exceeds a
+    measured, and a warning is generated if it exceeds a
     threshold.
@@ -760,6 +766,29 @@
     Implementation Notes

+
+    Sleep Queues
+
+    - Lookup/release
+
+    - Adding & waiting.
+
+    - Timeout and signal catching.
+
+    - Aborting a sleep.
+
+
+
+    Turnstiles
+
+    - Compare/contrast with sleep queues.
+
+    - Lookup/wait/release.
+    - Describe TDF_TSNOBLOCK race.
+
+    - Priority propagation.
+
+
     Details of the Mutex Implementation
@@ -779,9 +808,7 @@
     - Describe the races with contested mutexes
     - Why it is safe to read mtx_lock of a contested mutex
-      when holding sched_lock.
-
-    - Priority propagation
+      when holding the turnstile chain lock.
@@ -808,12 +835,12 @@
     Other Random Questions/Topics

-    Should we pass an interlock into
+    - Should we pass an interlock into
     sema_wait?

-    - Generic turnstiles for sleep mutexes and sx locks.
-    - Should we have non-sleepable sx locks?
+
+    - Add some info about proper use of reference counts.