.HTML "Process Sleep and Wakeup on a Shared-memory Multiprocessor
.TL
Process Sleep and Wakeup on a Shared-memory Multiprocessor
.AU
Rob Pike
Dave Presotto
Ken Thompson
Gerard Holzmann
.sp
rob,presotto,ken,gerard@plan9.bell-labs.com
.AB
.FS
Appeared in a slightly different form in
.I
Proceedings of the Spring 1991 EurOpen Conference,
.R
Tromsø, Norway, 1991, pp. 161-166.
.FE
The problem of enabling a `sleeping' process on a shared-memory multiprocessor
is a difficult one, especially if the process is to be awakened by an interrupt-time
event. We present here the code
for sleep and wakeup primitives that we use in our multiprocessor system.
The code has been exercised by years of active use and by a verification
system.
.AE
.LP
Our problem is to synchronise processes on a symmetric shared-memory multiprocessor.
Processes suspend execution, or
.I sleep,
while awaiting an enabling event such as an I/O interrupt.
When the event occurs, the process is issued a
.I wakeup
to resume its execution.
During these events, other processes may be running and other interrupts
occurring on other processors.
.LP
More specifically, we wish to implement subroutines called
.CW sleep ,
callable by a process to relinquish control of its current processor,
and
.CW wakeup ,
callable by another process or an interrupt to resume the execution
of a suspended process.
The calling conventions of these subroutines will remain unspecified
for the moment.
.LP
We assume the processors have an atomic test-and-set or equivalent
operation but no other synchronisation method. Also, we assume interrupts
can occur on any processor at any time, except on a processor that has
locally inhibited them.
.LP
The problem is the generalisation to a multiprocessor of a familiar
and well-understood uniprocessor problem. It may be reduced to a
uniprocessor problem by using a global test-and-set to serialise the
sleeps and wakeups,
which is equivalent to synchronising through a monitor.
For performance and cleanliness, however,
we prefer to allow the interrupt handling and process control to be multiprocessed.
.LP
Our attempts to solve the sleep/wakeup problem in Plan 9
[Pik90]
prompted this paper.
We implemented solutions several times over several months and each
time convinced ourselves \(em wrongly \(em they were correct.
Multiprocessor algorithms can be
difficult to prove correct by inspection and formal reasoning about them
is impractical. We finally developed an algorithm we trust by
verifying our code using an
empirical testing tool.
We present that code here, along with some comments about the process by
which it was designed.
.SH
History
.LP
Since processes in Plan 9 and the UNIX
system have similar structure and properties, one might ask if
UNIX
.CW sleep
and
.CW wakeup
[Bac86]
could not easily be adapted from their standard uniprocessor implementation
to our multiprocessor needs.
The short answer is, no.
.LP
The
UNIX
routines
take as argument a single global address
that serves as a unique
identifier to connect the wakeup with the appropriate process or processes.
This has several inherent disadvantages.
From the point of view of
.CW sleep
and
.CW wakeup ,
it is difficult to associate a data structure with an arbitrary address;
the routines are unable to maintain a state variable recording the
status of the event and processes.
(The reverse is of course easy \(em we could
require the address to point to a special data structure \(em
but we are investigating
UNIX
.CW sleep
and
.CW wakeup ,
not the code that calls them.)
Also, multiple processes sleep `on' a given address, so
.CW wakeup
must enable them all, and let process scheduling determine which process
actually benefits from the event.
This is inefficient;
a queueing mechanism would be preferable
but, again, it is difficult to associate a queue with a general address.
Moreover, the lack of state means that
.CW sleep
and
.CW wakeup
cannot know what the corresponding process (or interrupt) is doing;
.CW sleep
and
.CW wakeup
must be executed atomically.
On a uniprocessor it suffices to disable interrupts during their
execution.
On a multiprocessor, however,
most processors
can inhibit interrupts only on the current processor,
so while a process is executing
.CW sleep
the desired interrupt can come and go on another processor.
If the wakeup is to be issued by another process, the problem is even harder.
Some inter-process mutual exclusion mechanism must be used,
which, yet again, is difficult to do without a way to communicate state.
.LP
In summary, to be useful on a multiprocessor,
UNIX
.CW sleep
and
.CW wakeup
must either be made to run atomically on a single
processor (such as by using a monitor)
or they need a richer model for their communication.
.SH
The design
.LP
Consider the case of an interrupt waking up a sleeping process.
(The other case, a process awakening a second process, is easier because
atomicity can be achieved using an interlock.)
The sleeping process is waiting for some event to occur, which may be
modeled by a condition coming true.
The condition could be just that the event has happened, or something
more subtle such as a queue draining below some low-water mark.
We represent the condition by a function of one
argument of type
.CW void* ;
the code supporting the device generating the interrupts
provides such a function to be used by
.CW sleep
and
.CW wakeup
to synchronise. The function returns
.CW false
if the event has not occurred, and
.CW true
some time after the event has occurred.
The
.CW sleep
and
.CW wakeup
routines must, of course, work correctly if the
event occurs while the process is executing
.CW sleep .
.LP
We assume that a particular call to
.CW sleep
corresponds to a particular call to
.CW wakeup ,
that is,
at most one process is asleep waiting for a particular event.
This can be guaranteed in the code that calls
.CW sleep
and
.CW wakeup
by appropriate interlocks.
We also assume for the moment that there will be only one interrupt
and that it may occur at any time, even before
.CW sleep
has been called.
.LP
For performance,
we desire that multiple instances of
.CW sleep
and
.CW wakeup
may be running simultaneously on our multiprocessor.
For example, a process calling
.CW sleep
to await a character from an input channel need not
wait for another process to finish executing
.CW sleep
to await a disk block.
At a finer level, we would like a process reading from one input channel
to be able to execute
.CW sleep
in parallel with a process reading from another input channel.
A standard approach to synchronisation is to interlock the channel `driver'
so that only one process may be executing in the channel code at once.
This method is clearly inadequate for our purposes; we need
fine-grained synchronisation, and in particular to apply
interlocks at the level of individual channels rather than at the level
of the channel driver.
.LP
Our approach is to use an object called a
.I rendezvous ,
which is a data structure through which
.CW sleep
and
.CW wakeup
synchronise.
(The similarly named construct in Ada is a control structure;
ours is an unrelated data structure.)
A rendezvous
is allocated for each active source of events:
one for each I/O channel,
one for each end of a pipe, and so on.
The rendezvous serves as an interlockable structure in which to record
the state of the sleeping process, so that
.CW sleep
and
.CW wakeup
can communicate if the event happens before or while
.CW sleep
is executing.
.LP
Our design for
.CW sleep
is therefore a function
.P1
void sleep(Rendezvous *r, int (*condition)(void*), void *arg)
.P2
called by the sleeping process.
The argument
.CW r
connects the call to
.CW sleep
with the call to
.CW wakeup ,
and is part of the data structure for the (say) device.
The function
.CW condition
is described above;
called with argument
.CW arg ,
it is used by
.CW sleep
to decide whether the event has occurred.
.CW Wakeup
has a simpler specification:
.P1
void wakeup(Rendezvous *r).
.P2
.CW Wakeup
must be called after the condition has become true.
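.LP
To make these conventions concrete, here is a sketch of how a hypothetical
character-input driver might use the two routines; the names
.CW Kbdq ,
.CW isnonempty ,
.CW kbdintr ,
and
.CW kbdgetc
are illustrative only, and the interlocking of the queue itself is elided.
.P1
typedef struct{
	Rendezvous r;	/* one rendezvous per source of events */
	char buf[64];
	int nc;		/* count of buffered characters */
}Kbdq;

static int
isnonempty(void *a)		/* the condition function */
{
	return ((Kbdq*)a)->nc > 0;
}

void
kbdintr(Kbdq *q, char c)	/* called at interrupt time */
{
	q->buf[q->nc++] = c;	/* make the condition true... */
	wakeup(&q->r);		/* ...then call wakeup */
}

char
kbdgetc(Kbdq *q)		/* called by a process */
{
	sleep(&q->r, isnonempty, q);
	return q->buf[--q->nc];
}
.P2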
.SH
An implementation
.LP
The
.CW Rendezvous
data type is defined as
.P1
typedef struct{
	Lock l;
	Proc *p;
}Rendezvous;
.P2
Our
.CW Locks
are test-and-set spin locks.
The routine
.CW lock(Lock\ *l)
returns when the current process holds that lock;
.CW unlock(Lock\ *l)
releases the lock.
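.LP
For concreteness, here is a minimal sketch of such a lock.
It assumes a hypothetical
.CW tas
routine that performs the processor's atomic test-and-set on a word and
returns the word's previous value; it is an illustration of the idea,
not the code we actually use.
.P1
typedef struct{
	int key;	/* 0 means free, non-zero means held */
}Lock;

void
lock(Lock *l)
{
	while(tas(&l->key))	/* spin until test-and-set finds it free */
		;
}

void
unlock(Lock *l)
{
	l->key = 0;
}
.P2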
.LP
Here is our implementation of
.CW sleep .
Its details are discussed below.
.CW Thisp
is a pointer to the current process on the current processor.
(Its value differs on each processor.)
.P1
void
sleep(Rendezvous *r, int (*condition)(void*), void *arg)
{
	int s;

	s = inhibit();		/* interrupts */
	lock(&r->l);
	/*
	 * if condition happened, never mind
	 */
	if((*condition)(arg)){
		unlock(&r->l);
		allow();	/* interrupts */
		return;
	}
	/*
	 * now we are committed to
	 * change state and call scheduler
	 */
	if(r->p)
		error("double sleep %d %d", r->p->pid, thisp->pid);
	thisp->state = Wakeme;
	r->p = thisp;
	unlock(&r->l);
	allow(s);		/* interrupts */
	sched();		/* relinquish CPU */
}
.P2
.ne 3i
Here is
.CW wakeup .
.P1
void
wakeup(Rendezvous *r)
{
	Proc *p;
	int s;

	s = inhibit();		/* interrupts; return old state */
	lock(&r->l);
	p = r->p;
	if(p){
		r->p = 0;
		if(p->state != Wakeme)
			panic("wakeup: not Wakeme");
		ready(p);
	}
	unlock(&r->l);
	if(s)
		allow();
}
.P2
.CW Sleep
and
.CW wakeup
both begin by disabling interrupts
and then locking the rendezvous structure.
Because
.CW wakeup
may be called in an interrupt routine, the lock must be set only
with interrupts disabled on the current processor,
so that if the interrupt comes during
.CW sleep
it will occur only on a different processor;
if it occurred on the processor executing
.CW sleep ,
the spin lock in
.CW wakeup
would hang forever.
At the end of each routine, the lock is released and processor priority
returned to its previous value.
.CW Wakeup "" (
needs to inhibit interrupts in case
it is being called by a process;
this is a no-op if called by an interrupt.)
.LP
.CW Sleep
checks to see if the condition has become true, and returns if so.
Otherwise the process posts its name in the rendezvous structure where
.CW wakeup
may find it, marks its state as waiting to be awakened
(this is for error checking only) and goes to sleep by calling
.CW sched() .
The manipulation of the rendezvous structure is all done under the lock,
and
.CW wakeup
only examines it under lock, so atomicity and mutual exclusion
are guaranteed.
.LP
.CW Wakeup
has a simpler job. When it is called, the condition has implicitly become true,
so it locks the rendezvous, sees if a process is waiting, and readies it to run.
.SH
Discussion
.LP
The synchronisation technique used here
is similar to known methods, even as far back as Saltzer's thesis
[Sal66].
The code looks trivially correct in retrospect: all access to data structures is done
under lock, and there is no place that things may get out of order.
Nonetheless, it took us several iterations to arrive at the above
implementation, because the things that
.I can
go wrong are often hard to see. We had four earlier implementations
that were examined at great length and only found faulty when a new,
different style of device or activity was added to the system.
.LP
.ne 3i
Here, for example, is an incorrect implementation of wakeup,
closely related to one of our versions.
.P1
void
wakeup(Rendezvous *r)
{
	Proc *p;
	int s;

	p = r->p;
	if(p){
		s = inhibit();
		lock(&r->l);
		r->p = 0;
		if(p->state != Wakeme)
			panic("wakeup: not Wakeme");
		ready(p);
		unlock(&r->l);
		if(s)
			allow();
	}
}
.P2
The mistake is that the reading of
.CW r->p
may occur just as the other process calls
.CW sleep ,
so when the interrupt examines the structure it sees no one to wake up,
and the sleeping process misses its wakeup.
We wrote the code this way because we reasoned that the fetch
.CW p
.CW =
.CW r->p
was inherently atomic and need not be interlocked.
The bug was found by examination when a new, very fast device
was added to the system and sleeps and interrupts were closely overlapped.
However, it was in the system for a couple of months without causing an error.
.LP
How many errors lurk in our supposedly correct implementation above?
We would like a way to guarantee correctness; formal proofs are beyond
our abilities when the subtleties of interrupts and multiprocessors are
involved.
With that in mind, the first three authors approached the last to see
if his automated tool for checking protocols
[Hol91]
could be
used to verify our new
.CW sleep
and
.CW wakeup
for correctness.
The code was translated into the language for that system
(with, unfortunately, no way of proving that the translation is itself correct)
and validated by exhaustive simulation.
.LP
The validator found a bug.
Under our assumption that there is only one interrupt, the bug cannot
occur, but in the more general case of multiple interrupts synchronising
through the same condition function and rendezvous,
the process and interrupt can enter a peculiar state.
A process may return from
.CW sleep
with the condition function false
if there is a delay between
the condition coming true and
.CW wakeup
being called,
with the delay occurring
just as the receiving process calls
.CW sleep .
The condition is now true, so that process returns immediately,
does whatever is appropriate, and then (say) decides to call
.CW sleep
again. This time the condition is false, so it goes to sleep.
The wakeup process then finds a sleeping process,
and wakes it up, but the condition is now false.
.LP
There is an easy (and verified) solution: at the end of
.CW sleep
or after
.CW sleep
returns,
if the condition is false, execute
.CW sleep
again. This re-execution cannot repeat; the second synchronisation is guaranteed
to function under the external conditions we are supposing.
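.LP
In code, the modification amounts to a retry at the call site (or,
equivalently, at the end of
.CW sleep
itself); a sketch:
.P1
sleep(r, condition, arg);
if(!(*condition)(arg))
	sleep(r, condition, arg);	/* cannot be needed a third time */
.P2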
.LP
Even though the original code is completely
protected by interlocks and had been examined carefully by all of us
and believed correct, it still had problems.
It seems to us that some exhaustive automated analysis is
required of multiprocessor algorithms to guarantee their safety.
Our experience has confirmed that it is almost impossible to
guarantee by inspection or simple testing the correctness
of a multiprocessor algorithm. Testing can demonstrate the presence
of bugs but not their absence
[Dij72].
.LP
We close by claiming that the code above with
the suggested modification passes all tests we have for correctness
under the assumptions used in the validation.
We would not, however, go so far as to claim that it is universally correct.
.SH
References
.LP
[Bac86] Maurice J. Bach,
.I "The Design of the UNIX Operating System,
Prentice-Hall,
Englewood Cliffs,
1986.
.LP
[Dij72] Edsger W. Dijkstra,
``The Humble Programmer \- 1972 Turing Award Lecture'',
.I "Comm. ACM,
15(10), pp. 859-866,
October 1972.
.LP
[Hol91] Gerard J. Holzmann,
.I "Design and Validation of Computer Protocols,
Prentice-Hall,
Englewood Cliffs,
1991.
.LP
[Pik90]
Rob Pike,
Dave Presotto,
Ken Thompson,
Howard Trickey,
``Plan 9 from Bell Labs'',
.I "Proceedings of the Summer 1990 UKUUG Conference,
pp. 1-9,
London,
July, 1990.
.LP
[Sal66] Jerome H. Saltzer,
.I "Traffic Control in a Multiplexed Computer System,
MIT,
Cambridge, Mass.,
1966.