.TL
The Organization of Networks in Plan 9
.AU
Dave Presotto
Phil Winterbottom
.sp
presotto,philw@plan9.bell-labs.com
.AB
.FS
Originally appeared in
.I
Proc. of the Winter 1993 USENIX Conf.,
.R
pp. 271-280,
San Diego, CA
.FE
In a distributed system networks are of paramount importance. This
paper describes the implementation, design philosophy, and organization
of network support in Plan 9. Topics include network requirements
for distributed systems, our kernel implementation, network naming, user interfaces,
and performance. We also observe that much of this organization is relevant to
current systems.
.AE
.NH
Introduction
.PP
Plan 9 [Pike90] is a general-purpose, multi-user, portable distributed system
implemented on a variety of computers and networks.
What distinguishes Plan 9 is its organization.
The goals of this organization were to
reduce administration
and to promote resource sharing. One of the keys to its success as a distributed
system is the organization and management of its networks.
.PP
A Plan 9 system comprises file servers, CPU servers and terminals.
The file servers and CPU servers are typically centrally
located multiprocessor machines with large memories and
high speed interconnects.
A variety of workstation-class machines
serve as terminals
connected to the central servers using several networks and protocols.
The architecture of the system demands a hierarchy of network
speeds matching the needs of the components.
Connections between file servers and CPU servers are high-bandwidth point-to-point
fiber links.
Connections from the servers fan out to local terminals
using medium speed networks
such as Ethernet [Met80] and Datakit [Fra80].
Low speed connections via the Internet and
the AT&T backbone serve users in Oregon and Illinois.
Basic Rate ISDN data service and 9600 baud serial lines provide slow
links to users at home.
.PP
Since CPU servers and terminals use the same kernel,
users may choose to run programs locally on
their terminals or remotely on CPU servers.
The organization of Plan 9 hides the details of system connectivity
allowing both users and administrators to configure their environment
to be as distributed or centralized as they wish.
Simple commands support the
construction of a locally represented name space
spanning many machines and networks.
At work, users tend to use their terminals like workstations,
running interactive programs locally and
reserving the CPU servers for data or compute intensive jobs
such as compiling and computing chess endgames.
At home or when connected over
a slow network, users tend to do most work on the CPU server to minimize
traffic on the slow links.
The goal of the network organization is to provide the same
environment to the user wherever resources are used.
.NH
Kernel Network Support
.PP
Networks play a central role in any distributed system. This is particularly
true in Plan 9 where most resources are provided by servers external to the kernel.
The importance of the networking code within the kernel
is reflected by its size;
of 25,000 lines of kernel code, 12,500 are network and protocol related.
Networks are continually being added and the fraction of code
devoted to communications
is growing.
Moreover, the network code is complex.
Protocol implementations consist almost entirely of
synchronization and dynamic memory management, areas demanding
subtle error recovery
strategies.
The kernel currently supports Datakit, point-to-point fiber links,
an Internet (IP) protocol suite and ISDN data service.
The variety of networks and machines
has raised issues not addressed by other systems running on commercial
hardware supporting only Ethernet or FDDI.
.NH 2
The File System protocol
.PP
A central idea in Plan 9 is the representation of a resource as a hierarchical
file system.
Each process assembles a view of the system by building a
.I "name space
[Needham] connecting its resources.
File systems need not represent disc files; in fact, most Plan 9 file systems have no
permanent storage.
A typical file system dynamically represents
some resource like a set of network connections or the process table.
Communication between the kernel, device drivers, and local or remote file servers uses a
protocol called 9P. The protocol consists of 17 messages
describing operations on files and directories.
Kernel resident device and protocol drivers use a procedural version
of the protocol while external file servers use an RPC form.
Nearly all traffic between Plan 9 systems consists
of 9P messages.
9P relies on several properties of the underlying transport protocol.
It assumes messages arrive reliably and in sequence and
that delimiters between messages
are preserved.
When a protocol does not meet these
requirements (for example, TCP does not preserve delimiters)
we provide mechanisms to marshal messages before handing them
to the system.
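.PP
The delimiter requirement can be met over a byte-stream transport by framing each message, as in the rough sketch below. The 4-byte little-endian length prefix is an assumption for illustration, not the actual 9P wire encoding.
.P1
```python
import struct

def frame(msg: bytes) -> bytes:
    # Prefix each message with its length so that message boundaries
    # survive a byte-stream transport such as TCP.
    return struct.pack("<I", len(msg)) + msg

def deframe(stream: bytes) -> list:
    # Recover the original message sequence from the concatenated stream.
    msgs, off = [], 0
    while off < len(stream):
        (n,) = struct.unpack_from("<I", stream, off)
        off += 4
        msgs.append(stream[off:off + n])
        off += n
    return msgs
```
.P2
Framing two messages and concatenating the results lets the receiver recover both intact, restoring the delimiters TCP discards.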
.PP
A kernel data structure, the
.I channel ,
is a handle to a file server.
Operations on a channel generate the following 9P messages.
The
.CW session
and
.CW attach
messages authenticate a connection, established by means external to 9P,
and validate its user.
The result is an authenticated
channel
referencing the root of the
server.
The
.CW clone
message makes a new channel identical to an existing channel, much like
the
.CW dup
system call.
A
channel
may be moved to a file on the server using a
.CW walk
message to descend each level in the hierarchy.
The
.CW stat
and
.CW wstat
messages read and write the attributes of the file referenced by a channel.
The
.CW open
message prepares a channel for subsequent
.CW read
and
.CW write
messages to access the contents of the file.
.CW Create
and
.CW remove
perform the actions implied by their names on the file
referenced by the channel.
The
.CW clunk
message discards a channel without affecting the file.
.PP
A kernel resident file server called the
.I "mount driver"
converts the procedural version of 9P into RPCs.
The
.I mount
system call provides a file descriptor, which can be
a pipe to a user process or a network connection to a remote machine, to
be associated with the mount point.
After a mount, operations
on the file tree below the mount point are sent as messages to the file server.
The
mount
driver manages buffers, packs and unpacks parameters from
messages, and demultiplexes among processes using the file server.
.NH 2
Kernel Organization
.PP
The network code in the kernel is divided into three layers: hardware interface,
protocol processing, and program interface.
A device driver typically uses streams to connect the two interface layers.
Additional stream modules may be pushed on
a device to process protocols.
Each device driver is a kernel-resident file system.
Simple device drivers serve a single level
directory containing just a few files;
for example, we represent each UART
by a data and a control file.
.P1
cpu% cd /dev
cpu% ls -l eia*
--rw-rw-rw- t 0 bootes bootes 0 Jul 16 17:28 eia1
--rw-rw-rw- t 0 bootes bootes 0 Jul 16 17:28 eia1ctl
--rw-rw-rw- t 0 bootes bootes 0 Jul 16 17:28 eia2
--rw-rw-rw- t 0 bootes bootes 0 Jul 16 17:28 eia2ctl
cpu%
.P2
The control file is used to control the device;
writing the string
.CW b1200
to
.CW /dev/eia1ctl
sets the line to 1200 baud.
.PP
Multiplexed devices present
a more complex interface structure.
For example, the LANCE Ethernet driver
serves a two level file tree (Figure 1)
providing
.IP \(bu
device control and configuration
.IP \(bu
user-level protocols like ARP
.IP \(bu
diagnostic interfaces for snooping software.
.LP
The top directory contains a
.CW clone
file and a directory for each connection, numbered
.CW 1
to
.CW n .
Each connection directory corresponds to an Ethernet packet type.
Opening the
.CW clone
file finds an unused connection directory
and opens its
.CW ctl
file.
Reading the control file returns the ASCII connection number; the user
process can use this value to construct the name of the proper
connection directory.
In each connection directory files named
.CW ctl ,
.CW data ,
.CW stats ,
and
.CW type
provide access to the connection.
Writing the string
.CW "connect 2048"
to the
.CW ctl
file sets the packet type to 2048
and
configures the connection to receive
all IP packets sent to the machine.
Subsequent reads of the file
.CW type
yield the string
.CW 2048 .
The
.CW data
file accesses the media;
reading it
returns the
next packet of the selected type.
Writing the file
queues a packet for transmission after
appending a packet header containing the source address and packet type.
The
.CW stats
file returns ASCII text containing the interface address,
packet input/output counts, error statistics, and general information
about the state of the interface.
.so tree.pout
.PP
If several connections on an interface
are configured for a particular packet type, each receives a
copy of the incoming packets.
The special packet type
.CW -1
selects all packets.
Writing the strings
.CW promiscuous
and
.CW connect
.CW -1
to the
.CW ctl
file
configures a conversation to receive all packets on the Ethernet.
.PP
Although the driver interface may seem elaborate,
the representation of a device as a set of files using ASCII strings for
communication has several advantages.
Any mechanism supporting remote access to files immediately
allows a remote machine to use our interfaces as gateways.
Using ASCII strings to control the interface avoids byte order problems and
ensures a uniform representation for
devices on the same machine and even allows devices to be accessed remotely.
Representing dissimilar devices by the same set of files allows common tools
to serve
several networks or interfaces.
Programs like
.CW stty
are replaced by
.CW echo
and shell redirection.
.NH 2
Protocol devices
.PP
Network connections are represented as pseudo-devices called protocol devices.
Protocol device drivers exist for the Datakit URP protocol and for each of the
Internet IP protocols TCP, UDP, and IL.
IL, described below, is a new communication protocol used by Plan 9 for
transmitting file system RPCs.
All protocol devices look identical so user programs contain no
network-specific code.
.PP
Each protocol device driver serves a directory structure
similar to that of the Ethernet driver.
The top directory contains a
.CW clone
file and a directory for each connection numbered
.CW 0
to
.CW n .
Each connection directory contains files to control one
connection and to send and receive information.
A TCP connection directory looks like this:
.P1
cpu% cd /net/tcp/2
cpu% ls -l
--rw-rw---- I 0 ehg bootes 0 Jul 13 21:14 ctl
--rw-rw---- I 0 ehg bootes 0 Jul 13 21:14 data
--rw-rw---- I 0 ehg bootes 0 Jul 13 21:14 listen
--r--r--r-- I 0 bootes bootes 0 Jul 13 21:14 local
--r--r--r-- I 0 bootes bootes 0 Jul 13 21:14 remote
--r--r--r-- I 0 bootes bootes 0 Jul 13 21:14 status
cpu% cat local remote status
135.104.9.31 5012
135.104.53.11 564
tcp/2 1 Established connect
cpu%
.P2
The files
.CW local ,
.CW remote ,
and
.CW status
supply information about the state of the connection.
The
.CW data
and
.CW ctl
files
provide access to the process end of the stream implementing the protocol.
The
.CW listen
file is used to accept incoming calls from the network.
.PP
The following steps establish a connection.
.IP 1)
The clone device of the
appropriate protocol directory is opened to reserve an unused connection.
.IP 2)
The file descriptor returned by the open points to the
.CW ctl
file of the new connection.
Reading that file descriptor returns an ASCII string containing
the connection number.
.IP 3)
A protocol/network specific ASCII address string is written to the
.CW ctl
file.
.IP 4)
The path of the
.CW data
file is constructed using the connection number.
When the
.CW data
file is opened the connection is established.
.LP
A process can read and write this file descriptor
to send and receive messages from the network.
If the process opens the
.CW listen
file it blocks until an incoming call is received.
An address string written to the
.CW ctl
file before the listen selects the
ports or services the process is prepared to accept.
When an incoming call is received, the open completes
and returns a file descriptor
pointing to the
.CW ctl
file of the new connection.
Reading the
.CW ctl
file yields a connection number used to construct the path of the
.CW data
file.
A connection remains established while any of the files in the connection directory
are referenced or until a close is received from the network.
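.PP
The four outgoing-call steps can be collected into a single routine, sketched below. This is a hypothetical user-level illustration, not a Plan 9 library routine; the
.CW connect
verb and the injected open function are assumptions of the sketch.
.P1
```python
def dial(open_file, proto_dir, addr):
    # 1) Opening the clone file reserves an unused connection; the
    #    returned descriptor refers to the ctl file of that connection.
    ctl = open_file(proto_dir + "/clone")
    # 2) Reading ctl yields the ASCII connection number.
    n = ctl.read().strip()
    # 3) Write a protocol-specific ASCII address string to ctl.
    ctl.write("connect " + addr)
    # 4) Opening the data file establishes the connection.
    return open_file(proto_dir + "/" + n + "/data")
```
.P2
With `open_file` bound to the real open call, `dial(open, "/net/tcp", ...)` would walk exactly the steps enumerated above; because every protocol device presents the same files, the same routine serves any network.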
.NH 2
Streams
.PP
A
.I stream
[Rit84a][Presotto] is a bidirectional channel connecting a
physical or pseudo-device to user processes.
The user processes insert and remove data at one end of the stream.
Kernel processes acting on behalf of a device insert data at
the other end.
Asynchronous communications channels such as pipes,
TCP conversations, Datakit conversations, and RS232 lines are implemented using
streams.
.PP
A stream comprises a linear list of
.I "processing modules" .
Each module has both an upstream (toward the process) and
downstream (toward the device)
.I "put routine" .
Calling the put routine of the module on either end of the stream
inserts data into the stream.
Each module calls the succeeding one to send data up or down the stream.
.PP
An instance of a processing module is represented by a pair of
.I queues ,
one for each direction.
The queues point to the put procedures and can be used
to queue information traveling along the stream.
Some put routines queue data locally and send it along the stream at some
later time, either due to a subsequent call or an asynchronous
event such as a retransmission timer or a device interrupt.
Processing modules create helper kernel processes to
provide a context for handling asynchronous events.
For example, a helper kernel process awakens periodically
to perform any necessary TCP retransmissions.
The use of kernel processes instead of serialized run-to-completion service routines
differs from the implementation of Unix streams.
Unix service routines cannot
use any blocking kernel resource and they lack a local long-lived state.
Helper kernel processes solve these problems and simplify the stream code.
.PP
There is no implicit synchronization in our streams.
Each processing module must ensure that concurrent processes using the stream
are synchronized.
This maximizes concurrency but introduces the
possibility of deadlock.
However, deadlocks are easily avoided by careful programming; to
date they have not caused us problems.
.PP
Information is represented by linked lists of kernel structures called
.I blocks .
Each block contains a type, some state flags, and pointers to
an optional buffer.
Block buffers can hold either data or control information, i.e., directives
to the processing modules.
Blocks and block buffers are dynamically allocated from kernel memory.
.NH 3
User Interface
.PP
A stream is represented at user level as two files,
.CW ctl
and
.CW data .
The actual names can be changed by the device driver using the stream,
as we saw earlier in the example of the UART driver.
The first process to open either file creates the stream automatically.
The last close destroys it.
Writing to the
.CW data
file copies the data into kernel blocks
and passes them to the downstream put routine of the first processing module.
A write of less than 32K is guaranteed to be contained by a single block.
Concurrent writes to the same stream are not synchronized, although the
32K block size assures atomic writes for most protocols.
The last block written is flagged with a delimiter
to alert downstream modules that care about write boundaries.
In most cases the first put routine calls the second, the second
calls the third, and so on until the data is output.
As a consequence, most data is output without context switching.
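.PP
The write path just described can be sketched in miniature: user data is copied into blocks of at most 32K, the last block is flagged as a delimiter, and each block is handed to the first module's downstream put routine. Representing a block as a dictionary is an assumption of the sketch, not the kernel's structure layout.
.P1
```python
MAXBLK = 32 * 1024  # a write of less than 32K fits in a single block

def stream_write(first_put, data):
    # Split the user's write into blocks of at most MAXBLK bytes and
    # pass each to the downstream put routine of the first module.
    # Only the last block carries the delimiter flag, so downstream
    # modules that care about write boundaries can find them.
    blocks = [data[i:i + MAXBLK] for i in range(0, len(data), MAXBLK)] or [data]
    for i, buf in enumerate(blocks):
        first_put({"buf": buf, "delim": i == len(blocks) - 1})
```
.P2
Since `first_put` calls the next module's put routine directly, the whole write normally reaches the device without a context switch, as noted above.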
.PP
Reading from the
.CW data
file returns data queued at the top of the stream.
The read terminates when the read count is reached
or when the end of a delimited block is encountered.
A per stream read lock ensures only one process
can read from a stream at a time and guarantees
that the bytes read were contiguous bytes from the
stream.
.PP
Like UNIX streams [Rit84a],
Plan 9 streams can be dynamically configured.
The stream system intercepts and interprets
the following control blocks:
.IP "\f(CWpush\fP \fIname\fR" 15
adds an instance of the processing module
.I name
to the top of the stream.
.IP \f(CWpop\fP 15
removes the top module of the stream.
.IP \f(CWhangup\fP 15
sends a hangup message
up the stream from the device end.
.LP
Other control blocks are module-specific and are interpreted by each
processing module
as they pass.
.PP
The convoluted syntax and semantics of the UNIX
.CW ioctl
system call convinced us to leave it out of Plan 9.
Instead,
.CW ioctl
is replaced by the
.CW ctl
file.
Writing to the
.CW ctl
file
is identical to writing to a
.CW data
file except the blocks are of type
.I control .
A processing module parses each control block it sees.
Commands in control blocks are ASCII strings, so
byte ordering is not an issue when one system
controls streams in a name space implemented on another processor.
The time to parse control blocks is not important, since control
operations are rare.
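.PP
Because control commands are ASCII strings, interpreting them is plain string parsing. The sketch below shows how the three stream-level commands might be dispatched; representing the module stack as a list of names is purely illustrative and not the kernel's representation.
.P1
```python
def stream_ctl(modules, command):
    # `modules` is the stack of processing modules, top of stream last.
    words = command.split()
    if words[0] == "push":
        modules.append(words[1])   # add a module instance to the top
        return None
    if words[0] == "pop":
        modules.pop()              # remove the top module
        return None
    if words[0] == "hangup":
        return "hangup"            # hangup travels up from the device end
    return command                 # module-specific: pass along unchanged
```
.P2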
.NH 3
Device Interface
.PP
The module at the downstream end of the stream is part of a device interface.
The particulars of the interface vary with the device.
Most device interfaces consist of an interrupt routine, an output
put routine, and a kernel process.
The output put routine stages data for the
device and starts the device if it is stopped.
The interrupt routine wakes up the kernel process whenever
the device has input to be processed or needs more output staged.
The kernel process puts information up the stream or stages more data for output.
The division of labor among the different pieces varies depending on
how much must be done at interrupt level.
However, the interrupt routine may not allocate blocks or call
a put routine since both actions require a process context.
.NH 3
Multiplexing
.PP
The conversations using a protocol device must be
multiplexed onto a single physical wire.
We push a multiplexer processing module
onto the physical device stream to group the conversations.
The device end modules on the conversations add the necessary header
onto downstream messages and then put them to the module downstream
of the multiplexer.
The multiplexing module looks at each message moving up its stream and
puts it to the correct conversation stream after stripping
the header controlling the demultiplexing.
.PP
This is similar to the Unix implementation of multiplexer streams.
The major difference is that we have no general structure that
corresponds to a multiplexer.
Each attempt to produce a generalized multiplexer created a more complicated
structure and underlined the basic difficulty of generalizing this mechanism.
We now code each multiplexer from scratch and favor simplicity over
generality.
.NH 3
Reflections
.PP
Despite five years' experience and the efforts of many programmers,
we remain dissatisfied with the stream mechanism.
Performance is not an issue;
the time to process protocols and drive
device interfaces continues to dwarf the
time spent allocating, freeing, and moving blocks
of data.
However the mechanism remains inordinately
complex.
Much of the complexity results from our efforts
to make streams dynamically configurable, to
reuse processing modules on different devices
and to provide kernel synchronization
to ensure data structures
don't disappear under foot.
This is particularly irritating since we seldom use these properties.
.PP
Streams remain in our kernel because we are unable to
devise a better alternative.
Larry Peterson's X-kernel [Pet89a]
is the closest contender but
doesn't offer enough advantage to switch.
If we were to rewrite the streams code, we would probably statically
allocate resources for a large fixed number of conversations and burn
memory in favor of less complexity.
.NH
The IL Protocol
.PP
None of the standard IP protocols is suitable for transmission of
9P messages over an Ethernet or the Internet.
TCP has a high overhead and does not preserve delimiters.
UDP, while cheap, does not provide reliable sequenced delivery.
Early versions of the system used a custom protocol that was
efficient but unsatisfactory for internetwork transmission.
When we implemented IP, TCP, and UDP we looked around for a suitable
replacement with the following properties:
.IP \(bu
Reliable datagram service with sequenced delivery
.IP \(bu
Runs over IP
.IP \(bu
Low complexity, high performance
.IP \(bu
Adaptive timeouts
.LP
None met our needs so a new protocol was designed.
IL is a lightweight protocol designed to be encapsulated by IP.
It is a connection-based protocol
providing reliable transmission of sequenced messages between machines.
No provision is made for flow control since the protocol is designed to transport RPC
messages between client and server.
A small outstanding message window prevents too
many incoming messages from being buffered;
messages outside the window are discarded
and must be retransmitted.
Connection setup uses a two way handshake to generate
initial sequence numbers at each end of the connection;
subsequent data messages increment the
sequence numbers allowing
the receiver to resequence out of order messages.
In contrast to other protocols, IL does not do blind retransmission.
If a message is lost and a timeout occurs, a query message is sent.
The query message is a small control message containing the current
sequence numbers as seen by the sender.
The receiver responds to a query by retransmitting missing messages.
This allows the protocol to behave well in congested networks,
where blind retransmission would cause further
congestion.
Like TCP, IL has adaptive timeouts.
A round-trip timer is used
to calculate acknowledge and retransmission times in terms of the network speed.
This allows the protocol to perform well on both the Internet and on local Ethernets.
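.PP
The receive-side behavior described above, with a small message window, resequencing of out-of-order arrivals, and discarding of messages outside the window, can be sketched as follows. The window size, starting sequence number, and closure-based interface are assumptions for illustration and say nothing about IL's actual header layout.
.P1
```python
def il_receiver(first_seq=0, window=8):
    # Returns a deliver(seq, msg, out) function.  Messages inside the
    # window are held and released to `out` in sequence order; messages
    # outside the window are discarded (the sender must retransmit).
    state = {"next": first_seq, "held": {}}
    def deliver(seq, msg, out):
        if seq < state["next"] or seq >= state["next"] + window:
            return False               # outside the window: discard
        state["held"][seq] = msg       # buffer until its turn comes
        while state["next"] in state["held"]:
            out.append(state["held"].pop(state["next"]))
            state["next"] += 1
        return True
    return deliver
```
.P2
A gap in the delivered sequence is what would prompt the sender's timeout and query message; the sketch only covers the receiver's bookkeeping.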
  642. .PP
  643. In keeping with the minimalist design of the rest of the kernel, IL is small.
  644. The entire protocol is 847 lines of code, compared to 2200 lines for TCP.
  645. IL is our protocol of choice.
.NH
Network Addressing
.PP
A uniform interface to protocols and devices is not sufficient to
support the transparency we require.
Since each network uses a different
addressing scheme,
the ASCII strings written to a control file have no common format.
As a result, every tool must know the specifics of the networks it
is capable of addressing.
Moreover, since each machine supplies a subset
of the available networks, each user must be aware of the networks supported
by every terminal and server machine.
This is obviously unacceptable.
.PP
Several possible solutions were considered and rejected; one deserves
more discussion.
We could have used a user-level file server
to represent the network name space as a Plan 9 file tree.
This global naming scheme has been implemented in other distributed systems.
The file hierarchy provides paths to
directories representing network domains.
Each directory contains
files representing the names of the machines in that domain;
an example might be the path
.CW /net/name/usa/edu/mit/ai .
Each machine file contains information like the IP address of the machine.
We rejected this representation for several reasons.
First, it is hard to devise a hierarchy encompassing all representations
of the various network addressing schemes in a uniform manner.
Datakit and Ethernet address strings have nothing in common.
Second, the address of a machine is
often only a small part of the information required to connect to a service on
the machine.
For example, the IP protocols require symbolic service names to be mapped into
numeric port numbers, some of which are privileged and hence special.
Information of this sort is hard to represent in terms of file operations.
Finally, the size and number of the networks being represented burden users with
an unacceptably large amount of information about the organization of the network
and its connectivity.
In this case the Plan 9 representation of a
resource as a file is not appropriate.
.PP
If tools are to be network independent, a third-party server must resolve
network names.
A server on each machine, with local knowledge, can select the best network
for any particular destination machine or service.
Since the network devices present a common interface,
the only operation which differs between networks is name resolution.
A symbolic name must be translated to
the path of the clone file of a protocol
device and an ASCII address string to write to the
.CW ctl
file.
A connection server (CS) provides this service.
.NH 2
Network Database
.PP
On most systems several
files such as
.CW /etc/hosts ,
.CW /etc/networks ,
.CW /etc/services ,
.CW /etc/hosts.equiv ,
.CW /etc/bootptab ,
and
.CW /etc/named.d
hold network information.
Much time and effort is spent
administering these files and keeping
them mutually consistent.
Tools attempt to
automatically derive one or more of the files from
information in other files, but maintenance continues to be
difficult and error prone.
.PP
Since we were writing an entirely new system, we were free to
try a simpler approach.
One database on a shared server contains all the information
needed for network administration.
Two ASCII files comprise the main database:
.CW /lib/ndb/local
contains locally administered information and
.CW /lib/ndb/global
contains information imported from elsewhere.
The files contain sets of attribute/value pairs of the form
.I attr\f(CW=\fPvalue ,
where
.I attr
and
.I value
are alphanumeric strings.
Systems are described by multi-line entries;
a header line at the left margin begins each entry, followed by zero or more
indented attribute/value pairs specifying
names, addresses, properties, etc.
For example, the entry for our CPU server
specifies a domain name, an IP address, an Ethernet address,
a Datakit address, a boot file, and supported protocols.
.P1
sys = helix
        dom=helix.research.bell-labs.com
        bootf=/mips/9power
        ip=135.104.9.31 ether=0800690222f0
        dk=nj/astro/helix
        proto=il flavor=9cpu
.P2
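.PP
Parsing one such line into attribute/value pairs is plain string handling. The sketch below is not the actual ndb library; the function name and fixed-size limits are our own invention, and it handles only the common \f(CWattr=value\fP form (not the spaces around \f(CW=\fP shown in the header line above).

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

enum { Maxpairs = 16, Maxtok = 64 };

typedef struct {
    char attr[Maxtok];
    char val[Maxtok];
} Pair;

/* Split a line like "ip=135.104.9.31 ether=0800690222f0" into
 * attribute/value pairs.  A token without '=' gets an empty value.
 * Returns the number of pairs found. */
int
ndbparseline(const char *line, Pair *p, int max)
{
    char buf[256], *tok, *eq;
    int n;

    snprintf(buf, sizeof buf, "%s", line);
    n = 0;
    for(tok = strtok(buf, " \t"); tok != NULL && n < max; tok = strtok(NULL, " \t")){
        eq = strchr(tok, '=');
        if(eq != NULL){
            *eq = '\0';     /* cut the token at '=' */
            snprintf(p[n].val, Maxtok, "%s", eq+1);
        }else
            p[n].val[0] = '\0';
        snprintf(p[n].attr, Maxtok, "%s", tok);
        n++;
    }
    return n;
}
```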
If several systems share entries such as
network mask and gateway, we specify that information
with the network or subnetwork instead of the system.
The following entries define a Class B IP network and
a few subnets derived from it.
The entry for the network specifies the IP mask,
file system, and authentication server for all systems
on the network.
Each subnetwork specifies its default IP gateway.
.P1
ipnet=mh-astro-net ip=135.104.0.0 ipmask=255.255.255.0
        fs=bootes.research.bell-labs.com
        auth=1127auth
ipnet=unix-room ip=135.104.117.0
        ipgw=135.104.117.1
ipnet=third-floor ip=135.104.51.0
        ipgw=135.104.51.1
ipnet=fourth-floor ip=135.104.52.0
        ipgw=135.104.52.1
.P2
Database entries also define the mapping of service names
to port numbers for TCP, UDP, and IL.
.P1
tcp=echo        port=7
tcp=discard     port=9
tcp=systat      port=11
tcp=daytime     port=13
.P2
.PP
All programs read the database directly so
consistency problems are rare.
However, the database files can become large.
Our global file, containing all information about
both Datakit and Internet systems in AT&T, has 43,000
lines.
To speed searches, we build hash table files for each
attribute we expect to search often.
The hash file entries point to entries
in the master files.
Every hash file contains the modification time of its master
file so we can avoid using an out-of-date hash table.
Searches for attributes that aren't hashed or whose hash table
is out of date still work; they just take longer.
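.PP
The hash file format itself is not described here; the following toy model, with invented names and structures, shows only the lookup discipline: trust the hash index when its recorded modification time matches the master file's, and fall back to a linear scan otherwise.

```c
#include <assert.h>
#include <string.h>

/* Toy model of the two-level lookup.  Everything here is
 * invented for illustration; the real ndb hash files differ. */
typedef struct {
    const char *attr;
    const char *val;
} Rec;

typedef struct {
    long mtime;     /* master-file mtime when the index was built */
    int idx[8];     /* bucket -> record number, -1 if empty */
} Hashfile;

static unsigned
hash(const char *s)
{
    unsigned h = 0;
    while(*s)
        h = h*31 + (unsigned char)*s++;
    return h;
}

/* Linear scan of the master records. */
static const char*
scan(Rec *db, int n, const char *attr)
{
    int i;
    for(i = 0; i < n; i++)
        if(strcmp(db[i].attr, attr) == 0)
            return db[i].val;
    return NULL;
}

/* Record the master's mtime and index each record by attribute. */
void
buildindex(Rec *db, int n, Hashfile *hf, long mtime)
{
    int i;
    hf->mtime = mtime;
    for(i = 0; i < 8; i++)
        hf->idx[i] = -1;
    for(i = 0; i < n; i++)
        hf->idx[hash(db[i].attr) % 8] = i;
}

/* Use the index only if it is up to date; otherwise, or on a
 * bucket mismatch, fall back to scanning the master file. */
const char*
lookup(Rec *db, int n, Hashfile *hf, long mtime, const char *attr)
{
    if(hf != NULL && hf->mtime == mtime){
        int i = hf->idx[hash(attr) % 8];
        if(i >= 0 && strcmp(db[i].attr, attr) == 0)
            return db[i].val;
    }
    return scan(db, n, attr);
}
```

The fallback path is why stale hash tables merely slow searches down rather than breaking them.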
.NH 2
Connection Server
.PP
On each system a user-level connection server process, CS, translates
symbolic names to addresses.
CS uses information about available networks, the network database, and
other servers (such as DNS) to translate names.
CS is a file server serving a single file,
.CW /net/cs .
A client writes a symbolic name to
.CW /net/cs
then reads one line for each matching destination reachable
from this system.
The lines are of the form
.I "filename message" ,
where
.I filename
is the path of the clone file to open for a new connection and
.I message
is the string to write to it to make the connection.
The following example illustrates this.
.CW Ndb/csquery
is a program that prompts for strings to write to
.CW /net/cs
and prints the replies.
.P1
% ndb/csquery
> net!helix!9fs
/net/il/clone 135.104.9.31!17008
/net/dk/clone nj/astro/helix!9fs
.P2
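.PP
Splitting a symbolic name such as \f(CWnet!helix!9fs\fP into network, host, and service, and a CS reply line into its clone-file path and dial string, is simple string handling; the helper names and buffer sizes below are our own, not CS's.

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

enum { Maxtok = 128 };

/* Split "net!helix!9fs" into network, host, and service parts.
 * Returns the number of parts found (missing parts left empty). */
int
splitname(const char *name, char *net, char *host, char *svc)
{
    char buf[Maxtok], *p;
    char *out[3];
    int n;

    out[0] = net; out[1] = host; out[2] = svc;
    net[0] = host[0] = svc[0] = '\0';
    snprintf(buf, sizeof buf, "%s", name);
    n = 0;
    for(p = strtok(buf, "!"); p != NULL && n < 3; p = strtok(NULL, "!"))
        snprintf(out[n++], Maxtok, "%s", p);
    return n;
}

/* Split a CS reply "/net/il/clone 135.104.9.31!17008" into the
 * clone-file path and the message to write for the connection. */
int
splitreply(const char *line, char *clone, char *msg)
{
    const char *sp = strchr(line, ' ');

    if(sp == NULL)
        return -1;
    snprintf(clone, Maxtok, "%.*s", (int)(sp - line), line);
    snprintf(msg, Maxtok, "%s", sp + 1);
    return 0;
}
```

A client would open the clone file named in the reply, write the message to the resulting \f(CWctl\fP file, and try the next reply line on failure.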
.PP
CS provides meta-name translation to perform complicated
searches.
The special network name
.CW net
selects any network in common between source and
destination supporting the specified service.
A host name of the form \f(CW$\fIattr\f1
is the name of an attribute in the network database.
The database search returns the value
of the matching attribute/value pair
most closely associated with the source host.
``Most closely associated'' is defined on a per-network basis.
For example, the symbolic name
.CW tcp!$auth!rexauth
causes CS to search for the
.CW auth
attribute in the database entry for the source system, then its
subnetwork (if there is one), and then its network.
.P1
% ndb/csquery
> net!$auth!rexauth
/net/il/clone 135.104.9.34!17021
/net/dk/clone nj/astro/p9auth!rexauth
/net/il/clone 135.104.9.6!17021
/net/dk/clone nj/astro/musca!rexauth
.P2
.PP
Normally CS derives naming information from its database files.
For domain names, however, CS first consults another user-level
process, the domain name server (DNS).
If no DNS is reachable, CS relies on its own tables.
.PP
Like CS, the domain name server is a user-level process providing
one file,
.CW /net/dns .
A client writes a request of the form
.I "domain-name type" ,
where
.I type
is a domain name service resource record type.
DNS performs a recursive query through the
Internet domain name system, producing one line
per resource record found.  The client reads
.CW /net/dns
to retrieve the records.
Like other domain name servers, DNS caches information
learned from the network.
DNS is implemented as a multi-process shared-memory application
with separate processes listening for network and local requests.
.NH
Library routines
.PP
The section on protocol devices described the details
of making and receiving connections across a network.
The dance is straightforward but tedious.
Library routines are provided to relieve
the programmer of the details.
.NH 2
Connecting
.PP
The
.CW dial
library call establishes a connection to a remote destination.
It
returns an open file descriptor for the
.CW data
file in the connection directory.
.P1
int dial(char *dest, char *local, char *dir, int *cfdp)
.P2
.IP \f(CWdest\fP 10
is the symbolic name/address of the destination.
.IP \f(CWlocal\fP 10
is the local address.
Since most networks do not support this, it is
usually zero.
.IP \f(CWdir\fP 10
is a pointer to a buffer to hold the path name of the protocol directory
representing this connection.
.CW Dial
fills this buffer if the pointer is non-zero.
.IP \f(CWcfdp\fP 10
is a pointer to a file descriptor for the
.CW ctl
file of the connection.
If the pointer is non-zero,
.CW dial
opens the control file and tucks the file descriptor here.
.LP
Most programs call
.CW dial
with a destination name and all other arguments zero.
.CW Dial
uses CS to
translate the symbolic name to all possible destination addresses
and attempts to connect to each in turn until one works.
Specifying the special name
.CW net
in the network portion of the destination
allows CS to pick a network/protocol in common
with the destination for which the requested service is valid.
For example, assume the system
.CW research.bell-labs.com
has the Datakit address
.CW nj/astro/research
and IP addresses
.CW 135.104.117.5
and
.CW 129.11.4.1 .
The call
.P1
fd = dial("net!research.bell-labs.com!login", 0, 0, 0);
.P2
tries in succession to connect to
.CW nj/astro/research!login
on the Datakit and both
.CW 135.104.117.5!513
and
.CW 129.11.4.1!513
across the Internet.
.PP
.CW Dial
accepts addresses instead of symbolic names.
For example, the destinations
.CW tcp!135.104.117.5!513
and
.CW tcp!research.bell-labs.com!login
are equivalent
references to the same machine.
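.PP
The try-each-address-in-turn behavior can be sketched as a loop over the (clone file, message) pairs CS returns.  This is an illustrative model, not the library's real structure: the injected \f(CWconnectone\fP function stands in for opening the clone file, writing the connect message, and opening the data file.

```c
#include <assert.h>
#include <string.h>

/* One candidate destination as reported by CS. */
typedef struct {
    const char *clone;      /* e.g. /net/il/clone */
    const char *msg;        /* e.g. 135.104.9.31!17008 */
} Dest;

/* Sketch of dial's retry loop: try each candidate in turn and
 * return the first successful data-file descriptor, or -1. */
int
dialany(Dest *d, int n, int (*connectone)(const char*, const char*))
{
    int i, fd;

    for(i = 0; i < n; i++){
        fd = connectone(d[i].clone, d[i].msg);
        if(fd >= 0)
            return fd;      /* first network that works wins */
    }
    return -1;
}

/* Stand-in connector for illustration only: pretend Datakit is
 * unreachable and any other network yields data descriptor 3. */
static int
fakeconnect(const char *clone, const char *msg)
{
    (void)msg;
    if(strcmp(clone, "/net/dk/clone") == 0)
        return -1;
    return 3;
}
```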
.NH 2
Listening
.PP
A program uses
four routines to listen for incoming connections.
It first
.CW announce() s
its intention to receive connections,
then
.CW listen() s
for calls and finally
.CW accept() s
or
.CW reject() s
them.
.CW Announce
returns an open file descriptor for the
.CW ctl
file of a connection and fills
.CW dir
with the
path of the protocol directory
for the announcement.
.P1
int announce(char *addr, char *dir)
.P2
.CW Addr
is the symbolic name/address announced;
if it does not contain a service, the announcement is for
all services not explicitly announced.
Thus, one can easily write the equivalent of the
.CW inetd
program without
having to announce each separate service.
An announcement remains in force until the control file is
closed.
.LP
.CW Listen
returns an open file descriptor for the
.CW ctl
file and fills
.CW ldir
with the path
of the protocol directory
for the received connection.
It is passed
.CW dir
from the announcement.
.P1
int listen(char *dir, char *ldir)
.P2
.LP
.CW Accept
and
.CW reject
are called with the control file descriptor and
.CW ldir
returned by
.CW listen .
Some networks such as Datakit accept a reason for a rejection;
networks such as IP ignore the third argument.
.P1
int accept(int ctl, char *ldir)
int reject(int ctl, char *ldir, char *reason)
.P2
.PP
The following code implements a typical TCP listener.
It announces itself, listens for connections, and forks a new
process for each.
The new process echoes data on the connection until the
remote end closes it.
The "*" in the symbolic name means the announcement is valid for
any addresses bound to the machine the program is run on.
.P1
.ta 8n 16n 24n 32n 40n 48n 56n 64n
int
echo_server(void)
{
        int afd, dfd, lcfd;
        char adir[40], ldir[40];
        int n;
        char buf[256];

        afd = announce("tcp!*!echo", adir);
        if(afd < 0)
                return -1;

        for(;;){
                /* listen for a call */
                lcfd = listen(adir, ldir);
                if(lcfd < 0)
                        return -1;

                /* fork a process to echo */
                switch(fork()){
                case 0:
                        /* accept the call and open the data file */
                        dfd = accept(lcfd, ldir);
                        if(dfd < 0)
                                return -1;

                        /* echo until EOF */
                        while((n = read(dfd, buf, sizeof(buf))) > 0)
                                write(dfd, buf, n);
                        exits(0);
                case -1:
                        perror("forking");
                default:
                        close(lcfd);
                        break;
                }
        }
}
.P2
.NH
User Level
.PP
Communication between Plan 9 machines is done almost exclusively in
terms of 9P messages.  Only the two services
.CW cpu
and
.CW exportfs
are used.
The
.CW cpu
service is analogous to
.CW rlogin .
However, rather than emulating a terminal session
across the network,
.CW cpu
creates a process on the remote machine whose name space is an analogue of the window
in which it was invoked.
.CW Exportfs
is a user-level file server which allows a piece of name space to be
exported from machine to machine across a network.  It is used by the
.CW cpu
command to serve the files in the terminal's name space when they are
accessed from the
CPU server.
.PP
By convention, the protocol and device driver file systems are mounted in a
directory called
.CW /net .
Although the per-process name space allows users to configure an
arbitrary view of the system, in practice their profiles build
a conventional name space.
.NH 2
Exportfs
.PP
.CW Exportfs
is invoked by an incoming network call.
The
.I listener
(the Plan 9 equivalent of
.CW inetd )
runs the profile of the user
requesting the service to construct a name space before starting
.CW exportfs .
After an initial protocol
establishes the root of the file tree being
exported,
the remote process mounts the connection,
allowing
.CW exportfs
to act as a relay file server.  Operations in the imported file tree
are executed on the remote server and the results returned.
As a result
the name space of the remote machine appears to be exported into a
local file tree.
.PP
The
.CW import
command calls
.CW exportfs
on a remote machine, mounts the result in the local name space,
and
exits.
No local process is required to serve mounts;
9P messages are generated by the kernel's mount driver and sent
directly over the network.
.PP
.CW Exportfs
must be multithreaded since the system calls
.CW open ,
.CW read
and
.CW write
may block.
Plan 9 does not implement the
.CW select
system call but does allow processes to share file descriptors,
memory and other resources.
.CW Exportfs
and the configurable name space
provide a means of sharing resources between machines.
It is a building block for constructing complex name spaces
served from many machines.
.PP
The simplicity of the interfaces encourages naive users to exploit the potential
of a richly connected environment.
Using these tools it is easy to gateway between networks.
For example, a terminal with only a Datakit connection can import from the server
.CW helix :
.P1
import -a helix /net
telnet ai.mit.edu
.P2
The
.CW import
command makes a Datakit connection to the machine
.CW helix
where
it starts an instance of
.CW exportfs
to serve
.CW /net .
The
.CW import
command mounts the remote
.CW /net
directory after (the
.CW -a
option to
.CW import )
the existing contents
of the local
.CW /net
directory.
The directory contains the union of the local and remote contents of
.CW /net .
Local entries supersede remote ones of the same name so
networks on the local machine are chosen in preference
to those supplied remotely.
However, unique entries in the remote directory are now visible in the local
.CW /net
directory.
All the networks connected to
.CW helix ,
not just Datakit,
are now available in the terminal.  The effect on the name space is shown by the following
example:
.P1
philw-gnot% ls /net
/net/cs
/net/dk
philw-gnot% import -a musca /net
philw-gnot% ls /net
/net/cs
/net/cs
/net/dk
/net/dk
/net/dns
/net/ether
/net/il
/net/tcp
/net/udp
.P2
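.PP
The union semantics can be modeled as two ordered lists: duplicated names are listed twice by \f(CWls\fP, as above, but a lookup resolves to the first (local) entry.  The function below is an illustrative model of that precedence rule, not the kernel's mount driver.

```c
#include <assert.h>
#include <string.h>

/* Model of name lookup in a union directory where local entries
 * were mounted before remote ones.  Returns which layer resolved
 * the name, or NULL if neither contains it. */
const char*
unionlookup(const char **local, int nl,
    const char **remote, int nr, const char *name)
{
    int i;

    for(i = 0; i < nl; i++)         /* local entries win */
        if(strcmp(local[i], name) == 0)
            return "local";
    for(i = 0; i < nr; i++)         /* then remote ones */
        if(strcmp(remote[i], name) == 0)
            return "remote";
    return NULL;
}
```

With the listing above, \f(CWdk\fP resolves locally while \f(CWil\fP is reached only through the imported directory, which is exactly why local networks are preferred.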
.NH 2
Ftpfs
.PP
We decided to make our interface to FTP
a file system rather than the traditional command.
Our command,
.I ftpfs ,
dials the FTP port of a remote system, prompts for login and password, sets image mode,
and mounts the remote file system onto
.CW /n/ftp .
Files and directories are cached to reduce traffic.
The cache is updated whenever a file is created.
Ftpfs works with TOPS-20, VMS, and various Unix flavors
as the remote system.
.NH
Cyclone Fiber Links
.PP
The file servers and CPU servers are connected by
high-bandwidth
point-to-point links.
A link consists of two VME cards connected by a pair of optical
fibers.
The VME cards use 33 MHz Intel 960 processors and AMD's TAXI
fiber transmitter/receivers to drive the lines at 125 Mbit/sec.
Software in the VME card reduces latency by copying messages from system memory
to fiber without intermediate buffering.
.NH
Performance
.PP
We measured both latency and throughput
of reading and writing bytes between two processes
for a number of different paths.
Measurements were made on two- and four-CPU SGI Power Series processors.
The CPUs are 25 MHz MIPS 3000s.
Latency is measured as the round-trip time
for a byte sent from one process to another and
back again.
Throughput is measured using 16k writes from
one process to another.
.DS C
.TS
box, tab(:);
c s s
c | c | c
l | n | n.
Table 1 - Performance
_
test:throughput:latency
:MBytes/sec:millisec
_
pipes:8.15:0.255
_
IL/ether:1.02:1.42
_
URP/Datakit:0.22:1.75
_
Cyclone:3.2:0.375
.TE
.DE
.NH
Conclusion
.PP
The representation of all resources as file systems
coupled with an ASCII interface has proved more powerful
than we had originally imagined.
Resources can be used by any computer in our networks
independent of byte ordering or CPU type.
The connection server provides an elegant means
of decoupling tools from the networks they use.
Users successfully use Plan 9 without knowing the
topology of the system or the networks they use.
More information about 9P can be found in Section 5 of the Plan 9 Programmer's
Manual, Volume I.
.NH
References
.LP
[Pike90] R. Pike, D. Presotto, K. Thompson, H. Trickey,
``Plan 9 from Bell Labs'',
.I
UKUUG Proc. of the Summer 1990 Conf.,
.R
London, England,
1990.
.LP
[Needham] R. Needham, ``Names'', in
.I
Distributed Systems,
.R
S. Mullender, ed.,
Addison-Wesley, 1989.
.LP
[Presotto] D. Presotto, ``Multiprocessor Streams for Plan 9'',
.I
UKUUG Proc. of the Summer 1990 Conf.,
.R
London, England, 1990.
.LP
[Met80] R. Metcalfe, D. Boggs, C. Crane, E. Taft and J. Hupp, ``The
Ethernet Local Network: Three Reports'',
.I
CSL-80-2,
.R
XEROX Palo Alto Research Center, February 1980.
.LP
[Fra80] A. G. Fraser, ``Datakit - A Modular Network for Synchronous
and Asynchronous Traffic'',
.I
Proc. Int'l Conf. on Communication,
.R
Boston, June 1980.
.LP
[Pet89a] L. Peterson, ``RPC in the x-Kernel: Evaluating New Design Techniques'',
.I
Proc. Twelfth Symp. on Op. Sys. Princ.,
.R
Litchfield Park, AZ, December 1989.
.LP
[Rit84a] D. M. Ritchie, ``A Stream Input-Output System'',
.I
AT&T Bell Laboratories Technical Journal, 63(8),
.R
October 1984.