<html>
<title>
The Use of Name Spaces in Plan 9
</title>
<body BGCOLOR="#FFFFFF" TEXT="#000000" LINK="#0000FF" VLINK="#330088" ALINK="#FF0044">
<H1>The Use of Name Spaces in Plan 9
</H1>
<DL><DD><I>Rob Pike<br>
Dave Presotto<br>
Ken Thompson<br>
Howard Trickey<br>
Phil Winterbottom<br>
Bell Laboratories, Murray Hill, NJ, 07974
USA<br>
</I></DL>
<DL><DD><H4>ABSTRACT</H4>
<DL>
<DT>&#32;<DD>
NOTE:<I> Appeared in
Operating Systems Review,
Vol. 27, #2, April 1993, pp. 72-76
(reprinted from
Proceedings of the 5th ACM SIGOPS European Workshop,
Mont Saint-Michel, 1992, Paper n&#186; 34).
</I><DT>&#32;<DD></DL>
<br>
Plan 9 is a distributed system built at the Computing Sciences Research
Center of AT&amp;T Bell Laboratories (now Lucent Technologies, Bell Labs) over the last few years.
Its goal is to provide a production-quality system for software
development and general computation using heterogeneous hardware
and minimal software. A Plan 9 system comprises CPU and file
servers in a central location connected together by fast networks.
Slower networks fan out to workstation-class machines that serve as
user terminals. Plan 9 argues that given a few carefully
implemented abstractions
it is possible to
produce a small operating system that provides support for the largest systems
on a variety of architectures and networks. The foundations of the system are
built on two ideas: a per-process name space and a simple message-oriented
file system protocol.
</DL>
<P>
The operating system for the CPU servers and terminals is
structured as a traditional kernel: a single compiled image
containing code for resource management, process control,
user processes,
virtual memory, and I/O. Because the file server is a separate
machine, the file system is not compiled in, although the management
of the name space, a per-process attribute, is.
The entire kernel for the multiprocessor SGI Power Series machine
is 25000 lines of C,
the largest part of which is code for four networks including the
Ethernet with the Internet protocol suite.
Fewer than 1500 lines are machine-specific, and a
functional kernel with minimal I/O can be put together from
source files totaling 6000 lines. [Pike90]
</P>
<P>
The system is relatively small for several reasons.
First, it is all new: it has not had time to accrete as many fixes
and features as other systems.
Also, other than the network protocol, it adheres to no
external interface; in particular, it is not Unix-compatible.
Economy stems from careful selection of services and interfaces.
Finally, wherever possible the system is built around
two simple ideas:
every resource in the system, either local or remote,
is represented by a hierarchical file system; and
a user or process
assembles a private view of the system by constructing a file
name space
that connects these resources. [Needham]
</P>
<H4>File Protocol
</H4>
<P>
All resources in Plan 9 look like file systems.
That does not mean that they are repositories for
permanent files on disk, but that the interface to them
is file-oriented: finding files (resources) in a hierarchical
name tree, attaching to them by name, and accessing their contents
by read and write calls.
There are dozens of file system types in Plan 9, but only a few
represent traditional files.
At this level of abstraction, files in Plan 9 are similar
to objects, except that files are already provided with naming,
access, and protection methods that must be created afresh for
objects. Object-oriented readers may approach the rest of this
paper as a study in how to make objects look like files.
</P>
<P>
The interface to file systems is defined by a protocol, called 9P,
analogous but not very similar to the NFS protocol.
The protocol talks about files, not blocks; given a connection to the root
directory of a file server,
the 9P messages navigate the file hierarchy, open files for I/O,
and read or write arbitrary bytes in the files.
9P contains 17 message types: three for
initializing and
authenticating a connection and fourteen for manipulating objects.
The messages are generated by the kernel in response to user- or
kernel-level I/O requests.
Here is a quick tour of the major message types.
The
<TT>auth</TT>
and
<TT>attach</TT>
messages authenticate a connection, established by means outside 9P,
and validate its user.
The result is an authenticated
<I>channel</I>
that points to the root of the
server.
The
<TT>clone</TT>
message makes a new channel identical to an existing channel,
which may be moved to a file on the server using a
<TT>walk</TT>
message to descend each level in the hierarchy.
The
<TT>stat</TT>
and
<TT>wstat</TT>
messages read and write the attributes of the file pointed to by a channel.
The
<TT>open</TT>
message prepares a channel for subsequent
<TT>read</TT>
and
<TT>write</TT>
messages to access the contents of the file, while
<TT>create</TT>
and
<TT>remove</TT>
perform, on the files, the actions implied by their names.
The
<TT>clunk</TT>
message discards a channel without affecting the file.
None of the 9P messages consider caching; file caches are provided,
when needed, either within the server (centralized caching)
or by implementing the cache as a transparent file system between the
client and the 9P connection to the server (client caching).
</P>
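<P>
To make the flow of these messages concrete, the sketch below traces,
in C-like pseudocode, the order in which a client would issue them to
read a file; the helper functions
(<TT>tattach</TT>, <TT>tclone</TT>, <TT>twalk</TT>, <TT>topen</TT>,
<TT>tread</TT>, <TT>tclunk</TT>)
are invented for this illustration, one per message, and are not part
of any Plan 9 library.
</P>
<DL><DT><DD><TT><PRE>
/* hypothetical helpers: each sends one 9P request and waits for its reply */
root = tattach(conn, "rob");      /* authenticated channel to the server's root   */
c = tclone(conn, root);           /* duplicate it, leaving the root channel alone */
twalk(conn, c, "adm");            /* one walk per level of the hierarchy          */
twalk(conn, c, "users");
topen(conn, c, OREAD);            /* prepare the channel for I/O                  */
n = tread(conn, c, buf, nbuf, 0); /* read bytes starting at offset 0              */
tclunk(conn, c);                  /* discard the channel; the file is unaffected  */
</PRE></TT></DL>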
<P>
For efficiency, the connection to local
kernel-resident file systems, misleadingly called
<I>devices,</I>
is by regular rather than remote procedure calls.
The procedures map one-to-one with 9P message types.
Locally each channel has an associated data structure
that holds a type field used to index
a table of procedure calls, one set per file system type,
analogous to selecting the method set for an object.
One kernel-resident file system, the
mount device,
translates the local 9P procedure calls into RPC messages to
remote services over a separately provided transport protocol
such as TCP or IL, a new reliable datagram protocol, or over a pipe to
a user process.
Write and read calls transmit the messages over the transport layer.
The mount device is the sole bridge between the procedural
interface seen by user programs and remote and user-level services.
It does all associated marshaling, buffer
management, and multiplexing and is
the only integral RPC mechanism in Plan 9.
The mount device is in effect a proxy object.
There is no RPC stub compiler; instead the mount driver and
all servers just share a library that packs and unpacks 9P messages.
</P>
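<P>
The dispatch arrangement can be sketched in C.  The declarations below
are invented for illustration (the kernel's own are richer), but they
show the idea: the channel's type field selects one procedure set, just
as a class selects a method set.
</P>
<DL><DT><DD><TT><PRE>
typedef struct Chan Chan;
typedef struct Dev Dev;

struct Dev {                     /* one set of procedures per file system type */
	Chan*	(*attach)(char *spec);
	Chan*	(*walk)(Chan *c, char *name);
	Chan*	(*open)(Chan *c, int mode);
	long	(*read)(Chan *c, void *buf, long n, long offset);
	long	(*write)(Chan *c, void *buf, long n, long offset);
	void	(*clunk)(Chan *c);
};

struct Chan {
	int	type;            /* index into devtab[]; set when the channel is created */
	/* ... offset, qid, reference count, and so on ... */
};

extern Dev devtab[];             /* one entry per kernel-resident file system */

long
chanread(Chan *c, void *buf, long n, long offset)
{
	return devtab[c-&gt;type].read(c, buf, n, offset);
}
</PRE></TT></DL>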
<H4>Examples
</H4>
<P>
One file system type serves
permanent files from the main file server,
a stand-alone multiprocessor system with a
350-gigabyte
optical WORM jukebox that holds the data, fronted by a two-level
block cache comprising 7 gigabytes of
magnetic disk and 128 megabytes of RAM.
Clients connect to the file server using any of a variety of
networks and protocols and access files using 9P.
The file server runs a distinct operating system and has no
support for user processes; other than a restricted set of commands
available on the console, all it does is answer 9P messages from clients.
</P>
<P>
Once a day, at 5:00 AM,
the file server sweeps through the cache blocks and marks dirty blocks
copy-on-write.
It creates a copy of the root directory
and labels it with the current date, for example
<TT>1995/0314</TT>.
It then starts a background process to copy the dirty blocks to the WORM.
The result is that the server retains an image of the file system as it was
early each morning.
The set of old root directories is accessible using 9P, so a client
may examine backup files using ordinary commands.
Several advantages stem from having the backup service implemented
as a plain file system.
Most obviously, ordinary commands can access them.
For example, to see when a bug was fixed
<DL><DT><DD><TT><PRE>
grep 'mouse bug fix' 1995/*/sys/src/cmd/8&#189;/file.c
</PRE></TT></DL>
The owner, access times, permissions, and other properties of the
files are also backed up.
Because it is a file system, the backup
still has protections;
it is not possible to subvert security by looking at the backup.
</P>
<P>
The file server is only one type of file system.
A number of unusual services are provided within the kernel as
local file systems.
These services are not limited to I/O devices such
as disks. They include network devices and their associated protocols,
the bitmap display and mouse,
a representation of processes similar to
<TT>/proc</TT>
[Killian], the name/value pairs that form the `environment'
passed to a new process, profiling services,
and other resources.
Each of these is represented as a file system &#8211;
directories containing sets of files &#8211;
but the constituent files do not represent permanent storage on disk.
Instead, they are closer in properties to UNIX device files.
</P>
<P>
For example, the
<I>console</I>
device contains the file
<TT>/dev/cons</TT>,
similar to the UNIX file
<TT>/dev/console</TT>:
when written,
<TT>/dev/cons</TT>
appends to the console typescript; when read,
it returns characters typed on the keyboard.
Other files in the console device include
<TT>/dev/time</TT>,
the number of seconds since the epoch,
<TT>/dev/cputime</TT>,
the computation time used by the process reading the device,
<TT>/dev/pid</TT>,
the process id of the process reading the device, and
<TT>/dev/user</TT>,
the login name of the user accessing the device.
All these files contain text, not binary numbers,
so their use is free of byte-order problems.
Their contents are synthesized on demand when read; when written,
they cause modifications to kernel data structures.
</P>
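<P>
Reading such a file therefore needs nothing beyond ordinary I/O and a
text conversion.  The following small program is a sketch only; it
assumes the usual Plan 9 C library conventions
(<TT>u.h</TT>, <TT>libc.h</TT>, <TT>OREAD</TT>, <TT>exits</TT>)
and that the file contents are exactly as described above.
</P>
<DL><DT><DD><TT><PRE>
#include &lt;u.h&gt;
#include &lt;libc.h&gt;

/* print the time by reading the text in /dev/time */
void
main(void)
{
	char buf[64];
	long n;
	int fd;

	fd = open("/dev/time", OREAD);
	if(fd &lt; 0)
		exits("open");
	n = read(fd, buf, sizeof buf - 1);
	close(fd);
	if(n &lt;= 0)
		exits("read");
	buf[n] = '\0';
	print("%ld seconds since the epoch\n", atol(buf));	/* text, so no byte-order problems */
	exits(nil);
}
</PRE></TT></DL>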
<P>
The
<I>process</I>
device contains one directory per live local process, named by its numeric
process id:
<TT>/proc/1</TT>,
<TT>/proc/2</TT>,
etc.
Each directory contains a set of files that access the process.
For example, in each directory the file
<TT>mem</TT>
is an image of the virtual memory of the process that may be read or
written for debugging.
The
<TT>text</TT>
file is a sort of link to the file from which the process was executed;
it may be opened to read the symbol tables for the process.
The
<TT>ctl</TT>
file may be written textual messages such as
<TT>stop</TT>
or
<TT>kill</TT>
to control the execution of the process.
The
<TT>status</TT>
file contains a fixed-format line of text containing information about
the process: its name, owner, state, and so on.
Text strings written to the
<TT>note</TT>
file are delivered to the process as
<I>notes,</I>
analogous to UNIX signals.
By providing these services as textual I/O on files rather
than as system calls (such as
<TT>kill</TT>)
or special-purpose operations (such as
<TT>ptrace</TT>),
the Plan 9 process device simplifies the implementation of
debuggers and related programs.
For example, the command
<DL><DT><DD><TT><PRE>
cat /proc/*/status
</PRE></TT></DL>
is a crude form of the
<TT>ps</TT>
command; the actual
<TT>ps</TT>
merely reformats the data so obtained.
</P>
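<P>
A debugger or control program likewise needs no special system call to
act on a process; it simply writes text.  A sketch follows; the process
id 27 is just an example, and the library details are assumptions as in
the previous sketch.
</P>
<DL><DT><DD><TT><PRE>
#include &lt;u.h&gt;
#include &lt;libc.h&gt;

/* stop process 27 by writing a textual command to its ctl file */
void
main(void)
{
	int fd;

	fd = open("/proc/27/ctl", OWRITE);
	if(fd &lt; 0)
		exits("open");
	if(write(fd, "stop", 4) != 4)
		exits("write");
	close(fd);
	exits(nil);
}
</PRE></TT></DL>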
<P>
The
<I>bitmap</I>
device contains three files,
<TT>/dev/mouse</TT>,
<TT>/dev/screen</TT>,
and
<TT>/dev/bitblt</TT>,
that provide an interface to the local bitmap display (if any) and pointing device.
The
<TT>mouse</TT>
file returns a fixed-format record containing
1 byte of button state and 4 bytes each of
<I>x</I>
and
<I>y</I>
position of the mouse.
If the mouse has not moved since the file was last read, a subsequent read will
block.
The
<TT>screen</TT>
file contains a memory image of the contents of the display;
the
<TT>bitblt</TT>
file provides a procedural interface.
Calls to the graphics library are translated into messages that are written
to the
<TT>bitblt</TT>
file to perform bitmap graphics operations. (This is essentially a nested
RPC protocol.)
</P>
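<P>
A client of the mouse file thus reads nine bytes per event.  The sketch
below unpacks one record; the paper does not specify the byte order of
the coordinates, so the little-endian decoding here is purely an
assumption for illustration, as are the library details.
</P>
<DL><DT><DD><TT><PRE>
#include &lt;u.h&gt;
#include &lt;libc.h&gt;

/* read one mouse record: 1 byte of buttons, 4 bytes of x, 4 bytes of y */
void
main(void)
{
	uchar b[9];
	long x, y;
	int fd;

	fd = open("/dev/mouse", OREAD);
	if(fd &lt; 0)
		exits("open");
	if(read(fd, b, 9) != 9)		/* blocks if the mouse has not moved since the last read */
		exits("read");
	close(fd);
	x = b[1] | (b[2]&lt;&lt;8) | (b[3]&lt;&lt;16) | ((long)b[4]&lt;&lt;24);	/* byte order assumed */
	y = b[5] | (b[6]&lt;&lt;8) | (b[7]&lt;&lt;16) | ((long)b[8]&lt;&lt;24);
	print("buttons %d at %ld,%ld\n", b[0], x, y);
	exits(nil);
}
</PRE></TT></DL>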
<P>
The various services being used by a process are gathered together into the
process's
name space,
a single rooted hierarchy of file names.
When a process forks, the child process shares the name space with the parent.
Several system calls manipulate name spaces.
Given a file descriptor
<TT>fd</TT>
that holds an open communications channel to a service,
the call
<DL><DT><DD><TT><PRE>
mount(int fd, char *old, int flags)
</PRE></TT></DL>
authenticates the user and attaches the file tree of the service to
the directory named by
<TT>old</TT>.
The
<TT>flags</TT>
specify how the tree is to be attached to
<TT>old</TT>:
replacing the current contents or appearing before or after the
current contents of the directory.
A directory with several services mounted is called a
<I>union</I>
directory and is searched in the specified order.
The call
<DL><DT><DD><TT><PRE>
bind(char *new, char *old, int flags)
</PRE></TT></DL>
takes the portion of the existing name space visible at
<TT>new</TT>,
either a file or a directory, and makes it also visible at
<TT>old</TT>.
For example,
<DL><DT><DD><TT><PRE>
bind("1995/0301/sys/include", "/sys/include", REPLACE)
</PRE></TT></DL>
causes the directory of include files to be overlaid with its
contents from the dump on March first.
</P>
<P>
A process is created by the
<TT>rfork</TT>
system call, which takes as argument a bit vector defining which
attributes of the process are to be shared between parent
and child instead of copied.
One of the attributes is the name space: when shared, changes
made by either process are visible in the other; when copied,
changes are independent.
</P>
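<P>
A sketch of the call follows; the flag names
(<TT>RFPROC</TT>, <TT>RFFDG</TT>, <TT>RFNAMEG</TT>)
follow the Plan 9 manual's conventions and are incidental here, the
point being the bit vector that chooses what is copied and what is shared.
</P>
<DL><DT><DD><TT><PRE>
#include &lt;u.h&gt;
#include &lt;libc.h&gt;

/* fork a child with its own copy of the name space and file descriptors */
void
main(void)
{
	switch(rfork(RFPROC|RFFDG|RFNAMEG)){
	case -1:
		exits("rfork");
	case 0:
		/* child: bind and mount calls made here are invisible to the
		   parent, because the name space was copied, not shared */
		exits(nil);
	default:
		/* parent: its name space is unchanged by the child */
		exits(nil);
	}
}
</PRE></TT></DL>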
<P>
Although there is no global name space,
for a process to function sensibly the local name spaces must adhere
to global conventions.
Nonetheless, the use of local name spaces is critical to the system.
Both these ideas are illustrated by the use of the name space to
handle heterogeneity.
The binaries for a given architecture are contained in a directory
named by the architecture, for example
<TT>/mips/bin</TT>;
in use, that directory is bound to the conventional location
<TT>/bin</TT>.
Programs such as shell scripts need not know the CPU type they are
executing on to find binaries to run.
A directory of private binaries
is usually unioned with
<TT>/bin</TT>.
(Compare this to the
ad hoc
and special-purpose idea of the
<TT>PATH</TT>
variable, which is not used in the Plan 9 shell.)
Local bindings are also helpful for debugging, for example by binding
an old library to the standard place and linking a program to see
if recent changes to the library are responsible for a bug in the program.
</P>
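<P>
In the notation of the earlier
<TT>bind</TT>
example, setting up
<TT>/bin</TT>
on a MIPS might look like the following; the
<TT>AFTER</TT>
flag name and the private directory are assumptions for illustration,
standing for appending to the union and for a user's own binaries.
</P>
<DL><DT><DD><TT><PRE>
bind("/mips/bin", "/bin", REPLACE);	/* the architecture's binaries at the conventional place */
bind("/usr/rob/bin/mips", "/bin", AFTER);	/* private binaries, searched after the system's */
</PRE></TT></DL>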
<P>
The window system,
<TT>8&#189;</TT>
[Pike91], is a server for files such as
<TT>/dev/cons</TT>
and
<TT>/dev/bitblt</TT>.
Each client sees a distinct copy of these files in its local
name space: there are many instances of
<TT>/dev/cons</TT>,
each served by
<TT>8&#189;</TT>
to the local name space of a window.
Again,
<TT>8&#189;</TT>
implements its services through
local name spaces plus
I/O to conventionally named files.
Each client just connects its standard input, output, and error files
to
<TT>/dev/cons</TT>,
with analogous operations to access bitmap graphics.
Compare this to the implementation of
<TT>/dev/tty</TT>
on UNIX, which is done by special code in the kernel
that overloads the file, when opened,
with the standard input or output of the process.
Special arrangement must be made by a UNIX window system for
<TT>/dev/tty</TT>
to behave as expected;
<TT>8&#189;</TT>
instead makes the provision of the corresponding file
its central idea, which depends critically on local name spaces to succeed.
</P>
<P>
The environment
<TT>8&#189;</TT>
provides its clients is exactly the environment under which it is implemented:
a conventional set of files in
<TT>/dev</TT>.
This permits the window system to be run recursively in one of its own
windows, which is handy for debugging.
It also means that if the files are exported to another machine,
as described below, the window system or client applications may be
run transparently on remote machines, even ones without graphics hardware.
This mechanism is used for Plan 9's implementation of the X window
system: X is run as a client of
<TT>8&#189;</TT>,
often on a remote machine with lots of memory.
In this configuration, using Ethernet to connect
MIPS machines, we measure only a 10% degradation in graphics
performance relative to running X on
a bare Plan 9 machine.
</P>
<P>
An unusual application of these ideas is a statistics-gathering
file system implemented by a command called
<TT>iostats</TT>.
The command encapsulates a process in a local name space, monitoring 9P
requests from the process to the outside world &#8211; the name space in which
<TT>iostats</TT>
is itself running. When the command completes,
<TT>iostats</TT>
reports usage and performance figures for file activity.
For example,
<DL><DT><DD><TT><PRE>
iostats 8&#189;
</PRE></TT></DL>
can be used to discover how much I/O the window system
does to the bitmap device, font files, and so on.
</P>
<P>
The
<TT>import</TT>
command connects a piece of name space from a remote system
to the local name space.
Its implementation is to dial the remote machine and start
a process there that serves the remote name space using 9P.
It then calls
<TT>mount</TT>
to attach the connection to the name space and finally dies;
the remote process continues to serve the files.
One use is to access devices not available
locally. For example, to write a floppy one may say
<DL><DT><DD><TT><PRE>
import lab.pc /a: /n/dos
cp foo /n/dos/bar
</PRE></TT></DL>
The call to
<TT>import</TT>
connects the file tree from
<TT>/a:</TT>
on the machine
<TT>lab.pc</TT>
(which must support 9P) to the local directory
<TT>/n/dos</TT>.
Then the file
<TT>foo</TT>
can be written to the floppy just by copying it across.
</P>
<P>
Another application is remote debugging:
<DL><DT><DD><TT><PRE>
import helix /proc
</PRE></TT></DL>
makes the process file system on machine
<TT>helix</TT>
available locally; commands such as
<TT>ps</TT>
then see
<TT>helix</TT>'s
processes instead of the local ones.
The debugger may then look at a remote process:
<DL><DT><DD><TT><PRE>
db /proc/27/text /proc/27/mem
</PRE></TT></DL>
allows breakpoint debugging of the remote process.
Since
<TT>db</TT>
infers the CPU type of the process from the executable header on
the text file, it supports
cross-architecture debugging, too.
Care is taken within
<TT>db</TT>
to handle issues of byte order and floating point; it is possible to
breakpoint debug a big-endian MIPS process from a little-endian i386.
</P>
<P>
Network interfaces are also implemented as file systems [Presotto].
For example,
<TT>/net/tcp</TT>
is a directory somewhat like
<TT>/proc</TT>:
it contains a set of numbered directories, one per connection,
each of which contains files to control and communicate on the connection.
A process allocates a new connection by accessing
<TT>/net/tcp/clone</TT>,
which evaluates to the directory of an unused connection.
To make a call, the process writes a textual message such as
<TT>'connect</TT>
<TT>135.104.53.2!512'</TT>
to the
<TT>ctl</TT>
file and then reads and writes the
<TT>data</TT>
file.
An
<TT>rlogin</TT>
service can be implemented in a few lines of shell code.
</P>
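<P>
The same can be done by hand from C.  In the sketch below, the detail
that reading the
<TT>clone</TT>
file yields the number of the allocated connection directory is an
assumption drawn from the description above, and the library
conventions are assumed as before.
</P>
<DL><DT><DD><TT><PRE>
#include &lt;u.h&gt;
#include &lt;libc.h&gt;

/* dial 135.104.53.2!512 through /net/tcp, as described in the text */
void
main(void)
{
	char buf[32], name[64];
	int ctl, fd;
	long n;

	ctl = open("/net/tcp/clone", ORDWR);	/* reserves an unused connection */
	if(ctl &lt; 0)
		exits("clone");
	n = read(ctl, buf, sizeof buf - 1);	/* assumed: returns the connection number */
	if(n &lt;= 0)
		exits("read");
	buf[n] = '\0';
	if(write(ctl, "connect 135.104.53.2!512", 24) &lt; 0)	/* textual control request */
		exits("connect");
	snprint(name, sizeof name, "/net/tcp/%d/data", atoi(buf));
	fd = open(name, ORDWR);			/* now read and write the data file */
	if(fd &lt; 0)
		exits("data");
	/* ... converse on fd ... */
	exits(nil);
}
</PRE></TT></DL>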
<P>
This structure makes network gatewaying easy to provide.
We have machines with Datakit interfaces but no Internet interface.
On such a machine one may type
<DL><DT><DD><TT><PRE>
import helix /net
telnet tcp!ai.mit.edu
</PRE></TT></DL>
The
<TT>import</TT>
uses Datakit to pull in the TCP interface from
<TT>helix</TT>,
which can then be used directly; the
<TT>tcp!</TT>
notation is necessary because we routinely use multiple networks
and protocols on Plan 9 &#8211; it identifies the network in which
<TT>ai.mit.edu</TT>
is a valid name.
</P>
<P>
In practice we do not use
<TT>rlogin</TT>
or
<TT>telnet</TT>
between Plan 9 machines. Instead a command called
<TT>cpu</TT>
in effect replaces the CPU in a window with that
on another machine, typically a fast multiprocessor CPU server.
The implementation is to recreate the
name space on the remote machine, using the equivalent of
<TT>import</TT>
to connect pieces of the terminal's name space to that of
the process (shell) on the CPU server, making the terminal
a file server for the CPU.
CPU-local devices such as fast file system connections
are still local; only terminal-resident devices are
imported.
The result is unlike UNIX
<TT>rlogin</TT>,
which moves into a distinct name space on the remote machine,
or file sharing with
<TT>NFS</TT>,
which keeps the name space the same but forces processes to execute
locally.
Bindings in
<TT>/bin</TT>
may change because of a change in CPU architecture, and
the networks involved may be different because of differing hardware,
but the effect feels like simply speeding up the processor in the
current name space.
</P>
<H4>Position
</H4>
<P>
These examples illustrate how the ideas of representing resources
as file systems and per-process name spaces can be used to solve
problems often left to more exotic mechanisms.
Nonetheless there are some operations in Plan 9 that are not
mapped into file I/O.
An example is process creation.
We could imagine a message to a control file in
<TT>/proc</TT>
that creates a process, but the details of
constructing the environment of the new process &#8211; its open files,
name space, memory image, etc. &#8211; are too intricate to
be described easily in a simple I/O operation.
Therefore new processes on Plan 9 are created by fairly conventional
<TT>rfork</TT>
and
<TT>exec</TT>
system calls;
<TT>/proc</TT>
is used only to represent and control existing processes.
</P>
<P>
Plan 9 does not attempt to map network name spaces into the file
system name space, for several reasons.
The different addressing rules for various networks and protocols
cannot be mapped uniformly into a hierarchical file name space.
Even if they could be,
the various mechanisms to authenticate,
select a service,
and control the connection would not map consistently into
operations on a file.
</P>
<P>
Shared memory is another resource not adequately represented by a
file name space.
Plan 9 takes care to provide mechanisms
to allow groups of local processes to share and map memory.
Memory is controlled
by system calls rather than special files, however,
since a representation in the file system would imply that memory could
be imported from remote machines.
</P>
<P>
Despite these limitations, file systems and name spaces offer an effective
model around which to build a distributed system.
Used well, they can provide a uniform, familiar, transparent
interface to a diverse set of distributed resources.
They carry well-understood properties of access, protection,
and naming.
The integration of devices into the hierarchical file system
was the best idea in UNIX.
Plan 9 pushes the concepts much further and shows that
file systems, when used inventively, have plenty of scope
for productive research.
</P>
<H4>References
</H4>
<br>&#32;<br>
[Killian] T. Killian, ``Processes as Files'', USENIX Summer Conf. Proc., Salt Lake City, 1984
<br>
[Needham] R. Needham, ``Names'', in
Distributed systems,
S. Mullender, ed.,
Addison Wesley, 1989
<br>
[Pike90] R. Pike, D. Presotto, K. Thompson, H. Trickey,
``Plan 9 from Bell Labs'',
UKUUG Proc. of the Summer 1990 Conf.,
London, England,
1990
<br>
[Presotto] D. Presotto, ``Multiprocessor Streams for Plan 9'',
UKUUG Proc. of the Summer 1990 Conf.,
London, England,
1990
<br>
[Pike91] R. Pike, ``8&#189;, the Plan 9 Window System'', USENIX Summer
Conf. Proc., Nashville, 1991
<br>&#32;<br>
<A href=http://www.lucent.com/copyright.html>
Copyright</A> &#169; 2004 Lucent Technologies Inc. All rights reserved.
</body></html>