<html>
<title>
Plan 9 C Compilers
</title>
<body BGCOLOR="#FFFFFF" TEXT="#000000" LINK="#0000FF" VLINK="#330088" ALINK="#FF0044">
<H1>Plan 9 C Compilers
</H1>
<DL><DD><I>Ken Thompson<br>
ken@plan9.bell-labs.com<br>
</I></DL>
<DL><DD><H4>ABSTRACT</H4>
<DL>
<DT><DT>&#32;<DD>
NOTE:<I> Originally appeared, in a different form, in
Proceedings of the Summer 1990 UKUUG Conference,
pp. 41-51,
London, 1990.
</I><DT>&#32;<DD></dl>
<br>
This paper describes the overall structure and function of the Plan 9 C compilers.
A more detailed implementation document
for any one of the compilers
is yet to be written.
</DL>
<H4>1 Introduction
</H4>
<br>&#32;<br>
There are many compilers in the series.
Six of the compilers (MIPS 3000, SPARC, Intel 386, Power PC, DEC Alpha, and Motorola 68020)
are considered active and are used to compile
current versions of Plan 9.
Several others (Motorola 68000, Intel 960, ARM 7500, AMD 29000) have had only limited use, such as
to program peripherals or experimental devices.
<H4>2 Structure
</H4>
<br>&#32;<br>
The compiler is a single program that produces an
object file.
Combined in the compiler are the traditional
roles of preprocessor, lexical analyzer, parser, code generator,
local optimizer,
and first half of the assembler.
The object files are binary forms of assembly
language,
similar to what might be passed between
the first and second passes of an assembler.
<br>&#32;<br>
Object files and libraries
are combined by a loader
program to produce the executable binary.
The loader combines the roles of second half
of the assembler, global optimizer, and loader.
The names of the compilers, loaders, and assemblers
are as follows:
<DL><DT><DD><TT><PRE>
SPARC            <TT>kc</TT> <TT>kl</TT> <TT>ka</TT>
Power PC         <TT>qc</TT> <TT>ql</TT> <TT>qa</TT>
MIPS             <TT>vc</TT> <TT>vl</TT> <TT>va</TT>
Motorola 68000   <TT>1c</TT> <TT>1l</TT> <TT>1a</TT>
Motorola 68020   <TT>2c</TT> <TT>2l</TT> <TT>2a</TT>
ARM 7500         <TT>5c</TT> <TT>5l</TT> <TT>5a</TT>
Intel 960        <TT>6c</TT> <TT>6l</TT> <TT>6a</TT>
DEC Alpha        <TT>7c</TT> <TT>7l</TT> <TT>7a</TT>
Intel 386        <TT>8c</TT> <TT>8l</TT> <TT>8a</TT>
AMD 29000        <TT>9c</TT> <TT>9l</TT> <TT>9a</TT>
</PRE></TT></DL>
There is a further breakdown
in the source of the compilers into
object-independent and
object-dependent
parts.
All of the object-independent parts
are combined into source files in the
directory
<TT>/sys/src/cmd/cc</TT>.
The object-dependent parts are collected
in a separate directory for each compiler,
for example
<TT>/sys/src/cmd/vc</TT>.
All of the code,
both object-independent and
object-dependent,
is machine-independent
and may be cross-compiled and executed on any
of the architectures.
<H4>3 The Language
</H4>
<br>&#32;<br>
The compiler implements ANSI C with some
restrictions and extensions
[ANSI90].
Most of the restrictions are due to
personal preference, while
most of the extensions were to help in
the implementation of Plan 9.
There are other departures from the standard,
particularly in the libraries,
that are beyond the scope of this
paper.
<H4>3.1 Register, volatile, const
</H4>
<br>&#32;<br>
The keyword
<TT>register</TT>
is recognized syntactically
but is semantically ignored.
Thus taking the address of a
<TT>register</TT>
variable is not diagnosed.
The keyword
<TT>volatile</TT>
disables all optimizations, in particular registerization, of the corresponding variable.
The keyword
<TT>const</TT>
generates warnings (if warnings are enabled by the compiler's
<TT>-w</TT>
option) of non-constant use of the variable,
but does not affect the generated code.
<H4>3.2 The preprocessor
</H4>
<br>&#32;<br>
The C preprocessor is probably the
biggest departure from the ANSI standard.
<br>&#32;<br>
The preprocessor built into the Plan 9 compilers does not support
<TT>#if</TT>,
although it does handle
<TT>#ifdef</TT>
and
<TT>#include</TT>.
If it is necessary to be more standard,
the source text can first be run through the separate ANSI C
preprocessor,
<TT>cpp</TT>.
<H4>3.3 Unnamed substructures
</H4>
<br>&#32;<br>
The most important and most heavily used of the
extensions is the declaration of an
unnamed substructure or subunion.
For example:
<DL><DT><DD><TT><PRE>
typedef
struct lock
{
	int	locked;
} Lock;

typedef
struct node
{
	int	type;
	union
	{
		double	dval;
		float	fval;
		long	lval;
	};
	Lock;
} Node;

Lock*	lock;
Node*	node;
</PRE></TT></DL>
The declaration of
<TT>Node</TT>
has an unnamed substructure of type
<TT>Lock</TT>
and an unnamed subunion.
One use of this feature allows references to elements of the
subunit to be accessed as if they were in
the outer structure.
Thus
<TT>node->dval</TT>
and
<TT>node->locked</TT>
are legitimate references.
<br>&#32;<br>
When an outer structure is used
in a context that is only legal for
an unnamed substructure,
the compiler promotes the reference to the
unnamed substructure.
This is true for references to structures and
for references to pointers to structures.
This happens in assignment statements and
in argument passing where prototypes have been
declared.
Thus, continuing with the example,
<DL><DT><DD><TT><PRE>
lock = node;
</PRE></TT></DL>
would assign a pointer to the unnamed
<TT>Lock</TT>
in
the
<TT>Node</TT>
to the variable
<TT>lock</TT>.
Another example,
<DL><DT><DD><TT><PRE>
extern void lock(Lock*);

func(...)
{
	...
	lock(node);
	...
}
</PRE></TT></DL>
will pass a pointer to the
<TT>Lock</TT>
substructure.
<br>&#32;<br>
Finally, in places where context is insufficient to identify the unnamed structure,
the type name (it must be a
<TT>typedef</TT>)
of the unnamed structure can be used as an identifier.
In our example,
<TT>&amp;node->Lock</TT>
gives the address of the anonymous
<TT>Lock</TT>
structure.
<H4>3.4 Structure displays
</H4>
<br>&#32;<br>
A structure cast followed by a list of expressions in braces is
an expression with the type of the structure and elements assigned from
the corresponding list.
Structures are now almost first-class citizens of the language.
It is common to see code like this:
<DL><DT><DD><TT><PRE>
r = (Rectangle){point1, (Point){x, y+2}};
</PRE></TT></DL>
<H4>3.5 Initialization indexes
</H4>
<br>&#32;<br>
In initializers of arrays,
one may place a constant expression
in square brackets before an initializer.
This causes the next initializer to assign
the indicated element.
For example:
<DL><DT><DD><TT><PRE>
enum errors
{
	Etoobig,
	Ealarm,
	Egreg
};
char* errstrings[] =
{
	[Ealarm]	"Alarm call",
	[Egreg]		"Panic: out of mbufs",
	[Etoobig]	"Arg list too long",
};
</PRE></TT></DL>
In the same way,
individual structure members may
be initialized in any order by preceding the initialization with
<TT>.tagname</TT>.
Both forms allow an optional
<TT>=</TT>,
to be compatible with a proposed
extension to ANSI C.
<H4>3.6 External register
</H4>
<br>&#32;<br>
The declaration
<TT>extern</TT>
<TT>register</TT>
will dedicate a register to
a variable on a global basis.
It can be used only under special circumstances.
External register variables must be identically
declared in all modules and
libraries.
The feature is not intended for efficiency,
although it can produce efficient code;
rather it represents a unique storage class that
would be hard to get any other way.
On a shared-memory multi-processor,
an external register is
one-per-processor and neither one-per-procedure (automatic)
nor one-per-system (external).
It is used for two variables in the Plan 9 kernel,
<TT>u</TT>
and
<TT>m</TT>.
The variable
<TT>u</TT>
is a pointer to the structure representing the currently running process
and
<TT>m</TT>
is a pointer to the per-machine data structure.
<H4>3.7 Long long
</H4>
<br>&#32;<br>
The compilers accept
<TT>long</TT>
<TT>long</TT>
as a basic type meaning 64-bit integer.
On all of the machines
this type is synthesized from 32-bit instructions.
<H4>3.8 Pragma
</H4>
<br>&#32;<br>
The compilers accept
<TT>#pragma</TT>
<TT>lib</TT>
<I>libname</I>
and pass the
library name string uninterpreted
to the loader.
The loader uses the library name to
find libraries to load.
If the name contains
<TT>%O</TT>,
it is replaced with
the single character object type of the compiler
(e.g.,
<TT>v</TT>
for the MIPS).
If the name contains
<TT>%M</TT>,
it is replaced with
the architecture type for the compiler
(e.g.,
<TT>mips</TT>
for the MIPS).
If the name starts with
<TT>/</TT>
it is an absolute pathname;
if it starts with
<TT>.</TT>
then it is searched for in the loader's current directory.
Otherwise, the name is searched for in
<TT>/%M/lib</TT>.
Such
<TT>#pragma</TT>
statements in header files guarantee that the correct
libraries are always linked with a program without the
need to specify them explicitly at link time.
<br>&#32;<br>
They also accept
<TT>#pragma</TT>
<TT>hjdicks</TT>
<TT>on</TT>
(or
<TT>yes</TT>
or
<TT>1</TT>)
to cause subsequently declared data, until
<TT>#pragma</TT>
<TT>hjdicks</TT>
<TT>off</TT>
(or
<TT>no</TT>
or
<TT>0</TT>),
to be laid out in memory tightly packed in successive bytes, disregarding
the usual alignment rules.
Accessing such data can cause faults.
<br>&#32;<br>
Two
<TT>#pragma</TT>
statements allow type-checking of
<TT>print</TT>-like
functions.
The first, of the form
<DL><DT><DD><TT><PRE>
#pragma varargck argpos error 2
</PRE></TT></DL>
tells the compiler that the second argument to
<TT>error</TT>
is a
<TT>print</TT>
format string (see the manual page
<A href="/magic/man2html/2/print"><I>print</I>(2)</A>)
that specifies how to format
<TT>error</TT>'s
subsequent arguments.
The second, of the form
<DL><DT><DD><TT><PRE>
#pragma varargck type "s" char*
</PRE></TT></DL>
says that the
<TT>print</TT>
format verb
<TT>s</TT>
processes an argument of
type
<TT>char*</TT>.
If the compiler's
<TT>-F</TT>
option is enabled, the compiler will use this information
to report type violations in the arguments to
<TT>print</TT>,
<TT>error</TT>,
and similar routines.
<H4>4 Object module conventions
</H4>
<br>&#32;<br>
The overall conventions of the runtime environment
are important
to runtime efficiency.
In this section,
several of these conventions are discussed.
<H4>4.1 Register saving
</H4>
<br>&#32;<br>
In the Plan 9 compilers,
the caller of a procedure saves the registers.
With caller-saves,
the leaf procedures can use all the
registers and never save them.
If you spend a lot of time at the leaves,
this seems preferable.
With callee-saves,
the saving of the registers is done
at the single point of entry and return.
If you are interested in space,
this seems preferable.
In both,
there is a degree of uncertainty
about what registers need to be saved.
Callee-saved registers make it difficult to
find variables in registers in debuggers.
Callee-saved registers also complicate
the implementation of
<TT>longjmp</TT>.
The convincing argument is
that with caller-saves,
the decision to registerize a variable
can include the cost of saving the register
across calls.
For a further discussion of caller- vs. callee-saves,
see the paper by Davidson and Whalley [Dav91].
<br>&#32;<br>
In the Plan 9 operating system,
calls to the kernel look like normal procedure
calls, which means
the caller
has saved the registers and the system
entry does not have to.
This makes system calls considerably faster.
Since this is a potential security hole,
and can lead to non-determinism,
the system may eventually save the registers
on entry,
or more likely clear the registers on return.
<H4>4.2 Calling convention
</H4>
<br>&#32;<br>
Older C compilers maintain a frame pointer, which is at a known constant
offset from the stack pointer within each function.
For machines where the stack grows towards zero,
the argument pointer is at a known constant offset
from the frame pointer.
Since the stack grows down in Plan 9,
the Plan 9 compilers
keep neither an
explicit frame pointer nor
an explicit argument pointer;
instead they generate addresses relative to the stack pointer.
<br>&#32;<br>
On some architectures, the first argument to a subroutine is passed in a register.
<H4>4.3 Functions returning structures
</H4>
<br>&#32;<br>
Structures longer than one word are awkward to implement
since they do not fit in registers and must
be passed around in memory.
Functions that return structures
are particularly clumsy.
The Plan 9 compilers pass the return address of
a structure as the first argument of a
function that has a structure return value.
Thus
<DL><DT><DD><TT><PRE>
x = f(...)
</PRE></TT></DL>
is rewritten as
<DL><DT><DD><TT><PRE>
f(&amp;x, ...).
</PRE></TT></DL>
This saves a copy and makes the compilation
much less clumsy.
A disadvantage is that if you call this
function without an assignment,
a dummy location must be invented.
<br>&#32;<br>
There is also a danger of calling a function
that returns a structure without declaring
it as such.
With ANSI C function prototypes,
this error need never occur.
<H4>5 Implementation
</H4>
<br>&#32;<br>
The compiler is divided internally into
four machine-independent passes,
four machine-dependent passes,
and an output pass.
The next nine sections describe each pass in order.
<H4>5.1 Parsing
</H4>
<br>&#32;<br>
The first pass is a YACC-based parser
[Joh79].
Declarations are interpreted immediately,
building a block structured symbol table.
Executable statements are put into a parse tree
and collected,
without interpretation.
At the end of each procedure,
the parse tree for the function is
examined by the other passes of the compiler.
<br>&#32;<br>
The input stream of the parser is
a pushdown list of input activations.
The preprocessor
expansions of
macros
and
<TT>#include</TT>
are implemented as pushdowns.
Thus there is no separate
pass for preprocessing.
<H4>5.2 Typing
</H4>
<br>&#32;<br>
The next pass distributes typing information
to every node of the tree.
Implicit operations on the tree are added,
such as type promotions and taking the
address of arrays and functions.
<H4>5.3 Machine-independent optimization
</H4>
<br>&#32;<br>
The next pass performs optimizations
and transformations of the tree, such as converting
<TT>&amp;*x</TT>
and
<TT>*&amp;x</TT>
into
<TT>x</TT>.
Constant expressions are converted to constants in this pass.
<H4>5.4 Arithmetic rewrites
</H4>
<br>&#32;<br>
This is another machine-independent optimization.
Subtrees of add, subtract, and multiply of integers are
rewritten for easier compilation.
The major transformation is factoring:
<TT>4+8*a+16*b+5</TT>
is transformed into
<TT>9+8*(a+2*b)</TT>.
Such expressions arise from address
manipulation and array indexing.
<H4>5.5 Addressability
</H4>
<br>&#32;<br>
This is the first of the machine-dependent passes.
The addressability of a processor is defined as the set of
expressions that is legal in the address field
of a machine language instruction.
The addressability of different processors varies widely.
At one end of the spectrum are the 68020 and VAX,
which allow a complex mix of incrementing,
decrementing,
indexing, and relative addressing.
At the other end is the MIPS,
which allows only registers and constant offsets from the
contents of a register.
The addressability can be different for different instructions
within the same processor.
<br>&#32;<br>
It is important to the code generator to know when a
subtree represents an address of a particular type.
This is done with a bottom-up walk of the tree.
In this pass, the leaves are labeled with small integers.
When an internal node is encountered,
it is labeled by consulting a table indexed by the
labels on the left and right subtrees.
For example,
on the 68020 processor,
it is possible to address an
offset from a named location.
In C, this is represented by the expression
<TT>*(&amp;name+constant)</TT>.
This is marked addressable by the following table.
In the table,
a node represented by the left column is marked
with a small integer from the right column.
Marks of the form
<TT>A<sub>i</sub></TT>
are addressable while marks of the form
<TT>N<sub>i</sub></TT>
are not addressable.
<DL><DT><DD><TT><PRE>
Node                    Marked
name                    A<sub>1</sub>
const                   A<sub>2</sub>
&amp;A<sub>1</sub>     A<sub>3</sub>
A<sub>3</sub>+A<sub>2</sub>   N<sub>1</sub>  (note that this is not addressable)
*N<sub>1</sub>          A<sub>4</sub>
</PRE></TT></DL>
Here there is a distinction between
a node marked
<TT>A<sub>1</sub></TT>
and a node marked
<TT>A<sub>4</sub></TT>
because the address of an
<TT>A<sub>4</sub></TT>
node is not itself addressable.
So to extend the table:
<DL><DT><DD><TT><PRE>
Node                    Marked
&amp;A<sub>4</sub>     N<sub>2</sub>
N<sub>2</sub>+N<sub>1</sub>   N<sub>1</sub>
</PRE></TT></DL>
The full addressability of the 68020 is expressed
in 18 rules like this,
while the addressability of the MIPS is expressed
in 11 rules.
When one ports the compiler,
this table is usually initialized
so that leaves are labeled as addressable and nothing else.
The code produced is poor,
but porting is easy.
The table can be extended later.
<br>&#32;<br>
This pass also rewrites some complex operators
into procedure calls.
Examples include 64-bit multiply and divide.
<br>&#32;<br>
In the same bottom-up pass of the tree,
the nodes are labeled with a Sethi-Ullman complexity
[Set70].
This number is roughly the number of registers required
to compile the tree on an ideal machine.
An addressable node is marked 0.
A function call is marked infinite.
A unary operator is marked as the
maximum of 1 and the mark of its subtree.
A binary operator with equal marks on its subtrees is
marked with a subtree mark plus 1.
A binary operator with unequal marks on its subtrees is
marked with the maximum mark of its subtrees.
The actual values of the marks are not too important,
but the relative values are.
The goal is to compile the harder
(larger mark)
subtree first.
<H4>5.6 Code generation
</H4>
<br>&#32;<br>
Code is generated by recursive
descent.
The Sethi-Ullman complexity completely guides the
order.
The addressability defines the leaves.
The only difficult part is compiling a tree
that has two infinite (function call)
subtrees.
In this case,
one subtree is compiled into the return register
(usually the most convenient place for a function call)
and then stored on the stack.
The other subtree is compiled into the return register
and then the operation is compiled with
operands from the stack and the return register.
<br>&#32;<br>
There is a separate boolean code generator that compiles
conditional expressions.
This is fundamentally different from compiling an arithmetic expression.
The result of the boolean code generator is the
position of the program counter and not an expression.
The boolean code generator makes extensive use of De Morgan's rule.
The boolean code generator is an expanded version of that described
in chapter 8 of Aho, Sethi, and Ullman
[Aho87].
<br>&#32;<br>
There is a considerable amount of talk in the literature
about automating this part of a compiler with a machine
description.
Since this code generator is so small
(less than 500 lines of C)
and easy,
it hardly seems worth the effort.
<H4>5.7 Registerization
</H4>
<br>&#32;<br>
Up to now,
the compiler has operated on syntax trees
that are roughly equivalent to the original source language.
The previous pass has produced machine language in an internal
format.
The next two passes operate on the internal machine language
structures.
The purpose of the next pass is to reintroduce
registers for heavily used variables.
<br>&#32;<br>
All of the variables that can be
potentially registerized within a procedure are
placed in a table.
(Suitable variables are any automatic or external
scalars that do not have their addresses extracted.
Some constants that are hard to reference are also
considered for registerization.)
Four separate data flow equations are evaluated
over the procedure on all of these variables.
Two of the equations are the normal set-behind
and used-ahead
bits that define the life of a variable.
The two new bits tell if a variable life
crosses a function call ahead or behind.
By examining a variable over its lifetime,
it is possible to get a cost
for registerizing.
Loops are detected and the costs are multiplied
by three for every level of loop nesting.
Costs are sorted and the variables
are replaced by available registers on a greedy basis.
<br>&#32;<br>
The 68020 has two different
types of registers.
For the 68020,
two different costs are calculated for
each variable life and the register type that
affords the better cost is used.
Ties are broken by counting the number of available
registers of each type.
<br>&#32;<br>
Note that externals are registerized together with automatics.
This is done by evaluating the semantics of a ``call'' instruction
differently for externals and automatics.
Since a call goes outside the local procedure,
it is assumed that a call references all externals.
Similarly,
externals are assumed to be set before an ``entry'' instruction
and assumed to be referenced after a ``return'' instruction.
This makes sure that externals are in memory across calls.
<br>&#32;<br>
The overall results are satisfactory.
It would be nice to be able to do this processing in
a machine-independent way,
but it is impossible to get all of the costs and
side effects of different choices by examining the parse tree.
<br>&#32;<br>
Most of the code in the registerization pass is machine-independent.
The major machine-dependency is in
examining a machine instruction to ask if it sets or references
a variable.
  748. <H4>5.8 Machine code optimization
  749. </H4>
  750. <br>&#32;<br>
  751. The next pass walks the machine code
  752. for opportunistic optimizations.
  753. For the most part,
  754. this is highly specific to a particular
  755. processor.
  756. One optimization that is performed
  757. on all of the processors is the
  758. removal of unnecessary ``move''
  759. instructions.
  760. Ironically,
  761. most of these instructions were inserted by
  762. the previous pass.
  763. There are two patterns that are repetitively
  764. matched and replaced until no more matches are
  765. found.
  766. The first tries to remove ``move'' instructions
  767. by relabeling variables.
  768. <br>&#32;<br>
  769. When a ``move'' instruction is encountered,
  770. if the destination variable is set before the
  771. source variable is referenced,
  772. then all of the references to the destination
  773. variable can be renamed to the source and the ``move''
  774. can be deleted.
  775. This transformation uses the reverse data flow
  776. set up in the previous pass.
  777. <br>&#32;<br>
  778. An example of this pattern is depicted in the following
  779. table.
  780. The pattern is in the left column and the
  781. replacement action is in the right column.
  782. <DL><DT><DD><TT><PRE>
  783. MOVE a-&#62;b (remove)
  784. (sequence with no mention of <TT>a</TT>)
  785. USE b USE a
  786. (sequence with no mention of <TT>a</TT>)
  787. SET b SET b
  788. </PRE></TT></DL>
  789. <br>&#32;<br>
  790. Experiments have shown that it is marginally
  791. worthwhile to rename uses of the destination variable
  792. with uses of the source variable up to
  793. the first use of the source variable.
  794. <br>&#32;<br>
The second transform will do relabeling
without deleting instructions.
When a ``move'' instruction is encountered,
if the source variable has been set prior
to the use of the destination variable,
then all of the references to the source
variable are replaced by the destination and
the ``move'' is inverted.
Typically,
this transformation will alter two ``move''
instructions and allow the first transformation
another chance to remove code.
This transformation uses the forward data flow
set up in the previous pass.
<br>&#32;<br>
Again,
the following is a depiction of the transformation where
the pattern is in the left column and the
rewrite is in the right column.
<DL><DT><DD><TT><PRE>
SET a                    SET b
(sequence with no use of <TT>b</TT>)
USE a                    USE b
(sequence with no use of <TT>b</TT>)
MOVE a-&#62;b                MOVE b-&#62;a
</PRE></TT></DL>
Iterating these transformations
will usually get rid of all redundant ``move'' instructions.
<br>&#32;<br>
A problem with this organization is that the costs
of registerization calculated in the previous pass
must depend on how well this pass can detect and remove
redundant instructions.
Often,
a fine candidate for registerization is rejected
because of the cost of instructions that are later
removed.
<H4>5.9 Writing the object file
</H4>
<br>&#32;<br>
The last pass walks the internal assembly language
and writes the object file.
The object file is reduced in size by about a factor
of three with simple compression
techniques.
The most important aspect of the object file
format is that it is independent of the compiling machine.
All integer and floating-point numbers in the object
code are converted to known formats and byte
orders.
<H4>6 The loader
</H4>
<br>&#32;<br>
The loader is a multiple-pass program that
reads object files and libraries and produces
an executable binary.
The loader also does some minimal
optimizations and code rewriting.
Many of the operations performed by the
loader are machine-dependent.
<br>&#32;<br>
The first pass of the loader reads the
object modules into an internal data
structure that looks like binary assembly language.
As the instructions are read,
code is reordered to remove
unconditional branch instructions.
Conditional branch instructions are inverted
to prevent the insertion of unconditional branches.
The loader will also make a copy of a few instructions
to remove an unconditional branch.
<br>&#32;<br>
The next pass allocates addresses for
all external data.
The MIPS processor is typical:
it can reference &#177;32K bytes from a
register.
The loader allocates the register
<TT>R30</TT>
as the static pointer.
The value placed in
<TT>R30</TT>
is the base of the data segment plus 32K.
It is then cheap to reference all data in the
first 64K of the data segment.
External variables are allocated to
the data segment
with the smallest variables allocated first.
If all of the data cannot fit into the first
64K of the data segment,
then usually only a few large arrays
need more expensive addressing modes.
<br>&#32;<br>
For the MIPS processor,
the loader makes a pass over the internal
structures,
exchanging instructions to try
to fill ``delay slots'' with useful work.
If a useful instruction cannot be found
to fill a delay slot,
the loader will insert
``noop''
instructions.
This pass is very expensive and does not
do a good job.
About 40% of all instructions are in
delay slots.
About 65% of these are useful instructions and
35% are ``noops.''
The vendor-supplied assembler does this job
more effectively,
filling about 80%
of the delay slots with useful instructions.
<br>&#32;<br>
On the 68020 processor,
branch instructions come in a variety of
sizes depending on the relative distance
of the branch.
Thus the sizes of branch instructions
can be mutually dependent.
The loader uses a multiple-pass algorithm
to resolve the branch lengths
[Szy78].
Initially, all branches are assumed minimal length.
On each subsequent pass,
the branches are reassessed
and expanded if necessary.
When no more expansions occur,
the locations of the instructions in
the text segment are known.
<br>&#32;<br>
On the MIPS processor,
all instructions are one size.
A single pass over the instructions will
determine the locations of all addresses
in the text segment.
<br>&#32;<br>
The last pass of the loader produces the
executable binary.
A symbol table and other tables are
produced to help the debugger
interpret the binary symbolically.
<br>&#32;<br>
The loader places absolute source line numbers in the symbol table.
The names and absolute line numbers of all
<TT>#include</TT>
files are also placed in the
symbol table so that the debuggers can
associate object code to source files.
<H4>7 Performance
</H4>
<br>&#32;<br>
The following is a table of the source size of the MIPS
compiler.
<DL><DT><DD><TT><PRE>
lines   module
  509   machine-independent headers
 1070   machine-independent YACC source
 6090   machine-independent C source
  545   machine-dependent headers
 6532   machine-dependent C source
  298   loader headers
 5215   loader C source
</PRE></TT></DL>
<br>&#32;<br>
The following table shows the timing
of a test program
that plays checkers, running on a MIPS R4000.
The test program is 26 files totaling 12600 lines of C.
The execution time does not significantly
depend on library implementation.
Since no other compiler runs on Plan 9,
the Plan 9 tests were done with the Plan 9 operating system;
the other tests were done on the vendor's operating system.
The hardware was identical in both cases.
The optimizer in the vendor's compiler
is reputed to be extremely good.
<DL><DT><DD><TT><PRE>
  4.49s   Plan 9 <TT>vc</TT> <TT>-N</TT> compile time (opposite of <TT>-O</TT>)
  1.72s   Plan 9 <TT>vc</TT> <TT>-N</TT> load time
148.69s   Plan 9 <TT>vc</TT> <TT>-N</TT> run time
 15.07s   Plan 9 <TT>vc</TT> compile time (<TT>-O</TT> implicit)
  1.66s   Plan 9 <TT>vc</TT> load time
 89.96s   Plan 9 <TT>vc</TT> run time
 14.83s   vendor <TT>cc</TT> compile time
  0.38s   vendor <TT>cc</TT> load time
104.75s   vendor <TT>cc</TT> run time
 43.59s   vendor <TT>cc</TT> <TT>-O</TT> compile time
  0.38s   vendor <TT>cc</TT> <TT>-O</TT> load time
 76.19s   vendor <TT>cc</TT> <TT>-O</TT> run time
  8.19s   vendor <TT>cc</TT> <TT>-O3</TT> compile time
 35.97s   vendor <TT>cc</TT> <TT>-O3</TT> load time
 71.16s   vendor <TT>cc</TT> <TT>-O3</TT> run time
</PRE></TT></DL>
<br>&#32;<br>
To compare with an Intel compiler,
a program that is about 40% bit manipulation and
about 60% single precision floating point was
run on the same 33 MHz 486, once under Windows,
compiled with the Watcom compiler, version 10.0,
in 16-bit mode, and once under
Plan 9 in 32-bit mode.
The Plan 9 execution time was 27 sec while the Windows
execution time was 31 sec.
<H4>8 Conclusions
</H4>
<br>&#32;<br>
The new compilers compile
quickly,
load slowly,
and produce
medium-quality
object code.
The compilers are relatively
portable,
requiring but a couple of weeks' work to
produce a compiler for a different computer.
For Plan 9,
where we needed several compilers
with specialized features and
our own object formats,
this project was indispensable.
It is also necessary for us to
be able to freely distribute our compilers
with the Plan 9 distribution.
<br>&#32;<br>
Two problems have come up in retrospect.
The first has to do with the
division of labor between compiler and loader.
Plan 9 runs on multiprocessors, and as such
compilations are often done in parallel.
Unfortunately,
all compilations must be complete before loading
can begin.
The load is then single-threaded.
With this model,
any shift of work from compile to load
results in a significant increase in real time.
The same is true of libraries that are compiled
infrequently and loaded often.
In the future,
we may try to put some of the loader work
back into the compiler.
<br>&#32;<br>
The second problem comes from
the various optimizations performed over several
passes.
Often optimizations in different passes depend
on each other.
Iterating the passes could compromise efficiency,
or even loop.
We see no real solution to this problem.
<H4>9 References
</H4>
<br>&#32;<br>
[Aho87] A. V. Aho, R. Sethi, and J. D. Ullman,
<I>Compilers: Principles, Techniques, and Tools</I>,
Addison-Wesley,
Reading, MA,
1987.
<br>&#32;<br>
[ANSI90] <I>American National Standard for Information Systems -
Programming Language C</I>, American National Standards Institute, Inc.,
New York, 1990.
<br>&#32;<br>
[Dav91] J. W. Davidson and D. B. Whalley,
``Methods for Saving and Restoring Register Values across Function Calls'',
<I>Software-Practice and Experience</I>,
Vol 21(2), pp. 149-165, February 1991.
<br>&#32;<br>
[Joh79] S. C. Johnson,
``YACC - Yet Another Compiler Compiler'',
<I>UNIX Programmer's Manual</I>, Seventh Ed., Vol. 2A,
AT&amp;T Bell Laboratories,
Murray Hill, NJ,
1979.
<br>&#32;<br>
[Set70] R. Sethi and J. D. Ullman,
``The Generation of Optimal Code for Arithmetic Expressions'',
<I>Journal of the ACM</I>,
Vol 17(4), pp. 715-728, 1970.
<br>&#32;<br>
[Szy78] T. G. Szymanski,
``Assembling Code for Machines with Span-dependent Instructions'',
<I>Communications of the ACM</I>,
Vol 21(4), pp. 300-308, 1978.
<br>&#32;<br>
<A href=http://www.lucent.com/copyright.html>
Copyright</A> &#169; 2000 Lucent Technologies Inc. All rights reserved.
</body></html>