Compiler Notes


COMPILERS

BASIC COMPILER FUNCTIONS

A compiler accepts a program written in a high-level language as input and produces its machine-language equivalent as output. For the purpose of compiler construction, a high-level programming language is described in terms of a grammar. The grammar specifies the formal description of the syntax, or legal statements, of the language.

Example: an assignment statement in Pascal is defined as

   < variable > := < expression >

The compiler has to match each statement written by the programmer to the structure defined by the grammar and generate appropriate object code for each statement.

The compilation process is so complex that it is not reasonable to implement it in one single step. It is partitioned into a series of sub-processes called phases. A phase is a logically cohesive operation that takes as input one representation of the source program and produces as output another representation. The basic phases are Lexical Analysis, Syntax Analysis, and Code Generation.

Lexical Analysis: This is the first phase, also called the scanner. It separates the characters of the source language into groups that logically belong together; these groups are called tokens. The usual tokens are:

   Keywords:             such as DO or IF
   Identifiers:          such as x or num
   Operator symbols:     such as := or +
   Punctuation symbols:  such as commas or parentheses

GRAMMARS

A grammar for a programming language is usually written in BNF (Backus-Naur Form). A BNF grammar consists of:
1. A set of terminal symbols (the tokens of the language).
2. A set of non-terminal symbols, written between angle brackets.
3. A set of rules, each defining a non-terminal in terms of terminals and non-terminals.
4. A designation of one of the non-terminals as the start symbol.

Consider the rule for < id-list > (rule 6 of the grammar in fig. 5):

   < id-list > ::= id | < id-list > , id

This rule offers two possibilities, separated by the | symbol, for the syntax of an < id-list >. The first says that an < id-list > may consist simply of a token id (the notation id denotes an identifier that is recognized by the scanner). The second says that an < id-list > may consist of another < id-list >, followed by a comma, followed by an id. Example:

   ALPHA
   ALPHA, BETA

ALPHA is an < id-list > consisting of a single id; ALPHA, BETA is an < id-list > that consists of another < id-list > (ALPHA), followed by a comma, followed by an id (BETA).

Parse tree: It is also called a syntax tree. It is convenient to display the analysis of a source statement in terms of the grammar as a tree.

Example: READ (VALUE)
Grammar rule:  < read > ::= READ ( < id-list > )

Example: Assignment statements (from the Pascal program in fig. 1):
   SUM := 0 ;
   SUM := SUM + VALUE ;
   VARIANCE := SUMSQ DIV 100 - MEAN * MEAN ;


Grammar:

   < assign >  ::=  id := < exp >
   < exp >     ::=  < term > | < exp > + < term > | < exp > - < term >
   < term >    ::=  < factor > | < term > * < factor > | < term > DIV < factor >
   < factor >  ::=  id | int | ( < exp > )

An < assign > consists of an id, followed by the token :=, followed by an < exp >; fig. 4(a) shows the corresponding syntax tree. An < exp > is a sequence of < term >s connected by the operators + and -; fig. 4(b) shows the syntax tree. A < term > is a sequence of < factor >s connected by * and DIV (fig. 4(c)). A < factor > may consist of an identifier id, an integer int (which is also recognized by the scanner), or an < exp > enclosed in parentheses (fig. 4(d)).



[Parse tree diagrams for < assign >, < exp >, < term > and < factor > are not reproduced here.]

Fig. 4 Parse Trees

Fig. 4 shows the parse trees for the statement VARIANCE := SUMSQ DIV 100 - MEAN * MEAN. The complete simplified Pascal grammar is listed in fig. 5.

    1.  < prog >       ::=  PROGRAM < prog-name > VAR < dec-list > BEGIN < stmt-list > END.
    2.  < prog-name >  ::=  id
    3.  < dec-list >   ::=  < dec > | < dec-list > ; < dec >
    4.  < dec >        ::=  < id-list > : < type >
    5.  < type >       ::=  INTEGER
    6.  < id-list >    ::=  id | < id-list > , id
    7.  < stmt-list >  ::=  < stmt > | < stmt-list > ; < stmt >
    8.  < stmt >       ::=  < assign > | < read > | < write > | < for >
    9.  < assign >     ::=  id := < exp >
   10.  < exp >        ::=  < term > | < exp > + < term > | < exp > - < term >
   11.  < term >       ::=  < factor > | < term > * < factor > | < term > DIV < factor >
   12.  < factor >     ::=  id | int | ( < exp > )
   13.  < read >       ::=  READ ( < id-list > )
   14.  < write >      ::=  WRITE ( < id-list > )
   15.  < for >        ::=  FOR < index-exp > DO < body >
   16.  < index-exp >  ::=  id := < exp >1 TO < exp >2
   17.  < body >       ::=  < stmt > | BEGIN < stmt-list > END

Fig. 5 Simplified Pascal Grammar

[The parse tree for the complete Pascal program of fig. 1 is a large diagram and is not reproduced here.]

Fig. 6 Parse Tree for the Pascal Program of fig. 1

The parse tree for the Pascal program in fig. 1 is shown in fig. 6.

1. Draw parse trees, according to the grammar in fig. 5, for the following < id-list >s:
(a) ALPHA
(b) ALPHA, BETA, GAMMA

[Solution parse trees not reproduced.]


2. Draw parse trees, according to the grammar in fig. 5, for the following < exp >s:
(a) ALPHA + BETA
(b) ALPHA - BETA + GAMMA
(c) ALPHA DIV (BETA + GAMMA) - DELTA

[Solution parse trees not reproduced.]


3. Suppose the rules of the grammar for < exp > and < term > were changed as follows:

   < exp >   ::=  < term > | < exp > * < term > | < exp > DIV < term >
   < term >  ::=  < factor > | < term > + < factor > | < term > - < factor >

Draw parse trees, according to this modified grammar, for the following:
(a) A1 + B1
(b) A1 - B1 * G1
(c) A1 DIV (B1 + G1) - D1

[Solution parse trees not reproduced.]

LEXICAL ANALYSIS

Lexical analysis involves scanning the program to be compiled. Scanners are designed to recognize keywords, operators, identifiers, integers, floating-point numbers, character strings and other items that are written as part of the source program. These items are recognized directly as single tokens. The tokens could instead be defined as part of the grammar. Example:

   < ident >   ::=  < letter > | < ident > < letter > | < ident > < digit >
   < letter >  ::=  A | B | C | . . . | Z
   < digit >   ::=  0 | 1 | 2 | . . . | 9

In such a case the scanner would recognize as tokens the single characters A, B, . . ., Z, 0, 1, . . ., 9, and the parser would interpret a sequence of such characters as the language construct < ident >. However, the scanner can perform this function more efficiently; there can be a significant saving in compilation time, since a large part of the source program consists of multiple-character identifiers. It is also easier to restrict the length of identifiers in a scanner than in a general parsing routine.

The scanner therefore generally recognizes both single-character and multiple-character tokens directly. The scanner output consists of a sequence of tokens, and each token can be given a fixed-length integer code. Fig. 7 gives a list of integer codes for the tokens of the grammar in fig. 5; in this coding scheme the keyword PROGRAM is represented by the integer value 1, VAR by the value 2, and so on.

   Token      Code        Token      Code
   PROGRAM     1          ;           12
   VAR         2          :           13
   BEGIN       3          ,           14
   END         4          :=          15
   END.        5          +           16
   INTEGER     6          -           17
   FOR         7          *           18
   READ        8          DIV         19
   WRITE       9          (           20
   TO         10          )           21
   DO         11          id          22
                          int         23

Fig. 7 Token Coding Scheme

For a keyword or an operator, the token coding scheme gives sufficient information. In the case of an identifier, it is also necessary to supply the particular identifier name that was scanned; the same is true for integers, floating-point values, character-string constants, etc. A token specifier can be associated with the token code for such tokens. This specifier gives the identifier name, integer value, etc., that was found by the scanner.
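The following small Python sketch (illustrative, not from the text) renders the token coding scheme of fig. 7 as a table lookup and returns a (code, specifier) pair for each token; the function name encode is assumed for the example.

   # Token codes from fig. 7; keywords and operators need no specifier,
   # identifiers get code 22 with their name, integers code 23 with their value.
   TOKEN_CODES = {
       "PROGRAM": 1, "VAR": 2, "BEGIN": 3, "END": 4, "END.": 5, "INTEGER": 6,
       "FOR": 7, "READ": 8, "WRITE": 9, "TO": 10, "DO": 11, ";": 12, ":": 13,
       ",": 14, ":=": 15, "+": 16, "-": 17, "*": 18, "DIV": 19, "(": 20, ")": 21,
   }

   def encode(token):
       """Return (token code, token specifier) for one scanned token."""
       if token in TOKEN_CODES:
           return TOKEN_CODES[token], None
       if token.isdigit():
           return 23, int(token)        # int: the specifier is the value, e.g. 100
       return 22, token                 # id: the specifier is the identifier name

   print([encode(t) for t in ["READ", "(", "VALUE", ")", ";"]])
   # [(8, None), (20, None), (22, 'VALUE'), (21, None), (12, None)]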


Some scanners enter the identifiers directly into a symbol table. The token specifier for an identifier may then be a pointer to the symbol-table entry for that identifier.

The functions of a scanner are:

- The entire program is not scanned at one time. The scanner usually operates as a procedure that is called by the parser whenever it needs another token.
- The scanner is responsible for reading the lines of the source program and possibly for printing the source listing.
- The scanner ignores comments, except for printing them in the output listing.
- The scanner must take the characteristics of the language into account. Example:

     FORTRAN:  Columns 1-5    statement number
               Column 6       continuation of line
               Columns 7-72   program statement

     PASCAL:   Blanks function as delimiters for tokens.
               Statements can be continued freely.
               The end of a statement is indicated by ; (semicolon).

- The scanner must follow the rules for the formation of tokens.

Example: 'READ' should not be considered a keyword, because it is within quotes; that is, strings within quotes are not scanned for tokens.

- Blanks are significant within a quoted string.
- Blanks play different roles in different languages.

Example 1: FORTRAN
   Statement:  DO 10 I = 1, 100
   Here DO is a keyword, I is an identifier and 10 is a statement number.
   Statement:  DO 10 I = 1
   Here DO10I is an identifier. Blanks are ignored in FORTRAN, so this is an assignment statement (DO10I = 1). In this case the scanner must look ahead to see whether there is a comma (,) before it can decide on the proper identification of the characters DO.

Example 2: In FORTRAN, keywords may also be used as identifiers. Words such as IF, THEN and ELSE might represent either keywords or variable names:

   IF (THEN .EQ. ELSE) THEN
      IF = THEN
   ELSE
      THEN = IF
   ENDIF


Modeling Scanners as Finite Automata

A finite automaton provides an easy way to visualize the operation of a scanner. Mathematically, a finite automaton consists of a finite set of states and a set of transitions from one state to another. A finite automaton can be represented graphically, as shown in fig. 8: a state is represented by a circle, a final state by a double circle, and an arrow indicates a transition from one state to another. Each arrow is labeled with the character or set of characters that causes the transition to occur. The starting state is marked by an arrow entering it that is not connected to any other state.

Fig. 8 States, final states and transitions of a finite automaton (diagram not reproduced)

Example: A finite automaton that recognizes identifier tokens is given in fig. 9; the corresponding algorithm is given in fig. 10. The automaton moves from the start state to a second state on a letter (A-Z) and remains in that state on further letters (A-Z) or digits (0-9).

Fig. 9 Finite automaton for identifiers (diagram not reproduced)

   get first input-character
   if input-character in ['A' .. 'Z'] then
      begin
         while input-character in ['A' .. 'Z', '0' .. '9'] do
            begin
               get next input-character
            end {while}
      end {if first is ['A' .. 'Z']}
   else
      return (token-error)

Fig. 10 Algorithm corresponding to the finite automaton of fig. 9
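A minimal Python sketch of the same automaton follows; the function name scan_identifier and the return conventions are illustrative assumptions, not from the text.

   def scan_identifier(text, pos=0):
       """Recognize an identifier starting at text[pos], following fig. 9:
       move to the accepting state on a letter, stay there on letters/digits."""
       letters = set("ABCDEFGHIJKLMNOPQRSTUVWXYZ")
       digits = set("0123456789")
       if pos >= len(text) or text[pos] not in letters:
           return None          # token-error: the first character must be a letter
       end = pos + 1
       while end < len(text) and text[end] in letters | digits:
           end += 1             # remain in the final state while letters/digits follow
       return text[pos:end]     # the recognized identifier token

   print(scan_identifier("SUMSQ DIV 100"))   # prints SUMSQ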

SYNTACTIC ANALYSIS

During syntactic analysis, the source statements are recognized as language constructs described by the grammar being used; this process can be thought of as building the parse tree for the statements being translated. Parsing techniques are divided into two general classes: bottom-up and top-down. Top-down methods begin with the rule of the grammar that specifies the goal of the analysis (i.e., the root of the tree) and attempt to construct the tree so that the terminal nodes match the statement being analyzed. Bottom-up methods begin with the terminal nodes of the tree and attempt to combine these into successively higher-level nodes until the root is reached.


OPERATOR PRECEDENCE PARSING

The bottom-up parsing technique considered here is called the operator precedence method. This method is based on examining pairs of consecutive operators in the source program and making decisions about which operation should be performed first.

Example:   A + B * C - D        (1)

By the usual rules of arithmetic, multiplication and division have higher precedence than addition and subtraction. Considering the two operators + and * in expression (1), we find that + has lower precedence than *; this is written as

   + ⋖ *    [+ has lower precedence than *]

Similarly, for the pair * and - we find that * ⋗ -  [* has higher precedence than -]. The operator precedence method uses such observations to guide the parsing process.

Applying these observations to expression (1), we have

   +  ⋖  *      and      *  ⋗  -                (2)

[Fig. 11, the precedence matrix for the grammar of fig. 5, is not reproduced here. For each pair of terminal symbols (PROGRAM, VAR, BEGIN, END, INTEGER, FOR, READ, WRITE, TO, DO, ;, :, ,, :=, +, -, *, DIV, (, ), id, int) that can appear together in a legal statement, the matrix gives the relation ⋖, ≐ or ⋗.]

Fig. 11 Precedence Matrix for the Grammar of fig. 5

Equation (2) implies that the subexpression B * C is to be computed before either of the other operations in the expression. In terms of the parse tree, this means that the * operation appears at a lower level than either + or -. Thus a bottom-up parser should recognize B * C, by interpreting it in terms of the grammar, before considering the surrounding terms.

The first step in constructing an operator-precedence parser is to determine the precedence relations between the operators of the grammar. Here "operator" is taken to mean any terminal symbol (i.e., any token), so there are also precedence relations involving tokens such as BEGIN, READ, id and ( . For the grammar in fig. 5, the precedence relations are given in fig. 11. Examples:

   PROGRAM ≐ VAR   : these two tokens have equal precedence.
   BEGIN ⋖ FOR     : BEGIN has lower precedence than FOR.

Note that the relations are not symmetric; for example, ; ⋗ END and END ⋗ ; — when ; is followed by END, the ; has higher precedence, and when END is followed by ; , the END has higher precedence. Wherever no precedence relation exists in the table, the two tokens cannot appear together in any legal statement; if such a combination occurs during parsing, it should be recognized as an error.

Let us now apply operator-precedence parsing to statements of the grammar in fig. 5.

Example:

Pascal statement:   BEGIN READ (VALUE);

The statement is scanned from left to right, one token at a time. For each pair of consecutive terminals, the precedence relation between them is determined. Fig. 12(a) shows that the parser has identified the portion of the statement delimited by the relations ⋖ and ⋗ as the part to be interpreted next in terms of the grammar.

   (a)  . . . BEGIN  READ  (  id  )  . . .          relations:  ⋖  ≐  ⋖  ⋗
   (b)  . . . BEGIN  READ  (  < N1 >  )  ;  . . .   relations:  ⋖  ≐  ≐  ⋗
   (c)  . . . BEGIN  < N2 >  ;  . . .
   (d)  parse tree:   < N2 >  is  READ ( < N1 > ),  where  < N1 >  is  id {VALUE}

Fig. 12 Operator-Precedence Parse of BEGIN READ (VALUE)

According to the grammar, the id VALUE may be considered a < factor > (rule 12), a < term > (rule 11), an < exp > (rule 10) or an < id-list > (rule 6). In an operator-precedence parse it is not necessary to indicate exactly which non-terminal is being recognized; the id is simply interpreted as some non-terminal < N1 >. The new version of the statement is shown in fig. 12(b). An operator-precedence parser generally uses a stack to save tokens that have been scanned but not yet parsed, so that it can re-examine them in this way. Precedence relations hold only between terminal symbols, so < N1 > is not involved in this process, and a relationship is determined between ( and ). The portion READ ( < N1 > ) corresponds to rule 13 of the grammar; this is the only rule that could be applied in recognizing this portion of the program, so the whole sequence is simply interpreted as some non-terminal < N2 >. Fig. 12(c) shows this interpretation, and the resulting parse tree is given in fig. 12(d).

Note: (1) The parse tree obtained here is the same as the one shown earlier for the READ statement, except for the names of the non-terminal symbols involved. (2) The names of the non-terminals are chosen arbitrarily.
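The small Python sketch below (not from the text) reproduces just the first step of this process — determining the relation between each pair of consecutive terminals — using only the few entries of fig. 11 needed for BEGIN READ ( id ) ; the dictionary contents are taken from the relations shown in fig. 12(a) and 12(b).

   PREC = {
       ("BEGIN", "READ"): "<",   # BEGIN has lower precedence than READ
       ("READ",  "("):    "=",
       ("(",     "id"):   "<",
       ("id",    ")"):    ">",
       ("(",     ")"):    "=",
       (")",     ";"):    ">",
   }

   def relations(terminals):
       """Relation between each consecutive pair of terminals; the portion
       delimited by < and > is the part to be reduced next."""
       return [(a, PREC.get((a, b), "?"), b)
               for a, b in zip(terminals, terminals[1:])]

   print(relations(["BEGIN", "READ", "(", "id", ")", ";"]))
   # [('BEGIN', '<', 'READ'), ('READ', '=', '('), ('(', '<', 'id'),
   #  ('id', '>', ')'), (')', '>', ';')]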

Example:

   VARIANCE := SUMSQ DIV 100 - MEAN * MEAN ;

At each step the portion of the statement delimited by ⋖ and ⋗ is recognized and replaced by a non-terminal:

   (i)     . . . id1 := id2 DIV . . .                  id2 {SUMSQ} is recognized as < N1 >
   (ii)    . . . id1 := < N1 > DIV int - . . .         int {100} is recognized as < N2 >
   (iii)   . . . id1 := < N1 > DIV < N2 > - . . .      < N1 > DIV < N2 > is recognized as < N3 >
   (iv)    . . . id1 := < N3 > - id3 * . . .           id3 {MEAN} is recognized as < N4 >
   (v)     . . . id1 := < N3 > - < N4 > * id4 ;        id4 {MEAN} is recognized as < N5 >
   (vi)    . . . id1 := < N3 > - < N4 > * < N5 > ;     < N4 > * < N5 > is recognized as < N6 >
   (vii)   . . . id1 := < N3 > - < N6 > ;              < N3 > - < N6 > is recognized as < N7 >
   (viii)  . . . id1 := < N7 > ;                       id1 := < N7 > is recognized as the complete assignment
   (ix)    the resulting parse tree, with id1 {VARIANCE}, id2 {SUMSQ}, int {100}, id3 {MEAN}, id4 {MEAN}
           [diagram not reproduced]

SHIFT-REDUCE PARSING

Operator-precedence parsing is one example of a more general class of bottom-up techniques called shift-reduce parsing. This method makes use of a stack to store tokens that have not yet been recognized in terms of the grammar. The actions of the parser are controlled by entries in a table, which is somewhat similar to the precedence matrix. The two main actions of a shift-reduce parser are

   Shift:   push the current token onto the stack.
   Reduce:  recognize symbols on top of the stack according to a rule of the grammar.

Example:

   Step   Token stream                       Stack (top at left)                    Action
    1     . . . BEGIN READ ( id ) . . .      . . .                                  shift
    2     . . . READ ( id ) . . .            BEGIN  . . .                           shift
    3     . . . ( id ) . . .                 READ  BEGIN  . . .                     shift
    4     . . . id ) . . .                   (  READ  BEGIN  . . .                  shift
    5     . . . ) . . .                      id  (  READ  BEGIN  . . .              reduce
    6     . . . ) . . .                      < id-list >  (  READ  BEGIN  . . .     shift
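The following minimal Python sketch (illustrative only) reproduces the shift and reduce actions traced above; it handles just the fragment BEGIN READ ( id ) and the single reduction shown.

   def shift_reduce_trace(tokens):
       stack, trace = [], []
       for tok in tokens:
           stack.append(tok)                      # shift: push the current token
           trace.append(("shift", list(stack)))
           if tok == "id":                        # reduce: id is recognized as <id-list>
               stack.pop()
               stack.append("<id-list>")
               trace.append(("reduce", list(stack)))
       return trace

   for action, stack in shift_reduce_trace(["BEGIN", "READ", "(", "id", ")"]):
       print(action, stack)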

Explanation:
1. The parser shifts (pushes the current token onto the stack) when it encounters BEGIN.
2-4. Three more shifts push the next three tokens (READ, (, id) onto the stack.
5. The reduce action is invoked. The reduce converts the token on top of the stack to a non-terminal symbol of the grammar: id becomes < id-list >.
6. The shift pushes ) onto the stack, to be reduced later as part of the READ statement.

Note: Shift roughly corresponds to the action taken by an operator-precedence parser when it encounters the relations ⋖ and ≐. Reduce roughly corresponds to the action taken when an operator-precedence parser encounters the relation ⋗.

RECURSIVE DESCENT PARSING

Recursive descent is a top-down parsing technique. A recursive-descent parser is made up of a procedure for each non-terminal symbol in the grammar. When a procedure is called, it attempts to find a substring of the input, beginning with the current token, that can be interpreted as the non-terminal with which the procedure is associated. In the process it may call other procedures, or call itself recursively, to search for other non-terminals. If the procedure finds the non-terminal that is its goal, it returns an indication of success to its caller and advances the current-token pointer past the substring it has just recognized. If the procedure is unable to find a substring that can be interpreted as the desired non-terminal, it returns an indication of failure.

Example:   < read > ::= READ ( < id-list > )


The procedure for < read > in a recursive-descent parser first examines the next two input tokens, looking for READ and ( . If these are found, the procedure for < read > then calls the procedure for < id-list >. If that procedure succeeds, the < read > procedure examines the next input token, looking for ) . If all these tests are successful, the < read > procedure returns an indication of success; otherwise it returns an indication of failure.

There is a problem in writing a complete set of procedures for the grammar of fig. 5. For example, the procedure for < id-list >, corresponding to rule 6, would be unable to decide between its two alternatives, since both id and < id-list > can begin with id:

   < id-list > ::= id | < id-list > , id

If the procedure somehow decided to try the second alternative, it would immediately call itself recursively to find an < id-list >, which leads to an unending chain of calls. Top-down parsers cannot be used directly with a grammar that contains this kind of immediate left recursion. The same problem occurs for rules 3, 7, 10 and 11. Fig. 13 therefore shows rules 3, 6, 7, 10 and 11 rewritten, using { } to denote repetition, so that the left recursion is removed.

    3.  < dec-list >   ::=  < dec > { ; < dec > }
    6.  < id-list >    ::=  id { , id }
    7.  < stmt-list >  ::=  < stmt > { ; < stmt > }
   10.  < exp >        ::=  < term > { + < term > | - < term > }
   11.  < term >       ::=  < factor > { * < factor > | DIV < factor > }

Fig. 13 Grammar rules modified to remove left recursion

Fig. 14 illustrates a recursive-descent parse of the READ statement READ (VALUE);. The modified grammar is used in the procedure for the non-terminal < id-list >. It is assumed that TOKEN contains the code of the next input token.

   PROCEDURE READ
      begin
         FOUND := FALSE
         if TOKEN = 8 {READ} then
            begin
               advance to next token
               if TOKEN = 20 { ( } then
                  begin
                     advance to next token
                     if IDLIST returns success then
                        if TOKEN = 21 { ) } then
                           begin
                              FOUND := TRUE
                              advance to next token
                           end {if ) }
                  end {if ( }
            end {if READ}
         if FOUND = TRUE then
            return success
         else
            return failure
      end {READ}

Fig. 14 Recursive-Descent Parse Procedure for < read >

   PROCEDURE IDLIST
      begin
         FOUND := FALSE
         if TOKEN = 22 {id} then
            begin
               FOUND := TRUE
               advance to next token
               while (TOKEN = 14 { , }) and (FOUND = TRUE) do
                  begin
                     advance to next token
                     if TOKEN = 22 {id} then
                        advance to next token
                     else
                        FOUND := FALSE
                  end {while}
            end {if id}
         if FOUND = TRUE then
            return success
         else
            return failure
      end {IDLIST}

Fig. 15 Recursive-Descent Parse Procedure for < id-list >

The IDLIST procedure of fig. 15 signals an error, by indicating failure in its return value, if a comma is not followed by an id. If a sequence of tokens such as "id ," (an id followed by a comma that is not followed by another id) could be a legal construct according to the grammar, this recursive-descent technique would not work properly.

Fig. 16 gives a graphic representation of the recursive-descent parsing process for the statement being analyzed.
(i)   The READ procedure has been invoked and has examined the tokens READ and ( from the input stream (indicated by the dashed lines).
(ii)  READ has called IDLIST (indicated by the solid line), which has examined the token id.
(iii) IDLIST has returned to READ indicating success; READ has then examined the input token ) .

Note that the sequence of procedure calls and token examinations has completely defined the structure of the READ statement. The parse tree was constructed beginning at the root, hence the term top-down parsing.

[Diagrams (i)-(iii) not reproduced.]

Fig. 16 Recursive-Descent Parse of READ (VALUE)

Fig. 17 illustrates a recursive-descent parse of the assignment statement

   VARIANCE := SUMSQ DIV 100 - MEAN * MEAN

Fig. 17 shows the procedures for the non-terminal symbols that are involved in parsing this statement.

   PROCEDURE ASSIGN
      begin
         FOUND := FALSE
         if TOKEN = 22 {id} then
            begin
               advance to next token
               if TOKEN = 15 { := } then
                  begin
                     advance to next token
                     if EXP returns success then FOUND := TRUE
                  end {if := }
            end {if id}
         if FOUND = TRUE then return success
         else return failure
      end {ASSIGN}

   PROCEDURE EXP
      begin
         FOUND := FALSE
         if TERM returns success then
            begin
               FOUND := TRUE
               while ((TOKEN = 16 { + }) or (TOKEN = 17 { - })) and (FOUND = TRUE) do
                  begin
                     advance to next token
                     if TERM returns failure then FOUND := FALSE
                  end {while}
            end {if TERM}
         if FOUND = TRUE then return success
         else return failure
      end {EXP}

   PROCEDURE TERM
      begin
         FOUND := FALSE
         if FACTOR returns success then
            begin
               FOUND := TRUE
               while ((TOKEN = 18 { * }) or (TOKEN = 19 { DIV })) and (FOUND = TRUE) do
                  begin
                     advance to next token
                     if FACTOR returns failure then FOUND := FALSE
                  end {while}
            end {if FACTOR}
         if FOUND = TRUE then return success
         else return failure
      end {TERM}

   PROCEDURE FACTOR
      begin
         FOUND := FALSE
         if (TOKEN = 22 { id }) or (TOKEN = 23 { int }) then
            begin
               FOUND := TRUE
               advance to next token
            end {if id or int}
         else if TOKEN = 20 { ( } then
            begin
               advance to next token
               if EXP returns success then
                  if TOKEN = 21 { ) } then
                     begin
                        FOUND := TRUE
                        advance to next token
                     end {if ) }
            end {if ( }
         if FOUND = TRUE then return success
         else return failure
      end {FACTOR}

Fig. 17 Recursive-Descent Parse Procedures for an Assignment Statement
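For comparison, here is a compact Python rendering of the same EXP / TERM / FACTOR logic, parsing a list of token codes from fig. 7; it is an illustrative sketch, not the text's code.

   PLUS, MINUS, MUL, DIV, LPAREN, RPAREN, ID, INT = 16, 17, 18, 19, 20, 21, 22, 23

   def exp(toks, i):
       ok, i = term(toks, i)
       while ok and i < len(toks) and toks[i] in (PLUS, MINUS):
           ok, i = term(toks, i + 1)          # <exp> ::= <term> { + <term> | - <term> }
       return ok, i

   def term(toks, i):
       ok, i = factor(toks, i)
       while ok and i < len(toks) and toks[i] in (MUL, DIV):
           ok, i = factor(toks, i + 1)        # <term> ::= <factor> { * <factor> | DIV <factor> }
       return ok, i

   def factor(toks, i):
       if i < len(toks) and toks[i] in (ID, INT):
           return True, i + 1                 # <factor> ::= id | int
       if i < len(toks) and toks[i] == LPAREN:
           ok, i = exp(toks, i + 1)           # <factor> ::= ( <exp> )
           if ok and i < len(toks) and toks[i] == RPAREN:
               return True, i + 1
       return False, i

   # SUMSQ DIV 100 - MEAN * MEAN  ->  [22, 19, 23, 17, 22, 18, 22]
   print(exp([22, 19, 23, 17, 22, 18, 22], 0))   # (True, 7)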


A step-by-step representation of the procedure calls and token examinations is shown in fig. 18.

[Diagrams (i)-(viii) not reproduced.]

Fig. 18 Step-by-Step Representation of the Recursive-Descent Parse of VARIANCE := SUMSQ DIV 100 - MEAN * MEAN

GENERATION OF OBJECT CODE

After the analysis of the source statements, object code is to be generated. The code-generation technique uses a set of routines, one for each rule or alternative rule in the grammar. The routines that are related to the meaning of the corresponding construct in the language are called semantic routines. When the parser recognizes a portion of the source program according to some rule of the grammar, the corresponding semantic routine is executed. These semantic routines generate object code directly, and hence they are also referred to as code-generation routines.

The code-generation routines discussed here are designed for use with the grammar in fig. 5. This is done to emphasize the point that code-generation techniques need not be associated with any particular parsing method; the parsing techniques discussed above do not follow this grammar exactly (the operator-precedence method ignores certain non-terminals, and the recursive-descent method must use a slightly modified grammar). The code generated is for the SIC/XE machine.

The technique uses two data structures: (1) a list and (2) a stack.

LISTCOUNT: a variable used to keep a count of the number of items currently in the list. The token specifiers are denoted by ST (token). Example:

   id    ST (id)  : the name of the identifier
   int   ST (int) : the value of the integer, e.g. #100

The code-generation routines create segments of object code for the compiled program; a symbolic representation of this code is given using SIC assembler language.

LC (location counter): a counter that is updated to reflect the next available address in the compiled program (exactly as it is in an assembler).

Code generation for a READ statement: the parse tree for READ (VALUE) consists of a < read > node whose children are READ, ( , the < id-list > (the id VALUE) and ) .

Fig. 19(a) Parse Tree for READ (VALUE)

The symbolic object code generated for the statement is

   +JSUB   XREAD
    WORD   1
    WORD   VALUE

Fig. 19(b) Object Code for READ (VALUE)

Using the rules of the grammar, the parser recognizes at each step the leftmost substring of the input that can be interpreted. In an operator-precedence parse, the recognition occurs when a substring of the input is reduced to some non-terminal < N >. In a recursive-descent parse, the recognition occurs when a procedure returns to its caller, indicating success. Thus the parser first recognizes the id VALUE as an < id-list >, and then recognizes the complete statement as a < read >.

The symbolic representation of the object code to be generated for the READ statement is shown in fig. 19(b). This code consists of a call to a subroutine XREAD, which would be part of a standard library associated with the compiler; any program that wants to perform a READ operation can call it. XREAD is linked together with the generated object program by a linking loader or a linkage editor. This technique is commonly used for the compilation of statements that perform relatively complex functions: the use of a subroutine avoids the repetitive generation of large amounts of in-line code, which makes the object program smaller.

The parameter list for XREAD is defined immediately after the JSUB that calls it. The first word is the number of variables that will be assigned values by the READ; the following words give the addresses of these variables. Fig. 19(c) shows the routines that might be used to accomplish this code generation.

1.  < id-list > ::= id
        add ST (id) to list
        add 1 to LISTCOUNT
2.  < id-list > ::= < id-list > , id
        add ST (id) to list
        add 1 to LISTCOUNT
3.  < read > ::= READ ( < id-list > )
        generate [ +JSUB  XREAD ]
        record external reference to XREAD
        generate [ WORD  LISTCOUNT ]
        for each item on list do
           begin
              remove ST (ITEM) from list
              generate [ WORD  ST (ITEM) ]
           end
        LISTCOUNT := 0

Fig. 19 (c) Routine for READ Code Generation

Routines (1) and (2) correspond to the two alternatives for < id-list >, that is, < id-list > ::= id | < id-list > , id. In each case the token specifier ST (id) for the new identifier being added to the < id-list > is inserted into the list used by the code-generation routines, and LISTCOUNT is updated to reflect the insertion. After the entire < id-list > has been parsed, the list contains the token specifiers for all the identifiers that are part of the < id-list >. When the < read > statement is recognized, the token specifiers are removed from the list and used to generate the object code for the READ.
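A hedged Python sketch of routine (3) of fig. 19(c) follows; the function name gen_read and the string form of the output are illustrative assumptions.

   def gen_read(id_list):
       """id_list: token specifiers ST(id) collected by routines (1) and (2)."""
       code = ["+JSUB  XREAD"]                  # call the library subroutine
       code.append(f" WORD  {len(id_list)}")    # number of variables to be read
       for st in id_list:                       # one WORD per variable address
           code.append(f" WORD  {st}")
       return code                              # LISTCOUNT would then be reset to 0

   print(gen_read(["VALUE"]))
   # ['+JSUB  XREAD', ' WORD  1', ' WORD  VALUE']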


Code-Generation Process for the Assignment Statement

Example:  VARIANCE := SUMSQ DIV 100 - MEAN * MEAN

The parse tree for this statement is shown in fig. 20. Most of the work of parsing involves the analysis of the < exp > on the right-hand side of the := .

[Parse tree diagram not reproduced.]

Fig. 20 Parse Tree for VARIANCE := SUMSQ DIV 100 - MEAN * MEAN

The parser first recognizes the id SUMSQ as a < factor > and a < term >; then it recognizes the int 100 as a < factor >; then it recognizes SUMSQ DIV 100 as a < term >, and so forth. The order in which the parts of the statement are recognized is the same as the order in which the calculations are to be performed. A code-generation routine is called as each portion of the statement is recognized.

Example: consider the rule < term >1 ::= < term >2 * < factor >, for which code is to be generated.

The subscripts are used to distinguish between the two occurrences of < term >. The code-generation routines perform all arithmetic operations using register A, so the result of the multiplication < term >2 * < factor > will be left in register A. Before the multiplication, one of the operands, < term >2, must already be located in register A. We therefore need to keep track of the result left in register A by each segment of code that is generated. This is accomplished by extending the token-specifier idea to the non-terminal nodes of the parse tree: the node specifier ST (< term >1) would be set to rA, indicating that the result of the computation is in register A. The variable REGA is used to indicate the highest-level node of the parse tree whose value is left in register A by the code generated so far; clearly there can be only one such node at any point in the code-generation process. If the value corresponding to a node is not in register A, the specifier for the node is similar to a token


specifier: either a pointer to a symbol-table entry for the variable that contains the value, or an integer constant. Fig. 21 shows the code-generation routines, which manage register A of the machine.

1.  < assign > ::= id := < exp >
        GETA (< exp >)
        generate [ STA  ST (id) ]
        REGA := null

2.  < exp > ::= < term >
        ST (< exp >) := ST (< term >)
        if ST (< exp >) = rA then REGA := < exp >

3.  < exp >1 ::= < exp >2 + < term >
        if ST (< exp >2) = rA then generate [ ADD  ST (< term >) ]
        else if ST (< term >) = rA then generate [ ADD  ST (< exp >2) ]
        else begin
           GETA (< exp >2)
           generate [ ADD  ST (< term >) ]
        end
        ST (< exp >1) := rA
        REGA := < exp >1

4.  < exp >1 ::= < exp >2 - < term >
        if ST (< exp >2) = rA then generate [ SUB  ST (< term >) ]
        else begin
           GETA (< exp >2)
           generate [ SUB  ST (< term >) ]
        end
        ST (< exp >1) := rA
        REGA := < exp >1

5.  < term > ::= < factor >
        ST (< term >) := ST (< factor >)
        if ST (< term >) = rA then REGA := < term >

6.  < term >1 ::= < term >2 * < factor >
        if ST (< term >2) = rA then generate [ MUL  ST (< factor >) ]
        else if ST (< factor >) = rA then generate [ MUL  ST (< term >2) ]
        else begin
           GETA (< term >2)
           generate [ MUL  ST (< factor >) ]
        end
        ST (< term >1) := rA
        REGA := < term >1

7.  < term >1 ::= < term >2 DIV < factor >
        if ST (< term >2) = rA then generate [ DIV  ST (< factor >) ]
        else begin
           GETA (< term >2)
           generate [ DIV  ST (< factor >) ]
        end
        ST (< term >1) := rA
        REGA := < term >1

8.  < factor > ::= id
        ST (< factor >) := ST (id)

9.  < factor > ::= int
        ST (< factor >) := ST (int)

10. < factor > ::= ( < exp > )
        ST (< factor >) := ST (< exp >)
        if ST (< factor >) = rA then REGA := < factor >

Fig. 21 Code-Generation Routines

If the node specifier for either operand is rA, the corresponding value is already in register A, and the routine simply generates a MUL instruction; the node specifier for the other operand gives the operand address for this MUL. Otherwise, the procedure GETA is called. The GETA procedure is shown in fig. 22.

   PROCEDURE GETA (NODE)
      begin
         if REGA = null then
            generate [ LDA  ST (NODE) ]
         else if ST (NODE) ≠ rA then
            begin
               create a new temporary variable TEMPi
               generate [ STA  TEMPi ]
               record forward reference to TEMPi
               ST (REGA) := TEMPi
               generate [ LDA  ST (NODE) ]
            end {if ≠ rA}
         ST (NODE) := rA
         REGA := NODE
      end {GETA}

Fig. 22 The GETA Procedure

The procedure GETA generates an LDA instruction to load the value associated with NODE into register A. Before loading the value, it checks whether REGA is null; if it is not, GETA generates an STA instruction to save the current contents of register A into a temporary variable. There can be any number of temporary variables (TEMP1, TEMP2, . . .); the temporary variables used during a compilation are assigned storage locations at the end of the object program. The node specifier for the node whose value was previously in register A (indicated by REGA) is reset to indicate the temporary variable used.

After the necessary instructions are generated, the code-generation routine sets ST (< term >1) and REGA to indicate that the value corresponding to < term >1 is now in register A. This completes the code-generation action for the * operation. The code-generation routine for the + operation is analogous. The routines for DIV and - are similar, except that for these operations it is necessary for the first operand to be in register A. The code generation for < assign > consists of bringing the value to be assigned into register A (using GETA) and then generating an STA instruction. The remaining rules in fig. 21 do not require the generation of any instructions, since no computation or data movement is involved. The object code generated for the assignment statement is shown in fig. 22:

   LDA   SUMSQ
   DIV   #100
   STA   TMP1
   LDA   MEAN
   MUL   MEAN
   STA   TMP2
   LDA   TMP1
   SUB   TMP2
   STA   VARIANCE

Fig. 22 Object Code Generated for the Assignment Statement
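The following hedged Python sketch mirrors the GETA logic of fig. 22; node specifiers are kept in a dictionary ST, "rA" marks the node whose value is in register A, and the class and method names (Gen, generate, geta) are illustrative assumptions.

   class Gen:
       def __init__(self):
           self.code, self.ST, self.REGA, self.ntemps = [], {}, None, 0

       def generate(self, op, operand):
           self.code.append(f"{op:4} {operand}")

       def geta(self, node):
           if self.REGA is None:
               self.generate("LDA", self.ST[node])
           elif self.ST[node] != "rA":
               self.ntemps += 1
               temp = f"TMP{self.ntemps}"
               self.generate("STA", temp)       # save the previous register-A value
               self.ST[self.REGA] = temp        # its node now refers to the temporary
               self.generate("LDA", self.ST[node])
           self.ST[node] = "rA"
           self.REGA = node

   g = Gen()
   g.ST["exp2"] = "SUMSQ"
   g.geta("exp2")                               # brings SUMSQ into register A
   g.generate("DIV", "#100")                    # routine 7 of fig. 21 would then emit DIV
   print(g.code)                                # ['LDA  SUMSQ', 'DIV  #100']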

For the rule < prog >, the code-generation routine is shown in fig. 23. When < prog > is recognized, storage locations are assigned to any temporary (TEMP) variables that have been used. Any references to these variables are then fixed in the object code, using the same process performed for forward references by a one-pass assembler. The compiler also generates any modification records required to describe external references to library subroutines.

< prog > ::= PROGRAM < prog-name > VAR < dec-list > BEGIN < stmt-list > END.
     generate [ LDL  RETADR ]
     generate [ RSUB ]
     for each TEMP variable used do
        generate [ TEMP  RESW  1 ]
     insert [ J  EXADDR ] {jump to first executable instruction} in bytes 3-5 of the object program
     fix up forward references to TEMP variables
     generate modification records for external references
     generate [ END ]

Fig. 23 Code-Generation Routine for < prog >

The routine for < prog-name > generates header information in the object program that is similar to that created by the START and EXTREF assembler directives. It also generates instructions to save the return address and to jump to the first executable instruction in the compiled program. Fig. 24 shows the code-generation routine for < prog-name >.

< prog-name > ::= id
     generate [ START  0 ]
     generate [ EXTREF  XREAD, XWRITE ]
     generate [ STL  RETADR ]
     add 3 to LC {leave room for jump to first executable instruction}
     generate [ RETADR  RESW  1 ]

Fig. 24 Code-Generation Routine for < prog-name >

Similarly, fig. 25 shows the code-generation routines for < dec-list >, < dec >, < write >, < for > and < index-exp >.

< dec-list > ::= {either alternative}
     save LC as EXADDR {tentative address of first executable instruction}

< dec > ::= < id-list > : < type >
     for each item on list do
        begin
           remove ST (NAME) from list
           enter LC into symbol table as address for NAME
           generate [ ST (NAME)  RESW  1 ]
        end
     LISTCOUNT := 0

< write > ::= WRITE ( < id-list > )
     generate [ +JSUB  XWRITE ]
     record external reference to XWRITE
     generate [ WORD  LISTCOUNT ]
     for each item on list do
        begin
           remove ST (ITEM) from list
           generate [ WORD  ST (ITEM) ]
        end
     LISTCOUNT := 0

< for > ::= FOR < index-exp > DO < body >
     pop JUMPADDR from stack {address of jump out of loop}
     pop ST (INDEX) from stack {index variable}
     pop LOOPADDR from stack {beginning address of loop}
     generate [ LDA  ST (INDEX) ]
     generate [ ADD  #1 ]
     generate [ J  LOOPADDR ]
     insert [ JGT  LC ] at location JUMPADDR

< index-exp > ::= id := < exp >1 TO < exp >2
     GETA (< exp >1)
     push LC onto stack {beginning address of loop}
     push ST (id) onto stack {index variable}
     generate [ STA  ST (id) ]
     generate [ COMP  ST (< exp >2) ]
     push LC onto stack {address of jump out of loop}
     add 3 to LC {leave room for jump instruction}
     REGA := null

Fig. 25 Other Code-Generation Routines

There is no code generation for the rules

   < type >       ::=  INTEGER
   < stmt-list >  ::=  {either alternative}
   < stmt >       ::=  {any alternative}
   < body >       ::=  {either alternative}

For the Pascal program in fig. 1, the object code produced by the complete code-generation process is shown in fig. 26.

   STATS    START   0                 {program header}
            EXTREF  XREAD, XWRITE
            STL     RETADR            {save return address}
            J       {EXADDR}
   RETADR   RESW    1
   SUM      RESW    1
   SUMSQ    RESW    1
   I        RESW    1
   VALUE    RESW    1
   MEAN     RESW    1
   VARIANCE RESW    1
   {EXADDR} LDA     #0                {SUM := 0}
            STA     SUM
            LDA     #0                {SUMSQ := 0}
            STA     SUMSQ
            LDA     #1                {FOR I := 1 TO 100}
   {L1}     STA     I
            COMP    #100
            JGT     {L2}
           +JSUB    XREAD             {READ (VALUE)}
            WORD    1
            WORD    VALUE
            LDA     SUM               {SUM := SUM + VALUE}
            ADD     VALUE
            STA     SUM
            LDA     VALUE             {SUMSQ := SUMSQ + VALUE * VALUE}
            MUL     VALUE
            ADD     SUMSQ
            STA     SUMSQ
            LDA     I                 {end of FOR loop}
            ADD     #1
            J       {L1}
   {L2}     LDA     SUM               {MEAN := SUM DIV 100}
            DIV     #100
            STA     MEAN
            LDA     SUMSQ             {VARIANCE := SUMSQ DIV 100 - MEAN * MEAN}
            DIV     #100
            STA     TEMP1
            LDA     MEAN
            MUL     MEAN
            STA     TEMP2
            LDA     TEMP1
            SUB     TEMP2
            STA     VARIANCE
           +JSUB    XWRITE            {WRITE (MEAN, VARIANCE)}
            WORD    2
            WORD    MEAN
            WORD    VARIANCE
            LDL     RETADR
            RSUB
   TEMP1    RESW    1                 {working variables used}
   TEMP2    RESW    1
            END

Fig. 26 Object Code Generated for the Pascal Program

8.1 MACHINE-DEPENDENT COMPILER FEATURES

At an elementary level, all code generation is machine dependent, because we must know the instruction set of a computer to generate code for it. There are, however, more complex issues involved, such as:

- allocation of registers
- rearrangement of machine instructions to improve efficiency of execution

Such code optimization is normally done by considering an intermediate form of the program being compiled. In this intermediate form, the syntax and semantics of


the source statements have been completely analyzed, but the actual translation into machine code has not yet been performed. It is easier to analyze and manipulate this intermediate code than to perform the operations on either the source program or the machine code. The intermediate form used in a compiler is not strictly dependent on the machine for which the compiler is designed.

8.1.1 INTERMEDIATE FORM OF THE PROGRAM

The intermediate form discussed here represents the executable instructions of the program as a sequence of quadruples. Each quadruple is of the form

   operation, OP1, OP2, result

where operation is some function to be performed by the object code, OP1 and OP2 are the operands for the operation, and result designates where the resulting value is to be placed.

Example 1:   SUM := SUM + VALUE   could be represented as

   +  , SUM , VALUE , i1
   := , i1  ,       , SUM

The entry i1 designates an intermediate result (SUM + VALUE); the second quadruple assigns the value of this intermediate result to SUM. Assignment is treated as a separate operation ( := ).

Example 2:   VARIANCE := SUMSQ DIV 100 - MEAN * MEAN

   DIV , SUMSQ , #100 , i1
   *   , MEAN  , MEAN , i2
   -   , i1    , i2   , i3
   :=  , i3    ,      , VARIANCE
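A small illustrative Python sketch (not from the text) of the same idea, with quadruples represented as tuples:

   # (operation, op1, op2, result) for SUM := SUM + VALUE
   quads = [
       ("+",  "SUM", "VALUE", "i1"),   # i1 := SUM + VALUE
       (":=", "i1",  None,    "SUM"),  # SUM := i1
   ]

   for op, op1, op2, result in quads:
       print(f"{op:3} {op1 or '':8} {op2 or '':8} {result}")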

Note: The quadruples appear in the order in which the corresponding object-code instructions are to be executed. This greatly simplifies the task of analyzing the code for purposes of optimization, and it also makes the quadruples easy to translate into machine instructions.

For the Pascal source program in fig. 1, the corresponding quadruples are shown in fig. 27. The READ and WRITE statements are represented with a CALL operation, followed by PARAM quadruples that specify the parameters of the READ or WRITE. The JGT operation in quadruple 4 compares the values of its two operands and jumps to quadruple 15 if the first operand is greater than the second. The J operation in quadruple 14 jumps unconditionally to quadruple 4.

   Line  Operation  OP1     OP2     Result      Pascal statement
    1    :=         #0              SUM         SUM := 0
    2    :=         #0              SUMSQ       SUMSQ := 0
    3    :=         #1              I           FOR I := 1 TO 100
    4    JGT        I       #100    (15)
    5    CALL       XREAD                       READ (VALUE)
    6    PARAM      VALUE
    7    +          SUM     VALUE   i1          SUM := SUM + VALUE
    8    :=         i1              SUM
    9    *          VALUE   VALUE   i2          SUMSQ := SUMSQ + VALUE * VALUE
   10    +          SUMSQ   i2      i3
   11    :=         i3              SUMSQ
   12    +          I       #1      i4          end of FOR loop
   13    :=         i4              I
   14    J                          (4)
   15    DIV        SUM     #100    i5          MEAN := SUM DIV 100
   16    :=         i5              MEAN
   17    DIV        SUMSQ   #100    i6          VARIANCE := SUMSQ DIV 100 - MEAN * MEAN
   18    *          MEAN    MEAN    i7
   19    -          i6      i7      i8
   20    :=         i8              VARIANCE
   21    CALL       XWRITE                      WRITE (MEAN, VARIANCE)
   22    PARAM      MEAN
   23    PARAM      VARIANCE

Fig. 27 Intermediate Code for the Pascal Program

8.1.2 MACHINE-DEPENDENT CODE OPTIMIZATION

There are several possibilities for performing machine-dependent code optimization.

Assignment and use of registers: Here we concentrate on the use of registers as instruction operands. The bottleneck in all computers is the access of data from memory; if machine instructions use registers as operands, the operations are much faster. Therefore we would prefer to keep in registers all variables and intermediate results that will be used later in the program. There are rarely as many registers available as we would like to use, however, so the problem becomes which register value to replace when a register must be assigned for some other purpose. One reasonable approach is to scan the program for the next point at which each register value would be used; the value that will not be needed for the longest time is the one that should be replaced. If the register being reassigned contains the value of some variable already stored in memory, the value can simply be discarded; otherwise, this value must be saved using a temporary variable. This is one of the functions performed by the GETA procedure.

In making register assignments, a compiler must also consider the control flow of the program. If there are jump operations in the program, the register contents may not be what was intended, because the contents may have been changed along a different path; the existence of jump instructions makes it difficult to keep track of register contents. One way to deal with this problem is to divide the program into basic blocks.


A basic block is a sequence of quadruples with one entry point (at the beginning of the block), one exit point (at the end of the block), and no jumps within the block. Since procedure calls can have unpredictable effects on register contents, a CALL operation is usually also considered to begin a new basic block. The assignment and use of registers within a basic block can then be handled as described previously; when control passes from one block to another, all values currently held in registers are saved in temporary variables.

For the program of fig. 27, the quadruples can be divided into five basic blocks:

   Block A : quadruples 1 - 3
   Block B : quadruple 4
   Block C : quadruples 5 - 14
   Block D : quadruples 15 - 20
   Block E : quadruples 21 - 23

[Flow diagram not reproduced.]

Fig. 28

Fig. 28 shows the basic blocks and the flow graph for the quadruples in fig. 27. An arrow from one block to another indicates that control can pass directly from the last quadruple of the first block to the first quadruple of the second. This kind of representation is called a flow graph.
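A hedged Python sketch (not from the text) of dividing a quadruple list into basic blocks: a new block begins at quadruple 1, at any jump target, after any jump, and at any CALL. Jump targets are assumed to be stored as integers in the result field, as in the "(15)" and "(4)" entries of fig. 27; applied to those 23 quadruples it yields the blocks A-E listed above.

   def basic_blocks(quads, jump_ops=("J", "JGT", "JLE")):
       leaders = {1}
       for i, (op, op1, op2, result) in enumerate(quads, start=1):
           if op in jump_ops:
               leaders.add(i + 1)                      # the block after the jump
               if isinstance(result, int):
                   leaders.add(result)                 # the jump target
           elif op == "CALL":
               leaders.add(i)                          # a CALL begins a new block
       leaders = sorted(l for l in leaders if l <= len(quads))
       return [(start, (leaders + [len(quads) + 1])[k + 1] - 1)
               for k, start in enumerate(leaders)]

   # e.g. quads[3] = ("JGT", "I", "#100", 15) for quadruple 4 of fig. 27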

Rearranging quadruples before machine code generation:

Example: the quadruples for VARIANCE := SUMSQ DIV 100 - MEAN * MEAN and the machine code generated from them using only a single register:

   1)  DIV , SUMSQ , #100 , i1           LDA   SUMSQ
   2)  *   , MEAN  , MEAN , i2           DIV   #100
   3)  -   , i1    , i2   , i3           STA   T1
   4)  :=  , i3    ,      , VARIANCE     LDA   MEAN
                                         MUL   MEAN
                                         STA   T2
                                         LDA   T1
                                         SUB   T2
                                         STA   VARIANCE

Fig. 29

Fig. 29 shows a typical generation of machine code from the quadruples using only a single register. Note that the value of the intermediate result i1 is calculated first and stored in the temporary variable T1; then the value of i2 is calculated, and i3 is to be computed by subtracting i2 from i1.


Even though the value of i2 is already in the register, it is not possible to perform the subtraction directly: it is necessary to store the value of i2 in another temporary variable, T2, and then load the value of i1 from T1 into register A before performing the subtraction. An optimizing compiler could rearrange the quadruples so that the second operand of the subtraction is computed first, which eliminates two memory accesses. The rearranged quadruples and the resulting machine code are:

   1)  *   , MEAN  , MEAN , i2           LDA   MEAN
   2)  DIV , SUMSQ , #100 , i1           MUL   MEAN
   3)  -   , i1    , i2   , i3           STA   T1
   4)  :=  , i3    ,      , VARIANCE     LDA   SUMSQ
                                         DIV   #100
                                         SUB   T1
                                         STA   VARIANCE

Fig. 29 Rearrangement of Quadruples for Code Optimization

Characteristics and instructions of the target machine: There may be special loop-control instructions or addressing modes that can be used to create more efficient object code. On some computers there are high-level machine instructions that can perform complicated functions, such as calling a procedure or manipulating a data structure, in a single operation. Some computers have multiple functional units; the object code can be rearranged to use all (or most) of the units concurrently, which is possible when the result of one unit does not depend on the result of another. On some systems data can flow between functional units without being stored in intermediate registers. An optimizing compiler for such a machine could rearrange object-code instructions to take advantage of these properties.

MACHINE-INDEPENDENT COMPILER FEATURES

Machine-independent compiler features are illustrated here by the methods for handling structured variables such as arrays, and by the problems involved in compiling a block-structured language together with some possible solutions.

STRUCTURED VARIABLES

The structured variables discussed here are arrays, records, strings and sets. The primary considerations are the allocation of storage for such variables and the generation of code to reference them.

Arrays: Pascal array declarations:

(i) Single-dimension array:

A: ARRAY [ 1 . . 10] OF INTEGER


If each integer variable occupies one word of memory, then we require 10 words of memory to store this array. In general, the declaration ARRAY [ l .. u ] OF INTEGER requires ( u - l + 1 ) words of memory.

(ii) Two-dimensional array:

B : ARRAY [ 0 .. 3, 1 . . 3 ] OF INTEGER

This declaration requires (3 - 0 + 1) x (3 - 1 + 1) = 4 x 3 = 12 words of memory. In general, ARRAY [ l1 .. u1, l2 .. u2 ] OF INTEGER requires ( u1 - l1 + 1 ) * ( u2 - l2 + 1 ) words.

The data can be stored in memory in two different ways: row-major and column-major order. If all array elements that have the same value of the first subscript are stored in contiguous locations, the array is said to be in row-major order; this is shown in fig. 30(a). Another way of looking at this is to scan the words of the array in sequence and observe the subscript values: in row-major order, the rightmost subscript varies most rapidly.

[Fig. 30(a): Row 0, then Row 1, then Row 2, . . ., each stored as a contiguous group of elements — diagram not reproduced.]

Fig. 30 (a) Row-major order

Fig. 30(b) shows the column-major way of storing the data in memory: all elements that have the same value of the second subscript are stored together; this is called column-major order. In other words, in column-major order the leftmost subscript varies most rapidly.

To refer to an array element, we must calculate the address of the referenced element relative to the base address of the array. The compiler generates code to place this relative address in an index register; indexed addressing is then used to access the desired array element.

(1) One-dimensional array: On a SIC machine, to access A[6] the address is calculated as

   starting address of the array + (size of each element) * (number of preceding elements)

Assuming the starting address is 1000 and each element occupies 3 bytes on SIC, there are 5 preceding elements, so the address of A[6] is 1000 + 3 * 5 = 1015.

In general, for A : ARRAY [ l .. u ] OF INTEGER, if each array element occupies W bytes of storage and the value of the subscript is S, then the relative address of the referenced element A[S] is given by W * ( S - l ). The code generated to perform such a calculation is shown in fig. 31. The notation A[ i2 ] in quadruple 3 specifies that the generated machine code should refer to A using indexed addressing, after having placed the value of i2 in the index register.

   A : ARRAY [ 1 .. 10 ] OF INTEGER
   . . .
   A[ I ] := 5

   (1)  -  , I  , #1 , i1
   (2)  *  , i1 , #3 , i2
   (3)  := , #5 ,    , A[ i2 ]

Fig. 31 Code Generation for a One-Dimensional Array Reference

(2) Multi-dimensional array: For multi-dimensional arrays we assume row-major order. To access element B[2,3] of the array B : ARRAY [ 0 .. 3, 1 .. 6 ] OF INTEGER, we must skip over two complete rows (rows 0 and 1) before arriving at the beginning of row 2. Each row contains 6 elements, so this means skipping 6 * 2 = 12 array elements. We must then skip over the first two elements of row 2 to arrive at B[2,3], a total of 12 + 2 = 14 elements between the beginning of the array and B[2,3]. If each element occupies 3 bytes as in SIC, B[2,3] is located at relative address 14 * 3 = 42 within the array.

In general, for B : ARRAY [ l1 .. u1, l2 .. u2 ] OF INTEGER stored in row-major order, the relative address of B[ S1, S2 ] is

   W * [ ( S1 - l1 ) * ( u2 - l2 + 1 ) + ( S2 - l2 ) ]

The code generated to perform such an array reference is shown in fig. 32.

   B : ARRAY [ 0 .. 3, 1 .. 6 ] OF INTEGER
   . . .
   B[ I, J ] := 5

   (1)  *  , I  , #6 , i1
   (2)  -  , J  , #1 , i2
   (3)  +  , i1 , i2 , i3
   (4)  *  , i3 , #3 , i4
   (5)  := , #5 ,    , B[ i4 ]

Fig. 32 Code Generation for a Two-Dimensional Array Reference

The symbol-table entry for an array usually specifies:

- the type of the elements in the array,
- the number of dimensions declared, and
- the lower and upper limits for each subscript.

This information is sufficient for the compiler to generate the code required for array references. In some languages, such as FORTRAN 90, the bounds of an array may not be known at compilation time, so the compiler cannot directly generate such code. In that case the compiler creates a descriptor (often called a dope vector) for the array. The descriptor includes space for storing the lower and upper bounds of each array subscript. When storage is allocated for the array, the values of these bounds are computed and stored in the descriptor. The generated code for an array reference uses the values from


the descriptor to calculate relative addresses as required. The descriptor may also include the number of dimensions of the array, the type of the array elements, and a pointer to the beginning of the array. This information can be useful if the allocated array is passed as a parameter to another procedure.

The compilation of other structured variables such as records, strings and sets requires the same kind of storage allocation. The compiler must store information concerning the structure of the variable, use this information to generate code to access components of the structure, and construct a descriptor for situations in which the required information is not known at compilation time.

MACHINE-INDEPENDENT CODE OPTIMIZATION

One important source of code optimization is the elimination of common subexpressions. These are subexpressions that appear at more than one point in the program and that compute the same value. Consider the example in fig. 33(a):

   X, Y : ARRAY [ 0 .. 10, 1 .. 10 ] OF INTEGER
   . . .
   FOR I := 1 TO 10 DO
      X[ I, 2 * J - 1 ] := Y[ I, 2 * J ]

Fig. 33(a)

The subexpression 2 * J is calculated twice. An optimizing compiler should generate code so that the multiplication is performed only once and the result is used in both places. Common subexpressions are usually detected through analysis of an intermediate form of the program; this intermediate form is shown in fig. 33(b).

[Fig. 33(b) lists the quadruples for the statement of fig. 33(a): the loop initialization, the subscript calculation for X, the subscript calculation for Y, the assignment operation, and the end-of-loop jump. The flattened table could not be fully recovered and is not reproduced here.]

Fig. 33(b)

Examining the sequence of quadruples in fig. 33(b), we observe that quadruples 5 and 12 are the same except for the name of the intermediate result produced, and that the operand J is not changed in value between quadruples 5 and 12. (It is not possible to reach quadruple 12 without passing through quadruple 5 first, because the two quadruples are in the same basic block.) Therefore quadruples 5 and 12 compute the same value. This means we can delete quadruple 12 and replace any reference to its result (i10) with a reference to i3, the result of quadruple 5. This substitution eliminates the duplicate calculation of 2 * J, which we identified previously as a common subexpression in the source statement. After the substitution of i3 for i10, quadruples 6 and 13 are the same except for the name of the result; hence quadruple 13 can be removed and i4 substituted for i11 wherever it is used. Similarly, quadruples 10 and 11 can be removed because they are equivalent to quadruples 3 and 4.
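The following hedged Python sketch (not from the text) performs this kind of local common-subexpression elimination within one basic block. It assumes the operands of the expression quadruples are not reassigned within the block (true for the subscript calculations discussed above); the specific quadruple operands in the example are illustrative.

   def eliminate_cse(quads):
       """quads: list of (op, op1, op2, result) within one basic block."""
       computed = {}      # (op, op1, op2) -> result of its first occurrence
       rename = {}        # removed result name -> surviving result name
       out = []
       for op, op1, op2, result in quads:
           op1 = rename.get(op1, op1)
           op2 = rename.get(op2, op2)
           key = (op, op1, op2)
           if op != ":=" and key in computed:
               rename[result] = computed[key]     # drop the duplicate quadruple
           else:
               if op != ":=":
                   computed[key] = result
               out.append((op, op1, op2, result))
       return out

   quads = [("*", "#2", "J", "i3"), ("-", "i3", "#1", "i4"),
            ("*", "#2", "J", "i10"), ("-", "i10", "#1", "i11")]
   print(eliminate_cse(quads))
   # [('*', '#2', 'J', 'i3'), ('-', 'i3', '#1', 'i4')]  - i10, i11 replaced by i3, i4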

OP 1

OP 2

Result

9. 10.

:= JGT * * + * +

#1 I I i1 #2 i3 i4 i2 i6 i2

#10 #1 #10 J #1 #1 i5 #3 i4

I (16) i1 i2 i3 i4 i5 i6 i7 i12

11. 12.

* :=

i12 y [ i13 ]

#3

13. 14. 15. 16.

+ := J

#1 i14

I

1. 2. 3. 4. 5. 6. 7.

i13 x [i7 ] i14 I (2)

Pascal Statement

[Loop initialization] [Subscript calculation for x]

[Subscript Calculation for y] [assignment Operation] [End of Loop] [Next Statement]

Fig. 34


The names of the intermediate results (i1, i2, . . .) have been left unchanged, except for the substitutions just described, to make the comparison with fig. 33(b) easier. The optimized code contains fewer quadruples, and hence the execution time is reduced.

Another kind of optimization is the removal of loop invariants. These are subexpressions within a loop whose values do not change from one iteration of the loop to the next; their values can therefore be calculated once, before the loop is entered, rather than being recalculated for each iteration. In the example of fig. 33(a), one loop-invariant computation is the term 2 * J (quadruple 5 of fig. 34). The result of this computation depends only on the operand J, which does not change in value during the execution of the loop, so quadruple 5 of fig. 34 can be moved to a point immediately before the loop is entered. A similar rearrangement can be applied to quadruples 6 and 7. Fig. 35 shows the sequence of quadruples that results from these modifications. The total number of quadruples remains the same as in fig. 34, but the number of quadruples within the body of the loop has been reduced from 14 to 11. Together, these modifications reduce the total number of quadruples executed for one execution of the FOR statement from 181 [fig. 33(b)] to 114 [fig. 35], which saves a substantial amount of time.

Fig. 35 Quadruples after removal of loop invariants: the computation of the invariants (2 * J and the terms derived from it) appears once, before the loop initialization, and the loop body itself is correspondingly shorter.

This optimization can also be obtained by rewriting the source program. For example, the statement in fig. 36(a) could be written as shown in fig. 36(b).

FOR I := 1 TO 10 DO
    x[ I, 2 * J - 1 ] := y[ I, 2 * J ]

Fig. 36(a)

T1 := 2 * J ;
T2 := T1 - 1 ;
FOR I := 1 TO 10 DO
    x[ I, T2 ] := y[ I, T1 ]

Fig. 36(b)

This would achieve only a part of the benefit realized by the optimization process described above. Sometimes the statement in fig. 36(a) is preferable because it is clearer than the modified version involving T1 and T2. An optimizing compiler should allow the programmer to write source code that is clear and easy to read, and it should compile such a program into machine code that is efficient to execute.

Another source of code optimization is the substitution of a more efficient operation for a less efficient one. Example: the FORTRAN statements

DO 10 I = 1, 20
10  TABLE(I) = 2 ** I

calculate the first 20 powers of 2 and store them in TABLE. In each iteration of the loop, the constant 2 is raised to the power I. The quadruples for this loop are shown in fig. 37(a); exponentiation is represented with the operation EXP. This computation can be performed more efficiently. In each iteration of the loop, the value of I is incremented by 1, so the value of 2 ** I for the current iteration can be found by multiplying the value for the previous iteration by 2. This method of computing 2 ** I is much more efficient than performing a series of multiplications or using a logarithm technique. The modified quadruples are shown in fig. 37(b).

Fig. 37(a) Quadruples for TABLE(I) = 2 ** I using the EXP operation: loop initialization, calculation of 2 ** I, subscript calculation, assignment operation, and end-of-loop test.

Fig. 37(b) Quadruples after substituting a multiplication by 2 for the EXP operation: temporaries are initialized before the loop, and 2 ** I is obtained by doubling the previous value; subscript calculation, assignment operation, and end-of-loop test follow as before.
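The effect of the transformation in fig. 37(b) can be seen in source-level terms with the following sketch (Python used only for illustration; the function names are invented). Both versions fill the same table, but the second replaces the exponentiation inside the loop with a single multiplication per iteration:

# Reduction in strength for TABLE(I) = 2 ** I, I = 1 .. 20.

def fill_with_exponentiation():
    table = []
    for i in range(1, 21):
        table.append(2 ** i)        # corresponds to the EXP quadruple
    return table

def fill_with_doubling():
    table = []
    power = 1                       # temporary holding 2 ** (i - 1)
    for _ in range(1, 21):
        power = power * 2           # one multiplication replaces EXP
        table.append(power)
    return table

assert fill_with_exponentiation() == fill_with_doubling()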

STORAGE ALLOCATION

All the program-defined variables and temporary variables, including the locations used to save return addresses, can use a simple type of storage assignment called static allocation. When procedures are called recursively, however, static allocation cannot be used. This is explained with an example. Fig. 38(a) shows the operating system calling the program MAIN; the return address from register L is stored at a static memory location RETADR within MAIN.

Fig. 38 Static allocation of the return address: (a) the operating system calls MAIN; (b) MAIN calls SUB; (c) SUB calls itself recursively, overwriting RETADR.

In fig. 38(b), MAIN has called the procedure SUB, and the return address for the call has been stored at a fixed location within SUB (invocation 2). If SUB now calls itself recursively, as shown in fig. 38(c), a problem occurs: SUB stores the return address for invocation 3 into RETADR from register L, which destroys the return address for invocation 2. As a result, there is no possibility of ever making a correct return to MAIN. A similar problem arises with variables: invocation 3 may set variables within SUB, destroying values that invocation 2 still needs after the return from the recursive call. Hence it is necessary to preserve the previous values of any variables used by SUB, including parameters, temporaries, return addresses, register save areas, etc., when a recursive call is made. This is accomplished with a dynamic storage allocation technique, in which each procedure call creates an activation record that contains storage for all the variables used by the procedure.

If the procedure is called recursively, another activation record is created. Each activation record is associated with a particular invocation of the procedure, not with the procedure itself. An activation record is not deleted until a return has been made from the corresponding invocation. Activation records are typically allocated on a stack, with the current record at the top of the stack, as shown in fig. 39(a). Here the procedure MAIN has been called, and its activation record appears on the stack; the base register B has been set to indicate the starting address of this current activation record. The first word in an activation record would normally contain a pointer PREV to the previous record on the stack; since this record is the first, the pointer value is null. The second word of the activation record contains a pointer NEXT to the first unused word of the stack, which will be the starting address for the next activation record created. The third word contains the return address for this invocation of the procedure, and the remaining words contain the values of the variables used by the procedure.

Fig. 39(a) The activation record for MAIN on the stack (PREV = 0, NEXT, RETADR, variables for MAIN), with register B pointing to it.

Fig. 39(b) The stack after MAIN calls SUB: a new activation record for SUB is on top, and register B points to it.

In fig. 39(b), MAIN has called the procedure SUB. A new activation record has been created on the top of the stack, with register B set to indicate this new current record. The pointers PREV and NEXT in the two records have been set as shown.

Fig. 39(c) The stack after SUB calls itself recursively: a second activation record for SUB is on top, and register B points to it.

In fig. 39(c), SUB has called itself recursively, and another activation record has been created for this new invocation of SUB. Note that the return addresses and variable values for the two invocations of SUB are kept separate by this process. When a procedure returns to its caller, the current activation record (which corresponds to the most recent invocation) is deleted. The pointer PREV in the deleted record is used to re-establish the previous activation record as the current one, and execution continues.

Fig. 39(d) The stack after SUB returns from the recursive call: the record for the recursive invocation has been deleted, and register B again points to the previous activation record for SUB.
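The behaviour pictured in fig. 39 can be modelled with a few lines of code. In the sketch below (illustrative only; the field names PREV and RETADR follow the figure, while everything else is invented), each call pushes an activation record and each return pops it, so the two invocations of SUB keep separate return addresses:

# Sketch of dynamic storage allocation: activation records pushed on a stack
# by a procedure call (prologue) and popped by a return (epilogue).

stack = []        # activation records, most recent one last
B = None          # "base register": index of the current activation record

def call(proc, retadr):
    """Prologue: create a new activation record and make it current."""
    global B
    stack.append({"PREV": B, "RETADR": retadr, "PROC": proc, "VARS": {}})
    B = len(stack) - 1

def ret():
    """Epilogue: delete the current record and restore the previous one."""
    global B
    record = stack.pop()
    B = record["PREV"]
    return record["RETADR"]

call("MAIN", retadr="OS")        # invocation 1
call("SUB",  retadr="MAIN+1")    # invocation 2
call("SUB",  retadr="SUB+5")     # invocation 3 (the recursive call)
print(ret())                     # SUB+5  -- invocation 3 returns correctly
print(ret())                     # MAIN+1 -- invocation 2's address is intact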

Fig. 39(d) shows the stack as it would appear after SUB returns from the recursive call. Register B has been reset to point to the activation record for the previous invocation of SUB. The return address and all the variable values in this activation record are exactly the same as they were before the recursive call. This technique is called automatic allocation of storage. When this technique is used, the compiler must generate code that references variables using some sort of relative addressing. In our example the compiler assigns to each variable an address that is relative to the beginning of the activation record, instead of an actual location within the object program. The address of the current activation record is, by convention, contained in register B, so a reference to a variable is translated into an instruction that uses base relative addressing; the displacement in this instruction is the relative address of the variable within the activation record. The compiler must also generate additional code to manage the activation records themselves.

At the beginning of each procedure there must be code to create a new activation record, linking it to the previous one and setting the appropriate pointers as shown in fig. 39; this code is often called a prologue for the procedure. At the end of the procedure there must be code to delete the current activation record, resetting pointers as needed; this code is called an epilogue. Example: in FORTRAN 90, ALLOCATE ( MATRIX ( ROWS, COLUMNS ) ) allocates storage for the dynamic array MATRIX with the specified dimensions, and DEALLOCATE ( MATRIX ) releases the storage assigned to MATRIX by a previous ALLOCATE. In Pascal, NEW ( P ) allocates storage for a variable and sets the pointer P to indicate the variable just created, and DISPOSE ( P ) releases the storage that was previously assigned to the variable pointed to by P. In C, MALLOC ( SIZE ) allocates a block of the specified size, and FREE ( P ) frees the storage indicated by the pointer P.

A variable that is dynamically allocated in this way does not occupy a fixed location in an activation record, so it cannot be referenced directly using base relative addressing. Such a variable is usually accessed using indirect addressing through a pointer variable P; since P does occupy a fixed location in the activation record, it can be addressed in the usual way. The mechanism used to allocate storage for such a variable can work in any of the following ways:
- A NEW or MALLOC statement may be translated into a request to the operating system for an area of storage of the required size.
- The allocation may be handled by a run-time support procedure associated with the compiler. With this method, a large block of free storage called a heap is obtained from the operating system at the beginning of the program, and allocations of storage from the heap are managed by the run-time procedure.
- In some systems, the program need not explicitly free dynamically allocated memory. A run-time garbage collection procedure scans the pointers in the program and reclaims areas of the heap that are no longer in use.

8.3.3 BLOCK-STRUCTURED LANGUAGES

A block is a portion of a program that has the ability to declare its own identifiers. This definition of a block is also met by units such as procedures and functions. Let us consider a Pascal program with a number of procedure blocks, as shown in fig. 40.


Each procedure corresponds to a block. Note that blocks are nested within other blocks: procedures B and D are nested within procedure A, and procedure C is nested within procedure B. Each block may contain a declaration of variables. A block may also refer to variables that are defined in any block that contains it, provided the same names are not redefined in the inner block. Variables cannot be used outside the block in which they are declared. In compiling a program written in a block-structured language, it is convenient to number the blocks as shown in fig. 40: as the beginning of each new block is recognized, it is assigned the next block number in sequence. The compiler can then construct a table that describes the block structure, as illustrated in fig. 41. The block-level entry gives the nesting depth for each block; the outermost block has level 1, and every other block has a level number one greater than that of the surrounding block.

PROCEDURE A ;
VAR X, Y, Z : INTEGER ;
    PROCEDURE B ;
    VAR W, X, Y : REAL ;
        PROCEDURE C ;
        VAR W, V : INTEGER ;
        END { C } ;
    END { B } ;
    PROCEDURE D ;
    VAR X, Z : CHAR ;
    END { D } ;
END { A } ;


Fig. 40 Nested Blocks in a Program

Name    Block Number    Block Level    Surrounding Block
A            1               1                -
B            2               2                1
C            3               3                2
D            4               2                1

Fig. 41
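The numbering scheme and the table of fig. 41 can be produced with a counter and a stack of currently open blocks. The sketch below is illustrative only; the event list standing in for the parser's recognition of block boundaries is an assumption:

# Sketch: building the block-structure table of fig. 41 while the blocks of
# fig. 40 are recognized. "enter"/"exit" events stand in for the parser
# recognizing the beginning and end of each procedure block.

def build_block_table(events):
    table = []            # rows: (name, block number, level, surrounding block)
    open_blocks = []      # stack of block numbers that are currently open
    next_number = 1
    for kind, name in events:
        if kind == "enter":
            surrounding = open_blocks[-1] if open_blocks else None
            table.append((name, next_number, len(open_blocks) + 1, surrounding))
            open_blocks.append(next_number)
            next_number += 1
        else:                                   # "exit": the block is closed
            open_blocks.pop()
    return table

events = [("enter", "A"), ("enter", "B"), ("enter", "C"), ("exit", "C"),
          ("exit", "B"), ("enter", "D"), ("exit", "D"), ("exit", "A")]
for row in build_block_table(events):
    print(row)
# ('A', 1, 1, None)  ('B', 2, 2, 1)  ('C', 3, 3, 2)  ('D', 4, 2, 1)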

Since a name can be declared more than once in a program (by different blocks), each symbol-table entry for an identifier must contain the number of the declaring block. A declaration of an identifier is legal if there has been no previous declaration of that identifier by the current block, so there can be several symbol-table entries for the same name.


The entries that represent declarations of the same name by different blocks can be linked together in the symbol table with a chain of pointers. When a reference to an identifier appears in the source program, the compiler must first check the symbol table for a definition of that identifier by the current block. If no such definition is found, the compiler looks for a definition by the block that surrounds the current one, then by the block that surrounds that one, and so on. If the outermost block is reached without finding a definition of the identifier, then the reference is an error. The search process just described can easily be implemented within a symbol table that uses hashed addressing: the hashing function is used to locate one definition of the identifier, and the chain of definitions for that identifier is then searched for the appropriate entry. Most block-structured languages make use of automatic storage allocation. The variables that are defined by a block are stored in an activation record that is created each time the block is entered. If a statement refers to a variable that is declared within the current block, this variable is present in the current activation record, so it can be accessed in the usual way. It is also possible to refer to a variable that is declared in some surrounding block; in that case, the most recent activation record for that block must be located to access the variable.
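The lookup rule just described can be sketched as follows. Each name owns a chain of entries, one per declaring block, and a reference is resolved by searching the current block and then each surrounding block in turn, using the surrounding-block column of fig. 41. The declarations below are taken from figs. 40 and 41; the data layout itself is only an illustration:

# Sketch of name lookup in a block-structured symbol table.

SURROUNDING = {1: None, 2: 1, 3: 2, 4: 1}        # block -> enclosing block

symbol_table = {                                  # declarations from fig. 40
    "X": {1: "INTEGER", 2: "REAL", 4: "CHAR"},
    "W": {2: "REAL", 3: "INTEGER"},
    "Z": {1: "INTEGER", 4: "CHAR"},
}

def lookup(name, current_block):
    """Search the current block, then each surrounding block in turn."""
    entries = symbol_table.get(name, {})
    block = current_block
    while block is not None:
        if block in entries:
            return block, entries[block]
        block = SURROUNDING[block]
    raise NameError(name + " is undeclared in block " + str(current_block))

print(lookup("X", 3))    # (2, 'REAL')    -- the X declared by procedure B
print(lookup("Z", 3))    # (1, 'INTEGER') -- the Z declared by procedure A
print(lookup("X", 4))    # (4, 'CHAR')    -- D's own declaration of X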

Fig. 42 Use of a Display: (a) the stack and display when procedure C is current, with one display pointer each for the records of C, B, and A; (b) the stack and display after C calls itself recursively, with the display pointer for C indicating the most recent record.

A data structure called a display is used to access variables in surrounding blocks. The display contains pointers to the most recent activation records for the current block and for all blocks that surround the current one in the source program. When a block refers to a variable that is declared in some surrounding block, the generated object code uses the display to find the activation record that contains this variable. Example: assume procedure C calls itself recursively. As shown in fig. 42(b), a new activation record for C is created on the stack as a result of the call. Any reference to a variable declared by C should use this most recent activation record, so the display pointer for C is changed accordingly. Variables that correspond to the previous invocation of C are not accessible for the moment, so there is no display pointer to this activation record.


Fig. 42(c) The stack and display after procedure C calls procedure D: the display contains pointers only to the activation records for D and A.

Now if procedure C calls procedure D, the resulting stack and display are as illustrated in fig. 42(c). An activation record for D has been created in the usual way and added to the stack. Note that the display now contains only two pointers, one each to the activation records for D and A. This is because procedure D cannot refer to variables in B or C, except through parameters that are passed to it, even though it is called from C. According to the rules for the scope of names in a block-structured language, procedure D can refer only to variables that are declared by D or by some block that contains D in the source program.

8.4 COMPILER DESIGN OPTIONS

Compiler design options are briefly discussed in this section, beginning with the division of a compiler into passes.

8.4.1 COMPILER PASSES

A one-pass compiler for a subset of the Pascal language was discussed earlier. In that design the parsing process drove the compiler: the lexical scanner was called whenever the parser needed another input token, and a code-generation routine was invoked as the parser recognized each language construct. The code optimization techniques discussed above cannot be applied in full by a one-pass compiler without intermediate code generation. A one-pass compiler generates object code efficiently; however, it cannot be used to translate all languages. FORTRAN and Pascal programs have declarations of variables at the beginning of the program, and any variable that is not declared is assigned characteristics by default. A one-pass compiler can fix up forward-reference jump instructions without difficulty, just as a one-pass assembler does, but it is difficult to generate correct code if the declaration of an identifier appears after the identifier has been used, as it may in some programming languages. Example:

X:=Y*Z


If all the variables X, Y and Z are of type INTEGER, the object code for this statement might consist of a simple integer multiplication followed by storage of the result. If the variables are a mixture of REAL and INTEGER types, one or more conversion operations will need to be included in the object code, and floating-point arithmetic instructions may be used. Obviously the compiler cannot decide what machine instructions to generate for this statement unless information about the operands is available; the statement may even be illegal for certain combinations of operand types. Thus a language that allows forward references to data items cannot be compiled in one pass. Some languages require more than two passes; for example, ALGOL 68 compilers typically require at least three passes. There are a number of factors that should be considered in deciding between one-pass and multi-pass compiler designs.

(1) One-pass compilers: speed of compilation may be the most important consideration. Computers running student jobs tend to spend a large amount of time performing compilations; the resulting object code is usually executed only once or twice for each compilation, and these test runs are normally short. In such an environment, improvement in the speed of compilation can lead to significant benefits in system performance and job turnaround time.

(2) Multi-pass compilers: if programs are executed many times for each compilation, or if they process large amounts of data, then speed of execution becomes more important than speed of compilation. In such a case we might prefer a multi-pass compiler design that can incorporate sophisticated code-optimization techniques. Multi-pass compilers are also used when the amount of memory, or other system resources, is severely limited; the requirements of each pass can be kept smaller if the work of compilation is divided into several passes.

Other factors may also influence the design of the compiler. If a compiler is divided into several passes, each pass becomes simpler and therefore easier to understand, write, and test. Different passes can be assigned to different programmers and can be written and tested in parallel, which shortens the overall time required for compiler construction.

8.4.2 INTERPRETERS

An interpreter processes a source program written in a high-level language. The main difference between a compiler and an interpreter is that an interpreter executes a version of the source program directly, instead of translating it into machine code. An interpreter performs lexical and syntactic analysis functions just like a compiler and then translates the source program into an internal form, which may, for example, be a sequence of quadruples. After translating the source program into this internal form, the interpreter executes the operations specified by the program; during this phase, the interpreter can be viewed as a set of subroutines whose execution is driven by the internal form of the program. The major differences between an interpreter and a compiler are:


Interpreters:
1) The process of translating a source program into some internal form is simpler and faster.
2) Execution of the translated program is much slower.
3) Debugging facilities can be provided easily.
4) During execution the interpreter can produce symbolic dumps of data values and a trace of program execution related to the source statements.
5) Program testing can be done effectively, since the operations on different data can be traced.
6) Dynamic scoping is easy to handle.

Compilers:
1) The process of translating a source program into some internal form is slower than with an interpreter.
2) Execution of the compiled machine code is much faster.
3) Provision of debugging facilities is difficult and complicated.
4) The compiler does not produce symbolic dumps of data values; separate debugging tools are required to trace the program.
5) Program testing is more difficult, since the compiled program delivers only the final results.
6) Dynamic scoping is difficult to handle.
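The internal form executed by an interpreter can be the same kind of quadruples discussed earlier. The sketch below (illustrative Python; the operations, the "#" constant notation, and the sample program are assumptions) executes a small quadruple program directly with a dispatch loop instead of translating it into machine code:

# Illustrative interpreter: a dispatch loop executes quadruples directly.
# "#n" denotes a constant; other operand names index a simulated memory.

def interpret(quads):
    memory = {}

    def value(x):
        return int(x[1:]) if x.startswith("#") else memory[x]

    pc = 0                                   # index of the next quadruple
    while pc < len(quads):
        op, op1, op2, result = quads[pc]
        pc += 1
        if op == ":=":
            memory[result] = value(op1)
        elif op == "+":
            memory[result] = value(op1) + value(op2)
        elif op == "*":
            memory[result] = value(op1) * value(op2)
        elif op == "JLE" and value(op1) <= value(op2):
            pc = result - 1                  # jump target: 1-based quadruple number
    return memory

program = [(":=", "#0",  None, "SUM"),       # 1: SUM := 0
           (":=", "#1",  None, "I"),         # 2: I := 1
           ("*",  "I",   "#2", "T1"),        # 3: T1 := I * 2
           ("+",  "SUM", "T1", "SUM"),       # 4: SUM := SUM + T1
           ("+",  "I",   "#1", "I"),         # 5: I := I + 1
           ("JLE", "I",  "#5", 3)]           # 6: if I <= 5, go back to quadruple 3
print(interpret(program)["SUM"])             # 2 + 4 + 6 + 8 + 10 = 30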

Most programming languages can be either compiled or interpreted successfully; however, some languages are particularly well suited to the use of an interpreter. Compilers usually generate calls to library routines to perform functions such as I/O and complex conversion operations. In such cases an interpreter might be preferred because of its speed of translation: most of the execution time for the program would be consumed by the standard library routines, and these routines would be the same regardless of whether a compiler or an interpreter is used. In some languages the type of a variable can change during the execution of a program, or dynamic scoping is used, in which the variables referred to by a function or subroutine are determined by the sequence of calls made during execution, not by the nesting of blocks in the source program. It is difficult to compile such a language efficiently while allowing for dynamic changes in the types of variables and the scope of names. These features can be handled more easily by an interpreter that provides delayed binding of symbolic variable names to data types and locations.

8.4.3 P-CODE COMPILERS

P-code compilers, also called byte-code compilers, are very similar in concept to interpreters. In a P-code compiler, the intermediate form is the machine language for a hypothetical computer, often called a pseudo-machine or P-machine. The process of using such a P-code compiler is shown in fig. 43. The main advantage of this approach is the portability of software: it is not necessary for the compiler to generate different code for different computers, because the P-code object program can be executed on any machine that has a P-code interpreter. Even the compiler itself can be transported if it is written in the language that it compiles. To accomplish this, the source version of the compiler is compiled into P-code, and this P-code can then be interpreted on another machine. In this way a P-code compiler can be used without modification on a wide variety of systems if a P-code interpreter is written for each different machine.

Fig. 43 Translation and execution with a P-code compiler: the source program is compiled into a P-code object program, which is then executed by a P-code interpreter.

The design of a P-machine and the associated P-code is often related to the requirements of the language being compiled. For example, the P-code for a Pascal compiler might include single P-instructions that perform array subscript calculations, handle the details of procedure entry and exit, and perform elementary operations on sets. This simplifies the code-generation process, leading to a smaller and more efficient compiler. The P-code object program is often much smaller than a corresponding machine-code program, which is particularly useful on machines with severely limited memory size. On the other hand, the interpretive execution of a P-code program may be much slower than the execution of the equivalent machine code. Many P-code compilers are designed for a single user running on a dedicated micro-computer system; in that case the speed of execution may be relatively insignificant, because the limiting factor in system performance may be the response time and "think time" of the user. If execution speed is important, some P-code compilers support the use of machine-language subroutines: by rewriting a small number of commonly used routines in machine language, rather than P-code, it is often possible to improve performance considerably. Of course, this approach sacrifices some of the portability associated with the use of P-code compilers.

8.4.4 COMPILER-COMPILERS

A compiler-compiler is a software tool that can be used to help in the task of compiler construction. Such tools are also called compiler generators or translator-writing systems. The process of using a typical compiler-compiler is shown in fig. 44. The compiler writer provides a description of the language to be translated; this description may consist of a set of lexical rules for defining tokens and a grammar for the source language. Some compiler-compilers use this information to generate a scanner and a parser directly.


Others create tables for use by standard table-driven scanning and parsing routines that are supplied by the compiler-compiler.

Fig. 44 Using a compiler-compiler: lexical rules and a grammar are supplied to the compiler-compiler, which produces a scanner and a parser; these are combined with the semantic (code-generation) routines to form the compiler.

The compiler writer also provides a set of semantic or code-generation routines, one for each rule of the grammar. The parser calls the corresponding routine each time it recognizes the language construct described by the associated rule. Some compiler-compilers can parse a larger section of the program before calling a semantic routine; in that case, an internal form of the statements that have been analyzed, such as a portion of the parse tree, may be passed to the semantic routine. This approach is often used when code optimization is to be performed. Compiler-compilers frequently provide special languages, notations, data structures, and other similar facilities that can be used in writing the semantic routines. The main advantage of using a compiler-compiler is ease of compiler construction and testing. The amount of work required from the user varies considerably from one compiler-compiler to another, depending upon the degree of flexibility provided. Compilers that are generated in this way tend to require more memory and to compile programs more slowly than hand-written compilers. However, the object code generated by the compiler may actually be better when a compiler-compiler is used: because of the automatic construction of scanners and parsers and the special tools provided for writing semantic routines, the compiler writer is freed from many of the mechanical details of compiler construction and can therefore devote more attention to good code generation and optimization.
