COMPILER DESIGN
G. Appasami, M.Sc., M.C.A., M.Phil., M.Tech., (Ph.D.)
Assistant Professor
Department of Computer Science and Engineering
Dr. Paul's Engineering College
Pauls Nagar, Villupuram
Tamilnadu, India
SARUMATHI PUBLICATIONS
Villupuram, Tamilnadu, India
First Edition: July 2015 Second Edition: April 2016
Published By
SARUMATHI PUBLICATIONS © All rights reserved. No part of this publication can be reproduced or stored in any form or by means of photocopy, recording or otherwise without the prior written permission of the author.
Price Rs. 101/
Copies can be had from
SARUMATHI PUBLICATIONS Villupuram, Tamilnadu, India.
[email protected]
Printed at Meenam Offset
Pondicherry – 605001, India
CS6660 COMPILER DESIGN
L T P C
3 0 0 3

UNIT I INTRODUCTION TO COMPILERS 5
Translators – Compilation and Interpretation – Language processors – The Phases of Compiler – Errors Encountered in Different Phases – The Grouping of Phases – Compiler Construction Tools – Programming Language basics.

UNIT II LEXICAL ANALYSIS 9
Need and Role of Lexical Analyzer – Lexical Errors – Expressing Tokens by Regular Expressions – Converting Regular Expression to DFA – Minimization of DFA – Language for Specifying Lexical Analyzers – LEX – Design of Lexical Analyzer for a sample Language.

UNIT III SYNTAX ANALYSIS 10
Need and Role of the Parser – Context Free Grammars – Top Down Parsing – General Strategies – Recursive Descent Parser – Predictive Parser – LL(1) Parser – Shift Reduce Parser – LR Parser – LR(0) Item – Construction of SLR Parsing Table – Introduction to LALR Parser – Error Handling and Recovery in Syntax Analyzer – YACC – Design of a Syntax Analyzer for a Sample Language.

UNIT IV SYNTAX DIRECTED TRANSLATION & RUN TIME ENVIRONMENT 12
Syntax directed Definitions – Construction of Syntax Tree – Bottom-up Evaluation of S-Attribute Definitions – Design of Predictive Translator – Type Systems – Specification of a Simple Type Checker – Equivalence of Type Expressions – Type Conversions. RUN-TIME ENVIRONMENT: Source Language Issues – Storage Organization – Storage Allocation – Parameter Passing – Symbol Tables – Dynamic Storage Allocation – Storage Allocation in FORTRAN.

UNIT V CODE OPTIMIZATION AND CODE GENERATION 9
Principal Sources of Optimization – DAG – Optimization of Basic Blocks – Global Data Flow Analysis – Efficient Data Flow Algorithms – Issues in Design of a Code Generator – A Simple Code Generator Algorithm.

TOTAL: 45 PERIODS

TEXTBOOK:
1. Alfred V. Aho, Monica S. Lam, Ravi Sethi and Jeffrey D. Ullman, "Compilers – Principles, Techniques and Tools", 2nd Edition, Pearson Education, 2007.

REFERENCES:
1. Randy Allen, Ken Kennedy, "Optimizing Compilers for Modern Architectures: A Dependence-based Approach", Morgan Kaufmann Publishers, 2002.
2. Steven S. Muchnick, "Advanced Compiler Design and Implementation", Morgan Kaufmann Publishers - Elsevier Science, India, Indian Reprint 2003.
3. Keith D. Cooper and Linda Torczon, "Engineering a Compiler", Morgan Kaufmann Publishers - Elsevier Science, 2004.
4. Charles N. Fischer, Richard J. LeBlanc, "Crafting a Compiler with C", Pearson Education, 2008.
Acknowledgement
I am very much grateful to the management of Paul's Educational Trust, respected Principal Dr. Y. R. M. Rao, M.E., Ph.D., cherished Dean Dr. E. Mariappane, M.E., Ph.D., and helpful Head of the Department Mr. M. G. Lavakumar, M.E., (Ph.D.).
I thank my colleagues and friends for their cooperation and support in my career venture. I thank my parents and family members for their valuable support in the successful completion of this book. I express my special thanks to SARUMATHI PUBLICATIONS for their continued cooperation in shaping the work. Suggestions and comments to improve the text are very much solicited.
Mr. G. Appasami
TABLE OF CONTENTS

UNIT I INTRODUCTION TO COMPILERS
1.1 Translators
1.2 Compilation and Interpretation
1.3 Language Processors
1.4 The Phases of Compiler
1.5 Errors Encountered in Different Phases
1.6 The Grouping of Phases
1.7 Compiler Construction Tools
1.8 Programming Language Basics

UNIT II LEXICAL ANALYSIS
2.1 Need and Role of Lexical Analyzer
2.2 Lexical Errors
2.3 Expressing Tokens by Regular Expressions
2.4 Converting Regular Expression to DFA
2.5 Minimization of DFA
2.6 Language for Specifying Lexical Analyzers – LEX
2.7 Design of Lexical Analyzer for a Sample Language

UNIT III SYNTAX ANALYSIS
3.1 Need and Role of the Parser
3.2 Context Free Grammars
3.3 Top Down Parsing – General Strategies
3.4 Recursive Descent Parser
3.5 Predictive Parser
3.6 LL(1) Parser
3.7 Shift Reduce Parser
3.8 LR Parser
3.9 LR(0) Item
3.10 Construction of SLR Parsing Table
3.11 Introduction to LALR Parser
3.12 Error Handling and Recovery in Syntax Analyzer
3.13 YACC
3.14 Design of a Syntax Analyzer for a Sample Language

UNIT IV SYNTAX DIRECTED TRANSLATION & RUN TIME ENVIRONMENT
4.1 Syntax Directed Definitions
4.2 Construction of Syntax Tree
4.3 Bottom-up Evaluation of S-Attribute Definitions
4.4 Design of Predictive Translator
4.5 Type Systems
4.6 Specification of a Simple Type Checker
4.7 Equivalence of Type Expressions
4.8 Type Conversions
4.9 Run-Time Environment: Source Language Issues
4.10 Storage Organization
4.11 Storage Allocation
4.12 Parameter Passing
4.13 Symbol Tables
4.14 Dynamic Storage Allocation
4.15 Storage Allocation in FORTRAN

UNIT V CODE OPTIMIZATION AND CODE GENERATION
5.1 Principal Sources of Optimization
5.2 DAG
5.3 Optimization of Basic Blocks
5.4 Global Data Flow Analysis
5.5 Efficient Data Flow Algorithms
5.6 Issues in Design of a Code Generator
5.7 A Simple Code Generator Algorithm
UNIT I INTRODUCTION TO COMPILERS

1.1 TRANSLATORS
A translator is a program that takes a program in one form (input) and converts it into another form (output). The input program is written in the source language and the output program in the target language. The source language can be a low-level language like assembly language or a high-level language like C, C++, Java, FORTRAN, and so on. The target language can be a low-level language (assembly language) or a machine language (the set of instructions executed directly by a CPU).
[Figure: source language → Translator → target language]
Figure 1.1: A translator

Types of translators: (1) compilers, (2) interpreters, (3) assemblers.

1.2 COMPILATION AND INTERPRETATION
A compiler is a program that reads a program in one language and translates it into an equivalent program in another language. The translation done by a compiler is called compilation.
An interpreter is another common kind of language processor. Instead of producing a target program as a translation, an interpreter appears to directly execute the operations specified in the source program on inputs supplied by the user. An interpreter executes the source program statement by statement; the translation done by an interpreter is called interpretation.

1.3 LANGUAGE PROCESSORS
(i) Compiler
A compiler is a program that can read a program in one language (the source language) and translate it into an equivalent program in another language (the target language). Compilation is shown in Figure 1.2.

[Figure: source program (input) → Compiler → target program (output)]
Figure 1.2: A compiler

An important role of the compiler is to report any errors in the source program that it detects during the translation process. If the target program is an executable machine-language program, it can then be called by the user to process inputs and produce outputs.

[Figure: input → Target Program → output]
Figure 1.3: Running the target program
(ii) Interpreter
An interpreter is another common kind of language processor. Instead of producing a target program as a translation, an interpreter appears to directly execute the operations specified in the source program on inputs supplied by the user, as shown in Figure 1.4.

[Figure: source program + input → Interpreter → output]
Figure 1.4: An interpreter

The machine-language target program produced by a compiler is usually much faster than an interpreter (the compiler maps inputs to outputs in one step). A compiler converts the source to target completely, but an interpreter executes the source program statement by statement. Usually an interpreter gives better error diagnostics than a compiler.

(iii) Hybrid Compiler
A hybrid compiler is a combination of compilation and interpretation. Java language processors combine compilation and interpretation as shown in Figure 1.5. A Java source program is first compiled into an intermediate form called bytecodes. The bytecodes are then interpreted by a virtual machine. A benefit of this arrangement is that bytecodes compiled on one machine can be interpreted on another machine.

[Figure: source program → Translator → intermediate program; intermediate program + input → Virtual Machine → output]
Figure 1.5: A hybrid compiler

In order to achieve faster processing of inputs to outputs, some Java compilers, called just-in-time compilers, translate the bytecodes into machine language immediately before they run.

(iv) Language processing system
In addition to a compiler, several other programs may be required to create an executable target program, as shown in Figure 1.6.

Preprocessor: The preprocessor collects the source program, which may be divided into modules stored in separate files. The preprocessor may also expand shorthands called macros into source language statements, e.g. #include, #define PI 3.14.

Compiler: The modified source program is then fed to a compiler. The compiler may produce an assembly-language program as its output, because assembly language is easier to produce as output and is easier to debug.
Assembler: The assembly-language program is then processed by a program called an assembler, which produces relocatable machine code as its output.
Linker: The linker resolves external memory addresses, where the code in one file may refer to a location in another file. Large programs are often compiled in pieces, so the relocatable machine code may have to be linked together with other relocatable object files and library files into the code that actually runs on the machine.
Loader: The loader then puts together all of the executable object files into memory for execution. It also performs relocation of the object code.
Figure 1.6: A language-processing system

Note: Preprocessors, assemblers, linkers and loaders are collectively called the cousins of the compiler.

1.4 THE PHASES OF COMPILER / STRUCTURE OF COMPILER
The process of compilation is carried out in two parts: analysis and synthesis.
The analysis part breaks up the source program into constituent pieces and imposes a grammatical structure on them. It then uses this structure to create an intermediate representation of the source program. The analysis part also collects information about the source program and stores it in a data structure called a symbol table, which is passed along with the intermediate representation to the synthesis part. The analysis part is carried out in three phases: lexical analysis, syntax analysis and semantic analysis. The analysis part is often called the front end of the compiler.
The synthesis part constructs the desired target program from the intermediate representation and the information in the symbol table. The synthesis part is carried out in three phases: intermediate code generation, code optimization and code generation. The synthesis part is called the back end of the compiler.
Figure 1.7: Phases of a compiler

1.4.1 Lexical Analysis
The first phase of a compiler is called lexical analysis or scanning or linear analysis. The lexical analyzer reads the stream of characters making up the source program and groups the characters into meaningful sequences called lexemes. For each lexeme, the lexical analyzer produces as output a token of the form
⟨token-name, attribute-value⟩
The first component token-name is an abstract symbol that is used during syntax analysis, and the second component attribute-value points to an entry in the symbol table for this token. Information from the symbol-table entry is needed for semantic analysis and code generation. For example, suppose a source program contains the assignment statement
position = initial + rate * 60    (1.1)
Figure 1.8: Translation of an assignment statement

The characters in this assignment could be grouped into the following lexemes and mapped into the following tokens:
(1) position is a lexeme that would be mapped into the token ⟨id, 1⟩, where id is an abstract symbol standing for identifier and 1 points to the symbol-table entry for position.
(2) The assignment symbol = is a lexeme that is mapped into the token ⟨=⟩.
(3) initial is a lexeme that is mapped into the token ⟨id, 2⟩.
(4) + is a lexeme that is mapped into the token ⟨+⟩.
(5) rate is a lexeme that is mapped into the token ⟨id, 3⟩.
(6) * is a lexeme that is mapped into the token ⟨*⟩.
(7) 60 is a lexeme that is mapped into the token ⟨60⟩.
Blanks separating the lexemes are discarded by the lexical analyzer. The sequence of tokens produced after lexical analysis is:
⟨id, 1⟩ ⟨=⟩ ⟨id, 2⟩ ⟨+⟩ ⟨id, 3⟩ ⟨*⟩ ⟨60⟩    (1.2)
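To make the scanning step concrete, here is a minimal scanner sketch; it is not from the text, and the Token struct and lex() function are illustrative assumptions. It groups the characters of statement (1.1) into ⟨token-name, attribute-value⟩ pairs, discarding blanks as described above (for simplicity it stores the lexeme itself as the attribute instead of a symbol-table index).

#include <cctype>
#include <iostream>
#include <string>
#include <vector>

struct Token {
    std::string name;   // abstract symbol used during syntax analysis
    std::string attr;   // here: the lexeme itself (a real compiler stores a symbol-table index)
};

std::vector<Token> lex(const std::string& src) {
    std::vector<Token> out;
    for (std::size_t i = 0; i < src.size();) {
        if (std::isspace((unsigned char)src[i])) { ++i; continue; }   // blanks are discarded
        std::size_t j = i;
        if (std::isalpha((unsigned char)src[i])) {                    // identifier lexeme
            while (j < src.size() && std::isalnum((unsigned char)src[j])) ++j;
            out.push_back({"id", src.substr(i, j - i)});
        } else if (std::isdigit((unsigned char)src[i])) {             // number lexeme
            while (j < src.size() && std::isdigit((unsigned char)src[j])) ++j;
            out.push_back({"number", src.substr(i, j - i)});
        } else {                                                      // single-character operator
            out.push_back({std::string(1, src[i]), ""});
            j = i + 1;
        }
        i = j;
    }
    return out;
}

int main() {
    for (const Token& t : lex("position = initial + rate * 60"))
        std::cout << '<' << t.name << (t.attr.empty() ? "" : ", " + t.attr) << "> ";
    // prints: <id, position> <=> <id, initial> <+> <id, rate> <*> <number, 60>
}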
1.4.2 Syntax Analysis
The second phase of the compiler is syntax analysis or parsing or hierarchical analysis. The parser uses the first components of the tokens produced by the lexical analyzer to create a tree-like intermediate representation that depicts the grammatical structure of the token stream. The hierarchical tree structure generated in this phase is called a parse tree or syntax tree. In a syntax tree, each interior node represents an operation and the children of the node represent the arguments of the operation.
Figure 1.9: Syntax tree for position = initial + rate * 60

The tree has an interior node labeled * with ⟨id, 3⟩ as its left child and the integer 60 as its right child. The node ⟨id, 3⟩ represents the identifier rate. Similarly, ⟨id, 1⟩ and ⟨id, 2⟩ are represented in the tree. The root of the tree, labeled =, indicates that we must store the result of this addition into the location for the identifier position.

1.4.3 Semantic Analysis
The semantic analyzer uses the syntax tree and the information in the symbol table to check the source program for semantic consistency with the language definition. It ensures the correctness of the program; matching of parentheses is also checked in this phase. It also gathers type information and saves it in either the syntax tree or the symbol table, for subsequent use during intermediate-code generation.
An important part of semantic analysis is type checking, where the compiler checks that each operator has matching operands. For example, the compiler must report an error if a floating-point number is used to index an array. The language specification may permit some type conversions, such as integer to float for a floating-point addition; these are called coercions.
In our example, the operator * is applied to a floating-point number rate and an integer 60. The integer is converted into a floating-point number by the operator inttofloat, shown explicitly in the figure.
Figure 1.10: Semantic tree for position = initial + rate * 60

1.4.4 Intermediate Code Generation
After syntax and semantic analysis of the source program, many compilers generate an explicit low-level or machine-like intermediate representation. The intermediate representation should have two important properties:
a. It should be easy to produce.
b. It should be easy to translate into the target machine.
Three-address code is one such intermediate representation. It consists of a sequence of assembly-like instructions with three operands per instruction, and each operand can act like a register. The output of the intermediate code generator in Figure 1.8 consists of the three-address code sequence for position = initial + rate * 60:

t1 = inttofloat(60)
t2 = id3 * t1
t3 = id2 + t2
id1 = t3    (1.3)

1.4.5 Code Optimization
The machine-independent code-optimization phase attempts to improve the intermediate code so that better target code will result. Usually better means faster: optimization should improve the efficiency of the code so that the target program's running time and memory consumption are reduced.
The optimizer can deduce that the conversion of 60 from integer to floating point can be done once and for all at compile time, so the inttofloat operation can be eliminated by replacing the integer 60 by the floating-point number 60.0. Moreover, t3 is used only once, to transmit its value to id1, so the optimizer can transform (1.3) into the shorter sequence:

t1 = id3 * 60.0
id1 = id2 + t1    (1.4)

1.4.6 Code Generation
The code generator takes as input an intermediate representation of the source program and maps it into the target language. If the target language is machine code, then registers or memory locations are selected for each of the variables used by the program, and the intermediate instructions are translated into sequences of machine instructions. For example, using registers R1 and R2, the intermediate code in (1.4) might get translated into the machine code:

LDF  R2, id3
MULF R2, R2, #60.0
LDF  R1, id2
ADDF R1, R1, R2
STF  id1, R1    (1.5)

The first operand of each instruction specifies a destination. The F in each instruction tells us that it deals with floating-point numbers. The code in (1.5) loads the contents of address id3 into register R2, then multiplies it by the floating-point constant 60.0. The # signifies that 60.0 is to be treated as an immediate constant. The third instruction moves id2 into register R1 and the fourth adds to it the value previously computed in register R2. Finally, the value in register R1 is stored into the address of id1, so the code correctly implements the assignment statement (1.1).
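To show how such a sequence might be represented inside a compiler, here is a minimal sketch; the Quad struct and its field names are assumptions for illustration, not a layout the text prescribes. It stores the optimized sequence (1.4) as operator, arg1, arg2, result records.

#include <string>
#include <vector>

struct Quad {
    std::string op;      // operator, e.g. "*", "+"
    std::string arg1;    // first operand
    std::string arg2;    // second operand (may be empty)
    std::string result;  // destination temporary or name
};

int main() {
    // t1 = id3 * 60.0 ; id1 = id2 + t1  -- the optimized sequence (1.4)
    std::vector<Quad> code = {
        {"*", "id3", "60.0", "t1"},
        {"+", "id2", "t1",   "id1"},
    };
    (void)code;  // a code generator would walk this vector and emit instructions like (1.5)
}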
1.4.7 Symbol-Table Management
The symbol table, which stores information about the entire source program, is used by all phases of the compiler. An essential function of a compiler is to record the variable names used in the source program and collect information about various attributes of each name. These attributes may provide information about the storage allocated for a name, its type and its scope. In the case of procedure names, such things as the number and types of arguments, the method of passing each argument (for example, by value or by reference), and the type returned are maintained in the symbol table.
The symbol table is a data structure containing a record for each variable name, with fields for the attributes of the name. The data structure should be designed to allow the compiler to find the record for each name quickly and to store or retrieve data from that record quickly. A symbol table can be implemented in one of the following ways:
- Linear (sorted or unsorted) list
- Binary search tree
- Hash table
Among these, symbol tables are mostly implemented as hash tables, where the source code symbol itself is treated as the key for the hash function and the return value is the information about the symbol. A symbol table may serve the following purposes, depending upon the language in hand:
- To store the names of all entities in a structured form at one place.
- To verify if a variable has been declared.
- To implement type checking, by verifying assignments and expressions.
- To determine the scope of a name (scope resolution).
A minimal implementation sketch follows.
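A minimal sketch of the hash-table choice, assuming C++17 and the standard library's hash table (std::unordered_map); the SymbolInfo fields are illustrative, not a layout the text prescribes.

#include <iostream>
#include <string>
#include <unordered_map>

struct SymbolInfo {
    std::string type;   // e.g. "float"
    int scopeDepth;     // block nesting level where the name was declared
};

int main() {
    // the source symbol itself is the key; the mapped value holds its attributes
    std::unordered_map<std::string, SymbolInfo> symtab;
    symtab["position"] = {"float", 0};
    symtab["rate"]     = {"float", 0};

    // verify that a variable has been declared before it is used
    if (auto it = symtab.find("rate"); it != symtab.end())
        std::cout << "rate: " << it->second.type << '\n';
    else
        std::cout << "error: undeclared name\n";
}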
1.5 ERRORS ENCOUNTERED IN DIFFERENT PHASES
An important role of the compiler is to report any errors in the source program that it detects during the entire translation process. Each phase of the compiler can encounter errors; after errors are detected, they must be corrected for the compilation to proceed. The syntax and semantic phases handle the largest number of errors in the compilation process.
The error handler handles all types of errors: lexical errors, syntax errors, semantic errors and logical errors.
Lexical errors: The lexical analyzer detects errors in the input characters, such as the name of a keyword or identifier typed incorrectly. Example: switch written as swich.
Syntax errors: Syntax errors are detected by the syntax analyzer, for example a missing semicolon or unbalanced parentheses. Example: ((a+b*(c-d)); in this statement the ) after b is missing.
Semantic errors: Data type mismatch errors are handled by the semantic analyzer, such as an incompatible data type in a value assignment. Example: assigning a string value to an integer.
Logical errors: Unreachable code and infinite loops, misuse of operators, code written after the end of the main() block.
1.6 THE GROUPING OF PHASES

The phases discussed so far describe the logical organization of a compiler. In an implementation, activities of several phases may be grouped together into a pass that reads an input file and writes an output file. The front-end phases of lexical analysis, syntax analysis, semantic analysis, and intermediate code generation might be grouped together into one pass. Code optimization might be an optional pass. A back-end pass consists of code generation for a particular target machine.

[Figure: source program (input) → Front end: lexical analyzer, syntax analyzer, semantic analyzer, intermediate code generator (source language dependent, machine independent) → intermediate code → Back end: code optimizer (optional), code generator (machine dependent) → target program (output)]
Figure 1.11: The grouping of phases of a compiler

Some compiler collections have been created around carefully designed intermediate representations that allow the front end for a particular language to interface with the back end for a certain target machine.
Advantages: With these collections, we can produce compilers for different source languages for one target machine by combining different front ends with that machine's back end. Similarly, we can produce compilers for different target machines by combining one front end with back ends for the different target machines.
1.7 COMPILER CONSTRUCTION TOOLS
The compiler writer, like any software developer, can profitably use modern software development environments containing tools such as language editors, debuggers, version managers, profilers, test harnesses, and so on. Writing a compiler is a tedious and time-consuming task, so there are specialized tools to help implement the various phases of a compiler. These tools are called compiler construction tools. Some commonly used compiler-construction tools are given below:
- Scanner generators [lexical analysis]
- Parser generators [syntax analysis]
- Syntax-directed translation engines [intermediate code]
- Data-flow analysis engines [code optimization]
- Code-generator generators [code generation]
- Compiler-construction toolkits [for all phases]

1. Scanner generators produce lexical analyzers from a regular-expression description of the tokens of a language. Unix has a scanner-generator tool called LEX.
2. Parser generators automatically produce syntax analyzers (parse trees) from a grammatical description of a programming language. Unix has a tool called YACC which is a parser generator.
3. Syntax-directed translation engines produce collections of routines for walking a parse tree and generating intermediate code.
4. Data-flow analysis engines facilitate the gathering of information about how values are transmitted from one part of a program to each other part. Data-flow analysis is a key part of code optimization.
5. Code-generator generators produce a code generator from a collection of rules for translating each operation of the intermediate language into the machine language for a target machine.
6. Compiler-construction toolkits provide an integrated set of routines for constructing various phases of a compiler.

1.8 PROGRAMMING LANGUAGE BASICS
To design an efficient compiler we should know some language basics. Important concepts shared by popular programming languages like C, C++, C#, and Java are listed below:
- The Static/Dynamic Distinction
- Environments and States
- Static Scope and Block Structure
- Explicit Access Control
- Dynamic Scope
- Parameter Passing Mechanisms
- Aliasing
1.8.1 The Static/Dynamic Distinction
If an issue can be decided at compile time, the language uses a static policy for that issue. A policy that only allows a decision to be made when we execute the program is said to be a dynamic policy, or to require a decision at run time.
The scope of a declaration of x is the region of the program in which uses of x refer to this declaration. A language uses static scope or lexical scope if it is possible to determine the scope of a declaration by looking only at the program text. Otherwise, the language uses dynamic scope. With dynamic scope, as the program runs, the same use of x could refer to any of several different declarations of x.
Example: consider the use of the term "static" as it applies to data in a Java class declaration. In Java, a variable is a name for a location in memory used to hold a data value. Here, "static" refers not to the scope of the variable, but rather to the ability of the compiler to determine the location in memory where the declared variable can be found. A declaration like
public static int x;
makes x a class variable and says that there is only one copy of x, no matter how many objects of this class are created. Moreover, the compiler can determine a location in memory where this integer x will be held. In contrast, had "static" been omitted from this declaration, then each object of the class would have its own location where x would be held, and the compiler could not determine all these places in advance of running the program.

1.8.2 Environments and States
As a program runs, events can change the values of data elements or the interpretation of the names for those data. For example, the execution of an assignment such as x = y + 1 changes the value denoted by the name x; more specifically, the assignment changes the value in whatever location is denoted by x.
The location denoted by x can change at run time. If x is not a static (or "class") variable, then every object of the class has its own location for an instance of variable x. In that case, the assignment to x can change any of those "instance" variables, depending on the object to which a method containing that assignment is applied.

[Figure: environment maps names → locations (variables); state maps locations → values]

The association of names with locations in memory (the store) and then with values can be described by two mappings that change as the program runs:
1. The environment is a mapping from names to locations in the store. Since variables refer to locations ("l-values" in the terminology of C), we could alternatively define an environment as a mapping from names to variables.
2. The state is a mapping from locations in the store to their values. That is, the state maps l-values to their corresponding r-values, in the terminology of C.
Environments change according to the scope rules of a language.
Example: Consider the C program fragment below. Integer i is declared a global variable, and is also declared as a variable local to function f. When f is executing, the environment adjusts so that the name i refers to the location reserved for the i that is local to f, and any use of i, such as the assignment i = 3 shown explicitly, refers to that location.
Typically, the local i is given a place on the run-time stack.

int i;            /* global i */
...
void f(...) {
    int i;        /* local i */
    ...
    i = 3;        /* use of local i */
    ...
}
...
x = i + 1;        /* use of global i */
Whenever a function g other than f is executing, uses of i cannot refer to the i that is local to f. Uses of the name i in g must be within the scope of some other declaration of i. An example is the explicitly shown statement x = i + 1, which is inside some procedure whose definition is not shown. The i in i + 1 presumably refers to the global i.

1.8.3 Static Scope and Block Structure
The scope rules for C are based on program structure; the scope of a declaration is determined implicitly by where the declaration appears in the program. Later languages, such as C++, Java, and C#, also provide explicit control over scopes through the use of keywords like public, private, and protected.
A block is a grouping of declarations and statements. C uses the braces { and } to delimit a block; some other languages use begin and end instead.
Example: The C++ program in Figure 1.12 has four blocks, with several definitions of variables a and b. As a memory aid, each declaration initializes its variable to the number of the block to which it belongs.
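A reconstruction of the program of Figure 1.12, consistent with the four blocks described above and with the output listed below; the exact formatting of the original listing is an assumption.

#include <iostream>
using std::cout;

int main() {                              // block B1
    int a = 1;
    int b = 1;
    {                                     // block B2
        int b = 2;
        {                                 // block B3
            int a = 3;
            cout << a << ' ' << b << '\n';    // prints 3 2
        }
        {                                 // block B4
            int b = 4;
            cout << a << ' ' << b << '\n';    // prints 1 4
        }
        cout << a << ' ' << b << '\n';        // prints 1 2
    }
    cout << a << ' ' << b << '\n';            // prints 1 1
}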
Output:
3 2
1 4
1 2
1 1
Figure 1.12: Blocks in a C++ program
Consider the declaration int a = 1 in block B1. Its scope is all of B1, except for those blocks nested within B1 that have their own declaration of a. B2, nested immediately within B1, does not have a declaration of a, but B3 does. B4 does not have a declaration of a, so block B3 is the only place in the entire program that is outside the scope of the declaration of the name a that belongs to B1. That is, this scope includes B4 and all of B2 except for the part of B2 that is within B3. The scopes of all five declarations are summarized in Figure 1.13.
Figure 1.13: Scopes of declarations

1.8.4 Explicit Access Control
Classes and structures introduce a new scope for their members. If p is an object of a class with a field (member) x, then the use of x in p.x refers to field x in the class definition. The scope of a member declaration x in a class C extends to any subclass C', except if C' has a local declaration of the same name x.
Through the use of keywords like public, private, and protected, object-oriented languages such as C++ or Java provide explicit control over access to member names in a superclass. These keywords support encapsulation by restricting access. Thus, private names are purposely given a scope that includes only the method declarations and definitions associated with that class and any "friend" classes (the C++ term). Protected names are accessible to subclasses. Public names are accessible from outside the class.

1.8.5 Dynamic Scope
Technically, any scoping policy is dynamic if it is based on factors that can be known only when the program executes. The term dynamic scope, however, usually refers to the following policy: a use of a name x refers to the declaration of x in the most recently called procedure with such a declaration. Dynamic scoping of this type appears only in special situations. We shall consider two examples of dynamic policies: macro expansion in the C preprocessor and method resolution in object-oriented programming.
Example: In the C program below, identifier a is a macro that stands for the expression (x + 1). But we cannot resolve x statically, that is, in terms of the program text.

#define a (x+1)
int x = 2;
void b() { int x = 1; printf("%d\n", a); }
void c() { printf("%d\n", a); }
void main() { b(); c(); }

In fact, in order to interpret x, we must use the usual dynamic-scope rule: the function main first calls function b. As b executes, it prints the value of the macro a. Since (x + 1) must be substituted for a, we resolve this use of x to the declaration int x = 1 in function b. The reason is that b has a declaration of x, so the (x + 1) in the printf in b refers to this x. Thus, the value printed is 1.
After b finishes and c is called, we again need to print the value of macro a. However, the only x accessible to c is the global x. The printf statement in c thus refers to this declaration of x, and the value 2 is printed.

1.8.6 Parameter Passing Mechanisms
All programming languages have a notion of a procedure, but they can differ in how these procedures get their arguments. The actual parameters (the parameters used in the call of a procedure) are associated with the formal parameters (those used in the procedure definition).
In call-by-value, the actual parameter is evaluated (if it is an expression) or copied (if it is a variable). The value is placed in the location belonging to the corresponding formal parameter of the called procedure. This method is used in C and Java.
In call-by-reference, the address of the actual parameter is passed to the callee as the value of the corresponding formal parameter. Uses of the formal parameter in the code of the callee are implemented by following this pointer to the location indicated by the caller. Changes to the formal parameter thus appear as changes to the actual parameter.
A third mechanism, call-by-name, was used in the early programming language Algol 60. It requires that the callee execute as if the actual parameter were substituted literally for the formal parameter in the code of the callee, as if the formal parameter were a macro standing for the actual parameter.

1.8.7 Aliasing
There is an interesting consequence of call-by-reference parameter passing, or of its simulation, as in Java, where references to objects are passed by value. It is possible for two formal parameters to refer to the same location; such variables are said to be aliases of one another. As a result, any two variables, which may appear to take their values from two distinct formal parameters, can become aliases of each other.
Example: Suppose a is an array belonging to a procedure p, and p calls another procedure q(x, y) with the call q(a, a). Suppose also that parameters are passed by value, but that array names are really references to the location where the array is stored, as in C or similar languages. Now x and y have become aliases of each other. The important point is that if within q there is an assignment x[10] = 2, then the value of y[10] also becomes 2. A small sketch of this situation appears below.
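A minimal C++ sketch of this aliasing example; the array size and element values are arbitrary assumptions.

#include <iostream>

void q(int* x, int* y) {          // called as q(a, a), so x and y alias
    x[10] = 2;                    // a write through x ...
    std::cout << y[10] << '\n';   // ... is visible through y: prints 2
}

int main() {                      // plays the role of procedure p
    int a[20] = {0};              // array a belonging to the caller
    q(a, a);                      // the array name decays to a pointer to a's storage
}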
UNIT II LEXICAL ANALYSIS

2.1 NEED AND ROLE OF LEXICAL ANALYZER
Lexical analysis is the first phase of the compiler. The lexical analyzer reads the input characters from left to right, one character at a time, from the source program and generates the sequence of tokens for the lexemes. Each token is a logically cohesive unit such as an identifier, keyword, operator or punctuation mark. The lexical analyzer enters lexemes into the symbol table and also reads from the symbol table. These interactions are suggested in Figure 2.1.

Figure 2.1: Interactions between the lexical analyzer and the parser

Since the lexical analyzer is the part of the compiler that reads the source text, it may perform certain other tasks besides identification of lexemes. One such task is stripping out comments and whitespace (blank, newline and tab characters). Another task is correlating error messages generated by the compiler with the source program.

Needs / roles / functions of the lexical analyzer:
- It produces the stream of tokens.
- It eliminates comments and whitespace.
- It keeps track of line numbers.
- It reports the errors encountered while generating tokens.
- It stores information about identifiers, keywords, constants and so on into the symbol table.

Lexical analyzers are divided into two processes:
a) Scanning consists of the simple processes that do not require tokenization of the input, such as deletion of comments and compaction of consecutive whitespace characters into one.
b) Lexical analysis proper is the more complex portion, where the scanner produces the sequence of tokens as output.

Lexical analysis versus parsing / issues in lexical analysis:
1. Simplicity of design is the most important consideration. The separation of lexical and syntactic analysis often allows us to simplify these tasks; for example, whitespace and comments are already removed by the lexical analyzer before parsing begins.
2. Compiler efficiency is improved. A separate lexical analyzer allows us to apply specialized techniques that serve only the lexical task, not the job of parsing. In addition, specialized buffering techniques for reading input characters can speed up the compiler significantly.
3. Compiler portability is enhanced. Input-device-specific peculiarities can be restricted to the lexical analyzer.

Tokens, Patterns, and Lexemes
A token is a pair consisting of a token name and an optional attribute value. The token name is an abstract symbol representing a kind of single lexical unit, e.g., a particular keyword, or a sequence of input characters denoting an identifier. Operators, special symbols and constants are also typical tokens.
A pattern is a description of the form that the lexemes of a token may take; it is the set of rules that describe the token.
A lexeme is a sequence of characters in the source program that matches the pattern for a token.

Table 2.1: Tokens and Lexemes
TOKEN       | INFORMAL DESCRIPTION (PATTERN)    | SAMPLE LEXEMES
if          | characters i, f                   | if
else        | characters e, l, s, e             | else
comparison  | < or > or <= or >= or == or !=    | <=, !=

UNIT III SYNTAX ANALYSIS

16. Define LR(0) items.
An LR(0) item of a grammar G is a production of G with a dot at some position of the right side. Thus, the production A → XYZ yields the four items
A → .XYZ
A → X.YZ
A → XY.Z
A → XYZ.
17. What is meant by viable prefixes?
The set of prefixes of right-sentential forms that can appear on the stack of a shift-reduce parser are called viable prefixes. An equivalent definition of a viable prefix is that it is a prefix of a right-sentential form that does not continue past the right end of the rightmost handle of that sentential form.
18. Define handle.
A handle of a string is a substring that matches the right side of a production, and whose reduction to the nonterminal on the left side of the production represents one step along the reverse of a rightmost derivation. A handle of a right-sentential form γ is a production A → β and a position of γ where the string β may be found and replaced by A to produce the previous right-sentential form in a rightmost derivation of γ. That is, if S ⇒* αAw ⇒ αβw, then A → β in the position following α is a handle of αβw.
19. What are kernel and non-kernel items?
Kernel items include the initial item S' → .S and all items whose dots are not at the left end. Non-kernel items have their dots at the left end.
20. What is phrase-level error recovery?
Phrase-level error recovery is implemented by filling in the blank entries in the predictive parsing table with pointers to error routines. These routines may change, insert, or delete symbols on the input and issue appropriate error messages. They may also pop from the stack.
UNIT IV SYNTAX DIRECTED TRANSLATION & RUN TIME ENVIRONMENT
1. What are the benefits of intermediate code generation?
- A compiler for a different machine can be created by attaching a different back end to the existing front end.
- A compiler for a different source language can be created by providing a different front end for the corresponding source language to the existing back end.
- A machine-independent code optimizer can be applied to the intermediate code in order to optimize the code generation.
2. What are the various types of intermediate code representation?
There are mainly three types of intermediate code representations:
- Syntax trees
- Postfix notation
- Three-address code
3. Define backpatching.
Backpatching is the activity of filling up unspecified information of labels using appropriate semantic actions during the code generation process. In the semantic actions the functions used are mklist(i), merge_list(p1, p2) and backpatch(p, i).
4. Mention the functions that are used in backpatching.
- mklist(i) creates a new list containing only i, where i is an index into the array of quadruples, and returns a pointer to the list.
- merge_list(p1, p2) concatenates the two lists pointed to by p1 and p2 and returns a pointer to the concatenated list.
- backpatch(p, i) inserts i as the target label for each of the statements on the list pointed to by p.
5. What is the intermediate code representation for the expression a or b and not c?
The intermediate code representation for the expression a or b and not c is the three-address sequence
t1 := not c
t2 := b and t1
t3 := a or t2
6. What are the various methods of implementing three address statements?
The three-address statements can be implemented using the following methods:
- Quadruples: a record structure with four fields: operator (OP), arg1, arg2 and result.
- Triples: to avoid entering temporary names into the symbol table, a temporary value is referred to by the position of the statement that computes it.
- Indirect triples: a list of pointers to triples is used instead of listing the triples themselves.
7. Give the syntax-directed definition for the if-else statement.
1. S → if E then S1
E.true := new_label()
E.false := S.next
S1.next := S.next
S.code := E.code || gen_code(E.true ':') || S1.code
2. S → if E then S1 else S2
E.true := new_label()
E.false := new_label()
S1.next := S.next
S2.next := S.next
S.code := E.code || gen_code(E.true ':') || S1.code || gen_code('goto', S.next) || gen_code(E.false ':') || S2.code
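A minimal sketch of how the three functions of question 4 might be implemented; the Quad layout and the use of std::list are illustrative assumptions.

#include <list>
#include <vector>

struct Quad { const char* op; int targetLabel; };   // simplified quadruple
std::vector<Quad> quads;                            // the array of quadruples

// mklist(i): a new list containing only i, an index into the quadruple array
std::list<int> mklist(int i) { return {i}; }

// merge_list(p1, p2): concatenate the two lists and return the combined list
std::list<int> merge_list(std::list<int> p1, std::list<int> p2) {
    p1.splice(p1.end(), p2);
    return p1;
}

// backpatch(p, i): insert i as the target label of every statement on list p
void backpatch(const std::list<int>& p, int i) {
    for (int q : p) quads[q].targetLabel = i;
}

int main() {
    quads = {{"goto", -1}, {"goto", -1}};              // two jumps with unfilled targets
    backpatch(merge_list(mklist(0), mklist(1)), 7);    // both now jump to label 7
}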
UNIT V CODE OPTIMIZATION AND CODE GENERATION
1. Mention the properties that a code generator should possess.
- The code generator should produce correct and high-quality target code. In other words, the code generated should be such that it makes effective use of the resources of the target machine.
- The code generator itself should run efficiently.
2. List the terminologies used in basic blocks.
- Define and use: the three-address statement a := b + c is said to define a and to use b and c.
- Live and dead: a name in the basic block is said to be live at a given point if its value is used after that point in the program, and dead at a given point if its value is never used after that point in the program.
3. What is a flow graph?
A flow graph is a directed graph in which flow-of-control information is added to the basic blocks. The nodes of the flow graph are the basic blocks; there is an edge from block B1 to block B2 if B2 immediately follows B1 in the given sequence. We can say that B1 is a predecessor of B2.
4. What is a DAG? Mention its applications.
A directed acyclic graph (DAG) is a useful data structure for implementing transformations on basic blocks. A DAG is used in:
- determining the common subexpressions in the block,
- determining which identifiers have their values used in the block,
- determining which statements compute values that could be used outside the block,
- simplifying the list of quadruples by eliminating the common subexpressions and not performing the assignment of the form x := y unless and until it is a must.
5. Define peephole optimization.
Peephole optimization is a simple and effective technique for locally improving target code. This technique is applied to improve the performance of the target program by examining a short sequence of target instructions and replacing these instructions by a shorter or faster sequence.
6. List the characteristics of peephole optimization.
- Redundant instruction elimination
- Flow-of-control optimization
- Algebraic simplification
- Use of machine idioms
7. How do you calculate the cost of an instruction?
The cost of an instruction is computed as one plus the costs associated with the source and destination addressing modes (the added costs):
MOV R0, R1           cost 1
MOV R1, M            cost 2
SUB 5(R0), *10(R1)   cost 3
8. What is a basic block?
A basic block is a sequence of consecutive statements in which flow of control enters at the beginning and leaves at the end without halt or possibility of branching except at the end. Example:
t1 := a * 5
t2 := t1 + 7
t3 := t2 - 5
t4 := t1 + t3
t5 := t2 + b
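A minimal sketch of how basic-block boundaries can be found with the usual leader rules (the first instruction is a leader, the target of a jump is a leader, and the instruction following a jump is a leader); the Instr layout is an illustrative assumption.

#include <set>
#include <vector>

struct Instr { bool isJump; int target; };   // target = index of the jump destination

std::set<int> findLeaders(const std::vector<Instr>& code) {
    std::set<int> leaders;
    if (!code.empty()) leaders.insert(0);        // rule 1: first instruction
    for (int i = 0; i < (int)code.size(); ++i) {
        if (code[i].isJump) {
            leaders.insert(code[i].target);      // rule 2: jump target
            if (i + 1 < (int)code.size())
                leaders.insert(i + 1);           // rule 3: instruction after a jump
        }
    }
    return leaders;   // each basic block runs from a leader up to the next leader
}

int main() {
    std::vector<Instr> code = {
        {false, 0}, {false, 0},   // straight-line code
        {true, 0},                // goto instruction 0
        {false, 0},               // instruction following the jump
    };
    findLeaders(code);            // leaders: {0, 3}
}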
9. Mention the issues to be considered while applying the techniques for code optimization.
- The meaning of the program must be preserved: the improvement over the program efficiency must be achieved without changing the algorithm of the program.
10. What are the basic goals of code movement?
- To reduce the size of the code, i.e., to improve the space complexity.
- To reduce the frequency of execution of code, i.e., to improve the time complexity.
11. What do you mean by machine dependent and machine independent optimization?
- Machine dependent optimization is applied to the target machine: it exploits the instruction set and the addressing modes used for the instructions to produce efficient target code.
- Machine independent optimization is applied at the level of the programming language: it uses appropriate programming structure and efficient arithmetic properties in order to reduce the execution time.
12. What are the different data flow properties?
- Available expressions
- Reaching definitions
- Live variables
- Busy variables
13. List the different storage allocation strategies.
The strategies are:
- Static allocation
- Stack allocation
- Heap allocation
14. What are the contents of an activation record?
The activation record is a block of memory used for managing the information needed by a single execution of a procedure. Various fields of an activation record are:
- Temporary values
- Local variables
- Saved machine status
- Access link
- Control link
- Actual parameters
- Return value
15. What is dynamic scoping?
In dynamic scoping, a use of a non-local variable refers to the non-local data declared in the most recently called and still active procedure. Therefore, each time a procedure is called, new bindings are set up for its local names. With dynamic scoping, symbol tables can be required at run time.
16. Define symbol table.
A symbol table is a data structure used by the compiler to keep track of the semantics of variables. It stores scope and binding information about names.
17. What is code motion?
Code motion is an optimization technique in which the amount of code in a loop is decreased. This transformation is applicable to an expression that yields the same result independent of the number of times the loop is executed. Such an expression is placed before the loop.
18. What are the properties of an optimizing compiler?
- The source code should be compiled to the minimum amount of target code.
- There should not be any unreachable code, and dead code should be completely removed.
- The optimizing compiler should apply the following code-improving transformations on the source language:
i) common subexpression elimination
ii) dead code elimination
iii) code movement
iv) strength reduction
20. Suggest a suitable approach for computing a hash function.
Using the hash function we should obtain the exact location of a name in the symbol table. The hash function should result in a uniform distribution of names over the symbol table, and it should be such that there will be a minimum number of collisions. A collision is a situation where the hash function results in the same location for storing two different names.
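A minimal sketch of such a hash function; the multiplier 31 and the table size 211 are common choices, not values the text prescribes.

#include <cstdint>
#include <string>

std::uint64_t hashName(const std::string& name, std::uint64_t tableSize) {
    std::uint64_t h = 0;
    for (unsigned char c : name)
        h = h * 31 + c;          // mix every character into the running value
    return h % tableSize;        // slot of the name in the symbol table
}

int main() {
    (void)hashName("position", 211);  // example: slot for the name "position"
}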