Compilers

Question 1

Generation of intermediate code based on an abstract machine model is useful in compilers because

A
it makes implementation of lexical analysis and syntax analysis easier
B
syntax-directed translations can be written for intermediate code generation
C
it enhances the portability of the front end of the compiler
D
it is not possible to generate code for real machines directly from high level language programs
Question 1 Explanation: 
Generating intermediate code for an abstract machine model decouples the front end from any particular target machine, which enhances the portability of the front end: the same front end can be reused with different back ends.
Question 2

Type checking is normally done during

A
lexical analysis
B
syntax analysis
C
syntax directed translation
D
code optimization
Question 2 Explanation: 
Type checking is normally done during syntax directed translation.
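The idea can be sketched with a toy syntax-directed type checker (illustrative Python with hypothetical node names, not part of the question): the semantic action for each production synthesizes a node's type from the types of its children.

```python
# Minimal sketch of syntax-directed type checking: each node's "semantic
# action" computes its type from its children's types (all names hypothetical).

def type_of(node):
    """Return 'int' or 'real' for a tiny expression language."""
    kind = node[0]
    if kind == "int_const":
        return "int"
    if kind == "real_const":
        return "real"
    if kind == "add":                          # E -> E1 + E2
        t1, t2 = type_of(node[1]), type_of(node[2])
        if t1 == t2:
            return t1                          # both int or both real
        return "real"                          # mixed operands widen to real
    raise TypeError(f"unknown node kind: {kind}")

# ('add', ('int_const', 2), ('real_const', 3.5)) has type 'real'
```

This mirrors how real compilers attach type rules to grammar productions during translation, rather than doing them in the lexer or a separate pass.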
Question 3

The number of tokens in the following C statement.

printf("i = %d, &i = %x", i, &i); 

is

A
3
B
26
C
10
D
21
Question 3 Explanation: 
C has six classes of tokens:
(i) Keywords
(ii) Identifiers
(iii) Constants
(iv) String literals
(v) Operators
(vi) Special symbols (punctuators)
printf = Token 1
( = Token 2
"i = %d, &i = %x" = Token 3 [everything inside the double quotes is one string-literal token]
, = Token 4
i = Token 5
, = Token 6
& = Token 7
i = Token 8
) = Token 9
; = Token 10
In total, the statement contains 10 tokens.
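The count can be double-checked with a rough tokenizer sketch. This is a simplification (real C lexers handle many more token forms); the regular expressions below are assumptions sufficient only for this one statement.

```python
import re

# Rough token patterns, tried left to right: string literal first, then
# identifiers, then the single-character punctuators used in the statement.
# Whitespace between tokens is skipped automatically by findall.
TOKEN_RE = re.compile(r'"[^"]*"|[A-Za-z_]\w*|[(),;&]')

stmt = 'printf("i = %d, &i = %x", i, &i);'
tokens = TOKEN_RE.findall(stmt)
print(len(tokens))   # 10 tokens, matching option C
```

Note that the string-literal pattern comes first in the alternation, so the entire quoted format string is consumed as a single token rather than being split at its internal punctuation.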
Question 4

For the program segment given below, which of the following are true?

 program main (output);
 type link = ^data;
      data = record
         d : real;
         n : link
         end;
 var ptr : link;
 begin
    new (ptr);
    ptr := nil;
    ptr^.d := 5.2;
    writeln (ptr)
 end. 
A
The program leads to compile time error
B
The program leads to run time error
C
The program outputs 5.2
D
The program produces error relating to nil pointer dereferencing
E
None of the above
Question 4 Explanation: 
Note: Out of syllabus. (For reference: after ptr := nil, the assignment ptr^.d := 5.2 dereferences a nil pointer, so the program produces an error relating to nil pointer dereferencing, option D.)
Question 5

In a compiler, the module that checks every character of the source text is called:

A
The code generator.
B
The code optimizer.
C
The lexical analyser.
D
The syntax analyser.
Question 5 Explanation: 
The lexical analyser examines every character of the source text in order to identify tokens.
Question 6

Using longer identifiers in a program will necessarily lead to:

A
Somewhat slower compilation
B
A program that is easier to understand
C
An incorrect program
D
None of the above
Question 6 Explanation: 
The lexical analyzer must scan more characters to recognize longer identifiers, so compilation becomes somewhat slower.
Question 7

Consider a program P that consists of two source modules M1 and M2 contained in two different files. If M1 contains a reference to a function defined in M2, the reference will be resolved at

A
Edit time
B
Compile time
C
Link time
D
Load time
Question 7 Explanation: 
References to functions defined in other modules are resolved by the linker, which combines the separately compiled object files into a single executable at link time.
Question 8

Match the following:

(P) Lexical analysis       (1) Graph coloring
(Q) Parsing                (2) DFA minimization
(R) Register allocation    (3) Post-order traversal
(S) Expression evaluation  (4) Production tree
A
P-2, Q-3, R-1, S-4
B
P-2, Q-1, R-4, S-3
C
P-2, Q-4, R-1, S-3
D
P-2, Q-3, R-4, S-1
Question 8 Explanation: 
P) Lexical analysis is related with FA and Regular expressions.
Q) Expression can be evaluated with postfix traversals.
R) Register allocation can be done by graph colouring.
S) The parser constructs a production tree.
Hence, answer is ( C ).
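The (S)-(3) pairing — expression evaluation by post-order traversal — can be illustrated with a small sketch (the names and tree encoding are my own, not from the question): both children are evaluated first, then the operator at the node is applied.

```python
# Evaluate an expression tree by post-order traversal: visit the left
# subtree, then the right subtree, then apply the operator at the root.

def eval_postorder(node):
    if isinstance(node, (int, float)):    # leaf: a constant
        return node
    op, left, right = node
    l = eval_postorder(left)              # post-order: children first...
    r = eval_postorder(right)
    return {"+": l + r, "-": l - r, "*": l * r}[op]   # ...operator last

# (2 + 3) * 4 encoded as a tree:
tree = ("*", ("+", 2, 3), 4)
print(eval_postorder(tree))   # 20
```

This is the same order in which a compiler emits code for an expression: operands are computed before the instruction that combines them.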
Question 9

Match the following:

(P) Lexical analysis              (i) Leftmost derivation
(Q) Top down parsing             (ii) Type checking
(R) Semantic analysis           (iii) Regular expressions
(S) Runtime environments         (iv) Activation records
A
P ↔ i, Q ↔ ii, R ↔ iv, S ↔ iii
B
P ↔ iii, Q ↔ i, R ↔ ii, S ↔ iv
C
P ↔ ii, Q ↔ iii, R ↔ i, S ↔ iv
D
P ↔ iv, Q ↔ i, R ↔ ii, S ↔ iii
Question 9 Explanation: 
Regular expressions are used in lexical analysis.
Top-down parsing produces a leftmost derivation of the string.
Type checking is done in semantic analysis.
Activation records are created in memory at runtime.
Question 10

Match the following according to input (from the left column) to the compiler phase (in the right column) that processes it:

(P) syntax tree                    (i)   Code generator
(Q) character stream               (ii)  Syntax analyzer
(R) intermediate representation    (iii) Semantic analyzer
(S) token stream                   (iv)  Lexical analyzer

A
P→(ii), Q→(iii), R→(iv), S→(i)
B
P→(ii), Q→(i), R→(iii), S→(iv)
C
P→(iii), Q→(iv), R→(i), S→(ii)
D
P→(i), Q→(iv), R→(ii), S→(iii)
Question 10 Explanation: 
Character stream is input to lexical analyzer which produces tokens as output. So Q → (iv).
Token stream is forwarded as input to Syntax analyzer which produces syntax tree as output. So, S → (ii).
Syntax tree is the input for the semantic analyzer, So P → (iii).
Intermediate representation is input for Code generator. So R → (i).
Question 11

Which one of the following statements is FALSE?

A
Context-free grammar can be used to specify both lexical and syntax rules.
B
Type checking is done before parsing.
C
High-level language programs can be translated to different Intermediate Representations.
D
Arguments to a function can be passed using the program stack.
Question 11 Explanation: 
Type checking is done in semantic analysis phase after syntax analysis phase (i.e., after parsing).
There are 11 questions to complete.
