Compiler-Design
Question 1 |
Consider the grammar given below:
-
S → Aa
A → BD
B → b | ε
D → d | ε
Let a, b, d and $ be indexed as follows:
a → 3, b → 2, d → 1, $ → 0
Compute the FOLLOW set of the non-terminal B and write the index values for the symbols in the FOLLOW set in the descending order. (For example, if the FOLLOW set is {a, b, d, $}, then the answer should be 3210)
30 | |
31 | |
10 | |
21 |
{FOLLOW(B) = FIRST(D) ∪ FOLLOW(A), since D can derive ε and FOLLOW(A) = {a}}
FOLLOW(B) = {d} ∪ {a} = {a, d}
With the indexing above (a → 3, d → 1), the answer is 31.
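The same computation can be cross-checked mechanically. Below is a small Python sketch (our own illustration, not part of the original answer; the names grammar, first_of_seq and the marker 'eps' are ours) that computes FIRST and FOLLOW for this grammar by fixed-point iteration:

# Grammar of Question 1; [] denotes an epsilon production, 'eps' marks epsilon.
grammar = {
    'S': [['A', 'a']],
    'A': [['B', 'D']],
    'B': [['b'], []],
    'D': [['d'], []],
}
nonterminals = set(grammar)

def first_of_seq(seq, first):
    # FIRST of a sequence of grammar symbols, given FIRST of the non-terminals
    out, nullable = set(), True
    for sym in seq:
        if sym in nonterminals:
            out |= (first[sym] - {'eps'})
            if 'eps' not in first[sym]:
                nullable = False
                break
        else:
            out.add(sym)
            nullable = False
            break
    if nullable:
        out.add('eps')
    return out

first = {nt: set() for nt in nonterminals}
changed = True
while changed:                                  # fixed point for FIRST
    changed = False
    for nt, prods in grammar.items():
        for prod in prods:
            f = first_of_seq(prod, first)
            if not f <= first[nt]:
                first[nt] |= f
                changed = True

follow = {nt: set() for nt in nonterminals}
follow['S'].add('$')
changed = True
while changed:                                  # fixed point for FOLLOW
    changed = False
    for nt, prods in grammar.items():
        for prod in prods:
            for i, sym in enumerate(prod):
                if sym not in nonterminals:
                    continue
                rest = first_of_seq(prod[i + 1:], first)
                add = (rest - {'eps'}) | (follow[nt] if 'eps' in rest else set())
                if not add <= follow[sym]:
                    follow[sym] |= add
                    changed = True

print(follow['B'])   # expected output: {'a', 'd'}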
Question 2 |
Which one of the following kinds of derivation is used by LR parsers?
Leftmost in reverse | |
Rightmost in reverse | |
Leftmost | |
Rightmost |
Question 3 |
Consider the augmented grammar given below:
S' → S
S → 〈L〉 | id
L → L,S | S
Let I0 = CLOSURE ({[S' → ·S]}). The number of items in the set GOTO (I0 , 〈 ) is: _____.
4 | |
5 | |
6 | |
7 |

GOTO(I0, 〈) = CLOSURE({[S → 〈·L〉]}) = {[S → 〈·L〉], [L → ·L,S], [L → ·S], [S → ·〈L〉], [S → ·id]}
Hence, the set GOTO(I0, 〈) has 5 items.
Question 4 |
Consider the following grammar and the semantic actions to support the inheritance type declaration attributes. Let X1, X2, X3, X4, X5 and X6 be the placeholders for the non-terminals D, T, L or L1 in the following table:

Which one of the following are the appropriate choices for X1, X2, X3 and X4?
X1 = L, X2 = L, X3 = L1, X4 = T | |
X1 = L, X2 = T, X3 = L1, X4 = L | |
X1 = T, X2 = L, X3 = L1, X4 = T | |
X1 = T, X2 = L, X3 = T, X4 = L1 |
The production L → L1, id {X3.type = X4.type} involves only L and L1, so neither X3 nor X4 can be T.
This rules out options 1, 3 and 4.
Hence, option 2 is the correct answer.
Question 5 |
Which one of the following statements is FALSE?
Context-free grammar can be used to specify both lexical and syntax rules. | |
Type checking is done before parsing. | |
High-level language programs can be translated to different Intermediate Representations. | |
Arguments to a function can be passed using the program stack. |
Question 6 |
Consider the following parse tree for the expression a#b$c$d#e#f, involving two binary operators $ and #.

Which one of the following is correct for the given parse tree?
$ has higher precedence and is left associative; # is right associative | |
# has higher precedence and is left associative; $ is right associative
| |
$ has higher precedence and is left associative; # is left associative | |
# has higher precedence and is right associative; $ is left associative
|
Question 7 |
A lexical analyzer uses the following patterns to recognize three tokens T1, T2, and T3 over the alphabet {a,b,c}.
-
T1: a?(b∣c)*a
T2: b?(a∣c)*b
T3: c?(b∣a)*c
Note that ‘x?’ means 0 or 1 occurrence of the symbol x. Note also that the analyzer outputs the token that matches the longest possible prefix.
If the string bbaacabc is processed by the analyzer, which one of the following is the sequence of tokens it outputs?
T1T2T3 | |
T1T1T3 | |
T2T1T3 | |
T3T3 |
T1 : (b+c)*a + a(b+c)*a
T2 : (a+c)*b + b(a+c)*b
T3 : (b+a)*c + c(b+a)*c
Now the string is: bbaacabc
Please NOTE:
Token that matches the longest possible prefix
We can observe that the longest possible prefix of the string is "bbaac", which is matched by T3.
After this prefix we are left with "abc", which is again matched by T3 (as the longest possible prefix).
So, the answer is T3T3.
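This behaviour can also be simulated directly. The sketch below (our own, not part of the original solution; the names patterns and tokenize are ours) tries every prefix of the remaining input, longest first, against the three patterns and emits the matching token:

import re

patterns = [('T1', r'a?(b|c)*a'), ('T2', r'b?(a|c)*b'), ('T3', r'c?(b|a)*c')]

def tokenize(s):
    tokens, i = [], 0
    while i < len(s):
        best_len, best_tok = 0, None
        # try every prefix length from longest to shortest; keep the first full match
        for j in range(len(s), i, -1):
            for name, pat in patterns:
                if re.fullmatch(pat, s[i:j]):
                    best_len, best_tok = j - i, name
                    break
            if best_tok:
                break
        if not best_tok:
            raise ValueError('no token matches at position %d' % i)
        tokens.append(best_tok)
        i += best_len
    return tokens

print(tokenize('bbaacabc'))   # expected output: ['T3', 'T3']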
Question 8 |
Consider the following intermediate program in three address code
p = a - b
q = p * c
p = u * v
q = p + q
Which of the following corresponds to a static single assignment form of the above code?
![]() | |
![]() | |
![]() | |
![]() |
In Static Single Assignment form (SSA) each assignment to a variable should be specified with distinct names.
We use subscripts to distinguish each definition of variables.
In the given code segment, there are two assignments of the variable p
p = a-b
p = u*v
and two assignments of the variable q
q = p*c
q = p+q
So we use two variables p1, p2 for specifying distinct assignments of p and q1, q2 for each assignment of q.
Static Single Assignment form(SSA) of the given code segment is:
p1 = a-b
q1 = p1 * c
p2 = u * v
q2 = p2 + q1
Note:
As per the options given in GATE 2017, the answer is B:
p3 = a - b
q4 = p3 * c
p4 = u * v
q5 = p4 + q4
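For straight-line code like this, SSA renaming can be sketched in a few lines of Python (our own illustration; the helper name to_ssa and the tuple encoding of statements are assumptions, not part of the original answer):

from collections import defaultdict

def to_ssa(stmts):
    version = defaultdict(int)      # variable -> current version number
    current = {}                    # variable -> latest SSA name
    out = []
    for target, op1, operator, op2 in stmts:
        a = current.get(op1, op1)   # reads use the most recent definition, if any
        b = current.get(op2, op2)
        version[target] += 1
        name = '%s%d' % (target, version[target])
        current[target] = name
        out.append('%s = %s %s %s' % (name, a, operator, b))
    return out

code = [('p', 'a', '-', 'b'),
        ('q', 'p', '*', 'c'),
        ('p', 'u', '*', 'v'),
        ('q', 'p', '+', 'q')]
print('\n'.join(to_ssa(code)))
# p1 = a - b
# q1 = p1 * c
# p2 = u * v
# q2 = p2 + q1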
Question 9 |
Consider the following grammar:

What is FOLLOW(Q)?
{R} | |
{w} | |
{w, y} | |
{w, $} |
FOLLOW(Q) ⊇ FIRST(R)
FIRST(R) = {w, ϵ}
Since ϵ ∈ FIRST(R), FOLLOW(Q) = {w} ∪ FIRST(S)
FIRST(S) = {y}
So, FOLLOW(Q) = {w, y}
Question 10 |
Consider the following grammar:
stmt → if expr then expr else expr; stmt | ȯ
expr → term relop term | term
term → id | number
id → a | b | c
number → [0-9]
where relop is a relational operator (e.g., <, >, …), ȯ refers to the empty statement, and if, then, else are terminals.
Consider a program P following the above grammar containing ten if terminals. The number of control flow paths in P is ________. For example, the program
if e1 then e2 else e3
has 2 control flow paths, e1 → e2 and e1 → e3.
1024 | |
1025 | |
1026 | |
1027 |
By the grammar, a program with ten if terminals is a chain of if statements, one after another:
if ... ;
if ... ;
⋮
(10 such ifs)
Every if statement has 2 control-flow choices, as given in the question. Hence,
we have 2 control-flow choices for the 1st 'if',
2 control-flow choices for the 2nd 'if',
⋮
and 2 control-flow choices for the 10th 'if'.
Since all ten choices combine independently along a single path through the program, the total number of control-flow paths is
2 × 2 × … × 2 (10 times) = 2^10 = 1024
Question 11 |
Consider the expression (a-1)*(((b+c)/3)+d). Let X be the minimum number of registers required by an optimal code generation (without any register spill) algorithm for a load/store architecture, in which (i) only load and store instructions can have memory operands and (ii) arithmetic instructions can have only register or immediate operands. The value of X is ___________.
2 | |
3 | |
4 | |
5 |
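One way to see why two registers suffice is an Ershov-style labelling of the expression tree, shown in the rough Python sketch below (our own estimate, not the official solution; it assumes that integer constants such as 1 and 3 can be used as immediate operands and therefore need no register, while every variable must be loaded):

def need(node):
    if isinstance(node, int):      # constant leaf -> immediate operand, no register
        return 0
    if isinstance(node, str):      # variable leaf -> must be loaded into a register
        return 1
    _, left, right = node
    l, r = need(left), need(right)
    return max(l, r) if l != r else l + 1

# (a-1) * (((b+c)/3) + d)
expr = ('*', ('-', 'a', 1), ('+', ('/', ('+', 'b', 'c'), 3), 'd'))
print(need(expr))    # prints 2

Evaluating the right subtree first (b and c in two registers, then /3 and +d reusing them) leaves one register holding its result, and a-1 needs only one more register, so the whole expression fits in 2 registers.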

Question 12 |
Match the following according to input (from the left column) to the compiler phase (in the right column) that processes it:

P→(ii), Q→(iii), R→(iv), S→(i) | |
P→(ii), Q→(i), R→(iii), S→(iv) | |
P→(iii), Q→(iv), R→(i), S→(ii) | |
P→(i), Q→(iv), R→(ii), S→(iii) |
Token stream is forwarded as input to Syntax analyzer which produces syntax tree as output. So, S → (ii).
Syntax tree is the input for the semantic analyzer, So P → (iii).
Intermediate representation is input for Code generator. So R → (i).
Question 13 |
Which of the following statements about parser is/are CORRECT?
-
I. Canonical LR is more powerful than SLR.
II. SLR is more powerful than LALR.
III. SLR is more powerful than Canonical LR.
I only | |
II only | |
III only | |
II and III only |
The power in increasing order is:
LR(0) < SLR < LALR < CLR
Hence only I is true.
Question 14 |
Consider the following expression grammar G:
-
E → E - T | T
T → T + F | F
F → (E) | id
Which of the following grammars is not left recursive, but is equivalent to G?
![]() | |
![]() | |
![]() | |
![]() |
S→Sα | β
The equivalent production (after removing left recursion) is
S→βS1
S1→αS1 | ϵ
Hence after removing left recursion from: E→ E-T | T, here α = -T and β = T
E→TE1
E1→ -TE1 | ϵ
After removing left recursion from: T→T+F | F, here α=+F and β=F
T→FT1
T1→ +FT1 | ϵ
Replace E1 = X and T1 = Y
We have,
E→TX
X→-TX | ϵ
T→FY
Y→+FY | ϵ
F→(E)| id
Question 15 |
Consider the following code segment.
-
x = u - t;
y = x * v;
x = y + w;
y = t - z;
y = x * y;
The minimum number of variables required to convert the above code segment to static single assignment form is ________.
10 | |
11 | |
12 | |
13 |
Generally, subscripts are used to distinguish each definition of variables.
In the given code segment, there are two assignments of the variable x
x = u - t;
x = y + w;
and three assignments of the variable y.
y = x * v;
y = t - z;
y = x * y
Hence, two variables viz x1, x2 should be used for specifying distinct assignments of x
and for y it is named as y1, y2 and y3 for each assignment of y.
Hence, total number of variables is 10 (x1, x2, y1, y2, y3, t, u, v, w, z), and there are 5 temporary variables.
Static Single Assignment form (SSA) of the given code segment is:
x1 = u - t;
y1 = x1 * v;
x2 = y1 + w;
y2 = t - z;
y3 = x2 * y2;
Question 16 |
The attributes of three arithmetic operators in some programming language are given below.
Operator    Precedence    Associativity    Arity
+           High          Left             Binary
−           Medium        Right            Binary
∗           Low           Left             Binary
The value of the expression 2 – 5 + 1 – 7 * 3 in this language is __________.
9 | |
10 | |
11 | |
12 |
+ has the highest precedence, so 5 + 1 is grouped first: 2 − 5 + 1 − 7 * 3 = 2 − (5 + 1) − 7 * 3 = 2 − 6 − 7 * 3
Now, − has higher precedence than *, so the subtractions are evaluated before *, and since − is right associative, (6 − 7) is evaluated first.
2 − 6 − 7 * 3 = (2 − (6 − 7)) * 3 = (2 − (−1)) * 3 = 3 * 3 = 9
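The evaluation can be reproduced with a small precedence-climbing evaluator (our own sketch; the tables PREC and RIGHT simply encode the attributes given in the question and are not part of the original solution):

PREC  = {'+': 3, '-': 2, '*': 1}   # + highest, * lowest
RIGHT = {'-'}                      # only '-' is right associative

def evaluate(tokens):
    pos = [0]
    def parse(min_prec):
        left = int(tokens[pos[0]]); pos[0] += 1
        while pos[0] < len(tokens) and PREC[tokens[pos[0]]] >= min_prec:
            op = tokens[pos[0]]; pos[0] += 1
            nxt = PREC[op] if op in RIGHT else PREC[op] + 1
            right = parse(nxt)
            left = {'+': left + right, '-': left - right, '*': left * right}[op]
        return left
    return parse(0)

print(evaluate(['2', '-', '5', '+', '1', '-', '7', '*', '3']))   # prints 9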
Question 17 |
Consider the following Syntax Directed Translation Scheme (SDTS), with non-terminals {S, A} and terminals {a,b}.
S → aA  { print 1 }
S → a   { print 2 }
A → Sb  { print 3 }
Using the above SDTS, the output printed by a bottom-up parser, for the input aab is:
1 3 2 | |
2 2 3 | |
2 3 1 | |
syntax error |

Question 18 |
Match the following:
(P) Lexical analysis         (i) Leftmost derivation
(Q) Top down parsing         (ii) Type checking
(R) Semantic analysis        (iii) Regular expressions
(S) Runtime environments     (iv) Activation records
P ↔ i, Q ↔ ii, R ↔ iv, S ↔ iii | |
P ↔ iii, Q ↔ i, R ↔ ii, S ↔ iv | |
P ↔ ii, Q ↔ iii, R ↔ i, S ↔ iv | |
P ↔ iv, Q ↔ i, R ↔ ii, S ↔ iii |
Lexical analysis is specified using regular expressions.
Top down parsing traces out a leftmost derivation of the string.
Type checking is done during semantic analysis.
Activation records are created and managed in the runtime environment.
Question 19 |
Which one of the following grammars is free from left recursion?
![]() | |
![]() | |
![]() | |
![]() |
The grammar in option C has indirect left recursion because of the productions S → Aa and A → Sc.
The grammar in option D also has indirect left recursion because of the productions A → Bd and B → Ae.
Question 20 |
Which one of the following is True at any valid state in shift-reduce parsing?
Viable prefixes appear only at the bottom of the stack and not inside | |
Viable prefixes appear only at the top of the stack and not inside | |
The stack contains only a set of viable prefixes | |
The stack never contains viable prefixes |
A viable prefix is a prefix of a right-sentential form that can appear on the stack of a shift-reduce parser; it never extends past the right end of a handle, i.e., past the top of the stack.
So, at any valid state, the stack contains only viable prefixes.
Question 21 |
The least number of temporary variables required to create a three-address code in static single assignment form for the expression q + r/3 + s – t * 5 + u * v/w is _________.
8 | |
9 | |
10 | |
11 |
The given expression:
q+r/3+s−t∗5+u∗v/w
t1=r/3;
t2=t∗5;
t3=u∗v;
t4=t3/w;
t5=q+t1;
t6=t5+s;
t7=t6−t2;
t8=t7+t4;
So, in total we need 8 temporary variables. If static single assignment were not required, the answer would be 3, because the same temporary variable could be reused several times.
Question 22 |
Let a(n) represent the number of bit strings of length n containing two consecutive 1s. What is the recurrence relation for a(n)?
a(n-2) + a(n-1) + 2^(n-2)
| |
a(n-2) + 2a(n-1) + 2^(n-2) | |
2a(n-2) + a(n-1) + 2^(n-2) | |
2a(n-2) + 2a(n-1) + 2^(n-2) |
For strings of length 1, a(1) = 0.
For strings of length 2, a(2) = 1 (only "11").
Similarly, a(3) = 3 and a(4) = 8.
Only the first option, a(n-2) + a(n-1) + 2^(n-2), satisfies these values.
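The recurrence and these base values can be confirmed by brute force (our own check, not part of the original answer; brute simply counts the strings containing "11"):

from itertools import product

def brute(n):
    return sum('11' in ''.join(bits) for bits in product('01', repeat=n))

print([brute(n) for n in range(1, 6)])        # [0, 1, 3, 8, 19]
for n in range(3, 9):                         # a(n) = a(n-1) + a(n-2) + 2^(n-2)
    assert brute(n) == brute(n - 1) + brute(n - 2) + 2 ** (n - 2)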
Question 23 |
A variable x is said to be live at a statement Si in a program if the following three conditions hold simultaneously:
- i. There exists a statement Sj that uses x
ii. There is a path from Si to Sj in the flow graph corresponding to the program
iii. The path has no intervening assignment to x including at Si and Sj

The variables which are live both at the statement in basic block 2 and at the statement in basic block 3 of the above control flow graph are
p, s, u | |
r, s, u | |
r, u | |
q, v |
A variable is live at some point if it holds a value that may be needed in the future, or equivalently if its value may be read before the next time the variable is written.
→ p and s are (re)assigned the constant 1 with no intervening use of them, so p and s are not live at both 2 and 3.
→ q, too, is assigned before any further use, so q is not live at both 2 and 3.
→ v is live at 3 but not at 2.
→ u is live at 3 and also at 2, if we consider a path of length 0 from block 2.
Finally, {r, u} is the correct answer.
Question 24 |
In the context of abstract-syntax-tree (AST) and control-flow-graph (CFG), which one of the following is TRUE?
In both AST and CFG, let node N2 be the successor of node N1. In the input program, the code corresponding to N2 is present after the code corresponding to N1.
| |
For any input program, neither AST nor CFG will contain a cycle
| |
The maximum number of successors of a node in an AST and a CFG depends on the input program
| |
Each node in AST and CFG corresponds to at most one statement in the input program |
Option (A) is false: in a CFG, a back edge of a loop makes N2 correspond to code that appears before N1.
Option (B) is false as a CFG can contain cycles (loops).
Option (D) is false as a single node can correspond to a block of statements.
Option (C) is true: how many successors a node has depends on the input program.
Question 25 |
Match the following:
(P) Lexical analysis          (1) Graph coloring
(Q) Parsing                   (2) DFA minimization
(R) Register allocation       (3) Post-order traversal
(S) Expression evaluation     (4) Production tree
P-2, Q-3, R-1, S-4 | |
P-2, Q-1, R-4, S-3 | |
P-2, Q-4, R-1, S-3 | |
P-2, Q-3, R-4, S-1 |
P) Lexical analysis is related to DFA minimization.
Q) The parser constructs a production (parse) tree.
R) Register allocation can be done by graph colouring.
S) Expressions can be evaluated by a post-order traversal.
Hence, the answer is (C).
Question 26 |
Consider the intermediate code given below.
-
1. i = 1
2. j = 1
3. t1 = 5 * i
4. t2 = t1 + j
5. t3 = 4 * t2
6. t4 = t3
7. a[t4] = –1
8. j = j + 1
9. if j <= 5 goto(3)
10. i = i + 1
11. if i < 5 goto(2)
The number of nodes and edges in the control-flow-graph constructed for the above code, respectively, are
5 and 7 | |
6 and 7 | |
5 and 5 | |
7 and 8 |

Question 27 |
Among simple LR (SLR), canonical LR, and look-ahead LR (LALR), which of the following pairs identify the method that is very easy to implement and the method that is the most powerful, in that order?
SLR, LALR | |
Canonical LR, LALR | |
SLR, canonical LR | |
LALR, canonical LR |
Question 28 |
Consider the following grammar G.
S → F | H
F → p | c
H → d | c
Where S, F and H are non-terminal symbols, p, d and c are terminal symbols. Which of the following statement(s) is/are correct?
- S1: LL(1) can parse all strings that are generated using grammar G.
S2: LR(1) can parse all strings that are generated using grammar
Only S1 | |
Only S2 | |
Both S1 and S2 | |
Neither S1 nor S2 |
For first production,

So 'c' is common to the FIRST sets of both productions of S (FIRST(F) ∩ FIRST(H) = {c}), hence the grammar is not LL(1).
For LR(1),

A reduce-reduce conflict is present (both F → c and H → c can reduce on the same lookahead), so the grammar is not LR(1).
Hence, Option (D) is the correct answer.
Question 29 |
Which one of the following is FALSE?
A basic block is a sequence of instructions where control enters the sequence at the beginning and exits at the end. | |
Available expression analysis can be used for common subexpression elimination. | |
Live variable analysis can be used for dead code elimination. | |
x=4*5 ⇒ x=20 is an example of common subexpression elimination. |
Common subexpression elimination (CSE) is a compiler optimization that searches for instances of identical expressions (i.e., they all evaluate to the same value), and analyzes whether it is worthwhile replacing them with a single variable holding the computed value.
For ex: Consider the following code:
m=y+z * p
n= y+z *k
The common subexpression is “y+z” we can be calculate it one time and replace in both expression
temp = y+z
m = temp*p
n = temp*k
Question 30 |
A canonical set of items is given below
S --> L . > R
Q --> R .
On input symbol < the set has
a shift-reduce conflict and a reduce-reduce conflict. | |
a shift-reduce conflict but not a reduce-reduce conflict. | |
a reduce-reduce conflict but not a shift-reduce conflict. | |
neither a shift-reduce nor a reduce-reduce conflict. |
On the input symbol '<' no item has '<' after the dot, so no shift is possible, and Q --> R. gives at most one reduction; hence there is neither a shift-reduce nor a reduce-reduce conflict.
Note: had the question asked about '>', there would be a shift-reduce conflict (shift on '>' from S --> L . > R versus reduce by Q --> R, when '>' is in FOLLOW(Q)).
Question 31 |
Let L be a language and L' be its complement. Which one of the following is NOT a viable possibility?
Neither L nor ![]() | |
One of L and ![]() | |
Both L and ![]() | |
Both L and ![]() |
Question 32 |
Which of the regular expressions given below represent the following DFA?

-
I) 0*1(1+00*1)*
II) 0*1*1+11*0*1
III) (0+1)*1
I and II only | |
I and III only | |
II and III only | |
I, II, and III |
So the regular expression corresponding to DFA is (0+1)*1.
Now, by using the state elimination method, the DFA also yields another equivalent regular expression: 0*1(1+00*1)*.
But the regular expression 0*1*1 + 11*0*1 is not equivalent to the DFA, as the DFA also accepts the string "11011", which cannot be generated by this regular expression.
Question 33 |
Consider the grammar defined by the following production rules, with two operators ∗ and +
S --> T * P
T --> U | T * U
P --> Q + P | Q
Q --> Id
U --> Id
Which one of the following is TRUE?
+ is left associative, while ∗ is right associative | |
+ is right associative, while ∗ is left associative | |
Both + and ∗ are right associative | |
Both + and ∗ are left associative |
T ⟶ T * U is left recursive, so * is left associative; P ⟶ Q + P is right recursive, so + is right associative.
Question 34 |
Which one of the following is NOT performed during compilation?
Dynamic memory allocation | |
Type checking | |
Symbol table management | |
Inline expansion |
Question 35 |
For a C program accessing X[i][j][k], the following intermediate code is generated by a compiler. Assume that the size of an integer is 32 bits and the size of a character is 8 bits.
t0 = i ∗ 1024
t1 = j ∗ 32
t2 = k ∗ 4
t3 = t1 + t0
t4 = t3 + t2
t5 = X[t4]
Which one of the following statements about the source code for the C program is CORRECT?
X is declared as “int X[32][32][8]”. | |
X is declared as “int X[4][1024][32]”. | |
X is declared as “char X[4][32][8]”. | |
X is declared as “char X[32][16][2]”. |
Consider, as an example, a 3-D array declared as int Arr[1][2][3]. It contains one 2-D array (since the first dimension is 1; if it were 2, there would be two 2-D arrays), and that 2-D array has 2 rows and 3 columns.
So, in a 3-D array, the first subscript (i) selects a 2-D array, the second (j) selects a row and the third (k) selects a column.
Number of 2-D arrays (M) = 1
Number of rows (R) = 2
Number of columns (C) = 3

As arrays are stored in row major order, so this 2 dimension array will be stored as:

Assume base address of Arr is 1000. The address of required position is calculated as:
Address of Arr[i][j][k] = Arr + [i*(R*C) + (j*C) + k]*4 // the factor 4 is because an int takes 4 bytes
Arr[0][1][1] = 1000+[0*(2*3)+(1*3)+1]*4
= 1000+[ 0+3+1 ]*4
= 1000+4*4
= 1016
As the row-major layout above shows, the address of Arr[0][1][1] is indeed 1016.
Now coming to the question:
X [ i ][ j ][ k ] is calculated by 3 address code X[t4]
X [ i ][ j ][ k ] = X [ t4 ] // by substituting in reverse
= X [ t3 + t2]
= X [ t1 + t0 + k*4]
= X [ t0 + t1 + k*4] // t0 and t1 swapped as swapping doesn't have any impact
= X [ i*1024 + j*32 + k*4]
= X [ i*256 + j*8 +k] *4 // taking 4 (common) outside
= X [ i*(32*8)+ (j*8) +k] *4
By comparing the above line with ....... Arr[i][j][k] = Arr+ [i*(R*C)+(j*C)+k]*4
We get R=32, C=8
It means the declared array has R = 32 rows and C = 8 columns, and since the common factor is 4 bytes, the element type is int.
The number of 2-D arrays (the first dimension) cannot be determined from the code.
Option A, int X[32][32][8], is the only option with 32 rows, 8 columns and element type int, so it is the correct option.
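A quick numeric check of this comparison (our own sketch, assuming int X[32][32][8] with 4-byte ints in row-major order):

def offset_decl(i, j, k, R=32, C=8, size=4):
    return (i * (R * C) + j * C + k) * size   # row-major byte offset for X[i][j][k]

def offset_code(i, j, k):
    return i * 1024 + j * 32 + k * 4          # offset computed by the given 3-address code

assert all(offset_decl(i, j, k) == offset_code(i, j, k)
           for i in range(4) for j in range(32) for k in range(8))
print("offsets agree")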
Question 36 |
Consider the expression tree shown. Each leaf represents a numerical value, which can either be 0 or 1. Over all possible choices of the values at the leaves, the maximum possible value of the expression represented by the tree is _______.

6 | |
7 | |
8 | |
9 |


Question 37 |
One of the purposes of using intermediate code in compilers is to
make parsing and semantic analysis simpler. | |
improve error recovery and error reporting. | |
increase the chances of reusing the machine-independent code optimizer in other compilers. | |
improve the register allocation. |
Question 38 |
Which of the following statements are CORRECT?
-
1) Static allocation of all data areas by a compiler makes it impossible to implement recursion.
2) Automatic garbage collection is essential to implement recursion.
3) Dynamic allocation of activation records is essential to implement recursion.
4) Both heap and stack are essential to implement recursion.
1 and 2 only | |
2 and 3 only | |
3 and 4 only | |
1 and 3 only |
Question 39 |
A system uses 3 page frames for storing process pages in main memory. It uses the Least Recently Used (LRU) page replacement policy. Assume that all the page frames are initially empty. What is the total number of page faults that will occur while processing the page reference string given below?
4, 7, 6, 1, 7, 6, 1, 2, 7, 2
6 | |
7 | |
8 | |
9 |
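A short simulation (our own sketch, not from the original solution; lru_faults keeps pages ordered from least to most recently used) shows that the answer is 6 page faults:

def lru_faults(refs, frames=3):
    mem, faults = [], 0
    for p in refs:
        if p in mem:
            mem.remove(p)          # hit: refresh recency
        else:
            faults += 1
            if len(mem) == frames:
                mem.pop(0)         # evict the least recently used page
        mem.append(p)
    return faults

print(lru_faults([4, 7, 6, 1, 7, 6, 1, 2, 7, 2]))   # prints 6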
Question 40 |
Consider the following two sets of LR(1) items of an LR(1) grammar.
Set 1:               Set 2:
X -> c.X, c/d        X -> c.X, $
X -> .cX, c/d        X -> .cX, $
X -> .d, c/d         X -> .d, $
Which of the following statements related to merging of the two sets in the corresponding LALR parser is/are FALSE?
- Cannot be merged since look aheads are different.
- Can be merged but will result in S-R conflict.
- Can be merged but will result in R-R conflict.
- Cannot be merged since goto on c will lead to two different sets.
1 only | |
2 only | |
1 and 4 only | |
1, 2, 3 and 4 |
In the given LR(1) item sets there is no reduce item (no item with the dot at the right end), so after merging there can be neither a shift-reduce nor a reduce-reduce conflict; statements 2 and 3 are false.
Statement 1 is false because LALR merging is based on the cores (the LR(0) items) being identical, which they are here; differing lookaheads are exactly what merging combines.
Statement 4 is false: the GOTO sets on c of the two states also have identical cores, so they too are merged into a single set.
Hence all four statements are false.
Question 41 |
Consider the program given below, in a block-structured pseudo-language with lexical scoping and nesting of procedures permitted.
Program main;
  Var ...
  Procedure A1;
    Var ...
    Call A2;
  End A1
  Procedure A2;
    Var ...
    Procedure A21;
      Var ...
      Call A1;
    End A21
    Call A21;
  End A2
  Call A1;
End main.
Consider the calling chain : Main->A1->A2->A21->A1 The correct set of activation records along with their access links is given by:
![]() | |
![]() | |
![]() | |
![]() |

Main → A1 → A2 → A21 → A1
Activation records are created at procedure call (entry) time, and each access link points to the activation record of the statically (lexically) enclosing procedure.
A1 and A2 are defined directly inside Main( ), so the access links of A1 and A2 point to Main's activation record.
A21 is defined inside A2, hence its access link points to A2's activation record.
Question 42 |
For the grammar below, a partial LL(1) parsing table is also presented along with the grammar. Entries that need to be filled are indicated as E1, E2, and E3. ε is the empty string, $ indicates end of input, and, | separates alternate right hand sides of productions.
S → aAbB | bAaB | ε
A → S
B → S

The FIRST and FOLLOW sets for the non-terminals A and B are
FIRST(A) = {a,b,ε} = FIRST(B) FOLLOW(A) = {a,b} FOLLOW(B) = {a,b,$} | |
FIRST(A) = {a,b,$} FIRST(B) = {a,b,ε} FOLLOW(A) = {a,b} FOLLOW(B) = {$} | |
FIRST(A) = {a,b,ε} = FIRST(B) FOLLOW(A) = {a,b} FOLLOW(B) = ∅ | |
FIRST(A) = {a,b} = FIRST(B) FOLLOW(A) = {a,b} FOLLOW(B) = {a,b} |
FOLLOW(P): is the set of terminals that can appear immediately to the right of P in some sentential form.
FIRST(A) = FIRST (S)
FIRST (S) = FIRST (aAbB) and FIRST (bAaB) and FIRST (ϵ)
FIRST(S) = {a, b, ϵ}
FIRST (B) = FIRST (S) = {a, b, ϵ} = FIRST (A)
FOLLOW(A) = {b} // because of production S→a A b B
FOLLOW(A) = {a} // because of production S→ b A a B
So FOLLOW (A) = {a, b}
FOLLOW(B) = FOLLOW(S) // because B is rightmost in S → aAbB and S → bAaB
FOLLOW(S) ⊇ FOLLOW(A) ∪ {$} // because of the productions A → S and B → S, and S is the start symbol
So FOLLOW(S) = {$, a, b} = FOLLOW(B)
Question 43 |
For the grammar below, a partial LL(1) parsing table is also presented along with the grammar. Entries that need to be filled are indicated as E1, E2, and E3. ε is the empty string, $ indicates end of input, and, | separates alternate right hand sides of productions.
S → aAbB | bAaB | ε
A → S
B → S

The appropriate entries for E1, E2, and E3 are
E1: S → aAbB,A → S E2: S → bAaB,B→S E3: B → S | |
E1: S → aAbB,S→ ε E2: S → bAaB,S → ε E3: S → ε | |
E1: S → aAbB,S → ε E2: S → bAaB,S→ε E3: B → S | |
E1: A → S,S →ε E2: B → S,S → ε E3: B →S |
S→ aAbB | bAaB | ε
The production S→ aAbB will go under column FIRST (aAbB) = a, so S→ aAbB will be in E1.
S→ bAaB will go under column FIRST(bAaB) = b, so S→ bAaB will be in E2.
S→ ε will go under FOLLOW (S) = FOLLOW(B) = {a, b, $ } , So S→ ε will go in E1, E2 and under column of $.
So E1 will have: S→ aAbB and S→ ε.
E2 will have S→ bAaB and S→ ε.
Now, B→ S will go under FIRST (S) = {a, b, ε}
Since FIRST(S) = ε so B→ S will go under FOLLOW (B) = {a, b, $}
So E3 will contain B→ S.
Question 44 |
The lexical analysis for a modern computer language such as Java needs the power of which one of the following machine models in a necessary and sufficient sense?
Finite state automata | |
Deterministic pushdown automata | |
Non-Deterministic pushdown automata | |
Turing machine |
Question 45 |
In a compiler, keywords of a language are recognized during
parsing of the program | |
the code generation | |
the lexical analysis of the program | |
dataflow analysis |
Question 46 |
Consider two binary operators ‘↑’ and ‘↓’ with the precedence of operator ↓ being lower than that of the operator ↑. Operator ↑ is right associative while operator ↓, is left associative. Which one of the following represents the parse tree for expression (7↓3↑4↑3↓2)?
![]() | |
![]() | |
![]() | |
![]() |
⇒ 7 ↓ (3 ↑ (4 ↑ 3)) ↓ 2, since ↑ has higher precedence and is right associative
⇒ (7 ↓ (3 ↑ (4 ↑ 3))) ↓ 2, since ↓ is left associative
Question 47 |
Consider evaluating the following expression tree on a machine with load-store architecture in which memory can be accessed only through load and store instructions. The variables a, b, c, d and e are initially stored in memory. The binary operators used in this expression tree can be evaluated by the machine only when the operands are in registers. The instructions produce results only in a register. If no intermediate results can be stored in memory, what is the minimum number of registers needed to evaluate this expression?

2 | |
9 | |
5 | |
3 |
Load R1, a ; R1 ← M[a]
Load R2, b ; R2 ← M[b]
Sub R1, R2 ; R1 ← R1 – R2
Load R2, c ; R2 ← M[c]
Load R3, d ; R3 ← M[d]
Add R2, R3 ; R2 ← R2 + R3
Load R3, e ; R3 ← M[e]
Sub R3, R2 ; R3 ← R3 – R2
Add R1, R3 ; R1 ← R1 + R3
Total 3 Registers are required minimum.
Question 48 |
Which data structure in a compiler is used for managing information about variables and their attributes?
Abstract syntax tree | |
Symbol table | |
Semantic stack | |
Parse Table |
Question 49 |
Which languages necessarily need heap allocation in the runtime environment?
Those that support recursion | |
Those that use dynamic scoping | |
Those that allow dynamic data structures | |
Those that use global variables |
Question 50 |
The program below uses six temporary variables a, b, c, d, e, f.
a = 1
b = 10
c = 20
d = a+b
e = c+d
f = c+e
b = c+e
e = b+f
d = 5+e
return d+f
Assuming that all operations take their operands from registers, what is the minimum number of registers needed to execute this program without spilling?
2 | |
3 | |
4 | |
6 |
Assume 'a' is mapped to r1, 'b' to r2 and 'c' to r3.
d = a + b; after this line, notice that 'a' never appears on the right-hand side again, so we can map 'd' to r1.
e = c + d; after this line 'd' never appears on the right-hand side again, so we can map 'e' to r1.
at this time mapping is
r1 --- e
r2 --- b
r3 --- c
We have 3 registers for e, b and c.
f = c + e
b = c + e
These two compute the same value, so after these two lines 'b' and 'f' are equal; we can skip computing 'f' and need not give it any new register. Wherever 'f' appears we can use 'b' instead, because neither 'f' nor 'b' changes after these two lines: both hold the value 'c+e' till the end of the program.
At the second-last line, d = 5 + e,
'd' is introduced; we can map it to r1 or r3, because after this line neither 'e' nor 'c' is required. The value of 'b' is still required because we return 'd + f', and 'f' is essentially equal to 'b'.
finally code becomes
r1 = 1
r2 = 10
r3 = 20
r1 = r1 + r2
r1 = r3 + r1
r2 = r3 + r1
r2 = r3 + r1
r1 = r2 + r2
r3 = 5 + r1
return r3 + r2
Therefore minimum 3 registers needed.
Question 51 |
The grammar S → aSa|bS|c is
LL(1) but not LR(1) | |
LR(1) but not LL(1) | |
Both LL(1) and LR(1) | |
Neither LL(1) nor LR(1) |

As there is no conflict in the LL(1) parsing table, the given grammar is LL(1); and since every LL(1) grammar is also LR(1), the grammar is both LL(1) and LR(1).
Question 52 |
Match all items in Group 1 with correct options from those given in Group 2
Group 1                    Group 2
P. Regular expression      1. Syntax analysis
Q. Pushdown automata       2. Code generation
R. Dataflow analysis       3. Lexical analysis
S. Register allocation     4. Code optimization
P-4, Q-1, R-2, S-3
| |
P-3, Q-1, R-4, S-2 | |
P-3, Q-4, R-1, S-2 | |
P-2, Q-1, R-4, S-3
|
Question 53 |
Which of the following statements are TRUE?
-
I.There exist parsing algorithms for some programming languages whose complexities are less than θ(n3).
II.A programming language which allows recursion can be implemented with static storage.
III.No L-attributed definition can be evaluated in the framework of bottom-up parsing.
IV.Code improving transformations can be performed at both source language and intermediate code level.
I and II
| |
I and IV | |
III and IV | |
I, III and IV |
Statement I is true: LL and LR parsers used for programming languages parse in O(n) time, i.e., with only one scan of the input, which is less than θ(n3).
Statement II is false: recursion needs dynamic (stack) allocation of activation records, not static storage.
Statement III is false: L-attributed definitions based on LL(1) grammars can be evaluated in the framework of bottom-up parsing.
Statement IV is true: code-improving transformations can be performed at both the source-language and the intermediate-code level; for example, constant folding can be applied to the source, while the intermediate code admits optimizations such as loop unrolling and loop-invariant code motion.
Question 54 |
Which of the following describes a handle (as applicable to LR-parsing) appropriately?
It is the position in a sentential form where the next shift or reduce operation will occur.
| |
It is non-terminal whose production will be used for reduction in the next step. | |
It is a production that may be used for reduction in a future step along with a position in the sentential form where the next shift or reduce operation will occur.
| |
It is the production p that will be used for reduction in the next step along with a position in the sentential form where the right hand side of the production may be found.
|
Question 55 |
Some code optimizations are carried out on the intermediate code because
They enhance the portability of the compiler to other target processors | |
Program analysis is more accurate on intermediate code than on machine code
| |
The information from dataflow analysis cannot otherwise be used for optimization | |
The information from the front end cannot otherwise be used for optimization |
Question 56 |
Which of the following are true?
-
I. A programming language which does not permit global variables of any kind and has no nesting of procedures/functions, but permits recursion can be implemented with static storage allocation
II. Multi-level access link (or display) arrangement is needed to arrange activation records only if the programming language being implemented has nesting of procedures/functions
III. Recursion in programming languages cannot be implemented with dynamic storage allocation
IV. Nesting procedures/functions and recursion require a dynamic heap allocation scheme and cannot be implemented with a stack-based allocation scheme for activation records
V.Programming languages which permit a function to return a function as its result cannot be implemented with a stack-based storage allocation scheme for activation records
II and V only | |
I, III and IV only | |
I, II and V only | |
II, III and V only |
V. PL’s which permits a function to return a function as its result cannot be implemented with a stack-based storage allocation scheme for activation records.
II & V are True.
Question 57 |
An LALR(1) parser for a grammar G can have shift-reduce (S-R) conflicts if and only if
the SLR(1) parser for G has S-R conflicts | |
the LR(1) parser for G has S-R conflicts
| |
the LR(0) parser for G has S-R conflicts | |
the LALR(1) parser for G has reduce-reduce conflicts
|
Consider a state in LR(1) parser:
S-> x.yA, a
A-> x. , y
This has both shift and reduce conflict on symbol “y”.
Since the LR(1) parser already has this S-R conflict, the resulting LALR(1) parser (after merging) will also have it.
Conversely, if the LR(1) parser has no S-R conflict, then the LALR(1) parser cannot have one either: only states with identical cores (the same items, differing only in lookaheads) are merged, and merging such states can introduce at most reduce-reduce conflicts, never shift-reduce conflicts.
Hence An LALR(1) parser for a grammar G can have shift-reduce (S-R) conflicts if and only if the LR(1) parser for G has S-R conflicts.
Question 58 |
Which one of the following is a top-down parser?
Recursive descent parser. | |
Operator precedence parser. | |
An LR(k) parser.
| |
An LALR(k) parser. |
Question 59 |
Consider the grammar with non-terminals N = {S,C,S1},terminals T = {a,b,i,t,e}, with S as the start symbol, and the following set of rules:
S --> iCtSS1 | a
S1 --> eS | ϵ
C --> b
The grammar is NOT LL(1) because:
it is left recursive | |
it is right recursive | |
it is ambiguous | |
it is not context-free |
This grammar is ambiguous: the string "ibtibtaea" has two parse trees (the classic dangling-else ambiguity), so it cannot be LL(1).

Question 60 |
Consider the following two statements:
P: Every regular grammar is LL(1) Q: Every regular set has a LR(1) grammar
Which of the following is TRUE?
Both P and Q are true | |
P is true and Q is false | |
P is false and Q is true | |
Both P and Q are false |
For ex: Consider a regular grammar
S -> aS | a | ϵ
this grammar is ambiguous, as two parse trees are possible for the string "a" (S ⇒ a, and S ⇒ aS ⇒ a).

Hence it is regular but not LL(1).
But every regular set is accepted by a DFA, so every regular set has at least one unambiguous grammar, which is moreover LR(1).
Hence, every regular set has an LR(1) grammar.
Question 61 |
Consider the CFG with {S,A,B} as the non-terminal alphabet, {a,b} as the terminal alphabet, S as the start symbol and the following set of production rules:
S → aB     S → bA
B → b      A → a
B → bS     A → aS
B → aBB    A → bAA
Which of the following strings is generated by the grammar?
aaaabb | |
aabbbb | |
aabbab | |
abbbba |
S -> aB [Using S --> aB]
-> aaBB [Using B --> aBB]
-> aabB [Using B --> b]
-> aabbS [Using B --> bS]
-> aabbaB [Using S --> aB]
-> aabbab [Using B --> b]
Question 62 |
Consider the CFG with {S,A,B} as the non-terminal alphabet, {a,b} as the terminal alphabet, S as the start symbol and the following set of production rules:
S → aB     S → bA
B → b      A → a
B → bS     A → aS
B → aBB    A → bAA
For the correct answer strings to Q.78, how many derivation trees are there?
1 | |
2 | |
3 | |
4 |

Question 63 |
Consider the following grammar.
S → S * E
S → E
E → F + E
E → F
F → id
Consider the following LR(0) items corresponding to the grammar above.
(i) S → S * .E (ii) E → F. + E (iii) E → F + .E
Given the items above, which two of them will appear in the same set in the canonical sets-of-items for the grammar?
(i) and (ii) | |
(ii) and (iii) | |
(i) and (iii) | |
None of the above |

Question 64 |
Consider these two functions and two statements S1 and S2 about them
int work1(int *a, int i, int j)
{
    int x = a[i+2];
    a[j] = x+1;
    return a[i+2] - 3;
}

int work2(int *a, int i, int j)
{
    int t1 = i+2;
    int t2 = a[t1];
    a[j] = t2+1;
    return t2 - 3;
}
- S1: The transformation form work1 to work2 is valid, i.e., for any program state and input arguments, work2 will compute the same output and have the same effect on program state as work1
S2: All the transformations applied to work1 to get work2 will always improve the performance (i.e reduce CPU time) of work2 compared to work1
S1 is false and S2 is false
| |
S1 is false and S2 is true | |
S1 is true and S2 is false | |
S1 is true and S2 is true |
S2: Not every transformation is guaranteed to reduce CPU time, so S2 is false.
S1: Let us take the array a = {1,2,3,4,5}, i = 0 and j = i+2 = 2.
work1 computes x = a[2] = 3, sets a[2] = 4, and returns a[2] - 3 = 1; work2 computes t2 = a[2] = 3, sets a[2] = 4, and returns t2 - 3 = 0.
The results differ (1 vs 0), because work1 re-reads a[i+2] after the store to a[j] may have changed it; so S1 is also false.
Question 65 |
Consider the following grammar:
S → FR R → S | ε F → id
In the predictive parser table, M, of the grammar the entries M[S,id] and M[R,$] respectively.
{S → FR} and {R → ε}
| |
{S → FR} and { } | |
{S → FR} and {R → *S} | |
{F → id} and {R → ε} |

The representation M[X,Y] means X represents Variable (rows) and Y represents terminals (columns).
The productions are filled in parsing table by the below mentioned rules:
For every production P → α:
Rule 1: for each terminal t in FIRST(α), add P → α to M[P, t].
Rule 2: if ϵ ∈ FIRST(α), add P → α to M[P, b] for every terminal b (including $) in FOLLOW(P).
By the above rules, we can see that production S → FR will go M[S, a] where “a” is FIRST [FR] which is equal to FIRST[F] = id, So S → FR will go in M[S,id].
Since FIRST(ϵ) = {ϵ} for the production R → ϵ, that production goes in M[R, b] for every b in FOLLOW(R); FOLLOW(R) = {$}, so R → ϵ goes in M[R, $].
Question 66 |
Consider the following translation scheme.
S → ER
R → *E {print("*");} R | ε
E → F + E {print("+");} | F
F → (S) | id {print(id.value);}
Here id is a token that represents an integer and id.value represents the corresponding integer value. For an input '2 * 3 + 4', this translation scheme prints
2 * 3 + 4 | |
2 * +3 4
| |
2 3 * 4 + | |
2 3 4+* |

Now perform post order evaluation, you will get output as,
2 3 4 + *
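The same output can be obtained from a small recursive-descent translator that executes the print actions at the positions given in the scheme (our own sketch; the helper names peek, S, R, E, F are ours and the input is hard-coded as "2*3+4"):

tokens, pos = list('2*3+4'), [0]

def peek():
    return tokens[pos[0]] if pos[0] < len(tokens) else None

def S():
    E(); R()

def R():                       # R -> *E {print "*"} R | epsilon
    if peek() == '*':
        pos[0] += 1
        E()
        print('*', end=' ')
        R()

def E():                       # E -> F + E {print "+"} | F
    F()
    if peek() == '+':
        pos[0] += 1
        E()
        print('+', end=' ')

def F():                       # F -> (S) | id {print id.value}
    if peek() == '(':
        pos[0] += 1; S(); pos[0] += 1
    else:
        print(peek(), end=' ')
        pos[0] += 1

S()        # prints: 2 3 4 + *
print()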
Question 67 |
Consider the following C code segment.
for (i = 0; i < n; i++) {
    for (j = 0; j < n; j++) {
        if (i%2) {
            x += (4*j + 5*i);
            y += (7 + 4*j);
        }
    }
}
Which one of the following is false?
The code contains loop invariant computation | |
There is scope of common sub-expression elimination in this code | |
There is scope of strength reduction in this code | |
There is scope of dead code elimination in this code
|
→ 5*i and i%2 do not change inside the inner loop, so they can be moved out of it; the code contains loop-invariant computation, so option A is true.
→ 4*j is computed in both statements of the inner loop, so there is scope for common subexpression elimination; option B is true.
→ The multiplications 4*j and 5*i can be replaced by cheaper operations across iterations, so there is scope for strength reduction; option C is true.
→ But there is no dead code to eliminate, so option D is false.
Question 68 |
The grammar A → AA | (A) | ε is not suitable for predictive-parsing because the grammar is:
ambiguous | |
left-recursive | |
right-recursive | |
an operator-grammar |
The production A → AA is left recursive, which makes the grammar unsuitable for predictive parsing.
Question 69 |
Consider the grammar:
E → E + n | E × n | n
For a sentence n + n × n, the handles in the right-sentential form of the reduction are:
n, E + n and E + n × n | |
n, E + n and E + E × n
| |
n, n + n and n + n × n | |
n, E + n and E × n |
E ⇒ E × n       {applying E → E × n}
  ⇒ E + n × n   {applying E → E + n}
  ⇒ n + n × n   {applying E → n}
Reversing this rightmost derivation, the handles used in the reductions are n, E + n and E × n.
Question 70 |
Consider the grammar:
S → (S) | a
Let the number of states in SLR(1), LR(1) and LALR(1) parsers for the grammar be n1, n2 and n3 respectively. The following relationship holds good:
n1 < n2 < n3 | |
n1 = n3 < n2 | |
n1 = n2 = n3 | |
n1 ≥ n3 ≥ n2 |
→ SLR(1) and LALR(1) parsers are both built from the LR(0) item sets, so they always have the same number of states: n1 = n3.
→ The LR(1) parser is built from LR(1) items, which may split states, so it never has fewer states; for this grammar it has strictly more.
Hence n1 = n3 < n2.
Question 71 |
Consider line number 3 of the following C-program.
int main ( ) {                     /* Line 1 */
    int I, N;                      /* Line 2 */
    fro (I = 0, I < N, I++);       /* Line 3 */
}
Identify the compiler's response about this line while creating the object-module:
No compilation error
| |
Only a lexical error | |
Only syntactic errors | |
Both lexical and syntactic errors |
Question 72 |
Consider the following expression grammar. The semantic rules for expression calculation are stated next to each grammar production.
E → number    E.val = number.val
  | E '+' E   E(1).val = E(2).val + E(3).val
  | E '×' E   E(1).val = E(2).val × E(3).val
The above grammar and the semantic rules are fed to a yacc tool (which is an LALR(1) parser generator) for parsing and evaluating arithmetic expressions. Which one of the following is true about the action of yacc for the given grammar?
It detects recursion and eliminates recursion
| |
It detects reduce-reduce conflict, and resolves
| |
It detects shift-reduce conflict, and resolves the conflict in favor of a shift over a reduce action | |
It detects shift-reduce conflict, and resolves the conflict in favor of a reduce over a shift action
|
Question 73 |
Consider the following expression grammar. The semantic rules for expression calculation are stated next to each grammar production.
E → number    E.val = number.val
  | E '+' E   E(1).val = E(2).val + E(3).val
  | E '×' E   E(1).val = E(2).val × E(3).val
Assume the conflicts in Part (a) of this question are resolved and an LALR(1) parser is generated for parsing arithmetic expressions as per the given grammar. Consider an expression 3 × 2 + 1. What precedence and associativity properties does the generated parser realize?
Equal precedence and left associativity; expression is evaluated to 7
| |
Equal precedence and right associativity; expression is evaluated to 9 | |
Precedence of '×' is higher than that of '+', and both operators are left associative; expression is evaluated to 7 | |
Precedence of '+' is higher than that of '×', and both operators are left associative; expression is evaluated to 9 |

Since the shift-reduce conflicts are resolved in favour of shift, both operators end up with equal precedence and right associativity, so 3 × 2 + 1 is evaluated as 3 × (2 + 1) = 9.
Question 74 |
Which of the following grammar rules violate the requirements of an operator grammar? P,Q,R are nonterminals, and r,s,t are terminals.
- (i) P → Q R
(ii) P → Q s R
(iii) P → ε
(iv) P → Q t R r
(i) only | |
(i) and (iii) only | |
(ii) and (iii) only | |
(iii) and (iv) only |
(i) The right-hand side contains two adjacent non-terminals (Q R), which an operator grammar does not allow.
(iii) An operator grammar cannot have ε-productions.
Hence (i) and (iii) violate the requirements.
Question 75 |
Consider a program P that consists of two source modules M1 and M2 contained in two different files. If M1 contains a reference to a function defined in M2, the reference will be resolved at
Edit time | |
Compile time | |
Link time | |
Load time |
Question 76 |
Consider the grammar rule E → E1 - E2 for arithmetic expressions. The code generated is targeted to a CPU having a single user register. The subtraction operation requires the first operand to be in the register. If E1 and E2 do not have any common sub expression, in order to get the shortest possible code
E1 should be evaluated first
| |
E2 should be evaluated first | |
Evaluation of E1 and E2 should necessarily be interleaved | |
Order to evaluation of E1 and E2 is of no consequence
|
Question 77 |
Consider the grammar with the following translation rules and E as the start symbol.
E → E1 # T  {E.value = E1.value * T.value}
  | T       {E.value = T.value}
T → T1 & F  {T.value = T1.value + F.value}
  | F       {T.value = F.value}
F → num     {F.value = num.value}
Compute E.value for the root of the parse tree for the expression: 2 # 3 & 5 # 6 & 4.
200 | |
180 | |
160 | |
40 |
2 # 3 & 5 # 6 & 4
→ Here # means multiplication (*)
& means addition (+)
→ & has higher precedence because it is generated lower in the grammar (farther from the start symbol); both # and & are left associative, since their productions are left recursive.
2 # 3 & 5 # 6 & 4
⇒ (2 * (3+5)) * (6+4)
⇒ (2 * 8) * (10)
⇒ 16 * 10 = 160
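Because & is just addition and # is multiplication, the value can be checked with a tiny sketch (our own, encoding the precedence derived above: split on # first, then on & inside each part):

def evaluate(expr):
    parts = expr.replace(' ', '').split('#')            # '#' has lower precedence
    terms = [sum(int(x) for x in p.split('&')) for p in parts]
    result = 1
    for t in terms:
        result *= t
    return result

print(evaluate('2 # 3 & 5 # 6 & 4'))   # prints 160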
Question 78 |
Consider the following grammar G:
S → bS | aA | b
A → bA | aB
B → bB | aS | a
Let Na(w) and Nb(w) denote the number of a's and b's in a string w respectively. The language L(G) ⊆ {a, b}+ generated by G is
{w|Na(w) > 3Nb(w)}
| |
{w|Nb(w) > 3Na(w)} | |
{w|Na(w) = 3k, k ∈ {0, 1, 2, ...}}
| |
{w|Nb(w) = 3k, k ∈ {0, 1, 2, ...}} |
Consider the derivation
S ⇒ bS ⇒ baA ⇒ babA ⇒ babaB ⇒ babaa
For w = babaa, Na(w) = 3 and Nb(w) = 2.
Option A: Na(w) > 3Nb(w) gives 3 > 6, which is false. (✖️)
Option B: Nb(w) > 3Na(w) gives 2 > 9, which is false. (✖️)
Option D: Nb(w) = 3k gives 2 = 3k, which has no integer solution k. (✖️)
Another derivation:
S ⇒ aA ⇒ abA ⇒ abaB ⇒ abaa
Again Na(w) = 3, consistent with Na(w) = 3k.
In fact, S, B and A generate strings whose number of a's is ≡ 0, 1 and 2 (mod 3) respectively, so every string derived from S has Na(w) a multiple of 3.
→ Answer: Option C (✔️)
Question 79 |
Which of the following suffices to convert an arbitrary CFG to an LL(1) grammar?
Removing left recursion alone | |
Factoring the grammar alone | |
Removing left recursion and factoring the grammar | |
None of the above |
To convert an arbitrary CFG towards an LL(1) grammar we need both to remove left recursion and to left-factor the grammar; doing only one of them is not enough.
Question 80 |
Assume that the SLR parser for a grammar G has n1 states and the LALR parser for G has n2 states. The relationship between n1 and n2 is:
n1 is necessarily less than n2 | |
n1 is necessarily equal to n2
| |
n1 is necessarily greater than n2
| |
None of the above |
Question 81 |
In a bottom-up evaluation of a syntax directed definition, inherited attributes can
always be evaluated | |
be evaluated only if the definition is L-attributed | |
be evaluated only if the definition has synthesized attributes | |
never be evaluated |
L-attributed definitions are a class of syntax-directed definitions whose attributes can always be evaluated in a single depth-first, left-to-right traversal of the parse tree; inherited attributes can be evaluated during bottom-up parsing only for such definitions.
Question 82 |
Which of the following statements is FALSE?
In statically typed languages, each variable in a program has a fixed type | |
In un-typed languages, values do not have any types | |
In dynamically typed languages, variables have no types | |
In all statically typed languages, each variable in a program is associated with values of only a single type during the execution of the program
|
Question 83 |
Consider the grammar shown below
S → i E t S S' | a
S' → e S | ε
E → b
In the predictive parse table. M, of this grammar, the entries M[S', e] and M[S', $] respectively are
{S'→e S} and {S'→ε} | |
{S'→e S} and { } | |
{S'→ε} and {S'→ε} | |
{S'→e S, S'→ε} and {S'→ε} |
First(S') = {e,ε}
First(E) = {b}
Follow(S') = {e,$}
Only when FIRST contains ε do we need to consider FOLLOW for the parse-table entry.

Hence, option (D) is correct.
Question 84 |
Consider the grammar shown below.
S → C C
C → c C | d
The grammar is
LL(1) | |
SLR(1) but not LL(1)
| |
LALR(1) but not SLR(1) | |
LR(1) but not LALR(1)
|

The two productions of C have disjoint FIRST sets ({c} and {d}) and there are no ε-productions, so no parse-table entry is multiply defined. Hence, the grammar is LL(1).
Question 85 |
Consider the translation scheme shown below.
S → T R
R → + T {print('+');} R | ε
T → num {print(num.val);}
Here num is a token that represents an integer and num.val represents the corresponding integer value. For an input string '9 + 5 + 2', this translation scheme will print
9 + 5 + 2 | |
9 5 + 2 + | |
9 5 2 + + | |
+ + 9 5 2 |

Now traverse the tree and whatever comes first to print, just print it.
Answer will be 9 5 + 2 +.
Question 86 |
Consider the syntax directed definition shown below.
S → id := E   {gen(id.place = E.place;);}
E → E1 + E2   {t = newtemp( ); gen(t = E1.place + E2.place;); E.place = t}
E → id        {E.place = id.place;}
Here, gen is a function that generates the output code, and newtemp is a function that returns the name of a new temporary variable on every call. Assume that ti's are the temporary variable names generated by newtemp. For the statement 'X: = Y + Z', the 3-address code sequence generated by this definition is
X = Y + Z | |
t1 = Y + Z; X = t1 | |
t1= Y; t2 = t1 + Z; X = t2 | |
t1 = Y; t2 = Z; t3 = t1 + t2; X = t3
|

Question 87 |
Which of the following is NOT an advantage of using shared, dynamically linked libraries as opposed to using statically linked libraries?
Smaller sizes of executable files | |
Lesser overall page fault rate in the system | |
Faster program startup
| |
Existing programs need not be re-linked to take advantage of newer versions of libraries |
Question 88 |
(a) Construct all the parse trees corresponding to i + j * k for the grammar
E → E+E E → E*E E → id
- (b) In this grammar, what is the precedence of the two operators * and +?
(c) If only one parse tree is desired for any string in the same language, what changes are to be made so that the resulting LALR(1) grammar is non-ambiguous?
Theory Explanation is given below. |
Question 89 |
The process of assigning load addresses to the various parts of the program and adjusting the code and date in the program to reflect the assigned addresses is called
Assembly | |
Parsing | |
Relocation | |
Symbol resolution |
Question 90 |
Which of the following statements is false?
An unambiguous grammar has same leftmost and rightmost derivation | |
An LL(1) parser is a top-down parser | |
LALR is more powerful than SLR | |
An ambiguous grammar can never be LR(k) for any k |
Option C: LALR is more powerful than SLR.
Option D: An ambiguous grammar can never be LR(k) for any k, because LR(k) parsers are not designed to handle ambiguous grammars; no matter how large the constant k is, the parsing-table conflicts caused by the ambiguity remain.
Question 91 |
Consider the following grammar with terminal alphabet ∑{a,(,),+,*} and start symbol E. The production rules of the grammar are:
E → aA E → (E) A → +E A → *E A → ε
(a) Compute the FIRST and FOLLOW sets for E and A.
(b) Complete the LL(1) parse table for the grammar.
Theory Explanation is given below. |
Question 92 |
The syntax of the repeat-until statement is given by the following grammar
S → repeat S1 until E
Where E stands for expressions, S and S1 stand for statement. The non-terminals S and S1 have an attribute code that represents generated code. The nonterminal E has two attributes. The attribute code represents generated code to evaluate the expression and store its truth value in a distinct variable, and the attribute varName contains the name of the variable in which the truth value is stored? The truth-value stored in the variable is 1 if E is true, 0 if E is false.
Give a syntax-directed definition to generate three-address code for the repeatuntil statement. Assume that you can call a function newlabel( ) that returns a distinct label for a statement. Use the operator ‘\\’ to concatenate two strings and the function gen(s) to generate a line containing the string s.
Theory Explanation is given below. |
Question 93 |
(a) Remove left-recursion from the following grammar:
S → Sa| Sb | a | b
(b) Consider the following grammar:
S → aSbS| bSaS |ε
Construct all possible parse trees for the string abab. Is the grammar ambiguous?
Theory Explanation is given below. |
Question 94 |
The number of tokens in the following C statement.
printf("i = %d, &i = %x", i, &i);
is
3 | |
26 | |
10 | |
21 |
(i) Keyword
(ii) Identifier
(iii) Constant
(iv) Variable
(v) String
(vi) Operator
printf = Token 1
( = Token 2
"i = %d, &i = %x" = Token 3 [anything inside " " is one token]
, = Token 4
i = Token 5
, = Token 6
& = Token 7
i = Token 8
) = Token 9
; = Token 10
Here, totally 10 Tokens are present in the equation.
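A rough cross-check with a toy lexer (our own sketch; the regular expression token_re is a simplification that treats a quoted string as one token and is not a full C lexer):

import re

stmt = 'printf("i = %d, &i = %x", i, &i);'
token_re = r'"[^"]*"|[A-Za-z_]\w*|&|[(),;]'
print(len(re.findall(token_re, stmt)))   # prints 10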
Question 95 |
Which of the following derivations does a top-down parser use while parsing an input string? The input is assumed to be scanned in left to right order.
Leftmost derivation | |
Leftmost derivation traced out in reverse | |
Rightmost derivation | |
Rightmost derivation traced out in reverse |
Top-down parser – leftmost derivation.
Bottom-up parser – reverse of rightmost derivation.
Question 96 |
Given the following expression grammar:
E → E * F | F + E | F F → F - F | id
which of the following is true?
* has higher precedence than + | |
- has higher precedence than * | |
+ and – have same precedence | |
+ has higher precedence than * |
'*' and '+' are produced at the same level of the grammar (by E), so they have equal precedence.
'-' is produced at a lower level (by F), so '-' has higher precedence than both '*' and '+'.
Question 97 |
Consider the syntax directed translation scheme (SDTS) given in the following.
Assume attribute evaluation with bottom-up parsing, i.e., attributes are evaluated immediately after a reduction.
E → E1 * T   {E.val = E1.val * T.val}
E → T        {E.val = T.val}
T → F – T1   {T.val = F.val – T1.val}
T → F        {T.val = F.val}
F → 2        {F.val = 2}
F → 4        {F.val = 4}
(a) Using this SDTS, construct a parse tree for the expression
4 – 2 – 4 * 2
and also compute its E.val.
(b) It is required to compute the total number of reductions performed to parse a given input. Using synthesized attributes only, modify the SDTS given, without changing the grammar, to find E.red, the number of reductions performed while reducing an input to E.
Theory Explanation is given below. |
Question 98 |
Which of the following is the most powerful parsing method?
LL (1) | |
Canonical LR | |
SLR | |
LALR |
LR > LALR > SLR
Question 99 |
The number of tokens in the Fortran statement DO 10 I = 1.25 is
3 | |
4 | |
5 | |
None of the above |
DO → 1
10 → 2
I → 3
= → 4
1.25 → 5
Question 100 |
In a resident – OS computer, which of the following systems must reside in the main memory under all situations?
Assembler | |
Linker | |
Loader | |
Compiler |
The loader must reside in main memory under all situations, since it is the program that brings every other program (assembler, linker, compiler, user programs) into memory; with virtual memory an OS may page out parts of it, but logically the loader must be resident.
Question 101 |
Which of the following statements is true?
SLR parser is more powerful than LALR | |
LALR parser is more powerful than Canonical LR parser | |
Canonical LR parser is more powerful than LALR parser | |
The parsers SLR, Canonical CR, and LALR have the same power |
Canonical LR parser is more powerful than LALR parser.
Question 102 |
Type checking is normally done during
lexical analysis | |
syntax analysis | |
syntax directed translation | |
code optimization |
Question 103 |
In the following grammar
X ::= X ⊕ Y / Y
Y ::= Z * Y / Z
Z ::= id
Which of the following is true?
‘⊕’ is left associative while ‘*’ is right associative | |
Both ‘⊕’ and ‘*’ is left associative | |
‘⊕’ is right associative while ‘*’ is left associative | |
None of the above |

X ::= X ⊕ Y is left recursive, so ⊕ is left associative.
Y ::= Z * Y is right recursive, so * is right associative.
Question 104 |
A language L allows declaration of arrays whose sizes are not known during compilation. It is required to make efficient use of memory. Which of the following is true?
A compiler using static memory allocation can be written for L | |
A compiler cannot be written for L; an interpreter must be used | |
A compiler using dynamic memory allocation can be written for L | |
None of the above |
Question 105 |
The conditional expansion facility of macro processor is provided to
test a condition during the execution of the expanded program | |
to expand certain model statements depending upon the value of a condition during the execution of the expanded program | |
to implement recursion | |
to expand certain model statements depending upon the value of a condition during the process of macro expansion |
Question 106 |
Heap allocation is required for languages
that support recursion | |
that support dynamic data structures | |
that use dynamic scope rules | |
None of the above |
Question 107 |
The pass number for each of the following activities
- 1. Object code generation
2. Literals added to literal table
3. Listing printed
4. Address resolution of local symbols
That occur in a two pass assembler respectively are
1, 2, 1, 2 | |
2, 1, 2, 1 | |
2, 1, 1, 2 | |
1, 2, 2, 2 |
Pass 1:
1) Assign addresses to all statements in the program.
2) Save the values assigned to all labels for use in pass 2.
3) Perform some processing of assembler directives.
Pass 2:
1) Assemble instructions.
2) Generate data values defined by BYTE, WORD etc.
3) Perform processing of assembler directives not done during pass 1.
4) Write the program and assembling listing.
Question 108 |
Which of the following macros can put a micro assembler into an infinite loop?
(i)
.MACRO M1 X
.IF EQ, X        ; if X = 0 then
M1 X + 1
.ENDC
.IF NE X         ; if X ≠ 0 then
.WORD X          ; address (X) is stored here
.ENDC
.ENDM

(ii)
.MACRO M2 X
.IF EQ X
M2 X
.ENDC
.IF NE, X
.WORD X + 1
.ENDC
.ENDM
(ii) only | |
(i) only | |
both (i) and (ii) | |
None of the above |
Question 109 |
A linker is given object modules for a set of programs that were compiled separately. What information need to be included in an object module?
Object code | |
Relocation bits | |
Names and locations of all external symbols defined in the object module
| |
Absolute addresses of internal symbols |
To link to external symbols it must know the location of external symbols.
Question 110 |
A shift reduce parser carries out the actions specified within braces immediately after reducing with the corresponding rule of grammar
S → xxW  {print "1"}
S → y    {print "2"}
W → Sz   {print "3"}
What is the translation of xxxxyzz using the syntax directed translation scheme described by the above rules?
23131 | |
11233 | |
11231 | |
33211 |

⇒ 23131
Note: a shift-reduce parser is a bottom-up parser.
Question 111 |
Generation of intermediate code based on an abstract machine model is useful in compilers because
it makes implementation of lexical analysis and syntax analysis easier | |
syntax-directed translations can be written for intermediate code generation | |
it enhances the portability of the front end of the compiler | |
it is not possible to generate code for real machines directly from high level language programs |
Question 112 |
Match the following items

(i) - (d), (ii) - (a), (iii) - (b), (iv) - (c) |
Yacc (Yet Another Compiler- Compiler) is a computer program for the UNIX operating system. It is a LALR parser generator, generating a parser, the part of a compiler that tries to make syntactic sense of the source code, specially a LALR parser, based on an analytic grammar. Yacc is written in portable C.
Question 113 |
Consider the following grammar.
S → aSB | d
B → b
The number of reduction steps taken by a bottom-up parser while accepting the string aaadbbb is _______.
7 |

The reductions are: d ⇒ S, b ⇒ B, aSB ⇒ S, b ⇒ B, aSB ⇒ S, b ⇒ B, aSB ⇒ S, giving 7 reductions in total.
Question 114 |
Consider the following statements.
- I. Symbol table is accessed only during lexical analysis and syntax analysis.
II. Compilers for programming languages that support recursion necessarily need heap storage for memory allocation in the run-time environment.
III. Errors violating the condition ‘any variable must be declared before its use’ are detected during syntax analysis.
Which of the above statements is/are TRUE?
II only | |
I only | |
I and III only
| |
None of I, II and III |
I is wrong, as the symbol table is also accessed during semantic analysis and code generation, not only during lexical and syntax analysis.
II is wrong, as languages that support recursion need stack allocation in the run-time environment, not heap storage.
III is wrong: “any variable must be declared before its use” is checked during semantic analysis, not syntax analysis.
Question 115 |
Consider the productions A⟶PQ and A⟶XY. Each of the five non-terminals A, P, Q, X, and Y has two attributes: s is a synthesized attribute, and i is an inherited attribute. Consider the following rules.
Rule 1: P.i = A.i + 2, Q.i = P.i + A.i, and A.s = P.s + Q.s
Rule 2: X.i = A.i + Y.s and Y.i = X.s + A.i
Which one of the following is TRUE?
Only Rule 2 is L-attributed.
| |
Neither Rule 1 nor Rule 2 is L-attributed. | |
Both Rule 1 and Rule 2 are L-attributed. | |
Only Rule 1 is L-attributed.
|
Question 116 |
For the program segment given below, which of the following are true?
program main (output);
type
  link = ^data;
  data = record
    d : real;
    n : link
  end;
var
  ptr : link;
begin
  new (ptr);
  ptr := nil;
  ptr^.d := 5.2;
  writeln (ptr)
end.
The program leads to compile time error | |
The program leads to run time error | |
The program outputs 5.2 | |
The program produces error relating to nil pointer dereferencing | |
None of the above |
Question 117 |
A part of the system software, which under all circumstances must reside in the main memory, is:
text editor | |
assembler | |
linker | |
loader | |
none of the above |
Question 118 |
Consider the SLR(1) and LALR (1) parsing tables for a context free grammar. Which of the following statements is/are true?
The go to part of both tables may be different. | |
The shift entries are identical in both the tables. | |
The reduce entries in the tables may be different. | |
The error entries in the tables may be different. | |
B, C and D. |
Reduce entry and error entry may be different due to conflicts.
Question 119 |
The arithmetic expression (a + b) * c - d / e ** f is to be evaluated on a two-address machine. The number of registers required to evaluate this expression is ______. The number of memory accesses of operands is __________.
3, 4 |
So, in total 3 registers are required and 6 memory operations in total to fetch all operands.
Question 120 |
A given set of processes can be implemented by using only parbegin/parend statement, if the precedence graph of these processes is ________
properly nested. |
Question 121 |
Choose the correct alternatives (more than one may be correct) and write the corresponding letters only: A “link editor” is a program that:
matches the parameters of the macro-definition with locations of the parameters of the macro call | |
matches external names of one program with their location in other programs | |
matches the parameters of subroutine definition with the location of parameters of subroutine call | |
acts as link between text editor and the user | |
acts as a link between compiler and user program |
A link editor performs two main functions:
1) external symbol resolution
2) relocation
Question 122 |
Choose the correct alternatives (more than one may be correct) and write the corresponding letters only: Indicate all the true statements from the following:
Recursive descent parsing cannot be used for grammar with left recursion. | |
The intermediate form for representing expressions which is best suited for code optimization is the postfix form. | |
A programming language not supporting either recursion or pointer type does not need the support of dynamic memory allocation. | |
Although C does not support call by name parameter passing, the effect can be correctly simulated in C.
| |
No feature of Pascal violates strong typing in Pascal. | |
A and D |
(B) False. Postfix form is convenient for evaluation, but tree/DAG representations are better suited for code optimization.
(C) It is false. The language can have dynamic data structures which require dynamically growing memory as their size increases.
(D) It is true, and using macros we can simulate the effect of call by name in C.
(E) Out of syllabus now.
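For statement (D), a minimal C sketch (the macro and function names below are invented for this illustration): because a macro argument is substituted textually, it is re-evaluated at every use inside the macro body, which is the call-by-name effect; a function parameter, by contrast, is evaluated once.

#include <stdio.h>

/* The macro argument is substituted textually, so the expression is
   re-evaluated at every use inside the body - the call-by-name effect. */
#define SUM_BY_NAME(e) ((e) + (e))

static int counter = 0;
static int next(void) { return ++counter; }

static int sum_by_value(int e) { return e + e; }  /* argument evaluated once */

int main(void) {
    printf("%d\n", SUM_BY_NAME(next()));   /* next() runs twice: 1 + 2 = 3 */
    counter = 0;
    printf("%d\n", sum_by_value(next()));  /* next() runs once:  1 + 1 = 2 */
    return 0;
}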
Question 123 |
Match the pairs in the following questions:
(a) Pointer data type    (p) Type conversion
(b) Activation record    (q) Dynamic data structure
(c) Repeat-until         (r) Recursion
(d) Coercion             (s) Non-deterministic loop
(a) - (q), (b) - (r), (c) - (s), (d) - (p) |
Pointer data type - Dynamic data structure
Activation record - Recursion
Repeat-until - Non-deterministic loop
Coercion - Type conversion
Question 124 |
Match the pairs in the following questions:
(a) Lexical analysis     (p) DAG's
(b) Code optimization    (q) Syntax trees
(c) Code generation      (r) Push down automaton
(d) Abelian groups       (s) Finite automaton
(a) - (s), (b) - (p), (c) - (q), (d) - (r) |
Lexical analysis - Finite automaton
Code optimization - DAG
Code generation - Syntax tree
Abelian groups - Push down automaton
Question 125 |
Merging states with a common core may produce __________ conflicts and does not produce ___________ conflicts in an LALR parser.
Reduce-Reduce, Shift-Reduce |
Question 126 |
In a compiler, the module that checks every character of the source text is called:
The code generator. | |
The code optimizer. | |
The lexical analyser. | |
The syntax analyser. |
Question 127 |
An operator precedence parser is a
Bottom-up parser. | |
Top-down parser. | |
Back tracking parser. | |
None of the above. |
Question 128 |
Using longer identifiers in a program will necessarily lead to:
Somewhat slower compilation | |
A program that is easier to understand | |
An incorrect program | |
None of the above |
Question 129 |
Consider an ambiguous grammar G and its disambiguated version D. Let the language recognized by the two grammars be denoted by L(G) and L(D) respectively. Which one of the following is true ?
L (D) ⊂ L (G) | |
L (D) ⊃ L (G) | |
L (D) = L (G) | |
L (D) is empty |
Disambiguating a grammar does not change the language it generates; it only removes extra derivations. (For example, converting an NFA to a DFA also leaves the accepted language unchanged.)
Question 130 |
Dynamic type checking slows down the execution | |
Dynamic type checking offers more flexibility to the programmers | |
In contrast to Static type checking, dynamic type checking may cause failure in runtime due to type errors | |
Unlike static type checking, dynamic type checking is done during compilation |
→ Type checking is all about ensuring that the program is type-safe, meaning that the possibility of type errors is kept to a minimum.
→ A language is statically-typed if the type of a variable is known at compile time instead of at runtime. Common examples of statically-typed languages include Ada, C, C++, C#, JADE, Java, Fortran, Haskell, ML, Pascal, and Scala.
→ Dynamic type checking is the process of verifying the type safety of a program at runtime. Common dynamically-typed languages include Groovy, JavaScript, Lisp, Lua, Objective-C, PHP, Prolog, Python, Ruby, Smalltalk and Tcl.
Question 131 |
which is written in a language that is different from the source language | |
compiles the whole source code to generate object code afresh | |
compiles only those portion of source code that has been modified. | |
that runs on one machine but produces object code for another machine |
1. Incremental compiler: instead of rebuilding all program modules, an incremental compiler re-compiles only those portions of a program that have been modified.
2. Cross-compiler: If the compiled program can run on a computer whose CPU or operating system is different from the one on which the compiler runs, the compiler is a cross-compiler.
3. A bootstrap compiler: is written in the language that it intends to compile. A program that translates from a low-level language to a higher level one is a decompiler.
4. Source-to-source compiler or transpiler: A program that translates between high-level languages is usually called a source-to-source compiler or transpiler.
Question 132 |
Consist of a definition of a variable and all its uses, reachable from that definition | |
Are created using a form of static code analysis | |
Are prerequisite for many compiler optimization including constant propagation and common sub-expression elimination | |
All of the above |
→ A counterpart of a UD Chain is a Definition-Use Chain (DU Chain), which consists of a definition, D, of a variable and all the uses, U, reachable from that definition without any other intervening definitions.
→ Both UD and DU chains are created by using a form of static code analysis known as data flow analysis. Knowing the use-def and def-use chains for a program or subprogram is a prerequisite for many compiler optimizations, including constant propagation and common subexpression elimination.
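As a small, hypothetical illustration in C (the function and variable names are invented for this sketch), the comments below mark one definition of x and the uses reachable from it, which is the information a DU chain records for constant propagation and common sub-expression elimination:

int f(int a, int b) {
    int x = a + b;        /* definition D1 of x                         */
    int y = x * 2;        /* use of x reachable from D1                 */
    if (a > 0)
        x = b;            /* definition D2 kills D1 on this path        */
    return x + y;         /* use of x reachable from D1 (when a <= 0)
                             and from D2 (when a > 0)                   */
}

/* DU chain of D1: { x * 2, x + y (only along the path where a <= 0) } */
/* UD chain of the use of x in "x + y": { D1, D2 }                     */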
Question 133 |
It is applied to a small part of the code and applied repeatedly | |
It can be used to optimize intermediate code | |
It can be applied to a portion of the code that is not contiguous | |
It is applied in the symbol table to optimize the memory requirements. |
Replacement Rules:
1. Null sequences – Delete useless operations.
2. Combine operations – Replace several operations with one equivalent.
3. Algebraic laws – Use algebraic laws to simplify or reorder instructions.
4. Special case instructions – Use instructions designed for special operand cases.
5. Address mode operations – Use address modes to simplify code.
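As a rough illustration of rules 1, 2 and 3 above (the C function names are invented for this sketch), a peephole pass looking at the short instruction window in the first function could rewrite it as shown in the second:

/* Before: naive translation of b = (a + 0) * 2 through temporaries */
int peephole_before(int a) {
    int t1 = a + 0;   /* null sequence: adding 0 does nothing          */
    int t2 = t1;      /* redundant copy                                */
    int b  = t2 * 2;  /* multiplication by 2                           */
    return b;
}

/* After applying the replacement rules, the window collapses to one cheap addition. */
int peephole_after(int a) {
    return a + a;
}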
Question 134 |
Faster | |
Slower | |
At the same speed | |
May be faster or slower |
→ An interpreter translates the program one statement at a time. It takes less time to analyze the source code, but the overall execution time is slower.
→ A compiler scans the entire program and translates it as a whole into machine code. It takes a large amount of time to analyze the source code, but the overall execution time is comparatively faster.
Question 135 |
A parse tree | |
Intermediate code | |
Machine code | |
A stream of tokens |
Question 136 |
declaration | |
assignment statements | |
input and output statements | |
structural statements |
Each statement is classified as executable or non-executable.
Executable Statements
1.Arithmetic, logical, statement label (ASSIGN), and character assignment statements
2.Unconditional GO TO, assigned GO TO, and computed GO TO statements
3.Arithmetic IF and logical IF statements
4.Block IF, ELSE IF, ELSE, and END IF statements
5.CONTINUE statement
6.STOP and PAUSE statements
7.DO statement
8.READ, WRITE, and PRINT statements
9.REWIND, BACKSPACE, ENDFILE, OPEN, CLOSE, and INQUIRE statements
10.CALL and RETURN statements
11.END statement
Non-executable Statements
1.PROGRAM, FUNCTION, SUBROUTINE, ENTRY, and BLOCK DATA statements
2.DIMENSION, COMMON, EQUIVALENCE, IMPLICIT, PARAMETER, EXTERNAL, INTRINSIC, and SAVE statements
3.INTEGER, REAL, DOUBLE PRECISION, COMPLEX, LOGICAL, and CHARACTER type-statements
4.DATA statement
5.FORMAT statement
6.Statement function statement
Question 137 |
Linear list | |
Search tree | |
Hash table | |
Self organization list |
Question 138 |
Top-down parsers | |
Bottom-up parsers | |
Predictive parsers | |
None of the above |
→ Thus the structure of the resulting program closely mirrors that of the grammar it recognizes.
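A minimal sketch of that mirroring in C, assuming a toy grammar S → ( S ) S | ε for balanced parentheses (the grammar and names are invented for this illustration, not taken from the question): each non-terminal becomes one function, and each alternative becomes a branch inside it.

#include <stdio.h>

/* Toy grammar:  S -> ( S ) S | epsilon      (balanced parentheses) */
static const char *p;                 /* current input position      */

static int S(void) {                  /* one function per non-terminal */
    if (*p == '(') {                  /* alternative  S -> ( S ) S     */
        p++;
        if (!S()) return 0;
        if (*p != ')') return 0;
        p++;
        return S();
    }
    return 1;                         /* alternative  S -> epsilon     */
}

int main(void) {
    p = "(()())";
    printf(S() && *p == '\0' ? "accepted\n" : "rejected\n");
    return 0;
}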
Question 139 |
Rightmost Derivation | |
Rightmost derivation in reverse | |
Leftmost derivation | |
Leftmost derivation in reverse |
→ The inclusive choice is used to accommodate ambiguity by expanding all alternative right-hand-sides of grammar rules.
Question 140 |
Loop optimization | |
Local optimization | |
Constant folding | |
Data flow analysis |
→ By locally, we mean a small portion of the code block at hand.
→ These methods can be applied on intermediate codes as well as on target codes.
Question 141 |
Local optimization
| |
Loop optimization
| |
Constant folding | |
Strength reduction
|
Example:
In the code fragment below, the expression (3 + 5) can be evaluated at compile time and replaced with the constant 8.
int f (void)
{
return 3 + 5;
}
Below is the code fragment after constant folding.
int f (void)
{
return 8;
}
Question 142 |
Only II is correct | |
Both I and II are correct | |
Only I is correct | |
Both I and II are incorrect |
II. In C++ also, we have to write exception-handling code explicitly.
Question 143 |
Checks to see if the instructions are legal in the current assembly mode | |
It allocates space for the literals. | |
It builds the symbol table for the symbols and their values. | |
All of these |
Question 144 |
Replacing run time computation by compile time computation | |
Removing loop invariant computation | |
Removing common subexpressions
| |
replacing a costly operation by a relatively cheaper one |
Example: exponentiation is replaced by multiplication, and multiplication is in turn replaced by addition (x * 2 becomes x + x).
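A minimal before/after sketch in C (the function names are placeholders invented for this illustration); the repeated multiplication by the loop index is replaced by an addition carried across iterations:

/* Before: one multiplication per iteration */
void scale_before(int a[], int n) {
    for (int i = 0; i < n; i++)
        a[i] = i * 8;
}

/* After strength reduction: the multiplication becomes a running addition */
void scale_after(int a[], int n) {
    int t = 0;
    for (int i = 0; i < n; i++) {
        a[i] = t;
        t = t + 8;          /* maintains i * 8 incrementally */
    }
}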
Question 145 |
It is applied to a small part of the code | |
It can be used to optimize intermediate code | |
To get the best out of this, it has to be applied repeatedly | |
It can be applied to the portion of the code that is not contiguous |
It basically works on the theory of replacement in which a part of code is replaced by shorter and faster code without change in output.
Question 146 |
S → AB
A → a
B → b
B → C
A | |
B | |
C | |
S |
Question 147 |
bottom up parsing | |
top down parsing | |
recursive parsing | |
predictive parsing |
→ The parsing methods most commonly used for parsing programming languages, LR parsing and its variations, are shift-reduce methods.

Question 148 |
S → Aa | b
A → Ac | Sd | ε
S → Aa | b A → bdA’ A’ → A’c | A’ba | A | ε | |
S → Aa | b A → A’ | bdA’, A’ → cA’ | adA’ | ε | |
S → Aa | b A → A’c | A’d A’ → bdA’ | cA | ε | |
S → Aa | b A → cA’ | adA’ | bdA’ A’ → A | ε |

Question 149 |
A bottom-up parser generates :
Leftmost derivation in reverse | |
Right-most derivation in reverse
| |
Left-most derivation | |
Right-most derivation
|
Question 150 |
(1) i=1
(2) t1=5*I
(3) t2=4*t1
(4) t3=t2
(5) a[t3]=0
(6) i=i+1;
(7) if i<15 goto(2)
33 | |
44 | |
43 | |
34 |
Step-2: Statements (2) to (6) form the loop body; statement (2) is the target of the goto, so it starts a new basic block.
Step-3: Statement (7) is the conditional branch, and the backward edge to statement (2) corresponds to the goto.
In total, the control flow graph has 3 nodes and 3 edges.

The options were presumably meant to read:
3 and 3
4 and 4
4 and 3
3 and 4
but the node count and edge count have been combined into a single number, so the answer “33” stands for 3 nodes and 3 edges.
Question 151 |
Variable Table | |
Terminal Table | |
Keyword Table | |
Identifier Table |
Symbol Table entries: Each entry in symbol table is associated with attributes that support
→ compiler in different phases.
→ Items stored in Symbol table:
→ Variable names and constants
→ Procedure and function names
→ Literal constants and strings
→ Compiler generated temporaries
→ Labels in source languages
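As a rough sketch (the field names and sizes below are purely illustrative, not a fixed format), a symbol-table entry in C might carry the kinds of information listed above:

/* One possible shape of a symbol-table entry (illustrative only) */
enum sym_kind { SYM_VARIABLE, SYM_CONSTANT, SYM_FUNCTION, SYM_LABEL, SYM_TEMP };

struct symbol {
    char          name[32];   /* identifier, literal or compiler temporary     */
    enum sym_kind kind;       /* what the name denotes                         */
    int           type;       /* index into a type table                       */
    int           scope;      /* nesting level, used for scope resolution      */
    int           offset;     /* storage offset within its activation record   */
};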
Question 152 |
7 | |
8 | |
9 | |
13 |
Printf
(
“A%B=”
,
&
I
)
;
In total there are 8 tokens.
Question 153 |
Replace P^2 by P*P | |
Replace P*16 by P<< 4 | |
Replace pow(P,3) by P*P*P | |
Replace (P <<5) -P by P*3 |
It is a compiler optimization where expensive operations are replaced with equivalent but less expensive operations. The classic example of strength reduction converts "strong" multiplications inside a loop into "weaker" additions – something that frequently occurs in array addressing.
Examples:
Replacing a multiplication within a loop with an addition
Replacing an exponentiation within a loop with a multiplication
According to options, Option B is most suitable answer.
Question 154 |
Loop rolling | |
Loop folding | |
Loop merge | |
Loop jamming |
→ Loop fission (or loop distribution) is a compiler optimization in which a loop is broken into multiple loops over the same index range with each taking only a part of the original loop's body.
Question 155 |
Loop rolling | |
Loop folding | |
Loop merge | |
Loop jamming |
→ Loop fission (or loop distribution) is a compiler optimization in which a loop is broken into multiple loops over the same index range with each taking only a part of the original loop's body.
Question 156 |
External data segments | |
External subroutines | |
data located in other procedure | |
All of these |
● Data segment stores program data. This data could be in form of initialized or uninitialized variables, and it could be local or global.
● External subroutines are routines/procedures that are created and maintained separately from the program that will be calling them
Question 157 |
SLR parsing table | |
Canonical LR parsing table | |
LALR parsing table | |
None of these |
● It is a Look Ahead Left-to-Right (LALR) parser generator, generating a parser, the part of a compiler that tries to make syntactic sense of the source code, specifically a LALR parser, based on an analytic grammar written in a notation similar to Backus–Naur Form (BNF)
Question 158 |
the name of source program in micro computers | |
the set of instructions indicating the primitive operations in a system | |
primitive form of macros used in assembly language programming | |
program of very small size |
● A microinstruction is a bit pattern in which each bit (or combination of bits) drives the control signals of the hardware.
Question 159 |
n/2 | |
n-1 | |
2n-1 | |
2n |
The question assumes the grammar has:
1) no epsilon productions
2) no productions of the form A → a
Consider the grammar:
S → Sa | a
If we were to derive the string “aaa” whose length is 3 then the number of reduce moves that would have been required are shown below:
S→ Sa
→Saa
→aaa
This shows us that it has three reduce moves. The string length is 3 and the number of reduce moves is also 3. So presence of such kinds of production might give us the answer “n” for maximum number of reduce moves. But these productions are not allowed as per the question.
Also note that if a grammar does not have unit productions, then the maximum number of reduce moves cannot exceed “n”, where “n” denotes the length of the string.
3) No unit productions
Consider the grammar:
S→ A
A→ B
B→C
C→a
If we were to derive the string “a” whose length is 1 then the number of reduce moves that would have been required are shown below:
S→ A
A→ B
B→C
C→a
This shows us that it has four reduce moves. The string length is 1 and the number of reduce moves is 4. So presence of such kind of productions might give us the answer “n+1” or even more, for maximum number of reduce moves. But these productions are not allowed as per the question.
Now keeping in view the above points suppose we want to parse the string “abcd”. (n = 4) using bottom-up parsing where strings are parsed finding the rightmost derivation of a given string backwards. So here we are concentrating on deriving rightmost derivations only.
In accordance with the question (i.e., with no epsilon productions, no unit productions of the form A → B, and no productions of the form A → a), a grammar which accepts this string can be written as follows:
S→aB
B→bC
C→cd
The Right Most Derivation for the above is:
S → aB (Reduction 3)
→ abC (Reduction 2)
→ abcd (Reduction 1)
We can see here the number of reductions present is 3.
We can get fewer reductions with another grammar which also has no unit productions, epsilon productions, or productions of the form A → a:
S→abA
A→ cd
The Right Most Derivation for the above is:
S → abA (Reduction 2)
→ abcd (Reduction 1)
Hence 2 reductions.
But we are interested in knowing the maximum number of reductions which comes from the 1st grammar. Hence total 3 reductions as maximum, which is (n – 1) as n = 4 here.
Question 160 |
Parsing of the program | |
the code generation | |
the lexical analysis of the program | |
dataflow analysis |
The lexical analyzer breaks the source program into a series of tokens, removing any whitespace or comments in the source code.
In programming language, keywords, constants, identifiers, strings, numbers, operators and punctuations symbols can be considered as tokens.
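As a toy illustration only (the token classes and function names are invented for this sketch), a minimal scanner in C might group characters into identifier, number, and operator tokens as described above:

#include <stdio.h>
#include <ctype.h>

/* Prints one token per line for a tiny expression language. */
static void scan(const char *s) {
    while (*s) {
        if (isspace((unsigned char)*s)) { s++; continue; }        /* skip blanks   */
        if (isalpha((unsigned char)*s)) {                          /* identifier    */
            printf("IDENT: ");
            while (isalnum((unsigned char)*s)) putchar(*s++);
            putchar('\n');
        } else if (isdigit((unsigned char)*s)) {                   /* number        */
            printf("NUMBER: ");
            while (isdigit((unsigned char)*s)) putchar(*s++);
            putchar('\n');
        } else {                                                    /* operator etc. */
            printf("OP: %c\n", *s++);
        }
    }
}

int main(void) {
    scan("count = count + 12;");
    return 0;
}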
Question 161 |
Assembler | |
linking loader | |
cross compiler | |
load and go |
A loader which combines the functions of a relocating loader with the ability to combine a number of program segments that have been independently compiled into an executable program.
A cross compiler is a compiler capable of creating executable code for a platform other than the one on which the compiler is running.
Question 162 |
There exist parsing algorithms for some programming languages whose complexities are less than O(n3) | |
A programming language which allows recursion can be implemented with static storage allocation | |
L-attributed definition can be evaluated in the framework of bottom-up parsing | |
Code improving transformation can be performed at both source language and intermediate code level. |
→ Statement II is false, as a programming language which allows recursion requires dynamic storage allocation.
→ Statement III is True, as L-attributed definition (assume for instance the L-attributed definition has synthesized attribute only) can be evaluated in bottom up framework.
→ Statement IV is true: code-improving transformations can be performed at both the source language and intermediate code level. For example, implicit type casting is a kind of code improvement done during the semantic analysis phase, and intermediate code optimization is a topic in itself which uses various techniques such as loop unrolling and loop-invariant code motion.
Question 163 |
on the number of strings/lifereacs | |
that the data segment must be defined after the code segment | |
on unconditional rump | |
that the data segment be defined before the code segment |
The .bss section is also a static memory section that contains buffers for data to be declared later in the program. This buffer memory is zero-filled.
Code segment − It is represented by .text section. This defines an area in memory that stores the instruction codes. This is also a fixed area.
Question 164 |
The grammar S ⟶ (S) | SS | ∈ is not suitable for predictive parsing because the grammar is
An Operator Grammar
| |
Right Recursive
| |
Left Recursive | |
Ambiguous
|

Question 165 |
assembler directives | |
instructions in any program that have no corresponding machine code instruction | |
instruction in any program whose presence or absence will not change the output for any input | |
none of these |
Question 166 |
Consider the following Grammar G :
S➝ A | B A➝ a | c B➝ b | c
Where {S,A,B} is the set of non-terminals, {a,b,c} is the set of terminals.
Which of the following statement(s) is/are correct ?
- S1 : LR(1) can parse all strings that are generated using grammar G.
S2 : LL(1) can parse all strings that are generated using grammar G.
Choose the correct answer from the code given below :
Code :Both S1 and S2
| |
Only S2
| |
Neither S1 nor S2
| |
Only S1
|

Since the grammar is Ambiguous so the strings generated by the grammar G can’t be parsed by LR(1) or LL(1) parser.
Question 167 |
Local optimization | |
Constant folding | |
Loop Optimization | |
Data flow analysis |
● Global optimization refers to finding the optimal value of a given function among all possible solutions, whereas local optimization finds the optimal value within a neighboring set of candidate solutions.
● Loop optimization is the process of increasing execution speed and reducing the overheads associated with loops. It plays an important role in improving cache performance and making effective use of parallel processing capabilities. Most execution time of a scientific program is spent on loops; as such, many compiler optimization techniques have been developed to make them faster.
● Data-flow analysis is a technique for gathering information about the possible set of values calculated at various points in a computer program.
Question 168 |
Syntax | |
Struct | |
Semantic | |
none of the above |
Question 169 |
DAG | |
Control graph | |
Flow graph | |
Hamiltonian graph |
→ A flow graph is a form of digraph associated with a set of linear algebraic or differential equations.
Definition: "A signal flow graph is a network of nodes (or points) interconnected by directed branches, representing a set of linear algebraic equations. The nodes in a flow graph are used to represent the variables, or parameters, and the connecting branches represent the coefficients relating these variables to one another. The flow graph is associated with a number of simple rules which enable every possible solution [related to the equations] to be obtained."
Question 170 |
Leftmost derivation | |
rightmost derivation | |
Leftmost derivation in reverse | |
Rightmost derivation in reverse |
● Top-down parsing can be viewed as an attempt to find leftmost derivations of an input-stream by searching for parse-trees using a top-down expansion of the given formal grammar rules.
Question 171 |
It is based on the syntax | |
It is easy to modify | |
Its description is independent of any implementation | |
All of these |
A common method of syntax-directed translation is translating a string into a sequence of actions by attaching one such action to each rule of a grammar.
Question 172 |
A set of regular expressions | |
Strings of character | |
Syntax tree | |
Set of tokens |
● The lexical analyzer breaks these syntaxes into a series of tokens, by removing any whitespace or comments in the source code.
Question 173 |
E→ E*F | F+E |F
F→ F-F | id
Which of the following is true?
* has higher precedence than + | |
– has higher precedence than * | |
+ and – have same precedence | |
+ has higher precedence than * |
Both * and + are introduced by E-productions, so the grammar does not give either one precedence over the other; '-' is introduced at the lower level F, so '-' has higher precedence than both + and *.
Question 174 |
printf(“i=%d, &i=%x”, i&i);
13 | |
6 | |
10 | |
0 |

Question 175 |
1 only | |
1 and 2 only | |
1 and 3 only | |
1 and 4 only |
i) On the RHS it contains two adjacent non-terminals.
ii) It has nullable (ε) productions.
Question 176 |
Recursive descent parser | |
Shift left associative parser | |
SLR(k) parser | |
LR(k) parser |

Question 177 |
yet accept compiler constructs | |
yet accept compiler compiler | |
yet another compiler construct | |
yet another compiler compiler |
→ It is a Look Ahead Left-to-Right (LALR) parser generator, generating a parser, the part of a compiler that tries to make syntactic sense of the source code, specifically a LALR parser, based on an analytic grammar written in a notation similar to Backus-Naur Form (BNF)
Question 178 |
LALR parser is more powerful and costly as compare to other parsers | |
All CFG’s are LP and not all grammars are uniquely defined | |
Every SLR grammar is unambiguous but not every unambiguous grammar is SLR | |
LR(K) is the most general backtracking shift reduce parsing method |
Option-A: LR > LALR > SLR. A canonical LR parser is more powerful than an LALR parser, so the statement is FALSE.
Option-B: Here LP means linear precedence. Every LP grammar is a CFG, but not every CFG is LP. So it is FALSE.
Option-C: TRUE.
Option-D: LR(k) is the most general non-backtracking shift-reduce parsing method, so the statement (which says backtracking) is FALSE.
Question 179 |
With respect to compiler design, "recursive descent" is a ____ parsing technique that reads the inputs from ____.
top-down, right to left
| |
top-down, left to right
| |
bottom up, right to left | |
bottom up, left to right
|
→ Top-down parsers read the input from left to right; bottom-up parsers also read the input from left to right but build the derivation in reverse.
Question 180 |
Which of the following is NOT a bottom up, shift reduce parser?
LR parser
| |
LL parser | |
SLR parser
| |
LALR parser
|
1. SLR
2. LALR
3. CLR
4. LR(0)
Top Down parser:
1. Recursive descent
2. Non Recursive descent(LL(1))
Question 181 |
Loop optimization | |
Redundancy Elimination | |
Folding | |
All of the options |

Question 182 |
S1: LR(0) grammar and SLR(1) grammar are equivalent
S2: LR(1) grammar are subset of LALR(1) grammars
S1 only | |
S1 and S2 both | |
S2 only | |
None of the options |
FALSE: S1: every LR(0) grammar is SLR(1), but not every SLR(1) grammar is LR(0), so the two classes are not equivalent.
FALSE: S2: LALR(1) grammars are a subset of LR(1) grammars, not the other way round.
Question 183 |
Reduces the space of the code | |
Optimization the code to reduce execution time | |
Both (A) and (B) | |
Neither (A) nor (B) |
→ The most common requirement is to minimize the time taken to execute a program; a less common one is to minimize the amount of memory occupied.
→ The growth of portable computers has created a market for minimizing the power consumed by a program
Question 184 |
Which of the following phases of the compilation process is also known as parsing?
Lexical analysis
| |
Code optimization
| |
Syntax analysis
| |
Semantic analysis
|
Question 185 |
Linker | |
Loader | |
Compiler | |
Editor |
→ A loader is a major component of an operating system that ensures all necessary programs and libraries are loaded, which is essential during the startup phase of running a program.
→ It places the libraries and programs into the main memory in order to prepare them for execution.
Question 186 |
Assembler and editor | |
Compiler and word processor | |
Only Assembler and compiler | |
Assembler,Compiler and Interpreter |
→ Assemblers: assemblers are used to convert assembly language code into machine code.
→ Interpreter: an interpreter is a computer program which executes statements directly (at runtime).
→ Examples: Python, LISP
Question 187 |
LL grammar | |
ambiguous grammar | |
LR grammar | |
none of the above |
● Synthesized attributes represent information that is being passed up the parse tree.
● LR-attributed grammars allow the attributes to be evaluated on LR parsing. As a result, attribute evaluation in LR-attributed grammars can be incorporated conveniently in bottom-up parsing.
Question 188 |
if FIRST(u) ∩ FIRST(v) is empty then the CFG has to be LL(1) | |
If the CFG is LL(1) then FIRST(u) ∩ FIRST(v) has to be empty | |
Both (A) and (B) | |
None of the above |
Theorem: A context-free grammar G = (VT, VN, S, P) is LL(1) if and only if for every non-terminal A and every pair of distinct productions A → α and A → β:
1. First(α) ∩ First(β) = ∅.
2. If β ⇒* ε, then First(α) ∩ Follow(A) = ∅.
If the grammar is epsilon-free then condition 2 is not required.
Now, as per condition 1, for every non-terminal A with productions A → u and A → v: if First(u) ∩ First(v) = φ and the CFG is epsilon-free, then it must be LL(1); and if an epsilon-free CFG is LL(1), then it must satisfy condition 1.
Question 189 |
Which of the following checks are not included in semantic analysis done by the compiler:
Type checks
| |
Spelling checks
| |
Uniqueness checks | |
Flow of control checks
|
1. Scope resolution
2. Type checking
3. Array-bound checking
Question 190 |
Parsing of the program | |
Code generation | |
Lexical analysis of the program | |
Data flow diagrams |
Different tokens or lexemes are:
→Keywords
→Identifiers
→Operators
→Constants
Question 191 |

(a)-(iii), (b)-(iv), (c)-(ii), (d)-(i) | |
(a)-(iv), (b)-(iii), (c)-(ii), (d)-(i) | |
(a)-(ii), (b)-(iv), (c)-(i), (d)-(iii) | |
(a)-(ii), (b)-(iv), (c)-(iii), (d)-(i) |
Scanner→ A part of a compiler that converts the character stream of the source program into a stream of tokens
Semantic Analysis→ A part of a compiler that understands the meaning of variable names and other symbols and checks that they are used in ways consistent with their definitions
Optimizer→ An IR-to-IR transformer that tries to improve the IR program in some way (Intermediate representation)
Question 192 |
Tolerance | |
Scalability | |
Capability | |
Loading |
→ Scalability is the capability of a system, network, or process to handle a growing amount of work, or its potential to be enlarged to accommodate that growth.
Note: A scalable system is any system that is flexible with its number of components.
Question 193 |
I. Lexical Analysis is specified by context-free grammars and implemented by pushdown automata.
II. Syntax Analysis is specified by regular expressions and implemented by finite-state machine.
Which of the above statement(s) is/are correct ?
Only I | |
Only II | |
Both I and II | |
Neither I nor II |
Both statements are FALSE: lexical analysis is specified by regular expressions and implemented by finite-state machines, while syntax analysis is specified by context-free grammars and implemented by pushdown automata.
Question 194 |
Replace P + P by 2 * P or Replace 3 + 4 by 7. | |
Replace P * 32 by P << 5 | |
Replace P * 0 by 0 | |
Replace (P << 4) – P by P * 15 |
Option (B) is correct because the multiplication is replaced by a cheaper left-shift operation, which speeds up the computation.
Option (C) is not strength reduction; eliminating P * 0 is algebraic simplification, where such statements are simply removed or folded away.
Option (D) is also not correct because it goes in the wrong direction: the shift-and-subtract form is already cheaper than the multiplication P * 15.
Question 195 |
I. Resolve external references among separately compiled program units.
II. Translate assembly language to machine code.
III. Relocate code and data relative to the beginning of the program.
IV. Enforce access-control restrictions on system libraries
I and II | |
I and III | |
II and III | |
I and IV |
A linker or link editor is a computer utility program that takes one or more object files generated by a compiler and combines them into a single executable file, library file, or another 'object' file.
Principles:
1. Resolve external references among separately compiled program units
2. Relocate code and data relative to the beginning of the program.
→ Assembler, Translate assembly language to machine code.
Question 196 |
Loop unrolling | |
Strength reduction | |
Loop concatenation | |
Loop jamming |
Strength reduction: It is a machine-independent code optimization technique in which a costly operation (an operation which takes more time to execute) is replaced with a cheaper operation (one which takes less execution time).
Loop Jamming: It is a loop optimization technique in which the bodies of two loops are combined together to decrease the number of loops.
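A minimal before/after sketch of loop jamming in C (the function and array names are placeholders invented for this illustration):

/* Before: two separate loops over the same index range */
void init_before(int a[], int b[], int n) {
    for (int i = 0; i < n; i++) a[i] = 0;
    for (int i = 0; i < n; i++) b[i] = 1;
}

/* After loop jamming: the two bodies are fused into a single loop */
void init_after(int a[], int b[], int n) {
    for (int i = 0; i < n; i++) {
        a[i] = 0;
        b[i] = 1;
    }
}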
Question 197 |
I. Reduction in overall program execution time.
II. Reduction in overall space consumption in memory.
III. Reduction in overall space consumption on disk.
IV. Reduction in the cost of software updates.
I and IV | |
I only | |
II and III | |
IV only |
FALSE: Reduction in overall space consumption in memory.
FALSE: Reduction in overall space consumption on disk.
FALSE: Reduction in the cost of software updates.
Note: Except statement-I remaining all statements are not related to dynamic linking.
Question 198 |
The grammar S → a Sb |bSa|SS|∈, where S is the only non-terminal symbol and ∈ is the null string, is ambiguous. | |
SLR is powerful than LALR. | |
An LL(1) parser is a top-down parser. | |
YACC tool is an LALR(1) parser generator. |
This grammar has two different parse trees for the string "aabb" using only leftmost derivations.

Statement B is wrong because LALR uses lookahead symbols to place the reduce entries in the parsing table. As a result, an LALR parser has more blank (error) entries than an SLR parser, which increases its error-detection capability. So an LALR parser is more powerful than SLR.
Statement C is true because LL(1) parser is a top-down parser
Statement D is also true because YACC(Yet Another Compiler Compiler) is a tool which generates LALR parser for a given grammar.
Question 199 |
that avoids tests at every iteration of the loop. | |
that improves performance by decreasing the number of instructions in a basic block. | |
that exchanges inner loops with outer loops | |
that reorders operations to allow multiple computations to happen in parallel |
→ The goal of loop unwinding is to increase a program's speed by reducing (or) eliminating instructions that control the loop, such as pointer arithmetic and "end of loop" tests on each iteration.
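A minimal before/after sketch in C (the function and array names are placeholders); unrolling by a factor of 4 is assumed, and the sketch further assumes n is a multiple of 4:

/* Before: the loop test and increment run once per element */
void copy_before(int dst[], const int src[], int n) {
    for (int i = 0; i < n; i++)
        dst[i] = src[i];
}

/* After unrolling by 4: one test and one increment now cover four elements */
void copy_after(int dst[], const int src[], int n) {
    for (int i = 0; i < n; i += 4) {
        dst[i]     = src[i];
        dst[i + 1] = src[i + 1];
        dst[i + 2] = src[i + 2];
        dst[i + 3] = src[i + 3];
    }
}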
Question 200 |
Macro processor | |
Micro preprocessor | |
Macro preprocessor | |
Dynamic Linker |
→ All preprocessing directives begins with a # symbol. For example,
#define PI 3.14
Question 201 |
LALR parser is Bottom up parser | |
A parsing algorithm which performs a left to right scanning and a right most deviation is RL (1) | |
LR parser is Bottom up parser. | |
In LL(1), the 1 indicates that there is a one - symbol look - ahead. |
TRUE: LR parsers are bottom-up parsers; SLR, LALR and CLR are all LR parsers.
TRUE: In LL(1), the 1 indicates a one-symbol look-ahead.
FALSE: a parser that scans the input left to right and produces a rightmost derivation (in reverse) is an LR(1) parser, not an RL(1) parser.
Question 202 |
Syntax Analysis | |
Lexical Analysis | |
Code Generation | |
Code Optimization |
1. Tokenizing: creating a stream of “atoms”
2. Parsing: matching the atom stream with the language grammar
XML output = one way to demonstrate that the syntax analyzer works
Question 203 |
Build the symbol table | |
Construct the intermediate code | |
Separate mnemonic opcode and operand fields | |
None of the above |
Pass-1:
1. Assign addresses to all statements in the program
2. Save the values assigned to all labels for use in Pass2
3. Perform some processing of assembler directives
Pass-2:
1. Assemble instructions Generate data values defined by BYTE,WORD
2. Perform processing of assembler directives not done in Pass 1
3. Write the object program and the assembly listing
Question 204 |
Application programmer | |
System programmer | |
Operating system | |
All of the above |
Question 205 |
Build the symbol table | |
Construct the intermediate code | |
Separate mnemonic opcode and operand fields | |
None of these | |
A,B and C |
Pass-1:
1. Assign addresses to all statements in the program
2. Save the values assigned to all labels for use in Pass2
3. Perform some processing of assembler directives
Pass-2:
1. Assemble instructions Generate data values defined by BYTE,WORD
2. Perform processing of assembler directives not done in Pass-1
3. Write the object program and the assembly listing.
Activities:
1. Build the symbol table
2. Construct the intermediate code
3. Separate mnemonic opcode and operand fields
Question 206 |
Label and value | |
Only value | |
Mnemonic | |
Memory Location |
Question 207 |
Leftmost derivation | |
Rightmost derivation | |
Rightmost derivation in reverse | |
Leftmost derivation in reverse |
→ Inclusive choice is used to accommodate ambiguity by expanding all alternative right-hand-sides of grammar rules.
Question 208 |
Loader | |
Linker | |
Editor | |
Assembler |
→ Macro processors are often embedded in other programs, such as assemblers and compilers. Sometimes they are standalone programs that can be used to process any kind of text.
Question 209 |
build the symbol table | |
construct the Intermediate code | |
separate mnemonic opcode and operand field. | |
none of these |
Pass-1:
1. Assign addresses to all statements in the program
2. Save the values assigned to all labels for use in Pass2
3. Perform some processing of assembler directives
Pass-2:
1. Assemble instructions Generate data values defined by BYTE,WORD
2. Perform processing of assembler directives not done in Pass 1
3. Write the object program and the assembly listing
Question 210 |
Lexical analysis is breaking the input into tokens | |
Syntax analysis is for parsing the phrase | |
Syntax analysis is for analyzing the semantic | |
None of these |
TRUE: Syntax analysis is for parsing the phrase
FALSE: Syntax analysis is for analyzing the semantic. For analysing semantics we are using semantic analysis but not syntax analysis
Question 211 |
Compile time | |
Run time | |
Linking time | |
Pre-processing time. |
Question 212 |
Checking type compatibility | |
Suppressing duplication of error message | |
Storage allocation | |
All of these |
1. To store the names of all entities in a structured form at one place.
2. To verify if a variable has been declared.
3. To implement type checking, by verifying assignments and expressions in the source code are semantically correct.
4. To determine the scope of a name (scope resolution).
Question 213 |
Optimizing | |
One pass compiler | |
Cross compiler | |
Multipass compiler |
→ Threaded code compiler: a compiler which simply replaces a string by the appropriate binary code.
→ Cross compiler: a compiler used to compile source code for a different kind of platform.
** Both one-pass and two-pass assemblers are available.
Question 214 |
0 | |
1 | |
2 | |
None of these |
→ Typically k is 1 and is not mentioned. The name LR is often preceded by other qualifiers, as in SLR and LALR. The LR(k) condition for a grammar was suggested by Knuth to stand for "translatable from left to right with bound k."
Question 215 |
loop optimization | |
local optimization | |
constant folding | |
data flow analysis |
Common techniques:
Null sequences
Combine operations
Algebraic laws
Special case instructions
Address mode operations
Constant folding
Question 216 |
Identifier table | |
Page map table | |
Literal table | |
Terminal table |
Question 217 |
Check the validity of a source string | |
Determine the syntactic structure of a source string | |
Both (A) and (B) | |
None of these |
Question 218 |
LALR | |
LR | |
SLR | |
LLR |
It is a Look Ahead Left-to-Right (LALR) parser generator, generating a parser, the part of a compiler that tries to make syntactic sense of the source code, specifically a LALR parser, based on an analytic grammar written in a notation similar to Backus–Naur Form (BNF).
**YACC builds up LALR parsing table.
Question 219 |
Syntax analysis | |
Lexical analysis | |
Interpretation analysis | |
Uniform symbol generation |
Question 220 |
Compile time | |
Run time | |
Linking time | |
Pre - processing time |
Question 221 |
Shift step that advances in the input stream by K(K > 1) symbols and Reduce step that applies a completed grammar rule to some recent parse trees, joining them together as one tree with a new root symbol. | |
Shift step that advances in the input stream by one symbol and Reduce step that applies a completed grammar rule to some recent parse trees, joining them together as one tree with a new root symbol. | |
Shift step that advances in the input stream by K(K = 2) symbols and Reduce step that applies a completed grammar rule to form a single tree. | |
Shift step that does not advance in the input stream and Reduce step that applies a completed grammar rule to form a single tree. |
Question 222 |
Canonical LR parser is LR (1) parser with single look ahead terminal | |
All LR(K) parsers with K > 1 can be transformed into LR(1) parsers. | |
Both (A) and (B) | |
None of the above |
TRUE: All LR(K) parsers with K > 1 can be transformed into LR(1) parsers.
Question 223 |
Generated in first pass | |
Generated in second pass | |
Not generated at all | |
Generated and used only in second pass |
→ The first pass of the assembler reads and processes the assembly program one line at a time. In processing a single line of the assembly program the assembler can make addition(s) to the symbol table, add a (possibly partial) SML instruction to the Simpletron's memory.
→ The purpose of the second pass is to complete the partial instructions written in the first pass.
Question 224 |
allows to examine and modify the contents of registers | |
does not allow execution of a segment of program | |
allows to set breakpoints, execute a segment of program and display contents of register | |
All of the above |
Question 225 |
- First (α) ∩ First (β) ≠ {a} where a is some terminal symbol of the grammar.
- First (α) ∩ First (β) ≠ λ
I and II | |
I and III | |
II and III | |
I, II and III |
A → α | β
1. First (α) and First (β) must be disjoint if none of α and β contains NULL move.
2. At most one of the strings α or β can derive ε (since First(α) and First(β) are disjoint). In this case, First(β) and Follow(A) must be disjoint.
Hence the answer is option(D).
Question 226 |
Removing left recursion alone | |
Factoring the grammar alone | |
Removing left recursion and factoring the grammar | |
None of the above |
→ To convert an arbitrary CFG to an LL(1) grammar we need both to remove left recursion and to left-factor the grammar; without both, the conversion cannot be done.
Question 227 |
shift reduce conflict only | |
reduce reduce conflict only | |
both shift reduce conflict and reduce reduce conflict | |
shift handle and reduce handle conflicts |
A shift-reduce parser works by doing some combination of Shift steps and Reduce steps, hence the name.
→ A Shift step advances in the input stream by one symbol. That shifted symbol becomes a new single-node parse tree.
→ A Reduce step applies a completed grammar rule to some of the recent parse trees, joining them together as one tree with a new root symbol.
*** A shift reduce parser suffers from both shift reduce conflict and reduce reduce conflict.
Question 228 |
An Operator Grammar | |
Right Recursive | |
Left Recursive | |
Ambiguous |

Question 229 |
S ➝ A | B
A➝ a | c
B➝ b | c
Where {S,A,B} is the set of non-terminals, {a,b,c} is the set of terminals.
Which of the following statement(s) is/are correct ?
S 1 : LR(1) can parse all strings that are generated using grammar G.
S 2 : LL(1) can parse all strings that are generated using grammar G.
Both S 1 and S 2 | |
Only S 2 | |
Neither S 1 nor S 2 | |
Only S 1 |

Since the grammar is Ambiguous so the strings generated by the grammar G can’t be parsed by LR(1) or LL(1) parser.
Question 230 |
G1 : S → SbS|a
G2 : S → aB|ab, A→GAB|a, B→ABb|b
Which of the following option is correct ?
Only G1 is ambiguous | |
Only G2 is ambiguous | |
Both G1 and G2 are ambiguous | |
Both G1 and G2 are not ambiguous |
To generate string “ababa” using G1 grammar the two different parse tree possible are:

Question 231 |
Leftmost derivation in reverse | |
Right-most derivation in reverse | |
Left-most derivation | |
Right-most derivation |
Question 232 |
reducing the range of values of input variables. | |
code optimization using cheaper machine instructions. | |
reducing efficiency of program. | |
None of the above |
Example:
Dividing by 2→ Use right shift by 2.
Multiplication by 2→ Use left shift by 2.
Question 233 |
E → E * F / F + E / F
F → F – F / id
Which of the following is true ?
* has higher precedence than + | |
– has higher precedence than * | |
+ and – have same precedence | |
+ has higher precedence than * |
Both * and + are introduced by E-productions, so the grammar does not give either one precedence over the other; '-' is introduced at the lower level F, so '-' has higher precedence than both + and *.
Question 234 |
Remove left recursion alone | |
Factoring grammar alone | |
Both of the above | |
None of the above |
→ Converting a grammar to LL(1) requires the three conditions mentioned above.
→ Making the grammar unambiguous is not mentioned in the options, so option (D) is correct.
Question 235 |
LL(I) | |
Canonical LR | |
SLR | |
LALR |
LR > LALR > SLR.
But real-world compilers typically use LALR parsers only.
Question 236 |
S1: SLR uses follow information to guide reductions. In case of LR and LALR parsers, the lookaheads are associated with the items and they make use of the left context available to the parser.
S2: LR grammar is a larger subclass of context free grammar as compared to that SLR and LALR grammars.
Which of the following is true ?
S1 is not correct and S2 is not correct. | |
S1 is not correct and S2 is correct. | |
S1 is correct and S2 is not correct. | |
S1 is correct and S2 is correct. |
Question 237 |
Leftmost derivation | |
Leftmost derivation traced out in reverse | |
Rightmost derivation traced out in reverse | |
Rightmost derivation |
→ Bottom-up parsers use a rightmost derivation in reverse.
Question 238 |
Paragraph by paragraph | |
Instruction by instruction | |
Line by line | |
None of the above |
→ An interpreter translates the source program line by line, so it is slower than a compiler.
Question 239 |
Program with wheels | |
Independent from its authors | |
Independent of platform | |
None of the above |
Question 240 |
Literal table | |
Identifier table
| |
Terminal table | |
Source table |
Question 241 |
Independent two-pass processor | |
Independent one-pass processor | |
Expand macro calls and substitute arguments | |
All of the above |
1. Independent two-pass processor
2. Independent one-pass processor
3. Expand macro calls and substitute arguments
Question 242 |
S1 → AB | aaB
A → a | Aa
B → b
and the production rules of a grammar G2 as
S2 → aS2bS2 | bS2aS2 | λ
Which of the following is correct statement ?
G1 is ambiguous and G2 is not ambiguous. | |
G1 is ambiguous and G2 is ambiguous. | |
G1 is not ambiguous and G2 is ambiguous. | |
G1 is not ambiguous and G2 is not ambiguous. |

Question 243 |
S1 ⇒ Sc ⇒ SAc ⇒ SaSbc
Thus, SaSbc is a right sentential form, and its handle is
SaS | |
bc | |
Sbc | |
aSb |
In the above question aSb is a handle because its reduction to the LHS of the production A → aSb represents one step along the reverse of a rightmost derivation toward reducing to the start symbol.
Question 244 |
S → β1 | β2, A → α1A | α2A | λ | |
S → β1| β2 | β1A | β2A, A → α1A | α2A | |
S → β1 | β2, A → α1A | α2A | |
S → β1 | β2 | β1A | β2A, A → α1A | α2A | λ |
Option A can generate only {β1, β2}, so it is not correct.
Option B is not correct because A → α1A | α2A has no terminating production, so no string containing α1 or α2 can be derived.
Option C is not correct because it can also generate only {β1, β2}.
Option D is the correct answer because it can generate all the strings generated by the given grammar.
Question 245 |
S1 : First(α) = { t | α ⇒* tβ for some string β }
S2 : Follow(X) = { a | S ⇒* αXaβ for some strings α and β }
Both statements S1 and S2 are incorrect. | |
S1 is incorrect and S2 is correct. | |
S1 is correct and S2 is incorrect. | |
Both statements S1 and S2 are correct. |
→ Statement-1: if α ⇒* tβ for some string β, then the terminal t belongs to First(α); this is the standard definition of First, so S1 is correct.
Statement-2: if S ⇒* αXaβ for some strings α and β, then the terminal a appearing immediately after X in a sentential form belongs to Follow(X); this is the standard definition of Follow.
So statement S2 is also correct.
Question 246 |
sub program | |
a complete program | |
a hardware portion
| |
relative coding |
→ A macro is an extension to the basic ASSEMBLER language. They provide a means for generating a commonly used sequence of assembler instructions/statements.
→ The sequence of instructions/statements will be coded ONE time within the macro definition. Whenever the sequence is needed within a program, the macro will be "called".
Question 247 |
semantic analysis | |
code generation | |
syntax analysis | |
code optimization |
→ Logical errors are checked during semantic analysis.
→ Code optimization and code generation are not related to error checking; they improve the code and produce the target program.
Question 248 |
Hardware | |
Compiler | |
Registers | |
None of the above |
→ Macro processors are often embedded in other programs, such as assemblers and compilers.
Question 249 |

(A + B) * C | |
A + * BC | |
A + B * C | |
A * C + B |

Question 250 |
– subtraction (highest precedence)
* multiplication
$ exponentiation (lowest precedence)
What is the result of the following expression ?
3 – 2 * 4 $ | * 2**3
– 61 | |
64 | |
512 | |
4096 |
According to the given precedence (– highest, then *, then $ lowest), the expression is grouped as 3 – 2 first, then the multiplications, then the exponentiation:
Step-1: 3 – 2 = 1
Step-2: 1 * 4 = 4
Step-3: 2 * 3 = 6
Step-4: 4 $ 6 = 4^6 = 4096
Note: the printed ‘**’ is taken as a single ‘*’, and the stray ‘|’ is treated as a useless symbol.
Question 251 |
A parser | |
Code optimizer | |
Code generator | |
Scanner |
Question 252 |
Cross compilation | |
One pass compilation | |
Two pass compilation | |
None of the above |
Note: We have an one and two pass assemblers but not compilers.
Question 253 |
re-allocation | |
allocation | |
linking | |
loading |
→ In an absolute loading scheme, the re-allocation loader function is accomplished by the assembler.
Question 254 |
A → a A b, A → b A b, A → a , A →b | |
A → a A a, A → a A b, A → c | |
A → A + A, A → a | |
Both (A) and (B) |
Option (A):
Since there is no reduce-reduce or shift-reduce conflict, the grammar is CLR(1).




Question 255 |
S → x x W [ print “1”]
S → y [print “2”]
W → S2 [print “3”]
what is the translation of “x x x x y z z” ?
1 1 2 3 1 | |
1 1 2 3 3 | |
2 3 1 3 1 | |
2 3 3 2 1 |

⇒ 23131
SR (shift-reduce) is a bottom-up parser.
Note: the production is printed as W → S2 instead of W → Sz; treating it as W → Sz gives the translation above.
Question 256 |
LL grammar | |
Ambiguous grammar | |
LR grammar | |
None of the above |
→ Inherited attributes are attributes whose values depend on the attributes of the parent and/or sibling nodes.
Question 257 |
imperative and declarative statements | |
imperative statements and assembler directives | |
imperative and declarative statements as well as assembler directives | |
declarative statements and assembler directives |
Question 258 |
(i) EQU
(ii) ORIGIN
(iii) START
(iv) END
(ii), (iii) and (iv) | |
(i), (iii) and (iv) | |
(iii) and (iv) | |
(i), (ii), (iii) and (iv) |
1. EQU→ Equate
2. ORIGIN→ Origin
3. START→ Start
4. END→ End
Question 259 |
dependent on the operating system | |
dependent on the compiler | |
dependent on the hardware | |
independent of the hardware |
→ An assembler program creates object code by translating combinations of mnemonics and syntax for operations and addressing modes into their numerical equivalents.
Question 260 |
tokens are identified. | |
set of instructions are identified.
| |
the syntactic groups are identified. | |
machine instructions are identified |
1. tokens are identified
2. whether the given code is syntactically correct or not is identified.
Question 261 |
removal of all labels. | |
removal of values that never get used. | |
removal of function which are not involved. | |
removal of a module after its use. |
→ Dead code includes code that can never be executed (unreachable code), and code that only affects dead variables (written to, but never read again), that is, irrelevant to the program.
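A small illustrative C fragment (the names are placeholders invented for this sketch): the assignment to t is dead because t is never read again, and the body of if (0) is unreachable; dead code elimination removes both without changing the output.

int compute(int a, int b) {
    int t = a * b;      /* dead: t is written but never read again */
    if (0) {
        a = a + 1;      /* unreachable code                        */
    }
    return a + b;
}

/* After dead code elimination */
int compute_optimized(int a, int b) {
    return a + b;
}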
Question 262 |
it shows attribute values at each node. | |
there are no inherited attributes. | |
it has synthesized nodes as terminal nodes. | |
every non-terminal nodes is an inherited attribute. |
Features:
1. High level specification
2. Hides implementation details
3. Explicit order of evaluation is not specified
Question 263 |
user defined address symbols are correlated with their binary equivalent | |
the syntax of the statement is checked and mistakes, if any, are listed | |
object program is generated | |
semantic of the source program is elucidated. |
→ In a two-pass compiler, the first pass (the front end) contains lexical analysis, syntax analysis, semantic analysis and intermediate code generation.
→ The second pass (the back end) contains code optimization and code generation.
Question 264 |
one micro operation | |
one macro operation | |
one instruction to be completed in a single pulse | |
one machine code instruction |
Question 265 |
start address of the available main memory | |
total size of the program | |
actual address of the data location | |
absolute values of the operands used |
Question 266 |
next tokens are predicted. | |
length of the parse tree is predicted beforehand | |
lowest node in the parse tree is predicted. | |
next lower level of the parse tree is predicted. |
→ Predictive parser is a recursive descent parser, which has the capability to predict which production is to be used to replace the input string.
→ The predictive parser does not suffer from backtracking.
→ Predictive parsing uses a stack and a parsing table to parse the input and generate a parse tree.
Question 267 |
code optimization obtained by the use of cheaper machine instructions | |
reduction in accuracy of the output | |
reduction in the range of values of input variables | |
reduction in efficiency of the program |
Question 268 |
Top - down parsing | |
Recursive - descent parsing | |
Predicative | |
Syntax tree |
→ Predictive parser is a recursive descent parser, which has the capability to predict which production is to be used to replace the input string.
→ The predictive parser does not suffer from backtracking.
→ Predictive parsing uses a stack and a parsing table to parse the input and generate a parse tree.
Question 269 |
Rightmost derivation. | |
Rightmost derivation, in reverse. | |
Leftmost derivation. | |
Leftmost derivation in reverse. |
→ A bottom up parser generates rightmost derivation, in reverse.
Question 270 |
Checking type compatibility | |
Suppressing duplication of error message | |
Storage allocation | |
All of these above |
1. Checking type compatibility
2. Suppressing duplication of error message
3. Storage allocation
Question 271 |
use dynamic scope rules | |
support dynamic data structures | |
support recursion | |
support recursion and dynamic data structures |
→ Stack allocation is required for local variables. Space on the stack is reserved for local variables when they are declared.
Question 272 |
Constant folding | |
Induction variable | |
Strength reduction | |
Code reduction |
The value is simply folded to 8.56 at compile time to avoid a costly multiplication at run time.
Question 273 |
Binary tree | |
linked list | |
Symbol table | |
Parse table |
Question 274 |
(a) input buffer
(b) stack
(c) parse table
choose the correct option from those given below:
(a) and (b) only | |
(a) and (c) only | |
(c) only | |
(a), (b) and (c) |
→ A shift-reduce parser requires all three: (a) an input buffer holding the string to be parsed, (b) a stack holding grammar symbols, and (c) a parse table that decides whether to shift or to reduce.
Question 275 |
5 | |
6 | |
3 | |
7 |
T2=(e+f)
T3=(a*b)
T4= (T3+c)
T5=T2 * T4
T6=T1 + T5
Hence 6 operations are required.
Question 276 |
Which of the following is/are FALSE?
I) Operator precedence parser works on ambiguous grammar
II) Top-down parser works on left recursive, unambiguous and deterministic grammar
III) LL(1) is a non-recursive descent parser
IV) CLR(1) is the most powerful parser
Only II | |
I, II, III and IV | |
II and IV | |
I, III and IV |
I) TRUE: An operator precedence parser can work on an ambiguous (operator) grammar.
II) FALSE: A top-down parser cannot handle left recursion, so it does not work on left recursive grammars.
III) TRUE: LL(1) is a non-recursive descent (table-driven predictive) parser.
IV) TRUE: CLR(1) is the most powerful parser.
Hence only statement II is false.
Question 277 |
E → TE'
E' → +TE' | ε
T → FT'
T' → *FT' | ε
F → id | (E)
First(E)={id, (,ε},follow(E)={ε,) }, First(F)={id,),$}, Follow(F)={*,$,(} | |
First(E)={id, ( },follow(E)={$,) }, First(F)={id,(}, Follow(F)={*,$,),+} | |
First(E)={id, (,ε},follow(E)={ε,) }, First(F)={id,)}, Follow(F)={*,$,(,+ } | |
First(E)={id, )},follow(E)={$,) }, First(F)={id,(,$}, Follow(F)={*,$,),+} |
First (E) = { id, ( }
Follow (E) = { $, ) }
First (F) = { id, ( }
Follow (F) = { *, +, $, ) }
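→ These sets can be checked directly from the grammar: FIRST(F) = {id, (} from F → id | (E); since F never derives ε, FIRST(T) = FIRST(E) = FIRST(F) = {id, (}. FOLLOW(E) = {$, )} because E is the start symbol and also appears inside the parentheses of F → (E). For FOLLOW(F): F is followed by T' in T → FT' and T' → *FT', so * is included, and because T' can derive ε, FOLLOW(F) also absorbs FOLLOW(T) = {+, ), $}. Hence FOLLOW(F) = {*, +, $, )}, which matches the second option.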
Question 278 |
Which of the following statements is TRUE for the grammar given below?
S → (L) | a
L → L.S | S
The grammar can be parsed by LR(0) parser only | |
The grammar can be parsed by LR(0) and SLR(1) parsers | |
The grammar can be parsed by LL(1) parser only | |
The grammar can be parsed by LL(1) and LR(0) parsers |

Question 279 |
The number of tokens in the following
“C” language statement is:
printf(“The number of tokens are %d”, &tcount);
8 | |
9 | |
10 | |
11 |

Question 280 |
switch(inputvalue)
{
case 1: b=c*d; break;
Default : b = b++ ; break;
}
27 | |
29 | |
26 | |
24 |
→ Counting the tokens one by one:
switch ( inputvalue )          → tokens 1–4
{                              → token 5
case 1 : b = c * d ; break ;   → tokens 6–16
Default : b = b ++ ; break ;   → tokens 17–25
}                              → token 26
Hence there are 26 tokens in total.
Question 281 |
Second pass | |
First pass and second pass respectively | |
Second pass and first pass respectively | |
First pass |
→ So, the inclusion of labels in the symbol table is done in the first pass, and the resolution of subroutine references is done in the second pass.
Question 282 |
Loop unrolling | |
Dead code elimination | |
Strength reduction | |
Software pipelining |
→ This technique is similar to the loop unrolling concept in code optimization.
Question 283 |
two or more productions have the same non-terminal on the left hand side | |
A derivation tree has more than one associated sentence | |
there is a sentence with more than one derivation tree corresponding to it | |
Brackets are not present in the grammar |
Question 284 |
Beta cross | |
Canadian cross | |
Mexican cross | |
X-cross |
Question 285 |
S → T*S | T
T → U + T | U
U → a | b
Which of the following statements is wrong?
Grammar is not ambiguous | |
Priority of + over * is ensured | |
Right to left evaluation of * and + happens | |
None of these |
S → T*S : since S is right recursive in this production, * is right associative.
T → U+T : since T is right recursive in this production, + is right associative.
Hence right-to-left evaluation of * and + will happen.
+ has higher precedence than * (it is produced lower in the parse tree, closer to the operands), so the precedence of + and * is clearly defined.
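→ For instance (an illustrative derivation, not part of the original solution), the sentence a+b*a can only be derived as S ⇒ T*S ⇒ U+T*S ⇒* a+b*a, where the whole of a+b comes from the single T to the left of *. The string is therefore grouped as (a+b)*a, confirming that + is applied before *, and the right recursion in S and T makes both operators group from the right.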
Question 286 |
It is left recursive | |
It is right recursive | |
It is ambiguous | |
It is not context-free |

Question 287 |
       a         b         $
S      E1        E2        S → ε
A      A → S     A → S     error
B      B → S     B → S     E3
FIRST(A) = {a, b, $}
FIRST(A) = {A, B}   FIRST(B) = {a, b, ε}
FOLLOW(A) = {A, B}   FOLLOW(A) = {A, b}
FOLLOW(B) = {A, b, $}   FOLLOW(B) = {$}
FIRST(B) = {a, b, ε}
FOLLOW(A) = {A, b}
| |
FOLLOW(B) = {$}
| |
FIRST(A) = {a, b, ε} = FIRST(B) | |
FIRST(A) = {A, B} = FIRST(B)
FOLLOW(A) = {a, b}   FOLLOW(A) = {a, b} |
Question 288 |
I and II | |
I and IV | |
III and IV | |
I, III and IV |