Artificial Intelligence
Question 1 
In Artificial Intelligence (AI), an environment is uncertain if it is
Not fully observable and not deterministic  
Not fully observable or not deterministic  
Fully observable but not deterministic
 
Not fully observable but deterministic

Question 1 Explanation:
→ Deterministic AI environments are those in which the outcome can be determined from a specific state. In other words, deterministic environments ignore uncertainty.
→ Most real-world AI environments are not deterministic. Instead, they can be classified as stochastic. Self-driving vehicles are a classic example of stochastic AI processes.
Question 2 
In Artificial Intelligence (AI), a simple reflex agent selects actions on the basis of
current percept, completely ignoring rest of the percept history.  
rest of the percept history, completely ignoring current percept.  
both current percept and complete percept history.
 
both current percept and just previous percept.

Question 2 Explanation:
→ Simple reflex agents act only on the basis of the current percept, ignoring the rest of the percept history. The agent function is based on the condition-action rule: "if condition, then action".
→ This agent function only succeeds when the environment is fully observable. Some reflex agents can also contain information on their current state which allows them to disregard conditions whose actuators are already triggered.
→ Infinite loops are often unavoidable for simple reflex agents operating in partially observable environments. Note: If the agent can randomize its actions, it may be possible to escape from infinite loops.
Question 3 
In heuristic search algorithms in Artificial Intelligence (AI), if a collection of admissible heuristics h_{1}.......h_{m} is available for a problem and none of them dominates any of the others, which should we choose ?
h(n) = max{h_{1} (n), ...., h_{m}(n)}  
h(n) = min{h_{1}(n), ...., h_{m}(n)}  
h(n) = avg{h_{1}(n), ...., h_{m}(n)}  
h(n) = sum{h_{1}(n), ...., h_{m}(n)} 
Question 3 Explanation:
Heuristic Search Strategies:
A key component of an evaluation function is a heuristic function h(n), which estimates the cost of the cheapest path from node ‘n’ to a goal node.
→ In the context of a search problem, “heuristics” refers to a certain (but loose) upper or lower bound for the cost of the best solution.
→ Goal states are nevertheless identified: in a corresponding node ‘n’ it is required that h(n)=0
E.g., a trivial lower bound carrying no information would be h(n) ≡ 0.
→ Heuristic functions are the most common form in which additional knowledge is imported to the search algorithm.
Generating admissible heuristics from relaxed problems:
→ To come up with heuristic functions one can study relaxed problems from which some restrictions of the original problem have been removed.
→ The cost of an optimal solution to a relaxed problem is an admissible heuristic for the original problem (does not overestimate).
→ The optimal solution in the original problem is, by definition, also a solution in the relaxed problem.
Example:
→ Heuristic h_{1} for the 8-puzzle gives a perfectly accurate path length for a simplified version of the puzzle, where a tile can move anywhere.
→ Similarly, h_{2} gives an optimal solution to a relaxed 8-puzzle, where tiles can also move to occupied squares.
→ If a collection of admissible heuristics is available for a problem, and none of them dominates any of the others, we can use the composite function.
h(n) = max { h_{1}(n), …, h_{m}(n) }
→ The composite function dominates all of its component functions and is consistent if none of the components overestimates.
Reference: http://www.cs.tut.fi/~elomaa/teach/AI20113.pdf
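The max-composition claim above can be illustrated with a short sketch. The heuristic functions and the state are illustrative stand-ins, not from the text:

```python
# Combining admissible heuristics by taking their pointwise maximum.

def composite_heuristic(heuristics):
    """Return h(n) = max{h_1(n), ..., h_m(n)}.

    If every h_i is admissible (never overestimates the true cost h*),
    their maximum is still <= h*, so the composite is admissible too,
    and by construction it dominates each component.
    """
    return lambda n: max(h(n) for h in heuristics)

# Toy example: two admissible estimates of a true cost h*(n) = 10.
h1 = lambda n: 6   # weaker lower bound
h2 = lambda n: 8   # tighter lower bound
h = composite_heuristic([h1, h2])
print(h("some state"))   # 8 -- the dominant estimate, still <= 10
```

The composite is never worse than its best component at any node, which is why it is the preferred choice when no single heuristic dominates.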
Question 4 
Consider following sentences regarding A*, an informed search strategy in Artificial Intelligence (AI).
(a) A* expands all nodes with f(n) < C*.
(b) A* expands no nodes with f(n) > C*.
(c) Pruning is integral to A*.
Here, C* is the cost of the optimal solution path.
Both statement (a) and statement (b) are true.  
Both statement (a) and statement (c) are true.  
Both statement (b) and statement (c) are true.  
All the statements (a), (b) and (c) are true.

Question 4 Explanation:
A* search:
→ A* combines the value of the heuristic function h(n) and the cost to reach the node ‘n’, g(n).
→ The evaluation function f(n) = g(n) + h(n) thus estimates the cost of the cheapest solution through ‘n’.
→ A* tries the node with the lowest f(n) value first.
→ This yields a search algorithm that is both complete and optimal, provided that h(n) satisfies certain conditions.
Optimality of A*:
→ A* expands all nodes ‘n’ for which f(n) < C*.
→ However, all nodes n for which f(n) > C* get pruned.
→ It is clear that A* search is complete.
→ A* search is also optimally efficient for any given heuristic function, because any algorithm that does not expand all nodes with f(n) < C* runs the risk of missing the optimal solution.
→ Despite being complete, optimal, and optimally efficient, A* search also has its weaknesses.
→ For most problems, the number of nodes with f(n) < C* is exponential in the length of the solution.
Reference:
http://www.cs.tut.fi/~elomaa/teach/AI20113.pdf
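A minimal A* sketch makes the f(n) = g(n) + h(n) mechanics concrete. The graph, edge costs and heuristic values below are illustrative, not from the text:

```python
import heapq

def a_star(graph, h, start, goal):
    """graph: node -> list of (neighbor, step_cost); h: admissible heuristic."""
    frontier = [(h(start), 0, start, [start])]   # (f, g, node, path)
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path                       # g equals C*, the optimal cost
        for nxt, cost in graph.get(node, []):
            g2 = g + cost
            if g2 < best_g.get(nxt, float("inf")):
                best_g[nxt] = g2                 # cheaper route found: keep it
                heapq.heappush(frontier, (g2 + h(nxt), g2, nxt, path + [nxt]))
    return None

graph = {"S": [("A", 1), ("B", 4)], "A": [("G", 5)], "B": [("G", 1)]}
h = lambda n: {"S": 4, "A": 4, "B": 1, "G": 0}[n]
print(a_star(graph, h, "S", "G"))   # (5, ['S', 'B', 'G'])
```

Because h never overestimates here (e.g. h("A") = 4 while the true remaining cost is 5), the first time the goal is popped its g value is the optimal cost C*.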
Question 5 
An agent can improve its performance by
Learning
 
Responding
 
Observing
 
Perceiving

Question 5 Explanation:
→ An intelligent agent (IA) is an autonomous entity which observes through sensors and acts upon an environment using actuators (i.e. it is an agent) and directs its activity towards achieving goals (i.e. it is "rational", as defined in economics).
→ Intelligent agents may also learn or use knowledge to achieve their goals. They may be very simple or very complex. A reflex machine, such as a thermostat, is considered an example of an intelligent agent.
Question 6 
Self-organizing maps are ___
A type of statistical tool for data analysis  
A type of Artificial Swarm networks  
A type of particle Swarm algorithm  
None of the above 
Question 6 Explanation:
A self-organizing map, or self-organizing feature map, is a type of artificial neural network that is trained using unsupervised learning to produce a low-dimensional, discretized representation of the input space of the training samples, called a map, and is therefore a method of dimensionality reduction.
Self-organizing maps differ from other artificial neural networks in that they apply competitive learning as opposed to error-correction learning (such as backpropagation with gradient descent), and in that they use a neighborhood function to preserve the topological properties of the input space.
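A tiny sketch of the two ideas just mentioned, competitive learning plus a neighborhood function. The 1-D map of five units, scalar inputs, and fixed learning rate and neighborhood width are all simplifying assumptions for illustration:

```python
import math
import random

random.seed(0)
weights = [random.random() for _ in range(5)]   # 5 map units

def train(data, weights, epochs=50, lr=0.3, sigma=1.0):
    for _ in range(epochs):
        for x in data:
            # competitive step: the best-matching unit (BMU) wins
            bmu = min(range(len(weights)), key=lambda i: abs(weights[i] - x))
            for i in range(len(weights)):
                # Gaussian neighborhood: units near the winner on the
                # map also move toward the input, preserving topology
                nh = math.exp(-((i - bmu) ** 2) / (2 * sigma ** 2))
                weights[i] += lr * nh * (x - weights[i])
    return weights

trained = train([0.0, 0.25, 0.5, 0.75, 1.0], weights)
print([round(w, 2) for w in trained])
```

After training, neighboring map units tend to respond to neighboring regions of the input space, which is the topology-preserving property the explanation describes.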
Question 7 
Hopfield networks are a type of ___
Gigabit network  
Terabyte network  
Artificial Neural network  
Wireless network 
Question 7 Explanation:
A Hopfield neural network is a type of artificial neural network invented by John Hopfield in 1982. It usually works by first learning a number of binary patterns and then returning the one that is most similar to a given input.
What defines a Hopfield network:
It is composed of only one layer of nodes or units, each of which is connected to all the others but not to itself. It is therefore a feedback network, which means that its outputs are redirected to its inputs. Every unit also acts as an input and an output of the network; thus the numbers of nodes, inputs and outputs of the network are equal. Additionally, each of the neurons in a Hopfield network has a binary state or activation value, usually represented as +1 or −1, which is its particular output. The state of each node generally converges, meaning that the state of each node becomes fixed after a certain number of updates.
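A minimal sketch of the behavior described above: store one ±1 pattern with a Hebbian rule (no self-connections) and recover it from a corrupted input. The 4-unit size and the pattern are illustrative, and synchronous updates are used for brevity:

```python
def sign(x):
    return 1 if x >= 0 else -1

def train(patterns, n):
    # Hebbian weights: w_ij = sum over patterns of p_i * p_j, w_ii = 0
    W = [[0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    W[i][j] += p[i] * p[j]
    return W

def recall(W, state, steps=5):
    n = len(state)
    for _ in range(steps):   # repeated updates until the state converges
        state = [sign(sum(W[i][j] * state[j] for j in range(n)))
                 for i in range(n)]
    return state

pattern = [1, -1, 1, -1]
W = train([pattern], 4)
noisy = [1, 1, 1, -1]        # one flipped bit
print(recall(W, noisy))      # [1, -1, 1, -1] -- the stored pattern
```

The network settles into the stored pattern, which acts as a fixed point of the update rule, exactly the "return the most similar stored pattern" behavior described.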
Question 8 
Sigmoidal feedforward artificial neural networks with one hidden layer can ___
Approximate any continuous function  
Approximate any discontinuous function  
Approximate any continuous function and its derivatives of arbitrary order.  
Exact modeling technique 
Question 8 Explanation:
Multilayer perceptron class of networks consists of multiple layers of computational units, usually interconnected in a feedforward way. Each neuron in one layer has directed connections to the neurons of the subsequent layer. In many applications the units of these networks apply a sigmoid function as an activation function.
A feedforward neural network is an artificial neural network wherein connections between the nodes do not form a cycle. As such, it is different from recurrent neural networks. The feedforward neural network was the first and simplest type of artificial neural network devised. In this network, the information moves in only one direction, forward, from the input nodes, through the hidden nodes (if any) and to the output nodes. There are no cycles or loops in the network.
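A minimal forward pass through a one-hidden-layer sigmoidal network shows the one-way, cycle-free flow of information described above. The weights and bias values are arbitrary illustrative numbers; a real approximator would learn them:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def forward(x, hidden_w, hidden_b, out_w, out_b):
    # information flows one way: input -> hidden -> output, no cycles
    hidden = [sigmoid(w * x + b) for w, b in zip(hidden_w, hidden_b)]
    return sum(w * h for w, h in zip(out_w, hidden)) + out_b

y = forward(0.5, hidden_w=[1.0, -2.0], hidden_b=[0.0, 1.0],
            out_w=[0.3, 0.7], out_b=0.1)
print(round(y, 4))
```

With enough sigmoidal hidden units and suitable weights, such a network can approximate any continuous function on a compact domain, which is what the correct option states.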
Question 9 
Match the following :
a-i, b-ii, c-iii, d-iv  
a-i, b-iii, c-iv, d-ii  
a-ii, b-iii, c-iv, d-i  
a-ii, b-ii, c-iii, d-iv 
Question 9 Explanation:
Absurd→ Clearly impossible being contrary to some evident truth.
Ambiguous→ Capable of more than one interpretation or meaning.
Axiom→ An assertion that is accepted and used without a proof.
Conjecture→ An opinion preferably based on some experience or wisdom.
Question 10 
Match the following:
a-i, b-ii, c-iii, d-iv  
a-i, b-iii, c-ii, d-iv  
a-iii, b-ii, c-iv, d-i  
a-ii, b-iii, c-i, d-iv 
Question 10 Explanation:
Affiliate Marketing: Vendors ask partners to place logos on partner’s site. If customers click, come to vendors and buy.
Viral Marketing: Spread your brand on the net by word-of-mouth. Receivers will send your information to their friends.
Group Purchasing: Aggregating the demands of small buyers to get a large volume. Then negotiate a price.
Bartering Online: Exchanging surplus products and services with the process administered completely online by an intermediary. Company receives “points” for its contribution.
Question 11 
Let P(m, n) be the statement “m divides n” where the Universe of discourse for both the variables is the set of positive integers. Determine the truth values of the following propositions.
(a)∃m ∀n P(m, n)
(b)∀n P(1, n)
(c) ∀m ∀n P(m, n)
(a)  True; (b)  True; (c)  False  
(a)  True; (b)  False; (c)  False  
(a)  False; (b)  False; (c)  False  
(a)  True; (b)  True; (c)  True 
Question 11 Explanation:
Given P(m,n) ="m divides n"
Statement (a) is ∃m ∀n P(m, n): there exists some positive integer that divides every positive integer. It is true because the positive integer 1 divides every positive integer.
Statement (b) is ∀n P(1, n): 1 divides every positive integer. It is true.
Statement (c) is ∀m ∀n P(m, n): every positive integer divides every positive integer. It is false (for example, 2 does not divide 3).
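The three propositions can be checked by brute force over a finite prefix of the positive integers. A finite range cannot prove the universal claims, but it illustrates why each answer holds; N = 50 is an arbitrary choice:

```python
N = 50
P = lambda m, n: n % m == 0          # "m divides n"

# (a) ∃m ∀n P(m, n): m = 1 is a witness
a = any(all(P(m, n) for n in range(1, N + 1)) for m in range(1, N + 1))
# (b) ∀n P(1, n): 1 divides everything
b = all(P(1, n) for n in range(1, N + 1))
# (c) ∀m ∀n P(m, n): fails, e.g. 2 does not divide 3
c = all(all(P(m, n) for n in range(1, N + 1)) for m in range(1, N + 1))

print(a, b, c)   # True True False
```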
Question 12 
Which of the following is true for a semi-dynamic environment?
The environment itself does not change with the passage of time but the agent’s performance score does.  
The environment changes while the agent is deliberating.  
Even if the environment changes with the passage of time while deliberating, the performance score does not change.  
Environment and performance score, both change simultaneously. 
Question 12 Explanation:
Semi-dynamic environment: The environment itself does not change with the passage of time, but the agent’s performance score does.
Question 13 
Consider the following statements
S1: A heuristic is admissible if it never overestimates the cost to reach the goal
S2: A heuristic is monotonic if it satisfies the triangle inequality.
Which one of the following is true referencing the above statements?
Statement S1 is true but statement S2 is false.  
Statement S1 is false but statement S2 is true.  
Neither of the statements S1 and S2 are true  
Both the statements S1 and S2 are true. 
Question 13 Explanation:
A heuristic function is said to be admissible if it never overestimates the cost of reaching the goal, i.e. the cost it estimates to reach the goal is not higher than the lowest possible cost from the current point in the path.
Question 14 
An agent can improve its performance by
Learning  
Responding  
Observing  
Perceiving 
Question 14 Explanation:
→ An intelligent agent (IA) is an autonomous entity which observes through sensors and acts upon an environment using actuators (i.e. it is an agent) and directs its activity towards achieving goals (i.e. it is "rational", as defined in economics).
→ Intelligent agents may also learn or use knowledge to achieve their goals. They may be very simple or very complex. A reflex machine, such as a thermostat, is considered an example of an intelligent agent.
Question 15 
In Artificial Intelligence (AI), an environment is uncertain if it is
Not fully observable and not deterministic  
Not fully observable or not deterministic  
Fully observable but not deterministic  
Not fully observable but deterministic 
Question 15 Explanation:
→ Deterministic AI environments are those in which the outcome can be determined from a specific state. In other words, deterministic environments ignore uncertainty.
→ Most real-world AI environments are not deterministic. Instead, they can be classified as stochastic. Self-driving vehicles are a classic example of stochastic AI processes.
Question 16 
Back propagation is a learning technique that adjusts weights in the neural network by propagating weight changes.
Forward from source to sink  
Backward from sink to source  
Forward from source to hidden nodes  
Backward from sink to hidden nodes 
Question 16 Explanation:
→ Back propagation is a learning technique that adjusts weights in the neural network by propagating weight changes backward from sink to source.
→ Backpropagation is shorthand for "the backward propagation of errors," since an error is computed at the output and distributed backwards throughout the network’s layers.
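One backpropagation step on a minimal 1-1-1 network (one input, one sigmoidal hidden unit, one output unit) makes the sink-to-source direction concrete. The initial weights, target and learning rate are illustrative, and squared error is assumed as the loss:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

x, target, lr = 1.0, 0.0, 0.5
w1, w2 = 0.8, 0.4                 # input->hidden, hidden->output

# forward pass (source -> sink)
h = sigmoid(w1 * x)
y = sigmoid(w2 * h)

# backward pass (sink -> source): error computed at the output first,
# then distributed backwards through the layers
delta_out = (y - target) * y * (1 - y)       # error at the output unit
delta_hid = delta_out * w2 * h * (1 - h)     # error propagated backward

w2 -= lr * delta_out * h                     # weight changes applied ...
w1 -= lr * delta_hid * x                     # ... backward, sink to source
print(w1 < 0.8 and w2 < 0.4)                 # True: both weights decreased
```

Note that the hidden-layer error delta_hid is computed from the output error delta_out, which is exactly the "backward propagation of errors" the explanation names.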
Question 17 
In Artificial Intelligence (AI), a simple reflex agent selects actions on the basis of
current percept, completely ignoring rest of the percept history.  
rest of the percept history, completely ignoring current percept.  
both current percept and complete percept history.  
both current percept and just previous percept. 
Question 17 Explanation:
→ Simple reflex agents act only on the basis of the current percept, ignoring the rest of the percept history. The agent function is based on the condition-action rule: "if condition, then action".
→ This agent function only succeeds when the environment is fully observable. Some reflex agents can also contain information on their current state which allows them to disregard conditions whose actuators are already triggered.
→ Infinite loops are often unavoidable for simple reflex agents operating in partially observable environments. Note: If the agent can randomize its actions, it may be possible to escape from infinite loops.
Question 18 
In heuristic search algorithms in Artificial Intelligence (AI), if a collection of admissible heuristics h_1, ..., h_m is available for a problem and none of them dominates any of the others, which should we choose?
h(n) = max{h_1(n), ..., h_m(n)}  
h(n) = min{h_1(n), ..., h_m(n)}  
h(n) = avg{h_1(n), ..., h_m(n)}  
h(n) = sum{h_1(n), ..., h_m(n)} 
Question 18 Explanation:
Heuristic Search Strategies:
A key component of an evaluation function is a heuristic function h(n), which estimates the cost of the cheapest path from node ‘n’ to a goal node.
→ In the context of a search problem, “heuristics” refers to a certain (but loose) upper or lower bound for the cost of the best solution.
→ Goal states are nevertheless identified: in a corresponding node ‘n’ it is required that h(n) = 0.
E.g., a trivial lower bound carrying no information would be h(n) ≡ 0.
→ Heuristic functions are the most common form in which additional knowledge is imported to the search algorithm.
Generating admissible heuristics from relaxed problems:
→ To come up with heuristic functions one can study relaxed problems from which some restrictions of the original problem have been removed.
→ The cost of an optimal solution to a relaxed problem is an admissible heuristic for the original problem (it does not overestimate).
→ The optimal solution in the original problem is, by definition, also a solution in the relaxed problem.
Example:
→ Heuristic h_1 for the 8-puzzle gives a perfectly accurate path length for a simplified version of the puzzle, where a tile can move anywhere.
→ Similarly, h_2 gives an optimal solution to a relaxed 8-puzzle, where tiles can also move to occupied squares.
→ If a collection of admissible heuristics is available for a problem, and none of them dominates any of the others, we can use the composite function h(n) = max{h_1(n), ..., h_m(n)}.
→ The composite function dominates all of its component functions and is consistent if none of the components overestimates.
Question 19 
Consider following sentences regarding A*, an informed search strategy in Artificial Intelligence (AI).
(a) A* expands all nodes with f(n) < C*.
(b) A* expands no nodes with f(n) > C*.
(c) Pruning is integral to A*.
Here, C* is the cost of the optimal solution path.
Which of the following is correct with respect to the above statements ?
Both statement (a) and statement (b) are true.  
Both statement (a) and statement (c) are true.  
Both statement (b) and statement (c) are true.  
All the statements (a), (b) and (c) are true. 
Question 19 Explanation:
A* search:
→ A* combines the value of the heuristic function h(n) and the cost to reach the node ‘n’, g(n).
→ The evaluation function f(n) = g(n) + h(n) thus estimates the cost of the cheapest solution through ‘n’.
→ A* tries the node with the lowest f(n) value first.
→ This yields a search algorithm that is both complete and optimal, provided that h(n) satisfies certain conditions.
Optimality of A*:
→ A* expands all nodes ‘n’ for which f(n) < C*.
→ However, all nodes n for which f(n) > C* get pruned.
→ It is clear that A* search is complete.
→ A* search is also optimally efficient for any given heuristic function, because any algorithm that does not expand all nodes with f(n) < C* runs the risk of missing the optimal solution.
→ Despite being complete, optimal, and optimally efficient, A* search also has its weaknesses.
→ For most problems, the number of nodes with f(n) < C* is exponential in the length of the solution.
Question 20 
Consider a vocabulary with only four propositions A, B, C and D. How many models are there for the sentence B ∨ C?
10  
12  
15  
16 
Question 20 Explanation:
Here, the number of models is the number of truth assignments that make the sentence true. With four propositions there are 2^4 = 16 assignments; B ∨ C is false only when both B and C are false, which happens in 2^2 = 4 assignments (one for each choice of A and D).
Hence there are 16 − 4 = 12 models for the given sentence.
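The model count can be verified by enumerating all 2^4 truth assignments over {A, B, C, D}:

```python
from itertools import product

# Count the assignments in which B ∨ C holds.
models = sum(1 for A, B, C, D in product([False, True], repeat=4) if B or C)
print(models)   # 12
```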
Question 21 
Consider the following statements :
(a) False ⊨ True
(b) If α ⊨ (β ∧ γ) then α ⊨ β and α ⊨ γ.
Which of the following is correct with respect to the above statements ?
Both statement (a) and statement (b) are false.  
Statement (a) is true but statement (b) is false.  
Statement (a) is false but statement (b) is true.  
Both statement (a) and statement (b) are true. 
Question 21 Explanation:
(a) False ⊨ True: an entailment α ⊨ β holds when every model of α is also a model of β. False has no models, so the condition holds vacuously; equivalently, ¬False ∨ True is valid. Statement (a) is true.
(b) α ⊨ (β ∧ γ) means ¬α ∨ (β ∧ γ) is valid. By distributivity, ¬α ∨ (β ∧ γ) = (¬α ∨ β) ∧ (¬α ∨ γ), which is valid exactly when both ¬α ∨ β and ¬α ∨ γ are valid, i.e. when α ⊨ β and α ⊨ γ. Statement (b) is true.
So, both the statements are correct.
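Both claims can be sanity-checked with a brute-force propositional entailment test over truth assignments. The concrete formulas α, β, γ below are illustrative, and a finite check is not a general proof:

```python
from itertools import product

def entails(f, g, nvars=3):
    """f |= g iff every model of f is a model of g."""
    return all(g(*v) for v in product([False, True], repeat=nvars) if f(*v))

# (a) False |= True: vacuously true, since False has no models.
print(entails(lambda a, b, c: False, lambda a, b, c: True))   # True

# (b) with illustrative formulas: alpha |= (beta AND gamma)
alpha = lambda a, b, c: a and b
beta  = lambda a, b, c: a
gamma = lambda a, b, c: b
both  = lambda a, b, c: beta(a, b, c) and gamma(a, b, c)
print(entails(alpha, both), entails(alpha, beta), entails(alpha, gamma))
# True True True
```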
Question 22 
Consider the following two sentences :
(a) The planning graph data structure can be used to give a better heuristic for a planning problem.
(b) Dropping negative effects from every action schema in a planning problem results in a relaxed problem.
Which of the following is correct with respect to the above sentences ?
Both sentence (a) and sentence (b) are false.  
Both sentence (a) and sentence (b) are true.  
Sentence (a) is true but sentence (b) is false  
Sentence (a) is false but sentence (b) is true. 
Question 22 Explanation:
● Negative effects put restrictions on the action schema. When these restrictions are in place, the number of actions that can be taken to get to the next time step decreases, because with each added restriction the actions that do not meet it are filtered out.
● When these negative effects are dropped, the number of actions increases, and dropping all of the negative effects from the action schema results in a relaxed problem.
● A planning graph is a directed graph organized into levels: first a level S_0 for the initial state, consisting of nodes representing each fluent that holds in S_0; then a level A_0 consisting of nodes for each ground action that might be applicable in S_0; then alternating levels S_i followed by A_i, until we reach a termination condition.
● As a tool for generating accurate heuristics, we can view the planning graph as a relaxed problem that is efficiently solvable.
Question 23 
A knowledge base contains just one sentence, ∃x AsHighAs (x, Everest). Consider the following two sentences obtained after applying existential instantiation.
(a) AsHighAs (Everest, Everest)
(b) AsHighAs (Kilimanjaro, Everest)
Which of the following is correct with respect to the above sentences ?
Both sentence (a) and sentence (b) are sound conclusions.  
Both sentence (a) and sentence (b) are unsound conclusions  
Sentence (a) is sound but sentence (b) is unsound.  
Sentence (a) is unsound but sentence (b) is sound. 
Question 23 Explanation:
● ∃x AsHighAs(x, Everest) means there is some element that is as high as Everest.
● Existential instantiation requires substituting a constant that does not already appear in the knowledge base. Sentence (a), AsHighAs(Everest, Everest), reuses the existing constant Everest, so it is an unsound conclusion.
● Sentence (b), AsHighAs(Kilimanjaro, Everest), instantiates x with the fresh constant Kilimanjaro, so it is a sound conclusion.
Question 24 
If a process is under statistical control, then it is
Maintainable  
Measurable  
Predictable  
Verifiable 
Question 24 Explanation:
→ If a process is under statistical control, then it is predictable.
→ Statistical process control(SPC) helps to ensure that the process operates efficiently, producing more specification conforming products with less waste (rework or scrap).
Question 25 
Standard planning algorithms assume environment to be __________.
Both deterministic and fully observable  
Neither deterministic nor fully observable  
Deterministic but not fully observable  
Not deterministic but fully observable 
Question 25 Explanation:
→ Classical planning assumes environments that are fully observable, deterministic, finite, static and discrete (in time, actions, objects and effects).
Question 26 
Entropy of a discrete random variable with possible values {x_{1}, x_{2}, ..., x_{n}} and probability mass function P(X) is :
The value of b gives the units of entropy. The unit for b=10 is :
bits  
bans  
nats  
deca 
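The formula image is not reproduced here, but entropy with an arbitrary logarithm base b is H(X) = −Σ P(x_i) log_b P(x_i). A minimal sketch (function name and the example distributions are illustrative):

```python
import math

def entropy(probs, b=2):
    """Shannon entropy H(X) = -sum(p * log_b(p)) of a discrete distribution."""
    return -sum(p * math.log(p, b) for p in probs if p > 0)

# A fair coin carries 1 bit of entropy in base 2
print(entropy([0.5, 0.5], b=2))                  # 1.0
# With b = 10 the unit is the ban (hartley)
print(round(entropy([0.1] * 10, b=10), 6))       # 1.0
```

With b = 2 the unit is bits, b = e gives nats, and b = 10 gives bans (hartleys).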
Question 27 
For any binary (n, h) linear code with minimum distance (2t + 1) or greater, it is possible to correct up to ______ errors.
2t+1  
t+1  
t  
t-1 
Question 28 
Consider a Takagi-Sugeno-Kang (TSK) model consisting of rules of the form: If x_{1} is A_{i1} and ... and x_{r} is A_{ir} THEN y = f_{i}(x_{1}, x_{2}, ..., x_{r}) = b_{i0} + b_{i1}x_{1} + ... + b_{ir}x_{r}. Assume a_{i} is the matching degree of rule i; then the total output of the model is given by:
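The option images are missing, but the standard TSK total output is the matching-degree-weighted average y = Σ a_i f_i / Σ a_i. A minimal sketch (the function name and the example values are hypothetical):

```python
def tsk_output(matching_degrees, rule_outputs):
    """TSK weighted-average output: y = sum(a_i * f_i) / sum(a_i)."""
    num = sum(a * f for a, f in zip(matching_degrees, rule_outputs))
    den = sum(matching_degrees)
    return num / den

# Hypothetical matching degrees and linear-consequent values for three rules
print(round(tsk_output([0.2, 0.5, 0.3], [1.0, 2.0, 3.0]), 2))  # 2.1
```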
Question 29 
Consider a single perceptron with sign activation function. The perceptron is represented by weight vector [0.4 −0.3 0.1]^{T} and a bias θ = 0. If the input vector to the perceptron is X = [0.2 0.6 0.5] then the output of the perceptron is:
1  
0  
0.25  
-1 
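Assuming a sign activation that returns −1 for negative net input, the computation (with θ = 0) can be checked with a short sketch:

```python
def sign(v):
    return 1 if v >= 0 else -1

w = [0.4, -0.3, 0.1]   # weight vector from the question
x = [0.2, 0.6, 0.5]    # input vector
net = sum(wi * xi for wi, xi in zip(w, x))  # 0.08 - 0.18 + 0.05 = -0.05
print(sign(net))  # -1
```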
Question 30 
Consider the following AO graph:
Which is the best node to expand next by AO* algorithm?
A  
B  
C  
B and C 
Question 30 Explanation:
f(n) = c(n) + h(n)
where
f(n) = estimated cost of the best path through node n
c(n) = cost of the path
h(n) = heuristic value of the node
Cost of choosing B and C together is
=(22+3)+(24+2)
=51
Note: We add the costs of B and C because they belong to an AND arc.
Cost of choosing A = 42+4 = 46
Since the cost of choosing A (46) is less than the cost of choosing B and C (51), we will expand 'A'.
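The graph image is missing, but the arithmetic in the explanation can be checked directly (the cost figures below are taken verbatim from the explanation):

```python
# Costs as given in the explanation (the AO graph itself is not reproduced here)
cost_B_and_C = (22 + 3) + (24 + 2)  # AND arc: both B and C must be solved
cost_A = 42 + 4                     # OR arc through A
print(cost_B_and_C, cost_A)         # 51 46
best = 'A' if cost_A < cost_B_and_C else 'B and C'
print(best)                         # A
```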
Question 31 
In Artificial Intelligence (AI), what is present in the planning graph?
Sequence of levels  
Literals  
Variables  
Heuristic estimates 
Question 31 Explanation:
In Artificial Intelligence (AI), Sequence of levels is present in the planning graph.
→ A planning graph consists of a sequence of levels that correspond to time steps in the LEVELS plan, where level 0 is the initial state.
→ Each level contains a set of literals and a set of actions.
→ The literals are all those that could be true at that time step, depending on the actions executed at preceding time steps.
→ The actions are all those actions that could have their preconditions satisﬁed at that time step, depending on which of the literals actually hold.
→ The planning graph records only a restricted subset of the possible negative interactions among actions; i.e., it might be optimistic about the minimum number of time steps required for a literal to become true.
→ This number of steps in the planning graph provides a good estimate of how difﬁcult it is to achieve a given literal from the initial state.
→ The planning graph is deﬁned in such a way that it can be constructed very efﬁciently.
→ Planning graphs work only for propositional planning problems, i.e., ones with no variables.
Question 32 
What is the best method to go for the game playing problem?
Optimal Search  
Random Search
 
Heuristic Search  
Stratified Search 
Question 32 Explanation:
→ Heuristic search is the best method to go for the game playing problem.
→ A heuristic is a method that might not always find the best solution but is guaranteed to find a good solution in reasonable time.
Examples: hill climbing, simulated annealing, best-first search, the A* algorithm, etc.
Question 33 
Let R and S be two fuzzy relations defined as :
Then, the resulting relation, T, which relates elements of universe x to the elements of universe z using max-min composition is given by :
Question 33 Explanation:
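The relation matrices in the question are in an image, so the matrices below are hypothetical, but they show how max-min composition is computed:

```python
def max_min_composition(R, S):
    """T[i][k] = max over j of min(R[i][j], S[j][k])."""
    rows, inner, cols = len(R), len(S), len(S[0])
    return [[max(min(R[i][j], S[j][k]) for j in range(inner))
             for k in range(cols)] for i in range(rows)]

# Hypothetical relation matrices (the ones in the question are not reproduced)
R = [[0.6, 0.3],
     [0.2, 0.9]]
S = [[1.0, 0.5, 0.3],
     [0.8, 0.4, 0.7]]
print(max_min_composition(R, S))
# [[0.6, 0.5, 0.3], [0.8, 0.4, 0.7]]
```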
Question 34 
A neuron with 3 inputs has the weight vector [0.2 –0.1 0.1]^{T} and a bias θ = 0. If the input vector is X = [0.2 0.4 0.2]^{T} then the total input to the neuron is :
0.20  
1.0  
0.02  
–1.0 
Question 34 Explanation:
Total input to the neuron: f(x) = W^{T}X + θ
= (0.2 × 0.2) + (−0.1 × 0.4) + (0.1 × 0.2)
= 0.04 − 0.04 + 0.02
= 0.02
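The computation above can be sketched as a minimal check (assuming the second weight is −0.1, as in the weight vector):

```python
w = [0.2, -0.1, 0.1]  # weight vector
x = [0.2, 0.4, 0.2]   # input vector
theta = 0             # bias
total = sum(wi * xi for wi, xi in zip(w, x)) + theta
print(round(total, 2))  # 0.02
```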
Question 35 
Which of the following neural networks uses supervised learning ?
(A) Multilayer perceptron
(B) Self organizing feature map
(C) Hopfield network
(A) only  
(B) only  
(A) and (B) only  
(A) and (C) only 
Question 35 Explanation:
Question 36 
Let R and S be two fuzzy relations defined as:
Then, the resulting relation, T, which relates elements of universe X to elements of universe Z using max-min composition is given by
Question 36 Explanation:
Question 37 
Compute the value of adding the following two fuzzy integers :
A = {(0.3, 1), (0.6, 2), (1, 3), (0.7, 4), (0.2, 5)}
B = {(0.5, 11), (1, 12), (0.5, 13)}
Where fuzzy addition is defined as μ_{A+B}(z) = max_{x + y = z}(min (μ_{A}(x), μ_{B}(y)))
Then, f (A + B) is equal to
{(0.5, 12), (0.6, 13), (1, 14), (0.7, 15), (0.7, 16), (1, 17), (1, 18)}  
{(0.5, 12), (0.6, 13), (1, 14), (1, 15), (1, 16), (1, 17), (1, 18)}  
{(0.3, 12), (0.5, 13), (0.5, 14), (1, 15), (0.7, 16), (0.5, 17), (0.2, 18)}  
{(0.3, 12), (0.5, 13), (0.6, 14), (1, 15), (0.7, 16), (0.5, 17), (0.2, 18)} 
Question 37 Explanation:
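A sketch of the max-min fuzzy addition, using the sets from the question (dictionary keys are the elements, values the membership degrees):

```python
def fuzzy_add(A, B):
    """Extension-principle addition: mu(z) = max over x+y=z of min(muA(x), muB(y))."""
    out = {}
    for x, ma in A.items():
        for y, mb in B.items():
            z = x + y
            out[z] = max(out.get(z, 0), min(ma, mb))
    return out

A = {1: 0.3, 2: 0.6, 3: 1.0, 4: 0.7, 5: 0.2}
B = {11: 0.5, 12: 1.0, 13: 0.5}
print(sorted(fuzzy_add(A, B).items()))
# [(12, 0.3), (13, 0.5), (14, 0.6), (15, 1.0), (16, 0.7), (17, 0.5), (18, 0.2)]
```

The result matches the option {(0.3, 12), (0.5, 13), (0.6, 14), (1, 15), (0.7, 16), (0.5, 17), (0.2, 18)}.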
Question 38 
A perceptron has input weights W1 = – 3.9 and W2 = 1.1 with threshold value T = 0.3. What output does it give for the input x_{1} = 1.3 and x_{2} = 2.2 ?
– 2.65  
– 2.30  
0  
1 
Question 38 Explanation:
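Assuming the perceptron outputs 1 when the net input reaches the threshold T and 0 otherwise, the computation is:

```python
w1, w2 = -3.9, 1.1   # weights as given
T = 0.3              # threshold
x1, x2 = 1.3, 2.2    # inputs
net = w1 * x1 + w2 * x2          # -5.07 + 2.42 = -2.65
output = 1 if net >= T else 0    # net is below the threshold
print(round(net, 2), output)     # -2.65 0
```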
Question 39 
How does randomized hill-climbing choose the next move each time?
It generates a random move from the moveset, and accepts this move.  
It generates a random move from the whole state space, and accepts this move.  
It generates a random move from the moveset, and accepts this move only if this move improves the evaluation function.  
It generates a random move from the whole state space, and accepts this move only if this move improves the evaluation function. 
Question 39 Explanation:
Randomized hill-climbing generates a random move from the moveset, and accepts this move only if it improves the evaluation function.
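The accepted-only-if-improving behaviour can be sketched as follows (the move set and the objective function are illustrative):

```python
import random

def randomized_hill_climb(start, moveset, evaluate, iters=1000, seed=0):
    """Pick a random move from the moveset; accept it only if it improves evaluate()."""
    rng = random.Random(seed)
    state = start
    for _ in range(iters):
        move = rng.choice(moveset)
        candidate = move(state)
        if evaluate(candidate) > evaluate(state):  # accept only improvements
            state = candidate
    return state

# Toy example: maximize -(x - 7)^2 with unit steps
best = randomized_hill_climb(0, [lambda x: x + 1, lambda x: x - 1],
                             lambda x: -(x - 7) ** 2)
print(best)  # 7
```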
Question 40 
Consider the following game tree in which the root is a maximizing node and children are visited left to right. What nodes will be pruned by alpha-beta pruning?
I  
HI  
CHI  
GHI 
Question 41 
Consider a 3-puzzle where, like in the usual 8-puzzle game, a tile can only move to an adjacent empty space. Given the initial state
which of the following state cannot be reached?
Question 41 Explanation:
This problem can be solved by inspection: slide the tiles from the initial state and check which of the given arrangements can actually be produced.
Question 42 
A software program that infers and manipulates existing knowledge in order to generate new knowledge is known as
Data dictionary  
Reference mechanism  
Inference engine  
Control strategy 
Question 42 Explanation:
→ A software program that infers and manipulates existing knowledge in order to generate new knowledge is known as inference engine.
→ Inference engines work primarily in one of two modes: forward chaining and backward chaining.
→ Forward chaining starts with the known facts and asserts new facts.
→ Backward chaining starts with goals, and works backward to determine what facts must be asserted so that the goals can be achieved.
Question 43 
What is the sequence of steps taken in designing a fuzzy logic machine?
Fuzzification → Rule evaluation → Defuzzification  
Fuzzification → Defuzzification → Rule evaluation  
Rule evaluation → Fuzzification → Defuzzification  
Rule evaluation → Defuzzification → Fuzzification 
Question 43 Explanation:
Question 44 
Let R and S be two fuzzy relations defined as
Then, the resulting relation, T, which relates elements of universe of X to elements of universe of Z using max-product composition is given by
Question 44 Explanation:
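Again with hypothetical matrices (the question's R and S are in an image), max-product composition replaces min with multiplication:

```python
def max_product_composition(R, S):
    """T[i][k] = max over j of R[i][j] * S[j][k]."""
    return [[max(R[i][j] * S[j][k] for j in range(len(S)))
             for k in range(len(S[0]))] for i in range(len(R))]

# Hypothetical matrices; the actual R and S in the question are not reproduced
R = [[0.6, 0.3],
     [0.2, 0.9]]
S = [[1.0, 0.5, 0.3],
     [0.8, 0.4, 0.7]]
T = max_product_composition(R, S)
print([[round(v, 2) for v in row] for row in T])
```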
Question 45 
Match each Artificial Intelligence term in ListI that best describes a given situation in ListII
Id, IIa, IIIb, IVc  
Id, IIc, IIIa, IVb  
Id, IIc, IIIb, IVa  
Ic, IId, IIIa, IVb 
Question 45 Explanation:
Semantic network→ A method of knowledge representation that uses a graph
Frame→ A data structure representing stereotypical knowledge
Declarative knowledge→ Knowledge about what to do as opposed to how to do it
Primitive→ A premise of a rule that is not concluded by any rule
Question 46 
In Artificial Intelligence , a semantic network
is a graph-based method of knowledge representation where nodes represent concepts and arcs represent relations between concepts.  
is a graph-based method of knowledge representation where nodes represent relations between concepts and arcs represent concepts.  
represents an entity as a set of slots and associated rules.  
is a subset of first-order logic. 
Question 46 Explanation:
→ In Artificial Intelligence, a semantic network is a graph-based method of knowledge representation where nodes represent concepts and arcs represent relations between concepts.
→ Semantic networks are used in natural language processing applications such as semantic parsing and wordsense disambiguation.
Question 47 
Which formal system provides the semantic foundation for Prolog?
Predicate calculus  
Lambda calculus  
Hoare logic  
Propositional logic 
Question 47 Explanation:
Predicate calculus provides the semantic foundation for Prolog.
Question 48 
Given the following set of prolog clauses :
father(X, Y) :-
    parent(X, Y),
    male(X).
parent(Sally, Bob).
parent(Jim, Bob).
parent(Alice, Jane).
parent(Thomas, Jane).
male(Bob).
male(Jim).
female(Salley).
female(Alice).
How many atoms are matched to the variable ‘X’ before the query father(X, Jane) reports a Result ?
1  
2  
3  
4  
No option is correct. 
Question 48 Explanation:
Excluded for evaluation
Question 49 
Forward chaining systems are __________ where as backward chaining systems are __________.
Data driven, Data driven  
Goal driven, Data driven  
Data driven, Goal driven  
Goal driven, Goal driven 
Question 49 Explanation:
Forward Chaining: Forward chaining starts with the available data and uses inference rules to extract more data until a conclusion is reached.
It is also known as the data-driven inference technique.
It is bottom-up reasoning.
It is a breadth-first search.
For example: "If it is raining then I will bring the umbrella." Here "it is raining" is the available data from which more data is extracted, and the conclusion "I will bring the umbrella" is derived.
Backward Chaining: Backward chaining is an inference method described colloquially as working backward from the goal.
It is also known as the goal-driven inference technique.
Here we start from a goal and apply inference rules to get some data.
It is top-down reasoning.
It is a depth-first search.
For example: "If it is raining then I will bring the umbrella." Here our conclusion is "I will bring the umbrella." If I am bringing an umbrella, it can be inferred that it is raining. So here "it is raining" is the data obtained from the goal; it was derived in a backward direction, which is the process of backward chaining.
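The data-driven behaviour of forward chaining can be sketched with the umbrella example (the rule encoding below is illustrative):

```python
def forward_chain(facts, rules):
    """Data-driven inference: fire rules whose premises hold until nothing new is added."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if set(premises) <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

rules = [(["it is raining"], "bring the umbrella")]
print(forward_chain(["it is raining"], rules))
```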
Question 50 
Reasoning strategies used in expert systems include __________.
Forward chaining, backward chaining and problem reduction  
Forward chaining, backward chaining and boundary mutation  
Forward chaining, backward chaining and back propagation  
Backward chaining, problem reduction and boundary mutation 
Question 50 Explanation:
Expert systems are designed to solve complex problems by reasoning through bodies of knowledge, represented mainly as if-then rules rather than through conventional procedural code.
So we can say that expert systems are used for problem reduction.
For problem reduction an expert system can use forward or backward chaining.
Forward Chaining: Forward chaining starts with the available data and uses inference rules to extract more data until a conclusion is reached.
It is also known as the data-driven inference technique.
It is bottom-up reasoning.
It is a breadth-first search.
Backward Chaining: Backward chaining is an inference method described colloquially as working backward from the goal.
It is also known as the goal-driven inference technique.
Here we start from a goal and apply inference rules to get some data.
It is top-down reasoning.
It is a depth-first search.
Question 51 
Language model used in LISP is __________.
Functional programming  
Logic programming  
Object oriented programming  
All of the above

Question 51 Explanation:
LISP is a functional language. A functional language:
Uses a declarative programming model.
Focuses on "what you are doing".
Supports parallel programming.
Its functions have no side effects.
Supports both "abstraction over data" and "abstraction over behavior".
Question 52 
Consider the two class classification task that consists of the following points:
Class C_{1} : [−1, −1], [−1, 1], [1, −1]
Class C_{2} : [1, 1]
The decision boundary between the two classes C_{1} and C_{2} using single perceptron is given by:
x_{1} − x_{2} − 0.5 = 0  
−x_{1} + x_{2} − 0.5 = 0  
0.5(x_{1} + x_{2}) − 1.5 = 0  
x_{1} + x_{2} − 0.5 = 0 
Question 52 Explanation:
For such questions,
➜ Evaluate each equation in the options on every point of both classes.
➜ The option that separates the two classes into two regions, i.e., (+ve & −ve) regions, is the correct answer.
Class C1:
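A minimal check, assuming the reconstructed signs of the class points (the minus signs were lost in extraction) and the candidate boundary x1 + x2 − 0.5 = 0:

```python
def g(x1, x2):
    return x1 + x2 - 0.5   # candidate boundary x1 + x2 - 0.5 = 0

C1 = [(-1, -1), (-1, 1), (1, -1)]   # assumed signs (lost in extraction)
C2 = [(1, 1)]
print([g(*p) < 0 for p in C1])  # [True, True, True]  -> negative side
print([g(*p) > 0 for p in C2])  # [True]              -> positive side
```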
Question 53 
Consider a standard additive model consisting of rules of the form of
If x is A_{i} AND y is B_{i} THEN z is C_{i}.
Given crisp inputs x = x_{0}, y = y_{0}, the output of the model is:
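The model formula is in an image. As an assumption, a common reading uses min as the AND operator for the firing strength and a firing-weighted average of consequent centroids for the output; all membership shapes and centroids below are illustrative:

```python
def sam_output(rules, x0, y0):
    """Each rule: (A_i, B_i, c_i) with membership functions A_i, B_i and
    consequent centroid c_i. Output is the firing-weighted centroid average."""
    num = den = 0.0
    for A, B, c in rules:
        w = min(A(x0), B(y0))   # firing strength (min used as AND)
        num += w * c
        den += w
    return num / den if den else 0.0

# Triangular membership function with feet a, c and peak b (illustrative)
tri = lambda a, b, c: lambda v: max(0.0, min((v - a) / (b - a), (c - v) / (c - b)))
rules = [(tri(0, 1, 2), tri(0, 1, 2), 1.0),
         (tri(1, 2, 3), tri(1, 2, 3), 3.0)]
print(sam_output(rules, 1.5, 1.5))  # 2.0
```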
Question 54 
A bellshaped membership function is specified by three parameters (a, b, c) as follows:
Question 54 Explanation:
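The formula image is missing; the standard generalized bell membership function with parameters (a, b, c) is μ(x) = 1 / (1 + |(x − c)/a|^{2b}), where c is the center, a the width, and b the slope. A sketch:

```python
def gbell(x, a, b, c):
    """Generalized bell membership: 1 / (1 + |(x - c) / a| ** (2 * b))."""
    return 1.0 / (1.0 + abs((x - c) / a) ** (2 * b))

print(gbell(2.0, a=2, b=4, c=2))   # 1.0 at the center x = c
print(gbell(4.0, a=2, b=4, c=2))   # 0.5 at x = c + a
```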
Question 55 
Consider the following:
(a) Evolution
(b) Selection
(c) reproduction
(d) Mutation
Which of the following are found in genetic algorithms?
(b),(c) and (d) only  
(b) and (d) only  
(a),(b),(c) and (d)  
(a),(b) and (d) only 
Question 55 Explanation:
Five phases are considered in a genetic algorithm:
1. Initial population
2. Fitness function
3. Selection
4. Crossover
5. Mutation
Note: According to the official key, option C is correct.
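The phases above can be sketched with a toy one-max genetic algorithm (all parameter values and the fitness function are illustrative):

```python
import random

def genetic_algorithm(n=20, pop_size=30, generations=60, seed=1):
    """Toy one-max GA: evolve bit strings to maximize the number of 1-bits."""
    rng = random.Random(seed)
    fitness = sum  # fitness of a bit string = count of 1-bits
    # 1. initial population of random bit strings
    pop = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(generations):
        def pick():  # 3. tournament selection of size 2 (uses 2. fitness)
            a, b = rng.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = pick(), pick()
            cut = rng.randrange(1, n)        # 4. one-point crossover
            child = p1[:cut] + p2[cut:]
            if rng.random() < 0.1:           # 5. mutation: flip one random bit
                i = rng.randrange(n)
                child[i] ^= 1
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

best = genetic_algorithm()
print(sum(best))
```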
Question 56 
Which of the following is an example of unsupervised neural network?
Back propagation network  
Hebb network  
Associative memory network
 
Selforganizing feature map 
Question 56 Explanation:
A self-organizing map (SOM) or self-organizing feature map (SOFM) is a type of artificial neural network (ANN) that is trained using unsupervised learning to produce a low-dimensional (typically two-dimensional), discretized representation of the input space of the training samples, called a map; it is therefore a method of dimensionality reduction.
Self-organizing maps differ from other artificial neural networks in that they apply competitive learning as opposed to error-correction learning (such as backpropagation with gradient descent), and in that they use a neighborhood function to preserve the topological properties of the input space.
Question 57 
The STRIPS representation is
a featurecentric representation  
an actioncentric representation  
a combination of featurecentric and actioncentric representation
 
a hierarchical featurecentric representation 
Question 57 Explanation:
The STRIPS representation for an action consists of
→The precondition, which is a set of assignments of values to features that must be true for the action to occur, and
→The effect, which is a set of resulting assignments of values to those primitive features that change as the result of the action.
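The action-centric STRIPS representation described above can be sketched as a precondition set plus add/delete effect sets (the class shape and predicate names are illustrative):

```python
# A minimal sketch of a STRIPS-style action: preconditions plus add/delete effects.
class Action:
    def __init__(self, name, preconditions, add_effects, del_effects):
        self.name = name
        self.preconditions = set(preconditions)
        self.add_effects = set(add_effects)
        self.del_effects = set(del_effects)

    def applicable(self, state):
        # The precondition must hold in the current state
        return self.preconditions <= state

    def apply(self, state):
        # The effect: delete negated features, add new ones
        return (state - self.del_effects) | self.add_effects

move = Action("move(A,B)",
              preconditions={"at(A)"},
              add_effects={"at(B)"},
              del_effects={"at(A)"})
state = {"at(A)"}
if move.applicable(state):
    state = move.apply(state)
print(state)  # {'at(B)'}
```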
Question 58 
(a)(iii), (b)(iv), (c)(i), (d)(ii)  
(a)(iii), (b)(iv), (c)(ii), (d)(i)  
(a)(iv), (b)(iii), (c)(i), (d)(ii)  
(a)(iv), (b)(iii), (c)(ii), (d)(i) 
Question 58 Explanation:
Intelligence → Judgemental
Knowledge → Codifiable, endorsed with relevance and purpose
Information → Scattered facts, easily transferable
Data → Contextual, tacit, transfer needs learning
Question 59 
(a)(i), (b)(iv), (c)(iii), (d)(ii)  
(a)(iv), (b)(i), (c)(ii), (d)(iii)  
(a)(i), (b)(iv), (c)(ii), (d)(iii)  
(a)(iv), (b)(ii), (c)(i), (d)(iii) 
Question 59 Explanation:
Steepest-ascent Hill Climbing → Considers all moves from the current state and selects the best move.
Branch – and – bound → Keeps track of all partial paths which can be a candidate for further exploration
Constraint satisfaction → Discover problem state(s) that satisfy a set of constraints
Means – end – analysis → Detects difference between current state and goal state
Question 60 
Let W_{ij} represent the weight between node i at layer k and node j at layer (k – 1) of a given multilayer perceptron. The weight updation using the gradient descent method is given by:
where α and E represent the learning rate and the error in the output respectively.
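The option images are missing, but the standard gradient descent rule is ΔW_{ij} = −α ∂E/∂W_{ij}. A sketch on a toy one-dimensional error function (everything below is illustrative):

```python
def gradient_descent_update(w, grad, alpha=0.1):
    """W(new) = W(old) - alpha * dE/dW, applied element-wise."""
    return [wi - alpha * gi for wi, gi in zip(w, grad)]

# Minimize E(w) = w^2 (so dE/dw = 2w) starting from w = 1.0
w = [1.0]
for _ in range(50):
    w = gradient_descent_update(w, [2 * w[0]], alpha=0.1)
print(round(w[0], 4))  # 0.0
```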
Question 61 
Consider the following:
(a) Trapping at local maxima
(b) Reaching a plateau
(c) Traversal along the ridge.
Which of the following option represents shortcomings of the hill climbing algorithm?
(a) and (b) only  
(a) and (c) only  
(b) and (c) only  
(a), (b) and (c) 
Question 61 Explanation:
Hill climbing limitations:
1. Local Maxima: A hill-climbing algorithm reaching the vicinity of a local maximum is drawn towards the peak and gets stuck there, having no other place to go.
2. Ridges: These are sequences of local maxima, making it difficult for the algorithm to navigate.
3. Plateaux: This is a flat state-space region. As there is no uphill to go, the algorithm often gets lost on the plateau.
Three standard variants of hill climbing help avoid these problems:
1. Stochastic hill climbing selects at random from the uphill moves. The probability of selection varies with the steepness of the uphill move.
2. First-choice hill climbing implements the above by generating successors randomly until a better one is found.
3. Random-restart hill climbing searches from randomly generated initial states until the goal state is reached.
Question 62 
According to Dempster-Shafer theory for uncertainty management,
Where Bel(A) denotes Belief of event A.