
Artificial Intelligence - Artificial Neural Networks Online Exam Quiz

Important multiple-choice questions on Artificial Intelligence - Artificial Neural Networks, with answers, for exam practice and interview preparation.

2. A perceptron has two inputs x₁ and x₂ with weights w₁ and w₂ and a bias weight of w₀. The activation function of the perceptron is h(x). The output of the perceptron is given by (see the sketch after the options):

Options

A : y = h(w₁x₁ + w₂x₂ + w₀)

B : y = h(w₁ + w₂ + w₀)

C : y = w₁x₁ + w₂x₂ + w₀

D : y = h(w₁x₁ + w₂x₂ − w₀)
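For reference, a minimal Python sketch of the perceptron output computation, assuming a simple step function stands in for the activation h; the weights and inputs used are illustrative values only.

```python
# Minimal sketch of a two-input perceptron: y = h(w1*x1 + w2*x2 + w0).
# The step function below stands in for the activation h(x); any h could be used.

def step(x):
    return 1 if x >= 0 else 0

def perceptron_output(x1, x2, w1, w2, w0, h=step):
    # weighted sum of the inputs plus the bias weight, passed through h
    return h(w1 * x1 + w2 * x2 + w0)

print(perceptron_output(x1=1, x2=0, w1=0.5, w2=-0.3, w0=0.1))  # prints 1
```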

3. We provide a training input x to a perceptron learning rule. The desired output is t and the actual output is o. If the learning rate is η, which equation gives the weight update performed by the learning rule? (A sketch follows the options.)

Options

A : wᵢ ← wᵢ + η(t − o)

B : wᵢ ← wᵢ + η(t − o)x

C : wᵢ ← η(t − o)x

D : wᵢ ← wᵢ + (t − o)
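A minimal sketch of the classical perceptron weight update Δwᵢ = η(t − o)·xᵢ, where the bias is treated as an extra weight whose input is fixed at 1; the numbers here are illustrative.

```python
# Sketch of the perceptron learning rule: w_i <- w_i + eta * (t - o) * x_i.
# The bias can be handled as an extra weight whose input component is 1.

def perceptron_update(weights, x, t, o, eta=0.1):
    return [w_i + eta * (t - o) * x_i for w_i, x_i in zip(weights, x)]

# Target 1 but actual output 0, so each weight moves toward its input component.
print(perceptron_update([0.2, -0.4, 0.1], x=[1.0, 0.5, 1.0], t=1, o=0))
# approximately [0.3, -0.35, 0.2]
```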

5. The three main features involved in characterizing a membership function are

Options

A : Intuition, Inference, Rank Ordering

B : Fuzzy Algorithm, Neural network, Genetic Algorithm

C : Core, Support, Boundary

D : Weighted Average, Center of Sums, Median

46. A Self-Organizing Map (SOM) is a

Options

A : 3-layer network.

B : 2-layer network.

C : single-layer network.

D : 4-layer network.

47. Which one of the following statements is TRUE? The connecting weights between two layers of a neural network lie

Options

A : within the same range of input variables.

B : within the same range of output variables.

C : within a normalized scale of either (-1.0, +1.0) or (0.0, +1.0).

D : within the range of (-10.0, +10.0).

48. The back-propagation algorithm (delta rule)

Options

A : uses the concept of direct search of optimization.

B : uses the concept of gradient-based search of optimization.

C : uses the concept of nature-inspired optimization algorithm.

D : does not use the concept of optimization.
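A gradient-based search in miniature, assuming a single linear neuron and a squared-error objective; each gradient-descent step moves the weight against the error gradient, which is the idea behind the delta rule. The values chosen are illustrative.

```python
# Gradient-based search sketch: repeated gradient-descent steps on the
# squared error E(w) = 0.5 * (t - w*x)**2 for a single linear neuron.

def gradient_step(w, x, t, eta=0.1):
    o = w * x
    dE_dw = -(t - o) * x      # derivative of E with respect to w
    return w - eta * dE_dw    # step against the gradient

w = 0.0
for _ in range(50):
    w = gradient_step(w, x=1.0, t=2.0)
print(round(w, 3))  # approaches the target weight 2.0
```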

49. In the generalized delta rule, the momentum constant (α) is allowed to vary in the range of

Options

A : (-1.0, +1.0)

B : (0.5, +1.5)

C : (0.0, +1.0)

D : (-10.0, +10.0)
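A sketch of the generalized delta rule with a momentum term, assuming a weight update of the form Δw = −η·∂E/∂w + α·Δw_prev; the values of η and α used here are illustrative.

```python
# Momentum update sketch: delta_w = -eta * grad + alpha * prev_delta_w.
# The momentum constant alpha is conventionally kept between 0.0 and 1.0.

def momentum_update(w, grad, prev_delta, eta=0.1, alpha=0.9):
    delta = -eta * grad + alpha * prev_delta
    return w + delta, delta

w, prev_delta = 1.0, 0.0
w, prev_delta = momentum_update(w, grad=0.5, prev_delta=prev_delta)
w, prev_delta = momentum_update(w, grad=0.5, prev_delta=prev_delta)
print(round(w, 3))  # 0.855: the second step is larger because of the momentum term
```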

50. Which one of the following statements is FALSE? (A Gaussian RBF sketch follows the options.)

Options

A : Gaussian distribution is a radial basis function.

B : Radial Basis Function Network (RBFN) is a two-layer neural network.

C : RBFN is computationally less expensive compared to a conventional multi-layer feed-forward neural network.

D : Back-propagation neural network can capture the dynamics of a highly dynamic process.
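For reference, a small sketch of a Gaussian radial basis function, whose response depends only on the distance of the input from a centre; the centre and width used here are illustrative choices.

```python
import math

# Gaussian radial basis function: phi(x) = exp(-||x - c||^2 / (2 * sigma^2)).
# The response is maximal at the centre c and decays with distance from it.

def gaussian_rbf(x, c, sigma=1.0):
    dist_sq = sum((xi - ci) ** 2 for xi, ci in zip(x, c))
    return math.exp(-dist_sq / (2.0 * sigma ** 2))

print(round(gaussian_rbf([0.0, 0.0], c=[0.0, 0.0]), 3))  # 1.0 at the centre
print(round(gaussian_rbf([1.0, 1.0], c=[0.0, 0.0]), 3))  # 0.368, farther away
```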

51. In a batch mode of supervised learning,

Options

A : the number of training scenarios should be at least equal to the number of design variables.

B : the number of training scenarios should be less than the number of design variables.

C : there is no relationship between the number of training scenarios and that of design variables.

D : there is no chance of over-training.

52. To implement the back-propagation algorithm in a multi-layer feed-forward neural network, we use the concept of

Options

A : integration by parts.

B : chain rule of partial differentiation.

C : chain rule of exact differentiation.

D : conventional integration.
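A numeric sketch of the chain rule of partial differentiation as it appears in back-propagation, assuming a single sigmoid output neuron and a squared-error objective; the input, weight, and target values are illustrative.

```python
import math

# Chain rule of partial differentiation for one sigmoid output neuron:
# dE/dw = dE/do * do/dnet * dnet/dw, with E = 0.5*(t - o)**2 and o = sigmoid(net).

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

x, w, t = 0.5, 0.8, 1.0
net = w * x                   # net input to the neuron
o = sigmoid(net)              # neuron output

dE_do = -(t - o)              # derivative of the squared error w.r.t. the output
do_dnet = o * (1.0 - o)       # derivative of the sigmoid
dnet_dw = x                   # derivative of the net input w.r.t. the weight
print(round(dE_do * do_dnet * dnet_dw, 4))  # the gradient dE/dw, about -0.048
```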

53. For input-output modeling, in terms of prediction accuracy,

Options

A : Radial basis function neural network is found to be much better than multi-layer feed-forward neural network.

B : Multi-layer feed-forward neural network is found to be much better than Radial basis function neural network.

C : the performances of multi-layer feed-forward neural network and radial basis function neural network are comparable.

D : the performance of multi-layer feed-forward neural network and radial basis function neural network should not be compared.

54. Which one of the following statements is TRUE?

Options

A : Both multi-layer feed-forward neural network as well as radial basis function neural network can be used as clustering tools.

B : Both multi-layer feed-forward neural network as well as radial basis function neural network can be used as regression tools.

C : Multi-layer feed-forward neural network and radial basis function neural network can be used as regression and clustering tools, respectively.

D : Multi-layer feed-forward neural network and radial basis function neural network can be used as clustering and regression tools, respectively.

55. In a multi-layer feed-forward neural network, the minimum number of neurons to be put in the hidden layer is

Options

A : 1

B : 2

C : 10

D : 5

56. If we change the sign of all of the weights and the bias feeding into a particular hidden unit, then, for a given input pattern

Options

A : The sign of the activation of the hidden unit will be reversed.

B : The sign of the activation of the hidden unit will remain same.

C : The output of the hidden unit will be zero irrespective of the input.

D : The output of the hidden unit will be scaled by a positive integer.
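A quick numeric check of the situation described in the question above, assuming an odd activation function such as tanh: negating every incoming weight and the bias negates the pre-activation and hence the unit's activation.

```python
import math

# Flipping the sign of all incoming weights and the bias of a hidden unit
# negates its net input; for an odd activation such as tanh, the activation
# is therefore negated as well.

def hidden_activation(x, w, b):
    net = sum(wi * xi for wi, xi in zip(w, x)) + b
    return math.tanh(net)

x = [0.3, -0.7]
w, b = [0.5, -0.2], 0.1
print(round(hidden_activation(x, w, b), 4))                    #  0.3714
print(round(hidden_activation(x, [-wi for wi in w], -b), 4))   # -0.3714
```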

57. (Refer to Figure 1) In continuation with the above question, if we also change the sign of all of the weights leading out of the hidden unit, then

Options

A : The input-output mapping function represented by the network will be negated.

B : The input-output mapping function represented by the network will be same.

C : The output of the network will always be a zero vector.

D : Nothing can be commented on the input-output mapping function.

58. In continuation with the preceding questions, for M hidden units there will be M such 'sign-flip' symmetries, and thus any given weight vector will be one of a set of ____ equivalent weight vectors. (A small enumeration sketch follows the options.)

Options

A : 2^M

B : 2^(M-1)

C : 2^D

D : 2^(D-1)
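A tiny enumeration sketch of the counting argument: each of the M hidden units can have its signs flipped independently, giving 2^M sign configurations; M = 3 here is an illustrative choice.

```python
from itertools import product

# Each of the M hidden units can independently keep or flip its sign,
# so a weight vector belongs to a family of 2**M equivalent vectors.

M = 3
sign_patterns = list(product([+1, -1], repeat=M))
print(len(sign_patterns), 2 ** M)  # 8 8
```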

59. Memory can be modeled with a feedback loop as shown in the figure. With |w| < 1, the system with the feedback loop has

Options

A : infinite memory

B : Finite memory

C : System is stable

D : System is unstable

60. Suppose you were to design a neural network architecture for an object recognition task that must identify the object irrespective of its orientation. Which of the following would be built into the neural network architecture?

Options

A : Invariant feature space

B : Invariance by training

C : Invariance by structure

D : Invariance by restricting the connections in the network.

61. Which of the following gives non-linearity to a neural network?

Options

A : Convolution operator

B : Stochastic gradient descent

C : Sigmoid activation

D : Non-zero bias

62. Consider a two-input neuron with a logistic activation function with slope parameter a = 2. Let the inputs be [-1, 1] and the weights be [0.1, 0.5], respectively. The output of the neuron is 0.73. What is the value of the bias b₁, rounded to 1 decimal place? (A worked check follows the options.)

Options

A : 0.09 to 0.11

B : 0.08 to 0.10

C : 0.08 to 0.11

D : 0.09 to 0.10
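A worked check of the arithmetic, assuming the logistic activation takes the form y = 1 / (1 + exp(−a(w·x + b))) with slope parameter a; solving for the bias with the stated inputs, weights, and output gives a value near 0.1.

```python
import math

# Solve for the bias b, assuming y = 1 / (1 + exp(-a * (w.x + b))):
# a = 2, x = [-1, 1], w = [0.1, 0.5], y = 0.73.

a, y = 2.0, 0.73
x, w = [-1.0, 1.0], [0.1, 0.5]

net_without_bias = sum(wi * xi for wi, xi in zip(w, x))   # 0.4
b = -math.log(1.0 / y - 1.0) / a - net_without_bias
print(round(b, 2))  # prints 0.1 (about 0.10)
```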

