# UGCNET-June2012-III: 2


In the Delta Rule for error minimization,

1. weights are adjusted w.r.t. change in the output
2. weights are adjusted w.r.t. difference between desired output and actual output
3. weights are adjusted w.r.t. difference between input and output
4. none of the above


## 1 Answer


Answer: weights are adjusted w.r.t. the difference between the desired output and the actual output.

Reference: Delta Rule
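The delta rule adjusts each weight in proportion to the error, i.e. the desired output minus the actual output: $\Delta w_i = \eta\,(d - y)\,x_i$. A minimal sketch for a single linear unit (the toy data, learning rate $\eta$, and function names are illustrative assumptions, not part of the question):

```python
import numpy as np

# Delta rule (Widrow-Hoff / LMS): each weight is adjusted in proportion to
# the difference between the desired output d and the actual output y.
# Hypothetical single linear unit; eta is the learning rate.

def delta_rule_step(w, x, d, eta=0.1):
    y = np.dot(w, x)             # actual output of the linear unit
    error = d - y                # desired output minus actual output
    return w + eta * error * x   # w_i <- w_i + eta * (d - y) * x_i

# Toy example: learn the target function y = 2*x from three samples.
w = np.zeros(1)
for _ in range(100):
    for x_val, d in [(1.0, 2.0), (2.0, 4.0), (-1.0, -2.0)]:
        w = delta_rule_step(w, np.array([x_val]), d)

print(round(w[0], 3))  # converges toward the target weight 2.0
```

Note that only the error $(d - y)$ drives the update; a change in the output alone (option 1) or the raw input (option 3) does not.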


## Related questions

1. Match the following learning modes w.r.t. characteristics of available information for learning: a. Supervised - i. Instructive information on desired responses, explicitly specified by a teacher; b. Recording - ii. A priori design information for memory storing; c. Reinforcement - iii. Partial information ... about desired responses. Codes (a b c d): i ii iii iv / i iii ii iv / ii iv iii i / ii iii iv i
2. Which of the following Prolog programs correctly implements "if G succeeds then execute goal P else execute goal $\theta$"? if-else(G, P, $\theta$) :- !, call(G), call(P). if-else(G, P, $\theta$) :- call($\theta$). / if-else(G, P, $\theta$ ... $\theta$). / if-else(G, P, $\theta$) :- call(G), call(P), !. if-else(G, P, $\theta$) :- call($\theta$). / All of the above
3. $A^*$ algorithm uses $f'=g+h'$ to estimate the cost of getting from the initial state to the goal state, where $g$ is a measure of the cost of getting from the initial state to the current node and the function $h'$ is an estimate of the cost of getting from the current node to the goal state. To find a path involving the fewest number of steps, we should set: $g=1$ / $g=0$ / $h'=0$ / $h'=1$
4. An artificial neuron receives $n$ inputs $x_1, x_2, \dots, x_n$ with weights $w_1, w_2, \dots, w_n$ attached to the input links. The weighted sum ____ is computed to be passed on to a non-linear filter $\phi$, called the activation function, to release the output. $\Sigma \, w_i$ / $\Sigma \, x_i$ / $\Sigma \, w_i + \Sigma \, x_i$ / $\Sigma \, w_i \cdot \Sigma \, x_i$
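For the standard artificial neuron model, the weighted sum is $\Sigma_i \, w_i x_i$, which is then passed through the activation function $\phi$. A minimal sketch (the sigmoid choice of $\phi$ and the toy weights and inputs are illustrative assumptions, not from the question):

```python
import math

# Standard artificial neuron: compute the weighted sum of the inputs,
# then pass it through a non-linear activation phi (sigmoid here).

def neuron_output(weights, inputs):
    s = sum(w * x for w, x in zip(weights, inputs))  # weighted sum: sum_i w_i * x_i
    return 1.0 / (1.0 + math.exp(-s))                # sigmoid activation phi(s)

# Toy example: s = 0.5*2.0 + (-0.25)*4.0 = 0.0, and sigmoid(0) = 0.5
print(neuron_output([0.5, -0.25], [2.0, 4.0]))  # -> 0.5
```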