272 views
1 vote

Suppose a classifier predicts each possible class with equal probability. If there are 10 classes, what will the cross-entropy error be on a single example?

  1. $-\log(10)$
  2. $-0.1\log(1)$
  3. $-\log(0.1)$
  4. $-10\log(0.1)$

2 Answers

2 votes
With a one-hot label, $y_{i} = 1$ only for the true class $c$ and $0$ otherwise, and every $p_{i} = 0.1$, so the cross-entropy reduces to:

$$L = -\sum_{i=1}^{10} y_{i}\log(p_{i}) = -\log(p_{c}) = -\log(0.1)$$

So the answer is C.
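As a quick sanity check, here is a minimal Python sketch (the helper `cross_entropy` is just illustrative, not part of the question):

```python
import math

def cross_entropy(p_true: float) -> float:
    # With a one-hot label, only the probability assigned to the
    # true class contributes to the sum -sum_i y_i * log(p_i).
    return -math.log(p_true)

num_classes = 10
p = 1 / num_classes         # uniform prediction: 0.1 per class
print(cross_entropy(p))     # 2.302585..., i.e. -log(0.1) = log(10)
```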
0 votes
In this case, since each class is predicted with equal probability, the probability assigned to the true class is 1/10 (assuming there are 10 classes). Therefore, the cross-entropy error (denoted as H) for a single example can be calculated as follows:

$H = -\log(1/10) = -\log(0.1)$

The base of the logarithm is typically 2 or e (natural logarithm).

Hence, option C is the correct answer.
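A short sketch of how the choice of base changes the numeric value (assuming Python's `math` module; the value is in nats for base $e$ and bits for base 2):

```python
import math

p = 0.1  # probability assigned to the true class
print(-math.log(p))   # natural log: ~2.3026 nats
print(-math.log2(p))  # base 2:      ~3.3219 bits
```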

Related questions

276 views
1 answer
1 vote
rajveer43 asked Jan 27
You encounter a classification task, and after training your network on 20 samples, the training converges, but the training loss is remarkably high. You ... would be to keep the same model architecture and increase the learning rate.
213 views
1 answer
1 vote
rajveer43 asked Jan 29
Suppose that you are training a neural network for classification, but you notice that the training loss is much lower than the validation loss. ... probability / √ Increase $L2$ regularization weight / Increase the size of each hidden layer
248 views
1 answer
0 votes
rajveer43 asked Jan 27
What is Error Analysis? (i) The process of analyzing the performance of a model through metrics such as precision, recall or F1-score. (ii) The process of ... (iv) The process of identifying which parts of your model contributed to the error.
512 views
2 answers
1 vote
rajveer43 asked Jan 16
Suppose we are performing leave-one-out (LOO) validation and $10$-fold cross validation on a dataset of size $100{,}000$ to pick between $4$ different ... that need to be trained for LOO validation versus $10$-fold cross validation? Answer: