0 votes

Which among the following may help to reduce overfitting demonstrated by a model?

i. Change the loss function.
ii. Reduce model complexity.
iii. Increase the training data.
iv. Increase the number of optimization routine steps.

A. ii and i
B. ii and iii
C. i, ii, and iii
D. i, ii, iii, and iv

Tag: gateda-sample-paper-2024

admin asked Oct 21, 2023, edited Oct 23, 2023 by makhdoom ghaya
2 votes

Ans: Choice B

Option ii is an obvious one and hence included in every choice 😁
Option iii, training with more data, is one of the techniques to reduce overfitting.

Reference: https://www.ibm.com/topics/overfitting

Riya_23 answered Oct 26, 2023

7 Comments:

6rivu commented Dec 21, 2023: i (change the loss function) and iv (increase the number of optimization routine steps) don't really reduce the overfitting problem.

Riya_23 commented Dec 21, 2023: ^ Edited. Thank you.

Aditya1205 commented Dec 25, 2023: Regularization involves changing the loss function and is a way to avoid overfitting. Shouldn't i) then be an answer?

Riya_23 commented Dec 27, 2023: @Aditya1205 In regularization, the cost function is adjusted by adding a penalty term to the sum of losses (obtained from the loss function); the loss function itself is not changed. So, cost function = sum of losses + a term that penalizes input features with too-large weights.

ankitgupta.1729 commented Jan 5: Even if you are adjusting the cost function by adding a penalty term, you are ultimately still changing the cost function. Going from $J(w)_{\text{least square}} = \sum_{i=1}^{n}(y^{(i)} - \hat{y}^{(i)})^2$ to $J(w)_{\text{ridge}} = \sum_{i=1}^{n}(y^{(i)} - \hat{y}^{(i)})^2 + \lambda||w||_2^2$ or $J(w)_{\text{lasso}} = \sum_{i=1}^{n}(y^{(i)} - \hat{y}^{(i)})^2 + \lambda||w||_1$ is still a change of cost function. (A NumPy sketch of these two cost functions appears after this thread.)

Shockwave commented Jan 7: Answer: C. Statement i is also true: modifying the loss function can indeed help reduce overfitting. For example, regularization terms in the loss function (like L1 or L2 regularization) can penalize overly complex models.

rajveer43 commented Feb 2: Here, i, ii, and iii will be the correct answer.
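To make the exchange above concrete, here is a minimal NumPy sketch of the two cost functions ankitgupta.1729 writes out. The data, the feature count, and $\lambda = 1$ are synthetic choices made up for the demo, not anything from the question; the point is only that the ridge penalty changes the objective being minimized and shrinks the weights.

```python
import numpy as np

# Synthetic data: 20 samples, 5 features, a known sparse weight vector.
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 5))
w_true = np.array([1.0, 0.0, -2.0, 0.0, 0.5])
y = X @ w_true + rng.normal(scale=0.1, size=20)

def least_squares_cost(w):
    # J_ls(w) = sum of squared residuals
    return np.sum((y - X @ w) ** 2)

def ridge_cost(w, lam=1.0):
    # J_ridge(w) = same sum of losses, plus a penalty on large weights
    return least_squares_cost(w) + lam * np.sum(w ** 2)

# Closed-form minimizers: ridge adds lam * I, which shrinks the weights.
w_ls = np.linalg.solve(X.T @ X, X.T @ y)
w_ridge = np.linalg.solve(X.T @ X + 1.0 * np.eye(5), X.T @ y)
print("least-squares weights:", np.round(w_ls, 3))
print("ridge weights:        ", np.round(w_ridge, 3))
```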
1 vote

i. Change the loss function: choosing a loss function that penalizes overfitting tendencies or introduces regularization terms can guide the model toward better generalization.

ii. Reduce model complexity: simplifying the model architecture, e.g. using fewer layers or parameters, makes it less prone to memorizing the training data and more likely to generalize well to new, unseen data.

iii. Increase the training data: more data gives the model a broader view of the underlying patterns, making it less likely to overfit specific examples. It's like giving the model a richer experience to learn from.

iv. Increase the number of optimization routine steps: increasing the number of optimization steps can make the model more prone to overfitting the training data, especially if the model is already complex. It's generally better to focus on the other strategies mentioned.

So the effective ways to combat overfitting from the list are i (change the loss function), ii (reduce model complexity), and iii (increase the training data); a small polynomial-fit illustration of point ii follows below.

The correct answer is C.

rajveer43 answered Jan 5
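As a small illustration of point ii (the data, noise level, and polynomial degrees below are made up for the demo), fitting the same noisy samples with a low-degree and a high-degree polynomial shows the high-capacity model driving training error to near zero while the test error blows up:

```python
import numpy as np

# Synthetic demo of "reduce model complexity": a degree-9 polynomial can
# interpolate the 10 noisy training points, while degree 3 generalizes better.
rng = np.random.default_rng(1)
x_train = np.linspace(0, 1, 10)
y_train = np.sin(2 * np.pi * x_train) + rng.normal(scale=0.2, size=10)
x_test = np.linspace(0, 1, 200)
y_test = np.sin(2 * np.pi * x_test)

for degree in (3, 9):
    coeffs = np.polyfit(x_train, y_train, degree)   # least-squares poly fit
    train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree {degree}: train MSE = {train_mse:.4f}, test MSE = {test_mse:.4f}")
```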
0 votes

Answer: ii and iii

Overfitting means the training error is low but the testing error is very high, i.e. the variance of the trained model is high while its bias on the given training data is low. To reduce the variance of the model, the following steps can be taken (see the early-stopping sketch after this answer):

- Reduce the model complexity (increase regularization).
- Train on a larger dataset.
- Stop the training early.
- Bagging.

References:
https://www.youtube.com/watch?v=zUJbRO0Wavo&list=PLl8OlHZGYOQ7bkVbuRthEsaLr7bONzbXS&index=19&pp=iAQB
https://www.youtube.com/watch?v=65UJPA10dW8&list=PLl8OlHZGYOQ7bkVbuRthEsaLr7bONzbXS&index=20&pp=iAQB

6rivu answered Dec 21, 2023
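A minimal sketch of the early-stopping item from the list above, assuming synthetic data and a hand-rolled gradient descent (the patience threshold of 50 steps is an arbitrary illustrative choice): track the validation error during training and keep the weights from the best validation step rather than simply running more iterations, which is also why statement iv does not help.

```python
import numpy as np

# Early stopping on a hand-rolled gradient descent (synthetic data):
# keep the weights with the best validation MSE instead of training longer.
rng = np.random.default_rng(2)
X = rng.normal(size=(60, 30))
w_true = np.zeros(30)
w_true[:3] = [2.0, -1.0, 0.5]          # most features are irrelevant noise
y = X @ w_true + rng.normal(scale=0.5, size=60)
X_tr, y_tr, X_val, y_val = X[:40], y[:40], X[40:], y[40:]

w = np.zeros(30)
best_w, best_val, patience = w.copy(), np.inf, 0
for step in range(5000):
    grad = 2 * X_tr.T @ (X_tr @ w - y_tr) / len(y_tr)   # gradient of train MSE
    w -= 0.01 * grad
    val_mse = np.mean((X_val @ w - y_val) ** 2)
    if val_mse < best_val:
        best_val, best_w, patience = val_mse, w.copy(), 0
    else:
        patience += 1
        if patience >= 50:     # validation error stopped improving: stop early
            break
print(f"stopped at step {step} with best validation MSE {best_val:.4f}")
```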