In a linear regression model, how do we define the cost function? And once the cost function is defined, how do we minimize the error term?
Besides the fruitful posts so far, for the difference between using RMSE and MSE in linear regression you can read [this answer](https://datascience.stackexchange.com/questions/66712/in-linear-regression-why-we-generally-use-rmse-instead-of-mse/66730#66730). – Fatemeh Asgarinejad Jan 22 '20 at 00:53
1 Answer
Statistical programs, such as R, typically use least-squares estimation. It's a deterministic procedure: the optimal parameters of a linear model have a closed-form solution, so the fit is computed directly. Because of this, you don't have to worry about the choice of a loss function.
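For concreteness, here's a minimal sketch of that closed-form fit (the normal-equations solution $\hat{\beta} = (X^\top X)^{-1} X^\top y$) using NumPy; the toy data below is made up purely for illustration:

```python
import numpy as np

# Toy data (made up for illustration): y ≈ 2x + 1 plus noise
rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(100, 1))
y = 2 * X[:, 0] + 1 + rng.normal(0, 0.5, size=100)

# Prepend an intercept column, then solve the least-squares
# problem directly (equivalent to the normal equations)
X_design = np.column_stack([np.ones(len(X)), X])
beta, *_ = np.linalg.lstsq(X_design, y, rcond=None)

print(beta)  # approximately [1.0, 2.0]: intercept and slope
```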
If instead you wanted to train your linear regression with a gradient descent algorithm, you'd have to specify a loss function to run it. Classical loss functions for regression are Mean Squared Error (MSE), Root Mean Squared Error (RMSE), Mean Absolute Error (MAE), and Mean Absolute Percentage Error (MAPE); see the sketch below.
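For reference, with $y_i$ the observed values and $\hat{y}_i$ the predictions, these losses are defined as:

$$\mathrm{MSE} = \frac{1}{n}\sum_{i=1}^{n}(y_i - \hat{y}_i)^2, \qquad \mathrm{RMSE} = \sqrt{\mathrm{MSE}},$$
$$\mathrm{MAE} = \frac{1}{n}\sum_{i=1}^{n}\lvert y_i - \hat{y}_i\rvert, \qquad \mathrm{MAPE} = \frac{100}{n}\sum_{i=1}^{n}\left\lvert \frac{y_i - \hat{y}_i}{y_i} \right\rvert.$$

And here is a minimal sketch of gradient descent on the MSE loss (note this is not what R's `lm` does; the learning rate and iteration count are arbitrary choices for illustration):

```python
import numpy as np

def gradient_descent_mse(X, y, lr=0.01, n_iters=5000):
    """Fit y ≈ X @ w + b by gradient descent on the MSE loss."""
    n, d = X.shape
    w = np.zeros(d)
    b = 0.0
    for _ in range(n_iters):
        residuals = X @ w + b - y          # prediction errors
        # Gradients of MSE = mean(residuals**2) w.r.t. w and b
        w -= lr * 2 * X.T @ residuals / n
        b -= lr * 2 * residuals.mean()
    return w, b

# Same toy data as in the closed-form sketch above
rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(100, 1))
y = 2 * X[:, 0] + 1 + rng.normal(0, 0.5, size=100)

w, b = gradient_descent_mse(X, y)
print(w, b)  # should approach slope ≈ 2, intercept ≈ 1
```

Both approaches minimize the same squared-error criterion; least squares solves it analytically, while gradient descent approaches the same minimum iteratively.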
Leevo
Thanks Leevo, but I did not get the meaning of this part of your answer: "It's a deterministic procedure: the optimal parameters of a linear model have a closed-form solution, so the fit is computed directly. Because of this, you don't have to worry about the choice of a loss function." Does this mean that once I run the linear regression function in R, the very first run gives me the optimal model with the least error, and I do not need to reduce the error further (though variable selection, transformation, etc. can still be done to improve the model)? – SKB Jan 22 '20 at 04:31
Exactly. Linear regression is simple enough to allow for that. So when you run a linear regression in R, you don't have to worry at all about loss functions and optimization. – Leevo Jan 22 '20 at 08:48