
Which specific performance evaluation metrics are used during training, validation and testing, and why? My thinking is that error metrics (RMSE, MAE, MSE) are used in validation, and that testing should use a wider variety of metrics. I don't think performance is evaluated during training, but I'm not 100% sure.

Specifically, I am trying to decide when to use (i.e. in training, validation or testing) the correlation coefficient, RMSE, MAE and other metrics for numeric data (e.g. Willmott's Index of Agreement, the Nash-Sutcliffe coefficient, etc.).
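
For reference, these are the definitions I have in mind (a plain NumPy sketch of my understanding of the formulas; `y_true` and `y_pred` are placeholder observation/prediction arrays):

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root mean squared error."""
    return np.sqrt(np.mean((y_true - y_pred) ** 2))

def mae(y_true, y_pred):
    """Mean absolute error."""
    return np.mean(np.abs(y_true - y_pred))

def pearson_r(y_true, y_pred):
    """Pearson correlation coefficient between observations and predictions."""
    return np.corrcoef(y_true, y_pred)[0, 1]

def willmott_d(y_true, y_pred):
    """Willmott's Index of Agreement: 1 is perfect agreement, 0 is none."""
    obs_mean = np.mean(y_true)
    return 1 - (np.sum((y_true - y_pred) ** 2)
                / np.sum((np.abs(y_pred - obs_mean) + np.abs(y_true - obs_mean)) ** 2))

def nash_sutcliffe(y_true, y_pred):
    """Nash-Sutcliffe efficiency: 1 is perfect; below 0 is worse than
    simply predicting the observed mean."""
    return 1 - (np.sum((y_true - y_pred) ** 2)
                / np.sum((y_true - np.mean(y_true)) ** 2))
```

As far as I can tell, all of these compare the same pair of observation/prediction arrays, so in principle any of them could be computed at any stage; my question is which ones are conventional where.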

Sorry about this being broad - I have actually been asked to define it generally (i.e. not for a specific dataset). However, the datasets I have been using all contain numeric continuous values in supervised learning situations.

Generally, I am using performance evaluation for environmental data that I am modelling with an ANN. I have continuous features and am predicting a continuous variable.
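
To make the three stages concrete, here is roughly the setup I have in mind (a minimal Keras sketch on synthetic placeholder data; the architecture, split and metric choices are assumptions for illustration, not from a real dataset):

```python
import numpy as np
from tensorflow import keras

# synthetic placeholder data standing in for continuous environmental features
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
y = X @ rng.normal(size=5) + rng.normal(scale=0.1, size=1000)

# simple 60/20/20 train/validation/test split
X_train, X_val, X_test = X[:600], X[600:800], X[800:]
y_train, y_val, y_test = y[:600], y[600:800], y[800:]

model = keras.Sequential([
    keras.Input(shape=(5,)),
    keras.layers.Dense(16, activation="relu"),
    keras.layers.Dense(1),
])

# training: MSE is the loss the optimiser minimises; MAE is only monitored
model.compile(optimizer="adam", loss="mse", metrics=["mae"])

# validation: the same error metrics tracked each epoch, used for early stopping
model.fit(
    X_train, y_train,
    validation_data=(X_val, y_val),
    epochs=100, verbose=0,
    callbacks=[keras.callbacks.EarlyStopping(patience=10, restore_best_weights=True)],
)

# testing: a wider suite of metrics computed once on the held-out set
y_pred = model.predict(X_test).ravel()
test_rmse = np.sqrt(np.mean((y_test - y_pred) ** 2))
test_nse = 1 - np.sum((y_test - y_pred) ** 2) / np.sum((y_test - np.mean(y_test)) ** 2)
print(f"test RMSE = {test_rmse:.4f}, test NSE = {test_nse:.4f}")
```

In this sketch, MSE is the training loss, the validation error drives early stopping, and the wider metric suite only appears at test time - which is the convention I am asking whether to formalise.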

  • It's a very broad question, and the short answer is that it depends on the problem at hand... – Aditya Apr 14 '18 at 13:12
  • Sorry about it being broad - I have actually been asked to define it generally (i.e. not for a specific dataset). But the datasets I have been using all contain numeric continuous values in supervised learning situations. – user9645302 Apr 14 '18 at 13:19
  • Take a look [here](https://datascience.stackexchange.com/a/26855/28175). – Green Falcon Apr 14 '18 at 13:52
  • Thanks for that - sorry, I am specifically trying to decide when to use the correlation coefficient, RMSE, MAE and other metrics for numeric data (e.g. Willmott's Index of Agreement, the Nash-Sutcliffe coefficient, etc.). – user9645302 Apr 14 '18 at 13:57
  • They may vary depending on your task. For regression tasks you can use a symmetric cost function like RSS if errors in either direction are equally important. – Green Falcon Apr 14 '18 at 14:19
  • Thanks - the situations this would probably be applied to in my course involve using an ANN (Artificial Neural Network) to model the data. – user9645302 Apr 14 '18 at 14:21

0 Answers