
I have 13 models, ranging from simple ones like Seasonal Naïve Average to complex ones like Random Forests. The ensemble weights are calculated by minimizing the error over the validation period with LpMinimize. The constraints restrict each weight to [0, 1] and require the weights to sum to 1. However, I noticed that if I allow weights in [-1, 1], I get better results. Is it okay to use negative weights? What are the implications of doing so? This question was asked previously here, but that thread was of little help.
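
For reference, here is a minimal sketch of the kind of LP I mean (assuming LpMinimize refers to PuLP; the toy validation data, the 13-model count, and the absolute-error objective are illustrative only, not my actual setup):

```python
import numpy as np
import pulp

# Toy validation data (illustrative): y_val holds the actuals,
# preds[i, t] is model i's forecast for validation period t.
rng = np.random.default_rng(0)
y_val = rng.normal(100.0, 10.0, size=50)
preds = y_val + rng.normal(0.0, 5.0, size=(13, 50))

n_models, n_obs = preds.shape

prob = pulp.LpProblem("ensemble_weights", pulp.LpMinimize)

# Weights restricted to [0, 1]; switching lowBound to -1 gives the [-1, 1] variant.
w = [pulp.LpVariable(f"w_{i}", lowBound=0, upBound=1) for i in range(n_models)]

# Auxiliary variables linearise the absolute error of the combined forecast.
e = [pulp.LpVariable(f"e_{t}", lowBound=0) for t in range(n_obs)]

# Objective: total absolute validation error of the weighted combination.
prob += pulp.lpSum(e)

for t in range(n_obs):
    combo = pulp.lpSum(w[i] * float(preds[i, t]) for i in range(n_models))
    prob += combo - float(y_val[t]) <= e[t]
    prob += float(y_val[t]) - combo <= e[t]

# Weights must sum to one.
prob += pulp.lpSum(w) == 1

prob.solve(pulp.PULP_CBC_CMD(msg=False))
weights = [wi.value() for wi in w]
print(weights)
```

The only change between the two setups is the lowBound on the weight variables, which is what lets the [-1, 1] version fit the validation period better.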

  • Maybe. Suppose a model learns a relationship that is the inverse of the real one; then you might get better forecasts by doing the "opposite" of what it predicts. – user2974951 Dec 02 '22 at 11:07
  • @user2974951 How could a model learn an inverse relationship? Do you mean there is a significant difference between the train_set and the validation_set? – Justice_Lords Dec 02 '22 at 11:34
  • @Justice_Lords I think they mean that a model might be so "bad" at making predictions that taking the opposite of what it predicted gives you a better chance of being correct. As an example, take a binary classifier with 40% accuracy: if you reverse the predicted labels, the wrong classifications become the correct ones and you actually have 60% accuracy. – liakoyras Dec 02 '22 at 12:43
  • @liakoyras That sounds counterintuitive, but it does seem like a reasonable explanation! – Justice_Lords Dec 06 '22 at 05:57
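
A quick toy sketch of liakoyras's point in the comments above (the 40% accuracy is simulated, not taken from any real model): flipping the labels of a binary classifier that is wrong more often than right turns its error rate into its accuracy.

```python
import numpy as np

rng = np.random.default_rng(1)
y_true = rng.integers(0, 2, size=10_000)

# Simulate a "bad" binary classifier that is wrong on roughly 60% of cases.
wrong = rng.random(10_000) < 0.6
y_pred = np.where(wrong, 1 - y_true, y_true)

acc = (y_pred == y_true).mean()              # ~0.40
acc_flipped = (1 - y_pred == y_true).mean()  # ~0.60
print(acc, acc_flipped)
```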

0 Answers