
If I understand correctly, stacking trains a set of "level 1" models with cross-validation to create out-of-fold predictions, and then refits those models on the full training data. The out-of-fold predictions are used to train the meta-learner.
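To make sure I have the procedure right, here is a minimal sketch of what I mean (model choices and the 5-fold setup are just illustrative assumptions):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import KFold
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=200, random_state=0)

base_models = [LogisticRegression(max_iter=1000),
               DecisionTreeClassifier(random_state=0)]

# out-of-fold predictions: each row is predicted by models that never saw it
oof = np.zeros((len(X), len(base_models)))
kf = KFold(n_splits=5, shuffle=True, random_state=0)
for tr, va in kf.split(X):
    for j, model in enumerate(base_models):
        model.fit(X[tr], y[tr])
        oof[va, j] = model.predict_proba(X[va])[:, 1]

# refit each level 1 model on the full training data for use at test time
for model in base_models:
    model.fit(X, y)

# meta-learner trained on the out-of-fold predictions
meta = LogisticRegression().fit(oof, y)
```

So every training row contributes to the meta-learner's training set, and the level 1 models still end up fit on all the data.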

Blending trains the level 1 models on one split of the training data, while their predictions on the held-out part are used to train the meta-learner.
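For comparison, a sketch of blending as I understand it (the 70/30 split and the models are again just illustrative assumptions):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=200, random_state=0)

# level 1 models see only the first split; the holdout feeds the meta-learner
X_base, X_hold, y_base, y_hold = train_test_split(
    X, y, test_size=0.3, random_state=0)

base_models = [LogisticRegression(max_iter=1000),
               DecisionTreeClassifier(random_state=0)]

# predictions on the holdout become the meta-learner's training features
hold_preds = np.column_stack([
    m.fit(X_base, y_base).predict_proba(X_hold)[:, 1]
    for m in base_models
])

meta = LogisticRegression().fit(hold_preds, y_hold)
```

Here the level 1 models never see the holdout, and the meta-learner only gets that smaller holdout as its training set.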

This suggests that stacking, compared to blending, gives you better-trained level 1 models (fit on the full data) and a better-trained meta-learner (fit on out-of-fold predictions covering the entire training set rather than a small holdout).

To me this suggests stacking is pretty much always superior to blending (although blending is obviously far less resource-intensive). What am I missing, or otherwise, why do people use blending over stacking?

Koen
