I am working on a Kaggle competition and my single best model has reached a score of 0.121. I'd like to know when it makes sense to start using ensembling/stacking to improve that score.
I have used Lasso and XGBoost. Since they are very different algorithms, their errors should not be perfectly correlated, so stacking them should in theory give a better result than either individual model.
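For concreteness, this is roughly the setup I have in mind (a minimal sketch using scikit-learn's `StackingRegressor`; the synthetic data and hyperparameters are placeholders, not my actual pipeline):

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import StackingRegressor
from sklearn.linear_model import Lasso, RidgeCV
from sklearn.model_selection import KFold
from xgboost import XGBRegressor

# Placeholder data standing in for the real (preprocessed) training set.
X, y = make_regression(n_samples=500, n_features=20, noise=10.0, random_state=0)

# Base learners: a regularized linear model and a tree ensemble,
# chosen because their errors should be only weakly correlated.
base_learners = [
    ("lasso", Lasso(alpha=0.001, max_iter=10000)),
    ("xgb", XGBRegressor(n_estimators=500, learning_rate=0.05, max_depth=3)),
]

# Meta-learner trained on out-of-fold predictions of the base learners.
stack = StackingRegressor(
    estimators=base_learners,
    final_estimator=RidgeCV(),
    cv=KFold(n_splits=5, shuffle=True, random_state=42),
)

stack.fit(X, y)
```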
But how do I identify whether stacking is actually worth it, and whether I have hit a dead end on the accuracy of a single model?
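Concretely, would comparing cross-validated scores of the base models against the stacked model, like in the sketch below (reusing `X`, `y`, `base_learners`, and `stack` from above; the scoring string is a stand-in for the competition metric), be the right way to decide?

```python
from sklearn.model_selection import cross_val_score

def cv_rmse(model, X, y, cv=5):
    """Cross-validated RMSE, so all models are compared on the same folds."""
    scores = cross_val_score(model, X, y, cv=cv,
                             scoring="neg_root_mean_squared_error")
    return -scores.mean(), scores.std()

for name, model in [("lasso", base_learners[0][1]),
                    ("xgb", base_learners[1][1]),
                    ("stack", stack)]:
    mean_rmse, std_rmse = cv_rmse(model, X, y)
    print(f"{name:6s}  RMSE = {mean_rmse:.4f} +/- {std_rmse:.4f}")
```

My assumption is that if the stacked model's CV score is not meaningfully better than the best base model's, the effort is better spent on features or tuning, but I'd like to know if that is the right way to think about it.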