
I understand the concepts of L1 and L2 loss (i.e., L1 loss will force some parameter coefficients to zero, while L2 will only make them approach zero). What do these do when implemented in XGBoost? Does L1 loss prune the tree more significantly than L2 loss?
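For reference, here is a minimal sketch of where these penalties surface in XGBoost's scikit-learn API: reg_alpha sets the L1 term and reg_lambda the L2 term, and in XGBoost both are applied to the leaf weights rather than to feature coefficients. The toy data and parameter values below are illustrative assumptions, not part of the original question.

```python
import numpy as np
import xgboost as xgb

# Illustrative toy data (assumed for this sketch).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = 2.0 * X[:, 0] + rng.normal(size=200)

model = xgb.XGBRegressor(
    n_estimators=50,
    reg_alpha=1.0,   # L1 penalty on leaf weights ("alpha" in the native API)
    reg_lambda=1.0,  # L2 penalty on leaf weights ("lambda" in the native API)
)
model.fit(X, y)
print(model.predict(X[:5]))
```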

Kyle
  • Your question seems to be about L1/L2 _regularization_, not about L1/L2 _loss_ functions. If that's the case, then see https://datascience.stackexchange.com/q/57255/55122 – Ben Reiniger Apr 21 '20 at 17:03
  • @BenReiniger You are correct, thank you for the link! – Kyle Apr 22 '20 at 15:43

0 Answers