
From my machine learning class, it seems that L1 norm regularization is the standard way to obtain sparse weights and probably a better fit for many machine learning problems. But in pybrain, L1 norm regularization does not appear to be implemented; only L2 regularization is available, which is quite surprising to me.
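
To illustrate the difference I mean, here is a toy NumPy sketch of my own (not pybrain code; the weight vector and lambda are made up):

    import numpy as np

    # Toy weight vector and regularization strength, purely for illustration
    w = np.array([0.5, -0.2, 0.0, 1.3])
    lam = 0.01

    l2_penalty = 0.5 * lam * np.sum(w ** 2)   # weight-decay style penalty (the kind pybrain offers)
    l1_penalty = lam * np.sum(np.abs(w))      # the penalty I would like to use instead

    # Gradient contributions of each penalty term:
    l2_grad = lam * w            # shrinks every weight proportionally, rarely exactly to zero
    l1_grad = lam * np.sign(w)   # constant pull towards zero, which is what produces sparsity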

The L2 weight decay in pybrain is used as follows:

    trainer = BackpropTrainer(net, ds, weightdecay=0.01)
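
The only workaround I can think of is a rough sketch like the one below: drop weightdecay and apply a soft-thresholding (proximal) step to the weights after each training epoch. This assumes net.params exposes the network's flat parameter vector as a NumPy array; the l1_lambda value and the tiny XOR dataset are placeholders of my own.

    import numpy as np
    from pybrain.tools.shortcuts import buildNetwork
    from pybrain.datasets import SupervisedDataSet
    from pybrain.supervised.trainers import BackpropTrainer

    l1_lambda = 0.001  # placeholder L1 strength

    # Tiny XOR-style dataset, only so the sketch is runnable
    net = buildNetwork(2, 4, 1)
    ds = SupervisedDataSet(2, 1)
    ds.addSample((0, 0), (0,))
    ds.addSample((0, 1), (1,))
    ds.addSample((1, 0), (1,))
    ds.addSample((1, 1), (0,))

    trainer = BackpropTrainer(net, ds)  # no weightdecay here; L1 is applied manually below

    for epoch in range(100):
        trainer.train()  # one backprop pass over the dataset
        # Soft-thresholding (proximal step for the L1 penalty):
        # shrink every weight towards zero and set small ones to exactly zero
        w = net.params
        w[:] = np.sign(w) * np.maximum(np.abs(w) - l1_lambda, 0.0)

This only bolts L1 on from the outside, though, which is why I am asking: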

Is it possible for me to implement L1 norm regularization properly in pybrain, even if I need to modify the source code? Thank you.
