
I have tried for a while to figure out how to "shut up" LightGBM. In particular, I would like to suppress the output LightGBM prints during training (i.e. feedback on the boosting steps).

My model:

params = {
            'objective': 'regression',
            'learning_rate' :0.9,
            'max_depth' : 1,
            'metric': 'mean_squared_error',
            'seed': 7,
            'boosting_type' : 'gbdt'
        }

gbm = lgb.train(params,
                lgb_train,
                num_boost_round=100000,
                valid_sets=lgb_eval,
                early_stopping_rounds=100)

I tried adding verbose=0 as suggested in the docs, but this did not work: https://github.com/microsoft/LightGBM/blob/master/docs/Parameters.rst

Does anyone know how to suppress LightGBM output during training?

Peter
    Perhaps it's `verbose_eval` you're looking for? https://lightgbm.readthedocs.io/en/latest/Python-API.html – bradS Jun 17 '19 at 15:12
  • Yep, got rid of most feedback! Thanks! Any idea how I can also suppress warnings, because I still receive a lot of warnings as feedback. – Peter Jun 17 '19 at 15:17
  • What kind of errors are you getting? – bradS Jun 17 '19 at 20:57
  • It is "No further splits with positive gain", likely caused by min_data_in_leaf. However, I would like to keep this configuration; my current application is a parameter search. – Peter Jun 17 '19 at 21:08
  • This is the latest update on the issue that I can see: https://github.com/Microsoft/LightGBM/issues/1157#issuecomment-417373690 – bradS Jun 17 '19 at 21:13
  • Thanks! I'll check it soon. If you feel like it, post your combined suggestions as an answer so others can follow it and I can vote it up. Cheers! – Peter Jun 17 '19 at 21:21

5 Answers


Solution for the sklearn API (checked on v3.3.0):

import lightgbm as lgb


param = {'objective': 'binary', "is_unbalance": 'true',
         'metric': 'average_precision'}
model_skl = lgb.sklearn.LGBMClassifier(**param)

# early stopping and verbosity
# it should be 0 or False, not -1/-100/etc
callbacks = [lgb.early_stopping(10, verbose=0), lgb.log_evaluation(period=0)]

# train
model_skl.fit(x_train, y_train,
              eval_set=[(x_train, y_train), (x_val, y_val)],
              eval_names=['train', 'valid'],
              eval_metric='average_precision',
              callbacks=callbacks)
banderlog013

As @Peter has suggested, setting verbose_eval = -1 suppresses most of LightGBM's output (link: here).

However, LightGBM may still emit other warnings, e.g. No further splits with positive gain. These can be suppressed as follows (source: here):

lgb_train = lgb.Dataset(X_train, y_train, params={'verbose': -1}, free_raw_data=False)
lgb_eval = lgb.Dataset(X_test, y_test, params={'verbose': -1}, free_raw_data=False)
gbm = lgb.train({'verbose': -1}, lgb_train, valid_sets=lgb_eval, verbose_eval=False)
bradS

To suppress (most) output from LightGBM, the following parameters can be set.

Suppress warnings: 'verbose': -1 must be specified in params={}.

Suppress output of training iterations: verbose_eval=False must be passed to the lgb.train() call.

Minimal example:

params = {
            'objective': 'regression',
            'learning_rate' : 0.9, 
            'max_depth' : 1, 
            'metric': 'mean_squared_error',
            'seed': 7,
            'verbose': -1,
            'boosting_type' : 'gbdt'
        }

gbm = lgb.train(params,
                lgb_train,
                num_boost_round=100000,
                valid_sets=lgb_eval,
                verbose_eval=False,
                early_stopping_rounds=100)
Peter

Follow these points:

  1. Use verbose=False in the fit method.
  2. Use verbose=-100 when you instantiate the classifier.
  3. Keep silent=True (the default).
Ethan

I read all the answers and issues and tried all of these approaches, yet LightGBM still outputs some info (which drives me crazy). If you want to completely suppress any output during training, try this:

import contextlib
import os

# redirect everything printed to stdout into /dev/null
with open(os.devnull, "w") as f, contextlib.redirect_stdout(f):
    gbm = lgb.cv(param, lgb_dataset)
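The same trick works for any chatty call; here is a stdlib-only illustration with a hypothetical noisy_train() standing in for the LightGBM call:

```python
import contextlib
import os

def noisy_train():
    # hypothetical stand-in for a chatty training call
    print("[Info] boosting round 1")
    print("[Warning] No further splits with positive gain")
    return "model"

# everything printed inside the block is discarded
with open(os.devnull, "w") as f, contextlib.redirect_stdout(f):
    model = noisy_train()

print(model)  # the return value is unaffected
```

One caveat: contextlib.redirect_stdout only intercepts writes that go through Python's sys.stdout. Output written directly to the underlying file descriptor by native C/C++ code may need a lower-level redirect (e.g. os.dup2) instead.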