
For multiclass classification, you would normally choose a confusion matrix to plot the errors of the predicted classes against the target classes.

What is the best way to visualize the errors of multilabel classifiers? Since multiple classes are predicted at once, a one-to-one mapping of prediction to target is not always possible, so confusion matrices are generally not suitable.

My first idea is to plot a bar chart where each class has one bar for missed predictions and one for false predictions, as sketched below. But is there any standard for visualizing the errors that conveys more information, the way a confusion matrix does for multiclass problems?
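A minimal sketch of that bar chart idea, assuming `y_true` and `y_pred` are binary indicator matrices; the toy data is made up purely for illustration, and scikit-learn's `multilabel_confusion_matrix` is used to derive the per-label counts:

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.metrics import multilabel_confusion_matrix

# Hypothetical toy data: 4 samples, 3 labels, binary indicator format.
y_true = np.array([[1, 0, 1],
                   [0, 1, 0],
                   [1, 1, 0],
                   [0, 0, 1]])
y_pred = np.array([[1, 0, 0],
                   [0, 1, 1],
                   [1, 0, 0],
                   [0, 0, 1]])

# One 2x2 matrix per label, laid out as [[TN, FP], [FN, TP]].
mcm = multilabel_confusion_matrix(y_true, y_pred)
fn = mcm[:, 1, 0]  # missed predictions (false negatives) per label
fp = mcm[:, 0, 1]  # false predictions (false positives) per label

x = np.arange(len(fn))
plt.bar(x - 0.2, fn, width=0.4, label="missed (FN)")
plt.bar(x + 0.2, fp, width=0.4, label="false (FP)")
plt.xticks(x, [f"label {i}" for i in x])
plt.ylabel("count")
plt.legend()
plt.show()
```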

raspi
2 Answers


There are two common aggregation schemes called macro-averaging and micro-averaging.

Macro-averaging computes the metric (precision, recall, F1, etc.) from the TPs, TNs, FPs, and FNs of each label separately and then averages the per-label scores, so every label counts equally. You should use macro-averaging for evaluating overall performance across labels. You should not use it to make decisions about specific labels, since an average can hide poor performance on individual labels.

See details of micro-averaging and more evaluation techniques in the link below.
https://towardsdatascience.com/journey-to-the-center-of-multi-label-classification-384c40229bff#6aaa
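For instance, with scikit-learn the two schemes are just different values of the `average` parameter; a minimal sketch on made-up indicator data:

```python
import numpy as np
from sklearn.metrics import f1_score

# Hypothetical toy data in binary indicator format (one column per label).
y_true = np.array([[1, 0, 1],
                   [0, 1, 0],
                   [1, 1, 1]])
y_pred = np.array([[1, 0, 0],
                   [0, 1, 1],
                   [1, 0, 1]])

# Macro: compute F1 per label, then take the unweighted mean of the scores.
print("macro F1:", f1_score(y_true, y_pred, average="macro"))

# Micro: pool TPs/FPs/FNs across all labels, then compute a single F1.
print("micro F1:", f1_score(y_true, y_pred, average="micro"))
```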

Ethan Yun
  • Yes, that should give some impression of the classifier's performance. To extend this a little further, one can also add a few more stats to the matrix, like accuracy, false discovery rate, and so on: https://en.wikipedia.org/wiki/F1_score#Diagnostic_testing – raspi Oct 08 '18 at 08:40

I found scikit-learn's classification_report to be extremely helpful in understanding how my model is doing for each label. It generates a report detailing the precision, recall, and F1-score for each label.
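A minimal sketch on made-up multilabel indicator data; the label names are placeholders:

```python
import numpy as np
from sklearn.metrics import classification_report

# Hypothetical toy data in binary indicator format (one column per label).
y_true = np.array([[1, 0, 1],
                   [0, 1, 0],
                   [1, 1, 1]])
y_pred = np.array([[1, 0, 0],
                   [0, 1, 1],
                   [1, 0, 1]])

# Prints precision, recall, F1-score, and support per label,
# plus micro/macro/weighted/samples averages for multilabel input.
print(classification_report(y_true, y_pred,
                            target_names=["label_a", "label_b", "label_c"]))
```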