
I have a multi-class classification problem that is imbalanced. The task is about animal classification.

Since it's imbalanced, I am using the macro-F1 metric, and my current result is 51.59.

The issue I am facing is that this task will be treated as a recommendation task, where Top-N accuracy is needed. When I compute the Top-N accuracy, I get the following: Top-1: 88.58, Top-2: 94.86, Top-3: 96.48.

As you can see, the Top-N accuracy is heavily biased toward the majority classes, and the gap between macro-F1 and Top-1 is large.
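As a toy illustration of this effect (not the actual data or model; the class distribution, scores, and numbers below are made up), a classifier that always ranks the majority class first gets high Top-N accuracy while its macro-F1 stays low:

```python
import numpy as np
from sklearn.metrics import f1_score, top_k_accuracy_score

rng = np.random.default_rng(0)
n_classes = 5

# Heavily imbalanced ground truth: class 0 dominates.
y_true = rng.choice(n_classes, size=2000, p=[0.7, 0.15, 0.08, 0.05, 0.02])

# A "model" whose scores always put the majority class at rank 1;
# the remaining ranks are random noise.
y_score = rng.random((len(y_true), n_classes)) * 0.1
y_score[:, 0] += 1.0
y_pred = y_score.argmax(axis=1)

print("macro-F1:", f1_score(y_true, y_pred, average="macro", zero_division=0))
for k in (1, 2, 3):
    acc = top_k_accuracy_score(y_true, y_score, k=k, labels=np.arange(n_classes))
    print(f"Top-{k} accuracy:", acc)
```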

My question is: how can I account for the class imbalance when I calculate Top-N accuracy?

Minions

1 Answer


Sounds like your minority classes are being poorly predicted, which drags down your macro-F1 score (see this answer for more info).

From sklearn's top_k_accuracy_score documentation, you can pass a list of per-sample weights to 'rebalance' the score:

sample_weight: array-like of shape (n_samples,), default=None. Sample weights. If None, all samples are given the same weight.
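A minimal sketch of that idea, assuming the usual setup of true labels plus an (n_samples, n_classes) score matrix: weight every sample by the inverse frequency of its class, so each class contributes equally to the top-k score. The toy data below is only there to make the snippet runnable.

```python
import numpy as np
from sklearn.metrics import top_k_accuracy_score

rng = np.random.default_rng(42)
n_classes = 5

# Stand-ins for the real labels and predicted class scores.
y_true = rng.choice(n_classes, size=2000, p=[0.7, 0.15, 0.08, 0.05, 0.02])
y_score = rng.random((len(y_true), n_classes))

# Weight each sample by 1 / (count of its class); the weighted top-k accuracy
# then behaves like a class-balanced (macro) top-k accuracy.
class_counts = np.bincount(y_true, minlength=n_classes)
sample_weight = 1.0 / class_counts[y_true]

for k in (1, 2, 3):
    plain = top_k_accuracy_score(y_true, y_score, k=k,
                                 labels=np.arange(n_classes))
    balanced = top_k_accuracy_score(y_true, y_score, k=k,
                                    labels=np.arange(n_classes),
                                    sample_weight=sample_weight)
    print(f"Top-{k}: unweighted={plain:.4f}  class-balanced={balanced:.4f}")
```

With weights like these, the majority class can no longer dominate the score: a model that only ranks the frequent classes well will see its weighted Top-N accuracy drop, much like macro-F1 does.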

Adrian B