
For a university project I have to calculate precision and recall to measure the quality of a classification output (with sklearn). Say these are my results:

y_true = [0, 1, 2, 1, 1]
y_pred = [0, 2, 1, 2, 1]

confusion matrix:
[1 0 0]
[0 1 2]
[0 1 0]

I have read about them and the definitions make sense to me in a binary setting, but with 3 labels I find it hard to interpret precision/recall.
If I use sklearn.metrics.precision_score / recall_score, I get 0.4 for both (with average='micro').

For the precision this makes some sense, because 2 out of 5 samples are correctly classified. But I am having trouble interpreting the 0.4 result for recall.
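
For reference, the calls I am using look roughly like this (variable names are just for illustration):

from sklearn.metrics import precision_score, recall_score

y_true = [0, 1, 2, 1, 1]
y_pred = [0, 2, 1, 2, 1]

# both print 0.4 with micro averaging
print(precision_score(y_true, y_pred, average='micro'))
print(recall_score(y_true, y_pred, average='micro'))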

solaire

3 Answers


sklearn.metrics.classification_report provides precision and recall for all classes, along with the F-score and support. It might prove helpful in your case of 3 classes.
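
A minimal sketch on the numbers from the question (the exact output layout may differ slightly between sklearn versions):

from sklearn.metrics import classification_report

y_true = [0, 1, 2, 1, 1]
y_pred = [0, 2, 1, 2, 1]

# prints per-class precision, recall, f1-score and support
print(classification_report(y_true, y_pred))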

aathiraks

from sklearn.metrics import recall_score

If you then inspect recall_score.__doc__ (or directly read the docs here) you'll see that recall is

The recall is the ratio tp / (tp + fn) where tp is the number of true positives and fn the number of false negatives

If you go down to where they define micro, it says

'micro': Calculate metrics globally by counting the total true positives, false negatives and false positives

Here, the total number of true positives is $2$ (the sum of the diagonal terms, i.e. the trace of the confusion matrix). The total number of false negatives is $3$ (the sum of the off-diagonal terms, counted row by row); counting the same off-diagonal terms column by column, the total number of false positives is also $3$.

As $tp/(tp+fn) = 2/(2+3) = .4$, the recall (using the micro argument for average) is indeed $.4$.
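
If you want to verify this directly from the confusion matrix, a minimal sketch (numpy and the names tp and fn here are just for illustration):

import numpy as np
from sklearn.metrics import confusion_matrix, recall_score

y_true = [0, 1, 2, 1, 1]
y_pred = [0, 2, 1, 2, 1]

cm = confusion_matrix(y_true, y_pred)
tp = np.trace(cm)       # 2: the diagonal terms
fn = cm.sum() - tp      # 3: the off-diagonal terms
print(tp / (tp + fn))                                 # 0.4
print(recall_score(y_true, y_pred, average='micro'))  # 0.4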

Note that, using micro, precision and recall are always the same: every misclassified sample counts once as a false positive (for the predicted class) and once as a false negative (for the true class), so both denominators equal the total number of samples. The following loop, in fact, never prints anything:

from numpy import random
from sklearn.metrics import recall_score, precision_score

# with average='micro', precision and recall never differ
for i in range(100):
    y_true = random.randint(0, 3, 5)  # 5 random labels from {0, 1, 2}
    y_pred = random.randint(0, 3, 5)
    if recall_score(y_true, y_pred, average='micro') != precision_score(y_true, y_pred, average='micro'):
        print(i)
ignoring_gravity
  • Thanks. It seems to me that at the 'micro' level, precision and recall lose their original meaning/intuition from the binary case and both reduce to the ratio "correct classifications / all classifications". Which probably makes sense, because at the micro level I am not interested in comparing the performance of individual labels, so this is the only interesting metric left. – solaire May 23 '18 at 10:16
  • Happy to help. Remember that you can accept an answer if you found it useful ;) – ignoring_gravity May 23 '18 at 10:27

You can also use the PyCM library for multi-class confusion matrix analysis.
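
A minimal sketch of how the matrix and statistics below can be produced, assuming PyCM is installed (e.g. via pip install pycm):

from pycm import ConfusionMatrix

y_true = [0, 1, 2, 1, 1]
y_pred = [0, 2, 1, 2, 1]

# build the multi-class confusion matrix and print it
# together with the overall and per-class statistics
cm = ConfusionMatrix(actual_vector=y_true, predict_vector=y_pred)
print(cm)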

Your problem:

>>> print(cm)
Predict          0        1        2        
Actual
0                1        0        0        
1                0        1        2        
2                0        1        0        




Overall Statistics : 

95% CI                                                           (-0.02941,0.82941)
Bennett_S                                                        0.1
Chi-Squared                                                      6.66667
Chi-Squared DF                                                   4
Conditional Entropy                                              0.55098
Cramer_V                                                         0.8165
Cross Entropy                                                    1.52193
Gwet_AC1                                                         0.13043
Joint Entropy                                                    1.92193
KL Divergence                                                    0.15098
Kappa                                                            0.0625
Kappa 95% CI                                                     (-0.60846,0.73346)
Kappa No Prevalence                                              -0.2
Kappa Standard Error                                             0.34233
Kappa Unbiased                                                   0.03226
Lambda A                                                         0.5
Lambda B                                                         0.66667
Mutual Information                                               0.97095
Overall_ACC                                                      0.4
Overall_RACC                                                     0.36
Overall_RACCU                                                    0.38
PPV_Macro                                                        0.5
PPV_Micro                                                        0.4
Phi-Squared                                                      1.33333
Reference Entropy                                                1.37095
Response Entropy                                                 1.52193
Scott_PI                                                         0.03226
Standard Error                                                   0.21909
Strength_Of_Agreement(Altman)                                    Poor
Strength_Of_Agreement(Cicchetti)                                 Poor
Strength_Of_Agreement(Fleiss)                                    Poor
Strength_Of_Agreement(Landis and Koch)                           Slight
TPR_Macro                                                        0.44444
TPR_Micro                                                        0.4

Class Statistics :

Classes                                                          0                       1                       2                       
ACC(Accuracy)                                                    1.0                     0.4                     0.4                     
BM(Informedness or bookmaker informedness)                       1.0                     -0.16667                -0.5                    
DOR(Diagnostic odds ratio)                                       None                    0.5                     0.0                     
ERR(Error rate)                                                  0.0                     0.6                     0.6                     
F0.5(F0.5 score)                                                 1.0                     0.45455                 0.0                     
F1(F1 score - harmonic mean of precision and sensitivity)        1.0                     0.4                     0.0                     
F2(F2 score)                                                     1.0                     0.35714                 0.0                     
FDR(False discovery rate)                                        0.0                     0.5                     1.0                     
FN(False negative/miss/type 2 error)                             0                       2                       1                       
FNR(Miss rate or false negative rate)                            0.0                     0.66667                 1.0                     
FOR(False omission rate)                                         0.0                     0.66667                 0.33333                 
FP(False positive/type 1 error/false alarm)                      0                       1                       2                       
FPR(Fall-out or false positive rate)                             0.0                     0.5                     0.5                     
G(G-measure geometric mean of precision and sensitivity)         1.0                     0.40825                 0.0                     
LR+(Positive likelihood ratio)                                   None                    0.66667                 0.0                     
LR-(Negative likelihood ratio)                                   0.0                     1.33333                 2.0                     
MCC(Matthews correlation coefficient)                            1.0                     -0.16667                -0.40825                
MK(Markedness)                                                   1.0                     -0.16667                -0.33333                
N(Condition negative)                                            4                       2                       4                       
NPV(Negative predictive value)                                   1.0                     0.33333                 0.66667                 
P(Condition positive)                                            1                       3                       1                       
POP(Population)                                                  5                       5                       5                       
PPV(Precision or positive predictive value)                      1.0                     0.5                     0.0                     
PRE(Prevalence)                                                  0.2                     0.6                     0.2                     
RACC(Random accuracy)                                            0.04                    0.24                    0.08                    
RACCU(Random accuracy unbiased)                                  0.04                    0.25                    0.09                    
TN(True negative/correct rejection)                              4                       1                       2                       
TNR(Specificity or true negative rate)                           1.0                     0.5                     0.5                     
TON(Test outcome negative)                                       4                       3                       3                       
TOP(Test outcome positive)                                       1                       2                       2                       
TP(True positive/hit)                                            1                       1                       0                       
TPR(Sensitivity, recall, hit rate, or true positive rate)        1.0                     0.33333                 0.0