
Assume there are 100 images; 30 of them depict a cat and the rest do not. A machine learning model correctly predicts the presence of a cat in 25 of the 30 cat images. It also correctly predicts the absence of a cat in 50 of the 70 non-cat images.

In this case, what are the true positive, false positive, true negative and false negative counts?

Nikos H.
  • There are some good answers in the following post: https://datascience.stackexchange.com/questions/47725/confusion-matrix-logic/47727#47727 – Mark.F Jun 06 '19 at 05:57

1 Answer


Assuming cat is the positive class:

Confusion matrix (rows = actual class, columns = predicted class):

                 Predicted not-cat   Predicted cat
Actual not-cat          TN                 FP
Actual cat              FN                 TP

  • True Positive (TP): images that are cats and are predicted as cat, i.e. 25
  • True Negative (TN): images that are not cats and are predicted as not-cat,
    i.e. 50
  • False Positive (FP): images that are not cats but are predicted as cat,
    i.e. 20
  • False Negative (FN): images that are cats but are predicted as not-cat,
    i.e. 5
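
As a sanity check, these four counts can be reproduced with a small Python sketch (this assumes scikit-learn is available; the label arrays below are hypothetical, constructed only to match the counts in the question):

    import numpy as np
    from sklearn.metrics import confusion_matrix

    # 30 cat images (label 1) followed by 70 non-cat images (label 0)
    y_true = np.array([1] * 30 + [0] * 70)
    # model output: 25 cats found, 5 cats missed,
    # 50 non-cats correct, 20 non-cats wrongly called cat
    y_pred = np.array([1] * 25 + [0] * 5 + [0] * 50 + [1] * 20)

    # confusion_matrix returns [[TN, FP], [FN, TP]] for labels [0, 1]
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    print(tn, fp, fn, tp)  # 50 20 5 25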

Precision: TP / (TP + FP) = 25 / (25 + 20) ≈ 0.556
Recall: TP / (TP + FN) = 25 / (25 + 5) ≈ 0.833
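
These two values can also be checked with plain arithmetic in Python (a minimal sketch; the variable names are only for illustration):

    tp, fp, fn = 25, 20, 5
    precision = tp / (tp + fp)  # 25 / 45 ≈ 0.556
    recall = tp / (tp + fn)     # 25 / 30 ≈ 0.833
    print(precision, recall)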

vipin bansal