Number of labels: 1000; dataset size: 200,000 images. The final probability for each of the 1000 labels falls in the narrow range 0.30 to 0.34. I was expecting large variation in the probabilities. Can someone tell me what I am doing wrong? I am following this tutorial.
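For context, a multi-label setup like the one described (which I'm assuming the tutorial uses, since it isn't linked here) applies an independent sigmoid to each label's logit, so the per-label probabilities need not sum to 1. A minimal NumPy sketch of that output stage, with hypothetical random logits standing in for the network's real outputs:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical logits: 4 images, one logit per each of 1000 labels.
logits = rng.normal(size=(4, 1000))

# Independent sigmoid per label -- each value is the probability of
# that single label being present, unlike softmax there is no
# competition between labels and rows do not sum to 1.
probs = 1.0 / (1.0 + np.exp(-logits))

print(probs.shape)  # (4, 1000)
```

If nearly all 1000 of these probabilities sit in a band like 0.30–0.34 regardless of the image, the network has likely collapsed to predicting roughly the same value for every class, which is the symptom the question describes.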
Viewed 785 times
- Could you elaborate on what you mean by score here? – Nischal Hp Jan 03 '18 at 06:09
- By score I mean probability – Ravikrn Jan 03 '18 at 08:11
- Probability of what? The highest probability for the corresponding class? The probability for one specific class? – Jan van der Vegt Jan 03 '18 at 08:44
- Probability of each class – Ravikrn Jan 03 '18 at 11:59
1 Answer
In my experience, the example code works well for a small number of classes (<200). When moving to more classes, the imbalanced data makes the network converge to the same numbers for every label. The data is imbalanced because each output is now a binary classifier on its own, where positives are rare for every label; this doesn't happen with softmax. One way to mitigate the problem is to use weighted_cross_entropy_with_logits and set pos_weight to a number greater than 1 (10 works). But I still don't get very good results.
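To make the suggestion concrete, here is a NumPy reference implementation of the formula that tf.nn.weighted_cross_entropy_with_logits computes, so you can see exactly what pos_weight does: it multiplies the loss on positive labels, pushing the optimizer to care more about the rare positives. The example values below are made up for illustration.

```python
import numpy as np

def weighted_cross_entropy_with_logits(labels, logits, pos_weight):
    """NumPy version of the TF loss:
    loss = pos_weight * labels * -log(sigmoid(logits))
         + (1 - labels) * -log(1 - sigmoid(logits))
    """
    # Numerically stable log(sigmoid(x)) and log(1 - sigmoid(x)).
    log_sig = -np.logaddexp(0.0, -logits)
    log_one_minus_sig = -np.logaddexp(0.0, logits)
    return -(pos_weight * labels * log_sig + (1.0 - labels) * log_one_minus_sig)

labels = np.array([1.0, 0.0, 1.0, 0.0])   # made-up multi-label targets
logits = np.array([0.5, -0.5, -2.0, 2.0])  # made-up network outputs
print(weighted_cross_entropy_with_logits(labels, logits, pos_weight=10.0))
```

With pos_weight=1 this reduces to the ordinary sigmoid cross-entropy; with pos_weight=10, a missed positive costs ten times as much as before, which counteracts the fact that each of the 1000 binary classifiers sees far more negatives than positives.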
jorgemf