My goal is to use a pre-trained VGG16, excluding the top layer, to compute feature vectors. I want to compute an embedding per image, one at a time (no training involved), rather than feeding batches to the network, since the network is only used to compute embeddings, not for classification. In batch training I understand the importance of batch normalization, but for a single image should I normalize it pixel-wise? All I can think of is that it might reduce the influence of illumination in an image. Am I missing something?
Viewed 1,048 times
1 Answer
It all depends on how the original pretrained model was trained. If it was trained with normalized data, you should also normalize your data before giving it to the model. Otherwise, the input data distribution will not match what the network was trained with, and then you probably won't obtain good results.
VGG16 pretrained weights expect normalized data as input, so the answer is yes, you should normalize the data (subtracting the mean and dividing by the standard deviation).
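As a concrete illustration: the original VGG16 weights were trained Caffe-style, where preprocessing converts RGB to BGR and subtracts the per-channel ImageNet mean (in practice without dividing by the standard deviation, as the follow-up comment on this answer points out). A minimal NumPy sketch of that preprocessing for a single image, with the function name and exact workflow being illustrative:

```python
import numpy as np

# Per-channel ImageNet means used by the original (Caffe-trained) VGG16
# weights, in BGR order; these values are widely documented for these weights.
VGG_BGR_MEAN = np.array([103.939, 116.779, 123.68], dtype=np.float32)

def preprocess_vgg16(image_rgb):
    """Preprocess a single H x W x 3 RGB uint8 image for VGG16.

    Mirrors the Caffe-style preprocessing of Keras' vgg16.preprocess_input:
    RGB -> BGR, then subtract the per-channel ImageNet mean.
    Note: no division by the standard deviation.
    """
    x = image_rgb.astype(np.float32)
    x = x[..., ::-1]             # RGB -> BGR
    x -= VGG_BGR_MEAN            # per-channel mean subtraction
    return x[np.newaxis, ...]    # add batch dimension: (1, H, W, 3)

# A single 224x224 image (all zeros here, just to show the shapes)
img = np.zeros((224, 224, 3), dtype=np.uint8)
batch = preprocess_vgg16(img)    # shape (1, 224, 224, 3), ready for the model
```

The resulting array can be passed to a headless VGG16 (`include_top=False`) to obtain the feature vector for that one image.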
If your image domain is similar to ImageNet, you may reuse the ImageNet mean and standard deviation. If your images come from a very different domain, you should compute your own statistics.
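If you do need domain-specific statistics, they can be computed once over your own dataset and then applied to every image. A hedged sketch, where the helper name and the `(N, H, W, 3)` array layout are assumptions:

```python
import numpy as np

def channel_stats(images):
    """Per-channel mean and std over a stack of images.

    `images`: float array of shape (N, H, W, 3).
    Returns (mean, std), each of shape (3,).
    """
    mean = images.mean(axis=(0, 1, 2))
    std = images.std(axis=(0, 1, 2))
    return mean, std

# Example with random stand-in data for a small dataset
rng = np.random.default_rng(0)
imgs = rng.random((10, 32, 32, 3)).astype(np.float32)

mean, std = channel_stats(imgs)
normalized = (imgs - mean) / std   # per-channel zero mean, unit variance
```

After this, each channel of `normalized` has (approximately) zero mean and unit variance, matching what the network would see if it had been trained with normalized inputs from your domain.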
noe
- Thanks for the response! – offset-null1 Sep 25 '20 at 12:16
- I'm sorry to undo your answer selection, but I found that the images were indeed mean-subtracted, yet they were not divided by the standard deviation. It might mislead future readers. Thank you for your response. – offset-null1 Sep 26 '20 at 16:15