
My goal is to use a pre-trained VGG16 to compute feature vectors, excluding the top layer. I want to compute the embedding (no training involved) for one image at a time rather than feeding batches to the network, since the network is only used to compute embeddings, not for classification. In batch training I understand the importance of batch normalization, but for a single image, should I normalize it pixel-wise? All I can think of is that it might be useful to reduce the effect of illumination in an image. Am I missing something?

iftiben10

1 Answer


It all depends on how the original pretrained model was trained. If it was trained with normalized data, you should also normalize your data before giving it to the model. Otherwise, the input data distribution will not match what the network was trained on, and you probably won't obtain good results.

The VGG16 pretrained weights expect normalized data as input, so the answer is yes, you should normalize the data (subtract the per-channel mean and divide by the per-channel standard deviation).
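As a minimal sketch of that normalization for a single image (assuming a NumPy array in RGB order with values scaled to [0, 1], and the ImageNet statistics commonly used by frameworks such as torchvision; check your framework's docs for the exact preprocessing its VGG16 weights expect):

```python
import numpy as np

# Commonly quoted ImageNet per-channel statistics (RGB order,
# for images scaled to [0, 1]).
IMAGENET_MEAN = np.array([0.485, 0.456, 0.406])
IMAGENET_STD = np.array([0.229, 0.224, 0.225])

def normalize_image(img):
    """Normalize a single HxWx3 RGB float image channel-wise,
    as expected by ImageNet-pretrained weights."""
    return (img - IMAGENET_MEAN) / IMAGENET_STD

# A single 224x224 image: no batching is needed for this step.
img = np.random.rand(224, 224, 3)
x = normalize_image(img)
```

Note this is input normalization (a fixed, deterministic transform applied per image), not batch normalization, so it works identically whether you feed one image or a batch.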

If your image domain is similar to ImageNet, you may use the same mean and standard deviation statistics. If your images are from a very different domain, you should compute your own statistics (source).
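If you do need your own statistics, a simple sketch is to pool all pixels from your dataset and compute the per-channel mean and standard deviation (`channel_stats` is a hypothetical helper; images are assumed to be HxWx3 float arrays):

```python
import numpy as np

def channel_stats(images):
    """Compute per-channel mean and std over a collection of
    HxWx3 float images (images may have different sizes)."""
    pixels = np.concatenate([img.reshape(-1, 3) for img in images], axis=0)
    return pixels.mean(axis=0), pixels.std(axis=0)

# Example with random stand-in images.
imgs = [np.random.rand(64, 64, 3) for _ in range(10)]
mean, std = channel_stats(imgs)
```

For datasets too large to hold in memory, the same statistics can be accumulated incrementally (running sums of pixel values and squared values).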

noe