
I used to believe that in k-way n-shot few-shot learning, k and n (the number of classes and the number of samples per class, respectively) must be the same in the training and test phases. But now I have come across a Git repository for few-shot learning that uses different numbers in the training and test phases:

import argparse

parser = argparse.ArgumentParser()
parser.add_argument('--dataset')
parser.add_argument('--distance', default='l2')
parser.add_argument('--n-train', default=1, type=int)   # shots (support samples) per class during training
parser.add_argument('--n-test', default=1, type=int)    # shots (support samples) per class during evaluation
parser.add_argument('--k-train', default=60, type=int)  # number of classes per training episode
parser.add_argument('--k-test', default=5, type=int)    # number of classes per evaluation episode
parser.add_argument('--q-train', default=5, type=int)   # query samples per class during training
parser.add_argument('--q-test', default=1, type=int)    # query samples per class during evaluation

Are we allowed to do so?


1 Answer


In few-shot learning, the number of classes does not have to be the same in the training and inference stages; typically, the number of classes used in training is larger than the number used at inference. The crucial requirement is that the classes seen at inference must not appear during training. In other words, the intersection of the training classes and the inference classes must be the empty set.
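
For illustration, here is a minimal sketch (not taken from the repository) of how episodes might be sampled with different k, n, and q at train and test time; the dataset and numbers here are synthetic, but the class splits are kept disjoint as described above.

import random

random.seed(0)

# Split class IDs so that train and test classes are disjoint.
all_classes = list(range(100))
random.shuffle(all_classes)
train_classes, test_classes = all_classes[:80], all_classes[80:]
assert set(train_classes).isdisjoint(test_classes)

# Synthetic dataset: each class has 20 "examples" (here just (class, index) tuples).
data = {c: [(c, i) for i in range(20)] for c in all_classes}

def sample_episode(classes, k, n, q):
    """Sample one k-way episode: n support and q query examples per class."""
    episode_classes = random.sample(classes, k)
    support, query = [], []
    for c in episode_classes:
        examples = random.sample(data[c], n + q)
        support.extend(examples[:n])
        query.extend(examples[n:])
    return support, query

# Training episode: 60-way 1-shot with 5 queries per class (--k-train, --n-train, --q-train).
train_support, train_query = sample_episode(train_classes, k=60, n=1, q=5)

# Evaluation episode: 5-way 1-shot over classes never seen during training (--k-test, --n-test, --q-test).
test_support, test_query = sample_episode(test_classes, k=5, n=1, q=1)

print(len(train_support), len(train_query))  # 60 300
print(len(test_support), len(test_query))    # 5 5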
