Yes, the classifier will expect the relative class frequencies in operation to be the same as those in the training set. This means that if you over-sample the minority class in the training set, the classifier is likely to over-predict that class in operational use.
To see why, it is best to consider probabilistic classifiers, where the decision is based on the posterior probability of class membership $p(C_i|x)$. This can be written using Bayes' rule as
$p(C_i|x) = \frac{p(x|C_i)p(C_i)}{p(x)}\qquad$ where $\qquad p(x) = \sum_j p(x|C_j)p(C_j)$,
from which we can see that the decision depends on the prior probabilities of the classes, $p(C_i)$. So if the prior probabilities in the training set are different from those in operation, the operational performance of our classifier will be suboptimal, even if it is optimal for the training-set conditions.
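Here is a small numerical illustration of that point (the likelihood values and priors are made up purely for the example): for the same input, the decision flips depending on whether the priors reflect a balanced training set or the true operational frequencies.

```python
import numpy as np

# Hypothetical class-conditional likelihoods p(x|C_i) for a single input x
likelihoods = np.array([0.3, 0.2])           # [minority class, majority class]

def posterior(likelihoods, priors):
    """Bayes' rule: p(C_i|x) = p(x|C_i) p(C_i) / p(x)."""
    unnormalised = likelihoods * priors
    return unnormalised / unnormalised.sum()

balanced_priors = np.array([0.5, 0.5])       # as in an oversampled training set
operational_priors = np.array([0.05, 0.95])  # assumed true class frequencies

print(posterior(likelihoods, balanced_priors))     # ~[0.60, 0.40] -> predicts minority class
print(posterior(likelihoods, operational_priors))  # ~[0.07, 0.93] -> predicts majority class
```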
Some classifiers have a problem learning from imbalanced datasets, so one solution is to oversample the minority class to ameliorate this bias in the classifier. There are two approaches. The first is to oversample by just the right amount to overcome this (usually unknown) bias and no more, but that is very difficult to get right. The other approach is to balance the training set and then post-process the output to compensate for the difference between training-set and operational priors: we take the output of the classifier trained on the oversampled dataset and multiply by the ratio of operational and training-set prior probabilities,
$q_o(C_i|x) \propto p_t(C_i|x) \times \frac{p_o(C_i)}{p_t(C_i)} \propto p_t(x|C_i)p_t(C_i) \times \frac{p_o(C_i)}{p_t(C_i)} = p_t(x|C_i)p_o(C_i).$
Quantities with the $o$ subscript relate to operational conditions and those with the $t$ subscript relate to training-set conditions. I have written this as $q_o(C_i|x)$ because it is an un-normalised probability, but it is straightforward to renormalise by dividing by the sum of $q_o(C_i|x)$ over all classes. For some problems it may be better to use cross-validation to choose the correction factor, rather than the theoretical value used here, as the appropriate factor depends on the bias in the classifier due to the imbalance.
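Here is a minimal sketch of that post-processing step in Python (the prior values are placeholders: in practice $p_t(C_i)$ comes from the resampled training set and $p_o(C_i)$ from the expected operational frequencies, or the factor is chosen by cross-validation as mentioned above):

```python
import numpy as np

def correct_priors(probs, train_priors, op_priors):
    """Rescale posteriors from training-set priors to operational priors.

    probs: array of shape (n_samples, n_classes) output by the classifier
    train_priors, op_priors: arrays of shape (n_classes,)
    """
    q = probs * (op_priors / train_priors)   # un-normalised q_o(C_i|x)
    return q / q.sum(axis=1, keepdims=True)  # renormalise over classes

# Example: classifier trained on a balanced (oversampled) set,
# but the minority class is only 5% of cases in operation.
probs = np.array([[0.7, 0.3],
                  [0.4, 0.6]])
train_priors = np.array([0.5, 0.5])
op_priors = np.array([0.95, 0.05])

print(correct_priors(probs, train_priors, op_priors))
```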
So in short: for imbalanced datasets, use a probabilistic classifier and oversample (or reweight) to get a balanced training set, in order to overcome any bias the classifier may have when learning from imbalanced data. Then post-process the output of the classifier so that it doesn't over-predict the minority class in operation.
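As a rough end-to-end illustration (the data are synthetic and I use scikit-learn's `class_weight="balanced"` as the reweighting step, so the details are illustrative rather than prescriptive), the correction visibly reduces the number of minority-class predictions made on the imbalanced data:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic imbalanced data: ~5% minority (class 1), ~95% majority (class 0)
n = 2000
y = (rng.random(n) < 0.05).astype(int)
X = rng.normal(loc=y[:, None] * 1.5, scale=1.0, size=(n, 2))

# Reweight so the classifier effectively sees balanced priors during training
clf = LogisticRegression(class_weight="balanced").fit(X, y)

# Posteriors under the (effective) balanced training-set priors
probs_t = clf.predict_proba(X)

# Post-process back to the operational priors, as in the equation above
train_priors = np.array([0.5, 0.5])   # implied by class_weight="balanced"
op_priors = np.array([0.95, 0.05])    # expected class frequencies in operation
q = probs_t * (op_priors / train_priors)
probs_o = q / q.sum(axis=1, keepdims=True)

print("minority predictions before correction:", (probs_t[:, 1] > 0.5).sum())
print("minority predictions after correction: ", (probs_o[:, 1] > 0.5).sum())
```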