5

I was taking a tutorial on building a recommendation system, and there I read that Nearest Neighbor is different from the KNN classifier. Could anyone explain what Nearest Neighbor is and how it differs from KNN?

Hamza
  • Can you link to that tutorial so that we can understand the context? – noe Apr 26 '21 at 10:49
  • I saw it in a YouTube tutorial, and there he only says that Nearest Neighbor is unsupervised learning – Hamza Apr 26 '21 at 10:57
  • Presumably, the difference is `K-1` neighbors. – Ray Apr 26 '21 at 20:43
  • Nearest neighbor usually works by creating vectors for objects and then comparing them. I don't know how KNN works under the hood, but if it works the same way, then they are the same; KNN just looks for more than one neighbor. – RFAI Nov 02 '22 at 00:07

2 Answers

7

Not really sure about it, but KNN means K-Nearest Neighbors to me, so both are the same. The K just corresponds to the number of nearest neighbors you take into account when classifying.

Maybe what you call Nearest Neighbor is a KNN with K = 1.
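
For instance, here is a minimal sketch with scikit-learn's `KNeighborsClassifier` showing that "Nearest Neighbor" classification is just the K = 1 special case (the toy data is made up purely for illustration):

```python
# Minimal sketch: 1-NN is just KNN with n_neighbors=1.
# The toy data below is invented purely for illustration.
from sklearn.neighbors import KNeighborsClassifier

X_train = [[0.0], [1.0], [2.0], [3.0]]
y_train = [0, 0, 1, 1]

# "Nearest Neighbor" classifier: the single closest training point decides.
nn = KNeighborsClassifier(n_neighbors=1).fit(X_train, y_train)

# KNN with K = 3: the three closest training points vote.
knn = KNeighborsClassifier(n_neighbors=3).fit(X_train, y_train)

print(nn.predict([[1.4]]))   # -> [0], the label of the nearest point (1.0)
print(knn.predict([[1.4]]))  # -> [0], majority vote over labels [0, 1, 0]
```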

Ubikuity
  • That's it. I believe Nearest Neighbors is the general method name (the idea of looking at your neighbors to see what you are), while KNN stands for the specific algorithm: if you run a 10-Nearest-Neighbor model, the algorithm will check the 10 closest known neighbors and take a mean / majority vote (depending on context). – Adept Apr 26 '21 at 12:40
4

scikit-learn says in its documentation:

sklearn.neighbors provides functionality for unsupervised and supervised neighbors-based learning methods. Unsupervised nearest neighbors is the foundation of many other learning methods, notably manifold learning and spectral clustering. Supervised neighbors-based learning comes in two flavors: classification for data with discrete labels, and regression for data with continuous labels.

The principle behind nearest neighbor methods is to find a predefined number of training samples closest in distance to the new point, and predict the label from these. The number of samples can be a user-defined constant (k-nearest neighbor learning), or vary based on the local density of points (radius-based neighbor learning). The distance can, in general, be any metric measure: standard Euclidean distance is the most common choice.
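
To make that distinction concrete, here is a minimal sketch (toy data, invented for illustration) contrasting the unsupervised `NearestNeighbors` class, which only finds the closest points, with the supervised `KNeighborsClassifier`, which also predicts a label from them:

```python
# Toy data, made up purely for illustration.
import numpy as np
from sklearn.neighbors import NearestNeighbors, KNeighborsClassifier

X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([0, 0, 1, 1])

# Unsupervised: no labels involved, just "which points are closest?"
nn = NearestNeighbors(n_neighbors=2).fit(X)
distances, indices = nn.kneighbors([[1.4]])
print(indices)  # -> [[1 2]], indices of the 2 closest training points

# Supervised: the same neighbor search, plus a majority vote over labels.
clf = KNeighborsClassifier(n_neighbors=3).fit(X, y)
print(clf.predict([[1.4]]))  # -> [0], vote among the 3 nearest labels [0, 1, 0]
```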

You can read more about it here: https://scikit-learn.org/stable/modules/neighbors.html