In data analysis, k-Nearest Neighbors (kNN) has a reputation for being easy to try yet surprisingly deep. It has only a handful of parameters and is intuitive to understand, but mastering it in practice involves some unexpected tricks.
1. What is k-Nearest Neighbors?
For a given unknown sample, kNN finds the k nearest points (neighbors) in the training data. It then predicts by a simple rule: majority voting over the neighbors' labels for classification, or the average of their values for regression. You can switch among Euclidean distance, Manhattan distance, cosine similarity, and other metrics depending on the characteristics of the problem.
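As a minimal sketch of this behaviour, the example below uses scikit-learn's KNeighborsClassifier on its bundled Iris dataset; the dataset, k=5, and the Euclidean metric are illustrative assumptions, not choices made in the text above.

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Toy data: the Iris dataset bundled with scikit-learn (an assumption for illustration).
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Scale features first: distance-based methods are sensitive to feature scales.
# n_neighbors is k; metric could also be "manhattan", "cosine", etc.
model = make_pipeline(
    StandardScaler(),
    KNeighborsClassifier(n_neighbors=5, metric="euclidean"),
)
model.fit(X_train, y_train)   # "fitting" essentially just stores the training data
print("test accuracy:", model.score(X_test, y_test))
```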
- Too Small a k
It becomes sensitive to noise and prone to overfitting (e.g., k=1 is the most unstable).
- Too Large a k
It produces overly smooth decision boundaries and risks ignoring fine differences between classes (underfitting).
- Empirical Tuning
The standard approach is to find the optimal k using cross-validation (see the sketch after this list).
- The Cost of Linear Scanning
If you compute the distance to every sample for each query, response time degrades badly once the number of data points reaches the tens of thousands.
- Fast Search Techniques
You can keep the search scalable using a KD-Tree, a Ball-Tree, or approximate nearest-neighbor libraries (Annoy, Faiss), among others.
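The following sketch combines the two ideas above, assuming scikit-learn: GridSearchCV runs cross-validation over a list of candidate k values (the candidates and cv=5 are arbitrary choices for illustration), and algorithm="kd_tree" switches the neighbor search from a full linear scan to a KD-Tree index.

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)

# Candidate values of k to compare by cross-validation (arbitrary illustrative list).
param_grid = {"n_neighbors": [1, 3, 5, 7, 9, 11, 15, 21]}

# algorithm="kd_tree" builds a KD-Tree index instead of scanning every sample;
# "ball_tree", "brute", and "auto" are the other built-in options.
search = GridSearchCV(KNeighborsClassifier(algorithm="kd_tree"), param_grid, cv=5)
search.fit(X, y)

print("best k:", search.best_params_["n_neighbors"])
print("cross-validated accuracy:", search.best_score_)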
2. What is kNN Used For?
The strengths of kNN lie in its model-free nature, intuitive behaviour, and versatility.
(1). Recommendation
It can be used for collaborative filtering by computing the similarity between users or items.
(2). Anomaly Detection
It learns the distribution of distances to the nearest neighbours of normal samples and flags points that lie unusually far away as outliers (a sketch follows at the end of this section).
(3). Handwritten Digit Recognition
It computes pixel-by-pixel distances between images and classifies an unknown image as the digit it most closely resembles.
(4). Medical Diagnosis Support
It retrieves similar patient records and supports judgement based on the treatment outcomes of past cases.
(5). Image Search Engine
It retrieves "similar images" by computing distances between images represented as feature vectors.
In fact, kNN can be used for both classification and regression, and its range of applications expands dramatically when different distance definitions are combined with advanced indexing techniques.
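As a sketch of the anomaly-detection use case above, one simple approach (assuming scikit-learn and NumPy) is to score each point by its distance to the k-th nearest "normal" sample; the synthetic data, k=5, and the 99% quantile threshold are all illustrative assumptions.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
normal = rng.normal(loc=0.0, scale=1.0, size=(500, 2))  # samples assumed to be "normal"
queries = np.array([[0.1, -0.2], [6.0, 6.0]])           # a typical point and a far-away point

# Anomaly score: distance to the k-th nearest normal sample.
k = 5
nn = NearestNeighbors(n_neighbors=k).fit(normal)
query_dist, _ = nn.kneighbors(queries)
scores = query_dist[:, -1]

# Threshold taken from the normal data itself (the 99% quantile is an arbitrary choice).
# Request k + 1 neighbours for the training points so the self-match in column 0 is ignored.
train_dist, _ = nn.kneighbors(normal, n_neighbors=k + 1)
threshold = np.quantile(train_dist[:, -1], 0.99)

print("scores:", scores)
print("is outlier:", scores > threshold)
```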
3. Benefits of Learning k-Nearest Neighbors
- The idea of quantifying similarity between data points is applicable to clustering, distance-based anomaly detection, and even kernel methods.
- You can confirm its operation with just a few lines of code, allowing you to easily learn the flow of pre-processing and model evaluation.
- You can intuitively grasp the changes in accuracy due to hyperparameter tuning (k and distance metric).
- You can master spatial data structures, which are essential when dealing with large-scale data.
- Speeding up with approximate search libraries is valuable in recommendation and big data analysis.
- Once you understand plain kNN, you can smoothly move on to distance metric learning and local linear models.
If you want to learn k-Nearest Neighbors (Nearest Neighbor Method), we recommend this book (access here).