From 8bafaaa091b06d8cbcd49b607633276ea70d3b01 Mon Sep 17 00:00:00 2001
From: Ashita Prasad
Date: Sat, 8 Jun 2024 10:20:05 +0530
Subject: [PATCH] Rename K-nearest neighbor (KNN).md to knn.md

---
 .../machine-learning/{K-nearest neighbor (KNN).md => knn.md} | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
 rename contrib/machine-learning/{K-nearest neighbor (KNN).md => knn.md} (99%)

diff --git a/contrib/machine-learning/K-nearest neighbor (KNN).md b/contrib/machine-learning/knn.md
similarity index 99%
rename from contrib/machine-learning/K-nearest neighbor (KNN).md
rename to contrib/machine-learning/knn.md
index 748f808..85578f3 100644
--- a/contrib/machine-learning/K-nearest neighbor (KNN).md
+++ b/contrib/machine-learning/knn.md
@@ -119,4 +119,4 @@ plt.show()
 
 - **Feature Scaling:** Since KNN relies on distance calculations, features should be scaled (standardized or normalized) to ensure that all features contribute equally to the distance computation.
 - **Distance Metrics:** The choice of distance metric (Euclidean, Manhattan, etc.) can affect the performance of the algorithm.
-In conclusion, KNN is a versatile and easy-to-implement algorithm suitable for various classification and regression tasks, particularly when working with small datasets and well-defined features. However, careful consideration should be given to the choice of K, feature scaling, and distance metrics to optimize its performance.
\ No newline at end of file
+In conclusion, KNN is a versatile and easy-to-implement algorithm suitable for various classification and regression tasks, particularly when working with small datasets and well-defined features. However, careful consideration should be given to the choice of K, feature scaling, and distance metrics to optimize its performance.