## k-Nearest-Neighbor Classifier with sklearn

### Introduction

The underlying concepts of the k-Nearest-Neighbor classifier (kNN) can be found in the chapter k-Nearest-Neighbor Classifier of our Machine Learning Tutorial. In that chapter we also presented simple Python functions to demonstrate the fundamental principles.

Even though those functions produced impressive results, we recommend using the functionality of the sklearn module instead. We have already used sklearn in previous chapters.

### Using sklearn for kNN

neighbors is a package of the sklearn module which provides functionality for nearest-neighbor classifiers, both for unsupervised and supervised learning.

The classes in sklearn.neighbors can handle both NumPy arrays and scipy.sparse matrices as input. For dense matrices, a large number of possible distance metrics are supported. For sparse matrices, arbitrary Minkowski metrics are supported for searches.
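The following minimal sketch is an addition to the original text; it merely illustrates that the same classifier can be fitted and queried with a dense NumPy array or with a scipy.sparse matrix holding the same data:

from scipy.sparse import csr_matrix
from sklearn.neighbors import KNeighborsClassifier
import numpy as np

X_dense = np.array([[0., 0.], [1., 0.], [0., 1.],
                    [5., 5.], [6., 5.], [5., 6.]])
y = [0, 0, 0, 1, 1, 1]
X_sparse = csr_matrix(X_dense)   # the same data as a sparse matrix

knn_dense = KNeighborsClassifier(n_neighbors=3).fit(X_dense, y)
knn_sparse = KNeighborsClassifier(n_neighbors=3).fit(X_sparse, y)

print(knn_dense.predict([[0.5, 0.5]]))                # dense query -> [0]
print(knn_sparse.predict(csr_matrix([[5.5, 5.5]])))   # sparse query -> [1]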

scikit-learn implements two different nearest neighbor classifiers:

KNeighborsClassifier
is based on the k nearest neighbors of the sample which has to be classified. The number 'k' is an integer value specified by the user. This is the more frequently used of the two classifiers.

RadiusNeighborsClassifier
is based on the number of neighbors within a fixed radius r around the sample which has to be classified. 'r' is a float value specified by the user. This classifier is less often used.

#### KNeighborsClassifier

We will artificially create a dataset with three classes to test the k-nearest-neighbor classifier 'KNeighborsClassifier' from 'sklearn.neighbors'. We described how to create such datasets in our chapter Data Set Creation for Machine Learning.

from sklearn.datasets import make_blobs
import matplotlib.pyplot as plt
import numpy as np

centers = [[2, 3], [5, 5], [1, 8]]
n_classes = len(centers)
data, labels = make_blobs(n_samples=150,
                          centers=np.array(centers),
                          random_state=1)


Let us visualize what we have created:

import matplotlib.pyplot as plt

colours = ('green', 'red', 'blue')
n_classes = 3

fig, ax = plt.subplots()
for n_class in range(0, n_classes):
    ax.scatter(data[labels==n_class, 0], data[labels==n_class, 1],
               c=colours[n_class], s=10, label=str(n_class))

ax.legend(loc='upper right');


Now we have to split the data into a train and a test set.

from sklearn.model_selection import train_test_split
res = train_test_split(data, labels,
                       train_size=0.8,
                       test_size=0.2,
                       random_state=1)

train_data, test_data, train_labels, test_labels = res


We are now ready to perform the classification with the KNeighborsClassifier:

# Create and fit a nearest-neighbor classifier
from sklearn.neighbors import KNeighborsClassifier
knn = KNeighborsClassifier()
knn.fit(train_data, train_labels)

predicted = knn.predict(test_data)
print("Predictions from the classifier:")
print(predicted)
print("Target values:")
print(test_labels)

Predictions from the classifier:
[2 2 2 0 0 1 1 2 2 1 0 1 0 0 2 0 0 0 1 0 0 1 1 2 0 0 0 1 2 1]
Target values:
[2 2 2 0 0 1 1 2 2 1 0 1 0 0 2 0 0 0 1 0 0 1 1 2 0 0 0 1 2 1]


To evaluate the result, we will use accuracy_score from the module sklearn.metrics. To see how accuracy_score works, we will use a simple example with pseudo predictions and labels:

from sklearn.metrics import accuracy_score
example_predictions = [0, 2, 1, 3, 2, 0, 1]
example_labels      = [0, 1, 2, 3, 2, 1, 1]
print(accuracy_score(example_predictions, example_labels))

0.5714285714285714


The return value corresponds to the quotient of the number of correctly classified items and the total number of predictions. If you are only interested in the number of correctly classified items, you can set the parameter normalize to False. The default value is True.

print(accuracy_score(example_predictions,
                     example_labels,
                     normalize=False))

4


Now we are ready to evaluate the results of our previous classification example:

print(accuracy_score(predicted, test_labels))

1.0


You may have noticed that we instantiated the k-nearest neighbor classifier in our previous example by calling it without any arguments, i.e. KNeighborsClassifier(). In the following, we instantiate it with some possible keyword parameters:

knn = KNeighborsClassifier(algorithm='auto',
                           leaf_size=30,
                           metric='minkowski',
                           metric_params=None,
                           n_jobs=1,
                           n_neighbors=5,
                           p=2,
                           weights='uniform')


The parameter metric is 'minkowski' by default. We explained the Minkowski distance in our chapter k-Nearest-Neighbor Classifier. The parameter p is the p of the Minkowski formula: when p is set to 1, this is equivalent to using the manhattan_distance, and the euclidean_distance will be used if p is assigned the value 2.
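If you want to check the effect of p yourself, the following small sketch (an addition using scipy.spatial.distance, not part of the original example) compares the Minkowski distance for p=1 and p=2 with the Manhattan and the Euclidean distance:

from scipy.spatial.distance import minkowski, cityblock, euclidean
import numpy as np

a = np.array([2.0, 3.0])
b = np.array([5.0, 7.0])

# Minkowski with p=1 equals the Manhattan (city block) distance:
print(minkowski(a, b, p=1), cityblock(a, b))   # 7.0 7.0
# Minkowski with p=2 equals the Euclidean distance:
print(minkowski(a, b, p=2), euclidean(a, b))   # 5.0 5.0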

The parameter 'algorithm' determines which algorithm will be used, i.e.

• ball_tree will use BallTree
• kd_tree will use KDTree
• brute will use a brute-force search

We set the parameter to 'auto', which will attempt to decide the most appropriate algorithm based on the values passed to the fit method.

The parameter leaf_size is needed by BallTree or KDTree. It can affect the speed of the construction and query, as well as the memory required to store the tree. The optimal value depends on the nature of the problem.
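The following brief sketch is an addition to the original text; it assumes that train_data, test_data and train_labels from above are still defined. It illustrates that algorithm and leaf_size only influence speed and memory consumption, not the resulting predictions:

knn_kd = KNeighborsClassifier(algorithm='kd_tree', leaf_size=10)
knn_brute = KNeighborsClassifier(algorithm='brute')
knn_kd.fit(train_data, train_labels)
knn_brute.fit(train_data, train_labels)
# apart from possible ties, both variants yield identical predictions:
print(np.array_equal(knn_kd.predict(test_data), knn_brute.predict(test_data)))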

### Using the Iris Data

In the following example we will use the Iris data set:

from sklearn import datasets
from sklearn.model_selection import train_test_split

iris = datasets.load_iris()
data, labels = iris.data, iris.target

res = train_test_split(data, labels,
                       train_size=0.8,
                       test_size=0.2,
                       random_state=12)
train_data, test_data, train_labels, test_labels = res

# Create and fit a nearest-neighbor classifier
from sklearn.neighbors import KNeighborsClassifier
# classifier "out of the box", no parameters
knn = KNeighborsClassifier()
knn.fit(train_data, train_labels)

print("Predictions from the classifier:")
test_data_predicted = knn.predict(test_data)
print(test_data_predicted)
print("Target values:")
print(test_labels)

Predictions from the classifier:
[0 2 0 1 2 2 2 0 2 0 1 0 0 0 1 2 2 1 0 2 0 1 2 1 0 2 1 1 0 0]
Target values:
[0 2 0 1 2 2 2 0 2 0 1 0 0 0 1 2 2 1 0 1 0 1 2 1 0 2 1 1 0 0]

print(accuracy_score(test_data_predicted, test_labels))

0.9666666666666667

print("Predictions from the classifier:")
learn_data_predicted = knn.predict(train_data)
print(learn_data_predicted)
print("Target values:")
print(train_labels)
print(accuracy_score(learn_data_predicted, train_labels))

Predictions from the classifier:
[0 1 2 0 2 0 1 1 0 1 1 0 0 0 0 0 0 0 2 0 2 1 1 1 0 2 1 1 2 0 2 0 2 1 2 2 1
1 1 2 2 0 2 2 0 1 0 2 2 0 1 1 0 0 1 1 1 1 2 1 2 0 0 1 1 2 0 2 1 0 2 2 1 2
2 0 0 2 1 1 2 0 1 1 0 1 1 2 2 1 0 2 0 2 0 0 1 2 2 1 2 2 0 1 1 0 2 2 2 1 2
2 2 0 0 1 0 2 2 1]
Target values:
[0 1 2 0 2 0 1 1 0 1 1 0 0 0 0 0 0 0 2 0 2 1 1 1 0 2 1 1 2 0 2 0 2 2 2 2 1
1 1 1 2 0 2 2 0 1 0 2 2 0 1 1 0 0 1 1 1 1 2 1 2 0 0 1 1 1 0 2 1 0 2 2 1 2
2 0 0 2 1 1 2 0 1 1 0 1 1 2 2 1 0 2 0 2 0 0 1 2 2 1 2 2 0 1 1 0 2 2 2 1 2
2 2 0 0 1 0 2 2 1]
0.975

knn2 = KNeighborsClassifier(algorithm='auto',
                            leaf_size=30,
                            metric='minkowski',
                            metric_params=None,
                            n_jobs=1,
                            n_neighbors=5,
                            p=2,         # p=2 is equivalent to the Euclidean distance
                            weights='uniform')

knn2.fit(train_data, train_labels)
test_data_predicted = knn2.predict(test_data)
accuracy_score(test_data_predicted, test_labels)

Output:
0.9666666666666667

### RadiusNeighborsClassifier

The k-nearest-neighbor classifier works by growing a circle around the unknown sample (i.e. the item which needs to be classified) until the circle contains exactly k items. The radius neighbors classifier uses a circle of fixed size instead: it locates all items of the training dataset that lie within the circle of the given radius around the item which has to be classified. As a consequence of this fixed-radius approach, dense regions of the feature distribution provide more information and sparse regions contribute less.

from sklearn.neighbors import RadiusNeighborsClassifier

X = [[0, 1], [0.5, 1], [3, 1], [3, 2], [1.3, 0.8], [2.5, 2.5]]
y = [0, 0, 1, 1, 0, 1]

neigh = RadiusNeighborsClassifier(radius=1.0)
neigh.fit(X, y)

print(neigh.predict([[1.5, 1.2]]))

print(neigh.predict([[3.1, 2.1]]))

[0]
[1]
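Note that RadiusNeighborsClassifier raises an error when a query point has no training samples within the given radius. The following small sketch is an addition; the parameter outlier_label is part of sklearn's API, the label value -1 is our own choice:

neigh2 = RadiusNeighborsClassifier(radius=1.0, outlier_label=-1)
neigh2.fit(X, y)
print(neigh2.predict([[10, 10]]))   # no neighbours within the radius -> [-1]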

from sklearn.datasets import make_blobs
import matplotlib.pyplot as plt
import numpy as np

centers = [[2, 3], [5, 5], [7, 9]]
n_classes = len(centers)
data, labels = make_blobs(n_samples=155,
                          centers=np.array(centers),
                          cluster_std=1.3,
                          random_state=1)

import matplotlib.pyplot as plt

colours = ('green', 'red', 'blue')
n_classes = 3

fig, ax = plt.subplots()
for n_class in range(0, n_classes):
    ax.scatter(data[labels==n_class, 0], data[labels==n_class, 1],
               c=colours[n_class], s=10, label=str(n_class))

res = train_test_split(data, labels,
                       train_size=0.8,
                       test_size=0.2,
                       random_state=1)
train_data, test_data, train_labels, test_labels = res

rnn = RadiusNeighborsClassifier(radius=1)
rnn.fit(train_data, train_labels)

Output:
RadiusNeighborsClassifier(radius=1)
predicted = rnn.predict(test_data)

print(accuracy_score(predicted, test_labels))

0.9354838709677419


A frequently used rule of thumb is to choose k as the square root of the number of samples, made odd to avoid ties:

k = int(len(labels) ** 0.5)
# make this value odd:
if k % 2 == 0:
    k += 1
k

Output:
13

Let us compare this with a k nearest neighbor classifier:

knn = KNeighborsClassifier(algorithm='auto',
                           leaf_size=30,
                           metric='minkowski',
                           metric_params=None,
                           n_jobs=1,
                           n_neighbors=k,   # default is 5
                           p=2,             # p=2 is equivalent to the Euclidean distance
                           weights='uniform')

knn.fit(data, labels)

Output:
KNeighborsClassifier(n_jobs=1, n_neighbors=13)
predicted = knn.predict(test_data)
print(accuracy_score(predicted, test_labels))

0.967741935483871

from sklearn.metrics import confusion_matrix
# Evaluate Model
cm = confusion_matrix(predicted, test_labels)
print(cm)

[[10  0  0]
 [ 0  7  0]
 [ 0  1 13]]


### Exercises

#### Exercise 1

Classify the data in "strange_flowers.txt" with a k nearest neighbor classifier.

### Solutions

#### Solution to Exercise 1

import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler # necessary to reduce biases of large numbers
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import confusion_matrix
from sklearn.metrics import f1_score
from sklearn.metrics import accuracy_score

names=["red", "green", "blue", "size", "label"],
sep=" ")
dataset

Output:
red green blue size label
0 255.0 104.0 12.0 4.04 1.0
1 241.0 102.0 2.0 3.60 1.0
2 250.0 109.0 6.0 3.53 1.0
3 249.0 89.0 3.0 3.79 1.0
4 253.0 106.0 0.0 3.53 1.0
... ... ... ... ... ...
790 197.0 250.0 101.0 2.98 4.0
791 197.0 252.0 96.0 2.99 4.0
792 197.0 253.0 100.0 3.41 4.0
793 197.0 248.0 106.0 3.10 4.0
794 197.0 250.0 107.0 3.09 4.0

795 rows × 5 columns

# alternative way to read and extract the data

import numpy as np

# read the whitespace-separated numeric data directly into a NumPy array
raw_data = np.loadtxt("strange_flowers.txt")
data = raw_data[:,:-1]
labels = raw_data[:,-1]


We will now continue with the Pandas DataFrame object 'dataset', which we read in with 'read_csv':

data = dataset.drop('label', axis=1)
labels = dataset.label

X_train, X_test, y_train, y_test = train_test_split(data,
                                                    labels,
                                                    random_state=0,
                                                    test_size=0.2)

scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)  # fit the scaler on the training data and transform it
X_test = scaler.transform(X_test)        # transform the test data with the same parameters

X_train

Output:
array([[ 0.53464891, -0.70988171, -0.43608345,  0.06437159],
[-1.0989029 ,  1.87003125,  2.07641509, -1.28023732],
[ 0.96453097, -0.47534417, -0.46172119,  0.40554102],
...,
[-1.0989029 ,  1.93704197,  2.07641509, -1.76188828],
[-1.31384393, -0.54235489, -0.64118537,  1.24843018],
[-1.0989029 ,  1.98730002,  1.87131317, -2.06292012]])
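As a quick check (this is an addition to the original solution), the standardized training data should now have approximately zero mean and unit standard deviation in every column:

print(X_train.mean(axis=0).round(2))   # approximately [0. 0. 0. 0.]
print(X_train.std(axis=0).round(2))    # approximately [1. 1. 1. 1.]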

We set k to the square root of the size of the training set:

k = int(len(X_train) ** 0.5)
k

Output:
25
# Define the model
classifier = KNeighborsClassifier(n_neighbors=k,
                                  p=2,                 # p=2 corresponds to the Euclidean distance
                                  metric="minkowski")

classifier.fit(X_train, y_train)

Output:
KNeighborsClassifier(n_neighbors=25)
y_pred = classifier.predict(X_test)
y_pred

Output:
array([3., 1., 3., 4., 3., 3., 1., 4., 3., 3., 4., 1., 3., 1., 2., 2., 2.,
3., 1., 4., 2., 3., 4., 2., 3., 3., 4., 4., 1., 2., 1., 1., 2., 3.,
1., 3., 3., 2., 2., 2., 3., 3., 4., 1., 4., 2., 3., 2., 3., 2., 2.,
3., 1., 3., 4., 1., 2., 4., 2., 3., 3., 4., 3., 4., 3., 1., 1., 2.,
1., 3., 3., 1., 4., 2., 2., 3., 2., 4., 2., 4., 1., 3., 4., 2., 4.,
3., 2., 2., 2., 3., 2., 2., 3., 3., 1., 4., 2., 1., 2., 2., 2., 2.,
4., 3., 3., 3., 2., 1., 2., 4., 2., 3., 3., 1., 2., 4., 3., 1., 1.,
2., 1., 4., 3., 4., 2., 2., 3., 2., 4., 1., 4., 2., 4., 4., 4., 4.,
4., 2., 4., 4., 4., 2., 3., 2., 1., 1., 2., 3., 1., 1., 3., 1., 2.,
4., 1., 4., 2., 3., 1.])
# Evaluate Model
cm = confusion_matrix(y_test, y_pred)
print(cm)

[[31  1  0  0]
 [ 1 46  0  0]
 [ 0  0 44  0]
 [ 0  0  0 36]]

print(accuracy_score(y_test, y_pred))

0.9874213836477987


### Determining the Optimal k Value

As we have written above, the optimal value for k is often taken to be the square root of n, where n is the total number of samples in our data set.

We can also determine a value for k by plotting the accuracy values for different k values:

from sklearn.datasets import make_blobs
import matplotlib.pyplot as plt
import numpy as np

n_classes = 6
data, labels = make_blobs(n_samples=500,
                          centers=n_classes,
                          cluster_std=1.6,
                          random_state=1)

import matplotlib.pyplot as plt

colours = ('green', 'red', 'blue', 'magenta', 'yellow', 'pink')

fig, ax = plt.subplots()
for n_class in range(0, n_classes):
    ax.scatter(data[labels==n_class, 0], data[labels==n_class, 1],
               c=colours[n_class], s=10, label=str(n_class))

res = train_test_split(data, labels,
                       train_size=0.7,
                       test_size=0.3,
                       random_state=1)
train_data, test_data, train_labels, test_labels = res

print(len(train_data), len(test_data), len(train_labels))

X, Y = [], []
for k in range(1, 25):
    classifier = KNeighborsClassifier(n_neighbors=k,
                                      p=2,    # Euclidean
                                      metric="minkowski")
    classifier.fit(train_data, train_labels)
    predictions = classifier.predict(test_data)
    score = accuracy_score(test_labels, predictions)
    X.append(k)
    Y.append(score)

fig, ax = plt.subplots()
ax.set_xlabel('k')
ax.set_ylabel('accuracy')
ax.plot(X, Y, "go")

350 150 350

Output:
[<matplotlib.lines.Line2D at 0x7f18c859d850>]
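Instead of relying on a single train/test split, one could also estimate the accuracy for every k by cross-validation. The following sketch is an addition and not part of the original code; it uses sklearn's cross_val_score on the blob data from above:

from sklearn.model_selection import cross_val_score

best_k, best_score = 0, 0.0
for k in range(1, 25):
    clf = KNeighborsClassifier(n_neighbors=k)
    # mean accuracy over 5 cross-validation folds:
    score = cross_val_score(clf, data, labels, cv=5).mean()
    if score > best_score:
        best_k, best_score = k, score
print(best_k, best_score)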