Machine Learning with Python: k-Nearest Neighbor Classifier

k-Nearest-Neighbor Classifier


"Show me who your friends are and I’ll tell you who you are?"

The concept of the k-nearest neighbor classifier can hardly be described more simply than by this old saying, which can be found in many languages and many cultures. It is also expressed in other words in the Bible: "He who walks with wise men will be wise, but the companion of fools will suffer harm" (Proverbs 13:20).

This means that the concept of the k-nearest neighbor classifier is part of our everyday life and judgement: Imagine you meet a group of people who are all very young, stylish and sporty. They talk about their friend Ben, who isn't with them. So, how do you imagine Ben? Right, you imagine him as being young, stylish and sporty as well.

What if you learn that Ben lives in a neighborhood where people vote conservative and the average income is above 200,000 dollars a year? And that both of his neighbors even make more than 300,000 dollars per year? What do you think of Ben now? Most probably, you do not consider him to be an underdog, and you may suspect him to be a conservative as well.

Now let's get a little bit more mathematical:

In contrast to other classification methods, the k-Nearest-Neighbor Classifier (k-NN) works directly on the learned samples instead of creating rules.

Nearest Neighbor Algorithm:

We are given a set of categories $\{c_1, c_2, \ldots, c_n\}$, also called classes, e.g. {"male", "female"}, and a learnset $LS$ consisting of labelled instances.

The task of classification consists in assigning a category or class to an arbitrary instance. If the instance $o$ is an element of $LS$, the label of the instance will be used.

Now, we will look at the case where $o$ is not in $LS$:

$o$ is compared with all instances of $LS$, using a distance metric for the comparison. We determine the $k$ closest neighbors of $o$, i.e. the items with the smallest distances. $k$ is a user-defined constant, a positive integer, which is usually small.

The most common class among these $k$ nearest neighbors will be assigned to the instance $o$. If $k = 1$, the object is simply assigned to the class of its single nearest neighbor.

The algorithm for the k-nearest neighbor classifier is among the simplest of all machine learning algorithms. k-NN is a type of instance-based learning, or lazy learning: the function is only approximated locally, and all the computation is deferred until we do the actual classification.
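Before we build our own classifier, it may help to see how little code the same idea takes with sklearn's built-in KNeighborsClassifier. The following lines are only a brief sketch for orientation; we will not use this class in the rest of this chapter:

from sklearn import datasets
from sklearn.neighbors import KNeighborsClassifier
iris = datasets.load_iris()
knn = KNeighborsClassifier(n_neighbors=3)    # k = 3
knn.fit(iris.data, iris.target)
# classify a previously unseen flower:
print(knn.predict([[5.7, 2.8, 4.1, 1.3]]))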

k-nearest-neighbor from Scratch

Preparing the Dataset


Before we actually start writing a nearest neighbor classifier, we need to think about the data, i.e. the learnset. We will use the "iris" dataset provided by the datasets of the sklearn module.

The data set consists of 50 samples from each of three species of Iris (Iris setosa, Iris virginica and Iris versicolor).

Four features were measured from each sample: the length and the width of the sepals and petals, in centimetres.

import numpy as np
from sklearn import datasets
iris = datasets.load_iris()
iris_data = iris.data
iris_labels = iris.target
print(iris_data[0], iris_data[79], iris_data[100])
print(iris_labels[0], iris_labels[79], iris_labels[100])
[ 5.1  3.5  1.4  0.2] [ 5.7  2.6  3.5  1. ] [ 6.3  3.3  6.   2.5]
0 1 2
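
The feature names and the class names are also available on the dataset object, if you want to check what the four values and the labels 0, 1 and 2 stand for:

print(iris.feature_names)
print(iris.target_names)
['sepal length (cm)', 'sepal width (cm)', 'petal length (cm)', 'petal width (cm)']
['setosa' 'versicolor' 'virginica']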

We create a learnset and a testset from the data above. We use permutation from np.random to shuffle the data randomly; the last 12 samples will serve as our testset.

np.random.seed(42)
indices = np.random.permutation(len(iris_data))
n_test_samples = 12
learnset_data = iris_data[indices[:-n_test_samples]]
learnset_labels = iris_labels[indices[:-n_test_samples]]
testset_data = iris_data[indices[-n_test_samples:]]
testset_labels = iris_labels[indices[-n_test_samples:]]
print(learnset_data[:4], learnset_labels[:4])
print(testset_data[:4], testset_labels[:4])
[[ 6.1  2.8  4.7  1.2]
 [ 5.7  3.8  1.7  0.3]
 [ 7.7  2.6  6.9  2.3]
 [ 6.   2.9  4.5  1.5]] [1 0 2 1]
[[ 5.7  2.8  4.1  1.3]
 [ 6.5  3.   5.5  1.8]
 [ 6.3  2.3  4.4  1.3]
 [ 6.4  2.9  4.3  1.3]] [1 2 1 1]

The following code is only necessary to visualize the data of our learnset. Our data consists of four values per iris item, so we will reduce the data to three values by summing up the third and fourth values. This way, we are capable of depicting the data in 3-dimensional space:

# following line is only necessary, if you use ipython notebook!!!
%matplotlib inline 
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
colours = ("r", "b")
X = []
for iclass in range(3):
    X.append([[], [], []])
    for i in range(len(learnset_data)):
        if learnset_labels[i] == iclass:
            X[iclass][0].append(learnset_data[i][0])
            X[iclass][1].append(learnset_data[i][1])
            X[iclass][2].append(sum(learnset_data[i][2:]))
colours = ("r", "g", "y")
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
for iclass in range(3):
    ax.scatter(X[iclass][0], X[iclass][1], X[iclass][2], c=colours[iclass])
plt.show()

Determining the Neighbors

To determine the similarity between two instances, we need a distance function. In our example, the Euclidean distance is a good choice.
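
For two $n$-dimensional points $x$ and $y$ it is defined as

$$d(x, y) = \sqrt{\sum_{i=1}^{n}{(x_i - y_i)^2}}$$

which is exactly what np.linalg.norm computes for the difference vector $x - y$: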

def distance(instance1, instance2):
    # convert to numpy arrays, in case the instances are lists or tuples:
    instance1 = np.array(instance1) 
    instance2 = np.array(instance2)
    
    return np.linalg.norm(instance1 - instance2)
print(distance([3, 5], [1, 1]))
print(distance(learnset_data[3], learnset_data[44]))
4.472135955
3.41906419946

The function 'get_neighbors' returns a list with the 'k' neighbors which are closest to the instance 'test_instance':

def get_neighbors(training_set, 
                  labels, 
                  test_instance, 
                  k, 
                  distance=distance):
    """
    get_neighors calculates a list of the k nearest neighbors
    of an instance 'test_instance'.
    The list neighbors contains 3-tuples with  
    (index, dist, label)
    where 
    index    is the index from the training_set, 
    dist     is the distance between the test_instance and the 
             instance training_set[index]
    distance is a reference to a function used to calculate the 
             distances
    """
    distances = []
    for index in range(len(training_set)):
        dist = distance(test_instance, training_set[index])
        distances.append((training_set[index], dist, labels[index]))
    distances.sort(key=lambda x: x[1])
    neighbors = distances[:k]
    return neighbors

We will test the function with our iris samples:

for i in range(5):
    neighbors = get_neighbors(learnset_data, 
                              learnset_labels, 
                              testset_data[i], 
                              3, 
                              distance=distance)
    print(i, 
          testset_data[i], 
          testset_labels[i], 
          neighbors)
    
0 [ 5.7  2.8  4.1  1.3] 1 [(array([ 5.7,  2.9,  4.2,  1.3]), 0.14142135623730995, 1), (array([ 5.6,  2.7,  4.2,  1.3]), 0.17320508075688815, 1), (array([ 5.6,  3. ,  4.1,  1.3]), 0.22360679774997935, 1)]
1 [ 6.5  3.   5.5  1.8] 2 [(array([ 6.4,  3.1,  5.5,  1.8]), 0.14142135623730931, 2), (array([ 6.3,  2.9,  5.6,  1.8]), 0.24494897427831783, 2), (array([ 6.5,  3. ,  5.2,  2. ]), 0.36055512754639879, 2)]
2 [ 6.3  2.3  4.4  1.3] 1 [(array([ 6.2,  2.2,  4.5,  1.5]), 0.26457513110645858, 1), (array([ 6.3,  2.5,  4.9,  1.5]), 0.57445626465380295, 1), (array([ 6. ,  2.2,  4. ,  1. ]), 0.5916079783099617, 1)]
3 [ 6.4  2.9  4.3  1.3] 1 [(array([ 6.2,  2.9,  4.3,  1.3]), 0.20000000000000018, 1), (array([ 6.6,  3. ,  4.4,  1.4]), 0.26457513110645869, 1), (array([ 6.6,  2.9,  4.6,  1.3]), 0.3605551275463984, 1)]
4 [ 5.6  2.8  4.9  2. ] 2 [(array([ 5.8,  2.7,  5.1,  1.9]), 0.3162277660168375, 2), (array([ 5.8,  2.7,  5.1,  1.9]), 0.3162277660168375, 2), (array([ 5.7,  2.5,  5. ,  2. ]), 0.33166247903553986, 2)]

Voting to get a Single Result

We will now write a 'vote' function. This function uses the class 'Counter' from collections to count the frequency of the classes inside a list of instances. This instance list will, of course, be the neighbors. The function 'vote' returns the most common class:

from collections import Counter
def vote(neighbors):
    class_counter = Counter()
    for neighbor in neighbors:
        class_counter[neighbor[2]] += 1
    return class_counter.most_common(1)[0][0]
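
Before applying 'vote' to the iris data, let's do a quick sanity check with a hand-made list of neighbors (the instances and distances are made-up values; 'vote' only looks at the labels):

neighbors = [([1.0, 2.0], 0.5, 'A'),
             ([2.0, 3.0], 0.7, 'B'),
             ([1.5, 1.5], 0.9, 'A')]
print(vote(neighbors))
A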

We will test 'vote' on our test samples:

for i in range(n_test_samples):
    neighbors = get_neighbors(learnset_data, 
                              learnset_labels, 
                              testset_data[i], 
                              3, 
                              distance=distance)
    print("index: ", i, 
          ", result of vote: ", vote(neighbors), 
          ", label: ", testset_labels[i], 
          ", data: ", testset_data[i])
index:  0 , result of vote:  1 , label:  1 , data:  [ 5.7  2.8  4.1  1.3]
index:  1 , result of vote:  2 , label:  2 , data:  [ 6.5  3.   5.5  1.8]
index:  2 , result of vote:  1 , label:  1 , data:  [ 6.3  2.3  4.4  1.3]
index:  3 , result of vote:  1 , label:  1 , data:  [ 6.4  2.9  4.3  1.3]
index:  4 , result of vote:  2 , label:  2 , data:  [ 5.6  2.8  4.9  2. ]
index:  5 , result of vote:  2 , label:  2 , data:  [ 5.9  3.   5.1  1.8]
index:  6 , result of vote:  0 , label:  0 , data:  [ 5.4  3.4  1.7  0.2]
index:  7 , result of vote:  1 , label:  1 , data:  [ 6.1  2.8  4.   1.3]
index:  8 , result of vote:  1 , label:  2 , data:  [ 4.9  2.5  4.5  1.7]
index:  9 , result of vote:  0 , label:  0 , data:  [ 5.8  4.   1.2  0.2]
index:  10 , result of vote:  1 , label:  1 , data:  [ 5.8  2.6  4.   1.2]
index:  11 , result of vote:  2 , label:  2 , data:  [ 7.1  3.   5.9  2.1]

We can see that the predictions correspond to the labelled results, except for the item with index 8: a sample labelled as class 2 (Iris virginica) whose nearest neighbors all belong to class 1 (Iris versicolor).

'vote_prob' is a function like 'vote', but it returns the class name along with the probability for this class:

def vote_prob(neighbors):
    class_counter = Counter()
    for neighbor in neighbors:
        class_counter[neighbor[2]] += 1
    labels, votes = zip(*class_counter.most_common())
    winner = class_counter.most_common(1)[0][0]
    votes4winner = class_counter.most_common(1)[0][1]
    return winner, votes4winner/sum(votes)
for i in range(n_test_samples):
    neighbors = get_neighbors(learnset_data, 
                              learnset_labels, 
                              testset_data[i], 
                              5, 
                              distance=distance)
    print("index: ", i, 
          ", vote_prob: ", vote_prob(neighbors), 
          ", label: ", testset_labels[i], 
          ", data: ", testset_data[i])
index:  0 , vote_prob:  (1, 1.0) , label:  1 , data:  [ 5.7  2.8  4.1  1.3]
index:  1 , vote_prob:  (2, 1.0) , label:  2 , data:  [ 6.5  3.   5.5  1.8]
index:  2 , vote_prob:  (1, 1.0) , label:  1 , data:  [ 6.3  2.3  4.4  1.3]
index:  3 , vote_prob:  (1, 1.0) , label:  1 , data:  [ 6.4  2.9  4.3  1.3]
index:  4 , vote_prob:  (2, 1.0) , label:  2 , data:  [ 5.6  2.8  4.9  2. ]
index:  5 , vote_prob:  (2, 0.8) , label:  2 , data:  [ 5.9  3.   5.1  1.8]
index:  6 , vote_prob:  (0, 1.0) , label:  0 , data:  [ 5.4  3.4  1.7  0.2]
index:  7 , vote_prob:  (1, 1.0) , label:  1 , data:  [ 6.1  2.8  4.   1.3]
index:  8 , vote_prob:  (1, 1.0) , label:  2 , data:  [ 4.9  2.5  4.5  1.7]
index:  9 , vote_prob:  (0, 1.0) , label:  0 , data:  [ 5.8  4.   1.2  0.2]
index:  10 , vote_prob:  (1, 1.0) , label:  1 , data:  [ 5.8  2.6  4.   1.2]
index:  11 , vote_prob:  (2, 1.0) , label:  2 , data:  [ 7.1  3.   5.9  2.1]

The Weighted Nearest Neighbor Classifier

So far, we looked only at the k items in the vicinity of an unknown object 'UO' and used a majority vote. Using the majority vote has proven quite efficient in our previous example, but it doesn't take the following reasoning into account: the farther away a neighbor is, the more it "deviates" from the "real" result. In other words, we can trust the closest neighbors more than the farther ones. Let's assume we have 11 neighbors of an unknown item UO. The closest five neighbors belong to a class A, and the other six, which are farther away, belong to a class B. What class should be assigned to UO? The previous approach says B, because we have a 6 to 5 vote in favor of B. On the other hand, the closest 5 are all A, and this should count for more.

To pursue this strategy, we can assign weights to the neighbors in the following way: the nearest neighbor of an instance gets a weight of $1/1$, the second closest gets a weight of $1/2$, and so on, up to $1/k$ for the farthest neighbor.

This means that we are using the harmonic series as weights:

$$\sum_{i=1}^{k}{\frac{1}{i}} = 1 + \frac{1}{2} + \frac{1}{3} + \ldots + \frac{1}{k}$$
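
With these weights alone, we can check that this scheme settles the 5-versus-6 example from above in favor of class A:

# ranks 1 to 5 belong to class A, ranks 6 to 11 to class B:
weight_A = sum(1/i for i in range(1, 6))
weight_B = sum(1/i for i in range(6, 12))
print(weight_A, weight_B)   # approximately 2.283 and 0.737, so A wins clearly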

We implement this in the following function:

def vote_harmonic_weights(neighbors, all_results=True):
    class_counter = Counter()
    number_of_neighbors = len(neighbors)
    for index in range(number_of_neighbors):
        class_counter[neighbors[index][2]] += 1/(index+1)
    labels, votes = zip(*class_counter.most_common())
    winner = class_counter.most_common(1)[0][0]
    votes4winner = class_counter.most_common(1)[0][1]
    if all_results:
        total = sum(class_counter.values(), 0.0)
        for key in class_counter:
             class_counter[key] /= total
        return winner, class_counter.most_common()
    else:
        return winner, votes4winner / sum(votes)
        
for i in range(n_test_samples):
    neighbors = get_neighbors(learnset_data, 
                              learnset_labels, 
                              testset_data[i], 
                              6, 
                              distance=distance)
    print("index: ", i, 
          ", result of vote: ", 
          vote_harmonic_weights(neighbors,
                                all_results=True))
index:  0 , result of vote:  (1, [(1, 1.0)])
index:  1 , result of vote:  (2, [(2, 1.0)])
index:  2 , result of vote:  (1, [(1, 1.0)])
index:  3 , result of vote:  (1, [(1, 1.0)])
index:  4 , result of vote:  (2, [(2, 0.9319727891156463), (1, 0.06802721088435375)])
index:  5 , result of vote:  (2, [(2, 0.8503401360544217), (1, 0.14965986394557826)])
index:  6 , result of vote:  (0, [(0, 1.0)])
index:  7 , result of vote:  (1, [(1, 1.0)])
index:  8 , result of vote:  (1, [(1, 1.0)])
index:  9 , result of vote:  (0, [(0, 1.0)])
index:  10 , result of vote:  (1, [(1, 1.0)])
index:  11 , result of vote:  (2, [(2, 1.0)])

The previous approach took only the ranking of the neighbors according to their distance into account. We can improve the voting by using their actual distances: a neighbor at distance $d$ will get the weight $1/(d^2+1)$, which is $1$ for a distance of zero and decreases as the distance grows. For this purpose, we will write a new voting function:

def vote_distance_weights(neighbors, all_results=True):
    class_counter = Counter()
    number_of_neighbors = len(neighbors)
    for index in range(number_of_neighbors):
        dist = neighbors[index][1]
        label = neighbors[index][2]
        class_counter[label] += 1 / (dist**2 + 1)
    labels, votes = zip(*class_counter.most_common())
    winner = class_counter.most_common(1)[0][0]
    votes4winner = class_counter.most_common(1)[0][1]
    if all_results:
        total = sum(class_counter.values(), 0.0)
        for key in class_counter:
             class_counter[key] /= total
        return winner, class_counter.most_common()
    else:
        return winner, votes4winner / sum(votes)
for i in range(n_test_samples):
    neighbors = get_neighbors(learnset_data, 
                              learnset_labels, 
                              testset_data[i], 
                              6, 
                              distance=distance)
    print("index: ", i, 
          ", result of vote: ", vote_distance_weights(neighbors,
                                                      all_results=True))
index:  0 , result of vote:  (1, [(1, 1.0)])
index:  1 , result of vote:  (2, [(2, 1.0)])
index:  2 , result of vote:  (1, [(1, 1.0)])
index:  3 , result of vote:  (1, [(1, 1.0)])
index:  4 , result of vote:  (2, [(2, 0.84901545921183608), (1, 0.15098454078816387)])
index:  5 , result of vote:  (2, [(2, 0.67361374621844783), (1, 0.32638625378155212)])
index:  6 , result of vote:  (0, [(0, 1.0)])
index:  7 , result of vote:  (1, [(1, 1.0)])
index:  8 , result of vote:  (1, [(1, 1.0)])
index:  9 , result of vote:  (0, [(0, 1.0)])
index:  10 , result of vote:  (1, [(1, 1.0)])
index:  11 , result of vote:  (2, [(2, 1.0)])

Another Example for Nearest Neighbor Classification

We want to test the previous functions with another very simple dataset:

train_set = [(1, 2, 2), 
             (-3, -2, 0),
             (1, 1, 3), 
             (-3, -3, -1),
             (-3, -2, -0.5),
             (0, 0.3, 0.8),
             (-0.5, 0.6, 0.7),
             (0, 0, 0)
            ]
labels = ['apple',  'banana', 'apple', 
          'banana', 'apple', "orange",
          'orange', 'orange']
k = 2
for test_instance in [(0, 0, 0), (2, 2, 2), 
                      (-3, -1, 0), (0, 1, 0.9),
                      (1, 1.5, 1.8), (0.9, 0.8, 1.6)]:
    neighbors = get_neighbors(train_set, 
                              labels, 
                              test_instance, 
                              k)
    print("vote distance weights: ", vote_distance_weights(neighbors))
vote distance weights:  ('orange', [('orange', 1.0)])
vote distance weights:  ('apple', [('apple', 1.0)])
vote distance weights:  ('banana', [('banana', 0.52941176470588236), ('apple', 0.47058823529411764)])
vote distance weights:  ('orange', [('orange', 1.0)])
vote distance weights:  ('apple', [('apple', 1.0)])
vote distance weights:  ('apple', [('apple', 0.50847457627118653), ('orange', 0.49152542372881353)])

kNN in Linguistics

The next example comes from computational linguistics. We show how we can use a k-nearest neighbor classifier to recognize misspelled words.

We use a module called levenshtein, which we have implemented in our tutorial on Levenshtein Distance.
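
In case you don't have this module at hand, the following minimal implementation, a standard dynamic-programming version of the edit distance (not necessarily identical to the one from our tutorial), is sufficient for this chapter:

def levenshtein(a, b):
    """ calculates the Levenshtein (edit) distance between
        the strings a and b """
    if len(a) < len(b):
        a, b = b, a
    previous_row = list(range(len(b) + 1))
    for i, char_a in enumerate(a, 1):
        current_row = [i]
        for j, char_b in enumerate(b, 1):
            substitute = previous_row[j - 1] + (char_a != char_b)
            insert = current_row[j - 1] + 1
            delete = previous_row[j] + 1
            current_row.append(min(substitute, insert, delete))
        previous_row = current_row
    return previous_row[-1]

Note that in the following examples the list of correct words is passed both as the training set and as the labels, i.e. each word serves as its own class.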

from levenshtein import levenshtein
cities = []
with open("data/city_names.txt") as fh:
    for line in fh:
        city = line.strip()
        if " " in city:
            # for names like "Freiburg im Breisgau", add the first word as well
            cities.append(city.split()[0])
        cities.append(city)
# uncomment the next line for a quick test with a small subset:
#cities = cities[:20]
for city in ["Freiburg", "Frieburg", "Freiborg", 
             "Hamborg", "Sahrluis"]:
    neighbors = get_neighbors(cities, 
                              cities, 
                              city, 
                              2,
                              distance=levenshtein)
    print("vote_distance_weights: ", vote_distance_weights(neighbors))
vote_distance_weights:  ('Freiburg', [('Freiburg', 0.6666666666666666), ('Freiberg', 0.3333333333333333)])
vote_distance_weights:  ('Freiburg', [('Freiburg', 0.6666666666666666), ('Lüneburg', 0.3333333333333333)])
vote_distance_weights:  ('Freiburg', [('Freiburg', 0.5), ('Freiberg', 0.5)])
vote_distance_weights:  ('Hamburg', [('Hamburg', 0.7142857142857143), ('Bamberg', 0.28571428571428575)])
vote_distance_weights:  ('Saarlouis', [('Saarlouis', 0.8387096774193549), ('Bayreuth', 0.16129032258064516)])

If you work under Linux (especially Ubuntu), you can find a file with a British-English dictionary under /usr/share/dict/british-english. Windows users and others can download the file as

british-english.txt

We use extremely misspelled words in the following example. We can see that our simple 'vote_prob' function does well in only one case: correcting "blagrufoo" to "barefoot". Our distance voting, on the other hand, does well in nearly all cases, though both functions turn "holpposs" into "hippo's" rather than "helpless". Okay, we have to admit that we had "liberty" in mind when we wrote "liberdi", but suggesting "liberal" is a good choice.

words = []
with open("british-english.txt") as fh:
    for line in fh:
        word = line.strip()
        words.append(word)
for word in ["holpful", "kundnoss", "holpposs", "blagrufoo", "liberdi"]:
    neighbors = get_neighbors(words, 
                              words, 
                              word, 
                              3,
                              distance=levenshtein)
    print("vote_distance_weights: ", vote_distance_weights(neighbors, 
                                                           all_results=False))
    print("vote_prob: ", vote_prob(neighbors))
vote_distance_weights:  ('helpful', 0.5555555555555556)
vote_prob:  ('doleful', 0.3333333333333333)
vote_distance_weights:  ('kindness', 0.5)
vote_prob:  ('kudos', 0.3333333333333333)
vote_distance_weights:  ("hippo's", 0.3333333333333333)
vote_prob:  ("hippo's", 0.3333333333333333)
vote_distance_weights:  ('barefoot', 0.4333333333333333)
vote_prob:  ('barefoot', 0.3333333333333333)
vote_distance_weights:  ('liberal', 0.4)
vote_prob:  ('Hibernia', 0.3333333333333333)