Math Derivation + Pure Python Implementation of Machine Learning Algorithms, Part 3: k-Nearest Neighbors
It has been a while since I last wrote for this machine learning algorithm series; I'm picking it back up over the National Day holiday.
As a classification and regression algorithm with no explicit training or learning process, k-nearest neighbors (kNN) stands out among supervised learning methods. It is distinctive because, unlike most models, kNN has no loss function, no optimization algorithm, and no training procedure. Given a set of instances and their class labels, a new instance is classified according to the classes of its k nearest instances. Compared with other machine learning models and algorithms, kNN is therefore a remarkably simple method overall.
Basic theory of k-nearest neighbors
Let's start with the most intuitive description of the kNN algorithm: given a training dataset and a new input instance, find the k instances in the training set closest to it; whichever class holds the majority among those k instances is the class assigned to the new instance.
From this intuitive description we can identify several key elements of the algorithm. The first is finding the instances nearest to the input instance, which raises the question of how to measure distance in the feature space; this is the first issue to settle. The second is the k instances themselves: how should the value of k be chosen? The third is "the majority of the k instances", which is clearly a majority-voting rule for assigning the class, although other rules are possible, so the third key element is the classification decision rule. Let's look at each of these in turn.
First, the distance metric. In kNN the distance metric can also be viewed as a similarity measure: it reflects how similar two points in the feature space are. Common distance metrics in machine learning include the Euclidean distance, Manhattan distance, cosine distance, and Chebyshev distance. The metric most commonly used in kNN is the Euclidean distance, also known as the L2 distance, computed as follows:
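$$L_2(x_i, x_j) = \left( \sum_{l=1}^{n} \left| x_i^{(l)} - x_j^{(l)} \right|^2 \right)^{1/2}$$

where $x_i^{(l)}$ denotes the $l$-th feature of instance $x_i$ and $n$ is the dimension of the feature space.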
Next, the choice of k. In general the value of k has a major impact on the classification result. A small k means predictions are made from a small neighborhood, so only training instances very close to the input affect the result; but the prediction then becomes very sensitive to individual points, the classifier is less robust to noise, and overfitting is likely, so k should not be chosen too small. A large k, on the other hand, means predictions are made from a large neighborhood: the model becomes simpler overall, but the classification error grows and some degree of underfitting appears. In practice, we usually use cross-validation to choose a suitable k.
Finally, the classification decision rule. This is usually majority voting, which is easy to understand and needs little further explanation. In summary, the essence of kNN is to partition the feature space based on the distance metric and the value of k. Once the training data, distance metric, value of k, and decision rule are fixed, the class of any new input instance is uniquely determined.
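To make the voting rule concrete, here is a minimal sketch using collections.Counter on a made-up set of neighbor labels (the values below are purely illustrative); the same pattern appears in the prediction function later in this post:

from collections import Counter

neighbor_labels = [0, 2, 2, 1, 2]               # labels of the k = 5 nearest neighbors
vote = Counter(neighbor_labels).most_common(1)  # [(2, 3)]: label 2 wins with 3 votes
predicted_class = vote[0][0]
print(predicted_class)                          # -> 2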
That covers the basic principles of the kNN algorithm. As you can see, unlike the earlier posts on linear regression and logistic regression, kNN requires relatively little mathematical derivation, making it a somewhat atypical supervised learning algorithm.
Implementing knn in Python
As before, let's think through what components we need before writing any code. Based on the theory above, we first need to store the training data. Then we define a distance function; the value of k can be specified up front as a hyperparameter; and finally we implement the majority-voting classification rule.
First, import the relevant packages and set the plotting parameters:
import numpy as np
from collections import Counter
import random
import matplotlib.pyplot as plt
from sklearn import datasets
from sklearn.utils import shuffle
plt.rcParams['figure.figsize'] = (10.0, 8.0)
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
We use the iris dataset as an example:
iris = datasets.load_iris()
X, y = shuffle(iris.data, iris.target, random_state=13)
X = X.astype(np.float32)
# simple split into training and test sets
offset = int(X.shape[0] * 0.7)
X_train, y_train = X[:offset], y[:offset]
X_test, y_test = X[offset:], y[offset:]
y_train = y_train.reshape((-1,1))
y_test = y_test.reshape((-1,1))
print('X_train=', X_train.shape)
print('X_test=', X_test.shape)
print('y_train=', y_train.shape)
print('y_test=', y_test.shape)
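Since iris contains 150 samples with 4 features each, this 70/30 split yields X_train of shape (105, 4), X_test of shape (45, 4), and label arrays of shape (105, 1) and (45, 1) respectively.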
Define the L2 distance function, which computes the distances between every test instance and every training instance in one vectorized step:
def compute_distances(X, X_train):
    num_test = X.shape[0]
    num_train = X_train.shape[0]
    dists = np.zeros((num_test, num_train))
    # vectorized pairwise L2 distances via (a - b)^2 = a^2 + b^2 - 2ab
    M = np.dot(X, X_train.T)
    te = np.square(X).sum(axis=1)
    tr = np.square(X_train).sum(axis=1)
    dists = np.sqrt(-2 * M + tr + te.reshape(-1, 1))
    return dists
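The vectorization above relies on the expansion ||a - b||² = ||a||² + ||b||² - 2a·b, evaluated for all test/train pairs at once. If you want to convince yourself it is correct, a quick check against a naive double loop looks like this (compute_distances_naive is a throwaway helper introduced only for this comparison):

def compute_distances_naive(X, X_train):
    # plain two-loop version, used only to verify the vectorized implementation
    num_test, num_train = X.shape[0], X_train.shape[0]
    dists = np.zeros((num_test, num_train))
    for i in range(num_test):
        for j in range(num_train):
            dists[i, j] = np.sqrt(np.sum((X[i] - X_train[j]) ** 2))
    return dists

# the two versions should agree up to floating-point error
assert np.allclose(compute_distances(X_test, X_train),
                   compute_distances_naive(X_test, X_train))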
Let's try computing the distances between the test instances and the training instances:
dists = compute_distances(X_test, X_train)
print(dists.shape)
While we're at it, let's visualize the distance matrix:
plt.imshow(dists, interpolation='none')
plt.show()
Using the majority-voting decision rule, define the prediction function; here we assume k = 1:
def predict_labels(y_train, dists, k=1):
    num_test = dists.shape[0]
    y_pred = np.zeros(num_test)
    for i in range(num_test):
        closest_y = []
        # argsort returns the indices that sort the i-th row of dists in ascending order
        labels = y_train[np.argsort(dists[i, :])].flatten()
        closest_y = labels[0:k]
        c = Counter(closest_y)
        y_pred[i] = c.most_common(1)[0][0]
    return y_pred
Let's check the classification accuracy on the test set:
y_test_pred = predict_labels(y_train, dists, k=1)
y_test_pred = y_test_pred.reshape((-1, 1))
num_correct = np.sum(y_test_pred == y_test)
accuracy = float(num_correct) / X_test.shape[0]
print('Got %d / %d correct => accuracy: %f' % (num_correct, X_test.shape[0], accuracy))
With k = 1 the test-set accuracy is 0.97778, which is already quite high.
Next, let's use 5-fold cross-validation to select the best value of k:
num_folds = 5
k_choices = [1, 3, 5, 8, 10, 12, 15, 20, 50, 100]
X_train_folds = np.array_split(X_train, num_folds)
y_train_folds = np.array_split(y_train, num_folds)
k_to_accuracies = {}
for k in k_choices:
    for fold in range(num_folds):
        # hold out one fold of the training set as a validation set
        validation_X_test = X_train_folds[fold]
        validation_y_test = y_train_folds[fold]
        temp_X_train = np.concatenate(X_train_folds[:fold] + X_train_folds[fold + 1:])
        temp_y_train = np.concatenate(y_train_folds[:fold] + y_train_folds[fold + 1:])
        # compute distances to the remaining folds
        temp_dists = compute_distances(validation_X_test, temp_X_train)
        temp_y_test_pred = predict_labels(temp_y_train, temp_dists, k=k)
        temp_y_test_pred = temp_y_test_pred.reshape((-1, 1))
        # check the classification accuracy on the held-out fold
        num_correct = np.sum(temp_y_test_pred == validation_y_test)
        num_test = validation_X_test.shape[0]
        accuracy = float(num_correct) / num_test
        k_to_accuracies[k] = k_to_accuracies.get(k, []) + [accuracy]
# print the accuracy for each k and each fold
for k in sorted(k_to_accuracies):
    for accuracy in k_to_accuracies[k]:
        print('k = %d, accuracy = %f' % (k, accuracy))
Visualize the classification accuracy for the different values of k:
for k in k_choices:
    accuracies = k_to_accuracies[k]
    plt.scatter([k] * len(accuracies), accuracies)
accuracies_mean = np.array([np.mean(v) for k, v in sorted(k_to_accuracies.items())])
accuracies_std = np.array([np.std(v) for k, v in sorted(k_to_accuracies.items())])
plt.errorbar(k_choices, accuracies_mean, yerr=accuracies_std)
plt.title('Cross-validation on k')
plt.xlabel('k')
plt.ylabel('Cross-validation accuracy')
plt.show()
As the plot shows, for k up to about 20 the accuracy fluctuates only slightly, it begins to decline somewhere between 20 and 40, and above 50 it falls off a cliff. This is mainly a consequence of how small our dataset is.
Find the best value of k:
best_k = k_choices[np.argmax(accuracies_mean)]
print('Best k value:', best_k)
Wrapping the knn algorithm into a class
As usual, after building the knn algorithm step by step, we wrap the whole thing into a class. The complete code is as follows:
import numpy as np
from collections import Counter
import random
import matplotlib.pyplot as plt
from sklearn import datasets
from sklearn.utils import shuffle
plt.rcParams['figure.figsize'] = (10.0, 8.0)
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
class KNearestNeighbor(object):
    def __init__(self):
        pass

    def train(self, X, y):
        # kNN has no real training step: just memorize the training data
        self.X_train = X
        self.y_train = y

    def compute_distances(self, X):
        num_test = X.shape[0]
        num_train = self.X_train.shape[0]
        dists = np.zeros((num_test, num_train))
        # vectorized pairwise L2 distances via (a - b)^2 = a^2 + b^2 - 2ab
        M = np.dot(X, self.X_train.T)
        te = np.square(X).sum(axis=1)
        tr = np.square(self.X_train).sum(axis=1)
        dists = np.sqrt(-2 * M + tr + te.reshape(-1, 1))
        return dists
    def predict_labels(self, dists, k=1):
        num_test = dists.shape[0]
        y_pred = np.zeros(num_test)
        for i in range(num_test):
            closest_y = []
            labels = self.y_train[np.argsort(dists[i, :])].flatten()
            closest_y = labels[0:k]
            c = Counter(closest_y)
            y_pred[i] = c.most_common(1)[0][0]
        return y_pred
    def cross_validation(self, X_train, y_train):
        num_folds = 5
        k_choices = [1, 3, 5, 8, 10, 12, 15, 20, 50, 100]
        X_train_folds = np.array_split(X_train, num_folds)
        y_train_folds = np.array_split(y_train, num_folds)
        k_to_accuracies = {}
        for k in k_choices:
            for fold in range(num_folds):
                # hold out one fold as a validation set
                validation_X_test = X_train_folds[fold]
                validation_y_test = y_train_folds[fold]
                temp_X_train = np.concatenate(X_train_folds[:fold] + X_train_folds[fold + 1:])
                temp_y_train = np.concatenate(y_train_folds[:fold] + y_train_folds[fold + 1:])
                self.train(temp_X_train, temp_y_train)
                temp_dists = self.compute_distances(validation_X_test)
                temp_y_test_pred = self.predict_labels(temp_dists, k=k)
                temp_y_test_pred = temp_y_test_pred.reshape((-1, 1))
                # check the accuracy on the held-out fold
                num_correct = np.sum(temp_y_test_pred == validation_y_test)
                num_test = validation_X_test.shape[0]
                accuracy = float(num_correct) / num_test
                k_to_accuracies[k] = k_to_accuracies.get(k, []) + [accuracy]
        # print out the computed accuracies
        for k in sorted(k_to_accuracies):
            for accuracy in k_to_accuracies[k]:
                print('k = %d, accuracy = %f' % (k, accuracy))
        accuracies_mean = np.array([np.mean(v) for k, v in sorted(k_to_accuracies.items())])
        best_k = k_choices[np.argmax(accuracies_mean)]
        print('Best k value: {}'.format(best_k))
        return best_k
    def create_train_test(self):
        iris = datasets.load_iris()
        X, y = shuffle(iris.data, iris.target, random_state=13)
        X = X.astype(np.float32)
        y = y.reshape((-1, 1))
        offset = int(X.shape[0] * 0.7)
        X_train, y_train = X[:offset], y[:offset]
        X_test, y_test = X[offset:], y[offset:]
        return X_train, y_train, X_test, y_test
if __name__ == '__main__':
    knn_classifier = KNearestNeighbor()
    X_train, y_train, X_test, y_test = knn_classifier.create_train_test()
    best_k = knn_classifier.cross_validation(X_train, y_train)
    # re-train on the full training set before predicting on the test set,
    # since cross_validation leaves the classifier fitted on the last fold split
    knn_classifier.train(X_train, y_train)
    dists = knn_classifier.compute_distances(X_test)
    y_test_pred = knn_classifier.predict_labels(dists, k=best_k)
    y_test_pred = y_test_pred.reshape((-1, 1))
    num_correct = np.sum(y_test_pred == y_test)
    accuracy = float(num_correct) / X_test.shape[0]
    print('Got %d / %d correct => accuracy: %f' % (num_correct, X_test.shape[0], accuracy))
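As an optional sanity check (not part of the original walkthrough), we can append a few lines to the end of the __main__ block and compare our result against scikit-learn's KNeighborsClassifier on the same split; the two accuracies should be very close. This assumes scikit-learn is available, which it already is since we import it for the iris data:

    # cross-check against scikit-learn's reference kNN implementation
    from sklearn.neighbors import KNeighborsClassifier
    sk_knn = KNeighborsClassifier(n_neighbors=best_k)
    sk_knn.fit(X_train, y_train.ravel())
    print('sklearn accuracy: %f' % sk_knn.score(X_test, y_test.ravel()))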
That wraps up our treatment of the knn algorithm.