PyTorch Tutorial: Testing a Deep Neural Network in PyTorch
We will plot a precise decision boundary to separate our classification results, and here we will also test our model. The steps to test the model are as follows:
Step 1:
def plot_decision_boundary(x, y):
Step 2:
x_span=np.linspace(min(x[:,0]),max(x[:,0]))
y_span=np.linspace(min(x[:,1]),max(x[:,1]))
Step 3:
xx,yy=np.meshgrid(x_span,y_span)
The meshgrid() function takes the vectors x_span and y_span as arguments. Both vectors contain 50 elements, and the function returns two 2-D arrays, each a 50x50 matrix. In the first matrix, every row is a repeated copy of the x_span vector; it is returned as the variable xx. The same applies to y_span: in the second 50x50 matrix, every column is a repeated copy of the y_span vector, and it is returned as the variable yy.
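The shapes described above can be verified with a small NumPy sketch (the spans below are illustrative values, not the tutorial's data):

```python
import numpy as np

# np.linspace returns 50 evenly spaced values by default
x_span = np.linspace(-1.0, 1.0)   # shape (50,)
y_span = np.linspace(-1.0, 1.0)   # shape (50,)

xx, yy = np.meshgrid(x_span, y_span)
print(xx.shape, yy.shape)  # (50, 50) (50, 50)

# Each row of xx repeats x_span; each column of yy repeats y_span
print(np.array_equal(xx[0], x_span))     # True
print(np.array_equal(yy[:, 0], y_span))  # True
```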
Step 4:
print(xx.ravel(),yy.ravel())
Step 5:
grid=np.c_[xx.ravel(),yy.ravel()]
grid=torch.Tensor(np.c_[xx.ravel(),yy.ravel()])
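A minimal sketch of what ravel() and np.c_ are doing here, using a small 3x3 grid for illustration instead of the tutorial's 50x50 one:

```python
import numpy as np

xx, yy = np.meshgrid(np.linspace(0, 1, 3), np.linspace(0, 1, 3))
# ravel() flattens each (3, 3) grid into a 1-D array of 9 values;
# np.c_ stacks the two flat arrays column-wise into 9 (x, y) coordinate pairs
grid = np.c_[xx.ravel(), yy.ravel()]
print(grid.shape)  # (9, 2)
print(grid[0])     # [0. 0.]
```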
Step 6:
model.forward(grid)
pred_func=model.forward(grid)
Step 7:
z=pred_func.view(xx.shape).numpy()
z=pred_func.view(xx.shape).detach().numpy()
Step 8:
plt.contourf(xx, yy,z)
Step 9:
plot_decision_boundary(x,y)
scatter_plot()
Step 10:
p1=torch.Tensor([0.25,0.25])
Step 11:
plt.plot(p1[0],p1[1],marker='o',markersize=5,color='red')
plt.plot(p1.numpy()[0],p1.numpy()[1],marker='o',markersize=5,color='red')
Step 12:
We can make a prediction for this point. We will predict the probability that it belongs to the positive region, i.e. class 1. We know that all the orange points are labeled 1 and all the blue points are labeled 0, so the probability can be determined as
print("Red point positive probability={}".format(model.forward(p1).item()))
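As a standalone illustration of how a sigmoid output is read as a class-1 probability and thresholded at 0.5 (the logit value below is hypothetical, not the model's actual output):

```python
import torch

# A sigmoid output lies in (0, 1) and is interpreted as the class-1 probability
logit = torch.tensor([1.2])        # hypothetical pre-activation value
prob = torch.sigmoid(logit)
print(prob.item())                 # ≈ 0.7685

# Class 1 (orange) if the probability is at least 0.5, else class 0 (blue)
label = 1 if prob.item() >= 0.5 else 0
print(label)  # 1
```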
Step 13:
def predict(self,x):
    pred=self.forward(x)
    if pred>=0.5:
        return 1
    else:
        return 0
Step 14:
print("Red point in class={}".format(model.predict(p1)))
Our model runs smoothly and classifies the random data accurately.
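One way to quantify "accurately" is to threshold the predicted probabilities and compare them with the true labels. The sketch below uses hypothetical probabilities rather than the model's actual outputs:

```python
import numpy as np

# Hypothetical predicted probabilities and true labels, for illustration only
probs = np.array([0.92, 0.08, 0.85, 0.60, 0.97])
labels = np.array([1, 0, 1, 0, 1])

preds = (probs >= 0.5).astype(int)    # threshold at 0.5, as in predict()
accuracy = (preds == labels).mean()   # fraction of correct predictions
print(accuracy)  # 0.8
```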
Complete Code
import torch
import numpy as np
import matplotlib.pyplot as plt
import torch.nn as nn
from sklearn import datasets

no_of_points=500
x,y=datasets.make_circles(n_samples=no_of_points,random_state=123,noise=0.1,factor=0.2)
xdata=torch.Tensor(x)
ydata=torch.Tensor(y).reshape(-1,1)  # reshape to (500, 1) to match the model output

def scatter_plot():
    plt.scatter(x[y==0,0],x[y==0,1])
    plt.scatter(x[y==1,0],x[y==1,1])
    plt.show()

class Deep_neural_network(nn.Module):
    def __init__(self,input_size, h1, output_size):
        super().__init__()
        self.linear=nn.Linear(input_size, h1)    # input layer connects to hidden layer
        self.linear1=nn.Linear(h1, output_size)  # hidden layer connects to output layer
    def forward(self,x):
        x=torch.sigmoid(self.linear(x))   # hidden-layer activation
        x=torch.sigmoid(self.linear1(x))  # output-layer activation
        return x                          # final prediction
    def predict(self,z):
        pred=self.forward(z)
        if pred>=0.5:
            return 1
        else:
            return 0

torch.manual_seed(2)
model=Deep_neural_network(2,4,1)  # 2 input nodes, 4 hidden nodes and 1 output node
print(list(model.parameters()))
criterion=nn.BCELoss()
optimizer=torch.optim.Adam(model.parameters(),lr=0.1)

epochs=1000
losses=[]
for i in range(epochs):
    ypred=model.forward(xdata)
    loss=criterion(ypred,ydata)
    print("epoch:",i,"loss:",loss.item())
    losses.append(loss.item())  # store the scalar, not the tensor with its graph
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

def plot_decision_boundary(x, y):
    x_span=np.linspace(min(x[:,0]),max(x[:,0]))
    y_span=np.linspace(min(x[:,1]),max(x[:,1]))
    xx,yy=np.meshgrid(x_span,y_span)
    grid=torch.Tensor(np.c_[xx.ravel(),yy.ravel()])
    pred_func=model.forward(grid)
    z=pred_func.view(xx.shape).detach().numpy()
    plt.contourf(xx,yy,z)

z1=0.25
z2=0.25
p1=torch.Tensor([z1,z2])
plt.plot(p1.numpy()[0],p1.numpy()[1],marker='o',markersize=5,color='red')
print("Red point positive probability={}".format(model.forward(p1).item()))
print("Red point in class={}".format(model.predict(p1)))
plot_decision_boundary(x,y)
scatter_plot()