AI Miscellany: Deriving BP by Hand
BP is short for Back Propagation, and it is the theoretical core of deep-learning neural networks. This article builds two examples to derive the BP process by hand.
I. The Chain Rule
The chain rule is the core of BP. It comes in two cases, explained one by one below.
1. The single-variable case
In the single-variable case the chain rule is fairly simple. Suppose we have the following two functions:
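A standard way to write this (the symbols g and h are generic placeholders, following the usual presentation of the chain rule):

$$y = g(x), \qquad z = h(y)$$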
A change in x will then ultimately affect the value of z, which is expressed mathematically as follows:
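(In Leibniz notation, given the composition above:)

$$\frac{dz}{dx} = \frac{dz}{dy} \cdot \frac{dy}{dx}$$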
2. The multivariable case
In the multivariable case the chain rule is slightly more involved. Suppose we have the following three functions:
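In the standard presentation (the symbols below are generic placeholders: s is the shared input, and g, h, k are the three functions):

$$x = g(s), \qquad y = h(s), \qquad z = k(x, y)$$

A change in s now reaches z along two paths, through x and through y, so the two contributions are summed:

$$\frac{dz}{ds} = \frac{\partial z}{\partial x} \cdot \frac{dx}{ds} + \frac{\partial z}{\partial y} \cdot \frac{dy}{ds}$$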
II. A Simple Case with Only One Weight
I built a simple network that corresponds to the first case of the chain rule described above, as shown in the figure below:
Figure 1
Circles denote leaf nodes and squares denote non-leaf nodes. Each non-leaf node is defined as follows, and the forward pass during training is computed according to these formulas:
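Reconstructed from the forward() code further below (the original equation numbering is not reproduced here):

$$y_1 = w_1 \cdot x + b_1$$

$$y_2 = w_2 \cdot y_1$$

$$y_3 = 2 \cdot y_2$$

$$loss = (y_3 - label)^2$$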
First, run one forward pass; this produces the values of y1, y2, y3, and loss.
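With the concrete values used in the code below (x = 1, w1 = 10, b1 = 1, w2 = 20, label = 500), this forward pass gives:

$$y_1 = 10 \cdot 1 + 1 = 11, \qquad y_2 = 20 \cdot 11 = 220, \qquad y_3 = 2 \cdot 220 = 440, \qquad loss = (440 - 500)^2 = 3600$$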
Then run one backward pass to obtain the gradient of each parameter, and use formulas (13), (14), (15) above to update the parameter values (the SGD update rule, sketched below).
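What the step() call in the code below applies is the plain SGD update; with learning rate $\eta$ it reads:

$$w_1 \leftarrow w_1 - \eta \cdot \frac{\partial loss}{\partial w_1}, \qquad b_1 \leftarrow b_1 - \eta \cdot \frac{\partial loss}{\partial b_1}, \qquad w_2 \leftarrow w_2 - \eta \cdot \frac{\partial loss}{\partial w_2}$$

In the code below, $\eta = 0.005$.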
Now let's walk through how the gradients are computed during backpropagation. Since gradient values are computed from back to front, we start with the gradient of w2:
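Spelling this out along the path loss → y3 → y2 → w2 (the numbers use the forward values computed above, and they match what the code below prints):

$$\frac{\partial loss}{\partial w_2} = \frac{\partial loss}{\partial y_3} \cdot \frac{\partial y_3}{\partial y_2} \cdot \frac{\partial y_2}{\partial w_2} = 2(y_3 - label) \cdot 2 \cdot y_1 = 2 \cdot (440 - 500) \cdot 2 \cdot 11 = -2640$$

Continuing backwards, w1 and b1 both reuse the gradient that has already reached y1:

$$\frac{\partial loss}{\partial y_1} = 2(y_3 - label) \cdot 2 \cdot w_2 = -120 \cdot 2 \cdot 20 = -4800$$

$$\frac{\partial loss}{\partial w_1} = \frac{\partial loss}{\partial y_1} \cdot x = -4800 \cdot 1 = -4800, \qquad \frac{\partial loss}{\partial b_1} = \frac{\partial loss}{\partial y_1} \cdot 1 = -4800$$

With learning rate 0.005, the SGD step then gives $w_1 = 10 + 24 = 34$, $b_1 = 1 + 24 = 25$, $w_2 = 20 + 13.2 = 33.2$, which is exactly what the run below prints.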
import oneflow as of
import oneflow.nn as nn
import oneflow.optim as optim

class Sample(nn.Module):
    def __init__(self):
        super(Sample, self).__init__()
        self.w1 = of.tensor(10.0, dtype=of.float, requires_grad=True)
        self.b1 = of.tensor(1.0, dtype=of.float, requires_grad=True)
        self.w2 = of.tensor(20.0, dtype=of.float, requires_grad=True)
        self.loss = nn.MSELoss()

    def parameters(self):
        return [self.w1, self.b1, self.w2]

    def forward(self, x, label):
        y1 = self.w1 * x + self.b1
        y2 = y1 * self.w2
        y3 = 2 * y2
        return self.loss(y3, label)

model = Sample()

optimizer = optim.SGD(model.parameters(), lr=0.005)
data = of.tensor(1.0, dtype=of.float)
label = of.tensor(500.0, dtype=of.float)
loss = model(data, label)
print("------------before backward()---------------")
print("w1 =", model.w1)
print("b1 =", model.b1)
print("w2 =", model.w2)
print("w1.grad =", model.w1.grad)
print("b1.grad =", model.b1.grad)
print("w2.grad =", model.w2.grad)

loss.backward()
print("------------after backward()---------------")
print("w1 =", model.w1)
print("b1 =", model.b1)
print("w2 =", model.w2)
print("w1.grad =", model.w1.grad)
print("b1.grad =", model.b1.grad)
print("w2.grad =", model.w2.grad)

optimizer.step()
print("------------after step()---------------")
print("w1 =", model.w1)
print("b1 =", model.b1)
print("w2 =", model.w2)
print("w1.grad =", model.w1.grad)
print("b1.grad =", model.b1.grad)
print("w2.grad =", model.w2.grad)

optimizer.zero_grad()
print("------------after zero_grad()---------------")
print("w1 =", model.w1)
print("b1 =", model.b1)
print("w2 =", model.w2)
print("w1.grad =", model.w1.grad)
print("b1.grad =", model.b1.grad)
print("w2.grad =", model.w2.grad)

This code runs only one forward pass and one backward pass, then calls step() to update the parameters, and finally calls zero_grad() to clear the gradients computed in this round of backward(). The output is as follows:
------------before backward()---------------
w1 = tensor(10., requires_grad=True)
b1 = tensor(1., requires_grad=True)
w2 = tensor(20., requires_grad=True)
w1.grad = None
b1.grad = None
w2.grad = None
------------after backward()---------------
w1 = tensor(10., requires_grad=True)
b1 = tensor(1., requires_grad=True)
w2 = tensor(20., requires_grad=True)
w1.grad = tensor(-4800.)
b1.grad = tensor(-4800.)
w2.grad = tensor(-2640.)
------------after step()---------------
w1 = tensor(34., requires_grad=True)
b1 = tensor(25., requires_grad=True)
w2 = tensor(33.2000, requires_grad=True)
w1.grad = tensor(-4800.)
b1.grad = tensor(-4800.)
w2.grad = tensor(-2640.)
------------after zero_grad()---------------
w1 = tensor(34., requires_grad=True)
b1 = tensor(25., requires_grad=True)
w2 = tensor(33.2000, requires_grad=True)
w1.grad = tensor(0.)
b1.grad = tensor(0.)
w2.grad = tensor(0.)

III. A Case with Multiple Weights, Using conv as an Example
Let's take a very simple conv as an example. Its attributes are as follows:
| input | kernel | stride | pad | output |
|---|---|---|---|---|
| ih=3, iw=3, ic=1 | oc=1, kh=2, kw=2, ic=1 | sh=1, sw=1 | pad_l=0, pad_t=0, pad_r=0, pad_b=0 | oh=2, ow=2, oc=1 |
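As a quick check, the output size follows from the usual convolution shape formula:

$$o_h = \frac{i_h + pad_t + pad_b - k_h}{s_h} + 1 = \frac{3 + 0 + 0 - 2}{1} + 1 = 2, \qquad o_w = \frac{i_w + pad_l + pad_r - k_w}{s_w} + 1 = 2$$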
As shown in the figure below:
Figure 2
Assume that the network structure in this example is as shown in the following figure:
Figure 3
In this simple network, the z node is an avg-pooling operation with a 2x2 kernel, and the loss is the mean squared error. The corresponding formulas are given below:
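Reconstructed from the code below (x is the 3x3 input, w the 2x2 conv kernel with no bias, indices starting from 0), the forward pass is:

$$y_1[i,j] = \sum_{m=0}^{1} \sum_{n=0}^{1} w[m,n] \cdot x[i+m,\ j+n], \qquad i, j \in \{0, 1\}$$

$$z = \frac{1}{4} \sum_{i=0}^{1} \sum_{j=0}^{1} y_1[i,j], \qquad loss = (z - label)^2$$

so the gradient of the loss with respect to each kernel weight is:

$$\frac{\partial loss}{\partial w[m,n]} = 2(z - label) \cdot \frac{1}{4} \sum_{i=0}^{1} \sum_{j=0}^{1} x[i+m,\ j+n]$$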
import oneflow as of
import oneflow.nn as nn
import oneflow.optim as optim

class Sample(nn.Module):
    def __init__(self):
        super(Sample, self).__init__()
        self.op1 = nn.Conv2d(in_channels=1, out_channels=1, kernel_size=(2, 2), bias=False)
        self.op2 = nn.AvgPool2d(kernel_size=(2, 2))
        self.loss = nn.MSELoss()

    def forward(self, x, label):
        y1 = self.op1(x)
        y2 = self.op2(y1)
        return self.loss(y2, label)

model = Sample()

optimizer = optim.SGD(model.parameters(), lr=0.005)
data = of.randn(1, 1, 3, 3)
label = of.randn(1, 1, 1, 1)
loss = model(data, label)
print("------------before backward()---------------")
param = model.parameters()
print("w =", next(param))

loss.backward()
print("------------after backward()---------------")
param = model.parameters()
print("w =", next(param))

optimizer.step()
print("------------after step()---------------")
param = model.parameters()
print("w =", next(param))

optimizer.zero_grad()
print("------------after zero_grad()---------------")
param = model.parameters()
print("w =", next(param))

The output is as follows (note: the input, parameters, and label are all random, so the results will differ from run to run):
------------before backward()---------------
w = tensor([[[[ 0.2621, -0.2583], [-0.1751, -0.0839]]]], dtype=oneflow.float32, grad_fn=<accumulate_grad>)
------------after backward()---------------
w = tensor([[[[ 0.2621, -0.2583], [-0.1751, -0.0839]]]], dtype=oneflow.float32, grad_fn=<accumulate_grad>)
------------after step()---------------
w = tensor([[[[ 0.2587, -0.2642], [-0.1831, -0.0884]]]], dtype=oneflow.float32, grad_fn=<accumulate_grad>)
------------after zero_grad()---------------
w = tensor([[[[ 0.2587, -0.2642], [-0.1831, -0.0884]]]], dtype=oneflow.float32, grad_fn=<accumulate_grad>)
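To tie the gradient formula from the derivation above back to the framework, here is a small sanity-check sketch of my own (it is not part of the original article and assumes the same layer configuration as the code above): it recomputes ∂loss/∂w[m,n] by hand and compares it with the gradient that autograd produces.

import oneflow as of
import oneflow.nn as nn

# Same configuration as the example above: 3x3 input, 2x2 conv kernel without
# bias, 2x2 average pooling, MSE loss.
conv = nn.Conv2d(in_channels=1, out_channels=1, kernel_size=(2, 2), bias=False)
pool = nn.AvgPool2d(kernel_size=(2, 2))
mse = nn.MSELoss()

x = of.randn(1, 1, 3, 3)
label = of.randn(1, 1, 1, 1)

y = pool(conv(x))          # this is z in the formulas above, shape (1, 1, 1, 1)
loss = mse(y, label)
loss.backward()

z = y.item()
for m in range(2):
    for n in range(2):
        # Hand-derived formula: dloss/dw[m,n] = 2*(z - label) * 1/4 * sum_{i,j} x[i+m, j+n]
        s = sum(x[0, 0, i + m, j + n].item() for i in range(2) for j in range(2))
        manual = 2.0 * (z - label.item()) * 0.25 * s
        autograd = conv.weight.grad[0, 0, m, n].item()
        print(f"w[{m},{n}]: manual = {manual:.6f}, autograd = {autograd:.6f}")

The two columns printed for each weight should match up to floating-point noise.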
IV. Summary and Reference

The main references for this article are Hung-yi Lee's (李宏毅) lecture videos. The cover image is cropped from the video in reference 1 below, and the chain-rule material in Section I is also organized from that video. Sections II and III are two small examples that I constructed and worked through myself based on my own understanding. The links are as follows:
Cover screenshot link: http://speech.ee.ntu.edu.tw/~tlkagk/courses/MLDS_2015_2/Lecture/DNN%20backprop.ecm.mp4/index.html
Old course homepage: http://speech.ee.ntu.edu.tw/~tlkagk/courses.html
New homepage: https://speech.ee.ntu.edu.tw/~hylee/index.php
YouTube channel: https://www.youtube.com/c/HungyiLeeNTU