【他山之石】The Right Way to Implement Binary Classification with a Neural Network in TensorFlow
“他山之石，可以攻玉” (stones from other hills may serve to polish jade): only by standing on the shoulders of giants can we see higher and go farther, and on the road of research a favorable wind helps us move ahead faster. To that end, we have collected and organized practical code links, datasets, software, and programming tips, and opened the “他山之石” column to help you ride the wind and waves and press boldly forward. Stay tuned.
Source: https://zhuanlan.zhihu.com/p/352062289
Reference: https://book.douban.com/subject/26976457/
Sample program
The required imports are as follows:
import tensorflow as tf
from numpy.random import RandomState
import matplotlib.pyplot as plt
Next, we simulate a dataset. Points are drawn uniformly from the unit square, and a point is labeled 1 if it falls inside the circle of radius sqrt(0.15) ≈ 0.387 centered at (0.5, 0.5), and 0 otherwise. (The constants BATCH_SIZE, DATASET_SIZE, and STEPS are defined in the complete code at the end.)

dataset_X = RandomState(1).rand(DATASET_SIZE, 2)
dataset_y = [[int((x1-0.5)**2+(x2-0.5)**2 < 0.15)] for x1, x2 in dataset_X]
Next, we define the weight matrices, the bias variables, and the input placeholders:
w1 = tf.Variable(tf.random_normal([2, 4], stddev=1, seed=1))  # hidden layer: 2 inputs -> 4 units
b1 = tf.Variable(tf.random_normal([1, 4], stddev=1, seed=1))
w2 = tf.Variable(tf.random_normal([4, 1], stddev=1, seed=1))  # output layer: 4 units -> 1 output
b2 = tf.Variable(tf.random_normal([1], stddev=1, seed=1))
_X = tf.placeholder(tf.float32, shape=(None, 2), name="x_input")  # batch of 2-D points
_y = tf.placeholder(tf.float32, shape=(None, 1), name="y_input")  # batch of 0/1 labels
The forward pass is a sigmoid hidden layer followed by a sigmoid output:
a = tf.sigmoid(tf.matmul(_X, w1) + b1)
y = tf.sigmoid(tf.matmul(a, w2) + b2)
The loss and the backpropagation step are defined as follows. The loss is the binary cross-entropy; the predictions are clipped into [1e-10, 1.0] so that the logarithm never sees 0:
cross_entropy = -tf.reduce_mean(
    _y * tf.log(tf.clip_by_value(y, 1e-10, 1.0)) +
    (1 - _y) * tf.log(tf.clip_by_value(1 - y, 1e-10, 1.0))
)
train_step = tf.train.AdamOptimizer(0.03).minimize(cross_entropy)
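As an aside, TensorFlow 1.x also ships a numerically stabler built-in for exactly this pattern, tf.nn.sigmoid_cross_entropy_with_logits, which takes the pre-sigmoid logits rather than probabilities. A minimal sketch of the equivalent loss, assuming the output layer is rewritten to expose its logits in a variable named y_logits (a name introduced here for illustration):

# Hypothetical rewrite: keep the raw logits and let TF apply the sigmoid internally.
y_logits = tf.matmul(a, w2) + b2   # pre-sigmoid output
y = tf.sigmoid(y_logits)           # probabilities, used for prediction
cross_entropy = tf.reduce_mean(
    tf.nn.sigmoid_cross_entropy_with_logits(labels=_y, logits=y_logits)
)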
Now we train the network. The indexing below cycles through the dataset in mini-batches: with BATCH_SIZE = 8 and DATASET_SIZE = 256, one pass over the data takes 32 steps, so STEPS = 10000 amounts to roughly 312 epochs. Every 2000 steps the cross-entropy over the whole dataset is printed, and after training the predictions are plotted next to the ground-truth labels:
with tf.Session() as sess:
    tf.global_variables_initializer().run()  # initialize w1, b1, w2, b2
    for i in range(STEPS):
        # Select the next mini-batch, cycling through the dataset.
        beg = (i * BATCH_SIZE) % DATASET_SIZE
        end = min(beg + BATCH_SIZE, DATASET_SIZE)
        sess.run(train_step, feed_dict={_X: dataset_X[beg:end], _y: dataset_y[beg:end]})
        if i % 2000 == 0:
            # Report the loss over the full dataset.
            total_cross_entropy = sess.run(
                cross_entropy, feed_dict={_X: dataset_X, _y: dataset_y}
            )
            print("After {:>5d} training_step(s), loss is {:.4f}".format(i, total_cross_entropy))
    predict = sess.run(y, feed_dict={_X: dataset_X})  # predicted probabilities

# Left: predicted classes (threshold 0.5); right: ground-truth labels.
plt.subplot(121)
for i, (x1, x2) in enumerate(dataset_X):
    plt.scatter(x1, x2, color=["orange", "blue"][int(predict[i][0] > 0.5)])
plt.subplot(122)
for i, (x1, x2) in enumerate(dataset_X):
    plt.scatter(x1, x2, color=["orange", "blue"][dataset_y[i][0]])
plt.show()
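Beyond eyeballing the two scatter plots, the classification accuracy can be computed in one line. A minimal sketch, assuming numpy is available alongside the code above:

import numpy as np

# Fraction of points whose thresholded prediction matches the label.
accuracy = np.mean((predict > 0.5).astype(int) == np.array(dataset_y))
print("Accuracy: {:.2%}".format(accuracy))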
The training output is as follows (the printouts of two separate runs are shown):
After     0 training_step(s), loss is 0.7133
After  2000 training_step(s), loss is 0.1970
After  4000 training_step(s), loss is 0.0976
After  6000 training_step(s), loss is 0.0678
After  8000 training_step(s), loss is 0.0486

After     0 training_step(s), loss is 0.7246
After  2000 training_step(s), loss is 0.4504
After  4000 training_step(s), loss is 0.3730
After  6000 training_step(s), loss is 0.3188
After  8000 training_step(s), loss is 0.2945
Finally, the complete code:
import tensorflow as tf
from numpy.random import RandomState
import matplotlib.pyplot as plt

# Network parameters: a 2-4-1 fully connected network with sigmoid activations.
w1 = tf.Variable(tf.random_normal([2, 4], stddev=1, seed=1))
b1 = tf.Variable(tf.random_normal([1, 4], stddev=1, seed=1))
w2 = tf.Variable(tf.random_normal([4, 1], stddev=1, seed=1))
b2 = tf.Variable(tf.random_normal([1], stddev=1, seed=1))

_X = tf.placeholder(tf.float32, shape=(None, 2), name="x_input")
_y = tf.placeholder(tf.float32, shape=(None, 1), name="y_input")

# Forward pass.
a = tf.sigmoid(tf.matmul(_X, w1) + b1)
y = tf.sigmoid(tf.matmul(a, w2) + b2)

# Binary cross-entropy loss, clipped to avoid log(0).
cross_entropy = -tf.reduce_mean(
    _y * tf.log(tf.clip_by_value(y, 1e-10, 1.0)) +
    (1 - _y) * tf.log(tf.clip_by_value(1 - y, 1e-10, 1.0))
)
train_step = tf.train.AdamOptimizer(0.03).minimize(cross_entropy)

# Simulated dataset: label 1 inside the circle of radius sqrt(0.15) around (0.5, 0.5).
BATCH_SIZE, DATASET_SIZE, STEPS = 8, 256, 10000
dataset_X = RandomState(1).rand(DATASET_SIZE, 2)
dataset_y = [[int((x1-0.5)**2+(x2-0.5)**2 < 0.15)] for x1, x2 in dataset_X]

with tf.Session() as sess:
    tf.global_variables_initializer().run()
    for i in range(STEPS):
        beg = (i * BATCH_SIZE) % DATASET_SIZE
        end = min(beg + BATCH_SIZE, DATASET_SIZE)
        sess.run(train_step, feed_dict={_X: dataset_X[beg:end], _y: dataset_y[beg:end]})
        if i % 2000 == 0:
            total_cross_entropy = sess.run(
                cross_entropy, feed_dict={_X: dataset_X, _y: dataset_y}
            )
            print("After {:>5d} training_step(s), loss is {:.4f}".format(i, total_cross_entropy))
    predict = sess.run(y, feed_dict={_X: dataset_X})

# Left: predicted classes; right: ground-truth labels.
plt.subplot(121)
for i, (x1, x2) in enumerate(dataset_X):
    plt.scatter(x1, x2, color=["orange", "blue"][int(predict[i][0] > 0.5)])
plt.subplot(122)
for i, (x1, x2) in enumerate(dataset_X):
    plt.scatter(x1, x2, color=["orange", "blue"][dataset_y[i][0]])
plt.show()
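The code above targets the TensorFlow 1.x graph-and-session API; tf.placeholder and tf.Session were removed in TensorFlow 2.x. For readers on TensorFlow 2.x, here is a minimal Keras sketch of the same model. The layer sizes, optimizer, learning rate, and loss mirror the code above, while details such as the epoch count are illustrative choices, not part of the original article:

import numpy as np
import tensorflow as tf
from numpy.random import RandomState

# Same simulated dataset as above.
dataset_X = RandomState(1).rand(256, 2)
dataset_y = np.array([[int((x1-0.5)**2+(x2-0.5)**2 < 0.15)] for x1, x2 in dataset_X])

# 2-4-1 network with sigmoid activations, as in the TF1 code.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(4, activation="sigmoid", input_shape=(2,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer=tf.keras.optimizers.Adam(0.03),
              loss="binary_crossentropy")
model.fit(dataset_X, dataset_y, batch_size=8, epochs=300, verbose=0)  # epoch count is an assumption
predict = model.predict(dataset_X)  # predicted probabilities, shape (256, 1)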