TensorFlow: building better-structured layers and models
In this post we introduce custom layers.
We recommend using tf.keras as a high-level API for building neural networks. That said, most TensorFlow APIs are usable with eager execution.
import tensorflow as tf
tf.enable_eager_execution()
Layers: common sets of useful operations
Most of the time, when writing code for machine learning models you want to operate at a higher level of abstraction than individual operations and the manipulation of individual variables.
Many machine learning models are expressible as the composition and stacking of relatively simple layers. TensorFlow provides both a set of common layers and easy ways to write your own application-specific layers, either from scratch or as a composition of existing layers.
TensorFlow includes the full Keras API in the tf.keras package, and the Keras layers are very useful when building your own models.
# In the tf.keras.layers package, layers are objects. To construct a layer,
# simply construct the object. Most layers take as a first argument the number
# of output dimensions / channels.
layer = tf.keras.layers.Dense(100)
# The number of input dimensions is often unnecessary, as it can be inferred
# the first time the layer is used, but it can be provided if you want to
# specify it manually, which is useful in some complex models.
layer = tf.keras.layers.Dense(10, input_shape=(None, 5))
The full list of pre-existing layers can be seen in the documentation. It includes Dense (a fully-connected layer), Conv2D, LSTM, BatchNormalization, Dropout, and many others.
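As a minimal sketch (the values here are illustrative, not from the original tutorial), note that some of these layers behave differently during training and inference. Dropout, for example, only zeroes out units when called with training=True:

```python
import tensorflow as tf

# Dropout is the identity at inference time, but randomly zeroes and
# rescales units during training.
drop = tf.keras.layers.Dropout(0.5)
x = tf.ones([1, 4])

inference_out = drop(x, training=False)  # unchanged input
training_out = drop(x, training=True)    # randomly zeroed and rescaled
```

The same training flag shows up again later in this post, in the ResNet block's call method.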
# To use a layer, simply call it.
layer(tf.zeros([10, 5]))
# Layers have many useful methods. For example, you can inspect all variables
# in a layer by calling layer.variables. In this case a fully-connected layer
# will have variables for weights and biases.
layer.variables
# The variables are also accessible through nice accessors
layer.kernel, layer.bias
Implementing custom layers
The best way to implement your own layer is to extend the tf.keras.Layer class and implement:
* __init__, where you can do all input-independent initialization
* build, where you know the shapes of the input tensors and can do the rest of the initialization
* call, where you do the forward computation
Note that you don't have to wait until build is called to create your variables; you can also create them in __init__. However, the advantage of creating them in build is that it enables late variable creation based on the shape of the inputs the layer will operate on. Creating variables in __init__, on the other hand, means that the shapes required to create the variables must be specified explicitly.
class MyDenseLayer(tf.keras.layers.Layer):
  def __init__(self, num_outputs):
    super(MyDenseLayer, self).__init__()
    self.num_outputs = num_outputs

  def build(self, input_shape):
    self.kernel = self.add_variable("kernel",
                                    shape=[int(input_shape[-1]),
                                           self.num_outputs])

  def call(self, input):
    return tf.matmul(input, self.kernel)

layer = MyDenseLayer(10)
print(layer(tf.zeros([10, 5])))
print(layer.variables)
tf.Tensor(
[[0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
[0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
[0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
[0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
[0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
[0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
[0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
[0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
[0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
[0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]], shape=(10, 10), dtype=float32)
[<tf.Variable 'my_dense_layer/kernel:0' shape=(5, 10) dtype=float32, numpy=
array([[ 0.2774077 , -0.0018627 , 0.35655916, 0.5582008 , 0.17234564,
-0.15487313, -0.417266 , -0.50856596, -0.5074028 , 0.01600116],
[ 0.534511 , -0.4714492 , -0.23187858, 0.53936654, 0.53503364,
-0.617422 , -0.6192259 , 0.29145825, 0.0223884 , -0.5270795 ],
[-0.2874091 , 0.16588253, 0.0788359 , -0.1317451 , 0.2750584 ,
-0.5630307 , -0.07108849, -0.38031346, -0.30722007, -0.5128627 ],
[-0.5630339 , -0.4541433 , -0.3941666 , -0.26502702, 0.10295987,
-0.41846734, -0.18145484, 0.28857005, 0.0117566 , 0.10138774],
[ 0.5869536 , -0.35585892, -0.32530165, 0.52835554, -0.29882053,
-0.26029676, -0.2692049 , -0.2949 , 0.13486022, -0.40910304]],
dtype=float32)>]
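For contrast, here is a minimal sketch of the __init__ alternative described above. The class name EagerDense is hypothetical, and it uses the generic add_weight API; the point is that the input dimension must now be passed in explicitly instead of being inferred in build:

```python
import tensorflow as tf

class EagerDense(tf.keras.layers.Layer):
  def __init__(self, num_inputs, num_outputs):
    super(EagerDense, self).__init__()
    # The variable is created up front, so the input dimension
    # (num_inputs) has to be specified by the caller.
    self.kernel = self.add_weight(name="kernel",
                                  shape=[num_inputs, num_outputs])

  def call(self, inputs):
    return tf.matmul(inputs, self.kernel)

layer = EagerDense(5, 10)
print(layer(tf.zeros([10, 5])).shape)  # (10, 10)
```

This mirrors how tf.keras.layers.Dense behaves when you pass input_shape at construction time.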
Overall code is easier to read and maintain if it uses standard layers wherever possible, since other readers will already be familiar with their behavior. If you want a layer that is not present in tf.keras.layers or tf.contrib.layers, consider filing a GitHub issue or, better yet, sending a pull request.
Models: composing layers
Many interesting layer-like things in machine learning models are implemented by composing existing layers. For example, each residual block in a ResNet is a composition of convolutions, batch normalizations, and a shortcut connection.
The main class used when creating a layer-like thing that contains other layers is tf.keras.Model; you implement one by inheriting from tf.keras.Model.
class ResnetIdentityBlock(tf.keras.Model):
  def __init__(self, kernel_size, filters):
    super(ResnetIdentityBlock, self).__init__(name='')
    filters1, filters2, filters3 = filters

    self.conv2a = tf.keras.layers.Conv2D(filters1, (1, 1))
    self.bn2a = tf.keras.layers.BatchNormalization()

    self.conv2b = tf.keras.layers.Conv2D(filters2, kernel_size, padding='same')
    self.bn2b = tf.keras.layers.BatchNormalization()

    self.conv2c = tf.keras.layers.Conv2D(filters3, (1, 1))
    self.bn2c = tf.keras.layers.BatchNormalization()

  def call(self, input_tensor, training=False):
    x = self.conv2a(input_tensor)
    x = self.bn2a(x, training=training)
    x = tf.nn.relu(x)

    x = self.conv2b(x)
    x = self.bn2b(x, training=training)
    x = tf.nn.relu(x)

    x = self.conv2c(x)
    x = self.bn2c(x, training=training)

    x += input_tensor
    return tf.nn.relu(x)

block = ResnetIdentityBlock(1, [1, 2, 3])
print(block(tf.zeros([1, 2, 3, 3])))
print([x.name for x in block.variables])
tf.Tensor(
[[[[0. 0. 0.]
[0. 0. 0.]
[0. 0. 0.]]
[[0. 0. 0.]
[0. 0. 0.]
[0. 0. 0.]]]], shape=(1, 2, 3, 3), dtype=float32)
['resnet_identity_block/conv2d/kernel:0', 'resnet_identity_block/conv2d/bias:0', 'resnet_identity_block/batch_normalization/gamma:0', 'resnet_identity_block/batch_normalization/beta:0', 'resnet_identity_block/conv2d_1/kernel:0', 'resnet_identity_block/conv2d_1/bias:0', 'resnet_identity_block/batch_normalization_1/gamma:0', 'resnet_identity_block/batch_normalization_1/beta:0', 'resnet_identity_block/conv2d_2/kernel:0', 'resnet_identity_block/conv2d_2/bias:0', 'resnet_identity_block/batch_normalization_2/gamma:0', 'resnet_identity_block/batch_normalization_2/beta:0', 'resnet_identity_block/batch_normalization/moving_mean:0', 'resnet_identity_block/batch_normalization/moving_variance:0', 'resnet_identity_block/batch_normalization_1/moving_mean:0', 'resnet_identity_block/batch_normalization_1/moving_variance:0', 'resnet_identity_block/batch_normalization_2/moving_mean:0', 'resnet_identity_block/batch_normalization_2/moving_variance:0']
Much of the time, however, models that compose many layers simply call one layer after the other. This can be done in very little code using tf.keras.Sequential.
my_seq = tf.keras.Sequential([tf.keras.layers.Conv2D(1, (1, 1)),
                              tf.keras.layers.BatchNormalization(),
                              tf.keras.layers.Conv2D(2, 1, padding='same'),
                              tf.keras.layers.BatchNormalization(),
                              tf.keras.layers.Conv2D(3, (1, 1)),
                              tf.keras.layers.BatchNormalization()])
my_seq(tf.zeros([1, 2, 3, 3]))
Next steps
You can now go back to the previous notebook and adapt the linear regression example to use the better-structured layers and models introduced here.
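As a rough sketch of that exercise (the shapes, data, and learning rate below are all illustrative assumptions, not the notebook's actual code), linear regression can be expressed as a single Dense unit and trained with a gradient step:

```python
import tensorflow as tf

# A linear model y = Wx + b as one Dense layer with a single output.
model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(3,))])

# Made-up data: one example with three features and its target.
x = tf.constant([[1.0, 2.0, 3.0]])
y_true = tf.constant([[6.0]])

# One gradient step on a mean-squared-error loss.
optimizer = tf.keras.optimizers.SGD(learning_rate=0.01)
with tf.GradientTape() as tape:
    loss = tf.reduce_mean(tf.square(model(x) - y_true))
optimizer.apply_gradients(zip(tape.gradient(loss, model.trainable_variables),
                              model.trainable_variables))
```

Swapping the Dense layer for a custom layer like MyDenseLayer above is the structural change this post is suggesting.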