tfe: linear fitting with a Keras Model, and linear fitting with manually handled gradients






Simple linear fitting, handling the gradients ourselves: we generate noisy data around y = 3x + 2, compute the gradients of the mean-squared-error loss with tf.GradientTape, and apply plain gradient-descent updates to W and B.
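
For reference (a standard derivation, not part of the original post), the gradients that the tape returns for this loss, and the update rule the loop applies, are:

\[
L(W, B) = \frac{1}{N}\sum_{i=1}^{N}\bigl(W x_i + B - y_i\bigr)^2
\]
\[
\frac{\partial L}{\partial W} = \frac{2}{N}\sum_{i=1}^{N}\bigl(W x_i + B - y_i\bigr)x_i,
\qquad
\frac{\partial L}{\partial B} = \frac{2}{N}\sum_{i=1}^{N}\bigl(W x_i + B - y_i\bigr),
\qquad
W \leftarrow W - \eta\,\frac{\partial L}{\partial W},
\quad
B \leftarrow B - \eta\,\frac{\partial L}{\partial B}
\]

with learning rate η = 0.1 in the code below.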

import tensorflow as tf

tf.enable_eager_execution()

# A toy dataset of points around 3 * x + 2
NUM_EXAMPLES = 2000
training_inputs = tf.random_normal([NUM_EXAMPLES])
noise = tf.random_normal([NUM_EXAMPLES])
training_outputs = training_inputs * 3 + 2 + noise

def prediction(input, weight, bias):
    return input * weight + bias

# A loss function using mean-squared error
def loss(weights, biases):
    error = prediction(training_inputs, weights, biases) - training_outputs
    return tf.reduce_mean(tf.square(error))

# Return the derivative of loss with respect to weight and bias
def grad(weights, biases):
    with tf.GradientTape() as tape:
        loss_value = loss(weights, biases)
    return tape.gradient(loss_value, [weights, biases])

train_steps = 200
learning_rate = 0.1
# Start with arbitrary values for W and B on the same batch of data
W = tf.Variable(5.)
B = tf.Variable(10.)

print("init loss ", loss(W, B))

for i in range(train_steps):
    dW, dB = grad(W, B)
    W.assign_sub(dW * learning_rate)
    B.assign_sub(dB * learning_rate)
    if i % 20 == 0:
        print("Loss ", i, loss(W, B))

print("Final loss ", loss(W, B))
print("w,b ", W.numpy(), B.numpy())

Result

init loss  tf.Tensor(68.62634, shape=(), dtype=float32)
Loss  0 tf.Tensor(44.42792, shape=(), dtype=float32)
Loss  20 tf.Tensor(1.0371987, shape=(), dtype=float32)
Loss  40 tf.Tensor(1.0309802, shape=(), dtype=float32)
Loss  60 tf.Tensor(1.0309793, shape=(), dtype=float32)
Loss  80 tf.Tensor(1.0309793, shape=(), dtype=float32)
Loss  100 tf.Tensor(1.0309793, shape=(), dtype=float32)
Loss  120 tf.Tensor(1.0309793, shape=(), dtype=float32)
Loss  140 tf.Tensor(1.0309793, shape=(), dtype=float32)
Loss  160 tf.Tensor(1.0309793, shape=(), dtype=float32)
Loss  180 tf.Tensor(1.0309793, shape=(), dtype=float32)
Final loss  tf.Tensor(1.0309793, shape=(), dtype=float32)
w,b  2.9830043 2.0019853
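
The loss plateaus near 1.03 rather than 0 because the targets carry unit-variance Gaussian noise, so the best achievable MSE is roughly E[noise²] ≈ 1. A quick sanity check (reusing the noise tensor defined above):

print(tf.reduce_mean(tf.square(noise)))  # ≈ 1.0: the irreducible part of the loss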

Using an optimizer and Keras

Subclass Keras's Model class, defining the parameters and the forward pass yourself, with the loss as a separate function.

Then, in the training loop, simply let an optimizer apply the updates.

In eager mode the optimizer can also be driven by a function: each step it calls that function and updates the parameters based on the loss it returns. The full example below instead computes the gradients explicitly and passes them to apply_gradients; a sketch of the function style follows.
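
A minimal sketch of that function style, assuming the same Model class, loss function, and training data defined in the full example below (in TF 1.x eager mode, minimize accepts a zero-argument callable):

model = Model()
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.01)
for i in range(300):
    # minimize() runs the callable under a GradientTape, computes the
    # gradients of the returned loss, and applies them to the variables
    optimizer.minimize(lambda: loss(model, training_inputs, training_outputs))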

import tensorflow as tf

tf.enable_eager_execution()

class Model(tf.keras.Model):
    def __init__(self):
        super(Model, self).__init__()
        self.W = tf.Variable(5., name='weight')
        self.B = tf.Variable(10., name='bias')

    def call(self, inputs):
        return inputs * self.W + self.B

# A toy dataset of points around 3 * x + 2
NUM_EXAMPLES = 10000
training_inputs = tf.random_normal([NUM_EXAMPLES])
noise = tf.random_normal([NUM_EXAMPLES])
training_outputs = training_inputs * 3 + 2 + noise

# The loss function to be optimized
def loss(model, inputs, targets):
    error = model(inputs) - targets
    return tf.reduce_mean(tf.square(error))

def grad(model, inputs, targets):
    with tf.GradientTape() as tape:
        loss_value = loss(model, inputs, targets)
    return tape.gradient(loss_value, [model.W, model.B])

# Define:
# 1. A model.
# 2. Derivatives of a loss function with respect to model parameters.
# 3. A strategy for updating the variables based on the derivatives.
model = Model()
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.01)

print('init loss ', loss(model, training_inputs, training_outputs))

# Training loop
for i in range(300):
    grads = grad(model, training_inputs, training_outputs)
    optimizer.apply_gradients(zip(grads, [model.W, model.B]),
                              global_step=tf.train.get_or_create_global_step())
    if i % 20 == 0:
        print('loss ', i, loss(model, training_inputs, training_outputs))

print("Final loss ", loss(model, training_inputs, training_outputs))
print('w ,b ', model.W.numpy(), model.B.numpy())

Output

init loss  tf.Tensor(68.86918, shape=(), dtype=float32)
loss  0 tf.Tensor(66.18292, shape=(), dtype=float32)
loss  20 tf.Tensor(30.068779, shape=(), dtype=float32)
loss  40 tf.Tensor(13.970341, shape=(), dtype=float32)
loss  60 tf.Tensor(6.79418, shape=(), dtype=float32)
loss  80 tf.Tensor(3.5952754, shape=(), dtype=float32)
loss  100 tf.Tensor(2.169298, shape=(), dtype=float32)
loss  120 tf.Tensor(1.5336366, shape=(), dtype=float32)
loss  140 tf.Tensor(1.2502754, shape=(), dtype=float32)
loss  160 tf.Tensor(1.1239598, shape=(), dtype=float32)
loss  180 tf.Tensor(1.0676513, shape=(), dtype=float32)
loss  200 tf.Tensor(1.0425501, shape=(), dtype=float32)
loss  220 tf.Tensor(1.0313606, shape=(), dtype=float32)
loss  240 tf.Tensor(1.0263724, shape=(), dtype=float32)
loss  260 tf.Tensor(1.024149, shape=(), dtype=float32)
loss  280 tf.Tensor(1.0231577, shape=(), dtype=float32)
Final loss  tf.Tensor(1.0227305, shape=(), dtype=float32)
w ,b  3.0170832 2.0243566
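
Both runs recover parameters close to the true values w = 3, b = 2, with the same ≈1.0 noise floor in the loss; the Keras version converges more slowly only because it uses a smaller learning rate (0.01 vs 0.1). Note that tf.enable_eager_execution, tf.random_normal, and tf.train.GradientDescentOptimizer are TF 1.x APIs. Under TF 2.x, where eager execution is the default, the same loop looks roughly like this (a sketch, not a drop-in port):

import tensorflow as tf  # TF 2.x

model = Model()  # the same subclassed Keras model as above
optimizer = tf.keras.optimizers.SGD(learning_rate=0.01)
# data as above, with tf.random.normal instead of tf.random_normal

for i in range(300):
    with tf.GradientTape() as tape:
        loss_value = tf.reduce_mean(tf.square(model(training_inputs) - training_outputs))
    grads = tape.gradient(loss_value, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))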
