Monitoring training in real time and visualizing model architecture with TensorBoard


Problem description


I am learning to use TensorBoard with TensorFlow 2.0.

In particular, I would like to monitor the learning curves in real time and to visually inspect and communicate the architecture of my model.

Below I provide code for a reproducible example.

I have three questions:

Although I get the learning curves once training is over, I don't know what I should do to monitor them in real time.

The learning curve I get from TensorBoard does not agree with the plot from history.history. In fact, its reversals are bizarre and difficult to interpret.

I cannot make sense of the graph. I have trained a sequential model with 5 dense layers and dropout layers in between, but what TensorBoard shows me has many more elements in it.

My code is as follows:

from datetime import datetime
import matplotlib.pyplot as plt
import keras
from keras.datasets import boston_housing
from keras.layers import Input, Dense, Dropout
from keras.models import Model

(train_data, train_targets), (test_data, test_targets) = boston_housing.load_data()

inputs = Input(shape = (train_data.shape[1], ))
x1 = Dense(100, kernel_initializer = 'he_normal', activation = 'elu')(inputs)
x1a = Dropout(0.5)(x1)
x2 = Dense(100, kernel_initializer = 'he_normal', activation = 'elu')(x1a)
x2a = Dropout(0.5)(x2)
x3 = Dense(100, kernel_initializer = 'he_normal', activation = 'elu')(x2a)
x3a = Dropout(0.5)(x3)
x4 = Dense(100, kernel_initializer = 'he_normal', activation = 'elu')(x3a)
x4a = Dropout(0.5)(x4)
x5 = Dense(100, kernel_initializer = 'he_normal', activation = 'elu')(x4a)
predictions = Dense(1)(x5)
model = Model(inputs = inputs, outputs = predictions)

model.compile(optimizer = 'Adam', loss = 'mse')

logdir="logs\\fit\\" + datetime.now().strftime("%Y%m%d-%H%M%S")
tensorboard_callback = keras.callbacks.TensorBoard(log_dir=logdir)

history = model.fit(train_data, train_targets,
          batch_size= 32,
          epochs= 20,
          validation_data=(test_data, test_targets),
          shuffle=True,
          callbacks=[tensorboard_callback ])

plt.plot(history.history['loss'], label='loss')
plt.plot(history.history['val_loss'], label='val_loss')
plt.legend()
plt.show()

Recommended answer

I think what you can do is to launch TensorBoard before calling .fit() on your model. If you are using IPython (Jupyter or Colab) and have already installed TensorBoard, here's how you can modify your code:

from datetime import datetime
import matplotlib.pyplot as plt
import keras
from keras.datasets import boston_housing
from keras.layers import Input, Dense, Dropout
from keras.models import Model

(train_data, train_targets), (test_data, test_targets) = boston_housing.load_data()

inputs = Input(shape = (train_data.shape[1], ))
x1 = Dense(100, kernel_initializer = 'he_normal', activation = 'relu')(inputs)
x1a = Dropout(0.5)(x1)
x2 = Dense(100, kernel_initializer = 'he_normal', activation = 'relu')(x1a)
x2a = Dropout(0.5)(x2)
x3 = Dense(100, kernel_initializer = 'he_normal', activation = 'relu')(x2a)
x3a = Dropout(0.5)(x3)
x4 = Dense(100, kernel_initializer = 'he_normal', activation = 'relu')(x3a)
x4a = Dropout(0.5)(x4)
x5 = Dense(100, kernel_initializer = 'he_normal', activation = 'relu')(x4a)
predictions = Dense(1)(x5)
model = Model(inputs = inputs, outputs = predictions)

model.compile(optimizer = 'Adam', loss = 'mse')

logdir="logs\\fit\\" + datetime.now().strftime("%Y%m%d-%H%M%S")
tensorboard_callback = keras.callbacks.TensorBoard(log_dir=logdir)
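As an aside (a stdlib-only sketch, not part of the original answer): the timestamped log directory above matters because each training run gets its own subdirectory, so TensorBoard shows runs side by side instead of interleaving their curves. Writing several runs into one directory is a common cause of the confusing "reversals" mentioned in the question. A small helper along these lines:

```python
# A minimal sketch: build one log subdirectory per training run,
# so TensorBoard treats each run as a separate curve.
import os
from datetime import datetime

def make_run_dir(root="logs/fit"):
    # e.g. logs/fit/20241013-003816 -- one directory per run
    stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
    return os.path.join(root, stamp)

run_dir = make_run_dir()
print(run_dir)
```

Pass the returned path as `log_dir` to the TensorBoard callback, one fresh directory per run.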

In another cell, you can run:

# Magic func to use TensorBoard directly in IPython
%load_ext tensorboard

Launch TensorBoard by running this in another cell:

# Launch TensorBoard pointed at the log root (a literal path, not the
# Python variable). It opens inline in the notebook; curves appear once
# training starts writing event files.
%tensorboard --logdir logs/fit

Finally, you can call .fit() on your model in another cell:

history = model.fit(train_data, train_targets,
          batch_size= 32,
          epochs= 20,
          validation_data=(test_data, test_targets),
          shuffle=True,
          callbacks=[tensorboard_callback ])

plt.plot(history.history['loss'], label='loss')
plt.plot(history.history['val_loss'], label='val_loss')
plt.legend()
plt.show()
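A hedged aside on the third question (not covered by the original answer): the extra elements in TensorBoard's Graphs tab are the low-level TensorFlow ops behind each layer, which is why the graph looks busier than the 5-dense-layer model. For a layer-level overview, `model.summary()` (or `keras.utils.plot_model`) is often easier to read:

```python
# A minimal sketch of a layer-level architecture view, assuming
# TensorFlow 2.x with its built-in Keras API.
from tensorflow.keras.layers import Input, Dense, Dropout
from tensorflow.keras.models import Model

inputs = Input(shape=(13,))                 # Boston Housing has 13 features
x = Dense(100, activation='relu')(inputs)
x = Dropout(0.5)(x)
outputs = Dense(1)(x)
model = Model(inputs=inputs, outputs=outputs)

model.summary()                             # prints one row per layer
```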

If you are not using IPython, you probably just have to launch TensorBoard during or before training your model to monitor it in real time.
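For example, from a terminal (assuming TensorBoard is installed and on your PATH, and that your logs live under logs/fit as in the code above):

```shell
# Serve the TensorBoard UI for everything under the log root;
# open http://localhost:6006 in a browser while training runs.
tensorboard --logdir logs/fit --port 6006
```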
