Tensorflow 2.0 Keras is training 4x slower than the 2.0 Estimator

Problem Description


We recently switched to Keras for TF 2.0, but when we compared it to the DNNClassifier Estimator on 2.0, we experienced around 4x slower speeds with Keras. I cannot for the life of me figure out why this is happening. The rest of the code for both is identical, using an input_fn() that returns the same tf.data.Dataset and identical feature_columns. I've been struggling with this problem for days now. Any help would be greatly appreciated. Thank you.
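Neither input_fn() nor the feature_columns are shown in the post. A minimal sketch of what such a pipeline might look like, with a hypothetical single categorical feature and toy in-memory data standing in for the real source:

import tensorflow as tf

# Hypothetical vocabulary and feature column; the real ones are not shown in the post.
vocab = ['code_a', 'code_b', 'code_c']
feature_columns = [
    tf.feature_column.indicator_column(
        tf.feature_column.categorical_column_with_vocabulary_list('code', vocab))]

def train_input_fn():
    # Toy in-memory data; integer labels match the sparse_categorical_crossentropy loss used below.
    features = {'code': ['code_a', 'code_b', 'code_c', 'code_a']}
    labels = [0, 1, 2, 0]
    dataset = tf.data.Dataset.from_tensor_slices((features, labels))
    return dataset.shuffle(100).repeat().batch(32)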

Estimator code:

estimator = tf.estimator.DNNClassifier(
    feature_columns = feature_columns,
    hidden_units = [64, 64],
    activation_fn = tf.nn.relu,
    optimizer = 'Adagrad',
    dropout = 0.4,
    n_classes = len(vocab),
    model_dir = model_dir,
    batch_norm = False)

estimator.train(input_fn=train_input_fn, steps=400)

Keras code:

feature_layer = tf.keras.layers.DenseFeatures(feature_columns)

model = tf.keras.Sequential([
    feature_layer,
    layers.Dense(64, input_shape = (len(vocab),), activation = tf.nn.relu),
    layers.Dropout(0.4),
    layers.Dense(64, activation = tf.nn.relu),
    layers.Dropout(0.4),
    layers.Dense(len(vocab), activation = 'softmax')])

model.compile(
    loss = 'sparse_categorical_crossentropy',
    optimizer = 'Adagrad',
    distribute = None)

model.fit(x = train_input_fn(), epochs = 1, steps_per_epoch = 400, shuffle = True)


UPDATE: To test further, I wrote a custom subclassed Model (see: Get Started For Experts), which runs faster than Keras but slower than Estimators. If the Estimator trains in 100 secs, the custom model takes roughly 180 secs and Keras roughly 350 secs. An interesting note is that the Estimator runs slower with Adam() than with Adagrad(), while Keras seems to run faster. With Adam(), Keras takes less than twice as long as DNNClassifier. Assuming I didn't mess up the custom code, I'm beginning to think that DNNClassifier just has a lot of backend optimizations / efficiencies that make it run faster than Keras.

Custom code:

class MyModel(Model):
    def __init__(self):
        super(MyModel, self).__init__()
        self.features = layers.DenseFeatures(feature_columns, trainable=False)
        self.dense = layers.Dense(64, activation = 'relu')
        self.dropout = layers.Dropout(0.4)
        self.dense2 = layers.Dense(64, activation = 'relu')
        self.dropout2 = layers.Dropout(0.4)
        self.softmax = layers.Dense(len(vocab_of_codes), activation = 'softmax')

    def call(self, x):
        x = self.features(x)
        x = self.dense(x)
        x = self.dropout(x)
        x = self.dense2(x)
        x = self.dropout2(x)
        return self.softmax(x)

model = MyModel()
loss_object = tf.keras.losses.SparseCategoricalCrossentropy()
optimizer = tf.keras.optimizers.Adagrad()

@tf.function
def train_step(features, label):
    with tf.GradientTape() as tape:
        predictions = model(features)
        loss = loss_object(label, predictions)
    gradients = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(gradients, model.trainable_variables))

itera = iter(train_input_fn())
for i in range(400):
    features, labels = next(itera)
    train_step(features, labels)
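To put numbers on the three variants, a simple wall-clock harness along the following lines can be used (a sketch: the 400 steps match the post, but the time_run helper itself is assumed, not from the original):

import time

def time_run(run_fn, label):
    # Time one full 400-step training run and print the elapsed seconds.
    start = time.perf_counter()
    run_fn()
    print(f'{label}: {time.perf_counter() - start:.1f}s')

time_run(lambda: estimator.train(input_fn=train_input_fn, steps=400), 'Estimator')
time_run(lambda: model.fit(x=train_input_fn(), epochs=1, steps_per_epoch=400), 'Keras')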


UPDATE: It seems it may be the dataset. When I print a row of the dataset within train_input_fn(), in Estimators it prints out the non-eager Tensor definition. In Keras, it prints out the eager values. Going through the Keras backend code, when it receives a tf.data.Dataset as input, it handles it eagerly (and ONLY eagerly), which is why it crashed whenever I used tf.function on train_input_fn(). Basically, my guess is that DNNClassifier trains faster than Keras because it runs more of the dataset code in graph mode. Will post any updates/findings.
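If the eager dataset handling really is the culprit, one way to test it is to move the iteration itself inside a tf.function, so that the input pipeline and the train step are both traced into a graph. A sketch reusing the train_step defined above:

@tf.function
def train_n_steps(dataset, n):
    # AutoGraph converts this loop into graph-mode dataset ops,
    # so no eager values are pulled out of the dataset.
    for features, labels in dataset.take(n):
        train_step(features, labels)

train_n_steps(train_input_fn(), 400)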

Answer


I believe it is slower because it is not being executed on the graph. In order to execute on the graph in TF2 you'll need a function decorated with the tf.function decorator. Check out this section for ideas on how to restructure your code.
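As a rough illustration of the eager-versus-graph gap the answer points at, the same computation can be timed both ways (a standalone sketch, not tied to the model above; sizes and iteration counts are arbitrary):

import time
import tensorflow as tf

def step(x):
    # Ten chained matmuls; tanh keeps the values bounded.
    for _ in range(10):
        x = tf.tanh(tf.matmul(x, x))
    return x

graph_step = tf.function(step)  # the same computation, traced into a graph

x = tf.random.normal((128, 128))
graph_step(x)  # warm-up call so tracing cost is excluded from the timing

for fn, label in [(step, 'eager'), (graph_step, 'graph')]:
    start = time.perf_counter()
    for _ in range(100):
        result = fn(x)
    result.numpy()  # force materialization before stopping the timer
    print(f'{label}: {time.perf_counter() - start:.3f}s')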
