I am using Keras with the TensorFlow backend on Windows 7, with an NVIDIA Quadro M2000M GPU.
When I initialize my model, which contains 5 GRU, 5 Dropout and 1 Dense layer, GPU memory usage jumps to 3800 MB of 4096 MB and stays there until I restart my Spyder session. Clearing the session within Spyder with
K.clear_session() does not work; the memory usage stays at that high level.
Is it normal that such a model allocates this much GPU memory? What can I change so that the memory is used properly? I want to improve the training speed, and I think this high memory usage hinders the GPU from reaching its full potential.
Update
My model looks like this:
model = Sequential()
layers = [1, 70, 50, 100, 50, 20, 1]
model.add(GRU(layers[1],
              # batch_size = 32,
              input_shape=(sequence_length, anzahl_features),
              return_sequences=True))
model.add(Dropout(dropout_1))
model.add(GRU(layers[2],
              # batch_size = 32,
              return_sequences=True))
model.add(Dropout(dropout_2))
model.add(GRU(layers[3],
              # batch_size = 32,
              return_sequences=True))
model.add(Dropout(dropout_3))
model.add(GRU(layers[4],
              # batch_size = 32,
              return_sequences=True))
model.add(Dropout(dropout_4))
model.add(GRU(layers[5],
              # batch_size = 32,
              return_sequences=False))
model.add(Dropout(dropout_5))
model.add(Dense(layers[6]))
model.add(Activation('sigmoid'))
My feature matrix has the size 506x500x35 (506 examples, sequence length 500, and 35 features). The batch size is set to 128. Side note: I am not saying that this is the perfect feature matrix or model configuration.
Here is also a screenshot from GPU-Z, taken after I restarted Spyder and ran the model until the second epoch:
Best answer
By default, TensorFlow allocates the whole GPU memory.
If you want to have better control over GPU memory usage, you can use one of these methods:
the per_process_gpu_memory_fraction config option, or the allow_growth config option.
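The two config options above can be set by creating the TensorFlow session yourself and handing it to Keras before the model is built. A minimal sketch for the TensorFlow 1.x API used at the time of this question (the 0.4 fraction is just an example value):

```python
import tensorflow as tf
from keras import backend as K

config = tf.ConfigProto()

# Option 1: grow the GPU memory allocation on demand instead of
# grabbing (almost) all of it up front.
config.gpu_options.allow_growth = True

# Option 2 (alternative): cap the allocation at a fixed fraction
# of total GPU memory, e.g. 40%.
# config.gpu_options.per_process_gpu_memory_fraction = 0.4

# Register the session with Keras before building any model.
K.set_session(tf.Session(config=config))
```

Note that this only changes how much memory TensorFlow *reserves*; the large up-front allocation itself is not what slows training down.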