I've recently started to use Google Colab, and wanted to train my first Convolutional NN. I imported the images from my Google Drive thanks to the answer I got here.
Then I pasted my code to create the CNN into Colab and started the process. Here is the complete code:
(Part 1 is copied from here, as it worked as expected for me.)
Step 1:
```
!apt-get install -y -qq software-properties-common python-software-properties module-init-tools
!add-apt-repository -y ppa:alessandro-strada/ppa 2>&1 > /dev/null
!apt-get update -qq 2>&1 > /dev/null
!apt-get -y install -qq google-drive-ocamlfuse fuse
```

Step 2:
```python
from google.colab import auth
auth.authenticate_user()
```

Step 3:
```
from oauth2client.client import GoogleCredentials
creds = GoogleCredentials.get_application_default()
import getpass
!google-drive-ocamlfuse -headless -id={creds.client_id} -secret={creds.client_secret} < /dev/null 2>&1 | grep URL
vcode = getpass.getpass()
!echo {vcode} | google-drive-ocamlfuse -headless -id={creds.client_id} -secret={creds.client_secret}
```

Step 4:
```
!mkdir -p drive
!google-drive-ocamlfuse drive
```

Step 5:
```
print('Files in Drive:')
!ls drive/
```
Part 2: Copy-pasting my CNN

I created this CNN with tutorials from a Udemy course. It uses Keras with TensorFlow as its backend. For the sake of simplicity I uploaded a really simple version, which is more than enough to show my problem.
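One detail worth noting before the code: the model below ends in a single sigmoid unit trained with binary cross-entropy, which for one example reduces to -(y·log p + (1-y)·log(1-p)). A minimal pure-Python sketch of that loss, independent of Keras (the function name and the eps clipping value are my own):

```python
import math

def binary_crossentropy(y_true, p_pred, eps=1e-7):
    """Binary cross-entropy for a single example; eps guards against log(0)."""
    p = min(max(p_pred, eps), 1 - eps)  # clip the prediction away from 0 and 1
    return -(y_true * math.log(p) + (1 - y_true) * math.log(1 - p))

# A confident correct prediction costs little, a confident wrong one a lot:
print(round(binary_crossentropy(1, 0.9), 4))  # ≈ 0.1054
print(round(binary_crossentropy(1, 0.1), 4))  # ≈ 2.3026
```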
```python
from keras.models import Sequential
from keras.layers import Conv2D
from keras.layers import MaxPooling2D
from keras.layers import Flatten
from keras.layers import Dense
from keras.layers import Dropout
from keras.optimizers import Adam
from keras.preprocessing.image import ImageDataGenerator
```

Parameters:
```python
imageSize = 32
batchSize = 64
epochAmount = 50
```

CNN:
```python
classifier = Sequential()
classifier.add(Conv2D(32, (3, 3), input_shape = (imageSize, imageSize, 3), activation = 'relu'))  # convolutional layer
classifier.add(MaxPooling2D(pool_size = (2, 2)))  # pooling layer
classifier.add(Flatten())
```

ANN:
```python
classifier.add(Dense(units = 64, activation = 'relu'))    # hidden layer
classifier.add(Dense(units = 1, activation = 'sigmoid'))  # output layer
classifier.compile(optimizer = 'adam', loss = 'binary_crossentropy', metrics = ['accuracy'])  # training method
```

Image preprocessing:
```python
train_datagen = ImageDataGenerator(rescale = 1./255,
                                   shear_range = 0.2,
                                   zoom_range = 0.2,
                                   horizontal_flip = True)
test_datagen = ImageDataGenerator(rescale = 1./255)
training_set = train_datagen.flow_from_directory('drive/School/sem-2-2018/BSP2/UdemyCourse/CNN/dataset/training_set',
                                                 target_size = (imageSize, imageSize),
                                                 batch_size = batchSize,
                                                 class_mode = 'binary')
test_set = test_datagen.flow_from_directory('drive/School/sem-2-2018/BSP2/UdemyCourse/CNN/dataset/test_set',
                                            target_size = (imageSize, imageSize),
                                            batch_size = batchSize,
                                            class_mode = 'binary')
classifier.fit_generator(training_set,
                         steps_per_epoch = (8000//batchSize),
                         epochs = epochAmount,
                         validation_data = test_set,
                         validation_steps = (2000//batchSize))
```
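For reference, the steps_per_epoch and validation_steps expressions above are just the dataset sizes floor-divided by the batch size, which is why the progress bars further down run to 125 steps per epoch:

```python
batchSize = 64  # same value as in the script above

steps_per_epoch = 8000 // batchSize   # 8000 training images in batches of 64
validation_steps = 2000 // batchSize  # 2000 test images in batches of 64

print(steps_per_epoch, validation_steps)  # 125 31
```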
Now comes my Problem
First off, the training set I used is a database with 10000 dog and cat pictures of various resolutions (8000 in training_set, 2000 in test_set).
I ran this CNN on Google Colab (with GPU support enabled) and on my PC (tensorflow-gpu on a GTX 1060).
This is an intermediate result from my PC:
```
Epoch 2/50
63/125 [==============>...............] - ETA: 2s - loss: 0.6382 - acc: 0.6520
```
And this from Colab:
```
Epoch 1/50
13/125 [==>...........................] - ETA: 1:00:51 - loss: 0.7265 - acc: 0.4916
```
Why is Google Colab so slow in my case?
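A rough back-of-the-envelope check (my own arithmetic, using the printed ETAs and the remaining step counts from the two progress bars above) puts the gap at roughly three orders of magnitude per step:

```python
# PC: 63 of 125 steps done, ETA 2 s for the remaining 62 steps
pc_seconds_per_step = 2 / (125 - 63)

# Colab: 13 of 125 steps done, ETA 1:00:51 (= 3651 s) for the remaining 112 steps
colab_seconds_per_step = (1 * 3600 + 0 * 60 + 51) / (125 - 13)

print(round(pc_seconds_per_step, 3))     # ≈ 0.032 s/step
print(round(colab_seconds_per_step, 1))  # ≈ 32.6 s/step
print(round(colab_seconds_per_step / pc_seconds_per_step))  # ≈ 1011x slower
```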
Personally, I suspect a bottleneck consisting of pulling and then reading the images from my Drive, but I don't know how to solve this other than choosing a different method to import the database.
Accepted answer:
As @Feng has already noted, reading files from Drive is very slow. This tutorial suggests using some sort of memory-mapped file like HDF5 or LMDB in order to overcome this issue. This way, I/O operations are much faster (for a complete explanation of the speed gain of the HDF5 format, see this).
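To illustrate the memory-mapped idea the answer points at (sketched here with numpy's memmap as a stand-in for HDF5/LMDB, with made-up array shapes and file names), the one-time conversion packs many small images into a single contiguous file, so training-time access maps one file instead of opening thousands:

```python
import numpy as np
import tempfile, os

# Hypothetical setup: 100 tiny 32x32 RGB "images" packed into one file.
n, h, w, c = 100, 32, 32, 3
path = os.path.join(tempfile.mkdtemp(), 'images.dat')

# One-time conversion: write all images into a single contiguous file.
writer = np.memmap(path, dtype=np.uint8, mode='w+', shape=(n, h, w, c))
writer[:] = np.random.randint(0, 256, size=(n, h, w, c), dtype=np.uint8)
writer.flush()
del writer

# Training-time access: map the file once, then slice batches out of it.
# The OS pages data in on demand instead of doing one open/read per image.
images = np.memmap(path, dtype=np.uint8, mode='r', shape=(n, h, w, c))
batch = images[0:64]
print(batch.shape)  # (64, 32, 32, 3)
```

The same pattern applies with h5py or lmdb; only the read/write API differs.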