I have trained a classifier and I now want to pass any single image through.
I'm using the keras library with Tensorflow as the backend.
I'm getting an error I can't seem to get past:

img_path = '/path/to/my/image.jpg'

import numpy as np
from keras.preprocessing import image

x = image.load_img(img_path, target_size=(250, 250))
x = image.img_to_array(x)
x = np.expand_dims(x, axis=0)
preds = model.predict(x)

Do I need to reshape my data to have None as the first dimension? I'm confused about why Tensorflow would expect None as the first dimension.

Error when checking : expected convolution2d_input_1 to have shape (None, 250, 250, 3) but got array with shape (1, 3, 250, 250)

I'm wondering if there is an issue with the architecture of my trained model.
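For what it's worth, np.expand_dims only prepends the batch axis that the None stands for; it does not reorder the channel axis. A minimal sketch with a dummy array (standing in for the img_to_array output, in the channels-first layout the error message reports):

```python
import numpy as np

# Dummy channels-first image, matching the (3, 250, 250) in the error.
x = np.zeros((3, 250, 250), dtype=np.float32)

# expand_dims adds the leading batch axis that Keras's None stands for...
x = np.expand_dims(x, axis=0)
print(x.shape)  # (1, 3, 250, 250)

# ...but the channel axis is still first, which is why a model
# expecting (None, 250, 250, 3) rejects the array.
```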
Edit: if I call model.summary(), it gives convolution2d_input_1 as...
Edit: I did play around with the suggestion below, but used numpy to transpose instead of tf, and I still seem to be hitting the same issue!
Accepted answer
None matches any number. Usually, when you pass data to a model, you are expected to pass a tensor of dimensions None x data_size, meaning the first dimension can be anything and denotes the batch size. In your case, the error message says the model expects 250 x 250 x 3 (channels last) but your array is 3 x 250 x 250 (channels first), so the channel axis needs to be moved to the end. Also note that load_img returns a PIL image; convert it with img_to_array before transposing. Try:

x = image.load_img(img_path, target_size=(250, 250))
x = image.img_to_array(x)
x_trans = np.transpose(x, (1, 2, 0))
x_expanded = np.expand_dims(x_trans, axis=0)
preds = model.predict(x_expanded)
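A runnable sketch of that reordering with a dummy array in place of the real image: np.transpose with axes (1, 2, 0) moves the channel axis to the end, producing the (1, 250, 250, 3) layout that matches the model's expected (None, 250, 250, 3).

```python
import numpy as np

# Dummy channels-first array, as in the error message (3, 250, 250).
x = np.zeros((3, 250, 250), dtype=np.float32)

x_trans = np.transpose(x, (1, 2, 0))          # -> (250, 250, 3), channels last
x_expanded = np.expand_dims(x_trans, axis=0)  # -> (1, 250, 250, 3), batch axis added
print(x_expanded.shape)  # (1, 250, 250, 3)
```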