Dimensional error while predicting

I have trained a convolutional 3D model.

I'm trying to get a prediction from it as follows:

import cv2
from keras.models import Sequential, load_model
import numpy as np

# create an empty frame list
frames = []

# define row, col, depth
img_rows, img_cols, img_depth = 16, 16, 15

cap = cv2.VideoCapture('run.avi')
fps = cap.get(5)

# Use only first 15 frames for prediction
for k in range(15):
    ret, frame = cap.read()
    frame = cv2.resize(frame, (img_rows, img_cols), interpolation=cv2.INTER_AREA)
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    frames.append(gray)

# preprocess
input = np.array(frames)
ipt = np.rollaxis(np.rollaxis(input, 2, 0), 2, 0)
reshape_frames = np.expand_dims(ipt, axis=0)

# run prediction
model = load_model('current.h5')
preds = model.predict(reshape_frames)
print(preds)

but it raises the following error:

ValueError: Error when checking : expected conv3d_1_input to have 5 dimensions, but got array with shape (1, 16, 16, 15)

How can I sort this out?

Accepted answer

See the documentation for convolutional 3D layers:

Input shape

5D tensor with shape: (samples, channels, conv_dim1, conv_dim2, conv_dim3) if data_format='channels_first' or 5D tensor with shape: (samples, conv_dim1, conv_dim2, conv_dim3, channels) if data_format='channels_last'.

So what is basically happening is that the input shape you provide to your first Conv3D layer does not match the expected input.
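
A quick way to see the mismatch, assuming the prediction script above has been run up to load_model, is to compare the shape of the array being fed in with the input shape the loaded model reports (the 5D shape in the comment below is only an example):

print(reshape_frames.shape)  # (1, 16, 16, 15) -- a 4D array
print(model.input_shape)     # e.g. (None, 16, 16, 15, 1) -- the model expects 5 dimensions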

To solve this, you could do one of the following:

- change the provided input so that it matches the expected input (also taking the data_format noted above into account); see the sketch below. As it looks in your code, you don't use the img_depth information at all: you basically provide a 2D image to a 3D conv net.
- use a 2D convnet and create a new model.
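
For the first option, here is a minimal sketch. It assumes the model was trained on single-channel (grayscale) clips with data_format='channels_last' and an input shape of (img_rows, img_cols, img_depth, 1); if your training setup differs, the axis to add will differ too:

# Sketch: append a channel axis so the batch becomes 5D.
# Assumes channels_last and one grayscale channel -- check this against your training code.
reshape_frames = np.expand_dims(reshape_frames, axis=-1)  # (1, 16, 16, 15) -> (1, 16, 16, 15, 1)
print(reshape_frames.shape)
preds = model.predict(reshape_frames)
print(preds)

If the model was instead trained with data_format='channels_first', the channel axis belongs right after the sample axis, i.e. np.expand_dims(reshape_frames, axis=1), giving (1, 1, 16, 16, 15).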
