[PyTorch] Deploying a model to production: calling an ONNX model directly with OpenCV's dnn module (C++) or onnxruntime (Python)

Last updated: 2024-10-25 07:29:47



(1) The model to be trained
The model is trained on CIFAR10 (10-class classification).
Training with the source code above produces the parameter file saveTextOnlyParams.pth.
For background on ONNX and onnxruntime, see:
[PyTorch] Deploying a trained model to production: using ONNX and onnxruntime
(2) The approach: export the model trained in PyTorch via ONNX, then load it directly with OpenCV's dnn module.
Below is the source code that exports the trained PyTorch model through ONNX; for ease of reuse, the pre- and post-processing steps of the original code have been moved into the model:

import numpy as np
import torch
import cv2
from torch import nn
import onnx


class myCustomerNetWork(nn.Module):
    def __init__(self):
        super().__init__()
        # 3 input channels
        self.features = nn.Sequential(
            nn.Conv2d(3, 64, (3, 3)), nn.ReLU(),
            nn.Conv2d(64, 128, (3, 3)), nn.ReLU(),
            nn.Conv2d(128, 256, (3, 3)), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1))
        self.classfired = nn.Sequential(
            nn.Flatten(), nn.Linear(256, 80), nn.Dropout(), nn.Linear(80, 10))

    def forward(self, x):
        return self.classfired(self.features(x))


# The network expects input of shape torch.Size([32, 3, 32, 32])
myNet = myCustomerNetWork()
pthfile = r'D:\flask_pytorch\saveTextOnlyParams.pth'
# With strict=False, parameters that match the file are loaded;
# missing ones keep their default initialization.
myNet.load_state_dict(torch.load(pthfile), strict=False)
if torch.cuda.is_available():
    myNet = myNet.cuda()
myNet.eval()

if __name__ == '__main__':
    # For OpenCV's dnn module, the network input must be in (n, c, h, w) format
    imagePath = r"C:\Users\25360\Desktop\monodepth.jpeg"
    img = cv2.imdecode(np.fromfile(imagePath, np.uint8), -1)
    img = cv2.resize(img, (32, 32))
    # BGR to RGB
    img = img[:, :, ::-1].copy()
    inputX = torch.FloatTensor(img).cuda()
    inputX = inputX.permute(2, 0, 1).contiguous()
    inputX = inputX.unsqueeze(0)
    torch.onnx.export(myNet, inputX, r'./model_static.onnx',
                      input_names=['in'], output_names=['out'], verbose=True)
    # Verify the exported model:
    onnx_model = onnx.load(r"./model_static.onnx")
    try:
        onnx.checker.check_model(onnx_model)
    except Exception:
        print("Model incorrect")
    else:
        print("Model correct")

Test output: the first several lines list the input and per-layer parameters; the remaining lines describe the graph structure.

graph(%in : Float(1, 3, 32, 32, strides=[3072, 1024, 32, 1], requires_grad=0, device=cuda:0),
      %features.0.weight : Float(64, 3, 3, 3, strides=[27, 9, 3, 1], requires_grad=1, device=cuda:0),
      %features.0.bias : Float(64, strides=[1], requires_grad=1, device=cuda:0),
      %features.2.weight : Float(128, 64, 3, 3, strides=[576, 9, 3, 1], requires_grad=1, device=cuda:0),
      %features.2.bias : Float(128, strides=[1], requires_grad=1, device=cuda:0),
      %features.4.weight : Float(256, 128, 3, 3, strides=[1152, 9, 3, 1], requires_grad=1, device=cuda:0),
      %features.4.bias : Float(256, strides=[1], requires_grad=1, device=cuda:0),
      %classfired.1.weight : Float(80, 256, strides=[256, 1], requires_grad=1, device=cuda:0),
      %classfired.1.bias : Float(80, strides=[1], requires_grad=1, device=cuda:0),
      %classfired.3.weight : Float(10, 80, strides=[80, 1], requires_grad=1, device=cuda:0),
      %classfired.3.bias : Float(10, strides=[1], requires_grad=1, device=cuda:0)):
  %input : Float(1, 64, 30, 30, strides=[57600, 900, 30, 1], requires_grad=0, device=cuda:0) = onnx::Conv[dilations=[1, 1], group=1, kernel_shape=[3, 3], pads=[0, 0, 0, 0], strides=[1, 1]](%in, %features.0.weight, %features.0.bias) # D:\anaconda\envs\mypytorch\lib\site-packages\torch\nn\modules\conv.py:443:0
  %input.4 : Float(1, 64, 30, 30, strides=[57600, 900, 30, 1], requires_grad=1, device=cuda:0) = onnx::Relu(%input) # D:\anaconda\envs\mypytorch\lib\site-packages\torch\nn\functional.py:1442:0
  %input.8 : Float(1, 128, 28, 28, strides=[100352, 784, 28, 1], requires_grad=0, device=cuda:0) = onnx::Conv[dilations=[1, 1], group=1, kernel_shape=[3, 3], pads=[0, 0, 0, 0], strides=[1, 1]](%input.4, %features.2.weight, %features.2.bias) # D:\anaconda\envs\mypytorch\lib\site-packages\torch\nn\modules\conv.py:443:0
  %input.12 : Float(1, 128, 28, 28, strides=[100352, 784, 28, 1], requires_grad=1, device=cuda:0) = onnx::Relu(%input.8) # D:\anaconda\envs\mypytorch\lib\site-packages\torch\nn\functional.py:1442:0
  %input.16 : Float(1, 256, 26, 26, strides=[173056, 676, 26, 1], requires_grad=0, device=cuda:0) = onnx::Conv[dilations=[1, 1], group=1, kernel_shape=[3, 3], pads=[0, 0, 0, 0], strides=[1, 1]](%input.12, %features.4.weight, %features.4.bias) # D:\anaconda\envs\mypytorch\lib\site-packages\torch\nn\modules\conv.py:443:0
  %input.20 : Float(1, 256, 26, 26, strides=[173056, 676, 26, 1], requires_grad=1, device=cuda:0) = onnx::Relu(%input.16) # D:\anaconda\envs\mypytorch\lib\site-packages\torch\nn\functional.py:1442:0
  %17 : Float(1, 256, 1, 1, strides=[256, 1, 1, 1], requires_grad=1, device=cuda:0) = onnx::GlobalAveragePool(%input.20) # D:\anaconda\envs\mypytorch\lib\site-packages\torch\nn\functional.py:1241:0
  %18 : Float(1, 256, strides=[256, 1], requires_grad=1, device=cuda:0) = onnx::Flatten[axis=1](%17) # D:\anaconda\envs\mypytorch\lib\site-packages\torch\nn\modules\flatten.py:45:0
  %input.24 : Float(1, 80, strides=[80, 1], requires_grad=1, device=cuda:0) = onnx::Gemm[alpha=1., beta=1., transB=1](%18, %classfired.1.weight, %classfired.1.bias) # D:\anaconda\envs\mypytorch\lib\site-packages\torch\nn\modules\linear.py:103:0
  %out : Float(1, 10, strides=[10, 1], requires_grad=1, device=cuda:0) = onnx::Gemm[alpha=1., beta=1., transB=1](%input.24, %classfired.3.weight, %classfired.3.bias) # D:\anaconda\envs\mypytorch\lib\site-packages\torch\nn\modules\linear.py:103:0
  return (%out)

Model correct
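The spatial sizes in the dump follow from the usual valid-convolution formula: each 3x3 convolution with stride 1 and no padding shrinks the feature map by 2 (32 → 30 → 28 → 26), and the global average pool then collapses it to 1x1. A quick sketch of the arithmetic (plain Python; the helper name is illustrative):

```python
def conv_out_size(size, kernel=3, stride=1, padding=0):
    """Output spatial size of a convolution: floor((size + 2p - k) / s) + 1."""
    return (size + 2 * padding - kernel) // stride + 1

size = 32                  # CIFAR10 input is 32x32
for _ in range(3):         # three 3x3 convs, no padding
    size = conv_out_size(size)
    print(size)            # prints 30, 28, 26
# AdaptiveAvgPool2d(1) / onnx::GlobalAveragePool then yields a 1x1 map
```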

(3) Install a visualization tool and inspect the generated model_static.onnx file:
pip install netron

netron -b model_static.onnx

(4) First run the model through onnxruntime's Python API. Install it with
pip install onnxruntime
and then test with the following code:

import cv2
import numpy as np
import onnxruntime
import torch

# The values of the input dict passed to onnxruntime must be numpy arrays.
# For OpenCV's dnn module, the network input must be in (n, c, h, w) format.
imagePath = r"C:\Users\25360\Desktop\monodepth.jpeg"
img = cv2.imdecode(np.fromfile(imagePath, np.uint8), -1)
img = cv2.resize(img, (32, 32))
# BGR to RGB
img = img[:, :, ::-1].copy()
inputX = torch.FloatTensor(img).cuda()
inputX = inputX.permute(2, 0, 1).contiguous()
inputX = inputX.unsqueeze(0)

session = onnxruntime.InferenceSession(r'./model_static.onnx')
inputs = {'in': inputX.cpu().numpy()}
# run()'s first argument is the list of output tensor names, the second the input dict
output = session.run(['out'], inputs)
print(output)
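Since onnxruntime only needs a numpy array, the torch round-trip in the test code above can be avoided entirely. A minimal numpy-only sketch of the same preprocessing (the zero array stands in for the resized image, so this runs without the example file):

```python
import numpy as np

def preprocess(img_bgr):
    """HWC uint8 BGR image -> (1, 3, H, W) float32 RGB blob."""
    rgb = img_bgr[:, :, ::-1]                  # BGR -> RGB
    chw = np.transpose(rgb, (2, 0, 1))         # HWC -> CHW
    return chw[np.newaxis].astype(np.float32)  # add batch dimension

dummy = np.zeros((32, 32, 3), dtype=np.uint8)  # stands in for the resized image
blob = preprocess(dummy)
print(blob.shape)  # (1, 3, 32, 32)
```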

The output is:

[array([[  71.6224  ,   10.650559,  165.06479 ,  313.57675 , -148.11444 ,329.7959  ,  109.913574, -266.10846 , -171.09756 , -272.62152 ]],dtype=float32)]
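The run returns raw logits; to get a predicted class, take the argmax (and optionally a softmax for probabilities). Using the logits printed above, index 5 has the largest value; under the standard CIFAR10 class ordering that index corresponds to "dog" (the class list below is an assumption about how the training labels were ordered):

```python
import numpy as np

logits = np.array([71.6224, 10.650559, 165.06479, 313.57675, -148.11444,
                   329.7959, 109.913574, -266.10846, -171.09756, -272.62152])

# numerically stable softmax
exp = np.exp(logits - logits.max())
probs = exp / exp.sum()

# standard CIFAR10 ordering (assumed to match the training labels)
classes = ['airplane', 'automobile', 'bird', 'cat', 'deer',
           'dog', 'frog', 'horse', 'ship', 'truck']
idx = int(np.argmax(logits))
print(idx, classes[idx])  # prints: 5 dog
```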

(5) Load the model directly with OpenCV's dnn module:

#include <iostream>
#include <opencv2/opencv.hpp>

int main()
{
    cv::dnn::Net net = cv::dnn::readNetFromONNX("model_static.onnx");
    cv::Mat image = cv::imread("C:/Users/25360/Desktop/monodepth.jpeg", cv::IMREAD_COLOR);
    if (image.empty()) { return 0; }
    //cv::namedWindow("image", cv::WindowFlags::WINDOW_NORMAL);
    //cv::imshow("image", image);
    //cv::waitKey(0);
    cv::Mat blob;
    cv::resize(image, image, cv::Size(32, 32));
    // Convert to an (n, c, h, w) blob and swap BGR to RGB at the same time
    cv::dnn::blobFromImage(image, blob, 1.0, cv::Size(), cv::Scalar(), true);
    net.setInput(blob, "in");
    cv::Mat predict = net.forward("out");
    std::cout << "[Python style]\n" << cv::format(predict, cv::Formatter::FMT_PYTHON) << std::endl;
    cv::Point minLoc, maxLoc;
    double min, max;
    cv::minMaxLoc(predict, &min, &max, &minLoc, &maxLoc);
    std::cout << "Max value: " << max << " location: " << maxLoc;
}

Its output is:

[Python style]
[[71.622391, 10.650551, 165.06482, 313.57678, -148.11443, 329.79596, 109.91354, -266.10849, -171.09752, -272.62152]]
Max value: 329.796 location: [5, 0]
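The OpenCV and onnxruntime results agree only up to single-precision rounding (the two runtimes accumulate in different orders), so a tolerance-based comparison is the right correctness check. A small sketch using the two vectors printed above:

```python
import numpy as np

# logits from the onnxruntime run
ort_out = np.array([71.6224, 10.650559, 165.06479, 313.57675, -148.11444,
                    329.7959, 109.913574, -266.10846, -171.09756, -272.62152],
                   dtype=np.float32)
# logits from the OpenCV dnn run
cv_out = np.array([71.622391, 10.650551, 165.06482, 313.57678, -148.11443,
                   329.79596, 109.91354, -266.10849, -171.09752, -272.62152],
                  dtype=np.float32)

# element-wise comparison with an absolute tolerance suited to float32 at this magnitude
print(np.allclose(ort_out, cv_out, atol=1e-3))  # prints: True
```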


Published on 2024-02-10 19:59:13.
Original link: https://www.elefans.com/category/jswz/34/1677002.html