iOS Object Detection in Practice (Part 1)






Before anything else, read the earlier post on using iOS + OpenCV to access the camera, and get that basic setup working.

Reference posts:

Getting the path of a file inside an iOS project:

Converting between std::string and NSString:

Reading a .txt file that has been dragged into an Xcode project (how to find and read it):
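For quick reference, here is a minimal sketch of these points: locating a bundled file, converting between NSString and std::string, and reading a text file. The resource name my_labels.txt is just a placeholder; the real model file names appear later in model_load.

// Get the path of a file that was dragged into the Xcode project (it is copied into the app bundle).
NSString *txtPath = [[NSBundle mainBundle] pathForResource:@"my_labels" ofType:@"txt"]; // placeholder name

// NSString -> std::string
std::string pathCpp = [txtPath UTF8String];

// std::string -> NSString
NSString *pathObjC = [NSString stringWithUTF8String:pathCpp.c_str()];

// Read the file line by line with plain C++ once you have the path (requires #include <fstream>)
std::ifstream file(pathCpp);
std::string line;
while (std::getline(file, line)) { /* use each line */ }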

This first post covers how to get the basic demo running.

The result:

The switch in the screenshot toggles the flashlight; it has no effect on the detection itself, so this post won't dwell on it.

We use the YOLO_lite model weights because they can run in real time on a phone; frames are captured with the method built into the OpenCV iOS framework.

The pretrained model was trained on the COCO dataset.

Video processing is written mainly in Objective-C; if you don't know it, don't worry: the way it is used here is essentially identical to C++.
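If Objective-C is unfamiliar, the key point is that in a .mm (Objective-C++) file the frame callback from CvVideoCamera hands you a plain cv::Mat, and everything inside the method is ordinary OpenCV C++. A minimal sketch of the shape of that callback (the real version appears in the full listing below):

// CvVideoCameraDelegate callback: called once per captured frame.
- (void)processImage:(cv::Mat&)image {
    // `image` is a regular cv::Mat (RGBA); the body is plain OpenCV C++.
    cv::Mat gray;
    cv::cvtColor(image, gray, cv::COLOR_RGBA2GRAY);
    cv::cvtColor(gray, image, cv::COLOR_GRAY2RGBA); // write the result back so it is displayed
}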

Importing the model

  • The code we use to run the YOLO pretrained model is adapted from this file: .cpp; you can first follow my earlier post (click here) to run the object detection task on a desktop machine and see how it performs.

  • The way we grab live frames follows the post referenced above (click here), so this article won't repeat how to capture images on an iPhone with OpenCV.

The model we need consists mainly of two files: the .cfg and the .weights.

We drag the two files into the Xcode project:
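Make sure the two files actually end up in the app bundle (they should be listed in the target's Copy Bundle Resources build phase). A quick hedged sanity check at runtime, using the same resource names as the model_load function further down:

NSString *cfgPath = [[NSBundle mainBundle] pathForResource:@"coco_lite_trial6" ofType:@"cfg"];
NSString *weightsPath = [[NSBundle mainBundle] pathForResource:@"coco_lite_trial6_653550" ofType:@"weights"];
NSAssert(cfgPath != nil && weightsPath != nil, @"model files were not copied into the app bundle");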

With the basic iOS + OpenCV environment set up, the project looks like this:

Core code

We add the following to the processImage() callback:

    Mat blob;
    Mat image_t;
    cvtColor(image, image_t, cv::COLOR_RGBA2RGB, 3);
    blobFromImage(image_t, blob, 1/255.0, cvSize(inpWidth, inpHeight), Scalar(0,0,0), true, false);
    net.setInput(blob);
    vector<Mat> outs;
    net.forward(outs, getOutputsNames(net));
    postprocess(image_t, outs);
    vector<double> layersTimes;
    double freq = getTickFrequency() / 1000;
    double t = net.getPerfProfile(layersTimes) / freq;
    string label = format("Inference time for a frame : %.2f ms", t);
    putText(image_t, label, cv::Point(0, 15), FONT_HERSHEY_SIMPLEX, 0.5, Scalar(0, 0, 255));
    cvtColor(image_t, image, cv::COLOR_RGB2RGBA);

This takes what used to be the body of the camera while-loop in the original .cpp code and moves it into the Objective-C callback; it is almost identical to the C++ version.

  • The input image (4-channel RGBA) must first be converted to 3 channels before it can be fed to the network; the annotated call below spells out the remaining parameters.
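For context, here is the same blobFromImage() call with each argument annotated. This only restates the call above; the values are the constants already defined at the top of the file.

blobFromImage(image_t,                      // 3-channel input frame (after the RGBA->RGB conversion)
              blob,                         // output blob fed to the network
              1 / 255.0,                    // scale factor: map pixel values into [0, 1]
              cvSize(inpWidth, inpHeight),  // resize to the 416x416 network input
              Scalar(0, 0, 0),              // mean to subtract (none here)
              true,                         // swapRB: swap the first and last channels
              false);                       // crop: no center-cropping, just resize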

Next, the other helper functions are adapted into Objective-C++ form as well, and I added a model_load function that locates the pretrained files:

Here is the complete code:

#import <opencv2/opencv.hpp>
#import <opencv2/imgproc/types_c.h>
#import <opencv2/imgcodecs/ios.h>
#import <opencv2/videoio/cap_ios.h>
#import "ViewController.h"

using namespace cv;
using namespace dnn;
using namespace std;

@interface ViewController () <CvVideoCameraDelegate>
{
    Mat cvImage;
}
@property (weak, nonatomic) IBOutlet UISwitch *lightSwitch;
@property (weak, nonatomic) IBOutlet UIImageView *imageView;
@property (nonatomic, strong) CvVideoCamera *videoCamera;
@end

@implementation ViewController

// Some constants and function declarations (the definitions appear after the methods that use them)
static void drawPred(int classId, float conf, int left, int top, int right, int bottom, Mat& frame);
static void postprocess(Mat& frame, const vector<Mat>& outs);
static vector<String> getOutputsNames(const Net& net);
static void model_load();

static Net net;
static int inpWidth = 416;
static int inpHeight = 416;
static float confThreshold = 0.5;
static float nmsThreshold = 0.4;

// COCO class names
static string classes[] = {"person","bicycle","car","motorbike","aeroplane","bus","train","truck","boat","traffic light",
    "fire hydrant","stop sign","parking meter","bench","bird","cat","dog","horse","sheep","cow",
    "elephant","bear","zebra","giraffe","backpack","umbrella","handbag","tie","suitcase","frisbee",
    "skis","snowboard","sports ball","kite","baseball bat","baseball glove","skateboard","surfboard","tennis racket","bottle",
    "wine glass","cup","fork","knife","spoon","bowl","banana","apple","sandwich","orange",
    "broccoli","carrot","hot dog","pizza","donut","cake","chair","sofa","pottedplant","bed",
    "diningtable","toilet","tvmonitor","laptop","mouse","remote","keyboard","cell phone","microwave","oven",
    "toaster","sink","refrigerator","book","clock","vase","scissors","teddy bear","hair drier","toothbrush"};

- (void)viewDidLoad {
    [super viewDidLoad];
    // Do any additional setup after loading the view, typically from a nib.
    self.videoCamera = [[CvVideoCamera alloc] initWithParentView:self.imageView];
    self.videoCamera.defaultAVCaptureDevicePosition = AVCaptureDevicePositionBack;
    self.videoCamera.defaultAVCaptureSessionPreset = AVCaptureSessionPreset640x480;
    self.videoCamera.defaultAVCaptureVideoOrientation = AVCaptureVideoOrientationPortrait;
    self.videoCamera.defaultFPS = 200;
    self.videoCamera.grayscaleMode = false;
    self.videoCamera.delegate = self;
    model_load();
    [self.videoCamera start];
}

static void model_load() {
    NSString *modelConfiguration_t = [[NSBundle mainBundle] pathForResource:@"coco_lite_trial6" ofType:@"cfg"];
    NSString *modelWeights_t = [[NSBundle mainBundle] pathForResource:@"coco_lite_trial6_653550" ofType:@"weights"];
    String modelConfiguration = [modelConfiguration_t UTF8String];
    String modelWeights = [modelWeights_t UTF8String];
//    NSString *path = [[NSBundle mainBundle] pathForResource:@"yolov3" ofType:@"cfg"];
//    String modelConfiguration =  "/var/containers/Bundle/Application/F107EBBC-63C8-4787-8D0E-D263774AE049/fuck_ios.app/yolov3.cfg";
//    String modelWeights = "model/yolov3.weights";
//    Net net = readNetFromDarknet(modelConfiguration);
    net = readNetFromDarknet(modelConfiguration, modelWeights);
    net.setPreferableBackend(DNN_BACKEND_OPENCV);
    net.setPreferableTarget(DNN_TARGET_CPU);
}

// Main processing callback
- (void)processImage:(Mat&)image {
    Mat blob;
    Mat image_t;
    cvtColor(image, image_t, cv::COLOR_RGBA2RGB, 3);
    blobFromImage(image_t, blob, 1/255.0, cvSize(inpWidth, inpHeight), Scalar(0,0,0), true, false);
    net.setInput(blob);
    vector<Mat> outs;
    net.forward(outs, getOutputsNames(net));
    postprocess(image_t, outs);
    vector<double> layersTimes;
    double freq = getTickFrequency() / 1000;
    double t = net.getPerfProfile(layersTimes) / freq;
    string label = format("Inference time for a frame : %.2f ms", t);
    putText(image_t, label, cv::Point(0, 15), FONT_HERSHEY_SIMPLEX, 0.5, Scalar(0, 0, 255));
    cvtColor(image_t, image, cv::COLOR_RGB2RGBA);
}

static void postprocess(Mat& frame, const vector<Mat>& outs)
{
    vector<int> classIds;
    vector<float> confidences;
    vector<cv::Rect> boxes;
    for (size_t i = 0; i < outs.size(); ++i)
    {
        // Scan through all the bounding boxes output from the network and keep only the
        // ones with high confidence scores. Assign the box's class label as the class
        // with the highest score for the box.
        float* data = (float*)outs[i].data;
        for (int j = 0; j < outs[i].rows; ++j, data += outs[i].cols)
        {
            Mat scores = outs[i].row(j).colRange(5, outs[i].cols);
            cv::Point classIdPoint;
            double confidence;
            // Get the value and location of the maximum score
            minMaxLoc(scores, 0, &confidence, 0, &classIdPoint);
            if (confidence > confThreshold)
            {
                int centerX = (int)(data[0] * frame.cols);
                int centerY = (int)(data[1] * frame.rows);
                int width = (int)(data[2] * frame.cols);
                int height = (int)(data[3] * frame.rows);
                int left = centerX - width / 2;
                int top = centerY - height / 2;
                classIds.push_back(classIdPoint.x);
                confidences.push_back((float)confidence);
                boxes.push_back(cv::Rect(left, top, width, height));
            }
        }
    }
    // Perform non maximum suppression to eliminate redundant overlapping boxes with
    // lower confidences
    vector<int> indices;
    NMSBoxes(boxes, confidences, confThreshold, nmsThreshold, indices);
    for (size_t i = 0; i < indices.size(); ++i)
    {
        int idx = indices[i];
        cv::Rect box = boxes[idx];
        drawPred(classIds[idx], confidences[idx], box.x, box.y,
                 box.x + box.width, box.y + box.height, frame);
    }
}

static void drawPred(int classId, float conf, int left, int top, int right, int bottom, Mat& frame)
{
    // Draw a rectangle displaying the bounding box
    rectangle(frame, cv::Point(left, top), cv::Point(right, bottom), cv::Scalar(255, 178, 50), 3);
    // Get the label for the class name and its confidence
    string label = format("%.2f", conf);
//    if (!classes.empty())
    {
        CV_Assert(classId < (int)80);
        label = classes[classId] + ":" + label;
    }
    // Display the label at the top of the bounding box
    int baseLine;
    cv::Size labelSize = getTextSize(label, FONT_HERSHEY_SIMPLEX, 0.5, 1, &baseLine);
    top = max(top, labelSize.height);
    rectangle(frame, cv::Point(left, top - round(1.5*labelSize.height)),
              cv::Point(left + round(1.5*labelSize.width), top + baseLine),
              Scalar(255, 255, 255), FILLED);
    putText(frame, label, cv::Point(left, top), FONT_HERSHEY_SIMPLEX, 0.75, Scalar(0,0,0), 1);
}

static vector<String> getOutputsNames(const Net& net)
{
    static vector<String> names;
    if (names.empty())
    {
        // Get the indices of the output layers, i.e. the layers with unconnected outputs
        vector<int> outLayers = net.getUnconnectedOutLayers();
        // Get the names of all the layers in the network
        vector<String> layersNames = net.getLayerNames();
        // Get the names of the output layers in names
        names.resize(outLayers.size());
        for (size_t i = 0; i < outLayers.size(); ++i)
            names[i] = layersNames[outLayers[i] - 1];
    }
    return names;
}

- (void)didReceiveMemoryWarning {
    [super didReceiveMemoryWarning];
    // Dispose of any resources that can be recreated.
}

- (IBAction)turnlight:(UISwitch *)sender {
    Class captureDeviceClass = NSClassFromString(@"AVCaptureDevice");
    if (captureDeviceClass != nil) {
        AVCaptureDevice *device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
        if ([device hasTorch] && [device hasFlash]) {
            [device lockForConfiguration:nil];
            if (sender.isOn) {
                [device setTorchMode:AVCaptureTorchModeOn];
                [device setFlashMode:AVCaptureFlashModeOn];
            } else {
                [device setTorchMode:AVCaptureTorchModeOff];
                [device setFlashMode:AVCaptureFlashModeOff];
            }
            [device unlockForConfiguration];
        }
    }
}

@end

For the last part I added a flashlight toggle button; that code was found online.
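One possible simplification, assuming your OpenCV build is recent enough (around 3.4.2 or newer): the dnn module provides Net::getUnconnectedOutLayersNames(), which can replace the hand-rolled getOutputsNames() helper above.

// Forward pass using the built-in output-layer names (assumes a recent OpenCV build)
vector<Mat> outs;
net.forward(outs, net.getUnconnectedOutLayersNames());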
