Using OpenCV on iOS: Grayscale and Binarization



There are plenty of write-ups online, but most of them target older versions. After reading through what I could find, I put together this summary for the latest Xcode, 6.1.

I have recently been working toward license plate recognition on iOS, which is how I started looking into OpenCV. Feel free to reach out if you are interested in the topic.

 

I. Using OpenCV:

  Steps:

  1. Download the iOS build of opencv2.framework from the official OpenCV site.

  2. Drag it into your project and check "Copy items if needed".

  3. Go to Build Settings and set Framework Search Paths:

     Set it to $(PROJECT_DIR)/Newtest, where Newtest is your project name; the point is simply to let Xcode locate the folder that holds opencv2.framework.

  4. How to include OpenCV. Option (1), a global PCH (not recommended): create a new .pch file and change it to:

              #ifdef __cplusplus

              #import <opencv2/opencv.hpp>

              #endif

                Then, in Build Settings, set the Prefix Header entry:

               Point it at the .pch under $(PROJECT_DIR)/Newtest; as before, Newtest is your project name, and the goal is just to tell Xcode where the PCH file lives.

              Older project templates generated a PCH file by default, acting as a global import. Xcode 6 no longer creates one automatically; Apple now steers developers toward importing headers only in the classes that need them.

             Option (2): #import <opencv2/opencv.hpp> only in the places that need it.

                The key point here: any class that uses OpenCV must have its implementation file renamed to .mm!

                        For example, if you write a dedicated image-processing class such as Imageprocess, you can put the import in its .h file (a minimal sketch follows).
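As a rough sketch of option (2), a hypothetical wrapper class might look like this. The names MyFilter.h / MyFilter.mm are made up for illustration and are not part of the article's project:

// MyFilter.h
#import <UIKit/UIKit.h>

@interface MyFilter : NSObject
- (UIImage *)grayscale:(UIImage *)image;   // only plain UIKit types in the public interface
@end

// MyFilter.mm  (the .mm extension makes Xcode compile this file as Objective-C++)
#import <opencv2/opencv.hpp>   // imported only here, where it is actually needed
#import "MyFilter.h"

@implementation MyFilter
- (UIImage *)grayscale:(UIImage *)image {
    // convert UIImage -> cv::Mat, run cv::cvtColor, convert back (see section III)
    return image; // placeholder
}
@end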

 

II. The main flow of grayscaling and binarization:

  The process boils down to:

  UIImage (iOS image class) -> cv::Mat (OpenCV image class) -> OpenCV grayscale or threshold function -> UIImage
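At its core, the middle step is just two OpenCV calls. The snippet below is a minimal sketch, assuming matImage is the RGBA cv::Mat obtained from a UIImage (as produced by the conversion helper in section III); it uses OpenCV's built-in THRESH_OTSU flag, whereas the reference code below computes the Otsu threshold with a hand-written function:

  cv::Mat matGrey, matBinary;
  cv::cvtColor(matImage, matGrey, CV_RGBA2GRAY);        // grayscale
  cv::threshold(matGrey, matBinary, 0, 255,
                cv::THRESH_BINARY | cv::THRESH_OTSU);   // binarize with an automatic Otsu threshold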

 

III. Reference code for the OpenCV wrapper class Imageprocess:

Imageprocess.h

//
//  Imageprocess.h
//  Chepaishibie
//
//  Created by shen on 15/1/28.
//  Copyright (c) 2015 shen. All rights reserved.
//

#import <Foundation/Foundation.h>
#import <opencv2/opencv.hpp>
#import <UIKit/UIKit.h>

@interface Imageprocess : UIViewController

- (cv::Mat)cvMatFromUIImage:(UIImage *)image;
- (UIImage *)UIImageFromCVMat:(cv::Mat)cvMat;
- (IplImage *)CreateIplImageFromUIImage:(UIImage *)image;
- (UIImage *)UIImageFromIplImage:(IplImage *)image;
- (UIImage *)Grayimage:(UIImage *)srcimage;
- (UIImage *)Erzhiimage:(UIImage *)srcimage;

int Otsu(unsigned char* pGrayImg, int iWidth, int iHeight);

@end

Imageprocess.mm contains quite a few functions:

The main ones are UIImage -> cv::Mat, cv::Mat -> UIImage, UIImage -> IplImage, IplImage -> UIImage, grayscaling and binarization, plus an Otsu routine for computing the threshold.

//
//  Imageprocess.mm
//  Chepaishibie
//
//  Created by shen on 15/1/28.
//  Copyright (c) 2015 shen. All rights reserved.
//

#import "Imageprocess.h"@implementation Imageprocess#pragma mark - opencv method
// UIImage to cvMat
- (cv::Mat)cvMatFromUIImage:(UIImage *)image
{CGColorSpaceRef colorSpace = CGImageGetColorSpace(image.CGImage);CGFloat cols = image.size.width;CGFloat rows = image.size.height;cv::Mat cvMat(rows, cols, CV_8UC4); // 8 bits per component, 4 channels
    CGContextRef contextRef = CGBitmapContextCreate(cvMat.data,                 // Pointer to  datacols,                       // Width of bitmaprows,                       // Height of bitmap8,                          // Bits per componentcvMat.step[0],              // Bytes per rowcolorSpace,                 // ColorspacekCGImageAlphaNoneSkipLast |kCGBitmapByteOrderDefault); // Bitmap info flags
    CGContextDrawImage(contextRef, CGRectMake(0, 0, cols, rows), image.CGImage);CGContextRelease(contextRef);CGColorSpaceRelease(colorSpace);return cvMat;
}// CvMat to UIImage
-(UIImage *)UIImageFromCVMat:(cv::Mat)cvMat
{NSData *data = [NSData dataWithBytes:cvMat.data length:cvMat.elemSize()*cvMat.total()];CGColorSpaceRef colorSpace;if (cvMat.elemSize() == 1) {colorSpace = CGColorSpaceCreateDeviceGray();} else {colorSpace = CGColorSpaceCreateDeviceRGB();}CGDataProviderRef provider = CGDataProviderCreateWithCFData((__bridge CFDataRef)data);// Creating CGImage from cv::MatCGImageRef imageRef = CGImageCreate(cvMat.cols,                                 //widthcvMat.rows,                                 //height8,                                          //bits per component8 * cvMat.elemSize(),                       //bits per pixelcvMat.step[0],                            //bytesPerRowcolorSpace,                                 //colorspacekCGImageAlphaNone|kCGBitmapByteOrderDefault,// bitmap infoprovider,                                   //CGDataProviderRefNULL,                                       //decodefalse,                                      //should interpolatekCGRenderingIntentDefault                   //intent
                                        );// Getting UIImage from CGImageUIImage *finalImage = [UIImage imageWithCGImage:imageRef];CGImageRelease(imageRef);CGDataProviderRelease(provider);CGColorSpaceRelease(colorSpace);return finalImage;
}//由于OpenCV主要针对的是计算机视觉方面的处理,因此在函数库中,最重要的结构体是IplImage结构。
// NOTE you SHOULD cvReleaseImage() for the return value when end of the code.
- (IplImage *)CreateIplImageFromUIImage:(UIImage *)image {// Getting CGImage from UIImageCGImageRef imageRef = image.CGImage;CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();// Creating temporal IplImage for drawingIplImage *iplimage = cvCreateImage(cvSize(image.size.width,image.size.height), IPL_DEPTH_8U, 4);// Creating CGContext for temporal IplImageCGContextRef contextRef = CGBitmapContextCreate(iplimage->imageData, iplimage->width, iplimage->height,iplimage->depth, iplimage->widthStep,colorSpace, kCGImageAlphaPremultipliedLast|kCGBitmapByteOrderDefault);// Drawing CGImage to CGContext
    CGContextDrawImage(contextRef,CGRectMake(0, 0, image.size.width, image.size.height),imageRef);CGContextRelease(contextRef);CGColorSpaceRelease(colorSpace);// Creating result IplImageIplImage *ret = cvCreateImage(cvGetSize(iplimage), IPL_DEPTH_8U, 3);cvCvtColor(iplimage, ret, CV_RGBA2BGR);cvReleaseImage(&iplimage);return ret;
}// NOTE You should convert color mode as RGB before passing to this function
- (UIImage *)UIImageFromIplImage:(IplImage *)image {CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();// Allocating the buffer for CGImageNSData *data =[NSData dataWithBytes:image->imageData length:image->imageSize];CGDataProviderRef provider =CGDataProviderCreateWithCFData((__bridge CFDataRef)data);// Creating CGImage from chunk of IplImageCGImageRef imageRef = CGImageCreate(image->width, image->height,image->depth, image->depth * image->nChannels, image->widthStep,colorSpace, kCGImageAlphaNone|kCGBitmapByteOrderDefault,provider, NULL, false, kCGRenderingIntentDefault);// Getting UIImage from CGImageUIImage *ret = [UIImage imageWithCGImage:imageRef];CGImageRelease(imageRef);CGDataProviderRelease(provider);CGColorSpaceRelease(colorSpace);return ret;
}#pragma mark - custom method// OSTU算法求出阈值
int  Otsu(unsigned char* pGrayImg , int iWidth , int iHeight)
{if((pGrayImg==0)||(iWidth<=0)||(iHeight<=0))return -1;int ihist[256];int thresholdValue=0; // „–÷µint n, n1, n2 ;double m1, m2, sum, csum, fmax, sb;int i,j,k;memset(ihist, 0, sizeof(ihist));n=iHeight*iWidth;sum = csum = 0.0;fmax = -1.0;n1 = 0;for(i=0; i < iHeight; i++){for(j=0; j < iWidth; j++){ihist[*pGrayImg]++;pGrayImg++;}}pGrayImg -= n;for (k=0; k <= 255; k++){sum += (double) k * (double) ihist[k];}for (k=0; k <=255; k++){n1 += ihist[k];if(n1==0)continue;n2 = n - n1;if(n2==0)break;csum += (double)k *ihist[k];m1 = csum/n1;m2 = (sum-csum)/n2;sb = (double) n1 *(double) n2 *(m1 - m2) * (m1 - m2);if (sb > fmax){fmax = sb;thresholdValue = k;}}return(thresholdValue);
}-(UIImage *)Grayimage:(UIImage *)srcimage{UIImage *resimage;//openCV二值化过程:/*//1.Src的UIImage ->  Src的IplImageIplImage* srcImage1 = [self CreateIplImageFromUIImage:srcimage];//2.设置Src的IplImage的ImageROIint width = srcImage1->width;int height = srcImage1->height;printf("图片大小%d,%d\n",width,height);// 分割矩形区域int x = 400;int y = 1100;int w = 1200;int h = 600;//cvSetImageROI:基于给定的矩形设置图像的ROI(感兴趣区域,region of interesting)cvSetImageROI(srcImage1, cvRect(x, y, w , h));//3.创建新的dstImage1的IplImage,并复制Src的IplImageIplImage* dstImage1 = cvCreateImage(cvSize(w, h), srcImage1->depth, srcImage1->nChannels);//cvCopy:如果输入输出数组中的一个是IplImage类型的话,其ROI和COI将被使用。cvCopy(srcImage1, dstImage1,0);//cvResetImageROI:释放基于给定的矩形设置图像的ROI(感兴趣区域,region of interesting)cvResetImageROI(srcImage1);resimage = [self UIImageFromIplImage:dstImage1];*///4.dstImage1的IplImage转换成cvMat形式的matImagecv::Mat matImage = [self cvMatFromUIImage:srcimage];cv::Mat matGrey;//5.cvtColor函数对matImage进行灰度处理//取得IplImage形式的灰度图像cv::cvtColor(matImage, matGrey, CV_BGR2GRAY);// 转换成灰色//6.使用灰度后的IplImage形式图像,用OSTU算法算阈值:threshold//IplImage grey = matGrey;
    resimage = [self UIImageFromCVMat:matGrey];/*unsigned char* dataImage = (unsigned char*)grey.imageData;int threshold = Otsu(dataImage, grey.width, grey.height);printf("阈值:%d\n",threshold);//7.利用阈值算得新的cvMat形式的图像cv::Mat matBinary;cv::threshold(matGrey, matBinary, threshold, 255, cv::THRESH_BINARY);//8.cvMat形式的图像转UIImageUIImage* image = [[UIImage alloc ]init];image = [self UIImageFromCVMat:matBinary];resimage = image;*/return resimage;
}-(UIImage *)Erzhiimage:(UIImage *)srcimage{UIImage *resimage;//openCV二值化过程:/*//1.Src的UIImage ->  Src的IplImageIplImage* srcImage1 = [self CreateIplImageFromUIImage:srcimage];//2.设置Src的IplImage的ImageROIint width = srcImage1->width;int height = srcImage1->height;printf("图片大小%d,%d\n",width,height);//// 分割矩形区域int x = 400;int y = 1100;int w = 1200;int h = 600;//cvSetImageROI:基于给定的矩形设置图像的ROI(感兴趣区域,region of interesting)cvSetImageROI(srcImage1, cvRect(x, y, w , h));//3.创建新的dstImage1的IplImage,并复制Src的IplImageIplImage* dstImage1 = cvCreateImage(cvSize(w, h), srcImage1->depth, srcImage1->nChannels);//cvCopy:如果输入输出数组中的一个是IplImage类型的话,其ROI和COI将被使用。cvCopy(srcImage1, dstImage1,0);//cvResetImageROI:释放基于给定的矩形设置图像的ROI(感兴趣区域,region of interesting)cvResetImageROI(srcImage1);resimage = [self UIImageFromIplImage:dstImage1];*///4.dstImage1的IplImage转换成cvMat形式的matImagecv::Mat matImage = [self cvMatFromUIImage:srcimage];cv::Mat matGrey;//5.cvtColor函数对matImage进行灰度处理//取得IplImage形式的灰度图像cv::cvtColor(matImage, matGrey, CV_BGR2GRAY);// 转换成灰色//6.使用灰度后的IplImage形式图像,用OSTU算法算阈值:thresholdIplImage grey = matGrey;unsigned char* dataImage = (unsigned char*)grey.imageData;int threshold = Otsu(dataImage, grey.width, grey.height);printf("阈值:%d\n",threshold);//7.利用阈值算得新的cvMat形式的图像
    cv::Mat matBinary;cv::threshold(matGrey, matBinary, threshold, 255, cv::THRESH_BINARY);//8.cvMat形式的图像转UIImageUIImage* image = [[UIImage alloc ]init];image = [self UIImageFromCVMat:matBinary];resimage = image;return resimage;
}@end
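Finally, a rough sketch of how the class might be called from a view controller. The names SomeViewController, processPhoto:, grayView and binaryView are hypothetical and not part of the original project:

// SomeViewController.mm  (callers must also be .mm, since Imageprocess.h pulls in opencv.hpp)
#import "Imageprocess.h"
#import "SomeViewController.h"

@implementation SomeViewController

- (void)processPhoto:(UIImage *)photo {
    Imageprocess *processor = [[Imageprocess alloc] init];
    UIImage *grayImage   = [processor Grayimage:photo];     // grayscale version
    UIImage *binaryImage = [processor Erzhiimage:photo];    // binarized version
    self.grayView.image   = grayImage;                      // hypothetical UIImageView outlets
    self.binaryView.image = binaryImage;
}

@end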

IV. Possible problems:

  1. 'list' file not found: check that the files using OpenCV have been renamed to .mm. If that still fails, try adding libc++.dylib in Build Phases.

  2. arm64 not supported: in Build Settings, set Build Active Architecture Only to No, then remove arm64 from Valid Architectures below it (see the .xcconfig sketch after this list).
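In .xcconfig terms, the change described in item 2 corresponds roughly to the following (assuming the stock Xcode 6 architecture list; adjust to your own project):

  ONLY_ACTIVE_ARCH = NO              // Build Active Architecture Only = No
  VALID_ARCHS = armv7 armv7s         // arm64 removed from Valid Architectures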

 

V. Sample references: two good examples are available, one for binarization and one for image matching.

1. Binarization

2. Image matching

 

