This article describes how to convert a UIImage to a CVPixelBuffer; it may be a useful reference for anyone facing the same problem.

Question
Apple's new CoreML framework has a prediction function that takes a CVPixelBuffer. In order to classify a UIImage, a conversion must be made between the two.
Conversion code I got from an Apple Engineer:
1 // image has been defined earlier
2
3 var pixelbuffer: CVPixelBuffer? = nil
4
5 CVPixelBufferCreate(kCFAllocatorDefault, Int(image.size.width), Int(image.size.height), kCVPixelFormatType_OneComponent8, nil, &pixelbuffer)
6 CVPixelBufferLockBaseAddress(pixelbuffer!, CVPixelBufferLockFlags(rawValue:0))
7
8 let colorspace = CGColorSpaceCreateDeviceGray()
9 let bitmapContext = CGContext(data: CVPixelBufferGetBaseAddress(pixelbuffer!), width: Int(image.size.width), height: Int(image.size.height), bitsPerComponent: 8, bytesPerRow: CVPixelBufferGetBytesPerRow(pixelbuffer!), space: colorspace, bitmapInfo: 0)!
10
11 bitmapContext.draw(image.cgImage!, in: CGRect(x: 0, y: 0, width: image.size.width, height: image.size.height))
This solution is in Swift and is for a grayscale image. Changes that must be made depending on the type of image are:
Line 5 | kCVPixelFormatType_OneComponent8 to another OSType (kCVPixelFormatType_32ARGB for RGB)
Line 8 | colorspace to another CGColorSpace (CGColorSpaceCreateDeviceRGB() for RGB)
Line 9 | bitsPerComponent stays 8: it is the number of bits per color channel, not per pixel (32ARGB is 8 bits per component x 4 components = 32 bits per pixel)
Line 9 | bitmapInfo to a nonzero CGBitmapInfo value (for 32ARGB, CGImageAlphaInfo.noneSkipFirst.rawValue works; kCGBitmapByteOrderDefault is the byte-order default)
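Putting those changes together, an RGB variant of the grayscale snippet above might look like the following sketch. The function name rgbPixelBuffer is ours, not from the original answer, and note that bitsPerComponent remains 8 because 32ARGB uses 8 bits per channel:

```swift
import UIKit
import CoreVideo

// Sketch of the RGB variant: same structure as the grayscale code,
// with the pixel format, color space, and bitmapInfo swapped as listed above.
func rgbPixelBuffer(from image: UIImage) -> CVPixelBuffer? {
    var pixelbuffer: CVPixelBuffer? = nil
    CVPixelBufferCreate(kCFAllocatorDefault,
                        Int(image.size.width), Int(image.size.height),
                        kCVPixelFormatType_32ARGB,        // was kCVPixelFormatType_OneComponent8
                        nil, &pixelbuffer)
    guard let buffer = pixelbuffer else { return nil }

    CVPixelBufferLockBaseAddress(buffer, CVPixelBufferLockFlags(rawValue: 0))
    defer { CVPixelBufferUnlockBaseAddress(buffer, CVPixelBufferLockFlags(rawValue: 0)) }

    let colorspace = CGColorSpaceCreateDeviceRGB()        // was CGColorSpaceCreateDeviceGray()
    guard let bitmapContext = CGContext(
        data: CVPixelBufferGetBaseAddress(buffer),
        width: Int(image.size.width), height: Int(image.size.height),
        bitsPerComponent: 8,                              // still 8 bits per channel
        bytesPerRow: CVPixelBufferGetBytesPerRow(buffer),
        space: colorspace,
        bitmapInfo: CGImageAlphaInfo.noneSkipFirst.rawValue) // nonzero, ARGB with alpha skipped
    else { return nil }

    if let cgImage = image.cgImage {
        bitmapContext.draw(cgImage, in: CGRect(x: 0, y: 0,
                                               width: image.size.width,
                                               height: image.size.height))
    }
    return buffer
}
```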
Answer
You can take a look at this tutorial https://www.hackingwithswift.com/whats-new-in-ios-11; the code is in Swift 4:
func buffer(from image: UIImage) -> CVPixelBuffer? {
    let attrs = [kCVPixelBufferCGImageCompatibilityKey: kCFBooleanTrue,
                 kCVPixelBufferCGBitmapContextCompatibilityKey: kCFBooleanTrue] as CFDictionary
    var pixelBuffer: CVPixelBuffer?
    let status = CVPixelBufferCreate(kCFAllocatorDefault,
                                     Int(image.size.width), Int(image.size.height),
                                     kCVPixelFormatType_32ARGB, attrs, &pixelBuffer)
    guard status == kCVReturnSuccess else {
        return nil
    }

    CVPixelBufferLockBaseAddress(pixelBuffer!, CVPixelBufferLockFlags(rawValue: 0))
    let pixelData = CVPixelBufferGetBaseAddress(pixelBuffer!)

    let rgbColorSpace = CGColorSpaceCreateDeviceRGB()
    let context = CGContext(data: pixelData,
                            width: Int(image.size.width), height: Int(image.size.height),
                            bitsPerComponent: 8,
                            bytesPerRow: CVPixelBufferGetBytesPerRow(pixelBuffer!),
                            space: rgbColorSpace,
                            bitmapInfo: CGImageAlphaInfo.noneSkipFirst.rawValue)

    // Flip the context vertically so UIKit's top-left origin maps correctly
    context?.translateBy(x: 0, y: image.size.height)
    context?.scaleBy(x: 1.0, y: -1.0)

    UIGraphicsPushContext(context!)
    image.draw(in: CGRect(x: 0, y: 0, width: image.size.width, height: image.size.height))
    UIGraphicsPopContext()

    CVPixelBufferUnlockBaseAddress(pixelBuffer!, CVPixelBufferLockFlags(rawValue: 0))
    return pixelBuffer
}
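As a usage sketch, the resulting buffer can be passed to a CoreML model's prediction call. "MyClassifier" and its prediction(image:) signature are hypothetical placeholders here; a real Xcode-generated model class exposes its own typed prediction API:

```swift
// Hypothetical usage: "MyClassifier" stands in for an Xcode-generated
// CoreML model class whose prediction input is a CVPixelBuffer.
if let image = UIImage(named: "photo"),
   let pixelBuffer = buffer(from: image) {
    let model = MyClassifier()
    if let output = try? model.prediction(image: pixelBuffer) {
        print(output.classLabel)
    }
}
```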
This concludes the article on how to convert a UIImage to a CVPixelBuffer; we hope the recommended answer helps.