Getting pixel format from CGImage

Updated: 2024-10-28 07:29:36
Problem description

I understand the bitmap layout and pixel format subject pretty well, but I'm getting an issue when working with png / jpeg images loaded through NSImage – I can't figure out whether what I get is the intended behaviour or a bug.

let nsImage: NSImage = NSImage(byReferencingURL: …)
let cgImage: CGImage = nsImage.CGImageForProposedRect(nil, context: nil, hints: nil)!
let bitmapInfo: CGBitmapInfo = CGImageGetBitmapInfo(cgImage)
Swift.print(bitmapInfo.contains(CGBitmapInfo.ByteOrderDefault)) // True

My kCGBitmapByteOrder32Host is little endian, which implies that the pixel format is also little endian – BGRA in this case. But… png format is big endian by specification, and that's how the bytes are actually arranged in the data – opposite from what bitmap info tells me.
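To make the trap concrete, here is a small standalone sketch (my addition, plain Swift, no CoreGraphics) showing how the same four bytes in memory read as a 32-bit value on a little-endian host:

```swift
// Four bytes laid out in memory in RGBA order, as a png stores them.
let rgbaBytes: [UInt8] = [0xFF, 0x00, 0x00, 0x80]   // R, G, B, A

// Reinterpret them as a single host-endian 32-bit value.
let word: UInt32 = rgbaBytes.withUnsafeBytes { $0.load(as: UInt32.self) }

// On a little-endian host (all current Macs) the first byte in memory is the
// least significant one, so the RGBA bytes read "backwards": word == 0x800000FF.
```

This is the mismatch at issue: the data is big-endian RGBA on disk, while a host-order interpretation labels the very same bytes little-endian BGRA.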

Does anybody know what's going on? Surely the system somehow knows how to deal with this, since pngs are displayed correctly. Is there a bullet-proof way of detecting the pixel format of a CGImage? A complete demo project is available at GitHub.

P. S. I'm copying raw pixel data via the CFDataGetBytePtr buffer into another library's buffer, which then gets processed and saved. In order to do so, I need to explicitly specify the pixel format. The actual images I'm dealing with (any png / jpeg files that I've checked) display correctly, for example:

But the bitmap info of the same images gives me incorrect endianness information, resulting in the bitmap being handled as BGRA pixel format instead of the actual RGBA. When I process it, the result looks like this:

The resulting image demonstrates the colour swap between red and blue pixels. If the RGBA pixel format is specified explicitly, everything works out perfectly, but I need this detection to be automated.
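The red/blue swap can be reproduced without any imaging code; this is a minimal sketch (my own, with a hypothetical `swapRedBlue` helper) of what happens when RGBA data is treated as BGRA:

```swift
// Treating an RGBA buffer as BGRA amounts to swapping bytes 0 and 2 of
// every 4-byte pixel; green and alpha stay in place.
func swapRedBlue(_ pixels: [UInt8]) -> [UInt8] {
    var out = pixels
    for i in stride(from: 0, to: out.count - 3, by: 4) {
        out.swapAt(i, i + 2)
    }
    return out
}

let red: [UInt8] = [255, 0, 0, 255]       // one opaque red RGBA pixel
// swapRedBlue(red) == [0, 0, 255, 255]   // rendered as opaque blue
```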

P. P. S. The documentation briefly mentions that CGColorSpace is another important variable that defines the pixel format / byte order, but I found no mention of how to get this information out of it.

Answer

Some years later, and after testing my findings in production, I can share them with good confidence, though I'm hoping someone with theoretical knowledge will explain things better here. Good places to refresh your memory:

  • Wikipedia: RGBA color space – Representation
  • Apple Lists: Byte Order in CGBitmapContextCreate
  • Apple Lists: kCGImageAlphaPremultiplied First/Last

Based on that, you can use the following extensions:

public enum PixelFormat {
    case abgr
    case argb
    case bgra
    case rgba
}

extension CGBitmapInfo {
    public static var byteOrder16Host: CGBitmapInfo {
        return CFByteOrderGetCurrent() == Int(CFByteOrderLittleEndian.rawValue) ? .byteOrder16Little : .byteOrder16Big
    }

    public static var byteOrder32Host: CGBitmapInfo {
        return CFByteOrderGetCurrent() == Int(CFByteOrderLittleEndian.rawValue) ? .byteOrder32Little : .byteOrder32Big
    }
}

extension CGBitmapInfo {
    public var pixelFormat: PixelFormat? {

        // AlphaFirst – the alpha channel is next to the red channel, argb and bgra are both alpha first formats.
        // AlphaLast – the alpha channel is next to the blue channel, rgba and abgr are both alpha last formats.
        // LittleEndian – blue comes before red, bgra and abgr are little endian formats.
        // Little endian ordered pixels are BGR (BGRX, XBGR, BGRA, ABGR, BGR).
        // BigEndian – red comes before blue, argb and rgba are big endian formats.
        // Big endian ordered pixels are RGB (XRGB, RGBX, ARGB, RGBA, RGB).

        let alphaInfo: CGImageAlphaInfo? = CGImageAlphaInfo(rawValue: self.rawValue & type(of: self).alphaInfoMask.rawValue)
        let alphaFirst: Bool = alphaInfo == .premultipliedFirst || alphaInfo == .first || alphaInfo == .noneSkipFirst
        let alphaLast: Bool = alphaInfo == .premultipliedLast || alphaInfo == .last || alphaInfo == .noneSkipLast
        let endianLittle: Bool = self.contains(.byteOrder32Little)

        // This is slippery… while byte order host returns little endian, default bytes are stored in big endian
        // format. Here we just assume if no byte order is given, then simple RGB is used, aka big endian, though…

        if alphaFirst && endianLittle {
            return .bgra
        } else if alphaFirst {
            return .argb
        } else if alphaLast && endianLittle {
            return .abgr
        } else if alphaLast {
            return .rgba
        } else {
            return nil
        }
    }
}
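The mapping in `pixelFormat` can be sanity-checked without CoreGraphics, since the relevant raw values are documented: alpha info occupies the low five bits (`alphaInfoMask` = 0x1F) and `byteOrder32Little` is 2 << 12. The following standalone sketch (my addition) mirrors the same logic on raw values:

```swift
let alphaInfoMask: UInt32 = 0x1F          // CGBitmapInfo.alphaInfoMask
let byteOrder32Little: UInt32 = 2 << 12   // CGBitmapInfo.byteOrder32Little, 8192

// CGImageAlphaInfo raw values: premultipliedLast = 1, premultipliedFirst = 2,
// last = 3, first = 4, noneSkipLast = 5, noneSkipFirst = 6.
func pixelFormatName(_ raw: UInt32) -> String? {
    let alpha = raw & alphaInfoMask
    let alphaFirst = alpha == 2 || alpha == 4 || alpha == 6
    let alphaLast = alpha == 1 || alpha == 3 || alpha == 5
    let little = raw & byteOrder32Little != 0
    if alphaFirst && little { return "bgra" }
    if alphaFirst { return "argb" }
    if alphaLast && little { return "abgr" }
    if alphaLast { return "rgba" }
    return nil
}

// CGBitmapInfo(rawValue: 8194) is byteOrder32Little | premultipliedFirst,
// which maps to bgra – matching the screen-captured image's bitmap info.
// pixelFormatName(8194) == "bgra"
// pixelFormatName(1) == "rgba"
```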

Note that you should always pay attention to the colour space – it directly affects how the raw pixel data is stored. CGColorSpace(name: CGColorSpace.sRGB) is probably the safest one – it stores colours in plain format: for example, if you deal with RGB red, it will be stored just like that, (255, 0, 0), while a device colour space will give you something like (235, 73, 53).
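If you control the pipeline, an alternative to detection is to redraw the image into a context whose layout you pick, so the bytes are guaranteed to be sRGB, 8-bit RGBA with premultiplied alpha. This is my own sketch, not part of the original answer, and `normalizedRGBA` is a hypothetical helper name:

```swift
#if canImport(CoreGraphics)
import CoreGraphics

// Hypothetical helper: instead of detecting the source layout, redraw the
// image into a context with a known one, then read the bytes knowing they
// are sRGB, 8 bits per component, RGBA, premultiplied alpha.
func normalizedRGBA(_ image: CGImage) -> CGImage? {
    guard let space = CGColorSpace(name: CGColorSpace.sRGB),
          let context = CGContext(data: nil,
                                  width: image.width,
                                  height: image.height,
                                  bitsPerComponent: 8,
                                  bytesPerRow: image.width * 4,
                                  space: space,
                                  bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue)
    else { return nil }
    context.draw(image, in: CGRect(x: 0, y: 0, width: image.width, height: image.height))
    return context.makeImage()
}
#endif
```

This costs one extra draw per image, but it removes any guesswork about the source's byte order and colour space.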

To see this in practice, drop the above and the following into a playground. You'll need two one-pixel red images, one with alpha and one without; this and this should work.

import AppKit
import CoreGraphics

extension CFData {
    public var pixelComponents: [UInt8] {
        let buffer: UnsafeMutablePointer<UInt8> = UnsafeMutablePointer.allocate(capacity: 4)
        defer { buffer.deallocate() }
        CFDataGetBytes(self, CFRange(location: 0, length: CFDataGetLength(self)), buffer)
        return Array(UnsafeBufferPointer(start: buffer, count: 4))
    }
}

let color: NSColor = .red

Thread.sleep(forTimeInterval: 2)

// Must flip coordinates to capture what we want…
let screen: NSScreen = NSScreen.screens.first(where: { $0.frame.contains(NSEvent.mouseLocation) })!
let rect: CGRect = CGRect(origin: CGPoint(x: NSEvent.mouseLocation.x - 10, y: screen.frame.height - NSEvent.mouseLocation.y), size: CGSize(width: 1, height: 1))

Swift.print("Will capture image with \(rect) frame.")

let screenImage: CGImage = CGWindowListCreateImage(rect, [], kCGNullWindowID, [])!
let urlImageWithAlpha: CGImage = NSImage(byReferencing: URL(fileURLWithPath: "/Users/ianbytchek/Downloads/red-pixel-with-alpha.png")).cgImage(forProposedRect: nil, context: nil, hints: nil)!
let urlImageNoAlpha: CGImage = NSImage(byReferencing: URL(fileURLWithPath: "/Users/ianbytchek/Downloads/red-pixel-no-alpha.png")).cgImage(forProposedRect: nil, context: nil, hints: nil)!

Swift.print(screenImage.colorSpace!, screenImage.bitmapInfo, screenImage.bitmapInfo.pixelFormat!, screenImage.dataProvider!.data!.pixelComponents)
Swift.print(urlImageWithAlpha.colorSpace!, urlImageWithAlpha.bitmapInfo, urlImageWithAlpha.bitmapInfo.pixelFormat!, urlImageWithAlpha.dataProvider!.data!.pixelComponents)
Swift.print(urlImageNoAlpha.colorSpace!, urlImageNoAlpha.bitmapInfo, urlImageNoAlpha.bitmapInfo.pixelFormat!, urlImageNoAlpha.dataProvider!.data!.pixelComponents)

let formats: [CGBitmapInfo.RawValue] = [
    CGImageAlphaInfo.premultipliedFirst.rawValue,
    CGImageAlphaInfo.noneSkipFirst.rawValue,
    CGImageAlphaInfo.premultipliedLast.rawValue,
    CGImageAlphaInfo.noneSkipLast.rawValue,
]

for format in formats {
    // This "paints" and prints out components in the order they are stored in data.
    let context: CGContext = CGContext(data: nil, width: 1, height: 1, bitsPerComponent: 8, bytesPerRow: 32, space: CGColorSpace(name: CGColorSpace.sRGB)!, bitmapInfo: format)!
    let components: UnsafeBufferPointer<UInt8> = UnsafeBufferPointer(start: context.data!.assumingMemoryBound(to: UInt8.self), count: 4)
    context.setFillColor(red: 1 / 0xFF, green: 2 / 0xFF, blue: 3 / 0xFF, alpha: 1)
    context.fill(CGRect(x: 0, y: 0, width: 1, height: 1))
    Swift.print(context.colorSpace!, context.bitmapInfo, context.bitmapInfo.pixelFormat!, Array(components))
}

This will output the following. Note how the screen-captured image differs from the ones loaded from disk.

Will capture image with (285.7734375, 294.5, 1.0, 1.0) frame.
<CGColorSpace 0x7fde4e9103e0> (kCGColorSpaceICCBased; kCGColorSpaceModelRGB; iMac) CGBitmapInfo(rawValue: 8194) bgra [27, 13, 252, 255]
<CGColorSpace 0x7fde4d703b20> (kCGColorSpaceICCBased; kCGColorSpaceModelRGB; Color LCD) CGBitmapInfo(rawValue: 3) rgba [235, 73, 53, 255]
<CGColorSpace 0x7fde4e915dc0> (kCGColorSpaceICCBased; kCGColorSpaceModelRGB; Color LCD) CGBitmapInfo(rawValue: 5) rgba [235, 73, 53, 255]
<CGColorSpace 0x7fde4d60d390> (kCGColorSpaceICCBased; kCGColorSpaceModelRGB; sRGB IEC61966-2.1) CGBitmapInfo(rawValue: 2) argb [255, 1, 2, 3]
<CGColorSpace 0x7fde4d60d390> (kCGColorSpaceICCBased; kCGColorSpaceModelRGB; sRGB IEC61966-2.1) CGBitmapInfo(rawValue: 6) argb [255, 1, 2, 3]
<CGColorSpace 0x7fde4d60d390> (kCGColorSpaceICCBased; kCGColorSpaceModelRGB; sRGB IEC61966-2.1) CGBitmapInfo(rawValue: 1) rgba [1, 2, 3, 255]
<CGColorSpace 0x7fde4d60d390> (kCGColorSpaceICCBased; kCGColorSpaceModelRGB; sRGB IEC61966-2.1) CGBitmapInfo(rawValue: 5) rgba [1, 2, 3, 255]

Published: 2023-07-30 16:25:41 · Source: https://www.elefans.com/category/jswz/34/1251029.html