Fast transformation of BGR BufferedImage to YUV using FFMpeg

I want to transform a TYPE_3BYTE_BGR BufferedImage in Java to YUV using the sws_scale function of FFMpeg through JNI. I first extract my image data from the BufferedImage:

byte[] imgData = ((DataBufferByte) myImage.getRaster().getDataBuffer()).getData();
byte[] output = processImage(toSend, 0);

Then I pass it to the processImage function, which is a native function. The C++ side looks like this:

JNIEXPORT jbyteArray JNICALL Java_jni_JniExample_processData
  (JNIEnv *env, jobject obj, jbyteArray data, jint index)
{
    jboolean isCopy;
    uint8_t *test = (uint8_t *)env->GetPrimitiveArrayCritical(data, &isCopy);

    uint8_t *inData[1]; // RGB24 have one plane
    inData[0] = test;

    SwsContext *ctx = sws_getContext(width, height, AV_PIX_FMT_BGR24,
                                     (int)width, (int)width, AV_PIX_FMT_YUV420P,
                                     0, 0, 0, 0);

    int lumaPlaneSize = width * height;
    uint8_t *yuv[3];
    yuv[0] = new uint8_t[lumaPlaneSize];
    yuv[1] = new uint8_t[lumaPlaneSize/4];
    yuv[2] = new uint8_t[lumaPlaneSize/4];

    int inLinesize[1]  = { 3*nvEncoder->width };            // RGB stride
    int outLinesize[3] = { 3*width, 3*width, 3*width };     // YUV stride

    sws_scale(ctx, inData, inLinesize, 0, height, yuv, outLinesize);

However, after running the code I get the warning [swscaler @ 0x7fb598659480] Warning: data is not aligned! This can lead to a speedloss, and everything crashes on the last line. Am I passing the correct arguments to sws_scale (especially the strides)?

Update: There was a separate bug here: SwsContext * ctx = sws_getContext(width,height,AV_PIX_FMT_BGR24, (int)width, (int)width,0,NULL,NULL,NULL) which should be changed to: SwsContext * ctx = sws_getContext(width,height,AV_PIX_FMT_BGR24, (int)height, (int)width,0,NULL,NULL,NULL)
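
(For reference, sws_getContext takes the source dimensions and format followed by the destination dimensions and format: sws_getContext(srcW, srcH, srcFormat, dstW, dstH, dstFormat, flags, srcFilter, dstFilter, param). A minimal sketch of the call for a same-size BGR24-to-YUV420P conversion, assuming width and height are the source image dimensions as in the question, would be:

SwsContext *ctx = sws_getContext(width, height, AV_PIX_FMT_BGR24,     // source size and format
                                 width, height, AV_PIX_FMT_YUV420P,   // destination size and format
                                 SWS_BILINEAR, NULL, NULL, NULL);     // flags, filters, params

For a conversion that does not resize, the destination dimensions should match the source dimensions.)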

Accepted answer

The first problem I see is wrong strides for the output image:

yuv[0] = new uint8_t[lumaPlaneSize];
yuv[1] = new uint8_t[lumaPlaneSize/4];
yuv[2] = new uint8_t[lumaPlaneSize/4];
int inLinesize[1] = { 3*nvEncoder->width }; // RGB stride
int outLinesize[3] = { 3*width ,3*width ,3*width }; // YUV stride
//                     ^^^^^^^  ^^^^^^^  ^^^^^^^

The allocated planes are not large enough for the strides being passed. YUV420P stores one byte per sample in each plane, so the factor of 3 is redundant and leads to an out-of-bounds access: because of it, the rescaler skips a lot of space every time it moves to the next line. Also, the actual chroma width is half the luma width, so if you want tightly packed luma and chroma planes with no gaps at the ends of the lines, use the following:

int outLinesize[3] = { width , width / 2 , width / 2 }; // YUV stride

Allocation sizes remain the same.
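
Putting the correction together, here is a minimal sketch of what the native conversion might look like with the fixed strides. It assumes width and height are even image dimensions, that the input byte array holds tightly packed BGR24 pixels, and it omits the JNI plumbing and error handling from the question:

extern "C" {
#include <libswscale/swscale.h>
#include <libavutil/pixfmt.h>
}

// Sketch only: converts tightly packed BGR24 input to tightly packed planar
// YUV420P. yuvOut must hold width*height*3/2 bytes; width and height are
// assumed to be even (required for 4:2:0 chroma subsampling).
static void bgrToYuv420p(const uint8_t *bgr, int width, int height, uint8_t *yuvOut)
{
    SwsContext *ctx = sws_getContext(width, height, AV_PIX_FMT_BGR24,
                                     width, height, AV_PIX_FMT_YUV420P,
                                     SWS_BILINEAR, NULL, NULL, NULL);

    const uint8_t *inData[1] = { bgr };
    int inLinesize[1] = { 3 * width };                     // BGR24: 3 bytes per pixel

    uint8_t *outData[3] = {
        yuvOut,                                            // Y plane: width*height bytes
        yuvOut + width * height,                           // U plane: (width/2)*(height/2) bytes
        yuvOut + width * height + (width / 2) * (height / 2)  // V plane
    };
    int outLinesize[3] = { width, width / 2, width / 2 };  // tightly packed strides

    sws_scale(ctx, inData, inLinesize, 0, height, outData, outLinesize);
    sws_freeContext(ctx);
}

Note that the "data is not aligned" warning checks both the plane pointers and the strides, so with tightly packed planes it may still appear unless the line sizes and buffer addresses happen to be multiples of 16; it is only a speed warning, whereas the crash came from the oversized strides.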
