Depth estimation from a disparity map with OpenCV

Problem Description

I'm trying to estimate depth from a stereo image pair with OpenCV. I have a disparity map, and the depth can be estimated as:

    depth = (baseline * focal) / (disparity * SensorSize)
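As a concrete sanity check, here is a minimal sketch of that formula with made-up numbers (the baseline, focal length, sensor element size, and disparity below are assumed values, not from a real setup):

    // Plain depth-from-disparity computation, assuming a purely horizontal
    // stereo setup. All numbers are placeholder values for illustration.
    #include <cstdio>

    int main()
    {
        const double baseline  = 0.12;   // camera baseline in meters (assumed)
        const double focal     = 0.006;  // focal length in meters (assumed)
        const double pixelSize = 6e-6;   // sensor element size in meters (assumed)
        const double disparity = 40.0;   // disparity in pixels (assumed)

        // depth = (baseline * focal) / (disparity * pixelSize)
        const double depth = (baseline * focal) / (disparity * pixelSize);
        std::printf("depth = %.3f m\n", depth);  // 0.12*0.006 / (40*6e-6) = 3 m
        return 0;
    }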

I have used the block-matching technique to find corresponding points in the two rectified images. OpenCV lets you set several block-matching parameters, for example BMState->numberOfDisparities.

After the block-matching step:

    cvFindStereoCorrespondenceBM(frame1r, frame2r, disp, BMState);
    cvConvertScale(disp, disp, 16, 0);
    cvNormalize(disp, vdisp, 0, 255, CV_MINMAX);
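For reference, in the modern C++ API the same parameter is cv::StereoBM's numDisparities, which must be positive and a multiple of 16. A rough, hedged equivalent of the block-matching step above could look like this (the disparity range of 64 and the 21x21 block are assumed values):

    // Sketch of the block-matching step with the modern C++ API; not the
    // original code. Inputs must be 8-bit single-channel rectified images.
    #include <opencv2/core.hpp>
    #include <opencv2/calib3d.hpp>

    cv::Mat computeDisparity(const cv::Mat& leftRectified, const cv::Mat& rightRectified)
    {
        // 64 disparity levels (must be divisible by 16) and a 21x21 block
        // are assumed values, not tuned for any particular camera rig.
        cv::Ptr<cv::StereoBM> bm = cv::StereoBM::create(64, 21);

        cv::Mat disparity16;               // CV_16S, fixed-point: disparity * 16
        bm->compute(leftRectified, rightRectified, disparity16);

        cv::Mat disparity;                 // real-valued disparity in pixels
        disparity16.convertTo(disparity, CV_32F, 1.0 / 16.0);
        return disparity;
    }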

I then compute the depth value as:

    if (cvGet2D(vdisp, y, x).val[0] > 0) {
        depth = (baseline * focal) / (cvGet2D(vdisp, y, x).val[0] * SENSOR_ELEMENT_SIZE);
    }

But the depth value obtained this way differs from the value given by the formula above, because changing BMState->numberOfDisparities changes the result.

How should I set this parameter? And what does changing it actually affect?

Thanks

Solution

The simple formula is valid if and only if the motion from left camera to right one is a pure translation (in particular, parallel to the horizontal image axis).

In practice this is hardly ever the case. It is common, for example, to perform the matching after rectifying the images, i.e. after warping them using a known fundamental matrix, so that corresponding pixels are constrained to belong to the same row. Once you have matches on the rectified images, you can remap them onto the original images using the inverse of the rectifying warp, and then triangulate into 3D space to reconstruct the scene. OpenCV has a routine to do that: reprojectImageTo3D.
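A hedged sketch of that route, assuming the calibration data (camera matrices, distortion coefficients, and the rotation/translation between the cameras) is already available; all variable names here are placeholders:

    // Sketch: turn a disparity map into a 3-D point cloud using the Q matrix
    // produced by stereo rectification. Calibration inputs are placeholders.
    #include <opencv2/core.hpp>
    #include <opencv2/calib3d.hpp>

    cv::Mat disparityToPointCloud(const cv::Mat& disparity32F,
                                  const cv::Mat& K1, const cv::Mat& D1,
                                  const cv::Mat& K2, const cv::Mat& D2,
                                  const cv::Size& imageSize,
                                  const cv::Mat& R, const cv::Mat& T)
    {
        cv::Mat R1, R2, P1, P2, Q;
        // Q is the 4x4 disparity-to-depth mapping computed during rectification.
        cv::stereoRectify(K1, D1, K2, D2, imageSize, R, T, R1, R2, P1, P2, Q);

        cv::Mat points3D;                  // CV_32FC3: (X, Y, Z) per pixel
        cv::reprojectImageTo3D(disparity32F, points3D, Q, /*handleMissingValues=*/true);
        return points3D;                   // the Z channel is the metric depth
    }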
