Plane fitting in a 3D point cloud

Problem description


I am trying to find planes in a 3D point cloud, using the regression formula Z = aX + bY + C.

I implemented least squares and RANSAC solutions, but the three-parameter equation limits the plane fitting to 2.5D: the formula cannot be applied to planes parallel to the Z axis.

My question is: how can I generalize the plane fitting to full 3D? I want to add a fourth parameter in order to get the full equation aX + bY + cZ + d = 0. How can I avoid the trivial (0, 0, 0, 0) solution?

Thanks!

The code I'm using:

from sklearn import linear_model

def local_regression_plane_ransac(neighborhood):
    """
    Computes parameters for a local regression plane using RANSAC
    """
    XY = neighborhood[:, :2]
    Z = neighborhood[:, 2]
    ransac = linear_model.RANSACRegressor(
        linear_model.LinearRegression(),
        residual_threshold=0.1
    )
    ransac.fit(XY, Z)
    inlier_mask = ransac.inlier_mask_
    coeff = ransac.estimator_.coef_
    intercept = ransac.estimator_.intercept_
    return coeff, intercept, inlier_mask

Solution

Update

This functionality is now integrated in github/daavoo/pyntcloud and makes the plane fitting process much simpler:

Given a point cloud:

You just need to add a scalar field like this:

is_floor = cloud.add_scalar_field("plane_fit")

This will add a new column with value 1 for the points belonging to the fitted plane.

You can visualize the scalar field:
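
For context, here is a minimal end-to-end sketch with pyntcloud (the file name cloud.ply is made up, and I am assuming add_scalar_field returns the name of the new column, as the snippet above suggests):

from pyntcloud import PyntCloud

# load a point cloud from disk (assumed file name; any format pyntcloud reads works)
cloud = PyntCloud.from_file("cloud.ply")

# fit a plane and store the result as a new scalar field;
# is_floor is assumed to hold the name of that new column
is_floor = cloud.add_scalar_field("plane_fit")

# points flagged with 1 belong to the fitted plane
plane_points = cloud.points[cloud.points[is_floor] == 1]
print(len(plane_points), "points lie on the fitted plane")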


Old answer

I think that you could easily use PCA to fit the plane to the 3D points instead of regression: the plane normal is the eigenvector associated with the smallest eigenvalue of the points' covariance matrix, which is always a unit vector, so the trivial solution never comes up.

Here is a simple PCA implementation:

import numpy as np

def PCA(data, correlation=False, sort=True):
    """ Applies Principal Component Analysis to the data

    Parameters
    ----------
    data: array
        The array containing the data. The array must have NxM dimensions, where each
        of the N rows represents a different individual record and each of the M columns
        represents a different variable recorded for that individual record.

            array([
                [V11, ... , V1m],
                ...,
                [Vn1, ... , Vnm]])

    correlation(Optional) : bool
        Set the type of matrix to be computed (see Notes):
            If True compute the correlation matrix.
            If False(Default) compute the covariance matrix.

    sort(Optional) : bool
        Set the order that the eigenvalues/vectors will have
            If True(Default) they will be sorted (from higher value to less).
            If False they won't.

    Returns
    -------
    eigenvalues: (1,M) array
        The eigenvalues of the corresponding matrix.

    eigenvectors: (M,M) array
        The eigenvectors of the corresponding matrix.

    Notes
    -----
    The correlation matrix is a better choice when there are different magnitudes
    representing the M variables. Use covariance matrix in other cases.
    """

    mean = np.mean(data, axis=0)
    data_adjust = data - mean

    #: the data is transposed due to np.cov/corrcoef syntax
    if correlation:
        matrix = np.corrcoef(data_adjust.T)
    else:
        matrix = np.cov(data_adjust.T)

    eigenvalues, eigenvectors = np.linalg.eig(matrix)

    if sort:
        #: sort eigenvalues and eigenvectors
        sort = eigenvalues.argsort()[::-1]
        eigenvalues = eigenvalues[sort]
        eigenvectors = eigenvectors[:, sort]

    return eigenvalues, eigenvectors

And here is how you could fit the points to a plane:

def best_fitting_plane(points, equation=False):
    """ Computes the best fitting plane of the given points

    Parameters
    ----------
    points: array
        The x,y,z coordinates corresponding to the points from which we want
        to define the best fitting plane. Expected format:
            array([
                [x1,y1,z1],
                ...,
                [xn,yn,zn]])

    equation(Optional) : bool
        Set the output plane format:
            If True return the a,b,c,d coefficients of the plane.
            If False(Default) return 1 Point and 1 Normal vector.

    Returns
    -------
    a, b, c, d : float
        The coefficients solving the plane equation.

    or

    point, normal: array
        The plane defined by 1 Point and 1 Normal vector. With format:
            array([Px,Py,Pz]), array([Nx,Ny,Nz])
    """

    w, v = PCA(points)

    #: the normal of the plane is the last eigenvector
    normal = v[:, 2]

    #: get a point from the plane
    point = np.mean(points, axis=0)

    if equation:
        a, b, c = normal
        d = -(np.dot(normal, point))
        return a, b, c, d
    else:
        return point, normal
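
As a quick sanity check, here is a usage sketch with synthetic data (my own illustration, not part of the original answer): it samples noisy points from a known plane and recovers the normal with best_fitting_plane.

# synthetic test: points scattered around the plane z = 0.5x - 0.2y + 3
rng = np.random.default_rng(0)
xy = rng.uniform(-10, 10, size=(500, 2))
z = 0.5 * xy[:, 0] - 0.2 * xy[:, 1] + 3 + rng.normal(scale=0.05, size=500)
points = np.column_stack([xy, z])

point, normal = best_fitting_plane(points)
a, b, c, d = best_fitting_plane(points, equation=True)
print("point on plane:", point)
print("unit normal:   ", normal)
print("plane equation: %.3fx + %.3fy + %.3fz + %.3f = 0" % (a, b, c, d))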

However, as this method is sensitive to outliers, you could use RANSAC to make the fit robust to them.

There is a Python implementation of RANSAC here.

And you should only need to define a Plane Model class in order to use it for fitting planes to 3D points.
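
The linked implementation is not reproduced here, so as an illustration only (my own sketch of the idea, reusing best_fitting_plane from above, with made-up defaults for the parameters), a basic RANSAC loop for planes could look like this:

def ransac_plane(points, threshold=0.05, iterations=1000, rng=None):
    """Fit a plane to Nx3 points with a basic RANSAC loop.

    Returns (point, normal, inlier_mask) of the best candidate plane.
    """
    rng = np.random.default_rng() if rng is None else rng
    best_inliers = None
    best_count = -1
    n = len(points)
    for _ in range(iterations):
        # 1. sample 3 distinct points and build a candidate plane from them
        p1, p2, p3 = points[rng.choice(n, 3, replace=False)]
        normal = np.cross(p2 - p1, p3 - p1)
        norm = np.linalg.norm(normal)
        if norm < 1e-12:          # degenerate (collinear) sample, skip it
            continue
        normal = normal / norm
        # 2. point-to-plane distances for all points
        distances = np.abs((points - p1) @ normal)
        inliers = distances < threshold
        count = inliers.sum()
        # 3. keep the candidate with the most inliers
        if count > best_count:
            best_count = count
            best_inliers = inliers
    if best_inliers is None:
        raise ValueError("no valid plane candidate found")
    # refine: refit with PCA on the inliers only
    point, normal = best_fitting_plane(points[best_inliers])
    return point, normal, best_inliers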

In any case, if you can clean the 3D points of outliers (maybe using a KD-Tree based S.O.R. filter for that), you should get pretty good results with PCA.

Here is an implementation of an S.O.R. filter:

from scipy import stats

def statistical_outlier_removal(kdtree, k=8, z_max=2):
    """ Compute a Statistical Outlier Removal filter on the given KDTree.

    Parameters
    ----------
    kdtree: scipy's KDTree instance
        The KDTree's structure which will be used to compute the filter.

    k(Optional): int
        The number of nearest neighbors which will be used to estimate the
        mean distance from each point to its nearest neighbors.
        Default : 8

    z_max(Optional): int
        The maximum Z score which determines if the point is an outlier or not.

    Returns
    -------
    sor_filter : boolean array
        The boolean mask indicating whether a point should be kept or not.
        The size of the boolean mask will be the same as the number of points
        in the KDTree.

    Notes
    -----
    The 2 optional parameters (k and z_max) should be used in order to adjust
    the filter to the desired result.

    A HIGHER 'k' value will result (normally) in a HIGHER number of points trimmed.

    A LOWER 'z_max' value will result (normally) in a HIGHER number of points trimmed.
    """
    #: note: recent SciPy versions renamed the `n_jobs` argument of query to `workers`
    distances, i = kdtree.query(kdtree.data, k=k, n_jobs=-1)

    z_distances = stats.zscore(np.mean(distances, axis=1))

    sor_filter = abs(z_distances) < z_max

    return sor_filter

You could feed the function with a KD-tree of your 3D points, computed maybe using this implementation.
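
For example, a hedged usage sketch (the input file name is made up; cKDTree comes from scipy.spatial, and with recent SciPy you may need to change n_jobs to workers inside the function above):

import numpy as np
from scipy.spatial import cKDTree

# points is an (N, 3) array of x, y, z coordinates
points = np.loadtxt("cloud.xyz")           # hypothetical input file

kdtree = cKDTree(points)
sor_filter = statistical_outlier_removal(kdtree, k=8, z_max=2)

clean_points = points[sor_filter]          # points kept by the filter
print("kept %d of %d points" % (len(clean_points), len(points)))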
