Developer Voices | Building a Point Cloud Map with the NDT Algorithm
This chapter covers a common way to build point cloud maps outdoors, together with the underlying principles and implementation: an NDT-based lidar odometry, with IMU/encoder/GPS providing the initial guess for registration, to complete the map.
The article below, contributed by community developer Hu Xiangcheng, explains in detail how to build a point cloud map with the NDT algorithm. We hope it is helpful to interested developers.
ENJOY
In high-definition map production, building the point cloud map is the first step. In this chapter we use NDT (Normal Distributions Transform) to build one. There are of course many other ways to build a 3D point cloud map, such as LOAM and LeGO-LOAM. Here we implement a lidar odometry; the lidar is a 16-beam Robosense unit. The mapping runs offline and is demanding on the computer, so the more memory, the better. For the derivation of NDT, see link 1 at the end of this article.
The NDT Principle
Consecutive lidar scans are registered with NDT, and each registration result serves as the initial guess for the next frame. This yields the current pose, from which we compute the transform relative to the first frame, align the current scan to the origin, downsample it, and add it to the map frame by frame, building the map. A purely lidar-based estimate will inevitably accumulate drift; since our platform also carries an encoder and an IMU, we can use them to provide the initial guess for NDT registration, which in theory noticeably mitigates the drift.
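The chaining step can be sketched in a few lines of plain C++. This is an illustration only, using a hypothetical hand-rolled 4×4 row-major matrix type rather than the Eigen types used later, and assuming planar motion for clarity:

```cpp
#include <array>
#include <cmath>

using Mat4 = std::array<std::array<double, 4>, 4>;

// Product of two 4x4 homogeneous transforms: out = a * b
inline Mat4 matmul(const Mat4& a, const Mat4& b) {
    Mat4 out{};  // zero-initialized
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j)
            for (int k = 0; k < 4; ++k)
                out[i][j] += a[i][k] * b[k][j];
    return out;
}

// Planar pose: rotation by yaw about z, plus an (x, y) translation
inline Mat4 make_pose(double yaw, double x, double y) {
    Mat4 t{};
    t[0][0] = std::cos(yaw); t[0][1] = -std::sin(yaw); t[0][3] = x;
    t[1][0] = std::sin(yaw); t[1][1] =  std::cos(yaw); t[1][3] = y;
    t[2][2] = 1.0;
    t[3][3] = 1.0;
    return t;
}

// Each registration returns the relative transform t_rel between consecutive
// frames; composing it onto the previous global pose yields the current global
// pose, which is exactly the matrix that aligns the current scan into the map.
inline Mat4 chain(const Mat4& prev_global_pose, const Mat4& t_rel) {
    return matmul(prev_global_pose, t_rel);
}
```

For example, two steps of "move 1 m forward, then turn 90°" land the vehicle at (1, 1) facing backwards; it is this accumulated pose, not the per-frame transform, that the scan is transformed by before being added to the map.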
A SLAM algorithm generally produces a feature map, or a map that has been downsampled. To build a high-definition map, annotated by hand, we need a dense point cloud map. A dense map is simple: once registration yields the transform between two frames, take the current frame, drop only the points that are too near or too far, and add it to the map without further processing.
The mapping in this article can use extra sensors such as an IMU, encoder, and GPS, or run on lidar alone. We only consume their raw measurements; this hardly counts as sensor fusion.
We subscribe to the relevant topics (lidar, encoder, IMU) and publish the map built so far on /ndt_map, the current scan on /current_points, and the vehicle's odometry on /ndt_odom.
ndt_map_pub = nh.advertise<sensor_msgs::PointCloud2>("/ndt_map", 5);
// dense_map_pub = nh.advertise<sensor_msgs::PointCloud2>("/dense_map", 1000);
odom_pub = nh.advertise<nav_msgs::Odometry>("/ndt_odom", 5, false);
current_points_pub = nh.advertise<sensor_msgs::PointCloud2>("/current_points", 5);

points_sub = nh.subscribe(_lidar_topic, 5, &LidarMapping::points_callback, this);
odom_sub = nh.subscribe(_odom_topic, 5, &LidarMapping::odom_callback, this);
imu_sub = nh.subscribe(_imu_topic, 5, &LidarMapping::imu_callback, this);
Preprocessing
For a lidar, points that are too close fall on the vehicle body, while points that are too far away are already very sparse, so both are filtered out.
// tmp holds the raw scan; keep only the points within the valid range in scan
for (auto point : tmp.points) {
  r = std::sqrt(pow(point.x, 2.0) + pow(point.y, 2.0));
  if (min_scan_range < r && r < max_scan_range) {
    scan.points.push_back(point);
  }
}
pcl::PointCloud<pcl::PointXYZI>::Ptr scan_ptr(new pcl::PointCloud<pcl::PointXYZI>(scan));  // copy scan into scan_ptr
The first scan is also added to the map directly; initial_scan_loaded marks that this is the first scan. Subsequent scans are added to the map only after registration.
if (initial_scan_loaded == 0)
{
  pcl::transformPointCloud(*scan_ptr, *transformed_scan_ptr, tf_btol);
  map += *transformed_scan_ptr;
  initial_scan_loaded = 1;
}
tf_btol is configured at startup: it is the initial transform matrix, needed when the vehicle does not start at the predefined origin of the global map. Later registrations are all relative to the local map origin, so its inverse, tf_ltob, serves as the compensating matrix.
// Initialize tl_btol, rot_x_btol, etc. from the configured _tf_x, _tf_roll, ...
Eigen::Translation3f tl_btol(_tf_x, _tf_y, _tf_z);
Eigen::AngleAxisf rot_x_btol(_tf_roll, Eigen::Vector3f::UnitX());  // rot: rotation
Eigen::AngleAxisf rot_y_btol(_tf_pitch, Eigen::Vector3f::UnitY());
Eigen::AngleAxisf rot_z_btol(_tf_yaw, Eigen::Vector3f::UnitZ());

tf_btol = (tl_btol * rot_z_btol * rot_y_btol * rot_x_btol).matrix();
tf_ltob = tf_btol.inverse();  // the inverse of tf_btol
Downsampling
// Downsample the scan with a VoxelGrid filter into filtered_scan_ptr
pcl::VoxelGrid<pcl::PointXYZI> voxel_grid_filter;
voxel_grid_filter.setLeafSize(voxel_leaf_size, voxel_leaf_size, voxel_leaf_size);
voxel_grid_filter.setInputCloud(scan_ptr);
voxel_grid_filter.filter(*filtered_scan_ptr);
Before using the IMU data, check its orientation: you may need to flip the sign of some axes. The requirement is simply that the real-time yaw measured by the IMU roughly agrees with the vehicle's heading. Pay particular attention here: automotive IMUs come in 6-axis and 9-axis variants. A 9-axis IMU has a magnetometer, so its yaw is the angle relative to true north, i.e. the actual heading, and can be used directly. A 6-axis IMU measures yaw relative to the vehicle body, which cannot serve as the heading directly; only the increment between consecutive frames is meaningful. An integrated navigation unit usually carries a 6-axis IMU, but its dual GPS antennas recover true north, so the yaw it outputs is relative to north and can be used directly.
You need a clear picture of your own sensors here; see "Autonomous Driving in Practice (4): Aligning Geographic and Planar Coordinate Frames and Visualizing INS Data" (link 2 at the end) for how to test the IMU orientation.
void imu_callback(const sensor_msgs::Imu::Ptr &input) {
  if (_imu_upside_down)  // _imu_upside_down: whether the IMU axes need sign flips
    imuUpSideDown(input);

  const ros::Time current_time = input->header.stamp;
  static ros::Time previous_time = current_time;
  const double diff_time = (current_time - previous_time).toSec();

  // Parse the IMU message to get roll/pitch/yaw
  double imu_roll, imu_pitch, imu_yaw;
  tf::Quaternion imu_orientation;
  tf::quaternionMsgToTF(input->orientation, imu_orientation);
  tf::Matrix3x3(imu_orientation).getRPY(imu_roll, imu_pitch, imu_yaw);

  imu_roll = warpToPmPi(imu_roll);  // wrap to within ±180°, in radians
  imu_pitch = warpToPmPi(imu_pitch);
  imu_yaw = warpToPmPi(imu_yaw);

  static double previous_imu_roll = imu_roll, previous_imu_pitch = imu_pitch, previous_imu_yaw = imu_yaw;
  const double diff_imu_roll = calcDiffForRadian(imu_roll, previous_imu_roll);
  const double diff_imu_pitch = calcDiffForRadian(imu_pitch, previous_imu_pitch);
  const double diff_imu_yaw = calcDiffForRadian(imu_yaw, previous_imu_yaw);

  imu.header = input->header;  // only the x component of the instantaneous acceleration is used
  imu.linear_acceleration.x = input->linear_acceleration.x;
  // imu.linear_acceleration.y = input->linear_acceleration.y;
  // imu.linear_acceleration.z = input->linear_acceleration.z;
  imu.linear_acceleration.y = 0;
  imu.linear_acceleration.z = 0;

  if (diff_time != 0) {
    imu.angular_velocity.x = diff_imu_roll / diff_time;
    imu.angular_velocity.y = diff_imu_pitch / diff_time;
    imu.angular_velocity.z = diff_imu_yaw / diff_time;
  } else {
    imu.angular_velocity.x = 0;
    imu.angular_velocity.y = 0;
    imu.angular_velocity.z = 0;
  }

  imu_calc(input->header.stamp);  // explained in detail below

  previous_time = current_time;
  previous_imu_roll = imu_roll;
  previous_imu_pitch = imu_pitch;
  previous_imu_yaw = imu_yaw;
}
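The imu_callback above leans on two angle helpers, warpToPmPi and calcDiffForRadian, whose definitions are not shown in this article. A minimal sketch of what they presumably do, wrapping an angle into [-π, π) and returning the wrapped difference of two angles respectively, might look like:

```cpp
#include <cmath>

// Wrap an angle in radians into [-pi, pi). Assumed behavior of warpToPmPi.
inline double warpToPmPi(double a) {
    const double two_pi = 2.0 * M_PI;
    a = std::fmod(a + M_PI, two_pi);
    if (a < 0.0) a += two_pi;  // std::fmod keeps the sign of the dividend
    return a - M_PI;
}

// Wrapped difference now - before, so crossing the +/-180 deg seam gives the
// short way around: e.g. (+179 deg) - (-179 deg) wraps to -2 deg, not +358 deg.
// Assumed behavior of calcDiffForRadian.
inline double calcDiffForRadian(double now, double before) {
    return warpToPmPi(now - before);
}
```

Without this wrapping, a yaw jump from +179° to -179° would be read as a 358° rotation between frames and would wreck the angular-velocity estimate computed in the callback.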
void odom_callback(const nav_msgs::Odometry::ConstPtr &input) {
  odom = *input;
  odom_calc(input->header.stamp);  // explained below
}
If GPS is available, it can also supply the initial guess. Some GPS units report a heading angle; others report only a position. Heading is more accurate, but position alone is fine: from consecutive position fixes you can compute a planar yaw. This computation comes up often in my blog series; here we give only the basics, and the conversion to a transform matrix follows the earlier code.
Without heading:
geometry_msgs::PoseStamped pose;
pose.header = msg.header;
pose.header.stamp = msg.header.stamp;
pose.header.frame_id = "map";
pose.pose.position.x = gps_pos_(0);  // GPS position
pose.pose.position.y = gps_pos_(1);
pose.pose.position.z = gps_pos_(2);

// Compute yaw
double distance = sqrt(pow(pose.pose.position.y - _prev_pose.pose.position.y, 2) +
                       pow(pose.pose.position.x - _prev_pose.pose.position.x, 2));
if (distance > 0.2)
{
  // atan2 returns the angle between the line through the two points and the positive x axis
  yaw = atan2(pose.pose.position.y - _prev_pose.pose.position.y, pose.pose.position.x - _prev_pose.pose.position.x);
  _quat = tf::createQuaternionMsgFromYaw(yaw);
  _prev_pose = pose;
}
pose.pose.orientation = _quat;  // yaw as a quaternion
With heading:
// with heading
gps_heading_time_ = heading_msg->header.stamp;
gps_heading_ = Eigen::Quaterniond(q.z, 0, 0, q.w);  // gps heading to map
has_heading_ = true;

if (has_pos_) {
  NODELET_INFO("INITIAL POSE AND HEADING...");

  auto ros_time_diff = gps_pos_time_ - gps_heading_time_;
  double time_diff = ros_time_diff.toSec();

  Eigen::Isometry3d e;
  e.setIdentity();
  if (has_heading_ && std::fabs(time_diff) < 0.5) {
    e.prerotate(gps_heading_);
  } else {
    e.prerotate(Eigen::Quaterniond::Identity());
  }
  e.pretranslate(gps_pos_);

  geometry_msgs::PoseStamped heading_pose;
  heading_pose.header = heading_msg->header;
  heading_pose.header.stamp = heading_msg->header.stamp;
  heading_pose.header.frame_id = "map";
  heading_pose.pose.position.x = gps_pos_(0);  // position
  heading_pose.pose.position.y = gps_pos_(1);
  heading_pose.pose.position.z = gps_pos_(2);

  heading_pose.pose.orientation.x = gps_heading_.x();  // heading angle
  heading_pose.pose.orientation.y = gps_heading_.y();
  heading_pose.pose.orientation.z = gps_heading_.z();
  heading_pose.pose.orientation.w = gps_heading_.w();
  heading_pub.publish(heading_pose);

  has_pos_ = false;
  has_heading_ = false;
}
Next, the NDT registration itself is configured:
ndt.setTransformationEpsilon(trans_eps);  // max allowed change between two consecutive transforms, the convergence threshold that ends iteration; 0.01 here
ndt.setStepSize(step_size);               // maximum step length for the Newton optimization; 0.1
ndt.setResolution(ndt_res);               // NDT cell resolution; I use 0.5, typically 0.1-1: too large hurts accuracy, too small blows up memory
ndt.setMaximumIterations(max_iter);       // maximum number of NDT iterations; I set 30
ndt.setInputSource(filtered_scan_ptr);    // the source cloud
Now consider the map: the first scan is registered against the initial map, which starts out empty.
static bool is_first_map = true;
if (is_first_map == true) {
  ndt.setInputTarget(map_ptr);
  is_first_map = false;
}
Now consider the pose we output. To avoid confusion we define guess_pose, previous_pose, and guess_pose_for_ndt. guess_pose is the initial estimate for the current frame, i.e. the starting point of the NDT optimization; without GNSS/IMU/odometry it is derived from the previous frame's pose (previous_pose). guess_pose_for_ndt is the guess finally used for the current frame's registration.
/* Reminder: how transformation matrices work:

      |-------> This column is the translation
 | 1 0 0 x | \
 | 0 1 0 y |  }-> The identity 3x3 matrix (no rotation) on the left
 | 0 0 1 z | /
 | 0 0 0 1 | -> We do not use this line (and it has to stay 0,0,0,1)

 METHOD #1: Using a Matrix4f
 This is the "manual" method, perfect to understand but error prone!
*/

// initial pose
guess_pose.x = previous_pose.x + diff_x;
guess_pose.y = previous_pose.y + diff_y;
guess_pose.z = previous_pose.z + diff_z;
guess_pose.roll = previous_pose.roll;
guess_pose.pitch = previous_pose.pitch;
guess_pose.yaw = previous_pose.yaw + diff_yaw;  // planar motion: approximately only the yaw changes
pose guess_pose_for_ndt;
guess_pose_for_ndt = guess_pose;

Eigen::AngleAxisf init_rotation_x(guess_pose_for_ndt.roll, Eigen::Vector3f::UnitX());  // rotation about X by roll
Eigen::AngleAxisf init_rotation_y(guess_pose_for_ndt.pitch, Eigen::Vector3f::UnitY());
Eigen::AngleAxisf init_rotation_z(guess_pose_for_ndt.yaw, Eigen::Vector3f::UnitZ());
Eigen::Translation3f init_translation(guess_pose_for_ndt.x, guess_pose_for_ndt.y, guess_pose_for_ndt.z);

// angle-axis rotations + translation -> initial transform matrix
Eigen::Matrix4f init_guess = (init_translation * init_rotation_z * init_rotation_y * init_rotation_x).matrix() * tf_btol;
For NDT, once the two clouds to be matched are fixed, the registration result depends heavily on the quality of the initial guess, though NDT is not as sensitive to it as ICP. Whether NDT or ICP, the resulting cost function is minimized iteratively with Newton's method, so registration time is closely tied to the iteration count. We have four ways to obtain the initial guess: mapping with lidar alone, use the previous frame's transform; with an encoder and IMU, fuse the two into a guess; with GPS, compute the guess from GPS.
pose guess_pose_for_ndt;  // named guess_pose_for_ndt for convenience
if (_use_imu && _use_odom) {
  guess_pose_for_ndt = guess_pose_imu_odom;
  std::cout << " use imu_odom guess pose" << std::endl;
} else if (_use_imu && !_use_odom) {
  std::cout << " use imu guess pose" << std::endl;
  guess_pose_for_ndt = guess_pose_imu;
} else if (!_use_imu && _use_odom) {
  std::cout << " use odom guess pose" << std::endl;
  guess_pose_for_ndt = guess_pose_odom;
} else {
  std::cout << " use origin guess pose" << std::endl;
  guess_pose_for_ndt = guess_pose;
}

// build init_guess
Eigen::AngleAxisf init_rotation_x(static_cast<const float &>(guess_pose_for_ndt.roll), Eigen::Vector3f::UnitX());
Eigen::AngleAxisf init_rotation_y(static_cast<const float &>(guess_pose_for_ndt.pitch), Eigen::Vector3f::UnitY());
Eigen::AngleAxisf init_rotation_z(static_cast<const float &>(guess_pose_for_ndt.yaw), Eigen::Vector3f::UnitZ());

Eigen::Translation3f init_translation(static_cast<const float &>(guess_pose_for_ndt.x),
                                      static_cast<const float &>(guess_pose_for_ndt.y),
                                      static_cast<const float &>(guess_pose_for_ndt.z));

Eigen::Matrix4f init_guess = (init_translation * init_rotation_z * init_rotation_y * init_rotation_x).matrix() * tf_btol;
guess_pose has already been assigned from previous_pose above. The IMU and encoder are incremental sensors: their instantaneous values cannot be used directly; we need the increments over the short interval between consecutive frames. For an IMU, the accelerometer measurement itself is fairly accurate, but the velocity from one integration, and the displacement from a second, carry significant error.
void imu_calc(ros::Time current_time) {
  static ros::Time previous_time = current_time;
  double diff_time = (current_time - previous_time).toSec();

  double diff_imu_roll = imu.angular_velocity.x * diff_time;
  double diff_imu_pitch = imu.angular_velocity.y * diff_time;
  double diff_imu_yaw = imu.angular_velocity.z * diff_time;

  current_pose_imu.roll += diff_imu_roll;
  current_pose_imu.pitch += diff_imu_pitch;
  current_pose_imu.yaw += diff_imu_yaw;

  // rotate the measured acceleration through roll/pitch/yaw to compensate for the vehicle attitude
  double accX1 = imu.linear_acceleration.x;
  double accY1 = std::cos(current_pose_imu.roll) * imu.linear_acceleration.y - std::sin(current_pose_imu.roll) * imu.linear_acceleration.z;
  double accZ1 = std::sin(current_pose_imu.roll) * imu.linear_acceleration.y + std::cos(current_pose_imu.roll) * imu.linear_acceleration.z;

  double accX2 = std::sin(current_pose_imu.pitch) * accZ1 + std::cos(current_pose_imu.pitch) * accX1;
  double accY2 = accY1;
  double accZ2 = std::cos(current_pose_imu.pitch) * accZ1 - std::sin(current_pose_imu.pitch) * accX1;

  double accX = std::cos(current_pose_imu.yaw) * accX2 - std::sin(current_pose_imu.yaw) * accY2;
  double accY = std::sin(current_pose_imu.yaw) * accX2 + std::cos(current_pose_imu.yaw) * accY2;
  double accZ = accZ2;

  // integrate to get the offsets along x/y/z: a velocity term plus an acceleration term
  offset_imu_x += current_velocity_imu_x * diff_time + accX * diff_time * diff_time / 2.0;
  offset_imu_y += current_velocity_imu_y * diff_time + accY * diff_time * diff_time / 2.0;
  offset_imu_z += current_velocity_imu_z * diff_time + accZ * diff_time * diff_time / 2.0;

  current_velocity_imu_x += accX * diff_time;  // the IMU velocity is corrected by SLAM to avoid accumulating error;
  current_velocity_imu_y += accY * diff_time;  // put differently, the velocity used here is not obtained from the IMU alone but from SLAM.
  current_velocity_imu_z += accZ * diff_time;  // The IMU mainly contributes the angular offsets and the acceleration-induced displacement.

  offset_imu_roll += diff_imu_roll;
  offset_imu_pitch += diff_imu_pitch;
  offset_imu_yaw += diff_imu_yaw;

  guess_pose_imu.x = previous_pose.x + offset_imu_x;
  guess_pose_imu.y = previous_pose.y + offset_imu_y;
  guess_pose_imu.z = previous_pose.z + offset_imu_z;
  guess_pose_imu.roll = previous_pose.roll + offset_imu_roll;
  guess_pose_imu.pitch = previous_pose.pitch + offset_imu_pitch;
  guess_pose_imu.yaw = previous_pose.yaw + offset_imu_yaw;

  previous_time = current_time;
}
For the odometry, the wheel speed gives an accurate real-time velocity, but quantities derived from it through a motion model, such as acceleration, are again somewhat unreliable.
void odom_calc(ros::Time current_time) {
  static ros::Time previous_time = current_time;
  double diff_time = (current_time - previous_time).toSec();

  double diff_odom_roll = odom.twist.twist.angular.x * diff_time;
  double diff_odom_pitch = odom.twist.twist.angular.y * diff_time;
  double diff_odom_yaw = odom.twist.twist.angular.z * diff_time;

  current_pose_odom.roll += diff_odom_roll;
  current_pose_odom.pitch += diff_odom_pitch;
  current_pose_odom.yaw += diff_odom_yaw;

  double diff_distance = odom.twist.twist.linear.x * diff_time;
  offset_odom_x += diff_distance * cos(-current_pose_odom.pitch) * cos(current_pose_odom.yaw);
  offset_odom_y += diff_distance * cos(-current_pose_odom.pitch) * sin(current_pose_odom.yaw);
  offset_odom_z += diff_distance * sin(-current_pose_odom.pitch);

  offset_odom_roll += diff_odom_roll;
  offset_odom_pitch += diff_odom_pitch;
  offset_odom_yaw += diff_odom_yaw;

  guess_pose_odom.x = previous_pose.x + offset_odom_x;
  guess_pose_odom.y = previous_pose.y + offset_odom_y;
  guess_pose_odom.z = previous_pose.z + offset_odom_z;
  guess_pose_odom.roll = previous_pose.roll + offset_odom_roll;
  guess_pose_odom.pitch = previous_pose.pitch + offset_odom_pitch;
  guess_pose_odom.yaw = previous_pose.yaw + offset_odom_yaw;

  previous_time = current_time;
}
We can combine the strengths and weaknesses of the two sensors: use only the IMU's gyroscope and the encoder's velocity to compute a more reliable guess.
void imu_odom_calc(ros::Time current_time) {
  static ros::Time previous_time = current_time;  // a static local is initialized only on the first call, so it persists between calls
  double diff_time = (current_time - previous_time).toSec();

  // from the IMU use only the gyroscope, i.e. only the roll/pitch/yaw increments
  double diff_imu_roll = imu.angular_velocity.x * diff_time;
  double diff_imu_pitch = imu.angular_velocity.y * diff_time;
  double diff_imu_yaw = imu.angular_velocity.z * diff_time;

  current_pose_imu_odom.roll += diff_imu_roll;
  current_pose_imu_odom.pitch += diff_imu_pitch;
  current_pose_imu_odom.yaw += diff_imu_yaw;

  // odometry: accumulate the xyz displacement, fusing the odom velocity (displacement) with the IMU angles
  double diff_distance = odom.twist.twist.linear.x * diff_time;
  offset_imu_odom_x += diff_distance * cos(-current_pose_imu_odom.pitch) * cos(current_pose_imu_odom.yaw);
  offset_imu_odom_y += diff_distance * cos(-current_pose_imu_odom.pitch) * sin(current_pose_imu_odom.yaw);
  offset_imu_odom_z += diff_distance * sin(-current_pose_imu_odom.pitch);

  offset_imu_odom_roll += diff_imu_roll;
  offset_imu_odom_pitch += diff_imu_pitch;
  offset_imu_odom_yaw += diff_imu_yaw;

  // fuse IMU and odom into one guess_pose:
  // guess_pose_imu_odom is previous_pose plus the xyz and rpy offsets;
  // the xyz offsets fuse the IMU angles with the odom velocity (displacement),
  // the rpy offsets come directly from the IMU increments
  guess_pose_imu_odom.x = previous_pose.x + offset_imu_odom_x;
  guess_pose_imu_odom.y = previous_pose.y + offset_imu_odom_y;
  guess_pose_imu_odom.z = previous_pose.z + offset_imu_odom_z;
  guess_pose_imu_odom.roll = previous_pose.roll + offset_imu_odom_roll;
  guess_pose_imu_odom.pitch = previous_pose.pitch + offset_imu_odom_pitch;
  guess_pose_imu_odom.yaw = previous_pose.yaw + offset_imu_odom_yaw;

  previous_time = current_time;
}
Using GPS to supply the initial guess was covered in the previous section: simply take the real-time GPS position plus attitude; no increments are needed.
pcl::PointCloud<pcl::PointXYZI>::Ptr output_cloud(new pcl::PointCloud<pcl::PointXYZI>);

ndt.align(*output_cloud, init_guess);              // run the registration
fitness_score = ndt.getFitnessScore();             // registration score
t_localizer = ndt.getFinalTransformation();        // the transform obtained on success
has_converged = ndt.hasConverged();
final_num_iteration = ndt.getFinalNumIteration();  // final iteration count
transformation_probability = ndt.getTransformationProbability();
output_cloud holds the source cloud transformed by the computed alignment. Since the source was heavily downsampled, this cloud is of little use; we only need the transform, apply it to scan_ptr, and add the result to the map.
t_base_link = t_localizer * tf_ltob;  // current pose relative to the origin (localization)
pcl::transformPointCloud(*scan_ptr, *transformed_scan_ptr, t_localizer);  // transform the original scan
Note: NDT is tolerant of position error and usually iterates its way through offsets of up to about 3 m, but it is sensitive to angle, so the initial orientation must not deviate too far from the truth (ideally within ±45°).
tf::Matrix3x3 mat_l, mat_b;  // rotation matrices: frame-to-frame and global

mat_l.setValue(static_cast<double>(t_localizer(0, 0)), static_cast<double>(t_localizer(0, 1)),
               static_cast<double>(t_localizer(0, 2)), static_cast<double>(t_localizer(1, 0)),
               static_cast<double>(t_localizer(1, 1)), static_cast<double>(t_localizer(1, 2)),
               static_cast<double>(t_localizer(2, 0)), static_cast<double>(t_localizer(2, 1)),
               static_cast<double>(t_localizer(2, 2)));

mat_b.setValue(static_cast<double>(t_base_link(0, 0)), static_cast<double>(t_base_link(0, 1)),
               static_cast<double>(t_base_link(0, 2)), static_cast<double>(t_base_link(1, 0)),
               static_cast<double>(t_base_link(1, 1)), static_cast<double>(t_base_link(1, 2)),
               static_cast<double>(t_base_link(2, 0)), static_cast<double>(t_base_link(2, 1)),
               static_cast<double>(t_base_link(2, 2)));

// Update localizer_pose (frame-to-frame)
localizer_pose.x = t_localizer(0, 3);
localizer_pose.y = t_localizer(1, 3);
localizer_pose.z = t_localizer(2, 3);
mat_l.getRPY(localizer_pose.roll, localizer_pose.pitch, localizer_pose.yaw, 1);

// Update ndt_pose (global)
ndt_pose.x = t_base_link(0, 3);
ndt_pose.y = t_base_link(1, 3);
ndt_pose.z = t_base_link(2, 3);
mat_b.getRPY(ndt_pose.roll, ndt_pose.pitch, ndt_pose.yaw, 1);

current_pose.x = ndt_pose.x;
current_pose.y = ndt_pose.y;
current_pose.z = ndt_pose.z;
current_pose.roll = ndt_pose.roll;
current_pose.pitch = ndt_pose.pitch;
current_pose.yaw = ndt_pose.yaw;

// broadcast the map -> base_link tf
transform.setOrigin(tf::Vector3(current_pose.x, current_pose.y, current_pose.z));
q.setRPY(current_pose.roll, current_pose.pitch, current_pose.yaw);
transform.setRotation(q);
br.sendTransform(tf::StampedTransform(transform, current_scan_time, "map", "base_link"));
Because some frames fail to register and must be discarded, the diff used earlier, i.e. the displacement between adjacent frames, also needs to be computed.
// time between the two scans
scan_duration = current_scan_time - previous_scan_time;
double secs = scan_duration.toSec();

// Calculate the offset (current_pose - previous_pose): displacement, velocity, and yaw change between frames
diff_x = current_pose.x - previous_pose.x;
diff_y = current_pose.y - previous_pose.y;
diff_z = current_pose.z - previous_pose.z;
diff_yaw = calcDiffForRadian(current_pose.yaw, previous_pose.yaw);
diff = sqrt(diff_x * diff_x + diff_y * diff_y + diff_z * diff_z);

// constant-velocity model
current_velocity_x = diff_x / secs;
current_velocity_y = diff_y / secs;
current_velocity_z = diff_z / secs;

// Update position and posture: current_pose -> previous_pose
previous_pose.x = current_pose.x;
previous_pose.y = current_pose.y;
previous_pose.z = current_pose.z;
previous_pose.roll = current_pose.roll;
previous_pose.pitch = current_pose.pitch;
previous_pose.yaw = current_pose.yaw;

previous_scan_time.sec = current_scan_time.sec;
previous_scan_time.nsec = current_scan_time.nsec;
The global map is updated every n meters, and a separate thread saves it at a fixed rate. Note: the current frame must be downsampled before it is added to the map, otherwise matching may fail. I hit this problem earlier and was puzzled for a long time before working out why.
During preprocessing, the frame used for matching is downsampled, which you can think of as a kind of feature extraction: the points in each voxel are represented by the voxel's center, so some point cloud detail is inevitably lost. With dense mapping, adding the raw frames to the map keeps the detail far more complete, but then the downsampled current frame may no longer match against that map.
// Calculate the shift between added_pose and current_pose
double shift = sqrt(pow(current_pose.x - added_pose.x, 2.0) + pow(current_pose.y - added_pose.y, 2.0));

// only add a frame after the vehicle has moved the given distance
if (shift >= min_add_scan_shift)
{
  // Note: the frame must be downsampled before being added to the map
  pcl::VoxelGrid<pcl::PointXYZI> voxel_grid_filter_tomap;
  voxel_grid_filter_tomap.setLeafSize(voxel_leaf_size * 2, voxel_leaf_size * 2, voxel_leaf_size * 2);
  voxel_grid_filter_tomap.setInputCloud(transformed_scan_ptr);
  voxel_grid_filter_tomap.filter(*transformed_scan_ptr);

  submap_size += shift;
  map += *transformed_scan_ptr;

  added_pose.x = current_pose.x;
  added_pose.y = current_pose.y;
  added_pose.z = current_pose.z;
  added_pose.roll = current_pose.roll;
  added_pose.pitch = current_pose.pitch;
  added_pose.yaw = current_pose.yaw;

  // the target map_ptr set here does not yet include the just-added transformed_scan_ptr;
  // the map changed, so the registration target must be updated as well
  if (_incremental_voxel_update)
    cpu_ndt.updateVoxelGrid(transformed_scan_ptr);
  else
    cpu_ndt.setInputTarget(map_ptr);
}
Publish the map and the current odometry:
// publish the map
sensor_msgs::PointCloud2::Ptr map_msg_ptr(new sensor_msgs::PointCloud2);
pcl::toROSMsg(*map_ptr, *map_msg_ptr);
ndt_map_pub.publish(*map_msg_ptr);

// publish the odom
nav_msgs::Odometry odom;
odom.header.stamp = current_scan_time;
odom.header.frame_id = "map";

odom.pose.pose.position.x = current_pose.x;
odom.pose.pose.position.y = current_pose.y;
odom.pose.pose.position.z = current_pose.z;

odom.pose.pose.orientation.x = q.x();
odom.pose.pose.orientation.y = q.y();
odom.pose.pose.orientation.z = q.z();
odom.pose.pose.orientation.w = q.w();

odom.child_frame_id = "base_link";
odom.twist.twist.linear.x = current_velocity_x;
odom.twist.twist.linear.y = current_velocity_y;
odom.twist.twist.linear.z = current_velocity_z;

odom_pub.publish(odom);

// publish the current scan as well
sensor_msgs::PointCloud2::Ptr pointcloud_current_ptr(new sensor_msgs::PointCloud2);
pcl::toROSMsg(*transformed_scan_ptr, *pointcloud_current_ptr);
pointcloud_current_ptr->header.frame_id = "map";
current_points_pub.publish(*pointcloud_current_ptr);
Finally, the initial values:
diff = 0.0, diff_x = 0.0, diff_y = 0.0, diff_z = 0.0, diff_yaw = 0.0;  // current_pose - previous_pose
current_velocity_x = 0.0, current_velocity_y = 0.0, current_velocity_z = 0.0;

// Default values
max_iter = 30;            // maximum iterations
ndt_res = 1.0;            // resolution
step_size = 0.1;          // step size
trans_eps = 0.01;         // transformation epsilon
voxel_leaf_size = 0.1;    // leaf size of the VoxelGrid filter
initial_scan_loaded = 0;
min_scan_range = 5;
max_scan_range = 80;
min_add_scan_shift = 0.5;
_tf_x = 0.0, _tf_y = 0.0, _tf_z = 0.0, _tf_roll = 0.0, _tf_pitch = 0.0, _tf_yaw = 0.0;

previous_pose.x = previous_pose.y = previous_pose.z = 0.0;
previous_pose.roll = previous_pose.pitch = previous_pose.yaw = 0.0;
ndt_pose.x = ndt_pose.y = ndt_pose.z = 0.0;
ndt_pose.roll = ndt_pose.pitch = ndt_pose.yaw = 0.0;
current_pose.x = current_pose.y = current_pose.z = 0.0;
current_pose.roll = current_pose.pitch = current_pose.yaw = 0.0;
guess_pose.x = guess_pose.y = guess_pose.z = 0.0;
guess_pose.roll = guess_pose.pitch = guess_pose.yaw = 0.0;
added_pose.x = added_pose.y = added_pose.z = 0.0;
added_pose.roll = added_pose.pitch = added_pose.yaw = 0.0;
The code above is our complete lidar-only mapping pipeline, built on cpu_ndt.
To save the map as a PCD file, you can record the /ndt_map topic from the command line, but you have to judge for yourself when mapping is done. Alternatively, define submaps: accumulate the shift into submap_size and use a threshold max_submap_size to decide when to stop mapping.
if (submap_size >= max_submap_size)
{
  std::string s1 = "submap_";
  std::string s2 = std::to_string(submap_num);
  std::string s3 = ".pcd";
  // std::string pcd_filename = s1 + s2 + s3;
  std::string pcd_filename = "/home/catalina/" + s1 + s2 + s3;

  if (submap.size() != 0)
  {
    if (pcl::io::savePCDFileBinary(pcd_filename, submap) == -1)
    {
      std::cout << "Failed saving " << pcd_filename << "." << std::endl;
    }
    std::cout << "Saved " << pcd_filename << " (" << submap.size() << " points)" << std::endl;

    map = submap;
    submap.clear();
    submap_size = 0.0;
  }
  submap_num++;
}
In each scan, roughly 30% of the points fall on the ground, and ground points do a lot to suppress drift along the z axis, so NDT mapping keeps the ground during registration. For ground removal you can refer to lego_loam; several effective ground-removal algorithms have appeared in recent years, such as ground plane fitting, and I will analyze them in one or two dedicated posts later.
To finish, a little visualization work: add a vehicle model in rviz. It is as simple as adding the following to the launch file:
<include file="$(find vehicle_description)/launch/lexus.launch"/>
My modified vehicle-model package is provided below; build it in your workspace and it is ready to use. Six models are included; change lexus.launch to milee.launch, vehicle_model.launch, and so on. The visualization is shown in the figure below.
Finally, a quick look at the log output. NDT is a probability-based method and is fast: the iteration count generally stays under 5, and a lower score is better, typically under 1. On my laptop (8th-generation Core i5), one iteration takes under 10 ms, and a normal successful match also takes around 10 ms. In production, multi-threading may speed this up further.
Looking at the whole algorithm, NDT's advantage over traditional feature-point methods is matching speed; it behaves like uniform sampling and preserves the details of the cloud fairly well. In scenes full of trees, such as open squares or lakesides, it is hard to extract good feature points, and NDT's advantage becomes obvious.
The two figures below show an open square and a lakeside; both scenes have plenty of trees and bushes, mapping with algorithms such as lego loam failed there, and in the end both maps were built with NDT, using only the lidar odometry part. The algorithm clearly has its merits.
As mentioned earlier, one of NDT's weaknesses is its sensitivity to rotation; the spot shown below is an example of a poor match. The NDT library we currently use is octree-based; many NDT variants exist that reduce the influence of rotation to some degree, and I am still studying them.
1. "NDT Derivation":
http://xchu.net/2019/09/14/NDT%E5%8E%9F%E7%90%86%E6%8E%A8%E5%AF%BC/
2. "Autonomous Driving in Practice (4): Aligning Geographic and Planar Coordinate Frames and Visualizing INS Data":
http://xchu.net/2020/01/10/37gnss-localizer/#more
That's all for "Building a Point Cloud Map with the NDT Algorithm". For more discussion and technical exchange, scan the QR code below to add "Apollo小哥哥" as a friend and join the developer group.
* The content above is developer-contributed and does not represent official Baidu statements.
Source: 攻城狮の家, http://xchu.net/2019/10/11/31ndt-map/. Reposted with the developer's permission; for the original post, tap "Read the original".