Research Article: A real-time road detection method based on reorganized lidar data

Date Published: April 16, 2019

Publisher: Public Library of Science

Author(s): Fenglei Xu, Longtao Chen, Jing Lou, Mingwu Ren, Yanyong Guo.


Road detection is a fundamental task in the field of automated driving, in which 3D lidar data has recently become widely used. In this paper, we propose rearranging 3D lidar data into a new organized form that establishes direct spatial relationships among points in the cloud, and we put forward new features for real-time road detection. Our model rests on two prerequisites: (1) road regions are flatter than non-road regions, and (2) light travels in straight lines in a uniform medium. Based on prerequisite 1, we put forward the difference-between-lines feature, while ScanID density and an obstacle radial map are generated based on prerequisite 2. Our method first constructs an array of structures to store and reorganize the 3D input. Two novel features, difference-between-lines and ScanID density, are then extracted, from which we construct a consistency map and an obstacle map in Bird's Eye View (BEV). Finally, the road region is extracted by fusing these two maps, and a refinement step polishes the outcome. Experiments on the public KITTI-Road benchmark show one of the best performances among lidar-based road detection methods. To further demonstrate the efficiency of our method on unstructured roads, visual results in rural areas are also presented.
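The reorganization step described above can be sketched as follows. This is an illustrative reconstruction rather than the authors' implementation: the channel count `N_SCANS`, the azimuth bin count `N_POINTS`, and the uniform angle binning are all assumptions made for the example.

```python
import numpy as np

# Hypothetical sensor geometry (not taken from the paper):
N_SCANS = 64     # assumed number of laser channels (rows, i.e. ScanID)
N_POINTS = 1800  # assumed azimuth bins per revolution (columns, i.e. PointID)

def reorganize(points):
    """Map an unordered (N, 3) point cloud into a (N_SCANS, N_POINTS, 3)
    array so that neighboring cells are spatially adjacent.

    ScanID is derived from the vertical (pitch) angle and PointID from the
    azimuth (rotation) angle; empty cells are left as NaN.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r_xy = np.hypot(x, y)
    pitch = np.arctan2(z, r_xy)      # vertical angle of each return
    azimuth = np.arctan2(y, x)       # rotation angle of each return
    # Quantize both angles into integer indices (uniform binning for brevity).
    scan_id = np.clip(((pitch - pitch.min()) /
                       (np.ptp(pitch) + 1e-9) * (N_SCANS - 1)).astype(int),
                      0, N_SCANS - 1)
    point_id = np.clip(((azimuth + np.pi) /
                        (2 * np.pi) * N_POINTS).astype(int),
                       0, N_POINTS - 1)
    grid = np.full((N_SCANS, N_POINTS, 3), np.nan)
    grid[scan_id, point_id] = points
    return grid
```

The payoff of this layout is that "the previous/next point on the same scan line" and "the corresponding point on the neighboring scan line" become simple index offsets, which is what makes the line-wise features below cheap to compute.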

Partial Text

Traversable road detection has long been a core task for autonomous driving vehicles and has been studied for decades. Numerous implementations have been proposed based on various sensors, among which vision-based road detection is the most conventional. However, the lack of depth information makes environmental perception inadequate for constructing an accurate road model. A consensus has therefore been reached that range data is necessary for road detection, which has recently led to research on the utilization of 3D information. Range data is usually provided by stereo cameras, radars, or lidars. In off-road environments, which are far more challenging than well-structured urban scenes, 3D lidar scanners are especially needed.

In this section, we evaluate the proposed method against five well-performing models demonstrated on the KITTI dataset [34]: Road Estimation with Sparse 3D Points From Velodyne (RES3D-Velo) [31], Graph Based Road Estimation using Sparse 3D Points from Velodyne (GRES3D+VELO) [35], CRF based Road Detection with Multi-Sensor Fusion (FusedCRF) [36], LidarHistogram (LidarHisto) [33], and Hybrid Conditional Random Field (HybridCRF) [37]. For brevity, we refer to our method as RDR, a Road Detection method based on Reorganized lidar Data.

Although RDR performs well and exceeds the compared models on these datasets, it does fail in some cases. These failures stem mainly from two attributes of our method. Fig 10 shows typical failure cases collected from the KITTI dataset. Fig 10(a) is mainly caused by a violation of prerequisite 1: the non-road region is a flat plane with no inflection point at the road boundaries, so RDR selects a point sequence containing both road and non-road regions as a single candidate road part. Fig 10(b) occurs because we use straight-line fitting to locate road boundaries, which are used to choose the road region from the candidates in RDR. To reduce this influence, we increase the vertical resolution of the grid map, stretching the road along the forward direction and reducing its bend radian. Fig 10(b) shows the outcome after adopting this step.
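The grid-map adjustment above can be sketched as follows. The ranges and cell sizes are illustrative assumptions only; the point is that a finer forward resolution (`dx`) than lateral resolution (`dy`) stretches the road along the driving direction, so a gently curved boundary is closer to a straight line for the line-fitting step.

```python
import numpy as np

def to_bev_grid(points, x_range=(0.0, 40.0), y_range=(-10.0, 10.0),
                dx=0.1, dy=0.4):
    """Project points into a BEV occupancy-count grid.

    dx < dy gives finer resolution along the forward (x) axis, which
    stretches the road shape along the driving direction.
    All parameter values here are assumed, not taken from the paper.
    """
    nx = int(round((x_range[1] - x_range[0]) / dx))
    ny = int(round((y_range[1] - y_range[0]) / dy))
    grid = np.zeros((nx, ny), dtype=np.int32)
    ix = ((points[:, 0] - x_range[0]) / dx).astype(int)
    iy = ((points[:, 1] - y_range[0]) / dy).astype(int)
    keep = (ix >= 0) & (ix < nx) & (iy >= 0) & (iy < ny)
    np.add.at(grid, (ix[keep], iy[keep]), 1)  # count points per cell
    return grid
```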

In this paper, we have presented a novel road detection method based on reorganized lidar data that meets real-time requirements. The reorganization of lidar data constructs spatial relationships based on ScanID and PointID, which are, in effect, quantizations of the laser head pitch angle and the lidar rotation angle. Based on the reorganized data, the two main innovations of our work, difference-between-lines and ScanID density, are put forward to generate the consistency map and the obstacle map, respectively. The two maps are then fused into one, and after refinement the road region is ultimately detected. Experimental results demonstrate the strong performance of the proposed model.
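The difference-between-lines idea can be illustrated with a minimal sketch: on flat ground the planar range from the sensor grows monotonically from one scan line to the next, while an obstacle compresses or flattens that difference. The scan-line ordering and the threshold below are assumptions for the example, not the paper's exact feature definition.

```python
import numpy as np

def difference_between_lines(grid):
    """Radial-range differences between consecutive scan lines.

    `grid` is the reorganized (n_scans, n_points, 3) array, with scan
    lines assumed ordered from near (steep pitch) to far (shallow pitch).
    """
    r = np.linalg.norm(grid[..., :2], axis=-1)  # planar range per cell
    return np.diff(r, axis=0)                   # (n_scans - 1, n_points)

def consistency_mask(grid, min_step=0.05):
    """Flat-ground candidates: cells whose range keeps increasing
    line-to-line by at least `min_step` (an illustrative threshold)."""
    return difference_between_lines(grid) > min_step
```

A consistency map of this kind, combined with an obstacle map, is what the fusion and refinement steps above operate on.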



