Usage of ADM PV-RCNN


Note

This document mainly introduces the usage of the ADM PV-RCNN Docker container. For more details, please refer to here.

This Docker container includes the complete pipeline, from raw LiDAR data to the algorithm model. It can process Pcap-format data packets collected by LiDAR. Currently, it only supports data collected by four LiDAR models produced by Robosense: RS-LiDAR-16, RS-LiDAR-32, RS-LiDAR-64, and RS-LiDAR-128. More LiDAR brands and models will be supported in the future.


Description

ADM PV-RCNN stands for Adaptive Deformation Module PV-RCNN. It is a point-voxel-based 3D object detection model built on PV-RCNN (see github.com/open-mmlab/OpenPCDet). We added two modules to the original PV-RCNN: an adaptive deformation convolution module, which addresses PV-RCNN's low recognition accuracy in blurry and long-distance scenarios, and a context fusion module, which reduces its false positive rate in regions with uneven point cloud distribution. We evaluated the proposed ADM PV-RCNN model on the KITTI dataset, and the results show that it significantly outperforms PV-RCNN and other comparable models.


Download

On a machine with Docker installed, download the Docker container with:

docker pull 663lab/adm-pv-rcnn:v1.0


Usage

1. Parse LiDAR pcap data packets

Start the Docker container with the Pcap-format data packets collected by the Robosense LiDAR mounted into it.

a. Go to the Robosense LiDAR SDK folder

cd ~/catkin_ws/src/rslidar_sdk/

b. Convert Pcap data packets to Pcd data

▪︎ Modify the parameters in the config.yaml file under the config directory as needed, such as setting msg_source to 3, pcap_path to the path of your pcap data packet, and lidar_type to the type of your LiDAR.

▪︎ Run the following command to convert the Pcap data packet to a ROSBag data packet:

roslaunch rslidar_sdk start.launch

▪︎ Go to the directory where the Pcap data packet is stored.

▪︎ Run the following commands to fix the ROSBag data packet:

rosbag reindex xxx.bag.active
rosbag fix xxx.bag.active result.bag

▪︎ Run the following command to convert the ROSBag data packet to Pcd data:

rosrun pcl_ros bag_to_pcd result.bag ~/pcdfiles

The converted Pcd data files are stored in the /root/pcdfiles/ directory.
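For reference, the config.yaml edits in step b might look like the following sketch. The key names follow the rslidar_sdk documentation and may differ between SDK versions; the pcap path and LiDAR type below are placeholders to replace with your own.

```yaml
common:
  msg_source: 3                 # 3 = read packets from a pcap file
  send_point_cloud_ros: true    # publish the decoded point cloud to ROS
lidar:
  - driver:
      lidar_type: RS32          # match your sensor, e.g. RS16 / RS32 / RS128
      pcap_path: /path/to/your.pcap
```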

2. Convert LiDAR point clouds to standardized bin files

a. Convert Pcd data files to binary bin data files

▪︎ Go to the Pcd-to-bin project folder

cd ~/pcd2bin/

▪︎ Modify the pcd2bin.cpp file in the current directory, and set pcd_path and bin_path to the corresponding paths.

▪︎ Run the following commands to compile and install the Pcd-to-bin project:

mkdir build && cd build
cmake .. && make

▪︎ Run the following command to generate binary bin data files:

./pcd2bin

b. Convert bin data files to standardized bin files

▪︎ Enter the Modbin project folder:

cd ~/modbin/

▪︎ Run the following commands to generate standardized bin data files:

mkdir ~/modfiles
conda activate model
python modbin.py --ori_path ~/binfiles --mod_path ~/modfiles
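The exact normalization that modbin.py applies is not shown here, but the target layout is the KITTI convention: each point stored as four consecutive float32 values (x, y, z, intensity). A minimal sketch of writing and reading that layout (write_kitti_bin and read_kitti_bin are illustrative helpers, not part of the container):

```python
import numpy as np

def write_kitti_bin(points, path):
    """Write an (N, 3) or (N, 4) array as a KITTI-style float32 .bin file."""
    pts = np.asarray(points, dtype=np.float32)
    if pts.shape[1] == 3:
        # Pad a zero intensity channel if the source cloud has none
        pts = np.hstack([pts, np.zeros((pts.shape[0], 1), dtype=np.float32)])
    pts.tofile(path)  # flat little-endian float32, no header

def read_kitti_bin(path):
    """Read a KITTI-style .bin file back into an (N, 4) float32 array."""
    return np.fromfile(path, dtype=np.float32).reshape(-1, 4)
```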

3. Usage of ADM PV-RCNN

a. Enter the ADM PV-RCNN project folder:

cd ~/ADM-PV-RCNN/

b. Copy the ADM PV-RCNN src folder to OpenPCDet:

zsh ./init.sh

c. Prepare the Kitti dataset:

Please download the official KITTI 3D Object Detection dataset and organize the downloaded files as follows (the optional road planes, used for data augmentation during training, can be downloaded from [road plane]):

ADM-PV-RCNN
├── OpenPCDet
│   ├── data
│   │   ├── kitti
│   │   │   ├── ImageSets
│   │   │   ├── training
│   │   │   │   ├── calib & velodyne & label_2 & image_2 & (optional: planes)
│   │   │   ├── testing
│   │   │   │   ├── calib & velodyne & image_2
│   ├── pcdet
│   ├── tools

▪︎ Run the following command to generate data information:

python -m pcdet.datasets.kitti.kitti_dataset create_kitti_infos tools/cfgs/dataset_configs/kitti_dataset.yaml
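Before generating the infos, a quick sanity check that the directory tree above is in place can save a failed run. check_kitti_layout is an illustrative helper, not part of OpenPCDet:

```python
import os

EXPECTED_KITTI_DIRS = [
    "ImageSets",
    "training/calib", "training/velodyne", "training/label_2", "training/image_2",
    "testing/calib", "testing/velodyne", "testing/image_2",
]

def check_kitti_layout(root):
    """Return the expected KITTI sub-directories missing under root
    (root should be OpenPCDet/data/kitti)."""
    return [d for d in EXPECTED_KITTI_DIRS
            if not os.path.isdir(os.path.join(root, d))]
```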

d. Run experiments with specific configuration files:

▪︎ To test and evaluate the pretrained model, you can download our pretrained model from here.

• Enter the tools directory:

cd OpenPCDet/tools

• Test with the pretrained model:

python test.py --cfg_file ${CONFIG_FILE} --batch_size ${BATCH_SIZE} --ckpt ${CKPT}

• For example:

python test.py --cfg_file cfgs/kitti_models/def_pv_rcnn.yaml --batch_size 4 --ckpt ${SAVED_CKPT_PATH}/def_pv_rcnn.pth

▪︎ Train the model:

• Train with multiple GPUs:

sh scripts/dist_train.sh ${NUM_GPUS} --cfg_file ${CONFIG_FILE} --batch_size ${BATCH_SIZE}  --epochs 80

• For example:

sh scripts/dist_train.sh 8 --cfg_file cfgs/kitti_models/def_pv_rcnn.yaml  --batch_size 16  --epochs 100

• Train with a single GPU:

python train.py --cfg_file ${CONFIG_FILE} --batch_size ${BATCH_SIZE} --epochs 100

e. Quick Demo

▪︎ Run the following command to use the pretrained model and your custom point cloud data for a demo:

python demo.py --cfg_file cfgs/kitti_models/adm_pv_rcnn.yaml --ckpt adm_pv_rcnn_epoch_100.pth --data_path ${POINT_CLOUD_DATA}

Here, ${POINT_CLOUD_DATA} can be in the following formats:

1). Your converted custom data, such as a single numpy file my_data.npy.

2). Your converted custom data, such as a single bin file my_data.bin.

3). Your converted custom data, such as a directory containing multiple point cloud data files.

4). Original KITTI .bin data, such as data/kitti/testing/velodyne/000010.bin.
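For format 1), the .npy file is expected to hold an (N, 4) float32 array of (x, y, z, intensity), the same per-point layout as KITTI .bin files. A toy example for generating such a file (the file name and point values are placeholders):

```python
import numpy as np

# Toy point cloud: 1000 points as (x, y, z, intensity) in float32 --
# the array layout the demo script accepts for .npy input.
points = np.zeros((1000, 4), dtype=np.float32)
points[:, :3] = np.random.uniform(-10.0, 10.0, size=(1000, 3))
points[:, 3] = np.random.uniform(0.0, 1.0, size=1000)  # intensity in [0, 1]
np.save("my_data.npy", points)
```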

PS: If you have any questions, please email me at jensen.acm@gmail.com (please include “ADM PV-RCNN” in the subject line).
