trained on sparse or dense LiDAR data. The obtained results on the KITTI dataset show that fusing dense LiDAR and color images is an efficient solution for future object detectors.

Fig. 1 illustrates our proposed frameworks: using a common detection network structure, different kinds of data are used to train the network as follows:

1) Color-based framework: uses only color images to train the detection network, as shown in Fig. 1(a).
2) Sparse LiDAR-based framework: uses only sparse depth images to train the detection network, as shown in Fig. 1(b). This framework is similar to the color-based one, except that LiDAR images are used instead of camera images; there is no fusion in this experiment. The sparse depth images are obtained by projecting the LiDAR point cloud onto the 2D image plane following [11].
3) Dense LiDAR-based framework: uses only dense depth images to train the detection network, as shown in Fig. 1(c). The dense depth image is obtained with a self-supervised algorithm [1]. Like the two frameworks above, this experiment involves no fusion.
4) Color and dense LiDAR-based framework: uses both color and dense LiDAR images to train the detection network, as shown in Fig. 1(d). This framework is described in Section III.

# References

1. F. Farahnakian and J. Heikkonen, “Fusing LiDAR and Color Imagery for Object Detection using Convolutional Neural Networks”, The 23rd IEEE International Conference on Information Fusion
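The point-cloud projection used by the sparse LiDAR-based framework above can be sketched as follows. This is a minimal illustration assuming a standard pinhole camera model; the intrinsic matrix `K`, the LiDAR-to-camera transform `T`, and all point values below are hypothetical placeholders, not the actual KITTI calibration used in [11].

```python
import numpy as np

def project_lidar_to_depth_image(points, K, T, image_shape):
    """Project 3D LiDAR points into a sparse 2D depth image.

    points:      (N, 3) LiDAR points in the sensor frame.
    K:           (3, 3) camera intrinsic matrix.
    T:           (4, 4) LiDAR-to-camera extrinsic transform.
    image_shape: (height, width) of the output depth image.
    """
    h, w = image_shape
    # Transform points into the camera frame (homogeneous coordinates).
    pts_h = np.hstack([points, np.ones((points.shape[0], 1))])
    cam = (T @ pts_h.T).T[:, :3]
    # Keep only points in front of the camera.
    cam = cam[cam[:, 2] > 0]
    # Perspective projection onto the image plane.
    uv = (K @ cam.T).T
    uv = uv[:, :2] / uv[:, 2:3]
    u = np.floor(uv[:, 0]).astype(int)
    v = np.floor(uv[:, 1]).astype(int)
    # Scatter the depth (z in the camera frame) into a sparse image.
    depth = np.zeros((h, w), dtype=np.float32)
    inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    depth[v[inside], u[inside]] = cam[inside, 2]
    return depth

# Toy example: identity extrinsics and a simple made-up intrinsic matrix.
K = np.array([[100.0, 0.0, 32.0],
              [0.0, 100.0, 24.0],
              [0.0, 0.0, 1.0]])
T = np.eye(4)
points = np.array([[0.0, 0.0, 5.0],    # straight ahead, 5 m away
                   [1.0, 0.5, 10.0]])  # off-centre, 10 m away
depth = project_lidar_to_depth_image(points, K, T, (48, 64))
```

Because the LiDAR point cloud is far sparser than the image grid, most pixels of the resulting depth image remain empty, which is exactly why the dense LiDAR-based framework first runs a depth-completion step.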
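For the color-and-dense-LiDAR framework, one simple way to feed both modalities to a single detection network is early fusion: stacking the RGB image and the dense depth map into one multi-channel input. This is a purely illustrative sketch under that assumption; the fusion architecture actually used is the one described in Section III, and the image size below is just a KITTI-like placeholder.

```python
import numpy as np

# Hypothetical early-fusion input: stack RGB and dense depth into a
# 4-channel tensor. All array contents here are placeholders.
h, w = 375, 1242                                   # KITTI-sized image
rgb = np.zeros((h, w, 3), dtype=np.float32)        # placeholder color image
dense_depth = np.zeros((h, w), dtype=np.float32)   # placeholder dense depth map

# Append depth as a fourth channel alongside R, G, B.
fused = np.concatenate([rgb, dense_depth[..., None]], axis=-1)
```

The detection network's first convolution would then simply accept 4 input channels instead of 3; later-stage (feature-level) fusion schemes are also common and may be what Section III specifies.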