Commit 0495ecaf authored by Fahimeh Farahnakian

Update README.md

color and dense LiDAR images for training the detection network as shown in Fig. 1.
![Image description](fig1.jpg)
Fig. 1. The proposed (a) color-based, (b) sparse LiDAR-based, (c) dense LiDAR-based, and (d) color and dense LiDAR-based frameworks.
# Qualitative Results
Fig. 2 and Fig. 3 illustrate four example detection results on the KITTI test set for the proposed frameworks with the Faster R-CNN and SSD detectors, respectively. The fusion frameworks detect targets more effectively than the other proposed frameworks, because the early fusion framework can integrate information from both color and dense depth images. The fusion frameworks correctly estimate the size and location of the bounding boxes. In the third and fourth examples, our fusion framework detects "Pedestrians" and "Cyclists" that the other frameworks miss. Moreover, the fusion framework can detect small objects that cover only a few pixels, as shown in Fig. 2 (E), many of which are found by our framework. This demonstrates the generalisation capability of the proposed framework and indicates its potential for 2D object detection in real situations beyond a pre-designed dataset.
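The early fusion described above can be sketched as a simple channel concatenation: the RGB image and the dense depth map are stacked into one multi-channel tensor before being fed to the detection network. This is an illustrative sketch only, not the repository's implementation; the function name `early_fusion` and the KITTI-like image size are assumptions for the example.

```python
import numpy as np

def early_fusion(color_img: np.ndarray, dense_depth: np.ndarray) -> np.ndarray:
    """Stack a color image and a dense depth map into one multi-channel input.

    color_img:   H x W x 3 array (RGB).
    dense_depth: H x W array (dense depth map, e.g. from depth completion).
    Returns an H x W x 4 array suitable as input to a detector whose first
    convolution accepts 4 input channels.
    """
    if dense_depth.ndim == 2:
        # Give the depth map an explicit channel axis before concatenation.
        dense_depth = dense_depth[..., np.newaxis]
    return np.concatenate([color_img, dense_depth], axis=-1)

# Example with a KITTI-sized image (assumed dimensions for illustration).
color = np.zeros((375, 1242, 3), dtype=np.float32)
depth = np.zeros((375, 1242), dtype=np.float32)
fused = early_fusion(color, depth)
print(fused.shape)  # (375, 1242, 4)
```

In practice the detector's first convolutional layer must be widened from 3 to 4 input channels to accept the fused tensor; the remaining layers are unchanged.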
![Image description](fasterRCNN.jpg)
Fig. 2. Qualitative results of the proposed frameworks with Faster R-CNN on four example images from the KITTI test set. The first row shows the ground truths on the input color images. The second row shows the color-based baseline framework. The third and fourth rows show the detection results of the two uni-modal frameworks on sparse and dense depth images, respectively. The last row illustrates the detection results of the multi-modal framework on color and dense depth images.
![Image description](SSD.jpg)
Fig. 3. Qualitative results of the proposed frameworks with SSD on four example images from the KITTI test set. The first row shows the ground truths on the input color images. The second row shows the color-based baseline framework. The third and fourth rows show the detection results of the two uni-modal frameworks on sparse and dense depth images, respectively. The last row illustrates the detection results of the multi-modal approach on color and dense depth images.
# References
1. F. Farahnakian and J. Heikkonen, "Fusing LiDAR and Color Imagery for Object Detection using Convolutional Neural Networks", 23rd IEEE International Conference on Information Fusion.