From 0495ecafa4275195551dcc189d2c8a3db649cc02 Mon Sep 17 00:00:00 2001
From: Fahimeh Farahnakian <fahimeh.farahnakian@utu.fi>
Date: Wed, 15 Jul 2020 13:42:16 +0300
Subject: [PATCH] Update README.md

---
 README.md | 36 ++++++++++++++++++++++++++++++++++++
 1 file changed, 36 insertions(+)

diff --git a/README.md b/README.md
index 93d1985..648eea6 100644
--- a/README.md
+++ b/README.md
@@ -56,6 +56,42 @@ color and dense LiDAR images for training the detec-tion network as shown in Fig
 ![Image description](fig1.jpg)
 
 Fig. 1. The proposed (a) Color-based (b) Sparse LiDAR-based (c) Dense LiDAR-based and (d) Color and dense LiDAR based frameworks.
+
+# Qualitative Results
+Fig. 2 and Fig. 3 illustrate four example detection results
+on the KITTI test set produced by the proposed frameworks with
+Faster R-CNN and SSD detectors, respectively. The detection
+results show that the fusion frameworks detect targets more
+effectively than the other proposed frameworks, because the
+early fusion framework can integrate information from both
+color and dense depth images. The fusion frameworks also
+predicted the size and location of the bounding boxes
+accurately. In the third and fourth examples, our fusion
+framework detected "Pedestrians" and "Cyclists" that the other
+frameworks missed. Moreover, the fusion framework is able to
+detect small objects spanning only a few pixels, as shown in
+Fig. 2 (E), and many such objects are detected by our framework.
+This demonstrates the generalisation capability of the proposed
+framework and indicates its potential for 2D object detection
+in real situations beyond a pre-designed dataset.
+
+
+![Image description](fasterRCNN.jpg)
+
+Fig. 2. Qualitative results of the proposed frameworks with Faster R-CNN on four example images from the KITTI test set. The first row of images shows the
+ground truths on the input color images. The second row shows the color-based baseline framework. The third and fourth rows show the detection results of the
+two uni-modal frameworks on sparse and dense depth images, respectively. The last row shows the detection result of the multi-modal framework on color and
+dense depth images.
+
+![Image description](SSD.jpg)
+
+
+
+Fig. 3. Qualitative results of the proposed frameworks with SSD on four example images from the KITTI test set. The first row of images shows the ground
+truths on the input color images. The second row shows the color-based baseline framework. The third and fourth rows show the detection results of the two
+uni-modal frameworks on sparse and dense depth images, respectively. The last row shows the detection result of the multi-modal approach on color and dense depth images.
+
+
 # References
 1. F. Farahnakian, and J. Heikkonen, “Fusing LiDAR and Color Imagery for Object Detection using
 Convolutional Neural Networks”, The 23th edition of the IEEE International conference on information fusion
-- 
GitLab