From 05556ad95209bb183dad125f3f4150530daf1f61 Mon Sep 17 00:00:00 2001
From: Fahimeh Farahnakian <fahimeh.farahnakian@utu.fi>
Date: Wed, 15 Jul 2020 13:23:44 +0300
Subject: [PATCH] Update README.md

---
 README.md | 21 +++++++++++++++++++++
 1 file changed, 21 insertions(+)

diff --git a/README.md b/README.md
index 50010e2..24e5d12 100644
--- a/README.md
+++ b/README.md
@@ -32,6 +32,27 @@ trained on sparse or dense LiDAR data. The obtained results
 on the KITTI dataset show that fusing dense LiDAR and color
 images is an efficient solution for future object detectors.
 
+Fig.1 illustrates our proposed frameworks. Using a common
+detection network structure, different kinds of data are used
+to train the network as follows:
+1) Color-based framework: uses only color images for training the detection network, as shown in Fig.1(a).
+2) Sparse LiDAR-based framework: uses only sparse depth
+images for training the detection network, as shown in
+Fig.1(b). This framework is similar to the color-based one,
+except that LiDAR images are used instead of camera images.
+There is no fusion in this experiment. The sparse depth
+images are obtained by projecting the LiDAR point cloud
+onto the 2D image plane following [11].
+3) Dense LiDAR-based framework: uses only dense depth
+images for training the detection network, as shown in
+Fig.1(c). The dense images are obtained through a
+self-supervised algorithm [1]. As in the two frameworks
+above, no fusion is performed in this experiment.
+4) Color and dense LiDAR-based framework: uses both
+color and dense LiDAR images for training the detection
+network, as shown in Fig.1(d). This framework is
+described in Section III.
+
 # References
 1. F. Farahnakian and J. Heikkonen, “Fusing LiDAR and Color Imagery for Object Detection using
 Convolutional Neural Networks”, The 23rd IEEE International Conference on Information Fusion
-- 
GitLab