From c898bf8e9f018e6f882df2ed2176f6cf2c7e4353 Mon Sep 17 00:00:00 2001
From: Fahimeh Farahnakian <fahimeh.farahnakian@utu.fi>
Date: Wed, 15 Jul 2020 13:36:14 +0300
Subject: [PATCH] Update README.md

---
 README.md | 7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/README.md b/README.md
index 24e5d12..93d1985 100644
--- a/README.md
+++ b/README.md
@@ -50,9 +50,12 @@ Fig.1(c). The dense image is obtained through self-supervised algorithm [1]. Thi
 the two above frameworks as there is not fusion in this
 experiment as well.
 4) Color and dense LiDAR-based framework: uses both
-color and dense LiDAR images for training the detec-tion network as shown in Fig.1(d). This framework is
-described in Section III
+color and dense LiDAR images for training the detection network, as shown in Fig.1(d).
 
+
+![Overview of the four proposed frameworks](fig1.jpg)
+
+Fig. 1. The proposed (a) Color-based, (b) Sparse LiDAR-based, (c) Dense LiDAR-based, and (d) Color and dense LiDAR-based frameworks.
 # References
 1. F. Farahnakian, and J. Heikkonen, “Fusing LiDAR and Color Imagery for Object Detection using
 Convolutional Neural Networks”, The 23th edition of the IEEE International conference on information fusion
-- 
GitLab