diff --git a/README.md b/README.md
index 24e5d12361f4c97128f3eac2e8e9b2a62ceb468c..93d19851fe5e1b220afdec1b82f084a7a30ae592 100644
--- a/README.md
+++ b/README.md
@@ -50,9 +50,32 @@ Fig.1(c). The dense image is obtained through self-supervised algorithm [1]. Thi
 the two frameworks above, as there is no fusion in this
 experiment either.
 4) Color and dense LiDAR-based framework: uses both
-color and dense LiDAR images for training the detec-tion network as shown in Fig.1(d). This framework is
-described in Section III
+color and dense LiDAR images for training the detection network, as shown in Fig. 1(d) (a minimal fusion sketch follows the figure).
 
+
+![Overview of the four proposed detection frameworks](fig1.jpg)
+
+Fig. 1. The proposed (a) color-based, (b) sparse LiDAR-based, (c) dense LiDAR-based, and (d) color and dense LiDAR-based frameworks.
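+
+As a quick illustration of framework (4), the sketch below stacks the color image and the dense LiDAR map into a single 4-channel input for the detection network's first layer. This is a minimal sketch under assumed tensor shapes, layer parameters, and framework (PyTorch), not the code used in [1].
+
+```python
+# Early fusion of a color image and a dense LiDAR depth map (illustrative sketch).
+import torch
+import torch.nn as nn
+
+rgb = torch.rand(1, 3, 384, 1280)          # color image: 3 channels (assumed KITTI-like size)
+dense_lidar = torch.rand(1, 1, 384, 1280)  # dense depth map from the completion step of [1]
+
+# Stack along the channel axis to form a single (1, 4, H, W) input tensor.
+fused_input = torch.cat([rgb, dense_lidar], dim=1)
+
+# The detector's first convolution must then accept 4 input channels instead of 3.
+first_conv = nn.Conv2d(in_channels=4, out_channels=64, kernel_size=7, stride=2, padding=3)
+features = first_conv(fused_input)
+print(features.shape)  # torch.Size([1, 64, 192, 640])
+```
+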
 # References
 1. F. Farahnakian and J. Heikkonen, “Fusing LiDAR and Color Imagery for Object Detection using
 Convolutional Neural Networks”, the 23rd edition of the IEEE International Conference on Information Fusion