diff --git a/README.md b/README.md
index d33c0f9af244cf4b1e4ac2d42257a5e8c6d930c1..7a24480e45f5c98633d58f3eb58c0c2cd7497ef4 100644
--- a/README.md
+++ b/README.md
@@ -26,7 +26,17 @@ Fig. 2. Proposed RetinaNet based fusion framework. The original input images are
 Given the respective strengths and weaknesses of the color camera and the IR sensor, this idea arises from the intuition that an improved solution would combine data from both heterogeneous sensors to produce more accurate and reliable performance. However, there is no general guideline for network architecture design, and the questions of “what to fuse?”, “when to fuse?”, and “how to fuse?” remain open. Motivated by this consideration, we investigated how IR and camera images can be integrated to carry out object detection. Besides the middle fusion framework of our previous work, we propose a late multi-modal fusion framework that combines complementary information from the RGB and thermal infrared cameras in order to improve detection performance. This framework (Fig. 3) first applies RetinaNet, a simple dense deep model, to each input image separately to extract candidate proposals that are likely to contain the targets of interest. The full set of proposals is then generated by concatenating the proposals obtained from the two modalities. Finally, redundant proposals are removed by Non-Maximum Suppression (NMS).
 
 ![Image description](fig3.jpg)
-Fig. 3. An overview of the proposed late fusion framework. Our framework has two feature extractor: (A) a RetinaNet for process RGB input image and (B) a RetinaNet for extracting features from the corresponding input IR image. (C) The framework concatenates outputs of RetinaNet networks (ORGB,OIR), and then a final set of target proposals is obtained after none-maximum suppression. (D) The final output containing predicted bounding boxes which are associated with a category label and a objectness score.
+
+
+Fig. 3. An overview of the proposed late fusion framework. Our framework has two feature extractors: (A) a RetinaNet that processes the RGB input image and (B) a RetinaNet that extracts features from the corresponding IR input image. (C) The framework concatenates the outputs of the two RetinaNet networks (O_RGB, O_IR), and a final set of target proposals is obtained after non-maximum suppression. (D) The final output contains the predicted bounding boxes, each associated with a category label and an objectness score [4].
+
+
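+Step (C) above amounts to concatenating the two proposal sets and suppressing duplicates with NMS. The following is a minimal PyTorch sketch of that step, not the exact implementation behind our experiments; the detection dictionary layout, the helper name `late_fusion`, and the IoU threshold of 0.5 are illustrative assumptions.
+
+```python
+# Minimal late-fusion sketch: concatenate per-modality detections, then NMS.
+# Assumes each detector returns {"boxes": (N, 4), "scores": (N,), "labels": (N,)}
+# as torch tensors; this layout and the 0.5 IoU threshold are assumptions.
+import torch
+from torchvision.ops import batched_nms
+
+def late_fusion(rgb_dets, ir_dets, iou_thresh=0.5):
+    boxes = torch.cat([rgb_dets["boxes"], ir_dets["boxes"]])    # union of O_RGB and O_IR
+    scores = torch.cat([rgb_dets["scores"], ir_dets["scores"]])
+    labels = torch.cat([rgb_dets["labels"], ir_dets["labels"]])
+    # Class-aware NMS: boxes with different labels never suppress each other.
+    keep = batched_nms(boxes, scores, labels, iou_thresh)
+    return {"boxes": boxes[keep], "scores": scores[keep], "labels": labels[keep]}
+```
+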
+We compared the four frameworks (the two uni-modal detectors, the middle fusion framework, and the proposed late fusion framework) on the test dataset using Average Precision (AP%), the standard metric for object detection. Table 1 reports the AP for each of the five vessel classes and for navigation buoys. The experimental results show that our late fusion framework achieves higher detection accuracy than the middle fusion and uni-modal frameworks.
+
+Table 1. Average precision (AP%) on the test dataset [4].
+
+![Image description](tabel1.jpg)
+
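+For reference, each per-class AP value in Table 1 is obtained in principle by ranking detections by score, marking each as a true or false positive via IoU matching against the ground truth, and integrating precision over recall. The sketch below shows one common variant (all-point interpolation) and assumes the match flags are already computed; it is illustrative, not our exact evaluation code.
+
+```python
+# Sketch of all-point interpolated AP for one class. Assumes `tp_flags`
+# (1 = true positive, 0 = false positive) is ordered by descending score
+# and `num_gt` is the number of ground-truth boxes; IoU matching is omitted.
+import numpy as np
+
+def average_precision(tp_flags, num_gt):
+    tp_flags = np.asarray(tp_flags, dtype=float)
+    tp = np.cumsum(tp_flags)          # cumulative true positives
+    fp = np.cumsum(1.0 - tp_flags)    # cumulative false positives
+    recall = tp / num_gt
+    precision = tp / (tp + fp)
+    # Make precision non-increasing, then integrate it over recall.
+    precision = np.maximum.accumulate(precision[::-1])[::-1]
+    recall = np.concatenate(([0.0], recall))
+    return float(np.sum((recall[1:] - recall[:-1]) * precision))
+```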
 
 # References
 1. F. Farahnakian, M. Haghbayan, J. Poikonen, M. Laurinen, P. Nevalainen and J. Heikkonen, “Object Detection based on Multi-sensor Proposal Fusion in Maritime Environment”, The 17th IEEE International Conference on Machine Learning and Applications (ICMLA), 2018, USA.