diff --git a/README.md b/README.md
index a97a0a91cce793749a3445acb60e5233aa7a0e31..38a6b031b748fa31036ec4da012d562af2e8851e 100644
--- a/README.md
+++ b/README.md
@@ -13,12 +13,13 @@ Fig. 1. Overview of the proposed framework. Initial proposals with 933 candidate
 Image fusion methods have attracted considerable attention over the past few years in the field of sensor fusion. An efficient image fusion approach can extract complementary information from multiple multi-modality images. In addition, the fused image is more robust to imperfect conditions such as mis-registration and noise. We explored the performance of existing deep learning-based and traditional image fusion techniques for fusing visible and infrared images in our Dataset1. The performance of these techniques is evaluated with six common quality metrics in two different scenarios: day-time and night-time. Experimental results [k] show that the deep learning-based methods, DLF and DenseFuse, produce more natural results and contain less artificial noise; they also provide the best quantitative performance.
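+
+As a concrete illustration, the sketch below computes image entropy (EN), one quality metric commonly used to score fused images. The six metrics used in this evaluation are not enumerated above, so treat this as an illustrative assumption rather than the actual evaluation code for Dataset1.
+
+```python
+import numpy as np
+
+def entropy(image: np.ndarray, bins: int = 256) -> float:
+    """Shannon entropy of an 8-bit grayscale image; a higher EN
+    suggests the fused image carries more information."""
+    hist, _ = np.histogram(image, bins=bins, range=(0, 255))
+    p = hist / hist.sum()
+    p = p[p > 0]  # drop empty bins to avoid log(0)
+    return float(-(p * np.log2(p)).sum())
+
+# Random stand-ins for a real visible/fused image pair.
+rng = np.random.default_rng(0)
+visible = rng.integers(0, 256, (480, 640), dtype=np.uint8)
+fused = rng.integers(0, 256, (480, 640), dtype=np.uint8)
+print(f"EN(visible) = {entropy(visible):.3f}, EN(fused) = {entropy(fused):.3f}")
+```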
 
 
+# Visible and Infrared Image Fusion Framework based on RetinaNet for Marine Environment [3]: 
+Safety and security are critical issues in the maritime environment. Automatic and reliable object detection based on multi-sensor data fusion is an efficient way to address these issues in intelligent systems. We proposed a middle-fusion framework (Fig. 2) to achieve robust object detection. The framework first applies a fusion strategy that combines visible and infrared images into fused images. The resulting fused images are then processed by a simple dense convolutional neural network-based detector, RetinaNet, to predict multiple 2D box hypotheses and their confidence scores. Experimental results on real marine data show that our multi-modal framework achieves higher detection accuracy than two uni-modal frameworks. Our framework can effectively detect objects in the real marine dataset and classify them as either a vessel type or a navigation buoy, as long as their apparent image size is larger than 16 × 16 pixels. An illustrative code sketch of this pipeline is given after Fig. 2 below.
 
+![Overview of the proposed RetinaNet-based fusion framework](fig2.jpg)
 
 
-
-
-
+Fig. 2. Proposed RetinaNet-based fusion framework. The original input images are of size 3240 × 944 pixels. They are fused using VSM-WLS to provide complementary information for object detection. The fused image is then processed by RetinaNet, which detects and localizes objects around the vessel.
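+
+A minimal sketch of the middle-fusion pipeline described above, assuming PyTorch/torchvision. `vsm_wls_fuse` is a hypothetical placeholder (an equal-weight blend, not the actual saliency-map/weighted-least-squares VSM-WLS algorithm), and the COCO-pretrained RetinaNet stands in for the detector fine-tuned on the marine classes.
+
+```python
+import torch
+from torchvision.models.detection import retinanet_resnet50_fpn
+
+def vsm_wls_fuse(visible: torch.Tensor, infrared: torch.Tensor) -> torch.Tensor:
+    """Hypothetical stand-in for VSM-WLS fusion: an equal-weight blend
+    in place of the saliency-weighted scheme used in the paper."""
+    return 0.5 * visible + 0.5 * infrared
+
+# Small random stand-ins; the real frames are 3240 x 944 pixels.
+visible = torch.rand(3, 472, 640)
+infrared = torch.rand(3, 472, 640)
+fused = vsm_wls_fuse(visible, infrared)
+
+# Off-the-shelf COCO weights; the paper's detector is trained on marine data.
+model = retinanet_resnet50_fpn(weights="DEFAULT").eval()
+with torch.no_grad():
+    preds = model([fused])  # list of dicts with "boxes", "labels", "scores"
+print(preds[0]["boxes"].shape, preds[0]["scores"][:3])
+```
+
+In practice, detections whose boxes fall below the 16 × 16 pixel threshold noted above would be unreliable and can be filtered out by score.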