diff --git a/README.md b/README.md
index ad9a95d98f1e08ff7a2353eb5cb0744864795906..a97a0a91cce793749a3445acb60e5233aa7a0e31 100644
--- a/README.md
+++ b/README.md
@@ -9,7 +9,8 @@ Most of state-of-the-art object detectors employ object proposals methods for gu
 Fig. 1. Overview of the proposed framework. Initial proposals with 933 candidates are first generated by SS and are then filtered using proposal fusion of multiple sensors. After that, the final proposals are classified using CNN[1].
 
 
-
+# Comparative Analysis of Image Fusion Methods in Marine Environment [2]
+Image fusion methods have attracted considerable attention in recent years in the field of sensor fusion. An efficient image fusion approach can extract complementary information from multiple multi-modality images. In addition, the fused image is more robust to imperfect conditions such as mis-registration and noise. We explored the performance of existing deep learning-based and traditional image fusion techniques for fusing visible and infrared images in our Dataset1. The performance of these techniques is evaluated with six common quality metrics in two different scenarios: day-time and night-time. Experimental results show that the deep learning-based methods, DLF and DenseFuse, obtain more natural results and contain less artificial noise; they also provide the best quantitative performance.
 
 
 
@@ -25,4 +26,8 @@ Fig. 1. Overview of the proposed framework. Initial proposals with 933 candidate
 
 
 # References
-1.  F. Farahnakian, M.Haghbayan, J. Poikonen, M. Laurinen, P. Nevalainen and J. Heikkonen, “Object Detection based on Multi-sensor Proposal Fusion in Maritime Environment”, The 17th IEEE International Conference on Machine Learning and Applications (ICMLA), 2018, US.
\ No newline at end of file
+1. F. Farahnakian, M. Haghbayan, J. Poikonen, M. Laurinen, P. Nevalainen and J. Heikkonen, “Object Detection based on Multi-sensor Proposal Fusion in Maritime Environment”, The 17th IEEE International Conference on Machine Learning and Applications (ICMLA), 2018, US.
+2. F. Farahnakian, P. Movahedi, J. Poikonen, E. Lehtonen, D. Makris and J. Heikkonen, “Comparative Analysis of Image Fusion Methods in Marine Environment”, The 13th IEEE International Symposium on Robotic and Sensors Environments (ROSE), 2019, Canada.
+3. F. Farahnakian, J. Poikonen, M. Laurinen, D. Makris and J. Heikkonen, “Visible and Infrared Image Fusion Framework based on RetinaNet for Marine Environment”, The 22nd IEEE International Conference on Information Fusion (FUSION), 2019, Canada.
+4. F. Farahnakian, J. Poikonen, M. Laurinen, and J. Heikkonen, “Deep Convolutional Neural Network-based Fusion of RGB and IR Images in Marine Environment”, The 22nd IEEE International Conference on Intelligent Transportation Systems (ITSC), 2019, New Zealand.
+