Commit 64050e9c authored by Fahimeh Farahnakian

Update README.md

parent edb54366
Fig. 1. Overview of the proposed framework. Initial proposals with 933 candidate …
Image fusion methods have attracted considerable attention over the past few years in the field of sensor fusion. An efficient image fusion approach can obtain complementary information from multiple multi-modality images. In addition, the fused image is more robust to imperfect conditions such as mis-registration and noise. We explored the performance of existing deep learning-based and traditional image fusion techniques for fusing visible and infrared images in our Dataset1. The performance of these techniques is evaluated with six common quality metrics in two different scenarios: day-time and night-time. Experimental results show that the deep learning-based methods, DLF and DenseFuse, produce more natural results with less artificial noise; they also provided the best quantitative performance.
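As a minimal sketch of the two ideas in this paragraph, the snippet below fuses a visible and an infrared frame with a simple weighted average (a naive baseline, not one of the evaluated methods) and computes Shannon entropy, one common image fusion quality metric. It assumes only NumPy; the function names `fuse_average` and `entropy` are illustrative, not from the evaluated toolchain.

```python
import numpy as np

def fuse_average(visible, infrared, w=0.5):
    # Naive pixel-level fusion: weighted average of the two modalities.
    return w * visible.astype(np.float64) + (1.0 - w) * infrared.astype(np.float64)

def entropy(image, bins=256):
    # Shannon entropy (in bits) of an 8-bit-range image, a common
    # no-reference fusion quality metric: higher means more information.
    hist, _ = np.histogram(image, bins=bins, range=(0, 255))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)) + 0.0)

# Toy example: fuse two constant 4x4 "frames".
vis = np.full((4, 4), 200, dtype=np.uint8)   # bright visible frame
ir = np.full((4, 4), 50, dtype=np.uint8)     # darker infrared frame
fused = fuse_average(vis, ir)                # every pixel becomes 125.0
```

A constant fused image has zero entropy; real fused frames are scored by how much complementary detail they retain from both modalities.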
# Visible and Infrared Image Fusion Framework based on RetinaNet for Marine Environment [3]:
Safety and security are critical issues in the maritime environment. Automatic and reliable object detection based on multi-sensor data fusion is an efficient way to address these issues in intelligent systems. We proposed a middle-fusion framework (Fig. 2) to achieve robust object detection. The framework first applies a fusion strategy that combines visible and infrared images to generate fused images. The resulting fused images are then processed by a simple dense convolutional neural network based detector, RetinaNet, to predict multiple 2D box hypotheses and the inferred confidences. The experimental results on real marine data show that our multi-modal framework achieves higher detection accuracy than two uni-modal frameworks. Our framework is able to detect and classify objects as either a vessel type or a navigation buoy in the real marine dataset, as long as their apparent image size is larger than 16×16 pixels.
![Image description](fig2.jpg)
Fig. 2. Proposed RetinaNet based fusion framework. The original input images are of size 3240 × 944 pixels. They are fused using VSM-WLS in order to provide complementary information for object detection. Then, the fused image is processed by RetinaNet in order to detect and localize objects around the vessel.
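The pipeline in Fig. 2 can be sketched in two stages: fuse, then detect. In this hypothetical sketch the VSM-WLS step is replaced by a plain average stand-in (a real implementation splits each image into WLS-filtered base/detail layers and weights the base layers by visual saliency), and the detector is any object with a `predict` method returning boxes, scores, and labels; the 16×16-pixel filter mirrors the detectability limit stated above.

```python
import numpy as np

def fuse_stub(visible, infrared):
    # Stand-in for VSM-WLS fusion: a plain average. Only the pipeline
    # shape matters here, not the fusion quality.
    return 0.5 * visible.astype(np.float64) + 0.5 * infrared.astype(np.float64)

def detect(fused, model, min_size=16):
    # `model.predict` is an assumed interface returning parallel lists of
    # (x1, y1, x2, y2) boxes, confidence scores, and class labels
    # ("vessel" or "buoy"). Boxes smaller than min_size x min_size pixels
    # are dropped, matching the 16x16 detectability limit.
    boxes, scores, labels = model.predict(fused)
    keep = [i for i, (x1, y1, x2, y2) in enumerate(boxes)
            if (x2 - x1) >= min_size and (y2 - y1) >= min_size]
    return ([boxes[i] for i in keep],
            [scores[i] for i in keep],
            [labels[i] for i in keep])
```

A RetinaNet from any standard implementation could be wrapped to expose this `predict` interface on the fused image.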