Commit ae312809 authored by Fahimeh Farahnakian
Update README.md

Table 1. Average precision (AP%) on test dataset 1 [4].

Vessel detection studies conducted on inshore and offshore maritime images are scarce, due to the limited availability of domain-specific datasets. We addressed this need by collecting two datasets in the Finnish Archipelago. They consist of images of maritime vessels engaged in various operating scenarios, climatic conditions and lighting environments. Vessel instances were precisely annotated in both datasets. We evaluated the out-of-the-box performance of three state-of-the-art CNN-based object detection algorithms (Faster R-CNN, R-FCN and SSD) on these datasets and compared them in terms of accuracy and run-time. The algorithms were previously trained on the COCO dataset. We explore their performance with different feature extractors. Furthermore, we investigate the effect of object size on algorithm performance. For this purpose, we group all objects in each image into three categories (small, medium and large) according to the number of pixels occupied by the annotated bounding box. Experiments show that Faster R-CNN with ResNet101 as the feature extractor outperforms the other algorithms.
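The size grouping described above can be sketched as follows. This is a hypothetical illustration, not the paper's code: the area thresholds shown are the common COCO convention (area below 32² pixels is "small", below 96² is "medium"), and the paper may use different cut-offs.

```python
# Hypothetical sketch of grouping annotated objects by bounding-box area.
# Thresholds follow the COCO convention; the paper's cut-offs may differ.

def size_category(box, small_max=32 ** 2, medium_max=96 ** 2):
    """Classify a bounding box (x, y, w, h) by the pixels it occupies."""
    _, _, w, h = box
    area = w * h
    if area < small_max:
        return "small"
    if area < medium_max:
        return "medium"
    return "large"

# Example: group every annotation in one image by size category.
annotations = [(10, 20, 15, 12), (5, 5, 60, 50), (0, 0, 120, 90)]
groups = {"small": [], "medium": [], "large": []}
for box in annotations:
    groups[size_category(box)].append(box)
```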
# An Efficient Multi-sensor Fusion Approach for Object Detection in Maritime Environment [6]:
Robust real-time object detection and tracking are challenging problems in autonomous transportation systems due to the operation of algorithms in inherently uncertain and dynamic environments and the rapid movement of objects. Therefore, tracking and detection algorithms must cooperate with each other to achieve smooth tracking of detected objects that can later be used by the navigation system. In this paper, we first present an efficient multi-sensor fusion approach based on the probabilistic data association method in order to achieve accurate object detection and tracking results. The proposed approach fuses the detection results obtained independently from four main sensors: radar, LiDAR, RGB camera and infrared camera. It generates object region proposals based on the fused detection result. Then, a Convolutional Neural Network (CNN)
...
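The data-association step in the abstract above can be sketched minimally as follows. This is not the authors' implementation: it assumes a single tracked object in 2D, a shared Gaussian measurement-noise model for all sensors, and a simple gate; the names `pda_update`, `sigma` and `gate` are illustrative.

```python
# Minimal sketch (not the paper's method) of probabilistic data
# association for one tracked object in 2D. Detections from the four
# sensors are pooled, gated around the predicted position, and fused
# with weights proportional to their Gaussian likelihood.
import math

def pda_update(predicted, detections, sigma=1.0, gate=3.0):
    """Fuse pooled sensor detections around `predicted` (x, y).

    detections: list of (x, y) from any sensor; sigma: assumed
    measurement noise std; gate: gating radius in units of sigma.
    """
    px, py = predicted
    weights, gated = [], []
    for dx, dy in detections:
        dist = math.hypot(dx - px, dy - py)
        if dist <= gate * sigma:              # drop implausible detections
            gated.append((dx, dy))
            weights.append(math.exp(-0.5 * (dist / sigma) ** 2))
    if not gated:
        return predicted                      # nothing associated: keep prediction
    total = sum(weights)
    fx = sum(w * x for w, (x, _) in zip(weights, gated)) / total
    fy = sum(w * y for w, (_, y) in zip(weights, gated)) / total
    return (fx, fy)

# Radar, LiDAR, RGB and infrared each report one detection; the outlier
# at (5.0, 5.0) falls outside the gate and is ignored.
fused = pda_update((0.0, 0.0),
                   [(0.2, -0.1), (0.1, 0.0), (5.0, 5.0), (-0.1, 0.1)])
```

The fused position could then seed a region proposal (e.g. a window centred on it) for the CNN stage mentioned in the abstract.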