diff --git a/README.md b/README.md
index 40ac736e785f8666f3f648c8d246b98ba711f450..8486ccc96733bbc4349c4a9928e336d26eb7b789 100644
--- a/README.md
+++ b/README.md
@@ -6,4 +6,4 @@ In summary, the following research objectives and contributions have been deline
 Most state-of-the-art object detectors employ object proposal methods to guide the search for object instances across images. These methods can improve detection accuracy by extracting reliable proposals that are likely to contain objects of interest. Moreover, they can considerably reduce computation compared with a dense detection approach such as sliding window by avoiding an exhaustive sliding-window search across the image. We propose an effective object detection framework based on fusing proposals from multiple sensors, such as an infrared camera, RGB cameras, radar, and LiDAR. Our framework (Fig. 1) first applies the Selective Search (SS) method to the RGB image data to extract candidate proposals that are likely to contain the objects of interest. It then uses the information from the other sensors to reduce the number of proposals generated by SS and to retain denser, more reliable proposals. Finally, the class of the objects within the final proposals is identified by a Convolutional Neural Network (CNN) as the main deep learning architecture. Experimental results on a real dataset demonstrate that our framework can precisely detect meaningful object regions using fewer proposals than other object proposal methods.
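 The sensor-based proposal reduction step can be sketched as follows. This is a minimal illustrative sketch, not the framework's actual implementation: it assumes the radar/LiDAR returns have already been projected into image coordinates, and the function name, box format, and `min_hits` threshold are all hypothetical.

 ```python
 # Hedged sketch: filter Selective Search proposals using sensor returns
 # (e.g. LiDAR/radar points) projected into the image plane. A proposal is
 # kept only if it contains at least `min_hits` projected points, which
 # reduces the proposal set before CNN classification.

 def filter_proposals(boxes, sensor_points, min_hits=1):
     """Keep boxes that contain at least `min_hits` sensor points.

     boxes         -- list of (x, y, w, h) proposals from Selective Search
     sensor_points -- list of (u, v) points projected into the image plane
     min_hits     -- minimum number of points a box must contain (assumed)
     """
     kept = []
     for (x, y, w, h) in boxes:
         hits = sum(1 for (u, v) in sensor_points
                    if x <= u < x + w and y <= v < y + h)
         if hits >= min_hits:
             kept.append((x, y, w, h))
     return kept
 ```

 For example, with two proposals and a single projected LiDAR point at (3, 4), only the box covering that point survives, so the CNN classifies one region instead of two.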
 
 We annotated all images with a single class: Damage. Using this tool, you can create a polygon mask as shown below:
-![Image description](Fig1.png) ![Image description](Fig1.png)
\ No newline at end of file
+![Image description](Fig1.jpeg)
\ No newline at end of file