Abstract




D. Zhu, Y. Luo, L. Dai, X. Shao, Q. Zhou, L. Itti, J. Lu, Salient object detection via a local and global method based on deep residual network, Journal of Visual Communication and Image Representation, Vol. 54, pp. 1-9, Elsevier, Jul 2018. [2017 impact factor: 1.836] (Cited by 22)

Abstract: Salient object detection is a fundamental problem in both pattern recognition and image processing tasks. Previous salient object detection algorithms usually rely on various features derived from priors/assumptions about the properties of salient objects. Inspired by the effectiveness of recently developed deep feature learning, we propose a novel model, Salient Object Detection via a Local and Global method based on a Deep Residual Network (SOD-LGDRN), for saliency computation. In particular, we train a deep residual network (ResNet-G) to measure the prominence of the salient object globally, and extract multi-level local features via another deep residual network (ResNet-L) to capture the local properties of the salient object. The final saliency map is obtained by combining the local-level and global-level saliency via Bayesian fusion. Quantitative and qualitative experiments on six benchmark datasets demonstrate that our SOD-LGDRN method outperforms eight state-of-the-art methods in salient object detection.
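The fusion step can be illustrated with a common Bayesian saliency-fusion formulation (a sketch only; the abstract does not give the paper's exact definition, and the symbols S_g, S_l, F, B below are assumptions for illustration). Taking the global map S_g as the prior and the local map S_l as the observation, the pixel-wise posterior probability of foreground F versus background B is

% illustrative Bayesian fusion of a global saliency map S_g (prior)
% and a local saliency map S_l (observation); not necessarily the
% exact formulation used in the paper
\[
p\bigl(F \mid S_l(z)\bigr) \;=\;
\frac{S_g(z)\, p\bigl(S_l(z) \mid F\bigr)}
     {S_g(z)\, p\bigl(S_l(z) \mid F\bigr) + \bigl(1 - S_g(z)\bigr)\, p\bigl(S_l(z) \mid B\bigr)}
\]

where the likelihoods p(S_l(z) | F) and p(S_l(z) | B) are typically estimated from histograms of S_l inside and outside a thresholded region of S_g. The symmetric posterior, with the roles of S_g and S_l swapped, is computed in the same way, and the two posteriors are then combined (for example, summed) to give the final saliency map.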

Themes: Computational Modeling, Computer Vision

 
