Abstract




D. R. Edgington, I. Kerkez, D. Oliver, L. Kuhnz, D. Cline, D. Walther, L. Itti, Detecting Benthic Megafauna in Underwater Video, In: Proc. 2004 AGU Fall Meeting (AVED), Vol. 85, No. 47, p. OS43B/0551, Dec 2004. (Cited by 3)

Abstract: Remotely operated vehicles (ROVs) have revolutionized oceanographic research, supplementing traditional acoustic and trawling technologies as tools for assessing animal diversity, distribution, and abundance. Video equipment deployed on ROVs enables quantitative video transects (QVTs) to be recorded from ocean habitats, providing high-resolution imagery on the scale of individual organisms and their associated habitat. Currently, the manual method employed by trained scientists analyzing QVTs is labor-intensive and costly, limiting the amount of data analyzed from ROV dives. An automated system for detecting organisms and identifying objects visible in video would address these concerns. Automated event detection (scene segmentation) is a step towards an automated analytical system for QVTs. In the work presented here, video frames are processed with a neuromorphic selective-attention algorithm. The candidate locations identified by the attention selection module are evaluated against a number of parameters. These parameters, combined with successful tracking over several frames, determine whether detected events are deemed "interesting" or "boring". "Interesting" events are marked in the video frames for subsequent identification and processing. As reported previously for mid-water QVTs, the system agrees with professional annotations 80 percent of the time. Poor contrast of small translucent animals, in conjunction with the presence of debris ("marine snow"), complicates automated event detection. While the visual characteristics of the seafloor (benthic) habitat are very different from the mid-water environment, the system yields a 92 percent correlation between animals detected on the seafloor and professional annotations. We present data detailing the comparison between a) automated detection and b) professional detection and classification, and we outline plans for future development of automated analysis.
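
The abstract describes a pipeline of per-frame salient-location detection followed by frame-to-frame tracking, with events kept as "interesting" only when a candidate is tracked over several frames. The sketch below is a minimal illustration of that idea and is NOT the authors' AVED system: the saliency proxy (block-wise intensity contrast standing in for the neuromorphic attention model), the synthetic frames, and all function names and thresholds (salient_locations, track_events, block, max_jump, min_track_len) are hypothetical assumptions chosen for illustration.

```python
# Illustrative sketch only -- not the authors' AVED code. Mimics the pipeline
# the abstract describes: detect salient locations per frame, link them across
# frames, and call an event "interesting" if it survives several frames.
import numpy as np

def salient_locations(frame, block=16, top_k=5):
    """Return (row, col) centers of the top_k highest-contrast blocks
    (a crude stand-in for a saliency map)."""
    h, w = frame.shape
    scores = []
    for r in range(0, h - block + 1, block):
        for c in range(0, w - block + 1, block):
            patch = frame[r:r + block, c:c + block]
            scores.append((patch.std(), r + block // 2, c + block // 2))
    scores.sort(reverse=True)
    return [(r, c) for _, r, c in scores[:top_k]]

def track_events(frames, max_jump=24.0, min_track_len=4):
    """Link detections in consecutive frames; keep tracks long enough
    to be deemed 'interesting'."""
    tracks = []  # each track is a list of (frame_index, row, col)
    for t, frame in enumerate(frames):
        for row, col in salient_locations(frame):
            matched = None
            for tr in tracks:
                last_t, lr, lc = tr[-1]
                if t - last_t == 1 and np.hypot(row - lr, col - lc) <= max_jump:
                    matched = tr
                    break
            if matched is not None:
                matched.append((t, row, col))
            else:
                tracks.append([(t, row, col)])
    return [tr for tr in tracks if len(tr) >= min_track_len]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic "video": noisy background plus one bright blob drifting right.
    frames = []
    for t in range(10):
        f = rng.normal(0.0, 1.0, (128, 128))
        f[60:70, 20 + 4 * t:30 + 4 * t] += 8.0
        frames.append(f)
    interesting = track_events(frames)
    print(f"{len(interesting)} 'interesting' event(s) detected")
```

In this toy setting, transient noise detections fail the minimum track length and are discarded as "boring", while the persistent drifting blob is retained, which is the same tracking-based filtering rationale the abstract outlines.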

Themes: Model of Bottom-Up Saliency-Based Visual Attention, Computational Modeling, Computer Vision

 

Copyright © 2000-2007 by the University of Southern California, iLab and Prof. Laurent Itti.