F. Baluch, L. Itti, Effects of training on perceptual salience, In: Proc. Vision Science Society Annual Meeting (VSS08), May 2008.
Abstract: Learning on a visual search task involves plasticity at one or more levels of the visual cortex. Does this plasticity boost target features and suppress distractors in a way that makes the target more perceptually salient? We addressed this question by designing a challenging, attentionally demanding conjunction search task in which each colored Gabor patch item was defined by a conjunction of three features (hue, orientation, and spatial frequency). Three subjects' eye movements were recorded while they searched for a target embedded among distractors in 1/f noise. Once subjects spotted the target, they reported its location and received feedback on whether their choice was correct. Each subject performed three 100-trial search sessions. Each trial had unique targets and distractors, so subjects gained general task expertise rather than expertise with specific stimuli. Accuracy improved significantly from session to session (one-way ANOVA, p<0.005), with subjects achieving on average a 15% boost in accuracy in locating the target. Further, we analyzed the trajectories of subjects' eye movements through the three-dimensional feature space and found that the average Euclidean distance to the target, within feature space, decreased from session to session. Subjects also made first saccades towards items closer to the target in feature space with each successive session; the average Euclidean distance (in feature space) between first-saccade items and the search target was reduced by 20% from the first session to the last. These results provide evidence that, over the course of the sessions, subjects increasingly made saccades towards items more similar to the target, suggesting that such target-like items are more perceptually salient and become even more so with training.
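The analysis above hinges on Euclidean distance between items in the three-dimensional (hue, orientation, spatial frequency) feature space. A minimal sketch of that computation, with illustrative feature names and values that are assumptions for demonstration, not the study's actual stimulus encoding or normalization:

```python
import math

# The three conjunction features; names are illustrative assumptions.
FEATURES = ("hue", "orientation", "spatial_frequency")

def feature_distance(item, target):
    """Euclidean distance between two items in the 3-D feature space.

    In practice the features would presumably be normalized to
    comparable scales before computing distance; the raw values used
    here are purely illustrative.
    """
    return math.sqrt(sum((item[f] - target[f]) ** 2 for f in FEATURES))

# Hypothetical search target and a first-saccade item.
target = {"hue": 0.30, "orientation": 45.0, "spatial_frequency": 2.0}
item = {"hue": 0.35, "orientation": 40.0, "spatial_frequency": 2.5}
print(round(feature_distance(item, target), 3))  # distance in (mixed) feature units
```

Averaging this distance over the items fixated within a session, or over first-saccade items only, would yield the per-session measures the abstract reports.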
Themes: Model of Bottom-Up Saliency-Based Visual Attention, Scene Understanding
Copyright © 2000-2007 by the University of Southern California, iLab and Prof. Laurent Itti.
This page generated by bibTOhtml on Tue 10 Jan 2023 02:30:30 PM PST