
A-Y. D. Chiang, D. J. Berg, L. Itti, Saliency, Memory, and Attention Capture in Marketing, In: Proc. Vision Science Society Annual Meeting (VSS11), May 2011. (Cited by 1)

Abstract: Visual attention is considered to have great value in marketing. The AIDA (Attention – Interest – Desire – Action) advertising model suggests that attention capture is the first and most important step before the desired consumer behavior can take place. Pre-attentive visual processing accounts largely for building up brand preferences in the consumer schema: people tend to choose one brand over another because they feel familiar with it (the mere exposure effect), even though they do not consciously remember having seen the brand or its advertisements before. Marketers spend a great deal of money and time designing and choosing effective publicity materials to capture consumer attention, so an efficient evaluation tool is considered necessary. We propose that the saliency map (a computational model of visual attention) can serve as a useful tool to predict people's eye fixation locations in an advertisement, and help marketers make strategic decisions in an objective manner when choosing the most effective ad for publicity. To test the saliency map's efficacy, eye movements from fourteen naive subjects were recorded while eighteen images of shopping-environment scenes were shown to them for two seconds each, followed by a random mask. Subjects were then asked to recall whether a subsequently presented image contained items that had appeared in the scene. We found no significant correlation between subjects' recall rates and the computed saliency of objects in the scenes; however, computed saliency predicted eye fixation locations three standard deviations above chance. These results support other marketing studies on pre-attentive visual processing and further demonstrate the potential of the saliency map in marketing.
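The abstract refers to a computed bottom-up saliency map. The full Itti-Koch model combines intensity, color, and orientation channels across multiple scales; the sketch below is only a minimal illustration of the core center-surround contrast idea on a single intensity channel, with NumPy only. All function names and parameter values here are illustrative assumptions, not the authors' implementation.

```python
# Minimal center-surround saliency sketch (intensity channel only).
# Illustrative only: the actual Itti-Koch model uses multi-scale pyramids
# plus color and orientation channels.
import numpy as np

def gaussian_blur(img, sigma):
    """Separable Gaussian blur using plain numpy convolutions."""
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    kernel = np.exp(-x**2 / (2 * sigma**2))
    kernel /= kernel.sum()
    # Blur rows, then columns; reflect padding keeps the borders stable.
    pad = np.pad(img, ((0, 0), (radius, radius)), mode="reflect")
    rows = np.apply_along_axis(lambda r: np.convolve(r, kernel, "valid"), 1, pad)
    pad = np.pad(rows, ((radius, radius), (0, 0)), mode="reflect")
    return np.apply_along_axis(lambda c: np.convolve(c, kernel, "valid"), 0, pad)

def intensity_saliency(img, center_sigma=2.0, surround_sigma=8.0):
    """Center-surround contrast: |fine-scale - coarse-scale| intensity."""
    center = gaussian_blur(img, center_sigma)
    surround = gaussian_blur(img, surround_sigma)
    sal = np.abs(center - surround)
    return sal / (sal.max() + 1e-12)  # normalize to [0, 1]

# A bright blob on a dark background should dominate the saliency map,
# so the predicted fixation falls on the blob.
img = np.zeros((64, 64))
img[28:36, 28:36] = 1.0
sal = intensity_saliency(img)
peak = np.unravel_index(np.argmax(sal), sal.shape)
```

In the study's setup, the peak of such a map over an advertisement image would mark the location the model predicts an observer fixates first; the comparison against recorded eye movements is what yielded the above-chance prediction reported in the abstract.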

Themes: Computer Vision, Beobots, Model of Bottom-Up Saliency-Based Visual Attention


Copyright © 2000-2007 by the University of Southern California, iLab and Prof. Laurent Itti.
This page generated by bibTOhtml on Wed Feb 15 12:13:56 PST 2017