Research Support

We thank our sponsors, without whom we would not be able to carry out the research projects described on this web site. Below is a list of our currently funded projects.

New Faculty Startup Funds

Funding Agency: Charles Lee Powell Foundation and USC School of Engineering
Funding Period: 9/1/00 -
Principal Investigator: Laurent Itti

The Powell award and support from the USC School of Engineering were key in allowing us to quickly start our new laboratory at USC. This funding allowed us to purchase a fairly large computer cluster, which has already yielded very promising research results, and to immediately begin several research projects by supporting research assistants while proposals submitted to governmental agencies were under review. Thanks to this startup funding, which let us focus on writing proposals while research was already operational from the beginning, we raised over $1 million in further research funding during our first year at USC (though not all of it is for our laboratory, as most of it supports multi-university, collaborative projects).

Attentional Modulation in Early Sensory Processing

Funding Agency: National Institutes of Health / National Eye Institute
Funding Period: 9/10/01 - 9/9/04
Principal Investigators: Laurent Itti (USC), Thomas Ernst (Brookhaven National Labs)
Total award: $400,000 for both USC and BNL

We seek to understand, in quantitative and computational terms, how focal attention modulates early sensory processing. Understanding the neuronal mechanism of attentional feedback modulation has recently become a high-priority area of vision research, and is one of the research priorities at the National Eye Institute (see "Vision Research, A National Plan 1999-2003," at http://www.nei.nih.gov, in the "Strabismus, Amblyopia and Visual Processing" program). Although much progress has been made in recent years using techniques such as electrophysiology, psychophysics and functional imaging, the community still lacks a unified computational understanding of the effect. Electrophysiology demonstrates "increased firing rates" at the single-neuron level, psychophysics shows "improved discrimination thresholds," and functional magnetic resonance imaging (fMRI) reports "increased activation" with attention, but the computational mechanism underlying these observations remains largely unknown and controversial.

We have recently proposed a computational theory that simultaneously predicts attentional modulation for five different visual psychophysical discrimination tasks. It posits that attention activates a winner-take-all competition among neurons tuned to different orientations within a single hypercolumn in primary visual cortex (area V1). The theory is rooted in recent information-theoretic advances, which allowed us to quantitatively relate single-unit activity in a computational model to human psychophysical thresholds. We believe that now is the time to develop a second quantitative linkage, between modeled single-unit activity and fMRI activation. The major challenge will be to attempt, for the first time, to quantitatively evaluate the attentional modulation of the fMRI signal for different discrimination tasks. A critical component of our computational theory is the assumption that the computational effect of attention is task-independent (e.g., the same increased competition is seen whether subjects discriminate the contrast of a stimulus or its orientation, for matched task difficulties). While such task-independence would be particularly difficult to test in a single-unit experiment (monkeys would have to be trained on five discrimination tasks to be performed while recording in V1), we propose an exploratory study to test for such independence in humans using psychophysics (at the University of Southern California, USC) and high-field fMRI (at Brookhaven National Laboratory, BNL).
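To make the winner-take-all idea concrete, here is a minimal, purely illustrative Python sketch (not the published model): a hypercolumn of orientation-tuned units whose responses pass through a nonlinear, divisively normalized competition stage, where attention is assumed to strengthen the competition by raising the exponent of the nonlinearity, sharpening the population response around the winning orientation. All tuning widths, exponents and other parameters are arbitrary assumptions for illustration only.

```python
# Illustrative sketch only: orientation-tuned units plus a competition stage.
# Attention is assumed here to raise the exponent of the normalization,
# i.e., to intensify the winner-take-all competition within the hypercolumn.

import numpy as np

def tuned_responses(stim_deg, prefs_deg, sigma_deg=20.0):
    """Gaussian orientation tuning for one hypercolumn (orientations wrap at 180 deg)."""
    d = np.abs(((prefs_deg - stim_deg) + 90.0) % 180.0 - 90.0)
    return np.exp(-d**2 / (2.0 * sigma_deg**2))

def compete(responses, gamma):
    """Divisive normalization with exponent gamma: larger gamma, stronger winner-take-all."""
    r = responses ** gamma
    return r / (r.sum() + 1e-9)

prefs = np.arange(0.0, 180.0, 15.0)          # preferred orientations (deg), assumed spacing
r = tuned_responses(stim_deg=45.0, prefs_deg=prefs)

unattended = compete(r, gamma=2.0)           # attention directed elsewhere (assumed exponent)
attended = compete(r, gamma=4.0)             # attention at the stimulus (assumed stronger competition)

for p, a, b in zip(prefs, unattended, attended):
    print(f"{p:5.1f} deg   unattended {a:.3f}   attended {b:.3f}")
```

Running the sketch shows the attended population response concentrating more sharply on the unit preferring the stimulus orientation, which is the qualitative signature of increased competition.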

A Neuromorphic Vision System for Every-Citizen Interfaces

Funding Agency: National Science Foundation (ITR/SY #0112991)
Funding Period: 8/15/01 - 8/14/04
Principal Investigators: Christof Koch (Caltech), Laurent Itti (USC) and Tomaso Poggio (MIT)
Total award: $450,000 shared among Caltech, MIT and USC.

We recently developed two models of visual processing in primates. The first (by Koch and Itti) is concerned with visual orienting and attentional focusing towards interesting locations in a scene; it replicates processing in posterior parietal cortex and other brain areas along the dorsal (or "where") visual stream of the primate brain. The second (by Poggio) is concerned with robust object recognition at one location, based on the simulation of neurons in infero-temporal cortex and other areas along the ventral (or "what") visual stream. Both models account for a wealth of physiological and psychophysical observations in primates, matching or sometimes exceeding human performance under certain conditions. These biomimetic architectures provide the basis for a new approach to computer vision that (i) is robust to clutter and to changing environments or imaging conditions, (ii) is on-line adaptable, and (iii) closely matches human perception.

Departing from the more conventional approaches based on signal processing and mathematical transforms, which we and others have used in the past, we propose to create, for the first time, a complete biologically-inspired model integrating object recognition algorithms with an attentional selection stage. We further propose to apply this new model to a well-known but still fairly open computer vision problem: Every-Citizen Interfaces (ECI). In particular, we propose to develop robust and widely accessible systems for video-conferencing, smart rooms and distance education. Our combined attention and recognition model will detect locations of potential interest in live video streams and will recognize, at those locations, simple gestures through which users can easily and intuitively guide the system. We speculate that our biological algorithms, which have separately demonstrated strong performance at detecting and identifying locations and objects of interest in real color scenes, will together define the broad direction for a new neuromorphic approach to many currently problematic computer vision challenges.
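As a rough illustration of the attend-then-recognize loop described above (not the project's actual code), the sketch below reduces the attention stage to a toy multi-scale center-surround contrast map and treats the recognition stage as a hypothetical recognize_patch placeholder; the real model uses many more feature channels and a full ventral-stream recognition module.

```python
# Illustrative sketch: pick the most "salient" location in a frame, then hand
# the image patch at that location to a (placeholder) recognition stage.

import numpy as np
from scipy.ndimage import gaussian_filter

def compute_saliency(gray):
    """Toy saliency: sum of center-surround intensity differences across a few scales."""
    sal = np.zeros_like(gray, dtype=float)
    for center_sigma, surround_sigma in [(1, 4), (2, 8), (3, 12)]:
        c = gaussian_filter(gray, center_sigma)
        s = gaussian_filter(gray, surround_sigma)
        sal += np.abs(c - s)
    return sal

def recognize_patch(patch):
    """Hypothetical placeholder for the ventral-stream ("what") recognition module."""
    return "unknown-gesture"

def attend_and_recognize(frame, patch_size=32):
    gray = frame.mean(axis=2) if frame.ndim == 3 else frame.astype(float)
    sal = compute_saliency(gray)
    y, x = np.unravel_index(np.argmax(sal), sal.shape)   # most salient location
    h = patch_size // 2
    patch = gray[max(0, y - h):y + h, max(0, x - h):x + h]
    return (y, x), recognize_patch(patch)

frame = np.random.rand(240, 320, 3)                      # stand-in for one video frame
location, label = attend_and_recognize(frame)
print("attended location:", location, "label:", label)
```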

Our broader goal is to demonstrate two points: first, that a biologically-inspired approach to traditionally hard computer vision problems can yield unusually robust and versatile vision systems (which work with color video streams and quickly adapt to various environmental conditions, users, and tasks), and, second, that computational neuroscience models of vision can be extended to yield real, useful and widely applicable computer vision systems, rather than being restricted to testing neuroscience hypotheses with simple laboratory stimuli. We speculate that our cross-disciplinary effort, fusing the latest advances in neuroscience, on-line statistical learning and computer vision, will catalyze novel research directions and will educate students towards building truly robust and accessible Every-Citizen Interfaces.

Learning Higher-Order Perceptual Saliency of Monochromatic Images: Psychophysics and Computational Modeling

Funding Agency: National Imagery and Mapping Agency
Funding Period: 9/1/01 - 8/31/03
Principal Investigators: Christof Koch (Caltech) and Laurent Itti (USC)
Total award: $219,000 shared between Caltech and USC.

While much recent research concerned with analyzing "where the eyes look" has focused on local image statistics, our research towards understanding how primates deploy visual attention in cluttered natural scenes has emphasized the importance of non-linear contextual interactions across distant visual locations. We propose to further our basic neuroscience research by implementing, for the first time, a biological model of early primate vision that contains all of the recently characterized cortical interactions and is applied to overhead imagery.

Research Plan:
(1) Record eye movements to determine where human observers, both naive subjects and professional analysts (as accessible via NIMA), deploy their gaze when examining images (synthetic, land-based outdoor, and overhead imagery).
(2) Have the observers explicitly report "interesting" locations. This will allow us to train our computational model to the demands of specific search tasks.
(3) Extend our model of bottom-up visual attention to include a realistic retinal filter (fall-off of resolution with eccentricity), and calibrate it to match the eye-tracking setup (see the sketch after this list).
(4) Merge our model of short-range interactions among banks of overlapping spatial filters with our model of bottom-up attention, to obtain more realistic local information processing in the presence of clutter.
(5) Extend the merged model to include both excitatory and inhibitory long-range interactions. In conjunction with a fast (<1 sec) excitatory plasticity process, this will give rise to contour integration, critical for detecting roads and other extended contours.
(6) Use the eye-movement and user-feedback data to calibrate the interactions in the model, via a supervised learning technique where the teaching signal is provided by the human observer in the loop.

Expected Outcome: First, our model will detect "interesting" locations (as defined from human data) in overhead imagery; second, it will be used to attenuate those locations which may be salient but can be automatically determined to contain nothing interesting. This will allow us to efficiently cue expert analysts towards the most relevant locations in complex images.
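As a purely illustrative aside on step (3), the sketch below shows one simple way an eccentricity-dependent retinal filter can be approximated: pixels are blurred more strongly the farther they are from the current fixation point. The linear blur schedule and ring-based blending are assumptions for illustration, not the calibrated filter described in the plan.

```python
# Illustrative foveation sketch: blend progressively blurred copies of an image
# according to each pixel's eccentricity (distance from fixation).

import numpy as np
from scipy.ndimage import gaussian_filter

def foveate(image, fixation, n_rings=6, max_sigma=8.0):
    """Apply stronger Gaussian blur at larger eccentricities (fixation = (row, col))."""
    h, w = image.shape
    rows, cols = np.mgrid[0:h, 0:w]
    ecc = np.hypot(rows - fixation[0], cols - fixation[1])
    ecc /= ecc.max()                                    # normalize eccentricity to [0, 1]
    ring = np.minimum((ecc * n_rings).astype(int), n_rings - 1)
    out = np.zeros_like(image, dtype=float)
    for k in range(n_rings):
        if k == 0:
            blurred = image.astype(float)               # keep the fovea sharp
        else:
            sigma = max_sigma * k / (n_rings - 1)       # assumed linear blur schedule
            blurred = gaussian_filter(image.astype(float), sigma)
        out[ring == k] = blurred[ring == k]
    return out

img = np.random.rand(128, 128)                          # stand-in for one stimulus image
foveated = foveate(img, fixation=(64, 64))
print(foveated.shape)
```

In an actual eye-tracking study, the blur schedule would instead be calibrated against the display geometry and viewing distance of the setup.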

Neural Computing, Round 3: Towards Biologically-Inspired Operating Systems and Computer Architectures

Funding Agency: National Science Foundation (BITS)
Funding Period: 1/1/02 - 12/31/03
Principal Investigators: Michael Arbib, Laurent Itti (USC), Kirstie Bellman and Chris Landauer (The Aerospace Corporation)
Total award: $400,000 shared between USC and The Aerospace Corporation.

There have been two main rounds of neural computing to date: the first focused on adaptation and self-organization in neural networks, the second on compartmental modeling of the neuron. We propose a complementary research program that will catalyze a third round of neural computing: analyzing the architecture of the primate brain to extract neural information processing principles and translate them into biologically-inspired operating systems and computer architectures. This new effort is motivated, on the one hand, by recent advances in computational neuroscience research at USC, studying systems ranging from low-level vision to the interactions between frontal and parietal cortices. These have given us a unique perspective on the higher-level principles of computation in neural systems, including the interplay between feed-forward and feed-back pathways, the sharing of neural resources between perception and action, and the role of plasticity in sensory and motor processing. While claims that engineered systems are based in one way or another on biology have been popular over the last decade, these neural principles have, until now, had little impact on computer software and architecture design, which has been driven primarily by engineering disciplines. On the other hand, recent developments in software architecture at The Aerospace Corporation have suggested frameworks and tools, such as wrapping technology and reflective agent systems, that for the first time make the actual embodiment of high-level neural processing principles possible in operating systems and computer architectures. This novel effort is hence a first attempt at concretely translating the latest advances from computational neuroscience into actual computer systems, using techniques and concepts derived from the latest software engineering research.

The proposed University-Industry collaborative research will have three components:

(1) Analyze and further develop, in terms of basic information processing principles, our computational neuroscience models concerned with grasping, with recognizing and executing actions, and with describing those actions in language. In particular, we will study a dramatic pattern of "re-use" of neural architecture, the "mirror system" (Rizzolatti & Arbib, 1998), in which re-using the same neural hardware allows primates to execute, learn, perceive and describe actions. We believe that this system, in which the sensory, planning and executive stages cooperate in a plastic and flexible manner to form an integrated whole, lends itself perfectly to a first attempt at translating neural computation principles into new software and hardware architectures.

(2) With this analysis as a basis, develop for computer science an explicit formulation of the brain's approach to "reusable computing," in which evolutionary refinements augment available circuitry to handle new tasks, and show why what is known about the organization and architecture of these capabilities is also critical to the development of a new approach to computer architecture and operating systems. We stress, however, that the new architectural developments will include, but not be restricted by, biological principles. For example, the inclusion of a non-biological wrapping technology will allow the re-use of biological computing strategies in a way that in biology is available only on an evolutionary time scale.

(3) Consolidate this work by developing a modeling environment that combines these neurological principles with techniques from architecture description languages and reflective agent systems, as a precursor to systems that enable users to specify desired system or architectural properties without having to specify all the required components, the relationships among them, and their values. We will test the new environment both through re-implementations of the models from component (1) and through the development of new approaches to Sensornet and distributed-agent applications.

In the long term, our ambition is to open a new research effort applying the latest advances in computational neurobiology to the design of a new generation of machines. In particular, we believe that the proposed research will catalyze the development of unusually robust, versatile, and adaptive computer architectures that can easily adapt, correct themselves, and be universally usable irrespective of the user's gender, ethnicity, disability or disabling condition, geographic region, or education. Our long-term experience not only with computational neuroscience and software architectures, but also with developing neuromorphic systems that are truly derived from biological computation principles, will allow us to critically evaluate the new advances promised here.

Towards Neuromorphic Vision Systems: A Hands-On Experiment

Funding Agency: USC's Zumberge Faculty Research and Innovation Fund
Funding Period: 7/1/02 - 6/30/03
Principal Investigator: Laurent Itti
Total award: $25,000

In recent years, a new discipline has emerged which challenges classical approaches to engineering and computer vision research: neuromorphic engineering. This new research effort promises to develop engineering systems with unparalleled robustness and on-line adaptability. Such systems are based on algorithms and techniques inspired by, and closely replicating, the principles of information processing in biological nervous systems. Their applicability to engineering challenges is widespread, and includes smart sensors, implanted electronic devices, autonomous visually-guided robotic systems, prosthetic systems, and robust human-computer interfaces.

Because of its truly interdisciplinary nature, drawing on the latest advances in experimental and computational neuroscience, electrical engineering, control theory, and signal and image processing, neuromorphic engineering is difficult to approach from a pure engineering background. As discussions with students in the biologically-oriented Computer Science courses we have taught so far have revealed (see http://iLab.usc.edu), many students are scared by the prospect of plunging into the broad and complex fields of biology and neuroscience, given that a very interesting classical engineering curriculum is also available.

This proposal is for pilot funding to initiate a combined research and education effort that will foster excitement and increase awareness among engineers about the recent achievements and great potential of neuromorphic engineering. Our research goal is to demonstrate how biologically-inspired algorithms may yield a fresh perspective on traditionally hard engineering problems, including computer vision, navigation, sensorimotor coordination, and decision making under time pressure. Our educational goal is to showcase a number of powerful neuromorphic algorithms and to compare them to traditional engineering systems, in an attempt to convince interested students that the extra effort involved in working with biologists is justified by the real competitive edge it gives them.

The format we propose is an inter-university competition built around an outdoor race of autonomous, vision-enabled toy off-road vehicles. Twenty vehicles equipped with inexpensive video cameras as their only sensor will depart on an outdoor trail along which various guiding landmarks and signs will be placed. Prior to the race, vehicles will have an opportunity to train on the landmarks and terrain through practice laps. This contest will differ dramatically from existing robotics contests in four respects. First, to exploit real-time video streams and effectively base control on vision, onboard computational power will be high (four 1 GHz CPUs and 1 GByte of central memory), while most robotics competitions use microcontroller-based systems and simple (e.g., telemetry) sensors. Second, the environment will be unfriendly (outdoors, with variations of illumination, shadows, obstacles, etc.) and largely uncontrolled, while most contests use perfectly well-defined and simplified environments; adaptability, robustness, and the capability to handle unexpected situations will hence be key to success. Third, the circuit will be unknown to the participants until the practice laps, while the controlled environment of typical contests is usually known well in advance; robots should thus either have extremely robust vision or be able to quickly learn the circuit and the disposition of landmarks during the practice laps, to complement vision during the race. Fourth, in addition to their time to finish the race, robots which use biologically-plausible algorithms will be given credit towards their final ranking.

For more information, please visit our Beobots page.

Past support

In the past (at Caltech), we have also been fortunate to receive generous support from the National Science Foundation, the Office of Naval Research, and the National Institutes of Health.

How to interpret these numbers?

If you are a student and wonder how we could possibly be unable to give you an assistantship given the amounts shown above, please consider the following rough figures:

Copyright © 2000 by the University of Southern California, iLab and Prof. Laurent Itti