Neovision2 Tower dataset
The Tower dataset of 100 video clips is split into two subsets: training and testing. The training set is publicly
available below. The test set is also freely available upon request by email to Prof. Laurent Itti
(email@example.com). Every request will be granted. We ask for a request only to ensure that you
are aware of proper training-vs-test data practices, such as avoiding double-dipping.
Neovision2 Tower Training Set
The following is provided for each clip:
- Summary: An image summarizing the clip as 25 low-resolution thumbnails spanning its duration.
- MPEG: A low-bandwidth MPEG-1 compressed version of the clip. This should only be used for human inspection of the clip. Use the high-resolution PNG images for machine vision purposes, as they have fewer video compression artifacts.
- CSV: Ground truth annotations in comma-separated values (CSV) format. These annotations indicate where objects of interest are in each video frame.
- PNG-ZIP: A ZIP archive of the clip where each raw video frame has been written out directly from the camera's native recording into a PNG file. Note that some lossy video compression occurred on the camera itself during recording, but at very high quality. The conversion from the camera's video files to PNG is lossless and did not introduce any further compression artifacts.
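As a rough sketch, the per-clip files above could be consumed like this in Python using only the standard library. The function names are illustrative, and the CSV column names in the docstring are assumptions; consult the header row of the actual annotation files for the real schema.

```python
import csv
import zipfile


def iter_frames(zip_path):
    """Yield (filename, raw PNG bytes) for each frame in a clip's PNG-ZIP
    archive, in filename order. Decoding the PNG bytes into pixel arrays
    is left to an imaging library of your choice."""
    with zipfile.ZipFile(zip_path) as zf:
        for name in sorted(zf.namelist()):
            if name.lower().endswith(".png"):
                yield name, zf.read(name)


def load_annotations(csv_path):
    """Parse a ground-truth CSV into a list of per-row dicts keyed by the
    file's own header row (column names are whatever the file declares)."""
    with open(csv_path, newline="") as f:
        return list(csv.DictReader(f))
```

Iterating lazily over the ZIP avoids extracting tens of gigabytes of frames to disk when only a few frames per clip are needed.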
Download full Neovision2 Tower Training dataset
The full training dataset contains 50 video clips and the corresponding 50 summary images and CSV ground-truth annotation files.
Download full training dataset [120 GB]
Browse the Neovision2 Tower Training dataset
You can view and download data for each video clip separately below.
This research project was made possible by funding from the Defense Advanced
Research Projects Agency. The authors of this document affirm that the views, opinions, and data provided herein are
solely their own and do not represent the views of the United States government or any agency thereof.
Copyright © 2013 by the University of
Southern California, iLab and Prof. Laurent Itti.