Neovision2 annotated video datasets
Thank you for your interest in the Neovision2 test sets. Please read the following before requesting access. These standard practices ensure that you do not waste the test set, as only one is available:
- You can use the training set, openly available for download on this site, to develop, train, and fine-tune your
algorithms.
- You must then decide when all development, training, and tuning are complete.
- At this point, please email Prof. Laurent Itti (itti@pollux.usc.edu) to
request access to the Neovision2 test sets.
- Every access request will be granted. However, it is up to you to ensure that you do not further develop, train,
or tune your algorithms after you have processed the test set. That is, processing the test set should be a one-time
action, after which you simply report the results obtained. If you were to process the test set once, observe the
results, adjust a parameter in your algorithms, and process the test set again (perhaps hoping for better results), you
would be guilty of double-dipping (circular analysis): you would essentially have used the test set for training,
thereby turning it into a training set and leaving no pristine test set on which to evaluate your algorithms. A short
simulation after this list makes the effect concrete.
- The following references may be useful: Wikipedia on circular analysis, Wikipedia on training set, Wikipedia on
test set, and *Circular analysis in systems neuroscience: the dangers of double dipping* (Kriegeskorte et al.,
Nature Neuroscience, 2009).
- Note that no validation set is provided here. Thus it is up to you to split the training set into training
and validation subsets if you need to make algorithm or parameter selections; a minimal split sketch follows this
list. Please do the split before you request the test sets.
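
To see why re-running on the test set is so dangerous, here is a small, self-contained simulation (plain Python, no Neovision2 data involved). The "detector" below has no real signal at all, yet simply sweeping its one parameter and keeping whichever setting scores best on the test set makes it appear clearly better than chance:

```python
import random

rng = random.Random(0)

# Purely random binary "ground truth" for a hypothetical 200-frame test set:
# no detector can genuinely do better than 50% accuracy here.
test_labels = [rng.randint(0, 1) for _ in range(200)]

def accuracy(seed):
    """A 'detector' whose only parameter (the seed) changes its random guesses."""
    guess_rng = random.Random(seed)
    guesses = [guess_rng.randint(0, 1) for _ in test_labels]
    return sum(g == t for g, t in zip(guesses, test_labels)) / len(test_labels)

# Double-dipping: sweep the parameter and keep whichever setting scores
# best on the test set itself.
best_seed = max(range(100), key=accuracy)
print(f"accuracy after tuning on the test set: {accuracy(best_seed):.1%}")  # well above 50%
print(f"accuracy of a single untuned run:      {accuracy(0):.1%}")          # about 50%
```

The tuned score is inflated purely by selection, not by any real improvement; the same inflation would contaminate your reported Neovision2 results if you processed the test set more than once.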
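
And here is a minimal sketch of holding out a validation subset from the training set. The clip names, the 50-clip count, and the 80/20 split are placeholder assumptions for illustration, not part of the Neovision2 release; substitute the actual identifiers of the training clips you downloaded:

```python
import random

# Placeholder clip identifiers; replace with the actual names of the
# Neovision2 training clips downloaded from this site.
clip_ids = [f"training_{i:03d}" for i in range(1, 51)]

# Shuffle with a fixed seed so the split is reproducible across runs.
rng = random.Random(42)
rng.shuffle(clip_ids)

# Hold out roughly 20% of the clips for validation (a common heuristic,
# not something prescribed by the Neovision2 release).
n_val = max(1, len(clip_ids) // 5)
val_ids = sorted(clip_ids[:n_val])
train_ids = sorted(clip_ids[n_val:])

print(f"{len(train_ids)} training clips, {len(val_ids)} validation clips")
```

Make all algorithm and parameter selections against the validation subset, freeze everything, and only then request and process the test set, exactly once.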
This research project was made possible by funding from the Defense Advanced
Research Projects Agency. The authors of this document affirm that the views, opinions, and data provided herein are
solely their own and do not represent the views of the United States government or any agency thereof.
Copyright © 2013 by the University of
Southern California, iLab and Prof. Laurent
Itti