Abstract


K. Lekkala, H. Bao, P. Cai, W. Z. Lim, C. Liu, L. Itti, USCILab3D: A Large-scale, Long-term, Semantically Annotated Outdoor Dataset, In: The Thirty-eighth Conference on Neural Information Processing Systems (NeurIPS'24), Datasets and Benchmarks Track, Dec 2024. [2024 datasets and benchmarks track acceptance rate: 25.3%]

Abstract: In this paper, we introduce the USCILab3D dataset, a large-scale, annotated outdoor dataset designed for versatile applications across multiple domains, including computer vision, robotics, and machine learning. The dataset was acquired using a mobile robot equipped with 5 cameras and a 32-beam, 360° scanning LiDAR. The robot was teleoperated, over the course of a year and under a variety of weather and lighting conditions, through a rich variety of paths within the USC campus (229 acres = 92.7 hectares). The raw data was annotated using state-of-the-art large foundation models, and processed to provide multi-view imagery, 3D reconstructions, semantically annotated images and point clouds (267 semantic categories), and text descriptions of images and the objects within them. Per-scan pose stamps and trajectory data further enable a diverse array of analyses. In sum, the dataset offers 1.4M point clouds and 10M images (6TB of data). Although it covers a narrower geographical scope than whole-city datasets, our dataset prioritizes intricate intersections along with denser multi-view scene images and semantic point clouds, enabling more precise 3D labelling and facilitating a broader spectrum of 3D vision tasks.
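As a rough illustration of how pose-stamped, semantically labelled point clouds of this kind might be consumed, the Python sketch below loads one scan and transforms it into a shared world frame. The directory layout, file names, array shapes, and label encoding are assumptions made for illustration only, not the dataset's documented format.

    # Minimal sketch: iterate over pose-stamped LiDAR scans with per-point
    # semantic labels. All paths and field conventions below are hypothetical.
    from pathlib import Path

    import numpy as np

    DATA_ROOT = Path("uscilab3d")  # hypothetical local copy of the dataset

    def load_scan(scan_dir: Path):
        """Load one scan: Nx3 points, per-point class IDs, and a 4x4 pose."""
        points = np.load(scan_dir / "points.npy")  # assumed (N, 3), meters
        labels = np.load(scan_dir / "labels.npy")  # assumed (N,), IDs 0..266
        pose = np.load(scan_dir / "pose.npy")      # assumed 4x4 SE(3), sensor-to-world
        return points, labels, pose

    def to_world(points: np.ndarray, pose: np.ndarray) -> np.ndarray:
        """Transform sensor-frame points into the world frame via the pose."""
        homog = np.hstack([points, np.ones((len(points), 1))])  # (N, 4)
        return (homog @ pose.T)[:, :3]

    if __name__ == "__main__":
        for scan_dir in sorted(DATA_ROOT.glob("scans/*")):
            points, labels, pose = load_scan(scan_dir)
            world_points = to_world(points, pose)
            print(scan_dir.name, world_points.shape,
                  np.unique(labels).size, "semantic classes present")

Because every scan carries a world-frame pose, consecutive scans can be accumulated in this way into larger semantic reconstructions, which is what makes the dense multi-view coverage described above usable for 3D vision tasks.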

Themes: Computer Vision, Machine Learning

 
