Autonomous driving is a key factor for future mobility. Properly perceiving the environment of the vehicle is essential for safe driving, which requires computing accurate geometric and semantic information in real time. In this paper, we challenge state-of-the-art computer vision algorithms to build a perception system for autonomous driving. An inherent drawback in the computation of visual semantics is the trade-off between accuracy and computational cost. We propose to circumvent this problem by following an offline-online strategy. During the offline stage, dense 3D semantic maps are created. In the online stage, the current driving area is recognized in the maps via a re-localization process, which allows the system to retrieve the pre-computed accurate semantics and 3D geometry in real time. Dynamic obstacles are then detected to obtain a rich understanding of the current scene. We evaluate our proposal quantitatively on the KITTI dataset and discuss the related open challenges for the computer vision community.
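To illustrate the general offline-online idea (this is a toy sketch, not the authors' implementation; the descriptors, map entries, and function names below are all hypothetical), the online stage can be seen as a nearest-neighbor lookup: re-localize the current frame against descriptors stored in the offline map, then retrieve the semantics that were pre-computed for the matched map region.

```python
# Toy sketch of the offline-online lookup idea (all data hypothetical).

def l2(a, b):
    """Euclidean distance between two place descriptors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

# Offline stage: a pre-built map pairing a place descriptor with the
# dense semantics computed (expensively, offline) for that map region.
offline_map = [
    ([0.1, 0.9, 0.2], {"road": 0.62, "building": 0.30, "vegetation": 0.08}),
    ([0.8, 0.1, 0.5], {"road": 0.40, "building": 0.55, "vegetation": 0.05}),
]

def relocalize(query_descriptor, mapped):
    """Online stage: match the current frame's descriptor to the closest
    map entry and return its pre-computed semantics (cheap at runtime)."""
    best = min(mapped, key=lambda entry: l2(query_descriptor, entry[0]))
    return best[1]

# A query frame near the first map region retrieves that region's semantics.
semantics = relocalize([0.15, 0.85, 0.25], offline_map)
```

The point of the sketch is that the expensive semantic computation happens once, offline; the online cost reduces to descriptor matching plus a table lookup.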
If you use this in your research, please cite our WACV 2015 paper:
G. Ros, S. Ramos, M. Granados, A. Bakhtiary, D. Vazquez and A.M. Lopez.
Vision-based Offline-Online Perception Paradigm for Autonomous Driving.
In Winter Conference on Applications of Computer Vision (WACV), 2015.
This work is supported by Universitat Autònoma de Barcelona, the Spanish projects TRA2011-29454-C03-01(eCo-DRIVERS), TIN2011-25606 (SiMeVé) and TIN2011-29494-C03-02 (FireWATCHER), and Sebastian Ramos’ FPI Grant BES-2012-058280.