Download our training and test sets with the corresponding semantic ground truth.

The legend for the ground truth classes is as follows:

Class        Colour name     RGB
Vegetation   Dark yellow     128 128 0
Pole         Light yellow    192 192 128
Cyclist      Light blue      0 128 192
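As a minimal sketch of how the legend above can be used, the following hypothetical helper (not part of the dataset tools) decodes a ground-truth RGB image into per-pixel class names; the colour triples come from the table, while the function name and the "Unlabelled" fallback are our own assumptions:

```python
import numpy as np

# Colour legend from the table above (RGB -> class name).
LEGEND = {
    (128, 128, 0): "Vegetation",    # dark yellow
    (192, 192, 128): "Pole",        # light yellow
    (0, 128, 192): "Cyclist",       # light blue
}

def rgb_to_class(label_img):
    """Map an HxWx3 uint8 ground-truth image to an HxW array of class
    names, using 'Unlabelled' where a colour is not in the legend."""
    h, w, _ = label_img.shape
    out = np.full((h, w), "Unlabelled", dtype=object)
    for rgb, name in LEGEND.items():
        mask = np.all(label_img == np.array(rgb, dtype=np.uint8), axis=-1)
        out[mask] = name
    return out

# Tiny synthetic example: one vegetation pixel and one unknown pixel.
img = np.array([[[128, 128, 0], [255, 255, 255]]], dtype=np.uint8)
print(rgb_to_class(img))  # [['Vegetation' 'Unlabelled']]
```

In practice the label images would be loaded with any image library that returns RGB arrays (e.g. Pillow or OpenCV after BGR-to-RGB conversion).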

The colour images contained in this dataset are part of the KITTI odometry dataset [Geiger]. In addition to the 70 labelled images released with the publication of [Valentin], we have manually labelled 146 additional images, which we release here. Our training set consists of the 70 labelled images of [Valentin] plus 100 of our own labelled images; the remaining 46 of our labelled images were used for testing.

Please note that, for the convenience of interested researchers, we provide here not only our labelled images but also the associated images of the KITTI odometry dataset. Thus, you must respect their conditions of use too.

[Geiger] A. Geiger, P. Lenz, and R. Urtasun, “Are we ready for autonomous driving? The KITTI Vision Benchmark Suite”, CVPR, 2012.

[Valentin] J. Valentin, S. Sengupta, J. Warrell, A. Shahrokni, and P. Torr, “Mesh based semantic modelling for indoor and outdoor scenes”, CVPR, 2013.

Regarding the 146 labelled images that we provide, you may use them as long as you cite our associated WACV paper:

G. Ros, S. Ramos, M. Granados, A. Bakhtiary, D. Vazquez and A.M. Lopez.
Vision-based Offline-Online Perception Paradigm for Autonomous Driving.
In Winter Conference on Applications of Computer Vision (WACV), 2015.

@inproceedings{Ros2015WACV,
  author    = {G. Ros and S. Ramos and M. Granados and A. Bakhtiary and D. Vazquez and {A.M.} Lopez},
  title     = {Vision-based Offline-Online Perception Paradigm for Autonomous Driving},
  booktitle = {WACV},
  year      = {2015}
}

Please read the terms of use before downloading the dataset: disclaimer