Call for papers



News
2013-12-20: Photo of the best paper award uploaded.
2013-12-10: The workshop was successfully held. Thanks to all the participants and speakers! We will soon upload some photos of the event.
2013-11-03: Program uploaded
2013-10-08: The 4th and last invited talk, by the ADAS Group (CVC, Spain), is confirmed.
2013-10-04: Camera-ready papers must be uploaded through the ICCVW2013 author kit.
2013-09-12: Mitch Bryson is confirmed as invited speaker in the workshop.
2013-09-03: The deadline has been EXTENDED to 2013-09-10, 23:59 PDT
2013-08-31: The submission website is now open and ready to receive papers.
2013-08-28: Michael Milford is confirmed as invited speaker in the workshop. We'll soon provide more information.
Welcome
Computer vision plays a key role in vehicle technology: examples include advanced driver assistance systems (ADAS), exploratory and service robotics, unmanned aerial vehicles and underwater robots. In addition to traditional applications such as lane departure warning, traffic object recognition, visual odometry and trajectory planning, new challenges are arising: learning and evaluation with reduced ground-truth/testing data, on-board calibration of multi-camera systems, SLAM in natural scenarios, etc.

The goal of the 4th CVVT:E2M workshop, to be hosted at ICCV 2013 in Sydney (Australia), is to bring together researchers in computer vision for vehicular technologies, in order to promote the development and spread of new ideas and results across the aforementioned fields. We invite the submission of original research contributions in computer vision addressing:
  • Autonomous navigation and exploration based on vision and 3D measurements
  • Vision-based advanced driver assistance systems
  • Vision-based underwater and unmanned aerial vehicles
  • Visual driver monitoring and driver-vehicle interfaces
  • On-board calibration of multi-camera acquisition systems (stereo rigs, multimodal, networks)
  • Non-verbal and graphical information for long-distance exploration
  • Performance evaluation in navigation, exploration and driver-assistance
  • Machine learning techniques in visual navigation, exploration and driver-assistance
Please check the past editions section for more information on the previous three editions of the workshop.
Best Paper Award
Among the papers accepted by the program committee, CVVT2013 awarded the best paper of the workshop to:

"Exploiting Sparsity for Real Time Video Labelling"
by Lachlan Horne, Jose Alvarez and Nick Barnes



Dates & Submission
Important Dates
Submissions open: 26th August 2013
Submissions deadline: 10th September 2013 (23:59 PDT)
Author notification: 3rd October 2013
Camera-ready papers: 13th October 2013
Workshop: 8th December 2013

Submission Procedure
Authors should take into account the following:
  • The submission site is https://cmt.research.microsoft.com/CVVTE2M2013.
  • The maximum paper length is 8 pages. The format of the papers is the same as the ICCV main conference.
  • We accept dual submissions to ICCV 2013 and CVVT2013, but the manuscript must contain substantial original content not submitted to any other conference, workshop or journal.
  • Submissions will be rejected without review if they: contain more than 8 pages, violate the double-blind policy or violate the dual-submission policy.
  • Manuscript templates can be found at the main conference website: http://www.iccv2013.org/author_guidelines.php
  • (new!) All accepted papers will be indexed in IEEE Xplore and will therefore be charged a $200 fee, as in the main conference and the other indexed workshops.
Committees
General Chairs
David Gerónimo / KTH Royal Institute of Technology, Sweden
Atsushi Imiya / IMIT, Chiba University, Japan

Program and Area Chairs
Antonio M. López / CVC and Universitat Autònoma de Barcelona, Spain
Theo Gevers / CVC, Spain and University of Amsterdam, The Netherlands
Urbano Nunes / University of Coimbra, Portugal
Dariu M. Gavrila / Daimler AG, Germany
Steven Beauchemin / University of Western Ontario, Canada

PRoViDE Chair
Tomas Pajdla / The Czech Technical University in Prague, Czech Republic

Program Committee
Hanno Ackermann / Leibniz Universität Hannover, Germany
José M Álvarez / NICTA, Australia
Joao P. Barreto / Universidade de Coimbra, Portugal
Pascual Campoy / Universidad Politécnica de Madrid, Spain
Amitava Chatterjee / Jadavpur University, India
Arturo de la Escalera / Universidad Carlos III de Madrid, Spain
Armagan Elibol / Yildiz Technical University, Turkey
Paolo Grisleri / Università di Parma & VisLab, Italy
Aura Hernández / CVC and Universitat Autònoma de Barcelona, Spain
Yousun Kang / Tokyo Polytechnic University, Japan
Norbert Krüger / The Maersk Mc-Kinney Moller Institute, Denmark
Frédéric Lerasle / LAAS-CNRS and Université Paul Sabatier, France
Ron Li / The Ohio State University, USA
Krystian Mikolajczyk / University of Surrey, United Kingdom
Hiroshi Murase / Nagoya University, Japan
Lazaros Nalpantidis / Aalborg University Copenhagen, Denmark
Luciano Oliveira / Federal University of Bahia, Brazil
Gerhard Paar / Joanneum Research, Graz, Austria
Oscar Pizarro / University of Sydney, Australia
Daniel Ponsa / CVC and Universitat Autònoma de Barcelona, Spain
Raúl Rojas / Free University of Berlin, Germany
Bodo Rosenhahn / University of Hannover, Germany
Vítor Santos / Universidade de Aveiro, Portugal
Angel D. Sappa / Computer Vision Center, Spain
Jun Sato / Nagoya Institute of Technology, Japan
Hanumant Singh / Woods Hole Oceanographic Institution, USA
Akihiko Torii / Tokyo Institute of Technology, Japan
Rudolph Triebel / Technische Universität München, Germany
Raquel Urtasun / Toyota Technological Institute at Chicago, USA
Tobi Vaudrey / The University of Auckland, New Zealand
Program
Room: 202
08:30 Welcome message
08:40 Invited talk: RatSLAM: Using Models of Rodent Hippocampus for Vision-based Robot Navigation and Beyond
Michael Milford / Queensland Univ. of Technology, Australia
09:30 Oral 1: Enhanced Target Tracking in Aerial Imagery with P-N Learning and Structural Constraints
Mennatullah Siam / Nile University, Egypt
Mohamed El-Helw / Nile University, Egypt
10:00 Coffee Break
10:30 Oral 2: Evaluating Color Representations for Online Road Detection
Jose Alvarez / NICTA, Australia
Theo Gevers / University of Amsterdam, The Netherlands
Antonio López / CVC and Univ. Autònoma de Barcelona, Spain
10:50 Oral 3: Direct Generation of Regular-Grid Ground Surface Map From In-Vehicle Stereo Image Sequences
Shigeki Sugimoto / Tokyo Institute of Technology, Japan
Kouma Motooka / Tokyo Institute of Technology, Japan
Masatoshi Okutomi / Tokyo Institute of Technology, Japan
11:10 Oral 4: From Video Matching to Video Grounding
Georgios Evangelidis / INRIA, France
Ferran Diego Andilla / HCI, Germany
Radu Horaud / INRIA, France
11:30 Invited Talk: Autonomous driving: when 3D mapping meets semantics
Antonio M. López, Germán Ros, Sebastián Ramos, Jiaolong Xu / CVC, Spain
12:40 Lunch
14:30 Invited Talk: Autonomous systems for environmental monitoring: from the air and underwater
Mitch Bryson / University of Sydney, Australia
15:20 Oral 5: Visual Approaches for Driver and Driving Behavior Monitoring: A Review
Hang-Bong Kang / The Catholic University of Korea, Republic of Korea
15:40 Coffee break
16:10 Oral 6: Evaluation of the Capabilities of Confidence Measures for Assessing Optical Flow Quality
Patricia Márquez-Valle / CVC, Spain
Debora Gil / CVC, Spain
Aura Hernández / CVC and Univ. Autònoma de Barcelona, Spain
16:30 Oral 7: Exploiting Sparsity for Real Time Video Labelling
Lachlan Horne / NICTA, Australia
Jose Alvarez / NICTA, Australia
Nick Barnes / NICTA, Australia
16:50 Invited Talk: PRoViDE Planetary Robotics
Tomas Pajdla / Czech Technical University in Prague, Czech Republic
17:20 Best Paper Announcement and Closing
Invited Talks
PRoViDE Planetary Robotics
Tomas Pajdla (The Czech Technical University in Prague, Czech Republic)
bio Tomas Pajdla is an Assistant Professor at the Czech Technical University in Prague. He works on the geometry and algebra of computer vision and robotics, with emphasis on non-classical camera systems, 3D reconstruction and industrial vision. T. Pajdla has published more than 75 works in journals and proceedings and received awards for his work at OAGM 1998, BMVC 2002 and ICCV 2005. He served as a programme chair of ECCV 2004 and more than ten times as an area chair of ICCV, CVPR, ECCV, ACCV, ICRA and BMVC. He is a member of the ECCV Board and a co-editor of IPSJ Transactions on Computer Vision and Applications, and was a co-editor of special issues of the International Journal of Computer Vision and of Robotics and Autonomous Systems. T. Pajdla has connections to the planetary research community through EU projects with NASA and EADS Astrium, and to the automotive industry via Daimler AG.
The workshop will host an invited talk on the outcomes so far of the FP7 European collaborative project "PRoViDE - Planetary Robotics Vision Data Exploitation". This project aims at assembling imaging data from vehicles and probes on planetary surfaces and making it available as a unique database, featuring fusion of orbital and 3D ground vision data and a multiresolution visualization engine.

RatSLAM: Using Models of Rodent Hippocampus for Vision-based Robot Navigation and Beyond
Michael Milford (Queensland University of Technology, Australia)
bio Michael Milford is a Senior Lecturer and ARC DECRA Fellow in the Robotics, Vision and Sensor Networking Lab at Queensland University of Technology (QUT) in Brisbane, Australia. He received his PhD in Electrical Engineering from the University of Queensland. He was awarded an inaugural Australian Research Council Discovery Early Career Researcher Award in 2012 and a Microsoft Faculty Fellowship in 2013. He has worked at the Queensland Brain Institute, and his current research interests at QUT include biologically inspired robot-vision mapping and navigation, among others.
The brain circuitry involved in encoding space in rodents has been extensively studied over the past thirty years, with an ever increasing body of knowledge about the components and wiring involved in navigation tasks. The learning and recall of spatial features is known to take place in and around the hippocampus of the rodent, where there is clear evidence of cells that encode the rodent's position and heading. RatSLAM is a primarily vision-based robotic navigation system based on current models of the rodent hippocampus, which has achieved several significant outcomes in vision-based Simultaneous Localization And Mapping (SLAM), including the mapping of an entire suburb using only a low-cost webcam and continuous navigation over a period of two weeks in a delivery robot experiment. RatSLAM has performed vision-based mapping on platforms including passenger cars, Pioneer and Guiabot robots, remote control cars, quad-rotors and autonomous tractors. The work has also led to recent experiments demonstrating that impressive feats of vision-based navigation can be achieved at any time of day or night, during any weather, and in any season using visual images as small as 2 pixels in size. I will discuss the insights from this research, as well as current and future areas of study, with the aim of stimulating discussion and collaboration.
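To make the low-resolution matching idea concrete, the following is a minimal Python sketch (our illustration under assumed details, not the speaker's implementation): place recognition by comparing heavily downsampled, normalized camera frames with a sum-of-absolute-differences score. All function names and parameters here are our own.

```python
# Toy sketch of whole-image, low-resolution place matching (not RatSLAM itself):
# each visited place is stored as a tiny normalized thumbnail, and a new view is
# localized by finding the stored thumbnail with the smallest pixel difference.
import numpy as np

def preprocess(frame, size=(8, 8)):
    """Downsample a grayscale frame by block averaging, then normalize it."""
    h, w = frame.shape
    bh, bw = h // size[0], w // size[1]
    small = (frame[: bh * size[0], : bw * size[1]]
             .reshape(size[0], bh, size[1], bw)
             .mean(axis=(1, 3)))
    return (small - small.mean()) / (small.std() + 1e-6)

def best_match(query, database):
    """Index of the stored place most similar to the query thumbnail (min SAD)."""
    diffs = [np.abs(query - ref).sum() for ref in database]
    return int(np.argmin(diffs))

# Usage: build a database of visited places, then localize a slightly noisy view.
rng = np.random.default_rng(0)
places = [preprocess(rng.random((240, 320))) for _ in range(10)]
noisy_view = places[3] + 0.05 * rng.standard_normal((8, 8))
print(best_match(noisy_view, places))  # expected: 3
```

Even at 8x8 pixels the match is unambiguous here, which hints at why such aggressive downsampling can still support place recognition.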

Autonomous systems for environmental monitoring: from the air and underwater
Mitch Bryson (University of Sydney, Australia)
bio Mitch Bryson is a postdoctoral research fellow and lecturer at the Australian Centre for Field Robotics (ACFR) at the University of Sydney, Australia. He received his PhD in robotics from the University of Sydney in 2008. His past experience includes the development of vision-based navigation and mapping systems for Unmanned Aerial Vehicles (UAVs) and he currently works on Autonomous Underwater Vehicles (AUVs) within the marine robotics research group at the ACFR. He has almost ten years experience in the development and fielding of robotic systems for environmental monitoring applications including ongoing projects with Australia's Department of Primary Industries and Integrated Marine Observing System. His research interests include robotic perception, navigation, mapping and low-cost autonomous systems in support of environmental science.
Terrestrial and marine ecosystems face a variety of pressures from human impacts such as urban development, agriculture and overfishing, and are expected to face increasing pressures due to predicted global climate change. Environments such as coral reefs have significant economic value worldwide; long-term monitoring of these habitats is necessary for understanding changes and the effect of habitat protection measures. Traditionally, monitoring programs have relied on routine map building performed using high-flying surveys with manned aircraft, satellite remote sensing or ship-borne sonar bathymetry. These technologies are often limited in their ability to complete the picture at fine scales, owing to limits in both the spatial and temporal resolution of the data they gather. Autonomous robotic platforms such as Unmanned Aerial Vehicles (UAVs) and Autonomous Underwater Vehicles (AUVs) are technologies that have recently been used to complete the picture in these applications. These platforms provide a unique perspective on the environment, venturing to depths and into close proximity to terrain that is unsafe for manned platforms, providing rich levels of detail, with the potential for high endurance and persistent mapping. In this talk I will discuss past and ongoing work in the use of UAVs, AUVs and vision-based navigation and mapping technologies within several environmental monitoring projects in Australia. Owing to the smaller size of autonomous platforms and the often low-cost nature of environmental monitoring programs, vision is an ideal sensor that has been used extensively. Future research challenges in the use of these systems include real-time perception and classification in unstructured environments and the development of autonomous decision-making systems that are "aware" of the high-order scientific goals of a monitoring mission.

Autonomous driving: when 3D mapping meets semantics
Antonio M. López, Germán Ros, Sebastián Ramos, Jiaolong Xu (CVC, Spain)
Our principal goal is the creation of a framework that allows vehicles to perform autonomous navigation in urban environments through awareness of suitable semantic information. To this end we are currently developing a novel approach that consists of creating semantically rich 3D maps, which encode all the information required by the navigation tasks. Such a map contains critical information such as the segmentation of voxels into different urban classes (road, building, sidewalk, fence, etc.), traffic intersections, traffic signs, quality of the terrain, and much more. Since semantics are hard to compute, the creation of these maps is carried out off-line; the information can then be accessed efficiently on-board when a vehicle needs it. Building this type of map poses challenges for several research lines within the computer vision field, specifically semantic segmentation, object detection, 3D mapping, localization and, overall, scene understanding. In this talk we explain our ongoing work on this challenge.
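As a rough illustration of what such a map could look like in code, here is a minimal Python sketch (our assumption of one possible layout, not the authors' system): a sparse voxel grid holding per-voxel urban class labels and a terrain-quality score, built off-line and queried in constant time on-board. All names here are hypothetical.

```python
# Toy sketch of a semantic 3D map as a sparse voxel grid: the off-line pipeline
# inserts labeled points; the on-board consumer looks up semantics by position.
from collections import namedtuple

Voxel = namedtuple("Voxel", ["label", "terrain_quality"])

class SemanticVoxelMap:
    def __init__(self, voxel_size=0.25):
        self.voxel_size = voxel_size
        self.grid = {}  # (i, j, k) integer voxel index -> Voxel

    def _index(self, x, y, z):
        s = self.voxel_size
        return (int(x // s), int(y // s), int(z // s))

    def insert(self, x, y, z, label, terrain_quality=1.0):
        """Off-line map building: store the semantic class of a 3D point."""
        self.grid[self._index(x, y, z)] = Voxel(label, terrain_quality)

    def query(self, x, y, z):
        """On-board access: constant-time lookup of the semantics at a point."""
        return self.grid.get(self._index(x, y, z))

# Off-line: fuse segmentation results into the map; on-board: query it.
m = SemanticVoxelMap()
m.insert(1.0, 2.0, 0.0, "road")
m.insert(1.0, 4.5, 0.0, "sidewalk", terrain_quality=0.7)
print(m.query(1.1, 2.1, 0.1))  # Voxel(label='road', terrain_quality=1.0)
```

The dictionary-backed grid keeps memory proportional to the observed scene rather than its bounding volume, which matters when maps cover whole urban areas.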
Past Editions
The previous editions of the workshop are:
Below you can find some photos of the previous events (for information on the accepted papers and photos of previous editions, please follow the links above):


[Photos: CVVT 2011 and CVVT 2012]




(c) 2013 Workshop on Computer Vision in Vehicle Technology: From Earth to Mars