News
2013-12-20: Photo of the best paper award uploaded.
2013-12-10: The workshop was successfully held. Thanks to all the participants and speakers! We will soon upload some photos of the event.
2013-11-03: Program uploaded.
2013-10-08: Confirmed the 4th and last invited talk, by the ADAS Group, CVC, Spain.
2013-10-04: Camera-ready papers must be uploaded through the ICCVW2013 author kit.
2013-09-12: Mitch Bryson is confirmed as an invited speaker at the workshop.
2013-09-03: The submission deadline has been EXTENDED to 2013-09-10, 23:59 PDT.
2013-08-31: The submission website is now open and ready to receive papers.
2013-08-28: Michael Milford is confirmed as an invited speaker at the workshop. We'll soon provide more information.

Welcome
Computer vision plays a key role in vehicle technology; examples include advanced driver assistance systems (ADAS), exploratory and service robotics, unmanned aerial vehicles and underwater robots. In addition to traditional applications such as lane departure warning, traffic object recognition, visual odometry and trajectory planning, new challenges are arising: learning and evaluation with reduced ground-truth/testing data, on-board calibration of multi-camera systems, SLAM in natural scenarios, etc.
The goal of the 4th CVVT:E2M workshop, to be hosted at ICCV 2013 in Sydney (Australia), is to bring together researchers in computer vision for vehicular technologies, in order to promote the development and spread of new ideas and results across the aforementioned fields. We invite the submission of original research contributions in computer vision addressing:
Best Paper Award
Among the papers accepted by the program committee, the CVVT2013 best paper award went to the paper by Lachlan Horne, Jose Alvarez and Nick Barnes.

Dates & Submission
Important Dates
Submission Procedure

Authors should take into account the following:
Committees
General Chairs
David Gerónimo / KTH Royal Institute of Technology, Sweden
Atsushi Imiya / IMIT, Chiba University, Japan

Program and Area Chairs
Antonio M. López / CVC and Universitat Autònoma de Barcelona, Spain
Theo Gevers / CVC, Spain and University of Amsterdam, The Netherlands
Urbano Nunes / University of Coimbra, Portugal
Dariu M. Gavrila / Daimler AG, Germany
Steven Beauchemin / University of Western Ontario, Canada

PRoViDE Chair
Tomas Pajdla / The Czech Technical University in Prague, Czech Republic

Program Committee
Hanno Ackermann / Leibniz Universität Hannover, Germany
José M. Álvarez / NICTA, Australia
Joao P. Barreto / Universidade de Coimbra, Portugal
Pascual Campoy / Universidad Politécnica de Madrid, Spain
Amitava Chatterjee / Jadavpur University, India
Arturo de la Escalera / Universidad Carlos III de Madrid, Spain
Armagan Elibol / Yildiz Technical University, Turkey
Paolo Grisleri / Università di Parma & VisLab, Italy
Aura Hernández / CVC and Universitat Autònoma de Barcelona, Spain
Yousun Kang / Tokyo Polytechnic University, Japan
Norbert Krüger / The Maersk Mc-Kinney Moller Institute, Denmark
Frédéric Lerasle / LAAS-CNRS and Université Paul Sabatier, France
Ron Li / The Ohio State University, USA
Krystian Mikolajczyk / University of Surrey, United Kingdom
Hiroshi Murase / Nagoya University, Japan
Lazaros Nalpantidis / Aalborg University Copenhagen, Denmark
Luciano Oliveira / Federal University of Bahia, Brazil
Gerhard Paar / Joanneum Research, Graz, Austria
Oscar Pizarro / University of Sydney, Australia
Daniel Ponsa / CVC and Universitat Autònoma de Barcelona, Spain
Raúl Rojas / Free University of Berlin, Germany
Bodo Rosenhahn / University of Hannover, Germany
Vítor Santos / Universidade de Aveiro, Portugal
Angel D. Sappa / Computer Vision Center, Spain
Jun Sato / Nagoya Institute of Technology, Japan
Hanumant Singh / Woods Hole Oceanographic Institution, USA
Akihiko Torii / Tokyo Institute of Technology, Japan
Rudolph Triebel / Technische Universität München, Germany
Raquel Urtasun / Toyota Technological Institute at Chicago, USA
Tobi Vaudrey / The University of Auckland, New Zealand

Program
Room: 202
Invited Talks
PRoViDE Planetary Robotics
Tomas Pajdla (The Czech Technical University in Prague, Czech Republic)
The workshop will host an invited talk on the outcomes so far of the FP7 European collaborative project "PRoViDE - Planetary Robotics Vision Data Exploitation". This project aims at assembling imaging data from vehicles and probes on planetary surfaces and making it available as a unique database, featuring fusion between orbital and 3D ground vision data and a multi-resolution visualization engine.

RatSLAM: Using Models of Rodent Hippocampus for Vision-based Robot Navigation and Beyond
Michael Milford (Queensland University of Technology, Australia)
The brain circuitry involved in encoding space in rodents has been extensively studied over the past thirty years, with an ever-increasing body of knowledge about the components and wiring involved in navigation tasks. The learning and recall of spatial features is known to take place in and around the hippocampus of the rodent, where there is clear evidence of cells that encode the rodent's position and heading. RatSLAM is a primarily vision-based robotic navigation system based on current models of the rodent hippocampus, which has achieved several significant outcomes in vision-based Simultaneous Localization And Mapping (SLAM), including mapping an entire suburb using only a low-cost webcam and navigating continuously over a period of two weeks in a delivery robot experiment. RatSLAM has performed vision-based mapping on platforms including passenger cars, Pioneer and Guiabot robots, remote control cars, quad-rotors and autonomous tractors. The work has also led to recent experiments demonstrating that impressive feats of vision-based navigation can be achieved at any time of day or night, during any weather, and in any season using visual images as small as 2 pixels in size. I will discuss the insights from this research, as well as current and future areas of study, with the aim of stimulating discussion and collaboration.

Autonomous systems for environmental monitoring: from the air and underwater
Mitch Bryson (University of Sydney, Australia)
Terrestrial and marine ecosystems face a variety of pressures from human impacts such as urban development, agriculture and overfishing, and are expected to face increasing pressures due to predicted global climate change. Environments such as coral reefs have significant economic value worldwide; long-term monitoring of these habitats is necessary for understanding changes and the effect of habitat protection measures. Traditionally, monitoring programs have relied on routine map building performed using high-flying surveys with manned aircraft, through satellite remote sensing, or via ship-borne sonar bathymetry. These technologies are often limited in their ability to complete the picture at fine scales, owing to limits in both the spatial and temporal resolution of the data they gather. Autonomous robotic platforms such as Unmanned Aerial Vehicles (UAVs) and Autonomous Underwater Vehicles (AUVs) have recently been used to complete the picture in these applications. These platforms provide a unique perspective on the environment, venturing to depths and into close proximity to terrains that are unsafe for manned platforms, providing rich levels of detail with the potential for high endurance and persistent mapping. In this talk I will discuss past and ongoing work in the use of UAVs, AUVs and vision-based navigation and mapping technologies within several environmental monitoring projects in Australia. Owing to the smaller size of autonomous platforms and the often low-cost nature of environmental monitoring programs, vision is an ideal sensor and has been used extensively. Future research challenges in the use of these systems include real-time perception and classification in unstructured environments and the development of autonomous decision-making systems that are "aware" of the higher-order scientific goals of a monitoring mission.

Autonomous driving: when 3D mapping meets semantics
Antonio M. López, Germán Ros, Sebastián Ramos, Jiaolong Xu (CVC, Spain)
Our principal goal is the creation of a framework that allows vehicles to perform autonomous navigation in urban environments through awareness of suitable semantic information. To this end we are currently developing a novel approach that consists of creating semantically rich 3D maps, which encode all the information required by the navigation tasks. These maps contain critical information such as the segmentation of voxels into different urban classes (road, building, sidewalk, fence, etc.), traffic intersections, traffic signs, terrain quality, and much more. Since "semantics" are hard to compute, the creation of these maps is carried out off-line; the information can then be accessed efficiently on-board when a vehicle needs it. Building this type of map poses challenges for several research lines within the computer vision field, specifically semantic segmentation, object detection, 3D mapping, localization and, overall, scene understanding. In this talk we explain our ongoing work on this challenge.

Past Editions
The previous editions of the workshop are: