TASK-CV 2016

This workshop brings together computer vision and multimedia researchers interested in domain adaptation and knowledge transfer techniques, topics that are receiving increasing attention in both communities.
During the first decade of the twenty-first century, progress in machine learning has had an enormous impact on computer vision. The ability to learn models from data has become a fundamental paradigm in image classification, object detection, semantic segmentation, and tracking.
A key ingredient of this success has been the availability of annotated visual data, both for training and testing, together with well-established protocols for evaluating results.
However, annotating visual information is usually a tedious, error-prone human activity. This limits our ability to address new tasks and/or operate in new domains. To scale to such situations, it is worth finding mechanisms to reuse the available annotations or the models learned from them.
This also challenges traditional machine learning theory, which usually assumes that sufficient labeled data are available for each task and that the training data distribution matches the test distribution.
Therefore, transferring and adapting source knowledge (in the form of annotated data or learned models) to perform new tasks and/or operate in new domains has recently emerged as a key challenge in developing computer vision methods that are reliable across domains and tasks.
Transfer learning has also gained interest in the multimedia community for many applications, such as video concept detection, image/video retrieval, and socialized video recommendation.
Previous editions of the workshop can be found at www.cvc.uab.es/adas/task-cv2014/ and http://adas.cvc.uab.es/task-cv2015/.


TASK-CV aims to bring together research in transfer learning and domain adaptation for computer vision as a workshop hosted at ECCV 2016. We invite the submission of research contributions such as:

  • TL/DA learning methods for challenging paradigms such as unsupervised, incremental, or online learning.
  • TL/DA focusing on specific visual features, models or learning algorithms.
  • TL/DA jointly applied with other learning paradigms such as reinforcement learning.
  • TL/DA in the era of deep neural networks (e.g., CNNs), adaptation effects of fine-tuning, regularization techniques, transfer of architectures and weights, etc.
  • TL/DA focusing on specific computer vision tasks (e.g., image classification, object detection, semantic segmentation, recognition, retrieval, tracking, etc.) and applications (biomedical, robotics, multimedia, autonomous driving, etc.).
  • Comparative studies of different TL/DA methods.
  • Working frameworks with appropriate CV-oriented datasets and evaluation protocols to assess TL/DA methods.
  • Transferring knowledge across modalities (e.g., learning from 3D data for recognizing 2D data, and heterogeneous transfer learning).
  • Transferring part representations between categories.
  • Transferring tasks to new domains.
  • Solving domain shift due to sensor differences (e.g., low-vs-high resolution, power spectrum sensitivity) and compression schemes.
  • Datasets and protocols for evaluating TL/DA methods.

This is not a closed list; thus, we welcome other interesting and relevant research for TASK-CV.

Best Paper Awards

Best Paper Award (sponsored by Amazon):
Gabriela Csurka, Boris Chidlovskii, Stephane Clinchant, Sofia Michel, Unsupervised Domain Adaptation with Regularized Domain Instance Denoising.

Best Paper Award (sponsored by Google):
Penelope Tsatsoulis, Bryan Plummer, David Forsyth, Visual Analogies: A Framework for Defining Aspect Categorization.

Honorable Mention Papers:
Baochen Sun and Kate Saenko, Deep CORAL: Correlation Alignment for Deep Domain Adaptation.
Yuguang Yan, Qingyao Wu, Mingkui Tan, Huaqing Min, Online Heterogeneous Transfer Learning by Weighted Offline and Online Classifiers.


This workshop is sponsored by Google and Amazon. It is also supported by the Spanish project TRA2014-57088-C2-1-R and the DGT project SPIP2014-01352, with the support of the Secretary for Universities and Research of the Ministry of Economy and Knowledge of the Generalitat of Catalonia (2014-SGR-1506) and the TECNIOspring programme under the FP7 of the EU and ACCIÓ.