Invited Speakers

Judy Hoffman
University of California, Berkeley.

Title: Adapting Deep Networks Across Domains, Modalities, and Tasks

Abstract: Most deep visual recognition systems learn concepts directly from a large collection of manually annotated images/videos. However, this paradigm is susceptible to biases in the labeled data and often fails to generalize to new scenarios. Rather than require human supervision for each new task or scenario, we propose adapting deep models for use in a new scenario with limited or no new annotations. Our method directly optimizes the network parameters to improve transferability, either by learning a common representation that minimizes the difference between scenarios or by using mid-level supervision for stronger guidance of target training against a source model. We demonstrate our approach on applications that transfer across domains, across visual recognition tasks, and across visual modalities.
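
One common way to realize the first route (a shared representation that minimizes the difference between scenarios) is to add a domain-discrepancy penalty, such as Maximum Mean Discrepancy (MMD), to the training loss. The PyTorch sketch below is only an illustration under that assumption, not the speaker's exact method; the kernel bandwidth, loss weight, and all names are placeholders.

import torch

def gaussian_mmd(source_feats, target_feats, bandwidth=1.0):
    """Squared MMD between two feature batches under a Gaussian kernel."""
    def kernel(a, b):
        # Pairwise squared Euclidean distances turned into kernel values.
        sq_dists = torch.cdist(a, b) ** 2
        return torch.exp(-sq_dists / (2 * bandwidth ** 2))
    k_ss = kernel(source_feats, source_feats).mean()
    k_tt = kernel(target_feats, target_feats).mean()
    k_st = kernel(source_feats, target_feats).mean()
    return k_ss + k_tt - 2 * k_st

# Hypothetical training step: supervised loss on labeled source data plus
# the discrepancy penalty on features shared by both domains.
# loss = cross_entropy(classifier(net(x_src)), y_src) \
#        + mmd_weight * gaussian_mmd(net(x_src), net(x_tgt))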

Victor Lempitsky
Skolkovo Institute of Science and Technology.

Title: Deep Model Adaptation using Domain-Adversarial Training

Abstract: Top-performing deep architectures are trained on massive amounts of labeled data. In the absence of labeled data for a certain task, domain adaptation often provides an attractive option, given that labeled data of a similar nature but from a different domain (e.g. synthetic images) are available. Here, we propose a new approach to domain adaptation in deep architectures that can utilize large amounts of labeled data from the source domain and large amounts of unlabeled data from the target domain (no labeled target-domain data is necessary for training).

As the training progresses, the approach promotes the emergence of "deep" features that are (i) discriminative for the main learning task on the source domain and (ii) invariant with respect to the shift between the domains. We show that this adaptation behaviour can be achieved in almost any feed-forward model by augmenting it with a few standard layers and a simple new gradient reversal layer. The resulting augmented architecture can be trained using standard backpropagation.

Overall, the approach can be implemented with little effort using any of the deep-learning packages. The method performed very well in a series of image classification experiments, achieving an adaptation effect in the presence of large domain shifts and outperforming the previous state of the art on the Office datasets.
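
As noted above, the gradient reversal layer is easy to add in any package with custom autograd support. A minimal PyTorch sketch is given below; the surrounding network names are illustrative, not part of the original description.

import torch

class GradReverse(torch.autograd.Function):
    # Identity in the forward pass; multiplies the gradient by -lambd in the
    # backward pass, so the feature extractor learns to confuse the attached
    # domain classifier.
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

def grad_reverse(x, lambd=1.0):
    return GradReverse.apply(x, lambd)

# Sketch of use: features feed the label predictor directly and the domain
# classifier through the reversal layer (names below are hypothetical).
# feats = feature_extractor(images)
# label_logits = label_predictor(feats)
# domain_logits = domain_classifier(grad_reverse(feats, lambd))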

Tinne Tuytelaars
Katholieke Universiteit Leuven.

Title: Domain Adaptation in a Deep Learning Context

Abstract: While new DA methods are starting to appear that simultaneously learn a robust representation for a given task and how to cope with domain changes, we will study a simpler setup, where an off-the-shelf pretrained model is used in combination with traditional domain adaptation methods. Is there still a need for domain adaptation when using learnt features? And are DA methods developed for handcrafted features still effective?

Additionally, we will show how this setup can be used successfully in the context of object detection, adapting the R-CNN detector to a new dataset without the need for any form of supervision.
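
To make the first question above concrete, one simple experiment is to extract off-the-shelf CNN features and run a classical unsupervised DA method, such as subspace alignment, on top of them. The sketch below illustrates that setup only; it is not necessarily the exact pipeline used in the talk, and the feature/label variable names are placeholders.

from sklearn.decomposition import PCA
from sklearn.svm import LinearSVC

def subspace_alignment_predict(src_feats, src_labels, tgt_feats, dim=64):
    # Orthonormal PCA bases (n_features x dim) for each domain.
    Xs = PCA(n_components=dim).fit(src_feats).components_.T
    Xt = PCA(n_components=dim).fit(tgt_feats).components_.T
    # Align the source basis to the target basis: Xa = Xs Xs^T Xt.
    Xa = Xs @ (Xs.T @ Xt)
    # Train on projected source features, predict on projected target features.
    clf = LinearSVC().fit(src_feats @ Xa, src_labels)
    return clf.predict(tgt_feats @ Xt)

# src_feats / tgt_feats would be pre-extracted deep features (e.g. fc7
# activations of a pretrained network); labels exist only for the source.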

Meina Kan
VIPL & ICT, Chinese Academy of Sciences.

Title: Domain Samples Shifting with Auto Encoder Network for Unsupervised Domain Adaptation

Abstract: In this talk, I will present some of our recent works that endeavor to deal with the unsupervised domain adaptation problem by shifting instances between domains. Specifically, these works attempt to shift the source-domain instances to the target domain, from which a favorable classification model can be learnt for the task on the target domain. I will introduce one principle based on sparse representation that is used to enforce the closeness of the two domains, and two models that are used to achieve the shifting between domains, i.e., a bi-linear model and a bi-shifting auto-encoder network.
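
As a rough, hypothetical illustration of the shifting idea only (not the bi-linear or bi-shifting auto-encoder models themselves), the sketch below shifts source features with a small auto-encoder and uses a sparse-representation term to keep the shifted samples close to a bank of target samples; all names and weights are invented for the example.

import torch
import torch.nn as nn

class ShiftingAutoEncoder(nn.Module):
    # Encodes a source feature and decodes it into a "target-like" feature.
    def __init__(self, dim, hidden=128):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU())
        self.decoder = nn.Linear(hidden, dim)

    def forward(self, x):
        return self.decoder(self.encoder(x))

def sparse_closeness(shifted_src, tgt_bank, codes, l1_weight=0.1):
    # Encourage each shifted source sample to be reconstructed from a sparse
    # combination of target samples: ||s - codes @ T||^2 + l1 * ||codes||_1.
    recon = codes @ tgt_bank                      # (n_src, dim)
    residual = ((shifted_src - recon) ** 2).sum(dim=1).mean()
    sparsity = codes.abs().sum(dim=1).mean()
    return residual + l1_weight * sparsity

# Training sketch: the sparse codes would be optimized jointly with the
# network, e.g. codes = nn.Parameter(torch.zeros(n_src, n_tgt)).
# loss = sparse_closeness(model(src_feats), tgt_feats, codes)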