This workshop aims to promote discussion among researchers investigating learning from scarce data. Domain adaptation, knowledge transfer, and zero-, one- and few-shot learning are typical examples of inference from scarce data. Rapid progress has been made in domain adaptation and few-shot learning thanks to Convolutional Neural Networks, and well-established benchmarks such as the famous Office dataset have been nearly saturated, with algorithms reaching ~90% accuracy. This workshop therefore aims to go beyond conventional datasets and conventional approaches, posing a new challenge that shakes up the status quo.
TOPICS
We encourage discussions on recent advances, ongoing developments, and novel applications of domain adaptation and zero-, one- and few-shot learning. We are soliciting invited talks that address a wide range of theoretical and practical issues including, but not limited to:
- Supervised and unsupervised domain adaptation
- Statistical principles of DA: from kernel methods, covariance and tensor representations to information geometry-driven approaches and covariate shift
- Zero-, one- and few-shot learning approaches
- Generative Adversarial Networks for knowledge transfer
- Homogeneous and heterogeneous DA
- Metric learning approaches in DA
- Co-training
- Online learning
- Dataset biases
- Deep learning for learning from scarce data
- Multimodal or hyper-spectral data etc.
- Knowledge Transfer in Museum and Arts data
- Applications of domain and few-shot learning for:
- image/video recognition
- object recognition
- scene understanding
- industrial and medical applications
- Other related topics not listed above
SCHEDULE
Below is the program of the workshop on the 2nd of December, 2018. Please check the Detailed Program section below for the abstracts and biographies of our invited speakers (or click on the links in the table).
Afternoon Session
Time | Invited Speaker | Title |
---|---|---|
13:30 | Dr. P. Koniusz, Dr. M. Harandi | Welcome |
13:35 | Assoc. Prof. Krystian Mikolajczyk | Domain Transfer with Semantic Grouping and Robust Pseudo Labelling |
14:05 | Assoc. Prof. Lei Wang | Unsupervised Feature Adaptation for Image Retrieval via Diffusion Process /slides/ |
14:50 | Dr. Mathieu Salzmann | Unshared Weights for Deep Domain Adaptation |
15:35 | Coffee break | Venue |
16:00 | Dr. Gabriela Csurka | New Trends in Visual Domain Adaptation /slides/Domain Adaptation in Computer Vision Applications (Book)/ |
16:45 | Dr. P. Koniusz, Dr. M. Harandi | Few words about the Open MIC dataset and closing remarks /slides/ECCV'18 talk (YouTube)/ECCV'18 and WACV'19 papers/ |
INFORMATION
Kindly note that registration via the ACCV'18 webpage is mandatory for everyone participating in the workshop (at least a registration for the pre-conference workshops). The workshop is scheduled to take place on the 2nd of December, 2018, and will be hosted at Level 2, River View, Room 4. However, kindly check for any possible changes before the workshop date.
DETAILED PROGRAM
Below is the list of speakers who gave talks during the workshop:
- Assoc. Prof. Krystian Mikolajczyk (Imperial College London)
Title: Domain Transfer with Semantic Grouping and Robust Pseudo Labelling
Abstract: The talk will focus on a domain transfer approach for object detectors. I'll briefly discuss supervised object detection and then recent methods for domain transfer detection. Our approach relies on a training method that starts as a supervised problem in the source domain and moves to the unsupervised target domain via robust pseudo labelling, semantic grouping and virtual adversarial object training. Robust pseudo labelling explores detector precision and ranking margin to provide hard training examples with low-noise labels. Semantic grouping makes use of the scene context and the object co-occurrence within images to help align the source and target features. Virtual adversarial object training generates more examples during domain transfer learning. The approach is applicable to any existing object detector with a loss based on classification and localization accuracy. I'll present some results on PASCAL VOC extended with clipart, watercolor and comic styles as the target domains.
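For readers less familiar with pseudo labelling, the sketch below illustrates the general idea in its simplest form: a source-trained detector is run on unlabelled target images and only high-confidence detections are kept as training targets. This is a generic simplification; the detector choice, the 0.8 threshold and the helper function are illustrative assumptions and do not reproduce the precision/ranking-margin criteria of the robust pseudo labelling discussed in the talk.

```python
import torch
import torchvision

# Generic confidence-thresholded pseudo labelling (illustrative only).
# A detector trained on the source domain scores unlabelled target images;
# detections above a threshold become pseudo ground truth for fine-tuning.
detector = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
detector.eval()

@torch.no_grad()
def pseudo_label(target_images, score_threshold=0.8):
    """Keep only high-confidence detections as pseudo labels, per image."""
    outputs = detector(target_images)  # list of dicts: 'boxes', 'labels', 'scores'
    pseudo_targets = []
    for out in outputs:
        keep = out["scores"] >= score_threshold
        pseudo_targets.append({"boxes": out["boxes"][keep],
                               "labels": out["labels"][keep]})
    return pseudo_targets

# The pseudo-labelled target images would then be mixed with labelled source
# images to fine-tune the detector, possibly iterating as confidence improves.
```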
Biography: Krystian Mikolajczyk did his undergraduate study at the University of Science and Technology (AGH) in Krakow, Poland. He completed his PhD degree at the Institut National Polytechnique de Grenoble, France. He then worked as a research assistant at INRIA, the University of Oxford and the Technical University of Darmstadt (Germany), before joining the University of Surrey as a Lecturer, and Imperial College London as a Reader in 2015. His main area of expertise is image and video recognition, in particular problems related to matching, representation and learning. He has participated in a number of EU and UK projects in the area of image and video analysis. He publishes in computer vision, pattern recognition and machine learning forums. He has served in various roles at major international conferences, co-chairing the British Machine Vision Conference 2012 and 2017 and the IEEE International Conference on Advanced Video and Signal-Based Surveillance 2013. In 2014 he received the Longuet-Higgins Prize awarded by the Technical Committee on Pattern Analysis and Machine Intelligence of the IEEE Computer Society.
- Assoc. Prof. Lei Wang (University of Wollongong)
Title: Unsupervised Feature Adaptation for Image Retrieval via Diffusion Process
Abstract: Deep convolutional features have now been widely applied to image retrieval and demonstrated excellent performance. Given an image database, the deep features of images are usually extracted by a model pre-trained on a large-scale benchmark dataset and used for retrieval. Nevertheless, the image database on which the retrieval is conducted could be from a domain different from that of the benchmark dataset. How to adapt these pre-trained deep features to a given image database becomes an issue. In particular, the unsupervised nature of image retrieval makes this issue challenging.
This talk will report our recent work on addressing the above issue by utilising the diffusion process. By considering the underlying distribution of the images in a database, the diffusion process can better evaluate image similarity and improve retrieval. We propose to treat the diffusion process as a “black box” and directly model it by deep neural networks, so as to obtain an image representation that assimilates the effect of the diffusion process and is therefore better adapted to the given image database. The proposed approach is fully unsupervised in the sense that it needs neither image labels nor external datasets. The adapted deep features work directly with Euclidean search and completely avoid the online diffusion process during retrieval. Via an experimental study, we will show its effectiveness and investigate its appealing characteristics, such as generalisation to new image insertion. The potential extensions of this work will also be discussed.
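A minimal sketch of the “black box” idea, under assumptions of my own (the adapter architecture, loss and variable names are illustrative, not the authors' implementation): a small network is trained so that plain inner products between adapted features reproduce diffusion-refined similarities computed offline, after which retrieval reduces to a simple Euclidean/cosine search.

```python
import torch
import torch.nn as nn

# feats: (N, D) pre-trained deep features of the database images (assumed given).
# diffusion_sim: (N, N) pairwise similarities refined by an offline diffusion
# process on the feature graph (assumed precomputed).
def adapt_features(feats, diffusion_sim, epochs=100, lr=1e-3):
    d = feats.shape[1]
    adapter = nn.Sequential(nn.Linear(d, d), nn.ReLU(), nn.Linear(d, d))
    opt = torch.optim.Adam(adapter.parameters(), lr=lr)
    for _ in range(epochs):
        z = nn.functional.normalize(adapter(feats), dim=1)
        pred_sim = z @ z.t()                              # plain inner products
        loss = ((pred_sim - diffusion_sim) ** 2).mean()   # mimic diffusion output
        opt.zero_grad()
        loss.backward()
        opt.step()
    return adapter

# At query time, the adapter maps query features into the adapted space and a
# nearest-neighbour (Euclidean/cosine) search suffices; no per-query diffusion.
```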
Biography: Lei Wang received his PhD degree from Nanyang Technological University, Singapore. He is now an Associate Professor at the School of Computing and Information Technology, University of Wollongong, Australia. His research interests include machine learning, pattern recognition, and computer vision. Lei Wang has published 140+ peer-reviewed papers, including in highly regarded journals and conferences such as IEEE TPAMI, IJCV, CVPR, ICCV and ECCV. He was awarded the Early Career Researcher Award by the Australian Academy of Science and the Australian Research Council. He served as the General Co-Chair of DICTA 2014 and on the Technical Program Committees of 20+ international conferences and workshops. Lei Wang is a Senior Member of the IEEE.
- Dr. Mathieu Salzmann (École polytechnique fédérale de Lausanne)
Title: Unshared Weights for Deep Domain Adaptation
Abstract: As for many computer vision tasks, deep learning has become the de facto standard for domain adaptation. In this context, most methods rely on two-stream architectures, one for the source data and one for the target data, and enforce the weights of these two streams to be shared. Since the data has undergone a domain shift, however, there is no clear intuition why source and target images should follow the same feature extraction process. In this talk, I will discuss strategies to go beyond weight sharing for deep domain adaptation. I will start with a simple technique based on regularizing the target weights with the source ones, and then describe a more advanced method where the target weights are inferred from the source ones.
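As a rough illustration of the first, regularisation-based strategy (a simplified sketch under my own assumptions, not the exact formulation of the talk), one can keep two copies of a feature extractor and penalise the discrepancy between corresponding source and target weights instead of tying them:

```python
import copy
import torch
import torchvision

# Two-stream setup: the target stream starts as a copy of the source stream and
# an L2 penalty keeps its weights close to, but not identical with, the source
# weights. The task and domain-alignment losses are assumed and only sketched.
source_net = torchvision.models.resnet18(weights="DEFAULT")
target_net = copy.deepcopy(source_net)

def weight_regulariser(src, tgt):
    """Sum of squared differences between corresponding parameters."""
    return sum(((ps - pt) ** 2).sum()
               for ps, pt in zip(src.parameters(), tgt.parameters()))

# total_loss = task_loss(source_net, source_batch)                   # supervised, source only
#            + alignment_loss(source_net, target_net, batches)       # e.g. MMD or adversarial
#            + lambda_w * weight_regulariser(source_net, target_net)
```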
Biography: Mathieu Salzmann is a Senior Researcher at EPFL-CVLab. Previously, he was a Senior Researcher and Research Leader in NICTA's computer vision research group. Prior to this, from Sept. 2010 to Jan. 2012, he was a Research Assistant Professor at TTI-Chicago, and, from Feb. 2009 to Aug. 2010, a postdoctoral fellow at ICSI and EECS at UC Berkeley under the supervision of Prof. Trevor Darrell. He obtained his PhD in Jan. 2009 from EPFL under the supervision of Prof. Pascal Fua. His research interests lie at the intersection of geometry and machine learning for computer vision.
- Dr. Gabriela Csurka (Naver Labs Europe)
Title: New Trends in Visual Domain Adaptation
Abstract: I will give an overview of visual domain adaptation methods. The first part will focus on the main trends among historical shallow methods. Then I will discuss the effect of the success of deep convolutional architectures on the field of domain adaptation, showing that domain adaptation can benefit from these architectures in various ways. Finally, I will talk about how advances in adversarial learning, and especially Generative Adversarial Networks, have led not only to a new set of deep domain adaptation models for image categorization, but have also helped extend domain adaptation to new tasks such as object detection, semantic segmentation and person re-identification.
Biography: Gabriela Csurka is a senior scientist at Naver Labs Europe. She obtained her Ph.D. degree (1996) in Computer Science, prepared under the direction of Olivier Faugeras, on projective representations of the three-dimensional environment from uncalibrated stereo views. Main inventor of the bag-of-visual-words (BOV) framework, which revolutionized the field of computer vision and became a de facto standard for image classification for almost 10 years, she has also worked on digital watermarking for copyright protection, image segmentation, cross-modal image retrieval and domain adaptation. She is the editor of the book "Domain Adaptation in Computer Vision Applications", which appeared in 2017 in the Springer series. Her current research interests are in computer vision, including image understanding, multi-view 3D reconstruction and visual localization.
- Dr. Piotr Koniusz (Data61/CSIRO, Australian National University)
Dr. Mehrtash Harandi (Monash University)
Title: Few words about Open MIC and closing remarks
Abstract: As performance on older datasets has reached ~90% accuracy, we introduce the Open Museum Identification Challenge (Open MIC) dataset for Domain Adaptation and Few-shot Learning to stimulate research in domain adaptation, egocentric recognition and few-shot learning. The data contains photos of exhibits captured in 10 distinct exhibition spaces of several museums, which showcase paintings, timepieces, sculptures, glassware, relics, science exhibits, natural history pieces, ceramics, pottery, tools and indigenous crafts (in total, 866 distinct exhibits to identify). Each exhibition poses distinct challenges, e.g., quality of lighting, motion blur, occlusions, clutter, viewpoint and scale variations, rotations, glares, transparency, non-planarity, clipping, etc. For the source domain, we captured the photos in a controlled fashion with Android phones (~7600 images). For the target domain, we employed wearable cameras (~7600 images) to ensure an in-the-wild capture process and a significant domain shift. The baseline algorithm achieves ~40-60% top-1 accuracy on the various problems. Moreover, due to the similarity between DA and few-shot learning, we introduce interesting protocols for one- and few-shot learning problems.
CITATION
If you wish to cite any topics raised during the workshop, refer to the specific papers of our speakers. Additionally, you are welcome to cite the workshop itself:

@misc{openmic_workshop_2018,
  title = {Museum Exhibit Identification Challenge (Open MIC) for Domain Adaptation and Few-Shot Learning},
  author = {P. Koniusz and Y. Tas and H. Zhang and S. Herath and C. Simon and M. Harandi and R. Zhang},
  howpublished = {ACCV Workshop, \url{http://users.cecs.anu.edu.au/~koniusz/openmic-accv18}},
  note = {Accessed: 12-12-2018},
  year = {2018},
}
ORGANISERS
- Dr. Piotr Koniusz (Data61/CSIRO and the Australian National University)
- Mr. Yusuf Tas (Data61/CSIRO, Australian National University)
- Mr. Hongguang Zhang (Australian National University)
- Mr. Samitha Herath (Data61/CSIRO, Australian National University)
- Mr. Christian Simon (Australian National University)
- Dr. Mehrtash Harandi (Monash University)
- Dr. Rui Zhang (Hubei University of Arts and Science)