Open MIC (Open Museum Identification Challenge) contains photos of exhibits captured in 10 distinct exhibition spaces of several museums, showcasing paintings, timepieces, sculptures, glassware, relics, science exhibits, natural history pieces, ceramics, pottery, tools and indigenous crafts. The goal of Open MIC is to stimulate research in domain adaptation, egocentric recognition and few-shot learning by providing a testbed complementary to the well-known Office 31 dataset, on which methods already reach ~90% accuracy.

INTRODUCTION

EXHIBITIONS

Open MIC contains 10 distinct source-target subsets of images from 10 different kinds of museum exhibition spaces. They include:

BASELINES

To demonstrate the intrinsic difficulty of the Open MIC dataset, we provide the community with baseline accuracies obtained from: Note that this is an identification dataset, i.e. each class corresponds to a unique exhibit. Thus, this domain adaptation dataset can be posed either as a retrieval problem (if you prefer to view it that way) or as a classification problem (each specific exhibit has exactly one label).
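Since each exhibit carries exactly one label, the classification view of the task reduces to standard top-1 identification accuracy. The sketch below illustrates this metric; it is not the official evaluation code, and the label values and prediction format are assumptions for illustration only.

```python
# Illustrative sketch (not official Open MIC evaluation code): with one
# unique label per exhibit, identification accuracy is plain top-1
# accuracy over the predicted exhibit IDs.

def top1_accuracy(predictions, labels):
    """Fraction of images whose predicted exhibit ID matches the ground truth."""
    assert len(predictions) == len(labels) and labels
    correct = sum(p == y for p, y in zip(predictions, labels))
    return correct / len(labels)

# Hypothetical example: 3 of 4 target-domain images identified correctly.
print(top1_accuracy([5, 12, 7, 3], [5, 12, 7, 9]))  # 0.75
```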

DOMAIN ADAPTATION

We include the following evaluation protocols for Domain Adaptation (see the ECCV'18 paper cited below for more details). Note that if you use our dataset, you do not have to run your algorithm on all of these protocols for all combinations; simply choose the one protocol that suits you:

FEW-SHOT LEARNING

We include the following evaluation protocols for One-shot Learning. Note that if you use our dataset, you do not have to run your algorithm on all of these protocols for all combinations; simply choose the one protocol that suits you:

PUBLICATIONS

For more details on the data, protocols, evaluations and algorithms, see the following publication. Please kindly cite the following paper(s) when using our dataset:

REQUEST FORM

Our dataset license largely follows fair-use regulations, making the dataset available for academic, non-commercial use only. The license grants royalty-free, non-exclusive, non-transferable, attribution, 'no derivatives' rights. Please read the license carefully and fill in the details requested below. We will verify the request and send you an e-mail with a password. Send an e-mail to Open MIC providing the following details (copy, paste, fill in and send to us):

DATASET/DOWNLOAD (SMALL SIZE)

Once you have obtained a valid password, you will be able to instantly download our files (enter your e-mail as the login, followed by the password from our e-mail).

Firstly, go through the following 'readme' file for details of what is contained in which folders of our archives:
Below we provide versions of our dataset at resolutions of 256, 512 and 1024 px. You can choose the quality needed for your experiments, but we expect 256 or 512 px to be sufficient if you work with CNNs. The following archives contain full images and crops. We used the crops in our ECCV'18 paper as well as for one-shot learning:

DATASET/DOWNLOAD (LARGE CROPS)

Below are high-resolution crops (3 per image) of approximately 2048x2048 px. Note that each exhibition archive is large, e.g. 1-3 GB per file, and to evaluate your algorithm on any of the protocols listed above, you will need to download all 10 of the following files:

DATASET/DOWNLOAD (FULL IMAGES)

Below are full-resolution whole images (over 2048 px). Note that each exhibition archive is large, e.g. 1-3 GB per file:

FEW-SHOT LEARNING

Below are files prepared for use with 'Power Normalizing Second-order Similarity Network for Few-shot Learning':

Below are files prepared for use with 'Adaptive Subspaces for Few-Shot Learning':

ADDITIONAL LABELS

Below are labels with multiple annotations per image in the target data (used in some of our ECCV'18 experiments), as well as lists of source and target background images (labelled as -1). Moreover, we also provide annotations for the geometric and photometric distortions observed in target images (in a separate file below).
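When using these lists, background images can be separated from exhibit images by their -1 label. The sketch below assumes a plain-text "filename label" format for the label lists; the actual file layout is described in the 'readme' file above, so treat this format as an assumption.

```python
# Hypothetical sketch: splitting exhibit images from background images,
# assuming each line of a label list reads "filename label", with
# background images labelled -1 (the exact file format is an assumption;
# consult the dataset's readme for the real layout).

def split_background(lines):
    """Return (exhibit_files, background_files) from "filename label" lines."""
    exhibits, background = [], []
    for line in lines:
        name, label = line.split()
        (background if int(label) == -1 else exhibits).append(name)
    return exhibits, background

# Hypothetical example with made-up file names and labels.
ex, bg = split_background(["img001.jpg 17", "img002.jpg -1", "img003.jpg 4"])
print(ex)  # ['img001.jpg', 'img003.jpg']
print(bg)  # ['img002.jpg']
```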