Wish

As shoppers move online, it would be a dream come true to have products in photos classified automatically. But automatic product recognition is hard: photos of the same product can be taken under different lighting, from different angles, against different backgrounds, and with varying levels of occlusion, while different fine-grained categories may look very similar, for example royal blue vs. turquoise. Many of today’s general-purpose recognition systems simply cannot perceive such subtle differences between photos, yet these differences could be important for shopping decisions.

Tackling issues like this is why the Conference on Computer Vision and Pattern Recognition (CVPR) has put together the FGVC5 workshop, aimed at data scientists working on fine-grained visual categorization. As part of this workshop, CVPR is partnering with Google, Wish, and Malong Technologies to challenge the data science community to help push the state of the art in automatic image classification.

Kaggle

We used Kaggle to hold our competition: https://www.kaggle.com/c/imaterialist-challenge-fashion-2018/

Dataset Details

All images in the iFashion-Attribute dataset are provided by Wish, with 1,012,947, 9,897, and 39,706 images in the train, validation, and test sets respectively. It has 228 fine-grained, attribute-level fashion classes organized into 8 high-level fashion groups defined by fashion-industry professionals. Each image carries multiple labels. The number of classes per group ranges from 3 (the “gender” group) to 105 (the “category” group).

Files

The competition data can be downloaded from Google Drive.

  • train.json: training data with image URLs and labels

  • val.json: validation data with the same format as train.json

  • test.json: images for which participants need to generate predictions; only image URLs are provided

  • Training data: the train and validation sets share the format shown below (a minimal parsing sketch follows the format block):

    {
      "images" : [image],
      "annotations" : [annotation]
    }

    image {
      "image_id" : int,
      "url" : [string]
    }

    annotation {
      "image_id" : int,
      "label_id" : [int]
    }
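
A minimal parsing sketch of the format above (Python is an assumption, as are the file name and working directory; nothing here is prescribed by the competition):

    import json

    # Assumes train.json (val.json has the same layout) is in the working directory.
    with open("train.json") as f:
        data = json.load(f)

    # "url" is documented as a list of strings, so each value in urls is a list.
    urls = {img["image_id"]: img["url"] for img in data["images"]}
    labels = {ann["image_id"]: ann["label_id"] for ann in data["annotations"]}

    print(len(urls), "images,", len(labels), "annotated")
    print("example annotation:", next(iter(labels.items())))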

Note that for each image, we only provide URLs instead of the image content; users need to download the images themselves (a minimal download sketch follows). The image URLs are hosted by Wish, so they are expected to be stable.
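
Since only URLs are distributed, a download step is needed before training. The sketch below is one way to do it, assuming the requests library is available; the output directory, file naming, and timeout are arbitrary choices, not part of the competition.

    import json
    import os

    import requests  # assumption: any HTTP client would work here

    def download_images(json_path, out_dir="images", timeout=10):
        """Fetch every image listed in json_path into out_dir, named by image_id."""
        os.makedirs(out_dir, exist_ok=True)
        with open(json_path) as f:
            images = json.load(f)["images"]
        for img in images:
            # "url" is documented as a list of strings; take the first entry.
            url = img["url"][0] if isinstance(img["url"], list) else img["url"]
            path = os.path.join(out_dir, f"{img['image_id']}.jpg")
            if os.path.exists(path):
                continue  # already downloaded
            try:
                resp = requests.get(url, timeout=timeout)
                resp.raise_for_status()
                with open(path, "wb") as out:
                    out.write(resp.content)
            except requests.RequestException as exc:
                print(f"failed {img['image_id']}: {exc}")

    download_images("train.json")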

This year, we omit the names of the labels to avoid hand labeling the test images.

  • Testing data and submissions: the test set contains only images, as shown below (a hedged submission sketch follows the format block):

    {
      "images" : [image]
    }

    image {
      "image_id" : int,
      "url" : [string]
    }
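
The authoritative submission format is defined on the Kaggle competition page. As a rough sketch, assuming a two-column CSV with image_id and space-separated predicted label_id values (an assumption to verify against the Kaggle rules), generating a submission from test.json could look like:

    import csv
    import json

    def write_submission(test_json, predict_fn, out_path="submission.csv"):
        """Write one row per test image; predict_fn maps an image record to label ids."""
        with open(test_json) as f:
            images = json.load(f)["images"]
        with open(out_path, "w", newline="") as f:
            writer = csv.writer(f)
            writer.writerow(["image_id", "label_id"])  # assumed column names
            for img in images:
                predicted = predict_fn(img)  # replace with real model predictions
                writer.writerow([img["image_id"], " ".join(str(l) for l in predicted)])

    # Placeholder predictor for illustration only; label id 66 is arbitrary.
    write_submission("test.json", predict_fn=lambda img: [66])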

Citation

If you find the dataset useful in your research, please consider citing:

@article{guo2019imaterialist,
 title={The iMaterialist Fashion Attribute Dataset},
 author={Guo, Sheng and Huang, Weilin and Zhang, Xiao and Srikhanta, Prasanna and Cui, Yin and Li, Yuan and Scott, Matthew R. and Adam, Hartwig and Belongie, Serge},
 journal={arXiv preprint arXiv:1906.05750},
 year={2019}
}

Contact

Sheng Guo, Malong LLC (sheng@malongtech.com)

Weilin Huang, Malong LLC (whuang@malongtech.com)

Xiao Zhang, Google Research (andypassion@google.com)

Yin Cui, Cornell University (ycui@cs.cornell.edu)

Matt Scott, Malong LLC (mscott@malongtech.com)
