
MMPreTrain, Upgrade from Classification to Pre-Train #1474

Open
Ezra-Yu opened this issue Apr 10, 2023 · 6 comments
Comments

@Ezra-Yu
Collaborator

Ezra-Yu commented Apr 10, 2023

Dear community,

We are excited to announce the release of a new and upgraded deep learning pre-trained models library, MMPreTrain. We have merged MMClassification, our image classification algorithm library, with MMSelfSup, our self-supervised learning algorithm library, to launch the deep learning pre-training algorithm library MMPreTrain.

🤔 Compatibility with MMClassification

MMPreTrain is fully compatible with MMClassification's directory structure, supported algorithms, and usage. All code and projects based on the original mmcls can be migrated by simply changing the library name.

For example:

1. Importing components:

| MMClassification | MMPreTrain |
| --- | --- |
| `from mmcls.models import ResNet` | `from mmpretrain.models import ResNet` |
| `from mmcls.datasets import ImageNet` | `from mmpretrain.datasets import ImageNet` |
| ... | ... |

2. Launching a train/test experiment:

| | MMClassification | MMPreTrain |
| --- | --- | --- |
| Use as framework | `python tools/train.py configs/xxx_xx.py` | `python tools/train.py configs/xxx_xx.py` |
| Use MIM | `mim train mmcls xxxx_xx.py` | `mim train mmpretrain xxxx_xx.py` |
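Since the migration is a mechanical package rename, it can even be scripted. The `migrate_imports` helper below is purely hypothetical (it is not part of either library); it just illustrates that a whole-word find-and-replace of the package name is all that is required:

```python
import re

def migrate_imports(source: str) -> str:
    """Rewrite mmcls imports to their mmpretrain equivalents.

    Hypothetical helper for illustration only: the migration is a
    whole-word replacement of the package name, nothing more.
    """
    return re.sub(r"\bmmcls\b", "mmpretrain", source)

old = "from mmcls.models import ResNet\nfrom mmcls.datasets import ImageNet"
print(migrate_imports(old))
# from mmpretrain.models import ResNet
# from mmpretrain.datasets import ImageNet
```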

For more details about migrating from 0.x to MMPreTrain, you can refer to the migration doc.

👍 Major Upgrades

With the release of MMPreTrain, we have made several major upgrades to our library.

1. Integrate Self-supervised Algorithms

We have integrated self-supervised learning tasks, which enables users to easily obtain pre-trained models for various tasks. In our directory `mmpretrain/models`, a new folder `selfsup` supports 18 recent self-supervised learning algorithms.

| Contrastive learning | Masked image modeling |
| --- | --- |
| MoCo series | BEiT series |
| SimCLR | MAE |
| BYOL | SimMIM |
| SwAV | MaskFeat |
| DenseCL | CAE |
| SimSiam | MILAN |
| BarlowTwins | EVA |
| | MixMIM |

2. Provide convenient higher-level APIs

Secondly, we have provided more convenient higher-level APIs, making it easier for users to interact with our library.

1. `list_models`

`list_models` supports fuzzy matching; you can use `*` to match any characters.

```python
>>> from mmpretrain import list_models
>>> list_models("*clip-openai")
['vit-base-p16_clip-openai-in12k-pre_3rdparty_in1k',
 'vit-base-p16_clip-openai-in12k-pre_3rdparty_in1k-384px',
 'vit-base-p16_clip-openai-pre_3rdparty_in1k',
 'vit-base-p16_clip-openai-pre_3rdparty_in1k-384px',
 'vit-base-p32_clip-openai-in12k-pre_3rdparty_in1k-384px',
 'vit-base-p32_clip-openai-pre_3rdparty_in1k']
>>> list_models("*convnext-b*21k")
['convnext-base_3rdparty_in21k',
 'convnext-base_in21k-pre-3rdparty_in1k-384px',
 'convnext-base_in21k-pre_3rdparty_in1k']
```
2. `get_model`

`get_model` builds a model from a model name.

```python
>>> from mmpretrain import get_model
>>> init_model = get_model("convnext-base_in21k-pre_3rdparty_in1k")
>>> pretrained_model = get_model("convnext-base_in21k-pre_3rdparty_in1k", pretrained=True)
>>> # Do the forward pass
>>> import torch
>>> x = torch.rand((1, 3, 224, 224))
>>> y = pretrained_model(x)
>>> print(type(y), y.shape)
<class 'torch.Tensor'> torch.Size([1, 1000])
```
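The `[1, 1000]` output above is raw logits over the 1000 ImageNet classes; turning them into a prediction is just an argmax, optionally after a softmax if you want probabilities. A minimal pure-Python sketch of that step (independent of the library, with a short stand-in list instead of the real 1000-dim tensor):

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of floats."""
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Stand-in for one row of the [1, 1000] logits tensor above.
logits = [0.2, 3.1, -1.0, 0.7]
probs = softmax(logits)
pred = max(range(len(probs)), key=probs.__getitem__)
print(pred)  # index of the highest-scoring class -> 1
```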
3. `ImageClassificationInferencer`

To use the `ImageClassificationInferencer`:

```python
>>> from mmpretrain import ImageClassificationInferencer
>>> inferencer = ImageClassificationInferencer('resnet50_8xb32_in1k')
>>> results = inferencer('demo/demo.JPEG')
>>> print(results[0]['pred_class'])
sea snake
```

To run batched inference on multiple images on CUDA:

```python
>>> from mmpretrain import ImageClassificationInferencer
>>> inferencer = ImageClassificationInferencer('resnet50_8xb32_in1k', device='cuda')
>>> imgs = ['demo/demo.JPEG'] * 100
>>> results = inferencer(imgs, batch_size=16)
Inference ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 24.5 it/s
>>> print(results[99]['pred_class'])
sea snake
```
4. `FeatureExtractor`

Compared with `model.extract_feat`, `FeatureExtractor` extracts features directly from image files, instead of from a batch of tensors.

```python
>>> from mmpretrain import FeatureExtractor, get_model
>>> model = get_model('resnet50_8xb32_in1k', backbone=dict(out_indices=(0, 1, 2, 3)))
>>> extractor = FeatureExtractor(model)
>>> features = extractor('demo/bird.JPEG')[0]
>>> for feature in features:
...     print(feature.shape)
torch.Size([256])
torch.Size([512])
torch.Size([1024])
torch.Size([2048])
```

3. Based on the new training engine MMEngine

Building on MMEngine allows MMPreTrain to keep up with upstream updates to chips and training frameworks, and also makes it easier for downstream projects to call MMPreTrain's pre-trained models.

1. Support for PyTorch 2.0 to accelerate your training

We have fully supported PyTorch 2.0, ensuring that our library is compatible with the latest version of PyTorch. Add the following to your config; you can also refer to the MMEngine doc for help.

```python
compile = True
```

This is the speed-up effect:

| Model | Speed up |
| --- | --- |
| ResNet | 10.00% ↑ |
| ViT | 4.60% ↑ |
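As a sketch of how the `compile` switch sits in a config file: the fragment below is hypothetical (the file name and `_base_` path are placeholders in the usual mmpretrain config style), with `compile = True` being the only line the announcement actually requires.

```python
# Hypothetical config fragment; the _base_ path is a placeholder
# in the style of mmpretrain's inheritance-based configs.
_base_ = ['./resnet50_8xb32_in1k.py']

# Enable torch.compile through MMEngine's runner.
compile = True
```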
2. Powerful Visualizer

To visualize an image classification result:

```python
from mmpretrain.visualization import UniversalVisualizer
from mmpretrain.structures import DataSample
import mmcv

visualizer = UniversalVisualizer()
image = mmcv.imread('demo/bird.JPEG')[..., ::-1]  # The visualization methods accept RGB images.
data_sample = DataSample().set_gt_label(1).set_pred_label(2).set_pred_score([0., 0.8, 0.2])
visualizer.visualize_cls(image, data_sample, show=True)
```

[visualization output: visualize_cls]

For more details, you can refer to this PR.

↪ Feedbacks

We would like to invite the community to try it out and provide valuable feedback or suggestions. We are committed to improving our library and hope that you will join us on this journey.

The MMPreTrain team

@Ezra-Yu Ezra-Yu changed the title [FEEDBACK] MMPreTrain MMPreTrain, Upgrade from classification to pre-train Apr 10, 2023
@Ezra-Yu Ezra-Yu pinned this issue Apr 10, 2023
@yCobanoglu

How can I use mmclassification instead of mmpretrain? Maybe a new repo would have been a good idea.

@Ezra-Yu
Collaborator Author

Ezra-Yu commented Apr 11, 2023

> How can I use mmclassification instead of mmpretrain? Maybe a new repo would have been a good idea.

Just use the mmcls-1.x and mmcls-0.x branches.

@tonysy tonysy changed the title MMPreTrain, Upgrade from classification to pre-train MMPreTrain, Upgrade from Classification to Pre-Train Apr 11, 2023
@XinyueZ

XinyueZ commented Apr 13, 2023

[Screenshot from 2023-04-13 11-01-46]

`ImageClassificationInferencer` doesn't work anymore, and the docs don't match the API.

How do I pass the config, checkpoint, etc.? Please suggest an approach.

@Ezra-Yu
Collaborator Author

Ezra-Yu commented Apr 13, 2023

@XinyueZ In my environment, it works well:

[screenshot]

For your own config and checkpoint:

[screenshot]

@MR-ei

MR-ei commented May 4, 2023

I think the documentation of `ImageClassificationInferencer` is confusing, since it states that there is a `weights` argument (as in inferencers from other mmlab repos), but the `__init__` method has no such argument.


@Ezra-Yu
Collaborator Author

Ezra-Yu commented May 5, 2023

@MR-ei Yes, it is a typo. We will fix it.
