Add CLIP model to enable test_clip.py #1500

Open

mgoin wants to merge 5 commits into main

Conversation

@mgoin (Member) commented Dec 27, 2023

Since we've made a quantized CLIP (https://huggingface.co/neuralmagic/CLIP-ViT-B-32-256x256-DataComp-s34B-b86K-quant-ds), let's use it to test our pipeline!

@dsikka (Contributor) left a comment

Will this model work just for classification or for captioning as well? From the OpenCLIP models available, only a small fraction worked for captioning.

Can we add a description of the available models on SparseZoo somewhere, either in a README or docstring? It should make clear how/if this differs from OpenCLIP.

@mgoin (Member, Author) commented Dec 27, 2023

@dsikka it is just made for image/text retrieval and zero-shot image classification; I left in the pytest.skip for the captioning test as a result.
This model was made from OpenCLIP using a special branch that @ohaijen worked on. It was built as a collaboration, so we're focused on getting results and pipelines working quickly; that's why we pushed it to HF, and everything needed to run the model is pretty much self-contained within that model card and its notebooks.
My opportunistic thinking here was that we could use the model we've hosted on HF for some active testing in DeepSparse. I don't think there are plans to push CLIP models to SparseZoo at the moment, but if you think it's necessary for testing we could try to get a model up there.
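As a concrete sketch of the pytest.skip mentioned above; the test name, whether it takes fixtures, and the skip message are illustrative assumptions, not the PR's exact code:

import pytest

def test_caption_clip():
    # Assumed sketch: this quantized checkpoint supports image/text retrieval
    # and zero-shot classification only, so the captioning test stays skipped.
    pytest.skip("CLIP captioning is not supported by this checkpoint")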

@dsikka (Contributor) left a comment

A few small comments, otherwise LGTM.

@mock_engine(rng_seed=0)
def test_visual_clip(engine, visual_input):
    from deepsparse import Pipeline
    from deepsparse.legacy import Pipeline

Shouldn't be needed?

@mock_engine(rng_seed=0)
def test_text_clip(engine, text_input):
    from deepsparse import Pipeline
    from deepsparse.legacy import Pipeline

Same comment as above.

def model_folder():
    from huggingface_hub import snapshot_download

    model_id = "neuralmagic/CLIP-ViT-B-32-256x256-DataComp-s34B-b86K-quant-ds"
@dsikka (Contributor) commented Jan 2, 2024

Could we add a quick comment/note indicating that this model is not from OpenCLIP and only used for zero-shot classification?
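One way that note could read on the fixture, as a sketch; the exact wording and the return statement are assumptions, not the PR's code:

def model_folder():
    # NOTE (assumed wording): this checkpoint is a Neural Magic quantized CLIP
    # hosted on the Hugging Face Hub, built from a custom OpenCLIP branch rather
    # than stock OpenCLIP, and is only used here for image/text retrieval and
    # zero-shot classification; captioning is not supported.
    from huggingface_hub import snapshot_download

    model_id = "neuralmagic/CLIP-ViT-B-32-256x256-DataComp-s34B-b86K-quant-ds"
    return snapshot_download(repo_id=model_id)  # assumed: fixture returns the local model folder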
