
Error while Quantizing OWLv2 model #178

Open
n9s8a opened this issue Apr 29, 2024 · 0 comments

Comments

@n9s8a

n9s8a commented Apr 29, 2024

Hi Team,

I hope this message finds you well.

I was trying to quantize OWLv2 using the same method, with the command below:

! python -m awq.entry --model_path /path/to/owlv2/ --w_bit 4 --q_group_size 128 --run_awq --dump_awq awq_cache/owlv2.pt

I got the following error:

ValueError: Unrecognized configuration class <class 'transformers.models.owlv2.configuration_owlv2.Owlv2Config'> for this kind of AutoModel: AutoModelForCausalLM.Model type should be one of BartConfig, BertConfig, BertGenerationConfig, BigBirdConfig, BigBirdPegasusConfig, BioGptConfig, BlenderbotConfig, BlenderbotSmallConfig, BloomConfig, CamembertConfig, LlamaConfig, CodeGenConfig, CpmAntConfig, CTRLConfig, Data2VecTextConfig, ElectraConfig, ErnieConfig, FalconConfig, FuyuConfig, GitConfig, GPT2Config, GPT2Config, GPTBigCodeConfig, GPTNeoConfig, GPTNeoXConfig, GPTNeoXJapaneseConfig, GPTJConfig, LlamaConfig, MarianConfig, MBartConfig, MegaConfig, MegatronBertConfig, MistralConfig, MixtralConfig, MptConfig, MusicgenConfig, MvpConfig, OpenLlamaConfig, OpenAIGPTConfig, OPTConfig, PegasusConfig, PersimmonConfig, PhiConfig, PLBartConfig, ProphetNetConfig, QDQBertConfig, ReformerConfig, RemBertConfig, RobertaConfig, RobertaPreLayerNormConfig, RoCBertConfig, RoFormerConfig, RwkvConfig, Speech2Text2Config, TransfoXLConfig, TrOCRConfig, WhisperConfig, XGLMConfig, XLMConfig, XLMProphetNetConfig, XLMRobertaConfig, XLMRobertaXLConfig, XLNetConfig, XmodConfig.
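If I read the traceback correctly, `awq.entry` builds the model through `AutoModelForCausalLM`, and `Owlv2Config` is not registered with that auto class, so the loader fails before any quantization happens. A minimal sketch of the mismatch (using the public `google/owlv2-base-patch16-ensemble` checkpoint as a stand-in for my local path):

```python
# Sketch (not part of llm-awq): Owlv2Config maps to a detection head,
# not a causal-LM head, which is why the AutoModelForCausalLM loader rejects it.
from transformers import AutoModelForZeroShotObjectDetection, Owlv2ForObjectDetection

model_path = "google/owlv2-base-patch16-ensemble"  # stand-in for /path/to/owlv2/

# Either of these loads fine, because Owlv2Config is registered with
# the zero-shot object detection auto class:
model = AutoModelForZeroShotObjectDetection.from_pretrained(model_path)
# model = Owlv2ForObjectDetection.from_pretrained(model_path)

# This is (roughly) what the quantization entry point attempts, and it
# raises the ValueError above:
# from transformers import AutoModelForCausalLM
# model = AutoModelForCausalLM.from_pretrained(model_path)  # ValueError
```

So it seems the model loading path, rather than the AWQ algorithm itself, is what rejects OWLv2.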

Can this method be used to compress ViT models? If yes, how can we do that, and what changes would need to be made in the existing code?

Thank you for considering this request. I look forward to any updates or information you can provide on this matter.
