So that the model cannot be easily misled when using the original LLaVA dataset.
Meanwhile, it looks like the images are missing:
```json
{
  "id": "vcr-52941",
  "image": "vcr1images/lsmdc_3034_IDES_OF_MARCH/3034_IDES_OF_MARCH_01.27.04.788-01.27.10.308@2.jpg",
  "meta_dir": "./dataset/vcr1images/lsmdc_3034_IDES_OF_MARCH/3034_IDES_OF_MARCH_01.27.04.788-01.27.10.308@2.json",
  "class_names": [ "person",
```
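For reference, the record suggests the per-image metadata JSON sits next to the image with the same basename, so `meta_dir` can be derived from `image` by swapping the extension. A minimal sketch (not the repo's actual loader; the `root` directory is an assumption based on the `./dataset` prefix above):

```python
import json
import os

# One record from the annotation file, as shown in the issue.
record = {
    "id": "vcr-52941",
    "image": "vcr1images/lsmdc_3034_IDES_OF_MARCH/"
             "3034_IDES_OF_MARCH_01.27.04.788-01.27.10.308@2.jpg",
}

root = "./dataset"  # assumed dataset root; adjust to your layout
# Derive the metadata path by replacing .jpg with .json next to the image.
meta_path = os.path.join(root, os.path.splitext(record["image"])[0] + ".json")
print(meta_path)

# Load the per-image metadata (class names, boxes, ...) if it is present.
if os.path.exists(meta_path):
    with open(meta_path) as f:
        meta = json.load(f)
```

This reproduces the `meta_dir` value shown in the record above.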
Hello,
Thanks for your interest in our work!
Hi, is the metadata needed to prepare the training images?
The metadata is included in the vcr1images directory, so there is no need to worry: it is already there.
I want to use the official LLaVA base. I just need to add a vipProcessor to process the images, right?
Correct. Visual prompt blending is all you need.
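For anyone landing here: the core idea of visual prompt blending is to alpha-composite the visual prompt overlay (e.g. a drawn box or arrow) onto the raw image before it goes to the vision encoder. A minimal sketch of that idea — not the repo's actual implementation; the function name, mask rule, and `alpha=0.5` are illustrative assumptions:

```python
import numpy as np

def blend_visual_prompt(image, prompt, alpha=0.5):
    """Alpha-blend a visual-prompt overlay onto the image:
    out = alpha * prompt + (1 - alpha) * image,
    applied only where the overlay has nonzero pixels."""
    image = image.astype(np.float32)
    prompt = prompt.astype(np.float32)
    # Treat any nonzero overlay pixel as part of the prompt (illustrative rule).
    mask = prompt.sum(axis=-1, keepdims=True) > 0
    blended = np.where(mask, alpha * prompt + (1 - alpha) * image, image)
    return blended.astype(np.uint8)

# Toy example: one red "prompt" pixel blended onto a gray 4x4 image.
img = np.full((4, 4, 3), 100, dtype=np.uint8)
overlay = np.zeros_like(img)
overlay[0, 0] = [255, 0, 0]
out = blend_visual_prompt(img, overlay, alpha=0.5)
# out[0, 0] is a 50/50 mix of red and gray; all other pixels are untouched.
```

The blended image is then fed to the unchanged LLaVA image pipeline, which is why no architectural change is needed beyond the processing step.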