
Test the model #8

Open
souravsuresh opened this issue Jan 6, 2018 · 5 comments

@souravsuresh

Can you specify exactly how to test the model, i.e. given an image and a question, the model is expected to return answers with confidence scores.

@varunnrao

Yes, please let us know how to test the model, since your repository does not have any instructions on how to test it.
The instructions stop after you describe how to train and how the log file is generated.

@varunnrao

So, we'd like to know how to supply our own image and a series of questions to obtain answers with confidence scores after the model has been trained.

@BA3000

BA3000 commented Apr 27, 2019

I think we can test it this way. First, modify preprocess-images.py so that it can preprocess any given image, then use a method similar to the one defined in preprocess-vocab.py to preprocess the question. Finally, load the model, switch it to evaluation mode, and feed it the image features and the question to get the output. The result from the model should be an index, which we can "translate" into a specific answer using vocab.json (its path is set in config.py); it works like a dictionary.
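
Roughly something like the sketch below. Note this is only an outline: load_trained_net, extract_image_features and encode_question are placeholders for code you would adapt from preprocess-images.py / preprocess-vocab.py, and the config.vocabulary_path name, the {'question': ..., 'answer': ...} layout of vocab.json, and the net(v, q, q_len) forward signature are assumptions about this repo, so adjust to your local code.

```python
# Rough sketch only -- load_trained_net, extract_image_features and
# encode_question are placeholders for code adapted from the repo's
# preprocess-images.py / preprocess-vocab.py.
import json
import torch

import config  # the repo's config.py, assumed to point at vocab.json

# 1. Load the answer vocabulary and invert it: index -> answer string.
with open(config.vocabulary_path) as fd:          # assumed attribute name
    vocab = json.load(fd)
index_to_answer = {idx: ans for ans, idx in vocab['answer'].items()}

# 2. Load the trained model and switch it to evaluation mode.
net = load_trained_net('logs/my_model.pth')       # placeholder loader
net.eval()

# 3. Preprocess one image and one question the same way training did
#    (ResNet features for the image, token indices for the question).
image_features = extract_image_features('my_image.jpg')             # placeholder
question, q_len = encode_question('what color is the car?',         # placeholder
                                  vocab['question'])

# 4. Forward pass without gradients; softmax gives per-answer confidence.
#    Assumed forward signature: (image features, question tokens, length).
with torch.no_grad():
    logits = net(image_features.unsqueeze(0),
                 question.unsqueeze(0),
                 torch.tensor([q_len]))
    probs = torch.softmax(logits, dim=1)
    confidence, answer_idx = probs.max(dim=1)

print(index_to_answer[answer_idx.item()], confidence.item())
```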

@puzex

puzex commented May 7, 2021


I have had the same question for a long time. Have you solved this problem?

@BA3000

BA3000 commented May 11, 2021

Yes, but I did not keep the code, so I cannot post it. It is actually quite easy to test the model: all you need is the trained model. Load it and switch it to evaluation mode, then load the test dataset and feed that data to the model to predict answers. Save the predictions as a JSON file and upload them to the VQA evaluation server; the server's output will show the resulting accuracy.
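
In outline it looks something like the sketch below. The load_trained_net helper and the test DataLoader (including its (v, q, q_id, q_len) batch layout) are placeholders for whatever your copy of the repo's train.py / data.py provides, not the repo's actual API; the output format is the question_id/answer list the VQA evaluation server expects.

```python
# Sketch only: load_trained_net and test_loader are placeholders for the
# repo's own loading code; the batch layout (v, q, q_id, q_len) is an
# assumption and must match your DataLoader.
import json
import torch

import config

with open(config.vocabulary_path) as fd:          # assumed attribute name
    vocab = json.load(fd)
index_to_answer = {idx: ans for ans, idx in vocab['answer'].items()}

net = load_trained_net('logs/my_model.pth')       # placeholder loader
net.eval()

results = []
with torch.no_grad():
    for v, q, q_id, q_len in test_loader:         # placeholder DataLoader
        logits = net(v, q, q_len)
        answer_idx = logits.max(dim=1).indices
        for qid, idx in zip(q_id, answer_idx):
            results.append({
                'question_id': int(qid),
                'answer': index_to_answer[idx.item()],
            })

# The VQA evaluation server expects a JSON list of
# {"question_id": ..., "answer": ...} entries.
with open('vqa_test_results.json', 'w') as fd:
    json.dump(results, fd)
```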
