Mismatch between pretrained weights and imdb data? #1
Comments
Could you fix the problem somehow? I seem to run into the same problem with the pretrained weights. I was getting an encoding error in the preprocessing step; after switching to UTF-8 the preprocessing ran fine (roughly the change I made is sketched below). After that, I hit the same error you describe while loading the pretrained weights. It isn't specified anywhere in the code, but was the preprocessed data used to build the pretrained weights encoded as something other than UTF-8? Thanks!
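For reference, a minimal sketch of the kind of change I made: forcing UTF-8 when the raw review files are read. The function name and the way files are read here are illustrative; the actual preprocessing script in this repo may do it differently:

```python
import io


def read_reviews(paths):
    # Read raw IMDB review files, decoding them explicitly as UTF-8.
    texts = []
    for path in paths:
        with io.open(path, encoding="utf-8") as f:
            texts.append(f.read())
    return texts
```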
First, I ran `./download.sh` and `wget http://sato-motoki.com/research/vat/imdb_pretrained_lm_ijcai.model`, followed by the iVAT training command from README.md. I've attached the output. It looks like `vocab_inv` is larger than `max_vocab` was at the time the pretrained model was created. What is the best way to fix this?
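In case it helps, one way to see the mismatch concretely might be to list the arrays stored in the pretrained snapshot and compare the embedding matrix's first dimension with `len(vocab_inv)`. A rough sketch, assuming the `.model` file is a Chainer-style `.npz` archive that `np.load` can open; the actual parameter names inside the snapshot are whatever it really contains, not something I know:

```python
import numpy as np

# Assumption: the pretrained file is a Chainer-style .npz snapshot,
# so np.load can open it and list the stored parameter arrays.
snapshot = np.load("imdb_pretrained_lm_ijcai.model")
for name in snapshot.files:
    # Look for the (vocab_size, embedding_dim) array.
    print(name, snapshot[name].shape)

# vocab_inv is the id -> word mapping rebuilt during preprocessing;
# if its length exceeds the embedding's first dimension, the rebuilt
# vocabulary no longer matches the pretrained weights.
# print(len(vocab_inv))
```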
Thanks!