This repository has been archived by the owner on Jun 10, 2021. It is now read-only.
I used the NMT tool to train a Japanese-to-English engine and cleaned unnecessary impurities out of the data, but the results are consistently bad. I want to know whether this is a bug in how the software learns from double-byte text, or whether I am using nonstandard parameters (-layers 3 -rnn_size 500). Has anyone encountered the same problem?
Thanks, everybody.
Hello! @sdlmw
I am training a Korean-to-English model now, using the -layers 8 -rnn_size 1000 options.
My data set is about 3 million sentence pairs, and I also used MeCab for tokenizing.
I am not sure how you cleaned up the unnecessary things, but I have had no problems with double-byte text so far.
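For reference, pre-tokenizing the source side with MeCab before training usually looks something like the minimal sketch below. It assumes the mecab-python3 bindings and a Japanese dictionary (e.g. ipadic) are installed; the file names are placeholders.

```python
# Minimal sketch: segment raw Japanese text into space-separated tokens
# with MeCab before running the usual NMT preprocessing.
# Assumes the mecab-python3 bindings and a Japanese dictionary (e.g. ipadic)
# are installed; the file names "train.ja" / "train.ja.tok" are placeholders.
import MeCab

tagger = MeCab.Tagger("-Owakati")  # wakati mode: space-separated surface forms

with open("train.ja", encoding="utf-8") as src, \
        open("train.ja.tok", "w", encoding="utf-8") as out:
    for line in src:
        # Keep one output line per input line so the parallel corpus stays aligned.
        out.write(tagger.parse(line.strip()).strip() + "\n")
```

Tokenizing this way turns multi-byte scripts into ordinary space-separated tokens, so training itself should not behave differently from a Latin-script corpus.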