This repository has been archived by the owner on Jun 10, 2021. It is now read-only.
When I run `th tools/release_model.lua -model <model> -gpuid 1`, this is what happens (using a model trained with this paragraph model):
```
[02/06/18 06:38:42 INFO] Using GPU(s): 1
[02/06/18 06:38:42 WARNING] The caching CUDA memory allocator is enabled. This allocator improves performance at the cost of a higher GPU memory usage. To optimize for memory, consider disabling it by setting the environment variable: THC_CACHING_ALLOCATOR=0
[02/06/18 06:38:42 INFO] Loading model '../<foldername>/paragraph/model/840B.300d.rnn.para_epoch1_107.09.t7'...
/root/torch/install/bin/luajit: tools/release_model.lua:93: unable to load the model (/root/torch/install/share/lua/5.1/torch/File.lua:343: unknown Torch class <onmt.CustomizedAttention>). If you are releasing a GPU model, it needs to be loaded on the GPU first (set -gpuid > 0)
stack traceback:
	[C]: in function 'error'
	tools/release_model.lua:93: in function 'main'
	tools/release_model.lua:116: in main chunk
	[C]: in function 'dofile'
	/root/torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:150: in main chunk
	[C]: at 0x00406670
```
Let me know if there's anything I can do (e.g. changing to a different Torch class?).
jkcchan changed the title from "Release a model - error: unknown Torch class <onmt.CustomizedAttention>" to "Release model - error: unknown Torch class <onmt.CustomizedAttention>" on Feb 6, 2018.
When loading a class, the Torch loader expects its definition to be available. I would recommend simply copying the release script into nqg/paragraph and running it from there, to ensure that all custom class definitions are available.
@jkcchan Merely copying the release script into the nqg/paragraph folder did not work for me. What I did instead was copy onmt/modules/* from the project you pointed to into the onmt/modules/ folder of this project, and then run the command. This should hopefully work for you too.
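For anyone hitting this in a different setup: the underlying mechanism is generic to serialized models. Torch's `File.lua` deserializer looks up each saved class by name at load time, so any custom class (here `onmt.CustomizedAttention`) must already be defined in the running process before the model is loaded; copying the module files makes the definitions visible. A rough Python/pickle analogy of the same failure mode (the class name and dimension below are illustrative, not from the actual model):

```python
import pickle

class CustomizedAttention:
    """Stand-in for a custom layer class that was saved with a model."""
    def __init__(self, dim):
        self.dim = dim

# Serializing stores only the class *name* plus the instance state,
# much like torch.save on a model containing custom modules.
blob = pickle.dumps(CustomizedAttention(300))

# Loading succeeds while the class definition is resolvable.
restored = pickle.loads(blob)
print(restored.dim)  # 300

# Simulate a process that lacks the definition (e.g. running the
# release script from a checkout without the custom modules):
del CustomizedAttention
try:
    pickle.loads(blob)
except AttributeError as e:
    # Analogous to "unknown Torch class <onmt.CustomizedAttention>"
    print("load failed:", e)
```

The fix in both worlds is the same: make the class definition importable/requireable in the loading process, not to edit the serialized file.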