
load/save problem with CuArrays in tutorial/60.rnn.ipynb #668

Open · denizyuret opened this issue Nov 2, 2021 · 2 comments

Comments

denizyuret (Owner) commented:

No description provided.

noyongkyoon commented:

Hello, Prof. Yuret.

When I choose not to train from scratch within the trainresults() function, opting instead to load the model (and results) saved previously, I get the following error message:

CUDA error: an illegal memory access was encountered (code 700, ERROR_ILLEGAL_ADDRESS)

Stacktrace:
[1] throw_api_error(res::CUDA.cudaError_enum)
@ CUDA ~/.julia/packages/CUDA/BbliS/lib/cudadrv/error.jl:89
[2] macro expansion
@ ~/.julia/packages/CUDA/BbliS/lib/cudadrv/error.jl:97 [inlined]
[3] cuMemAllocAsync
@ ~/.julia/packages/CUDA/BbliS/lib/utils/call.jl:26 [inlined]
[4] alloc(::Type{CUDA.Mem.DeviceBuffer}, bytesize::Int64; async::Bool, stream::CUDA.CuStream, pool::Nothing)
@ CUDA.Mem ~/.julia/packages/CUDA/BbliS/lib/cudadrv/memory.jl:83
[5] macro expansion
@ ~/.julia/packages/CUDA/BbliS/src/pool.jl:43 [inlined]
[6] macro expansion
@ ./timing.jl:383 [inlined]
[7] actual_alloc(bytes::Int64; async::Bool, stream::CUDA.CuStream)
@ CUDA ~/.julia/packages/CUDA/BbliS/src/pool.jl:41
[8] macro expansion
@ ~/.julia/packages/CUDA/BbliS/src/pool.jl:322 [inlined]
[9] macro expansion
@ ./timing.jl:383 [inlined]
[10] #_alloc#174
@ ~/.julia/packages/CUDA/BbliS/src/pool.jl:404 [inlined]
[11] #alloc#173
@ ~/.julia/packages/CUDA/BbliS/src/pool.jl:389 [inlined]
[12] alloc
@ ~/.julia/packages/CUDA/BbliS/src/pool.jl:383 [inlined]
[13] CUDA.CuArray{Float32, 3, CUDA.Mem.DeviceBuffer}(#unused#::UndefInitializer, dims::Tuple{Int64, Int64, Int64})
@ CUDA ~/.julia/packages/CUDA/BbliS/src/array.jl:42
[14] similar
@ ~/.julia/packages/CUDA/BbliS/src/array.jl:166 [inlined]
[15] similar(a::CUDA.CuArray{Float32, 2, CUDA.Mem.DeviceBuffer}, dims::Tuple{Base.OneTo{Int64}, Base.OneTo{Int64}, Base.OneTo{Int64}})
@ Base ./abstractarray.jl:795
[16] _getindex(::CUDA.CuArray{Float32, 2, CUDA.Mem.DeviceBuffer}, ::Base.Slice{Base.OneTo{Int64}}, ::Vararg{Any})
@ GPUArrays ~/.julia/packages/GPUArrays/9GYI6/src/host/indexing.jl:42
[17] getindex(::CUDA.CuArray{Float32, 2, CUDA.Mem.DeviceBuffer}, ::Function, ::Matrix{UInt16})
@ GPUArrays ~/.julia/packages/GPUArrays/9GYI6/src/host/indexing.jl:38
[18] forw(::typeof(getindex), ::AutoGrad.Param{CUDA.CuArray{Float32, 2, CUDA.Mem.DeviceBuffer}}, ::Vararg{Any}; kwargs::Base.Pairs{Symbol, Union{}, Tuple{}, NamedTuple{(), Tuple{}}})
@ AutoGrad ~/.julia/packages/AutoGrad/1QZxP/src/core.jl:66
[19] forw
@ ~/.julia/packages/AutoGrad/1QZxP/src/core.jl:64 [inlined]
[20] getindex(::AutoGrad.Param{CUDA.CuArray{Float32, 2, CUDA.Mem.DeviceBuffer}}, ::Function, ::Matrix{UInt16})
@ AutoGrad ./none:0
[21] (::Embed)(x::Matrix{UInt16})
@ Main ./In[7]:3
[22] (::Chain)(x::Matrix{UInt16})
@ Main ./In[5]:6
[23] tag(tagger::Chain, s::String)
@ Main ./In[24]:6
[24] top-level scope
@ In[26]:1

This error message seems daunting to me.
Would there be a way to save this model and then load it for later use, without this apparent hiccup?
What does one have to do to resolve this issue? Any pointers?

Y. No
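
One possible workaround, sketched below under assumptions (the toy Embed layer and the to_cpu/to_gpu helpers are hypothetical stand-ins, not Knet API and not the tutorial's actual code): convert all parameters to plain CPU Arrays before saving, and back to CuArrays after loading, so the JLD2 file never contains GPU device pointers.

using Knet, CUDA, AutoGrad

struct Embed; w::Param; end                       # toy layer, shaped like the tutorial's Embed
Embed(vocab::Int, embed::Int) = Embed(Param(CuArray(randn(Float32, embed, vocab))))

to_cpu(p::Param) = Param(Array(value(p)))         # value() unwraps the Param's array
to_gpu(p::Param) = Param(CuArray(value(p)))
to_cpu(e::Embed) = Embed(to_cpu(e.w))             # a real model needs this for every layer
to_gpu(e::Embed) = Embed(to_gpu(e.w))

layer = Embed(100, 16)
Knet.save("embed.jld2", "layer", to_cpu(layer))   # Knet.save/Knet.load wrap JLD2
layer2 = to_gpu(Knet.load("embed.jld2", "layer"))

The same pattern would apply to the saved results: anything that is a CuArray goes through Array before Knet.save and back through CuArray after Knet.load.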

denizyuret (Owner, Author) commented:

I think this is related to JuliaGPU/CUDA.jl#1833
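
The stack trace above goes through cuMemAllocAsync, i.e. CUDA.jl's stream-ordered memory pool, which is what that issue discusses. If the pool is indeed the culprit, one thing to try (an assumption, not a confirmed fix for this thread) is falling back to the non-pooled allocator; the environment variable must be set before CUDA.jl is first loaded in the session:

ENV["JULIA_CUDA_MEMORY_POOL"] = "none"    # disable the stream-ordered pool
using CUDA                                # must come after setting the variable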
