The .pth model was trained on a GPU, and I am converting it to ONNX on a CPU-only machine. The current compile step fails with this error:
```
root@9b04f5972f0e:/mnt/test_model/resnet50/resnet50_pytorch# bash run_compile.sh
2021-11-23 08:43:00.368629: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
Source type: TorchModelFile.
Target type: ONNXModel.
Compile path: TorchModelFile -> OnnxModel -> ONNXModel.
Compiling to OnnxModel...
/usr/local/lib/python3.6/dist-packages/torch/cuda/__init__.py:52: UserWarning: CUDA initialization: Found no NVIDIA driver on your system. Please check that you have an NVIDIA GPU and installed a driver from http://www.nvidia.com/Download/index.aspx (Triggered internally at /pytorch/c10/cuda/CUDAFunctions.cpp:100.)
  return torch._C._cuda_getDeviceCount() > 0
Traceback (most recent call last):
  File "/usr/local/lib/python3.6/dist-packages/model_compiler/__init__.py", line 13, in compile_model
    'path': compiler.compile_from_json(config)
  File "/usr/local/lib/python3.6/dist-packages/model_compiler/compiler.py", line 50, in compile_from_json
    target_model = compiler(source=source_type.from_json(value), config=compiler_config_type.from_json(value))
  File "/usr/local/lib/python3.6/dist-packages/model_compiler/compilers/repository.py", line 118, in _compiler
    result = edge.compiler(result, inner_config)
  File "/usr/local/lib/python3.6/dist-packages/model_compiler/compilers/torch_model_file_to_onnx_model.py", line 85, in compile_source
    model.load_state_dict(torch.load(source.model_path))
  File "/usr/local/lib/python3.6/dist-packages/torch/serialization.py", line 594, in load
    return _load(opened_zipfile, map_location, pickle_module, **pickle_load_args)
  File "/usr/local/lib/python3.6/dist-packages/torch/serialization.py", line 853, in _load
    result = unpickler.load()
  File "/usr/local/lib/python3.6/dist-packages/torch/serialization.py", line 845, in persistent_load
    load_tensor(data_type, size, key, _maybe_decode_ascii(location))
  File "/usr/local/lib/python3.6/dist-packages/torch/serialization.py", line 834, in load_tensor
    loaded_storages[key] = restore_location(storage, location)
  File "/usr/local/lib/python3.6/dist-packages/torch/serialization.py", line 175, in default_restore_location
    result = fn(storage, location)
  File "/usr/local/lib/python3.6/dist-packages/torch/serialization.py", line 151, in _cuda_deserialize
    device = validate_cuda_device(location)
  File "/usr/local/lib/python3.6/dist-packages/torch/serialization.py", line 135, in validate_cuda_device
    raise RuntimeError('Attempting to deserialize object on a CUDA '
RuntimeError: Attempting to deserialize object on a CUDA device but torch.cuda.is_available() is False. If you are running on a CPU-only machine, please use torch.load with map_location=torch.device('cpu') to map your storages to the CPU.
```