
when I run bash make.sh, here is a question #23

Open · fxywky opened this issue May 24, 2020 · 3 comments
fxywky commented May 24, 2020

/home/fxy/.conda/envs/torchPy/lib/python3.6/site-packages/torch/include/ATen/cuda/NumericLimits.cuh(86): warning: calling a constexpr host function("from_bits") from a host device function("upper_bound") is not allowed. The experimental flag '--expt-relaxed-constexpr' can be used to allow this.

/home/fxy/.conda/envs/torchPy/lib/python3.6/site-packages/torch/include/c10/util/ArrayRef.h:277:55: warning: ‘deprecated’ attribute directive ignored [-Wattributes]
using IntList C10_DEPRECATED_USING = ArrayRef<int64_t>;
^
/usr/local/cuda/bin/nvcc -DWITH_CUDA -I/home/fxy/Desktop/mountTry/fxy/Zooming-Slow-Mo-CVPR-2020-master/codes/models/modules/DCNv2/src -I/home/fxy/.conda/envs/torchPy/lib/python3.6/site-packages/torch/include -I/home/fxy/.conda/envs/torchPy/lib/python3.6/site-packages/torch/include/torch/csrc/api/include -I/home/fxy/.conda/envs/torchPy/lib/python3.6/site-packages/torch/include/TH -I/home/fxy/.conda/envs/torchPy/lib/python3.6/site-packages/torch/include/THC -I/usr/local/cuda/include -I/home/fxy/.conda/envs/torchPy/include/python3.6m -c /home/fxy/Desktop/mountTry/fxy/Zooming-Slow-Mo-CVPR-2020-master/codes/models/modules/DCNv2/src/cuda/dcn_v2_psroi_pooling_cuda.cu -o build/temp.linux-x86_64-3.6/home/fxy/Desktop/mountTry/fxy/Zooming-Slow-Mo-CVPR-2020-master/codes/models/modules/DCNv2/src/cuda/dcn_v2_psroi_pooling_cuda.o -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --compiler-options '-fPIC' -DCUDA_HAS_FP16=1 -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=_ext -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++11
/home/fxy/.conda/envs/torchPy/lib/python3.6/site-packages/torch/include/ATen/cuda/NumericLimits.cuh(83): warning: calling a constexpr host function("from_bits") from a host device function("lowest") is not allowed. The experimental flag '--expt-relaxed-constexpr' can be used to allow this.

/home/fxy/.conda/envs/torchPy/lib/python3.6/site-packages/torch/include/ATen/cuda/NumericLimits.cuh(84): warning: calling a constexpr host function("from_bits") from a host device function("max") is not allowed. The experimental flag '--expt-relaxed-constexpr' can be used to allow this.

/home/fxy/.conda/envs/torchPy/lib/python3.6/site-packages/torch/include/ATen/cuda/NumericLimits.cuh(85): warning: calling a constexpr host function("from_bits") from a host device function("lower_bound") is not allowed. The experimental flag '--expt-relaxed-constexpr' can be used to allow this.

/home/fxy/.conda/envs/torchPy/lib/python3.6/site-packages/torch/include/ATen/cuda/NumericLimits.cuh(86): warning: calling a constexpr host function("from_bits") from a host device function("upper_bound") is not allowed. The experimental flag '--expt-relaxed-constexpr' can be used to allow this.

/home/fxy/.conda/envs/torchPy/lib/python3.6/site-packages/torch/include/c10/util/ArrayRef.h:277:55: warning: ‘deprecated’ attribute directive ignored [-Wattributes]
using IntList C10_DEPRECATED_USING = ArrayRef<int64_t>;
^
/home/fxy/Desktop/mountTry/fxy/Zooming-Slow-Mo-CVPR-2020-master/codes/models/modules/DCNv2/src/cuda/dcn_v2_psroi_pooling_cuda.cu: In lambda function:
/home/fxy/Desktop/mountTry/fxy/Zooming-Slow-Mo-CVPR-2020-master/codes/models/modules/DCNv2/src/cuda/dcn_v2_psroi_pooling_cuda.cu:317:120: warning: ‘c10::ScalarType detail::scalar_type(const at::DeprecatedTypeProperties&)’ is deprecated (declared at /home/fxy/.conda/envs/torchPy/lib/python3.6/site-packages/torch/include/ATen/Dispatch.h:47) [-Wdeprecated-declarations]
AT_DISPATCH_FLOATING_TYPES(input.type(), "dcn_v2_psroi_pooling_cuda_forward", [&] {
^
/home/fxy/Desktop/mountTry/fxy/Zooming-Slow-Mo-CVPR-2020-master/codes/models/modules/DCNv2/src/cuda/dcn_v2_psroi_pooling_cuda.cu: In lambda function:
/home/fxy/Desktop/mountTry/fxy/Zooming-Slow-Mo-CVPR-2020-master/codes/models/modules/DCNv2/src/cuda/dcn_v2_psroi_pooling_cuda.cu:391:126: warning: ‘c10::ScalarType detail::scalar_type(const at::DeprecatedTypeProperties&)’ is deprecated (declared at /home/fxy/.conda/envs/torchPy/lib/python3.6/site-packages/torch/include/ATen/Dispatch.h:47) [-Wdeprecated-declarations]
AT_DISPATCH_FLOATING_TYPES(out_grad.type(), "dcn_v2_psroi_pooling_cuda_backward", [&] {
^
creating build/lib.linux-x86_64-3.6
g++ -pthread -shared -B /home/fxy/.conda/envs/torchPy/compiler_compat -L/home/fxy/.conda/envs/torchPy/lib -Wl,-rpath=/home/fxy/.conda/envs/torchPy/lib -Wl,--no-as-needed -Wl,--sysroot=/ build/temp.linux-x86_64-3.6/home/fxy/Desktop/mountTry/fxy/Zooming-Slow-Mo-CVPR-2020-master/codes/models/modules/DCNv2/src/vision.o build/temp.linux-x86_64-3.6/home/fxy/Desktop/mountTry/fxy/Zooming-Slow-Mo-CVPR-2020-master/codes/models/modules/DCNv2/src/cpu/dcn_v2_cpu.o build/temp.linux-x86_64-3.6/home/fxy/Desktop/mountTry/fxy/Zooming-Slow-Mo-CVPR-2020-master/codes/models/modules/DCNv2/src/cuda/dcn_v2_cuda.o build/temp.linux-x86_64-3.6/home/fxy/Desktop/mountTry/fxy/Zooming-Slow-Mo-CVPR-2020-master/codes/models/modules/DCNv2/src/cuda/dcn_v2_im2col_cuda.o build/temp.linux-x86_64-3.6/home/fxy/Desktop/mountTry/fxy/Zooming-Slow-Mo-CVPR-2020-master/codes/models/modules/DCNv2/src/cuda/dcn_v2_psroi_pooling_cuda.o -L/usr/local/cuda/lib64 -lcudart -o build/lib.linux-x86_64-3.6/_ext.cpython-36m-x86_64-linux-gnu.so
running develop
running egg_info
creating DCNv2.egg-info
writing DCNv2.egg-info/PKG-INFO
writing dependency_links to DCNv2.egg-info/dependency_links.txt
writing top-level names to DCNv2.egg-info/top_level.txt
writing manifest file 'DCNv2.egg-info/SOURCES.txt'
reading manifest file 'DCNv2.egg-info/SOURCES.txt'
writing manifest file 'DCNv2.egg-info/SOURCES.txt'
running build_ext
copying build/lib.linux-x86_64-3.6/_ext.cpython-36m-x86_64-linux-gnu.so ->
error: [Errno 1] Operation not permitted

I don't know what the problem is or how to fix it. My environment is CUDA 9.0, gcc 4.9.2, and torch 1.1.
Thanks a lot!
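For what it's worth, a common cause of "error: [Errno 1] Operation not permitted" at the final copy step is stale build artifacts (for example, left over from an earlier build run as root, or a working copy on a mount that forbids the operation, which the "mountTry" path in the log hints at). A minimal cleanup sketch, assuming the repository layout shown in the log above; the DCN_DIR path is an example and should be adjusted to your checkout:

```shell
# Stale or root-owned build artifacts often cause "Operation not permitted"
# when setup.py tries to copy the freshly built _ext*.so. Remove them and
# rebuild as the current user (no sudo):
DCN_DIR="Zooming-Slow-Mo-CVPR-2020-master/codes/models/modules/DCNv2"  # adjust to your checkout
if [ -d "$DCN_DIR" ]; then
  cd "$DCN_DIR"
  rm -rf build DCNv2.egg-info _ext*.so   # clear previous build output
  python setup.py build develop          # rebuild the extension in place
else
  echo "DCNv2 directory not found: $DCN_DIR"
fi
```

If the repository lives on a restrictive mount (e.g. some network or NTFS mounts), moving the checkout to a local filesystem before building may also help.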

pettod commented May 28, 2020

I have tried a few different combinations of CUDA, gcc, and torch versions. The combination that finally worked for me is:
gcc 5.5.0, pytorch 1.4.0, cuda 10.1, python 3.8

I used the following conda command to install the dependencies:
conda install pytorch==1.4.0 torchvision cudatoolkit -c pytorch

I am not sure whether the gcc version needs to be changed, but here are the instructions I used to change it:
https://askubuntu.com/a/1087368

fxywky (Author) commented Jun 8, 2020

I use a Windows 10 PC and followed your suggestion, but now there is another error:
error: command 'C:\Program Files (x86)\Microsoft Visual Studio\2017\Community\VC\Tools\MSVC\14.16.27023\bin\HostX86\x64\link.exe' failed with exit status 1120
My environment is pytorch 1.4.0, cuda 10.1, python 3.6.

@Mukosame (Owner) commented:


Thank you so much for sharing your detailed environment. I just discovered that the pytorch and gcc versions are very important for compiling successfully: gcc >= 4.9 is required.
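As a quick way to check the gcc >= 4.9 requirement before running make.sh, here is a minimal sketch (parse_version and gcc_ok are hypothetical helper names, not part of this project):

```python
import subprocess

def parse_version(s):
    """Turn a dotted version string like '5.5.0' into a comparable tuple."""
    return tuple(int(p) for p in s.strip().split(".") if p.isdigit())

def gcc_ok(minimum=(4, 9)):
    """Return True if the local gcc reports a version >= `minimum`."""
    out = subprocess.run(["gcc", "-dumpversion"],
                         capture_output=True, text=True).stdout
    return parse_version(out) >= minimum
```

Tuple comparison handles both old-style outputs like "4.9.2" and newer gcc builds that print only the major version (e.g. "10").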
