
F.conv2d error when manually implementing pixel_unshuffle #521

Open
kyoron2 opened this issue Feb 2, 2024 · 1 comment
kyoron2 commented Feb 2, 2024

I want to implement pixel_unshuffle manually in MegEngine.
In the call F.conv2d(input, kernel, stride=downscale_factor, groups=c):
input.shape=(2, 3, 1980, 2880)
kernel.shape=(48,1,4,4)
downscale_factor=4
c=3

The same approach works in PyTorch and produces an output of shape (2, 48, 495, 720), but MegEngine's F.conv2d raises an error.

Environment

1. System environment:
2. MegEngine version: 1.11.1+cu111
3. Python version: 3.8

Steps to reproduce

PyTorch code:
import torch
import torch.nn as nn
import torch.nn.functional as F

def pixel_unshuffle(input, downscale_factor):
    '''
    input: batchSize * c * (k*w) * (k*h)
    downscale_factor: k

    batchSize * c * k*w * k*h -> batchSize * k*k*c * w * h
    '''
    c = input.shape[1]

    kernel = torch.zeros(size=[downscale_factor * downscale_factor * c,
                               1, downscale_factor, downscale_factor],
                         device=input.device)
    for y in range(downscale_factor):
        for x in range(downscale_factor):
            kernel[x + y * downscale_factor::downscale_factor*downscale_factor, 0, y, x] = 1
    return F.conv2d(input, kernel, stride=downscale_factor, groups=c)

x = torch.zeros(size=(2, 3, 1980, 2880))

x = pixel_unshuffle(x, 4)
print(x.shape)

MegEngine code:
import megengine.functional as F

def pixel_unshuffle(input, downscale_factor):
    '''
    input: (batchSize, c, k*w, k*h)
    downscale_factor: k

    (batchSize, c, k*w, k*h) -> (batchSize, k*k*c, w, h)
    '''
    c = input.shape[1]

    # Build an unfold operator (the same one-hot kernel as in the PyTorch version)
    kernel = F.zeros((downscale_factor * downscale_factor * c, 1, downscale_factor, downscale_factor), device=input.device)
    for y in range(downscale_factor):
        for x in range(downscale_factor):
            kernel[x + y * downscale_factor::downscale_factor*downscale_factor, 0, y, x] = 1
    return F.conv2d(input, kernel, stride=downscale_factor, groups=c)

im = F.zeros((2, 3, 1980, 2880))
isn = pixel_unshuffle(im, 4)
print(isn)

Complete log and error message

Traceback (most recent call last):
File "/home/kyoron/.pycharm_helpers/pydev/pydevd.py", line 1500, in _exec
pydev_imports.execfile(file, globals, locals) # execute the script
File "/home/kyoron/.pycharm_helpers/pydev/_pydev_imps/_pydev_execfile.py", line 18, in execfile
exec(compile(contents+"\n", file, 'exec'), glob, loc)
File "/mnt/e/多媒体/桌面/WZ/code/CREStereo_swinT/CREStereo/nets/utils/utils.py", line 65, in
python-BaseException
isn = pixel_unshuffle(im, 4)
File "/mnt/e/多媒体/桌面/WZ/code/CREStereo_swinT/CREStereo/nets/utils/utils.py", line 62, in pixel_unshuffle
return F.conv2d(input, kernel, stride=downscale_factor, groups=c)
File "/home/kyoron/miniconda3/envs/dl/lib/python3.8/site-packages/megengine/functional/nn.py", line 265, in conv2d
(output,) = apply(op, inp, weight)
RuntimeError: assertion `filter.ndim == img_ndim + 3 || filter.ndim == img_ndim + 5' failed at ../../../../../../dnn/src/common/convolution.cpp:56: void {anonymous}::make_canonized_filter_meta_nchw_nhwc(size_t, const megdnn::TensorLayout&, const Param&, typename megdnn::ConvolutionBase<Parameter>::CanonizedFilterMeta&) [with Parameter = megdnn::param::Convolution; Param = megdnn::param::Convolution; size_t = long unsigned int; typename megdnn::ConvolutionBase<Parameter>::CanonizedFilterMeta = megdnn::ConvolutionBase<megdnn::param::Convolution>::CanonizedFilterMeta]
extra message: bad filter ndim for group convolution: spatial_ndim=2 filter_ndim=4

backtrace:
/home/kyoron/miniconda3/envs/dl/lib/python3.8/site-packages/megengine/core/lib/libmegengine_shared.so(_ZN3mgb13MegBrainErrorC1ERKSs+0x4a) [0x7feefd2d76aa]
/home/kyoron/miniconda3/envs/dl/lib/python3.8/site-packages/megengine/core/lib/libmegengine_shared.so(+0x2ab2557) [0x7feefd339557]
/home/kyoron/miniconda3/envs/dl/lib/python3.8/site-packages/megengine/core/lib/libmegengine_shared.so(_ZN6megdnn12ErrorHandler15on_megdnn_errorERKSs+0x14) [0x7fef00addb34]
/home/kyoron/miniconda3/envs/dl/lib/python3.8/site-packages/megengine/core/lib/libmegengine_shared.so(_ZN6megdnn12ErrorHandler15on_megdnn_errorEPKc+0x22) [0x7fef00adf632]
/home/kyoron/miniconda3/envs/dl/lib/python3.8/site-packages/megengine/core/lib/libmegengine_shared.so(_ZN6megdnn15__assert_fail__EPKciS1_S1_S1_z+0x190) [0x7fef00b80c50]
/home/kyoron/miniconda3/envs/dl/lib/python3.8/site-packages/megengine/core/lib/libmegengine_shared.so(_ZNK6megdnn15ConvolutionBaseINS_5param11ConvolutionEE26make_canonized_filter_metaEmRKNS_12TensorLayoutE+0x12d6) [0x7fef00ae8a46]
/home/kyoron/miniconda3/envs/dl/lib/python3.8/site-packages/megengine/core/lib/libmegengine_shared.so(_ZNK6megdnn15ConvolutionBaseINS_5param11ConvolutionEE17deduce_layout_fwdERKNS_12TensorLayoutES6_RS4_+0xa3) [0x7fef00af18c3]
/home/kyoron/miniconda3/envs/dl/lib/python3.8/site-packages/megengine/core/lib/libmegengine_shared.so(_ZN6megdnn18ConvolutionForward13deduce_layoutERKNS_12TensorLayoutES3_RS1_+0x1b) [0x7fef00af388b]
/home/kyoron/miniconda3/envs/dl/lib/python3.8/site-packages/megengine/core/_imperative_rt.cpython-38-x86_64-linux-gnu.so(+0x33d733) [0x7fef99e1c733]
/home/kyoron/miniconda3/envs/dl/lib/python3.8/site-packages/megengine/core/_imperative_rt.cpython-38-x86_64-linux-gnu.so(+0x2e5bff) [0x7fef99dc4bff]

ERROR conda.cli.main_run:execute(124): conda run python /home/kyoron/.pycharm_helpers/pydev/pydevd.py --multiprocess --qt-support=auto --client 127.0.0.1 --port 59297 --file /mnt/e/多媒体/桌面/WZ/code/CREStereo_swinT/CREStereo/nets/utils/utils.py failed. (See above for error)
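
The extra message ("bad filter ndim for group convolution: spatial_ndim=2 filter_ndim=4") suggests that MegEngine's grouped convolution does not accept PyTorch's 4-D (out_channels, in_channels // groups, kh, kw) weight layout and instead expects a 5-D weight. A minimal sketch of the same pixel_unshuffle under that assumption, where the only change is reshaping the kernel to (groups, out_channels_per_group, in_channels_per_group, kh, kw) before the call (pixel_unshuffle_5d is a hypothetical name, not part of the original code):

import megengine.functional as F

def pixel_unshuffle_5d(input, downscale_factor):
    '''(batchSize, c, k*w, k*h) -> (batchSize, k*k*c, w, h) via grouped conv.'''
    c = input.shape[1]
    k = downscale_factor
    # Same one-hot kernel as above, built first in the 4-D (out, in/groups, kh, kw) layout
    kernel = F.zeros((k * k * c, 1, k, k), device=input.device)
    for y in range(k):
        for x in range(k):
            kernel[x + y * k::k * k, 0, y, x] = 1
    # Assumption: for groups > 1 MegEngine wants a 5-D weight of shape
    # (groups, out_channels_per_group, in_channels_per_group, kh, kw)
    kernel = kernel.reshape(c, k * k, 1, k, k)
    return F.conv2d(input, kernel, stride=k, groups=c)

im = F.zeros((2, 3, 1980, 2880))
print(pixel_unshuffle_5d(im, 4).shape)  # expected: (2, 48, 495, 720)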


kyoron2 commented Feb 3, 2024

Minimal reproduction:
import megengine as mge
import megengine.functional as F
input = F.zeros((2, 3, 1980, 2880))
kernel=F.zeros((48,1,4,4))
downscale_factor=4
c=3
x=F.conv2d(input, kernel, stride=downscale_factor, groups=c)
print(x)

import torch
import torch.nn as nn

input = torch.zeros(2, 3, 1980, 2880)
kernel=torch.zeros(48,1,4,4)
downscale_factor=4
c=3
x=torch.nn.functional.conv2d(input, kernel, stride=downscale_factor, groups=c)
print(x.shape)
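
Assuming the same 5-D grouped-conv weight layout as above, the MegEngine half of this reproduction appears to go through once the kernel is created with that shape:

import megengine.functional as F

inp = F.zeros((2, 3, 1980, 2880))
# Assumed 5-D grouped-conv weight layout: (groups, out_per_group, in_per_group, kh, kw)
kernel = F.zeros((3, 16, 1, 4, 4))
out = F.conv2d(inp, kernel, stride=4, groups=3)
print(out.shape)  # expected: (2, 48, 495, 720)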
