
XNOR-Net with binary conv2d kernels using an XNOR GEMM op; supports both CPU and GPU.


pminhtam/xnor_conv_pytorch_extension


C++/CUDA Extensions: XNOR Convolution in PyTorch

XNOR extension

In XNOR convolution, both the filters and the inputs to the convolutional layers are binary. By approximating the convolution with XNOR and bit-counting operations, we can obtain large speed-ups and memory savings.

XNOR convolution using bitwise operations

Implemented in NumPy (Python), C++, and CUDA.
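The XNOR + bit-counting trick rests on a simple identity: for two vectors with entries in {-1, +1}, encoding -1 as 0 and +1 as 1 makes the dot product equal to `2 * popcount(XNOR(a, b)) - n`. A minimal sketch of this identity (illustration only, not this repo's code; the variable names and 8-bit packing are assumptions):

```python
import numpy as np

# Two {-1, +1} vectors of length 8.
a = np.array([1, -1, 1, 1, -1, -1, 1, -1])
b = np.array([1, 1, -1, 1, -1, 1, 1, -1])

# Encode -1 -> 0, +1 -> 1 and pack each vector into one integer word.
a_bits = int("".join("1" if v == 1 else "0" for v in a), 2)
b_bits = int("".join("1" if v == 1 else "0" for v in b), 2)

n = len(a)
xnor = ~(a_bits ^ b_bits) & ((1 << n) - 1)  # XNOR, masked to n bits
dot = 2 * bin(xnor).count("1") - n          # 2 * popcount - n

assert dot == int(np.dot(a, b))  # same result as the real dot product
```

Replacing the multiply-accumulate inner loop of a GEMM with this XNOR/popcount form is what makes the binary convolution fast: one machine word processes 32 or 64 elements at a time.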

  • Inspect the C++ and CUDA extensions in the cpp/ and cuda/ folders.

Build cpp and CUDA

Build the C++ and/or CUDA extensions by going into the cpp/ or cuda/ folder and running python setup.py install.

Cpp

cd cpp
python setup.py install 

CUDA

cd cuda
python setup.py install 

Use

Cpp

import binary_cpp
output = binary_cpp.binary_conv2d(input, filter, bias)

CUDA

import binary_cuda
output = binary_cuda.binary_conv2d_cuda(input, filter, bias)

Numpy

from py.xnor_bitwise_numpy import xnor_bitwise_np
out = xnor_bitwise_np(input, filter)
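As a rough mental model of what these calls compute (a hypothetical reference, not this repo's implementation), a binary 2D convolution over {-1, +1} tensors is just a sliding-window multiply-accumulate; the XNOR kernels replace the inner sum with packed XNOR + popcount. The helper name and shapes below are assumptions:

```python
import numpy as np

def naive_binary_conv2d(x, w):
    """Naive valid-mode 2D cross-correlation for a single channel,
    with inputs and weights in {-1, +1}. Reference sketch only."""
    H, W = x.shape
    kh, kw = w.shape
    out = np.zeros((H - kh + 1, W - kw + 1), dtype=np.int64)
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # Multiply-accumulate over +/-1 values; an XNOR kernel
            # replaces this inner sum with packed XNOR + popcount.
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * w)
    return out

rng = np.random.default_rng(0)
x = np.where(rng.standard_normal((6, 6)) >= 0, 1, -1)  # binarized input
w = np.where(rng.standard_normal((3, 3)) >= 0, 1, -1)  # binarized filter
out = naive_binary_conv2d(x, w)
print(out.shape)  # valid mode: (6 - 3 + 1, 6 - 3 + 1) = (4, 4)
```

Note that every output entry is a sum of nine +/-1 terms, so it is always odd; this parity is a quick sanity check when comparing a bitwise implementation against a naive one.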

References

[1] https://github.com/pytorch/extension-cpp

[2] Rastegari, Mohammad, Vicente Ordonez, Joseph Redmon, and Ali Farhadi. "Xnor-net: Imagenet classification using binary convolutional neural networks." In European conference on computer vision, pp. 525-542. Springer, Cham, 2016.

[3] https://github.com/cooooorn/Pytorch-XNOR-Net

[4] https://github.com/anilsathyan7/ConvAcc

[5] Courbariaux, Matthieu, Itay Hubara, Daniel Soudry, Ran El-Yaniv, and Yoshua Bengio. "Binarized Neural Networks: Training Deep Neural Networks with Weights and Activations Constrained to +1 or -1." arXiv preprint arXiv:1602.02830, 2016.