@pfeatherstone why not) Although as the linked wiki states
> There is a trade-off in that there may be some loss of precision when using floating point.
So faster convolutions come at a cost.
cplxmodule currently uses cplx.conv_quick for most convolutions (non-grouped), which makes two calls to conv at the cost of extra concatenation and slicing steps and, hence, extra copying and memory use.
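To illustrate the two-call approach, here is a minimal sketch (not cplxmodule's actual code; the function name is made up). Stacking the real and imaginary parts along the batch dimension lets each real kernel be applied to both in a single conv call, so only two calls are needed instead of four, at the cost of the concatenation and slicing mentioned above:

```python
import torch
import torch.nn.functional as F

def conv1d_complex_2calls(xr, xi, wr, wi):
    """Complex conv via two real conv calls (illustrative sketch).

    (xr + i*xi) * (wr + i*wi) = (xr*wr - xi*wi) + i*(xr*wi + xi*wr)
    """
    x = torch.cat([xr, xi], dim=0)   # stack re/im along the batch dim
    a = F.conv1d(x, wr)              # [xr*wr ; xi*wr]
    b = F.conv1d(x, wi)              # [xr*wi ; xi*wi]
    n = xr.shape[0]
    re = a[:n] - b[n:]               # xr*wr - xi*wi
    im = b[:n] + a[n:]               # xr*wi + xi*wr
    return re, im
```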
On the other hand, cplxmodule currently uses the naïve four-op implementation for cplx.linear, although I've got both the Gauss-trick and concatenation versions implemented and tested as well.
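For reference, the naïve four-op complex linear layer looks roughly like this (a sketch with a made-up name, not cplxmodule's implementation): four real matrix products combined by the complex multiplication rule.

```python
import torch
import torch.nn.functional as F

def linear_complex_naive(xr, xi, wr, wi):
    """Naive four-matmul complex linear layer (no bias), illustrative."""
    re = F.linear(xr, wr) - F.linear(xi, wi)  # Re: xr@wr.T - xi@wi.T
    im = F.linear(xr, wi) + F.linear(xi, wr)  # Im: xr@wi.T + xi@wr.T
    return re, im
```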
Unfortunately, I did not design a convenient mechanism in cplxmodule for changing the underlying kernels used by the layers' operations. So for now the selection is hardcoded to specific implementations (linear, bilinear, and transposed conv).
I have just pushed a commit to master implementing and testing this. However, please keep in mind the last paragraph of my previous response: currently you will have to manually change a couple of lines in cplxmodule/cplx.py.
How about using this for the naïve convolution to reduce 4 convs down to 3?
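The 4-to-3 reduction referred to here is Gauss's multiplication trick, which trades one convolution for a couple of extra additions (and is the source of the floating-point precision trade-off quoted above). A minimal sketch under the same assumptions as before (illustrative name, 1d conv, no bias):

```python
import torch
import torch.nn.functional as F

def conv1d_complex_gauss(xr, xi, wr, wi):
    """Gauss's trick: 3 real convolutions instead of 4.

    t1 = xr*wr, t2 = xi*wi, t3 = (xr + xi)*(wr + wi)
    Re = t1 - t2
    Im = t3 - t1 - t2   # = xr*wi + xi*wr
    """
    t1 = F.conv1d(xr, wr)
    t2 = F.conv1d(xi, wi)
    t3 = F.conv1d(xr + xi, wr + wi)
    return t1 - t2, t3 - t1 - t2
```

The extra additions and subtractions are where precision can be lost when the intermediate terms differ greatly in magnitude, which is the cost mentioned earlier.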