> Alternatively, I was considering implementing a `ggml_flip` operation (PyTorch's `flip`). This would allow us to implement a `pad_reflect` operation directly in 'user space' using flipping instead of adding a quite niche operation (`pad_reflect`) to `ggml`.
I don't have a good idea which option would be more useful, so whatever you think makes more sense - PR welcome
Hello,
I have implemented a custom operation `ggml_pad_reflec_1d` on my ggml fork. This is required for Encodec.cpp. Should I upstream this operation? I would write a `ggml_pad_reflect` supporting 1D and 2D input, as in PyTorch's `nn.ReflectionPad2d`.

Alternatively, I was considering implementing a `ggml_flip` operation (PyTorch's `flip`). This would allow us to implement a `pad_reflect` operation directly in 'user space' using flipping instead of adding a quite niche operation (`pad_reflect`) to `ggml`.

What are your thoughts? @slaren @ggerganov
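For reference, the flip-based approach can be sketched outside of ggml. The snippet below is a NumPy illustration (not the ggml API — the function name `pad_reflect_1d` and the use of NumPy are purely for exposition): reflection padding reduces to flipping the edge regions (excluding the edge sample itself) and concatenating them onto both ends, matching PyTorch's `nn.ReflectionPad1d`.

```python
import numpy as np

def pad_reflect_1d(x, pad_left, pad_right):
    # Hypothetical 'user space' reflection pad built from flip + concat.
    # Mirror pad_left samples after the first element and pad_right
    # samples before the last element, then stitch them onto the ends.
    left = np.flip(x[..., 1:pad_left + 1], axis=-1)
    right = np.flip(x[..., -pad_right - 1:-1], axis=-1)
    return np.concatenate([left, x, right], axis=-1)
```

With `x = [1, 2, 3, 4]` and a pad of 2 on each side this yields `[3, 2, 1, 2, 3, 4, 3, 2]`, the same result as `torch.nn.ReflectionPad1d(2)`. In ggml terms, this would require only a flip primitive plus the existing concat, rather than a dedicated `pad_reflect` op.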