[Bug]: Wrong gradients in the CUDA implementation of Layer Norm #2902

Open
arrufat opened this issue Dec 26, 2023 · 10 comments

arrufat (Contributor) commented Dec 26, 2023

What Operating System(s) are you seeing this problem on?

Linux (x86-64)

dlib version

19.24

Python version

N/A

Compiler

GCC 12.3.0

Expected Behavior

I expect the LayerNorm tests to pass for both the CPU and the CUDA implementations.

Current Behavior

This test passes (with CUDA enabled, of course):

dlib/dlib/test/dnn.cpp

Lines 603 to 653 in 46e59a2

void test_layer_normalize()
{
    resizable_tensor x(2, 3, 4, 5);
    resizable_tensor y_cpu(x);
    tt::tensor_rand rnd(0);
    rnd.fill_uniform(x);
    resizable_tensor means_cpu(x.num_samples()), invstds_cpu(x.num_samples());
    resizable_tensor gamma(1, x.k(), x.nr(), x.nc()), beta(1, x.k(), x.nr(), x.nc());
    gamma = 1;
    beta = 0;
    const float eps = 1e-5;
    cpu::layer_normalize(eps, y_cpu, means_cpu, invstds_cpu, x, gamma, beta);
    // check that the mean and var per sample are 0 and 1
    const float* p = y_cpu.host();
    for (long n = 0; n < y_cpu.num_samples(); ++n)
    {
        running_stats<float> rs;
        for (long k = 0; k < y_cpu.k(); ++k)
        {
            for (long r = 0; r < y_cpu.nr(); ++r)
            {
                for (long c = 0; c < y_cpu.nc(); ++c)
                {
                    rs.add(p[tensor_index(y_cpu, n, k, r, c)]);
                }
            }
        }
        DLIB_TEST(::std::abs(rs.mean()) < 1e-6);
        DLIB_TEST(::std::abs(rs.stddev() - 1.0f) < 0.01);
    }
    // check that the CPU and the CUDA implementation are equivalent
#ifdef DLIB_USE_CUDA
    resizable_tensor y_cuda(x);
    resizable_tensor means_cuda(x.num_samples()), invstds_cuda(x.num_samples());
    cuda::layer_normalize(eps, y_cuda, means_cuda, invstds_cuda, x, gamma, beta);
    DLIB_TEST(max(abs(mat(y_cpu) - mat(y_cuda))) < 1e-5);
    DLIB_TEST(max(abs(mat(means_cpu) - mat(means_cuda))) < 1e-5);
    DLIB_TEST(max(abs(mat(invstds_cpu) - mat(invstds_cuda))) < 1e-5);
    resizable_tensor gradient_input(x);
    resizable_tensor src_grad_cpu(x), gamma_grad_cpu(1, x.k(), x.nr(), x.nc()), beta_grad_cpu(1, x.k(), x.nr(), x.nc());
    resizable_tensor src_grad_cuda(x), gamma_grad_cuda(1, x.k(), x.nr(), x.nc()), beta_grad_cuda(1, x.k(), x.nr(), x.nc());
    rnd.fill_gaussian(gradient_input);
    src_grad_cpu = 0;
    src_grad_cuda = 0;
    cpu::layer_normalize_gradient(eps, gradient_input, means_cpu, invstds_cpu, x, gamma, src_grad_cpu, gamma_grad_cpu, beta_grad_cpu);
    cuda::layer_normalize_gradient(eps, gradient_input, means_cuda, invstds_cuda, x, gamma, src_grad_cuda, gamma_grad_cuda, beta_grad_cuda);
    DLIB_TEST(max(abs(mat(src_grad_cpu) - mat(src_grad_cuda))) < 1e-5);
    DLIB_TEST(max(abs(mat(gamma_grad_cpu) - mat(gamma_grad_cuda))) < 1e-5);
    DLIB_TEST(max(abs(mat(beta_grad_cpu) - mat(beta_grad_cuda))) < 1e-5);
#endif
}

In the first part (before the #ifdef DLIB_USE_CUDA), we check that layer normalization actually does what it says on the CPU: it normalizes each sample to zero mean and unit standard deviation.

In the second part, I compute the CUDA version and check that it matches the CPU version. That works. Then I compute the gradients on both the CPU and the GPU and check that they are equal.

Both of these checks pass. However, the following test only passes on the CPU, not on the GPU:

dlib/dlib/test/dnn.cpp

Lines 2007 to 2012 in 46e59a2

{
    print_spinner();
    layer_norm_ l;
    auto res = test_layer(l);
    DLIB_TEST_MSG(res, res);
}

I have stared at both implementations for a while and I can't see what I am doing wrong.
If someone could take an extra look, I'd appreciate it.
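
For what it's worth, as far as I can tell test_layer checks a layer's backward pass against numeric finite differences of its forward pass. A rough standalone sketch of the same idea against the low-level cpu:: routines (just an illustration reusing the same headers as the test above, not dlib's actual test_layer) would help narrow down which implementation is off:

// Check cpu::layer_normalize_gradient against central differences.
// Scalar loss L = sum over all elements of gradient_input * y, so dL/dy == gradient_input.
resizable_tensor x(4, 2, 2, 4);
tt::tensor_rand rnd(0);
rnd.fill_uniform(x);
resizable_tensor gamma(1, x.k(), x.nr(), x.nc()), beta(1, x.k(), x.nr(), x.nc());
gamma = 1;
beta = 0;
const float eps = 1e-5;
resizable_tensor y(x), means(x.num_samples()), invstds(x.num_samples());
cpu::layer_normalize(eps, y, means, invstds, x, gamma, beta);
resizable_tensor gradient_input(x);
rnd.fill_gaussian(gradient_input);
resizable_tensor src_grad(x), gamma_grad(1, x.k(), x.nr(), x.nc()), beta_grad(1, x.k(), x.nr(), x.nc());
src_grad = 0;
cpu::layer_normalize_gradient(eps, gradient_input, means, invstds, x, gamma, src_grad, gamma_grad, beta_grad);
const float delta = 1e-2;
for (size_t i = 0; i < x.size(); ++i)
{
    // Perturb one input element, recompute the forward pass, and form a central difference.
    const float old_value = x.host()[i];
    x.host()[i] = old_value + delta;
    cpu::layer_normalize(eps, y, means, invstds, x, gamma, beta);
    const float loss_plus = sum(pointwise_multiply(mat(gradient_input), mat(y)));
    x.host()[i] = old_value - delta;
    cpu::layer_normalize(eps, y, means, invstds, x, gamma, beta);
    const float loss_minus = sum(pointwise_multiply(mat(gradient_input), mat(y)));
    x.host()[i] = old_value;
    const float numeric = (loss_plus - loss_minus) / (2 * delta);
    // The analytic and numeric gradients should agree to a loose tolerance in float.
    DLIB_TEST(std::abs(numeric - src_grad.host()[i]) < 1e-2);
}
// Running the analogous check against cuda::layer_normalize_gradient should show which side is wrong.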

Steps to Reproduce

Just run the test suite with CUDA enabled.

Anything else?

No response

davisking (Owner) commented

Yeah I noticed this a bit ago but didn't figure it out when I looked. Need to look again though and figure it out. Although maybe you will figure it out first :) Must just be some subtle typo somewhere.

arrufat (Contributor, Author) commented Dec 29, 2023

Yes, I will try to figure it out. I will check what test_layer is doing to see why it's not passing... The CPU version passes but the GPU version doesn't, and yet the CPU/GPU equality checks pass too, which is really odd.
Maybe rewriting the whole thing from scratch is easier than trying to find the typo, haha.

davisking (Owner) commented

Yeah hard to say, bugs can be anywhere and you never know until after the fact 🤷 :D

arrufat (Contributor, Author) commented Dec 31, 2023

It's really weird. If I change this line:

resizable_tensor x(2, 3, 4, 5);

to the size that test_layer uses

resizable_tensor x(4, 2, 2, 4);

then this check fails: the values are entirely different. But with the previous size, the values are the same. (A reference computation to tell which side is wrong is sketched below.)

DLIB_TEST(max(abs(mat(gamma_grad_cpu) - mat(gamma_grad_cuda))) < 1e-5);
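
Since only the gamma gradient mismatches at this shape, one way to tell which implementation is off is to compute that gradient directly from its definition: dL/dgamma(k,r,c) is the sum over samples n of gradient_input(n,k,r,c) * x_hat(n,k,r,c), where x_hat is the normalized input; with gamma = 1 and beta = 0, x_hat is just the forward output y_cpu. A rough sketch, reusing the tensors from the test above:

// Reference gamma gradient computed directly from the definition above.
resizable_tensor gamma_grad_ref(1, x.k(), x.nr(), x.nc());
gamma_grad_ref = 0;
for (long n = 0; n < x.num_samples(); ++n)
{
    for (long k = 0; k < x.k(); ++k)
    {
        for (long r = 0; r < x.nr(); ++r)
        {
            for (long c = 0; c < x.nc(); ++c)
            {
                // gamma = 1 and beta = 0, so y_cpu holds the normalized input x_hat.
                const float x_hat = y_cpu.host()[tensor_index(y_cpu, n, k, r, c)];
                gamma_grad_ref.host()[tensor_index(gamma_grad_ref, 0, k, r, c)] +=
                    gradient_input.host()[tensor_index(gradient_input, n, k, r, c)] * x_hat;
            }
        }
    }
}
// Whichever of gamma_grad_cpu / gamma_grad_cuda disagrees with this reference is the buggy one.
DLIB_TEST(max(abs(mat(gamma_grad_cpu) - mat(gamma_grad_ref))) < 1e-5);
DLIB_TEST(max(abs(mat(gamma_grad_cuda) - mat(gamma_grad_ref))) < 1e-5);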

arrufat (Contributor, Author) commented Feb 14, 2024

I will tackle this again at some point. Sometimes, when things like this happen and I have absolutely no idea why, I question myself a lot...

davisking (Owner) commented

Nah, everyone thinks that from time to time. Don't sweat it :)

davisking added the bug label Feb 14, 2024
pfeatherstone (Contributor) commented Feb 25, 2024

I could be naive here, but is there a reason why LayerNorm isn't using cuDNN?
Would cudnnNormalizationForwardInference, cudnnNormalizationForwardTraining and cudnnNormalizationBackward work?
It looks like those functions can be used for batch norm, layer norm and group norm.

arrufat (Contributor, Author) commented Feb 25, 2024

Oh, I wasn't aware it was possible to do Layer Normalization with cuDNN.
According to the documentation, cudnnNormalizationForwardTraining is deprecated in v9: https://docs.nvidia.com/deeplearning/cudnn/api/cudnn-ops-library.html#cudnnnormalizationforwardtraining
Could you please show me how to use it for LayerNorm? I can't see any mention of it in the API.

It seems odd that this layer, now used in most Transformer-based networks, has no cuDNN implementation…

pfeatherstone (Contributor) commented

Somewhere in the docs I read that it could be used for multiple types of normalization...
I agree, it's hard to believe cuDNN doesn't have first-class support for it. Maybe libraries like PyTorch, GGML, etc. have moved away from cuDNN and just use vanilla CUDA or https://github.com/NVIDIA/cccl to write their kernels from scratch. I don't know.
Also, a lot of Transformers use different normalization functions now, like RMSNorm for example.

arrufat (Contributor, Author) commented Feb 25, 2024

Ah, maybe you're referring to this? https://docs.nvidia.com/deeplearning/cudnn/api/cudnn-ops-library.html#cudnnnormmode-t
That's for the FC or Conv mode of Batch Norm, which dlib already uses.
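
For reference, cudnnNormMode_t only distinguishes the two batch-norm-style modes (paraphrased from the cuDNN docs; the exact definition lives in the cuDNN headers, so treat this as a sketch):

typedef enum {
    CUDNN_NORM_PER_ACTIVATION, // per-activation statistics, intended for use after fully connected layers
    CUDNN_NORM_PER_CHANNEL,    // per-channel statistics over N and the spatial dims, intended for use after convolutions
} cudnnNormMode_t;

So there doesn't seem to be a mode that normalizes over C×H×W within a single sample, which is what layer norm needs.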
