
Rounding errors when using set_weight #1374

Open
Koehlibert opened this issue Aug 24, 2023 · 2 comments

Comments

@Koehlibert

Hi,
I'm working on a project in which I have to manually set the weights of some layers of my network. However, when I do so, the resulting weights differ from the values I set (they are not merely rounded) from about the 8th significant digit onward, as seen in the screenshot.
This breaks my algorithm, and I'm curious whether anyone knows how this can happen or how it can be prevented. Since this is part of a very large project, a "minimal" working example would run to several thousand lines, so I will refrain from providing one. Perhaps someone has had a similar problem and has some insights.

[Screenshot "Genauigkeit" (German for "precision"): comparison of assigned vs. stored weight values diverging after roughly 8 digits.]

@t-kalinowski
Member

You're losing precision by converting from float64 to float32. Keras layer weights typically default to float32. You can customize that by passing dtype = 'float64' when creating the layer.
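For illustration, a minimal sketch of the cast (the layer shape and the value here are made up, not taken from the original report):

```r
library(keras)

# Weights on the R side are doubles (float64); set_weights() casts them
# to the layer's dtype, which is float32 by default.
model <- keras_model_sequential() %>%
  layer_dense(units = 1, input_shape = c(4))

w <- get_weights(model)
w[[1]][] <- 0.123456789012345       # full double precision in R
set_weights(model, w)
get_weights(model)[[1]][1, 1]       # ~0.1234568: truncated to float32

# Creating the layer in float64 preserves the digits:
model64 <- keras_model_sequential() %>%
  layer_dense(units = 1, input_shape = c(4), dtype = "float64")
set_weights(model64, w)
get_weights(model64)[[1]][1, 1]     # 0.123456789012345
```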

@t-kalinowski
Member

You can also try setting a global default with keras::k_set_floatx("float64"), but be aware that it doesn't always propagate to all TensorFlow operations, so you may still have to hunt down stray float32 conversions in a large codebase.
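A short sketch of the global setting (again with made-up layer sizes); the comment at the end points at one place float32 can still sneak back in:

```r
library(keras)

# Set the global default float type before any layers are created.
k_set_floatx("float64")
k_floatx()                          # "float64"

model <- keras_model_sequential() %>%
  layer_dense(units = 1, input_shape = c(4))
model$dtype                         # "float64": new layers pick up the default

# Caveat from above: raw TensorFlow ops can still default to float32, e.g.
# tensorflow::tf$constant(0.1) creates a float32 tensor unless you pass
# dtype = tensorflow::tf$float64 explicitly.
```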
