Potential NaNs due to div by zero when normalising #7

Open
beyarkay opened this issue Oct 15, 2022 · 0 comments
beyarkay commented Oct 15, 2022

So these lines in the update method:

for i in 0..self.data.x {
    for j in 0..self.data.y {
        // Move neuron (i, j)'s weights towards the input element, scaled by
        // the neighbourhood function g.
        for k in 0..self.data.z {
            self.data.map[[i, j, k]] += (elem[[k]] - self.data.map[[i, j, k]]) * g[[i, j]];
        }

        // Renormalise the neuron's weight vector.
        let norm = norm(self.data.map.index_axis(Axis(0), i).index_axis(Axis(0), j));
        for k in 0..self.data.z {
            self.data.map[[i, j, k]] /= norm;
        }
    }
}

were causing me issues because norm was ending up as zero; the divide by zero made self.data.map[[i, j, k]] become f64::NaN, which produced funky results later down the line (I've got NaN values in my input features).
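To make the failure mode concrete, here is a tiny standalone example (just my reading of where the NaN comes from, not something taken from the crate): if a neuron's weights are all zero, the norm is zero, and 0.0 / 0.0 evaluates to NaN, which then propagates through every later update of that neuron.

    fn main() {
        let mut weight = 0.0_f64; // a weight that happens to be zero
        let norm = 0.0_f64;       // norm of an all-zero weight vector
        weight /= norm;           // 0.0 / 0.0 is NaN
        assert!(weight.is_nan());
        println!("{weight}");     // prints "NaN"
    }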

I'm not sure what the purpose of the normalisation is. I understand that it would ensure each neuron's weights sum to 1, but I can't find where this is recommended.

On my own fork I've wrapped the normalisation in a check that norm > 0, roughly like the sketch below, and that seems to have solved the issue, although I'm not sure how valid the workaround is.
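A sketch of the guard (not the exact diff; normalise_neuron is just an illustrative helper, ndarray is assumed since the snippet above already uses it, and the norm here is the Euclidean one):

    use ndarray::Array3;

    /// Normalise the weight vector of neuron (i, j) in place, but only when
    /// its norm is finite and strictly positive, so an all-zero weight vector
    /// never turns into NaNs.
    fn normalise_neuron(map: &mut Array3<f64>, i: usize, j: usize, z: usize) {
        // Euclidean norm of the neuron's z-dimensional weight vector.
        let norm: f64 = (0..z).map(|k| map[[i, j, k]].powi(2)).sum::<f64>().sqrt();

        if norm.is_finite() && norm > 0.0 {
            for k in 0..z {
                map[[i, j, k]] /= norm;
            }
        }
        // Otherwise: leave the weights unchanged rather than writing NaNs.
    }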
