
Weird/inconsistent behavior with constant lhs indexing inside @turbo loop #513

Open

melyso opened this issue Oct 26, 2023 · 2 comments

melyso commented Oct 26, 2023

I've noticed some weird and inconsistent behavior when assigning to a fixed (constant) index of an array inside a set of @turbo loops.

The following:

using LoopVectorization

X = ones(5, 5)

@turbo for j in axes(X,2)
    for i in axes(X,1)
        X[1,1] = 0
    end
end

will throw ERROR: BoundsError: attempt to access 0-element Vector{Int64} at index [1] (which makes no sense at face value, since no 0-element Vector{Int64} appears anywhere in the code).
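
For comparison, the same loop nest without @turbo runs fine and touches only X[1,1], which suggests the error originates in the macro's transformation rather than in the loop itself:

X = ones(5, 5)

for j in axes(X,2)
    for i in axes(X,1)
        X[1,1] = 0   # redundantly stores 0 to the same element
    end
end

# X[1,1] == 0.0; every other entry is still 1.0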

However, this version, which should be semantically identical,

X = ones(5, 5)

@turbo for j in axes(X,2)
    for i in axes(X,1)
        X[1,j-j+1] = 0
    end
end

runs without complaining, with X becoming

0.0  1.0  1.0  1.0  1.0
1.0  1.0  1.0  1.0  1.0
1.0  1.0  1.0  1.0  1.0
1.0  1.0  1.0  1.0  1.0
1.0  1.0  1.0  1.0  1.0

as expected.
Perhaps the more worrisome example, however, is this (again, seemingly identical) variant,

X = ones(5, 5)

@turbo for j in axes(X,2)
    for i in axes(X,1)
        X[i-i+1,1] = 0
    end
end

which runs without complaining, but with X becoming

0.0  1.0  1.0  1.0  1.0
0.0  1.0  1.0  1.0  1.0
1.0  1.0  1.0  1.0  1.0
1.0  1.0  1.0  1.0  1.0
1.0  1.0  1.0  1.0  1.0

which is just wrong: X[2,1] has been zeroed as well, even though i-i+1 is always 1.

What to do? Is there some reason that stores to constant indices shouldn't be allowed inside these loops? And can I be certain that the add/subtract "trick" will work as intended for a specific index? After a quick check with a higher-dimensional array, it seems to misbehave only when I choose the first/innermost index, but I'd like to know for sure that this is the case.
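
For the minimal example, at least, one safe pattern (a sketch on my end, assuming the constant store is genuinely loop-invariant; Y and the doubling are just stand-ins for real work) is to hoist the store out of @turbo so that every store inside the macro is indexed by the loop variables:

X = ones(5, 5)
Y = similar(X)

X[1,1] = 0  # loop-invariant store, hoisted out of the @turbo loop

@turbo for j in axes(X,2)
    for i in axes(X,1)
        Y[i,j] = 2 * X[i,j]  # stand-in for work indexed by i and j
    end
end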

chriselrod (Member) commented

Related: #331

melyso (Author) commented Oct 27, 2023

I see! Of course this example is a little contrived in order to be "minimal" in some sense. In the real application, the constant store updates the weights of a relatively small kernel before an interpolation step; it's an attempt to get around the "only one loop per level" limitation, roughly as in the sketch below. But I guess I'll wait patiently for LoopModels.jl in that regard!
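
Roughly, the shape of it is something like the following (a hypothetical sketch; w, u, and out stand in for the real kernel weights, input, and output), with the weight update and the interpolation split into two consecutive @turbo loops instead of two loop nests at the same level inside one:

using LoopVectorization

w = rand(3)        # hypothetical kernel weights
u = rand(100)      # hypothetical input samples
out = zeros(98)    # output, length(u) - length(w) + 1

# First @turbo loop: update the kernel weights.
@turbo for k in eachindex(w)
    w[k] = w[k] / 2  # stand-in for the real weight update
end

# Second @turbo loop: apply the kernel, convolution-style; every
# store is indexed by the loop variables, so the issue above is avoided.
@turbo for i in eachindex(out)
    s = 0.0
    for k in eachindex(w)
        s += w[k] * u[i+k-1]
    end
    out[i] = s
end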
