
Is the div operator correct? #15

Closed
ThomasA opened this issue Jul 6, 2018 · 1 comment

ThomasA (Contributor) commented Jul 6, 2018

I am going through the code in this package in detail as part of working on #13. Looking at the operators.div operator, I notice this:

```python
x = np.concatenate((np.expand_dims(dx[0, ], axis=0),
                    dx[1:-1, ] - dx[:-2, ],
                    -np.expand_dims(dx[-2, ], axis=0)),
                   axis=0)
```

If I try to follow the logic of the lines 155-157, it seems natural to me that line 157 should be:

```python
-np.expand_dims(dx[-1, ], axis=0)),
```

as if it continued the pattern of line 156 and then subtracted the final row of the array from zero. But this reasoning does not quite hold, because in that case dx[-1, ] - dx[-2, ] is never computed anywhere, and it does not seem to make sense to skip it. Based on that, I wonder whether line 157 should instead be:

```python
np.expand_dims(dx[-1, ] - dx[-2, ], axis=0)),
```

It is quite possible I am just completely mistaken, but if there is something wrong here, the following should be checked as well:

```python
x += np.concatenate((np.expand_dims(dy[:, 0, ], axis=1),
                     dy[:, 1:-1, ] - dy[:, :-2, ],
                     -np.expand_dims(dy[:, -2, ], axis=1)),
                    axis=1)

x += np.concatenate((np.expand_dims(dz[:, :, 0, ], axis=2),
                     dz[:, :, 1:-1, ] - dz[:, :, :-2, ],
                     -np.expand_dims(dz[:, :, -2, ], axis=2)),
                    axis=2)

x += np.concatenate((np.expand_dims(dt[:, :, :, 0, ], axis=3),
                     dt[:, :, :, 1:-1, ] - dt[:, :, :, :-2, ],
                     -np.expand_dims(dt[:, :, :, -2, ], axis=3)),
                    axis=3)
```
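One observation that may help here: if dx is itself the output of a forward-difference gradient that pads its last entry with zero (a common convention, assumed here rather than taken from the source), then dx[-1, ] == 0 and the two candidate last rows coincide, since dx[-1, ] - dx[-2, ] reduces to -dx[-2, ]. A minimal 1D sketch of that assumption:

```python
import numpy as np

# A 1D signal and its forward-difference gradient, zero-padded
# at the end so the output has the same length as the input
# (assumed convention, not taken from the toolbox itself).
x = np.array([3.0, 1.0, 4.0, 1.0, 5.0])
dx = np.concatenate((x[1:] - x[:-1], [0.0]))

last_row_current = -dx[-2]           # what the code computes
last_row_proposed = dx[-1] - dx[-2]  # the suggested alternative

# With dx[-1] == 0 the two candidates are identical.
assert np.isclose(last_row_current, last_row_proposed)
```

So for inputs that actually come from such a gradient, both versions produce the same result; they differ only when dx[-1, ] is nonzero.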

nperraud (Collaborator) commented Mar 4, 2020

For sure, I understand the confusion. However, I believe the code is correct: in this toolbox, the divergence is defined as minus the transpose (i.e. the negative adjoint) of the gradient. I just added this test:

```python
# Test that 1D div op is the adjoint of grad
```

and it seems to pass...
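The adjointness claim can be checked numerically with a short sketch. This is not the toolbox's test but an independent reconstruction: it assumes the 1D gradient is a zero-padded forward difference, and reduces the concatenation under discussion to 1D. If div == -gradᵀ, then ⟨grad x, y⟩ must equal -⟨x, div y⟩ for all x, y:

```python
import numpy as np

def grad_1d(x):
    # Forward differences, padded with a zero so the output has
    # the same length as the input (assumed convention).
    return np.concatenate((x[1:] - x[:-1], [0.0]))

def div_1d(dx):
    # The construction under discussion, reduced to 1D:
    # first row kept, interior rows differenced, last row negated.
    return np.concatenate(([dx[0]], dx[1:-1] - dx[:-2], [-dx[-2]]))

rng = np.random.default_rng(0)
x = rng.standard_normal(10)
y = rng.standard_normal(10)

# Adjoint test: <grad x, y> == <x, grad^T y> == -<x, div y>.
lhs = np.dot(grad_1d(x), y)
rhs = -np.dot(x, div_1d(y))
assert np.isclose(lhs, rhs)
```

Note that dx[-1] never appearing in div_1d is consistent with this definition: the zero padding makes the last row of the gradient matrix zero, so the corresponding column of its transpose contributes nothing.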
