Approximate material grid smoothing (β=∞) #1951

Draft: wants to merge 34 commits into master
Conversation

@smartalecH (Collaborator) commented Feb 16, 2022:

As described in this comment, this PR attempts to perform subpixel smoothing assuming β=∞ for all actual values of β.

From a high-level (abstract) perspective, the technique seems to work (with some important exceptions). Here is the resulting geometry (plotted by pulling the trace of the ε tensor so we can explicitly see the smoothing) for various values of β:

[figures: smoothed geometry (trace of the ε tensor) for β=2, β=64, β=512, and β=∞]

There are still some artifacts that need to be resolved for lower values of β, but it's clear that the routine is at least averaging across the boundaries.
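For context, the projection applied to the filtered design field is the usual tanh projection (as in meep.adjoint's tanh_projection); in the β→∞ limit it reduces to a hard step at u=η, which is the limit this PR assumes near interfaces when computing fill fractions. A standalone sketch (not the code in this PR):

```cpp
#include <cmath>

// Standalone sketch: the usual tanh projection applied to the filtered design
// field u (mirrors meep.adjoint's tanh_projection; illustration only).
double project(double u, double beta, double eta) {
  return (std::tanh(beta * eta) + std::tanh(beta * (u - eta))) /
         (std::tanh(beta * eta) + std::tanh(beta * (1.0 - eta)));
}

// In the beta -> infinity limit the projection becomes a step at u = eta,
// which is what the smoothing routine assumes near interfaces.
double project_beta_inf(double u, double eta) { return u > eta ? 1.0 : 0.0; }
```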

There is also a bug in the recombination step (while calculating the gradient): there are occasions when get_front_object does not set mat_behind. This never happens during the forward/adjoint runs, which is unexpected since the same routines are used in both cases, so we shouldn't see different behavior...

Of course, we still need to perform convergence tests, gradient tests, etc.

Incorporates & closes #1399.

@smartalecH marked this pull request as draft on February 16, 2022 16:42.
@codecov-commenter commented Feb 16, 2022:

Codecov Report

Merging #1951 (c88ca4a) into master (55f909d) will decrease coverage by 8.24%.
The diff coverage is 54.54%.

```diff
@@            Coverage Diff             @@
##           master    #1951      +/-   ##
==========================================
- Coverage   73.29%   65.05%   -8.25%
==========================================
  Files          17       17
  Lines        4891     4916      +25
==========================================
- Hits         3585     3198     -387
- Misses       1306     1718     +412
```
| Impacted Files | Coverage Δ |
|---|---|
| python/adjoint/filters.py | 59.45% <50.00%> (-1.42%) ⬇️ |
| python/adjoint/utils.py | 48.83% <100.00%> (-32.56%) ⬇️ |
| python/adjoint/wrapper.py | 24.61% <0.00%> (-73.85%) ⬇️ |
| python/adjoint/filter_source.py | 23.07% <0.00%> (-65.39%) ⬇️ |
| python/adjoint/objective.py | 41.50% <0.00%> (-51.50%) ⬇️ |
| python/source.py | 64.06% <0.00%> (-31.25%) ⬇️ |
| python/simulation.py | 72.02% <0.00%> (-4.93%) ⬇️ |
| python/adjoint/optimization_problem.py | 53.68% <0.00%> (-3.16%) ⬇️ |
| python/verbosity_mgr.py | 65.15% <0.00%> (-1.52%) ⬇️ |
| ... and 2 more | |

@stevengj (Collaborator) commented:

See also #1399 for what to do when mat != mat_behind, even if one (or both) of them is a material grid or a user-defined material function. Basically, we want to treat the material as piecewise constant and do the subpixel averaging using the geometric fill factor.
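For reference, the piecewise-constant averaging in question, for isotropic ε₁ and ε₂ with fill fraction f and interface unit normal n̂, is (roughly) arithmetic averaging parallel to the interface and harmonic averaging normal to it:

$$\langle\varepsilon\rangle = f\,\varepsilon_1 + (1-f)\,\varepsilon_2,\qquad
\langle\varepsilon^{-1}\rangle^{-1} = \left(\frac{f}{\varepsilon_1}+\frac{1-f}{\varepsilon_2}\right)^{-1},$$

$$\tilde\varepsilon = \langle\varepsilon\rangle\left(\mathbf{I}-\hat n\hat n^{T}\right) + \langle\varepsilon^{-1}\rangle^{-1}\,\hat n\hat n^{T}.$$

(This is the isotropic special case of the Kottke smoothing scheme Meep already uses for geometric objects, i.e. the same averaging that the "Kottke algorithm" mentioned later in the thread refers to.)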

src/meepgeom.cpp Outdated
```cpp
}
return true;
}

double get_material_grid_fill(meep::ndim dim, double d, double r, double u, double eta,
```
Review comment from @smartalecH (PR author):

Analytically computes fill fraction.
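Not necessarily the exact formula used here (get_material_grid_fill also takes u and eta), but for a ball of radius r cut by a planar interface at signed distance d from its center, the geometric part of the fill fraction has a closed form, e.g.:

```cpp
#include <algorithm>
#include <cmath>

// Sketch only: fraction of a ball of radius r (interval / disk / sphere for
// dim = 1 / 2 / 3) lying on the "filled" side of a planar interface at signed
// distance d from the ball's center (d > 0: center is on the filled side).
double ball_fill_fraction(int dim, double d, double r) {
  const double pi = 3.14159265358979323846;
  double x = std::min(1.0, std::max(-1.0, d / r)); // clamp to [-1, 1]
  switch (dim) {
    case 1: return 0.5 * (1.0 + x);                                        // partial interval
    case 2: return 1.0 - (std::acos(x) - x * std::sqrt(1.0 - x * x)) / pi; // disk minus circular segment
    case 3: return 1.0 - 0.25 * (1.0 - x) * (1.0 - x) * (2.0 + x);         // sphere minus spherical cap
    default: return 0.5;
  }
}
```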

src/meepgeom.cpp Outdated
```cpp
LOOP_OVER_DIRECTIONS(normal_vec_behind.dim, k) { normal_vec_behind.set_direction(k,normal_vec_behind.in_direction(k)*nabsinv); }

double uval = matgrid_val(p_mat_behind, tp, oi, mat_behind)+this->u_p;
double d = (mat_behind->eta-uval) * nabsinv;
```
Review comment from @smartalecH (PR author):

distance to an interface
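That is, a first-order expansion of the filtered design field u along the unit normal $\hat n = \nabla u/|\nabla u|$, solved for the signed distance d at which u reaches the threshold η:

$$u(\mathbf{x}_0 + d\,\hat n) \approx u(\mathbf{x}_0) + d\,|\nabla u(\mathbf{x}_0)| = \eta
\;\;\Longrightarrow\;\;
d = \frac{\eta - u(\mathbf{x}_0)}{|\nabla u(\mathbf{x}_0)|},$$

which is exactly `(mat_behind->eta - uval) * nabsinv` above, with nabsinv = 1/|∇u|.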

src/meepgeom.cpp Outdated
```cpp
*/
double fill_front, fill_back;
vector3 normal_front, normal_back;
if (is_material_grid(mat)){
```
Review comment (Collaborator):

This should only occur if the entire pixel is filled by the material grid.

src/meepgeom.cpp Outdated

```diff
-if (maxeval == 0) {
+if (maxeval == 0 ||
+    (!get_front_object(v, geometry_tree, p, &o, shiftby, mat, mat_behind, p_mat, p_mat_behind)
```
Review comment (Collaborator):

I would change get_front_object to return the number of objects found (treating the default material as an "object"); see the sketch after this list.

  • If it's > 2, then we go to the noavg: branch.
  • If it's == 2, then even if one of the objects is a material grid or a user material, we treat it as a uniform material (by evaluating at p_mat or p_mat_behind, respectively) and do the geometric-object averaging (if the resulting mat and mat_behind are still unequal).
  • If it's == 1, then if it's a material grid we do the first-order expansion to find the fill fraction, etcetera.
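A compact sketch of that dispatch, with placeholder names just to spell out the three cases (not actual meepgeom.cpp code):

```cpp
// Hypothetical sketch of the proposed dispatch; the enum and function are
// placeholders, not code from meepgeom.cpp.
enum class avg_strategy {
  no_averaging,         // fall through to the noavg: branch
  two_object_averaging, // usual geometric-object averaging with the fill factor
  material_grid_fill    // first-order expansion of the material grid
};

avg_strategy choose_averaging(int nobj, bool single_object_is_material_grid) {
  if (nobj > 2) return avg_strategy::no_averaging;
  if (nobj == 2) return avg_strategy::two_object_averaging; // material grids / user
                                                            // materials sampled at
                                                            // p_mat / p_mat_behind
  return single_object_is_material_grid ? avg_strategy::material_grid_fill
                                        : avg_strategy::no_averaging; // uniform pixel
}
```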

@smartalecH (PR author) commented:

To debug the gradient step, make sure the forward, adjoint, and recombination steps all have the same geometry tree. See libctl display_geom_box_tree:

https://github.com/NanoComp/libctl/blob/069e736614d86d070ce9bfc222c995c818f24191/utils/geom.c#L1749

@smartalecH (PR author) commented:

Quick update:

  • The new routine seems to work well, except that the relatively high gradient error we used to see only with β=∞ now occurs at all values of β (about 1%-10% relative error on average). This is somewhat expected, since there are now "tricky" interfaces for all β. These interfaces don't play well with the finite-difference approximation of the A operator (see Analytic gradient computation Aᵤ #1916). To get better gradients, we need to backprop through the Kottke algorithm too.
  • The currently failing tests are mostly due either to API fixes I still need to make (i.e. removing certain do_smoothing flags) or to the gradient no longer being accurate enough to pass. There is, however, an issue with the artificial conductivities: those gradients seem very off. I'm assuming I'm missing something there...

@smartalecH (PR author) commented:

I've added support for analytic gradients using duals (as discussed), but there are a couple of issues.

First, our approximation is having trouble accurately representing the material grid at lower values of β. You can see the artifacts here:
[figure: the approximated material grid, showing artifacts at lower values of β]

This is somewhat expected (and not related at all to the duals recently implemented) as we always assume β=∞ near interfaces.

In addition, there are still some inaccuracies in the gradient when β≠0 (whether smoothing is on or off). It's a somewhat complicated routine with lots of logic, so a second (or third) pair of eyes would help a lot.
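For anyone following along, here is a toy illustration of the dual-number approach (this is not the duals library the PR actually uses, just a minimal forward-mode sketch of carrying a derivative through an expression like d = (η - u)/|∇u|):

```cpp
#include <cstdio>

// Minimal forward-mode dual number: value plus derivative with respect to one
// chosen design variable. Illustration only; the PR uses the external duals library.
struct dual {
  double re, du;
};
static dual operator-(dual a, dual b) { return {a.re - b.re, a.du - b.du}; }
static dual operator/(dual a, dual b) {
  return {a.re / b.re, (a.du * b.re - a.re * b.du) / (b.re * b.re)};
}

int main() {
  dual u = {0.45, 1.0};          // filtered design value and du/drho (made-up numbers)
  dual gradnorm = {2.0, 0.3};    // |grad u| and its derivative (made-up numbers)
  dual eta = {0.5, 0.0};         // projection threshold (a constant)
  dual d = (eta - u) / gradnorm; // signed distance to the interface, with its derivative
  std::printf("d = %g, dd/drho = %g\n", d.re, d.du);
}
```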

@stevengj (Collaborator) commented Mar 10, 2022:

Possible solution: do the β=∞ averaging for pixels that cross the u=η level set, but don't use ε1 and ε2.

Instead, use an "effective" ε1 and ε2 given by ε(project(u, β)) for u = η ± u₀′Δx/2. Make one of the two "effective" ε values the value at the center of the sphere, i.e. at u=u₀, and make the other the value at the edge of the sphere, i.e. at u=u₀±u₀′Δx/2, depending on whether the center is ≤ η or > η.
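One way to write this suggestion out (treat this paraphrase as an assumption): with $u_0$ the design-field value at the pixel center, $u_0' = |\nabla u_0|$, and $P_\beta$ the tanh projection,

$$\varepsilon_{1,\mathrm{eff}} = \varepsilon\!\left(P_\beta(u_0)\right),\qquad
\varepsilon_{2,\mathrm{eff}} = \varepsilon\!\left(P_\beta\!\left(u_0 \pm u_0'\,\Delta x/2\right)\right),$$

with the sign chosen to step across the $u=\eta$ level set (+ if $u_0 \le \eta$, otherwise -). The β=∞ subpixel averaging is then applied to $\varepsilon_{1,\mathrm{eff}}$ and $\varepsilon_{2,\mathrm{eff}}$ instead of to the raw ε1 and ε2.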

@stevengj (Collaborator) commented:

Regarding the merge with #1399 — in order to retain piecewise differentiability, when you have an "edge" pixel that also crosses the u=η level set, instead of sampling the MG at an arbitrary point you should probably compute the fill fraction and then do a simple weighted average of the "effective" ε1 and ε2 as defined above.

@smartalecH (PR author) commented Mar 31, 2022:

cc @oskooi: you can rerun the test with this PR.

Linking #1854.

@smartalecH (PR author) commented:

Status update:

I've fixed almost all of the existing tests. I'm still having trouble with our jax test, test_adjoint_jax.py (this particular one always gets me), and with the standard adjoint-solver test (test_adjoint_solver.py) using single precision. But I managed to remove all of the other bugs.

I've also updated test_material_grid.py to check for second-order convergence and continuous smoothing when $\beta=\infty$ (woohoo, LaTeX works on GitHub now!).

```diff
@@ -7,6 +7,8 @@
 import meep as mp
 from scipy import special
 from scipy import signal
+import skfmm
```
Review comment (Collaborator):

Is it necessary to include this additional module just for its signed-distance function (rather than writing our own or finding a similar function in, e.g., SciPy)? skfmm does not seem to be widely used, which could make installing it a challenge for some users building from source.

Reply from @smartalecH (PR author):

> could make installing it a challenge for some users building from source.

From their docs:

```
pip install scikit-fmm
```

What's challenging about this particular package vs. numpy, scipy, etc.?

It's convenient for testing, but we can remove it if it really is a challenge to install.

Reply (Collaborator):

I ran into some package conflicts during the pip install. In general, it is useful to keep the number of package dependencies, particularly smaller ones, to a minimum.

@stevengj stevengj mentioned this pull request Jul 29, 2022
@stevengj (Collaborator) commented Jul 29, 2022:

@smartalecH, I pushed a rebased version following the instructions from #2164 (comment) to fix conflicts from the reformatting.

(If for some reason there is a problem with the rebase, the original branch is saved in the inf_smoothing_save branch.)

src/meepgeom.cpp Outdated
Comment on lines 1406 to 1415
```cpp
uval = matgrid_val(p_mat, tp, oi, mat);
normal = matgrid_grad(p_mat, tp, oi, mat);
if (cvector3_re(normal).x == 0 && cvector3_re(normal).y == 0 && cvector3_re(normal).z == 0)
  /* couldn't get normal vector for this point; no averaging is needed,
     but we still need to interpolate onto our grid using duals */
  goto mgavg;
duals::duald u_prime = normalize_dual(normal);
duals::duald d = (mat->eta - uval) / u_prime;
double r = v.diameter() / 2;
fill = get_material_grid_fill(v.dim, d, r, uval, mat->eta);
```
Review comment from @smartalecH (PR author):

Ok I think I have finally narrowed down the reason why some tests are failing.

Note that matgrid_grad is especially sensitive to roundoff error (it calls several routines from both meep and libctl). There are many cases where the normal vector should be (0,0,0) (e.g. a uniform design region) but is typically off by a few $\epsilon$ in a random direction, especially when the design variables are uniform, which is the case for the failing tests.

This is especially problematic when uval ≈ mat->eta, because then we get some weird $\frac{0}{0}$ conditions (which produce dramatic swings in both the value of d and its dual derivative).

We need to think about some ways to make this part of the code more robust to roundoff error. This is a bit outside of my wheelhouse...

@stevengj, @oskooi, do you have any suggestions?

Reply from @smartalecH (PR author):

As an example, suppose I'm optimizing a uniform design field (such that rho=0.5 everywhere). Of course, I'm doing TO, so I run this through a conic filter first. Ideally, my output is also rho_tilde=0.5 everywhere, but in reality there are some perturbations of the order of $\epsilon$ scattered throughout.

Now, because of this slight variation, when I try to compute the normal vector at a particular point r (i.e. the spatial gradient of the design field) using matgrid_grad, I hope to get a vector that looks like (0,0,0) but most of the time in these cases it looks something like (3.330669e-15, 6.661338e-15, 0.000000e+00), where again, I'm off by a few $\epsilon$ in the x and y directions.

Furthermore, when I try to compute $d=\frac{\eta-\rho}{|\nabla\rho|}$, the quantity $\eta-\rho$ is also sensitive to floating point roundoff, and I occasionally get values like -2.775558e-16. Which means I have a really small number divided by an even smaller number ($|\nabla\rho|$ is pretty small), and we get some interesting behavior.

I've tried some thresholded compare statements (i.e. create a function equals(double var1, double var2) that checks if var1 and var2 are within a few $\epsilon$ of each other). Of course, these methods are a bit ad hoc and not ideal.

I'm thinking there's another way to tackle this problem that lets us avoid the issue of checking for roundoff altogether... just not sure what that is yet...

Reply (Collaborator):

Why don't we try adding a check at the end of meep::vec material_grid_grad that sets the gradient vector to zero if all of its components happen to be smaller than some threshold value? That way, we only need to modify one place in the code. (For consistency, we should also do the same thing for the other functions that compute the normal vector, i.e. material_function::normal_vector.)
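A sketch of what that check might look like (the helper name is a placeholder, and the tolerance scale is exactly the arbitrary part under discussion):

```cpp
#include <cmath>
#include <limits>

// Placeholder sketch: zero out a gradient vector whose components are all
// negligibly small relative to some characteristic scale of the design field.
void zero_if_negligible(double grad[3], double field_scale) {
  const double tol = 8 * std::numeric_limits<double>::epsilon() * field_scale;
  if (std::fabs(grad[0]) < tol && std::fabs(grad[1]) < tol && std::fabs(grad[2]) < tol)
    grad[0] = grad[1] = grad[2] = 0.0;
}
```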

Reply from @smartalecH (PR author):

I added a function, AreEqual(), which checks whether two numbers (I templated it so that it works with doubles, floats, etc.) are within $2\epsilon$ of each other. This threshold (like all thresholds) is a bit arbitrary, though...

This mitigates many of the errors I've seen so far, but there are still some issues slipping through the cracks, again due to roundoff error.
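Roughly along these lines (a sketch of my reading of the description, not the exact code on the branch):

```cpp
#include <algorithm>
#include <cmath>
#include <limits>

// Thresholded equality within ~2*epsilon, scaled by the magnitude of the inputs
// (with a floor of 1 so comparisons near zero don't degenerate to exact equality).
template <typename T> bool AreEqual(T a, T b) {
  const T scale = std::max({std::fabs(a), std::fabs(b), T(1)});
  return std::fabs(a - b) <= T(2) * std::numeric_limits<T>::epsilon() * scale;
}
```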

Reply (Collaborator):

In general, I would expect our final effective ε to be insensitive to errors like this, because in a region where ε is nearly constant, its arithmetic and harmonic means are nearly equal, so the choice of the normal vector becomes irrelevant. So we should have a well-conditioned function to differentiate.

This seems like a classic type of problem in numerical analysis: you have a nice, well-conditioned function that you want to compute, but your computational algorithm involves some intermediate step (like the computation of d) that is badly conditioned and screws up your calculation. The usual solution is to reformulate the calculation algebraically to avoid the ill-conditioned step, e.g. to replace d with d√|∇ρ|. Sometimes that requires some clever algebra; see, e.g., the exp1 and quadratic-formula examples in these notes.
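As a sanity check of the "insensitive" claim: if $\varepsilon_2 = \varepsilon_1 + \delta$ with $\delta$ small, the arithmetic and harmonic means differ only at second order, so the fill fraction $f$ (and hence the normal vector and $d$) only enters at $O(\delta^2)$:

$$\big(f\varepsilon_1 + (1-f)\varepsilon_2\big) \;-\; \left(\frac{f}{\varepsilon_1}+\frac{1-f}{\varepsilon_2}\right)^{-1}
= \frac{f(1-f)\,\delta^2}{\varepsilon_1} + O(\delta^3).$$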
