Corner case where indexing loses all reflections but still writes out files #2609

Open
dagewa opened this issue Feb 12, 2024 · 1 comment
dagewa commented Feb 12, 2024

This is a corner case in which indexing is attempted on a small-molecule data set using only two images. The first image has very few reflections. Initial indexing is successful, but indexes only 1 reflection on the first image and 42 on the second. After some macrocycles of refinement, the model eventually no longer indexes any reflections on the first image. At that point, refinement fails, as per #2607.

Then we end up here and remove the reflections:

except (DialsRefineConfigError, DialsRefineRuntimeError) as e:
    if len(experiments) == 1:
        raise DialsIndexRefineError(str(e))
    had_refinement_error = True
    logger.info("Refinement failed:")
    logger.info(e)
    # need to remove crystals - may be shared!
    models_to_remove = experiments.where(crystal=experiments[-1].crystal)
    for model_id in sorted(models_to_remove, reverse=True):
        del experiments[model_id]
        # remove experiment id from the reflections associated with this
        # deleted experiment - indexed flag removed below
        # note here we are acting on the table from the last macrocycle.
        # This is guaranteed to exist due to the check if len(experiments) == 1: above
        sel = self.refined_reflections["id"] == model_id
        # not the case if failure on first cycle of refinement of new xtal
        if sel.count(True):
            logger.info(
                "Removing %d reflections with id %d",
                sel.count(True),
                model_id,
            )
Because the crystal model is shared between the imagesets, this removes all reflections from both images. Surprisingly, dials.index still writes out an indexed.expt containing no models and an indexed.refl containing only unindexed reflections.
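
To make this concrete, here is a tiny standalone illustration of how the id-based removal ends up covering every indexed reflection once both experiments are gone (the table contents here are invented, and the actual removal step sits below the snippet quoted above; only the flex selection mechanics are real):

from dials.array_family import flex

# Invented two-experiment table: 1 indexed reflection on image 0, 42 on image 1.
refl = flex.reflection_table()
refl["id"] = flex.int([0] + [1] * 42)

# Both experiments share one crystal model, so both ids are removed in turn
# and nothing indexed survives.
for model_id in (1, 0):
    refl = refl.select(~(refl["id"] == model_id))

assert len(refl) == 0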

I'm not sure what the best behaviour would be in this situation, but I have quite a few examples of this occurring with this 2-image indexing test case on sparse data, so it isn't completely rare. One possibility is that indexing should continue with data from just the good image. More conservatively, we could continue to fail, but make this obvious and avoid writing out the {expt,refl} files.
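
For the conservative option, the guard I have in mind would look roughly like this (a minimal sketch, not the real dials.index code path; `experiments` and `reflections` stand in for whatever the command holds just before output, while DialsIndexError and the reflection-table flag API are the existing ones):

from dials.algorithms.indexing import DialsIndexError

# Hypothetical check before dials.index writes indexed.expt / indexed.refl.
n_indexed = reflections.get_flags(reflections.flags.indexed).count(True)
if len(experiments) == 0 or n_indexed == 0:
    raise DialsIndexError(
        "All crystal models were rejected during refinement; "
        "not writing indexed.expt / indexed.refl"
    )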

Here are the files for this specific case: 2image-indexing.zip

To run, use:

dials.index imported_2images.expt strong_2images.refl detector.fix=distance space_group="P21/n" unit_cell="7.72 11.40 14.23 90.00 92.85 90.00" restraint.phil
dagewa commented Feb 14, 2024

The use case here is a form of serial crystallography in which multiple images (maybe just 2, as in this case) are recorded from each crystal at different angles. This should help with indexing. However, with the current behaviour, if one image is bad then indexing fails, no matter how good the other image is. I think my previous suggestions were not strong enough: we should not fail in this case, but should continue refining the solution against the single good image.

This situation is detectable in dials.index because the refiner is being called with an experiment that has zero reflections (the reflections are not lost in outlier rejection; they are already absent when the refiner is set up). So the indexer has the opportunity to remove this bad experiment before refinement.
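
For what it's worth, a rough sketch of that check, placed before the refiner is constructed (illustrative only: `experiments`, `reflections` and `logger` are the objects the indexer already holds, and renumbering of the remaining ids afterwards is glossed over):

# Drop experiments that have no reflections assigned to them, rather than
# passing them to the refiner and letting refinement fail.
empty = [
    i_expt
    for i_expt in range(len(experiments))
    if (reflections["id"] == i_expt).count(True) == 0
]
for i_expt in sorted(empty, reverse=True):
    logger.info("Removing experiment %d: no reflections indexed on it", i_expt)
    del experiments[i_expt]
# (the "id" column and experiment identifiers would then need renumbering so
# that the reflection table stays consistent with the remaining experiments)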
