Replies: 2 comments 2 replies
-
#1427 is relevant and possibly should be completed first
-
Q. Does …
-
As @graeme-winter has noted, on fast computers with many cores, like the com17 nodes at DLS, the limiting factor in overall run times for DIALS jobs seems to be the refinement step.

We have looked at this in the past and concluded that there wasn't any one obvious area in which the code could be sped up, but that was with an older version of `dials.refine`, before the current handling of choosing reflections for multi-turn data sets was introduced. There are also a number of infrequently changed options that may change the execution time for better or for worse, such as `nproc`, `engine`, `sparse`, `gradient_calculation_blocksize` and, importantly, various combinations of these. The appropriate settings depend very much on the shape of the data, so the best settings for refinement of large numbers of XFEL stills are not the same as the best settings for scan-varying refinement of multi-turn, highly multiple synchrotron scans.

I propose that we take the current state of the `dials.refine` code and systematically explore the settings to see whether the optimum choices are being made at DLS for the type of data set we currently consider slow. It would be useful to identify a particular data set and a particular computer on which to run these tests. Once the optimum settings have been identified, it would be interesting to try profiling the `dials.refine` code again (@ndevenish 👀).

Shall we set aside some time to do this as a group activity?