The "pool" argument is a little confusing #5
Thanks for picking this up. Looks like the … I'll add an explicit check for …
Sounds good!
You might also consider just deferring all of this to …
For the moment I will just implement the 'quick-fix' version and maybe deprecate the … There definitely seem to be some advantages to …

The main effort (and I haven't considered it much) would just be converting the nested submits to the flat … Also it is quite convenient to supply other executors, e.g. mpi4py.futures.MPIPoolExecutor, directly without implementing a full joblib backend.
Quick-fix in this commit, by the way, but I might leave this open as a reminder to consider …
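The explicit check mentioned earlier in the thread might look something like the following sketch (the helper name `check_pool` is hypothetical, not the package's actual code), assuming the goal is simply to reject pool objects that don't follow the `concurrent.futures.Executor` API:

```python
from concurrent.futures import ThreadPoolExecutor


def check_pool(pool):
    # Hypothetical sketch: duck-type on `submit`, which every
    # concurrent.futures.Executor provides. multiprocessing.pool.ThreadPool,
    # by contrast, offers `apply_async` but no `submit`, so it is rejected.
    if not hasattr(pool, "submit"):
        raise TypeError(
            "'pool' should follow the concurrent.futures.Executor API, "
            "e.g. ThreadPoolExecutor, got {!r}".format(type(pool))
        )
    return pool
```

Duck-typing on `submit` rather than `isinstance` checks would keep third-party executors such as `mpi4py.futures.MPIPoolExecutor` usable without special-casing them.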
Awesome, thanks @jcmgray! Did I mention how handy I find your package? :) You could also just use …
That's good to hear! All very much enabled by … I will investigate changing the default executor to … With regard to

```python
with joblib.parallel_backend('dask'):
    h.harvest_combos(combos, parallel='joblib')
```

if this is (or becomes) a widespread pattern for specifying parallelisation.
It actually needs the `concurrent.futures.Executor` API. I tried to pass in a thread pool from multiprocessing and it didn't work.
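The incompatibility described here can be seen directly: the two standard-library thread pools expose different interfaces, and only `concurrent.futures.ThreadPoolExecutor` has the `submit()`/`Future` API.

```python
from concurrent.futures import ThreadPoolExecutor
from multiprocessing.pool import ThreadPool

# concurrent.futures.Executor API: submit() returning a Future.
with ThreadPoolExecutor(max_workers=2) as executor:
    future = executor.submit(pow, 2, 10)
    print(future.result())  # 1024

# multiprocessing's thread pool has a different interface entirely:
# apply_async() returning an AsyncResult, and no submit() method.
pool = ThreadPool(2)
print(hasattr(pool, "submit"))               # False
print(pool.apply_async(pow, (2, 10)).get())  # 1024
pool.close()
pool.join()
```

So any code written against `Executor.submit` will fail with a `multiprocessing` pool even though both objects can run the same work.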