
Xtb ignores ncores #355

Open
jthorton opened this issue Mar 11, 2022 · 5 comments

Comments

@jthorton
Contributor

jthorton commented Mar 11, 2022

Describe the bug
When using xtb via QCEngine, I notice that all available cores on my machine are used despite passing a smaller limit to the compute function. This appears to be the default behaviour of xtb when OMP_NUM_THREADS or MKL_NUM_THREADS is not set in the environment. If I then export both variables set to 1 and redo the calculation with ncores=4, xtb again ignores the QCEngine input and instead respects the exported variables, running on a single core. So it would seem that xtb is not picking up the variables set here.

I also wanted to check whether this is how the resources should be set: could setting both of these variables to the ncores value lead to the use of more cores than allocated?
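For anyone hitting the same behaviour, a minimal workaround sketch: since OpenMP reads these variables when the shared library initialises, pinning them in `os.environ` before xtb is loaded should cap the thread count. The helper name `set_xtb_threads` is hypothetical, not part of QCEngine or xtb-python.

```python
import os

def set_xtb_threads(ncores: int) -> None:
    """Hypothetical helper: pin OpenMP/MKL thread counts before the xtb
    shared library is loaded, since OpenMP reads these variables at
    library initialisation time."""
    os.environ["OMP_NUM_THREADS"] = str(ncores)
    os.environ["MKL_NUM_THREADS"] = str(ncores)

# Call this before `import xtb` (or before the first compute call).
set_xtb_threads(4)
print(os.environ["OMP_NUM_THREADS"])  # -> "4"
```

Note that this mutates the process-wide environment, so it affects every OpenMP/MKL consumer in the same process, not just xtb.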

To Reproduce
A simple torsiondrive of ethane using xtb-python is attached; the files were renamed to make GitHub happy.
ethane.txt
run_td.txt

Expected behavior
Xtb should respect the number of cores passed to ncores.

Versions

Name        Version  Build           Channel
qcengine    0.22.0   pyhd8ed1ab_0    conda-forge
xtb         6.4.1    h67747af_2      conda-forge
xtb-python  20.2     py38h96a0964_3  conda-forge

@WardLT
Collaborator

WardLT commented Mar 30, 2022

Thanks for reporting this. I've also seen this behavior before, and have some ways of working around it.

grimme-lab/xtb-python#65 and your experience show that OMP_NUM_THREADS should work. I'm just not sure what's going wrong with setting the environment variables, or whether the environment context isn't being used by xtb.

Are you having any particular problem due to this bug? I'm figuring out how much we should prioritize fixing it (e.g., whether you're blocked vs. annoyed vs. just observing an issue).

@jthorton
Contributor Author

Thanks for the response; I'm glad someone else has seen this, as I thought at first it was just the way I was using xtb.

Are you having any particular problem due to this bug?

It's more in the annoyed category right now: having to remember to set OMP_NUM_THREADS on each machine I run these calculations on, otherwise we see a considerable slowdown when xtb uses all cores.

@awvwgk
Contributor

awvwgk commented Apr 20, 2022

The best choice would be to implement this in the xtb C-API at https://github.com/grimme-lab/xtb and expose the option in the harness as described in the linked thread.

@mattwthompson
Contributor

Is there no shorter path to passing this through the Python layer?

@awvwgk
Contributor

awvwgk commented Sep 19, 2022

Is there no shorter path to passing this through the Python layer?

qcng might find a way to hack around the current limitation of the xtb Python API. A cleaner solution would be to support this option in xtb itself and simply set it from the harness. Either case requires a patch for qcng, and possibly also for xtb.
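One shape the "hack around" could take, sketched here only as an illustration: if a harness launched the calculation in a child process (rather than in-process, as xtb-python does today), it could pass a per-call environment instead of mutating `os.environ` globally. The function `run_with_ncores` is a hypothetical example, not an existing QCEngine or xtb API.

```python
import os
import subprocess

def run_with_ncores(cmd: list[str], ncores: int) -> subprocess.CompletedProcess:
    """Hypothetical sketch: run a command with OMP/MKL thread counts
    pinned in the child's environment only, leaving the parent
    process environment untouched."""
    env = dict(os.environ,
               OMP_NUM_THREADS=str(ncores),
               MKL_NUM_THREADS=str(ncores))
    return subprocess.run(cmd, env=env, capture_output=True, text=True)

# Demonstrate that only the child sees the pinned value.
result = run_with_ncores(
    ["python", "-c", "import os; print(os.environ['OMP_NUM_THREADS'])"],
    ncores=2,
)
print(result.stdout.strip())  # -> "2"
```

The trade-off is process-launch overhead per calculation, which is why supporting the option directly in the xtb C-API, as suggested above, would be cleaner.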
