
Inconsistent number of threads used by the geometry optimizers #402

Open
q-posev opened this issue Apr 12, 2023 · 5 comments

Comments

@q-posev
Contributor

q-posev commented Apr 12, 2023

Describe the bug

I noticed a discrepancy in the number of threads used by the quantum chemistry code (Psi4) during geometry optimization with three optimizers (pyberny, geometric, optking) initiated via compute_procedure.

  1. only geometric correctly picks up the number of threads (8 in the example below) specified via the task_config (ncores key) argument of compute_procedure (the same holds for the memory key)
  2. berny assigns 6 threads in this example (a wild guess: I have 12 cores on my machine, so 6 = 12/2 makes sense)
  3. optking falls back to serial execution

For example, running the code from the next section produces output like the following:

Optimizing with berny
Number of threads used by berny: 6
Memory used by berny: 11.576
Optimizing with geometric
Number of threads used by geometric: 8
Memory used by geometric: 2000.0
Optimizing with optking
Number of threads used by optking: 1
Memory used by optking: 0.524

To Reproduce

import qcelemental as qcel
import qcengine

mol = qcel.models.Molecule.from_data("pubchem:water")

input_spec = qcel.models.procedures.QCInputSpecification(
   driver="gradient",
   model={
       "method": "b3lyp",
       "basis": "6-31g"
       },
   keywords={"scf_type": "df"}
)

opt_input = qcel.models.OptimizationInput(
   initial_molecule=mol,
   input_specification=input_spec,
   protocols={"trajectory": "all"},
   keywords={
       "coordsys": "tric",
       "maxiter": 50,
       "threads": 8,
       "program": "psi4"
       }
)

for optimizer in ['berny', 'geometric', 'optking']:

   print(f"Optimizing with {optimizer}")
   ret = qcengine.compute_procedure(opt_input, optimizer, task_config={"ncores": 8, "memory": "2000", "retries": 1}, raise_error=True)
   assert ret.success

   traj0 = ret.trajectory[0].dict()
   print(f"Number of threads used by {optimizer}: {traj0['provenance']['nthreads']}")
   print(f"Memory used by {optimizer}: {traj0['provenance']['memory']}")

Expected behavior

I expect the nthreads field of the provenance to be 8 in all of the aforementioned cases.

Additional context

  • qcelemental 0.25.1 via psi4 channel
  • qcengine 0.26.0 via psi4 channel
  • geometric 1.0 via pypi
  • pyberny 0.6.3 via pypi
  • OptKing 0.2.1 via psi4 channel
  • psi4 1.7+6ce35a5 via psi4 channel
@loriab
Collaborator

loriab commented Apr 16, 2023

Thanks for the informative report. I hope to have a chance to look into it soon. There's some checking that the program harnesses follow task_config, but you're right that probably no one has checked opt procedures. fwiw, I'm more concerned if psi4 isn't using the 8 threads than if the optimizers aren't. Thanks again.

@q-posev
Contributor Author

q-posev commented Apr 16, 2023

Thanks for looking into this. I can help too, but I might need some initial pointers to the possible sources of error, as I am not yet familiar with the QCEngine code base.

> fwiw, I'm more concerned if psi4 isn't using the 8 threads than if the optimizers aren't. Thanks again.

I can confirm that the wrong number of threads propagates to Psi4. It can be seen by monitoring CPU usage (e.g. with htop) during execution of the code I provided, but I first discovered it by printing the Psi4 output via the stdout field.
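For reference, here is a minimal sketch of the kind of stdout check described above. The helper name and the sample text are hypothetical, and it assumes the Psi4 output header contains a `Threads:` line (the exact header layout may vary between Psi4 versions):

```python
import re


def threads_from_psi4_stdout(stdout: str) -> int:
    """Extract the thread count from a Psi4 output header line like 'Threads: 8'."""
    match = re.search(r"Threads:\s*(\d+)", stdout)
    if match is None:
        raise ValueError("no 'Threads:' line found in Psi4 output")
    return int(match.group(1))


# Hypothetical fragment of a Psi4 output header, as returned in the
# trajectory step's stdout field when the stdout protocol is kept:
sample = """
    Process ID: 12345
    Memory:     2000.0 MiB
    Threads:    6
"""
print(threads_from_psi4_stdout(sample))  # -> 6
```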

@q-posev
Copy link
Contributor Author

q-posev commented Apr 19, 2023

OK, a brief update on my side:

  • to fix optking: I had to add extras={"psiapi": True} to the QCInputSpecification object. This looks hacky, and I assume the optking harness can be fixed to avoid it.
  • to fix berny: I had to modify the source code. The BernyProcedure.compute method receives config as an argument but never uses it internally. I will submit a PR.
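For anyone hitting this before a fix lands, the optking workaround can be sketched as follows. This is a plain-dict mirror of the reproducer's QCInputSpecification, with the extras key added; per the comment above, the assumption is that psiapi mode makes Psi4 honor the ncores/memory from task_config:

```python
# Workaround sketch: the same input specification as in the reproducer,
# with extras={"psiapi": True} added so optking drives Psi4 through its
# Python API instead of a serial subprocess call.
input_spec_data = {
    "driver": "gradient",
    "model": {"method": "b3lyp", "basis": "6-31g"},
    "keywords": {"scf_type": "df"},
    "extras": {"psiapi": True},  # the hacky switch described above
}

# In the reproducer this would be passed as:
#   input_spec = qcel.models.procedures.QCInputSpecification(**input_spec_data)
print(input_spec_data["extras"])  # -> {'psiapi': True}
```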

@loriab
Collaborator

loriab commented Aug 5, 2023

Thanks for the berny fix. I've solved the optking issue more generally (see linked PR 87), but that undoes some deliberate settings, so it's a longer-running issue. Thanks again for bringing this up -- we'll have to make it into a test case.

@q-posev
Contributor Author

q-posev commented Aug 5, 2023

Cool, thank you too!
