
ProtocolQMMM job takes a different POTCAR than ref job's POTCAR #52

Open
hari-ushankar opened this issue Jan 31, 2021 · 11 comments

@hari-ushankar

Hi there,

I have hit an issue while setting up a ProtocolQMMM job and thought I'd post it here. If it's not relevant here, please let me know and I can post it on the pyiron_atomistics issues page.

I have a pure bulk Al structure and I want to use a non-default POTCAR for the QM calculation in my ProtocolQMMM job.

My issue is that ProtocolQMMM takes the default POTCAR (GGA XC) even though a different POTCAR (ultrasoft pseudopotential, US-PP) was supplied in the QM reference job. Specifically, I assigned the following:

qm_ref_pureAl.potential.Al = "~/pyiron/resources/vasp/potentials/USPP/Al/POTCAR"

When I check qm_ref_pureAl's directory, it correctly picks up the ultrasoft pseudopotential from the directory I specified.

However, when the actual ProtocolQMMM job is submitted, the POTCAR file defaults to the GGA one.

I use the following for setting the path for the QM reference job:

qmmm_bulk.input.qm_ref_job_full_path = qm_ref_pureAl.path

Any tips/suggestions on how to tackle this problem would be helpful.

Thanks!

@hari-ushankar
Author

To add to this, I have also tried to set up a bulk Al + solute calculation inside ProtocolQMMM.

Again, I see that the POTCAR in the reference job's directory has both the Al and the solute parts in it.

But the POSCAR and POTCAR in the qmmm_calc_static_qm folder only contain the pure Al parts.

@liamhuber
Member

Hi Hari,

Have you tried modifying a plain VASP job's potential in the same way? Does it also display this behaviour? What if you submit the job?

At the moment the only thing I can think of is that the updated POTCAR information is not being saved in the reference job's HDF5 file, and thus not being unpacked by the protocol. If this is the case, you should see similar behaviour with a plain VASP job: create, set the potential, and run in the notebook works as you want, but create and set the potential in the notebook, then ship it off to the queue for running, reverts to the default potential. I don't use VASP so I can't say how likely this behaviour is. If the plain VASP job behaves this way then we can ship the issue over to pyiron_atomistics.

Also, what releases of pyiron_contrib and pyiron_atomistics are you using? I have a fear that your QM/MM code is on an un-merged branch of contrib and that the atomistics dependency might be outdated...
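The failure mode suspected here -- a setting applied in memory but never serialized, so it vanishes on reload -- can be sketched generically. This is plain Python with JSON standing in for pyiron's HDF5 layer; `TinyJob` and its fields are hypothetical, not pyiron API:

```python
import json
import os
import tempfile

class TinyJob:
    """Toy stand-in for a job object whose serialization misses a field."""
    def __init__(self):
        self.input = {"potential": "default-gga"}  # gets serialized
        self.potential_override = None             # in-memory only (the bug)

    def to_file(self, path):
        # Only self.input is written; the override never reaches disk.
        with open(path, "w") as f:
            json.dump(self.input, f)

    @classmethod
    def from_file(cls, path):
        job = cls()
        with open(path) as f:
            job.input = json.load(f)
        return job

job = TinyJob()
job.potential_override = "USPP"  # set in the notebook...
path = os.path.join(tempfile.mkdtemp(), "job.json")
job.to_file(path)                # ...but never packed to disk
reloaded = TinyJob.from_file(path)
print(reloaded.potential_override)  # None -- the override is lost on reload
```

If pyiron's POTCAR override behaves like `potential_override` above, any workflow that round-trips the job through its on-disk representation (queue submission, `copy_to`) would see the default potential again.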

@liamhuber
Member

I just took a look at the QM/MM protocol on the master branch; there's nothing in the QM/MM side that looks suspicious. It invokes the VASP calculation with the ExternalHamiltonian vertex. From this I think the key code snippet is this:

job = ref_job.copy_to(
    project=pr,
    new_job_name=name,
    input_only=True,
    new_database_entry=True
)

I don't have time to dig into it now, but my guess is that the POTCAR is not considered part of the input.

Could you try the following for a small unit-cell VASP job with an updated POTCAR:

  • Create and run the job in notebook (should work)
  • Create then run on cluster (should work)
  • Create, copy (exactly as above, with input_only=True) and run copy in notebook (should fail)
  • Create, copy (not as above, rather with input_only=False) and run copy in notebook (should work???)

and let us know the results.

That said, there's already a chance @sudarsan-surendralal might be able to look at the desired behaviour (PP switching) and my copy_to snippet (with input_only=True) and immediately tell us the answer.

@hari-ushankar
Author

hari-ushankar commented Feb 3, 2021

Hi again,

To answer @liamhuber 's questions:

  • Yes, the VASP job starts and runs fine on the node itself with the correct POTCAR.

  • The VASP job also starts and runs fine on the cluster, with the correct POTCAR I specify.

Oddly, both of them threw VASP errors (Error EDDDAV: Call to ZHEGV failed. Returncode = 7 1 8) in the fourth ionic iteration, but I guess that's not really pertinent to the current issue.

Regarding my code versions for pyiron and pyiron_contrib:

pyiron-5fe5379 (tagged as pyiron 0.2.17)
pyiron_contrib-b0a5d8b@protocal_memory

For the recent comment from Liam:
I'm not sure what you mean by setting `input_only=True` or `False`... do you want me to test this with a ProtocolQMMM job or a regular VASP one?

I'm guessing it is for the ProtocolQMMM job, since that involves the input_only functionality for the reference job. Also, where exactly would I be passing the input_only keyword?

My notebook cell for creating a ProtocolQMMM job looks like this:

qmmm_sol_structure_bulkAl = pr.create_job(
    ProtocolQMMM,
    'qmmm_sol_structure_bulkAl_c{}_b{}'.format(int(n_core), int(n_buffer))
)
qmmm_sol_structure_bulkAl.input.structure = bulkAl.copy() # change structure here!
qmmm_sol_structure_bulkAl.input.mm_ref_job_full_path = mm_ref.path
qmmm_sol_structure_bulkAl.input.qm_ref_job_full_path = qm_ref_Al_Pb.path #qm_ref's path
qmmm_sol_structure_bulkAl.input.seed_ids = [sol_index]
qmmm_sol_structure_bulkAl.input.shell_cutoff = s_cutoff
qmmm_sol_structure_bulkAl.input.n_core_shells = n_core
qmmm_sol_structure_bulkAl.input.n_buffer_shells = n_buffer
qmmm_sol_structure_bulkAl.input.seed_species = [solute]
qmmm_sol_structure_bulkAl.input.n_steps = n_steps
qmmm_sol_structure_bulkAl.input.f_tol = f_tol
qmmm_sol_structure_bulkAl.input.filler_width = f_width
qmmm_sol_structure_bulkAl.input.vacuum_width = v_width
........
........
........

EDIT: I understood what you mean... I'll try it out and report back ASAP!

@liamhuber
Member

Ok good, the fact it runs on the cluster means there's no foul play between setting a new POTCAR and packaging to HDF5.

For the recent comment from Liam:
I'm not sure what you mean by setting input_only=True or False.. do you want me to test this with a ProtocolQMMM job or a regular VASP one?

Just a regular job, but use the copy functionality. The protocol is massive, but your problem is specifically with the Vasp POTCAR; since all the protocol does in this context is copy a reference job and run it, I'm just trying to extract and nail down the specific part that's causing problems by mimicking this corner of the protocol behaviour.

ref_job = pr.create_job(pr.job_type.Vasp, 'to_copy')
# ...then the rest of the setup, structure, kpoints, POTCAR, etc

job = ref_job.copy_to(
    project=pr,
    new_job_name='copied',
    input_only=True,  # or False, and don't forget to use different job names each time
    new_database_entry=True
)
job.run()

@hari-ushankar
Author

Regarding your points:
Case 1

* Create and run the job in notebook (should work)

Yes, this works as expected with the USPP
Case 2

* Create then run on cluster (should work)

Yes, this works too! POTCAR is as expected..

Case 3

* Create, copy (exactly as above, with `input_only=True`) and run copy in notebook (should fail)

Case 4

* Create, copy (not as above, rather with `input_only=False`) and run copy in notebook (should work???)

Well, for cases 3 and 4 the POTCAR defaults to GGA. I tried `grep TITEL OUTCAR` and the entries correspond to the GGA-PBE XC potentials.

Also, when I look at `job_copy_to.input.potcar` in case 4 I get the following:

| Parameter | Value | Comment |
| --- | --- | --- |
| xc | GGA | LDA, GGA |
| Al | Al-gga-pbe | |
| pot_0 | ~/pyiron/resources/vasp/potentials/potpaw_PBE/Al/POTCAR | |
| pot_1 | ~/pyiron/resources/vasp/potentials/potpaw_PBE/Pb_d/POTCAR | |
| Pb | Pb_d-gga-pbe | |

Weirdly enough, I see the same table for cases 2 and 3 as well.

So I'm guessing the default potentials somehow slip into the POTCAR list and get assigned to the species when the job is submitted. But I'm curious why this only occurs for cases 3 and 4, while case 2 runs OK with the specified US-PPs.

@liamhuber
Member

@hari-ushankar, cool, this is good news: the problem is just in the copy_to line of ExternalHamiltonian, so all tests can be done with a small toy job live in a notebook -- no need to involve the protocol or submit to the queue. The copying line is critical for the protocol, but I'm sure we can find a solution by either modifying its arguments or fixing a bug in the copy command (if one exists).

As a next step please update to the latest versions of pyiron_atomistics and pyiron_base and make a minimum working example which looks roughly like this:

from pyiron_atomistics import Project
pr = Project('mwe')

def check_potcar_worked(job):
    # Whatever code you need
    if passes:
        return True
    else:
        return False

works = pr.create.job.Vasp('works')
# blah blah job setup
works.run()
print("working: {}".format(check_potcar_worked(works)))

reference = pr.create.job.Vasp('reference')
# blah blah same setup

copy1 = reference.copy_to(*some_args, **and_kwargs)
copy1.run()
print("copy 1: {}".format(check_potcar_worked(copy1)))

copy2 = reference.copy_to(*other_args, **and_or_kwargs)
copy2.run()
print("copy 2: {}".format(check_potcar_worked(copy2)))

You'll need to look at the docstring (and possibly the code) to see which args and kwargs are reasonable to explore for your copied instances. If you find a setting that works, then we'll just try updating the protocol's copy_to command. If none of them work, then you can make a new issue on pyiron_atomistics and we'll see if we can get a copied job to do what you want! :)
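One hypothetical way to fill in `check_potcar_worked`, assuming the potential can be recognised from the TITEL lines of the finished job's OUTCAR (the same `grep TITEL OUTCAR` check used earlier in this thread). The function below takes the OUTCAR text rather than a job object, so it is a sketch of the check, not pyiron API; `expected` and the sample fragment are made up for illustration:

```python
import re

def check_potcar_worked(outcar_text, expected):
    """Return True if every TITEL line in the OUTCAR text mentions the
    expected pseudopotential tag (e.g. 'US' for an ultrasoft potential,
    'PAW_PBE' for the GGA default)."""
    titel_lines = re.findall(r"TITEL\s*=\s*(.+)", outcar_text)
    return bool(titel_lines) and all(expected in line for line in titel_lines)

# Fabricated OUTCAR fragment for demonstration:
sample = """
 POTCAR:    PAW_PBE Al 04Jan2001
   TITEL  = PAW_PBE Al 04Jan2001
"""
print(check_potcar_worked(sample, expected="PAW_PBE"))  # True
print(check_potcar_worked(sample, expected="US"))       # False
```

In a real run you would read the OUTCAR from the job's working directory and pass its contents in, comparing against whichever tag your chosen POTCAR's TITEL line carries.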

@sudarsan-surendralal
Member

sudarsan-surendralal commented Feb 3, 2021

Ok good, the fact it runs on the cluster means there's no foul play between setting a new POTCAR and packaging to HDF5.

@liamhuber, I think this is still the issue. When you submit a single VASP job to the queue, the POTCAR file is already written before the submission, and therefore there wouldn't be any problem. However, in these protocol jobs, where the jobs are generated on the fly based on a ref job, they'll fail since the new POTCAR info isn't stored in the HDF5 file.

Just to confirm this, @hari-ushankar, can you verify that the last value printed in the following snippet is None:

from pyiron import Project
pr = Project("potcar_test")
pr.remove_jobs_silently()
struct = pr.create_ase_bulk("Al")
job = pr.create_job("Vasp", "job_1")
job.structure = struct
print(job.potential.to_dict())
job.potential.Al = <path to your POTCAR>
print(job.potential.to_dict())
job.calc_static()
job.run()
print(job.potential.to_dict())
job_load = pr.load("job_1")
print(job_load.potential.to_dict())

@liamhuber
Member

@sudarsan-surendralal thanks for sharing your insight! If I understand you correctly, that means that every time we use copy_to, a Vasp job has no way to carry over its modified potential.

MWE:

from pyiron import Project
pr = Project('scratch')
pr.remove_jobs_silently(recursive=True)
job = pr.create_job(pr.job_type.Vasp, 'vasp')
job.structure = pr.create_ase_bulk('Al')
job.potential.Al = 'Al_GW'
copied = job.copy_to(new_job_name='copied')
print(job.potential)
> {'Al': 'Al_GW'}
print(copied.potential)
> {'Al': None}

@sudarsan-surendralal, this seems like a fundamental problem to me -- should we move this over to atomistics or am I still missing something silly?

@hari-ushankar
Author

Hi @sudarsan-surendralal and @liamhuber,

To @sudarsan-surendralal's comment:

Just to confirm this, @hari-ushankar can you verify that the last value that gets printed in the following snippet is None

my output is the following:

{'Al': None}
{'Al': '~/pyiron/resources/vasp/potentials/USPP/Al/POTCAR'}
{'Al': '~/pyiron/resources/vasp/potentials/USPP/Al/POTCAR'}
{'Al': None}

So yes, I get None as the last output...

I had to add a line (job.input.kpoints.set_kpoints_file(size_of_mesh=[1,1,1])) to change the k-points to gamma-only.

Also, I'm still using pyiron version 0.2.17... I'm a little worried that the protocal_memory branch won't be compatible with the latest pyiron_atomistics version, so I'll stick with this setup for now.

On the other hand, I did a quick (and dirty?) fix by copying the USPP POTCAR into my potpaw-gga-pbe/Al folder. So pyiron thinks it's taking the default GGA POTCAR, but in reality it's the USPP. Things seem to be OK for now, but it would definitely be nice to fix this issue for future calculations.
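For reference, that copy trick can be sketched as shell commands. This demonstrates it in a throwaway directory; the resource layout mirrors the paths quoted earlier in the thread, and the file contents are placeholders, not real POTCARs:

```shell
# Sketch of the workaround in a throwaway directory; in a real setup RES
# would be the pyiron resource path (~/pyiron/resources/vasp/potentials).
RES=$(mktemp -d)
mkdir -p "$RES/potpaw_PBE/Al" "$RES/USPP/Al"
echo "GGA-PBE POTCAR" > "$RES/potpaw_PBE/Al/POTCAR"
echo "USPP POTCAR" > "$RES/USPP/Al/POTCAR"

# Back up the default GGA file, then overwrite it with the USPP one, so a
# default lookup of potpaw_PBE/Al/POTCAR now resolves to the USPP file:
cp "$RES/potpaw_PBE/Al/POTCAR" "$RES/potpaw_PBE/Al/POTCAR.gga.bak"
cp "$RES/USPP/Al/POTCAR" "$RES/potpaw_PBE/Al/POTCAR"
cat "$RES/potpaw_PBE/Al/POTCAR"
```

Worth keeping in mind if you try this: every job that asks for the default GGA Al potential will now silently get the USPP file, so restore the backup once the real fix lands.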

@jan-janssen
Member

@hari-ushankar Is this issue fixed?
