Trainer does not expect keyword argument 'gpus' #1

Open
joelmeyerson opened this issue Apr 8, 2023 · 2 comments

Comments

@joelmeyerson

Hi Mohammed and Yeqing, this is really exciting work! Thanks for sharing the code and providing nice documentation. I encountered a very minor issue running it, which I thought I'd mention.

To install Genie I created a fresh venv and installed the dependencies with pip install -e .. I then downloaded and cleaned the SCOPe dataset as instructed in the README, created a sub-directory in runs, and added a configuration file. When I started training I got an error. Here's the stack trace:

Traceback (most recent call last):
  File "/home/joel/git/genie/genie/train.py", line 70, in <module>
    main(args)
  File "/home/joel/git/genie/genie/train.py", line 44, in main
    trainer = Trainer(
  File "/home/joel/git/genie/venv/lib/python3.10/site-packages/pytorch_lightning/utilities/argparse.py", line 69, in insert_env_defaults
    return fn(self, **kwargs)
TypeError: Trainer.__init__() got an unexpected keyword argument 'gpus'

I believe the source of the issue is that pip installed PyTorch Lightning v2.0.1, and according to this discussion, Trainer now expects the keyword arguments devices and accelerator instead of gpus. I was able to start training by replacing this line with these lines:

devices=gpus,
accelerator='gpu',
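
For context, here is a minimal sketch of what the updated Trainer construction might look like under PyTorch Lightning 2.x. The gpus variable and the other arguments are illustrative placeholders, not the actual contents of genie/train.py:

from pytorch_lightning import Trainer

gpus = 1  # number of GPUs requested (placeholder; assumed to come from the config)

trainer = Trainer(
    devices=gpus,        # replaces the removed 'gpus' keyword
    accelerator='gpu',   # explicitly select the GPU accelerator
    max_epochs=10,       # placeholder value, not from train.py
)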

Is this issue reproducible on your end? If the issue is indeed genuine, I'd be glad to submit a PR to fix it.

@yeqinglin
Collaborator

Thanks for pointing this out. As you mentioned, the issue arises from a difference in PyTorch Lightning versions. With the latest version of PyTorch Lightning, the modification above fixes the issue.
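
If it helps, one possible (untested) way to keep train.py working across both old and new Lightning releases is to branch on the installed version. The helper name and the 2.0.0 version boundary below are assumptions, not part of the repository:

import pytorch_lightning as pl
from packaging import version

def build_trainer(gpus, **kwargs):
    # Hypothetical helper: Lightning >= 2.0 removed the 'gpus' argument
    if version.parse(pl.__version__) >= version.parse("2.0.0"):
        return pl.Trainer(devices=gpus, accelerator='gpu', **kwargs)
    # Older releases still accept 'gpus' directly
    return pl.Trainer(gpus=gpus, **kwargs)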

@joelmeyerson
Author

Thanks. I created a PR that will fix the issue.
