

How to use a Distributed Strategy to use GPU parallelism? #119

Open
GiovanniTurri opened this issue Jan 7, 2021 · 1 comment
Labels
enhancement New feature or request

Comments

@GiovanniTurri

Hi all, thanks for the great job!

Do you have any tips on how to adapt tf.distribute.MirroredStrategy for use in train_tf.py and inference.py?
I can't even distribute the dataset..

Thank you so much!

@haydengunraj
Collaborator

Unfortunately, we don't have code providing this functionality at the moment, but it would be a good enhancement for new contributors to tackle.
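
As a starting point for anyone attempting this, below is a minimal TF2-style sketch of the MirroredStrategy idiom. It is not the repo's actual train_tf.py code: the tiny Keras model and the random in-memory dataset are placeholders standing in for the real network and data pipeline, and `PER_REPLICA_BATCH` is an assumed hyperparameter. The key points it illustrates are (1) batching the dataset with the *global* batch size so the strategy can split each batch across replicas, and (2) creating all variables (model weights, optimizer slots) inside `strategy.scope()` so they are mirrored onto every GPU.

```python
import tensorflow as tf

# One replica per visible GPU; falls back to a single CPU replica if no GPUs.
strategy = tf.distribute.MirroredStrategy()
print("Number of replicas:", strategy.num_replicas_in_sync)

PER_REPLICA_BATCH = 8  # assumed per-GPU batch size, not from train_tf.py
GLOBAL_BATCH = PER_REPLICA_BATCH * strategy.num_replicas_in_sync

def make_dataset():
    # Placeholder for the real data pipeline: 64 random "images" and labels.
    # Batch with the GLOBAL size; the strategy shards each batch per replica.
    xs = tf.random.normal([64, 224, 224, 3])
    ys = tf.random.uniform([64], maxval=2, dtype=tf.int32)
    return tf.data.Dataset.from_tensor_slices((xs, ys)).batch(GLOBAL_BATCH)

with strategy.scope():
    # Variables must be created inside the scope to be mirrored across GPUs.
    model = tf.keras.Sequential([
        tf.keras.layers.GlobalAveragePooling2D(input_shape=(224, 224, 3)),
        tf.keras.layers.Dense(2),
    ])
    model.compile(
        optimizer="adam",
        loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    )

# Keras fit() handles the per-replica distribution automatically here.
history = model.fit(make_dataset(), epochs=1, verbose=0)
```

Note that train_tf.py is session-based, so porting it would mean either migrating toward the TF2 idiom above or writing a custom loop with `strategy.experimental_distribute_dataset` plus `strategy.run`; the sketch only shows the simpler Keras path.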

@haydengunraj haydengunraj added the enhancement New feature or request label Mar 10, 2021