
About DataParallel, multi GPU #76

Open
wz991007 opened this issue May 19, 2022 · 0 comments
In the class GraphAttentionLayer, there are two parameter matrices (self.W and self.a). When I try to use multiple GPUs, the parameter matrices end up only on cuda:0, but each mini-batch is scattered across different devices. How can I solve this?
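A common cause of this symptom is creating the weight matrices as plain tensors moved to a fixed device, instead of registering them as `nn.Parameter`. `nn.DataParallel` only replicates registered parameters and buffers onto each GPU; anything created with `.cuda()` at `__init__` time stays on cuda:0. A minimal sketch (the layer shape and names here are assumptions, not the repository's exact code):

```python
import torch
import torch.nn as nn

class GraphAttentionLayer(nn.Module):
    """Minimal GAT-style layer sketch with properly registered parameters."""
    def __init__(self, in_features, out_features):
        super().__init__()
        # Register W and a as nn.Parameter so that nn.DataParallel's
        # replicate() step copies them onto every GPU alongside the
        # scattered mini-batch. A plain tensor (e.g. torch.zeros(...).cuda())
        # would NOT be replicated and would stay pinned to cuda:0.
        self.W = nn.Parameter(torch.empty(in_features, out_features))
        self.a = nn.Parameter(torch.empty(2 * out_features, 1))
        nn.init.xavier_uniform_(self.W)
        nn.init.xavier_uniform_(self.a)

    def forward(self, h):
        # Compute only from inputs and registered parameters, so every
        # replica works entirely on its own device.
        return h @ self.W

layer = GraphAttentionLayer(8, 4)
# Both matrices appear in .parameters(), which is what DataParallel replicates:
print(sum(p.numel() for p in layer.parameters()))  # 8*4 + 2*4*1 = 40
# model = nn.DataParallel(layer).cuda()  # each replica then gets its own W and a
```

If a tensor must stay non-trainable but still follow the module between devices, `self.register_buffer(...)` serves the same purpose as `nn.Parameter` for replication. Also check the forward pass for any tensor created with an explicit device at construction time; those are the usual culprits for "expected all tensors on the same device" errors under DataParallel.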
