Utilizing multiple gpus with different memory to full use #473

Open
suhaspillai opened this issue Jun 12, 2017 · 0 comments

suhaspillai commented Jun 12, 2017

I am running my network on 2 GPUs using DataParallelTable. The following is the snippet:

require 'cudnn'
require 'cutorch'
require 'cunn'

function loadNetworkParallel(net, nGPU)
  if nGPU > 1 then
    -- Replicate the network on GPUs 1..nGPU; inputs are split along dim 1.
    local gpus = torch.range(1, nGPU):totable()
    -- :add() takes the table of GPU ids itself ({gpus} would wrap it twice)
    local net_parallel = nn.DataParallelTable(1):add(net, gpus)
    return net_parallel:cuda()
  elseif nGPU == 1 then
    return net:cuda()
  end
end
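For reference, I call it roughly like this (`model` and `input` here are placeholders for my actual network and mini-batch):

net = loadNetworkParallel(model, 2)  -- wrap the model for 2 GPUs
output = net:forward(input:cuda())   -- DataParallelTable scatters along dim 1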
My machine has two GPUs:

  1. GTX 980 - RAM: 4035MB
  2. GTX 1080 - RAM: 8114MB

I can run a batch size of 15, where approx. 4000MB is used by the GTX 980 and 4000MB by the GTX 1080. There is still about 4000MB left on the GTX 1080, but I think DataParallelTable splits the input equally across GPUs, so I am unable to run a batch size greater than 15. Any idea how I can make this work, or do I need to allocate inputs to each GPU manually by checking how much memory is still free?
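Concretely, the manual route I am imagining looks something like the sketch below (untested; it assumes `net` is the plain non-wrapped network, the batch dimension is 1, `input` lives on the CPU, and that memory use scales roughly linearly with sub-batch size, so the split is made proportional to each GPU's free memory):

-- Sketch: split the batch unevenly across 2 GPUs in proportion to free memory.
local free = {}
for i = 1, 2 do
  free[i] = cutorch.getMemoryUsage(i)          -- free bytes on GPU i
end
local batch = input:size(1)
local sub1 = math.floor(batch * free[1] / (free[1] + free[2]))
local subSizes = {sub1, batch - sub1}          -- e.g. {7, 14} for 4GB vs 8GB

local replicas, outputs = {}, {}
local offset = 0
for i, sz in ipairs(subSizes) do
  cutorch.setDevice(i)                          -- make GPU i current
  replicas[i] = net:clone():cuda()              -- one replica per GPU
  local sub = input:narrow(1, offset + 1, sz)   -- CPU view of this sub-batch
  outputs[i] = replicas[i]:forward(sub:cuda())  -- copy to GPU i and forward
  offset = offset + sz
end
-- For training, the gradients of the two replicas would then have to be
-- accumulated back into a single parameter copy by hand, which is exactly
-- what DataParallelTable otherwise does for me.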
