
Suggested Hardware/GPU Memory for training #91

flexwend opened this issue Sep 17, 2021 · 3 comments

@flexwend

Can anybody suggest hardware for training? I have a dataset of about 1000 files and would like to train with a batch size of 32.
I tried an AWS g4dn.xlarge instance (1 GPU, 4 vCPUs, 16 GiB of memory, 125 GB NVMe SSD, up to 25 Gbps network performance), but it does not seem to have enough memory, which reduces performance significantly.

I would appreciate any tips.
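
One common workaround for this kind of memory limit, independent of any particular instance type, is gradient accumulation: train on smaller micro-batches and only step the optimizer every few batches, so the effective batch size is still 32. A minimal PyTorch sketch with a toy model and synthetic data (placeholders, not this project's code):

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

device = "cuda" if torch.cuda.is_available() else "cpu"

# Toy model and random data standing in for the real training setup.
model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10)).to(device)
data = TensorDataset(torch.randn(1000, 128), torch.randint(0, 10, (1000,)))

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

micro_batch = 8                      # what actually fits in GPU memory
accum_steps = 32 // micro_batch      # 4 micro-batches ~ effective batch size of 32
loader = DataLoader(data, batch_size=micro_batch, shuffle=True)

optimizer.zero_grad()
for step, (x, y) in enumerate(loader):
    x, y = x.to(device), y.to(device)
    loss = loss_fn(model(x), y) / accum_steps  # scale so accumulated gradients average
    loss.backward()                            # gradients add up across micro-batches
    if (step + 1) % accum_steps == 0:
        optimizer.step()
        optimizer.zero_grad()
```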

@marutichintan

You can try this; I have tried it and the performance is good.

https://e2enetworks.com
NVIDIA RTX 8000 (48 GB GPU memory) | Ubuntu 20.04 | plan GDC.RTX-16.115GB | 16 CPUs | 115 GB RAM | 900 GB storage

@flexwend

flexwend commented Oct 8, 2021

@marutichintan Thanks for the answer and tip

Can you say how large your dataset was and how long training took for a field?

@test2a

test2a commented Nov 6, 2021

@flexwend Did you end up using something? I just found this project and I'm basically swamped; I feel overwhelmed. Did you get to use this? What server did you use, and what was the dataset-size-to-training-time ratio for the configurations listed here?

Don't we have to generate the training data just once?
