Can anybody suggest hardware for training? I have a dataset of about 1000 files and would like to train with batch size 32.
I tried an AWS g4dn.xlarge instance (1 GPU, 4 vCPUs, 16 GiB of memory, 125 GB NVMe SSD, up to 25 Gbps network performance), but it seems there isn't enough memory, which reduces performance significantly.
I would appreciate any tips.
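If upgrading the instance isn't an option, gradient accumulation is a common workaround: train with a smaller micro-batch that fits in GPU memory and accumulate gradients until the effective batch size reaches 32. Here is a minimal sketch of how to pick the split; the `accumulation_plan` helper and all memory figures are illustrative assumptions, not measurements from this project:

```python
# Hypothetical helper: given an assumed per-sample activation footprint
# and a GPU memory budget, pick a micro-batch size and the number of
# gradient-accumulation steps so the effective batch stays at the target.
def accumulation_plan(target_batch, per_sample_mb, fixed_mb, budget_mb):
    """Return (micro_batch, accum_steps) fitting within budget_mb."""
    avail = budget_mb - fixed_mb                      # memory left for activations
    micro = max(1, min(target_batch, avail // per_sample_mb))
    # Shrink the micro-batch until it divides the target batch evenly,
    # so every optimizer step sees exactly target_batch samples.
    while target_batch % micro:
        micro -= 1
    return micro, target_batch // micro

# Example with assumed numbers: a T4 with ~15 GB usable, ~6 GB taken by
# model weights + optimizer state, ~600 MB of activations per sample.
micro, steps = accumulation_plan(32, per_sample_mb=600, fixed_mb=6000, budget_mb=15000)
print(micro, steps)  # micro-batch 8, accumulated over 4 steps
```

In the training loop you would then call `loss.backward()` on each micro-batch and only step the optimizer every `steps` iterations, scaling the loss by `1/steps` so gradients match a true batch of 32.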
@flexwend, did you end up using something? I just found this project and I'm feeling overwhelmed. Which server did you use, and what dataset-size-to-training-time ratio did you see with the configs listed here?
Also, don't we only have to generate the training data once?