I have an AWS SageMaker instance with 8 GPUs, each with 32 GB of memory. However, when I tried to train a SimpleT5 model for a text summarization task with large parameter settings, I hit a CUDA out-of-memory error: a single GPU's 32 GB was not enough for the job. Could you please help me resolve this by training the model with data parallelism across the GPUs, or by any other suitable method?
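For context on what data parallelism does and does not buy you: each GPU holds a full copy of the model, processes a shard of the batch, and the replicas average their gradients so every copy applies the same update. The sketch below is a toy, pure-Python illustration of that averaging semantics (no GPUs or libraries involved); the model is a single scalar weight fit by least squares, and names like `data_parallel_step` and `num_replicas` are illustrative, not part of SimpleT5's API.

```python
# Toy sketch of data-parallel training semantics: N "replicas" each compute a
# local gradient on their shard of the batch, the gradients are averaged (the
# all-reduce step), and every replica applies the identical update.

def local_gradient(w, shard):
    # d/dw of the mean squared error 0.5 * (w*x - y)**2 over this shard
    return sum((w * x - y) * x for x, y in shard) / len(shard)

def data_parallel_step(w, batch, num_replicas, lr=0.01):
    shard_size = len(batch) // num_replicas
    shards = [batch[i * shard_size:(i + 1) * shard_size]
              for i in range(num_replicas)]
    grads = [local_gradient(w, s) for s in shards]  # each replica's backward pass
    avg_grad = sum(grads) / num_replicas            # all-reduce (average)
    return w - lr * avg_grad                        # same update on every replica

batch = [(x, 2.0 * x) for x in range(1, 9)]  # 8 samples, true weight = 2.0
w = 0.0
for _ in range(200):
    w = data_parallel_step(w, batch, num_replicas=4)
print(round(w, 3))  # → 2.0
```

With equal-sized shards, the averaged gradient equals the full-batch gradient, which is why data-parallel training matches single-device training. In practice this machinery is provided by PyTorch's `DistributedDataParallel` (SimpleT5 is built on PyTorch Lightning, which wraps it). Note one caveat relevant to your error: plain data parallelism still stores the whole model, its gradients, and optimizer state on every GPU, so it does not by itself fix an OOM caused by the model being too large for 32 GB; for that you would look at sharded approaches such as DeepSpeed ZeRO or fully sharded data parallel, or at gradient accumulation with a smaller per-device batch.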