Running training in a loop (M1 chip) #1345
Hi, I haven't encountered this yet. The fact that your code worked fine on an Intel Mac suggests the issue likely lies in the `tensorflow-macos` and `tensorflow-metal` packages provided by Apple. If you can provide a reprex that I can reproduce on my side, I can take a look and determine whether the problem is in the upstream package or in something related to R or reticulate.
Thank you so much for your reply. Just one more question: since I want to test whether it's the CPU or the GPU of the M1 that causes the issue, is there any quick function to disable the use of the M1 GPU for training? Would something like an environment variable work for M1?
As far as I know, visibility of the M1 GPU cannot be controlled through an environment variable. The way to hide it is directly in the TensorFlow session:

```r
tf$config$get_visible_devices("CPU") |>
  tf$config$set_visible_devices()
```
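A minimal sketch of how that call might be placed in a script, assuming the `tensorflow` R package is loaded and that device visibility is changed before any tensors or ops are created (TensorFlow does not allow modifying visible devices after initialization):

```r
library(tensorflow)

# Restrict visible devices to CPUs only, hiding the Metal GPU.
# This must run before any tensors are created; otherwise
# TensorFlow raises a RuntimeError about devices already
# being initialized.
cpus <- tf$config$get_visible_devices("CPU")
tf$config$set_visible_devices(cpus)

# Confirm that only CPU devices remain visible.
print(tf$config$get_visible_devices())
```

If training then completes with the GPU hidden, that points the problem at the Metal GPU path rather than the CPU build.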
Hello,
I'm trying to repeat my training and prediction in a loop 20 times. My code worked fine on an Intel-based MacBook. However, I recently switched to an M1-based MacBook, and the loop now runs into trouble: although I don't get any errors, the program never finishes the 5th iteration of the training loop. If I reduce the loop count to 3, the loop completes without any issue. I wonder whether some memory quota is being reached and, if so, whether there is any way to raise it. I'd really appreciate the help.
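One common workaround when memory accumulates across repeated training runs (a guess at the cause here, not confirmed in this thread) is to clear the Keras backend session and trigger garbage collection at the end of each iteration. A sketch, assuming the `keras` R package is in use and that `build_model()`, `x_train`, `y_train`, and `x_test` are hypothetical stand-ins for the code in question:

```r
library(keras)

preds <- vector("list", 20)
for (i in 1:20) {
  model <- build_model()  # hypothetical helper that defines a fresh model
  model %>% fit(x_train, y_train, epochs = 10, verbose = 0)
  preds[[i]] <- predict(model, x_test)

  # Release graph/session memory before the next repeat.
  k_clear_session()
  gc()
}
```

Rebuilding the model inside the loop (rather than reusing one model object) keeps each repeat independent and lets the cleared session actually free the old graph.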