Optimize PARSynthesizer's performance #1965
Reported Example 1: Out-of-regular-memory error #1952 by @prupireddy
"I find this particularly surprising given that I am running this on a machine with 128 GB of RAM and I just restarted it."

Suggested Workaround: My recommendation would be to sample the data to reduce the footprint. You can either use fewer rows per sequence or fewer sequences overall. Start with a much smaller sample than you think you need (perhaps 5% of your data) and then increase it by 5% at a time to improve the quality of the data generated by the model.
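For anyone who wants to try the sampling workaround, here is a minimal sketch of subsampling whole sequences before fitting. It uses only pandas; the column name `sequence_id` and the helper `sample_sequences` are illustrative, not part of the SDV API — substitute your own sequence key column.

```python
import pandas as pd

def sample_sequences(data: pd.DataFrame, sequence_key: str,
                     frac: float, seed: int = 0) -> pd.DataFrame:
    """Keep a random fraction of the sequences, retaining every row
    of each kept sequence (so sequences stay intact for PAR)."""
    ids = data[sequence_key].drop_duplicates()
    kept = ids.sample(frac=frac, random_state=seed)
    return data[data[sequence_key].isin(kept)]

# Toy data: 20 sequences of 10 rows each (stand-in for your real data)
toy = pd.DataFrame({
    "sequence_id": [i for i in range(20) for _ in range(10)],
    "value": range(200),
})

# Start with a 5% sample of the sequences, as suggested above
subset = sample_sequences(toy, "sequence_id", frac=0.05)
```

You would then fit the synthesizer on `subset` instead of the full table, increasing `frac` in 5% steps until memory or quality limits are hit.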
Reported Example 2: Out-of-CUDA-memory error https://sdv-space.slack.com/archives/C01GSDFSQ93/p1713451980542979 by Isaac (Slack)

Use Case: PAR for forecasting time series
Attempted Workarounds:
Example Code (Srini):
I recently ran into the problem in Example 1. The way I solved it was to modify the
The explanation of
Problem Description
A number of SDV users have run into performance issues when using PARSynthesizer with their data. The issues usually manifest as regular out-of-memory errors or CUDA out-of-memory errors. Other times, it just takes a long time to train the model.
I'm creating this thread to collect all of these examples from the community so the SDV core team has the context they need to understand and improve the performance of PARSynthesizer.
For anyone using SDV PARSynthesizer, please add new examples of performance issues to this thread!