Sorry for never answering this; I just noticed it now.
While we have plans to enable GPU acceleration, it won't be released for at least a few months. The library is currently header-only and relies heavily on Eigen for the linear algebra. We have started discussing a large refactoring project to allow for different linear algebra backends, which would make GPU acceleration possible, but it is still at the initial stage.
For your datasets:
I would try subsampling. I faced a similar problem (200k x 50) and got really good results by picking just 5-10k observations. You can even fit multiple models on different subsamples and average them. In my experience, the current implementation unfortunately doesn't scale well beyond 5-10k observations.
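The subsample-and-average idea can be sketched with plain numpy. This is a minimal sketch under stated assumptions: the synthetic `data` array stands in for your copula-scale data set, and the commented-out `fit_copula` / `pdf` calls are hypothetical placeholders for whatever fitting and evaluation functions you use, not the library's actual API.

```python
import numpy as np

rng = np.random.default_rng(42)

# Stand-in for a large copula-scale data set (e.g. 400k x 4).
n_obs, n_dim = 400_000, 4
data = rng.uniform(size=(n_obs, n_dim))

# Draw several independent subsamples of 5-10k observations each.
n_sub, n_models = 5_000, 3
subsamples = [
    data[rng.choice(n_obs, size=n_sub, replace=False)]
    for _ in range(n_models)
]

# Fit one model per subsample, then average their outputs, e.g.:
#   models = [fit_copula(s) for s in subsamples]           # hypothetical fit call
#   density = np.mean([m.pdf(grid) for m in models], axis=0)  # hypothetical API
```

Averaging over a handful of independent subsamples both reduces the variance introduced by subsampling and keeps each individual fit within the 5-10k range where the current implementation performs well.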
For the 61-column data set, I would also set `trunc_lvl` to a fairly low value, maybe 5 initially, and check whether increasing it actually improves your results.
Hi,
I recently tried some simple examples on 400k x 4 and 400k x 61 data sets, and fitting took several hours to complete.
Is there a way or plan to enable GPU acceleration?
Alternatively, it may be easier for my data set to build up the copula incrementally over time. Does your API support any sort of "live training"?
Thanks
Kevin