I was wondering what kinds of runtimes others have encountered in practical applications of this topic model (leaving aside the question of choosing K). In my limited experience, the scikit-learn NMF decomposition has been extremely fast for small corpora (a matter of seconds), but it slows down drastically at higher K and on larger matrices. I currently have a model running with K=20 on a sparse matrix of 4.3 million cells, and it has been going for hours. That is significantly slower than standard LDA on the same data.
The scikit-learn documentation mentions polynomial time complexity, which would explain the large changes in execution time I have seen, and I would like to know whether this is an issue for others as well.
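For reference, here is a minimal sketch of the kind of timing comparison I have in mind, using a synthetic sparse non-negative matrix (the shape, density, and K values are illustrative assumptions, not my actual corpus):

```python
# Sketch: time scikit-learn NMF at increasing K on a synthetic
# sparse non-negative "document-term" matrix.
import time

import scipy.sparse as sp
from sklearn.decomposition import NMF

# Synthetic sparse matrix: 2000 "documents" x 200 "terms", ~1% non-zero.
# scipy.sparse.random fills non-zeros with uniform values in [0, 1),
# so the matrix is non-negative as NMF requires.
X = sp.random(2000, 200, density=0.01, format="csr", random_state=0)

for k in (5, 10, 20):
    model = NMF(n_components=k, init="nndsvd", max_iter=200, random_state=0)
    t0 = time.perf_counter()
    model.fit(X)
    elapsed = time.perf_counter() - t0
    print(f"K={k}: {elapsed:.2f}s")
```

On a real corpus one would substitute the actual document-term matrix; the point is just to see how wall-clock time grows with K while everything else is held fixed.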