
xlearn Python API's .predict method doesn't kill the created threads after execution, which leads to resource exhaustion. #363

Open
HovhannesManushyan opened this issue Aug 24, 2021 · 2 comments

Comments

@HovhannesManushyan

I was getting a strange resource-exhaustion error after running the xlearn FM model's predict method for a while.

When I profiled the process via htop, I noticed that the thread count grows by 8 on every invocation of model.predict("model/model.out", f"output/output.txt"), which exhausts resources once the number of threads reaches a critical level.

One solution I found is to invoke model.predict in a separate process via the multiprocessing module; however, this is extremely slow when model.predict needs to be invoked many times.

Is there a way to kill the created threads after the execution of the predict method has completed?

@HovhannesManushyan
Author

This problem can be worked around by building the xlearn command-line tool and then executing the binary via Python's subprocess module.

@litchi6666

We have the same problem!
