
run fails when ulimit is set over 1024^2 #82

Open
danpf opened this issue Jun 24, 2022 · 1 comment


danpf commented Jun 24, 2022

I usually don't use more than 1024^2 file descriptors, but I have set `ulimit -n unlimited` in the past to work around problems on machines with many CPUs.

If `ulimit -n unlimited` is set on macOS, the call to `get_max_fd()` returns INT_MAX, and the process then fails at this check:

`if (max_fd > MAX_FD_LIMIT) {`

This appears to happen only on macOS. My guess is that macOS doesn't report the actual limit, while Linux returns the real value of 1024^2.
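A minimal standalone check makes the platform difference visible. This assumes `get_max_fd()` is ultimately backed by `getrlimit(RLIMIT_NOFILE)`, which is an assumption here rather than a quote of reproc's code:

```c
#include <stdio.h>
#include <sys/resource.h>

int main(void) {
  struct rlimit limit;
  if (getrlimit(RLIMIT_NOFILE, &limit) < 0) {
    perror("getrlimit");
    return 1;
  }

  // After `ulimit -n unlimited`, the soft limit can come back as
  // RLIM_INFINITY or another huge value (this report saw INT_MAX),
  // which exceeds any fixed cap like MAX_FD_LIMIT. On Linux a
  // finite value is typically reported instead.
  if (limit.rlim_cur == RLIM_INFINITY) {
    printf("soft RLIMIT_NOFILE: unlimited (RLIM_INFINITY)\n");
  } else {
    printf("soft RLIMIT_NOFILE: %llu\n",
           (unsigned long long) limit.rlim_cur);
  }

  return 0;
}
```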

original issue:
mamba-org/mamba#1758

I don't really know what the best solution would be here, as I'm not a macOS expert... :/

@bbannier

We ran into this as well. The problem with the current handling is that it's not at all clear that reproc is failing rather than the called process.

We currently work around this by setting a lower rlimit whenever we detect a limit reproc won't accept, which is far from ideal. I wonder if the code here could be restructured to remove that arbitrary limit inside reproc, e.g. by using close_range(2), as sketched below.
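A rough sketch of the idea, not reproc's actual code: the helper name `close_inherited_fds` is hypothetical, and `close_range(2)` only exists on Linux (kernel >= 5.9, glibc >= 2.34) and FreeBSD, so for simplicity the sketch gates on Linux and keeps a fallback loop elsewhere:

```c
// Sketch of the close_range(2) suggestion. Hypothetical helper,
// not reproc's current implementation.
#define _GNU_SOURCE
#include <limits.h>
#include <unistd.h>

static int close_inherited_fds(unsigned int min_fd) {
#if defined(__linux__)
  // One call closes everything from min_fd upward, independent of
  // how high (or "unlimited") RLIMIT_NOFILE is set.
  return close_range(min_fd, UINT_MAX, 0);
#else
  // Fallback for platforms without close_range (e.g. macOS): loop,
  // but clamp the bound so an unlimited rlimit cannot make this
  // iterate toward INT_MAX. 1024 * 1024 mirrors the cap from the
  // issue title.
  long max_fd = sysconf(_SC_OPEN_MAX);
  if (max_fd < 0 || max_fd > 1024 * 1024) {
    max_fd = 1024 * 1024;
  }
  for (long fd = min_fd; fd < max_fd; fd++) {
    close(fd);
  }
  return 0;
#endif
}
```

On macOS specifically the fallback would still need some clamped bound, so the cap can't simply be deleted everywhere, but with close_range it would stop mattering on Linux and the failure mode reported here could be avoided.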
