Just like #2864 for Kubernetes, the Slurm batch system ends up using the base AbstractBatchSystem version of check_resource_request(), which imposes the various maxes as per-job limits rather than as overall limits on everything handed to the backing scheduler at once.

Since Slurm doesn't actually schedule based on disk space, and especially not based on disk space on shared filesystems you might be using for the workdir if you have really big files, you may need a way to cap the total disk space used by in-flight jobs at the Toil level in order to run a workflow at all.

We should maybe move the buffering system from #4356 out of the Kubernetes batch system and into the base AbstractBatchSystem.
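The aggregate-limit idea could look something like the sketch below: instead of rejecting any single job that exceeds a max, jobs are buffered until the total disk claimed by in-flight jobs fits under a cap. This is a hypothetical illustration only; the class and method names are made up and do not reflect Toil's actual batch system API or the implementation in #4356.

```python
from collections import deque


class AggregateDiskLimiter:
    """Buffer jobs so that the *total* disk of in-flight jobs stays
    under a cap, rather than enforcing the cap per job.
    (Illustrative sketch; not Toil's real API.)"""

    def __init__(self, max_total_disk):
        self.max_total_disk = max_total_disk
        self.in_flight_disk = 0   # disk currently claimed by issued jobs
        self.waiting = deque()    # (job_id, disk) pairs held back

    def try_issue(self, job_id, disk, issue_fn):
        # Issue the job now if it fits under the aggregate cap;
        # otherwise buffer it until enough disk is released.
        if self.in_flight_disk + disk <= self.max_total_disk:
            self.in_flight_disk += disk
            issue_fn(job_id)
            return True
        self.waiting.append((job_id, disk))
        return False

    def job_finished(self, disk, issue_fn):
        # Release the finished job's disk, then drain as much of the
        # buffer as now fits, in FIFO order.
        self.in_flight_disk -= disk
        while self.waiting:
            job_id, d = self.waiting[0]
            if self.in_flight_disk + d > self.max_total_disk:
                break
            self.waiting.popleft()
            self.in_flight_disk += d
            issue_fn(job_id)
```

With a cap of 100 units, a 60-unit job issues immediately, a second 60-unit job is buffered, and finishing the first releases the second; the same pattern generalizes to memory or cores if Slurm isn't enforcing them either.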
┆Issue is synchronized with this Jira Story
┆Issue Number: TOIL-1537