
SNAKEMAKE_PROFILE environment variable does not play well with Slurm #1398

Closed
holtgrewe opened this issue Feb 15, 2022 · 1 comment · Fixed by #1407
Labels
bug Something isn't working

Comments

@holtgrewe
Contributor

Snakemake version
v6.15.5

Describe the bug
Slurm is commonly configured to forward all environment variables. When SNAKEMAKE_PROFILE is defined in the environment, Slurm will by default forward it through sbatch. This causes the inner snakemake inside the job script to have args.profile set to a non-None value, and snakemake subsequently exits with

Error: you need to specify the maximum number of jobs to be queued or executed at the same time with --jobs or -j.
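To illustrate the forwarding, here is a minimal sketch, assuming a Slurm installation with the default --export=ALL behavior (the profile path is the one from this report):

# sbatch defaults to --export=ALL, so the submitting shell's environment,
# including SNAKEMAKE_PROFILE, is copied into the job's environment:
export SNAKEMAKE_PROFILE=$HOME/work/Development/slurm-bih
sbatch --wrap 'echo "inner job sees SNAKEMAKE_PROFILE=$SNAKEMAKE_PROFILE"'
# the inner snakemake therefore applies the profile again and exits with
# the --jobs error quoted above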

Logs

Minimal example

  • Have a Slurm cluster
  • Use the following Snakefile
rule all:
  input: 'f.1', 'f.2', 'f.3'

rule one:
  output: 'f.{i}'
  shell: 'set -x; touch {output}'
  • Use the current snakemake-profiles/slurm profile (e.g., installed as a slurm-bih profile)
  • Start with export SNAKEMAKE_PROFILE=$HOME/work/Development/slurm-bih; snakemake -j1 f.2

Additional context
Specifying --profile=$HOME/work/Development/slurm-bih works.

The generated jobscript looks sane:

#!/bin/bash
# properties = {"type": "single", "rule": "one", "local": false, "input": [], "output": ["f.2"], "wildcards": {"i": "2"}, "params": {}, "log": [], "threads": 1, "resources": {"tmpdir": "/data/gpfs-1/users/holtgrem_c/scratch/tmp/hpc-cpu-104.cubi.bihealth.org"}, "jobid": 0, "cluster": {}}
cd /data/gpfs-1/work/users/holtgrem_c/Development/snakemake-example && \
/data/gpfs-1/users/holtgrem_c/work/miniconda3/envs/snakemake-dev/bin/python3.8 \
-m snakemake f.2 --snakefile /data/gpfs-1/work/users/holtgrem_c/Development/snakemake-example/Snakefile \
--force --cores all --keep-target-files --keep-remote --max-inventory-time 0 \
--wait-for-files '/data/gpfs-1/work/users/holtgrem_c/Development/snakemake-example/.snakemake/tmp.oonr6sp8' --latency-wait 60 \
--attempt 1 --force-use-threads --scheduler greedy \
--wrapper-prefix https://github.com/snakemake/snakemake-wrappers/raw/ \
--allowed-rules one --nocolor --notemp --no-hooks --nolock --scheduler-solver-path /data/gpfs-1/users/holtgrem_c/work/miniconda3/envs/snakemake-dev/bin \
--mode 2 --default-resources "tmpdir=system_tmpdir" && exit 0 || exit 1
@holtgrewe added the bug label Feb 15, 2022
@holtgrewe
Contributor Author

I suggest resolving this by removing the environment variable from the environment passed to the args.cluster submit command.
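Until that lands, a user-side workaround in the same spirit would be a hypothetical submit wrapper (not part of the official profile) that strips the variable before calling sbatch:

#!/bin/bash
# hypothetical slurm-submit wrapper: drop SNAKEMAKE_PROFILE from the
# environment so sbatch cannot forward it into the job script
unset SNAKEMAKE_PROFILE
exec sbatch "$@"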

johanneskoester added a commit that referenced this issue Feb 18, 2022
* feat: cluster sidecar

* fix: do not pass SNAKEMAKE_PROFILE into cluster-submit (#1398)

Co-authored-by: Johannes Köster <johannes.koester@tu-dortmund.de>