
refactor!: drop Landau (GPL) #318

Draft · wants to merge 11 commits into master

Conversation

henryiii (Member) commented May 23, 2022

Dropping GPL code in this branch. Checking CI.

Commands on Della (outside of Della, any computer with CUDA 11+ and CMake 3.18+ should be fine; if you don't have CMake, add `pip install cmake` right after the environment activation):
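As a quick pre-flight check before starting (my own sketch, not part of the PR), the CMake 3.18+ requirement can be verified with a small `sort -V` based comparison; `version_ge` is a hypothetical helper name:

```shell
# version_ge A B: succeed when version A >= version B (relies on sort -V)
version_ge() {
    [ "$(printf '%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

# Example: check the installed CMake against the 3.18 minimum
have=$(cmake --version 2>/dev/null | awk 'NR==1{print $3}')
if [ -z "$have" ]; then
    echo "CMake not found; run: pip install cmake"
elif version_ge "$have" 3.18; then
    echo "CMake $have satisfies the 3.18+ requirement"
else
    echo "CMake $have is too old; need 3.18+"
fi
```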

```shell
ml cudatoolkit/11.6

# If Minuit2 is not installed, build and install it first
git clone https://github.com/root-project/root --branch=v6-26-02
cmake -Sroot/math/minuit2 -Bbuild-minuit2 -DCMAKE_INSTALL_PREFIX=$HOME/minuit2
cmake --build build-minuit2
cmake --install build-minuit2
```
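For context, the `-DMinuit2_ROOT=$HOME/minuit2` hint below works because `find_package` honors the `<PackageName>_ROOT` variable (CMake 3.12+). A downstream project would consume this install roughly as follows (an illustrative sketch, not GooFit's actual build files; `my_fitter` is a placeholder target, while `Minuit2::Minuit2` is the target the standalone Minuit2 build exports):

```cmake
# Illustrative consumer fragment (not from GooFit)
find_package(Minuit2 REQUIRED)   # locates the install via -DMinuit2_ROOT=...
target_link_libraries(my_fitter PRIVATE Minuit2::Minuit2)
```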

```shell
git clone https://github.com/GooFit/GooFit.git --branch=hackathon-2022 --recurse-submodules
cd GooFit
python3 -m venv .venv
source .venv/bin/activate
pip install -U pip setuptools wheel
pip install -r python/examples/requirements.txt
cmake -S. -Bbuild -DGOOFIT_DEVICE=CUDA -DMinuit2_ROOT=$HOME/minuit2 -DCMAKE_CUDA_ARCHITECTURES=80 -DGOOFIT_FORCE_LOCAL_THRUST=OFF

cmake --build build -j2
# run on a GPU node
cmake --build build -t test
```
SLURM batch script:
```shell
#!/bin/bash
# GPU job

#SBATCH --job-name=goofit-job    # create a short name for your job
#SBATCH -o goofit.out            # Name of stdout output file (%j expands to jobId)
#SBATCH --nodes=1                # node count
#SBATCH --ntasks=1               # total number of tasks across all nodes
#SBATCH --cpus-per-task=1        # cpu-cores per task (>1 if multi-threaded tasks)
#SBATCH --gres=gpu:1             # number of gpus per node
#SBATCH --mem=4G                 # total memory (RAM) per node
#SBATCH --time=01:00:00          # total run time limit (HH:MM:SS)

module purge
module load cudatoolkit/11.6
source .venv/bin/activate

echo "Job $SLURM_JOB_ID execution at: `date`"

cmake --build build -t test
```

This should start up an interactive job:

```shell
salloc --nodes=1 --ntasks=1 --mem=4G --time=00:20:00 --gres=gpu:1
```
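Once the allocation is granted, the same steps as in the batch script apply inside the interactive shell (a sketch of the workflow; the module name and build directory follow the commands above, and these only run on a cluster node):

```shell
module purge
module load cudatoolkit/11.6
source .venv/bin/activate
cmake --build build -t test
```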

JuanBSLeite (Contributor) commented:

The requirements.txt file is inside the python/examples folder:

```shell
pip install -r python/examples/requirements.txt
```
