Commit 941c606

Merge pull request #414 from NREL/aws_service_account

AWS Service Account

nmerket committed Nov 21, 2023
2 parents 24a6fa3 + 495dfec

Showing 9 changed files with 31 additions and 33 deletions.
1 change: 1 addition & 0 deletions buildstockbatch/eagle.sh
@@ -11,5 +11,6 @@ df -h

module load conda singularity-container
source activate "$MY_CONDA_ENV"
+source /shared-projects/buildstock/aws_credentials.sh

time python -u -m buildstockbatch.hpc eagle "$PROJECTFILE"
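The credentials file itself is not part of this diff. A minimal sketch of what
a file like ``aws_credentials.sh`` plausibly contains; the values below are
placeholders, not the real service account's keys:

::

    # Hypothetical sketch: sourcing a file like this exports the service
    # account's credentials into the job environment, where boto3 and the
    # AWS CLI find them automatically.
    export AWS_ACCESS_KEY_ID="<service-account-key-id>"
    export AWS_SECRET_ACCESS_KEY="<service-account-secret-key>"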
3 changes: 2 additions & 1 deletion buildstockbatch/eagle_postprocessing.sh
@@ -9,6 +9,7 @@ df -h

module load conda singularity-container
source activate "$MY_CONDA_ENV"
+source /shared-projects/buildstock/aws_credentials.sh

export POSTPROCESS=1

@@ -27,6 +28,6 @@ pdsh -w $SLURM_JOB_NODELIST_PACK_GROUP_1 "free -h"
pdsh -w $SLURM_JOB_NODELIST_PACK_GROUP_1 "df -i; df -h"

$MY_CONDA_ENV/bin/dask scheduler --scheduler-file $SCHEDULER_FILE &> $OUT_DIR/dask_scheduler.out &
-pdsh -w $SLURM_JOB_NODELIST_PACK_GROUP_1 "$MY_CONDA_ENV/bin/dask worker --scheduler-file $SCHEDULER_FILE --local-directory /tmp/scratch/dask --nworkers ${NPROCS} --nthreads 1 --memory-limit ${MEMORY}MB" &> $OUT_DIR/dask_workers.out &
+pdsh -w $SLURM_JOB_NODELIST_PACK_GROUP_1 "source /shared-projects/buildstock/aws_credentials.sh; $MY_CONDA_ENV/bin/dask worker --scheduler-file $SCHEDULER_FILE --local-directory /tmp/scratch/dask --nworkers ${NPROCS} --nthreads 1 --memory-limit ${MEMORY}MB" &> $OUT_DIR/dask_workers.out &

time python -u -m buildstockbatch.hpc eagle "$PROJECTFILE"
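Note that the worker command sources the credentials file inside the ``pdsh``
remote shell: workers start on other nodes, which do not inherit the
submitting shell's environment. An illustrative sanity check, not part of this
change, assuming ``boto3`` is installed in the environment:

::

    # Print the identity boto3 resolves from the exported credentials.
    import boto3

    identity = boto3.client("sts").get_caller_identity()
    print(identity["Arn"])  # should be the service account's ARN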
4 changes: 4 additions & 0 deletions buildstockbatch/hpc.py
@@ -656,6 +656,10 @@ def queue_post_processing(self, after_jobids=[], upload_only=False, hipri=False)
        logger.debug("sbatch: {}".format(line))

    def get_dask_client(self):
+        # Keep this, helpful for debugging on a bigmem node
+        # from dask.distributed import LocalCluster
+        # cluster = LocalCluster(local_directory="/tmp/scratch/dask", n_workers=90, memory_limit="16GiB")
+        # return Client(cluster)
        return Client(scheduler_file=os.path.join(self.output_dir, "dask_scheduler.json"))

    def process_results(self, *args, **kwargs):
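The commented-out block is a debugging aid. Expanded into runnable form, with
the worker count and memory limit taken straight from the comment (tuning
values for one particular bigmem node, not requirements):

::

    # Debugging variant: run Dask on the local node instead of attaching
    # to the scheduler started by the postprocessing batch script.
    from dask.distributed import Client, LocalCluster

    def get_debug_dask_client():
        cluster = LocalCluster(
            local_directory="/tmp/scratch/dask",  # node-local scratch
            n_workers=90,
            memory_limit="16GiB",
        )
        return Client(cluster)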
1 change: 1 addition & 0 deletions buildstockbatch/kestrel.sh
@@ -12,5 +12,6 @@ df -h

module load python apptainer
source "$MY_PYTHON_ENV/bin/activate"
+source /kfs2/shared-projects/buildstock/aws_credentials.sh

time python -u -m buildstockbatch.hpc kestrel "$PROJECTFILE"
3 changes: 2 additions & 1 deletion buildstockbatch/kestrel_postprocessing.sh
@@ -11,6 +11,7 @@ df -h

module load python apptainer
source "$MY_PYTHON_ENV/bin/activate"
+source /kfs2/shared-projects/buildstock/aws_credentials.sh

export POSTPROCESS=1

@@ -29,6 +30,6 @@ pdsh -w $SLURM_JOB_NODELIST_PACK_GROUP_1 "free -h"
pdsh -w $SLURM_JOB_NODELIST_PACK_GROUP_1 "df -i; df -h"

$MY_PYTHON_ENV/bin/dask scheduler --scheduler-file $SCHEDULER_FILE &> $OUT_DIR/dask_scheduler.out &
-pdsh -w $SLURM_JOB_NODELIST_PACK_GROUP_1 "$MY_PYTHON_ENV/bin/dask worker --scheduler-file $SCHEDULER_FILE --local-directory /tmp/scratch/dask --nworkers ${NPROCS} --nthreads 1 --memory-limit ${MEMORY}MB" &> $OUT_DIR/dask_workers.out &
+pdsh -w $SLURM_JOB_NODELIST_PACK_GROUP_1 "source /kfs2/shared-projects/buildstock/aws_credentials.sh; $MY_PYTHON_ENV/bin/dask worker --scheduler-file $SCHEDULER_FILE --local-directory /tmp/scratch/dask --nworkers ${NPROCS} --nthreads 1 --memory-limit ${MEMORY}MB" &> $OUT_DIR/dask_workers.out &

time python -u -m buildstockbatch.hpc kestrel "$PROJECTFILE"
8 changes: 8 additions & 0 deletions docs/changelog/changelog_dev.rst
@@ -44,3 +44,11 @@ Development Changelog
    :tickets: 313

    Add support for NREL's Kestrel supercomputer.
+
+.. change::
+    :tags: general, postprocessing
+    :pullreq: 414
+    :tickets: 412
+
+    Add support for an AWS service account on Kestrel/Eagle so the user
+    doesn't have to manage AWS keys.
6 changes: 6 additions & 0 deletions docs/changelog/migration_dev.rst
@@ -59,6 +59,12 @@ Calling buildstockbatch uses the ``buildstock_kestrel`` command line interface
is very similar to Eagle. A few of the optional args were renamed in this
version for consistency.

+AWS Keys on Kestrel and Eagle
+=============================
+
+You no longer need to manage AWS keys on Kestrel or Eagle. A service account has
+been created for each and the software knows where to find those keys.


Schema Updates
==============
28 changes: 2 additions & 26 deletions docs/installation.rst
@@ -125,7 +125,8 @@ configure your user account with your AWS credentials. This setup only needs to
Kestrel
~~~~~~~

-The most common way to run buildstockbatch on Kestrel will be to use a pre-built python environment. This is done as follows:
+The most common way to run buildstockbatch on Kestrel will be to use a pre-built
+python environment. This is done as follows:

::

@@ -193,31 +194,6 @@ You can get a list of installed environments by looking in the envs directory

ls /shared-projects/buildstock/envs
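Activating one of the listed environments follows the same pattern the deleted
section below used for the ``awscli`` environment; the environment name here is
a placeholder:

::

    module load conda
    source activate /shared-projects/buildstock/envs/<env-name>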

-.. _aws-user-config-eagle:
-
-AWS User Configuration
-......................
-
-To use the automatic upload of processed results to AWS Athena, you'll need to
-configure your user account with your AWS credentials. This setup only needs to
-be done once.
-
-First, `ssh into Eagle`_, then
-issue the following commands
-
-::
-
-    module load conda
-    source activate /shared-projects/buildstock/envs/awscli
-    aws configure
-
-Follow the on screen instructions to enter your AWS credentials. When you are
-done:
-
-::
-
-    source deactivate

Developer installation
......................

10 changes: 5 additions & 5 deletions docs/project_defn.rst
@@ -285,12 +285,12 @@ Uploading to AWS Athena

BuildStock results can optionally be uploaded to AWS for further analysis using
Athena. This process requires appropriate access to an AWS account to be
-configured on your machine. You will need to set this up wherever you use buildstockbatch.
-If you don't have
-keys, consult your AWS administrator to get them set up.
+configured on your machine. You will need to set this up wherever you use
+buildstockbatch. If you don't have keys, consult your AWS administrator to get
+them set up. The appropriate keys are already installed on Eagle and Kestrel, so
+no action is required.

-* :ref:`Local Docker AWS setup instructions <aws-user-config-local>`
-* :ref:`Eagle AWS setup instructions <aws-user-config-eagle>`
+* :ref:`Local AWS setup instructions <aws-user-config-local>`
* `Detailed instructions from AWS <https://boto3.amazonaws.com/v1/documentation/api/latest/guide/quickstart.html#configuration>`_
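For a local machine, the linked boto3 instructions amount to putting keys where
boto3 looks for them. A minimal ``~/.aws/credentials``, with placeholder
values, looks like:

::

    [default]
    aws_access_key_id = <your-access-key-id>
    aws_secret_access_key = <your-secret-access-key>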

.. _post-config-opts:
