Merge pull request #474 from radical-cybertools/release/1.5.0
Release/1.5.0
lee212 committed Aug 24, 2020
2 parents 2dd431d + 63ecac4 commit 213cf16
Showing 41 changed files with 474 additions and 313 deletions.
3 changes: 2 additions & 1 deletion .pylintrc
@@ -139,7 +139,8 @@ disable=raising-format-tuple,
R,
relative-import,
bare-except,
W0212
W0212,
W0622

# Enable the message, report, category or checker with the given id(s). You can
# either give multiple identifier separated by comma (,) or put this option
7 changes: 3 additions & 4 deletions .travis.yml
@@ -6,16 +6,14 @@ os:

env:
global:
- RADICAL_PILOT_DBURL="mongodb://rct:rct_test@138.201.86.166/rct_test"
- CODECOV_TOKEN="790b223f-07e4-4707-bb97-3abe58e29cd8"
- LOG=`git log -n 1 | grep Merge`
- OLD=`echo $LOG | cut -d ' ' -f2`
- NEW=`echo $LOG | cut -d ' ' -f3`
- DIFF=`git diff --name-only --diff-filter=b $OLD...$NEW`
- DIFF=$(echo $DIFF | grep -o -e '\b[^ ]*.py\b')
- PYTEST="coverage run -m pytest -ra --timeout=600 -vvv --showlocals --forked --numprocesses=3"
- CMD_PYTEST_UNITTESTS=" $PYTEST tests/test_component/ ;
$PYTEST tests/test_utils/ "
- PYTEST="coverage run -m pytest -ra --timeout=600 -vvv --showlocals --forked"
- CMD_PYTEST_UNITTESTS=" $PYTEST tests/test_component tests/test_utils/ "
- CMD_PYTEST_INTEGRATION="$PYTEST tests/test_integration/"
- CMD_PYTEST_ISSUES=" $PYTEST tests/test_issues/"
- CMD_FLAKE8="test -z \"$DIFF\" && echo 'nothing to flake8' || flake8 $DIFF"
@@ -79,4 +77,5 @@ after_success:
services:
- rabbitmq
- mongod

15 changes: 15 additions & 0 deletions CHANGES.md
@@ -6,6 +6,21 @@
https://github.com/radical-cybertools/radical.entk/ \
issues?q=is%3Aissue+is%3Aopen+


1.5.0 Release 2020-08-24
--------------------------------------------------------------------------------

- CI tests updated, PR #471
- Task sync enhancement, PR #466


1.4.1 Release 2020-07-17
--------------------------------------------------------------------------------

- Documentation updated, PR #453, #451, #450, #446
- Shared data fix #449


1.4.0 Release 2020-05-18
--------------------------------------------------------------------------------

2 changes: 1 addition & 1 deletion VERSION
@@ -1 +1 @@
1.4.0
1.4.1.post1
39 changes: 31 additions & 8 deletions docs/install.rst
@@ -164,17 +164,40 @@ be printed.
RabbitMQ
========

Ensemble Toolkit relies on RabbitMQ for message transfers. RabbitMQ needs to be
configured or it can be installed on the same machine as EnTK is installed.
Installation instructions can be found at
<https://www.rabbitmq.com/download.html>. At the end of the installation run
```rabbitmq-server``` to start the server.

The following configuration defines a default server and port number to communicate.
Ensemble Toolkit relies on RabbitMQ for message transfers. Users have three
choices: (1) self-deploying and using a local RabbitMQ server; (2)
self-deploying and using a remote RabbitMQ server that is accessible from the
target HPC machine; (3) using a local or remote RabbitMQ server provided by the
HPC organization or by an external partner. Note that most HPC infrastructures
forbid executing servers on their login nodes. If you have no other option,
please open an issue on the `EnTK GitHub repository
<https://github.com/radical-cybertools/radical.entk/issues>`_ and we will
provide you with a testing account on our RabbitMQ server.

If you need to deploy RabbitMQ yourself, installation instructions can be found
at <https://www.rabbitmq.com/download.html>. At the end of the installation, do
not forget to run ``rabbitmq-server`` to start the server.

The following configuration defines the server and port number used for
communication. Note that remote RabbitMQ servers may require a username and
password. If you are using one of the RADICAL servers, a username and password
are mandatory.

.. code-block:: bash
export RMQ_HOSTNAME=two.radical-project.org; export RMQ_PORT=33239
export RMQ_HOSTNAME={IP ADDRESS};
export RMQ_PORT={PORT NUMBER};
export RMQ_USERNAME={USERNAME};
export RMQ_PASSWORD={PASSWORD};
.. note:: The ``{}`` placeholders need to be replaced with actual values; the
   EnTK administrators can provide this information.
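
For illustration, the following minimal sketch shows how an EnTK script can
read these environment variables and pass them to the ``AppManager``, mirroring
the pattern used in the examples updated by this release; the fallback values
assume a local, unauthenticated broker.

.. code-block:: python

    import os

    from radical.entk import AppManager

    # Read the RabbitMQ connection settings from the environment. The
    # defaults assume a RabbitMQ server running locally on the default port.
    hostname = os.environ.get('RMQ_HOSTNAME', 'localhost')
    port     = int(os.environ.get('RMQ_PORT', 5672))
    username = os.environ.get('RMQ_USERNAME')
    password = os.environ.get('RMQ_PASSWORD')

    # Pass the settings to the Application Manager; username and password
    # remain None when the server does not require authentication.
    appman = AppManager(hostname=hostname, port=port,
                        username=username, password=password)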

RMQ Account
-----------

Open a new ticket to request a new RMQ account:
https://github.com/radical-cybertools/radical.entk/issues

.. comments
7 changes: 7 additions & 0 deletions docs/user_guide/get_started.rst
@@ -82,6 +82,13 @@ To run the script, simply execute the following from the command line:
.. code-block:: bash
python get_started.py
.. warning:: The first run may fail for a variety of reasons, most of which
are related to setting up the execution environment or requesting the correct
resources. Upon failure, Python may incorrectly raise the exception
``KeyboardInterrupt``. This may be confusing because it is reported even when
no keyboard interrupt has been issued. Currently, we have not found a way to
avoid raising that exception.


And that's it! That's all the steps in this example. You can generate more verbose output
4 changes: 2 additions & 2 deletions examples/simple/eop.py
@@ -45,7 +45,7 @@ def generate_pipeline():
t2.executable = '/bin/bash'
t2.arguments = ['-l', '-c', 'grep -o . output.txt | sort | uniq -c > ccount.txt']
# Copy data from the task in the first stage to the current task's location
t2.copy_input_data = ['$Pipline_%s_Stage_%s_Task_%s/output.txt' % (p.uid, s1.uid, t1.uid)]
t2.copy_input_data = ['$Pipline_%s_Stage_%s_Task_%s/output.txt' % (p.name, s1.name, t1.name)]

# Add the Task to the Stage
s2.add_tasks(t2)
@@ -61,7 +61,7 @@ def generate_pipeline():
t3.executable = '/bin/bash'
t3.arguments = ['-l', '-c', 'sha1sum ccount.txt > chksum.txt']
# Copy data from the task in the first stage to the current task's location
t3.copy_input_data = ['$Pipline_%s_Stage_%s_Task_%s/ccount.txt' % (p.uid, s2.uid, t2.uid)]
t3.copy_input_data = ['$Pipline_%s_Stage_%s_Task_%s/ccount.txt' % (p.name, s2.name, t2.name)]
# Download the output of the current task to the current location
t3.download_output_data = ['chksum.txt > chksum_%s.txt' % cnt]

5 changes: 4 additions & 1 deletion examples/user_guide/add_data.py
@@ -16,6 +16,8 @@
# this script.
hostname = os.environ.get('RMQ_HOSTNAME', 'localhost')
port = os.environ.get('RMQ_PORT', 5672)
username = os.environ.get('RMQ_USERNAME')
password = os.environ.get('RMQ_PASSWORD')

if __name__ == '__main__':

@@ -57,7 +59,8 @@
p.add_stages(s2)

# Create Application Manager
appman = AppManager(hostname=hostname, port=port)
appman = AppManager(hostname=hostname, port=port, username=username,
password=password)

# Assign the workflow as a set or list of Pipelines to the Application Manager
appman.workflow = set([p])
6 changes: 4 additions & 2 deletions examples/user_guide/add_pipelines.py
@@ -16,7 +16,8 @@
# this script.
hostname = os.environ.get('RMQ_HOSTNAME', 'localhost')
port = os.environ.get('RMQ_PORT', 5672)

username = os.environ.get('RMQ_USERNAME')
password = os.environ.get('RMQ_PASSWORD')

def generate_pipeline(name, stages):

@@ -56,7 +57,8 @@ def generate_pipeline(name, stages):
p2 = generate_pipeline(name='Pipeline 2', stages=2)

# Create Application Manager
appman = AppManager(hostname=hostname, port=port)
appman = AppManager(hostname=hostname, port=port, username=username,
password=password)

# Assign the workflow as a set or list of Pipelines to the Application Manager
# Note: The list order is not guaranteed to be preserved
6 changes: 5 additions & 1 deletion examples/user_guide/add_shared_data.py
@@ -13,6 +13,9 @@

hostname = os.environ.get('RMQ_HOSTNAME','localhost')
port = int(os.environ.get('RMQ_PORT',5672))
username = os.environ.get('RMQ_USERNAME')
password = os.environ.get('RMQ_PASSWORD')

cur_dir = os.path.dirname(os.path.abspath(__file__))

def generate_pipeline():
@@ -59,7 +62,8 @@ def generate_pipeline():
}

# Create Application Manager
appman = AppManager(hostname=hostname, port=port)
appman = AppManager(hostname=hostname, port=port, username=username,
password=password)

# Assign resource manager to the Application Manager
appman.resource_desc = res_dict
5 changes: 4 additions & 1 deletion examples/user_guide/add_stages.py
@@ -16,6 +16,8 @@
# this script.
hostname = os.environ.get('RMQ_HOSTNAME', 'localhost')
port = os.environ.get('RMQ_PORT', 5672)
username = os.environ.get('RMQ_USERNAME')
password = os.environ.get('RMQ_PASSWORD')

if __name__ == '__main__':

@@ -61,7 +63,8 @@


# Create Application Manager
appman = AppManager(hostname=hostname, port=port)
appman = AppManager(hostname=hostname, port=port, username=username,
password=password)

# Create a dictionary to describe the four mandatory keys:
# resource, walltime, and cpus
5 changes: 4 additions & 1 deletion examples/user_guide/add_tasks.py
@@ -16,6 +16,8 @@
# this script.
hostname = os.environ.get('RMQ_HOSTNAME', 'localhost')
port = os.environ.get('RMQ_PORT', 5672)
username = os.environ.get('RMQ_USERNAME')
password = os.environ.get('RMQ_PASSWORD')

if __name__ == '__main__':

@@ -40,7 +42,8 @@
p.add_stages(s)

# Create Application Manager
appman = AppManager(hostname=hostname, port=port)
appman = AppManager(hostname=hostname, port=port, username=username,
password=password)

# Create a dictionary to describe the four mandatory keys:
# resource, walltime, and cpus
8 changes: 6 additions & 2 deletions examples/user_guide/change_target.py
@@ -16,6 +16,8 @@
# this script.
hostname = os.environ.get('RMQ_HOSTNAME', 'localhost')
port = os.environ.get('RMQ_PORT', 5672)
username = os.environ.get('RMQ_USERNAME')
password = os.environ.get('RMQ_PASSWORD')

if __name__ == '__main__':

@@ -45,7 +47,8 @@
t2.executable = '/bin/bash'
t2.arguments = ['-l', '-c', 'grep -o . output.txt | sort | uniq -c > ccount.txt']
# Copy data from the task in the first stage to the current task's location
t2.copy_input_data = ['$Pipline_%s_Stage_%s_Task_%s/output.txt' % (p.uid, s1.uid, t1.uid)]
t2.copy_input_data = ['$Pipline_%s_Stage_%s_Task_%s/output.txt' % (p.name,
s1.name, t1.name)]
# Download the output of the current task to the current location
t2.download_output_data = ['ccount.txt']

@@ -56,7 +59,8 @@
p.add_stages(s2)

# Create Application Manager
appman = AppManager(hostname=hostname, port=port)
appman = AppManager(hostname=hostname, port=port, username=username,
password=password)

# Assign the workflow as a set or list of Pipelines to the Application Manager
appman.workflow = set([p])
5 changes: 4 additions & 1 deletion examples/user_guide/get_started.py
@@ -16,6 +16,8 @@
# this script.
hostname = os.environ.get('RMQ_HOSTNAME', 'localhost')
port = int(os.environ.get('RMQ_PORT', 5672))
username = os.environ.get('RMQ_USERNAME')
password = os.environ.get('RMQ_PASSWORD')

if __name__ == '__main__':

@@ -38,7 +40,8 @@
p.add_stages(s)

# Create Application Manager
appman = AppManager(hostname=hostname, port=port)
appman = AppManager(hostname=hostname, port=port, username=username,
password=password)

# Create a dictionary to describe the four mandatory keys:
# resource, walltime, and cpus
