Merge pull request #77 from StochSS/dev
Release v0.9
ethangreen-dev committed Jan 28, 2022
2 parents e5fb0de + 32f1d88 commit 21f672c
Showing 50 changed files with 2,480 additions and 1,700 deletions.
4 changes: 3 additions & 1 deletion .dockerignore
@@ -3,4 +3,6 @@
.venv
.vscode
dist
**/*__pycache__
**/*__pycache__
.ipynb_checkpoints/
examples/
2 changes: 1 addition & 1 deletion .env
@@ -1,4 +1,4 @@
export COMPOSE_PROJECT_NAME=stochss-compute
export PYTHONDONTWRITEBYTECODE=true
export FLASK_ENV=production
export DOCKER_WEB_PORT=1234
export FLASK_PORT=1234
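docker-compose reads these values through the shell environment. As an aside, the `export KEY=value` format is simple enough to parse by hand; a minimal illustrative parser (`parse_env` is a hypothetical helper, not part of the repo):

```python
def parse_env(text):
    """Parse simple `export KEY=value` lines into a dict."""
    env = {}
    for line in text.splitlines():
        line = line.strip()
        # Skip blank lines and comments.
        if not line or line.startswith("#"):
            continue
        if line.startswith("export "):
            line = line[len("export "):]
        key, _, value = line.partition("=")
        env[key.strip()] = value.strip()
    return env

sample = """\
export COMPOSE_PROJECT_NAME=stochss-compute
export FLASK_ENV=production
export FLASK_PORT=1234
"""
print(parse_env(sample)["FLASK_PORT"])  # → 1234
```

In the real setup no parser is needed: sourcing the file (`source .env`) puts the variables in the shell, and docker-compose substitutes them into `docker-compose.yml`.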
17 changes: 0 additions & 17 deletions Dockerfile

This file was deleted.

103 changes: 69 additions & 34 deletions README.md
@@ -1,59 +1,90 @@
## Installation
# StochSS-Compute

#### Docker

The easiest way to get stochss-compute running is with docker. Clone the repository and run the following in the root directory:
With StochSS-Compute, you can run GillesPy2 simulations on your own server. Results are cached and anonymized, so you
can easily save and recall previous simulations.

## Example Quick Start
First, clone the repository.
```
docker-compose up --build
git clone https://github.com/StochSS/stochss-compute.git
cd stochss-compute
```

#### Manually

Ensure that the following dependencies are installed with your package manager of choice:

- `python-poetry`
- `redis`

Clone the repository and navigate into the new `stochss-compute` directory. Once inside, execute the following command to install the Python dependencies:

- If you are unfamiliar with python virtual environments, read this [documentation](https://packaging.python.org/guides/installing-using-pip-and-virtual-environments/#creating-a-virtual-environment) first.
- Note that you will have to activate your venv every time you run StochSS-Compute, as well as for your dask scheduler and each of its workers.
- The following will set up the `dask-scheduler`, one `dask-worker`, the backend api server, and launch an example `jupyter` notebook.
- Each of these must be run in separate terminal windows in the main `stochss-compute` directory.
- Just copy and paste!
```
poetry install
# Terminal 1
python3 -m venv venv
source venv/bin/activate
pip3 install -r requirements.txt
dask-scheduler
```

And to activate the new virtual environment:

```
poetry shell
# Terminal 2
source venv/bin/activate
dask-worker localhost:8786
```

Once complete, both `celery` and `redis` need to be running.
```
# Terminal 3
source venv/bin/activate
python3 app.py
```
- StochSS-Compute is now running on localhost:1234.
<!-- - Dask compute cluster configuration parameters can be passed to `app.py`, see the [documentation](https://github.com/StochSS/stochss-compute/blob/dev/stochss_compute/api/delegate/dask_delegate.py#L20). -->

```
celery -A stochss_compute.api worker -l INFO
# Terminal 4
source venv/bin/activate
jupyter notebook --port=9999
```
- Jupyter should launch automatically; navigate to the `examples` directory and open `StartHere.ipynb`, which shows you how to use StochSS-Compute.
- If Jupyter does not open on its own, copy and paste the following URL into your web browser:
`http://localhost:9999/notebooks/examples/StartHere.ipynb`
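With all four terminals running, everything talks over localhost: the worker connects to the scheduler on port 8786, the API serves on port 1234, and Jupyter on 9999. A small illustrative helper (not part of the repo) to confirm each piece is listening:

```python
import socket

def is_listening(host, port, timeout=1.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Ports from the quick start above.
for name, port in [("dask-scheduler", 8786), ("stochss-compute API", 1234), ("jupyter", 9999)]:
    print(f"{name}: {'up' if is_listening('localhost', port) else 'down'}")
```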
#### Docker

Alternatively, you can use Docker. We host an image on Docker Hub that you can download and run with the following command:

`redis` can be run in several ways. If you prefer a `systemd` daemon:

```
sudo systemctl start redis
docker run -p 1234:1234 mdip226/stochss-compute:latest
```

Otherwise:
- The `-p` flag publishes the container's exposed port on the host computer, as in `-p <hostPort>:<containerPort>`.
- StochSS-Compute is now running on localhost:1234.

<!-- #### Minikube
- A third way to use StochSS-Compute is with "Minikube", which is part of [Kubernetes](https://kubernetes.io/).
- Requires `minikube`, `docker`, and `kubectl` to be installed. Then:
```
redis-server
minikube start
cd kubernetes
kubectl apply -f api_deployment.yaml
minikube dashboard
```
- Now, wait for the stochss-compute container to be created.
Finally, start the stochss-compute server.
- From here, there are two ways to access the cluster. -->

<!-- ##### To set up local access:
`minikube service --url stochss-compute-service`
- exposes external IP (on EKS or otherwise this is handled by your cloud provider)
- use this host and IP when calling ComputeServer()
- first time will be slow because the dask containers have to start up
##### To use ngrok to set up public access (ngrok.com to sign up for a free account and download/install):
```
poetry run stochss-compute
url=$(minikube service --url stochss-compute-service)
ngrok http $url
```
- use this URL when calling ComputeServer() -->


## Usage
<!-- ## Usage
Simulations are run on stochss-compute via Jupyter notebooks.
- The easiest way to run stochss-compute simulations is via Jupyter notebooks:
```python
import numpy, gillespy2
@@ -94,9 +125,13 @@ class ToggleSwitch(gillespy2.Model):
# Instantiate a new instance of the model.
model = ToggleSwitch()
# Run the model on a stochss-compute server instance running on localhost.
results = RemoteSimulation.on(ComputeServer("127.0.0.1", port=1234).with_model(model).run()
# Run the model on a stochss-compute server instance running on localhost.
# The default port is 1234, but will depend on how you choose to set it up.
results = RemoteSimulation.on(ComputeServer("127.0.0.1", port=1234)).with_model(model).run()
# Wait for the simulation to finish.
results.wait()
# Plot the results.
results.plot()
```
``` -->
5 changes: 5 additions & 0 deletions TODO
@@ -0,0 +1,5 @@
- start kubecluster when api starts
- handle replica sets and scheduler addressing and discovery
- dynamic scaling???
- handle "plot not ready" error
- error handling in general by trying to break it
27 changes: 27 additions & 0 deletions api.dockerfile
@@ -0,0 +1,27 @@
FROM python:3.8.10-buster

LABEL maintainer="Ethan Green <egreen4@unca.edu>"

# set up virtual environment inside container
ENV VIRTUAL_ENV=/opt/venv
RUN python3 -m venv $VIRTUAL_ENV
# activate the venv
ENV PYTHONPATH="$VIRTUAL_ENV:$PYTHONPATH"
ENV PATH="$VIRTUAL_ENV:$PATH"
# make the venv a volume
VOLUME [ "/opt/venv" ]

WORKDIR /usr/src/app

COPY requirements.txt .
RUN pip install -r requirements.txt

COPY . /usr/src/app

ARG FLASK_ENV="production"
ENV FLASK_ENV="${FLASK_ENV}" \
PYTHONUNBUFFERED="true"

EXPOSE 1234

CMD [ "python", "app.py" ]
6 changes: 3 additions & 3 deletions app.py
@@ -1,7 +1,7 @@
from stochss_compute.api import base
from stochss_compute import api

def server_start():
base.flask.run(host="0.0.0.0", port=1234)
api.start_api(host="0.0.0.0", port=1234, debug=True)

if __name__ == "__main__":
server_start()
server_start()
30 changes: 30 additions & 0 deletions dask_plugin.py
@@ -0,0 +1,30 @@
import click

from redis import Redis
from distributed.diagnostics.plugin import SchedulerPlugin

from stochss_compute.api.delegate import DaskDelegateConfig

class DaskWorkerPlugin(SchedulerPlugin):
name = "test_plugin"

def __init__(self, redis_address, redis_port, redis_db):
self.redis = Redis(
host=redis_address,
port=redis_port,
db=redis_db
)

def transition(self, key, start, finish, *args, **kwargs):
print(f"{key}: {finish}")
if start == "memory" and finish == "forgotten":
finish = "done"

self.redis.set(f"state-{key}", finish)

@click.command()
def dask_setup(scheduler):
config = DaskDelegateConfig()

plugin = DaskWorkerPlugin(config.redis_address, config.redis_port, config.redis_db)
scheduler.add_plugin(plugin)
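The plugin above mirrors Dask task transitions into Redis: a task that moves from `memory` to `forgotten` has completed and been released by the scheduler, so it is recorded as `done`. The state-mapping logic in isolation, with an in-memory dict standing in for Redis (`FakeRedis` and `StateRecorder` are illustrative, not part of the repo):

```python
class FakeRedis:
    """In-memory stand-in for the Redis client used by the plugin."""
    def __init__(self):
        self.store = {}
    def set(self, key, value):
        self.store[key] = value
    def get(self, key):
        return self.store.get(key)

class StateRecorder:
    """Reproduces the transition logic of DaskWorkerPlugin.transition."""
    def __init__(self, redis):
        self.redis = redis
    def transition(self, key, start, finish, *args, **kwargs):
        # Leaving "memory" for "forgotten" means the task finished and
        # the scheduler released it: record it as "done".
        if start == "memory" and finish == "forgotten":
            finish = "done"
        self.redis.set(f"state-{key}", finish)

redis = FakeRedis()
recorder = StateRecorder(redis)
recorder.transition("simulate-abc", "processing", "memory")
print(redis.get("state-simulate-abc"))  # → memory
recorder.transition("simulate-abc", "memory", "forgotten")
print(redis.get("state-simulate-abc"))  # → done
```

On a real deployment the plugin runs on the scheduler; because the module defines a `dask_setup(scheduler)` entry point, Dask's preload mechanism can load it at startup (e.g. `dask-scheduler --preload dask_plugin.py`).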
23 changes: 23 additions & 0 deletions dask_worker_spec.yaml
@@ -0,0 +1,23 @@
kind: Pod
metadata:
labels:
app: stochss-compute
spec:
restartPolicy: Never
containers:
- image: daskdev/dask:latest
imagePullPolicy: IfNotPresent
args: [dask-worker, --nthreads, '2', --no-dashboard, --memory-limit, 4GB, --death-timeout, '60']
name: dask
env:
- name: EXTRA_PIP_PACKAGES
value: "git+https://github.com/dask/distributed gillespy2"
- name: EXTRA_APT_PACKAGES
value: "build-essential"
resources:
limits:
cpu: "1"
memory: 1G
requests:
cpu: "1"
memory: "500Mi"
17 changes: 1 addition & 16 deletions docker-compose.yml
@@ -26,22 +26,7 @@ services:
start_period: "5s"
retries: 3
ports:
- "1234:1234"
restart: "${DOCKER_RESTART_POLICY:-unless-stopped}"
stop_grace_period: "${DOCKER_STOP_GRACE_PERIOD:-3s}"
volumes:
- "${DOCKER_WEB_VOLUME:-./public:/app/public}"

worker:
build:
context: "."
args:
- "FLASK_ENV=${FLASK_ENV:-production}"
command: poetry run celery -A stochss_compute.api worker -E
depends_on:
- "redis"
env_file:
- ".env"
- "${FLASK_PORT}:1234"
restart: "${DOCKER_RESTART_POLICY:-unless-stopped}"
stop_grace_period: "${DOCKER_STOP_GRACE_PERIOD:-3s}"
volumes:
8 changes: 8 additions & 0 deletions docker/dask.dockerfile
@@ -0,0 +1,8 @@
FROM daskdev/dask

# RUN would block the image build on these long-running processes; start the
# scheduler and a worker when the container runs instead.
CMD dask-scheduler --host localhost & dask-worker localhost:8786

EXPOSE 8786
