Docker compose redis worker #455

Merged
merged 18 commits into from Jul 15, 2022
Changes from 10 commits
4 changes: 2 additions & 2 deletions .pre-commit-config.yaml
@@ -4,13 +4,13 @@ repos:
hooks:
- id: flake8
name: flake8 (code linting)
language_version: python3.9
- repo: https://github.com/psf/black
rev: 22.3.0 # New version tags can be found here: https://github.com/psf/black/tags
hooks:
- id: black
name: black (code formatting)
language_version: python3.9
- repo: local
hooks:
- id: mypy
33 changes: 32 additions & 1 deletion docker-compose.yml
@@ -18,6 +18,16 @@ services:
POSTGRES_PASSWORD: fm-dev-db-pass
volumes:
- ./ci/load-psql-extensions.sql:/docker-entrypoint-initdb.d/load-psql-extensions.sql
queue-db:
image: redis
restart: always
command: redis-server --loglevel warning --requirepass fm-redis-pass
expose:
- 6379
volumes:
- redis-cache:/data
environment:
- REDIS_REPLICATION_MODE=master
server:
build:
context: .
@@ -47,6 +57,25 @@ services:
bash -c "flexmeasures db upgrade
&& flexmeasures add toy-account --name 'Docker Toy Account'
&& gunicorn --bind 0.0.0.0:5000 --worker-tmp-dir /dev/shm --workers 2 --threads 4 wsgi:application"
worker:
build:
context: .
dockerfile: Dockerfile
depends_on:
- dev-db
- queue-db
restart: on-failure
environment:
SQLALCHEMY_DATABASE_URI: "postgresql://fm-dev-db-user:fm-dev-db-pass@dev-db:5432/fm-dev-db"
FLEXMEASURES_REDIS_URL: queue-db
FLEXMEASURES_REDIS_PASSWORD: fm-redis-pass
SECRET_KEY: notsecret
FLASK_ENV: development
LOGGING_LEVEL: INFO
volumes:
# a place for config and plugin code
- ./flexmeasures-instance/:/usr/var/flexmeasures-instance/:ro
command: flexmeasures jobs run-worker --name flexmeasures-worker --queue forecasting\|scheduling
test-db:
image: postgres
expose:
@@ -60,4 +89,6 @@ services:
- ./ci/load-psql-extensions.sql:/docker-entrypoint-initdb.d/load-psql-extensions.sql

volumes:
redis-cache:
driver: local
flexmeasures-instance:
3 changes: 2 additions & 1 deletion documentation/changelog.rst
@@ -15,6 +15,7 @@ Bugfixes

Infrastructure / Support
------------------------
* Docker compose stack now with Redis worker queue [see `PR #455 <http://www.github.com/FlexMeasures/flexmeasures/pull/455>`_]
* Allow access tokens to be passed as env vars as well [see `PR #443 <http://www.github.com/FlexMeasures/flexmeasures/pull/443>`_]

v0.10.1 | June XX, 2022
@@ -94,7 +95,7 @@ New features
* Add CLI option to specify custom strings that should be interpreted as NaN values when reading in time series data from CSV [see `PR #357 <http://www.github.com/FlexMeasures/flexmeasures/pull/357>`_]
* Add CLI commands ``flexmeasures add sensor``, ``flexmeasures add asset-type``, ``flexmeasures add beliefs`` (which were experimental features before) [see `PR #337 <http://www.github.com/FlexMeasures/flexmeasures/pull/337>`_]
* Add CLI commands for showing organisational structure [see `PR #339 <http://www.github.com/FlexMeasures/flexmeasures/pull/339>`_]
* Add CLI command for showing time series data [see `PR #379 <http://www.github.com/FlexMeasures/flexmeasures/pull/379>`_]
* Add CLI command for attaching annotations to assets: ``flexmeasures add holidays`` adds public holidays [see `PR #343 <http://www.github.com/FlexMeasures/flexmeasures/pull/343>`_]
* Add CLI command for resampling existing sensor data to new resolution [see `PR #360 <http://www.github.com/FlexMeasures/flexmeasures/pull/360>`_]
* Add CLI command to delete an asset, with its sensors and data. [see `PR #395 <http://www.github.com/FlexMeasures/flexmeasures/pull/395>`_]
113 changes: 100 additions & 13 deletions documentation/dev/docker-compose.rst
@@ -10,6 +10,8 @@ For this, we assume you are in the directory housing ``docker-compose.yml``.

.. note:: The minimum Docker version is 17.09 and for docker-compose we tested successfully at version 1.25. You can check your versions with ``docker[-compose] --version``.

.. note:: The command might also be ``docker compose`` (no dash), for instance if you are using `Docker Desktop <https://docs.docker.com/desktop>`_.

Build the compose stack
------------------------

@@ -19,11 +21,12 @@ Run this:

docker-compose build

This pulls the images you need, and re-builds the FlexMeasures ones from code. If you change code, re-running this will re-build that image.

This compose script can also serve as an inspiration for using FlexMeasures in modern cloud environments (like Kubernetes). For instance, you might want to not build the FlexMeasures image from code, but simply pull the image from DockerHub.

If you wanted, you could stop building from source, and directly use the official flexmeasures image for the server and worker containers
(set ``image: lfenergy/flexmeasures`` in the file ``docker-compose.yml``).


Run the compose stack
@@ -35,38 +38,122 @@ Start the stack like this:

docker-compose up

.. warning:: This might fail if ports 5000 (Flask) or 6379 (Redis) are in use on your system. Stop these processes before you continue.

Check ``docker ps`` or ``docker-compose ps`` to see if your containers are running:


.. code-block:: console

± docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
beb9bf567303 flexmeasures_server "bash -c 'flexmeasur…" 44 seconds ago Up 38 seconds (health: starting) 0.0.0.0:5000->5000/tcp flexmeasures-server-1
e36cd54a7fd5 flexmeasures_worker "flexmeasures jobs r…" 44 seconds ago Up 5 seconds 5000/tcp flexmeasures-worker-1
c9985de27f68 postgres "docker-entrypoint.s…" 45 seconds ago Up 40 seconds 5432/tcp flexmeasures-test-db-1
03582d37230e postgres "docker-entrypoint.s…" 45 seconds ago Up 40 seconds 5432/tcp flexmeasures-dev-db-1
792ec3d86e71 redis "docker-entrypoint.s…" 45 seconds ago Up 40 seconds 0.0.0.0:6379->6379/tcp flexmeasures-queue-db-1


The FlexMeasures server container has a health check implemented, which is reflected in this output, and you can see which ports are available on your machine to interact with.

You can use the terminal or ``docker-compose logs`` to look at output. ``docker inspect <container>`` and ``docker exec -it <container> bash`` can be quite useful to dive into details.
We'll see the latter more in this tutorial.


Configuration
---------------

You can pass in your own configuration (e.g. for MapBox access token, or db URI, see below) like we described in :ref:`docker_configuration` ― put a file ``flexmeasures.cfg`` into a local folder called ``flexmeasures-instance`` (the volume should be already mapped).


Data
-------

The postgres database is a test database with toy data filled in when the flexmeasures container starts.
You could also connect it to some other database (on your PC, in the cloud), by setting a different ``SQLALCHEMY_DATABASE_URI`` in the config.
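As an illustration, a minimal ``flexmeasures.cfg`` pointing to an external database could look like the following sketch (the config file is plain Python; host name and credentials here are made-up placeholders):

```python
# flexmeasures.cfg, placed in the flexmeasures-instance folder
# NOTE: host, user and password are placeholders -- use your own database's details.
SQLALCHEMY_DATABASE_URI = "postgresql://fm-user:fm-pass@my-db-host:5432/flexmeasures-db"
```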


Seeing it work: Running the toy tutorial
--------------------------------------

A good way to see if these containers work well together, and maybe to get inspiration for your own purposes, is to run the :ref:`tut_toy_schedule`.
The server container already creates the toy account when it starts. We'll now run the rest of that tutorial, with one twist at the end, when we create the battery schedule.

Let's go into the worker container:

.. code-block:: console

docker exec -it flexmeasures-worker-1 bash

There, we add the price data, as described in :ref:`tut_toy_schedule_price_data`. Create the prices and add them to the FlexMeasures DB in the container's bash session.

Next, we put a scheduling job in the worker's queue. This uses the Redis container ― the toy tutorial doesn't do that (the difference is ``--as-job``).

.. code-block:: console

flexmeasures add schedule --sensor-id 2 --optimization-context-id 3 \
--start ${TOMORROW}T07:00+01:00 --duration PT12H --soc-at-start 50% \
--roundtrip-efficiency 90% --as-job

We should now see in the output of ``docker logs flexmeasures-worker-1`` something like the following:

.. code-block:: console

Running Scheduling Job d3e10f6d-31d2-46c6-8308-01ede48f8fdd: <Sensor 2: charging, unit: MW res.: 0:15:00>, from 2022-07-06 07:00:00+01:00 to 2022-07-06 19:00:00+01:00

So the job was queued in Redis, then picked up by the worker process, and the result should be in our SQL database container. Let's check!

We'll not go into the server container this time, but simply send a command:

.. code-block:: console

TOMORROW=$(date --date="next day" '+%Y-%m-%d')
docker exec -it flexmeasures-server-1 bash -c "flexmeasures show beliefs --sensor-id 2 --start ${TOMORROW}T07:00:00+01:00 --duration PT12H"

The charging/discharging schedule should be there:

.. code-block:: console

$ flexmeasures show beliefs --sensor-id 2 --start ${TOMORROW}T07:00:00+01:00 --duration PT12H
┌────────────────────────────────────────────────────────────┐
│ ▐ ▐▀▀▌ ▛▀▀│
│ ▞▌ ▞ ▐ ▌ │ 0.4MW
│ ▌▌ ▌ ▐ ▐ │
│ ▗▘▌ ▌ ▐ ▐ │
│ ▐ ▐ ▗▘ ▝▖ ▐ │
│ ▞ ▐ ▐ ▌ ▌ │ 0.2MW
│ ▗▘ ▐ ▐ ▌ ▌ │
│ ▐ ▝▖ ▌ ▚ ▞ │
│▀▘───▀▀▀▀▀▀▀▀▀▀▀▀▀▀▌────▐─────▝▀▀▀▀▀▀▀▀▜─────▐▀▀▀▀▀▀▀▀▀─────│ 0MW
│ ▌ ▞ ▐ ▗▘ │
│ ▚ ▌ ▐ ▐ │
│ ▐ ▗▘ ▝▖ ▌ │ -0.2MW
│ ▐ ▐ ▌ ▌ │
│ ▐ ▐ ▌ ▗▘ │
│ ▌ ▞ ▌ ▐ │
│ ▌ ▌ ▐ ▐ │ -0.4MW
│ ▙▄▄▌ ▐▄▄▞ │
└────────────────────────────────────────────────────────────┘
10 20 30 40
██ charging

Like in the original toy tutorial, we can also check the server container's `web UI <http://localhost:5000/sensors/2/>`_ (username is "toy-user@flexmeasures.io", password is "toy-password"):

.. image:: https://github.com/FlexMeasures/screenshots/raw/main/tut/toy-schedule/sensor-data-charging.png
:align: center


Scripting with the Docker stack
----------------------------------

A very important aspect of this stack is whether it can be put to interesting use.
For this, developers need to be able to script things, like we just did with the toy tutorial.

Note that instead of starting a console in the containers, we can also send commands to them right away.
For instance, we sent the complete ``flexmeasures show beliefs`` command and then viewed the output on our own machine.
Likewise, we can send the ``pytest`` command to run the unit tests (see below).
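To illustrate, such calls are easy to wrap in a small helper of your own. The function names and the default container name below are our choices for this sketch, not part of FlexMeasures:

```python
import subprocess

def fm_argv(args, container="flexmeasures-server-1"):
    """Build the argv for a FlexMeasures CLI call inside a running container."""
    return ["docker", "exec", container, "flexmeasures"] + list(args)

def run_fm(args, container="flexmeasures-server-1"):
    """Run the CLI call and return its output (requires the compose stack to be up)."""
    return subprocess.run(fm_argv(args, container), capture_output=True, text=True).stdout

# e.g. run_fm(["show", "beliefs", "--sensor-id", "2",
#              "--start", "2022-07-06T07:00:00+01:00", "--duration", "PT12H"])
```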

Used this way, and in combination with the powerful list of :ref:`cli`, this FlexMeasures Docker stack is scriptable for interesting applications and simulations!


Running tests
87 changes: 4 additions & 83 deletions documentation/host/data.rst
@@ -1,16 +1,14 @@
.. _host-data:

Postgres database
=====================

This document describes how to get the postgres database ready to use and maintain it (do migrations / changes to the structure).

.. note:: This is about a stable database, useful for longer development work or production. A super quick way to get a postgres database running with Docker is described in :ref:`tut_toy_schedule`. In :ref:`docker-compose` we use both postgres and redis.

We also spend a few words on coding with database transactions in mind.

Finally, we'll discuss how FlexMeasures uses Redis and redis queues. For setting up on Windows, we include a guide to installing the Redis-based queuing system for handling (forecasting) jobs.


.. contents:: Table of contents
:local:
@@ -227,7 +225,7 @@ For instance, you can create forecasts for your existing metered data with this

.. code-block:: console

flexmeasures add forecasts --help


Check out its ``--help`` content to learn more. You can set which assets and which time window you want to forecast. Of course, making forecasts takes a while for a larger dataset.
@@ -359,80 +357,3 @@ It is really useful (and therefore an industry standard) to bundle certain datab

Please see the package ``flexmeasures.data.transactional`` for details on how a FlexMeasures developer should make use of this concept.
If you are writing a script or a view, you will find there the necessary structural help to bundle your work in a transaction.
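To show the idea itself (not the FlexMeasures helpers), here is a minimal transaction sketch using the standard library's ``sqlite3``: either both updates are applied, or neither is.

```python
import sqlite3

def transfer(conn, frm, to, amount):
    """Bundle two updates in one transaction: both succeed, or neither is applied."""
    try:
        with conn:  # commits on success, rolls back on exception
            conn.execute("UPDATE accounts SET balance = balance - ? WHERE name = ?", (amount, frm))
            conn.execute("UPDATE accounts SET balance = balance + ? WHERE name = ?", (amount, to))
    except sqlite3.Error:
        pass  # the rollback already happened; handle/log the failure here

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT, balance INTEGER)")
conn.executemany("INSERT INTO accounts VALUES (?, ?)", [("a", 100), ("b", 0)])
transfer(conn, "a", "b", 25)
```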


.. _redis-queue:

Redis queue
-----------------------

FlexMeasures supports jobs (e.g. forecasting) running asynchronously to the main FlexMeasures application using `Redis Queue <http://python-rq.org/>`_.

It relies on a Redis server, which has to be installed locally, or used on a separate host. In the latter case, configure :ref:`redis-config` details in your FlexMeasures config file.

Forecasting jobs are usually created (and enqueued) when new data comes in via the API. To asynchronously work on these forecasting jobs, run this in a console:

.. code-block:: console

flexmeasures jobs run-worker --queue forecasting


You should be able to run multiple workers in parallel, if necessary. You can add the ``--name`` argument to keep them a bit more organized.

The FlexMeasures unit tests use fakeredis to simulate this task queueing, with no configuration required.


Inspect the queue and jobs
^^^^^^^^^^^^^^^^^^^^^^^^^^^^

The first option to inspect the state of the ``forecasting`` queue should be via the formidable `RQ dashboard <https://github.com/Parallels/rq-dashboard>`_. If you have admin rights, you can access it at ``your-flexmeasures-url/rq/``\ , so for instance ``http://localhost:5000/rq/``. You can also start RQ dashboard yourself (but you need to know the redis server credentials):

.. code-block:: console

pip install rq-dashboard
rq-dashboard --redis-host my.ip.addr.ess --redis-password secret --redis-database 0


RQ dashboard shows you ongoing and failed jobs, and you can see the error messages of the latter, which is very useful.

Finally, you can also inspect the queue and jobs via a console (\ `see the nice RQ documentation <http://python-rq.org/docs/>`_\ ), which is more powerful. Here is an example of inspecting the finished jobs and their results:

.. code-block:: python

from redis import Redis
from rq import Queue
from rq.job import Job
from rq.registry import FinishedJobRegistry

r = Redis("my.ip.addr.ess", port=6379, password="secret", db=2)
q = Queue("forecasting", connection=r)
finished = FinishedJobRegistry(queue=q)

finished_job_ids = finished.get_job_ids()
print("%d jobs finished successfully." % len(finished_job_ids))

job1 = Job.fetch(finished_job_ids[0], connection=r)
print("Result of job %s: %s" % (job1.id, job1.result))


Redis queues on Windows
^^^^^^^^^^^^^^^^^^^^^^^^^^^^

On Unix, the rq system is automatically set up as part of FlexMeasures's main setup (the ``rq`` dependency).

However, rq is `not functional on Windows <http://python-rq.org/docs>`_ without the Windows Subsystem for Linux.

On these versions of Windows, FlexMeasures's queuing system uses an extension of Redis Queue called ``rq-win``.
This is also an automatically installed dependency of FlexMeasures.

However, the Redis server needs to be set up separately. Redis itself does not work on Windows, so it might be easiest to commission a Redis server in the cloud (e.g. on kamatera.com).

If you want to install Redis on Windows itself, it can be set up on a virtual machine as follows:


* `Install Vagrant on Windows <https://www.vagrantup.com/intro/getting-started/>`_ and `VirtualBox <https://www.virtualbox.org/>`_
* Download the `vagrant-redis <https://raw.github.com/ServiceStack/redis-windows/master/downloads/vagrant-redis.zip>`_ vagrant configuration
* Extract ``vagrant-redis.zip`` in any folder, e.g. in ``c:\vagrant-redis``
* Set ``config.vm.box = "hashicorp/precise64"`` in the Vagrantfile, and remove the line with ``config.vm.box_url``
* Run ``vagrant up`` in Command Prompt
* In case ``vagrant up`` fails because VT-x is not available, `enable it <https://www.howali.com/2017/05/enable-disable-intel-virtualization-technology-in-bios-uefi.html>`_ in your bios `if you can <https://www.intel.com/content/www/us/en/support/articles/000005486/processors.html>`_ (more debugging tips `here <https://forums.virtualbox.org/viewtopic.php?t=92111>`_ if needed)