diff --git a/documentation/api/change_log.rst b/documentation/api/change_log.rst index db5141fec..4bc6712c4 100644 --- a/documentation/api/change_log.rst +++ b/documentation/api/change_log.rst @@ -3,12 +3,14 @@ API change log =============== -v2.0 | 2021-04-02 +.. note:: The FlexMeasures API follows its own versioning scheme. This is also reflected in the URL, allowing developers to upgrade at their own pace. + + +v2.0-2 | 2021-04-02 """"""""""""""""""" - [**Breaking change**] Switched the interpretation of horizons to rolling horizons. - [**Breaking change**] Deprecated the use of ISO 8601 repeating time intervals to denote rolling horizons. -- [**Breaking change**] Deprecated the automatic inference of horizons for *postMeterData*, *postPrognosis*, *postPriceData* and *postWeatherData* endpoints for API version below v2.0. - Introduced the "prior" field for *postMeterData*, *postPrognosis*, *postPriceData* and *postWeatherData* endpoints. - Changed the Introduction section: @@ -18,16 +20,31 @@ v2.0 | 2021-04-02 - Rewrote relevant examples using horizon and prior fields. -v2.0 | 2021-02-19 +v2.0-1 | 2021-02-19 """"""""""""""""""" - REST endpoints for managing users: `/users/` (GET), `/user/` (GET, PATCH) and `/user//password-reset` (PATCH). -v2.0 | 2020-11-14 +v2.0-0 | 2020-11-14 """"""""""""""""""" - REST endpoints for managing assets: `/assets/` (GET, POST) and `/asset/` (GET, PATCH, DELETE). + +v1.3-9 | 2021-04-XX +""""""""""""""""""" + +*Affects all versions since v1.0*. + +- Fixed a regression by partially reverting the breaking change of v1.3-8: reinstated the automatic inference of horizons for POST requests for API versions below v2.0, but changed the inference policy: we now infer that the data was recorded **right after each event** took place (leading to a zero horizon for each data point) rather than **after the last event** took place (which led to a different horizon for each data point); the latter had been the inference policy before v1.3-8.
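The change of horizon-inference policy described in the changelog entry above can be illustrated with a short sketch. This is not FlexMeasures code; the event times, the 15-minute resolution and the variable names are made up for illustration:

```python
from datetime import datetime, timedelta

# Three example events at an assumed 15-minute resolution
resolution = timedelta(minutes=15)
event_starts = [datetime(2021, 4, 1, 10, 0) + i * resolution for i in range(3)]

# Policy before v1.3-8: assume the data was recorded after the *last* event,
# which gives each data point a different (non-zero) horizon.
recording_time = event_starts[-1] + resolution
horizons_old = [recording_time - (start + resolution) for start in event_starts]
# -> horizons of 30, 15 and 0 minutes

# Policy since v1.3-9: assume the data was recorded right after *each* event,
# which gives every data point a zero horizon.
horizons_new = [timedelta(0) for _ in event_starts]
```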
+ +v1.3-8 | 2021-04-02 +""""""""""""""""""" + +*Affects all versions since v1.0*. + +- [**Breaking change**, partially reverted in v1.3-9] Deprecated the automatic inference of horizons for *postMeterData*, *postPrognosis*, *postPriceData* and *postWeatherData* endpoints for API versions below v2.0. + v1.3-7 | 2020-12-16 """"""""""""""""""" @@ -155,10 +172,7 @@ v1.1-2 | 2018-08-15 - Added the *postPriceData* endpoint - Added a description of the *postPrognosis* endpoint in the Aggregator section - Added a description of the *postPriceData* endpoint in the Aggregator and Supplier sections - -.. ifconfig:: FLEXMEASURES_MODE == "play" - - - Added the *restoreData* endpoint +- Added the *restoreData* endpoint for servers in play mode v1.1-1 | 2018-08-06 """"""""""""""""""" diff --git a/documentation/api/introduction.rst b/documentation/api/introduction.rst index f5d9429d9..e422d8eab 100644 --- a/documentation/api/introduction.rst +++ b/documentation/api/introduction.rst @@ -283,7 +283,9 @@ In case of a single group of connections, the message may be flattened to: Timeseries ^^^^^^^^^^ -Timestamps and durations are consistent with the ISO 8601 standard. All timestamps in requests to the API must be timezone-aware. The timezone indication "Z" indicates a zero offset from UTC. Additionally, we use the following shorthand for sequential values within a time interval: +Timestamps and durations are consistent with the ISO 8601 standard. The resolution of the data is implicit, see :ref:`resolutions`. + +All timestamps in requests to the API must be timezone-aware. The timezone indication "Z" indicates a zero offset from UTC. Additionally, we use the following shorthand for sequential values within a time interval: .. code-block:: json @@ -431,8 +433,9 @@ This denotes that the prognosed interval has 5 minutes left to be concluded. Resolutions ^^^^^^^^^^^ -Specifying a resolution is redundant for POST requests that contain both "values" and a "duration".
-Also, posted data is checked against the required resolution of the assets which are posted to. +Specifying a resolution is redundant for POST requests that contain both "values" and a "duration" ― FlexMeasures computes the resolution by dividing the duration by the number of values. + +When POSTing data, FlexMeasures checks this computed resolution against the required resolution of the assets which are posted to. If these can't be matched (through upsampling), an error will occur. GET requests (such as *getMeterData*) return data in the resolution which the sensor is configured for. A "resolution" may be specified explicitly to obtain the data in downsampled form, diff --git a/documentation/changelog.rst b/documentation/changelog.rst index 6fba835e8..c665c67c3 100644 --- a/documentation/changelog.rst +++ b/documentation/changelog.rst @@ -3,17 +3,70 @@ FlexMeasures Changelog ********************** -v0.2.5 | April XX, 2021 +v0.5.0 | May XX, 2021 =========================== +.. warning:: If you retrieve weather forecasts through FlexMeasures: we had to switch to OpenWeatherMap, as Dark Sky is closing. This requires an update to config variables ― the new setting is called ``OPENWEATHERMAP_API_KEY``. 
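As an aside on the Resolutions change above (introduction.rst): the inference of a resolution from "values" and a "duration" can be sketched as follows. This is a hypothetical helper, not the actual FlexMeasures implementation:

```python
from datetime import timedelta

def infer_resolution(duration: timedelta, values: list) -> timedelta:
    """Sketch: infer the event resolution of posted data by dividing
    the posted duration by the number of posted values."""
    if not values:
        raise ValueError("Cannot infer a resolution without values.")
    return duration / len(values)

# Example: 4 values over PT1H imply a 15-minute resolution
print(infer_resolution(timedelta(hours=1), [400.0, 405.2, 398.7, 401.3]))  # 0:15:00
```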
+ +New features ----------- -* Add sensors with CLI command [see `PR #83 `_] +* Allow plugins to overwrite UI routes and customise the teaser on the login form [see `PR #106 `_] +* Allow plugins to customise the copyright notice and credits in the UI footer [see `PR #123 `_] + +Bugfixes +----------- +* Fix last login date display in user list [see `PR #133 `_] +* Choose better forecasting horizons when weather data is posted [see `PR #131 `_] + +Infrastructure / Support +---------------------- +* Make assets use MW as their default unit and enforce that in the CLI as well (the API already did) [see `PR #108 `_] +* For weather forecasts, switch from Dark Sky (closed from Aug 1, 2021) to OpenWeatherMap API [see `PR #113 `_] +* Re-use the database between automated tests, if possible. This shaves two thirds off the time it takes for the FlexMeasures test suite to run [see `PR #115 `_] +* Let CLI package and plugins use Marshmallow Field definitions [see `PR #125 `_] + + +v0.4.1 | May 7, 2021 +=========================== + +Bugfixes +----------- +* Fix regression when editing assets in the UI [see `PR #122 `_] +* Fix a regression that stopped asset, market and sensor selection from working [see `PR #117 `_] +* Prevent logging out the user when clearing the session [see `PR #112 `_] +* Prevent a user-type data source from being created without setting a user [see `PR #111 `_] + +v0.4.0 | April 29, 2021 +=========================== + +.. warning:: Upgrading to this version requires running ``flexmeasures db upgrade`` (you can create a backup first with ``flexmeasures db-ops dump``). + +New features +----------- +* Configure the UI menu with ``FLEXMEASURES_LISTED_VIEWS`` [see `PR #91 `_] +* Allow for views and CLI functions to come from plugins [see also `PR #91 `_] + +.. note:: Read more on these features on `the FlexMeasures blog `__. + +Bugfixes +----------- +* Asset edit form displayed a wrong error message.
Also enabled the asset edit form to display the invalid user input back to the user [see `PR #93 `_] Infrastructure / Support ---------------------- * Updated dependencies, including Flask-Security-Too [see `PR #82 `_] -* Integration with `timely beliefs `_ lib: Sensor data as TimedBeliefs [see `PR #79 `_] +* Improved documentation after user feedback [see `PR #97 `_] +* Begin experimental integration with `timely beliefs `_ lib: Sensor data as TimedBeliefs [see `PR #79 `_ and `PR #99 `_] +* Add sensors with CLI command currently meant for developers only [see `PR #83 `_] +* Add data (beliefs about sensor events) with CLI command currently meant for developers only [see `PR #85 `_ and `PR #103 `_] + + +v0.3.1 | April 9, 2021 +=========================== + +Bugfixes +-------- +* PostMeterData endpoint was broken in API v2.0 [see `PR #95 `_] v0.3.0 | April 2, 2021 diff --git a/documentation/cli/change_log.rst b/documentation/cli/change_log.rst index df31de54e..28131d801 100644 --- a/documentation/cli/change_log.rst +++ b/documentation/cli/change_log.rst @@ -4,6 +4,11 @@ FlexMeasures CLI Changelog ********************** +since v0.4.0 | April 2, 2021 +===================== + +* Add the ``dev-add`` command group for experimental features around the upcoming data model refactoring. + since v0.3.0 | April 2, 2021 ===================== diff --git a/documentation/concepts/services.rst b/documentation/concepts/services.rst index a3f958791..2c742ec4a 100644 --- a/documentation/concepts/services.rst +++ b/documentation/concepts/services.rst @@ -15,6 +15,8 @@ The FlexMeasures platform continuously reads in meter data from your assets. To * Data gaps & strange outliers (assure data quality) * Idle processes / leaks (minimise waste) +.. todo:: These features are work in progress. Most of our customers already do this by themselves in a straightforward manner. 
+ Forecasting -------------- diff --git a/documentation/configuration.rst b/documentation/configuration.rst index e737e0780..68b3c4759 100644 --- a/documentation/configuration.rst +++ b/documentation/configuration.rst @@ -6,7 +6,7 @@ Configuration The following configurations are used by FlexMeasures. Required settings (e.g. postgres db) are marked with a double star (**). -To enable easier quickstart tutorials, these settings can be set by env vars. +To enable easier quickstart tutorials, these settings can be set by environment variables. Recommended settings (e.g. mail, redis) are marked by one star (*). .. note:: FlexMeasures is best configured via a config file. The config file for FlexMeasures can be placed in one of two locations: @@ -15,6 +15,7 @@ Recommended settings (e.g. mail, redis) are marked by one star (*). * in the user's home directory (e.g. ``~/.flexmeasures.cfg`` on Unix). In this case, note the dot at the beginning of the filename! * in the app's instance directory (e.g. ``/path/to/your/flexmeasures/code/instance/flexmeasures.cfg``\ ). The path to that instance directory is shown to you by running flexmeasures (e.g. ``flexmeasures run``\ ) with required settings missing or otherwise by running ``flexmeasures shell``. + Basic functionality ------------------- @@ -25,11 +26,14 @@ Level above which log messages are added to the log file. See the ``logging`` pa Default: ``logging.WARNING`` + +.. _modes-config: + FLEXMEASURES_MODE ^^^^^^^^^^^^^^^^^ The mode in which FlexMeasures is being run, e.g. "demo" or "play". -This is used to turn on certain extra behaviours. +This is used to turn on certain extra behaviours, see :ref:`modes-dev` for details. Default: ``""`` @@ -51,6 +55,19 @@ and the first month when the domain was under the current owner's administration Default: ``{"flexmeasures.io": "2021-01"}`` + +.. 
_plugin-config: + +FLEXMEASURES_PLUGIN_PATHS +^^^^^^^^^^^^^^^^^^^^^^^^^ + +A list of absolute paths to Blueprint-based plugins for FlexMeasures (e.g. for custom views or CLI functions). +Each plugin path points to a folder, which should contain an ``__init__.py`` file where the Blueprint is defined. +See :ref:`plugins` on what content is expected. + +Default: ``[]`` + + FLEXMEASURES_DB_BACKUP_PATH ^^^^^^^^^^^^^^^^^^^^^^^^^^^ @@ -65,6 +82,7 @@ Whether to turn on a feature which times requests made through FlexMeasures. Int Default: ``False`` + UI -- @@ -89,6 +107,7 @@ Interval in which viewing the queues dashboard refreshes itself, in milliseconds Default: ``3000`` (3 seconds) + Timing ------ @@ -113,18 +132,14 @@ The horizon to use when making schedules. Default: ``timedelta(hours=2 * 24)`` + Tokens ------ -DARK_SKY_API_KEY +OPENWEATHERMAP_API_KEY ^^^^^^^^^^^^^^^^^^^^^^ -Token for accessing the DarkSky weather forecasting service. - -.. note:: DarkSky will soon become non-public (Aug 1, 2021), so they are not giving out new tokens. - We'll use another service soon (`see this issue `_). - This is unfortunate. - In the meantime, if you can't find anybody lending their token, consider posting weather forecasts to the FlexMeasures database yourself. +Token for accessing the OpenWeatherMap weather forecasting service. Default: ``None`` @@ -133,7 +148,7 @@ MAPBOX_ACCESS_TOKEN ^^^^^^^^^^^^^^^^^^^ -Token for accessing the mapbox API (for displaying maps on the dashboard and asset pages). You can learn how to obtain one `here `_ +Token for accessing the Mapbox API (for displaying maps on the dashboard and asset pages).
You can learn how to obtain one `here `_ Default: ``None`` @@ -144,6 +159,7 @@ Token which external services can use to check on the status of recurring tasks Default: ``None`` + SQLAlchemy ---------- @@ -172,6 +188,7 @@ Default: "connect_args": {"options": "-c timezone=utc"}, } + Security -------- @@ -215,7 +232,7 @@ Default: ``60 * 60 * 6`` (six hours) SECURITY_TRACKABLE ^^^^^^^^^^^^^^^^^^ -Wether to track user statistics. Turning this on requires certain user fields. +Whether to track user statistics. Turning this on requires certain user fields. We do not use this feature, but we do track number of logins. Default: ``False`` @@ -223,14 +240,14 @@ CORS_ORIGINS ^^^^^^^^^^^^ -Allowed cross-origins. Set to "*" to allow all. For development (e.g. javascript on localhost) you might use "null" in this list. +Allowed cross-origins. Set to "*" to allow all. For development (e.g. JavaScript on localhost) you might use "null" in this list. Default: ``[]`` CORS_RESOURCES: ^^^^^^^^^^^^^^^ -FlexMeasures resources which get cors protection. This can be a regex, a list of them or dict with all possible options. +FlexMeasures resources which get CORS protection. This can be a regex, a list of them or a dictionary with all possible options. Default: ``[r"/api/*"]`` @@ -244,6 +261,7 @@ Allows users to make authenticated requests. If true, injects the Access-Control Default: ``True`` + .. _mail-config: Mail @@ -335,7 +353,7 @@ Default: ``6379`` FLEXMEASURES_REDIS_DB_NR (*) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ -Number of the redis database to use (Redis per default has 16 databases, nubered 0-15) +Number of the Redis database to use (Redis by default has 16 databases, numbered 0-15) Default: ``0`` @@ -349,6 +367,8 @@ Default: ``None`` Demonstrations -------------- +..
_demo-credentials-config: + +FLEXMEASURES_PUBLIC_DEMO_CREDENTIALS ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ @@ -356,6 +376,8 @@ When ``FLEXMEASURES_MODE=demo``\ , this can hold login credentials (demo user em Default: ``None`` +.. _demo-year-config: + FLEXMEASURES_DEMO_YEAR ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ @@ -364,9 +386,14 @@ so that old imported data can be demoed as if it were current Default: ``None`` -FLEXMEASURES_SHOW_CONTROL_UI + +.. _menu-config: + +FLEXMEASURES_LISTED_VIEWS ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ -The control page is still mocked, so this setting controls if it is to be shown. +A list of the views which are shown in the menu. -Default: ``False`` +.. note:: This setting is likely to be deprecated soon, as we might want to control it per account (once we have implemented a multi-tenant data model per FlexMeasures server). + +Default: ``["dashboard", "analytics", "portfolio", "assets", "users"]`` diff --git a/documentation/dev/data.rst b/documentation/dev/data.rst index 39a5ca81f..1f398f03d 100644 --- a/documentation/dev/data.rst +++ b/documentation/dev/data.rst @@ -100,36 +100,35 @@ Or, from within Postgres console: CREATE DATABASE flexmeasures_test WITH OWNER = flexmeasures_test; -Log in as the postgres superuser and connect to your newly-created database: +Finally, test if you can log in as the flexmeasures user: .. code-block:: bash - sudo -u postgres psql + psql -U flexmeasures --password -h 127.0.0.1 -d flexmeasures .. code-block:: sql - \connect flexmeasures + \q + +Add Postgres Extensions to your database(s) +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +To find the nearest sensors, FlexMeasures needs some extra Postgres support. Add the following extensions while logged in as the postgres superuser: +.. code-block:: bash + + sudo -u postgres psql + .. code-block:: sql + \connect flexmeasures CREATE EXTENSION cube; CREATE EXTENSION earthdistance; -Connect to the ``flexmeasures_test`` database and repeat creating these extensions there. Then ``exit``.
- -Finally, try logging in as the flexmeasures user once: - -.. code-block:: bash - - psql -U flexmeasures --password -h 127.0.0.1 -d flexmeasures - -.. code-block:: sql - - \q +If you have it, connect to the ``flexmeasures_test`` database and repeat creating these extensions there. Then ``exit``. Configure FlexMeasures app for that database @@ -176,7 +175,7 @@ Then we import the data dump we made earlier: .. code-block:: bash - flask db-ops restore + flexmeasures db-ops restore A potential ``alembic_version`` error should not prevent other data tables from being restored. diff --git a/documentation/dev/modes.rst b/documentation/dev/modes.rst new file mode 100644 index 000000000..238082d0f --- /dev/null +++ b/documentation/dev/modes.rst @@ -0,0 +1,41 @@ +.. _modes-dev: + +Modes +============ + +FlexMeasures can be run in specific modes (see the :ref:`modes-config` config setting). +This is useful for certain special situations. Two are supported out of the box and we document here +how FlexMeasures behaves differently in these modes. + +Demo +------- + +In this mode, the server is assumed to be used as a demonstration tool. Most of the following adaptations therefore happen in the UI. + +- [Data] Demo data is often from an older source, and it's a hassle to change the year to the current year. FlexMeasures allows you to set :ref:`demo-year-config`, and when in ``demo`` mode, the current year will be translated to that year in the background. +- [UI] Logged-in users can view queues on the demo server (usually only admins can do that) +- [UI] Demo servers often display login credentials, so visitors can try out functionality. Use the :ref:`demo-credentials-config` config setting to do this. +- [UI] The dashboard shows all non-empty asset groups, instead of only the ones for the current user. +- [UI] The analytics page mocks confidence intervals around power, price and weather data, so that the demo data doesn't need to have them.
+- [UI] The portfolio page mocks flexibility numbers and a control action. + +Play +------ + +In this mode, the server is assumed to be used to run simulations. + +Big features +^^^^^^^^^^^^^ + +- [Data] Allows overwriting existing data when saving data to the database. +- [API] The inferred recording time of incoming data is immediately after the event took place, rather than the actual time at which the server received the data. +- [API] Posting price or weather data does not trigger forecasting jobs. +- [API] The ``restoreData`` endpoint is registered, enabling database resets through the API. +- [API] When posting weather data for a new location, a new weather sensor is automatically created, instead of returning the nearest available weather sensor to post data to. + +Small features +^^^^^^^^^^^^^^^ + +- [API] Posted UDI events are not enforced to be consecutive. +- [API] Names in ``GetConnectionResponse`` are the connections' unique database names rather than their display names (this feature is planned to be deprecated). +- [UI] The dashboard plot showing the latest power value is not enforced to lie in the past (in case of simulating future values). diff --git a/documentation/dev/plugins.rst b/documentation/dev/plugins.rst new file mode 100644 index 000000000..c70498e2c --- /dev/null +++ b/documentation/dev/plugins.rst @@ -0,0 +1,161 @@ +.. _plugins: + +Writing Plugins +==================== + +You can extend FlexMeasures with functionality like UI pages or CLI functions. + +A FlexMeasures plugin works as a `Flask Blueprint `_. + +.. todo:: We'll use this to allow for custom forecasting and scheduling algorithms, as well. + + +How it works +^^^^^^^^^^^^^^ + +Use the config setting :ref:`plugin-config` to point to your plugin(s). + +Here are the assumptions FlexMeasures makes to be able to import your Blueprint: + +- The plugin folder contains an ``__init__.py`` file. +- In this init, you define a Blueprint object called ``_bp``.
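Under the assumptions listed above, importing a plugin Blueprint from a configured path could look roughly like this. This is a hypothetical loader sketch, not FlexMeasures' actual import code; the ``<folder>_bp`` naming convention follows the showcase further down:

```python
import importlib.util
import os

def load_plugin_blueprint(plugin_path: str):
    """Sketch: import <plugin_path>/__init__.py as a module and return
    the Blueprint object it defines, assumed to be named after the
    plugin folder plus "_bp" (as in the showcase)."""
    plugin_name = os.path.basename(plugin_path.rstrip(os.sep))
    spec = importlib.util.spec_from_file_location(
        plugin_name, os.path.join(plugin_path, "__init__.py")
    )
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)  # runs the plugin's __init__.py
    return getattr(module, f"{plugin_name}_bp")
```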
+ +We'll refer to the plugin with the name of your plugin folder. + + +Showcase +^^^^^^^^^ + +Here is a showcase file which constitutes a FlexMeasures plugin. We imagine that we made a plugin to implement some custom logic for a client. + +We created the file ``/our_client/__init__.py``. So, ``our_client`` is the plugin folder and becomes the plugin name. +All else that is needed for this showcase (not shown here) is ``/our_client/templates/metrics.html``, which works just as other FlexMeasures templates (they are Jinja2 templates and you can start them with ``{% extends "base.html" %}`` for integration into the FlexMeasures structure). + + +* We demonstrate adding a view which can be rendered via the FlexMeasures base templates. +* We also showcase a CLI function which has access to the FlexMeasures `app` object. It can be called via ``flexmeasures our_client test``. + +.. code-block:: python + + from flask import Blueprint, render_template, abort + + from flask_security import login_required + from flexmeasures.ui.utils.view_utils import render_flexmeasures_template + + + our_client_bp = Blueprint('our_client', 'our_client', + template_folder='templates') + + + # Showcase: Adding a view + + @our_client_bp.route('/') + @our_client_bp.route('/metrics') + @login_required + def metrics(): + msg = "I am part of FM !" + # Note that we render via the in-built FlexMeasures way + return render_flexmeasures_template( + "metrics.html", + message=msg, + ) + + + # Showcase: Adding a CLI command + + import click + from flask import current_app + from flask.cli import with_appcontext + + + our_client_bp.cli.help = "Our client commands" + + @our_client_bp.cli.command("test") + @with_appcontext + def oc_test(): + print(f"I am a CLI command, part of FlexMeasures: {current_app}") + + +.. note:: You can overwrite FlexMeasures routes here. In our example above, we set the root route ``/``. 
FlexMeasures registers plugin routes before its own, so in this case visiting the root URL of your app will display this plugged-in view (the same you'd see at `/metrics`). + +.. note:: Plugin views can also be added to the FlexMeasures UI menu ― just name them in the config setting :ref:`menu-config`. + +Validating data with marshmallow +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + +FlexMeasures validates input data using `marshmallow `_. +Data fields can be made suitable for use in CLI commands through our ``MarshmallowClickMixin``. +An example: + +.. code-block:: python + + from datetime import datetime + from typing import Optional + + import click + from flexmeasures.data.schemas.times import AwareDateTimeField + from flexmeasures.data.schemas.utils import MarshmallowClickMixin + from marshmallow import fields + + class StrField(fields.Str, MarshmallowClickMixin): + """String field validator usable for UI routes and CLI functions.""" + + @click.command("meet") + @click.option( + "--where", + required=True, + type=StrField(), # see above: we just made this field suitable for CLI functions + help="(Required) Where we meet", + ) + @click.option( + "--when", + required=False, + type=AwareDateTimeField(format="iso"), # FlexMeasures already made this field suitable for CLI functions + help="[Optional] When we meet (expects timezone-aware ISO 8601 datetime format)", + ) + def schedule_meeting( + where: str, + when: Optional[datetime] = None, + ): + print(f"Okay, see you {where} on {when}.") + + +Using other files in your plugin +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + +Say you want to include other Python files in your plugin, importing them in your ``__init__.py`` file. +This can be done if you put the plugin path on the import path. Do it like this in your ``__init__.py``: + +.. 
code-block:: python + + import os + import sys + + HERE = os.path.dirname(os.path.abspath(__file__)) + sys.path.insert(0, HERE) + + from my_other_file import my_function + + +Customising the login teaser +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + +FlexMeasures shows an image carousel next to its login form (see ``ui/templates/admin/login_user.html``). + +You can overwrite this content by adding your own login template and defining the ``teaser`` block yourself, e.g.: + +.. code-block:: html + + {% extends "admin/login_user.html" %} + + {% block teaser %} + +

+ <h1>Welcome to my plugin!</h1>

+ + {% endblock %} + +Place this template file in the template folder of your plugin blueprint (see above). Your template must have a filename different from "login_user", so FlexMeasures will find it properly! + +Finally, add this config setting to your FlexMeasures config file (using the template filename you chose, obviously): + + SECURITY_LOGIN_USER_TEMPLATE = "my_user_login.html" diff --git a/documentation/getting-started.rst b/documentation/getting-started.rst index ca95bc513..4bbf9d6d7 100644 --- a/documentation/getting-started.rst +++ b/documentation/getting-started.rst @@ -17,6 +17,7 @@ Install dependencies and the ``flexmeasures`` platform itself: pip install flexmeasures +.. note:: With newer Python versions and Windows, some smaller dependencies (e.g. ``tables`` or ``rq-win``) might cause issues, as support for them often lags behind. You might overcome this with a little research, by `installing from wheels `_ or `from the repo `_, respectively. Make a secret key for sessions and password salts @@ -217,7 +218,7 @@ To collect weather measurements and forecasts from the DarkSky API, there is a t flexmeasures add external-weather-forecasts --location 33.4366,126.5269 --store-in-db -.. note:: DarkSky is not handing out tokens anymore, as they have been bought by Apple (see `issue 3 `_). +.. note:: DarkSky is not handing out tokens any more, as they have been bought by Apple (see `issue 3 `_). Preparing the job queue database and start workers diff --git a/documentation/index.rst b/documentation/index.rst index 13c671cd4..b05c75b0c 100644 --- a/documentation/index.rst +++ b/documentation/index.rst @@ -84,7 +84,8 @@ The platform operator of FlexMeasures can be an Aggregator. dev/data dev/api dev/ci - + dev/plugins + dev/modes ..
toctree:: :caption: Integrations diff --git a/flexmeasures/api/__init__.py b/flexmeasures/api/__init__.py index ef838271c..64ec35607 100644 --- a/flexmeasures/api/__init__.py +++ b/flexmeasures/api/__init__.py @@ -6,9 +6,9 @@ from flexmeasures import __version__ as flexmeasures_version from flexmeasures.data.models.user import User from flexmeasures.api.common.utils.args_parsing import ( - FMValidationError, validation_error_handler, ) +from flexmeasures.data.schemas.utils import FMValidationError # The api blueprint. It is registered with the Flask app (see app.py) flexmeasures_api = Blueprint("flexmeasures_api", __name__) @@ -105,9 +105,11 @@ def register_at(app: Flask): from flexmeasures.api.v1_2 import register_at as v1_2_register_at from flexmeasures.api.v1_3 import register_at as v1_3_register_at from flexmeasures.api.v2_0 import register_at as v2_0_register_at + from flexmeasures.api.dev import register_at as dev_register_at v1_register_at(app) v1_1_register_at(app) v1_2_register_at(app) v1_3_register_at(app) v2_0_register_at(app) + dev_register_at(app) diff --git a/flexmeasures/api/common/responses.py b/flexmeasures/api/common/responses.py index d381fa8d5..6b2133d06 100644 --- a/flexmeasures/api/common/responses.py +++ b/flexmeasures/api/common/responses.py @@ -64,7 +64,7 @@ def invalid_domain(message: str) -> ResponseTuple: return dict(result="Rejected", status="INVALID_DOMAIN", message=message), 400 -@BaseMessage("The prognosis horizon in your request could not be parsed.") +@BaseMessage("The horizon field in your request could not be parsed.") def invalid_horizon(message: str) -> ResponseTuple: return dict(result="Rejected", status="INVALID_HORIZON", message=message), 400 @@ -86,7 +86,7 @@ def invalid_ptu_duration(message: str) -> ResponseTuple: ) -@BaseMessage("Only the following resolutions are supported:") +@BaseMessage("Only the following resolutions in the data are supported:") def unapplicable_resolution(message: str) -> ResponseTuple: return 
dict(result="Rejected", status="INVALID_RESOLUTION", message=message), 400 diff --git a/flexmeasures/api/common/schemas/sensors.py b/flexmeasures/api/common/schemas/sensors.py index f39d61cff..4989b5fb4 100644 --- a/flexmeasures/api/common/schemas/sensors.py +++ b/flexmeasures/api/common/schemas/sensors.py @@ -19,7 +19,7 @@ class EntityAddressValidationError(FMValidationError): class SensorField(fields.Str): - """Field that deserializes to a Sensor, Asset, Market or WeatherSensor + """Field that de-serializes to a Sensor, Asset, Market or WeatherSensor and serializes back to an entity address (string).""" # todo: when Actuators also get an entity address, refactor this class to EntityField, @@ -43,7 +43,7 @@ def __init__( def _deserialize( # noqa: C901 todo: the noqa can probably be removed after refactoring Asset/Market/WeatherSensor to Sensor self, value, attr, obj, **kwargs ) -> Union[Sensor, Asset, Market, WeatherSensor]: - """Deserialize to a Sensor, Asset, Market or WeatherSensor.""" + """De-serialize to a Sensor, Asset, Market or WeatherSensor.""" # TODO: After refactoring, unify 3 generic_asset cases -> 1 sensor case try: ea = parse_entity_address(value, self.entity_type, self.fm_scheme) diff --git a/flexmeasures/api/common/schemas/tests/test_sensors.py b/flexmeasures/api/common/schemas/tests/test_sensors.py index 949952bcc..829daf91b 100644 --- a/flexmeasures/api/common/schemas/tests/test_sensors.py +++ b/flexmeasures/api/common/schemas/tests/test_sensors.py @@ -11,7 +11,7 @@ "entity_address, entity_type, fm_scheme, exp_deserialization_name", [ ( - build_entity_address(dict(sensor_id=9), "sensor"), + build_entity_address(dict(sensor_id=1), "sensor"), "sensor", "fm1", "my daughter's height", @@ -26,7 +26,7 @@ ), ( build_entity_address( - dict(owner_id=1, asset_id=3), "connection", fm_scheme="fm0" + dict(owner_id=1, asset_id=4), "connection", fm_scheme="fm0" ), "connection", "fm0", @@ -49,7 +49,14 @@ ], ) def test_sensor_field_straightforward( - 
entity_address, entity_type, fm_scheme, exp_deserialization_name + add_sensors, + setup_markets, + add_battery_assets, + add_weather_sensors, + entity_address, + entity_type, + fm_scheme, + exp_deserialization_name, ): """Testing straightforward cases""" sf = SensorField(entity_type, fm_scheme) diff --git a/flexmeasures/api/common/utils/api_utils.py b/flexmeasures/api/common/utils/api_utils.py index 91d2a2f6e..1bebc73fe 100644 --- a/flexmeasures/api/common/utils/api_utils.py +++ b/flexmeasures/api/common/utils/api_utils.py @@ -13,9 +13,7 @@ from flexmeasures.data import db from flexmeasures.data.models.assets import Asset, Power from flexmeasures.data.models.markets import Price -from flexmeasures.data.models.data_sources import DataSource from flexmeasures.data.models.weather import WeatherSensor, Weather -from flexmeasures.data.models.user import User from flexmeasures.data.utils import save_to_session from flexmeasures.api.common.responses import ( unrecognized_sensor, @@ -283,16 +281,6 @@ def asset_replace_name_with_id(connections_as_name: List[str]) -> List[str]: return connections_as_ea -def get_or_create_user_data_source(user: User) -> DataSource: - data_source = DataSource.query.filter(DataSource.user == user).one_or_none() - if not data_source: - current_app.logger.info("SETTING UP USER AS NEW DATA SOURCE...") - data_source = DataSource(user=user) - db.session.add(data_source) - db.session.flush() # flush so that we can reference the new object in the current db session - return data_source - - def get_weather_sensor_by( weather_sensor_type_name: str, latitude: float = 0, longitude: float = 0 ) -> Union[WeatherSensor, ResponseTuple]: @@ -348,7 +336,7 @@ def get_weather_sensor_by( def save_to_db( timed_values: List[Union[Power, Price, Weather]], forecasting_jobs: List[Job] ) -> ResponseTuple: - """Put the timed values into the database and create forecasting jobs. + """Put the timed values into the database and enqueue forecasting jobs. 
Data can only be replaced on servers in play mode. diff --git a/flexmeasures/api/common/utils/args_parsing.py b/flexmeasures/api/common/utils/args_parsing.py index 46d9a0862..32383ff7f 100644 --- a/flexmeasures/api/common/utils/args_parsing.py +++ b/flexmeasures/api/common/utils/args_parsing.py @@ -1,4 +1,5 @@ from flask import jsonify +from flexmeasures.data.schemas.utils import FMValidationError from webargs.multidictproxy import MultiDictProxy from webargs import ValidationError from webargs.flaskparser import parser @@ -18,18 +19,6 @@ def handle_error(error, req, schema, *, error_status_code, error_headers): raise error -class FMValidationError(ValidationError): - """ - Custom validation error class. - It differs from the classic validation error by having two - attributes, according to the USEF 2015 reference implementation. - Subclasses of this error might adjust the `status` attribute accordingly. - """ - - result = "Rejected" - status = "UNPROCESSABLE_ENTITY" - - def validation_error_handler(error: FMValidationError): """Handles errors during parsing. Aborts the current HTTP request and responds with a 422 error. 
diff --git a/flexmeasures/api/common/utils/validators.py b/flexmeasures/api/common/utils/validators.py index 74da4c030..027e4d079 100644 --- a/flexmeasures/api/common/utils/validators.py +++ b/flexmeasures/api/common/utils/validators.py @@ -17,7 +17,7 @@ from webargs.flaskparser import parser from flexmeasures.api.common.schemas.sensors import SensorField -from flexmeasures.api.common.schemas.times import DurationField +from flexmeasures.data.schemas.times import DurationField from flexmeasures.api.common.responses import ( # noqa: F401 required_info_missing, invalid_horizon, @@ -296,7 +296,9 @@ def decorated_service(*args, **kwargs): return wrapper -def optional_prior_accepted(ex_post: bool = False, infer_missing: bool = True): +def optional_prior_accepted( + ex_post: bool = False, infer_missing: bool = True, infer_missing_play: bool = False +): """Decorator which specifies that a GET or POST request accepts an optional prior. It parses relevant form data and sets the "prior" keyword param. @@ -304,9 +306,15 @@ def optional_prior_accepted(ex_post: bool = False, infer_missing: bool = True): - Denotes "at least before <prior>" - This results in the filter belief_time_window = (None, prior) - Optionally, an ex_post flag can be passed to the decorator to indicate that only ex-post datetimes are allowed. - As a useful setting (at least for POST requests), set infer_missing to True to have servers - (that are not in play mode) derive a prior from the server time. + Interpretation for POST requests: + - Denotes "recorded <prior> to some datetime", + - this results in the assignment belief_time = prior + + :param ex_post: if True, only ex-post datetimes are allowed. + :param infer_missing: if True, servers assume that the belief_time of posted + values is server time. This setting is meant to be used for POST requests. + :param infer_missing_play: if True, servers in play mode assume that the belief_time of posted + values is server time. 
This setting is meant to be used for POST requests. """ def wrapper(fn): @@ -332,11 +340,11 @@ def decorated_service(*args, **kwargs): if prior < knowledge_time: extra_info = "Meter data can only be observed after the fact." return invalid_horizon(extra_info) - elif ( - infer_missing is True - and current_app.config.get("FLEXMEASURES_MODE", "") != "play" + elif infer_missing is True or ( + infer_missing_play is True + and current_app.config.get("FLEXMEASURES_MODE", "") == "play" ): - # A missing prior is inferred by the server (if not in play mode) + # A missing prior is inferred by the server prior = server_now() else: # Otherwise, a missing prior is fine (a horizon may still be inferred by the server) @@ -353,6 +361,7 @@ def decorated_service(*args, **kwargs): def optional_horizon_accepted( # noqa C901 ex_post: bool = False, infer_missing: bool = True, + infer_missing_play: bool = False, accept_repeating_interval: bool = False, ): """Decorator which specifies that a GET or POST request accepts an optional horizon. @@ -376,11 +385,13 @@ def optional_horizon_accepted( # noqa C901 def post_meter_data(horizon): return 'Meter data posted' - :param ex_post: if True, only non-positive horizons are allowed. - :param infer_missing: if True, servers that are in play mode assume that the belief_horizon of posted - values is 0 hours. This setting is meant to be used for POST requests. - :param accept_repeating_interval: if True, the "rolling" keyword param is also set - (this was used for POST requests before v2.0) + :param ex_post: if True, only non-positive horizons are allowed. + :param infer_missing: if True, servers assume that the belief_horizon of posted + values is 0 hours. This setting is meant to be used for POST requests. + :param infer_missing_play: if True, servers in play mode assume that the belief_horizon of posted + values is 0 hours. This setting is meant to be used for POST requests. 
+ :param accept_repeating_interval: if True, the "rolling" keyword param is also set + (this was used for POST requests before v2.0) """ def wrapper(fn): @@ -410,15 +421,12 @@ def decorated_service(*args, **kwargs): "For example: R/P1D should be replaced by P1D." ) return invalid_horizon(extra_info) - elif ( - infer_missing is True + elif infer_missing is True or ( + infer_missing_play is True and current_app.config.get("FLEXMEASURES_MODE", "") == "play" ): - # A missing horizon is set to zero for servers in play mode + # A missing horizon is set to zero horizon = timedelta(hours=0) - elif infer_missing is True and accept_repeating_interval is True: - extra_info = "Missing horizons are no longer accepted for API versions below v2.0." - return invalid_horizon(extra_info) else: # Otherwise, a missing horizon is fine (a prior may still be inferred by the server) horizon = None diff --git a/flexmeasures/api/dev/__init__.py b/flexmeasures/api/dev/__init__.py new file mode 100644 index 000000000..c175be741 --- /dev/null +++ b/flexmeasures/api/dev/__init__.py @@ -0,0 +1,9 @@ +from flask import Flask + + +def register_at(app: Flask): + """This can be used to register FlaskViews.""" + + from flexmeasures.api.dev.sensors import SensorAPI + + SensorAPI.register(app, route_prefix="/api/dev") diff --git a/flexmeasures/api/dev/sensors.py b/flexmeasures/api/dev/sensors.py new file mode 100644 index 000000000..96e9b1272 --- /dev/null +++ b/flexmeasures/api/dev/sensors.py @@ -0,0 +1,74 @@ +import json + +from flask_classful import FlaskView, route +from flask_login import login_required +from flask_security import roles_required +from marshmallow import fields +from webargs.flaskparser import use_kwargs +from werkzeug.exceptions import abort + +from flexmeasures.data.schemas.times import AwareDateTimeField +from flexmeasures.data.models.time_series import Sensor + + +class SensorAPI(FlaskView): + """ + This view exposes sensor attributes through API endpoints under development. 
+ These endpoints are not yet part of our official API, but support the FlexMeasures UI. + """ + + route_base = "/sensor" + + @login_required + @roles_required("admin")  # todo: remove after we check for sensor ownership + @route("/<id>/chart/") + @use_kwargs( + { + "event_starts_after": AwareDateTimeField(format="iso", required=False), + "event_ends_before": AwareDateTimeField(format="iso", required=False), + "beliefs_after": AwareDateTimeField(format="iso", required=False), + "beliefs_before": AwareDateTimeField(format="iso", required=False), + "include_data": fields.Boolean(required=False), + "dataset_name": fields.Str(required=False), + }, + location="query", + ) + def get_chart(self, id, **kwargs): + """GET from /sensor/<id>/chart""" + sensor = get_sensor_or_abort(id) + return json.dumps(sensor.chart(**kwargs)) + + @login_required + @roles_required("admin")  # todo: remove after we check for sensor ownership + @route("/<id>/chart_data/") + @use_kwargs( + { + "event_starts_after": AwareDateTimeField(format="iso", required=False), + "event_ends_before": AwareDateTimeField(format="iso", required=False), + "beliefs_after": AwareDateTimeField(format="iso", required=False), + "beliefs_before": AwareDateTimeField(format="iso", required=False), + }, + location="query", + ) + def get_chart_data(self, id, **kwargs): + """GET from /sensor/<id>/chart_data + + Data for use in charts (in case you have the chart specs already). 
+ """ + sensor = get_sensor_or_abort(id) + return sensor.search_beliefs(as_json=True, **kwargs) + + @login_required + @roles_required("admin")  # todo: remove after we check for sensor ownership + def get(self, id: int): + """GET from /sensor/<id>""" + sensor = get_sensor_or_abort(id) + attributes = ["name", "timezone", "timerange"] + return {attr: getattr(sensor, attr) for attr in attributes} + + +def get_sensor_or_abort(id: int) -> Sensor: + sensor = Sensor.query.filter(Sensor.id == id).one_or_none() + if sensor is None: + raise abort(404, f"Sensor {id} not found") + return sensor diff --git a/flexmeasures/api/tests/conftest.py b/flexmeasures/api/tests/conftest.py index ace7cb10d..32c6b57d7 100644 --- a/flexmeasures/api/tests/conftest.py +++ b/flexmeasures/api/tests/conftest.py @@ -7,8 +7,8 @@ from flask_security.utils import hash_password -@pytest.fixture(scope="function", autouse=True) -def setup_api_test_data(db): +@pytest.fixture(scope="module", autouse=True) +def setup_api_test_data(db, setup_roles_users): """ Adding the task-runner """ diff --git a/flexmeasures/api/v1/implementations.py b/flexmeasures/api/v1/implementations.py index 0940ed819..9203161b3 100644 --- a/flexmeasures/api/v1/implementations.py +++ b/flexmeasures/api/v1/implementations.py @@ -12,6 +12,7 @@ EntityAddressException, ) from flexmeasures.data.models.assets import Asset, Power +from flexmeasures.data.models.data_sources import get_or_create_source from flexmeasures.data.services.resources import get_assets from flexmeasures.data.services.forecasting import create_forecasting_jobs from flexmeasures.api.common.responses import ( @@ -24,7 +25,6 @@ ) from flexmeasures.api.common.utils.api_utils import ( groups_to_dict, - get_or_create_user_data_source, save_to_db, ) from flexmeasures.api.common.utils.validators import ( @@ -97,7 +97,9 @@ def get_meter_data_response( @units_accepted("power", "MW") @assets_required("connection") @values_required -@optional_horizon_accepted(ex_post=True, 
accept_repeating_interval=True) +@optional_horizon_accepted( + ex_post=True, infer_missing=True, accept_repeating_interval=True +) @period_required @post_data_checked_for_required_resolution("connection", "fm0") @as_json @@ -242,7 +244,7 @@ def create_connection_and_value_groups( # noqa: C901 current_app.logger.info("POSTING POWER DATA") - data_source = get_or_create_user_data_source(current_user) + data_source = get_or_create_source(current_user) user_assets = get_assets() if not user_assets: current_app.logger.info("User doesn't seem to have any assets") diff --git a/flexmeasures/api/v1/tests/conftest.py b/flexmeasures/api/v1/tests/conftest.py index 6ebc96298..adb09523f 100644 --- a/flexmeasures/api/v1/tests/conftest.py +++ b/flexmeasures/api/v1/tests/conftest.py @@ -4,27 +4,23 @@ import isodate import pytest -from flask_security import SQLAlchemySessionUserDatastore from flask_security.utils import hash_password from flexmeasures.data.services.users import create_user -@pytest.fixture(scope="function", autouse=True) -def setup_api_test_data(db): +@pytest.fixture(scope="module", autouse=True) +def setup_api_test_data(db, setup_roles_users, add_market_prices): """ Set up data for API v1 tests. 
""" print("Setting up data for API v1 tests on %s" % db.engine) - from flexmeasures.data.models.user import User, Role from flexmeasures.data.models.assets import Asset, AssetType, Power from flexmeasures.data.models.data_sources import DataSource - user_datastore = SQLAlchemySessionUserDatastore(db.session, User, Role) - # Create an anonymous user - create_user( + test_anonymous_prosumer = create_user( username="anonymous user with Prosumer role", email="demo@seita.nl", password=hash_password("testtest"), @@ -35,7 +31,6 @@ def setup_api_test_data(db): ) # Create 1 test asset for the anonymous user - test_prosumer = user_datastore.find_user(email="demo@seita.nl") test_asset_type = AssetType(name="test-type") db.session.add(test_asset_type) asset_names = ["CS 0"] @@ -50,7 +45,7 @@ def setup_api_test_data(db): longitude=100, unit="MW", ) - asset.owner = test_prosumer + asset.owner = test_anonymous_prosumer assets.append(asset) db.session.add(asset) @@ -62,7 +57,7 @@ def setup_api_test_data(db): ) # Create 5 test assets for the test_prosumer user - test_prosumer = user_datastore.find_user(email="test_prosumer@seita.nl") + test_prosumer = setup_roles_users["Test Prosumer"] asset_names = ["CS 1", "CS 2", "CS 3", "CS 4", "CS 5"] assets: List[Asset] = [] for asset_name in asset_names: @@ -83,7 +78,7 @@ def setup_api_test_data(db): # Add power forecasts to one of the assets, for two sources cs_5 = Asset.query.filter(Asset.name == "CS 5").one_or_none() - test_supplier = user_datastore.find_user(email="test_supplier@seita.nl") + test_supplier = setup_roles_users["Test Supplier"] prosumer_data_source = DataSource.query.filter( DataSource.user == test_prosumer ).one_or_none() @@ -113,3 +108,32 @@ def setup_api_test_data(db): db.session.bulk_save_objects(meter_data) print("Done setting up data for API v1 tests") + + +@pytest.fixture(scope="function") +def setup_fresh_api_test_data(fresh_db, setup_roles_users_fresh_db): + db = fresh_db + setup_roles_users = 
setup_roles_users_fresh_db + from flexmeasures.data.models.assets import Asset, AssetType + + # Create 5 test assets for the test_prosumer user + test_prosumer = setup_roles_users["Test Prosumer"] + test_asset_type = AssetType(name="test-type") + db.session.add(test_asset_type) + asset_names = ["CS 1", "CS 2", "CS 3", "CS 4", "CS 5"] + assets: List[Asset] = [] + for asset_name in asset_names: + asset = Asset( + name=asset_name, + asset_type_name="test-type", + event_resolution=timedelta(minutes=15), + capacity_in_mw=1, + latitude=100, + longitude=100, + unit="MW", + ) + asset.owner = test_prosumer + if asset_name == "CS 4": + asset.event_resolution = timedelta(hours=1) + assets.append(asset) + db.session.add(asset) diff --git a/flexmeasures/api/v1/tests/test_api_v1.py b/flexmeasures/api/v1/tests/test_api_v1.py index 2b95713d1..6c72178d0 100644 --- a/flexmeasures/api/v1/tests/test_api_v1.py +++ b/flexmeasures/api/v1/tests/test_api_v1.py @@ -4,9 +4,6 @@ import isodate import pandas as pd import pytest -from iso8601 import parse_date -from numpy import repeat - from flexmeasures.api.common.responses import ( invalid_domain, @@ -24,7 +21,6 @@ verify_power_in_db, ) from flexmeasures.data.auth_setup import UNAUTH_ERROR_STATUS -from flexmeasures.api.v1.tests.utils import count_connections_in_post_message from flexmeasures.data.models.assets import Asset @@ -250,94 +246,7 @@ def test_get_meter_data(db, app, client, message): assert get_meter_data_response.json["values"] == [(100.0 + i) for i in range(6)] -@pytest.mark.parametrize( - "post_message", - [ - message_for_post_meter_data(), - message_for_post_meter_data(single_connection=True), - message_for_post_meter_data(single_connection_group=True), - ], -) -@pytest.mark.parametrize( - "get_message", - [ - message_for_get_meter_data(), - message_for_get_meter_data(single_connection=False), - message_for_get_meter_data(resolution="PT30M"), - ], -) -def test_post_and_get_meter_data(db, app, client, post_message, get_message): 
- """ - Tries to post meter data as a logged-in test user with the MDC role, which should succeed. - There should be some ForecastingJobs waiting now. - Then tries to get meter data, which should succeed, and should return the same meter data as was posted, - or a downsampled version, if that was requested. - """ - - # post meter data - auth_token = get_auth_token(client, "test_prosumer@seita.nl", "testtest") - post_meter_data_response = client.post( - url_for("flexmeasures_api_v1.post_meter_data"), - json=message_replace_name_with_ea(post_message), - headers={"Authorization": auth_token}, - ) - print("Server responded with:\n%s" % post_meter_data_response.json) - assert post_meter_data_response.status_code == 200 - assert post_meter_data_response.json["type"] == "PostMeterDataResponse" - - # look for Forecasting jobs - expected_connections = count_connections_in_post_message(post_message) - assert ( - len(app.queues["forecasting"]) == 4 * expected_connections - ) # four horizons times the number of assets - horizons = repeat( - [ - timedelta(hours=1), - timedelta(hours=6), - timedelta(hours=24), - timedelta(hours=48), - ], - expected_connections, - ) - jobs = sorted(app.queues["forecasting"].jobs, key=lambda x: x.kwargs["horizon"]) - for job, horizon in zip(jobs, horizons): - assert job.kwargs["horizon"] == horizon - assert job.kwargs["start"] == parse_date(post_message["start"]) + horizon - for asset_name in ("CS 1", "CS 2", "CS 3"): - if asset_name in str(post_message): - asset = Asset.query.filter_by(name=asset_name).one_or_none() - assert asset.id in [job.kwargs["asset_id"] for job in jobs] - - # get meter data - get_meter_data_response = client.get( - url_for("flexmeasures_api_v1.get_meter_data"), - query_string=message_replace_name_with_ea(get_message), - headers={"Authorization": auth_token}, - ) - print("Server responded with:\n%s" % get_meter_data_response.json) - assert get_meter_data_response.status_code == 200 - assert 
get_meter_data_response.json["type"] == "GetMeterDataResponse" - if "groups" in post_message: - posted_values = post_message["groups"][0]["values"] - else: - posted_values = post_message["values"] - if "groups" in get_meter_data_response.json: - gotten_values = get_meter_data_response.json["groups"][0]["values"] - else: - gotten_values = get_meter_data_response.json["values"] - - if "resolution" not in get_message or get_message["resolution"] == "": - assert gotten_values == posted_values - else: - # We used a target resolution of 30 minutes, so double of 15 minutes. - # Six values went in, three come out. - if posted_values[1] > 0: # see utils.py:message_for_post_meter_data - assert gotten_values == [306.66, -0.0, 306.66] - else: - assert gotten_values == [153.33, 0, 306.66] - - -def test_post_meter_data_to_different_resolutions(db, app, client): +def test_post_meter_data_to_different_resolutions(app, client): """ Tries to post meter data to assets with different event_resolutions, which is not accepted. 
""" diff --git a/flexmeasures/api/v1/tests/test_api_v1_fresh_db.py b/flexmeasures/api/v1/tests/test_api_v1_fresh_db.py new file mode 100644 index 000000000..e3b1c517d --- /dev/null +++ b/flexmeasures/api/v1/tests/test_api_v1_fresh_db.py @@ -0,0 +1,104 @@ +from datetime import timedelta + +import pytest +from flask import url_for +from iso8601 import parse_date +from numpy import repeat + +from flexmeasures.api.common.utils.api_utils import message_replace_name_with_ea +from flexmeasures.api.tests.utils import get_auth_token +from flexmeasures.api.v1.tests.utils import ( + message_for_post_meter_data, + message_for_get_meter_data, + count_connections_in_post_message, +) +from flexmeasures.data.models.assets import Asset + + +@pytest.mark.parametrize( + "post_message", + [ + message_for_post_meter_data(), + message_for_post_meter_data(single_connection=True), + message_for_post_meter_data(single_connection_group=True), + ], +) +@pytest.mark.parametrize( + "get_message", + [ + message_for_get_meter_data(), + message_for_get_meter_data(single_connection=False), + message_for_get_meter_data(resolution="PT30M"), + ], +) +def test_post_and_get_meter_data( + setup_fresh_api_test_data, app, clean_redis, client, post_message, get_message +): + """ + Tries to post meter data as a logged-in test user with the MDC role, which should succeed. + There should be some ForecastingJobs waiting now. + Then tries to get meter data, which should succeed, and should return the same meter data as was posted, + or a downsampled version, if that was requested. 
+ """ + + # post meter data + auth_token = get_auth_token(client, "test_prosumer@seita.nl", "testtest") + post_meter_data_response = client.post( + url_for("flexmeasures_api_v1.post_meter_data"), + json=message_replace_name_with_ea(post_message), + headers={"Authorization": auth_token}, + ) + print("Server responded with:\n%s" % post_meter_data_response.json) + assert post_meter_data_response.status_code == 200 + assert post_meter_data_response.json["type"] == "PostMeterDataResponse" + + # look for Forecasting jobs + expected_connections = count_connections_in_post_message(post_message) + assert ( + len(app.queues["forecasting"]) == 4 * expected_connections + ) # four horizons times the number of assets + horizons = repeat( + [ + timedelta(hours=1), + timedelta(hours=6), + timedelta(hours=24), + timedelta(hours=48), + ], + expected_connections, + ) + jobs = sorted(app.queues["forecasting"].jobs, key=lambda x: x.kwargs["horizon"]) + for job, horizon in zip(jobs, horizons): + assert job.kwargs["horizon"] == horizon + assert job.kwargs["start"] == parse_date(post_message["start"]) + horizon + for asset_name in ("CS 1", "CS 2", "CS 3"): + if asset_name in str(post_message): + asset = Asset.query.filter_by(name=asset_name).one_or_none() + assert asset.id in [job.kwargs["asset_id"] for job in jobs] + + # get meter data + get_meter_data_response = client.get( + url_for("flexmeasures_api_v1.get_meter_data"), + query_string=message_replace_name_with_ea(get_message), + headers={"Authorization": auth_token}, + ) + print("Server responded with:\n%s" % get_meter_data_response.json) + assert get_meter_data_response.status_code == 200 + assert get_meter_data_response.json["type"] == "GetMeterDataResponse" + if "groups" in post_message: + posted_values = post_message["groups"][0]["values"] + else: + posted_values = post_message["values"] + if "groups" in get_meter_data_response.json: + gotten_values = get_meter_data_response.json["groups"][0]["values"] + else: + gotten_values = 
get_meter_data_response.json["values"] + + if "resolution" not in get_message or get_message["resolution"] == "": + assert gotten_values == posted_values + else: + # We used a target resolution of 30 minutes, so double of 15 minutes. + # Six values went in, three come out. + if posted_values[1] > 0: # see utils.py:message_for_post_meter_data + assert gotten_values == [306.66, -0.0, 306.66] + else: + assert gotten_values == [153.33, 0, 306.66] diff --git a/flexmeasures/api/v1_1/implementations.py b/flexmeasures/api/v1_1/implementations.py index 349adc0b9..a99b26811 100644 --- a/flexmeasures/api/v1_1/implementations.py +++ b/flexmeasures/api/v1_1/implementations.py @@ -14,10 +14,10 @@ invalid_unit, unrecognized_market, ResponseTuple, + invalid_horizon, ) from flexmeasures.api.common.utils.api_utils import ( save_to_db, - get_or_create_user_data_source, ) from flexmeasures.api.common.utils.validators import ( type_accepted, @@ -38,6 +38,7 @@ create_connection_and_value_groups, ) from flexmeasures.api.common.utils.api_utils import get_weather_sensor_by +from flexmeasures.data.models.data_sources import get_or_create_source from flexmeasures.data.models.markets import Market, Price from flexmeasures.data.models.weather import Weather from flexmeasures.data.services.resources import get_assets @@ -63,7 +64,7 @@ def get_connection_response(): @type_accepted("PostPriceDataRequest") @units_accepted("price", "EUR/MWh", "KRW/kWh") @assets_required("market") -@optional_horizon_accepted(accept_repeating_interval=True) +@optional_horizon_accepted(infer_missing=True, accept_repeating_interval=True) @values_required @period_required @post_data_checked_for_required_resolution("market", "fm0") @@ -80,7 +81,7 @@ def post_price_data_response( current_app.logger.info("POSTING PRICE DATA") - data_source = get_or_create_user_data_source(current_user) + data_source = get_or_create_source(current_user) prices = [] forecasting_jobs = [] for market_group, value_group in 
zip(generic_asset_name_groups, value_groups): @@ -138,7 +139,7 @@ def post_price_data_response( @type_accepted("PostWeatherDataRequest") @unit_required @assets_required("sensor") -@optional_horizon_accepted(accept_repeating_interval=True) +@optional_horizon_accepted(infer_missing=True, accept_repeating_interval=True) @values_required @period_required @post_data_checked_for_required_resolution("weather_sensor", "fm0") @@ -155,7 +156,7 @@ def post_weather_data_response( # noqa: C901 current_app.logger.info("POSTING WEATHER DATA") - data_source = get_or_create_user_data_source(current_user) + data_source = get_or_create_source(current_user) weather_measurements = [] forecasting_jobs = [] for sensor_group, value_group in zip(generic_asset_name_groups, value_groups): @@ -215,7 +216,6 @@ def post_weather_data_response( # noqa: C901 start, start + duration, resolution=duration / len(value_group), - horizons=[horizon], enqueue=False, # will enqueue later, only if we successfully saved weather measurements ) ) @@ -269,7 +269,9 @@ def get_prognosis_response( @units_accepted("power", "MW") @assets_required("connection") @values_required -@optional_horizon_accepted(ex_post=False, accept_repeating_interval=True) +@optional_horizon_accepted( + ex_post=False, infer_missing=False, accept_repeating_interval=True +) @period_required @post_data_checked_for_required_resolution("connection", "fm0") @as_json @@ -287,6 +289,11 @@ def post_prognosis_response( Store the new power values for each asset. """ + if horizon is None: + # API versions before v2.0 cannot handle a missing horizon, because there is no prior + extra_info = "Please specify the horizon field using an ISO 8601 duration (such as 'PT24H')." 
+ return invalid_horizon(extra_info) + return create_connection_and_value_groups( unit, generic_asset_name_groups, value_groups, horizon, rolling, start, duration ) diff --git a/flexmeasures/api/v1_1/tests/conftest.py b/flexmeasures/api/v1_1/tests/conftest.py index ea371fa83..6fa73f7a2 100644 --- a/flexmeasures/api/v1_1/tests/conftest.py +++ b/flexmeasures/api/v1_1/tests/conftest.py @@ -11,8 +11,8 @@ from flexmeasures.data.services.users import create_user -@pytest.fixture(scope="function", autouse=True) -def setup_api_test_data(db): +@pytest.fixture(scope="module") +def setup_api_test_data(db, setup_roles_users, add_market_prices): """ Set up data for API v1.1 tests. """ @@ -40,7 +40,7 @@ def setup_api_test_data(db): ) # Create 3 test assets for the test_prosumer user - test_prosumer = user_datastore.find_user(email="test_prosumer@seita.nl") + test_prosumer = setup_roles_users["Test Prosumer"] test_asset_type = AssetType(name="test-type") db.session.add(test_asset_type) asset_names = ["CS 1", "CS 2", "CS 3"] @@ -98,3 +98,10 @@ def setup_api_test_data(db): db.session.bulk_save_objects(power_forecasts) print("Done setting up data for API v1.1 tests") + + +@pytest.fixture(scope="function") +def setup_fresh_api_v1_1_test_data( + fresh_db, setup_roles_users_fresh_db, setup_markets_fresh_db +): + return fresh_db diff --git a/flexmeasures/api/v1_1/tests/test_api_v1_1.py b/flexmeasures/api/v1_1/tests/test_api_v1_1.py index f91c26915..62eb083eb 100644 --- a/flexmeasures/api/v1_1/tests/test_api_v1_1.py +++ b/flexmeasures/api/v1_1/tests/test_api_v1_1.py @@ -1,7 +1,6 @@ from flask import url_for import pytest from datetime import timedelta -from isodate import duration_isoformat from iso8601 import parse_date from flexmeasures.api.common.schemas.sensors import SensorField @@ -9,7 +8,6 @@ from flexmeasures.api.common.responses import ( request_processed, invalid_horizon, - unapplicable_resolution, invalid_unit, ) from flexmeasures.api.tests.utils import get_auth_token @@ -21,6 
+19,7 @@ message_for_post_price_data, message_for_post_weather_data, verify_prices_in_db, + get_forecasting_jobs, ) from flexmeasures.data.auth_setup import UNAUTH_ERROR_STATUS @@ -63,7 +62,7 @@ def test_unauthorized_prognosis_request(client): message_for_get_prognosis(invalid_horizon=True), ], ) -def test_invalid_horizon(client, message): +def test_invalid_horizon(setup_api_test_data, client, message): auth_token = get_auth_token(client, "test_prosumer@seita.nl", "testtest") get_prognosis_response = client.get( url_for("flexmeasures_api_v1_1.get_prognosis"), @@ -76,7 +75,7 @@ def test_invalid_horizon(client, message): assert get_prognosis_response.json["status"] == invalid_horizon()[0]["status"] -def test_no_data(client): +def test_no_data(setup_api_test_data, client): auth_token = get_auth_token(client, "test_prosumer@seita.nl", "testtest") get_prognosis_response = client.get( url_for("flexmeasures_api_v1_1.get_prognosis"), @@ -103,7 +102,7 @@ def test_no_data(client): message_for_get_prognosis(rolling_horizon=True, timezone_alternative=True), ], ) -def test_get_prognosis(client, message): +def test_get_prognosis(setup_api_test_data, client, message): auth_token = get_auth_token(client, "test_prosumer@seita.nl", "testtest") get_prognosis_response = client.get( url_for("flexmeasures_api_v1_1.get_prognosis"), @@ -126,7 +125,7 @@ def test_get_prognosis(client, message): @pytest.mark.parametrize("post_message", [message_for_post_price_data()]) -def test_post_price_data(db, app, post_message): +def test_post_price_data(setup_api_test_data, db, app, clean_redis, post_message): """ Try to post price data as a logged-in test user with the Supplier role, which should succeed. 
""" @@ -163,7 +162,7 @@ def test_post_price_data(db, app, post_message): @pytest.mark.parametrize( "post_message", [message_for_post_price_data(invalid_unit=True)] ) -def test_post_price_data_invalid_unit(client, post_message): +def test_post_price_data_invalid_unit(setup_api_test_data, client, post_message): """ Try to post price data with the wrong unit, which should fail. """ @@ -187,49 +186,18 @@ def test_post_price_data_invalid_unit(client, post_message): ) -@pytest.mark.parametrize( - "post_message,status,msg", - [ - ( - message_for_post_price_data( - duration=duration_isoformat(timedelta(minutes=2)) - ), - 400, - unapplicable_resolution()[0]["message"], - ), - (message_for_post_price_data(compress_n=4), 200, "Request has been processed."), - ], -) -def test_post_price_data_unexpected_resolution(db, app, post_message, status, msg): - """ - Try to post price data with an unexpected resolution, - which might be fixed with upsampling or otherwise fail. - """ - with app.test_client() as client: - auth_token = get_auth_token(client, "test_supplier@seita.nl", "testtest") - post_price_data_response = client.post( - url_for("flexmeasures_api_v1_1.post_price_data"), - json=post_message, - headers={"Authorization": auth_token}, - ) - print("Server responded with:\n%s" % post_price_data_response.json) - assert post_price_data_response.json["type"] == "PostPriceDataResponse" - assert post_price_data_response.status_code == status - assert msg in post_price_data_response.json["message"] - if "processed" in msg: - verify_prices_in_db( - post_message, [v for v in post_message["values"] for i in range(4)], db - ) - - @pytest.mark.parametrize( "post_message", [message_for_post_weather_data(), message_for_post_weather_data(temperature=True)], ) -def test_post_weather_data(client, post_message): +def test_post_weather_forecasts( + setup_api_test_data, add_weather_sensors, app, client, post_message +): """ - Try to post wind speed data as a logged-in test user with the Supplier 
role, which should succeed. + Try to post wind speed and temperature forecasts as a logged-in test user with the Supplier role, which should succeed. + As only forecasts are sent, no forecasting jobs are expected. """ + assert len(get_forecasting_jobs("Weather")) == 0 # post weather data auth_token = get_auth_token(client, "test_supplier@seita.nl", "testtest") @@ -242,11 +210,13 @@ def test_post_weather_data(client, post_message): assert post_weather_data_response.status_code == 200 assert post_weather_data_response.json["type"] == "PostWeatherDataResponse" + assert len(get_forecasting_jobs("Weather")) == 0 + @pytest.mark.parametrize( "post_message", [message_for_post_weather_data(invalid_unit=True)] ) -def test_post_weather_data_invalid_unit(client, post_message): +def test_post_weather_forecasts_invalid_unit(setup_api_test_data, client, post_message): """ Try to post wind speed data as a logged-in test user with the Supplier role, but with a wrong unit for wind speed, which should fail. @@ -269,7 +239,9 @@ def test_post_weather_data_invalid_unit(client, post_message): @pytest.mark.parametrize("post_message", [message_for_post_price_data()]) -def test_auto_fix_missing_registration_of_user_as_data_source(client, post_message): +def test_auto_fix_missing_registration_of_user_as_data_source( + setup_api_test_data, client, post_message +): """Try to post price data as a user that has not been properly registered as a data source. The API call should succeed and the user should be automatically registered as a data source. 
""" diff --git a/flexmeasures/api/v1_1/tests/test_api_v1_1_fresh_db.py b/flexmeasures/api/v1_1/tests/test_api_v1_1_fresh_db.py new file mode 100644 index 000000000..c328a81f5 --- /dev/null +++ b/flexmeasures/api/v1_1/tests/test_api_v1_1_fresh_db.py @@ -0,0 +1,89 @@ +from datetime import timedelta +from iso8601 import parse_date + +import pytest +from flask import url_for +from isodate import duration_isoformat + +from flexmeasures.utils.time_utils import forecast_horizons_for +from flexmeasures.api.common.responses import unapplicable_resolution +from flexmeasures.api.tests.utils import get_auth_token +from flexmeasures.api.v1_1.tests.utils import ( + message_for_post_price_data, + message_for_post_weather_data, + verify_prices_in_db, + get_forecasting_jobs, +) + + +@pytest.mark.parametrize( + "post_message, status, msg", + [ + ( + message_for_post_price_data( + duration=duration_isoformat(timedelta(minutes=2)) + ), + 400, + unapplicable_resolution()[0]["message"], + ), + (message_for_post_price_data(compress_n=4), 200, "Request has been processed."), + ], +) +def test_post_price_data_unexpected_resolution( + setup_fresh_api_v1_1_test_data, app, client, post_message, status, msg +): + """ + Try to post price data with an unexpected resolution, + which might be fixed with upsampling or otherwise fail. 
+ """ + db = setup_fresh_api_v1_1_test_data + auth_token = get_auth_token(client, "test_supplier@seita.nl", "testtest") + post_price_data_response = client.post( + url_for("flexmeasures_api_v1_1.post_price_data"), + json=post_message, + headers={"Authorization": auth_token}, + ) + print("Server responded with:\n%s" % post_price_data_response.json) + assert post_price_data_response.json["type"] == "PostPriceDataResponse" + assert post_price_data_response.status_code == status + assert msg in post_price_data_response.json["message"] + if "processed" in msg: + verify_prices_in_db( + post_message, [v for v in post_message["values"] for i in range(4)], db + ) + + +@pytest.mark.parametrize( + "post_message", + [message_for_post_weather_data(as_forecasts=False)], +) +def test_post_weather_data( + setup_fresh_api_v1_1_test_data, + add_weather_sensors_fresh_db, + app, + client, + post_message, +): + """ + Try to post wind speed data as a logged-in test user, which should lead to forecasting jobs. 
+ """ + auth_token = get_auth_token(client, "test_supplier@seita.nl", "testtest") + post_weather_data_response = client.post( + url_for("flexmeasures_api_v1_1.post_weather_data"), + json=post_message, + headers={"Authorization": auth_token}, + ) + print("Server responded with:\n%s" % post_weather_data_response.json) + assert post_weather_data_response.status_code == 200 + assert post_weather_data_response.json["type"] == "PostWeatherDataResponse" + + forecast_horizons = forecast_horizons_for(timedelta(minutes=5)) + jobs = get_forecasting_jobs("Weather") + for job, horizon in zip( + sorted(jobs, key=lambda x: x.kwargs["horizon"]), forecast_horizons + ): + # check if jobs have expected horizons + assert job.kwargs["horizon"] == horizon + # check if jobs' start time (the time to be forecasted) + # is the weather observation plus the horizon + assert job.kwargs["start"] == parse_date(post_message["start"]) + horizon diff --git a/flexmeasures/api/v1_1/tests/utils.py b/flexmeasures/api/v1_1/tests/utils.py index 9c17ba0cb..ab47b7cc9 100644 --- a/flexmeasures/api/v1_1/tests/utils.py +++ b/flexmeasures/api/v1_1/tests/utils.py @@ -1,10 +1,12 @@ """Useful test messages""" -from typing import Optional, Dict, Any, Union +from typing import Optional, Dict, Any, List, Union from datetime import timedelta from isodate import duration_isoformat, parse_duration, parse_datetime import pandas as pd from numpy import tile +from rq.job import Job +from flask import current_app from flexmeasures.api.common.schemas.sensors import SensorField from flexmeasures.data.models.markets import Market, Price @@ -118,7 +120,7 @@ def message_for_post_price_data( def message_for_post_weather_data( - invalid_unit: bool = False, temperature: bool = False + invalid_unit: bool = False, temperature: bool = False, as_forecasts: bool = True ) -> dict: message: Dict[str, Any] = { "type": "PostWeatherDataRequest", @@ -141,6 +143,8 @@ def message_for_post_weather_data( message["unit"] = "°C" # Right unit for 
temperature elif invalid_unit: message["unit"] = "°C" # Wrong unit for wind speed + if not as_forecasts: + message["horizon"] = "PT0H" # weather measurements return message @@ -164,3 +168,11 @@ def verify_prices_in_db(post_message, values, db, swapped_sign: bool = False): if swapped_sign: df["value"] = -df["value"] assert df.value.tolist() == values + + +def get_forecasting_jobs(timed_value_type: str) -> List[Job]: + return [ + job + for job in current_app.queues["forecasting"].jobs + if job.kwargs["timed_value_type"] == timed_value_type + ] diff --git a/flexmeasures/api/v1_2/tests/conftest.py b/flexmeasures/api/v1_2/tests/conftest.py index 7a0f45011..947a82df5 100644 --- a/flexmeasures/api/v1_2/tests/conftest.py +++ b/flexmeasures/api/v1_2/tests/conftest.py @@ -1,19 +1,9 @@ -from flask_security import SQLAlchemySessionUserDatastore import pytest -@pytest.fixture(scope="function", autouse=True) -def setup_api_test_data(db): +@pytest.fixture(scope="module", autouse=True) +def setup_api_test_data(db, add_market_prices, add_battery_assets): """ Set up data for API v1.2 tests. 
""" print("Setting up data for API v1.2 tests on %s" % db.engine) - - from flexmeasures.data.models.user import User, Role - from flexmeasures.data.models.assets import Asset - - user_datastore = SQLAlchemySessionUserDatastore(db.session, User, Role) - test_prosumer = user_datastore.find_user(email="test_prosumer@seita.nl") - - battery = Asset.query.filter(Asset.name == "Test battery").one_or_none() - battery.owner = test_prosumer diff --git a/flexmeasures/api/v1_3/tests/conftest.py b/flexmeasures/api/v1_3/tests/conftest.py index a0c949463..c19263788 100644 --- a/flexmeasures/api/v1_3/tests/conftest.py +++ b/flexmeasures/api/v1_3/tests/conftest.py @@ -1,24 +1,17 @@ -from flask_security import SQLAlchemySessionUserDatastore import pytest -@pytest.fixture(scope="function", autouse=True) -def setup_api_test_data(db): +@pytest.fixture(scope="module", autouse=True) +def setup_api_test_data(db, add_market_prices, add_battery_assets): """ Set up data for API v1.3 tests. """ print("Setting up data for API v1.3 tests on %s" % db.engine) - from flexmeasures.data.models.user import User, Role - from flexmeasures.data.models.assets import Asset - user_datastore = SQLAlchemySessionUserDatastore(db.session, User, Role) - test_prosumer = user_datastore.find_user(email="test_prosumer@seita.nl") - - battery = Asset.query.filter(Asset.name == "Test battery").one_or_none() - battery.owner = test_prosumer - - charging_station = Asset.query.filter( - Asset.name == "Test charging station" - ).one_or_none() - charging_station.owner = test_prosumer +@pytest.fixture(scope="function") +def setup_fresh_api_test_data(fresh_db, add_battery_assets_fresh_db): + """ + Set up data for API v1.3 tests. 
+ """ + pass diff --git a/flexmeasures/api/v1_3/tests/test_api_v1_3.py b/flexmeasures/api/v1_3/tests/test_api_v1_3.py index f0b64193e..ac94590d3 100644 --- a/flexmeasures/api/v1_3/tests/test_api_v1_3.py +++ b/flexmeasures/api/v1_3/tests/test_api_v1_3.py @@ -6,7 +6,7 @@ import pandas as pd from rq.job import Job -from flexmeasures.api.common.responses import unrecognized_event, unknown_schedule +from flexmeasures.api.common.responses import unrecognized_event from flexmeasures.api.tests.utils import get_auth_token from flexmeasures.api.v1_3.tests.utils import ( message_for_get_device_message, @@ -45,7 +45,9 @@ def test_get_device_message_wrong_event_id(client, message): (message_for_post_udi_event(targets=True), "Test charging station"), ], ) -def test_post_udi_event_and_get_device_message(app, message, asset_name): +def test_post_udi_event_and_get_device_message( + app, add_charging_station_assets, message, asset_name +): auth_token = None with app.test_client() as client: asset = Asset.query.filter(Asset.name == asset_name).one_or_none() @@ -198,68 +200,3 @@ def test_post_udi_event_and_get_device_message(app, message, asset_name): ).is_failed is True ) - - -@pytest.mark.parametrize("message", [message_for_post_udi_event(unknown_prices=True)]) -def test_post_udi_event_and_get_device_message_with_unknown_prices(app, message): - auth_token = None - with app.test_client() as client: - asset = Asset.query.filter(Asset.name == "Test battery").one_or_none() - asset_id = asset.id - asset_owner_id = asset.owner_id - message["event"] = message["event"] % (asset.owner_id, asset.id) - auth_token = get_auth_token(client, "test_prosumer@seita.nl", "testtest") - post_udi_event_response = client.post( - url_for("flexmeasures_api_v1_3.post_udi_event"), - json=message, - headers={"Authorization": auth_token}, - ) - print("Server responded with:\n%s" % post_udi_event_response.json) - assert post_udi_event_response.status_code == 200 - assert post_udi_event_response.json["type"] == 
"PostUdiEventResponse" - - # look for scheduling jobs in queue - assert ( - len(app.queues["scheduling"]) == 1 - ) # only 1 schedule should be made for 1 asset - job = app.queues["scheduling"].jobs[0] - assert job.kwargs["asset_id"] == asset_id - assert job.kwargs["start"] == parse_datetime(message["datetime"]) - assert job.id == message["event"] - assert ( - Job.fetch(message["event"], connection=app.queues["scheduling"].connection) - == job - ) - - # process the scheduling queue - work_on_rq(app.queues["scheduling"], exc_handler=handle_scheduling_exception) - processed_job = Job.fetch( - message["event"], connection=app.queues["scheduling"].connection - ) - assert processed_job.is_failed is True - - # check results are not in the database - scheduler_source = DataSource.query.filter_by( - name="Seita", type="scheduling script" - ).one_or_none() - assert ( - scheduler_source is None - ) # Make sure the scheduler data source is still not there - - # try to retrieve the schedule through the getDeviceMessage api endpoint - message = message_for_get_device_message() - message["event"] = message["event"] % (asset_owner_id, asset_id) - auth_token = get_auth_token(client, "test_prosumer@seita.nl", "testtest") - get_device_message_response = client.get( - url_for("flexmeasures_api_v1_3.get_device_message"), - query_string=message, - headers={"content-type": "application/json", "Authorization": auth_token}, - ) - print("Server responded with:\n%s" % get_device_message_response.json) - assert get_device_message_response.status_code == 400 - assert get_device_message_response.json["type"] == "GetDeviceMessageResponse" - assert ( - get_device_message_response.json["status"] - == unknown_schedule()[0]["status"] - ) - assert "prices unknown" in get_device_message_response.json["message"].lower() diff --git a/flexmeasures/api/v1_3/tests/test_api_v1_3_fresh_db.py b/flexmeasures/api/v1_3/tests/test_api_v1_3_fresh_db.py new file mode 100644 index 000000000..68c901d76 --- /dev/null 
+++ b/flexmeasures/api/v1_3/tests/test_api_v1_3_fresh_db.py @@ -0,0 +1,82 @@ +import pytest +from flask import url_for +from isodate import parse_datetime +from rq.job import Job + +from flexmeasures.api.common.responses import unknown_schedule +from flexmeasures.api.tests.utils import get_auth_token +from flexmeasures.api.v1_3.tests.utils import ( + message_for_post_udi_event, + message_for_get_device_message, +) +from flexmeasures.data.models.assets import Asset +from flexmeasures.data.models.data_sources import DataSource +from flexmeasures.data.services.scheduling import handle_scheduling_exception +from flexmeasures.data.tests.utils import work_on_rq + + +@pytest.mark.parametrize("message", [message_for_post_udi_event(unknown_prices=True)]) +def test_post_udi_event_and_get_device_message_with_unknown_prices( + setup_fresh_api_test_data, clean_redis, app, message +): + auth_token = None + with app.test_client() as client: + asset = Asset.query.filter(Asset.name == "Test battery").one_or_none() + asset_id = asset.id + asset_owner_id = asset.owner_id + message["event"] = message["event"] % (asset.owner_id, asset.id) + auth_token = get_auth_token(client, "test_prosumer@seita.nl", "testtest") + post_udi_event_response = client.post( + url_for("flexmeasures_api_v1_3.post_udi_event"), + json=message, + headers={"Authorization": auth_token}, + ) + print("Server responded with:\n%s" % post_udi_event_response.json) + assert post_udi_event_response.status_code == 200 + assert post_udi_event_response.json["type"] == "PostUdiEventResponse" + + # look for scheduling jobs in queue + assert ( + len(app.queues["scheduling"]) == 1 + ) # only 1 schedule should be made for 1 asset + job = app.queues["scheduling"].jobs[0] + assert job.kwargs["asset_id"] == asset_id + assert job.kwargs["start"] == parse_datetime(message["datetime"]) + assert job.id == message["event"] + assert ( + Job.fetch(message["event"], connection=app.queues["scheduling"].connection) + == job + ) + + # process 
the scheduling queue + work_on_rq(app.queues["scheduling"], exc_handler=handle_scheduling_exception) + processed_job = Job.fetch( + message["event"], connection=app.queues["scheduling"].connection + ) + assert processed_job.is_failed is True + + # check results are not in the database + scheduler_source = DataSource.query.filter_by( + name="Seita", type="scheduling script" + ).one_or_none() + assert ( + scheduler_source is None + ) # Make sure the scheduler data source is still not there + + # try to retrieve the schedule through the getDeviceMessage api endpoint + message = message_for_get_device_message() + message["event"] = message["event"] % (asset_owner_id, asset_id) + auth_token = get_auth_token(client, "test_prosumer@seita.nl", "testtest") + get_device_message_response = client.get( + url_for("flexmeasures_api_v1_3.get_device_message"), + query_string=message, + headers={"content-type": "application/json", "Authorization": auth_token}, + ) + print("Server responded with:\n%s" % get_device_message_response.json) + assert get_device_message_response.status_code == 400 + assert get_device_message_response.json["type"] == "GetDeviceMessageResponse" + assert ( + get_device_message_response.json["status"] + == unknown_schedule()[0]["status"] + ) + assert "prices unknown" in get_device_message_response.json["message"].lower() diff --git a/flexmeasures/api/v2_0/implementations/assets.py b/flexmeasures/api/v2_0/implementations/assets.py index d4fb8f79e..e23e4d2e2 100644 --- a/flexmeasures/api/v2_0/implementations/assets.py +++ b/flexmeasures/api/v2_0/implementations/assets.py @@ -9,7 +9,8 @@ from marshmallow import fields from flexmeasures.data.services.resources import get_assets -from flexmeasures.data.models.assets import Asset as AssetModel, AssetSchema +from flexmeasures.data.models.assets import Asset as AssetModel +from flexmeasures.data.schemas.assets import AssetSchema from flexmeasures.data.auth_setup import unauthorized_handler from 
flexmeasures.data.config import db from flexmeasures.api.common.responses import required_info_missing @@ -66,7 +67,7 @@ def load_asset(admins_only: bool = False): should be allowed. @app.route('/asset/') - @check_asset + @load_asset def get_asset(asset): return asset_schema.dump(asset), 200 diff --git a/flexmeasures/api/v2_0/implementations/sensors.py b/flexmeasures/api/v2_0/implementations/sensors.py index 715b300b3..25215908e 100644 --- a/flexmeasures/api/v2_0/implementations/sensors.py +++ b/flexmeasures/api/v2_0/implementations/sensors.py @@ -15,7 +15,6 @@ ResponseTuple, ) from flexmeasures.api.common.utils.api_utils import ( - get_or_create_user_data_source, get_weather_sensor_by, save_to_db, determine_belief_timing, @@ -33,6 +32,7 @@ values_required, ) from flexmeasures.data.models.assets import Asset, Power +from flexmeasures.data.models.data_sources import get_or_create_source from flexmeasures.data.models.markets import Market, Price from flexmeasures.data.models.weather import Weather from flexmeasures.data.services.forecasting import create_forecasting_jobs @@ -46,8 +46,8 @@ @type_accepted("PostPriceDataRequest") @units_accepted("price", "EUR/MWh", "KRW/kWh") @assets_required("market") -@optional_horizon_accepted() -@optional_prior_accepted() +@optional_horizon_accepted(infer_missing=False, infer_missing_play=True) +@optional_prior_accepted(infer_missing=True, infer_missing_play=False) @values_required @period_required @post_data_checked_for_required_resolution("market", "fm1") @@ -69,7 +69,7 @@ def post_price_data_response( # noqa C901 current_app.logger.info("POSTING PRICE DATA") - data_source = get_or_create_user_data_source(current_user) + data_source = get_or_create_source(current_user) prices = [] forecasting_jobs = [] for market_group, event_values in zip(generic_asset_name_groups, value_groups): @@ -130,8 +130,8 @@ def post_price_data_response( # noqa C901 @type_accepted("PostWeatherDataRequest") @unit_required @assets_required("weather_sensor") 
-@optional_horizon_accepted() -@optional_prior_accepted() +@optional_horizon_accepted(infer_missing=False, infer_missing_play=True) +@optional_prior_accepted(infer_missing=True, infer_missing_play=False) @values_required @period_required @post_data_checked_for_required_resolution("weather_sensor", "fm1") @@ -152,7 +152,7 @@ def post_weather_data_response( # noqa: C901 current_app.logger.info("POSTING WEATHER DATA") - data_source = get_or_create_user_data_source(current_user) + data_source = get_or_create_source(current_user) weather_measurements = [] forecasting_jobs = [] for sensor_group, event_values in zip(generic_asset_name_groups, value_groups): @@ -222,8 +222,8 @@ def post_weather_data_response( # noqa: C901 @units_accepted("power", "MW") @assets_required("connection") @values_required -@optional_horizon_accepted(ex_post=True) -@optional_prior_accepted(ex_post=True) +@optional_horizon_accepted(ex_post=True, infer_missing=False, infer_missing_play=True) +@optional_prior_accepted(ex_post=True, infer_missing=True, infer_missing_play=False) @period_required @post_data_checked_for_required_resolution("connection", "fm1") @as_json @@ -254,8 +254,8 @@ def post_meter_data_response( @units_accepted("power", "MW") @assets_required("connection") @values_required -@optional_horizon_accepted(ex_post=False) -@optional_prior_accepted(ex_post=False) +@optional_horizon_accepted(ex_post=False, infer_missing=False, infer_missing_play=False) +@optional_prior_accepted(ex_post=False, infer_missing=True, infer_missing_play=False) @period_required @post_data_checked_for_required_resolution("connection", "fm1") @as_json @@ -301,7 +301,7 @@ def post_power_data( current_app.logger.info("POSTING POWER DATA") - data_source = get_or_create_user_data_source(current_user) + data_source = get_or_create_source(current_user) user_assets = get_assets() if not user_assets: current_app.logger.info("User doesn't seem to have any assets") diff --git a/flexmeasures/api/v2_0/implementations/users.py 
b/flexmeasures/api/v2_0/implementations/users.py index ddbd7ca78..ac7904629 100644 --- a/flexmeasures/api/v2_0/implementations/users.py +++ b/flexmeasures/api/v2_0/implementations/users.py @@ -1,16 +1,15 @@ from functools import wraps from flask import current_app, abort -from marshmallow import ValidationError, validate, validates, fields +from marshmallow import fields from sqlalchemy.exc import IntegrityError from webargs.flaskparser import use_args from flask_security import current_user from flask_security.recoverable import send_reset_password_instructions from flask_json import as_json -from pytz import all_timezones -from flexmeasures.data import ma from flexmeasures.data.models.user import User as UserModel +from flexmeasures.data.schemas.users import UserSchema from flexmeasures.data.services.users import ( get_users, set_random_password, @@ -26,28 +25,6 @@ Both POST (to create) and DELETE are not accessible via the API, but as CLI functions. """ - -class UserSchema(ma.SQLAlchemySchema): - """ - This schema lists fields we support through this API (e.g. no password). - """ - - class Meta: - model = UserModel - - @validates("timezone") - def validate_timezone(self, timezone): - if timezone not in all_timezones: - raise ValidationError(f"Timezone {timezone} doesn't exist.") - - id = ma.auto_field() - email = ma.auto_field(required=True, validate=validate.Email) - username = ma.auto_field(required=True) - active = ma.auto_field() - timezone = ma.auto_field() - flexmeasures_roles = ma.auto_field() - - user_schema = UserSchema() users_schema = UserSchema(many=True) diff --git a/flexmeasures/api/v2_0/routes.py b/flexmeasures/api/v2_0/routes.py index f3e4c3bc1..b32b7d7e1 100644 --- a/flexmeasures/api/v2_0/routes.py +++ b/flexmeasures/api/v2_0/routes.py @@ -477,7 +477,7 @@ def reset_user_password(id: int): .. :quickref: User; Password reset Reset the user's password, and send them instructions on how to reset the password. 
- This endoint is useful from a security standpoint, in case of worries the password might be compromised. + This endpoint is useful from a security standpoint, in case of worries the password might be compromised. It sets the current password to something random, invalidates cookies and auth tokens, and also sends an email for resetting the password to the user. @@ -761,7 +761,7 @@ def post_meter_data(): :status 403: INVALID_SENDER :status 405: INVALID_METHOD """ - return v2_0_implementations.post_meter_data_response() + return v2_0_implementations.sensors.post_meter_data_response() @flexmeasures_api_v2_0.route("/postPrognosis", methods=["POST"]) diff --git a/flexmeasures/api/v2_0/tests/conftest.py b/flexmeasures/api/v2_0/tests/conftest.py index 55034397b..783e330f6 100644 --- a/flexmeasures/api/v2_0/tests/conftest.py +++ b/flexmeasures/api/v2_0/tests/conftest.py @@ -3,27 +3,33 @@ import pytest -@pytest.fixture(scope="function", autouse=True) -def setup_api_test_data(db): +@pytest.fixture(scope="module", autouse=True) +def setup_api_test_data(db, setup_roles_users, add_market_prices, add_battery_assets): """ Set up data for API v2.0 tests. 
""" print("Setting up data for API v2.0 tests on %s" % db.engine) from flexmeasures.data.models.user import User, Role - from flexmeasures.data.models.assets import Asset user_datastore = SQLAlchemySessionUserDatastore(db.session, User, Role) - test_supplier = user_datastore.find_user(email="test_supplier@seita.nl") - battery = Asset.query.filter(Asset.name == "Test battery").one_or_none() - battery.owner = test_supplier + battery = add_battery_assets["Test battery"] + battery.owner = setup_roles_users["Test Supplier"] test_prosumer = user_datastore.find_user(email="test_prosumer@seita.nl") admin_role = user_datastore.create_role(name="admin", description="God powers") user_datastore.add_role_to_user(test_prosumer, admin_role) - # an inactive user + +@pytest.fixture(scope="module") +def setup_inactive_user(db, setup_roles_users): + """ + Set up one inactive user. + """ + from flexmeasures.data.models.user import User, Role + + user_datastore = SQLAlchemySessionUserDatastore(db.session, User, Role) user_datastore.create_user( username="inactive test user", email="inactive@seita.nl", diff --git a/flexmeasures/api/v2_0/tests/test_api_v2_0_assets.py b/flexmeasures/api/v2_0/tests/test_api_v2_0_assets.py index 201baddd3..edde65c43 100644 --- a/flexmeasures/api/v2_0/tests/test_api_v2_0_assets.py +++ b/flexmeasures/api/v2_0/tests/test_api_v2_0_assets.py @@ -1,6 +1,8 @@ from flask import url_for import pytest +import pandas as pd + from flexmeasures.data.models.assets import Asset from flexmeasures.data.services.users import find_user_by_email from flexmeasures.api.tests.utils import get_auth_token, UserContext @@ -68,8 +70,8 @@ def test_get_asset_nonadmin_access(client): assert "not found" in asset_response.json["message"] -@pytest.mark.parametrize("use_owner_id,num_assets", [(False, 7), (True, 1)]) -def test_get_assets(client, use_owner_id, num_assets): +@pytest.mark.parametrize("use_owner_id, num_assets", [(False, 7), (True, 1)]) +def test_get_assets(client, 
add_charging_station_assets, use_owner_id, num_assets): """ Get assets, either for all users (prosumer is admin, so is allowed to see all 7 assets) or for a unique one (supplier user has one asset ― "Test battery"). @@ -95,7 +97,9 @@ def test_get_assets(client, use_owner_id, num_assets): if asset["name"] == "Test battery": battery = asset assert battery - assert battery["soc_datetime"] == "2015-01-01T00:00:00+00:00" + assert pd.Timestamp(battery["soc_datetime"]) == pd.Timestamp( + "2015-01-01T00:00:00+00:00" + ) assert battery["owner_id"] == test_supplier_id assert battery["capacity_in_mw"] == 2 diff --git a/flexmeasures/api/v2_0/tests/test_api_v2_0_sensors.py b/flexmeasures/api/v2_0/tests/test_api_v2_0_sensors.py index 9fbc5071a..c3a2a70f6 100644 --- a/flexmeasures/api/v2_0/tests/test_api_v2_0_sensors.py +++ b/flexmeasures/api/v2_0/tests/test_api_v2_0_sensors.py @@ -1,60 +1,13 @@ from flask import url_for import pytest -from datetime import timedelta -from iso8601 import parse_date -from flexmeasures.api.common.schemas.sensors import SensorField from flexmeasures.api.tests.utils import get_auth_token from flexmeasures.api.v2_0.tests.utils import ( - message_for_post_price_data, message_for_post_prognosis, verify_sensor_data_in_db, ) -@pytest.mark.parametrize( - "post_message", - [ - message_for_post_price_data(), - message_for_post_price_data(prior_instead_of_horizon=True), - ], -) -def test_post_price_data_2_0(db, app, post_message): - """ - Try to post price data as a logged-in test user with the Supplier role, which should succeed. - """ - # call with client whose context ends, so that we can test for, - # after-effects in the database after teardown committed. 
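Reviewer note: the `soc_datetime` assertion above switches from string equality to comparing parsed timestamps, since one instant has many valid ISO 8601 spellings. The same point with only the standard library (the test itself uses `pd.Timestamp`):

```python
from datetime import datetime

s_utc = "2015-01-01T00:00:00+00:00"
s_cet = "2015-01-01T01:00:00+01:00"  # the same instant, written with a +01:00 offset

assert s_utc != s_cet  # naive string comparison says "not equal"
assert datetime.fromisoformat(s_utc) == datetime.fromisoformat(s_cet)  # parsed instants are equal
```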
- with app.test_client() as client: - # post price data - auth_token = get_auth_token(client, "test_supplier@seita.nl", "testtest") - post_price_data_response = client.post( - url_for("flexmeasures_api_v2_0.post_price_data"), - json=post_message, - headers={"Authorization": auth_token}, - ) - print("Server responded with:\n%s" % post_price_data_response.json) - assert post_price_data_response.status_code == 200 - assert post_price_data_response.json["type"] == "PostPriceDataResponse" - - verify_sensor_data_in_db( - post_message, post_message["values"], db, entity_type="market", fm_scheme="fm1" - ) - - # look for Forecasting jobs in queue - assert ( - len(app.queues["forecasting"]) == 2 - ) # only one market is affected, but two horizons - horizons = [timedelta(hours=24), timedelta(hours=48)] - jobs = sorted(app.queues["forecasting"].jobs, key=lambda x: x.kwargs["horizon"]) - market = SensorField("market", fm_scheme="fm1").deserialize(post_message["market"]) - for job, horizon in zip(jobs, horizons): - assert job.kwargs["horizon"] == horizon - assert job.kwargs["start"] == parse_date(post_message["start"]) + horizon - assert job.kwargs["timed_value_type"] == "Price" - assert job.kwargs["asset_id"] == market.id - - @pytest.mark.parametrize( "post_message, fm_scheme", [ @@ -63,8 +16,7 @@ def test_post_price_data_2_0(db, app, post_message): ) def test_post_prognosis_2_0(db, app, post_message, fm_scheme): with app.test_client() as client: - # post price data - auth_token = get_auth_token(client, "test_supplier@seita.nl", "testtest") + auth_token = get_auth_token(client, "test_prosumer@seita.nl", "testtest") post_prognosis_response = client.post( url_for("flexmeasures_api_v2_0.post_prognosis"), json=post_message, diff --git a/flexmeasures/api/v2_0/tests/test_api_v2_0_sensors_fresh_db.py b/flexmeasures/api/v2_0/tests/test_api_v2_0_sensors_fresh_db.py new file mode 100644 index 000000000..87cc7598e --- /dev/null +++ 
b/flexmeasures/api/v2_0/tests/test_api_v2_0_sensors_fresh_db.py @@ -0,0 +1,63 @@ +from datetime import timedelta + +import pytest +from flask import url_for +from iso8601 import parse_date + +from flexmeasures.api.common.schemas.sensors import SensorField +from flexmeasures.api.tests.utils import get_auth_token +from flexmeasures.api.v2_0.tests.utils import ( + message_for_post_price_data, + verify_sensor_data_in_db, +) + + +@pytest.mark.parametrize( + "post_message", + [ + message_for_post_price_data(market_id=7), + message_for_post_price_data(market_id=1, prior_instead_of_horizon=True), + ], +) +def test_post_price_data_2_0( + fresh_db, + setup_roles_users_fresh_db, + setup_markets_fresh_db, + clean_redis, + app, + post_message, +): + """ + Try to post price data as a logged-in test user with the Supplier role, which should succeed. + """ + db = fresh_db + # call with client whose context ends, so that we can test for, + # after-effects in the database after teardown committed. + with app.test_client() as client: + # post price data + auth_token = get_auth_token(client, "test_supplier@seita.nl", "testtest") + post_price_data_response = client.post( + url_for("flexmeasures_api_v2_0.post_price_data"), + json=post_message, + headers={"Authorization": auth_token}, + ) + print("Server responded with:\n%s" % post_price_data_response.json) + assert post_price_data_response.status_code == 200 + assert post_price_data_response.json["type"] == "PostPriceDataResponse" + + verify_sensor_data_in_db( + post_message, post_message["values"], db, entity_type="market", fm_scheme="fm1" + ) + + # look for Forecasting jobs in queue + assert ( + len(app.queues["forecasting"]) == 2 + ) # only one market is affected, but two horizons + horizons = [timedelta(hours=24), timedelta(hours=48)] + jobs = sorted(app.queues["forecasting"].jobs, key=lambda x: x.kwargs["horizon"]) + market = SensorField("market", fm_scheme="fm1").deserialize(post_message["market"]) + for job, horizon in zip(jobs, 
horizons): + assert job.kwargs["horizon"] == horizon + assert job.kwargs["start"] == parse_date(post_message["start"]) + horizon + assert job.kwargs["timed_value_type"] == "Price" + assert job.kwargs["asset_id"] == market.id diff --git a/flexmeasures/api/v2_0/tests/test_api_v2_0_users.py b/flexmeasures/api/v2_0/tests/test_api_v2_0_users.py index 410d883ed..771d8a8f7 100644 --- a/flexmeasures/api/v2_0/tests/test_api_v2_0_users.py +++ b/flexmeasures/api/v2_0/tests/test_api_v2_0_users.py @@ -1,4 +1,4 @@ -from flask import url_for, request +from flask import url_for import pytest # from flexmeasures.data.models.user import User @@ -32,7 +32,7 @@ def test_get_users_bad_auth(client, use_auth): @pytest.mark.parametrize("include_inactive", [False, True]) -def test_get_users_inactive(client, include_inactive): +def test_get_users_inactive(client, setup_inactive_user, include_inactive): headers = { "content-type": "application/json", "Authorization": get_auth_token(client, "test_prosumer@seita.nl", "testtest"), @@ -117,7 +117,7 @@ def test_edit_user(client): def test_edit_user_with_unexpected_fields(client): - """Sending unexpected fields (not in Schema) is an Unprocessible Entity error.""" + """Sending unexpected fields (not in Schema) is an Unprocessable Entity error.""" with UserContext("test_supplier@seita.nl") as supplier: supplier_id = supplier.id with UserContext("test_prosumer@seita.nl") as prosumer: @@ -132,54 +132,3 @@ def test_edit_user_with_unexpected_fields(client): ) print("Server responded with:\n%s" % user_edit_response.json) assert user_edit_response.status_code == 422 - - -@pytest.mark.parametrize( - "sender", - ( - (""), - ("test_supplier@seita.nl"), - ("test_prosumer@seita.nl"), - ("test_prosumer@seita.nl"), - ("test_prosumer@seita.nl"), - ), -) -def test_user_reset_password(app, client, sender): - """ - Reset the password of supplier. - Only the prosumer is allowed to do that (as admin). 
- """ - with UserContext("test_supplier@seita.nl") as supplier: - supplier_id = supplier.id - old_password = supplier.password - headers = {"content-type": "application/json"} - if sender != "": - headers["Authorization"] = (get_auth_token(client, sender, "testtest"),) - with app.mail.record_messages() as outbox: - pwd_reset_response = client.patch( - url_for("flexmeasures_api_v2_0.reset_user_password", id=supplier_id), - query_string={}, - headers=headers, - ) - print("Server responded with:\n%s" % pwd_reset_response.json) - - if sender == "": - assert pwd_reset_response.status_code == 401 - return - - if sender == "test_supplier@seita.nl": - assert pwd_reset_response.status_code == 403 - return - - assert pwd_reset_response.status_code == 200 - - supplier = find_user_by_email("test_supplier@seita.nl") - assert len(outbox) == 2 - assert "has been reset" in outbox[0].subject - pwd_reset_instructions = outbox[1] - assert old_password != supplier.password - assert "reset instructions" in pwd_reset_instructions.subject - assert ( - "reset your password:\n\n%sreset/" % request.host_url - in pwd_reset_instructions.body - ) diff --git a/flexmeasures/api/v2_0/tests/test_api_v2_0_users_fresh_db.py b/flexmeasures/api/v2_0/tests/test_api_v2_0_users_fresh_db.py new file mode 100644 index 000000000..cfb2da44d --- /dev/null +++ b/flexmeasures/api/v2_0/tests/test_api_v2_0_users_fresh_db.py @@ -0,0 +1,56 @@ +import pytest +from flask import url_for, request + +from flexmeasures.api.tests.utils import UserContext, get_auth_token +from flexmeasures.data.services.users import find_user_by_email + + +@pytest.mark.parametrize( + "sender", + ( + (""), + ("test_supplier@seita.nl"), + ("test_prosumer@seita.nl"), + ("test_prosumer@seita.nl"), + ("test_prosumer@seita.nl"), + ), +) +def test_user_reset_password(app, client, sender): + """ + Reset the password of supplier. + Only the prosumer is allowed to do that (as admin). 
+ """ + with UserContext("test_supplier@seita.nl") as supplier: + supplier_id = supplier.id + old_password = supplier.password + headers = {"content-type": "application/json"} + if sender != "": + headers["Authorization"] = (get_auth_token(client, sender, "testtest"),) + with app.mail.record_messages() as outbox: + pwd_reset_response = client.patch( + url_for("flexmeasures_api_v2_0.reset_user_password", id=supplier_id), + query_string={}, + headers=headers, + ) + print("Server responded with:\n%s" % pwd_reset_response.json) + + if sender == "": + assert pwd_reset_response.status_code == 401 + return + + if sender == "test_supplier@seita.nl": + assert pwd_reset_response.status_code == 403 + return + + assert pwd_reset_response.status_code == 200 + + supplier = find_user_by_email("test_supplier@seita.nl") + assert len(outbox) == 2 + assert "has been reset" in outbox[0].subject + pwd_reset_instructions = outbox[1] + assert old_password != supplier.password + assert "reset instructions" in pwd_reset_instructions.subject + assert ( + "reset your password:\n\n%sreset/" % request.host_url + in pwd_reset_instructions.body + ) diff --git a/flexmeasures/api/v2_0/tests/utils.py b/flexmeasures/api/v2_0/tests/utils.py index a096363a6..4bc5616bd 100644 --- a/flexmeasures/api/v2_0/tests/utils.py +++ b/flexmeasures/api/v2_0/tests/utils.py @@ -19,7 +19,7 @@ def get_asset_post_data() -> dict: post_data = { "name": "Test battery 2", - "unit": "kW", + "unit": "MW", "capacity_in_mw": 3, "event_resolution": timedelta(minutes=10).seconds / 60, "latitude": 30.1, @@ -32,6 +32,7 @@ def get_asset_post_data() -> dict: def message_for_post_price_data( + market_id: int, tile_n: int = 1, compress_n: int = 1, duration: Optional[timedelta] = None, @@ -59,7 +60,7 @@ def message_for_post_price_data( duration=duration, invalid_unit=invalid_unit, ) - message["market"] = "ea1.2018-06.localhost:fm1.1" + message["market"] = f"ea1.2018-06.localhost:fm1.{market_id}" message["horizon"] = 
duration_isoformat(timedelta(hours=0)) if no_horizon or prior_instead_of_horizon: message.pop("horizon", None) @@ -144,10 +145,13 @@ def verify_sensor_data_in_db( def message_for_post_prognosis(fm_scheme: str = "fm1"): + """ + Posting prognosis for a wind mill's production. + """ message = { "type": "PostPrognosisRequest", "connection": f"ea1.2018-06.localhost:{fm_scheme}.2", - "values": [300, 300, 300, 0, 0, 300], + "values": [-300, -300, -300, 0, 0, -300], "start": "2021-01-01T00:00:00Z", "duration": "PT1H30M", "prior": "2020-12-31T18:00:00Z", diff --git a/flexmeasures/app.py b/flexmeasures/app.py index e152b0517..9918b6d9e 100644 --- a/flexmeasures/app.py +++ b/flexmeasures/app.py @@ -98,13 +98,22 @@ def create(env: Optional[str] = None, path_to_config: Optional[str] = None) -> F register_api_at(app) + # Register plugins + + from flexmeasures.utils.app_utils import register_plugins + + register_plugins(app) + # Register the UI + # If plugins registered routes already (e.g. "/"), + # they have precedence (first registration wins). from flexmeasures.ui import register_at as register_ui_at register_ui_at(app) # Profile endpoints (if needed, e.g. 
during development) + @app.before_request def before_request(): if app.config.get("FLEXMEASURES_PROFILE_REQUESTS", False): diff --git a/flexmeasures/conftest.py b/flexmeasures/conftest.py index 6cf53f6c0..c937cc17c 100644 --- a/flexmeasures/conftest.py +++ b/flexmeasures/conftest.py @@ -1,13 +1,15 @@ +from contextlib import contextmanager import pytest from random import random from datetime import datetime, timedelta +from typing import Dict from isodate import parse_duration import pandas as pd import numpy as np from flask import request, jsonify from flask_sqlalchemy import SQLAlchemy -from flask_security import roles_accepted, SQLAlchemySessionUserDatastore +from flask_security import roles_accepted from flask_security.utils import hash_password from werkzeug.exceptions import ( InternalServerError, @@ -19,20 +21,39 @@ from flexmeasures.app import create as create_app from flexmeasures.utils.time_utils import as_server_time -from flexmeasures.data.services.users import create_user, find_user_by_email +from flexmeasures.data.services.users import create_user from flexmeasures.data.models.assets import AssetType, Asset, Power from flexmeasures.data.models.data_sources import DataSource -from flexmeasures.data.models.markets import Market, Price from flexmeasures.data.models.weather import WeatherSensor, WeatherSensorType +from flexmeasures.data.models.markets import Market, MarketType, Price from flexmeasures.data.models.time_series import Sensor, TimedBelief +from flexmeasures.data.models.user import User """ Useful things for all tests. -One application is made per test session, but cleanup and recreation currently happens per test. -This can be sped up if needed by moving some functions to "module" or even "session" scope, -but then the tests need to share data and and data modifications can lead to tricky debugging. +# App + +One application is made per test session. 
+ +# Database + +Database recreation and cleanup can happen per test (use fresh_db) or per module (use db). +Having tests inside a module share a database makes those tests faster. +Tests that use fresh_db should be put in a separate module to avoid clashing with the module-scoped test db. +For example: +- test_api_v1_1.py contains tests that share a module-scoped database +- test_api_v1_1_fresh_db.py contains tests that each get a fresh function-scoped database +Further speed-up may be possible by defining a "package"-scoped or even "session"-scoped database, +but then tests in different modules need to share data and data modifications can lead to tricky debugging. + +# Data + +Various fixtures below set up data that many tests use. +In case a test needs to use such data with a fresh test database, +that test should also use a fixture that requires the fresh_db. +Such fixtures can be recognised by having fresh_db appended to their name. """ @@ -52,8 +73,22 @@ def app(): print("DONE WITH APP FIXTURE") -@pytest.fixture(scope="function") +@pytest.fixture(scope="module") def db(app): + """Fresh test db per module.""" + with create_test_db(app) as test_db: + yield test_db + + +@pytest.fixture(scope="function") +def fresh_db(app): + """Fresh test db per function.""" + with create_test_db(app) as test_db: + yield test_db + + +@contextmanager +def create_test_db(app): """ Provide a db object with the structure freshly created. This assumes a clean database. It does clean up after itself when it's done (drops everything).
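The conftest hunk above factors one `create_test_db` context manager into both a module-scoped `db` fixture and a function-scoped `fresh_db` fixture. A minimal stand-alone sketch of that pattern follows; `make_db` and the dict payload are illustrative stand-ins for the real schema setup and teardown, not FlexMeasures code:

```python
from contextlib import contextmanager

import pytest


@contextmanager
def make_db():
    """Stand-in for create_test_db: build the schema, always tear it down."""
    db = {"setup": True}  # pretend this created all tables
    try:
        yield db
    finally:
        db.clear()  # pretend this dropped everything


@pytest.fixture(scope="module")
def db():
    """One database shared by all tests in a module (faster)."""
    with make_db() as test_db:
        yield test_db


@pytest.fixture(scope="function")
def fresh_db():
    """A pristine database per test (slower, but isolated)."""
    with make_db() as test_db:
        yield test_db
```

Because both fixtures delegate to the same context manager, teardown stays identical regardless of scope; only pytest's caching behavior differs.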
@@ -75,27 +110,45 @@ def db(app): _db.drop_all() +@pytest.fixture(scope="module") +def setup_roles_users() -> Dict[str, User]: + return create_roles_users() + + @pytest.fixture(scope="function") -def setup_roles_users(db): +def setup_roles_users_fresh_db() -> Dict[str, User]: + return create_roles_users() + + +def create_roles_users() -> Dict[str, User]: """Create a minimal set of roles and users""" - create_user( + test_prosumer = create_user( username="Test Prosumer", email="test_prosumer@seita.nl", password=hash_password("testtest"), user_roles=dict(name="Prosumer", description="A Prosumer with a few assets."), ) - create_user( + test_supplier = create_user( username="Test Supplier", email="test_supplier@seita.nl", password=hash_password("testtest"), user_roles=dict(name="Supplier", description="A Supplier trading on markets."), ) + return {"Test Prosumer": test_prosumer, "Test Supplier": test_supplier} + + +@pytest.fixture(scope="module") +def setup_markets(db) -> Dict[str, Market]: + return create_test_markets(db) -@pytest.fixture(scope="function", autouse=True) -def setup_markets(db): +@pytest.fixture(scope="function") +def setup_markets_fresh_db(fresh_db) -> Dict[str, Market]: + return create_test_markets(fresh_db) + + +def create_test_markets(db) -> Dict[str, Market]: """Create the epex_da market.""" - from flexmeasures.data.models.markets import Market, MarketType day_ahead = MarketType( name="day_ahead", @@ -113,37 +166,55 @@ def setup_markets(db): knowledge_horizon_par={"x": 1, "y": 12, "z": "Europe/Paris"}, ) db.session.add(epex_da) + return {"epex_da": epex_da} -@pytest.fixture(scope="function", autouse=True) -def setup_assets(db, setup_roles_users, setup_markets): - """Make some asset types and add assets to known test users.""" - +@pytest.fixture(scope="module") +def setup_sources(db) -> Dict[str, DataSource]: data_source = DataSource(name="Seita", type="demo script") db.session.add(data_source) + return {"Seita": data_source} - db.session.add( - 
AssetType( - name="solar", - is_producer=True, - can_curtail=True, - daily_seasonality=True, - yearly_seasonality=True, - ) + +@pytest.fixture(scope="module") +def setup_asset_types(db) -> Dict[str, AssetType]: + return create_test_asset_types(db) + + +@pytest.fixture(scope="function") +def setup_asset_types_fresh_db(fresh_db) -> Dict[str, AssetType]: + return create_test_asset_types(fresh_db) + + +def create_test_asset_types(db) -> Dict[str, AssetType]: + """Make some asset types used throughout.""" + + solar = AssetType( + name="solar", + is_producer=True, + can_curtail=True, + daily_seasonality=True, + yearly_seasonality=True, ) - db.session.add( - AssetType( - name="wind", - is_producer=True, - can_curtail=True, - daily_seasonality=True, - yearly_seasonality=True, - ) + db.session.add(solar) + wind = AssetType( + name="wind", + is_producer=True, + can_curtail=True, + daily_seasonality=True, + yearly_seasonality=True, ) + db.session.add(wind) + return dict(solar=solar, wind=wind) - test_prosumer = find_user_by_email("test_prosumer@seita.nl") - test_market = Market.query.filter_by(name="epex_da").one_or_none() +@pytest.fixture(scope="module") +def setup_assets( + db, setup_roles_users, setup_markets, setup_sources, setup_asset_types +) -> Dict[str, Asset]: + """Add assets to known test users.""" + + assets = [] for asset_name in ["wind-asset-1", "wind-asset-2", "solar-asset-1"]: asset = Asset( name=asset_name, @@ -156,10 +227,11 @@ def setup_assets(db, setup_roles_users, setup_markets): max_soc_in_mwh=0, soc_in_mwh=0, unit="MW", - market_id=test_market.id, + market_id=setup_markets["epex_da"].id, ) - asset.owner = test_prosumer + asset.owner = setup_roles_users["Test Prosumer"] db.session.add(asset) + assets.append(asset) # one day of test data (one complete sine curve) time_slots = pd.date_range( @@ -171,25 +243,23 @@ def setup_assets(db, setup_roles_users, setup_markets): datetime=as_server_time(dt), horizon=parse_duration("PT0M"), value=val, - 
data_source_id=data_source.id, + data_source_id=setup_sources["Seita"].id, ) p.asset = asset db.session.add(p) + return {asset.name: asset for asset in assets} -@pytest.fixture(scope="function") -def setup_beliefs(db: SQLAlchemy, setup_markets) -> int: +@pytest.fixture(scope="module") +def setup_beliefs(db: SQLAlchemy, setup_markets, setup_sources) -> int: """ :returns: the number of beliefs set up """ sensor = Sensor.query.filter(Sensor.name == "epex_da").one_or_none() - data_source = DataSource.query.filter_by( - name="Seita", type="demo script" - ).one_or_none() beliefs = [ TimedBelief( sensor=sensor, - source=data_source, + source=setup_sources["Seita"], event_value=21, event_start="2021-03-28 16:00+01", belief_horizon=timedelta(0), @@ -199,13 +269,9 @@ def setup_beliefs(db: SQLAlchemy, setup_markets) -> int: return len(beliefs) -@pytest.fixture(scope="function", autouse=True) -def add_market_prices(db: SQLAlchemy, setup_assets, setup_markets): +@pytest.fixture(scope="module") +def add_market_prices(db: SQLAlchemy, setup_assets, setup_markets, setup_sources): """Add two days of market prices for the EPEX day-ahead market.""" - epex_da = Market.query.filter(Market.name == "epex_da").one_or_none() - data_source = DataSource.query.filter_by( - name="Seita", type="demo script" - ).one_or_none() # one day of test data (one complete sine curve) time_slots = pd.date_range( @@ -217,9 +283,9 @@ def add_market_prices(db: SQLAlchemy, setup_assets, setup_markets): datetime=as_server_time(dt), horizon=timedelta(hours=0), value=val, - data_source_id=data_source.id, + data_source_id=setup_sources["Seita"].id, ) - p.market = epex_da + p.market = setup_markets["epex_da"] db.session.add(p) # another day of test data (8 expensive hours, 8 cheap hours, and again 8 expensive hours) @@ -232,14 +298,31 @@ def add_market_prices(db: SQLAlchemy, setup_assets, setup_markets): datetime=as_server_time(dt), horizon=timedelta(hours=0), value=val, - data_source_id=data_source.id, + 
data_source_id=setup_sources["Seita"].id, ) - p.market = epex_da + p.market = setup_markets["epex_da"] db.session.add(p) -@pytest.fixture(scope="function", autouse=True) -def add_battery_assets(db: SQLAlchemy, setup_roles_users, setup_markets): +@pytest.fixture(scope="module") +def add_battery_assets( + db: SQLAlchemy, setup_roles_users, setup_markets +) -> Dict[str, Asset]: + return create_test_battery_assets(db, setup_roles_users, setup_markets) + + +@pytest.fixture(scope="function") +def add_battery_assets_fresh_db( + fresh_db, setup_roles_users_fresh_db, setup_markets_fresh_db +) -> Dict[str, Asset]: + return create_test_battery_assets( + fresh_db, setup_roles_users_fresh_db, setup_markets_fresh_db + ) + + +def create_test_battery_assets( + db: SQLAlchemy, setup_roles_users, setup_markets +) -> Dict[str, Asset]: """Add two battery assets, set their capacity values and their initial SOC.""" db.session.add( AssetType( @@ -254,13 +337,7 @@ def add_battery_assets(db: SQLAlchemy, setup_roles_users, setup_markets): ) ) - from flexmeasures.data.models.user import User, Role - - user_datastore = SQLAlchemySessionUserDatastore(db.session, User, Role) - test_prosumer = user_datastore.find_user(email="test_prosumer@seita.nl") - epex_da = Market.query.filter(Market.name == "epex_da").one_or_none() - - battery = Asset( + test_battery = Asset( name="Test battery", asset_type_name="battery", event_resolution=timedelta(minutes=15), @@ -272,13 +349,13 @@ def add_battery_assets(db: SQLAlchemy, setup_roles_users, setup_markets): soc_udi_event_id=203, latitude=10, longitude=100, - market_id=epex_da.id, + market_id=setup_markets["epex_da"].id, unit="MW", ) - battery.owner = test_prosumer - db.session.add(battery) + test_battery.owner = setup_roles_users["Test Prosumer"] + db.session.add(test_battery) - battery = Asset( + test_battery_no_prices = Asset( name="Test battery with no known prices", asset_type_name="battery", event_resolution=timedelta(minutes=15), @@ -290,15 +367,21 @@ 
def add_battery_assets(db: SQLAlchemy, setup_roles_users, setup_markets): soc_udi_event_id=203, latitude=10, longitude=100, - market_id=epex_da.id, + market_id=setup_markets["epex_da"].id, unit="MW", ) - battery.owner = test_prosumer - db.session.add(battery) - - -@pytest.fixture(scope="function", autouse=True) -def add_charging_station_assets(db: SQLAlchemy, setup_roles_users, setup_markets): + test_battery_no_prices.owner = setup_roles_users["Test Prosumer"] + db.session.add(test_battery_no_prices) + return { + "Test battery": test_battery, + "Test battery with no known prices": test_battery_no_prices, + } + + +@pytest.fixture(scope="module") +def add_charging_station_assets( + db: SQLAlchemy, setup_roles_users, setup_markets +) -> Dict[str, Asset]: """Add uni- and bi-directional charging station assets, set their capacity value and their initial SOC.""" db.session.add( AssetType( @@ -325,12 +408,6 @@ def add_charging_station_assets(db: SQLAlchemy, setup_roles_users, setup_markets ) ) - from flexmeasures.data.models.user import User, Role - - user_datastore = SQLAlchemySessionUserDatastore(db.session, User, Role) - test_prosumer = user_datastore.find_user(email="test_prosumer@seita.nl") - epex_da = Market.query.filter(Market.name == "epex_da").one_or_none() - charging_station = Asset( name="Test charging station", asset_type_name="one-way_evse", @@ -343,10 +420,10 @@ def add_charging_station_assets(db: SQLAlchemy, setup_roles_users, setup_markets soc_udi_event_id=203, latitude=10, longitude=100, - market_id=epex_da.id, + market_id=setup_markets["epex_da"].id, unit="MW", ) - charging_station.owner = test_prosumer + charging_station.owner = setup_roles_users["Test Prosumer"] db.session.add(charging_station) bidirectional_charging_station = Asset( @@ -361,20 +438,33 @@ def add_charging_station_assets(db: SQLAlchemy, setup_roles_users, setup_markets soc_udi_event_id=203, latitude=10, longitude=100, - market_id=epex_da.id, + market_id=setup_markets["epex_da"].id, 
unit="MW", ) - bidirectional_charging_station.owner = test_prosumer + bidirectional_charging_station.owner = setup_roles_users["Test Prosumer"] db.session.add(bidirectional_charging_station) + return { + "Test charging station": charging_station, + "Test charging station (bidirectional)": bidirectional_charging_station, + } + + +@pytest.fixture(scope="module") +def add_weather_sensors(db) -> Dict[str, WeatherSensor]: + return create_weather_sensors(db) -@pytest.fixture(scope="function", autouse=True) -def add_weather_sensors(db: SQLAlchemy): +@pytest.fixture(scope="function") +def add_weather_sensors_fresh_db(fresh_db) -> Dict[str, WeatherSensor]: + return create_weather_sensors(fresh_db) + + +def create_weather_sensors(db: SQLAlchemy): """Add some weather sensors and weather sensor types.""" test_sensor_type = WeatherSensorType(name="wind_speed") db.session.add(test_sensor_type) - sensor = WeatherSensor( + wind_sensor = WeatherSensor( name="wind_speed_sensor", weather_sensor_type_name="wind_speed", event_resolution=timedelta(minutes=5), @@ -382,11 +472,11 @@ def add_weather_sensors(db: SQLAlchemy): longitude=126, unit="m/s", ) - db.session.add(sensor) + db.session.add(wind_sensor) test_sensor_type = WeatherSensorType(name="temperature") db.session.add(test_sensor_type) - sensor = WeatherSensor( + temp_sensor = WeatherSensor( name="temperature_sensor", weather_sensor_type_name="temperature", event_resolution=timedelta(minutes=5), @@ -394,10 +484,11 @@ def add_weather_sensors(db: SQLAlchemy): longitude=126.0, unit="°C", ) - db.session.add(sensor) + db.session.add(temp_sensor) + return {"wind": wind_sensor, "temperature": temp_sensor} -@pytest.fixture(scope="function", autouse=True) +@pytest.fixture(scope="module") def add_sensors(db: SQLAlchemy): """Add some generic sensors.""" height_sensor = Sensor( @@ -405,9 +496,10 @@ def add_sensors(db: SQLAlchemy): unit="m", ) db.session.add(height_sensor) + return height_sensor -@pytest.fixture(scope="function", autouse=True) 
+@pytest.fixture(scope="function") def clean_redis(app): failed = app.queues["forecasting"].failed_job_registry app.queues["forecasting"].empty() diff --git a/flexmeasures/data/migrations/versions/04f0e2d2924a_add_source_id_as_primary_key_for_timed_beliefs.py b/flexmeasures/data/migrations/versions/04f0e2d2924a_add_source_id_as_primary_key_for_timed_beliefs.py new file mode 100644 index 000000000..fe2d80d88 --- /dev/null +++ b/flexmeasures/data/migrations/versions/04f0e2d2924a_add_source_id_as_primary_key_for_timed_beliefs.py @@ -0,0 +1,39 @@ +"""add source id as primary key for timed beliefs + +Revision ID: 04f0e2d2924a +Revises: e62ac5f519d7 +Create Date: 2021-04-10 13:53:22.561718 + +""" +from alembic import op + + +# revision identifiers, used by Alembic. +revision = "04f0e2d2924a" +down_revision = "e62ac5f519d7" +branch_labels = None +depends_on = None + + +def upgrade(): + op.drop_constraint("timed_belief_pkey", "timed_belief") + op.create_primary_key( + "timed_belief_pkey", + "timed_belief", + [ + "event_start", + "belief_horizon", + "cumulative_probability", + "sensor_id", + "source_id", + ], + ) + + +def downgrade(): + op.drop_constraint("timed_belief_pkey", "timed_belief") + op.create_primary_key( + "timed_belief_pkey", + "timed_belief", + ["event_start", "belief_horizon", "cumulative_probability", "sensor_id"], + ) diff --git a/flexmeasures/data/models/assets.py b/flexmeasures/data/models/assets.py index af7ab4b19..7dd30712e 100644 --- a/flexmeasures/data/models/assets.py +++ b/flexmeasures/data/models/assets.py @@ -3,13 +3,9 @@ import isodate import timely_beliefs as tb from sqlalchemy.orm import Query -from marshmallow import ValidationError, validate, validates, fields, validates_schema from flexmeasures.data.config import db -from flexmeasures.data import ma -from flexmeasures.data.models.time_series import Sensor, SensorSchemaMixin, TimedValue -from flexmeasures.data.models.markets import Market -from flexmeasures.data.models.user import User +from 
flexmeasures.data.models.time_series import Sensor, TimedValue from flexmeasures.utils.entity_address_utils import build_entity_address from flexmeasures.utils.flexmeasures_inflection import humanize, pluralize @@ -113,9 +109,12 @@ def __init__(self, **kwargs): else: # The UI may initialize Asset objects from API form data with a known id sensor_id = kwargs["id"] - + if "unit" not in kwargs: + kwargs["unit"] = "MW" # current default super(Asset, self).__init__(**kwargs) self.id = sensor_id + if self.unit != "MW": + raise Exception("FlexMeasures only supports MW as unit for now.") self.name = self.name.replace(" (MW)", "") if "display_name" not in kwargs: self.display_name = humanize(self.name) @@ -187,67 +186,6 @@ def __repr__(self): ) -class AssetSchema(SensorSchemaMixin, ma.SQLAlchemySchema): - """ - Asset schema, with validations. - """ - - class Meta: - model = Asset - - @validates("name") - def validate_name(self, name: str): - asset = Asset.query.filter(Asset.name == name).one_or_none() - if asset: - raise ValidationError(f"An asset with the name {name} already exists.") - - @validates("owner_id") - def validate_owner(self, owner_id: int): - owner = User.query.get(owner_id) - if not owner: - raise ValidationError(f"Owner with id {owner_id} doesn't exist.") - if "Prosumer" not in owner.flexmeasures_roles: - raise ValidationError( - "Asset owner must have role 'Prosumer'." - f" User {owner_id} has roles {[r.name for r in owner.flexmeasures_roles]}." 
- ) - - @validates("market_id") - def validate_market(self, market_id: int): - market = Market.query.get(market_id) - if not market: - raise ValidationError(f"Market with id {market_id} doesn't exist.") - - @validates("asset_type_name") - def validate_asset_type(self, asset_type_name: str): - asset_type = AssetType.query.get(asset_type_name) - if not asset_type: - raise ValidationError(f"Asset type {asset_type_name} doesn't exist.") - - @validates_schema(skip_on_field_errors=False) - def validate_soc_constraints(self, data, **kwargs): - if "max_soc_in_mwh" in data and "min_soc_in_mwh" in data: - if data["max_soc_in_mwh"] < data["min_soc_in_mwh"]: - errors = { - "max_soc_in_mwh": "This value must be equal or higher than the minimum soc." - } - raise ValidationError(errors) - - id = ma.auto_field() - display_name = fields.Str(validate=validate.Length(min=4)) - capacity_in_mw = fields.Float(required=True, validate=validate.Range(min=0.0001)) - min_soc_in_mwh = fields.Float(validate=validate.Range(min=0)) - max_soc_in_mwh = fields.Float(validate=validate.Range(min=0)) - soc_in_mwh = ma.auto_field() - soc_datetime = ma.auto_field() - soc_udi_event_id = ma.auto_field() - latitude = fields.Float(required=True, validate=validate.Range(min=-90, max=90)) - longitude = fields.Float(required=True, validate=validate.Range(min=-180, max=180)) - asset_type_name = ma.auto_field(required=True) - owner_id = ma.auto_field(required=True) - market_id = ma.auto_field(required=True) - - def assets_share_location(assets: List[Asset]) -> bool: """ Return True if all assets in this list are located on the same spot. 
@@ -302,7 +240,7 @@ def __init__(self, **kwargs): super(Power, self).__init__(**kwargs) def __repr__(self): - return "" % ( + return "" % ( self.value, self.asset_id, self.datetime, diff --git a/flexmeasures/data/models/charts/__init__.py b/flexmeasures/data/models/charts/__init__.py new file mode 100644 index 000000000..a25bd3cf3 --- /dev/null +++ b/flexmeasures/data/models/charts/__init__.py @@ -0,0 +1,26 @@ +from inspect import getmembers, isfunction + +from . import belief_charts +from .defaults import apply_chart_defaults + + +def chart_type_to_chart_specs(chart_type: str, **kwargs) -> dict: + """Create chart specs of a given chart type, using FlexMeasures defaults for settings like width and height. + + :param chart_type: Name of a variable defining chart specs or a function returning chart specs. + The chart specs can be a dictionary or an Altair chart specification. + - In case of a dictionary, the creator needs to ensure that the dictionary contains valid specs + - In case of an Altair chart specification, Altair validates for you + :returns: A dictionary containing a vega-lite chart specification + """ + # Create a dictionary mapping chart types to chart specs, and apply defaults to the chart specs, too. 
+ belief_charts_mapping = { + chart_type: apply_chart_defaults(chart_specs) + for chart_type, chart_specs in getmembers(belief_charts) + if isfunction(chart_specs) or isinstance(chart_specs, dict) + } + # Create chart specs + chart_specs_or_fnc = belief_charts_mapping[chart_type] + if isfunction(chart_specs_or_fnc): + return chart_specs_or_fnc(**kwargs) + return chart_specs_or_fnc diff --git a/flexmeasures/data/models/charts/belief_charts.py b/flexmeasures/data/models/charts/belief_charts.py new file mode 100644 index 000000000..e9c810de0 --- /dev/null +++ b/flexmeasures/data/models/charts/belief_charts.py @@ -0,0 +1,27 @@ +from flexmeasures.data.models.charts.defaults import TIME_TITLE, TIME_TOOLTIP_TITLE + + +def bar_chart(title: str, quantity: str = "unknown quantity", unit: str = "a.u."): + if not unit: + unit = "a.u." + return { + "description": "A simple bar chart.", + "title": title, + "mark": "bar", + "encoding": { + "x": {"field": "event_start", "type": "T", "title": TIME_TITLE}, + "y": { + "field": "event_value", + "type": "quantitative", + "title": quantity + " (" + unit + ")", + }, + "tooltip": [ + {"field": "full_date", "title": TIME_TOOLTIP_TITLE, "type": "nominal"}, + { + "field": "event_value", + "title": quantity + " (" + unit + ")", + "type": "quantitative", + }, + ], + }, + } diff --git a/flexmeasures/data/models/charts/defaults.py b/flexmeasures/data/models/charts/defaults.py new file mode 100644 index 000000000..4d979c5a6 --- /dev/null +++ b/flexmeasures/data/models/charts/defaults.py @@ -0,0 +1,42 @@ +from functools import wraps +from typing import Callable, Union + +import altair as alt + + +HEIGHT = 300 +WIDTH = 600 +REDUCED_HEIGHT = REDUCED_WIDTH = 60 +SELECTOR_COLOR = "darkred" +TIME_FORMAT = "%I:%M %p on %A %b %e, %Y" +TIME_TOOLTIP_TITLE = "Time and date" +TIME_TITLE = None +TIME_SELECTION_TOOLTIP = "Click and drag to select a time window" + + +def apply_chart_defaults(fn): + @wraps(fn) + def decorated_chart_specs(*args, **kwargs): + 
dataset_name = kwargs.pop("dataset_name", None) + if isinstance(fn, Callable): + # function that returns a chart specification + chart_specs: Union[dict, alt.TopLevelMixin] = fn(*args, **kwargs) + else: + # not a function, but a direct chart specification + chart_specs: Union[dict, alt.TopLevelMixin] = fn + if isinstance(chart_specs, alt.TopLevelMixin): + chart_specs = chart_specs.to_dict() + chart_specs.pop("$schema") + if dataset_name: + chart_specs["data"] = {"name": dataset_name} + chart_specs["height"] = HEIGHT + chart_specs["width"] = WIDTH + chart_specs["transform"] = [ + { + "as": "full_date", + "calculate": f"timeFormat(datum.event_start, '{TIME_FORMAT}')", + } + ] + return chart_specs + + return decorated_chart_specs diff --git a/flexmeasures/data/models/charts/readme.md b/flexmeasures/data/models/charts/readme.md new file mode 100644 index 000000000..f918ff80f --- /dev/null +++ b/flexmeasures/data/models/charts/readme.md @@ -0,0 +1,7 @@ +# Developer docs for adding chart specs + +Chart specs can be specified as a dictionary with a vega-lite specification or as an altair chart. +Alternatively, they can be specified as a function that returns a dict (with vega-lite specs) or an altair chart. +This approach is useful if you need to parameterize the specification with kwargs. 
+ +Todo: support a plug-in architecture, see https://packaging.python.org/guides/creating-and-discovering-plugins/ diff --git a/flexmeasures/data/models/data_sources.py b/flexmeasures/data/models/data_sources.py index 430dc4de9..66696f37d 100644 --- a/flexmeasures/data/models/data_sources.py +++ b/flexmeasures/data/models/data_sources.py @@ -1,9 +1,10 @@ -from typing import Optional +from typing import Optional, Union import timely_beliefs as tb +from flask import current_app from flexmeasures.data.config import db -from flexmeasures.data.models.user import User +from flexmeasures.data.models.user import User, is_user class DataSource(db.Model, tb.BeliefSourceDBMixin): @@ -31,6 +32,8 @@ def __init__( name = user.username type = "user" self.user_id = user.id + elif user is None and type == "user": + raise TypeError("A data source cannot have type 'user' but no user set.") self.type = type tb.BeliefSourceDBMixin.__init__(self, name=name) db.Model.__init__(self, **kwargs) @@ -46,10 +49,44 @@ def label(self): return f"schedule by {self.name}" elif self.type == "crawling script": return f"data retrieved from {self.name}" - elif self.type == "demo script": + elif self.type in ("demo script", "CLI script"): return f"demo data entered by {self.name}" else: return f"data from {self.name}" def __repr__(self): return "" % (self.id, self.label) + + +def get_or_create_source( + source: Union[User, str], source_type: Optional[str] = None, flush: bool = True +) -> DataSource: + if is_user(source): + source_type = "user" + query = DataSource.query.filter(DataSource.type == source_type) + if is_user(source): + query = query.filter(DataSource.user == source) + elif isinstance(source, str): + query = query.filter(DataSource.name == source) + else: + raise TypeError("source should be of type User or str") + _source = query.one_or_none() + if not _source: + current_app.logger.info(f"Setting up '{source}' as new data source...") + if is_user(source): + _source = DataSource(user=source) 
+ else: + if source_type is None: + raise TypeError("Please specify a source type") + _source = DataSource(name=source, type=source_type) + db.session.add(_source) + if flush: + # assigns id so that we can reference the new object in the current db session + db.session.flush() + return _source + + +def get_source_or_none(source: int, source_type: str) -> Optional[DataSource]: + query = DataSource.query.filter(DataSource.type == source_type) + query = query.filter(DataSource.id == int(source)) + return query.one_or_none() diff --git a/flexmeasures/data/models/planning/tests/conftest.py b/flexmeasures/data/models/planning/tests/conftest.py index e69de29bb..359f4d79b 100644 --- a/flexmeasures/data/models/planning/tests/conftest.py +++ b/flexmeasures/data/models/planning/tests/conftest.py @@ -0,0 +1,9 @@ +import pytest + + +@pytest.fixture(scope="function", autouse=True) +def setup_planning_test_data(db, add_market_prices, add_charging_station_assets): + """ + Set up data for all planning tests. 
+ """ + print("Setting up data for planning tests on %s" % db.engine) diff --git a/flexmeasures/data/models/planning/tests/test_solver.py b/flexmeasures/data/models/planning/tests/test_solver.py index 8184670f2..fd832b586 100644 --- a/flexmeasures/data/models/planning/tests/test_solver.py +++ b/flexmeasures/data/models/planning/tests/test_solver.py @@ -12,7 +12,7 @@ from flexmeasures.utils.time_utils import as_server_time -def test_battery_solver_day_1(): +def test_battery_solver_day_1(add_battery_assets): epex_da = Market.query.filter(Market.name == "epex_da").one_or_none() battery = Asset.query.filter(Asset.name == "Test battery").one_or_none() start = as_server_time(datetime(2015, 1, 1)) @@ -33,7 +33,7 @@ def test_battery_solver_day_1(): assert soc <= battery.max_soc_in_mwh -def test_battery_solver_day_2(): +def test_battery_solver_day_2(add_battery_assets): epex_da = Market.query.filter(Market.name == "epex_da").one_or_none() battery = Asset.query.filter(Asset.name == "Test battery").one_or_none() start = as_server_time(datetime(2015, 1, 2)) diff --git a/flexmeasures/data/models/time_series.py b/flexmeasures/data/models/time_series.py index b2598c234..4e5284270 100644 --- a/flexmeasures/data/models/time_series.py +++ b/flexmeasures/data/models/time_series.py @@ -1,14 +1,13 @@ from typing import List, Dict, Optional, Union, Tuple from datetime import datetime as datetime_type, timedelta +import json from sqlalchemy.ext.declarative import declared_attr from sqlalchemy.orm import Query, Session import timely_beliefs as tb import timely_beliefs.utils as tb_utils -from marshmallow import Schema, fields from flexmeasures.data.config import db -from flexmeasures.data import ma from flexmeasures.data.queries.utils import ( add_belief_timing_filter, add_user_source_filter, @@ -18,6 +17,9 @@ ) from flexmeasures.data.services.time_series import collect_time_series_data from flexmeasures.utils.entity_address_utils import build_entity_address +from 
flexmeasures.data.models.charts import chart_type_to_chart_specs +from flexmeasures.utils.time_utils import server_now +from flexmeasures.utils.flexmeasures_inflection import capitalize class Sensor(db.Model, tb.SensorDBMixin): @@ -34,27 +36,116 @@ def entity_address(self) -> str: def search_beliefs( self, - event_time_window: Tuple[Optional[datetime_type], Optional[datetime_type]] = ( - None, - None, - ), - belief_time_window: Tuple[Optional[datetime_type], Optional[datetime_type]] = ( - None, - None, - ), + event_starts_after: Optional[datetime_type] = None, + event_ends_before: Optional[datetime_type] = None, + beliefs_after: Optional[datetime_type] = None, + beliefs_before: Optional[datetime_type] = None, source: Optional[Union[int, List[int], str, List[str]]] = None, + as_json: bool = False, ): """Search all beliefs about events for this sensor. - :param event_time_window: search only events within this time window - :param belief_time_window: search only beliefs within this time window - :param source: search only beliefs by this source (pass its name or id) or list of sources""" - return TimedBelief.search( + :param event_starts_after: only return beliefs about events that start after this datetime (inclusive) + :param event_ends_before: only return beliefs about events that end before this datetime (inclusive) + :param beliefs_after: only return beliefs formed after this datetime (inclusive) + :param beliefs_before: only return beliefs formed before this datetime (inclusive) + :param source: search only beliefs by this source (pass its name or id) or list of sources + :param as_json: return beliefs in JSON format (e.g. 
for use in charts) rather than as BeliefsDataFrame + """ + bdf = TimedBelief.search( sensor=self, - event_time_window=event_time_window, - belief_time_window=belief_time_window, + event_starts_after=event_starts_after, + event_ends_before=event_ends_before, + beliefs_after=beliefs_after, + beliefs_before=beliefs_before, source=source, ) + if as_json: + df = bdf.reset_index() + df["source"] = df["source"].apply(lambda x: x.name) + return df.to_json(orient="records") + return bdf + + def chart( + self, + chart_type: str = "bar_chart", + event_starts_after: Optional[datetime_type] = None, + event_ends_before: Optional[datetime_type] = None, + beliefs_after: Optional[datetime_type] = None, + beliefs_before: Optional[datetime_type] = None, + source: Optional[Union[int, List[int], str, List[str]]] = None, + include_data: bool = False, + dataset_name: Optional[str] = None, + **kwargs, + ) -> dict: + """Create a chart showing sensor data. + + :param chart_type: currently only "bar_chart" # todo: where can we properly list the available chart types? 
+ :param event_starts_after: only return beliefs about events that start after this datetime (inclusive) + :param event_ends_before: only return beliefs about events that end before this datetime (inclusive) + :param beliefs_after: only return beliefs formed after this datetime (inclusive) + :param beliefs_before: only return beliefs formed before this datetime (inclusive) + :param source: search only beliefs by this source (pass its name or id) or list of sources + :param include_data: if True, include data in the chart, or if False, exclude data + :param dataset_name: optionally name the dataset used in the chart (the default name is sensor_<id>) + """ + + # Set up chart specification + if dataset_name is None: + dataset_name = "sensor_" + str(self.id) + self.sensor_type = ( + self.name + ) # todo remove this placeholder when sensor types are modelled + chart_specs = chart_type_to_chart_specs( + chart_type, + title=capitalize(self.name), + quantity=capitalize(self.sensor_type), + unit=self.unit, + dataset_name=dataset_name, + **kwargs, + ) + + if include_data: + # Set up data + data = self.search_beliefs( + as_json=True, + event_starts_after=event_starts_after, + event_ends_before=event_ends_before, + beliefs_after=beliefs_after, + beliefs_before=beliefs_before, + source=source, + ) + # Combine chart specs and data + chart_specs["datasets"] = {dataset_name: json.loads(data)} + return chart_specs + + @property + def timerange(self) -> Dict[str, datetime_type]: + """Time range for which sensor data exists.
+ + :returns: dictionary with start and end, for example: + { + 'start': datetime.datetime(2020, 12, 3, 14, 0, tzinfo=pytz.utc), + 'end': datetime.datetime(2020, 12, 3, 14, 30, tzinfo=pytz.utc) + } + """ + least_recent_query = ( + TimedBelief.query.filter(TimedBelief.sensor == self) + .order_by(TimedBelief.event_start.asc()) + .limit(1) + ) + most_recent_query = ( + TimedBelief.query.filter(TimedBelief.sensor == self) + .order_by(TimedBelief.event_start.desc()) + .limit(1) + ) + results = least_recent_query.union_all(most_recent_query).all() + if not results: + # return now in case there is no data for the sensor + now = server_now() + return dict(start=now, end=now) + least_recent, most_recent = results + return dict(start=least_recent.event_start, end=most_recent.event_end) def __repr__(self) -> str: return f"" @@ -87,45 +178,63 @@ def __init__( def search( cls, sensor: Sensor, - event_time_window: Tuple[Optional[datetime_type], Optional[datetime_type]] = ( - None, - None, - ), - belief_time_window: Tuple[Optional[datetime_type], Optional[datetime_type]] = ( - None, - None, - ), + event_starts_after: Optional[datetime_type] = None, + event_ends_before: Optional[datetime_type] = None, + beliefs_after: Optional[datetime_type] = None, + beliefs_before: Optional[datetime_type] = None, source: Optional[Union[int, List[int], str, List[str]]] = None, ) -> tb.BeliefsDataFrame: """Search all beliefs about events for a given sensor. 
:param sensor: search only this sensor - :param event_time_window: search only events within this time window - :param belief_time_window: search only beliefs within this time window + :param event_starts_after: only return beliefs about events that start after this datetime (inclusive) + :param event_ends_before: only return beliefs about events that end before this datetime (inclusive) + :param beliefs_after: only return beliefs formed after this datetime (inclusive) + :param beliefs_before: only return beliefs formed before this datetime (inclusive) :param source: search only beliefs by this source (pass its name or id) or list of sources """ return cls.search_session( session=db.session, sensor=sensor, - event_before=event_time_window[1], - event_not_before=event_time_window[0], - belief_before=belief_time_window[1], - belief_not_before=belief_time_window[0], + event_starts_after=event_starts_after, + event_ends_before=event_ends_before, + beliefs_after=beliefs_after, + beliefs_before=beliefs_before, source=source, ) @classmethod - def add(cls, bdf: tb.BeliefsDataFrame, commit_transaction: bool = True): + def add( + cls, + bdf: tb.BeliefsDataFrame, + expunge_session: bool = False, + allow_overwrite: bool = False, + bulk_save_objects: bool = False, + commit_transaction: bool = False, + ): """Add a BeliefsDataFrame as timed beliefs in the database. :param bdf: the BeliefsDataFrame to be persisted - :param commit_transaction: if True, the session is committed - if False, you can still add other data to the session - and commit it all within an atomic transaction + :param expunge_session: if True, all non-flushed instances are removed from the session before adding beliefs. + Expunging can resolve problems you might encounter with states of objects in your session. + When using this option, you might want to flush newly-created objects which are not beliefs + (e.g. a sensor or data source object). 
+ :param allow_overwrite: if True, new objects are merged + if False, objects are added to the session or bulk saved + :param bulk_save_objects: if True, objects are bulk saved with session.bulk_save_objects(), + which is quite fast but has several caveats, see: + https://docs.sqlalchemy.org/orm/persistence_techniques.html#bulk-operations-caveats + if False, objects are added to the session with session.add_all() + :param commit_transaction: if True, the session is committed + if False, you can still add other data to the session + and commit it all within an atomic transaction """ return cls.add_to_session( session=db.session, beliefs_data_frame=bdf, + expunge_session=expunge_session, + allow_overwrite=allow_overwrite, + bulk_save_objects=bulk_save_objects, commit_transaction=commit_transaction, ) @@ -134,35 +243,6 @@ def __repr__(self) -> str: return tb.TimedBelief.__repr__(self) -class SensorSchemaMixin(Schema): - """ - Base sensor schema. - - Here we include all fields which are implemented by timely_beliefs.SensorDBMixin - All classes inheriting from timely beliefs sensor don't need to repeat these. - In a while, this schema can represent our unified Sensor class. - - When subclassing, also subclass from `ma.SQLAlchemySchema` and add your own DB model class, e.g.: - - class Meta: - model = Asset - """ - - name = ma.auto_field(required=True) - unit = ma.auto_field(required=True) - timezone = ma.auto_field() - event_resolution = fields.TimeDelta(required=True, precision="minutes") - - -class SensorSchema(SensorSchemaMixin, ma.SQLAlchemySchema): - """ - Sensor schema, with validations. - """ - - class Meta: - model = Sensor - - class TimedValue(object): """ A mixin of all tables that store time series data, either forecasts or measurements. 
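The new `timerange` property in the time_series.py hunk above falls back to a zero-width range at "now" when a sensor has no data, and otherwise takes the start of the earliest event and the end of the latest-starting event. A minimal standalone sketch of that logic, with a plain list of `(event_start, event_end)` tuples standing in for the queried `TimedBelief` rows:

```python
from datetime import datetime, timedelta, timezone

def timerange(events, now):
    """Span of recorded events, or a zero-width range at `now` if there are none.

    `events` is a hypothetical stand-in for the least/most recent TimedBelief
    rows the property queries: a list of (event_start, event_end) tuples.
    """
    if not events:
        # mirror the property's fallback for sensors without data
        return dict(start=now, end=now)
    least_recent = min(events, key=lambda e: e[0])  # earliest event_start
    most_recent = max(events, key=lambda e: e[0])  # latest event_start
    return dict(start=least_recent[0], end=most_recent[1])

now = datetime(2020, 12, 3, 14, 0, tzinfo=timezone.utc)
events = [
    (now, now + timedelta(minutes=15)),
    (now + timedelta(minutes=15), now + timedelta(minutes=30)),
]
```

The actual property does this with two `LIMIT 1` queries (ascending and descending by `event_start`) combined via `union_all`, so only two rows ever leave the database.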
diff --git a/flexmeasures/data/models/user.py b/flexmeasures/data/models/user.py index c2fa4ea9f..fca257a48 100644 --- a/flexmeasures/data/models/user.py +++ b/flexmeasures/data/models/user.py @@ -98,3 +98,14 @@ def remember_login(the_app, user): if user.login_count is None: user.login_count = 0 user.login_count = user.login_count + 1 + + +def is_user(o) -> bool: + """True if object is or proxies a User, False otherwise. + + Takes into account that object can be of LocalProxy type, and + uses get_current_object to get the underlying (User) object. + """ + return isinstance(o, User) or ( + hasattr(o, "_get_current_object") and isinstance(o._get_current_object(), User) + ) diff --git a/flexmeasures/data/models/weather.py b/flexmeasures/data/models/weather.py index c1cf4563d..b844a5818 100644 --- a/flexmeasures/data/models/weather.py +++ b/flexmeasures/data/models/weather.py @@ -6,14 +6,12 @@ from sqlalchemy.ext.hybrid import hybrid_method, hybrid_property from sqlalchemy.sql.expression import func from sqlalchemy.schema import UniqueConstraint -from marshmallow import ValidationError, validates, validate, fields -from flexmeasures.data import ma from flexmeasures.data.config import db -from flexmeasures.data.models.time_series import Sensor, SensorSchemaMixin, TimedValue +from flexmeasures.data.models.time_series import Sensor, TimedValue +from flexmeasures.utils.geo_utils import parse_lat_lng from flexmeasures.utils.entity_address_utils import build_entity_address from flexmeasures.utils.flexmeasures_inflection import humanize -from flexmeasures.utils.geo_utils import parse_lat_lng class WeatherSensorType(db.Model): @@ -137,7 +135,7 @@ def great_circle_distance(self, **kwargs): great_circle_distance(lat=32, lng=54) """ - r = 6371 # Radius of Earth in kilometers + r = 6371 # Radius of Earth in kilometres other_latitude, other_longitude = parse_lat_lng(kwargs) if other_latitude is None or other_longitude is None: return None @@ -192,37 +190,6 @@ def to_dict(self) -> 
Dict[str, str]: return dict(name=self.name, sensor_type=self.weather_sensor_type_name) -class WeatherSensorSchema(SensorSchemaMixin, ma.SQLAlchemySchema): - """ - WeatherSensor schema, with validations. - """ - - class Meta: - model = WeatherSensor - - @validates("name") - def validate_name(self, name: str): - sensor = WeatherSensor.query.filter( - WeatherSensor.name == name.lower() - ).one_or_none() - if sensor: - raise ValidationError( - f"A weather sensor with the name {name} already exists." - ) - - @validates("weather_sensor_type_name") - def validate_weather_sensor_type(self, weather_sensor_type_name: str): - weather_sensor_type = WeatherSensorType.query.get(weather_sensor_type_name) - if not weather_sensor_type: - raise ValidationError( - f"Weather sensor type {weather_sensor_type_name} doesn't exist." - ) - - weather_sensor_type_name = ma.auto_field(required=True) - latitude = fields.Float(required=True, validate=validate.Range(min=-90, max=90)) - longitude = fields.Float(required=True, validate=validate.Range(min=-180, max=180)) - - class Weather(TimedValue, db.Model): """ All weather measurements are stored in one slim table. 
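The `is_user` helper added to user.py above needs to see through werkzeug's `LocalProxy` (e.g. Flask-Security's `current_user`), which forwards attribute access to a wrapped object. A self-contained sketch of the same duck-typing check, with `ProxyStub` as a hypothetical stand-in for `LocalProxy`:

```python
class User:
    """Minimal stand-in for the FlexMeasures User model."""

class ProxyStub:
    """Stand-in for werkzeug's LocalProxy, which wraps e.g. current_user."""
    def __init__(self, obj):
        self._obj = obj

    def _get_current_object(self):
        # LocalProxy exposes this method to reach the underlying object
        return self._obj

def is_user(o) -> bool:
    """True if object is or proxies a User, False otherwise."""
    return isinstance(o, User) or (
        hasattr(o, "_get_current_object") and isinstance(o._get_current_object(), User)
    )
```

Checking for `_get_current_object` rather than importing `LocalProxy` keeps the helper decoupled from werkzeug internals, at the cost of accepting any object that happens to define that method.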
diff --git a/flexmeasures/api/common/schemas/__init__.py b/flexmeasures/data/schemas/__init__.py similarity index 100% rename from flexmeasures/api/common/schemas/__init__.py rename to flexmeasures/data/schemas/__init__.py diff --git a/flexmeasures/data/schemas/assets.py b/flexmeasures/data/schemas/assets.py new file mode 100644 index 000000000..3fbfed276 --- /dev/null +++ b/flexmeasures/data/schemas/assets.py @@ -0,0 +1,68 @@ +from marshmallow import validates, ValidationError, validates_schema, fields, validate + +from flexmeasures.data import ma +from flexmeasures.data.models.assets import Asset, AssetType +from flexmeasures.data.models.markets import Market +from flexmeasures.data.models.user import User +from flexmeasures.data.schemas.sensors import SensorSchemaMixin + + +class AssetSchema(SensorSchemaMixin, ma.SQLAlchemySchema): + """ + Asset schema, with validations. + """ + + class Meta: + model = Asset + + @validates("name") + def validate_name(self, name: str): + asset = Asset.query.filter(Asset.name == name).one_or_none() + if asset: + raise ValidationError(f"An asset with the name {name} already exists.") + + @validates("owner_id") + def validate_owner(self, owner_id: int): + owner = User.query.get(owner_id) + if not owner: + raise ValidationError(f"Owner with id {owner_id} doesn't exist.") + if "Prosumer" not in owner.flexmeasures_roles: + raise ValidationError( + "Asset owner must have role 'Prosumer'." + f" User {owner_id} has roles {[r.name for r in owner.flexmeasures_roles]}." 
+ ) + + @validates("market_id") + def validate_market(self, market_id: int): + market = Market.query.get(market_id) + if not market: + raise ValidationError(f"Market with id {market_id} doesn't exist.") + + @validates("asset_type_name") + def validate_asset_type(self, asset_type_name: str): + asset_type = AssetType.query.get(asset_type_name) + if not asset_type: + raise ValidationError(f"Asset type {asset_type_name} doesn't exist.") + + @validates_schema(skip_on_field_errors=False) + def validate_soc_constraints(self, data, **kwargs): + if "max_soc_in_mwh" in data and "min_soc_in_mwh" in data: + if data["max_soc_in_mwh"] < data["min_soc_in_mwh"]: + errors = { + "max_soc_in_mwh": "This value must be equal or higher than the minimum soc." + } + raise ValidationError(errors) + + id = ma.auto_field() + display_name = fields.Str(validate=validate.Length(min=4)) + capacity_in_mw = fields.Float(required=True, validate=validate.Range(min=0.0001)) + min_soc_in_mwh = fields.Float(validate=validate.Range(min=0)) + max_soc_in_mwh = fields.Float(validate=validate.Range(min=0)) + soc_in_mwh = ma.auto_field() + soc_datetime = ma.auto_field() + soc_udi_event_id = ma.auto_field() + latitude = fields.Float(required=True, validate=validate.Range(min=-90, max=90)) + longitude = fields.Float(required=True, validate=validate.Range(min=-180, max=180)) + asset_type_name = ma.auto_field(required=True) + owner_id = ma.auto_field(required=True) + market_id = ma.auto_field(required=True) diff --git a/flexmeasures/data/schemas/sensors.py b/flexmeasures/data/schemas/sensors.py new file mode 100644 index 000000000..80fb23ecd --- /dev/null +++ b/flexmeasures/data/schemas/sensors.py @@ -0,0 +1,33 @@ +from marshmallow import Schema, fields + +from flexmeasures.data import ma +from flexmeasures.data.models.time_series import Sensor + + +class SensorSchemaMixin(Schema): + """ + Base sensor schema. 
+ + Here we include all fields which are implemented by timely_beliefs.SensorDBMixin + All classes inheriting from timely beliefs sensor don't need to repeat these. + In a while, this schema can represent our unified Sensor class. + + When subclassing, also subclass from `ma.SQLAlchemySchema` and add your own DB model class, e.g.: + + class Meta: + model = Asset + """ + + name = ma.auto_field(required=True) + unit = ma.auto_field(required=True) + timezone = ma.auto_field() + event_resolution = fields.TimeDelta(required=True, precision="minutes") + + +class SensorSchema(SensorSchemaMixin, ma.SQLAlchemySchema): + """ + Sensor schema, with validations. + """ + + class Meta: + model = Sensor diff --git a/flexmeasures/api/common/schemas/tests/__init__.py b/flexmeasures/data/schemas/tests/__init__.py similarity index 100% rename from flexmeasures/api/common/schemas/tests/__init__.py rename to flexmeasures/data/schemas/tests/__init__.py diff --git a/flexmeasures/api/common/schemas/tests/test_times.py b/flexmeasures/data/schemas/tests/test_times.py similarity index 97% rename from flexmeasures/api/common/schemas/tests/test_times.py rename to flexmeasures/data/schemas/tests/test_times.py index a6466b610..16e5dcd68 100644 --- a/flexmeasures/api/common/schemas/tests/test_times.py +++ b/flexmeasures/data/schemas/tests/test_times.py @@ -4,7 +4,7 @@ import pytz import isodate -from flexmeasures.api.common.schemas.times import DurationField, DurationValidationError +from flexmeasures.data.schemas.times import DurationField, DurationValidationError @pytest.mark.parametrize( diff --git a/flexmeasures/api/common/schemas/times.py b/flexmeasures/data/schemas/times.py similarity index 77% rename from flexmeasures/api/common/schemas/times.py rename to flexmeasures/data/schemas/times.py index 8f2108ee8..002c64e81 100644 --- a/flexmeasures/api/common/schemas/times.py +++ b/flexmeasures/data/schemas/times.py @@ -6,14 +6,14 @@ from isodate.isoerror import ISO8601Error import pandas as pd 
-from flexmeasures.api.common.utils.args_parsing import FMValidationError +from flexmeasures.data.schemas.utils import FMValidationError, MarshmallowClickMixin class DurationValidationError(FMValidationError): status = "INVALID_PERIOD" # USEF error status -class DurationField(fields.Str): +class DurationField(fields.Str, MarshmallowClickMixin): """Field that deserializes to a ISO8601 Duration and serializes back to a string.""" @@ -62,3 +62,16 @@ def ground_from( ) return (pd.Timestamp(start) + offset).to_pydatetime() - start return duration + + +class AwareDateTimeField(fields.AwareDateTime, MarshmallowClickMixin): + """Field that de-serializes to a timezone aware datetime + and serializes back to a string.""" + + def _deserialize(self, value: str, attr, obj, **kwargs) -> datetime: + """ + Work-around until this PR lands: + https://github.com/marshmallow-code/marshmallow/pull/1787 + """ + value = value.replace(" ", "+") + return fields.AwareDateTime._deserialize(self, value, attr, obj, **kwargs) diff --git a/flexmeasures/data/schemas/users.py b/flexmeasures/data/schemas/users.py new file mode 100644 index 000000000..18a135166 --- /dev/null +++ b/flexmeasures/data/schemas/users.py @@ -0,0 +1,28 @@ +from marshmallow import validates, ValidationError, validate +from pytz import all_timezones + +from flexmeasures.data import ma +from flexmeasures.data.models.user import User as UserModel +from flexmeasures.data.schemas.times import AwareDateTimeField + + +class UserSchema(ma.SQLAlchemySchema): + """ + This schema lists fields we support through this API (e.g. no password). 
+ """ + + class Meta: + model = UserModel + + @validates("timezone") + def validate_timezone(self, timezone): + if timezone not in all_timezones: + raise ValidationError(f"Timezone {timezone} doesn't exist.") + + id = ma.auto_field() + email = ma.auto_field(required=True, validate=validate.Email()) + username = ma.auto_field(required=True) + active = ma.auto_field() + timezone = ma.auto_field() + flexmeasures_roles = ma.auto_field() + last_login_at = AwareDateTimeField() diff --git a/flexmeasures/data/schemas/utils.py b/flexmeasures/data/schemas/utils.py new file mode 100644 index 000000000..42276d8e5 --- /dev/null +++ b/flexmeasures/data/schemas/utils.py @@ -0,0 +1,26 @@ +import click +import marshmallow as ma +from marshmallow import ValidationError + + +class MarshmallowClickMixin(click.ParamType): + def get_metavar(self, param): + return self.__class__.__name__ + + def convert(self, value, param, ctx, **kwargs): + try: + return self.deserialize(value, **kwargs) + except ma.exceptions.ValidationError as e: + raise click.exceptions.BadParameter(e, ctx=ctx, param=param) + + +class FMValidationError(ValidationError): + """ + Custom validation error class. + It differs from the classic validation error by having two + attributes, according to the USEF 2015 reference implementation. + Subclasses of this error might adjust the `status` attribute accordingly.
+ """ + + result = "Rejected" + status = "UNPROCESSABLE_ENTITY" diff --git a/flexmeasures/data/schemas/weather.py b/flexmeasures/data/schemas/weather.py new file mode 100644 index 000000000..5e81f63fc --- /dev/null +++ b/flexmeasures/data/schemas/weather.py @@ -0,0 +1,36 @@ +from marshmallow import validates, ValidationError, fields, validate + +from flexmeasures.data import ma +from flexmeasures.data.models.weather import WeatherSensor, WeatherSensorType +from flexmeasures.data.schemas.sensors import SensorSchemaMixin + + +class WeatherSensorSchema(SensorSchemaMixin, ma.SQLAlchemySchema): + """ + WeatherSensor schema, with validations. + """ + + class Meta: + model = WeatherSensor + + @validates("name") + def validate_name(self, name: str): + sensor = WeatherSensor.query.filter( + WeatherSensor.name == name.lower() + ).one_or_none() + if sensor: + raise ValidationError( + f"A weather sensor with the name {name} already exists." + ) + + @validates("weather_sensor_type_name") + def validate_weather_sensor_type(self, weather_sensor_type_name: str): + weather_sensor_type = WeatherSensorType.query.get(weather_sensor_type_name) + if not weather_sensor_type: + raise ValidationError( + f"Weather sensor type {weather_sensor_type_name} doesn't exist." + ) + + weather_sensor_type_name = ma.auto_field(required=True) + latitude = fields.Float(required=True, validate=validate.Range(min=-90, max=90)) + longitude = fields.Float(required=True, validate=validate.Range(min=-180, max=180)) diff --git a/flexmeasures/data/scripts/cli_tasks/Readme.md b/flexmeasures/data/scripts/cli_tasks/Readme.md index ee4a56beb..afa76806b 100644 --- a/flexmeasures/data/scripts/cli_tasks/Readme.md +++ b/flexmeasures/data/scripts/cli_tasks/Readme.md @@ -5,20 +5,9 @@ These scripts are made available as cli tasks. 
To view the available commands, run: - flask --help + flexmeasures --help -For help on individual commands, for example on the saving and loading functionality, type `flask db-save --help` or `flask db-load --help`. -These help messages are generated from the code (see the file db_pop.py in the cli_tasks directory). -Structural data refers to database tables with relatively little entries (they describe things like assets, markets and weather sensors). -Time series data refers to database tables with many entries (like power, price and temperature values). -The default location for storing database backups is within the top-level `migrations` directory. -The contents of this folder are not part of the code repository, and database backups will be lost when deleted. - -The load functionality is also made available as an API endpoint called _restoreData_, and described as such in the user documentation for the play server. -The relevant API endpoint is set up in the `flexmeasures/api/play` directory. -The file `routes.py` contains its registration and documentation, while the file `implementations.py` contains the functional logic that connects the API endpoint to the same scripts that are accessible through the command line interface. - -The save functionality is currently not available as an API endpoint. -This script cannot be executed within the lifetime of an https request, and would require processing within a separate thread, similar to how forecasting jobs are handled by FlexMeasures. +For help on individual commands, append `--help` to the command, for example `flexmeasures add user --help`. +Structural data refers to database tables which do not contain time series data. To create new commands, be sure to register any new file (containing the corresponding script) with the flask cli in `flexmeasures/data/__init__.py`.
diff --git a/flexmeasures/data/scripts/cli_tasks/data_add.py b/flexmeasures/data/scripts/cli_tasks/data_add.py index 3ae882841..aa7f47760 100644 --- a/flexmeasures/data/scripts/cli_tasks/data_add.py +++ b/flexmeasures/data/scripts/cli_tasks/data_add.py @@ -1,7 +1,7 @@ """CLI Tasks for (de)populating the database - most useful in development""" from datetime import timedelta -from typing import List +from typing import List, Optional import pandas as pd import pytz @@ -10,13 +10,24 @@ from flask_security.utils import hash_password import click import getpass +from sqlalchemy.exc import IntegrityError +import timely_beliefs as tb +from flexmeasures.data import db from flexmeasures.data.services.forecasting import create_forecasting_jobs from flexmeasures.data.services.users import create_user -from flexmeasures.data.models.time_series import Sensor, SensorSchema -from flexmeasures.data.models.assets import Asset, AssetSchema +from flexmeasures.data.models.time_series import Sensor, TimedBelief +from flexmeasures.data.schemas.sensors import SensorSchema +from flexmeasures.data.models.assets import Asset +from flexmeasures.data.schemas.assets import AssetSchema from flexmeasures.data.models.markets import Market -from flexmeasures.data.models.weather import WeatherSensor, WeatherSensorSchema +from flexmeasures.data.models.weather import WeatherSensor +from flexmeasures.data.schemas.weather import WeatherSensorSchema +from flexmeasures.data.models.data_sources import ( + get_or_create_source, + get_source_or_none, +) +from flexmeasures.utils.time_utils import server_now @click.group("add") @@ -24,6 +35,11 @@ def fm_add_data(): """FlexMeasures: Add data.""" +@click.group("dev-add") +def fm_dev_add_data(): + """Developer CLI commands not yet meant for users: Add data.""" + + @fm_add_data.command("user") @with_appcontext @click.option("--username", required=True) @@ -63,7 +79,7 @@ def new_user(username: str, email: str, roles: List[str], timezone: str): 
print(f"Successfully created user {created_user}") -@fm_add_data.command("sensor") +@fm_dev_add_data.command("sensor") @with_appcontext @click.option("--name", required=True) @click.option("--unit", required=True, help="e.g. °C, m/s, kW/m²") @@ -95,8 +111,18 @@ def add_sensor(**args): @with_appcontext @click.option("--name", required=True) @click.option("--asset-type-name", required=True) -@click.option("--unit", required=True, help="e.g. MW, kW/h", default="MW") -@click.option("--capacity-in-MW", required=True, type=float) +@click.option( + "--unit", + help="unit of rate, just MW (default) for now", + type=click.Choice(["MW"]), + default="MW", +) # TODO: enable others +@click.option( + "--capacity-in-MW", + required=True, + type=float, + help="Maximum rate of this asset in MW", +) @click.option( "--event-resolution", required=True, @@ -200,6 +226,154 @@ def add_initial_structure(): populate_structure(app.db) +@fm_dev_add_data.command("beliefs") +@with_appcontext +@click.argument("file", type=click.Path(exists=True)) +@click.option( + "--sensor-id", + required=True, + type=click.IntRange(min=1), + help="Sensor to which the beliefs pertain.", +) +@click.option( + "--source", + required=True, + type=str, + help="Source of the beliefs (an existing source id or name, or a new name).", +) +@click.option( + "--horizon", + required=False, + type=int, + help="Belief horizon in minutes (use positive horizon for ex-ante beliefs or negative horizon for ex-post beliefs).", +) +@click.option( + "--cp", + required=False, + type=click.FloatRange(0, 1), + help="Cumulative probability in the range [0, 1].", +) +@click.option( + "--allow-overwrite/--do-not-allow-overwrite", + default=False, + help="Allow overwriting possibly already existing data.\n" + "Not allowing overwriting can be much more efficient", +) +@click.option( + "--skiprows", + required=False, + default=1, + type=int, + help="Number of rows to skip from the top. 
Set to >1 to skip additional headers.", +) +@click.option( + "--nrows", + required=False, + type=int, + help="Number of rows to read (from the top, after possibly skipping rows). Leave out to read all rows.", +) +@click.option( + "--datecol", + required=False, + default=0, + type=int, + help="Column number with datetimes (0 is 1st column, the default)", +) +@click.option( + "--valuecol", + required=False, + default=1, + type=int, + help="Column number with values (1 is 2nd column, the default)", +) +@click.option( + "--sheet_number", + required=False, + type=int, + help="[For xls or xlsx files] Sheet number with the data (0 is 1st sheet)", +) +def add_beliefs( + file: str, + sensor_id: int, + source: str, + horizon: Optional[int] = None, + cp: Optional[float] = None, + allow_overwrite: bool = False, + skiprows: int = 1, + nrows: Optional[int] = None, + datecol: int = 0, + valuecol: int = 1, + sheet_number: Optional[int] = None, +): + """Add sensor data from a csv file (also accepts xls or xlsx). + + To use default settings, structure your csv file as follows: + + - One header line (will be ignored!) + - UTC datetimes in 1st column + - values in 2nd column + + For example: + + Date,Inflow (cubic meter) + 2020-12-03 14:00,212 + 2020-12-03 14:10,215.6 + 2020-12-03 14:20,203.8 + + In case no --horizon is specified, the moment of executing this CLI command is taken + as the time at which the beliefs were recorded. 
+ """ + sensor = Sensor.query.filter(Sensor.id == sensor_id).one_or_none() + if sensor is None: + print(f"Failed to create beliefs: no sensor found with id {sensor_id}.") + return + if source.isdigit(): + _source = get_source_or_none(int(source), source_type="CLI script") + if not _source: + print(f"Failed to find source {source}.") + return + else: + _source = get_or_create_source(source, source_type="CLI script") + + # Set up optional parameters for read_csv + kwargs = dict() + if file.split(".")[-1].lower() == "csv": + kwargs["infer_datetime_format"] = True + if sheet_number is not None: + kwargs["sheet_name"] = sheet_number + if horizon is not None: + kwargs["belief_horizon"] = timedelta(minutes=horizon) + else: + kwargs["belief_time"] = server_now().astimezone(pytz.timezone(sensor.timezone)) + + bdf = tb.read_csv( + file, + sensor, + source=_source, + cumulative_probability=cp, + header=None, + skiprows=skiprows, + nrows=nrows, + usecols=[datecol, valuecol], + parse_dates=True, + **kwargs, + ) + try: + TimedBelief.add( + bdf, + expunge_session=True, + allow_overwrite=allow_overwrite, + bulk_save_objects=True, + commit_transaction=True, + ) + print(f"Successfully created beliefs\n{bdf}") + except IntegrityError as e: + db.session.rollback() + print(f"Failed to create beliefs due to the following error: {e.orig}") + if not allow_overwrite: + print("As a possible workaround, use the --allow-overwrite flag.") + + @fm_add_data.command("forecasts") @with_appcontext @click.option( @@ -229,14 +403,14 @@ def add_initial_structure(): multiple=True, type=click.Choice(["1", "6", "24", "48"]), default=["1", "6", "24", "48"], - help="Forecasting horizon in hours. This argument can be given multiple times.", + help="Forecasting horizon in hours. This argument can be given multiple times. Defaults to all possible horizons.", ) @click.option( "--as-job", is_flag=True, help="Whether to queue a forecasting job instead of computing directly." 
" Useful to run locally and create forecasts on a remote server. In that case, just point the redis db in your" - " config settings to that of the remote server. To process the job, run a worker to process the forecasting queue.", + " config settings to that of the remote server. To process the job, run a worker to process the forecasting queue. Defaults to False.", ) def create_forecasts( asset_type: str = None, @@ -292,11 +466,12 @@ def create_forecasts( @fm_add_data.command("external-weather-forecasts") +@with_appcontext @click.option( "--region", type=str, default="", - help="Name of the region (will create sub-folder, should later tag the forecast in the DB, probably).", + help="Name of the region (will create sub-folder if you store json files, should later probably tag the forecast in the DB).", ) @click.option( "--location", @@ -311,7 +486,7 @@ def create_forecasts( "--num_cells", type=int, default=1, - help="Number of cells on the grid. Only used if a region of interest has been mapped in the location parameter.", + help="Number of cells on the grid. Only used if a region of interest has been mapped in the location parameter. Defaults to 1.", ) @click.option( "--method", @@ -322,13 +497,13 @@ def create_forecasts( @click.option( "--store-in-db/--store-as-json-files", default=False, - help="Store forecasts in the database, or simply save as json files.", + help="Store forecasts in the database, or simply save as json files. (defaults to json files)", ) def collect_weather_data(region, location, num_cells, method, store_in_db): """ - Collect weather forecasts from the DarkSky API + Collect weather forecasts from the OpenWeatherMap API - This function can get weather data for one location or for several location within + This function can get weather data for one location or for several locations within a geometrical grid (See the --location parameter). 
""" from flexmeasures.data.scripts.grid_weather import get_weather_forecasts @@ -337,6 +512,7 @@ def collect_weather_data(region, location, num_cells, method, store_in_db): app.cli.add_command(fm_add_data) +app.cli.add_command(fm_dev_add_data) def check_timezone(timezone): diff --git a/flexmeasures/data/scripts/grid_weather.py b/flexmeasures/data/scripts/grid_weather.py index 09e927e88..99b21924c 100755 --- a/flexmeasures/data/scripts/grid_weather.py +++ b/flexmeasures/data/scripts/grid_weather.py @@ -1,13 +1,14 @@ #!/usr/bin/env python import os -from typing import Tuple, List +from typing import Tuple, List, Dict import json from datetime import datetime import click from flask import Flask, current_app -from forecastiopy import ForecastIO +import requests +import pytz from flexmeasures.utils.time_utils import as_server_time, get_timezone from flexmeasures.utils.geo_utils import compute_irradiance @@ -18,7 +19,7 @@ from flexmeasures.data.models.data_sources import DataSource FILE_PATH_LOCATION = "/../raw_data/weather-forecasts" -DATA_SOURCE_NAME = "DarkSky" +DATA_SOURCE_NAME = "OpenWeatherMap" class LatLngGrid(object): @@ -217,12 +218,12 @@ def locations_hex(self) -> List[Tuple[float, float]]: sw = ( lat + self.cell_size_lat / 2, lng - self.cell_size_lat / 3 ** (1 / 2) / 2, - ) # South west coord. + ) # South west coordinates locations.append(sw) se = ( lat + self.cell_size_lat / 2, lng + self.cell_size_lng / 3 ** (1 / 2) / 2, - ) # South east coord. 
+            )  # South east coordinates
             locations.append(se)
         return locations
@@ -317,22 +318,30 @@ def get_data_source() -> DataSource:
     return data_source
 
 
-def call_darksky(api_key: str, location: Tuple[float, float]) -> dict:
-    """Make a single call to the Dark Sky API and return the result parsed as dict"""
-    return ForecastIO.ForecastIO(
-        api_key,
-        units=ForecastIO.ForecastIO.UNITS_SI,
-        lang=ForecastIO.ForecastIO.LANG_ENGLISH,
-        latitude=location[0],
-        longitude=location[1],
-        extend="hourly",
-    ).forecast
+def call_openweatherapi(
+    api_key: str, location: Tuple[float, float]
+) -> Tuple[int, List[Dict]]:
+    """
+    Make a single "one-call" to the OpenWeatherMap API and return the API timestamp as well as the 48 hourly forecasts.
+    See https://openweathermap.org/api/one-call-api for docs.
+    Note that the first forecast is about the current hour.
+    """
+    query_str = f"lat={location[0]}&lon={location[1]}&units=metric&exclude=minutely,daily,alerts&appid={api_key}"
+    res = requests.get(f"http://api.openweathermap.org/data/2.5/onecall?{query_str}")
+    assert (
+        res.status_code == 200
+    ), f"OpenWeatherMap returned status code {res.status_code}: {res.text}"
+    data = res.json()
+    return data["current"]["dt"], data["hourly"]
 
 
 def save_forecasts_in_db(
-    api_key: str, locations: List[Tuple[float, float]], data_source: DataSource
+    api_key: str,
+    locations: List[Tuple[float, float]],
+    data_source: DataSource,
+    max_degree_difference_for_nearest_weather_sensor: int = 2,
 ):
-    """Process the response from DarkSky into Weather timed values.
+    """Process the response from the OpenWeatherMap API into Weather timed values.
     Collects all forecasts for all locations and all sensors at all locations, then bulk-saves them.
""" click.echo("[FLEXMEASURES] Getting weather forecasts:") @@ -344,22 +353,24 @@ def save_forecasts_in_db( for location in locations: click.echo("[FLEXMEASURES] %s, %s" % location) - forecasts = call_darksky(api_key, location) + api_timestamp, forecasts = call_openweatherapi(api_key, location) time_of_api_call = as_server_time( - datetime.fromtimestamp(forecasts["currently"]["time"], get_timezone()) + datetime.fromtimestamp(api_timestamp, tz=get_timezone()) ).replace(second=0, microsecond=0) click.echo( - "[FLEXMEASURES] Called Dark Sky API successfully at %s." % time_of_api_call + "[FLEXMEASURES] Called OpenWeatherMap API successfully at %s." + % time_of_api_call ) - # map sensor name in our db to sensor name/label in dark sky response + # map sensor name in our db to sensor name/label in OWM response sensor_name_mapping = dict( - temperature="temperature", wind_speed="windSpeed", radiation="cloudCover" + temperature="temp", wind_speed="wind_speed", radiation="clouds" ) - for fc in forecasts["hourly"]["data"]: + # loop through forecasts, including the one of current hour (horizon 0) + for fc in forecasts: fc_datetime = as_server_time( - datetime.fromtimestamp(fc["time"], get_timezone()) + datetime.fromtimestamp(fc["dt"], get_timezone()) ).replace(second=0, microsecond=0) fc_horizon = fc_datetime - time_of_api_call click.echo( @@ -375,6 +386,16 @@ def save_forecasts_in_db( flexmeasures_sensor_type, lat=location[0], lng=location[1] ) if weather_sensor is not None: + # Complain if the nearest weather sensor is further away than 2 degrees + if abs( + location[0] - weather_sensor.latitude + ) > max_degree_difference_for_nearest_weather_sensor or abs( + location[1] - weather_sensor.longitude + > max_degree_difference_for_nearest_weather_sensor + ): + raise Exception( + f"No sufficiently close weather sensor found (within 2 degrees distance) for type {flexmeasures_sensor_type}! 
We're looking for: {location}, closest available: ({weather_sensor.latitude}, {weather_sensor.longitude})" + ) weather_sensors[flexmeasures_sensor_type] = weather_sensor else: raise Exception( @@ -383,13 +404,14 @@ def save_forecasts_in_db( ) fc_value = fc[needed_response_label] - # the radiation is not available in dark sky -> we compute it ourselves + # the radiation is not available in OWM -> we compute it ourselves if flexmeasures_sensor_type == "radiation": fc_value = compute_irradiance( location[0], location[1], fc_datetime, - fc[needed_response_label], + # OWM sends cloud coverage in percent, we need a ratio + fc[needed_response_label] / 100.0, ) db_forecasts.append( @@ -424,15 +446,19 @@ def save_forecasts_as_json( click.echo("[FLEXMEASURES] Getting weather forecasts:") click.echo("[FLEXMEASURES] Latitude, Longitude") click.echo("[FLEXMEASURES] ----------------------") - # UTC timestamp to remember when data was fetched. - now_str = datetime.utcnow().strftime("%Y-%m-%dT%H-%M-%S") - os.mkdir("%s/%s" % (data_path, now_str)) for location in locations: click.echo("[FLEXMEASURES] %s, %s" % location) - forecasts = call_darksky(api_key, location) - forecasts_file = "%s/%s/forecast_lat_%s_lng_%s.json" % ( - data_path, - now_str, + api_timestamp, forecasts = call_openweatherapi(api_key, location) + time_of_api_call = as_server_time( + datetime.fromtimestamp(api_timestamp, tz=pytz.utc) + ).replace(second=0, microsecond=0) + now_str = time_of_api_call.strftime("%Y-%m-%dT%H-%M-%S") + path_to_files = os.path.join(data_path, now_str) + if not os.path.exists(path_to_files): + click.echo(f"Making directory: {path_to_files} ...") + os.mkdir(path_to_files) + forecasts_file = "%s/forecast_lat_%s_lng_%s.json" % ( + path_to_files, str(location[0]), str(location[1]), ) @@ -451,11 +477,11 @@ def get_weather_forecasts( ): """ Get current weather forecasts for a latitude/longitude grid and store them in individual json files. 
-    Note that 1000 free calls per day can be made to the Dark Sky API,
-    so we can make a call every 15 minutes for up to 10 assets or every hour for up to 40 assets.
+    Note that 1000 free calls per day can be made to the OpenWeatherMap API,
+    so we can make a call every 15 minutes for up to 10 assets or every hour for up to 40 assets (or get a paid account).
     """
-    if app.config.get("DARK_SKY_API_KEY") is None:
-        raise Exception("No DarkSky API key available.")
+    if app.config.get("OPENWEATHERMAP_API_KEY") is None:
+        raise Exception("Setting OPENWEATHERMAP_API_KEY is not set.")
 
     if (
         location.count(",") == 0
@@ -504,7 +530,7 @@ def get_weather_forecasts(
     else:
         raise Exception("location parameter '%s' has too many locations." % location)
 
-    api_key = app.config.get("DARK_SKY_API_KEY")
+    api_key = app.config.get("OPENWEATHERMAP_API_KEY")
 
     # Save the results
     if store_in_db:
diff --git a/flexmeasures/data/services/forecasting.py b/flexmeasures/data/services/forecasting.py
index 12035d8b2..7d66afb7d 100644
--- a/flexmeasures/data/services/forecasting.py
+++ b/flexmeasures/data/services/forecasting.py
@@ -78,7 +78,7 @@ def create_forecasting_jobs(
     current default model configuration will be used.
 
     It's possible to customize model parameters, but this feature is (currently) meant to only
-    be used by tests, so that model behavior can be adapted to test conditions. If used outside
+    be used by tests, so that model behaviour can be adapted to test conditions. If used outside
     of testing, an exception is raised.
 
     if enqueue is True (default), the jobs are put on the redis queue.
diff --git a/flexmeasures/data/services/time_series.py b/flexmeasures/data/services/time_series.py index 44fa20757..1c0af44eb 100644 --- a/flexmeasures/data/services/time_series.py +++ b/flexmeasures/data/services/time_series.py @@ -257,6 +257,9 @@ def convert_query_window_for_demo( end = (query_window[-1] + timedelta(days=1)).replace(year=demo_year) else: end = query_window[-1] + + if start >= end: + start, end = (end, start) return start, end diff --git a/flexmeasures/data/tests/conftest.py b/flexmeasures/data/tests/conftest.py index b2ce954c2..969d34bc8 100644 --- a/flexmeasures/data/tests/conftest.py +++ b/flexmeasures/data/tests/conftest.py @@ -2,14 +2,15 @@ from datetime import datetime, timedelta from random import random +from isodate import parse_duration import pandas as pd import numpy as np from flask_sqlalchemy import SQLAlchemy from statsmodels.api import OLS +from flexmeasures.data.models.assets import Asset, Power from flexmeasures.data.models.data_sources import DataSource from flexmeasures.data.models.weather import WeatherSensorType, WeatherSensor, Weather -from flexmeasures.data.models.assets import AssetType from flexmeasures.data.models.forecasting import model_map from flexmeasures.data.models.forecasting.model_spec_factory import ( create_initial_model_specs, @@ -17,8 +18,14 @@ from flexmeasures.utils.time_utils import as_server_time -@pytest.fixture(scope="function", autouse=True) -def setup_test_data(db, app, remove_seasonality_for_power_forecasts): +@pytest.fixture(scope="module") +def setup_test_data( + db, + app, + add_market_prices, + setup_assets, + remove_seasonality_for_power_forecasts, +): """ Adding a few forecasting jobs (based on data made in flexmeasures.conftest). 
""" @@ -29,14 +36,72 @@ def setup_test_data(db, app, remove_seasonality_for_power_forecasts): print("Done setting up data for data tests") -@pytest.fixture(scope="function", autouse=True) -def remove_seasonality_for_power_forecasts(db): +@pytest.fixture(scope="function") +def setup_fresh_test_data( + fresh_db, + setup_markets_fresh_db, + setup_roles_users_fresh_db, + app, + fresh_remove_seasonality_for_power_forecasts, +): + db = fresh_db + setup_roles_users = setup_roles_users_fresh_db + setup_markets = setup_markets_fresh_db + + data_source = DataSource(name="Seita", type="demo script") + db.session.add(data_source) + db.session.flush() + + for asset_name in ["wind-asset-2", "solar-asset-1"]: + asset = Asset( + name=asset_name, + asset_type_name="wind" if "wind" in asset_name else "solar", + event_resolution=timedelta(minutes=15), + capacity_in_mw=1, + latitude=10, + longitude=100, + min_soc_in_mwh=0, + max_soc_in_mwh=0, + soc_in_mwh=0, + unit="MW", + market_id=setup_markets["epex_da"].id, + ) + asset.owner = setup_roles_users["Test Prosumer"] + db.session.add(asset) + + time_slots = pd.date_range( + datetime(2015, 1, 1), datetime(2015, 1, 1, 23, 45), freq="15T" + ) + values = [random() * (1 + np.sin(x / 15)) for x in range(len(time_slots))] + for dt, val in zip(time_slots, values): + p = Power( + datetime=as_server_time(dt), + horizon=parse_duration("PT0M"), + value=val, + data_source_id=data_source.id, + ) + p.asset = asset + db.session.add(p) + add_test_weather_sensor_and_forecasts(fresh_db) + + +@pytest.fixture(scope="module", autouse=True) +def remove_seasonality_for_power_forecasts(db, setup_asset_types): + """Make sure the AssetType specs make us query only data we actually have in the test db""" + for asset_type in setup_asset_types.keys(): + setup_asset_types[asset_type].daily_seasonality = False + setup_asset_types[asset_type].weekly_seasonality = False + setup_asset_types[asset_type].yearly_seasonality = False + + +@pytest.fixture(scope="function") 
+def fresh_remove_seasonality_for_power_forecasts(db, setup_asset_types_fresh_db): """Make sure the AssetType specs make us query only data we actually have in the test db""" - asset_types = AssetType.query.all() - for a in asset_types: - a.daily_seasonality = False - a.weekly_seasonality = False - a.yearly_seasonality = False + setup_asset_types = setup_asset_types_fresh_db + for asset_type in setup_asset_types.keys(): + setup_asset_types[asset_type].daily_seasonality = False + setup_asset_types[asset_type].weekly_seasonality = False + setup_asset_types[asset_type].yearly_seasonality = False def add_test_weather_sensor_and_forecasts(db: SQLAlchemy): @@ -74,7 +139,7 @@ def add_test_weather_sensor_and_forecasts(db: SQLAlchemy): ) -@pytest.fixture(scope="function", autouse=True) +@pytest.fixture(scope="module", autouse=True) def add_failing_test_model(db): """Add a test model specs to the lookup which should fail due to missing data. It falls back to linear OLS (which falls back to naive).""" diff --git a/flexmeasures/data/tests/test_forecasting_jobs.py b/flexmeasures/data/tests/test_forecasting_jobs.py index e92307c04..9c6f9c6a4 100644 --- a/flexmeasures/data/tests/test_forecasting_jobs.py +++ b/flexmeasures/data/tests/test_forecasting_jobs.py @@ -3,9 +3,7 @@ from datetime import datetime, timedelta import os -import pytest import numpy as np -from sqlalchemy.orm import Query from rq.job import Job from flexmeasures.data.models.data_sources import DataSource @@ -36,15 +34,19 @@ def get_data_source(model_identifier: str = "linear-OLS model v2"): ).one_or_none() -def check_aggregate(overall_expected: int, horizon: timedelta): +def check_aggregate(overall_expected: int, horizon: timedelta, asset_id: int): """Check that the expected number of forecasts were made for the given horizon, and check that each forecast is a number.""" - all_forecasts = Power.query.filter(Power.horizon == horizon).all() + all_forecasts = ( + Power.query.filter(Power.asset_id == asset_id) + 
.filter(Power.horizon == horizon) + .all() + ) assert len(all_forecasts) == overall_expected assert all([not np.isnan(f.value) for f in all_forecasts]) -def test_forecasting_an_hour_of_wind(db, app): +def test_forecasting_an_hour_of_wind(db, app, setup_test_data): """Test one clean run of one job: - data source was made, - forecasts have been made @@ -80,69 +82,10 @@ def test_forecasting_an_hour_of_wind(db, app): .all() ) assert len(forecasts) == 4 - check_aggregate(4, horizon) - - -def test_forecasting_three_hours_of_wind(db, app): - wind_device2: Asset = Asset.query.filter_by(name="wind-asset-2").one_or_none() - - # makes 12 forecasts - horizon = timedelta(hours=1) - job = create_forecasting_jobs( - timed_value_type="Power", - start_of_roll=as_server_time(datetime(2015, 1, 1, 10)), - end_of_roll=as_server_time(datetime(2015, 1, 1, 13)), - horizons=[horizon], - asset_id=wind_device2.id, - custom_model_params=custom_model_params(), - ) - print("Job: %s" % job[0].id) - - work_on_rq(app.queues["forecasting"], exc_handler=handle_forecasting_exception) - - forecasts = ( - Power.query.filter(Power.asset_id == wind_device2.id) - .filter(Power.horizon == horizon) - .filter( - (Power.datetime >= as_server_time(datetime(2015, 1, 1, 11))) - & (Power.datetime < as_server_time(datetime(2015, 1, 1, 14))) - ) - .all() - ) - assert len(forecasts) == 12 - check_aggregate(12, horizon) - - -def test_forecasting_two_hours_of_solar(db, app): - solar_device1: Asset = Asset.query.filter_by(name="solar-asset-1").one_or_none() - - # makes 8 forecasts - horizon = timedelta(hours=1) - job = create_forecasting_jobs( - timed_value_type="Power", - start_of_roll=as_server_time(datetime(2015, 1, 1, 12)), - end_of_roll=as_server_time(datetime(2015, 1, 1, 14)), - horizons=[horizon], - asset_id=solar_device1.id, - custom_model_params=custom_model_params(), - ) - print("Job: %s" % job[0].id) - - work_on_rq(app.queues["forecasting"], exc_handler=handle_forecasting_exception) - forecasts = ( - 
Power.query.filter(Power.asset_id == solar_device1.id) - .filter(Power.horizon == horizon) - .filter( - (Power.datetime >= as_server_time(datetime(2015, 1, 1, 13))) - & (Power.datetime < as_server_time(datetime(2015, 1, 1, 15))) - ) - .all() - ) - assert len(forecasts) == 8 - check_aggregate(8, horizon) + check_aggregate(4, horizon, wind_device_1.id) -def test_forecasting_two_hours_of_solar_at_edge_of_data_set(db, app): +def test_forecasting_two_hours_of_solar_at_edge_of_data_set(db, app, setup_test_data): solar_device1: Asset = Asset.query.filter_by(name="solar-asset-1").one_or_none() last_power_datetime = ( @@ -182,7 +125,7 @@ def test_forecasting_two_hours_of_solar_at_edge_of_data_set(db, app): .all() ) assert len(forecasts) == 1 - check_aggregate(4, horizon) + check_aggregate(4, horizon, solar_device1.id) def check_failures( @@ -227,7 +170,7 @@ def check_failures( assert job.meta["model_identifier"] == model_identifiers[job_idx] -def test_failed_forecasting_insufficient_data(app): +def test_failed_forecasting_insufficient_data(app, clean_redis, setup_test_data): """This one (as well as the fallback) should fail as there is no underlying data. 
(Power data is in 2015)""" solar_device1: Asset = Asset.query.filter_by(name="solar-asset-1").one_or_none() @@ -243,7 +186,7 @@ def test_failed_forecasting_insufficient_data(app): check_failures(app.queues["forecasting"], 2 * ["NotEnoughDataException"]) -def test_failed_forecasting_invalid_horizon(app): +def test_failed_forecasting_invalid_horizon(app, clean_redis, setup_test_data): """ This one (as well as the fallback) should fail as the horizon is invalid.""" solar_device1: Asset = Asset.query.filter_by(name="solar-asset-1").one_or_none() create_forecasting_jobs( @@ -258,7 +201,7 @@ def test_failed_forecasting_invalid_horizon(app): check_failures(app.queues["forecasting"], 2 * ["InvalidHorizonException"]) -def test_failed_unknown_model(app): +def test_failed_unknown_model(app, clean_redis, setup_test_data): """ This one should fail because we use a model search term which yields no model configurator.""" solar_device1: Asset = Asset.query.filter_by(name="solar-asset-1").one_or_none() horizon = timedelta(hours=1) @@ -278,98 +221,3 @@ def test_failed_unknown_model(app): work_on_rq(app.queues["forecasting"], exc_handler=handle_forecasting_exception) check_failures(app.queues["forecasting"], ["No model found for search term"]) - - -@pytest.mark.parametrize( - "model_to_start_with, model_version", [("failing-test", 1), ("linear-OLS", 2)] -) -def test_failed_model_with_too_much_training_then_succeed_with_fallback( - app, model_to_start_with, model_version -): - """ - Here we fail once - because we start with a model that needs too much training. - So we check for this failure happening as expected. - But then, we do succeed with the fallback model one level down. - (fail-test falls back to linear & linear falls back to naive). - As a result, there should be forecasts in the DB. 
- """ - solar_device1: Asset = Asset.query.filter_by(name="solar-asset-1").one_or_none() - horizon_hours = 1 - horizon = timedelta(hours=horizon_hours) - - cmp = custom_model_params() - hour_start = 5 - if model_to_start_with == "linear-OLS": - # making the linear model fail and fall back to naive - hour_start = 3 # Todo: explain this parameter; why would it fail to forecast if data is there for the full day? - - # The failed test model (this failure enqueues a new job) - create_forecasting_jobs( - timed_value_type="Power", - start_of_roll=as_server_time(datetime(2015, 1, 1, hour_start)), - end_of_roll=as_server_time(datetime(2015, 1, 1, hour_start + 2)), - horizons=[horizon], - asset_id=solar_device1.id, - model_search_term=model_to_start_with, - custom_model_params=cmp, - ) - work_on_rq(app.queues["forecasting"], exc_handler=handle_forecasting_exception) - - # Check if the correct model failed in the expected way - check_failures( - app.queues["forecasting"], - ["NotEnoughDataException"], - ["%s model v%d" % (model_to_start_with, model_version)], - ) - - # this query is useful to check data: - def make_query(the_horizon_hours: int) -> Query: - the_horizon = timedelta(hours=the_horizon_hours) - return ( - Power.query.filter(Power.asset_id == solar_device1.id) - .filter(Power.horizon == the_horizon) - .filter( - ( - Power.datetime - >= as_server_time( - datetime(2015, 1, 1, hour_start + the_horizon_hours) - ) - ) - & ( - Power.datetime - < as_server_time( - datetime(2015, 1, 1, hour_start + the_horizon_hours + 2) - ) - ) - ) - ) - - # The successful (linear or naive) OLS leads to these. - forecasts = make_query(the_horizon_hours=horizon_hours).all() - - assert len(forecasts) == 8 - check_aggregate(8, horizon) - - if model_to_start_with == "linear-OLS": - existing_data = make_query(the_horizon_hours=0).all() - - for ed, fd in zip(existing_data, forecasts): - assert ed.value == fd.value - - # Now to check which models actually got to work. 
- # We check which data sources do and do not exist by now: - assert ( - get_data_source("failing-test model v1") is None - ) # the test failure model failed -> no data source - if model_to_start_with == "linear-OLS": - assert ( - get_data_source() is None - ) # the default (linear regression) (was made to) fail, as well - assert ( - get_data_source("naive model v1") is not None - ) # the naive one had to be used - else: - assert get_data_source() is not None # the default (linear regression) - assert ( - get_data_source("naive model v1") is None - ) # the naive one did not have to be used diff --git a/flexmeasures/data/tests/test_forecasting_jobs_fresh_db.py b/flexmeasures/data/tests/test_forecasting_jobs_fresh_db.py new file mode 100644 index 000000000..6ba275264 --- /dev/null +++ b/flexmeasures/data/tests/test_forecasting_jobs_fresh_db.py @@ -0,0 +1,175 @@ +from datetime import timedelta, datetime + +import pytest +from sqlalchemy.orm import Query + +from flexmeasures.data.models.assets import Asset, Power +from flexmeasures.data.services.forecasting import ( + create_forecasting_jobs, + handle_forecasting_exception, +) +from flexmeasures.data.tests.test_forecasting_jobs import ( + custom_model_params, + check_aggregate, + check_failures, + get_data_source, +) +from flexmeasures.data.tests.utils import work_on_rq +from flexmeasures.utils.time_utils import as_server_time + + +def test_forecasting_three_hours_of_wind(app, setup_fresh_test_data, clean_redis): + wind_device2: Asset = Asset.query.filter_by(name="wind-asset-2").one_or_none() + + # makes 12 forecasts + horizon = timedelta(hours=1) + job = create_forecasting_jobs( + timed_value_type="Power", + start_of_roll=as_server_time(datetime(2015, 1, 1, 10)), + end_of_roll=as_server_time(datetime(2015, 1, 1, 13)), + horizons=[horizon], + asset_id=wind_device2.id, + custom_model_params=custom_model_params(), + ) + print("Job: %s" % job[0].id) + + work_on_rq(app.queues["forecasting"], 
exc_handler=handle_forecasting_exception) + + forecasts = ( + Power.query.filter(Power.asset_id == wind_device2.id) + .filter(Power.horizon == horizon) + .filter( + (Power.datetime >= as_server_time(datetime(2015, 1, 1, 11))) + & (Power.datetime < as_server_time(datetime(2015, 1, 1, 14))) + ) + .all() + ) + assert len(forecasts) == 12 + check_aggregate(12, horizon, wind_device2.id) + + +def test_forecasting_two_hours_of_solar(app, setup_fresh_test_data, clean_redis): + solar_device1: Asset = Asset.query.filter_by(name="solar-asset-1").one_or_none() + wind_device2: Asset = Asset.query.filter_by(name="wind-asset-2").one_or_none() + print(solar_device1) + print(wind_device2) + + # makes 8 forecasts + horizon = timedelta(hours=1) + job = create_forecasting_jobs( + timed_value_type="Power", + start_of_roll=as_server_time(datetime(2015, 1, 1, 12)), + end_of_roll=as_server_time(datetime(2015, 1, 1, 14)), + horizons=[horizon], + asset_id=solar_device1.id, + custom_model_params=custom_model_params(), + ) + print("Job: %s" % job[0].id) + + work_on_rq(app.queues["forecasting"], exc_handler=handle_forecasting_exception) + forecasts = ( + Power.query.filter(Power.asset_id == solar_device1.id) + .filter(Power.horizon == horizon) + .filter( + (Power.datetime >= as_server_time(datetime(2015, 1, 1, 13))) + & (Power.datetime < as_server_time(datetime(2015, 1, 1, 15))) + ) + .all() + ) + assert len(forecasts) == 8 + check_aggregate(8, horizon, solar_device1.id) + + +@pytest.mark.parametrize( + "model_to_start_with, model_version", [("failing-test", 1), ("linear-OLS", 2)] +) +def test_failed_model_with_too_much_training_then_succeed_with_fallback( + setup_fresh_test_data, app, clean_redis, model_to_start_with, model_version +): + """ + Here we fail once - because we start with a model that needs too much training. + So we check for this failure happening as expected. + But then, we do succeed with the fallback model one level down. 
+ (fail-test falls back to linear & linear falls back to naive). + As a result, there should be forecasts in the DB. + """ + solar_device1: Asset = Asset.query.filter_by(name="solar-asset-1").one_or_none() + horizon_hours = 1 + horizon = timedelta(hours=horizon_hours) + + cmp = custom_model_params() + hour_start = 5 + if model_to_start_with == "linear-OLS": + # making the linear model fail and fall back to naive + hour_start = 3 # Todo: explain this parameter; why would it fail to forecast if data is there for the full day? + + # The failed test model (this failure enqueues a new job) + create_forecasting_jobs( + timed_value_type="Power", + start_of_roll=as_server_time(datetime(2015, 1, 1, hour_start)), + end_of_roll=as_server_time(datetime(2015, 1, 1, hour_start + 2)), + horizons=[horizon], + asset_id=solar_device1.id, + model_search_term=model_to_start_with, + custom_model_params=cmp, + ) + work_on_rq(app.queues["forecasting"], exc_handler=handle_forecasting_exception) + + # Check if the correct model failed in the expected way + check_failures( + app.queues["forecasting"], + ["NotEnoughDataException"], + ["%s model v%d" % (model_to_start_with, model_version)], + ) + + # this query is useful to check data: + def make_query(the_horizon_hours: int) -> Query: + the_horizon = timedelta(hours=the_horizon_hours) + return ( + Power.query.filter(Power.asset_id == solar_device1.id) + .filter(Power.horizon == the_horizon) + .filter( + ( + Power.datetime + >= as_server_time( + datetime(2015, 1, 1, hour_start + the_horizon_hours) + ) + ) + & ( + Power.datetime + < as_server_time( + datetime(2015, 1, 1, hour_start + the_horizon_hours + 2) + ) + ) + ) + ) + + # The successful (linear or naive) OLS leads to these. 
+ forecasts = make_query(the_horizon_hours=horizon_hours).all() + + assert len(forecasts) == 8 + check_aggregate(8, horizon, solar_device1.id) + + if model_to_start_with == "linear-OLS": + existing_data = make_query(the_horizon_hours=0).all() + + for ed, fd in zip(existing_data, forecasts): + assert ed.value == fd.value + + # Now to check which models actually got to work. + # We check which data sources do and do not exist by now: + assert ( + get_data_source("failing-test model v1") is None + ) # the test failure model failed -> no data source + if model_to_start_with == "linear-OLS": + assert ( + get_data_source() is None + ) # the default (linear regression) (was made to) fail, as well + assert ( + get_data_source("naive model v1") is not None + ) # the naive one had to be used + else: + assert get_data_source() is not None # the default (linear regression) + assert ( + get_data_source("naive model v1") is None + ) # the naive one did not have to be used diff --git a/flexmeasures/data/tests/test_queries.py b/flexmeasures/data/tests/test_queries.py index b510809ad..145dd8820 100644 --- a/flexmeasures/data/tests/test_queries.py +++ b/flexmeasures/data/tests/test_queries.py @@ -39,7 +39,7 @@ # ), # test empty BeliefsDataFrame # todo: uncomment when this if fixed: https://github.com/pandas-dev/pandas/issues/30517 ], ) -def test_collect_power(db, app, query_start, query_end, num_values): +def test_collect_power(db, app, query_start, query_end, num_values, setup_test_data): wind_device_1 = Asset.query.filter_by(name="wind-asset-1").one_or_none() data = Power.query.filter(Power.asset_id == wind_device_1.id).all() print(data) @@ -88,7 +88,7 @@ def test_collect_power(db, app, query_start, query_end, num_values): ], ) def test_collect_power_resampled( - db, app, query_start, query_end, resolution, num_values + db, app, query_start, query_end, resolution, num_values, setup_test_data ): wind_device_1 = Asset.query.filter_by(name="wind-asset-1").one_or_none() bdf: 
tb.BeliefsDataFrame = Power.collect( @@ -204,7 +204,7 @@ def test_multiplication_with_both_empty_dataframe(): pd.testing.assert_frame_equal(df, df_compare) -def test_simplify_index(): +def test_simplify_index(setup_test_data): """Check whether simplify_index retains the event resolution.""" wind_device_1 = Asset.query.filter_by(name="wind-asset-1").one_or_none() bdf: tb.BeliefsDataFrame = Power.collect( @@ -238,7 +238,7 @@ def test_query_beliefs(setup_beliefs): assert len(bdf) == setup_beliefs -def test_persist_beliefs(setup_beliefs): +def test_persist_beliefs(setup_beliefs, setup_test_data): """Check whether persisting beliefs works. We load the already set up beliefs, and form new beliefs an hour later. diff --git a/flexmeasures/data/tests/test_scheduling_jobs.py b/flexmeasures/data/tests/test_scheduling_jobs.py index 324188564..ec7f0a4cf 100644 --- a/flexmeasures/data/tests/test_scheduling_jobs.py +++ b/flexmeasures/data/tests/test_scheduling_jobs.py @@ -1,9 +1,6 @@ # flake8: noqa: E402 from datetime import datetime, timedelta -import numpy as np -import pandas as pd - from flexmeasures.data.models.data_sources import DataSource from flexmeasures.data.models.assets import Asset, Power from flexmeasures.data.tests.utils import work_on_rq, exception_reporter @@ -11,7 +8,7 @@ from flexmeasures.utils.time_utils import as_server_time -def test_scheduling_a_battery(db, app): +def test_scheduling_a_battery(db, app, add_battery_assets, setup_test_data): """Test one clean run of one scheduling job: - data source was made, - schedule has been made @@ -49,68 +46,3 @@ def test_scheduling_a_battery(db, app): ) print([v.value for v in power_values]) assert len(power_values) == 96 - - -def test_scheduling_a_charging_station(db, app): - """Test one clean run of one scheduling job: - - data source was made, - - schedule has been made - - Starting with a state of charge 1 kWh, within 2 hours we should be able to reach 5 kWh. 
- """ - soc_at_start = 1 - target_soc = 5 - duration_until_target = timedelta(hours=2) - - charging_station = Asset.query.filter( - Asset.name == "Test charging station" - ).one_or_none() - start = as_server_time(datetime(2015, 1, 2)) - end = as_server_time(datetime(2015, 1, 3)) - resolution = timedelta(minutes=15) - target_soc_datetime = start + duration_until_target - soc_targets = pd.Series( - np.nan, index=pd.date_range(start, end, freq=resolution, closed="right") - ) - soc_targets.loc[target_soc_datetime] = target_soc - - assert ( - DataSource.query.filter_by(name="Seita", type="scheduling script").one_or_none() - is None - ) # Make sure the scheduler data source isn't there - - job = create_scheduling_job( - charging_station.id, - start, - end, - belief_time=start, - resolution=resolution, - soc_at_start=soc_at_start, - soc_targets=soc_targets, - ) - - print("Job: %s" % job.id) - - work_on_rq(app.queues["scheduling"], exc_handler=exception_reporter) - - scheduler_source = DataSource.query.filter_by( - name="Seita", type="scheduling script" - ).one_or_none() - assert ( - scheduler_source is not None - ) # Make sure the scheduler data source is now there - - power_values = ( - Power.query.filter(Power.asset_id == charging_station.id) - .filter(Power.data_source_id == scheduler_source.id) - .all() - ) - consumption_schedule = pd.Series( - [-v.value for v in power_values], - index=pd.DatetimeIndex([v.datetime for v in power_values]), - ) # For consumption schedules, positive values denote consumption. 
For the db, consumption is negative - assert len(consumption_schedule) == 96 - print(consumption_schedule.head(12)) - assert ( - consumption_schedule.head(8).sum() * (resolution / timedelta(hours=1)) == 4.0 - ) # The first 2 hours should consume 4 kWh to charge from 1 to 5 kWh diff --git a/flexmeasures/data/tests/test_scheduling_jobs_fresh_db.py b/flexmeasures/data/tests/test_scheduling_jobs_fresh_db.py new file mode 100644 index 000000000..722b69adf --- /dev/null +++ b/flexmeasures/data/tests/test_scheduling_jobs_fresh_db.py @@ -0,0 +1,77 @@ +from datetime import timedelta, datetime + +import numpy as np +import pandas as pd + +from flexmeasures.data.models.assets import Asset, Power +from flexmeasures.data.models.data_sources import DataSource +from flexmeasures.data.services.scheduling import create_scheduling_job +from flexmeasures.data.tests.utils import work_on_rq, exception_reporter +from flexmeasures.utils.time_utils import as_server_time + + +def test_scheduling_a_charging_station( + db, app, add_charging_station_assets, setup_test_data +): + """Test one clean run of one scheduling job: + - data source was made, + - schedule has been made + + Starting with a state of charge 1 kWh, within 2 hours we should be able to reach 5 kWh. 
+ """ + soc_at_start = 1 + target_soc = 5 + duration_until_target = timedelta(hours=2) + + charging_station = Asset.query.filter( + Asset.name == "Test charging station" + ).one_or_none() + start = as_server_time(datetime(2015, 1, 2)) + end = as_server_time(datetime(2015, 1, 3)) + resolution = timedelta(minutes=15) + target_soc_datetime = start + duration_until_target + soc_targets = pd.Series( + np.nan, index=pd.date_range(start, end, freq=resolution, closed="right") + ) + soc_targets.loc[target_soc_datetime] = target_soc + + assert ( + DataSource.query.filter_by(name="Seita", type="scheduling script").one_or_none() + is None + ) # Make sure the scheduler data source isn't there + + job = create_scheduling_job( + charging_station.id, + start, + end, + belief_time=start, + resolution=resolution, + soc_at_start=soc_at_start, + soc_targets=soc_targets, + ) + + print("Job: %s" % job.id) + + work_on_rq(app.queues["scheduling"], exc_handler=exception_reporter) + + scheduler_source = DataSource.query.filter_by( + name="Seita", type="scheduling script" + ).one_or_none() + assert ( + scheduler_source is not None + ) # Make sure the scheduler data source is now there + + power_values = ( + Power.query.filter(Power.asset_id == charging_station.id) + .filter(Power.data_source_id == scheduler_source.id) + .all() + ) + consumption_schedule = pd.Series( + [-v.value for v in power_values], + index=pd.DatetimeIndex([v.datetime for v in power_values]), + ) # For consumption schedules, positive values denote consumption. 
For the db, consumption is negative + assert len(consumption_schedule) == 96 + print(consumption_schedule.head(12)) + assert ( + consumption_schedule.head(8).sum() * (resolution / timedelta(hours=1)) == 4.0 + ) # The first 2 hours should consume 4 kWh to charge from 1 to 5 kWh diff --git a/flexmeasures/data/tests/test_user_services.py b/flexmeasures/data/tests/test_user_services.py index ed73769b1..6008b7894 100644 --- a/flexmeasures/data/tests/test_user_services.py +++ b/flexmeasures/data/tests/test_user_services.py @@ -13,7 +13,7 @@ from flexmeasures.data.models.data_sources import DataSource -def test_create_user(app): +def test_create_user(fresh_db, setup_roles_users_fresh_db, app): """Create a user""" num_users = User.query.count() user = create_user( @@ -29,7 +29,7 @@ def test_create_user(app): assert DataSource.query.filter_by(name=user.username).one_or_none() -def test_create_invalid_user(app): +def test_create_invalid_user(fresh_db, setup_roles_users_fresh_db, app): """A few invalid attempts to create a user""" with pytest.raises(InvalidFlexMeasuresUser) as exc_info: create_user(password=hash_password("testtest"), user_roles=["Prosumer"]) @@ -67,7 +67,7 @@ def test_create_invalid_user(app): assert "already exists" in str(exc_info.value) -def test_delete_user(app): +def test_delete_user(fresh_db, setup_roles_users_fresh_db, app): """Assert user has assets and power measurements. Deleting removes all of that.""" prosumer: User = find_user_by_email("test_prosumer@seita.nl") num_users_before = User.query.count() diff --git a/flexmeasures/data/transactional.py b/flexmeasures/data/transactional.py index 920da7397..adbade62c 100644 --- a/flexmeasures/data/transactional.py +++ b/flexmeasures/data/transactional.py @@ -59,7 +59,7 @@ def after_request_exception_rollback_session(exception): Register this on your app via the teardown_request setup method. 
We roll back the session if there was any error (which only has an effect if - the session has not yet been comitted). + the session has not yet been committed). Flask-SQLAlchemy is closing the scoped sessions automatically.""" if exception is not None: @@ -118,12 +118,13 @@ def wrap(*args, **kwargs): task_run.datetime = datetime.utcnow().replace(tzinfo=pytz.utc) task_run.status = status click.echo( - "Reported task %s status as %s" % (task_function.__name__, status) + "[FLEXMEASURES] Reported task %s status as %s" + % (task_function.__name__, status) ) db.session.commit() except Exception as e: click.echo( - "[FLEXMEASURES] Could not report the running of Task %s, encountered the following problem: [%s]." + "[FLEXMEASURES] Could not report the running of task %s. Encountered the following problem: [%s]." " The task might have run fine." % (task_function.__name__, str(e)) ) db.session.rollback() diff --git a/flexmeasures/ui/__init__.py b/flexmeasures/ui/__init__.py index c3df28dff..6348aa578 100644 --- a/flexmeasures/ui/__init__.py +++ b/flexmeasures/ui/__init__.py @@ -33,9 +33,11 @@ def register_at(app: Flask): from flexmeasures.ui.crud.assets import AssetCrudUI from flexmeasures.ui.crud.users import UserCrudUI + from flexmeasures.ui.views.sensors import SensorUI AssetCrudUI.register(app) UserCrudUI.register(app) + SensorUI.register(app) import flexmeasures.ui.views # noqa: F401 this is necessary to load the views @@ -138,7 +140,7 @@ def add_jinja_variables(app): for v in ( "FLEXMEASURES_MODE", "FLEXMEASURES_PLATFORM_NAME", - "FLEXMEASURES_SHOW_CONTROL_UI", + "FLEXMEASURES_LISTED_VIEWS", "FLEXMEASURES_PUBLIC_DEMO_CREDENTIALS", ): app.jinja_env.globals[v] = app.config.get(v, "") diff --git a/flexmeasures/ui/crud/assets.py b/flexmeasures/ui/crud/assets.py index 609c95321..d82be0b6e 100644 --- a/flexmeasures/ui/crud/assets.py +++ b/flexmeasures/ui/crud/assets.py @@ -16,6 +16,7 @@ from flexmeasures.data.models.assets import AssetType, Asset from 
flexmeasures.data.models.user import User from flexmeasures.data.models.markets import Market +from flexmeasures.utils.flexmeasures_inflection import parameterize from flexmeasures.ui.utils.plotting_utils import get_latest_power_as_plot from flexmeasures.ui.utils.view_utils import render_flexmeasures_template from flexmeasures.ui.crud.api_wrapper import InternalApi @@ -68,13 +69,13 @@ def validate_on_submit(self): ) return super().validate_on_submit() - def to_json(self) -> dict: + def to_json(self, for_posting=False) -> dict: """ turn form data into a JSON we can POST to our internal API """ data = copy.copy(self.data) - data["name"] = data["display_name"] # both are part of the asset model - data[ - "unit" - ] = "MW" # TODO: make unit a choice? this is hard-coded in the UI as well + if for_posting: + data["name"] = parameterize( + data["display_name"] + ) # best guess at un-humanizing data["capacity_in_mw"] = float(data["capacity_in_mw"]) data["min_soc_in_mwh"] = float(data["min_soc_in_mwh"]) data["max_soc_in_mwh"] = float(data["max_soc_in_mwh"]) @@ -248,7 +249,7 @@ def post(self, id: str): if form_valid and owner is not None and market is not None: post_asset_response = InternalApi().post( url_for("flexmeasures_api_v2_0.post_assets"), - args=asset_form.to_json(), + args=asset_form.to_json(for_posting=True), do_not_raise_for=[400, 422], ) @@ -263,8 +264,11 @@ def post(self, id: str): f"Internal asset API call unsuccessful [{post_asset_response.status_code}]: {post_asset_response.text}" ) asset_form.process_api_validation_errors(post_asset_response.json()) - if "message" in post_asset_response.json(): - error_msg = post_asset_response.json()["message"] + if ( + "message" in post_asset_response.json() + and "json" in post_asset_response.json()["message"] + ): + error_msg = str(post_asset_response.json()["message"]["json"]) if asset is None: msg = "Cannot create asset. 
" + error_msg return render_flexmeasures_template( @@ -278,11 +282,22 @@ def post(self, id: str): else: asset_form = with_options(AssetForm()) if not asset_form.validate_on_submit(): + asset = Asset.query.get(id) + latest_measurement_time_str, asset_plot_html = get_latest_power_as_plot( + asset + ) + # Display the form data, but set some extra data which the page wants to show. + asset_info = asset_form.data.copy() + asset_info["id"] = id + asset_info["owner_id"] = asset.owner_id + asset_info["entity_address"] = asset.entity_address return render_flexmeasures_template( - "crud/asset_new.html", + "crud/asset.html", asset_form=asset_form, + asset=asset_info, msg="Cannot edit asset.", - map_center=get_center_location(db, user=current_user), + latest_measurement_time_str=latest_measurement_time_str, + asset_plot_html=asset_plot_html, mapboxAccessToken=current_app.config.get("MAPBOX_ACCESS_TOKEN", ""), ) patch_asset_response = InternalApi().patch( @@ -300,6 +315,7 @@ def post(self, id: str): current_app.logger.error( f"Internal asset API call unsuccessful [{patch_asset_response.status_code}]: {patch_asset_response.text}" ) + msg = "Cannot edit asset." 
asset_form.process_api_validation_errors(patch_asset_response.json()) asset = Asset.query.get(id) diff --git a/flexmeasures/ui/crud/users.py b/flexmeasures/ui/crud/users.py index 082616c07..fa64c7d7a 100644 --- a/flexmeasures/ui/crud/users.py +++ b/flexmeasures/ui/crud/users.py @@ -1,4 +1,5 @@ from typing import Optional, Union +from datetime import datetime from flask import request, url_for from flask_classful import FlaskView @@ -55,6 +56,10 @@ def process_internal_api_response( role_ids = tuple(user_data.get("flexmeasures_roles", [])) user_data["flexmeasures_roles"] = Role.query.filter(Role.id.in_(role_ids)).all() user_data.pop("status", None) # might have come from requests.response + if "last_login_at" in user_data and user_data["last_login_at"] is not None: + user_data["last_login_at"] = datetime.fromisoformat( + user_data["last_login_at"] + ) if user_id: user_data["id"] = user_id if make_obj: diff --git a/flexmeasures/ui/static/js/daterange-utils.js b/flexmeasures/ui/static/js/daterange-utils.js new file mode 100644 index 000000000..ebf9fde97 --- /dev/null +++ b/flexmeasures/ui/static/js/daterange-utils.js @@ -0,0 +1,41 @@ +// Date range utils +export function subtract(oldDate, nDays) { + var newDate = new Date(oldDate) + newDate.setDate(newDate.getDate() - nDays); + return newDate; +} +export function thisMonth(oldDate) { + var d1 = new Date(oldDate) + d1.setDate(1); + var d2 = new Date(d1.getFullYear(), d1.getMonth() + 1, 0); + return [d1, d2]; +}; +export function lastNMonths(oldDate, nMonths) { + var d0 = new Date(oldDate) + var d1 = new Date(d0.getFullYear(), d0.getMonth() - nMonths + 2, 0); + d1.setDate(1); + var d2 = new Date(d0.getFullYear(), d0.getMonth() + 1, 0); + return [d1, d2]; +}; +export function getOffsetBetweenTimezonesForDate(date, timezone1, timezone2) { + const o1 = getTimeZoneOffset(date, timezone1) + const o2 = getTimeZoneOffset(date, timezone2) + return o2 - o1 +} + +function getTimeZoneOffset(date, timeZone) { + + // Abuse the Intl 
API to get a local ISO 8601 string for a given time zone. + let iso = date.toLocaleString('en-CA', { timeZone, hour12: false }).replace(', ', 'T'); + + // Include the milliseconds from the original timestamp + iso += '.' + date.getMilliseconds().toString().padStart(3, '0'); + + // Lie to the Date object constructor that it's a UTC time. + const lie = new Date(iso + 'Z'); + + // Return the difference in timestamps, as minutes + // Positive values are West of GMT, opposite of ISO 8601 + // this matches the output of `Date.getTimezoneOffset` + return -(lie - date) / 60 / 1000; +} \ No newline at end of file diff --git a/flexmeasures/ui/templates/admin/login_user.html b/flexmeasures/ui/templates/admin/login_user.html index a8e2a74d1..557fc9028 100644 --- a/flexmeasures/ui/templates/admin/login_user.html +++ b/flexmeasures/ui/templates/admin/login_user.html @@ -40,6 +40,7 @@

Interested in a demo?

{% endif %}
+ {% block teaser %}

The FlexMeasures Platform

+ {% endblock teaser %}
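The new `{% block teaser %}` wrapper above makes the login-page teaser overridable by a child template (e.g. a custom theme extending the base page). A minimal sketch of how Jinja block overriding works; the template names and strings here are illustrative, not FlexMeasures' actual templates:

```python
from jinja2 import Environment, DictLoader

# Hypothetical parent template with an overridable "teaser" block,
# and a child template that replaces it.
env = Environment(loader=DictLoader({
    "login_user.html": "{% block teaser %}The FlexMeasures Platform{% endblock %}",
    "custom_login.html": (
        "{% extends 'login_user.html' %}"
        "{% block teaser %}My Platform{% endblock %}"
    ),
}))

# The child's block content wins over the parent's default.
print(env.get_template("custom_login.html").render())  # -> My Platform
```

Rendering `login_user.html` directly still yields the default teaser, so existing pages are unaffected by adding the block.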
diff --git a/flexmeasures/ui/templates/base.html b/flexmeasures/ui/templates/base.html index 666643572..9dfd7b3a0 100644 --- a/flexmeasures/ui/templates/base.html +++ b/flexmeasures/ui/templates/base.html @@ -64,8 +64,7 @@ {{ FLEXMEASURES_PLATFORM_NAME }} - {{ self.title() }} {% if user_is_logged_in and not "Error" in self.title() %} for {{ user_name }} on - Jeju island {% endif %} + {{ self.title() }} {% if user_is_logged_in and not "Error" in self.title() %} for {{ user_name }} {% endif %} + {% endblock credits %} {% if app_running_since %} @@ -301,4 +304,4 @@

Icons from Flaticon -{% endblock base %} \ No newline at end of file +{% endblock base %} diff --git a/flexmeasures/ui/templates/crud/asset.html b/flexmeasures/ui/templates/crud/asset.html index 3e461fd16..0762524eb 100644 --- a/flexmeasures/ui/templates/crud/asset.html +++ b/flexmeasures/ui/templates/crud/asset.html @@ -49,7 +49,7 @@

Edit asset {{ asset.display_name }}

(Owned by {{ asset.owner_id | username }}) - +
{{ asset_form.display_name.label(class="col-sm-6 control-label") }}
@@ -198,7 +198,7 @@

Location

// create map var assetMap = L - .map('mapid', { center: [{{ asset.latitude }}, {{ asset.longitude }}], zoom: 10}) + .map('mapid', { center: [{{ asset.latitude | replace("None", 10) }}, {{ asset.longitude | replace("None", 10) }}], zoom: 10}) .on('popupopen', function () { $(function () { $('[data-toggle="tooltip"]').tooltip(); @@ -207,17 +207,17 @@

Location

addTileLayer(assetMap, '{{ mapboxAccessToken }}'); // create marker - var {{ asset.asset_type_name | parameterize }}_icon = new L.DivIcon({ + var asset_icon = new L.DivIcon({ className: 'map-icon', - html: '', + html: '', iconSize: [100, 100], // size of the icon iconAnchor: [50, 50], // point of the icon which will correspond to marker's location popupAnchor: [0, -50] // point from which the popup should open relative to the iconAnchor }); var marker = L .marker( - [{{ asset.latitude }}, {{ asset.longitude }}], - { icon: {{ asset.asset_type_name | parameterize }}_icon } + [{{ asset.latitude | replace("None", 10)}}, {{ asset.longitude | replace("None", 10) }}], + { icon: asset_icon } ).addTo(assetMap); assetMap.on('click', function (e) { diff --git a/flexmeasures/ui/templates/defaults.jinja b/flexmeasures/ui/templates/defaults.jinja index a361acae9..2b436f031 100644 --- a/flexmeasures/ui/templates/defaults.jinja +++ b/flexmeasures/ui/templates/defaults.jinja @@ -3,26 +3,37 @@ {# Front-end app naming #} -{% set show_queues = True if current_user.is_authenticated and (current_user.has_role('admin') or FLEXMEASURES_MODE == "demo") else False %} - {# Front-end menu, as columns with href, id, caption, and (fa fa-)icon #} -{% set navigation_bar = [ - ('dashboard', 'dashboard', 'Dashboard', 'dashboard'), - ('assets', 'assets', 'Assets', 'list-ul'), -] if current_user.is_authenticated else [] %} -{% do navigation_bar.append(('users', 'users', 'Users', 'users')) if current_user.has_role('admin') %} -{% do navigation_bar.extend([ - ('portfolio', 'portfolio', 'Portfolio overview', 'briefcase'), - ('analytics', 'analytics', 'Analytics', 'bar-chart'), - ('upload', 'upload', 'Upload data', 'cloud-upload'), -]) if current_user.is_authenticated %} -{% do navigation_bar.extend([ - ('control', 'control', 'Flexibility actions', 'wrench'), -]) if FLEXMEASURES_SHOW_CONTROL_UI and current_user.is_authenticated %} +{% set navigation_bar = [] %} + +{% set nav_bar_specs = { + "dashboard": 
dict(title="Dashboard", icon="dashboard"), + "assets": dict(title="Assets", icon="list-ul"), + "users": dict(title="Users", icon="users"), + "portfolio": dict(title="Portfolio overview", icon="briefcase"), + "analytics": dict(title="Analytics", icon="bar-chart"), + "upload": dict(title="Upload data", icon="cloud-upload"), + "control": dict(title="Flexibility actions", icon="wrench") +} +%} + +{% for view_name in FLEXMEASURES_LISTED_VIEWS %} + {# add specs for views we don't know (plugin views) #} + {% do nav_bar_specs.update({view_name: dict(title=view_name.capitalize(), icon="info")}) if view_name not in nav_bar_specs %} + {# add view to menu if user is authenticated #} + {% do navigation_bar.append( + (view_name, view_name, nav_bar_specs[view_name]["title"], nav_bar_specs[view_name]["icon"]) + ) if current_user.is_authenticated %} +{% endfor %} + + +{% set show_queues = True if current_user.is_authenticated and (current_user.has_role('admin') or FLEXMEASURES_MODE == "demo") else False %} {% do navigation_bar.append(('tasks', 'tasks', 'Tasks', 'tasks')) if show_queues %} + {% do navigation_bar.append(('account', 'account', '', 'user')) if current_user.is_authenticated %} + {% do navigation_bar.append(('ui/static/documentation/html/index.html', 'docs', '', 'question')) if documentation_exists and current_user.is_authenticated %} {% set active_page = active_page|default('dashboard') -%} diff --git a/flexmeasures/ui/templates/views/analytics.html b/flexmeasures/ui/templates/views/analytics.html index 7c145c128..10591e128 100644 --- a/flexmeasures/ui/templates/views/analytics.html +++ b/flexmeasures/ui/templates/views/analytics.html @@ -157,7 +157,7 @@

Metrics

Rev./Costs {% endif %} - {% if selected_weather_sensor %} + {% if selected_sensor %} {{ selected_sensor_type.display_name | capitalize }} diff --git a/flexmeasures/ui/templates/views/sensors.html b/flexmeasures/ui/templates/views/sensors.html new file mode 100644 index 000000000..c8efd6c81 --- /dev/null +++ b/flexmeasures/ui/templates/views/sensors.html @@ -0,0 +1,139 @@ +{% extends "base.html" %} + +{% set active_page = "assets" %} + +{% block title %} Assets {% endblock %} + +{% block divs %} + +
+
+
+

+
+ + + + + + + + + + + + + + +{% endblock %} \ No newline at end of file diff --git a/flexmeasures/ui/tests/conftest.py b/flexmeasures/ui/tests/conftest.py index c09372725..f29d34b5a 100644 --- a/flexmeasures/ui/tests/conftest.py +++ b/flexmeasures/ui/tests/conftest.py @@ -28,8 +28,10 @@ def as_admin(client): logout(client) -@pytest.fixture(scope="function", autouse=True) -def setup_ui_test_data(db): +@pytest.fixture(scope="module", autouse=True) +def setup_ui_test_data( + db, setup_roles_users, setup_markets, setup_sources, setup_asset_types +): """ Create another prosumer, without data, and an admin Also, a weather sensor (and sensor type). diff --git a/flexmeasures/ui/tests/test_asset_crud.py b/flexmeasures/ui/tests/test_asset_crud.py index 4ae20f7c1..8a7b134f2 100644 --- a/flexmeasures/ui/tests/test_asset_crud.py +++ b/flexmeasures/ui/tests/test_asset_crud.py @@ -34,13 +34,13 @@ def test_assets_page_nonempty(db, client, requests_mock, as_prosumer, use_owned_ assert asset["display_name"].encode() in asset_index.data -def test_new_asset_page(client, as_admin): +def test_new_asset_page(client, setup_assets, as_admin): asset_page = client.get(url_for("AssetCrudUI:get", id="new"), follow_redirects=True) assert asset_page.status_code == 200 assert b"Creating a new asset" in asset_page.data -def test_asset_page(db, client, requests_mock, as_prosumer): +def test_asset_page(db, client, setup_assets, requests_mock, as_prosumer): prosumer = find_user_by_email("test_prosumer@seita.nl") asset = prosumer.assets[0] db.session.expunge(prosumer) @@ -61,7 +61,7 @@ def test_asset_page(db, client, requests_mock, as_prosumer): assert str(mock_asset["longitude"]).encode() in asset_page.data -def test_edit_asset(db, client, requests_mock, as_admin): +def test_edit_asset(db, client, setup_assets, requests_mock, as_admin): mock_asset = mock_asset_response(as_list=False) requests_mock.patch( "http://localhost//api/v2_0/asset/1", status_code=200, json=mock_asset @@ -78,7 +78,7 @@ def 
test_edit_asset(db, client, requests_mock, as_admin): assert str(mock_asset["longitude"]) in str(response.data) -def test_add_asset(db, client, requests_mock, as_admin): +def test_add_asset(db, client, setup_assets, requests_mock, as_admin): """Add a new asset""" prosumer = find_user_by_email("test_prosumer@seita.nl") mock_asset = mock_asset_response(owner_id=prosumer.id, as_list=False) diff --git a/flexmeasures/ui/tests/test_views.py b/flexmeasures/ui/tests/test_views.py index 6bb723766..27edd4049 100644 --- a/flexmeasures/ui/tests/test_views.py +++ b/flexmeasures/ui/tests/test_views.py @@ -4,7 +4,7 @@ from flexmeasures.ui.tests.utils import logout -def test_dashboard_responds(client, as_prosumer): +def test_dashboard_responds(client, setup_assets, as_prosumer): dashboard = client.get( url_for("flexmeasures_ui.dashboard_view"), follow_redirects=True ) @@ -21,7 +21,7 @@ def test_dashboard_responds_only_for_logged_in_users(client, as_prosumer): assert b"Please log in" in dashboard.data -def test_portfolio_responds(client, as_prosumer): +def test_portfolio_responds(client, setup_assets, as_prosumer): portfolio = client.get( url_for("flexmeasures_ui.portfolio_view"), follow_redirects=True ) @@ -42,7 +42,7 @@ def test_control_responds(client, as_prosumer): assert b"Control actions" in control.data -def test_analytics_responds(db, client, as_prosumer): +def test_analytics_responds(db, client, setup_assets, as_prosumer): analytics = client.get( url_for("flexmeasures_ui.analytics_view"), follow_redirects=True ) diff --git a/flexmeasures/ui/tests/utils.py b/flexmeasures/ui/tests/utils.py index f04c4c15b..5e0a47d74 100644 --- a/flexmeasures/ui/tests/utils.py +++ b/flexmeasures/ui/tests/utils.py @@ -64,6 +64,7 @@ def mock_user_response( active=active, password="secret", flexmeasures_roles=[1], + last_login_at="2021-05-14T20:00:00+02:00", ) if as_list: user_list = [user] diff --git a/flexmeasures/ui/utils/plotting_utils.py b/flexmeasures/ui/utils/plotting_utils.py index 
9c407e01b..ac69d44f4 100644 --- a/flexmeasures/ui/utils/plotting_utils.py +++ b/flexmeasures/ui/utils/plotting_utils.py @@ -564,7 +564,11 @@ def get_latest_power_as_plot(asset: Asset, small: bool = False) -> Tuple[str, st First returned string is the measurement time, second string is the html string.""" if current_app.config.get("FLEXMEASURES_MODE", "") == "demo": - before = server_now().replace(year=2015) + demo_year = current_app.config.get("FLEXMEASURES_DEMO_YEAR", None) + if demo_year is None: + before = server_now() + else: + before = server_now().replace(year=demo_year) elif current_app.config.get("FLEXMEASURES_MODE", "") == "play": before = None # type:ignore else: diff --git a/flexmeasures/ui/utils/view_utils.py b/flexmeasures/ui/utils/view_utils.py index 6ee924e06..c93bf0bab 100644 --- a/flexmeasures/ui/utils/view_utils.py +++ b/flexmeasures/ui/utils/view_utils.py @@ -9,7 +9,6 @@ from flask_security.core import current_user from werkzeug.exceptions import BadRequest import iso8601 -import pytz from flexmeasures import __version__ as flexmeasures_version from flexmeasures.utils import time_utils @@ -84,13 +83,16 @@ def render_flexmeasures_template(html_filename: str, **variables): variables["user_name"] = ( current_user.is_authenticated and current_user.username or "" ) + variables["js_versions"] = current_app.config.get("FLEXMEASURES_JS_VERSIONS") return render_template(html_filename, **variables) def clear_session(): for skey in [ - k for k in session.keys() if k not in ("_id", "user_id", "csrf_token") + k + for k in session.keys() + if k not in ("_fresh", "_id", "_user_id", "csrf_token", "fs_cc", "fs_paa") ]: current_app.logger.info( "Removing %s:%s from session ... " % (skey, session[skey]) @@ -100,6 +102,8 @@ def clear_session(): def set_time_range_for_session(): """Set period (start_date, end_date and resolution) on session if they are not yet set. + The datepicker sends times as tz-aware UTC strings. 
+ We re-interpret them as being in the server's timezone. Also set the forecast horizon, if given.""" if "start_time" in request.values: session["start_time"] = time_utils.localized_datetime( @@ -110,12 +114,8 @@ def set_time_range_for_session(): else: if ( session["start_time"].tzinfo is None - ): # session storage seems to lose tz info - session["start_time"] = ( - session["start_time"] - .replace(tzinfo=pytz.utc) - .astimezone(time_utils.get_timezone()) - ) + ): # session storage seems to lose tz info and becomes UTC + session["start_time"] = time_utils.as_server_time(session["start_time"]) if "end_time" in request.values: session["end_time"] = time_utils.localized_datetime( @@ -125,13 +125,9 @@ def set_time_range_for_session(): session["end_time"] = time_utils.get_default_end_time() else: if session["end_time"].tzinfo is None: - session["end_time"] = ( - session["end_time"] - .replace(tzinfo=pytz.utc) - .astimezone(time_utils.get_timezone()) - ) + session["end_time"] = time_utils.as_server_time(session["end_time"]) - # Our demo server works only with the current year's data + # Our demo server's UI should only work with the current year's data if current_app.config.get("FLEXMEASURES_MODE", "") == "demo": session["start_time"] = session["start_time"].replace(year=datetime.now().year) session["end_time"] = session["end_time"].replace(year=datetime.now().year) diff --git a/flexmeasures/ui/views/charts.py b/flexmeasures/ui/views/charts.py index fdfced2f6..47a63581e 100644 --- a/flexmeasures/ui/views/charts.py +++ b/flexmeasures/ui/views/charts.py @@ -6,7 +6,7 @@ from flexmeasures.api.v2_0 import flexmeasures_api as flexmeasures_api_v2_0 from flexmeasures.api.v2_0.routes import v2_0_service_listing -from flexmeasures.api.common.schemas.times import DurationField +from flexmeasures.data.schemas.times import DurationField from flexmeasures.data.queries.analytics import get_power_data from flexmeasures.ui.views.analytics import make_power_figure diff --git 
a/flexmeasures/ui/views/sensors.py b/flexmeasures/ui/views/sensors.py new file mode 100644 index 000000000..091c6d31f --- /dev/null +++ b/flexmeasures/ui/views/sensors.py @@ -0,0 +1,60 @@ +from altair.utils.html import spec_to_html +from flask import current_app +from flask_classful import FlaskView, route +from flask_security import login_required, roles_required +from marshmallow import fields +from webargs.flaskparser import use_kwargs + +from flexmeasures.data.schemas.times import AwareDateTimeField +from flexmeasures.api.dev.sensors import SensorAPI +from flexmeasures.ui.utils.view_utils import render_flexmeasures_template + + +class SensorUI(FlaskView): + """ + This view creates several new UI endpoints for viewing sensors. + + todo: consider extending this view for crud purposes + """ + + route_base = "/sensors" + + @login_required + @roles_required("admin") # todo: remove after we check for sensor ownership + @route("//chart/") + @use_kwargs( + { + "event_starts_after": AwareDateTimeField(format="iso", required=False), + "event_ends_before": AwareDateTimeField(format="iso", required=False), + "beliefs_after": AwareDateTimeField(format="iso", required=False), + "beliefs_before": AwareDateTimeField(format="iso", required=False), + "dataset_name": fields.Str(required=False), + }, + location="query", + ) + def get_chart(self, id, **kwargs): + """GET from /sensors//chart""" + chart_specs = SensorAPI().get_chart( + id, include_data=True, as_html=True, **kwargs + ) + return spec_to_html( + chart_specs, + "vega-lite", + vega_version=current_app.config.get("FLEXMEASURES_JS_VERSIONS").vega, + vegaembed_version=current_app.config.get( + "FLEXMEASURES_JS_VERSIONS" + ).vegaembed, + vegalite_version=current_app.config.get( + "FLEXMEASURES_JS_VERSIONS" + ).vegalite, + ) + + @login_required + @roles_required("admin") # todo: remove after we check for sensor ownership + def get(self, id: int): + """GET from /sensors/""" + return render_flexmeasures_template( + 
"views/sensors.html", + sensor_id=id, + msg="", + ) diff --git a/flexmeasures/utils/app_utils.py b/flexmeasures/utils/app_utils.py index 7d1845db6..4065d79bd 100644 --- a/flexmeasures/utils/app_utils.py +++ b/flexmeasures/utils/app_utils.py @@ -1,7 +1,9 @@ import os import sys +import importlib.util import click +from flask import Flask from flask.cli import FlaskGroup from flexmeasures.app import create as create_app @@ -36,7 +38,7 @@ def set_secret_key(app, filename="secret_key"): try: app.config["SECRET_KEY"] = open(filename, "rb").read() except IOError: - print( + app.logger.error( """ Error: No secret key set. @@ -63,3 +65,39 @@ def set_secret_key(app, filename="secret_key"): ) sys.exit(2) + + +def register_plugins(app: Flask): + """ + Register FlexMeasures plugins as Blueprints. + This is configured by the config setting FLEXMEASURES_PLUGIN_PATHS. + + Assumptions: + - Each plugin folder contains an __init__.py file. + - In this init, you define a Blueprint object called <plugin name>_bp + + We'll refer to each plugin by the name of its folder (the last part of the path). + """ + plugin_paths = app.config.get("FLEXMEASURES_PLUGIN_PATHS", "") + if not isinstance(plugin_paths, list): + app.logger.warning( + f"The value of FLEXMEASURES_PLUGIN_PATHS is not a list: {plugin_paths}. Cannot install plugins ..." + ) + return + for plugin_path in plugin_paths: + plugin_name = plugin_path.split("/")[-1] + if not os.path.exists(os.path.join(plugin_path, "__init__.py")): + app.logger.warning( + f"Plugin {plugin_name} does not contain an '__init__.py' file. Cannot load plugin {plugin_name}."
+ ) + return + app.logger.debug(f"Importing plugin {plugin_name} ...") + spec = importlib.util.spec_from_file_location( + plugin_name, os.path.join(plugin_path, "__init__.py") + ) + app.logger.debug(spec) + module = importlib.util.module_from_spec(spec) + app.logger.debug(module) + sys.modules[plugin_name] = module + spec.loader.exec_module(module) + app.register_blueprint(getattr(module, f"{plugin_name}_bp")) diff --git a/flexmeasures/utils/config_defaults.py b/flexmeasures/utils/config_defaults.py index c84ebe111..4c787a8c1 100644 --- a/flexmeasures/utils/config_defaults.py +++ b/flexmeasures/utils/config_defaults.py @@ -65,7 +65,7 @@ class Config(object): CORS_RESOURCES: Union[dict, list, str] = [r"/api/*"] CORS_SUPPORTS_CREDENTIALS: bool = True - DARK_SKY_API_KEY: Optional[str] = None + OPENWEATHERMAP_API_KEY: Optional[str] = None MAPBOX_ACCESS_TOKEN: Optional[str] = None @@ -78,7 +78,6 @@ class Config(object): FLEXMEASURES_PLATFORM_NAME: str = "FlexMeasures" FLEXMEASURES_MODE: str = "" FLEXMEASURES_TIMEZONE: str = "Asia/Seoul" - FLEXMEASURES_SHOW_CONTROL_UI: bool = False FLEXMEASURES_HIDE_NAN_IN_UI: bool = False FLEXMEASURES_PUBLIC_DEMO_CREDENTIALS: Optional[Tuple] = None FLEXMEASURES_DEMO_YEAR: Optional[int] = None @@ -86,8 +85,16 @@ class Config(object): # This setting contains the domain on which FlexMeasures runs # and the first month when the domain was under the current owner's administration FLEXMEASURES_HOSTS_AND_AUTH_START: dict = {"flexmeasures.io": "2021-01"} + FLEXMEASURES_PLUGIN_PATHS: List[str] = [] FLEXMEASURES_PROFILE_REQUESTS: bool = False FLEXMEASURES_DB_BACKUP_PATH: str = "migrations/dumps" + FLEXMEASURES_LISTED_VIEWS: List[str] = [ + "dashboard", + "analytics", + "portfolio", + "assets", + "users", + ] FLEXMEASURES_LP_SOLVER: str = "cbc" FLEXMEASURES_PLANNING_HORIZON: timedelta = timedelta(hours=2 * 24) FLEXMEASURES_PLANNING_TTL: timedelta = timedelta( @@ -98,6 +105,12 @@ class Config(object): FLEXMEASURES_REDIS_PORT: int = 6379 
FLEXMEASURES_REDIS_DB_NR: int = 0 # Redis per default has 16 databases, [0-15] FLEXMEASURES_REDIS_PASSWORD: Optional[str] = None + FLEXMEASURES_JS_VERSIONS: dict = dict( + vega="5", + vegaembed="6.17.0", + vegalite="5.0.0", + # todo: expand with other js versions used in FlexMeasures + ) # names of settings which cannot be None diff --git a/flexmeasures/utils/tests/test_time_utils.py b/flexmeasures/utils/tests/test_time_utils.py new file mode 100644 index 000000000..6d3519bb1 --- /dev/null +++ b/flexmeasures/utils/tests/test_time_utils.py @@ -0,0 +1,43 @@ +from datetime import datetime, timedelta + +import pytz +import pytest + +from flexmeasures.utils.time_utils import ( + server_now, + naturalized_datetime_str, +) + + +@pytest.mark.parametrize( + "dt_tz,now,server_tz,delta_in_h,exp_result", + [ + (None, datetime.utcnow(), "UTC", 3, "3 hours ago"), + (None, datetime(2021, 5, 17, 3), "Europe/Amsterdam", 48, "May 15"), + ("Asia/Seoul", "server_now", "Europe/Amsterdam", 1, "an hour ago"), + ("UTC", datetime(2021, 5, 17, 3), "Asia/Seoul", 24 * 7, "May 10"), + ("UTC", datetime(2021, 5, 17, 3), "Asia/Seoul", None, "never"), + ], +) +def test_naturalized_datetime_str( + app, + monkeypatch, + dt_tz, + now, + server_tz, + delta_in_h, + exp_result, +): + monkeypatch.setitem(app.config, "FLEXMEASURES_TIMEZONE", server_tz) + if now == "server_now": + now = server_now() # done this way as it needs app context + if now.tzinfo is None: + now.replace(tzinfo=pytz.utc) # assuming UTC + if delta_in_h is not None: + h_ago = now - timedelta(hours=delta_in_h) + if dt_tz is not None: + h_ago = h_ago.astimezone(pytz.timezone(dt_tz)) + else: + h_ago = None + print(h_ago) + assert naturalized_datetime_str(h_ago, now=now) == exp_result diff --git a/flexmeasures/utils/time_utils.py b/flexmeasures/utils/time_utils.py index d3a2ad940..b72691ec0 100644 --- a/flexmeasures/utils/time_utils.py +++ b/flexmeasures/utils/time_utils.py @@ -33,12 +33,25 @@ def ensure_local_timezone( def 
as_server_time(dt: datetime) -> datetime: - """The datetime represented in the timezone of the FlexMeasures platform.""" + """The datetime represented in the timezone of the FlexMeasures platform. + If dt is naive, we assume it is UTC time. + """ return naive_utc_from(dt).replace(tzinfo=pytz.utc).astimezone(get_timezone()) +def localized_datetime(dt: datetime) -> datetime: + """ + Localise a datetime to the timezone of the FlexMeasures platform. + Note: this will change nothing but the tzinfo field. + """ + return get_timezone().localize(naive_utc_from(dt)) + + def naive_utc_from(dt: datetime) -> datetime: - """Return a naive datetime, that is localised to UTC if it has a timezone.""" + """ + Return a naive datetime, that is localised to UTC if it has a timezone. + If dt is naive, we assume it is already in UTC time. + """ if not hasattr(dt, "tzinfo") or dt.tzinfo is None: # let's hope this is the UTC time you expect return dt @@ -58,16 +71,13 @@ def tz_index_naively( return data -def localized_datetime(dt: datetime) -> datetime: - """Localise a datetime to the timezone of the FlexMeasures platform.""" - return get_timezone().localize(naive_utc_from(dt)) - - def localized_datetime_str(dt: datetime, dt_format: str = "%Y-%m-%d %I:%M %p") -> str: - """Localise a datetime to the timezone of the FlexMeasures platform. - Hint: This can be set as a jinja filter, so we can display local time in the app, e.g.: - app.jinja_env.filters['datetime'] = localized_datetime_filter + """ + Localise a datetime to the timezone of the FlexMeasures platform. If no datetime is passed in, use server_now() as basis. 
+
+    Hint: This can be set as a jinja filter, so we can display local time in the app, e.g.:
+    app.jinja_env.filters['localized_datetime'] = localized_datetime_str
     """
     if dt is None:
         dt = server_now()
@@ -76,16 +86,36 @@ def localized_datetime_str(dt: datetime, dt_format: str = "%Y-%m-%d %I:%M %p") -
     return local_dt.strftime(dt_format)
 
 
-def naturalized_datetime_str(dt: Optional[datetime]) -> str:
-    """ Naturalise a datetime object."""
+def naturalized_datetime_str(
+    dt: Optional[datetime], now: Optional[datetime] = None
+) -> str:
+    """
+    Naturalise a datetime object (into a human-friendly string).
+    The dt parameter (as well as the now parameter, if you use it)
+    can be either naive or tz-aware. We assume UTC in the naive case.
+
+    We use the humanize library to generate a human-friendly string.
+    If dt is no more than 24 hours ago, we use humanize.naturaltime (e.g. "3 hours ago");
+    otherwise, we use humanize.naturaldate (e.g. "May 10").
+
+    Hint: This can be set as a jinja filter, so we can display local time in the app, e.g.:
+    app.jinja_env.filters['naturalized_datetime'] = naturalized_datetime_str
+    """
     if dt is None:
         return "never"
+    if now is None:
+        now = datetime.utcnow()
     # humanize uses the local now internally, so let's make dt local
-    local_timezone = tzlocal.get_localzone()
-    local_dt = (
-        dt.replace(tzinfo=pytz.utc).astimezone(local_timezone).replace(tzinfo=None)
-    )
-    if dt >= datetime.utcnow() - timedelta(hours=24):
+    if dt.tzinfo is None:
+        local_dt = (
+            dt.replace(tzinfo=pytz.utc)
+            .astimezone(tzlocal.get_localzone())
+            .replace(tzinfo=None)
+        )
+    else:
+        local_dt = dt.astimezone(tzlocal.get_localzone()).replace(tzinfo=None)
+    # decide which humanize call to use for naturalization
+    if naive_utc_from(dt) >= naive_utc_from(now) - timedelta(hours=24):
         return naturaltime(local_dt)
     else:
         return naturaldate(local_dt)
@@ -123,9 +153,11 @@ def decide_resolution(start: Optional[datetime], end: Optional[datetime]) -> str
     return resolution
 
 
-def 
get_timezone(of_user=False): +def get_timezone(of_user=False) -> pytz.BaseTzInfo: """Return the FlexMeasures timezone, or if desired try to return the timezone of the current user.""" - default_timezone = pytz.timezone(current_app.config.get("FLEXMEASURES_TIMEZONE")) + default_timezone = pytz.timezone( + current_app.config.get("FLEXMEASURES_TIMEZONE", "") + ) if not of_user: return default_timezone if current_user.is_anonymous: @@ -195,7 +227,9 @@ def forecast_horizons_for( else: resolution_str = resolution horizons = [] - if resolution_str in ("15T", "1h", "H"): + if resolution_str in ("5T", "10T"): + horizons = ["1h", "6h", "24h"] + elif resolution_str in ("15T", "1h", "H"): horizons = ["1h", "6h", "24h", "48h"] elif resolution_str in ("24h", "D"): horizons = ["24h", "48h"] diff --git a/requirements/app.in b/requirements/app.in index 09d7da32f..2264d838c 100644 --- a/requirements/app.in +++ b/requirements/app.in @@ -1,4 +1,5 @@ # see ui/utils/plotting_utils: separate_legend() and create_hover_tool() +altair bokeh==1.0.4 colour pscript @@ -25,14 +26,13 @@ rq-win; os_name == 'nt' or os_name == 'win' redis; os_name == 'nt' or os_name == 'win' tldextract pyomo>=5.6 -forecastiopy pvlib # the following three are optional in pvlib, but we use them netCDF4 siphon tables timetomodel>=0.6.8 -timely-beliefs>=1.3.0 +timely-beliefs>=1.4.3 python-dotenv # a backport, not needed in Python3.8 importlib_metadata diff --git a/requirements/app.txt b/requirements/app.txt index 35570ffd7..05c12e7b2 100644 --- a/requirements/app.txt +++ b/requirements/app.txt @@ -6,8 +6,10 @@ # alembic==1.5.8 # via flask-migrate -altair==3.0.0 - # via timely-beliefs +altair==4.1.0 + # via + # -r requirements/app.in + # timely-beliefs arrow==1.0.3 # via rq-dashboard attrs==20.3.0 @@ -97,8 +99,6 @@ flask==1.1.2 # flask-sslify # flask-wtf # rq-dashboard -forecastiopy==0.22 - # via -r requirements/app.in greenlet==1.0.0 # via sqlalchemy humanize==3.3.0 @@ -258,7 +258,6 @@ requests-file==1.5.1 # via 
tldextract
 requests==2.25.1
     # via
-    #   forecastiopy
     #   pvlib
     #   requests-file
     #   siphon
@@ -285,7 +284,6 @@ siphon==0.9
     # via -r requirements/app.in
 six==1.15.0
     # via
-    #   altair
     #   bcrypt
     #   bokeh
     #   cycler
@@ -317,7 +315,7 @@ tables==3.6.1
     # via -r requirements/app.in
 threadpoolctl==2.1.0
     # via scikit-learn
-timely-beliefs==1.3.0
+timely-beliefs==1.4.3
     # via -r requirements/app.in
 timetomodel==0.6.9
     # via -r requirements/app.in
diff --git a/to_pypi.sh b/to_pypi.sh
index a26723733..73fc58030 100755
--- a/to_pypi.sh
+++ b/to_pypi.sh
@@ -8,31 +8,32 @@
 # The version comes from setuptools_scm. See `python setup.py --version`.
 # setuptools_scm works via git tags that should implement a semantic versioning scheme, e.g. v0.2.3
 #
-# If there were zero commits since since tag, we have a real release and the version basicaly *is* what the tag says.
-# Otherwise, the version also include a .devN identifier, where N is the number of commits since the last version tag.
+# If there were zero commits since the tag, we have a real release and the version basically *is* what the tag says.
+# Otherwise, the version also includes a .devN identifier, where N is the number of commits since the last version tag.
 #
 # More information on creating a dev release
 # -------------------------------------------
 # Note that the only way to create a new dev release is to add another commit on your development branch.
-# It might have been convenient to not have to commit to do that (for exoerimenting with very small changes),
+# It might have been convenient to not have to commit to do that (for experimenting with very small changes),
 # but we decided against that. Let's explore why for a bit:
 #
 # First, setuptools_scm has the ability to add a local scheme (git commit and date/time) to the version,
 # but we've disabled that, as that extra part isn't formatted in a way that Pypi accepts it.
-# Another way would have been to add a local version identifier ("+M", not the plus sign), +# Another way would have been to add a local version identifier ("+M", note the plus sign), # which is allowed in PEP 440 but explicitly disallowed by Pypi. # Finally, if we simply add a number to .devN (-> .devNM), the ordering of dev versions would be # disturbed after the next local commit (e.g. we add 1 to .dev4, making it .dev41, and then the next version, .dev5, # is not the highest version chosen by PyPi). # -# So we'll use these tools as the experts intend us to. +# So we'll use these tools as the experts intended. # If you want, you can read more about acceptable versions in PEP 440: https://www.python.org/dev/peps/pep-0440/ rm -rf build/* dist/* pip -q install twine +pip -q install wheel python setup.py egg_info sdist python setup.py egg_info bdist_wheel -twine upload dist/* \ No newline at end of file +twine upload dist/*
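Review note: the plugin loader changed in this patch uses the standard `importlib` three-step pattern (`spec_from_file_location` → `module_from_spec` → `exec_module`). A minimal stand-alone sketch of that pattern, for reference — the plugin name and contents here are hypothetical, not FlexMeasures', and Flask blueprint registration is left out:

```python
import importlib.util
import os
import sys
import tempfile

# Write a tiny package to disk so the example is self-contained (hypothetical plugin).
plugin_dir = tempfile.mkdtemp()
pkg_path = os.path.join(plugin_dir, "my_plugin")
os.makedirs(pkg_path)
with open(os.path.join(pkg_path, "__init__.py"), "w") as f:
    f.write("GREETING = 'hello from plugin'\n")

# The same three importlib steps the loader performs:
spec = importlib.util.spec_from_file_location(
    "my_plugin", os.path.join(pkg_path, "__init__.py")
)
module = importlib.util.module_from_spec(spec)
sys.modules["my_plugin"] = module  # register before executing, so intra-package imports resolve
spec.loader.exec_module(module)

print(module.GREETING)  # the plugin's attributes are now importable under "my_plugin"
```

This mirrors the order used in the diff: the module is placed in `sys.modules` before `exec_module` runs, which matters if the plugin's `__init__.py` imports its own submodules.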
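Review note: the `time_utils` changes adopt one policy throughout — naive datetimes are assumed to be in UTC, and the naturaltime/naturaldate choice hinges on a 24-hour cutoff. A self-contained sketch of that policy using only the stdlib (the helper names echo `naive_utc_from` and the cutoff in `naturalized_datetime_str`, but this is illustrative code, not the FlexMeasures implementation):

```python
from datetime import datetime, timedelta, timezone


def naive_utc_from(dt: datetime) -> datetime:
    """Return a naive UTC datetime; naive inputs are assumed to already be UTC."""
    if dt.tzinfo is None:
        return dt  # assumed UTC, per the policy in the diff
    return dt.astimezone(timezone.utc).replace(tzinfo=None)


def within_last_24h(dt: datetime, now: datetime) -> bool:
    """The cutoff test deciding between a relative time and a plain date."""
    # Normalizing both sides first makes naive and tz-aware inputs comparable.
    return naive_utc_from(dt) >= naive_utc_from(now) - timedelta(hours=24)
```

Comparing a naive and a tz-aware datetime directly raises `TypeError` in Python, which is why both operands are normalized to naive UTC before the comparison — the same reason the diff replaces `dt >= datetime.utcnow() - timedelta(hours=24)` with a `naive_utc_from`-wrapped comparison.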