diff --git a/documentation/api/notation.rst b/documentation/api/notation.rst index 0e5a6c8a7..63606ee27 100644 --- a/documentation/api/notation.rst +++ b/documentation/api/notation.rst @@ -94,7 +94,7 @@ It uses the fact that all FlexMeasures sensors have unique IDs. The ``fm0`` scheme is the original scheme. It identified different types of sensors (such as grid connections, weather sensors and markets) in different ways. -The ``fm0`` scheme has been deprecated and is no longer supported officially. +The ``fm0`` scheme has been sunset since API version 3. Timeseries @@ -393,22 +393,17 @@ For example, to obtain data originating from data source 42, include the followi Data source IDs can be found by hovering over data in charts. -.. note:: Older API version (< 3) accepted user IDs (integers), account roles (strings) and lists thereof, instead of data source IDs (integers). - - .. _units: Units ^^^^^ -From API version 3 onwards, we are much more flexible with sent units. +The FlexMeasures API is quite flexible with the units you send. A valid unit for timeseries data is any unit that is convertible to the configured sensor unit registered in FlexMeasures. So, for example, you can send timeseries data with "W" unit to a "kW" sensor. And if you wish to do so, you can even send a timeseries with "kWh" unit to a "kW" sensor. In this case, FlexMeasures will convert the data using the resolution of the timeseries. -For API versions 1 and 2, the unit sent needs to be an exact match with the sensor unit, and only "MW" is allowed for power sensors. - .. _signs: Signs of power values diff --git a/documentation/changelog.rst b/documentation/changelog.rst index b0fc1f0c7..5fe59b0c5 100644 --- a/documentation/changelog.rst +++ b/documentation/changelog.rst @@ -35,6 +35,7 @@ Infrastructure / Support * Document the `device_scheduler` linear program [see `PR #764 `_]. * Add support for `HiGHS `_ solver [see `PR #766 `_]. * Add support for installing FlexMeasures under Python 3.11 [see `PR #771 `_]. +* Removed obsolete code dealing with deprecated data models (e.g. assets, markets and weather sensors), and sunset the fm0 scheme for entity addresses [see `PR #695 `_ and `project 11 `_]. v0.14.2 | July 25, 2023 ============================ diff --git a/documentation/configuration.rst b/documentation/configuration.rst index 635737c82..4fc64aad2 100644 --- a/documentation/configuration.rst +++ b/documentation/configuration.rst @@ -179,7 +179,7 @@ For more fine-grained control, the entries can also be tuples of view names and .. note:: This fine-grained control requires FlexMeasures version 0.6.0 -Default: ``["dashboard", "analytics", "portfolio", "assets", "users"]`` +Default: ``["dashboard"]`` FLEXMEASURES_MENU_LISTED_VIEW_ICONS diff --git a/documentation/dev/note-on-datamodel-transition.rst b/documentation/dev/note-on-datamodel-transition.rst index 22f417320..a76592dd5 100644 --- a/documentation/dev/note-on-datamodel-transition.rst +++ b/documentation/dev/note-on-datamodel-transition.rst @@ -11,41 +11,42 @@ A note on the ongoing data model transition ============================================ -FlexMeasures is already ~3 years in the making. It's a normal process for well-maintained software to update architectural principles during such a time. +FlexMeasures is already ~5 years in the making. It's a normal process for well-maintained software to update architectural principles during such a time.
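(Stepping back to the ``notation.rst`` hunk above: when a timeseries with "kWh" unit is sent to a "kW" sensor, the conversion uses the resolution of the timeseries. Below is a minimal sketch of that arithmetic. It uses the ``pint`` library purely for illustration; the docs above only promise that convertible units are accepted.)

```python
# Hedged sketch: converting a posted "kWh" value to a "kW" sensor unit,
# using the resolution of the posted timeseries (15 minutes in this example).
import pint

ureg = pint.UnitRegistry()

resolution = ureg.Quantity(15, "minute")  # resolution of the posted timeseries
energy = ureg.Quantity(1, "kWh")          # one posted event value

power = (energy / resolution).to("kW")
print(power)  # 4.0 kilowatt: 1 kWh spread over 15 minutes averages 4 kW
```

So a "W" reading is simply rescaled by a factor of 1000, while an energy unit additionally needs the event resolution to become a power.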
-We are in the middle of a refactoring which affects the data model, and if you are using FlexMeasures on your own server, we want you to know the following: +We are finishing up a refactoring which affects the data model, and if you are using FlexMeasures on your own server, we want you to know the following: We have your back ------------------ -If you work with the current model, there will be support to transition data to the new model once it's active. Actually, we are already working with the new model in some projects, so talk to us if you're interested. +By upgrading FlexMeasures one minor version at a time, you get the most out of our transition tools, including database upgrades (moving data over from the old to the new model automatically), plugin compatibility warnings, deprecation warnings for upcoming sunsets, and blackout tests (:ref:`more info here`). +If you still work with the old model and are having trouble transitioning data to the current model, let us know. This transition is in your interest, as well ---------------------------------------------- -We do this transition so we can make FlexMeasures even more useful. For instance: support for more kinds of assets (energy plus related sensors). Or better forecasting and scheduling support. +We did this transition so we could make FlexMeasures even more useful. For instance: support for more kinds of assets (energy plus related sensors), and better support for forecasting, scheduling and reporting. What are the big changes? ----------------------------- -There are two important transitions happening in this transition: +Two important changes happened in this transition: -1. First, we'll be deprecating the specific data types ``Asset``, ``Market`` and ``WeatherSensor``. We learned that to manage energy flexibility, you need all sort of sensors, and thus a more generalisable data model. When we model assets and sensors, we'll also better be able to differentiate the business from the data world. -2. Second, we'll fully integrate the `timely-beliefs framework `_ as the model for our time series data, which brings some major benefits for programmers as it lets us handle uncertain, multi-source time series data in a special Pandas data frame. +1. First, we deprecated the specific data types ``Asset``, ``Market`` and ``WeatherSensor``. We learned that to manage energy flexibility, you need all sorts of sensors, and thus a more generalisable data model. When we modelled assets and sensors, we also became better able to differentiate the business world from the data world. +2. Second, we fully integrated the `timely-beliefs framework `_ as the model for our time series data, which brings some major benefits for programmers as it lets us handle uncertain, multi-source time series data in a special Pandas data frame. -For the curious, here are visualisations of where we're now and where we're going (click image for large versions). +For the curious, here are visualisations of where we were before and where we're going (click image for large versions). -The current model: +The old model: .. image:: https://raw.githubusercontent.com/FlexMeasures/screenshots/main/architecture/FlexMeasures-CurrentDataModel.png :target: https://raw.githubusercontent.com/FlexMeasures/screenshots/main/architecture/FlexMeasures-CurrentDataModel.png :align: center .. :scale: 40% -The new model (work in progress): +The future model (work in progress): ..
image:: https://raw.githubusercontent.com/FlexMeasures/screenshots/main/architecture/FlexMeasures-NewDataModel.png :target: https://raw.githubusercontent.com/FlexMeasures/screenshots/main/architecture/FlexMeasures-NewDataModel.png @@ -64,19 +65,22 @@ Here is a brief list: - |check_| `Support Sensor and Asset diversity `_: We are generalizing our database structure for organising energy data, to support all sorts of sensors and assets, and are letting users move their data to the new database model. We do this so we can better support the diverse set of use cases for energy flexibility. - |check_| `Update API endpoints for time series communication `_: We are updating our API with new endpoints for communicating time series data, thereby consolidating a few older endpoints into a better standard. We do this so we can both simplify our API and documentation, and support a diversity of sensors. - |check_| `Update CLI commands for setting up Sensors and Assets `_: We are updating our CLI commands to reflect the new database structure. We do this to facilitate setting up structure for new users. -- |uncheck_| `Update UI views for Sensors and Assets `_: We are updating our UI views (dashboard maps and analytics charts) according to our new database structure for organising energy data. We do this so users can customize what they want to see. +- |check_| `Update UI views for Sensors and Assets `_: We are updating our UI views (dashboard maps and analytics charts) according to our new database structure for organising energy data. We do this so users can customize what they want to see. +- |check_| `Deprecate old database models `_: We are deprecating the Power, Price and Weather tables in favour of the TimedBelief table, and deprecating the Asset, Market and WeatherSensor tables in favour of the Sensor and GenericAsset tables. We are doing this to clean up the code and database structure. +- |uncheck_| `Infrastructure for reporting on sensors `_: We are working on a backend infrastructure for sensors that record reports based on other sensors, like daily costs and aggregate power flow. - |uncheck_| `Scheduling of sensors `_: We are extending our database structure for Sensors with actuator functionality, and are moving to a model store where scheduling models can be registered. We do this so we can provide better plugin support for scheduling a diverse set of devices. - |uncheck_| `Forecasting of sensors `_: We are revising our forecasting tooling to support fixed-viewpoint forecasts. We do this so we can better support decision moments with the most recent expectations about relevant sensors. -- |uncheck_| `Deprecate old database models `_: We are deprecating the Power, Price and Weather tables in favour of the TimedBelief table, and deprecating the Asset, Market and WeatherSensor tables in favour of the Sensor and GeneralizedAsset tables. We are doing this to clean up the code and database structure. -The state of the transition (March 2022, v0.9.0) +The state of the transition (July 2023, v0.15.0) --------------------------------------------------- Project 9 was implemented with the release of v0.8.0. This work moved a lot of structure over, as well as actual data and some UI (dashboard, assets). We believe that was the hardest part. -We are now working on deprecating the old database models (see project 11). As part of that move, we decided to begin the work on a new API version (v3) which supports only the new data model (and is more REST-like). That work was done in project 13. 
The new APIs for assets and sensor data had already been working before (at /api/dev) and had been powering what is shown in the UI since v0.8.0. +In project 13, we began work on a new API version (v3) that supports only the new data model (and is more REST-like). The new APIs for assets and sensor data had already been working before (at /api/dev) and had been powering what is shown in the UI since v0.8.0. We also implemented many CLI commands which support the new model (project 14). +We have deprecated and sunset all API versions before v3, while offering the ability for FlexMeasures hosts to organise blackout tests, and have removed the old database models (see project 11). + We take care to support people on the old data model so the transition will be as smooth as possible, as we said above. One part of this is that the ``flexmeasures db upgrade`` command copies your data to the new model. Also, creating new data (e.g. old-style assets) creates new-style data (e.g. assets/sensors) automatically. However, some edge cases are not supported in this way. For instance, edited asset meta data might have to be re-entered later. Feel free to contact us to discuss the transition if needed. diff --git a/documentation/host/modes.rst b/documentation/host/modes.rst index e126e5a9c..0c74282b4 100644 --- a/documentation/host/modes.rst +++ b/documentation/host/modes.rst @@ -16,8 +16,6 @@ In this mode, the server is assumed to be used as a demonstration tool. Most of - [UI] Logged-in users can view queues on the demo server (usually only admins can do that) - [UI] Demo servers often display login credentials, so visitors can try out functionality. Use the :ref:`demo-credentials-config` config setting to do this. - [UI] The dashboard shows all non-empty asset groups, instead of only the ones for the current user. -- [UI] The analytics page mocks confidence intervals around power, price and weather data, so that the demo data doesn't need to have them. -- [UI] The portfolio page mocks flexibility numbers and a mocked control action. Play ------ diff --git a/documentation/index.rst b/documentation/index.rst index 597f9ad5e..a545fa8f6 100644 --- a/documentation/index.rst +++ b/documentation/index.rst @@ -169,7 +169,6 @@ The platform operator of FlexMeasures can be an Aggregator. :maxdepth: 1 concepts/benefits - concepts/benefits_of_flex concepts/inbuilt-smart-functionality concepts/algorithms concepts/security_auth diff --git a/documentation/tut/forecasting_scheduling.rst b/documentation/tut/forecasting_scheduling.rst index 01a4cef45..d15cb9435 100644 --- a/documentation/tut/forecasting_scheduling.rst +++ b/documentation/tut/forecasting_scheduling.rst @@ -40,7 +40,7 @@ You can also clear the job queues: When the main FlexMeasures process runs (e.g. by ``flexmeasures run``\ ), the queues of forecasting and scheduling jobs can be visited at ``http://localhost:5000/tasks/forecasting`` and ``http://localhost:5000/tasks/schedules``\ , respectively (by admins). -When forecasts and schedules have been generated, they should be visible at ``http://localhost:5000/analytics``. +When forecasts and schedules have been generated, they should be visible at ``http://localhost:5000/assets/``. .. note:: You can run workers who process jobs on different computers than the main server process. This can be a great architectural choice. Just keep in mind to use the same databases (postgres/redis) and to stick to the same FlexMeasures version on both. 
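To complement the queue URLs in the tutorial hunk above, here is a minimal sketch of peeking at those job queues programmatically. It leans on what this diff itself shows, namely that FlexMeasures keeps its RQ queues on ``current_app.queues`` (the removed legacy ``save_to_db`` further below enqueues via ``current_app.queues["forecasting"]``); the ``"scheduling"`` key is an assumption based on the queue names mentioned above.

```python
# Hedged sketch (requires a Flask application context on a FlexMeasures server)
# of inspecting the forecasting and scheduling job queues.
from flask import current_app

for queue_name in ("forecasting", "scheduling"):
    queue = current_app.queues[queue_name]  # an rq.Queue
    print(f"{queue_name}: {queue.count} queued job(s)")  # rq.Queue.count
```

Workers on other machines would simply connect to the same Redis instance, as the note above says.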
diff --git a/documentation/views/analytics.rst b/documentation/views/analytics.rst deleted file mode 100644 index 392f50e03..000000000 --- a/documentation/views/analytics.rst +++ /dev/null @@ -1,54 +0,0 @@ -.. _analytics: - -**************** -Client analytics -**************** - -The client analytics page shows relevant data to an asset's operation: production and consumption, market prices and weather data. -The view serves to browse through available data history and to assess how the app is monitoring and forecasting data streams from various sources. -In particular, the page contains: - -.. contents:: - :local: - :depth: 1 - - -.. image:: https://github.com/FlexMeasures/screenshots/raw/main/screenshot_analytics.png - :align: center -.. :scale: 40% - - -.. _analytics_controls: - -Data filtering -============= - -FlexMeasures offers data analytics on various aggregation levels: per asset, per asset type or even per higher aggregation levels like all renewables. - -The time window is freely selectable. - -In addition, the source of market and weather data can be selected, as well as the forecast horizon. - -For certain assets, which bundle meters on the same location, individual traces can be shown next to each other in the (upper left) power plot, for comparison. - - -.. _analytics_plots: - -Data visualisation -================== - -In each plot, the data is shown for different types of data: measurements (e.g. of power or prices), forecasts and schedules (only for power, obviously). - -In the FlexMeasures platform, forecasting models can indicate a range of uncertainty around their forecasts, which will also be shown in plots if available. - - -.. _analytics_metrics: - -Metrics -========== - -FlexMeasures summarises the visualised data as realised (by measurement) and expected (by forecast) sums. -In addition, the mean average error (MAE) and the weighted absolute percentage error (WAPE) are computed for power, -weather and price data if forecasts are available for the chosen time range. - - diff --git a/documentation/views/control.rst b/documentation/views/control.rst deleted file mode 100644 index 7be04a9c1..000000000 --- a/documentation/views/control.rst +++ /dev/null @@ -1,37 +0,0 @@ -.. _control: - -***************** -Flexibility opportunities -***************** - -Flexibility opportunities have commercial value that users can valorise on. -When FlexMeasures has identified commercial value of flexibility, the user is suggested to act on it. -This might happen in an automated fashion (scripts reading out suggested schedules from the FlexMeasures API and implementing them to local operations if possible) or manually (operators agreeing with the opportunities identified by FlexMeasures and acting on the suggested schedules). - -For this latter case, in the Flexibility opportunities web-page (a yet-to-be designed UI feature discussed below), FlexMeasures could show all flexibility opportunities that the user can act on for a selected time window. - -.. contents:: - :local: - :depth: 1 - - -Visualisation of opportunities -======================== - -Visualising flexibility opportunities and their effects is not straightforward. -Flexibility opportunities can cause changes to the power profile of an asset in potentially complex ways. -One example is called the rebound effect, where a decrease in consumption leads to an increase in consumption at a later point in time, because consumption is essentially postponed. 
-Such effects could be taken into account by FlexMeasures and shown to the user, e.g. as a part of expected value calculations and power profile forecasts. - -Below is an example of what this could look like. -This is a potential UX design which we have not implemented yet. - -.. image:: https://github.com/FlexMeasures/screenshots/raw/main/screenshot_control.png - :align: center -.. :scale: 40% - -The operator can select flexibility opportunities with an attached value, and see the effects on the power profile in a visual manner. -Listed flexibility opportunities include previously realised opportunities and currently offered opportunities. -Currently offered opportunities are presented as an order book, where they are sorted according to their commercial value. - -Of course, depending on the time window selection and constraints set by the asset owner, the rebound effects of an opportunity may partially take place outside of the selected time window. diff --git a/documentation/views/dashboard.rst b/documentation/views/dashboard.rst index 76cfa5b6e..b466867c4 100644 --- a/documentation/views/dashboard.rst +++ b/documentation/views/dashboard.rst @@ -24,8 +24,7 @@ Interactive map of assets ========================= The map shows all of the user's assets with icons for each asset type. -Clicking on an asset allows the user to see its current state (e.g. latest measurement of wind power production) and to navigate to the :ref:`analytics` page -to see more details, for instance forecasts. +Hovering over an asset shows its name and ownership, and clicking on an asset leads to its page, which shows more details, for instance forecasts. .. _dashboard_summary: Summary of asset types ====================== The summary below the map lists all asset types that the user has hooked up to the platform and how many of each there are. -Clicking on the asset type name leads to the :ref:`analytics` page, where data is shown aggregated for that asset type. +Clicking on the asset type name leads to the asset's page, where its data is shown. Grouping by accounts diff --git a/documentation/views/portfolio.rst b/documentation/views/portfolio.rst deleted file mode 100644 index 32cfea0f2..000000000 --- a/documentation/views/portfolio.rst +++ /dev/null @@ -1,97 +0,0 @@ -.. _portfolio: - -****************** -Portfolio overview -****************** - -The portfolio overview shows results and opportunities regarding the user's asset portfolio. -The view serves to get an overview over the portfolio's energy status and can be viewed with either -the consumption or the generation side aggregated. - -In particular, the page contains: - -.. contents:: - :local: - :depth: 1 - - -.. image:: https://github.com/FlexMeasures/screenshots/raw/main/screenshot_portfolio.png - :align: center -.. :scale: 40% - - -.. _portfolio_statements: - -Statements about energy and flex activations ======================================================= - -The financial statements separate the effects of energy consumption/production and flexible schedules over two tables. - -Energy summary ----------------------- - -The top table lists the effects of energy trading for each asset type in the user's portfolio. -Production and consumption values are total volumes within the selected time window. -[#f1]_ - -Costs and revenues are calculated based on the relevant market prices for the user within the selected time window.
-A consumer will only have costs, while a prosumer may have both costs and revenues. -A supplier has revenues, since it sells energy to the other roles within FlexMeasures. - -Finally, the financial statements show the total profit or loss per asset type. - - -Market status ----------------------------------- -.. note:: This feature is mocked for now. - -The bottom table lists the effects of flexible schedules for each asset type in the user's portfolio. -Separate columns are stated for each type of scheduled deviation from the status quo, e.g. curtailment and shifting (see :ref:`flexibility_types`), with relevant total volumes within the selected time window. -[#f1]_ - -Costs and revenues are calculated based on the following internal method for profit sharing: -Asset owners that follow flexible schedules via the platform will generate revenues. -Suppliers that follow flexible schedules via the platform will generate both costs and revenues, where the revenues come from interacting with external markets. -Finally, the financial statements show the total profit or loss per asset. - -.. rubric:: Footnotes - -.. [#f1] For time windows that include future time slots, future values are based on forecasts. - - -.. _portfolio_power_profile: - -Power profile measurements and forecasts -======================================== - -The power profile shows total production and consumption over the selected time window. -A switch allows the user to view the contribution of each asset type to either total as a stacked plot. -Past time slots show measurement data, whereas future time slots show forecasts. -When suggested changes exist in flexible schedules during the selected time window, the plot is overlaid with highlights (see :ref:`portfolio_flexibility_opportunities` ). - - -.. _portfolio_flexibility_effects: - -Changes to the power profile due to flexible schedules -===================================================== - -A crucial goal of FlexMeasures is to visualise the opportunities within flexible schedules. -This goal is not yet completely realised, but we show a mock here of how this could like when realised: - -Just below the power profile, the net effect of flexible schedules that have previously been computed by FlexMeasures is plotted. -The profile indicates the change in power resulting from schedules that are planned in the future, as well as from schedules that had been planned in the past. -Positive values indicate an increase in production or a decrease in consumption, both of which result in an increased load on the network. -For short-term changes in power due to activation of flexibility, this is sometimes called up-regulation. -Negative values indicate a decrease in production or an increase in consumption, which result in a decreased load on the network (down-regulation). -When flexibility opportunities exist in the selected time window, the plot is overlaid with highlights (see :ref:`portfolio_flexibility_opportunities` ). - - -.. _portfolio_flexibility_opportunities: - -Opportunities to valorise on flexibility -============================================== - -When flexibility opportunities exist in the selected time window, plots are overlaid with highlights indicating time slots -in which flexible scheduling adjustments can be taken in the future or were missed in the past. -The default time window (the next 24 hours) shows immediately upcoming opportunities to valorise on flexibility opportunities. 
-The user could learn more about identified opportunities on a yet-to-be-developed view which goes further into details. diff --git a/flexmeasures/api/common/schemas/sensors.py b/flexmeasures/api/common/schemas/sensors.py index 3b4daa65e..d33eaecb7 100644 --- a/flexmeasures/api/common/schemas/sensors.py +++ b/flexmeasures/api/common/schemas/sensors.py @@ -2,9 +2,6 @@ from marshmallow import fields from flexmeasures.api import FMValidationError -from flexmeasures.api.common.utils.api_utils import ( - get_sensor_by_generic_asset_type_and_location, -) from flexmeasures.utils.entity_address_utils import ( parse_entity_address, EntityAddressException, @@ -33,20 +30,21 @@ def _serialize(self, sensor: Sensor, attr, data, **kwargs) -> int: class SensorField(fields.Str): """Field that de-serializes to a Sensor, - and serializes a Sensor, Asset, Market or WeatherSensor into an entity address (string).""" + and serializes a Sensor into an entity address (string). + """ # todo: when Actuators also get an entity address, refactor this class to EntityField, # where an Entity represents anything with an entity address: we currently foresee Sensors and Actuators def __init__( self, - entity_type: str, - fm_scheme: str, + entity_type: str = "sensor", + fm_scheme: str = "fm1", *args, **kwargs, ): """ - :param entity_type: "sensor", "connection", "market" or "weather_sensor" + :param entity_type: "sensor" (in the future, possibly also another type of resource that is assigned an entity address) :param fm_scheme: "fm0" or "fm1" """ self.entity_type = entity_type @@ -58,20 +56,7 @@ def _deserialize(self, value, attr, obj, **kwargs) -> Sensor: try: ea = parse_entity_address(value, self.entity_type, self.fm_scheme) if self.fm_scheme == "fm0": - if self.entity_type == "connection": - sensor = Sensor.query.filter( - Sensor.id == ea["asset_id"] - ).one_or_none() - elif self.entity_type == "market": - sensor = Sensor.query.filter( - Sensor.name == ea["market_name"] - ).one_or_none() - elif self.entity_type == "weather_sensor": - sensor = get_sensor_by_generic_asset_type_and_location( - ea["weather_sensor_type_name"], ea["latitude"], ea["longitude"] - ) - else: - return NotImplemented + raise EntityAddressException("The fm0 scheme is no longer supported.") else: sensor = Sensor.query.filter(Sensor.id == ea["sensor_id"]).one_or_none() if sensor is not None: diff --git a/flexmeasures/api/common/schemas/tests/test_sensors.py b/flexmeasures/api/common/schemas/tests/test_sensors.py index 598d6221a..fe80811f7 100644 --- a/flexmeasures/api/common/schemas/tests/test_sensors.py +++ b/flexmeasures/api/common/schemas/tests/test_sensors.py @@ -16,22 +16,6 @@ "fm1", "height", ), - ( - build_entity_address( - dict(market_name="epex_da"), "market", fm_scheme="fm0" - ), - "market", - "fm0", - "epex_da", - ), - ( - build_entity_address( - dict(owner_id=1, asset_id=4), "connection", fm_scheme="fm0" - ), - "connection", - "fm0", - "Test battery with no known prices", - ), ], ) def test_sensor_field_straightforward( @@ -47,9 +31,6 @@ def test_sensor_field_straightforward( sf = SensorField(entity_type, fm_scheme) deser = sf.deserialize(entity_address, None, None) assert deser.name == exp_deserialization_name - if fm_scheme == "fm0" and entity_type in ("connection", "market", "weather_sensor"): - # These entity types are deserialized to Sensors, which have no entity address under the fm0 scheme - return assert sf.serialize(entity_type, {entity_type: deser}) == entity_address @@ -57,28 +38,30 @@ def test_sensor_field_straightforward( 
"entity_address, entity_type, fm_scheme, error_msg", [ ( - "ea1.2021-01.io.flexmeasures:some.weird:identifier%that^is*not)used", + build_entity_address( + dict(market_name="epex_da"), "market", fm_scheme="fm0" + ), "market", "fm0", - "Could not parse", + "fm0 scheme is no longer supported", ), ( "ea1.2021-01.io.flexmeasures:fm1.some.weird:identifier%that^is*not)used", - "market", + "sensor", "fm1", "Could not parse", ), ( build_entity_address( - dict(market_name="non_existing_market"), "market", fm_scheme="fm0" + dict(sensor_id=99999999999999), "sensor", fm_scheme="fm1" ), - "market", - "fm0", + "sensor", + "fm1", "doesn't exist", ), ( build_entity_address(dict(sensor_id=-1), "sensor", fm_scheme="fm1"), - "market", + "sensor", "fm1", "Could not parse", ), diff --git a/flexmeasures/api/common/utils/api_utils.py b/flexmeasures/api/common/utils/api_utils.py index b7ba5bbc3..b1213fa8e 100644 --- a/flexmeasures/api/common/utils/api_utils.py +++ b/flexmeasures/api/common/utils/api_utils.py @@ -1,30 +1,19 @@ from __future__ import annotations from timely_beliefs.beliefs.classes import BeliefsDataFrame -from typing import List, Sequence, Tuple, Union -import copy -from datetime import datetime, timedelta -from json import loads as parse_json, JSONDecodeError +from typing import List, Sequence, Union +from datetime import timedelta from flask import current_app -from inflection import pluralize from numpy import array from psycopg2.errors import UniqueViolation from rq.job import Job from sqlalchemy.exc import IntegrityError -import timely_beliefs as tb from flexmeasures.data import db -from flexmeasures.data.models.assets import Asset, Power -from flexmeasures.data.models.generic_assets import GenericAsset, GenericAssetType -from flexmeasures.data.models.markets import Price -from flexmeasures.data.models.time_series import Sensor, TimedBelief -from flexmeasures.data.models.weather import WeatherSensor, Weather -from flexmeasures.data.services.time_series import drop_unchanged_beliefs -from flexmeasures.data.utils import save_to_session, save_to_db as modern_save_to_db +from flexmeasures.data.utils import save_to_db from flexmeasures.api.common.responses import ( invalid_replacement, - unrecognized_sensor, ResponseTuple, request_processed, already_received_and_successfully_processed, @@ -32,95 +21,6 @@ from flexmeasures.utils.error_utils import error_handling_router -def list_access(service_listing, service_name): - """ - For a given USEF service name (API endpoint) in a service listing, - return the list of USEF roles that are allowed to access the service. - """ - return next( - service["access"] - for service in service_listing["services"] - if service["name"] == service_name - ) - - -def contains_empty_items(groups: List[List[str]]): - """ - Return True if any of the items in the groups is empty. 
- """ - for group in groups: - for item in group: - if item == "" or item is None: - return True - return False - - -def parse_as_list( - connection: str | float | Sequence[str | float], of_type: type | None = None -) -> Sequence[str | float | None]: - """ - Return a list of connections (or values), even if it's just one connection (or value) - """ - connections: Sequence[Union[str, float, None]] = [] - if not isinstance(connection, list): - if of_type is None: - connections = [connection] # type: ignore - else: - try: - connections = [of_type(connection)] - except TypeError: - connections = [None] - else: # key should have been plural - if of_type is None: - connections = connection - else: - try: - connections = [of_type(c) for c in connection] - except TypeError: - connections = [None] - return connections - - -# TODO: deprecate ― we should be using webargs to get data from a request, it's more descriptive and has error handling -def get_form_from_request(_request) -> Union[dict, None]: - if _request.method == "GET": - d = _request.args.to_dict( - flat=False - ) # From MultiDict, obtain all values with the same key as a list - parsed_d = {} - for k, v_list in d.items(): - parsed_v_list = [] - for v in v_list: - try: - parsed_v = parse_json(v) - except JSONDecodeError: - parsed_v = v - if isinstance(parsed_v, list): - parsed_v_list.extend(parsed_v) - else: - parsed_v_list.append(v) - if len(parsed_v_list) == 1: # Flatten single-value lists - parsed_d[k] = parsed_v_list[0] - else: - parsed_d[k] = parsed_v_list - return parsed_d - elif _request.method == "POST": - return _request.get_json(force=True) - else: - return None - - -def append_doc_of(fun): - def decorator(f): - if f.__doc__: - f.__doc__ += fun.__doc__ - else: - f.__doc__ = fun.__doc__ - return f - - return decorator - - def upsample_values( value_groups: Union[List[List[float]], List[float]], from_resolution: timedelta, @@ -139,86 +39,6 @@ def upsample_values( return value_groups -def groups_to_dict( - connection_groups: List[str], - value_groups: List[List[str]], - generic_asset_type_name: str, - plural_name: str | None = None, - groups_name="groups", -) -> dict: - """Put the connections and values in a dictionary and simplify if groups have identical values and/or if there is - only one group. 
- - Examples: - - >> connection_groups = [[1]] - >> value_groups = [[300, 300, 300]] - >> response_dict = groups_to_dict(connection_groups, value_groups, "connection") - >> print(response_dict) - << { - "connection": 1, - "values": [300, 300, 300] - } - - >> connection_groups = [[1], [2]] - >> value_groups = [[300, 300, 300], [300, 300, 300]] - >> response_dict = groups_to_dict(connection_groups, value_groups, "connection") - >> print(response_dict) - << { - "connections": [1, 2], - "values": [300, 300, 300] - } - - >> connection_groups = [[1], [2]] - >> value_groups = [[300, 300, 300], [400, 400, 400]] - >> response_dict = groups_to_dict(connection_groups, value_groups, "connection") - >> print(response_dict) - << { - "groups": [ - { - "connection": 1, - "values": [300, 300, 300] - }, - { - "connection": 2, - "values": [400, 400, 400] - } - ] - } - """ - - if plural_name is None: - plural_name = pluralize(generic_asset_type_name) - - # Simplify groups that have identical values - value_groups, connection_groups = unique_ever_seen(value_groups, connection_groups) - - # Simplify if there is only one group - if len(value_groups) == len(connection_groups) == 1: - if len(connection_groups[0]) == 1: - return { - generic_asset_type_name: connection_groups[0][0], - "values": value_groups[0], - } - else: - return {plural_name: connection_groups[0], "values": value_groups[0]} - else: - d: dict = {groups_name: []} - for connection_group, value_group in zip(connection_groups, value_groups): - if len(connection_group) == 1: - d[groups_name].append( - { - generic_asset_type_name: connection_group[0], - "values": value_group, - } - ) - else: - d[groups_name].append( - {plural_name: connection_group, "values": value_group} - ) - return d - - def unique_ever_seen(iterable: Sequence, selector: Sequence): """ Return unique iterable elements with corresponding lists of selector elements, preserving order. @@ -244,106 +64,6 @@ def unique_ever_seen(iterable: Sequence, selector: Sequence): return u, s -def message_replace_name_with_ea(message_with_connections_as_asset_names: dict) -> dict: - """ - For each connection in the message specified by a name, replace that name with the correct entity address. - TODO: Deprecated. 
This function is now only used in tests of deprecated API versions and should go (also asset_replace_name_with_id) - """ - message_with_connections_as_eas = copy.deepcopy( - message_with_connections_as_asset_names - ) - if "connection" in message_with_connections_as_asset_names: - message_with_connections_as_eas["connection"] = asset_replace_name_with_id( - parse_as_list( # type:ignore - message_with_connections_as_eas["connection"], of_type=str - ) - ) - elif "connections" in message_with_connections_as_asset_names: - message_with_connections_as_eas["connections"] = asset_replace_name_with_id( - parse_as_list( # type:ignore - message_with_connections_as_eas["connections"], of_type=str - ) - ) - elif "groups" in message_with_connections_as_asset_names: - for i, group in enumerate(message_with_connections_as_asset_names["groups"]): - if "connection" in group: - message_with_connections_as_eas["groups"][i][ - "connection" - ] = asset_replace_name_with_id( - parse_as_list(group["connection"], of_type=str) # type:ignore - ) - elif "connections" in group: - message_with_connections_as_eas["groups"][i][ - "connections" - ] = asset_replace_name_with_id( - parse_as_list(group["connections"], of_type=str) # type:ignore - ) - return message_with_connections_as_eas - - -def asset_replace_name_with_id(connections_as_name: List[str]) -> List[str]: - """Look up the owner and id given the asset name and construct a type 1 USEF entity address.""" - connections_as_ea = [] - for asset_name in connections_as_name: - asset = Asset.query.filter(Asset.name == asset_name).one_or_none() - connections_as_ea.append(asset.entity_address) - return connections_as_ea - - -def get_sensor_by_generic_asset_type_and_location( - generic_asset_type_name: str, latitude: float = 0, longitude: float = 0 -) -> Union[Sensor, ResponseTuple]: - """ - Search a sensor by generic asset type and location. - Can create a sensor if needed (depends on API mode) - and then inform the requesting user which one to use. 
- """ - # Look for the Sensor object - sensor = ( - Sensor.query.join(GenericAsset) - .join(GenericAssetType) - .filter(GenericAssetType.name == generic_asset_type_name) - .filter(GenericAsset.generic_asset_type_id == GenericAssetType.id) - .filter(GenericAsset.latitude == latitude) - .filter(GenericAsset.longitude == longitude) - .filter(Sensor.generic_asset_id == GenericAsset.id) - .one_or_none() - ) - if sensor is None: - create_sensor_if_unknown = False - if current_app.config.get("FLEXMEASURES_MODE", "") == "play": - create_sensor_if_unknown = True - - # either create a new weather sensor and post to that - if create_sensor_if_unknown: - current_app.logger.info("CREATING NEW WEATHER SENSOR...") - weather_sensor = WeatherSensor( - name="Weather sensor for %s at latitude %s and longitude %s" - % (generic_asset_type_name, latitude, longitude), - weather_sensor_type_name=generic_asset_type_name, - latitude=latitude, - longitude=longitude, - ) - db.session.add(weather_sensor) - db.session.flush() # flush so that we can reference the new object in the current db session - sensor = weather_sensor.corresponding_sensor - - # or query and return the nearest sensor and let the requesting user post to that one - else: - nearest_weather_sensor = WeatherSensor.query.order_by( - WeatherSensor.great_circle_distance( - latitude=latitude, longitude=longitude - ).asc() - ).first() - if nearest_weather_sensor is not None: - return unrecognized_sensor( - *nearest_weather_sensor.location, - ) - else: - return unrecognized_sensor() - return sensor - - def enqueue_forecasting_jobs( forecasting_jobs: list[Job] | None = None, ): @@ -360,11 +80,8 @@ def save_and_enqueue( forecasting_jobs: list[Job] | None = None, save_changed_beliefs_only: bool = True, ) -> ResponseTuple: - # Attempt to save - status = modern_save_to_db( - data, save_changed_beliefs_only=save_changed_beliefs_only - ) + status = save_to_db(data, save_changed_beliefs_only=save_changed_beliefs_only) db.session.commit() # Only enqueue forecasting jobs upon successfully saving new data @@ -382,122 +99,6 @@ def save_and_enqueue( return invalid_replacement() -def save_to_db( - timed_values: Union[BeliefsDataFrame, List[Union[Power, Price, Weather]]], - forecasting_jobs: List[Job] = [], - save_changed_beliefs_only: bool = True, -) -> ResponseTuple: - """Put the timed values into the database and enqueue forecasting jobs. - - Data can only be replaced on servers in play mode. - - TODO: remove this legacy function in its entirety (announced v0.8.0) - - :param timed_values: BeliefsDataFrame or a list of Power, Price or Weather values to be saved - :param forecasting_jobs: list of forecasting Jobs for redis queues. - :param save_changed_beliefs_only: if True, beliefs that are already stored in the database with an earlier belief time are dropped. - :returns: ResponseTuple - """ - - import warnings - - warnings.warn( - "The method api.common.utils.api_utils.save_to_db is deprecated. 
Check out the following replacements:" - "- [recommended option] to store BeliefsDataFrames only, switch to data.utils.save_to_db" - "- to store BeliefsDataFrames and enqueue jobs, switch to api.common.utils.api_utils.save_and_enqueue" - ) - - if isinstance(timed_values, BeliefsDataFrame): - - if save_changed_beliefs_only: - # Drop beliefs that haven't changed - timed_values = drop_unchanged_beliefs(timed_values) - - # Work around bug in which groupby still introduces an index level, even though we asked it not to - if None in timed_values.index.names: - timed_values.index = timed_values.index.droplevel(None) - - if timed_values.empty: - current_app.logger.debug("Nothing new to save") - return already_received_and_successfully_processed() - - current_app.logger.info("SAVING TO DB AND QUEUEING...") - try: - if isinstance(timed_values, BeliefsDataFrame): - TimedBelief.add_to_session( - session=db.session, beliefs_data_frame=timed_values - ) - else: - save_to_session(timed_values) - db.session.flush() - [current_app.queues["forecasting"].enqueue_job(job) for job in forecasting_jobs] - db.session.commit() - return request_processed() - except IntegrityError as e: - current_app.logger.warning(e) - db.session.rollback() - - # Possibly allow data to be replaced depending on config setting - if current_app.config.get("FLEXMEASURES_ALLOW_DATA_OVERWRITE", False): - if isinstance(timed_values, BeliefsDataFrame): - TimedBelief.add_to_session( - session=db.session, - beliefs_data_frame=timed_values, - allow_overwrite=True, - ) - else: - save_to_session(timed_values, overwrite=True) - [ - current_app.queues["forecasting"].enqueue_job(job) - for job in forecasting_jobs - ] - db.session.commit() - return request_processed() - else: - return already_received_and_successfully_processed() - - -def determine_belief_timing( - event_values: list, - start: datetime, - resolution: timedelta, - horizon: timedelta, - prior: datetime, - sensor: tb.Sensor, -) -> Tuple[List[datetime], List[timedelta]]: - """Determine event starts from start, resolution and len(event_values), - and belief horizons from horizon, prior, or both, taking into account - the sensor's knowledge horizon function. - - In case both horizon and prior is set, we take the greatest belief horizon, - which represents the earliest belief time. - """ - event_starts = [start + j * resolution for j in range(len(event_values))] - belief_horizons_from_horizon = None - belief_horizons_from_prior = None - if horizon is not None: - belief_horizons_from_horizon = [horizon] * len(event_values) - if prior is None: - return event_starts, belief_horizons_from_horizon - if prior is not None: - belief_horizons_from_prior = [ - event_start - prior - sensor.knowledge_horizon(event_start) - for event_start in event_starts - ] - if horizon is None: - return event_starts, belief_horizons_from_prior - if ( - belief_horizons_from_horizon is not None - and belief_horizons_from_prior is not None - ): - belief_horizons = [ - max(a, b) - for a, b in zip(belief_horizons_from_horizon, belief_horizons_from_prior) - ] - return event_starts, belief_horizons - raise ValueError("Missing horizon or prior.") - - def catch_timed_belief_replacements(error: IntegrityError): """Catch IntegrityErrors due to a UniqueViolation on the TimedBelief primary key. 
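The refactored ``save_and_enqueue`` above now persists ``BeliefsDataFrame`` objects via ``flexmeasures.data.utils.save_to_db``. For readers new to the timely-beliefs frames this code handles, here is a minimal construction sketch; the constructor keywords follow the timely-beliefs README and should be treated as assumptions against your installed version.

```python
# Hedged sketch: building the kind of BeliefsDataFrame that save_and_enqueue
# persists (keyword names follow the timely-beliefs README).
from datetime import datetime, timedelta

import pytz
import timely_beliefs as tb

sensor = tb.Sensor("demo power", unit="MW", event_resolution=timedelta(minutes=15))
source = tb.BeliefSource("demo script")
belief = tb.TimedBelief(
    sensor=sensor,
    source=source,
    event_start=datetime(2023, 7, 1, 12, tzinfo=pytz.utc),
    belief_horizon=timedelta(hours=6),  # recorded 6 hours before the event
    value=3.5,
)
bdf = tb.BeliefsDataFrame([belief])
# bdf is indexed by event_start, belief_time, source and cumulative_probability;
# save_and_enqueue(bdf) would store it and kick off any forecasting jobs.
```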
diff --git a/flexmeasures/api/common/utils/decorators.py b/flexmeasures/api/common/utils/decorators.py deleted file mode 100644 index 4096114a7..000000000 --- a/flexmeasures/api/common/utils/decorators.py +++ /dev/null @@ -1,73 +0,0 @@ -from __future__ import annotations - -from functools import wraps - -from flask import current_app, request, Response -from flask_json import as_json -from werkzeug.datastructures import Headers - -from flexmeasures.api.common.utils.api_utils import get_form_from_request - - -def as_response_type(response_type): - """Decorator which adds a "type" parameter to the data of the flask response. - Example: - - @app.route('/postMeterData') - @as_response_type("PostMeterDataResponse") - @as_json - def post_meter_data() -> dict: - return {"message": "Meter data posted"} - - The response.json will be: - - { - "message": "Meter data posted", - "type": "PostMeterDataResponse" - } - - :param response_type: The response type. - """ - - def wrapper(fn): - @wraps(fn) - @as_json - def decorated_service(*args, **kwargs): - try: - current_app.logger.info(get_form_from_request(request)) - except OSError as e: # don't crash if request can't be logged (e.g. [Errno 90] Message too long) - current_app.logger.info(e) - response = fn(*args, **kwargs) # expects flask response object - if not ( - hasattr(response, "json") - and hasattr(response, "headers") - and hasattr(response, "status_code") - ): - current_app.logger.warning( - "Response is not a Flask response object. I did not assign a response type." - ) - return response - data, status_code, headers = split_response(response) - if "type" in data: - current_app.logger.warning( - "Response already contains 'type' key. I did not assign a new response type." - ) - else: - data["type"] = response_type - headers.pop("content-length", None) - headers.pop("Content-Length", None) - return data, status_code, headers - - return decorated_service - - return wrapper - - -def split_response(response: Response) -> tuple[dict, int, dict]: - """Split Flask Response object into json data, status code and headers.""" - data = response.json - headers = dict( - zip(Headers.keys(response.headers), Headers.values(response.headers)) - ) - status_code = response.status_code - return data, status_code, headers diff --git a/flexmeasures/api/common/utils/migration_utils.py b/flexmeasures/api/common/utils/migration_utils.py deleted file mode 100644 index 875991266..000000000 --- a/flexmeasures/api/common/utils/migration_utils.py +++ /dev/null @@ -1,37 +0,0 @@ -""" -This module is part of our data model migration (see https://github.com/SeitaBV/flexmeasures/projects/9). -It will become obsolete when we deprecate the fm0 scheme for entity addresses. -""" - -from typing import List, Optional, Union - -from flexmeasures.api.common.responses import ( - deprecated_api_version, - unrecognized_market, - ResponseTuple, -) -from flexmeasures.data.models.time_series import Sensor -from flexmeasures.data.queries.sensors import ( - query_sensor_by_name_and_generic_asset_type_name, -) - - -def get_sensor_by_unique_name( - sensor_name: str, generic_asset_type_names: Optional[List[str]] = None -) -> Union[Sensor, ResponseTuple]: - """Search a sensor by unique name, returning a ResponseTuple if it is not found. - - Optionally specify a list of generic asset type names to filter on. - This function should be used only for sensors that correspond to the old Market class. 
- """ - # Look for the Sensor object - sensors = query_sensor_by_name_and_generic_asset_type_name( - sensor_name, generic_asset_type_names - ).all() - if len(sensors) == 0: - return unrecognized_market(sensor_name) - elif len(sensors) > 1: - return deprecated_api_version( - f"Multiple sensors were found named {sensor_name}." - ) - return sensors[0] diff --git a/flexmeasures/api/common/utils/validators.py b/flexmeasures/api/common/utils/validators.py index 50ce9f22c..80f2e1081 100644 --- a/flexmeasures/api/common/utils/validators.py +++ b/flexmeasures/api/common/utils/validators.py @@ -2,22 +2,17 @@ from datetime import datetime, timedelta from functools import wraps -from typing import List, Tuple, Union, Optional +from typing import Tuple, Union, Optional import re import isodate from isodate.isoerror import ISO8601Error -import inflect -from inflection import pluralize -from pandas.tseries.frequencies import to_offset from flask import request, current_app from flask_json import as_json -from flask_security import current_user import marshmallow from webargs.flaskparser import parser -from flexmeasures.api.common.schemas.sensors import SensorField from flexmeasures.data.schemas.times import DurationField from flexmeasures.api.common.responses import ( # noqa: F401 required_info_missing, @@ -36,16 +31,6 @@ unrecognized_connection_group, unrecognized_asset, ) -from flexmeasures.api.common.utils.api_utils import ( - get_form_from_request, - parse_as_list, - contains_empty_items, - upsample_values, -) -from flexmeasures.data import db -from flexmeasures.data.models.data_sources import DataSource -from flexmeasures.data.services.users import get_users -from flexmeasures.utils.time_utils import server_now """ This module has validators used by API endpoints <= 2.0 to describe @@ -56,53 +41,6 @@ """ -p = inflect.engine() - - -def validate_user_sources(sources: Union[int, str, List[Union[int, str]]]) -> List[int]: - """ - Return a list of user-based data source ids, given: - - one or more user ids - - one or more account role names - """ - sources = ( - sources if isinstance(sources, list) else [sources] - ) # Make sure sources is a list - user_source_ids: List[int] = [] - for source in sources: - if isinstance(source, int): # Parse as user id - try: - user_source_ids.extend( - db.session.query(DataSource.id) - .filter(DataSource.user_id == source) - .one_or_none() - ) - except TypeError: - current_app.logger.warning("Could not retrieve data source %s" % source) - pass - else: # Parse as account role name - user_ids = [user.id for user in get_users(account_role_name=source)] - user_source_ids.extend( - [ - params[0] - for params in db.session.query(DataSource.id) - .filter(DataSource.user_id.in_(user_ids)) - .all() - ] - ) - return list(set(user_source_ids)) # only unique ids - - -def include_current_user_source_id(source_ids: List[int]) -> List[int]: - """Includes the source id of the current user.""" - source_ids.extend( - db.session.query(DataSource.id) - .filter(DataSource.user_id == current_user.id) - .one_or_none() - ) - return list(set(source_ids)) # only unique source ids - - def parse_horizon(horizon_str: str) -> Tuple[Optional[timedelta], bool]: """ Validates whether a horizon string represents a valid ISO 8601 (repeating) time interval. @@ -159,32 +97,6 @@ def parse_duration( return None -def parse_isodate_str(start: str) -> Union[datetime, None]: - """ - Validates whether the string 'start' is a valid ISO 8601 datetime. 
- """ - try: - return isodate.parse_datetime(start) - except (ISO8601Error, AttributeError): - return None - - -def valid_sensor_units(sensor: str) -> List[str]: - """ - Returns the accepted units for this sensor. - """ - if sensor == "temperature": - return ["°C", "0C"] - elif sensor == "irradiance": - return ["kW/m²", "kW/m2"] - elif sensor == "wind speed": - return ["m/s"] - else: - raise NotImplementedError( - "Unknown sensor or physical unit, cannot determine valid units." - ) - - def optional_duration_accepted(default_duration: timedelta): """Decorator which specifies that a GET or POST request accepts an optional duration. It parses relevant form data and sets the "duration" keyword param. @@ -227,692 +139,3 @@ def decorated_service(*args, **kwargs): return decorated_service return wrapper - - -def optional_user_sources_accepted( - default_source: int | str | list[int | str] | None = None, -): - """Decorator which specifies that a GET or POST request accepts an optional source or list of data sources. - It parses relevant form data and sets the "user_source_ids" keyword parameter. - - Data originating from the requesting user is included by default. - That is, user_source_ids always includes the source id of the requesting user. - - Each source should either be a known USEF role name or a user id. - We'll parse them as a list of source ids. - - Case 1: - If a request states one or more data sources, then we'll only query those, in addition to the user's own data. - Default sources specified in the decorator (see example below) are ignored. - - Case 2: - If a request does not state any data sources, a list of default sources will be used. - - Case 2A: - Default sources can be specified in the decorator (see example below). - - Case 2B: - If no default sources are specified in the decorator, all sources are included. - - Example: - - @app.route('/getMeterData') - @optional_sources_accepted("MDC") - def get_meter_data(user_source_ids): - return 'Meter data posted' - - The source ids then include the user's own id, - and ids of other users that are registered as a Meter Data Company. - - If the message specifies: - - .. code-block:: json - - { - "sources": ["Prosumer", "ESCo"] - } - - The source ids then include the user's own id, - and ids of other users whose organisation account is registered as a Prosumer and/or Energy Service Company. - """ - - def wrapper(fn): - @wraps(fn) - @as_json - def decorated_service(*args, **kwargs): - form = get_form_from_request(request) - if form is None: - current_app.logger.warning( - "Unsupported request method for unpacking 'source' from request." - ) - return invalid_method(request.method) - - if "source" in form: - validated_user_source_ids = validate_user_sources(form["source"]) - if None in validated_user_source_ids: - return invalid_source(form["source"]) - kwargs["user_source_ids"] = include_current_user_source_id( - validated_user_source_ids - ) - elif default_source is not None: - kwargs["user_source_ids"] = include_current_user_source_id( - validate_user_sources(default_source) - ) - else: - kwargs["user_source_ids"] = None - - return fn(*args, **kwargs) - - return decorated_service - - return wrapper - - -def optional_prior_accepted( - ex_post: bool = False, infer_missing: bool = True, infer_missing_play: bool = False -): - """Decorator which specifies that a GET or POST request accepts an optional prior. - It parses relevant form data and sets the "prior" keyword param. 
- - Interpretation for GET requests: - - Denotes "at least before " - - This results in the filter belief_time_window = (None, prior) - - Interpretation for POST requests: - - Denotes "recorded to some datetime, - - this results in the assignment belief_time = prior - - :param ex_post: if True, only ex-post datetimes are allowed. - :param infer_missing: if True, servers assume that the belief_time of posted - values is server time. This setting is meant to be used for POST requests. - :param infer_missing_play: if True, servers in play mode assume that the belief_time of posted - values is server time. This setting is meant to be used for POST requests. - """ - - def wrapper(fn): - @wraps(fn) - @as_json - def decorated_service(*args, **kwargs): - form = get_form_from_request(request) - if form is None: - current_app.logger.warning( - "Unsupported request method for unpacking 'prior' from request." - ) - return invalid_method(request.method) - - if "prior" in form: - prior = parse_isodate_str(form["prior"]) - if ex_post is True: - start = parse_isodate_str(form["start"]) - duration = parse_duration(form["duration"], start) - # todo: validate start and duration (refactor already duplicate code from period_required and optional_horizon_accepted) - knowledge_time = ( - start + duration - ) # todo: take into account knowledge horizon function - if prior < knowledge_time: - extra_info = "Meter data can only be observed after the fact." - return invalid_horizon(extra_info) - elif infer_missing is True or ( - infer_missing_play is True - and current_app.config.get("FLEXMEASURES_MODE", "") == "play" - ): - # A missing prior is inferred by the server - prior = server_now() - else: - # Otherwise, a missing prior is fine (a horizon may still be inferred by the server) - prior = None - - kwargs["prior"] = prior - return fn(*args, **kwargs) - - return decorated_service - - return wrapper - - -def optional_horizon_accepted( # noqa C901 - ex_post: bool = False, - infer_missing: bool = True, - infer_missing_play: bool = False, - accept_repeating_interval: bool = False, -): - """Decorator which specifies that a GET or POST request accepts an optional horizon. - The horizon should be in accordance with the ISO 8601 standard. - It parses relevant form data and sets the "horizon" keyword param (a timedelta). - - Interpretation for GET requests: - - Denotes "at least before the fact (positive horizon), - or at most after the fact (negative horizon)" - - This results in the filter belief_horizon_window = (horizon, None) - - Interpretation for POST requests: - - Denotes "at before the fact (positive horizon), - or at after the fact (negative horizon)" - - this results in the assignment belief_horizon = horizon - - For example: - - @app.route('/postMeterData') - @optional_horizon_accepted() - def post_meter_data(horizon): - return 'Meter data posted' - - :param ex_post: if True, only non-positive horizons are allowed. - :param infer_missing: if True, servers assume that the belief_horizon of posted - values is 0 hours. This setting is meant to be used for POST requests. - :param infer_missing_play: if True, servers in play mode assume that the belief_horizon of posted - values is 0 hours. This setting is meant to be used for POST requests. 
- :param accept_repeating_interval: if True, the "rolling" keyword param is also set - (this was used for POST requests before v2.0) - """ - - def wrapper(fn): - @wraps(fn) - @as_json - def decorated_service(*args, **kwargs): - form = get_form_from_request(request) - if form is None: - current_app.logger.warning( - "Unsupported request method for unpacking 'horizon' from request." - ) - return invalid_method(request.method) - - rolling = True - if "horizon" in form: - horizon, rolling = parse_horizon(form["horizon"]) - if horizon is None: - current_app.logger.warning("Cannot parse 'horizon' value") - return invalid_horizon() - elif ex_post is True: - if horizon > timedelta(hours=0): - extra_info = "Meter data must have a zero or negative horizon to indicate observations after the fact." - return invalid_horizon(extra_info) - elif rolling is True and accept_repeating_interval is False: - extra_info = ( - "API versions 2.0 and higher use regular ISO 8601 durations instead of repeating time intervals. " - "For example: R/P1D should be replaced by P1D." - ) - return invalid_horizon(extra_info) - elif infer_missing is True or ( - infer_missing_play is True - and current_app.config.get("FLEXMEASURES_MODE", "") == "play" - ): - # A missing horizon is set to zero - horizon = timedelta(hours=0) - else: - # Otherwise, a missing horizon is fine (a prior may still be inferred by the server) - horizon = None - - kwargs["horizon"] = horizon - if infer_missing is True and accept_repeating_interval is True: - kwargs["rolling"] = rolling - return fn(*args, **kwargs) - - return decorated_service - - return wrapper - - -def unit_required(fn): - """Decorator which specifies that a GET or POST request must specify a unit. - It parses relevant form data and sets the "unit keyword param. - Example: - - @app.route('/postMeterData') - @unit_required - def post_meter_data(unit): - return 'Meter data posted' - - The message must specify a 'unit'. - """ - - @wraps(fn) - @as_json - def wrapper(*args, **kwargs): - form = get_form_from_request(request) - if form is None: - current_app.logger.warning( - "Unsupported request method for unpacking 'unit' from request." - ) - return invalid_method(request.method) - - if "unit" in form: - unit = form["unit"] - else: - current_app.logger.warning("Request missing 'unit'.") - return invalid_unit(quantity=None, units=None) - - kwargs["unit"] = unit - return fn(*args, **kwargs) - - return wrapper - - -def period_required(fn): - """Decorator which specifies that a GET or POST request must specify a time period (by start and duration). - It parses relevant form data and sets the "start" and "duration" keyword params. - Example: - - @app.route('/postMeterData') - @period_required - def post_meter_data(period): - return 'Meter data posted' - - The message must specify a 'start' and a 'duration' in accordance with the ISO 8601 standard. - This decorator should not be used together with optional_duration_accepted. - """ - - @wraps(fn) - @as_json - def wrapper(*args, **kwargs): - form = get_form_from_request(request) - if form is None: - current_app.logger.warning( - "Unsupported request method for unpacking 'start' and 'duration' from request." 
- ) - return invalid_method(request.method) - - if "start" in form: - start = parse_isodate_str(form["start"]) - if not start: - current_app.logger.warning("Cannot parse 'start' value") - return invalid_period() - if start.tzinfo is None: - current_app.logger.warning("Cannot parse timezone of 'start' value") - return invalid_timezone( - "Start time should explicitly state a timezone." - ) - else: - current_app.logger.warning("Request missing 'start'.") - return invalid_period() - kwargs["start"] = start - if "duration" in form: - duration = parse_duration(form["duration"], start) - if not duration: - current_app.logger.warning("Cannot parse 'duration' value") - return invalid_period() - else: - current_app.logger.warning("Request missing 'duration'.") - return invalid_period() - kwargs["duration"] = duration - return fn(*args, **kwargs) - - return wrapper - - -def assets_required( - generic_asset_type_name: str, plural_name: str | None = None, groups_name="groups" -): - """Decorator which specifies that a GET or POST request must specify one or more assets. - It parses relevant form data and sets the "generic_asset_name_groups" keyword param. - Example: - - @app.route('/postMeterData') - @assets_required("connection", plural_name="connections") - def post_meter_data(generic_asset_name_groups): - return 'Meter data posted' - - Given this example, the message must specify one or more assets as "connections". - If that is the case, then the assets are passed to the function as generic_asset_name_groups. - - Connections can be listed in one of the following ways: - - value of 'connection' key (for a single asset) - - values of 'connections' key (for multiple assets that have the same timeseries data) - - values of the 'connection' and/or 'connections' keys listed under the 'groups' key - (for multiple assets with different timeseries data) - """ - if plural_name is None: - plural_name = pluralize(generic_asset_type_name) - - def wrapper(fn): - @wraps(fn) - @as_json - def decorated_service(*args, **kwargs): - form = get_form_from_request(request) - if form is None: - current_app.logger.warning( - "Unsupported request method for unpacking '%s' from request." - % plural_name - ) - return invalid_method(request.method) - - if generic_asset_type_name in form: - generic_asset_name_groups = [ - parse_as_list(form[generic_asset_type_name]) - ] - elif plural_name in form: - generic_asset_name_groups = [parse_as_list(form[plural_name])] - elif groups_name in form: - generic_asset_name_groups = [] - for group in form["groups"]: - if generic_asset_type_name in group: - generic_asset_name_groups.append( - parse_as_list(group[generic_asset_type_name]) - ) - elif plural_name in group: - generic_asset_name_groups.append( - parse_as_list(group[plural_name]) - ) - else: - current_app.logger.warning( - "Group %s missing %s" % (group, plural_name) - ) - return unrecognized_connection_group() - else: - current_app.logger.warning("Request missing %s or group." % plural_name) - return unrecognized_connection_group() - - if not contains_empty_items(generic_asset_name_groups): - kwargs["generic_asset_name_groups"] = generic_asset_name_groups - return fn(*args, **kwargs) - else: - current_app.logger.warning("Request includes empty %s." % plural_name) - return unrecognized_connection_group() - - return decorated_service - - return wrapper - - -def values_required(fn): - """Decorator which specifies that a GET or POST request must specify one or more values. 
- It parses relevant form data and sets the "value_groups" keyword param. - Example: - - @app.route('/postMeterData') - @values_required - def post_meter_data(value_groups): - return 'Meter data posted' - - The message must specify one or more values. If that is the case, then the values are passed to the - function as value_groups. - """ - - @wraps(fn) - @as_json - def wrapper(*args, **kwargs): - form = get_form_from_request(request) - if form is None: - current_app.logger.warning( - "Unsupported request method for unpacking 'values' from request." - ) - return invalid_method(request.method) - - if "value" in form: - value_groups = [parse_as_list(form["value"], of_type=float)] - elif "values" in form: - value_groups = [parse_as_list(form["values"], of_type=float)] - elif "groups" in form: - value_groups = [] - for group in form["groups"]: - if "value" in group: - value_groups.append(parse_as_list(group["value"], of_type=float)) - elif "values" in group: - value_groups.append(parse_as_list(group["values"], of_type=float)) - else: - current_app.logger.warning("Group %s missing value(s)" % group) - return ptus_incomplete() - else: - current_app.logger.warning("Request missing value(s) or group.") - return ptus_incomplete() - - if not contains_empty_items(value_groups): - kwargs["value_groups"] = value_groups - return fn(*args, **kwargs) - else: - extra_info = "Request includes empty or ill-formatted value(s)." - current_app.logger.warning(extra_info) - return ptus_incomplete(extra_info) - - return wrapper - - -def type_accepted(message_type: str): - """Decorator which specifies that a GET or POST request must specify the specified message type. Example: - - @app.route('/postMeterData') - @type_accepted('PostMeterDataRequest') - def post_meter_data(): - return 'Meter data posted' - - The message must specify 'PostMeterDataRequest' as its 'type'. - - :param message_type: The message type. - """ - - def wrapper(fn): - @wraps(fn) - @as_json - def decorated_service(*args, **kwargs): - form = get_form_from_request(request) - if form is None: - current_app.logger.warning( - "Unsupported request method for unpacking 'type' from request." - ) - return invalid_method(request.method) - elif "type" not in form: - current_app.logger.warning("Request is missing message type.") - return no_message_type() - elif form["type"] != message_type: - current_app.logger.warning("Type is not accepted for this endpoint.") - return invalid_message_type(message_type) - else: - return fn(*args, **kwargs) - - return decorated_service - - return wrapper - - -def units_accepted(quantity: str, *units: str): - """Decorator which specifies that a GET or POST request must specify one of the - specified physical units. First parameter specifies the physical or economical quantity. - It parses relevant form data and sets the "unit" keyword param. - Example: - - @app.route('/postMeterData') - @units_accepted("power", 'MW', 'MWh') - def post_meter_data(unit): - return 'Meter data posted' - - The message must either specify 'MW' or 'MWh' as the unit. - - :param quantity: The physical or economic quantity - :param units: The possible units. - """ - - def wrapper(fn): - @wraps(fn) - @as_json - def decorated_service(*args, **kwargs): - form = get_form_from_request(request) - if form is None: - current_app.logger.warning( - "Unsupported request method for unpacking 'unit' from request." 
- ) - return invalid_method(request.method) - elif "unit" not in form: - current_app.logger.warning("Request is missing unit.") - return invalid_unit(quantity, units) - elif form["unit"] not in units: - current_app.logger.warning( - "Unit %s is not accepted as one of %s." % (form["unit"], units) - ) - return invalid_unit(quantity, units) - else: - kwargs["unit"] = form["unit"] - return fn(*args, **kwargs) - - return decorated_service - - return wrapper - - -def post_data_checked_for_required_resolution( - entity_type: str, fm_scheme: str -): # noqa: C901 - """Decorator which checks that a POST request receives time series data with the event resolutions - required by the sensor. It sets the "resolution" keyword argument. - If the resolution in the data is a multiple of the sensor resolution, values are upsampled to the sensor resolution. - Finally, this decorator also checks if all sensors have the same event_resolution and complains otherwise. - - The resolution of the data is inferred from the duration and the number of values. - Therefore, the decorator should follow after the values_required, period_required and assets_required decorators. - Example: - - @app.route('/postMeterData') - @values_required - @period_required - @assets_required("connection") - @post_data_checked_for_required_resolution("connection") - def post_meter_data(value_groups, start, duration, generic_asset_name_groups, resolution) - return 'Meter data posted' - """ - - def wrapper(fn): - @wraps(fn) - @as_json - def decorated_service(*args, **kwargs): - form = get_form_from_request(request) - if form is None: - current_app.logger.warning( - "Unsupported request method for inferring resolution from request." - ) - return invalid_method(request.method) - - if not all( - key in kwargs - for key in [ - "value_groups", - "start", - "duration", - ] - ): - current_app.logger.warning("Could not infer resolution.") - fields = ("values", "start", "duration") - return required_info_missing(fields, "Resolution cannot be inferred.") - if "generic_asset_name_groups" not in kwargs: - return required_info_missing( - (entity_type), - "Required resolution cannot be found without asset info.", - ) - - # Calculating (inferring) the resolution in the POSTed data - inferred_resolution = ( - (kwargs["start"] + kwargs["duration"]) - kwargs["start"] - ) / len(kwargs["value_groups"][0]) - - # Finding the required resolution for sensors affected in this request - required_resolution = None - last_sensor = None - for asset_group in kwargs["generic_asset_name_groups"]: - for asset_descriptor in asset_group: - # Getting the sensor - sensor = SensorField(entity_type, fm_scheme).deserialize( - asset_descriptor - ) - if sensor is None: - return unrecognized_asset( - f"Failed to look up asset by {asset_descriptor}" - ) - # Complain if sensors don't all require the same resolution - if ( - required_resolution is not None - and sensor.event_resolution != required_resolution - ): - return conflicting_resolutions( - f"Cannot send data for both {sensor} and {last_sensor}." 
- ) - # Setting the resolution & remembering last looked-at sensor - required_resolution = sensor.event_resolution - last_sensor = sensor - - # if inferred resolution is a multiple from required_solution, we can upsample_values - # todo: next line fails on sensors with 0 resolution - if inferred_resolution % required_resolution == timedelta(hours=0): - for i in range(len(kwargs["value_groups"])): - kwargs["value_groups"][i] = upsample_values( - kwargs["value_groups"][i], - from_resolution=inferred_resolution, - to_resolution=required_resolution, - ) - inferred_resolution = required_resolution - - if inferred_resolution != required_resolution: - current_app.logger.warning( - f"Resolution {inferred_resolution} is not accepted. We require {required_resolution}." - ) - return unapplicable_resolution( - isodate.duration_isoformat(required_resolution) - ) - else: - kwargs["resolution"] = inferred_resolution - return fn(*args, **kwargs) - - return decorated_service - - return wrapper - - -def get_data_downsampling_allowed(entity_type: str, fm_scheme: str): - """Decorator which allows downsampling of data which a GET request returns. - It checks for a form parameter "resolution". - If that is given and is a multiple of the sensor's event_resolution, - downsampling is performed on the data. This is done by setting the "resolution" - keyword parameter, which is obeyed by collect_time_series_data and used - in resampling. - - The original resolution of the data is the event_resolution of the sensor. - Therefore, the decorator should follow after the assets_required decorator. - - Example: - - @app.route('/getMeterData') - @assets_required("connection") - @get_data_downsampling_allowed("connection") - def get_meter_data(generic_asset_name_groups, resolution): - return data - - """ - - def wrapper(fn): - @wraps(fn) - @as_json - def decorated_service(*args, **kwargs): - kwargs[ - "resolution" - ] = None # using this decorator means you can expect this attribute, None means default - form = get_form_from_request(request) - if form is None: - current_app.logger.warning( - "Unsupported request method for unpacking 'resolution' from request." - ) - return invalid_method(request.method) - - if "resolution" in form and form["resolution"]: - ds_resolution = parse_duration(form["resolution"]) - if ds_resolution is None: - return invalid_resolution_str(form["resolution"]) - # Check if the resolution can be applied to all sensors (if it is a multiple - # of the event_resolution(s) and thus downsampling is possible) - for asset_group in kwargs["generic_asset_name_groups"]: - for asset_descriptor in asset_group: - sensor = SensorField(entity_type, fm_scheme).deserialize( - asset_descriptor - ) - if sensor is None: - return unrecognized_asset() - sensor_resolution = sensor.event_resolution - if ds_resolution % sensor_resolution != timedelta(minutes=0): - return unapplicable_resolution( - f"{isodate.duration_isoformat(sensor_resolution)} or a multiple hereof." 
- ) - kwargs["resolution"] = to_offset( - isodate.parse_duration(form["resolution"]) - ).freqstr # Convert ISO period string to pandas frequency string - - return fn(*args, **kwargs) - - return decorated_service - - return wrapper diff --git a/flexmeasures/api/v3_0/tests/test_assets_api.py b/flexmeasures/api/v3_0/tests/test_assets_api.py index 7e1ea7593..01df9cf77 100644 --- a/flexmeasures/api/v3_0/tests/test_assets_api.py +++ b/flexmeasures/api/v3_0/tests/test_assets_api.py @@ -294,7 +294,7 @@ def test_post_an_asset_with_invalid_data(client, setup_api_test_data): The right error messages should be in the response and the number of assets has not increased. """ with UserContext("test_admin_user@seita.nl") as prosumer: - num_assets_before = len(prosumer.assets) + num_assets_before = len(prosumer.account.generic_assets) auth_token = get_auth_token(client, "test_admin_user@seita.nl", "testtest") diff --git a/flexmeasures/api/v3_0/tests/test_sensor_schedules.py b/flexmeasures/api/v3_0/tests/test_sensor_schedules.py index b2ab198f0..d03f4091e 100644 --- a/flexmeasures/api/v3_0/tests/test_sensor_schedules.py +++ b/flexmeasures/api/v3_0/tests/test_sensor_schedules.py @@ -9,6 +9,7 @@ from flexmeasures.api.tests.utils import check_deprecation, get_auth_token from flexmeasures.api.v3_0.tests.utils import message_for_trigger_schedule from flexmeasures.data.models.data_sources import DataSource +from flexmeasures.data.models.generic_assets import GenericAsset from flexmeasures.data.models.time_series import Sensor, TimedBelief from flexmeasures.data.tests.utils import work_on_rq from flexmeasures.data.services.scheduling import ( @@ -27,7 +28,7 @@ def test_get_schedule_wrong_job_id( keep_scheduling_queue_empty, ): wrong_job_id = 9999 - sensor = Sensor.query.filter(Sensor.name == "Test battery").one_or_none() + sensor = add_battery_assets["Test battery"].sensors[0] with app.test_client() as client: auth_token = get_auth_token(client, "test_prosumer_user@seita.nl", "testtest") get_schedule_response = client.get( @@ -71,7 +72,7 @@ def test_trigger_schedule_with_invalid_flexmodel( sent_value, err_msg, ): - sensor = Sensor.query.filter(Sensor.name == "Test battery").one_or_none() + sensor = add_battery_assets["Test battery"].sensors[0] with app.test_client() as client: if sent_value: # if None, field is a term we expect in the response, not more message["flex-model"][field] = sent_value @@ -108,7 +109,7 @@ def test_trigger_and_get_schedule_with_unknown_prices( ): auth_token = None with app.test_client() as client: - sensor = Sensor.query.filter(Sensor.name == "Test battery").one_or_none() + sensor = add_battery_assets["Test battery"].sensors[0] # trigger a schedule through the /sensors//schedules/trigger [POST] api endpoint auth_token = get_auth_token(client, "test_prosumer_user@seita.nl", "testtest") @@ -176,7 +177,6 @@ def test_trigger_and_get_schedule( message, asset_name, ): - # Include the price sensor in the flex-context explicitly, to test deserialization price_sensor_id = add_market_prices["epex_da"].id message["flex-context"] = { @@ -187,7 +187,13 @@ def test_trigger_and_get_schedule( # trigger a schedule through the /sensors//schedules/trigger [POST] api endpoint assert len(app.queues["scheduling"]) == 0 - sensor = Sensor.query.filter(Sensor.name == asset_name).one_or_none() + sensor = ( + Sensor.query.filter(Sensor.name == "power") + .join(GenericAsset) + .filter(GenericAsset.id == Sensor.generic_asset_id) + .filter(GenericAsset.name == asset_name) + .one_or_none() + ) with app.test_client() as 
client: auth_token = get_auth_token(client, "test_prosumer_user@seita.nl", "testtest") trigger_schedule_response = client.post( diff --git a/flexmeasures/conftest.py b/flexmeasures/conftest.py index ba23b61c8..4106cce58 100644 --- a/flexmeasures/conftest.py +++ b/flexmeasures/conftest.py @@ -2,9 +2,8 @@ from contextlib import contextmanager import pytest -from random import random +from random import random, seed from datetime import datetime, timedelta -import pytz from isodate import parse_duration import pandas as pd @@ -12,6 +11,8 @@ from flask import request, jsonify from flask_sqlalchemy import SQLAlchemy from flask_security import roles_accepted +from timely_beliefs.sensors.func_store.knowledge_horizons import x_days_ago_at_y_oclock + from werkzeug.exceptions import ( InternalServerError, BadRequest, @@ -24,11 +25,9 @@ from flexmeasures.auth.policy import ADMIN_ROLE from flexmeasures.utils.time_utils import as_server_time from flexmeasures.data.services.users import create_user -from flexmeasures.data.models.assets import AssetType, Asset from flexmeasures.data.models.generic_assets import GenericAssetType, GenericAsset from flexmeasures.data.models.data_sources import DataSource from flexmeasures.data.models.planning.utils import initialize_index -from flexmeasures.data.models.markets import Market, MarketType from flexmeasures.data.models.time_series import Sensor, TimedBelief from flexmeasures.data.models.user import User, Account, AccountRole @@ -231,34 +230,42 @@ def create_roles_users(db, test_accounts) -> dict[str, User]: @pytest.fixture(scope="module") -def setup_markets(db) -> dict[str, Market]: +def setup_markets(db) -> dict[str, Sensor]: return create_test_markets(db) @pytest.fixture(scope="function") -def setup_markets_fresh_db(fresh_db) -> dict[str, Market]: +def setup_markets_fresh_db(fresh_db) -> dict[str, Sensor]: return create_test_markets(fresh_db) -def create_test_markets(db) -> dict[str, Market]: +def create_test_markets(db) -> dict[str, Sensor]: """Create the epex_da market.""" - day_ahead = MarketType( + day_ahead = GenericAssetType( name="day_ahead", - daily_seasonality=True, - weekly_seasonality=True, - yearly_seasonality=True, ) - db.session.add(day_ahead) - epex_da = Market( + epex = GenericAsset( + name="epex", + generic_asset_type=day_ahead, + ) + epex_da = Sensor( name="epex_da", - market_type_name="day_ahead", + generic_asset=epex, event_resolution=timedelta(hours=1), unit="EUR/MWh", - knowledge_horizon_fnc="x_days_ago_at_y_oclock", - knowledge_horizon_par={"x": 1, "y": 12, "z": "Europe/Paris"}, + knowledge_horizon=( + x_days_ago_at_y_oclock, + {"x": 1, "y": 12, "z": "Europe/Paris"}, + ), + attributes=dict( + daily_seasonality=True, + weekly_seasonality=True, + yearly_seasonality=True, + ), ) db.session.add(epex_da) + db.session.flush() # assign an id, so it can be used to set a market_id attribute on a GenericAsset or Sensor return {"epex_da": epex_da} @@ -280,20 +287,10 @@ def create_sources(db) -> dict[str, DataSource]: return {"Seita": seita_source, "ENTSO-E": entsoe_source} -@pytest.fixture(scope="module") -def setup_asset_types(db) -> dict[str, AssetType]: - return create_test_asset_types(db) - - -@pytest.fixture(scope="function") -def setup_asset_types_fresh_db(fresh_db) -> dict[str, AssetType]: - return create_test_asset_types(fresh_db) - - @pytest.fixture(scope="module") def setup_generic_assets( db, setup_generic_asset_types, setup_accounts -) -> dict[str, AssetType]: +) -> dict[str, GenericAsset]: """Make some generic assets used 
throughout.""" return create_generic_assets(db, setup_generic_asset_types, setup_accounts) @@ -301,14 +298,16 @@ def setup_generic_assets( @pytest.fixture(scope="function") def setup_generic_assets_fresh_db( fresh_db, setup_generic_asset_types_fresh_db, setup_accounts_fresh_db -) -> dict[str, AssetType]: +) -> dict[str, GenericAsset]: """Make some generic assets used throughout.""" return create_generic_assets( fresh_db, setup_generic_asset_types_fresh_db, setup_accounts_fresh_db ) -def create_generic_assets(db, setup_generic_asset_types, setup_accounts): +def create_generic_assets( + db, setup_generic_asset_types, setup_accounts +) -> dict[str, GenericAsset]: troposphere = GenericAsset( name="troposphere", generic_asset_type=setup_generic_asset_types["public_good"] ) @@ -335,18 +334,18 @@ def create_generic_assets(db, setup_generic_asset_types, setup_accounts): @pytest.fixture(scope="module") -def setup_generic_asset_types(db) -> dict[str, AssetType]: +def setup_generic_asset_types(db) -> dict[str, GenericAssetType]: """Make some generic asset types used throughout.""" return create_generic_asset_types(db) @pytest.fixture(scope="function") -def setup_generic_asset_types_fresh_db(fresh_db) -> dict[str, AssetType]: +def setup_generic_asset_types_fresh_db(fresh_db) -> dict[str, GenericAssetType]: """Make some generic asset types used throughout.""" return create_generic_asset_types(fresh_db) -def create_generic_asset_types(db): +def create_generic_asset_types(db) -> dict[str, GenericAssetType]: public_good = GenericAssetType( name="public good", ) @@ -372,59 +371,75 @@ def create_generic_asset_types(db): ) -def create_test_asset_types(db) -> dict[str, AssetType]: - """Make some asset types used throughout. - Deprecated. Remove with Asset model.""" - - solar = AssetType( - name="solar", - is_producer=True, - can_curtail=True, - daily_seasonality=True, - yearly_seasonality=True, - ) - db.session.add(solar) - wind = AssetType( - name="wind", - is_producer=True, - can_curtail=True, - daily_seasonality=True, - yearly_seasonality=True, +@pytest.fixture(scope="module") +def setup_assets( + db, setup_accounts, setup_markets, setup_sources, setup_generic_asset_types +) -> dict[str, GenericAsset]: + return create_assets( + db, setup_accounts, setup_markets, setup_sources, setup_generic_asset_types ) - db.session.add(wind) - return dict(solar=solar, wind=wind) -@pytest.fixture(scope="module") -def setup_assets( - db, setup_roles_users, setup_markets, setup_sources, setup_asset_types -) -> dict[str, Asset]: - """Add assets to known test users. - Deprecated. 
Remove with Asset model.""" - # db.session.refresh(setup_roles_users["Test Prosumer User"]) +@pytest.fixture(scope="function") +def setup_assets_fresh_db( + fresh_db, + setup_accounts_fresh_db, + setup_markets_fresh_db, + setup_sources_fresh_db, + setup_generic_asset_types_fresh_db, +) -> dict[str, GenericAsset]: + return create_assets( + fresh_db, + setup_accounts_fresh_db, + setup_markets_fresh_db, + setup_sources_fresh_db, + setup_generic_asset_types_fresh_db, + ) + + +def create_assets( + db, setup_accounts, setup_markets, setup_sources, setup_asset_types +) -> dict[str, GenericAsset]: + """Add assets with power sensors to known test accounts.""" + assets = [] for asset_name in ["wind-asset-1", "wind-asset-2", "solar-asset-1"]: - asset = Asset( + asset = GenericAsset( name=asset_name, - owner_id=setup_roles_users["Test Prosumer User"], - asset_type_name="wind" if "wind" in asset_name else "solar", - event_resolution=timedelta(minutes=15), - capacity_in_mw=1, + generic_asset_type=setup_asset_types["wind"] + if "wind" in asset_name + else setup_asset_types["solar"], + owner=setup_accounts["Prosumer"], latitude=10, longitude=100, - min_soc_in_mwh=0, - max_soc_in_mwh=0, - soc_in_mwh=0, + attributes=dict( + capacity_in_mw=1, + min_soc_in_mwh=0, + max_soc_in_mwh=0, + soc_in_mwh=0, + market_id=setup_markets["epex_da"].id, + is_producer=True, + can_curtail=True, + ), + ) + sensor = Sensor( + name="power", + generic_asset=asset, + event_resolution=timedelta(minutes=15), unit="MW", - market_id=setup_markets["epex_da"].id, + attributes=dict( + daily_seasonality=True, + yearly_seasonality=True, + ), ) - db.session.add(asset) + db.session.add(sensor) assets.append(asset) # one day of test data (one complete sine curve) time_slots = pd.date_range( datetime(2015, 1, 1), datetime(2015, 1, 1, 23, 45), freq="15T" ) + seed(42) # ensure same results over different test runs values = [ random() * (1 + np.sin(x * 2 * np.pi / (4 * 24))) for x in range(len(time_slots)) @@ -434,7 +449,7 @@ def setup_assets( event_start=as_server_time(dt), belief_horizon=parse_duration("PT0M"), event_value=val, - sensor=asset.corresponding_sensor, + sensor=sensor, source=setup_sources["Seita"], ) for dt, val in zip(time_slots, values) @@ -519,6 +534,7 @@ def add_market_prices( end=pd.Timestamp("2015-01-02").tz_localize("Europe/Amsterdam"), resolution="1H", ) + seed(42) # ensure same results over different test runs values = [ random() * (1 + np.sin(x * 2 * np.pi / 24)) for x in range(len(time_slots)) ] @@ -528,7 +544,7 @@ def add_market_prices( belief_horizon=timedelta(hours=0), event_value=val, source=setup_sources["Seita"], - sensor=setup_markets["epex_da"].corresponding_sensor, + sensor=setup_markets["epex_da"], ) for dt, val in zip(time_slots, values) ] @@ -547,84 +563,116 @@ def add_market_prices( belief_horizon=timedelta(hours=0), event_value=val, source=setup_sources["Seita"], - sensor=setup_markets["epex_da"].corresponding_sensor, + sensor=setup_markets["epex_da"], ) for dt, val in zip(time_slots, values) ] db.session.add_all(day2_beliefs) - return {"epex_da": setup_markets["epex_da"].corresponding_sensor} + return {"epex_da": setup_markets["epex_da"]} @pytest.fixture(scope="module") def add_battery_assets( - db: SQLAlchemy, setup_roles_users, setup_markets -) -> dict[str, Asset]: - return create_test_battery_assets(db, setup_roles_users, setup_markets) + db: SQLAlchemy, + setup_roles_users, + setup_accounts, + setup_markets, + setup_generic_asset_types, +) -> dict[str, GenericAsset]: + return 
create_test_battery_assets( + db, setup_accounts, setup_markets, setup_generic_asset_types + ) @pytest.fixture(scope="function") def add_battery_assets_fresh_db( - fresh_db, setup_roles_users_fresh_db, setup_markets_fresh_db -) -> dict[str, Asset]: + fresh_db, + setup_roles_users_fresh_db, + setup_accounts_fresh_db, + setup_markets_fresh_db, + setup_generic_asset_types_fresh_db, +) -> dict[str, GenericAsset]: return create_test_battery_assets( - fresh_db, setup_roles_users_fresh_db, setup_markets_fresh_db + fresh_db, + setup_accounts_fresh_db, + setup_markets_fresh_db, + setup_generic_asset_types_fresh_db, ) def create_test_battery_assets( - db: SQLAlchemy, setup_roles_users, setup_markets -) -> dict[str, Asset]: + db: SQLAlchemy, setup_accounts, setup_markets, generic_asset_types +) -> dict[str, GenericAsset]: """ Add two battery assets, set their capacity values and their initial SOC. """ - db.session.add( - AssetType( - name="battery", + battery_type = generic_asset_types["battery"] + + test_battery = GenericAsset( + name="Test battery", + owner=setup_accounts["Prosumer"], + generic_asset_type=battery_type, + latitude=10, + longitude=100, + attributes=dict( + capacity_in_mw=2, + max_soc_in_mwh=5, + min_soc_in_mwh=0, + soc_in_mwh=2.5, + soc_datetime="2015-01-01T00:00+01", + soc_udi_event_id=203, + market_id=setup_markets["epex_da"].id, is_consumer=True, is_producer=True, can_curtail=True, can_shift=True, + ), + ) + test_battery_sensor = Sensor( + name="power", + generic_asset=test_battery, + event_resolution=timedelta(minutes=15), + unit="MW", + attributes=dict( daily_seasonality=True, weekly_seasonality=True, yearly_seasonality=True, - ) + ), ) + db.session.add(test_battery_sensor) - test_battery = Asset( - name="Test battery", - owner_id=setup_roles_users["Test Prosumer User"], - asset_type_name="battery", - event_resolution=timedelta(minutes=15), - capacity_in_mw=2, - max_soc_in_mwh=5, - min_soc_in_mwh=0, - soc_in_mwh=2.5, - soc_datetime=pytz.timezone("Europe/Amsterdam").localize(datetime(2015, 1, 1)), - soc_udi_event_id=203, + test_battery_no_prices = GenericAsset( + name="Test battery with no known prices", + owner=setup_accounts["Prosumer"], + generic_asset_type=battery_type, latitude=10, longitude=100, - market_id=setup_markets["epex_da"].id, - unit="MW", + attributes=dict( + capacity_in_mw=2, + max_soc_in_mwh=5, + min_soc_in_mwh=0, + soc_in_mwh=2.5, + soc_datetime="2040-01-01T00:00+01", + soc_udi_event_id=203, + market_id=setup_markets["epex_da"].id, + is_consumer=True, + is_producer=True, + can_curtail=True, + can_shift=True, + ), ) - db.session.add(test_battery) - - test_battery_no_prices = Asset( - name="Test battery with no known prices", - owner_id=setup_roles_users["Test Prosumer User"], - asset_type_name="battery", + test_battery_sensor_no_prices = Sensor( + name="power", + generic_asset=test_battery_no_prices, event_resolution=timedelta(minutes=15), - capacity_in_mw=2, - max_soc_in_mwh=5, - min_soc_in_mwh=0, - soc_in_mwh=2.5, - soc_datetime=pytz.timezone("Europe/Amsterdam").localize(datetime(2040, 1, 1)), - soc_udi_event_id=203, - latitude=10, - longitude=100, - market_id=setup_markets["epex_da"].id, unit="MW", + attributes=dict( + daily_seasonality=True, + weekly_seasonality=True, + yearly_seasonality=True, + ), ) - db.session.add(test_battery_no_prices) + db.session.add(test_battery_sensor_no_prices) return { "Test battery": test_battery, "Test battery with no known prices": test_battery_no_prices, @@ -633,84 +681,92 @@ def create_test_battery_assets( 
@pytest.fixture(scope="module") def add_charging_station_assets( - db: SQLAlchemy, setup_roles_users, setup_markets -) -> dict[str, Asset]: - return create_charging_station_assets(db, setup_roles_users, setup_markets) + db: SQLAlchemy, setup_accounts, setup_markets +) -> dict[str, GenericAsset]: + return create_charging_station_assets(db, setup_accounts, setup_markets) @pytest.fixture(scope="function") def add_charging_station_assets_fresh_db( - fresh_db: SQLAlchemy, setup_roles_users_fresh_db, setup_markets_fresh_db -) -> dict[str, Asset]: + fresh_db: SQLAlchemy, setup_accounts_fresh_db, setup_markets_fresh_db +) -> dict[str, GenericAsset]: return create_charging_station_assets( - fresh_db, setup_roles_users_fresh_db, setup_markets_fresh_db + fresh_db, setup_accounts_fresh_db, setup_markets_fresh_db ) def create_charging_station_assets( - db: SQLAlchemy, setup_roles_users, setup_markets -) -> dict[str, Asset]: + db: SQLAlchemy, setup_accounts, setup_markets +) -> dict[str, GenericAsset]: """Add uni- and bi-directional charging station assets, set their capacity value and their initial SOC.""" - db.session.add( - AssetType( - name="one-way_evse", + oneway_evse = GenericAssetType(name="one-way_evse") + twoway_evse = GenericAssetType(name="two-way_evse") + + charging_station = GenericAsset( + name="Test charging station", + owner=setup_accounts["Prosumer"], + generic_asset_type=oneway_evse, + latitude=10, + longitude=100, + attributes=dict( + capacity_in_mw=2, + max_soc_in_mwh=5, + min_soc_in_mwh=0, + soc_in_mwh=2.5, + soc_datetime="2015-01-01T00:00+01", + soc_udi_event_id=203, + market_id=setup_markets["epex_da"].id, is_consumer=True, is_producer=False, can_curtail=True, can_shift=True, + ), + ) + charging_station_power_sensor = Sensor( + name="power", + generic_asset=charging_station, + unit="MW", + event_resolution=timedelta(minutes=15), + attributes=dict( daily_seasonality=True, weekly_seasonality=True, yearly_seasonality=True, - ) + ), ) - db.session.add( - AssetType( - name="two-way_evse", + db.session.add(charging_station_power_sensor) + + bidirectional_charging_station = GenericAsset( + name="Test charging station (bidirectional)", + owner=setup_accounts["Prosumer"], + generic_asset_type=twoway_evse, + latitude=10, + longitude=100, + attributes=dict( + capacity_in_mw=2, + max_soc_in_mwh=5, + min_soc_in_mwh=0, + soc_in_mwh=2.5, + soc_datetime="2015-01-01T00:00+01", + soc_udi_event_id=203, + market_id=setup_markets["epex_da"].id, is_consumer=True, is_producer=True, can_curtail=True, can_shift=True, - daily_seasonality=True, - weekly_seasonality=True, - yearly_seasonality=True, - ) + ), ) - - charging_station = Asset( - name="Test charging station", - owner_id=setup_roles_users["Test Prosumer User"], - asset_type_name="one-way_evse", - event_resolution=timedelta(minutes=15), - capacity_in_mw=2, - max_soc_in_mwh=5, - min_soc_in_mwh=0, - soc_in_mwh=2.5, - soc_datetime=pytz.timezone("Europe/Amsterdam").localize(datetime(2015, 1, 1)), - soc_udi_event_id=203, - latitude=10, - longitude=100, - market_id=setup_markets["epex_da"].id, + bidirectional_charging_station_power_sensor = Sensor( + name="power", + generic_asset=bidirectional_charging_station, unit="MW", - ) - db.session.add(charging_station) - - bidirectional_charging_station = Asset( - name="Test charging station (bidirectional)", - owner_id=setup_roles_users["Test Prosumer User"], - asset_type_name="two-way_evse", event_resolution=timedelta(minutes=15), - capacity_in_mw=2, - max_soc_in_mwh=5, - min_soc_in_mwh=0, - soc_in_mwh=2.5, - 
soc_datetime=pytz.timezone("Europe/Amsterdam").localize(datetime(2015, 1, 1)), - soc_udi_event_id=203, - latitude=10, - longitude=100, - market_id=setup_markets["epex_da"].id, - unit="MW", + attributes=dict( + daily_seasonality=True, + weekly_seasonality=True, + yearly_seasonality=True, + ), ) - db.session.add(bidirectional_charging_station) + db.session.add(bidirectional_charging_station_power_sensor) return { "Test charging station": charging_station, "Test charging station (bidirectional)": bidirectional_charging_station, diff --git a/flexmeasures/data/config.py b/flexmeasures/data/config.py index 99c63e5cc..8209b0243 100644 --- a/flexmeasures/data/config.py +++ b/flexmeasures/data/config.py @@ -39,13 +39,10 @@ def configure_db_for(app: Flask): Base.query = db.session.query_property() # Import all modules here that might define models so that - # they will be registered properly on the metadata. Otherwise + # they will be registered properly on the metadata. Otherwise, # you will have to import them first before calling configure_db(). from flexmeasures.data.models import ( # noqa: F401 time_series, - markets, - assets, - weather, data_sources, user, task_runs, @@ -59,7 +56,7 @@ def configure_db_for(app: Flask): def commit_and_start_new_session(app: Flask): """Use this when a script wants to save a state before continuing Not tested well, just a starting point - not recommended anyway for any logic used by views or tasks. - Maybe session.flush can help you there.""" + Maybe session.flush() can help you there.""" global db, Base, session_options db.session.commit() db.session.close() diff --git a/flexmeasures/data/models/assets.py b/flexmeasures/data/models/assets.py deleted file mode 100644 index 16919d411..000000000 --- a/flexmeasures/data/models/assets.py +++ /dev/null @@ -1,406 +0,0 @@ -from datetime import datetime, timedelta -from typing import Dict, List, Optional, Tuple, Union - -import isodate -import timely_beliefs as tb -import timely_beliefs.utils as tb_utils -from sqlalchemy.orm import Query - -from flexmeasures.data import db -from flexmeasures.data.models.legacy_migration_utils import ( - copy_old_sensor_attributes, - get_old_model_type, -) -from flexmeasures.data.models.user import User -from flexmeasures.data.models.time_series import Sensor, TimedValue, TimedBelief -from flexmeasures.data.models.generic_assets import ( - create_generic_asset, - GenericAsset, - GenericAssetType, -) -from flexmeasures.utils.entity_address_utils import build_entity_address -from flexmeasures.utils.flexmeasures_inflection import humanize, pluralize - - -class AssetType(db.Model): - """ - Describing asset types for our purposes - - This model is now considered legacy. See GenericAssetType. 
-    """
-
-    name = db.Column(db.String(80), primary_key=True)
-    # The name we want to see (don't unnecessarily capitalize, so it can be used in a sentence)
-    display_name = db.Column(db.String(80), default="", unique=True)
-    # The explanatory hovel label (don't unnecessarily capitalize, so it can be used in a sentence)
-    hover_label = db.Column(db.String(80), nullable=True, unique=False)
-    is_consumer = db.Column(db.Boolean(), nullable=False, default=False)
-    is_producer = db.Column(db.Boolean(), nullable=False, default=False)
-    can_curtail = db.Column(db.Boolean(), nullable=False, default=False, index=True)
-    can_shift = db.Column(db.Boolean(), nullable=False, default=False, index=True)
-    daily_seasonality = db.Column(db.Boolean(), nullable=False, default=False)
-    weekly_seasonality = db.Column(db.Boolean(), nullable=False, default=False)
-    yearly_seasonality = db.Column(db.Boolean(), nullable=False, default=False)
-
-    def __init__(self, **kwargs):
-        generic_asset_type = GenericAssetType.query.filter_by(
-            name=kwargs["name"]
-        ).one_or_none()
-        if not generic_asset_type:
-            generic_asset_type = GenericAssetType(
-                name=kwargs["name"], description=kwargs.get("hover_label", None)
-            )
-            db.session.add(generic_asset_type)
-        super(AssetType, self).__init__(**kwargs)
-        self.name = self.name.replace(" ", "_").lower()
-        if "display_name" not in kwargs:
-            self.display_name = humanize(self.name)
-
-    @property
-    def plural_name(self) -> str:
-        return pluralize(self.display_name)
-
-    @property
-    def preconditions(self) -> Dict[str, bool]:
-        """Assumptions about the time series data set, such as normality and stationarity
-        For now, this is usable input for Prophet (see init), but it might evolve or go away."""
-        return dict(
-            daily_seasonality=self.daily_seasonality,
-            weekly_seasonality=self.weekly_seasonality,
-            yearly_seasonality=self.yearly_seasonality,
-        )
-
-    @property
-    def weather_correlations(self) -> List[str]:
-        """Known correlations of weather sensor type and asset type."""
-        correlations = []
-        if self.name == "solar":
-            correlations.append("irradiance")
-        if self.name == "wind":
-            correlations.append("wind speed")
-        if self.name in (
-            "one-way_evse",
-            "two-way_evse",
-            "battery",
-            "building",
-        ):
-            correlations.append("temperature")
-        return correlations
-
-    def __repr__(self):
-        return "<AssetType %r>" % self.name
-
-
-class Asset(db.Model, tb.SensorDBMixin):
-    """
-    Each asset is an energy- consuming or producing hardware.
-
-    This model is now considered legacy. See GenericAsset and Sensor.
- """ - - id = db.Column( - db.Integer, - db.ForeignKey("sensor.id", ondelete="CASCADE"), - primary_key=True, - autoincrement=True, - ) - # The name - name = db.Column(db.String(80), default="", unique=True) - # The name we want to see (don't unnecessarily capitalize, so it can be used in a sentence) - display_name = db.Column(db.String(80), default="", unique=True) - # The name of the assorted AssetType - asset_type_name = db.Column( - db.String(80), db.ForeignKey("asset_type.name"), nullable=False - ) - # How many MW at peak usage - capacity_in_mw = db.Column(db.Float, nullable=False) - # State of charge in MWh and its datetime and udi event - min_soc_in_mwh = db.Column(db.Float, nullable=True) - max_soc_in_mwh = db.Column(db.Float, nullable=True) - soc_in_mwh = db.Column(db.Float, nullable=True) - soc_datetime = db.Column(db.DateTime(timezone=True), nullable=True) - soc_udi_event_id = db.Column(db.Integer, nullable=True) - # latitude is the North/South coordinate - latitude = db.Column(db.Float, nullable=False) - # longitude is the East/West coordinate - longitude = db.Column(db.Float, nullable=False) - # owner - owner_id = db.Column(db.Integer, db.ForeignKey("fm_user.id", ondelete="CASCADE")) - # market - market_id = db.Column(db.Integer, db.ForeignKey("market.id"), nullable=True) - - def __init__(self, **kwargs): - - if "unit" not in kwargs: - kwargs["unit"] = "MW" # current default - super(Asset, self).__init__(**kwargs) - - # Create a new Sensor with unique id across assets, markets and weather sensors - # Also keep track of ownership by creating a GenericAsset and assigning the new Sensor to it. - if "id" not in kwargs: - - asset_type = get_old_model_type( - kwargs, AssetType, "asset_type_name", "asset_type" - ) - - # Set up generic asset - generic_asset_kwargs = { - **kwargs, - **copy_old_sensor_attributes( - self, - old_sensor_type_attributes=[ - "can_curtail", - "can_shift", - ], - old_sensor_attributes=[ - "display_name", - "min_soc_in_mwh", - "max_soc_in_mwh", - "soc_in_mwh", - "soc_datetime", - "soc_udi_event_id", - ], - old_sensor_type=asset_type, - ), - } - - if "owner_id" in kwargs: - owner = User.query.get(kwargs["owner_id"]) - if owner: - generic_asset_kwargs.update(account_id=owner.account_id) - new_generic_asset = create_generic_asset("asset", **generic_asset_kwargs) - - # Set up sensor - new_sensor = Sensor( - name=kwargs["name"], - generic_asset=new_generic_asset, - **copy_old_sensor_attributes( - self, - old_sensor_type_attributes=[ - "is_consumer", - "is_producer", - "daily_seasonality", - "weekly_seasonality", - "yearly_seasonality", - "weather_correlations", - ], - old_sensor_attributes=[ - "display_name", - "capacity_in_mw", - "market_id", - ], - old_sensor_type=asset_type, - ), - ) - db.session.add(new_sensor) - db.session.flush() # generates the pkey for new_sensor - sensor_id = new_sensor.id - else: - # The UI may initialize Asset objects from API form data with a known id - sensor_id = kwargs["id"] - self.id = sensor_id - if self.unit != "MW": - raise Exception("FlexMeasures only supports MW as unit for now.") - self.name = self.name.replace(" (MW)", "") - if "display_name" not in kwargs: - self.display_name = humanize(self.name) - - # Copy over additional columns from (newly created) Asset to (newly created) Sensor - if "id" not in kwargs: - db.session.add(self) - db.session.flush() # make sure to generate each column for the old sensor - new_sensor.unit = self.unit - new_sensor.event_resolution = self.event_resolution - new_sensor.knowledge_horizon_fnc = 
self.knowledge_horizon_fnc
-            new_sensor.knowledge_horizon_par = self.knowledge_horizon_par
-
-    asset_type = db.relationship("AssetType", backref=db.backref("assets", lazy=True))
-    owner = db.relationship(
-        "User",
-        backref=db.backref(
-            "assets", lazy=True, cascade="all, delete-orphan", passive_deletes=True
-        ),
-    )
-    market = db.relationship("Market", backref=db.backref("assets", lazy=True))
-
-    def latest_state(self, event_ends_before: Optional[datetime] = None) -> "Power":
-        """Search the most recent event for this sensor, optionally before some datetime."""
-        # todo: replace with Sensor.latest_state
-        power_query = (
-            Power.query.filter(Power.sensor_id == self.id)
-            .filter(Power.horizon <= timedelta(hours=0))
-            .order_by(Power.datetime.desc())
-        )
-        if event_ends_before is not None:
-            power_query = power_query.filter(
-                Power.datetime + self.event_resolution <= event_ends_before
-            )
-        return power_query.first()
-
-    @property
-    def corresponding_sensor(self) -> Sensor:
-        return db.session.query(Sensor).get(self.id)
-
-    @property
-    def generic_asset(self) -> GenericAsset:
-        return db.session.query(GenericAsset).get(self.corresponding_sensor.id)
-
-    def get_attribute(self, attribute: str):
-        """Looks for the attribute on the corresponding Sensor.
-
-        This should be used by all code to read these attributes,
-        over accessing them directly on this class,
-        as this table is in the process to be replaced by the Sensor table.
-        """
-        return self.corresponding_sensor.get_attribute(attribute)
-
-    @property
-    def power_unit(self) -> float:
-        """Return the 'unit' property of the generic asset, just with a more insightful name."""
-        return self.unit
-
-    @property
-    def entity_address_fm0(self) -> str:
-        """Entity address under the fm0 scheme for entity addresses."""
-        return build_entity_address(
-            dict(owner_id=self.owner_id, asset_id=self.id),
-            "connection",
-            fm_scheme="fm0",
-        )
-
-    @property
-    def entity_address(self) -> str:
-        """Entity address under the latest fm scheme for entity addresses."""
-        return build_entity_address(dict(sensor_id=self.id), "sensor")
-
-    @property
-    def location(self) -> Tuple[float, float]:
-        return self.latitude, self.longitude
-
-    def capacity_factor_in_percent_for(self, load_in_mw) -> int:
-        if self.capacity_in_mw == 0:
-            return 0
-        return min(round((load_in_mw / self.capacity_in_mw) * 100, 2), 100)
-
-    @property
-    def is_pure_consumer(self) -> bool:
-        """Return True if this asset is consuming but not producing."""
-        return self.asset_type.is_consumer and not self.asset_type.is_producer
-
-    @property
-    def is_pure_producer(self) -> bool:
-        """Return True if this asset is producing but not consuming."""
-        return self.asset_type.is_producer and not self.asset_type.is_consumer
-
-    def to_dict(self) -> Dict[str, Union[str, float]]:
-        return dict(
-            name=self.name,
-            display_name=self.display_name,
-            asset_type_name=self.asset_type_name,
-            latitude=self.latitude,
-            longitude=self.longitude,
-            capacity_in_mw=self.capacity_in_mw,
-        )
-
-    def __repr__(self):
-        return "<Asset %d: %r (%r), res.: %s, market: %s>" % (
-            self.id,
-            self.name,
-            self.asset_type_name,
-            self.event_resolution,
-            self.market,
-        )
-
-
-def assets_share_location(assets: List[Asset]) -> bool:
-    """
-    Return True if all assets in this list are located on the same spot.
-    TODO: In the future, we might soften this to compare if assets are in the same "housing" or "site".
-    """
-    if not assets:
-        return True
-    return all([a.location == assets[0].location for a in assets])
-
-
-class Power(TimedValue, db.Model):
-    """
-    All measurements of power data are stored in one slim table.
-    Negative values indicate consumption.
-
-    This model is now considered legacy. See TimedBelief.
-    """
-
-    sensor_id = db.Column(
-        db.Integer(),
-        db.ForeignKey("sensor.id", ondelete="CASCADE"),
-        primary_key=True,
-        index=True,
-    )
-    sensor = db.relationship(
-        "Sensor",
-        backref=db.backref(
-            "measurements",
-            lazy=True,
-            cascade="all, delete-orphan",
-            passive_deletes=True,
-        ),
-    )
-
-    @classmethod
-    def make_query(
-        cls,
-        **kwargs,
-    ) -> Query:
-        """Construct the database query."""
-        return super().make_query(**kwargs)
-
-    def to_dict(self):
-        return {
-            "datetime": isodate.datetime_isoformat(self.datetime),
-            "sensor_id": self.sensor_id,
-            "value": self.value,
-            "horizon": self.horizon,
-        }
-
-    def __init__(self, use_legacy_kwargs: bool = True, **kwargs):
-        # todo: deprecate the 'asset_id' argument in favor of 'sensor_id' (announced v0.8.0)
-        if "asset_id" in kwargs and "sensor_id" not in kwargs:
-            kwargs["sensor_id"] = tb_utils.replace_deprecated_argument(
-                "asset_id",
-                kwargs["asset_id"],
-                "sensor_id",
-                None,
-            )
-            kwargs.pop("asset_id", None)
-
-        # todo: deprecate the 'Power' class in favor of 'TimedBelief' (announced v0.8.0)
-        if use_legacy_kwargs is False:
-            # Create corresponding TimedBelief
-            belief = TimedBelief(**kwargs)
-            db.session.add(belief)
-
-            # Convert key names for legacy model
-            kwargs["value"] = kwargs.pop("event_value")
-            kwargs["datetime"] = kwargs.pop("event_start")
-            kwargs["horizon"] = kwargs.pop("belief_horizon")
-            kwargs["sensor_id"] = kwargs.pop("sensor").id
-            kwargs["data_source_id"] = kwargs.pop("source").id
-
-        else:
-            import warnings
-
-            warnings.warn(
-                f"The {self.__class__} class is deprecated. Switch to using the TimedBelief class to suppress this warning.",
-                FutureWarning,
-            )
-
-        super(Power, self).__init__(**kwargs)
-
-    def __repr__(self):
-        return "<Power %.5f on Sensor %s at %s by DataSource %s, horizon %s>" % (
-            self.value,
-            self.sensor_id,
-            self.datetime,
-            self.data_source_id,
-            self.horizon,
-        )
diff --git a/flexmeasures/data/models/forecasting/model_spec_factory.py b/flexmeasures/data/models/forecasting/model_spec_factory.py
index abaaa0f70..4a092a9f1 100644
--- a/flexmeasures/data/models/forecasting/model_spec_factory.py
+++ b/flexmeasures/data/models/forecasting/model_spec_factory.py
@@ -1,4 +1,4 @@
-from typing import Any, Dict, List, Optional, Union
+from typing import Any, Dict, List, Optional
 from datetime import datetime, timedelta, tzinfo
 from pprint import pformat
 import logging
@@ -20,7 +20,6 @@
 import pandas as pd
 
 from flexmeasures.data.models.time_series import Sensor, TimedBelief
-from flexmeasures.data.models.weather import WeatherSensor
 from flexmeasures.data.models.forecasting.utils import (
     create_lags,
     set_training_and_testing_dates,
@@ -229,7 +228,7 @@ def _parameterise_forecasting_by_asset_and_asset_type(
 
 
 def get_normalization_transformation_from_sensor_attributes(
-    sensor: Union[Sensor, WeatherSensor],
+    sensor: Sensor,
 ) -> Optional[Transformation]:
     """
     Transform data to be normal, using the BoxCox transformation. Lambda parameter is chosen
@@ -270,9 +269,9 @@ def configure_regressors_for_nearest_weather_sensor(
     )
     for sensor_name in correlated_sensor_names:
 
-        # Find nearest weather sensor
+        # Find the nearest weather sensor
         closest_sensor = Sensor.find_closest(
-            generic_asset_type_name=sensor.generic_asset.generic_asset_type.name,
+            generic_asset_type_name="weather station",
             sensor_name=sensor_name,
             object=sensor,
         )
diff --git a/flexmeasures/data/models/generic_assets.py b/flexmeasures/data/models/generic_assets.py
index a2b57df6f..6499eb12d 100644
--- a/flexmeasures/data/models/generic_assets.py
+++ b/flexmeasures/data/models/generic_assets.py
@@ -38,6 +38,9 @@ class GenericAssetType(db.Model):
     name = db.Column(db.String(80), default="", unique=True)
     description = db.Column(db.String(80), nullable=True, unique=False)
 
+    def __repr__(self):
+        return "<GenericAssetType %s: %r>" % (self.id, self.name)
+
 
 class GenericAsset(db.Model, AuthModelMixin):
     """An asset is something that has economic value.
diff --git a/flexmeasures/data/models/markets.py b/flexmeasures/data/models/markets.py
deleted file mode 100644
index a3ebdad61..000000000
--- a/flexmeasures/data/models/markets.py
+++ /dev/null
@@ -1,244 +0,0 @@
-from typing import Dict
-
-import timely_beliefs as tb
-from timely_beliefs.sensors.func_store import knowledge_horizons
-import timely_beliefs.utils as tb_utils
-from sqlalchemy.orm import Query
-
-from flexmeasures.data import db
-from flexmeasures.data.models.generic_assets import (
-    create_generic_asset,
-    GenericAsset,
-    GenericAssetType,
-)
-from flexmeasures.data.models.legacy_migration_utils import (
-    copy_old_sensor_attributes,
-    get_old_model_type,
-)
-from flexmeasures.data.models.time_series import Sensor, TimedValue, TimedBelief
-from flexmeasures.utils.entity_address_utils import build_entity_address
-from flexmeasures.utils.flexmeasures_inflection import humanize
-
-
-class MarketType(db.Model):
-    """
-    Describing market types for our purposes.
-    This model is now considered legacy. See GenericAssetType.
-    """
-
-    name = db.Column(db.String(80), primary_key=True)
-    display_name = db.Column(db.String(80), default="", unique=True)
-
-    daily_seasonality = db.Column(db.Boolean(), nullable=False, default=False)
-    weekly_seasonality = db.Column(db.Boolean(), nullable=False, default=False)
-    yearly_seasonality = db.Column(db.Boolean(), nullable=False, default=False)
-
-    def __init__(self, **kwargs):
-        kwargs["name"] = kwargs["name"].replace(" ", "_").lower()
-        if "display_name" not in kwargs:
-            kwargs["display_name"] = humanize(kwargs["name"])
-
-        super(MarketType, self).__init__(**kwargs)
-
-        generic_asset_type = GenericAssetType(
-            name=kwargs["name"], description=kwargs.get("hover_label", None)
-        )
-        db.session.add(generic_asset_type)
-
-    @property
-    def preconditions(self) -> Dict[str, bool]:
-        """Assumptions about the time series data set, such as normality and stationarity
-        For now, this is usable input for Prophet (see init), but it might evolve or go away."""
-        return dict(
-            daily_seasonality=self.daily_seasonality,
-            weekly_seasonality=self.weekly_seasonality,
-            yearly_seasonality=self.yearly_seasonality,
-        )
-
-    def __repr__(self):
-        return "<MarketType %r>" % self.name
-
-
-class Market(db.Model, tb.SensorDBMixin):
-    """
-    Each market is a pricing service.
-
-    This model is now considered legacy. See GenericAsset and Sensor.
- """ - - id = db.Column( - db.Integer, db.ForeignKey("sensor.id"), primary_key=True, autoincrement=True - ) - name = db.Column(db.String(80), unique=True) - display_name = db.Column(db.String(80), default="", unique=True) - market_type_name = db.Column( - db.String(80), db.ForeignKey("market_type.name"), nullable=False - ) - - def __init__(self, **kwargs): - # Set default knowledge horizon function for an economic sensor - if "knowledge_horizon_fnc" not in kwargs: - kwargs["knowledge_horizon_fnc"] = knowledge_horizons.ex_ante.__name__ - if "knowledge_horizon_par" not in kwargs: - kwargs["knowledge_horizon_par"] = { - knowledge_horizons.ex_ante.__code__.co_varnames[1]: "PT0H" - } - kwargs["name"] = kwargs["name"].replace(" ", "_").lower() - if "display_name" not in kwargs: - kwargs["display_name"] = humanize(kwargs["name"]) - - super(Market, self).__init__(**kwargs) - - # Create a new Sensor with unique id across assets, markets and weather sensors - if "id" not in kwargs: - - market_type = get_old_model_type( - kwargs, MarketType, "market_type_name", "market_type" - ) - - generic_asset_kwargs = { - **kwargs, - **copy_old_sensor_attributes( - self, - old_sensor_type_attributes=[], - old_sensor_attributes=[ - "display_name", - ], - old_sensor_type=market_type, - ), - } - new_generic_asset = create_generic_asset("market", **generic_asset_kwargs) - new_sensor = Sensor( - name=kwargs["name"], - generic_asset=new_generic_asset, - **copy_old_sensor_attributes( - self, - old_sensor_type_attributes=[ - "daily_seasonality", - "weekly_seasonality", - "yearly_seasonality", - ], - old_sensor_attributes=[ - "display_name", - ], - old_sensor_type=market_type, - ), - ) - db.session.add(new_sensor) - db.session.flush() # generates the pkey for new_sensor - new_sensor_id = new_sensor.id - else: - # The UI may initialize Market objects from API form data with a known id - new_sensor_id = kwargs["id"] - - self.id = new_sensor_id - - # Copy over additional columns from (newly created) Market to (newly created) Sensor - if "id" not in kwargs: - db.session.add(self) - db.session.flush() # make sure to generate each column for the old sensor - new_sensor.unit = self.unit - new_sensor.event_resolution = self.event_resolution - new_sensor.knowledge_horizon_fnc = self.knowledge_horizon_fnc - new_sensor.knowledge_horizon_par = self.knowledge_horizon_par - - @property - def entity_address_fm0(self) -> str: - """Entity address under the fm0 scheme for entity addresses.""" - return build_entity_address( - dict(market_name=self.name), "market", fm_scheme="fm0" - ) - - @property - def entity_address(self) -> str: - """Entity address under the latest fm scheme for entity addresses.""" - return build_entity_address(dict(sensor_id=self.id), "sensor") - - @property - def corresponding_sensor(self) -> Sensor: - return db.session.query(Sensor).get(self.id) - - @property - def generic_asset(self) -> GenericAsset: - return db.session.query(GenericAsset).get(self.corresponding_sensor.id) - - def get_attribute(self, attribute: str): - """Looks for the attribute on the corresponding Sensor. - - This should be used by all code to read these attributes, - over accessing them directly on this class, - as this table is in the process to be replaced by the Sensor table. 
-        """
-        return self.corresponding_sensor.get_attribute(attribute)
-
-    @property
-    def price_unit(self) -> str:
-        """Return the 'unit' property of the generic asset, just with a more insightful name."""
-        return self.unit
-
-    market_type = db.relationship(
-        "MarketType", backref=db.backref("markets", lazy=True)
-    )
-
-    def __repr__(self):
-        return "<Market %s: %r (%r), res.: %s>" % (
-            self.id,
-            self.name,
-            self.market_type_name,
-            self.event_resolution,
-        )
-
-    def to_dict(self) -> Dict[str, str]:
-        return dict(name=self.name, market_type=self.market_type.name)
-
-
-class Price(TimedValue, db.Model):
-    """
-    All prices are stored in one slim table.
-
-    This model is now considered legacy. See TimedBelief.
-    """
-
-    sensor_id = db.Column(
-        db.Integer(), db.ForeignKey("sensor.id"), primary_key=True, index=True
-    )
-    sensor = db.relationship("Sensor", backref=db.backref("prices", lazy=True))
-
-    @classmethod
-    def make_query(cls, **kwargs) -> Query:
-        """Construct the database query."""
-        return super().make_query(**kwargs)
-
-    def __init__(self, use_legacy_kwargs: bool = True, **kwargs):
-        # todo: deprecate the 'market_id' argument in favor of 'sensor_id' (announced v0.8.0)
-        if "market_id" in kwargs and "sensor_id" not in kwargs:
-            kwargs["sensor_id"] = tb_utils.replace_deprecated_argument(
-                "market_id",
-                kwargs["market_id"],
-                "sensor_id",
-                None,
-            )
-            kwargs.pop("market_id", None)
-
-        # todo: deprecate the 'Price' class in favor of 'TimedBelief' (announced v0.8.0)
-        if use_legacy_kwargs is False:
-            # Create corresponding TimedBelief
-            belief = TimedBelief(**kwargs)
-            db.session.add(belief)
-
-            # Convert key names for legacy model
-            kwargs["value"] = kwargs.pop("event_value")
-            kwargs["datetime"] = kwargs.pop("event_start")
-            kwargs["horizon"] = kwargs.pop("belief_horizon")
-            kwargs["sensor_id"] = kwargs.pop("sensor").id
-            kwargs["data_source_id"] = kwargs.pop("source").id
-
-        else:
-            import warnings
-
-            warnings.warn(
-                f"The {self.__class__} class is deprecated. Switch to using the TimedBelief class to suppress this warning.",
-                FutureWarning,
-            )
-
-        super(Price, self).__init__(**kwargs)
diff --git a/flexmeasures/data/models/planning/tests/conftest.py b/flexmeasures/data/models/planning/tests/conftest.py
index 2d7981baf..5510778db 100644
--- a/flexmeasures/data/models/planning/tests/conftest.py
+++ b/flexmeasures/data/models/planning/tests/conftest.py
@@ -17,6 +17,7 @@ def setup_planning_test_data(db, add_market_prices, add_charging_station_assets)
     Set up data for all planning tests.
""" print("Setting up data for planning tests on %s" % db.engine) + return add_charging_station_assets @pytest.fixture(scope="module") diff --git a/flexmeasures/data/models/planning/tests/test_solver.py b/flexmeasures/data/models/planning/tests/test_solver.py index ec16608c5..1fd0fa073 100644 --- a/flexmeasures/data/models/planning/tests/test_solver.py +++ b/flexmeasures/data/models/planning/tests/test_solver.py @@ -57,7 +57,7 @@ def test_storage_loss_function( def test_battery_solver_day_1( add_battery_assets, add_inflexible_device_forecasts, use_inflexible_device ): - epex_da, battery = get_sensors_from_db() + epex_da, battery = get_sensors_from_db(add_battery_assets) tz = pytz.timezone("Europe/Amsterdam") start = tz.localize(datetime(2015, 1, 1)) end = tz.localize(datetime(2015, 1, 2)) @@ -116,7 +116,7 @@ def test_battery_solver_day_2( and so we expect the scheduler to only: - completely discharge within the last 8 hours """ - _epex_da, battery = get_sensors_from_db() + _epex_da, battery = get_sensors_from_db(add_battery_assets) tz = pytz.timezone("Europe/Amsterdam") start = tz.localize(datetime(2015, 1, 2)) end = tz.localize(datetime(2015, 1, 3)) @@ -197,7 +197,9 @@ def test_battery_solver_day_2( (5, "Test charging station (bidirectional)"), ], ) -def test_charging_station_solver_day_2(target_soc, charging_station_name): +def test_charging_station_solver_day_2( + target_soc, charging_station_name, setup_planning_test_data +): """Starting with a state of charge 1 kWh, within 2 hours we should be able to reach any state of charge in the range [1, 5] kWh for a unidirectional station, or [0, 5] for a bidirectional station, given a charging capacity of 2 kW. @@ -206,9 +208,7 @@ def test_charging_station_solver_day_2(target_soc, charging_station_name): duration_until_target = timedelta(hours=2) epex_da = Sensor.query.filter(Sensor.name == "epex_da").one_or_none() - charging_station = Sensor.query.filter( - Sensor.name == charging_station_name - ).one_or_none() + charging_station = setup_planning_test_data[charging_station_name].sensors[0] assert charging_station.get_attribute("capacity_in_mw") == 2 assert charging_station.get_attribute("market_id") == epex_da.id tz = pytz.timezone("Europe/Amsterdam") @@ -269,7 +269,9 @@ def test_charging_station_solver_day_2(target_soc, charging_station_name): (15, "Test charging station (bidirectional)"), ], ) -def test_fallback_to_unsolvable_problem(target_soc, charging_station_name): +def test_fallback_to_unsolvable_problem( + target_soc, charging_station_name, setup_planning_test_data +): """Starting with a state of charge 10 kWh, within 2 hours we should be able to reach any state of charge in the range [10, 14] kWh for a unidirectional station, or [6, 14] for a bidirectional station, given a charging capacity of 2 kW. 
@@ -282,9 +284,7 @@ def test_fallback_to_unsolvable_problem(target_soc, charging_station_name): expected_gap = 1 epex_da = Sensor.query.filter(Sensor.name == "epex_da").one_or_none() - charging_station = Sensor.query.filter( - Sensor.name == charging_station_name - ).one_or_none() + charging_station = setup_planning_test_data[charging_station_name].sensors[0] assert charging_station.get_attribute("capacity_in_mw") == 2 assert charging_station.get_attribute("market_id") == epex_da.id tz = pytz.timezone("Europe/Amsterdam") @@ -486,7 +486,7 @@ def test_soc_bounds_timeseries(add_battery_assets): """ # get the sensors from the database - epex_da, battery = get_sensors_from_db() + epex_da, battery = get_sensors_from_db(add_battery_assets) # time parameters tz = pytz.timezone("Europe/Amsterdam") @@ -792,7 +792,7 @@ def test_infeasible_problem_error(add_battery_assets): """Try to create a schedule with infeasible constraints. soc-max is 4.5 and soc-target is 8.0""" # get the sensors from the database - _epex_da, battery = get_sensors_from_db() + _epex_da, battery = get_sensors_from_db(add_battery_assets) # time parameters tz = pytz.timezone("Europe/Amsterdam") @@ -837,10 +837,10 @@ def compute_schedule(flex_model): compute_schedule(flex_model) -def get_sensors_from_db(): +def get_sensors_from_db(battery_assets): # get the sensors from the database epex_da = Sensor.query.filter(Sensor.name == "epex_da").one_or_none() - battery = Sensor.query.filter(Sensor.name == "Test battery").one_or_none() + battery = battery_assets["Test battery"].sensors[0] assert battery.get_attribute("market_id") == epex_da.id return epex_da, battery diff --git a/flexmeasures/data/models/time_series.py b/flexmeasures/data/models/time_series.py index 70f1c1f08..399cd145d 100644 --- a/flexmeasures/data/models/time_series.py +++ b/flexmeasures/data/models/time_series.py @@ -8,7 +8,6 @@ import pandas as pd from sqlalchemy.ext.declarative import declared_attr from sqlalchemy.ext.mutable import MutableDict -from sqlalchemy.orm import Query, Session from sqlalchemy.schema import UniqueConstraint from sqlalchemy import inspect import timely_beliefs as tb @@ -20,15 +19,8 @@ from flexmeasures.data.models.parsing_utils import parse_source_arg from flexmeasures.data.services.annotations import prepare_annotations_for_chart from flexmeasures.data.services.timerange import get_timerange -from flexmeasures.data.queries.utils import ( - create_beliefs_query, - get_belief_timing_criteria, - get_source_criteria, -) -from flexmeasures.data.services.time_series import ( - collect_time_series_data, - aggregate_values, -) +from flexmeasures.data.queries.utils import get_source_criteria +from flexmeasures.data.services.time_series import aggregate_values from flexmeasures.utils.entity_address_utils import ( EntityAddressException, build_entity_address, @@ -505,12 +497,12 @@ def find_closest( Can be called with an object that has latitude and longitude properties, for example: - sensor = Sensor.find_closest("weather_station", "wind speed", object=generic_asset) + sensor = Sensor.find_closest("weather station", "wind speed", object=generic_asset) Can also be called with latitude and longitude parameters, for example: - sensor = Sensor.find_closest("weather_station", "temperature", latitude=32, longitude=54) - sensor = Sensor.find_closest("weather_station", "temperature", lat=32, lng=54) + sensor = Sensor.find_closest("weather station", "temperature", latitude=32, longitude=54) + sensor = Sensor.find_closest("weather station", "temperature", lat=32, 
lng=54) Finally, pass in an account_id parameter if you want to query an account other than your own. This only works for admins. Public assets are always queried. """ @@ -766,130 +758,3 @@ def add( def __repr__(self) -> str: """timely-beliefs representation of timed beliefs.""" return tb.TimedBelief.__repr__(self) - - -class TimedValue(object): - """ - A mixin of all tables that store time series data, either forecasts or measurements. - Represents one row. - - Note: This will be deprecated in favour of Timely-Beliefs - based code (see Sensor/TimedBelief) - """ - - @declared_attr - def __tablename__(cls): # noqa: B902 - return cls.__name__.lower() - - """The time at which the value is supposed to (have) happen(ed).""" - - @declared_attr - def datetime(cls): # noqa: B902 - return db.Column(db.DateTime(timezone=True), primary_key=True, index=True) - - """The time delta of measuring or forecasting. - This should be a duration in ISO8601, e.g. "PT10M", which you can turn into a timedelta with - isodate.parse_duration, optionally with a minus sign, e.g. "-PT10M". - Positive durations indicate a forecast into the future, negative ones a backward forecast into the past or simply - a measurement after the fact. - """ - - @declared_attr - def horizon(cls): # noqa: B902 - return db.Column( - db.Interval(), nullable=False, primary_key=True - ) # todo: default=timedelta(hours=0) - - """The value.""" - - @declared_attr - def value(cls): # noqa: B902 - return db.Column(db.Float, nullable=False) - - """The data source.""" - - @declared_attr - def data_source_id(cls): # noqa: B902 - return db.Column(db.Integer, db.ForeignKey("data_source.id"), primary_key=True) - - @classmethod - def make_query( - cls, - old_sensor_names: tuple[str], - query_window: tuple[datetime_type | None, datetime_type | None], - belief_horizon_window: tuple[timedelta | None, timedelta | None] = ( - None, - None, - ), - belief_time_window: tuple[datetime_type | None, datetime_type | None] = ( - None, - None, - ), - belief_time: datetime_type | None = None, - user_source_ids: int | list[int] | None = None, - source_types: list[str] | None = None, - exclude_source_types: list[str] | None = None, - session: Session = None, - ) -> Query: - """ - Can be extended with the make_query function in subclasses. - We identify the assets by their name, which assumes a unique string field can be used. - The query window consists of two optional datetimes (start and end). - The horizon window expects first the shorter horizon (e.g. 6H) and then the longer horizon (e.g. 24H). - The session can be supplied, but if None, the implementation should find a session itself. - - :param user_source_ids: Optional list of user source ids to query only specific user sources - :param source_types: Optional list of source type names to query only specific source types * - :param exclude_source_types: Optional list of source type names to exclude specific source types * - - * If user_source_ids is specified, the "user" source type is automatically included (and not excluded). - Somewhat redundant, though still allowed, is to set both source_types and exclude_source_types. 
- - - # todo: add examples - # todo: switch to using timely_beliefs queries, which are more powerful - """ - if session is None: - session = db.session - start, end = query_window - query = create_beliefs_query(cls, session, Sensor, old_sensor_names, start, end) - belief_timing_criteria = get_belief_timing_criteria( - cls, Sensor, belief_horizon_window, belief_time_window - ) - source_criteria = get_source_criteria( - cls, user_source_ids, source_types, exclude_source_types - ) - return query.filter(*belief_timing_criteria, *source_criteria) - - @classmethod - def search( - cls, - old_sensor_names: str | list[str], - event_starts_after: datetime_type | None = None, - event_ends_before: datetime_type | None = None, - horizons_at_least: timedelta | None = None, - horizons_at_most: timedelta | None = None, - beliefs_after: datetime_type | None = None, - beliefs_before: datetime_type | None = None, - user_source_ids: int - | list[int] - | None = None, # None is interpreted as all sources - source_types: list[str] | None = None, - exclude_source_types: list[str] | None = None, - resolution: str | timedelta = None, - sum_multiple: bool = True, - ) -> tb.BeliefsDataFrame | dict[str, tb.BeliefsDataFrame]: - """Basically a convenience wrapper for services.collect_time_series_data, - where time series data collection is implemented. - """ - return collect_time_series_data( - old_sensor_names=old_sensor_names, - make_query=cls.make_query, - query_window=(event_starts_after, event_ends_before), - belief_horizon_window=(horizons_at_least, horizons_at_most), - belief_time_window=(beliefs_after, beliefs_before), - user_source_ids=user_source_ids, - source_types=source_types, - exclude_source_types=exclude_source_types, - resolution=resolution, - sum_multiple=sum_multiple, - ) diff --git a/flexmeasures/data/models/weather.py b/flexmeasures/data/models/weather.py index d11b3469f..e69de29bb 100644 --- a/flexmeasures/data/models/weather.py +++ b/flexmeasures/data/models/weather.py @@ -1,296 +0,0 @@ -from typing import Dict, Tuple - -import timely_beliefs as tb -from sqlalchemy.orm import Query -from sqlalchemy.ext.hybrid import hybrid_method -from sqlalchemy.sql.expression import func -from sqlalchemy.schema import UniqueConstraint - -from flexmeasures.data import db -from flexmeasures.data.models.legacy_migration_utils import ( - copy_old_sensor_attributes, - get_old_model_type, -) -from flexmeasures.data.models.time_series import Sensor, TimedValue, TimedBelief -from flexmeasures.data.models.generic_assets import ( - create_generic_asset, - GenericAsset, - GenericAssetType, -) -from flexmeasures.utils import geo_utils -from flexmeasures.utils.entity_address_utils import build_entity_address -from flexmeasures.utils.flexmeasures_inflection import humanize - - -class WeatherSensorType(db.Model): - """ - This model is now considered legacy. See GenericAssetType. 
-    """
-
-    name = db.Column(db.String(80), primary_key=True)
-    display_name = db.Column(db.String(80), default="", unique=True)
-
-    daily_seasonality = True
-    weekly_seasonality = False
-    yearly_seasonality = True
-
-    def __init__(self, **kwargs):
-        generic_asset_type = GenericAssetType(
-            name=kwargs["name"], description=kwargs.get("hover_label", None)
-        )
-        db.session.add(generic_asset_type)
-        super(WeatherSensorType, self).__init__(**kwargs)
-        if "display_name" not in kwargs:
-            self.display_name = humanize(self.name)
-
-    def __repr__(self):
-        return "<WeatherSensorType %s>" % self.name
-
-
-class WeatherSensor(db.Model, tb.SensorDBMixin):
-    """
-    A weather sensor has a location on Earth and measures weather values of a certain weather sensor type, such as
-    temperature, wind speed and irradiance.
-
-    This model is now considered legacy. See GenericAsset and Sensor.
-    """
-
-    id = db.Column(
-        db.Integer, db.ForeignKey("sensor.id"), primary_key=True, autoincrement=True
-    )
-    name = db.Column(db.String(80), unique=True)
-    display_name = db.Column(db.String(80), default="", unique=False)
-    weather_sensor_type_name = db.Column(
-        db.String(80), db.ForeignKey("weather_sensor_type.name"), nullable=False
-    )
-    # latitude is the North/South coordinate
-    latitude = db.Column(db.Float, nullable=False)
-    # longitude is the East/West coordinate
-    longitude = db.Column(db.Float, nullable=False)
-
-    # only one sensor of any type is needed at one location
-    __table_args__ = (
-        UniqueConstraint(
-            "weather_sensor_type_name",
-            "latitude",
-            "longitude",
-            name="weather_sensor_type_name_latitude_longitude_key",
-        ),
-    )
-
-    def __init__(self, **kwargs):
-
-        super(WeatherSensor, self).__init__(**kwargs)
-
-        # Create a new Sensor with unique id across assets, markets and weather sensors
-        if "id" not in kwargs:
-
-            weather_sensor_type = get_old_model_type(
-                kwargs,
-                WeatherSensorType,
-                "weather_sensor_type_name",
-                "sensor_type",  # NB not "weather_sensor_type" (slight inconsistency in this old sensor class)
-            )
-
-            generic_asset_kwargs = {
-                **kwargs,
-                **copy_old_sensor_attributes(
-                    self,
-                    old_sensor_type_attributes=[],
-                    old_sensor_attributes=[
-                        "display_name",
-                    ],
-                    old_sensor_type=weather_sensor_type,
-                ),
-            }
-            new_generic_asset = create_generic_asset(
-                "weather_sensor", **generic_asset_kwargs
-            )
-            new_sensor = Sensor(
-                name=kwargs["name"],
-                generic_asset=new_generic_asset,
-                **copy_old_sensor_attributes(
-                    self,
-                    old_sensor_type_attributes=[
-                        "daily_seasonality",
-                        "weekly_seasonality",
-                        "yearly_seasonality",
-                    ],
-                    old_sensor_attributes=[
-                        "display_name",
-                    ],
-                    old_sensor_type=weather_sensor_type,
-                ),
-            )
-            db.session.add(new_sensor)
-            db.session.flush()  # generates the pkey for new_sensor
-            new_sensor_id = new_sensor.id
-        else:
-            # The UI may initialize WeatherSensor objects from API form data with a known id
-            new_sensor_id = kwargs["id"]
-
-        self.id = new_sensor_id
-
-        # Copy over additional columns from (newly created) WeatherSensor to (newly created) Sensor
-        if "id" not in kwargs:
-            db.session.add(self)
-            db.session.flush()  # make sure to generate each column for the old sensor
-            new_sensor.unit = self.unit
-            new_sensor.event_resolution = self.event_resolution
-            new_sensor.knowledge_horizon_fnc = self.knowledge_horizon_fnc
-            new_sensor.knowledge_horizon_par = self.knowledge_horizon_par
-
-    @property
-    def entity_address_fm0(self) -> str:
-        """Entity address under the fm0 scheme for entity addresses."""
-        return build_entity_address(
-            dict(
-                weather_sensor_type_name=self.weather_sensor_type_name,
-                latitude=self.latitude,
-                longitude=self.longitude,
-            ),
-            "weather_sensor",
-            fm_scheme="fm0",
-        )
-
-    @property
-    def entity_address(self) -> str:
-        """Entity address under the latest fm scheme for entity addresses."""
-        return build_entity_address(
-            dict(sensor_id=self.id),
-            "sensor",
-        )
-
-    @property
-    def corresponding_sensor(self) -> Sensor:
-        return db.session.query(Sensor).get(self.id)
-
-    @property
-    def generic_asset(self) -> GenericAsset:
-        return db.session.query(GenericAsset).get(self.corresponding_sensor.id)
-
-    def get_attribute(self, attribute: str):
-        """Looks for the attribute on the corresponding Sensor.
-
-        This should be used by all code to read these attributes,
-        over accessing them directly on this class,
-        as this table is in the process to be replaced by the Sensor table.
-        """
-        return self.corresponding_sensor.get_attribute(attribute)
-
-    @property
-    def weather_unit(self) -> float:
-        """Return the 'unit' property of the generic asset, just with a more insightful name."""
-        return self.unit
-
-    @property
-    def location(self) -> Tuple[float, float]:
-        return self.latitude, self.longitude
-
-    @hybrid_method
-    def great_circle_distance(self, **kwargs):
-        """Query great circle distance (in km).
-
-        Can be called with an object that has latitude and longitude properties, for example:
-
-            great_circle_distance(object=asset)
-
-        Can also be called with latitude and longitude parameters, for example:
-
-            great_circle_distance(latitude=32, longitude=54)
-            great_circle_distance(lat=32, lng=54)
-
-        """
-        other_location = geo_utils.parse_lat_lng(kwargs)
-        if None in other_location:
-            return None
-        return geo_utils.earth_distance(self.location, other_location)
-
-    @great_circle_distance.expression
-    def great_circle_distance(self, **kwargs):
-        """Query great circle distance (unclear if in km or in miles).
-
-        Can be called with an object that has latitude and longitude properties, for example:
-
-            great_circle_distance(object=asset)
-
-        Can also be called with latitude and longitude parameters, for example:
-
-            great_circle_distance(latitude=32, longitude=54)
-            great_circle_distance(lat=32, lng=54)
-
-        """
-        other_location = geo_utils.parse_lat_lng(kwargs)
-        if None in other_location:
-            return None
-        return func.earth_distance(
-            func.ll_to_earth(self.latitude, self.longitude),
-            func.ll_to_earth(*other_location),
-        )
-
-    sensor_type = db.relationship(
-        "WeatherSensorType", backref=db.backref("sensors", lazy=True)
-    )
-
-    def __repr__(self):
-        return "<WeatherSensor %s: %r (%r), res.: %s>" % (
-            self.id,
-            self.name,
-            self.weather_sensor_type_name,
-            self.event_resolution,
-        )
-
-    def to_dict(self) -> Dict[str, str]:
-        return dict(name=self.name, sensor_type=self.weather_sensor_type_name)
-
-
-class Weather(TimedValue, db.Model):
-    """
-    All weather measurements are stored in one slim table.
-
-    This model is now considered legacy. See TimedBelief.
-    """
-
-    sensor_id = db.Column(
-        db.Integer(), db.ForeignKey("sensor.id"), primary_key=True, index=True
-    )
-    sensor = db.relationship("Sensor", backref=db.backref("weather", lazy=True))
-
-    @classmethod
-    def make_query(cls, **kwargs) -> Query:
-        """Construct the database query."""
-        return super().make_query(**kwargs)
-
-    def __init__(self, use_legacy_kwargs: bool = True, **kwargs):
-
-        # todo: deprecate the 'Weather' class in favor of 'TimedBelief' (announced v0.8.0)
-        if use_legacy_kwargs is False:
-
-            # Create corresponding TimedBelief
-            belief = TimedBelief(**kwargs)
-            db.session.add(belief)
-
-            # Convert key names for legacy model
-            kwargs["value"] = kwargs.pop("event_value")
-            kwargs["datetime"] = kwargs.pop("event_start")
-            kwargs["horizon"] = kwargs.pop("belief_horizon")
-            kwargs["sensor_id"] = kwargs.pop("sensor").id
-            kwargs["data_source_id"] = kwargs.pop("source").id
-        else:
-            import warnings
-
-            warnings.warn(
-                f"The {self.__class__} class is deprecated. Switch to using the TimedBelief class to suppress this warning.",
-                FutureWarning,
-            )
-
-        super(Weather, self).__init__(**kwargs)
-
-    def __repr__(self):
-        return "<Weather %.5f on sensor %s at %s by source %s, horizon %s>" % (
-            self.value,
-            self.sensor_id,
-            self.datetime,
-            self.data_source_id,
-            self.horizon,
-        )
diff --git a/flexmeasures/data/queries/analytics.py b/flexmeasures/data/queries/analytics.py
deleted file mode 100644
index 08967be27..000000000
--- a/flexmeasures/data/queries/analytics.py
+++ /dev/null
@@ -1,436 +0,0 @@
-from typing import List, Dict, Tuple, Union
-from datetime import datetime, timedelta
-
-import numpy as np
-import pandas as pd
-import timely_beliefs as tb
-
-from flexmeasures.data.queries.utils import (
-    simplify_index,
-    multiply_dataframe_with_deterministic_beliefs,
-)
-from flexmeasures.data.services.time_series import set_bdf_source
-from flexmeasures.utils import calculations, time_utils
-from flexmeasures.data.services.resources import Resource
-from flexmeasures.data.models.assets import Asset, Power
-from flexmeasures.data.models.time_series import Sensor, TimedBelief
-from flexmeasures.data.models.weather import WeatherSensorType
-
-"""
-These queries are considered legacy by now.
-They are used in legacy views and use the old data model.
-"""
-
-
-def get_power_data(
-    resource: Union[str, Resource],  # name or instance
-    show_consumption_as_positive: bool,
-    showing_individual_traces_for: str,
-    metrics: dict,
-    query_window: Tuple[datetime, datetime],
-    resolution: str,
-    forecast_horizon: timedelta,
-) -> Tuple[pd.DataFrame, pd.DataFrame, pd.DataFrame, dict]:
-    """Get power data and metrics.
-
-    Return power observations, power forecasts and power schedules (each might be an empty DataFrame)
-    and a dict with the following metrics:
-    - expected value
-    - mean absolute error
-    - mean absolute percentage error
-    - weighted absolute percentage error
-
-    Todo: Power schedules ignore horizon.
-    """
-    if isinstance(resource, str):
-        resource = Resource(resource)
-
-    default_columns = ["event_value", "belief_horizon", "source"]
-
-    # Get power data
-    if showing_individual_traces_for != "schedules":
-        resource.load_sensor_data(
-            sensor_types=[Power],
-            start=query_window[0],
-            end=query_window[-1],
-            resolution=resolution,
-            belief_horizon_window=(None, timedelta(hours=0)),
-            exclude_source_types=["scheduler"],
-        )
-        if showing_individual_traces_for == "power":
-            power_bdf = resource.power_data
-            # In this case, power_bdf is actually a dict of BeliefDataFrames.
- # We join the frames into one frame, remembering -per frame- the sensor name as source. - power_bdf = pd.concat( - [ - set_bdf_source(bdf, sensor_name) - for sensor_name, bdf in power_bdf.items() - ] - ) - else: - # Here, we aggregate all rows together - power_bdf = resource.aggregate_power_data - power_df: pd.DataFrame = simplify_index( - power_bdf, index_levels_to_columns=["belief_horizon", "source"] - ) - if showing_individual_traces_for == "power": - # In this case, we keep on indexing by source (as we have more than one) - power_df.set_index("source", append=True, inplace=True) - else: - power_df = pd.DataFrame(columns=default_columns) - - # Get power forecast - if showing_individual_traces_for == "none": - power_forecast_bdf: tb.BeliefsDataFrame = resource.load_sensor_data( - sensor_types=[Power], - start=query_window[0], - end=query_window[-1], - resolution=resolution, - belief_horizon_window=(forecast_horizon, None), - exclude_source_types=["scheduler"], - ).aggregate_power_data - power_forecast_df: pd.DataFrame = simplify_index( - power_forecast_bdf, index_levels_to_columns=["belief_horizon", "source"] - ) - else: - power_forecast_df = pd.DataFrame(columns=default_columns) - - # Get power schedule - if showing_individual_traces_for != "power": - resource.load_sensor_data( - sensor_types=[Power], - start=query_window[0], - end=query_window[-1], - resolution=resolution, - belief_horizon_window=(None, None), - source_types=["scheduler"], - ) - if showing_individual_traces_for == "schedules": - power_schedule_bdf = resource.power_data - power_schedule_bdf = pd.concat( - [ - set_bdf_source(bdf, sensor_name) - for sensor_name, bdf in power_schedule_bdf.items() - ] - ) - else: - power_schedule_bdf = resource.aggregate_power_data - power_schedule_df: pd.DataFrame = simplify_index( - power_schedule_bdf, index_levels_to_columns=["belief_horizon", "source"] - ) - if showing_individual_traces_for == "schedules": - power_schedule_df.set_index("source", append=True, inplace=True) - else: - power_schedule_df = pd.DataFrame(columns=default_columns) - - if show_consumption_as_positive: - power_df["event_value"] *= -1 - power_forecast_df["event_value"] *= -1 - power_schedule_df["event_value"] *= -1 - - # Calculate the power metrics - power_hour_factor = time_utils.resolution_to_hour_factor(resolution) - realised_power_in_mwh = pd.Series( - power_df["event_value"] * power_hour_factor - ).values - - if not power_df.empty: - metrics["realised_power_in_mwh"] = np.nansum(realised_power_in_mwh) - else: - metrics["realised_power_in_mwh"] = np.NaN - if not power_forecast_df.empty and power_forecast_df.size == power_df.size: - expected_power_in_mwh = pd.Series( - power_forecast_df["event_value"] * power_hour_factor - ).values - metrics["expected_power_in_mwh"] = np.nansum(expected_power_in_mwh) - metrics["mae_power_in_mwh"] = calculations.mean_absolute_error( - realised_power_in_mwh, expected_power_in_mwh - ) - metrics["mape_power"] = calculations.mean_absolute_percentage_error( - realised_power_in_mwh, expected_power_in_mwh - ) - metrics["wape_power"] = calculations.weighted_absolute_percentage_error( - realised_power_in_mwh, expected_power_in_mwh - ) - else: - metrics["expected_power_in_mwh"] = np.NaN - metrics["mae_power_in_mwh"] = np.NaN - metrics["mape_power"] = np.NaN - metrics["wape_power"] = np.NaN - return power_df, power_forecast_df, power_schedule_df, metrics - - -def get_prices_data( - metrics: dict, - market_sensor: Sensor, - query_window: Tuple[datetime, datetime], - resolution: str, - 
forecast_horizon: timedelta, -) -> Tuple[pd.DataFrame, pd.DataFrame, dict]: - """Get price data and metrics. - - Return price observations, price forecasts (either might be an empty DataFrame) - and a dict with the following metrics: - - expected value - - mean absolute error - - mean absolute percentage error - - weighted absolute percentage error - """ - - market_name = "" if market_sensor is None else market_sensor.name - - # Get price data - price_bdf: tb.BeliefsDataFrame = TimedBelief.search( - [market_name], - event_starts_after=query_window[0], - event_ends_before=query_window[1], - resolution=resolution, - horizons_at_least=None, - horizons_at_most=timedelta(hours=0), - ) - price_df: pd.DataFrame = simplify_index( - price_bdf, index_levels_to_columns=["belief_horizon", "source"] - ) - - if not price_bdf.empty: - metrics["realised_unit_price"] = price_df["event_value"].mean() - else: - metrics["realised_unit_price"] = np.NaN - - # Get price forecast - price_forecast_bdf: tb.BeliefsDataFrame = TimedBelief.search( - [market_name], - event_starts_after=query_window[0], - event_ends_before=query_window[1], - resolution=resolution, - horizons_at_least=forecast_horizon, - horizons_at_most=None, - source_types=["user", "forecaster", "script"], - ) - price_forecast_df: pd.DataFrame = simplify_index( - price_forecast_bdf, index_levels_to_columns=["belief_horizon", "source"] - ) - - # Calculate the price metrics - if not price_forecast_df.empty and price_forecast_df.size == price_df.size: - metrics["expected_unit_price"] = price_forecast_df["event_value"].mean() - metrics["mae_unit_price"] = calculations.mean_absolute_error( - price_df["event_value"], price_forecast_df["event_value"] - ) - metrics["mape_unit_price"] = calculations.mean_absolute_percentage_error( - price_df["event_value"], price_forecast_df["event_value"] - ) - metrics["wape_unit_price"] = calculations.weighted_absolute_percentage_error( - price_df["event_value"], price_forecast_df["event_value"] - ) - else: - metrics["expected_unit_price"] = np.NaN - metrics["mae_unit_price"] = np.NaN - metrics["mape_unit_price"] = np.NaN - metrics["wape_unit_price"] = np.NaN - return price_df, price_forecast_df, metrics - - -def get_weather_data( - assets: List[Asset], - metrics: dict, - sensor_type: WeatherSensorType, - query_window: Tuple[datetime, datetime], - resolution: str, - forecast_horizon: timedelta, -) -> Tuple[pd.DataFrame, pd.DataFrame, str, Sensor, dict]: - """Get most recent weather data and forecast weather data for the requested forecast horizon. 
- - Return weather observations, weather forecasts (either might be an empty DataFrame), - the name of the sensor type, the weather sensor and a dict with the following metrics: - - expected value - - mean absolute error - - mean absolute percentage error - - weighted absolute percentage error""" - - # Todo: for now we only collect weather data for a single asset - asset = assets[0] - - weather_data = tb.BeliefsDataFrame(columns=["event_value"]) - weather_forecast_data = tb.BeliefsDataFrame(columns=["event_value"]) - sensor_type_name = "" - closest_sensor = None - if sensor_type: - # Find the 50 closest weather sensors - sensor_type_name = sensor_type.name - closest_sensors = Sensor.find_closest( - generic_asset_type_name=asset.generic_asset.generic_asset_type.name, - sensor_name=sensor_type_name, - n=50, - object=asset, - ) - if closest_sensors: - closest_sensor = closest_sensors[0] - - # Collect the weather data for the requested time window - sensor_names = [sensor.name for sensor in closest_sensors] - - # Get weather data - weather_bdf_dict: Dict[str, tb.BeliefsDataFrame] = TimedBelief.search( - sensor_names, - event_starts_after=query_window[0], - event_ends_before=query_window[1], - resolution=resolution, - horizons_at_least=None, - horizons_at_most=timedelta(hours=0), - sum_multiple=False, - ) - weather_df_dict: Dict[str, pd.DataFrame] = {} - for sensor_name in weather_bdf_dict: - weather_df_dict[sensor_name] = simplify_index( - weather_bdf_dict[sensor_name], - index_levels_to_columns=["belief_horizon", "source"], - ) - - # Get weather forecasts - weather_forecast_bdf_dict: Dict[ - str, tb.BeliefsDataFrame - ] = TimedBelief.search( - sensor_names, - event_starts_after=query_window[0], - event_ends_before=query_window[1], - resolution=resolution, - horizons_at_least=forecast_horizon, - horizons_at_most=None, - source_types=["user", "forecaster", "script"], - sum_multiple=False, - ) - weather_forecast_df_dict: Dict[str, pd.DataFrame] = {} - for sensor_name in weather_forecast_bdf_dict: - weather_forecast_df_dict[sensor_name] = simplify_index( - weather_forecast_bdf_dict[sensor_name], - index_levels_to_columns=["belief_horizon", "source"], - ) - - # Take the closest weather sensor which contains some data for the selected time window - for sensor, sensor_name in zip(closest_sensors, sensor_names): - if ( - not weather_df_dict[sensor_name]["event_value"] - .isnull() - .values.all() - or not weather_forecast_df_dict[sensor_name]["event_value"] - .isnull() - .values.all() - ): - closest_sensor = sensor - break - - weather_data = weather_df_dict[sensor_name] - weather_forecast_data = weather_forecast_df_dict[sensor_name] - - # Calculate the weather metrics - if not weather_data.empty: - metrics["realised_weather"] = weather_data["event_value"].mean() - else: - metrics["realised_weather"] = np.NaN - if ( - not weather_forecast_data.empty - and weather_forecast_data.size == weather_data.size - ): - metrics["expected_weather"] = weather_forecast_data[ - "event_value" - ].mean() - metrics["mae_weather"] = calculations.mean_absolute_error( - weather_data["event_value"], weather_forecast_data["event_value"] - ) - metrics["mape_weather"] = calculations.mean_absolute_percentage_error( - weather_data["event_value"], weather_forecast_data["event_value"] - ) - metrics[ - "wape_weather" - ] = calculations.weighted_absolute_percentage_error( - weather_data["event_value"], weather_forecast_data["event_value"] - ) - else: - metrics["expected_weather"] = np.NaN - metrics["mae_weather"] = np.NaN - 
metrics["mape_weather"] = np.NaN - metrics["wape_weather"] = np.NaN - return ( - weather_data, - weather_forecast_data, - sensor_type_name, - closest_sensor, - metrics, - ) - - -def get_revenues_costs_data( - power_data: pd.DataFrame, - prices_data: pd.DataFrame, - power_forecast_data: pd.DataFrame, - prices_forecast_data: pd.DataFrame, - metrics: Dict[str, float], - unit_factor: float, - resolution: str, - showing_individual_traces: bool, -) -> Tuple[pd.DataFrame, pd.DataFrame, dict]: - """Compute revenues/costs data. These data are purely derivative from power and prices. - For forecasts we use the WAPE metrics. Then we calculate metrics on this construct. - The unit factor is used when multiplying quantities and prices, e.g. when multiplying quantities in kWh with prices - in EUR/MWh, use a unit factor of 0.001. - - Return revenue/cost observations, revenue/cost forecasts (either might be an empty DataFrame) - and a dict with the following metrics: - - expected value - - mean absolute error - - mean absolute percentage error - - weighted absolute percentage error - """ - power_hour_factor = time_utils.resolution_to_hour_factor(resolution) - - rev_cost_data = multiply_dataframe_with_deterministic_beliefs( - power_data, - prices_data, - result_source=None - if showing_individual_traces - else "Calculated from power and price data", - multiplication_factor=power_hour_factor * unit_factor, - ) - if power_data.empty or prices_data.empty: - metrics["realised_revenues_costs"] = np.NaN - else: - metrics["realised_revenues_costs"] = np.nansum( - rev_cost_data["event_value"].values - ) - - rev_cost_forecasts = multiply_dataframe_with_deterministic_beliefs( - power_forecast_data, - prices_forecast_data, - result_source="Calculated from power and price data", - multiplication_factor=power_hour_factor * unit_factor, - ) - if power_forecast_data.empty or prices_forecast_data.empty: - metrics["expected_revenues_costs"] = np.NaN - metrics["mae_revenues_costs"] = np.NaN - metrics["mape_revenues_costs"] = np.NaN - metrics["wape_revenues_costs"] = np.NaN - else: - metrics["expected_revenues_costs"] = np.nansum( - rev_cost_forecasts["event_value"] - ) - metrics["mae_revenues_costs"] = calculations.mean_absolute_error( - rev_cost_data["event_value"], rev_cost_forecasts["event_value"] - ) - metrics["mape_revenues_costs"] = calculations.mean_absolute_percentage_error( - rev_cost_data["event_value"], rev_cost_forecasts["event_value"] - ) - metrics[ - "wape_revenues_costs" - ] = calculations.weighted_absolute_percentage_error( - rev_cost_data["event_value"], rev_cost_forecasts["event_value"] - ) - - # Todo: compute confidence interval properly - this is just a simple heuristic - rev_cost_forecasts["yhat_upper"] = rev_cost_forecasts["event_value"] * ( - 1 + metrics["wape_revenues_costs"] - ) - rev_cost_forecasts["yhat_lower"] = rev_cost_forecasts["event_value"] * ( - 1 - metrics["wape_revenues_costs"] - ) - return rev_cost_data, rev_cost_forecasts, metrics diff --git a/flexmeasures/data/queries/portfolio.py b/flexmeasures/data/queries/portfolio.py deleted file mode 100644 index 8b5858340..000000000 --- a/flexmeasures/data/queries/portfolio.py +++ /dev/null @@ -1,127 +0,0 @@ -from typing import Dict, List, Tuple - -import pandas as pd -import timely_beliefs as tb - -from flexmeasures.data.models.assets import Asset, AssetType -from flexmeasures.data.models.markets import Market -from flexmeasures.data.queries.utils import simplify_index -from flexmeasures.data.services.resources import Resource - - -""" -This is 
considered legacy code now. -The view is considered legacy, and it relies on the old data model. -""" - - -def get_structure( - assets: List[Asset], -) -> Tuple[Dict[str, AssetType], List[Market], Dict[str, Resource]]: - """Get asset portfolio structured as Resources, based on AssetTypes present in a list of Assets. - - Initializing Resources leads to some database queries. - - :param assets: a list of Assets - :returns: a tuple comprising: - - a dictionary of resource names (as keys) and the asset type represented by these resources (as values) - - a list of (unique) Markets that are relevant to these resources - - a dictionary of resource names (as keys) and Resources (as values) - """ - - # Set up a resource name for each asset type - represented_asset_types = { - asset_type.plural_name: asset_type - for asset_type in [asset.asset_type for asset in assets] - } - - # Load structure (and set up resources) - resource_dict = {} - markets: List[Market] = [] - for resource_name in represented_asset_types.keys(): - resource = Resource(resource_name) - if len(resource.assets) == 0: - continue - resource_dict[resource_name] = resource - markets.extend(list(set(asset.market for asset in resource.assets))) - markets = list(set(markets)) - - return represented_asset_types, markets, resource_dict - - -def get_power_data( - resource_dict: Dict[str, Resource] -) -> Tuple[ - Dict[str, pd.DataFrame], - Dict[str, pd.DataFrame], - Dict[str, float], - Dict[str, float], - Dict[str, float], - Dict[str, float], -]: - """Get power data, separating demand and supply, - as time series per resource and as totals (summed over time) per resource and per asset. - - Getting sensor data of a Resource leads to database queries (unless results are already cached). - - :returns: a tuple comprising: - - a dictionary of resource names (as keys) and a DataFrame with aggregated time series of supply (as values) - - a dictionary of resource names (as keys) and a DataFrame with aggregated time series of demand (as values) - - a dictionary of resource names (as keys) and their total supply summed over time (as values) - - a dictionary of resource names (as keys) and their total demand summed over time (as values) - - a dictionary of asset names (as keys) and their total supply summed over time (as values) - - a dictionary of asset names (as keys) and their total demand summed over time (as values) - """ - - # Load power data (separate demand and supply, and group data per resource) - supply_per_resource: Dict[ - str, pd.DataFrame - ] = {} # power >= 0, production/supply >= 0 - demand_per_resource: Dict[ - str, pd.DataFrame - ] = {} # power <= 0, consumption/demand >=0 !!! 
- total_supply_per_asset: Dict[str, float] = {} - total_demand_per_asset: Dict[str, float] = {} - for resource_name, resource in resource_dict.items(): - if (resource.aggregate_demand.values != 0).any(): - demand_per_resource[resource_name] = simplify_index( - resource.aggregate_demand - ) - if (resource.aggregate_supply.values != 0).any(): - supply_per_resource[resource_name] = simplify_index( - resource.aggregate_supply - ) - total_supply_per_asset = {**total_supply_per_asset, **resource.total_supply} - total_demand_per_asset = {**total_demand_per_asset, **resource.total_demand} - total_supply_per_resource = { - k: v.total_aggregate_supply for k, v in resource_dict.items() - } - total_demand_per_resource = { - k: v.total_aggregate_demand for k, v in resource_dict.items() - } - return ( - supply_per_resource, - demand_per_resource, - total_supply_per_resource, - total_demand_per_resource, - total_supply_per_asset, - total_demand_per_asset, - ) - - -def get_price_data( - resource_dict: Dict[str, Resource] -) -> Tuple[Dict[str, tb.BeliefsDataFrame], Dict[str, float]]: - - # Load price data - price_bdf_dict: Dict[str, tb.BeliefsDataFrame] = {} - for resource_name, resource in resource_dict.items(): - price_bdf_dict = {**resource.cached_price_data, **price_bdf_dict} - average_price_dict = {k: v["event_value"].mean() for k, v in price_bdf_dict.items()} - - # Uncomment if needed - # revenue_per_asset_type = {k: v.aggregate_revenue for k, v in resource_dict.items()} - # cost_per_asset_type = {k: v.aggregate_cost for k, v in resource_dict.items()} - # profit_per_asset_type = {k: v.aggregate_profit_or_loss for k, v in resource_dict.items()} - - return price_bdf_dict, average_price_dict diff --git a/flexmeasures/data/schemas/__init__.py b/flexmeasures/data/schemas/__init__.py index 5965fb501..7f6770a85 100644 --- a/flexmeasures/data/schemas/__init__.py +++ b/flexmeasures/data/schemas/__init__.py @@ -2,8 +2,8 @@ Data schemas (Marshmallow) """ -from .assets import LatitudeField, LongitudeField # noqa F401 from .generic_assets import GenericAssetIdField as AssetIdField # noqa F401 +from .locations import LatitudeField, LongitudeField # noqa F401 from .sensors import SensorIdField # noqa F401 from .sources import DataSourceIdField as SourceIdField # noqa F401 from .times import AwareDateTimeField, DurationField, TimeIntervalField # noqa F401 diff --git a/flexmeasures/data/schemas/generic_assets.py b/flexmeasures/data/schemas/generic_assets.py index 30867f21c..766b7bfbb 100644 --- a/flexmeasures/data/schemas/generic_assets.py +++ b/flexmeasures/data/schemas/generic_assets.py @@ -8,7 +8,7 @@ from flexmeasures.data import ma from flexmeasures.data.models.user import Account from flexmeasures.data.models.generic_assets import GenericAsset, GenericAssetType -from flexmeasures.data.schemas import LatitudeField, LongitudeField +from flexmeasures.data.schemas.locations import LatitudeField, LongitudeField from flexmeasures.data.schemas.utils import ( FMValidationError, MarshmallowClickMixin, diff --git a/flexmeasures/data/schemas/assets.py b/flexmeasures/data/schemas/locations.py similarity index 50% rename from flexmeasures/data/schemas/assets.py rename to flexmeasures/data/schemas/locations.py index 79c29a2f7..464dc9a9a 100644 --- a/flexmeasures/data/schemas/assets.py +++ b/flexmeasures/data/schemas/locations.py @@ -1,12 +1,7 @@ from __future__ import annotations -from marshmallow import validates, ValidationError, validates_schema, fields, validate +from marshmallow import ValidationError, fields, validate 
-from flexmeasures.data import ma -from flexmeasures.data.models.assets import Asset, AssetType -from flexmeasures.data.models.time_series import Sensor -from flexmeasures.data.models.user import User -from flexmeasures.data.schemas.sensors import SensorSchemaMixin from flexmeasures.data.schemas.utils import FMValidationError, MarshmallowClickMixin @@ -88,66 +83,3 @@ def __init__(self, *args, **kwargs): self.validators.insert( 0, LongitudeValidator(allow_none=kwargs.get("allow_none", False)) ) - - -class AssetSchema(SensorSchemaMixin, ma.SQLAlchemySchema): - """ - Asset schema, with validations. - - TODO: deprecate, as it is based on legacy data model. Move some attributes to SensorSchema. - """ - - class Meta: - model = Asset - - @validates("name") - def validate_name(self, name: str): - asset = Asset.query.filter(Asset.name == name).one_or_none() - if asset: - raise ValidationError(f"An asset with the name {name} already exists.") - - @validates("owner_id") - def validate_owner(self, owner_id: int): - owner = User.query.get(owner_id) - if not owner: - raise ValidationError(f"Owner with id {owner_id} doesn't exist.") - if not owner.account.has_role("Prosumer"): - raise ValidationError( - "Asset owner's account must have role 'Prosumer'." - f" User {owner_id}'s account has roles: {'.'.join([r.name for r in owner.account.account_roles])}." - ) - - @validates("market_id") - def validate_market(self, market_id: int): - sensor = Sensor.query.get(market_id) - if not sensor: - raise ValidationError(f"Market with id {market_id} doesn't exist.") - - @validates("asset_type_name") - def validate_asset_type(self, asset_type_name: str): - asset_type = AssetType.query.get(asset_type_name) - if not asset_type: - raise ValidationError(f"Asset type {asset_type_name} doesn't exist.") - - @validates_schema(skip_on_field_errors=False) - def validate_soc_constraints(self, data, **kwargs): - if "max_soc_in_mwh" in data and "min_soc_in_mwh" in data: - if data["max_soc_in_mwh"] < data["min_soc_in_mwh"]: - errors = { - "max_soc_in_mwh": "This value must be equal or higher than the minimum soc." 
- } - raise ValidationError(errors) - - id = ma.auto_field() - display_name = fields.Str(validate=validate.Length(min=4)) - capacity_in_mw = fields.Float(required=True, validate=validate.Range(min=0.0001)) - min_soc_in_mwh = fields.Float(validate=validate.Range(min=0)) - max_soc_in_mwh = fields.Float(validate=validate.Range(min=0)) - soc_in_mwh = ma.auto_field() - soc_datetime = ma.auto_field() - soc_udi_event_id = ma.auto_field() - latitude = LatitudeField(allow_none=True) - longitude = LongitudeField(allow_none=True) - asset_type_name = ma.auto_field(required=True) - owner_id = ma.auto_field(required=True) - market_id = ma.auto_field(required=True) diff --git a/flexmeasures/data/schemas/tests/test_latitude_longitude.py b/flexmeasures/data/schemas/tests/test_latitude_longitude.py index dd695a56b..46b48a89f 100644 --- a/flexmeasures/data/schemas/tests/test_latitude_longitude.py +++ b/flexmeasures/data/schemas/tests/test_latitude_longitude.py @@ -1,6 +1,6 @@ import pytest -from flexmeasures.data.schemas.assets import LatitudeField, LongitudeField +from flexmeasures.data.schemas.locations import LatitudeField, LongitudeField from flexmeasures.data.schemas.utils import ValidationError diff --git a/flexmeasures/data/scripts/visualize_data_model.py b/flexmeasures/data/scripts/visualize_data_model.py index 5e67663a9..9fc97c46b 100755 --- a/flexmeasures/data/scripts/visualize_data_model.py +++ b/flexmeasures/data/scripts/visualize_data_model.py @@ -27,17 +27,16 @@ DEBUG = True +# List here modules which should be scanned for the UML version RELEVANT_MODULES = [ "task_runs", "data_sources", - "markets", - "assets", "generic_assets", - "weather", "user", "time_series", ] +# List here tables in the data model which are currently relevant RELEVANT_TABLES = [ "role", "account", @@ -45,26 +44,17 @@ "fm_user", "data_source", "latest_task_run", -] -LEGACY_TABLES = [ - "asset", - "asset_type", - "market", - "market_type", - "power", - "price", - "weather", - "weather_sensor", - "weather_sensor_type", -] -RELEVANT_TABLES_NEW = [ "generic_asset_type", "generic_asset", "sensor", "timed_belief", "timed_value", ] -IGNORED_TABLES = ["alembic_version", "roles_users", "roles_accounts"] + +# The following two lists are useful for transition periods, when some tables are legacy, and some have been added. +# This allows you to show the old model as well as the future model. +LEGACY_TABLES = [] +RELEVANT_TABLES_NEW = [] def check_sqlalchemy_schemadisplay_installation(): diff --git a/flexmeasures/data/services/resources.py b/flexmeasures/data/services/resources.py deleted file mode 100644 index e153dd5be..000000000 --- a/flexmeasures/data/services/resources.py +++ /dev/null @@ -1,677 +0,0 @@ -""" -Generic services for accessing asset data. - -TODO: This works with the legacy data model (esp. Assets), so it is marked for deprecation. - We are building data.services.asset_grouping, porting much of the code here. - The data access logic here might also be useful for sensor data access logic we'll build - elsewhere, but that's not quite certain at this point in time. 
-""" - -from __future__ import annotations -from functools import cached_property, wraps -from typing import List, Dict, Tuple, Type, TypeVar, Union, Optional -from datetime import datetime - -from flexmeasures.data import db -from flexmeasures.utils.flexmeasures_inflection import parameterize, pluralize -from itertools import groupby - -from flask_security.core import current_user -import inflect -import pandas as pd -from sqlalchemy.orm import Query -from sqlalchemy.engine import Row -import timely_beliefs as tb - -from flexmeasures.auth.policy import ADMIN_ROLE -from flexmeasures.data.models.assets import ( - AssetType, - Asset, - Power, - assets_share_location, -) -from flexmeasures.data.models.markets import Market, Price -from flexmeasures.data.models.time_series import Sensor, TimedBelief -from flexmeasures.data.models.weather import Weather, WeatherSensorType -from flexmeasures.data.models.user import User -from flexmeasures.data.queries.utils import simplify_index -from flexmeasures.data.services.time_series import aggregate_values -from flexmeasures.utils import coding_utils, time_utils - -""" -This module is legacy, as we move to the new data model (see projects on Github). -Do check, but apart from get_sensors (which needs a rewrite), functionality has -either been copied in services/asset_grouping or is not needed any more. -Two views using this (analytics and portfolio) are also considered legacy. -""" - -p = inflect.engine() -cached_property = coding_utils.make_registering_decorator(cached_property) -SensorType = TypeVar("SensorType", Type[Power], Type[Price], Type[Weather]) - - -def get_markets() -> List[Market]: - """Return a list of all Market objects.""" - return Market.query.order_by(Market.name.asc()).all() - - -def get_assets( - owner_id: Optional[int] = None, - order_by_asset_attribute: str = "id", - order_direction: str = "desc", -) -> List[Asset]: - """Return a list of all Asset objects owned by current_user - (or all users or a specific user - for this, admins can set an owner_id). - """ - return _build_asset_query(owner_id, order_by_asset_attribute, order_direction).all() - - -def get_sensors( - owner_id: Optional[int] = None, - order_by_asset_attribute: str = "id", - order_direction: str = "desc", -) -> List[Sensor]: - """Return a list of all Sensor objects owned by current_user's organisation account - (or all users or a specific user - for this, admins can set an owner_id). - """ - # todo: switch to using authz from https://github.com/SeitaBV/flexmeasures/pull/234 - return [ - asset.corresponding_sensor - for asset in get_assets(owner_id, order_by_asset_attribute, order_direction) - ] - - -def has_assets(owner_id: Optional[int] = None) -> bool: - """Return True if the current user owns any assets. - (or all users or a specific user - for this, admins can set an owner_id). 
- """ - return _build_asset_query(owner_id).count() > 0 - - -def can_access_asset(asset_or_sensor: Union[Asset, Sensor]) -> bool: - """Return True if: - - the current user is an admin, or - - the current user is the owner of the asset, or - - the current user's organisation account owns the corresponding generic asset, or - - the corresponding generic asset is public - - todo: refactor to `def can_access_sensor(sensor: Sensor) -> bool` once `ui.views.state.state_view` stops calling it with an Asset - todo: let this function use our new auth model (row-level authorization) - todo: deprecate this function in favor of an authz decorator on the API route - """ - if current_user.is_authenticated: - if current_user.has_role(ADMIN_ROLE): - return True - if isinstance(asset_or_sensor, Sensor): - if asset_or_sensor.generic_asset.owner in (None, current_user.account): - return True - elif asset_or_sensor.owner == current_user: - return True - return False - - -def _build_asset_query( - owner_id: Optional[int] = None, - order_by_asset_attribute: str = "id", - order_direction: str = "desc", -) -> Query: - """Build an Asset query. Only authenticated users can use this. - Admins can query for all assets (owner_id is None) or for any user (the asset's owner). - Non-admins can only query for themselves (owner_id is ignored). - - order_direction can be "asc" or "desc". - """ - if current_user.is_authenticated: - if current_user.has_role(ADMIN_ROLE): - if owner_id is not None: - if not isinstance(owner_id, int): - try: - owner_id = int(owner_id) - except TypeError: - raise Exception( - "Owner id %s cannot be parsed as integer, thus seems to be invalid." - % owner_id - ) - query = Asset.query.filter(Asset.owner_id == owner_id) - else: - query = Asset.query - else: - query = Asset.query.filter_by(owner=current_user) - else: - query = Asset.query.filter(Asset.owner_id == -1) - query = query.order_by( - getattr(getattr(Asset, order_by_asset_attribute), order_direction)() - ) - return query - - -def get_asset_group_queries( - custom_additional_groups: Optional[List[str]] = None, - all_users: bool = False, -) -> Dict[str, Query]: - """ - An asset group is defined by Asset queries. Each query has a name, and we prefer pluralised display names. - They still need an executive call, like all(), count() or first(). - - :param custom_additional_groups: list of additional groups next to groups that represent unique asset types. - Valid names are: - - "renewables", to query all solar and wind assets - - "EVSE", to query all Electric Vehicle Supply Equipment - - "location", to query each individual location with assets - (i.e. all EVSE at 1 location or each household) - :param all_users: if True, do not filter out assets that do not belong to the user (use with care) - """ - - if custom_additional_groups is None: - custom_additional_groups = [] - asset_queries = {} - - # 1. Custom asset groups by combinations of asset types - if "renewables" in custom_additional_groups: - asset_queries["renewables"] = Asset.query.filter( - Asset.asset_type_name.in_(["solar", "wind"]) - ) - if "EVSE" in custom_additional_groups: - asset_queries["EVSE"] = Asset.query.filter( - Asset.asset_type_name.in_(["one-way_evse", "two-way_evse"]) - ) - - # 2. We also include a group per asset type - using the pluralised asset type display name - for asset_type in AssetType.query.all(): - asset_queries[pluralize(asset_type.display_name)] = Asset.query.filter_by( - asset_type_name=asset_type.name - ) - - # 3. 
Finally, we group assets by location - if "location" in custom_additional_groups: - asset_queries.update(get_location_queries()) - - if not all_users: - asset_queries = mask_inaccessible_assets(asset_queries) - - return asset_queries - - -def get_location_queries() -> Dict[str, Query]: - """ - We group EVSE assets by location (if they share a location, they belong to the same Charge Point) - Like get_asset_group_queries, the values in the returned dict still need an executive call, like all(), count() or first(). - - The Charge Points are named on the basis of the first EVSE in their list, - using either the whole EVSE display name or that part that comes before a " -" delimiter. For example: - If: - evse_display_name = "Seoul Hilton - charger 1" - Then: - charge_point_display_name = "Seoul Hilton (Charge Point)" - - A Charge Point is a special case. If all assets on a location are of type EVSE, - we can call the location a "Charge Point". - """ - asset_queries = {} - all_assets = Asset.query.all() - loc_groups = group_assets_by_location(all_assets) - for loc_group in loc_groups: - if len(loc_group) == 1: - continue - location_type = "(Location)" - if all( - [ - asset.asset_type_name in ["one-way_evse", "two-way_evse"] - for asset in loc_group - ] - ): - location_type = "(Charge Point)" - location_name = f"{loc_group[0].display_name.split(' -')[0]} {location_type}" - asset_queries[location_name] = Asset.query.filter( - Asset.name.in_([asset.name for asset in loc_group]) - ) - return asset_queries - - -def mask_inaccessible_assets( - asset_queries: Union[Query, Dict[str, Query]] -) -> Union[Query, Dict[str, Query]]: - """Filter out any assets that the user should not be able to access. - - We do not explicitly check user authentication here, because non-authenticated users are not admins - and have no asset ownership, so applying this filter for non-admins masks all assets. - """ - if not current_user.has_role(ADMIN_ROLE): - if isinstance(asset_queries, dict): - for name, query in asset_queries.items(): - asset_queries[name] = query.filter_by(owner=current_user) - else: - asset_queries = asset_queries.filter_by(owner=current_user) - return asset_queries - - -def get_center_location(user: Optional[User]) -> Tuple[float, float]: - """ - Find the center position between all assets. - If user is passed and not admin then we only consider assets - owned by the user. - TODO: if we introduce accounts, this logic should look for these assets. - """ - query = ( - "Select (min(latitude) + max(latitude)) / 2 as latitude," - " (min(longitude) + max(longitude)) / 2 as longitude" - " from asset" - ) - if user and not user.has_role(ADMIN_ROLE): - query += f" where owner_id = {user.id}" - locations: List[Row] = db.session.execute(query + ";").fetchall() - if ( - len(locations) == 0 - or locations[0].latitude is None - or locations[0].longitude is None - ): - return 52.366, 4.904 # Amsterdam, NL - return locations[0].latitude, locations[0].longitude - - -def check_cache(attribute): - """Decorator for Resource class attributes to check if the resource has cached the attribute. - - Example usage: - @check_cache("cached_data") - def some_property(self): - return self.cached_data - """ - - def inner_function(fn): - @wraps(fn) - def wrapper(self, *args, **kwargs): - if not hasattr(self, attribute) or not getattr(self, attribute): - raise ValueError( - "Resource has no cached data. Call resource.load_sensor_data() first." 
-                )
-            return fn(self, *args, **kwargs)
-
-        return wrapper
-
-    return inner_function
-
-
-class Resource:
-    """
-    This class represents a group of assets of the same type, and provides
-    helpful functions to retrieve their time series data and derived statistics.
-
-    Resolving asset type names
-    --------------------------
-    When initialised with a plural asset type name, the resource will contain all assets of
-    the given type that are accessible to the user.
-    When initialised with just one asset name, the resource will list only that asset.
-
-    Loading structure
-    -----------------
-    Initialization only loads structural information from the database (which assets the resource groups).
-
-    Loading and caching time series
-    -------------------------------
-    To load time series data for a certain time window, use the load_sensor_data() method.
-    This loads beliefs data from the database and caches the results (as a named attribute).
-    Caches are cleared when new time series data is loaded (or when the Resource instance ceases to exist).
-
-    Loading and caching derived statistics
-    --------------------------------------
-    Cached time series data is used to compute derived statistics, such as aggregates and scores.
-    More specifically:
-    - demand and supply
-    - aggregated values (summed over assets)
-    - total values (summed over time)
-    - mean values (averaged over time) (todo: add this property)
-    - revenue and cost
-    - profit/loss
-    When a derived statistic is called for, the results are also cached (using @functools.cached_property).
-
-    Usage
-    -----
-    >>> from flask import session
-    >>> resource = Resource(session["resource"])
-    >>> resource.assets
-    >>> resource.display_name
-    >>> resource.load_sensor_data(Power)
-    >>> resource.cached_power_data
-    >>> resource.load_sensor_data(Price, sensor_key_attribute="market.name")
-    >>> resource.cached_price_data
-    """
-
-    # Todo: Our Resource may become an (Aggregated*)Asset with a grouping relationship with other Assets.
-    # Each component asset may have sensors that may have an is_scored_by relationship,
-    # with e.g. a price sensor of a market.
- # * Asset == AggregatedAsset if it groups assets of only 1 type, - # Asset == GeneralizedAsset if it groups assets of multiple types - - assets: List[Asset] - count: int - count_all: int - name: str - unique_asset_types: List[AssetType] - unique_asset_type_names: List[str] - cached_power_data: Dict[ - str, tb.BeliefsDataFrame - ] # todo: use standard library caching - cached_price_data: Dict[str, tb.BeliefsDataFrame] - asset_name_to_market_name_map: Dict[str, str] - - def __init__(self, name: str): - """The resource name is either the name of an asset group or an individual asset.""" - if name is None or name == "": - raise Exception("Empty resource name passed (%s)" % name) - self.name = name - - # Query assets for all users to set some public information about the resource - asset_queries = get_asset_group_queries( - custom_additional_groups=["renewables", "EVSE", "location"], - all_users=True, - ) - asset_query = ( - asset_queries[self.name] - if name in asset_queries - else Asset.query.filter_by(name=self.name) - ) # gather assets that are identified by this resource's name - - # List unique asset types and asset type names represented by this resource - assets = asset_query.all() - self.unique_asset_types = list(set([a.asset_type for a in assets])) - self.unique_asset_type_names = list(set([a.asset_type.name for a in assets])) - - # Count all assets in the system that are identified by this resource's name, no matter who is the owner - self.count_all = len(assets) - - # List all assets that are identified by this resource's name and accessible by the current user - self.assets = mask_inaccessible_assets(asset_query).all() - - # Count all assets that are identified by this resource's name and accessible by the current user - self.count = len(self.assets) - - # Construct a convenient mapping to get from an asset name to the market name of the asset's relevant market - self.asset_name_to_market_name_map = { - asset.name: asset.market.name if asset.market is not None else None - for asset in self.assets - } - - @property - def is_unique_asset(self) -> bool: - """Determines whether the resource represents a unique asset.""" - return [self.name] == [a.name for a in self.assets] - - @property - def display_name(self) -> str: - """Attempt to get a beautiful name to show if possible.""" - if self.is_unique_asset: - return self.assets[0].display_name - return self.name - - def is_eligible_for_comparing_individual_traces(self, max_traces: int = 7) -> bool: - """ - Decide whether comparing individual traces for assets in this resource - is a useful feature. - The number of assets that can be compared is parametrizable with max_traces. - Plot colors are reused if max_traces > 7, and run out if max_traces > 105. 
-        """
-        return len(self.assets) in range(2, max_traces + 1) and assets_share_location(
-            self.assets
-        )
-
-    @property
-    def hover_label(self) -> Optional[str]:
-        """Attempt to get a hover label to show if possible."""
-        label = p.join(
-            [
-                asset_type.hover_label
-                for asset_type in self.unique_asset_types
-                if asset_type.hover_label is not None
-            ]
-        )
-        return label if label else None
-
-    @property
-    def parameterized_name(self) -> str:
-        """Get a parameterized name for use in JavaScript."""
-        return parameterize(self.name)
-
-    def load_sensor_data(
-        self,
-        sensor_types: List[SensorType] = None,
-        start: datetime = None,
-        end: datetime = None,
-        resolution: str = None,
-        belief_horizon_window=(None, None),
-        belief_time_window=(None, None),
-        source_types: Optional[List[str]] = None,
-        exclude_source_types: Optional[List[str]] = None,
-    ) -> Resource:
-        """Load data for one or more assets and cache the results.
-        If the time range parameters are None, they are taken from the session.
-        The horizon window defaults to the latest measurement (anything more in the future than the
-        end of the time interval).
-        To load data from specific source types only, pass source_types (and/or exclude_source_types).
-
-        :returns: self (to allow piping)
-
-        Usage
-        -----
-        >>> resource = Resource(session["resource"])
-        >>> resource.load_sensor_data([Power], start=datetime(2014, 3, 1), end=datetime(2014, 3, 1))
-        >>> resource.cached_power_data
-        >>> resource.load_sensor_data([Power, Price], start=datetime(2014, 3, 1), end=datetime(2014, 3, 1)).cached_price_data
-        """
-
-        # Invalidate old caches
-        self.clear_cache()
-
-        # Look up all relevant sensor types for the given resource
-        if sensor_types is None:
-            # todo: after splitting Assets and Sensors, construct here a list of sensor types
-            sensor_types = [Power, Price, Weather]
-
-        # todo: after combining the Power, Price and Weather tables into one TimedBeliefs table,
-        #       retrieve data from different sensor types in a single query,
-        #       and cache the results grouped by sensor type (cached_price_data, cached_power_data, etc.)
-        for sensor_type in sensor_types:
-            if sensor_type == Power:
-                sensor_key_attribute = "name"
-            elif sensor_type == Price:
-                sensor_key_attribute = "market.name"
-            else:
-                raise NotImplementedError("Unsupported sensor type")
-
-            # Determine which sensors we need to query
-            names_of_resource_sensors = set(
-                coding_utils.rgetattr(asset, sensor_key_attribute)
-                for asset in self.assets
-            )
-
-            # Query the sensors
-            resource_data: Dict[str, tb.BeliefsDataFrame] = TimedBelief.search(
-                list(names_of_resource_sensors),
-                event_starts_after=start,
-                event_ends_before=end,
-                horizons_at_least=belief_horizon_window[0],
-                horizons_at_most=belief_horizon_window[1],
-                beliefs_after=belief_time_window[0],
-                beliefs_before=belief_time_window[1],
-                source_types=source_types,
-                exclude_source_types=exclude_source_types,
-                resolution=resolution,
-                sum_multiple=False,
-            )
-
-            # Cache the data
-            setattr(
-                self, f"cached_{sensor_type.__name__.lower()}_data", resource_data
-            )  # e.g.
cached_price_data for sensor type Price - return self - - @property - @check_cache("cached_power_data") - def power_data(self) -> Dict[str, tb.BeliefsDataFrame]: - return self.cached_power_data - - @property - @check_cache("cached_price_data") - def price_data(self) -> Dict[str, tb.BeliefsDataFrame]: - return self.cached_price_data - - @cached_property - def demand(self) -> Dict[str, tb.BeliefsDataFrame]: - """Returns each asset's demand as positive values.""" - return {k: get_demand_from_bdf(v) for k, v in self.power_data.items()} - - @cached_property - def supply(self) -> Dict[str, tb.BeliefsDataFrame]: - """Returns each asset's supply as positive values.""" - return {k: get_supply_from_bdf(v) for k, v in self.power_data.items()} - - @cached_property - def aggregate_power_data(self) -> tb.BeliefsDataFrame: - return aggregate_values(self.power_data) - - @cached_property - def aggregate_demand(self) -> tb.BeliefsDataFrame: - """Returns aggregate demand as positive values.""" - return get_demand_from_bdf(self.aggregate_power_data) - - @cached_property - def aggregate_supply(self) -> tb.BeliefsDataFrame: - """Returns aggregate supply (as positive values).""" - return get_supply_from_bdf(self.aggregate_power_data) - - @cached_property - def total_demand(self) -> Dict[str, float]: - """Returns each asset's total demand as a positive value.""" - return { - k: v.sum().values[0] - * time_utils.resolution_to_hour_factor(v.event_resolution) - for k, v in self.demand.items() - } - - @cached_property - def total_supply(self) -> Dict[str, float]: - """Returns each asset's total supply as a positive value.""" - return { - k: v.sum().values[0] - * time_utils.resolution_to_hour_factor(v.event_resolution) - for k, v in self.supply.items() - } - - @cached_property - def total_aggregate_demand(self) -> float: - """Returns total aggregate demand as a positive value.""" - return self.aggregate_demand.sum().values[ - 0 - ] * time_utils.resolution_to_hour_factor(self.aggregate_demand.event_resolution) - - @cached_property - def total_aggregate_supply(self) -> float: - """Returns total aggregate supply as a positive value.""" - return self.aggregate_supply.sum().values[ - 0 - ] * time_utils.resolution_to_hour_factor(self.aggregate_supply.event_resolution) - - @cached_property - def revenue(self) -> Dict[str, float]: - """Returns each asset's total revenue from supply.""" - revenue_dict = {} - for k, v in self.supply.items(): - market_name = self.asset_name_to_market_name_map[k] - if market_name is not None: - revenue_dict[k] = ( - simplify_index(v) * simplify_index(self.price_data[market_name]) - ).sum().values[0] * time_utils.resolution_to_hour_factor( - v.event_resolution - ) - else: - revenue_dict[k] = None - return revenue_dict - - @cached_property - def aggregate_revenue(self) -> float: - """Returns total aggregate revenue from supply.""" - return sum(self.revenue.values()) - - @cached_property - def cost(self) -> Dict[str, float]: - """Returns each asset's total cost from demand.""" - cost_dict = {} - for k, v in self.demand.items(): - market_name = self.asset_name_to_market_name_map[k] - if market_name is not None: - cost_dict[k] = ( - simplify_index(v) * simplify_index(self.price_data[market_name]) - ).sum().values[0] * time_utils.resolution_to_hour_factor( - v.event_resolution - ) - else: - cost_dict[k] = None - return cost_dict - - @cached_property - def aggregate_cost(self) -> float: - """Returns total aggregate cost from demand.""" - return sum(self.cost.values()) - - @cached_property - def 
aggregate_profit_or_loss(self) -> float: - """Returns total aggregate profit (loss is negative).""" - return self.aggregate_revenue - self.aggregate_cost - - def clear_cache(self): - self.cached_power_data = {} - self.cached_price_data = {} - for prop in coding_utils.methods_with_decorator(Resource, cached_property): - if prop.__name__ in self.__dict__: - del self.__dict__[prop.__name__] - - def __str__(self): - return self.display_name - - -def get_demand_from_bdf( - bdf: Union[pd.DataFrame, tb.BeliefsDataFrame] -) -> Union[pd.DataFrame, tb.BeliefsDataFrame]: - """Positive values become 0 and negative values become positive values.""" - return bdf.clip(upper=0).abs() - - -def get_supply_from_bdf( - bdf: Union[pd.DataFrame, tb.BeliefsDataFrame] -) -> Union[pd.DataFrame, tb.BeliefsDataFrame]: - """Negative values become 0.""" - return bdf.clip(lower=0) - - -def get_sensor_types(resource: Resource) -> List[WeatherSensorType]: - """Return a list of WeatherSensorType objects applicable to the given resource.""" - sensor_type_names = [] - for asset_type in resource.unique_asset_types: - sensor_type_names.extend(asset_type.weather_correlations) - unique_sensor_type_names = list(set(sensor_type_names)) - - sensor_types = [] - for name in unique_sensor_type_names: - sensor_type = WeatherSensorType.query.filter( - WeatherSensorType.name == name - ).one_or_none() - if sensor_type is not None: - sensor_types.append(sensor_type) - - return sensor_types - - -def group_assets_by_location(asset_list: List[Asset]) -> List[List[Asset]]: - groups = [] - - def key_function(x): - return x.location - - sorted_asset_list = sorted(asset_list, key=key_function) - for _k, g in groupby(sorted_asset_list, key=key_function): - groups.append(list(g)) - return groups diff --git a/flexmeasures/data/tests/conftest.py b/flexmeasures/data/tests/conftest.py index 41e065f29..ebf057e39 100644 --- a/flexmeasures/data/tests/conftest.py +++ b/flexmeasures/data/tests/conftest.py @@ -1,16 +1,15 @@ +from __future__ import annotations + import pytest from datetime import datetime, timedelta from random import random -from isodate import parse_duration import pandas as pd import numpy as np from flask_sqlalchemy import SQLAlchemy from statsmodels.api import OLS -from flexmeasures import User from flexmeasures.data.models.annotations import Annotation -from flexmeasures.data.models.assets import Asset from flexmeasures.data.models.data_sources import DataSource from flexmeasures.data.models.time_series import TimedBelief, Sensor from flexmeasures.data.models.generic_assets import GenericAsset, GenericAssetType @@ -28,7 +27,6 @@ def setup_test_data( add_market_prices, setup_assets, setup_generic_asset_types, - remove_seasonality_for_power_forecasts, ): """ Adding a few forecasting jobs (based on data made in flexmeasures.conftest). 
@@ -38,77 +36,20 @@ def setup_test_data( add_test_weather_sensor_and_forecasts(db, setup_generic_asset_types) print("Done setting up data for data tests") + return setup_assets @pytest.fixture(scope="function") def setup_fresh_test_data( fresh_db, setup_markets_fresh_db, - setup_roles_users_fresh_db, + setup_accounts_fresh_db, + setup_assets_fresh_db, setup_generic_asset_types_fresh_db, app, - fresh_remove_seasonality_for_power_forecasts, -): - db = fresh_db - setup_roles_users = setup_roles_users_fresh_db - setup_markets = setup_markets_fresh_db - - data_source = DataSource(name="Seita", type="demo script") - db.session.add(data_source) - db.session.flush() - - for asset_name in ["wind-asset-2", "solar-asset-1"]: - asset = Asset( - name=asset_name, - asset_type_name="wind" if "wind" in asset_name else "solar", - event_resolution=timedelta(minutes=15), - capacity_in_mw=1, - latitude=10, - longitude=100, - min_soc_in_mwh=0, - max_soc_in_mwh=0, - soc_in_mwh=0, - unit="MW", - market_id=setup_markets["epex_da"].id, - ) - asset.owner = User.query.get(setup_roles_users["Test Prosumer User"]) - db.session.add(asset) - - time_slots = pd.date_range( - datetime(2015, 1, 1), datetime(2015, 1, 1, 23, 45), freq="15T" - ) - values = [random() * (1 + np.sin(x / 15)) for x in range(len(time_slots))] - beliefs = [ - TimedBelief( - event_start=as_server_time(dt), - belief_horizon=parse_duration("PT0M"), - event_value=val, - sensor=asset.corresponding_sensor, - source=data_source, - ) - for dt, val in zip(time_slots, values) - ] - db.session.add_all(beliefs) +) -> dict[str, GenericAsset]: add_test_weather_sensor_and_forecasts(fresh_db, setup_generic_asset_types_fresh_db) - - -@pytest.fixture(scope="module", autouse=True) -def remove_seasonality_for_power_forecasts(db, setup_asset_types): - """Make sure the AssetType specs make us query only data we actually have in the test db""" - for asset_type in setup_asset_types.keys(): - setup_asset_types[asset_type].daily_seasonality = False - setup_asset_types[asset_type].weekly_seasonality = False - setup_asset_types[asset_type].yearly_seasonality = False - - -@pytest.fixture(scope="function") -def fresh_remove_seasonality_for_power_forecasts(db, setup_asset_types_fresh_db): - """Make sure the AssetType specs make us query only data we actually have in the test db""" - setup_asset_types = setup_asset_types_fresh_db - for asset_type in setup_asset_types.keys(): - setup_asset_types[asset_type].daily_seasonality = False - setup_asset_types[asset_type].weekly_seasonality = False - setup_asset_types[asset_type].yearly_seasonality = False + return setup_assets_fresh_db def add_test_weather_sensor_and_forecasts(db: SQLAlchemy, setup_generic_asset_types): diff --git a/flexmeasures/data/tests/test_forecasting_jobs.py b/flexmeasures/data/tests/test_forecasting_jobs.py index 85c16af88..531c18453 100644 --- a/flexmeasures/data/tests/test_forecasting_jobs.py +++ b/flexmeasures/data/tests/test_forecasting_jobs.py @@ -52,7 +52,13 @@ def test_forecasting_an_hour_of_wind(db, run_as_cli, app, setup_test_data): - data source was made, - forecasts have been made """ - wind_device_1 = Sensor.query.filter_by(name="wind-asset-1").one_or_none() + # asset has only 1 power sensor + wind_device_1: Sensor = setup_test_data["wind-asset-1"].sensors[0] + + # Remove each seasonality, so we don't query test data that isn't there + wind_device_1.set_attribute("daily_seasonality", False) + wind_device_1.set_attribute("weekly_seasonality", False) + wind_device_1.set_attribute("yearly_seasonality", 
False) assert get_data_source() is None @@ -88,11 +94,12 @@ def test_forecasting_an_hour_of_wind(db, run_as_cli, app, setup_test_data): def test_forecasting_two_hours_of_solar_at_edge_of_data_set( db, run_as_cli, app, setup_test_data ): - solar_device1: Sensor = Sensor.query.filter_by(name="solar-asset-1").one_or_none() + # asset has only 1 power sensor + solar_device_1: Sensor = setup_test_data["solar-asset-1"].sensors[0] last_power_datetime = ( ( - TimedBelief.query.filter(TimedBelief.sensor_id == solar_device1.id) + TimedBelief.query.filter(TimedBelief.sensor_id == solar_device_1.id) .filter(TimedBelief.belief_horizon == timedelta(hours=0)) .order_by(TimedBelief.event_start.desc()) ) @@ -112,7 +119,7 @@ def test_forecasting_two_hours_of_solar_at_edge_of_data_set( horizons=[ timedelta(hours=6) ], # so we want forecasts for 11.15pm (Jan 1st) to 0.15am (Jan 2nd) - sensor_id=solar_device1.id, + sensor_id=solar_device_1.id, custom_model_params=custom_model_params(), ) print("Job: %s" % job[0].id) @@ -120,13 +127,13 @@ def test_forecasting_two_hours_of_solar_at_edge_of_data_set( work_on_rq(app.queues["forecasting"], exc_handler=handle_forecasting_exception) forecasts = ( - TimedBelief.query.filter(TimedBelief.sensor_id == solar_device1.id) + TimedBelief.query.filter(TimedBelief.sensor_id == solar_device_1.id) .filter(TimedBelief.belief_horizon == horizon) .filter(TimedBelief.event_start > last_power_datetime) .all() ) assert len(forecasts) == 1 - check_aggregate(4, horizon, solar_device1.id) + check_aggregate(4, horizon, solar_device_1.id) def check_failures( @@ -176,12 +183,15 @@ def test_failed_forecasting_insufficient_data( ): """This one (as well as the fallback) should fail as there is no underlying data. (Power data is in 2015)""" - solar_device1: Sensor = Sensor.query.filter_by(name="solar-asset-1").one_or_none() + + # asset has only 1 power sensor + solar_device_1: Sensor = setup_test_data["solar-asset-1"].sensors[0] + create_forecasting_jobs( start_of_roll=as_server_time(datetime(2016, 1, 1, 20)), end_of_roll=as_server_time(datetime(2016, 1, 1, 22)), horizons=[timedelta(hours=1)], - sensor_id=solar_device1.id, + sensor_id=solar_device_1.id, custom_model_params=custom_model_params(), ) work_on_rq(app.queues["forecasting"], exc_handler=handle_forecasting_exception) @@ -192,12 +202,15 @@ def test_failed_forecasting_invalid_horizon( app, run_as_cli, clean_redis, setup_test_data ): """This one (as well as the fallback) should fail as the horizon is invalid.""" - solar_device1: Sensor = Sensor.query.filter_by(name="solar-asset-1").one_or_none() + + # asset has only 1 power sensor + solar_device_1: Sensor = setup_test_data["solar-asset-1"].sensors[0] + create_forecasting_jobs( start_of_roll=as_server_time(datetime(2015, 1, 1, 21)), end_of_roll=as_server_time(datetime(2015, 1, 1, 23)), horizons=[timedelta(hours=18)], - sensor_id=solar_device1.id, + sensor_id=solar_device_1.id, custom_model_params=custom_model_params(), ) work_on_rq(app.queues["forecasting"], exc_handler=handle_forecasting_exception) @@ -206,7 +219,10 @@ def test_failed_forecasting_invalid_horizon( def test_failed_unknown_model(app, clean_redis, setup_test_data): """This one should fail because we use a model search term which yields no model configurator.""" - solar_device1: Sensor = Sensor.query.filter_by(name="solar-asset-1").one_or_none() + + # asset has only 1 power sensor + solar_device_1: Sensor = setup_test_data["solar-asset-1"].sensors[0] + horizon = timedelta(hours=1) cmp = custom_model_params() @@ -216,7 +232,7 @@ 
def test_failed_unknown_model(app, clean_redis, setup_test_data): start_of_roll=as_server_time(datetime(2015, 1, 1, 12)), end_of_roll=as_server_time(datetime(2015, 1, 1, 14)), horizons=[horizon], - sensor_id=solar_device1.id, + sensor_id=solar_device_1.id, model_search_term="no-one-knows-this", custom_model_params=cmp, ) diff --git a/flexmeasures/data/tests/test_forecasting_jobs_fresh_db.py b/flexmeasures/data/tests/test_forecasting_jobs_fresh_db.py index a01b1cbfc..841e233b8 100644 --- a/flexmeasures/data/tests/test_forecasting_jobs_fresh_db.py +++ b/flexmeasures/data/tests/test_forecasting_jobs_fresh_db.py @@ -21,7 +21,8 @@ def test_forecasting_three_hours_of_wind( app, run_as_cli, setup_fresh_test_data, clean_redis ): - wind_device2: Sensor = Sensor.query.filter_by(name="wind-asset-2").one_or_none() + # asset has only 1 power sensor + wind_device_2: Sensor = setup_fresh_test_data["wind-asset-2"].sensors[0] # makes 12 forecasts horizon = timedelta(hours=1) @@ -29,7 +30,7 @@ def test_forecasting_three_hours_of_wind( start_of_roll=as_server_time(datetime(2015, 1, 1, 10)), end_of_roll=as_server_time(datetime(2015, 1, 1, 13)), horizons=[horizon], - sensor_id=wind_device2.id, + sensor_id=wind_device_2.id, custom_model_params=custom_model_params(), ) print("Job: %s" % job[0].id) @@ -37,7 +38,7 @@ def test_forecasting_three_hours_of_wind( work_on_rq(app.queues["forecasting"], exc_handler=handle_forecasting_exception) forecasts = ( - TimedBelief.query.filter(TimedBelief.sensor_id == wind_device2.id) + TimedBelief.query.filter(TimedBelief.sensor_id == wind_device_2.id) .filter(TimedBelief.belief_horizon == horizon) .filter( (TimedBelief.event_start >= as_server_time(datetime(2015, 1, 1, 11))) @@ -46,16 +47,14 @@ def test_forecasting_three_hours_of_wind( .all() ) assert len(forecasts) == 12 - check_aggregate(12, horizon, wind_device2.id) + check_aggregate(12, horizon, wind_device_2.id) def test_forecasting_two_hours_of_solar( app, run_as_cli, setup_fresh_test_data, clean_redis ): - solar_device1: Sensor = Sensor.query.filter_by(name="solar-asset-1").one_or_none() - wind_device2: Sensor = Sensor.query.filter_by(name="wind-asset-2").one_or_none() - print(solar_device1) - print(wind_device2) + # asset has only 1 power sensor + solar_device_1: Sensor = setup_fresh_test_data["solar-asset-1"].sensors[0] # makes 8 forecasts horizon = timedelta(hours=1) @@ -63,14 +62,14 @@ def test_forecasting_two_hours_of_solar( start_of_roll=as_server_time(datetime(2015, 1, 1, 12)), end_of_roll=as_server_time(datetime(2015, 1, 1, 14)), horizons=[horizon], - sensor_id=solar_device1.id, + sensor_id=solar_device_1.id, custom_model_params=custom_model_params(), ) print("Job: %s" % job[0].id) work_on_rq(app.queues["forecasting"], exc_handler=handle_forecasting_exception) forecasts = ( - TimedBelief.query.filter(TimedBelief.sensor_id == solar_device1.id) + TimedBelief.query.filter(TimedBelief.sensor_id == solar_device_1.id) .filter(TimedBelief.belief_horizon == horizon) .filter( (TimedBelief.event_start >= as_server_time(datetime(2015, 1, 1, 13))) @@ -79,11 +78,15 @@ def test_forecasting_two_hours_of_solar( .all() ) assert len(forecasts) == 8 - check_aggregate(8, horizon, solar_device1.id) + check_aggregate(8, horizon, solar_device_1.id) @pytest.mark.parametrize( - "model_to_start_with, model_version", [("failing-test", 1), ("linear-OLS", 2)] + "model_to_start_with, model_version", + [ + ("failing-test", 1), + ("linear-OLS", 2), + ], ) def test_failed_model_with_too_much_training_then_succeed_with_fallback( app, @@ -100,7 
+103,14 @@ def test_failed_model_with_too_much_training_then_succeed_with_fallback( (fail-test falls back to linear & linear falls back to naive). As a result, there should be forecasts in the DB. """ - solar_device1: Sensor = Sensor.query.filter_by(name="solar-asset-1").one_or_none() + # asset has only 1 power sensor + solar_device_1: Sensor = setup_fresh_test_data["solar-asset-1"].sensors[0] + + # Remove each seasonality, so we don't query test data that isn't there + solar_device_1.set_attribute("daily_seasonality", False) + solar_device_1.set_attribute("weekly_seasonality", False) + solar_device_1.set_attribute("yearly_seasonality", False) + horizon_hours = 1 horizon = timedelta(hours=horizon_hours) @@ -115,7 +125,7 @@ def test_failed_model_with_too_much_training_then_succeed_with_fallback( start_of_roll=as_server_time(datetime(2015, 1, 1, hour_start)), end_of_roll=as_server_time(datetime(2015, 1, 1, hour_start + 2)), horizons=[horizon], - sensor_id=solar_device1.id, + sensor_id=solar_device_1.id, model_search_term=model_to_start_with, custom_model_params=cmp, ) @@ -132,7 +142,7 @@ def test_failed_model_with_too_much_training_then_succeed_with_fallback( def make_query(the_horizon_hours: int) -> Query: the_horizon = timedelta(hours=the_horizon_hours) return ( - TimedBelief.query.filter(TimedBelief.sensor_id == solar_device1.id) + TimedBelief.query.filter(TimedBelief.sensor_id == solar_device_1.id) .filter(TimedBelief.belief_horizon == the_horizon) .filter( ( @@ -154,7 +164,7 @@ def make_query(the_horizon_hours: int) -> Query: forecasts = make_query(the_horizon_hours=horizon_hours).all() assert len(forecasts) == 8 - check_aggregate(8, horizon, solar_device1.id) + check_aggregate(8, horizon, solar_device_1.id) if model_to_start_with == "linear-OLS": existing_data = make_query(the_horizon_hours=0).all() diff --git a/flexmeasures/data/tests/test_queries.py b/flexmeasures/data/tests/test_queries.py index 109cc0219..72737af8c 100644 --- a/flexmeasures/data/tests/test_queries.py +++ b/flexmeasures/data/tests/test_queries.py @@ -40,7 +40,8 @@ ], ) def test_collect_power(db, app, query_start, query_end, num_values, setup_test_data): - wind_device_1 = Sensor.query.filter_by(name="wind-asset-1").one_or_none() + # asset has only 1 power sensor + wind_device_1: Sensor = setup_test_data["wind-asset-1"].sensors[0] data = TimedBelief.query.filter(TimedBelief.sensor_id == wind_device_1.id).all() print(data) bdf: tb.BeliefsDataFrame = TimedBelief.search( @@ -94,7 +95,8 @@ def test_collect_power(db, app, query_start, query_end, num_values, setup_test_d def test_collect_power_resampled( db, app, query_start, query_end, resolution, num_values, setup_test_data ): - wind_device_1 = Sensor.query.filter_by(name="wind-asset-1").one_or_none() + # asset has only 1 power sensor + wind_device_1: Sensor = setup_test_data["wind-asset-1"].sensors[0] bdf: tb.BeliefsDataFrame = TimedBelief.search( wind_device_1.name, event_starts_after=query_start, @@ -207,7 +209,8 @@ def test_multiplication_with_both_empty_dataframe(): @pytest.mark.parametrize("check_empty_frame", [True, False]) def test_simplify_index(setup_test_data, check_empty_frame): """Check whether simplify_index retains the event resolution.""" - wind_device_1 = Sensor.query.filter_by(name="wind-asset-1").one_or_none() + # asset has only 1 power sensor + wind_device_1: Sensor = setup_test_data["wind-asset-1"].sensors[0] bdf: tb.BeliefsDataFrame = TimedBelief.search( wind_device_1.name, event_starts_after=datetime(2015, 1, 1, tzinfo=pytz.utc), diff --git 
a/flexmeasures/data/tests/test_scheduling_jobs.py b/flexmeasures/data/tests/test_scheduling_jobs.py index cdd2f1a4a..5bdd6c09b 100644 --- a/flexmeasures/data/tests/test_scheduling_jobs.py +++ b/flexmeasures/data/tests/test_scheduling_jobs.py @@ -6,7 +6,7 @@ from rq.job import Job from flexmeasures.data.models.data_sources import DataSource -from flexmeasures.data.models.time_series import Sensor, TimedBelief +from flexmeasures.data.models.time_series import TimedBelief from flexmeasures.data.tests.utils import work_on_rq, exception_reporter from flexmeasures.data.services.scheduling import ( create_scheduling_job, @@ -20,7 +20,7 @@ def test_scheduling_a_battery(db, app, add_battery_assets, setup_test_data): - schedule has been made """ - battery = Sensor.query.filter(Sensor.name == "Test battery").one_or_none() + battery = add_battery_assets["Test battery"].sensors[0] tz = pytz.timezone("Europe/Amsterdam") start = tz.localize(datetime(2015, 1, 2)) end = tz.localize(datetime(2015, 1, 3)) @@ -104,7 +104,7 @@ def test_assigning_custom_scheduler(db, app, add_battery_assets, is_path: bool): """ scheduler_specs["module"] = make_module_descr(is_path) - battery = Sensor.query.filter(Sensor.name == "Test battery").one_or_none() + battery = add_battery_assets["Test battery"].sensors[0] battery.attributes["custom-scheduler"] = scheduler_specs tz = pytz.timezone("Europe/Amsterdam") diff --git a/flexmeasures/data/tests/test_scheduling_jobs_fresh_db.py b/flexmeasures/data/tests/test_scheduling_jobs_fresh_db.py index 02b966a62..053682f59 100644 --- a/flexmeasures/data/tests/test_scheduling_jobs_fresh_db.py +++ b/flexmeasures/data/tests/test_scheduling_jobs_fresh_db.py @@ -4,7 +4,7 @@ import pandas as pd from flexmeasures.data.models.data_sources import DataSource -from flexmeasures.data.models.time_series import Sensor, TimedBelief +from flexmeasures.data.models.time_series import TimedBelief from flexmeasures.data.services.scheduling import create_scheduling_job from flexmeasures.data.tests.utils import work_on_rq, exception_reporter @@ -22,9 +22,7 @@ def test_scheduling_a_charging_station( target_soc = 5 duration_until_target = timedelta(hours=2) - charging_station = Sensor.query.filter( - Sensor.name == "Test charging station" - ).one_or_none() + charging_station = add_charging_station_assets["Test charging station"].sensors[0] tz = pytz.timezone("Europe/Amsterdam") start = tz.localize(datetime(2015, 1, 2)) end = tz.localize(datetime(2015, 1, 3)) diff --git a/flexmeasures/data/tests/test_scheduling_repeated_jobs.py b/flexmeasures/data/tests/test_scheduling_repeated_jobs.py index 9ede63229..676be3043 100644 --- a/flexmeasures/data/tests/test_scheduling_repeated_jobs.py +++ b/flexmeasures/data/tests/test_scheduling_repeated_jobs.py @@ -9,6 +9,7 @@ from rq.job import Job, JobStatus from flexmeasures.data.models.data_sources import DataSource +from flexmeasures.data.models.generic_assets import GenericAsset from flexmeasures.data.models.time_series import Sensor from flexmeasures.data.tests.utils import work_on_rq, exception_reporter from flexmeasures.data.services.scheduling import create_scheduling_job @@ -132,9 +133,15 @@ def test_hashing(db, app, add_charging_station_assets, setup_test_data): target_soc = 5 duration_until_target = timedelta(hours=2) - charging_station = Sensor.query.filter( - Sensor.name == "Test charging station" - ).one_or_none() + # Here, we need to obtain the object through a db query, otherwise we run into session issues with deepcopy later on + # charging_station = 
add_charging_station_assets["Test charging station"].sensors[0] + charging_station = ( + Sensor.query.filter(Sensor.name == "power") + .join(GenericAsset) + .filter(GenericAsset.id == Sensor.generic_asset_id) + .filter(GenericAsset.name == "Test charging stations") + .one_or_none() + ) tz = pytz.timezone("Europe/Amsterdam") start = tz.localize(datetime(2015, 1, 2)) end = tz.localize(datetime(2015, 1, 3)) @@ -156,7 +163,7 @@ def test_hashing(db, app, add_charging_station_assets, setup_test_data): print("RIGHT HASH: ", hash) # checks that hashes are consistent between different runtime calls - assert hash == "4ed0V9h247brxusBYk3ug9Cy7miPnynOeSNBT8hd5Mo=" + assert hash == "oAZ8tzzq50zl3I+7oFeabrj1QeH709mZdXWbpkn0krA=" kwargs2 = copy.deepcopy(kwargs) args2 = copy.deepcopy(args) @@ -180,9 +187,7 @@ def test_scheduling_multiple_triggers( duration_until_target = timedelta(hours=2) - charging_station = Sensor.query.filter( - Sensor.name == "Test charging station" - ).one_or_none() + charging_station = add_charging_station_assets["Test charging station"].sensors[0] tz = pytz.timezone("Europe/Amsterdam") start = tz.localize(datetime(2015, 1, 2)) end = tz.localize(datetime(2015, 1, 3)) diff --git a/flexmeasures/data/tests/test_scheduling_repeated_jobs_fresh_db.py b/flexmeasures/data/tests/test_scheduling_repeated_jobs_fresh_db.py index ef267a5f1..8852e5e64 100644 --- a/flexmeasures/data/tests/test_scheduling_repeated_jobs_fresh_db.py +++ b/flexmeasures/data/tests/test_scheduling_repeated_jobs_fresh_db.py @@ -4,7 +4,6 @@ import pytz -from flexmeasures.data.models.time_series import Sensor from flexmeasures.data.tests.utils import work_on_rq, exception_reporter from flexmeasures.data.services.scheduling import create_scheduling_job from flexmeasures.data.models.planning import Scheduler @@ -12,7 +11,6 @@ class FailingScheduler(Scheduler): - __author__ = "Test Organization" __version__ = "1" @@ -41,9 +39,9 @@ def test_requeue_failing_job( end = tz.localize(datetime(2016, 1, 3)) resolution = timedelta(minutes=15) - charging_station = Sensor.query.filter( - Sensor.name == "Test charging station" - ).one_or_none() + charging_station = add_charging_station_assets_fresh_db[ + "Test charging station" + ].sensors[0] custom_scheduler = { "module": "flexmeasures.data.tests.test_scheduling_repeated_jobs_fresh_db", diff --git a/flexmeasures/data/tests/test_user_services.py b/flexmeasures/data/tests/test_user_services.py index 8cc17437c..932fb4556 100644 --- a/flexmeasures/data/tests/test_user_services.py +++ b/flexmeasures/data/tests/test_user_services.py @@ -7,7 +7,7 @@ delete_user, InvalidFlexMeasuresUser, ) -from flexmeasures.data.models.assets import Asset +from flexmeasures.data.models.generic_assets import GenericAsset from flexmeasures.data.models.data_sources import DataSource from flexmeasures.data.models.time_series import TimedBelief @@ -79,26 +79,42 @@ def test_create_invalid_user( assert "without knowing the name of the account" in str(exc_info.value) -def test_delete_user(fresh_db, setup_roles_users_fresh_db, app): - """Assert user has assets and power measurements. 
Deleting removes all of that.""" +def test_delete_user(fresh_db, setup_roles_users_fresh_db, setup_assets_fresh_db, app): + """Check that deleting a user does not lead to deleting their organisation's (asset/sensor/beliefs) data.""" prosumer: User = find_user_by_email("test_prosumer_user@seita.nl") num_users_before = User.query.count() - user_assets_with_measurements_before = Asset.query.filter( - Asset.owner_id == prosumer.id, Asset.asset_type_name.in_(["wind", "solar"]) - ).all() - asset_ids = [asset.id for asset in user_assets_with_measurements_before] - for asset_id in asset_ids: - num_power_measurements = TimedBelief.query.filter( - TimedBelief.sensor_id == asset_id - ).count() - assert num_power_measurements == 96 + + # Find assets belonging to the user's organisation + asset_query = GenericAsset.query.filter( + GenericAsset.account_id == prosumer.account_id + ) + assets_before = asset_query.all() + assert ( + len(assets_before) > 0 + ), "Test assets should have been set up, otherwise we'd not be testing whether they're kept." + + # Find all the organisation's sensors + sensors_before = [] + for asset in assets_before: + sensors_before.extend(asset.sensors) + + # Count all the organisation's beliefs + beliefs_query = TimedBelief.query.filter( + TimedBelief.sensor_id.in_([sensor.id for sensor in sensors_before]) + ) + num_beliefs_before = beliefs_query.count() + assert ( + num_beliefs_before > 0 + ), "Some beliefs should have been set up, otherwise we'd not be testing whether they're kept." + + # Delete the user delete_user(prosumer) assert find_user_by_email("test_prosumer_user@seita.nl") is None - user_assets_after = Asset.query.filter(Asset.owner_id == prosumer.id).all() - assert len(user_assets_after) == 0 assert User.query.count() == num_users_before - 1 - for asset_id in asset_ids: - num_power_measurements = TimedBelief.query.filter( - TimedBelief.sensor_id == asset_id - ).count() - assert num_power_measurements == 0 + + # Check whether the organisation's assets, sensors and beliefs were kept + assets_after = asset_query.all() + assert assets_after == assets_before + + num_beliefs_after = beliefs_query.count() + assert num_beliefs_after == num_beliefs_before diff --git a/flexmeasures/ui/static/css/flexmeasures.css b/flexmeasures/ui/static/css/flexmeasures.css index 348317962..0fb828474 100644 --- a/flexmeasures/ui/static/css/flexmeasures.css +++ b/flexmeasures/ui/static/css/flexmeasures.css @@ -1041,10 +1041,6 @@ body .dataTables_wrapper .dataTables_paginate .paginate_button.current:hover { /* ---- Date picker ---- */ -.datetimepicker input { - width: 100%; -} - .litepicker { font-size: 14px; } diff --git a/flexmeasures/ui/static/js/daterangepicker-init.js b/flexmeasures/ui/static/js/daterangepicker-init.js deleted file mode 100644 index 5e8c98f98..000000000 --- a/flexmeasures/ui/static/js/daterangepicker-init.js +++ /dev/null @@ -1,30 +0,0 @@ -$(document).ready(function() { - - $('input[name="daterange"]').daterangepicker({ - "timePicker": true, - "timePickerIncrement": 15, - locale: { - format: 'YYYY-MM-DD h:mm A' - }, - "ranges": { - 'Tomorrow': [moment().add(1, 'day').startOf('day'), moment().add(1, 'day').endOf('day')], - 'Today': [moment().startOf('day'), moment().endOf('day')], - 'Yesterday': [moment().subtract(1, 'days').startOf('day'), moment().subtract(1, 'days').endOf('day')], - 'This week': [moment().startOf('week').startOf('week'), moment().endOf('week').endOf('week')], - 'Last 7 Days': [moment().subtract(6, 'days').startOf('day'), moment().endOf('day')], - 'Last 30 
Days': [moment().subtract(29, 'days').startOf('day'), moment().endOf('day')], - 'This Month': [moment().startOf('month').startOf('month'), moment().endOf('month').endOf('month')], - 'Last Month': [moment().subtract(1, 'month').startOf('month'), moment().subtract(1, 'month').endOf('month')] - }, - "linkedCalendars": false, - "startDate": timerangeStart, - "endDate": timerangeEnd - }, function(start, end, label) { - console.log('New date range selected: ' + start.format('YYYY-MM-DD HH:mm') + ' to ' + end.format('YYYY-MM-DD HH:mm') + ' (predefined range: ' + label + ')'); - $("#datepicker_form_start_time").val(start.format('YYYY-MM-DD HH:mm') ); - $("#datepicker_form_end_time").val(end.format('YYYY-MM-DD HH:mm') ); - // remove any URL params from an earlier call and point to whatever resource is actually selected - $("#datepicker_form").attr("action", location.pathname + "?resource=" + $("#resource").val()); - $("#datepicker_form").submit(); // reload page with new time range - }); -}); diff --git a/flexmeasures/ui/templates/base.html b/flexmeasures/ui/templates/base.html index 9a5405e16..294cd45ea 100644 --- a/flexmeasures/ui/templates/base.html +++ b/flexmeasures/ui/templates/base.html @@ -24,10 +24,6 @@ - {% if show_datepicker %} - - {% endif %} @@ -41,7 +37,7 @@ {% if active_page == "tasks" %} - {% elif active_page in ("assets", "users", "portfolio","accounts") %} + {% elif active_page in ("assets", "users", "accounts") %} {% endif %} {% if extra_css %} @@ -107,9 +103,9 @@ {% else %}
  • - {{ caption|e }} @@ -141,38 +137,9 @@ - - - {# Div blocks that child pages can reference #} {% block divs %} - - {% block datetimepicker %} - -
    -
    -
    -
    -
    - -
    - -
    -
    -
    - -
    - - -
    - - - {% endblock datetimepicker %} - {% block forecastpicker %}
    @@ -670,14 +637,6 @@ - {% if show_datepicker %} - - {% endif %} - - - {% if show_datepicker %} - - {% endif %} {% endblock scripts %} diff --git a/flexmeasures/ui/templates/crud/assets.html b/flexmeasures/ui/templates/crud/assets.html index e1ccdd368..987f31fd2 100644 --- a/flexmeasures/ui/templates/crud/assets.html +++ b/flexmeasures/ui/templates/crud/assets.html @@ -65,14 +65,6 @@

    Asset overview {{ asset.sensors | length }} - - {% endfor %} diff --git a/flexmeasures/ui/tests/conftest.py b/flexmeasures/ui/tests/conftest.py index de88760e7..d2b78f811 100644 --- a/flexmeasures/ui/tests/conftest.py +++ b/flexmeasures/ui/tests/conftest.py @@ -1,8 +1,6 @@ import pytest from flexmeasures.data.services.users import create_user -from flexmeasures.data.models.assets import Asset -from flexmeasures.data.models.weather import WeatherSensor, WeatherSensorType from flexmeasures.ui.tests.utils import login, logout @@ -33,16 +31,9 @@ def setup_ui_test_data( setup_roles_users, setup_markets, setup_sources, - setup_asset_types, + setup_generic_asset_types, ): - """ - Create another prosumer, without data, and an admin - Also, a weather sensor (and sensor type). - - TODO: review if any of these are really needed (might be covered now by main conftest) - """ - print("Setting up data for UI tests on %s" % db.engine) - + """Create an admin.""" create_user( username="Site Admin", email="flexmeasures-admin@seita.nl", @@ -50,38 +41,3 @@ def setup_ui_test_data( account_name=setup_accounts["Prosumer"].name, user_roles=dict(name="admin", description="A site admin."), ) - - test_user_ui = create_user( - username=" Test Prosumer User UI", - email="test_user_ui@seita.nl", - password="testtest", - account_name=setup_accounts["Prosumer"].name, - ) - asset = Asset( - name="solar pane 1", - display_name="Solar Pane 1", - asset_type_name="solar", - unit="MW", - capacity_in_mw=10, - latitude=10, - longitude=100, - min_soc_in_mwh=0, - max_soc_in_mwh=0, - soc_in_mwh=0, - ) - db.session.add(asset) - asset.owner = test_user_ui - - # Create 1 weather sensor - test_sensor_type = WeatherSensorType(name="irradiance") - db.session.add(test_sensor_type) - sensor = WeatherSensor( - name="irradiance_sensor", - weather_sensor_type_name="irradiance", - latitude=33.4843866, - longitude=126, - unit="kW/m²", - ) - db.session.add(sensor) - - print("Done setting up data for UI tests") diff --git a/flexmeasures/ui/tests/test_asset_crud.py b/flexmeasures/ui/tests/test_asset_crud.py index d673e10aa..ae316910a 100644 --- a/flexmeasures/ui/tests/test_asset_crud.py +++ b/flexmeasures/ui/tests/test_asset_crud.py @@ -56,7 +56,7 @@ def test_new_asset_page(client, setup_assets, as_admin): def test_asset_page(db, client, setup_assets, requests_mock, as_prosumer_user1): user = find_user_by_email("test_prosumer_user@seita.nl") - asset = user.assets[0] + asset = user.account.generic_assets[0] db.session.expunge(user) mock_asset = mock_asset_response(as_list=False) mock_asset["latitude"] = asset.latitude diff --git a/flexmeasures/ui/utils/view_utils.py b/flexmeasures/ui/utils/view_utils.py index 1123f108e..d8a03a5d0 100644 --- a/flexmeasures/ui/utils/view_utils.py +++ b/flexmeasures/ui/utils/view_utils.py @@ -4,22 +4,15 @@ import json import os import subprocess -from datetime import datetime from flask import render_template, request, session, current_app from flask_security.core import current_user -from werkzeug.exceptions import BadRequest -import iso8601 from flexmeasures import __version__ as flexmeasures_version from flexmeasures.auth.policy import user_has_admin_access from flexmeasures.utils import time_utils from flexmeasures.ui import flexmeasures_ui from flexmeasures.data.models.user import User, Account -from flexmeasures.data.models.assets import Asset -from flexmeasures.data.models.markets import Market -from flexmeasures.data.models.weather import WeatherSensorType -from flexmeasures.data.services.resources import 
Resource
 from flexmeasures.ui.utils.chart_defaults import chart_options
@@ -40,21 +33,11 @@ def render_flexmeasures_template(html_filename: str, **variables):
     ):
         variables["show_queues"] = True
 
-    variables["start_time"] = time_utils.get_default_start_time()
-    if "start_time" in session:
-        variables["start_time"] = session["start_time"]
-
-    variables["end_time"] = time_utils.get_default_end_time()
-    if "end_time" in session:
-        variables["end_time"] = session["end_time"]
-
     variables["event_starts_after"] = session.get("event_starts_after")
     variables["event_ends_before"] = session.get("event_ends_before")
     variables["chart_type"] = session.get("chart_type", "bar_chart")
     variables["page"] = html_filename.split("/")[-1].replace(".html", "")
-    if "show_datepicker" not in variables:
-        variables["show_datepicker"] = variables["page"] in ("analytics", "portfolio")
 
     variables["resolution"] = session.get("resolution", "")
     variables["resolution_human"] = time_utils.freq_label_to_human_readable_label(
@@ -122,193 +105,6 @@ def set_session_variables(*var_names: str):
         session[var_name] = var
 
 
-def set_time_range_for_session():
-    """Set the time range on the session if it is not yet set.
-    The daterangepicker sends times as tz-aware UTC strings.
-    We re-interpret them as being in the server's timezone.
-    Also set the forecast horizon, if given.
-
-    TODO: event_[starts|ends]_before are used on the new asset and sensor pages.
-          We simply store the UTC strings.
-          It might be that the other settings & logic can be deprecated when we clean house.
-          Tip: grep for timerangeEnd, where end_time is used in our base template,
-          and then used in the daterangepicker. We seem to use litepicker now.
-    """
-    if "start_time" in request.values:
-        session["start_time"] = time_utils.localized_datetime(
-            iso8601.parse_date(request.values.get("start_time"))
-        )
-    elif "start_time" not in session:
-        session["start_time"] = time_utils.get_default_start_time()
-    else:
-        if (
-            session["start_time"].tzinfo is None
-        ):  # session storage seems to lose tz info and becomes UTC
-            session["start_time"] = time_utils.as_server_time(session["start_time"])
-
-    session["event_starts_after"] = request.values.get("event_starts_after")
-    session["event_ends_before"] = request.values.get("event_ends_before")
-    if "end_time" in request.values:
-        session["end_time"] = time_utils.localized_datetime(
-            iso8601.parse_date(request.values.get("end_time"))
-        )
-    elif "end_time" not in session:
-        session["end_time"] = time_utils.get_default_end_time()
-    else:
-        if session["end_time"].tzinfo is None:
-            session["end_time"] = time_utils.as_server_time(session["end_time"])
-
-    # Our demo server's UI should only work with the current year's data
-    if current_app.config.get("FLEXMEASURES_MODE", "") == "demo":
-        session["start_time"] = session["start_time"].replace(year=datetime.now().year)
-        session["end_time"] = session["end_time"].replace(year=datetime.now().year)
-        if session["start_time"] >= session["end_time"]:
-            session["start_time"], session["end_time"] = (
-                session["end_time"],
-                session["start_time"],
-            )
-
-    if session["start_time"] >= session["end_time"]:
-        raise BadRequest(
-            "Start time %s is not before end time %s."
- % (session["start_time"], session["end_time"]) - ) - - session["resolution"] = time_utils.decide_resolution( - session["start_time"], session["end_time"] - ) - - if "forecast_horizon" in request.values: - session["forecast_horizon"] = request.values.get("forecast_horizon") - allowed_horizons = time_utils.forecast_horizons_for(session["resolution"]) - if ( - session.get("forecast_horizon") not in allowed_horizons - and len(allowed_horizons) > 0 - ): - session["forecast_horizon"] = allowed_horizons[0] - - -def ensure_timing_vars_are_set( - time_window: tuple[datetime | None, datetime | None], - resolution: str | None, -) -> tuple[tuple[datetime, datetime], str]: - """ - Ensure that time window and resolution variables are set, - even if we don't have them available ― in that case, - get them from the session. - """ - start = time_window[0] - end = time_window[-1] - if None in (start, end, resolution): - current_app.logger.warning("Setting time range for session.") - set_time_range_for_session() - start_out: datetime = session["start_time"] - end_out: datetime = session["end_time"] - resolution_out: str = session["resolution"] - else: - start_out = start # type: ignore - end_out = end # type: ignore - resolution_out = resolution # type: ignore - - return (start_out, end_out), resolution_out - - -def set_session_market(resource: Resource) -> Market: - """Set session["market"] to something, based on the available markets or the request. - Returns the selected market, or None.""" - market = resource.assets[0].market - if market is not None: - session["market"] = market.name - elif "market" not in session: - session["market"] = None - if ( - "market" in request.args - ): # [GET] Set by user clicking on a link somewhere (e.g. dashboard) - session["market"] = request.args["market"] - if ( - "market" in request.form - ): # [POST] Set by user in drop-down field. This overwrites GET, as the URL remains. - session["market"] = request.form["market"] - return Market.query.filter(Market.name == session["market"]).one_or_none() - - -def set_session_sensor_type( - accepted_sensor_types: list[WeatherSensorType], -) -> WeatherSensorType: - """Set session["sensor_type"] to something, based on the available sensor types or the request. - Returns the selected sensor type, or None.""" - - sensor_type_name = "" - if "sensor_type" in session: - sensor_type_name = session["sensor_type"] - if ( - "sensor_type" in request.args - ): # [GET] Set by user clicking on a link somewhere (e.g. dashboard) - sensor_type_name = request.args["sensor_type"] - if ( - "sensor_type" in request.form - ): # [POST] Set by user in drop-down field. This overwrites GET, as the URL remains. - sensor_type_name = request.form["sensor_type"] - requested_sensor_type = WeatherSensorType.query.filter( - WeatherSensorType.name == sensor_type_name - ).one_or_none() - if ( - requested_sensor_type not in accepted_sensor_types - and len(accepted_sensor_types) > 0 - ): - sensor_type = accepted_sensor_types[0] - session["sensor_type"] = sensor_type.name - return sensor_type - elif len(accepted_sensor_types) == 0: - session["sensor_type"] = None - else: - session["sensor_type"] = requested_sensor_type.name - return requested_sensor_type - - -def set_session_resource( - assets: list[Asset], groups_with_assets: list[str] -) -> Resource | None: - """ - Set session["resource"] to something, based on the available asset groups or the request. - - Returns the selected resource instance, or None. 
- """ - if ( - "resource" in request.args - ): # [GET] Set by user clicking on a link somewhere (e.g. dashboard) - session["resource"] = request.args["resource"] - if ( - "resource" in request.form - ): # [POST] Set by user in drop-down field. This overwrites GET, as the URL remains. - session["resource"] = request.form["resource"] - - if "resource" not in session: # set some default, if possible - if len(groups_with_assets) > 0: - session["resource"] = groups_with_assets[0] - elif len(assets) > 0: - session["resource"] = assets[0].name - else: - return None - - return Resource(session["resource"]) - - -def set_individual_traces_for_session(): - """ - Set session["showing_individual_traces_for"] to a value ("none", "power", "schedules"). - """ - var_name = "showing_individual_traces_for" - if var_name not in session: - session[var_name] = "none" # default setting: we show traces aggregated - if var_name in request.values and request.values[var_name] in ( - "none", - "power", - "schedules", - ): - session[var_name] = request.values[var_name] - - def get_git_description() -> tuple[str, int, str]: """ Get information about the SCM (git) state if possible (if a .git directory exists). diff --git a/flexmeasures/utils/coding_utils.py b/flexmeasures/utils/coding_utils.py index 3994a7b72..39fd2e9da 100644 --- a/flexmeasures/utils/coding_utils.py +++ b/flexmeasures/utils/coding_utils.py @@ -9,79 +9,6 @@ from flask import current_app -def make_registering_decorator(foreign_decorator): - """ - Returns a copy of foreign_decorator, which is identical in every - way(*), except also appends a .decorator property to the callable it - spits out. - - # (*)We can be somewhat "hygienic", but new_decorator still isn't signature-preserving, - i.e. you will not be able to get a runtime list of parameters. For that, you need hackish libraries... - but in this case, the only argument is func, so it's not a big issue - - Works on outermost decorators, based on Method 3 of https://stackoverflow.com/a/5910893/13775459 - """ - - def new_decorator(func): - # Call to new_decorator(method) - # Exactly like old decorator, but output keeps track of what decorated it - r = foreign_decorator( - func - ) # apply foreign_decorator, like call to foreign_decorator(method) would have done - r.decorator = new_decorator # keep track of decorator - r.original = func # keep track of decorated function - return r - - new_decorator.__name__ = foreign_decorator.__name__ - new_decorator.__doc__ = foreign_decorator.__doc__ - - return new_decorator - - -def methods_with_decorator(cls, decorator): - """ - Returns all methods in CLS with DECORATOR as the - outermost decorator. - - DECORATOR must be a "registering decorator"; one - can make any decorator "registering" via the - make_registering_decorator function. - - Doesn't work for the @property decorator, but does work for the @functools.cached_property decorator. - - Works on outermost decorators, based on Method 3 of https://stackoverflow.com/a/5910893/13775459 - """ - for maybe_decorated in cls.__dict__.values(): - if hasattr(maybe_decorated, "decorator"): - if maybe_decorated.decorator == decorator: - if hasattr(maybe_decorated, "original"): - yield maybe_decorated.original - else: - yield maybe_decorated - - -def rgetattr(obj, attr, *args): - """Get chained properties. 
-
-    Usage
-    -----
-    >>> class Pet:
-    ...     def __init__(self):
-    ...         self.favorite_color = "orange"
-    >>> class Person:
-    ...     def __init__(self):
-    ...         self.pet = Pet()
-    >>> p = Person()
-    >>> rgetattr(p, 'pet.favorite_color')
-    'orange'
-
-    From https://stackoverflow.com/a/31174427/13775459"""
-
-    def _getattr(obj, attr):
-        return getattr(obj, attr, *args)
-
-    return functools.reduce(_getattr, [obj] + attr.split("."))
-
-
 def optional_arg_decorator(fn):
     """
     A decorator which _optionally_ accepts arguments.
diff --git a/flexmeasures/utils/time_utils.py b/flexmeasures/utils/time_utils.py
index 5a7b4c880..2c3107d94 100644
--- a/flexmeasures/utils/time_utils.py
+++ b/flexmeasures/utils/time_utils.py
@@ -246,14 +246,6 @@ def get_most_recent_clocktime_window(
     return begin_time, end_time
 
 
-def get_default_start_time() -> datetime:
-    return get_most_recent_quarter() - timedelta(days=1)
-
-
-def get_default_end_time() -> datetime:
-    return get_most_recent_quarter() + timedelta(days=1)
-
-
 def get_first_day_of_next_month() -> datetime:
     return (datetime.now().replace(day=1) + timedelta(days=32)).replace(day=1)