diff --git a/documentation/changelog.rst b/documentation/changelog.rst index dacb07dc7..19aed13dd 100644 --- a/documentation/changelog.rst +++ b/documentation/changelog.rst @@ -8,9 +8,12 @@ v0.13.0 | April XX, 2023 .. warning:: The API endpoint (`[POST] /sensors/(id)/schedules/trigger `_) to make new schedules sunsets the deprecated (since v0.12) storage flexibility parameters (they move to the ``flex-model`` parameter group), as well as the parameters describing other sensors (they move to ``flex-context``). +.. warning:: Upgrading to this version requires running ``flexmeasures db upgrade`` (you can create a backup first with ``flexmeasures db-ops dump``). + New features ------------- * Keyboard control over replay [see `PR #562 `_] +* Overlay charts (e.g. power profiles) on the asset page using the `sensors_to_show` attribute, and distinguish plots by source (different trace), sensor (different color) and source type (different stroke dash) [see `PR #534 `_] * The ``FLEXMEASURES_MAX_PLANNING_HORIZON`` config setting can also be set as an integer number of planning steps rather than just as a fixed duration, which makes it possible to schedule further ahead in coarser time steps [see `PR #583 `_] * Different text styles for CLI output for errors, warnings or success messages. [see `PR #609 `_] diff --git a/documentation/cli/change_log.rst b/documentation/cli/change_log.rst index 5b3edb70c..417eb1022 100644 --- a/documentation/cli/change_log.rst +++ b/documentation/cli/change_log.rst @@ -4,6 +4,12 @@ FlexMeasures CLI Changelog ********************** +since v0.13.0 | April XX, 2023 +================================= + +* Add ``flexmeasures add source`` CLI command for adding a new data source. +* Add ``--inflexible-device-sensor`` option to ``flexmeasures add schedule``. + since v0.12.0 | January 04, 2023 ================================= diff --git a/documentation/cli/commands.rst b/documentation/cli/commands.rst index 347e90659..f2bd3c40c 100644 --- a/documentation/cli/commands.rst +++ b/documentation/cli/commands.rst @@ -33,6 +33,7 @@ of which some are referred to in this documentation. ``flexmeasures add asset`` Create a new asset. ``flexmeasures add sensor`` Add a new sensor. ``flexmeasures add beliefs`` Load beliefs from file. +``flexmeasures add source`` Add a new data source. ``flexmeasures add forecasts`` Create forecasts. ``flexmeasures add schedule for-storage`` Create a charging schedule for a storage asset. ``flexmeasures add holidays`` Add holiday annotations to accounts and/or assets. diff --git a/documentation/index.rst b/documentation/index.rst index 9e9920e82..079454a77 100644 --- a/documentation/index.rst +++ b/documentation/index.rst @@ -39,12 +39,12 @@ A tiny, but complete example: Let's install FlexMeasures from scratch. Then, usi $ docker pull postgres; docker run --name pg-docker -e POSTGRES_PASSWORD=docker -e POSTGRES_DB=flexmeasures-db -d -p 5433:5432 postgres:latest $ export SQLALCHEMY_DATABASE_URI="postgresql://postgres:docker@127.0.0.1:5433/flexmeasures-db" && export SECRET_KEY=notsecret $ flexmeasures db upgrade # create tables - $ flexmeasures add toy-account --kind battery # setup account & a user, a battery (Id 2) and a market (Id 3) - $ flexmeasures add beliefs --sensor-id 3 --source toy-user prices-tomorrow.csv --timezone utc # load prices, also possible per API - $ flexmeasures add schedule for-storage --sensor-id 2 --consumption-price-sensor 3 \ + $ flexmeasures add toy-account --kind battery # setup account incl. 
a user, battery (ID 1) and market (ID 2) + $ flexmeasures add beliefs --sensor-id 2 --source toy-user prices-tomorrow.csv --timezone utc # load prices, also possible per API + $ flexmeasures add schedule for-storage --sensor-id 1 --consumption-price-sensor 2 \ --start ${TOMORROW}T07:00+01:00 --duration PT12H \ --soc-at-start 50% --roundtrip-efficiency 90% # this is also possible per API - $ flexmeasures show beliefs --sensor-id 2 --start ${TOMORROW}T07:00:00+01:00 --duration PT12H # also visible per UI, of course + $ flexmeasures show beliefs --sensor-id 1 --start ${TOMORROW}T07:00:00+01:00 --duration PT12H # also visible per UI, of course We discuss this in more depth at :ref:`tut_toy_schedule`. @@ -94,7 +94,7 @@ Your journey, from dipping your toes in the water towards being a happy FlexMeas -Where to start reading ? +Where to start reading? -------------------------- You (the reader) might be a user connecting with a FlexMeasures server or working on hosting FlexMeasures. Maybe you are planning to develop a plugin or even core functionality. In :ref:`getting_started`, we have some helpful tips how to dive into this documentation! diff --git a/documentation/tut/forecasting_scheduling.rst b/documentation/tut/forecasting_scheduling.rst index 9c0453ced..d2c545f5b 100644 --- a/documentation/tut/forecasting_scheduling.rst +++ b/documentation/tut/forecasting_scheduling.rst @@ -58,7 +58,7 @@ In FlexMeasures, the usual way of creating forecasting jobs would be right in th So technically, you don't have to do anything to keep fresh forecasts. The decision which horizons to forecast is currently also taken by FlexMeasures. For power data, FlexMeasures makes this decision depending on the asset resolution. For instance, a resolution of 15 minutes leads to forecast horizons of 1, 6, 24 and 48 hours. For price data, FlexMeasures chooses to forecast prices forward 24 and 48 hours -These are decent defaults, and fixing them has the advantage that scheduling scripts (see below) will know what to expect. However, horizons will probably become more configurable in the near future of FlexMeasures. +These are decent defaults, and fixing them has the advantage that schedulers (see below) will know what to expect. However, horizons will probably become more configurable in the near future of FlexMeasures. You can also add forecasting jobs directly via the CLI. We explain this practice in the next section. diff --git a/documentation/tut/toy-example-from-scratch.rst b/documentation/tut/toy-example-from-scratch.rst index 194dea861..9391830fa 100644 --- a/documentation/tut/toy-example-from-scratch.rst +++ b/documentation/tut/toy-example-from-scratch.rst @@ -10,18 +10,18 @@ Let's walk through an example from scratch! We'll ... - load hourly prices - optimize a 12h-schedule for a battery that is half full -What do you need? Your own computer, with one of two situations: Either you have `Docker `_ or your computer supports Python 3.8+, pip and PostgresDB. The former might be easier, see the installation step below. But you choose. +What do you need? Your own computer, with one of two situations: either you have `Docker `_ or your computer supports Python 3.8+, pip and PostgresDB. The former might be easier, see the installation step below. But you choose. Below are the ``flexmeasures`` CLI commands we'll run, and which we'll explain step by step. There are some other crucial steps for installation and setup, so this becomes a complete example from scratch, but this is the meat: .. 
code-block:: console - # setup an account with a user, a battery (Id 2) and a market (Id 3) + # setup an account with a user, battery (ID 1) and market (ID 2) $ flexmeasures add toy-account --kind battery # load prices to optimise the schedule against - $ flexmeasures add beliefs --sensor-id 3 --source toy-user prices-tomorrow.csv --timezone utc + $ flexmeasures add beliefs --sensor-id 2 --source toy-user prices-tomorrow.csv --timezone Europe/Amsterdam # make the schedule - $ flexmeasures add schedule for-storage --sensor-id 2 --consumption-price-sensor 3 \ + $ flexmeasures add schedule for-storage --sensor-id 1 --consumption-price-sensor 2 \ --start ${TOMORROW}T07:00+01:00 --duration PT12H \ --soc-at-start 50% --roundtrip-efficiency 90% @@ -117,8 +117,9 @@ FlexMeasures offers a command to create a toy account with a battery: $ flexmeasures add toy-account --kind battery Toy account Toy Account with user toy-user@flexmeasures.io created successfully. You might want to run `flexmeasures show account --id 1` - The sensor for battery (dis)charging is . - The sensor for Day ahead prices is . + The sensor recording battery power is . + The sensor recording day-ahead prices is . + The sensor recording solar forecasts is . And with that, we're done with the structural data for this tutorial! @@ -128,9 +129,9 @@ If you want, you can inspect what you created: $ flexmeasures show account --id 1 - ============================= - Account Toy Account (ID:1): - ============================= + =========================== + Account Toy Account (ID: 1) + =========================== Account has no roles. @@ -142,29 +143,29 @@ If you want, you can inspect what you created: All assets: - Id Name Type Location + ID Name Type Location ---- ------------ -------- ----------------- - 3 toy-battery battery (52.374, 4.88969) - 2 toy-building building (52.374, 4.88969) - 1 toy-solar solar (52.374, 4.88969) + 1 toy-battery battery (52.374, 4.88969) + 3 toy-solar solar (52.374, 4.88969) - $ flexmeasures show asset --id 3 + $ flexmeasures show asset --id 1 - =========================== - Asset toy-battery (ID:3): - =========================== + ========================= + Asset toy-battery (ID: 1) + ========================= Type Location Attributes ------- ----------------- --------------------- - battery (52.374, 4.88969) capacity_in_mw:0.5 - min_soc_in_mwh:0.05 - max_soc_in_mwh:0.45 + battery (52.374, 4.88969) capacity_in_mw: 0.5 + min_soc_in_mwh: 0.05 + max_soc_in_mwh: 0.45 + sensors_to_show: [2, [3, 1]] All sensors in asset: - Id Name Unit Resolution Timezone Attributes - ---- -------- ------ ------------ ---------------- ------------ - 2 charging MW 15 minutes Europe/Amsterdam + ID Name Unit Resolution Timezone Attributes + ---- ----------- ------ ------------ ---------------- ------------ + 1 discharging MW 15 minutes Europe/Amsterdam Yes, that is quite a large battery :) @@ -185,7 +186,7 @@ Visit `http://localhost:5000/assets `_ (username i Add some price data --------------------------------------- -Now to add price data. First, we'll create the csv file with prices (EUR/MWh, see the setup for sensor 3 above) for tomorrow. +Now to add price data. First, we'll create the csv file with prices (EUR/MWh, see the setup for sensor 2 above) for tomorrow. .. code-block:: console @@ -220,46 +221,46 @@ This is time series data, in FlexMeasures we call "beliefs". Beliefs can also be .. 
code-block:: console - $ flexmeasures add beliefs --sensor-id 3 --source toy-user prices-tomorrow.csv --timezone utc + $ flexmeasures add beliefs --sensor-id 2 --source toy-user prices-tomorrow.csv --timezone Europe/Amsterdam Successfully created beliefs In FlexMeasures, all beliefs have a data source. Here, we use the username of the user we created earlier. We could also pass a user ID, or the name of a new data source we want to use for CLI scripts. -.. note:: Attention: We created and imported prices where the times have no time zone component! That happens a lot. Here, we localized the data to UTC time. So if you are in Amsterdam time, the start time for the first price, when expressed in your time zone, is actually `2022-03-03 01:00:00+01:00`. +.. note:: Attention: We created and imported prices where the times have no time zone component! That happens a lot. FlexMeasures can localize them for you to a given timezone. Here, we localized the data to the timezone of the price sensor - ``Europe/Amsterdam`` - so the start time for the first price is `2022-03-03 00:00:00+01:00` (midnight in Amsterdam). Let's look at the price data we just loaded: .. code-block:: console - $ flexmeasures show beliefs --sensor-id 3 --start ${TOMORROW}T01:00:00+01:00 --duration PT24H - Beliefs for Sensor 'Day ahead prices' (Id 3). - Data spans a day and starts at 2022-03-03 01:00:00+01:00. + $ flexmeasures show beliefs --sensor-id 2 --start ${TOMORROW}T00:00:00+01:00 --duration PT24H + Beliefs for Sensor 'day-ahead prices' (ID 2). + Data spans a day and starts at 2022-03-03 00:00:00+01:00. The time resolution (x-axis) is an hour. ┌────────────────────────────────────────────────────────────┐ - │ ▗▀▚▖ │ 18EUR/MWh - │ ▞ ▝▌ │ - │ ▐ ▚ │ - │ ▗▘ ▐ │ - │ ▌ ▌ ▖ │ - │ ▞ ▚ ▗▄▀▝▄ │ - │ ▗▘ ▐ ▗▞▀ ▚ │ 13EUR/MWh - │ ▗▄▘ ▌ ▐▘ ▚ │ - │ ▗▞▘ ▚ ▌ ▚ │ - │▞▘ ▝▄ ▗ ▐ ▝▖ │ - │ ▚▄▄▀▚▄▄ ▞▘▚ ▌ ▝▖ │ - │ ▀▀▛ ▚ ▐ ▚ │ - │ ▚ ▗▘ ▚│ 8EUR/MWh - │ ▌ ▗▘ ▝│ - │ ▝▖ ▞ │ - │ ▐▖ ▗▀ │ - │ ▝▚▄▄▄▄▘ │ + │ ▗▀▚▖ │ + │ ▗▘ ▝▖ │ + │ ▞ ▌ │ + │ ▟ ▐ │ 15EUR/MWh + │ ▗▘ ▝▖ ▗ │ + │ ▗▘ ▚ ▄▞▘▚▖ │ + │ ▞ ▐ ▄▀▘ ▝▄ │ + │ ▄▞ ▌ ▛ ▖ │ + │▀ ▚ ▐ ▝▖ │ + │ ▝▚ ▖ ▗▘ ▝▖ │ 10EUR/MWh + │ ▀▄▄▞▀▄▄ ▗▀▝▖ ▞ ▐ │ + │ ▀▀▜▘ ▝▚ ▗▘ ▚ │ + │ ▌ ▞ ▌│ + │ ▝▖ ▞ ▝│ + │ ▐ ▞ │ + │ ▚ ▗▞ │ 5EUR/MWh + │ ▀▚▄▄▄▄▘ │ └────────────────────────────────────────────────────────────┘ - 5 10 15 20 - ██ Day ahead prices + 5 10 15 20 + ██ day-ahead prices -Again, we can also view these prices in the `FlexMeasures UI `_: +Again, we can also view these prices in the `FlexMeasures UI `_: .. image:: https://github.com/FlexMeasures/screenshots/raw/main/tut/toy-schedule/sensor-data-prices.png :align: center @@ -272,12 +273,12 @@ Make a schedule Finally, we can create the schedule, which is the main benefit of FlexMeasures (smart real-time control). -We'll ask FlexMeasures for a schedule for our charging sensor (Id 2). We also need to specify what to optimise against. Here we pass the Id of our market price sensor (3). +We'll ask FlexMeasures for a schedule for our discharging sensor (ID 1). We also need to specify what to optimise against. Here we pass the Id of our market price sensor (3). To keep it short, we'll only ask for a 12-hour window starting at 7am. Finally, the scheduler should know what the state of charge of the battery is when the schedule starts (50%) and what its roundtrip efficiency is (90%). .. 
code-block:: console - $ flexmeasures add schedule for-storage --sensor-id 2 --consumption-price-sensor 3 \ + $ flexmeasures add schedule for-storage --sensor-id 1 --consumption-price-sensor 2 \ --start ${TOMORROW}T07:00+01:00 --duration PT12H \ --soc-at-start 50% --roundtrip-efficiency 90% New schedule is stored. @@ -286,41 +287,107 @@ Great. Let's see what we made: .. code-block:: console - $ flexmeasures show beliefs --sensor-id 2 --start ${TOMORROW}T07:00:00+01:00 --duration PT12H - Beliefs for Sensor 'discharging' (Id 2). + $ flexmeasures show beliefs --sensor-id 1 --start ${TOMORROW}T07:00:00+01:00 --duration PT12H + Beliefs for Sensor 'discharging' (ID 1). Data spans 12 hours and starts at 2022-03-04 07:00:00+01:00. The time resolution (x-axis) is 15 minutes. ┌────────────────────────────────────────────────────────────┐ - │ ▐ ▐▀▀▌ ▛▀▀│ - │ ▞▌ ▞ ▐ ▌ │ 0.4MW - │ ▌▌ ▌ ▐ ▐ │ - │ ▗▘▌ ▌ ▐ ▐ │ - │ ▐ ▐ ▗▘ ▝▖ ▐ │ - │ ▞ ▐ ▐ ▌ ▌ │ 0.2MW - │ ▗▘ ▐ ▐ ▌ ▌ │ - │ ▐ ▝▖ ▌ ▚ ▞ │ - │▀▘───▀▀▀▀▀▀▀▀▀▀▀▀▀▀▌────▐─────▝▀▀▀▀▀▀▀▀▜─────▐▀▀▀▀▀▀▀▀▀─────│ 0MW - │ ▌ ▞ ▐ ▗▘ │ - │ ▚ ▌ ▐ ▐ │ - │ ▐ ▗▘ ▝▖ ▌ │ -0.2MW - │ ▐ ▐ ▌ ▌ │ - │ ▐ ▐ ▌ ▗▘ │ - │ ▌ ▞ ▌ ▐ │ - │ ▌ ▌ ▐ ▐ │ -0.4MW - │ ▙▄▄▌ ▐▄▄▞ │ + │ ▐ ▐▀▀▌ ▛▀▀│ 0.5MW + │ ▞▌ ▌ ▌ ▌ │ + │ ▌▌ ▌ ▐ ▗▘ │ + │ ▌▌ ▌ ▐ ▐ │ + │ ▐ ▐ ▐ ▐ ▐ │ + │ ▐ ▐ ▐ ▝▖ ▞ │ + │ ▌ ▐ ▐ ▌ ▌ │ + │ ▐ ▝▖ ▌ ▌ ▌ │ + │▀▘───▀▀▀▀▖─────▌────▀▀▀▀▀▀▀▀▀▌─────▐▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▘───│ 0.0MW + │ ▌ ▐ ▚ ▌ │ + │ ▌ ▞ ▐ ▗▘ │ + │ ▌ ▌ ▐ ▞ │ + │ ▐ ▐ ▝▖ ▌ │ + │ ▐ ▐ ▌ ▗▘ │ + │ ▐ ▌ ▌ ▐ │ + │ ▝▖ ▌ ▌ ▞ │ + │ ▙▄▟ ▐▄▄▌ │ -0.5MW └────────────────────────────────────────────────────────────┘ - 10 20 30 40 + 10 20 30 40 ██ discharging Here, negative values denote output from the grid, so that's when the battery gets charged. -We can also look at the charging schedule in the `FlexMeasures UI `_ (reachable via the asset page for the battery): +We can also look at the charging schedule in the `FlexMeasures UI `_ (reachable via the asset page for the battery): .. image:: https://github.com/FlexMeasures/screenshots/raw/main/tut/toy-schedule/sensor-data-charging.png :align: center -Recall that we only asked for a 12 hour schedule here. We started our schedule *after* the high price peak (at 5am) and it also had to end *before* the second price peak fully realised (at 9pm). Our scheduler didn't have many opportunities to optimize, but it found some. For instance, it does buy at the lowest price (around 3pm) and sells it off when prices start rising again (around 6pm). +Recall that we only asked for a 12 hour schedule here. We started our schedule *after* the high price peak (at 4am) and it also had to end *before* the second price peak fully realised (at 8pm). Our scheduler didn't have many opportunities to optimize, but it found some. For instance, it does buy at the lowest price (at 2pm) and sells it off at the highest price within the given 12 hours (at 6pm). + + +.. note:: The ``flexmeasures add schedule for-storage`` command also accepts state-of-charge targets, so the schedule can be more sophisticated. But that is not the point of this tutorial. See ``flexmeasures add schedule for-storage --help``. + +Take into account solar production +--------------------------------------- + +So far we haven't taken into account any other devices that consume or produce electricity. We'll now add solar production forecasts and reschedule, to see the effect of solar on the available headroom for the battery. + +First, we'll create a new csv file with solar forecasts (MW, see the setup for sensor 3 above) for tomorrow. + +.. 
code-block:: console
+
+    $ TOMORROW=$(date --date="next day" '+%Y-%m-%d')
+    $ echo "Hour,Production
+    $ ${TOMORROW}T00:00:00,0.0
+    $ ${TOMORROW}T01:00:00,0.0
+    $ ${TOMORROW}T02:00:00,0.0
+    $ ${TOMORROW}T03:00:00,0.0
+    $ ${TOMORROW}T04:00:00,0.01
+    $ ${TOMORROW}T05:00:00,0.03
+    $ ${TOMORROW}T06:00:00,0.06
+    $ ${TOMORROW}T07:00:00,0.1
+    $ ${TOMORROW}T08:00:00,0.14
+    $ ${TOMORROW}T09:00:00,0.17
+    $ ${TOMORROW}T10:00:00,0.19
+    $ ${TOMORROW}T11:00:00,0.21
+    $ ${TOMORROW}T12:00:00,0.22
+    $ ${TOMORROW}T13:00:00,0.21
+    $ ${TOMORROW}T14:00:00,0.19
+    $ ${TOMORROW}T15:00:00,0.17
+    $ ${TOMORROW}T16:00:00,0.14
+    $ ${TOMORROW}T17:00:00,0.1
+    $ ${TOMORROW}T18:00:00,0.06
+    $ ${TOMORROW}T19:00:00,0.03
+    $ ${TOMORROW}T20:00:00,0.01
+    $ ${TOMORROW}T21:00:00,0.0
+    $ ${TOMORROW}T22:00:00,0.0
+    $ ${TOMORROW}T23:00:00,0.0" > solar-tomorrow.csv
+
+Then, we read in the created CSV file as beliefs data.
+This time, unlike before, we want to use a new data source (not the user) ― it represents whoever is making these solar production forecasts.
+We create that data source first, so we can tell `flexmeasures add beliefs` to use it.
+Setting the data source type to "forecaster" helps FlexMeasures to visually distinguish its data from e.g. schedules and measurements.
+
+.. note:: The ``flexmeasures add source`` command also allows setting a model and version, so sources can be distinguished in more detail. But that is not the point of this tutorial. See ``flexmeasures add source --help``.
+
+.. code-block:: console
+
+    $ flexmeasures add source --name "toy-forecaster" --type forecaster
+    Added source
+    $ flexmeasures add beliefs --sensor-id 3 --source 4 solar-tomorrow.csv --timezone Europe/Amsterdam
+    Successfully created beliefs
+
+The one-hour CSV data is automatically resampled to the 15-minute resolution of the sensor that is recording solar production.
+
+.. note:: The ``flexmeasures add beliefs`` command has many options to make sure the read-in data is correctly interpreted (unit, timezone, delimiter, etc.). But that is not the point of this tutorial. See ``flexmeasures add beliefs --help``.
+
+Now, we'll reschedule the battery while taking into account the solar production. This will have an effect on the available headroom for the battery.
+
+.. code-block:: console
+
+    $ flexmeasures add schedule for-storage --sensor-id 1 --consumption-price-sensor 2 \
+        --inflexible-device-sensor 3 \
+        --start ${TOMORROW}T07:00+01:00 --duration PT12H \
+        --soc-at-start 50% --roundtrip-efficiency 90%
+    New schedule is stored.
-.. note:: The ``flexmeasures add schedule for-storage`` command also accepts state-of-charge targets, so the schedule can be more sophisticated. But that is not the point of this tutorial. See ``flexmeasures add schedule for-storage --help``.
diff --git a/documentation/views/asset-data.rst b/documentation/views/asset-data.rst
index 073e0786f..9631a4e26 100644
--- a/documentation/views/asset-data.rst
+++ b/documentation/views/asset-data.rst
@@ -17,6 +17,7 @@ This includes the possibility to specify which sensors the asset page should sho
 |
 |
+.. note:: It is possible to overlay data for multiple sensors, by setting the `sensors_to_show` attribute to a nested list. For example, ``{"sensors_to_show": [3, [2, 4]]}`` would show the data for sensor 4 laid over the data for sensor 2.
 .. note:: While it is possible to show an arbitrary number of sensors this way, we recommend showing only the most crucial ones for faster loading, less page scrolling, and generally, a quick grasp of what the asset is up to.
 .. 
note:: Asset attributes can be edited through the CLI as well, with the CLI command ``flexmeasures edit attribute``. diff --git a/flexmeasures/api/v1_1/implementations.py b/flexmeasures/api/v1_1/implementations.py index 639d9a37d..6b4900976 100644 --- a/flexmeasures/api/v1_1/implementations.py +++ b/flexmeasures/api/v1_1/implementations.py @@ -255,7 +255,7 @@ def get_prognosis_response( belief_time_window = (None, prior) # Check the user's intention first, fall back to schedules, then forecasts, then other data from script - source_types = ["user", "scheduling script", "forecasting script", "script"] + source_types = ["user", "scheduler", "forecaster", "script"] return collect_connection_and_value_groups( unit, diff --git a/flexmeasures/api/v1_3/tests/test_api_v1_3.py b/flexmeasures/api/v1_3/tests/test_api_v1_3.py index 12887972a..3ac080ac0 100644 --- a/flexmeasures/api/v1_3/tests/test_api_v1_3.py +++ b/flexmeasures/api/v1_3/tests/test_api_v1_3.py @@ -93,7 +93,7 @@ def test_post_udi_event_and_get_device_message( job.refresh() # catch meta info that was added on this very instance data_source_info = job.meta.get("data_source_info") scheduler_source = DataSource.query.filter_by( - type="scheduling script", **data_source_info + type="scheduler", **data_source_info ).one_or_none() assert ( scheduler_source is not None diff --git a/flexmeasures/api/v1_3/tests/test_api_v1_3_fresh_db.py b/flexmeasures/api/v1_3/tests/test_api_v1_3_fresh_db.py index 707d7ab6e..ea53f0b47 100644 --- a/flexmeasures/api/v1_3/tests/test_api_v1_3_fresh_db.py +++ b/flexmeasures/api/v1_3/tests/test_api_v1_3_fresh_db.py @@ -56,7 +56,7 @@ def test_post_udi_event_and_get_device_message_with_unknown_prices( # check results are not in the database scheduler_source = DataSource.query.filter_by( - name="Seita", type="scheduling script" + name="Seita", type="scheduler" ).one_or_none() assert ( scheduler_source is None diff --git a/flexmeasures/api/v3_0/assets.py b/flexmeasures/api/v3_0/assets.py index 919f1f4c3..5f6ff730e 100644 --- a/flexmeasures/api/v3_0/assets.py +++ b/flexmeasures/api/v3_0/assets.py @@ -15,6 +15,7 @@ from flexmeasures.data.schemas.generic_assets import GenericAssetSchema as AssetSchema from flexmeasures.api.common.schemas.generic_assets import AssetIdField from flexmeasures.api.common.schemas.users import AccountIdField +from flexmeasures.utils.coding_utils import flatten_unique from flexmeasures.ui.utils.view_utils import set_time_range_for_session @@ -309,5 +310,5 @@ def get_chart_data(self, id: int, asset: GenericAsset, **kwargs): Data for use in charts (in case you have the chart specs already). 
""" - sensors = asset.sensors_to_show + sensors = flatten_unique(asset.sensors_to_show) return asset.search_beliefs(sensors=sensors, as_json=True, **kwargs) diff --git a/flexmeasures/api/v3_0/tests/test_assets_api.py b/flexmeasures/api/v3_0/tests/test_assets_api.py index 6d2be3581..0a7aca430 100644 --- a/flexmeasures/api/v3_0/tests/test_assets_api.py +++ b/flexmeasures/api/v3_0/tests/test_assets_api.py @@ -147,15 +147,25 @@ def test_alter_an_asset(client, setup_api_test_data, setup_accounts): @pytest.mark.parametrize( - "bad_json_str", + "bad_json_str, error_msg", [ - None, - "{", - '{"hallo": world}', + (None, "may not be null"), + ("{", "Not a valid JSON"), + ('{"hallo": world}', "Not a valid JSON"), + ('{"sensors_to_show": [0, 1]}', "No sensor found"), # no sensor with ID 0 + ('{"sensors_to_show": [1, [0, 2]]}', "No sensor found"), # no sensor with ID 0 + ( + '{"sensors_to_show": [1, [2, [3, 4]]]}', + "should only contain", + ), # nesting level max 1 + ( + '{"sensors_to_show": [1, "2"]}', + "should only contain", + ), # non-integer sensor ID ], ) def test_alter_an_asset_with_bad_json_attributes( - client, setup_api_test_data, setup_accounts, bad_json_str + client, setup_api_test_data, setup_accounts, bad_json_str, error_msg ): """Check whether updating an asset's attributes with a badly structured JSON fails.""" with UserContext("test_prosumer_user@seita.nl") as prosumer1: @@ -169,6 +179,7 @@ def test_alter_an_asset_with_bad_json_attributes( ) print(f"Editing Response: {asset_edit_response.json}") assert asset_edit_response.status_code == 422 + assert error_msg in asset_edit_response.json["message"]["json"]["attributes"][0] def test_alter_an_asset_with_json_attributes( @@ -179,6 +190,9 @@ def test_alter_an_asset_with_json_attributes( auth_token = prosumer1.get_auth_token() with AccountContext("Test Prosumer Account") as prosumer: prosumer_asset = prosumer.generic_assets[0] + assert prosumer_asset.attributes[ + "sensors_to_show" + ] # make sure we run this test on an asset with a non-empty sensors_to_show attribute asset_edit_response = client.patch( url_for("AssetAPI:patch", id=prosumer_asset.id), headers={"content-type": "application/json", "Authorization": auth_token}, diff --git a/flexmeasures/cli/data_add.py b/flexmeasures/cli/data_add.py index d34a6b5ae..662f910d8 100755 --- a/flexmeasures/cli/data_add.py +++ b/flexmeasures/cli/data_add.py @@ -57,6 +57,7 @@ from flexmeasures.data.services.data_sources import ( get_source_or_none, ) +from flexmeasures.data.services.utils import get_or_create_model from flexmeasures.utils import flexmeasures_inflection from flexmeasures.utils.time_utils import server_now from flexmeasures.utils.unit_utils import convert_units, ur @@ -302,6 +303,44 @@ def add_initial_structure(): populate_initial_structure(db) +@fm_add_data.command("source") +@with_appcontext +@click.option( + "--name", + required=True, + type=str, + help="Name of the source (usually an organisation)", +) +@click.option( + "--model", + required=False, + type=str, + help="Optionally, specify a model (for example, a class name, function name or url).", +) +@click.option( + "--version", + required=False, + type=str, + help="Optionally, specify a version (for example, '1.0'.", +) +@click.option( + "--type", + "source_type", + required=True, + type=str, + help="Type of source (for example, 'forecaster' or 'scheduler').", +) +def add_source(name: str, model: str, version: str, source_type: str): + source = get_or_create_source( + source=name, + model=model, + version=version, + 
source_type=source_type, + ) + db.session.commit() + click.secho(f"Added source {source.__repr__()}", **MsgStyle.SUCCESS) + + @fm_add_data.command("beliefs") @with_appcontext @click.argument("file", type=click.Path(exists=True)) @@ -909,6 +948,14 @@ def create_schedule(ctx): required=False, help="To be deprecated. Use consumption-price-sensor instead.", ) +@click.option( + "--inflexible-device-sensor", + "inflexible_device_sensors", + type=SensorIdField(), + multiple=True, + help="Take into account the power flow of inflexible devices. Follow up with the sensor's ID." + " This argument can be given multiple times.", +) @click.option( "--start", "start", @@ -975,6 +1022,7 @@ def add_schedule_for_storage( consumption_price_sensor: Sensor, production_price_sensor: Sensor, optimization_context_sensor: Sensor, + inflexible_device_sensors: list[Sensor], start: datetime, duration: timedelta, soc_at_start: ur.Quantity, @@ -1054,6 +1102,7 @@ def add_schedule_for_storage( flex_context={ "consumption-price-sensor": consumption_price_sensor.id, "production-price-sensor": production_price_sensor.id, + "inflexible-device-sensors": [s.id for s in inflexible_device_sensors], }, ) if as_job: @@ -1088,7 +1137,10 @@ def add_toy_account(kind: str, name: str): # make an account (if not exist) account = Account.query.filter(Account.name == name).one_or_none() if account: - click.echo(f"Account {name} already exists.") + click.secho( + f"Account '{account}' already exists. Use `flexmeasures delete account --id {account.id}` to remove it first.", + **MsgStyle.ERROR, + ) raise click.Abort() # make an account user (account-admin?) email = "toy-user@flexmeasures.io" @@ -1106,57 +1158,69 @@ def add_toy_account(kind: str, name: str): user_roles=["account-admin"], account_name=name, ) - # make assets - for asset_type in ("solar", "building", "battery"): - asset = GenericAsset( + + def create_power_asset(asset_type: str, sensor_name: str, **attributes): + asset = get_or_create_model( + GenericAsset, name=f"toy-{asset_type}", generic_asset_type=asset_types[asset_type], owner=user.account, latitude=location[0], longitude=location[1], ) - db.session.add(asset) - if asset_type == "battery": - asset.attributes = dict( - capacity_in_mw=0.5, - min_soc_in_mwh=0.05, - max_soc_in_mwh=0.45, - ) - # add charging sensor to battery - charging_sensor = Sensor( - name="discharging", - generic_asset=asset, - unit="MW", - timezone="Europe/Amsterdam", - event_resolution=timedelta(minutes=15), - ) - db.session.add(charging_sensor) + asset.attributes = attributes + power_sensor_specs = dict( + generic_asset=asset, + unit="MW", + timezone="Europe/Amsterdam", + event_resolution=timedelta(minutes=15), + ) + power_sensor = get_or_create_model( + Sensor, + name=sensor_name, + **power_sensor_specs, + ) + return power_sensor + + # create battery + discharging_sensor = create_power_asset( + "battery", + "discharging", + capacity_in_mw=0.5, + min_soc_in_mwh=0.05, + max_soc_in_mwh=0.45, + ) # add public day-ahead market (as sensor of transmission zone asset) nl_zone = add_transmission_zone_asset("NL", db=db) - day_ahead_sensor = Sensor.query.filter( - Sensor.generic_asset == nl_zone, Sensor.name == "Day ahead prices" - ).one_or_none() - if not day_ahead_sensor: - day_ahead_sensor = Sensor( - name="Day ahead prices", - generic_asset=nl_zone, - unit="EUR/MWh", - timezone="Europe/Amsterdam", - event_resolution=timedelta(minutes=60), - knowledge_horizon=( - x_days_ago_at_y_oclock, - {"x": 1, "y": 12, "z": "Europe/Paris"}, - ), - ) - 
db.session.add(day_ahead_sensor) + day_ahead_sensor = get_or_create_model( + Sensor, + name="day-ahead prices", + generic_asset=nl_zone, + unit="EUR/MWh", + timezone="Europe/Amsterdam", + event_resolution=timedelta(minutes=60), + knowledge_horizon=( + x_days_ago_at_y_oclock, + {"x": 1, "y": 12, "z": "Europe/Paris"}, + ), + ) - # add day-ahead sensor to battery page + # create solar + production_sensor = create_power_asset( + "solar", + "production", + ) + + # add day-ahead price sensor and PV production sensor to show on the battery's asset page db.session.flush() - battery = charging_sensor.generic_asset + battery = discharging_sensor.generic_asset battery.attributes["sensors_to_show"] = [ day_ahead_sensor.id, - charging_sensor.id, + [ + production_sensor.id, + discharging_sensor.id, + ], ] db.session.commit() @@ -1165,11 +1229,15 @@ def add_toy_account(kind: str, name: str): **MsgStyle.SUCCESS, ) click.secho( - f"The sensor for battery discharging is {charging_sensor} (ID: {charging_sensor.id}).", + f"The sensor recording battery discharging is {discharging_sensor} (ID: {discharging_sensor.id}).", + **MsgStyle.SUCCESS, + ) + click.secho( + f"The sensor recording day-ahead prices is {day_ahead_sensor} (ID: {day_ahead_sensor.id}).", **MsgStyle.SUCCESS, ) click.secho( - f"The sensor for Day ahead prices is {day_ahead_sensor} (ID: {day_ahead_sensor.id}).", + f"The sensor recording solar forecasts is {production_sensor} (ID: {production_sensor.id}).", **MsgStyle.SUCCESS, ) diff --git a/flexmeasures/cli/data_show.py b/flexmeasures/cli/data_show.py index b1ef6f891..0be61300c 100644 --- a/flexmeasures/cli/data_show.py +++ b/flexmeasures/cli/data_show.py @@ -163,9 +163,9 @@ def show_generic_asset(asset): """ Show asset info and list sensors """ - click.echo(f"======{len(asset.name) * '='}=========") + click.echo(f"======{len(asset.name) * '='}========") click.echo(f"Asset {asset.name} (ID: {asset.id})") - click.echo(f"======{len(asset.name) * '='}=========\n") + click.echo(f"======{len(asset.name) * '='}========\n") asset_data = [ ( @@ -343,9 +343,9 @@ def plot_beliefs( # Build title if len(sensors) == 1: - title = f"Beliefs for Sensor '{sensors[0].name}' (Id {sensors[0].id}).\n" + title = f"Beliefs for Sensor '{sensors[0].name}' (ID {sensors[0].id}).\n" else: - title = f"Beliefs for Sensor(s) [{', '.join([s.name for s in sensors])}], (Id(s): [{', '.join([str(s.id) for s in sensors])}]).\n" + title = f"Beliefs for Sensor(s) [{', '.join([s.name for s in sensors])}], (ID(s): [{', '.join([str(s.id) for s in sensors])}]).\n" title += f"Data spans {naturaldelta(duration)} and starts at {start}." if belief_time_before: title += f"\nOnly beliefs made before: {belief_time_before}." 
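The nested ``sensors_to_show`` value set for the toy battery above (``[day_ahead_sensor.id, [production_sensor.id, discharging_sensor.id]]``) is funnelled through ``flatten_unique`` before data is queried, e.g. in ``get_chart_data``. The helper itself is imported from ``flexmeasures.utils.coding_utils`` and its body is not part of this diff, so the sketch below is only an assumed, minimal version of its behaviour, shown with the toy battery's attribute value.

.. code-block:: python

    def flatten_unique(nested_list: list) -> list:
        """Flatten a list that may contain one level of sublists, dropping duplicates.

        Illustrative sketch only; the helper actually used in this PR lives in
        flexmeasures.utils.coding_utils.
        """
        flat = []
        for item in nested_list:
            # treat a plain sensor ID and a sublist of sensor IDs uniformly
            for element in item if isinstance(item, list) else [item]:
                if element not in flat:
                    flat.append(element)
        return flat


    # The toy battery's attribute: prices (2) in one row, solar production (3)
    # and battery discharging (1) overlaid in a second row.
    assert flatten_unique([2, [3, 1]]) == [2, 3, 1]  # this sketch keeps first-seen order

With the flat list, data queries such as ``asset.search_beliefs(sensors=...)`` treat nested and flat attribute values the same way, while the chart specs keep the nesting to decide which sensors share a row.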
diff --git a/flexmeasures/conftest.py b/flexmeasures/conftest.py index 44b4b7411..85124374f 100644 --- a/flexmeasures/conftest.py +++ b/flexmeasures/conftest.py @@ -316,6 +316,7 @@ def create_generic_assets(db, setup_generic_asset_types, setup_accounts): name="Test grid connected battery storage", generic_asset_type=setup_generic_asset_types["battery"], owner=setup_accounts["Prosumer"], + attributes={"some-attribute": "some-value", "sensors_to_show": [1, 2]}, ) db.session.add(test_battery) test_wind_turbine = GenericAsset( diff --git a/flexmeasures/data/migrations/versions/a528c3c81506_unique_generic_sensor_ids.py b/flexmeasures/data/migrations/versions/a528c3c81506_unique_generic_sensor_ids.py index cbcb784f8..b83560d83 100644 --- a/flexmeasures/data/migrations/versions/a528c3c81506_unique_generic_sensor_ids.py +++ b/flexmeasures/data/migrations/versions/a528c3c81506_unique_generic_sensor_ids.py @@ -203,7 +203,7 @@ def upgrade_data(): sequence_name = "%s_id_seq" % t_sensors.name # Set next id for table seq to just after max id of all old sensors combined connection.execute( - "SELECT setval('%s', %s, true);" + "SELECT setval('%s', %s, false);" # is_called = False % (sequence_name, max_asset_id + max_market_id + max_weather_sensor_id + 1) ) diff --git a/flexmeasures/data/migrations/versions/c41beee0c904_rename_DataSource_type_for_forecasters_and_schedulers.py b/flexmeasures/data/migrations/versions/c41beee0c904_rename_DataSource_type_for_forecasters_and_schedulers.py new file mode 100644 index 000000000..4510f7294 --- /dev/null +++ b/flexmeasures/data/migrations/versions/c41beee0c904_rename_DataSource_type_for_forecasters_and_schedulers.py @@ -0,0 +1,33 @@ +"""Rename DataSource types for forecasters and schedulers + +Revision ID: c41beee0c904 +Revises: 650b085c0ad3 +Create Date: 2022-11-30 21:33:09.046751 + +""" +from alembic import op + + +# revision identifiers, used by Alembic. +revision = "c41beee0c904" +down_revision = "650b085c0ad3" +branch_labels = None +depends_on = None + + +def upgrade(): + op.execute( + "update data_source set type='scheduler' where type='scheduling script';" + ) + op.execute( + "update data_source set type='forecaster' where type='forecasting script';" + ) + + +def downgrade(): + op.execute( + "update data_source set type='scheduling script' where type='scheduler';" + ) + op.execute( + "update data_source set type='forecasting script' where type='forecaster';" + ) diff --git a/flexmeasures/data/migrations/versions/d814c0688ae0_merge.py b/flexmeasures/data/migrations/versions/d814c0688ae0_merge.py new file mode 100644 index 000000000..ff3204329 --- /dev/null +++ b/flexmeasures/data/migrations/versions/d814c0688ae0_merge.py @@ -0,0 +1,22 @@ +"""merge + +Revision ID: d814c0688ae0 +Revises: 75f53d2dbfae, c41beee0c904 +Create Date: 2022-12-12 15:31:41.509921 + +""" + + +# revision identifiers, used by Alembic. 
+revision = "d814c0688ae0" +down_revision = ("75f53d2dbfae", "c41beee0c904") +branch_labels = None +depends_on = None + + +def upgrade(): + pass + + +def downgrade(): + pass diff --git a/flexmeasures/data/models/charts/belief_charts.py b/flexmeasures/data/models/charts/belief_charts.py index 428368aed..09782470e 100644 --- a/flexmeasures/data/models/charts/belief_charts.py +++ b/flexmeasures/data/models/charts/belief_charts.py @@ -3,7 +3,16 @@ from datetime import datetime, timedelta from flexmeasures.data.models.charts.defaults import FIELD_DEFINITIONS, REPLAY_RULER -from flexmeasures.utils.flexmeasures_inflection import capitalize +from flexmeasures.utils.flexmeasures_inflection import ( + capitalize, + join_words_into_a_list, +) +from flexmeasures.utils.coding_utils import flatten_unique +from flexmeasures.utils.unit_utils import ( + is_power_unit, + is_energy_unit, + is_energy_price_unit, +) def bar_chart( @@ -21,6 +30,10 @@ def bar_chart( **FIELD_DEFINITIONS["event_value"], ) event_start_field_definition = FIELD_DEFINITIONS["event_start"] + event_start_field_definition["timeUnit"] = { + "unit": "yearmonthdatehoursminutesseconds", + "step": sensor.event_resolution.total_seconds(), + } if event_starts_after and event_ends_before: event_start_field_definition["scale"] = { "domain": [ @@ -28,19 +41,19 @@ def bar_chart( event_ends_before.timestamp() * 10**3, ] } - resolution_in_ms = sensor.event_resolution.total_seconds() * 1000 chart_specs = { "description": "A simple bar chart showing sensor data.", + # the sensor type is already shown as the y-axis title (avoid redundant info) "title": capitalize(sensor.name) if sensor.name != sensor.sensor_type else None, "layer": [ { "mark": { "type": "bar", "clip": True, + "width": {"band": 0.999}, }, "encoding": { "x": event_start_field_definition, - "x2": FIELD_DEFINITIONS["event_end"], "y": event_value_field_definition, "color": FIELD_DEFINITIONS["source_name"], "detail": FIELD_DEFINITIONS["source"], @@ -56,10 +69,6 @@ def bar_chart( ], }, "transform": [ - { - "calculate": f"datum.event_start + {resolution_in_ms}", - "as": "event_end", - }, { "calculate": "datum.source.name + ' (ID: ' + datum.source.id + ')'", "as": "source_name_and_id", @@ -75,38 +84,66 @@ def bar_chart( def chart_for_multiple_sensors( - sensors: list["Sensor"], # noqa F821 + sensors_to_show: list["Sensor", list["Sensor"]], # noqa F821 event_starts_after: datetime | None = None, event_ends_before: datetime | None = None, **override_chart_specs: dict, ): - sensors_specs = [] + # Determine the shared data resolution condition = list( sensor.event_resolution - for sensor in sensors + for sensor in flatten_unique(sensors_to_show) if sensor.event_resolution > timedelta(0) ) - minimum_non_zero_resolution_in_ms = ( - min(condition).total_seconds() * 1000 if any(condition) else 0 - ) - for sensor in sensors: - unit = sensor.unit if sensor.unit else "a.u." 
+ minimum_non_zero_resolution = min(condition) if any(condition) else timedelta(0) + + # Set up field definition for event starts + event_start_field_definition = FIELD_DEFINITIONS["event_start"] + event_start_field_definition["timeUnit"] = { + "unit": "yearmonthdatehoursminutesseconds", + "step": minimum_non_zero_resolution.total_seconds(), + } + # If a time window was set explicitly, adjust the domain to show the full window regardless of available data + if event_starts_after and event_ends_before: + event_start_field_definition["scale"] = { + "domain": [ + event_starts_after.timestamp() * 10**3, + event_ends_before.timestamp() * 10**3, + ] + } + + sensors_specs = [] + for s in sensors_to_show: + # List the sensors that go into one row + if isinstance(s, list): + row_sensors: list["Sensor"] = s # noqa F821 + else: + row_sensors: list["Sensor"] = [s] # noqa F821 + + # Derive the unit that should be shown + unit = determine_shared_unit(row_sensors) + sensor_type = determine_shared_sensor_type(row_sensors) + + # Set up field definition for event values event_value_field_definition = dict( - title=f"{capitalize(sensor.sensor_type)} ({unit})", + title=f"{capitalize(sensor_type)} ({unit})", format=[".3~r", unit], formatType="quantityWithUnitFormat", stack=None, **FIELD_DEFINITIONS["event_value"], ) - event_start_field_definition = FIELD_DEFINITIONS["event_start"] - if event_starts_after and event_ends_before: - event_start_field_definition["scale"] = { - "domain": [ - event_starts_after.timestamp() * 10**3, - event_ends_before.timestamp() * 10**3, - ] - } + + # Set up shared tooltip shared_tooltip = [ + dict( + field="sensor.name", + type="nominal", + title="Sensor", + ), + { + **event_value_field_definition, + **dict(title=f"{capitalize(sensor_type)}"), + }, FIELD_DEFINITIONS["full_date"], dict( field="belief_horizon", @@ -117,122 +154,67 @@ def chart_for_multiple_sensors( ), { **event_value_field_definition, - **dict(title=f"{capitalize(sensor.sensor_type)}"), + **dict(title=f"{capitalize(sensor_type)}"), }, FIELD_DEFINITIONS["source_name_and_id"], + FIELD_DEFINITIONS["source_type"], FIELD_DEFINITIONS["source_model"], ] - line_layer = { - "mark": { - "type": "line", - "interpolate": "step-after" - if sensor.event_resolution != timedelta(0) - else "linear", - "clip": True, - }, - "encoding": { - "x": event_start_field_definition, - "y": event_value_field_definition, - "color": FIELD_DEFINITIONS["source_name"], - "strokeDash": { - "field": "belief_horizon", - "type": "quantitative", - "bin": { - # Divide belief horizons into 2 bins by setting a very large bin size. - # The bins should be defined as follows: ex ante (>0) and ex post (<=0), - # but because the bin anchor is included in the ex-ante bin, - # and 0 belief horizons should be attributed to the ex-post bin, - # (and belief horizons are given with 1 ms precision,) - # the bin anchor is set at 1 ms before knowledge time to obtain: ex ante (>=1) and ex post (<1). - "anchor": 1, - "step": 8640000000000000, # JS max ms for a Date object (NB 10 times less than Python max ms) - # "step": timedelta.max.total_seconds() * 10**2, - }, - "legend": { - # Belief horizons binned as 1 ms contain ex-ante beliefs; the other bin contains ex-post beliefs - "labelExpr": "datum.label > 0 ? 
'ex ante' : 'ex post'", - "title": "Recorded", - }, - "scale": { - # Positive belief horizons are clamped to 1, negative belief horizons are clamped to 0 - "domain": [1, 0], - # belief horizons >= 1 ms get a dashed line, belief horizons < 1 ms get a solid line - "range": [[1, 2], [1, 0]], - }, - }, - "detail": FIELD_DEFINITIONS["source"], - }, - } + + # Draw a line for each sensor (and each source) + layers = [ + create_line_layer( + row_sensors, event_start_field_definition, event_value_field_definition + ) + ] + + # Optionally, draw transparent full-height rectangles that activate the tooltip anywhere in the graph + # (to be precise, only at points on the x-axis where there is data) + if len(row_sensors) == 1: + # With multiple sensors, we cannot do this, because it is ambiguous which tooltip to activate (instead, we use a different brush in the circle layer) + layers.append( + create_rect_layer( + event_start_field_definition, + event_value_field_definition, + shared_tooltip, + ) + ) + + # Draw circle markers that are shown on hover + layers.append( + create_circle_layer( + row_sensors, + event_start_field_definition, + event_value_field_definition, + shared_tooltip, + ) + ) + layers.append(REPLAY_RULER) + + # Layer the lines, rectangles and circles within one row, and filter by which sensors are represented in the row sensor_specs = { - "title": capitalize(sensor.name) - if sensor.name != sensor.sensor_type - else None, - "transform": [{"filter": f"datum.sensor.id == {sensor.id}"}], - "layer": [ - line_layer, - { - "mark": { - "type": "rect", - "y2": "height", - "opacity": 0, - }, - "encoding": { - "x": event_start_field_definition, - "x2": FIELD_DEFINITIONS["event_end"], - "y": { - "condition": { - "test": "isNaN(datum['event_value'])", - **event_value_field_definition, - }, - "value": 0, - }, - "detail": FIELD_DEFINITIONS["source"], - "tooltip": shared_tooltip, - }, - "transform": [ - { - "calculate": f"datum.event_start + {minimum_non_zero_resolution_in_ms}", - "as": "event_end", - }, - ], - }, + "title": join_words_into_a_list( + [ + f"{capitalize(sensor.name)}" + for sensor in row_sensors + # the sensor type is already shown as the y-axis title (avoid redundant info) + if sensor.name != sensor.sensor_type + ] + ), + "transform": [ { - "mark": { - "type": "circle", - "opacity": 1, - "clip": True, - }, - "encoding": { - "x": event_start_field_definition, - "y": event_value_field_definition, - "color": FIELD_DEFINITIONS["source_name"], - "detail": FIELD_DEFINITIONS["source"], - "size": { - "condition": { - "value": "200", - "test": {"param": "paintbrush", "empty": False}, - }, - "value": "0", - }, - "tooltip": shared_tooltip, - }, - "params": [ - { - "name": "paintbrush", - "select": { - "type": "point", - "encodings": ["x"], - "on": "mouseover", - "nearest": False, - }, - }, - ], - }, - REPLAY_RULER, + "filter": { + "field": "sensor.id", + "oneOf": [sensor.id for sensor in row_sensors], + } + } ], + "layer": layers, "width": "container", } sensors_specs.append(sensor_specs) + + # Vertically concatenate the rows chart_specs = dict( description="A vertically concatenated chart showing sensor data.", vconcat=[*sensors_specs], @@ -253,3 +235,152 @@ def chart_for_multiple_sensors( for k, v in override_chart_specs.items(): chart_specs[k] = v return chart_specs + + +def determine_shared_unit(sensors: list["Sensor"]) -> str: # noqa F821 + units = list(set([sensor.unit for sensor in sensors if sensor.unit])) + + # Replace with 'a.u.' 
in case of mixing units + shared_unit = units[0] if len(units) == 1 else "a.u." + + # Replace with 'dimensionless' in case of empty unit + return shared_unit if shared_unit else "dimensionless" + + +def determine_shared_sensor_type(sensors: list["Sensor"]) -> str: # noqa F821 + sensor_types = list(set([sensor.sensor_type for sensor in sensors])) + + # Return the sole sensor type + if len(sensor_types) == 1: + return sensor_types[0] + + # Check the units for common cases + shared_unit = determine_shared_unit(sensors) + if is_power_unit(shared_unit): + return "power" + elif is_energy_unit(shared_unit): + return "energy" + elif is_energy_price_unit(shared_unit): + return "energy price" + return "value" + + +def create_line_layer( + sensors: list["Sensor"], # noqa F821 + event_start_field_definition: dict, + event_value_field_definition: dict, +): + event_resolutions = list(set([sensor.event_resolution for sensor in sensors])) + assert ( + len(event_resolutions) == 1 + ), "Sensors shown within one row must share the same event resolution." + event_resolution = event_resolutions[0] + line_layer = { + "mark": { + "type": "line", + "interpolate": "step-after" + if event_resolution != timedelta(0) + else "linear", + "clip": True, + }, + "encoding": { + "x": event_start_field_definition, + "y": event_value_field_definition, + "color": FIELD_DEFINITIONS["sensor_description"], + "strokeDash": { + "scale": { + # Distinguish forecasters and schedulers by line stroke + "domain": ["forecaster", "scheduler", "other"], + # Schedulers get a dashed line, forecasters get a dotted line, the rest gets a solid line + "range": [[2, 2], [4, 4], [1, 0]], + }, + "field": "source.type", + "legend": { + "title": "Source", + }, + }, + "detail": [FIELD_DEFINITIONS["source"]], + }, + } + return line_layer + + +def create_circle_layer( + sensors: list["Sensor"], # noqa F821 + event_start_field_definition: dict, + event_value_field_definition: dict, + shared_tooltip: list, +): + params = [ + { + "name": "hover_x_brush", + "select": { + "type": "point", + "encodings": ["x"], + "on": "mouseover", + "nearest": False, + "clear": "mouseout", + }, + } + ] + if len(sensors) > 1: + # extra brush for showing the tooltip of the closest sensor + params.append( + { + "name": "hover_nearest_brush", + "select": { + "type": "point", + "on": "mouseover", + "nearest": True, + "clear": "mouseout", + }, + } + ) + or_conditions = [{"param": "hover_x_brush", "empty": False}] + if len(sensors) > 1: + or_conditions.append({"param": "hover_nearest_brush", "empty": False}) + circle_layer = { + "mark": { + "type": "circle", + "opacity": 1, + "clip": True, + }, + "encoding": { + "x": event_start_field_definition, + "y": event_value_field_definition, + "color": FIELD_DEFINITIONS["sensor_description"], + "size": { + "condition": {"value": "200", "test": {"or": or_conditions}}, + "value": "0", + }, + "tooltip": shared_tooltip, + }, + "params": params, + } + return circle_layer + + +def create_rect_layer( + event_start_field_definition: dict, + event_value_field_definition: dict, + shared_tooltip: list, +): + rect_layer = { + "mark": { + "type": "rect", + "y2": "height", + "opacity": 0, + }, + "encoding": { + "x": event_start_field_definition, + "y": { + "condition": { + "test": "isNaN(datum['event_value'])", + **event_value_field_definition, + }, + "value": 0, + }, + "tooltip": shared_tooltip, + }, + } + return rect_layer diff --git a/flexmeasures/data/models/charts/defaults.py b/flexmeasures/data/models/charts/defaults.py index f13c3b50b..0bdf4b911 
100644 --- a/flexmeasures/data/models/charts/defaults.py +++ b/flexmeasures/data/models/charts/defaults.py @@ -21,21 +21,35 @@ title=None, axis={"labelExpr": FORMAT_24H, "labelOverlap": True, "labelSeparation": 1}, ), - "event_end": dict( - field="event_end", - type="temporal", - title=None, - axis={"labelExpr": FORMAT_24H, "labelOverlap": True, "labelSeparation": 1}, - ), "event_value": dict( field="event_value", type="quantitative", ), + "sensor": dict( + field="sensor.id", + type="nominal", + title=None, + ), + "sensor_name": dict( + field="sensor.name", + type="nominal", + title="Sensor", + ), + "sensor_description": dict( + field="sensor.description", + type="nominal", + title="Sensor", + ), "source": dict( field="source.id", type="nominal", title=None, ), + "source_type": dict( + field="source.type", + type="nominal", + title="Type", + ), "source_name": dict( field="source.name", type="nominal", diff --git a/flexmeasures/data/models/data_sources.py b/flexmeasures/data/models/data_sources.py index 7705400c0..042195ab9 100644 --- a/flexmeasures/data/models/data_sources.py +++ b/flexmeasures/data/models/data_sources.py @@ -15,7 +15,7 @@ class DataSource(db.Model, tb.BeliefSourceDBMixin): __tablename__ = "data_source" __table_args__ = (db.UniqueConstraint("name", "user_id", "model", "version"),) - # The type of data source (e.g. user, forecasting script or scheduling script) + # The type of data source (e.g. user, forecaster or scheduler) type = db.Column(db.String(80), default="") # The id of the user source (can link e.g. to fm_user table) @@ -50,12 +50,12 @@ def __init__( @property def label(self): - """Human-readable label (preferably not starting with a capital letter so it can be used in a sentence).""" + """Human-readable label (preferably not starting with a capital letter, so it can be used in a sentence).""" if self.type == "user": return f"data entered by user {self.user.username}" # todo: give users a display name - elif self.type == "forecasting script": + elif self.type == "forecaster": return f"forecast by {self.name}" # todo: give DataSource an optional db column to persist versioned models separately to the name of the data source? 
- elif self.type == "scheduling script": + elif self.type == "scheduler": return f"schedule by {self.name}" elif self.type == "crawling script": return f"data retrieved from {self.name}" @@ -70,7 +70,7 @@ def description(self): For example: - >>> DataSource("Seita", type="forecasting script", model="naive", version="1.2").description + >>> DataSource("Seita", type="forecaster", model="naive", version="1.2").description <<< "Seita's naive model v1.2.0" """ @@ -90,10 +90,11 @@ def __str__(self) -> str: def to_dict(self) -> dict: model_incl_version = self.model if self.model else "" if self.model and self.version: - model_incl_version += f" ({self.version}" + model_incl_version += f" (v{self.version})" return dict( id=self.id, name=self.name, model=model_incl_version, + type=self.type if self.type in ("forecaster", "scheduler") else "other", description=self.description, ) diff --git a/flexmeasures/data/models/generic_assets.py b/flexmeasures/data/models/generic_assets.py index 2d8117210..facdf2c7b 100644 --- a/flexmeasures/data/models/generic_assets.py +++ b/flexmeasures/data/models/generic_assets.py @@ -1,3 +1,5 @@ +from __future__ import annotations + from datetime import datetime, timedelta from typing import Any, Dict, Optional, Tuple, List, Union import json @@ -20,6 +22,7 @@ from flexmeasures.data.queries.annotations import query_asset_annotations from flexmeasures.auth.policy import AuthModelMixin, EVERY_LOGGED_IN_USER from flexmeasures.utils import geo_utils +from flexmeasures.utils.coding_utils import flatten_unique from flexmeasures.utils.time_utils import ( determine_minimum_resampling_resolution, server_now, @@ -296,7 +299,7 @@ def chart( :param dataset_name: optionally name the dataset used in the chart (the default name is sensor_) :returns: JSON string defining vega-lite chart specs """ - sensors = self.sensors_to_show + sensors = flatten_unique(self.sensors_to_show) for sensor in sensors: sensor.sensor_type = sensor.get_attribute("sensor_type", sensor.name) @@ -309,7 +312,7 @@ def chart( kwargs["event_ends_before"] = event_ends_before chart_specs = chart_type_to_chart_specs( chart_type, - sensors=sensors, + sensors_to_show=self.sensors_to_show, dataset_name=dataset_name, **kwargs, ) @@ -427,34 +430,51 @@ def search_beliefs( return bdf_dict @property - def sensors_to_show(self) -> List["Sensor"]: # noqa F821 + def sensors_to_show(self) -> list["Sensor" | list["Sensor"]]: # noqa F821 """Sensors to show, as defined by the sensors_to_show attribute. Sensors to show are defined as a list of sensor ids, which is set by the "sensors_to_show" field of the asset's "attributes" column. Valid sensors either belong to the asset itself, to other assets in the same account, or to public assets. + In case the field is missing, defaults to two of the asset's sensors. + + Sensor ids can be nested to denote that sensors should be 'shown together', + for example, layered rather than vertically concatenated. + How to interpret 'shown together' is technically left up to the function returning chart specs, + as are any restrictions regarding what sensors can be shown together, such as: + - whether they should share the same unit + - whether they should share the same name + - whether they should belong to different assets + For example, this denotes showing sensors 42 and 44 together: + + sensors_to_show = [40, 35, 41, [42, 44], 43, 45] - Defaults to two of the asset's sensors. 
""" if not self.has_attribute("sensors_to_show"): return self.sensors[:2] from flexmeasures.data.services.sensors import get_sensors - sensor_ids = self.get_attribute("sensors_to_show") + sensor_ids_to_show = self.get_attribute("sensors_to_show") sensor_map = { sensor.id: sensor for sensor in get_sensors( account=self.owner, include_public_assets=True, - sensor_id_allowlist=sensor_ids, + sensor_id_allowlist=flatten_unique(sensor_ids_to_show), ) } - # Return sensors in the order given by the sensors_to_show attribute - return [sensor_map[sensor_id] for sensor_id in sensor_ids] + # Return sensors in the order given by the sensors_to_show attribute, and with the same nesting + sensors_to_show = [] + for s in sensor_ids_to_show: + if isinstance(s, list): + sensors_to_show.append([sensor_map[sensor_id] for sensor_id in s]) + else: + sensors_to_show.append(sensor_map[s]) + return sensors_to_show @property def timezone( @@ -507,7 +527,7 @@ def get_timerange(cls, sensors: List["Sensor"]) -> Dict[str, datetime]: # noqa """ from flexmeasures.data.models.time_series import TimedBelief - sensor_ids = [sensor.id for sensor in sensors] + sensor_ids = [s.id for s in flatten_unique(sensors)] least_recent_query = ( TimedBelief.query.filter(TimedBelief.sensor_id.in_(sensor_ids)) .order_by(TimedBelief.event_start.asc()) diff --git a/flexmeasures/data/models/time_series.py b/flexmeasures/data/models/time_series.py index 17c8f7fe6..911f381b8 100644 --- a/flexmeasures/data/models/time_series.py +++ b/flexmeasures/data/models/time_series.py @@ -489,6 +489,7 @@ def to_dict(self) -> dict: return dict( id=self.id, name=self.name, + description=f"{self.name} ({self.generic_asset.name})", ) @classmethod diff --git a/flexmeasures/data/queries/analytics.py b/flexmeasures/data/queries/analytics.py index 3183678b9..08967be27 100644 --- a/flexmeasures/data/queries/analytics.py +++ b/flexmeasures/data/queries/analytics.py @@ -55,7 +55,7 @@ def get_power_data( end=query_window[-1], resolution=resolution, belief_horizon_window=(None, timedelta(hours=0)), - exclude_source_types=["scheduling script"], + exclude_source_types=["scheduler"], ) if showing_individual_traces_for == "power": power_bdf = resource.power_data @@ -87,7 +87,7 @@ def get_power_data( end=query_window[-1], resolution=resolution, belief_horizon_window=(forecast_horizon, None), - exclude_source_types=["scheduling script"], + exclude_source_types=["scheduler"], ).aggregate_power_data power_forecast_df: pd.DataFrame = simplify_index( power_forecast_bdf, index_levels_to_columns=["belief_horizon", "source"] @@ -103,7 +103,7 @@ def get_power_data( end=query_window[-1], resolution=resolution, belief_horizon_window=(None, None), - source_types=["scheduling script"], + source_types=["scheduler"], ) if showing_individual_traces_for == "schedules": power_schedule_bdf = resource.power_data @@ -205,7 +205,7 @@ def get_prices_data( resolution=resolution, horizons_at_least=forecast_horizon, horizons_at_most=None, - source_types=["user", "forecasting script", "script"], + source_types=["user", "forecaster", "script"], ) price_forecast_df: pd.DataFrame = simplify_index( price_forecast_bdf, index_levels_to_columns=["belief_horizon", "source"] @@ -297,7 +297,7 @@ def get_weather_data( resolution=resolution, horizons_at_least=forecast_horizon, horizons_at_most=None, - source_types=["user", "forecasting script", "script"], + source_types=["user", "forecaster", "script"], sum_multiple=False, ) weather_forecast_df_dict: Dict[str, pd.DataFrame] = {} diff --git 
a/flexmeasures/data/schemas/generic_assets.py b/flexmeasures/data/schemas/generic_assets.py index b7aa0b86a..469c29951 100644 --- a/flexmeasures/data/schemas/generic_assets.py +++ b/flexmeasures/data/schemas/generic_assets.py @@ -14,16 +14,17 @@ ) from flexmeasures.auth.policy import user_has_admin_access from flexmeasures.cli import is_running as running_as_cli +from flexmeasures.utils.coding_utils import flatten_unique class JSON(fields.Field): - def _deserialize(self, value, attr, data, **kwargs): + def _deserialize(self, value, attr, data, **kwargs) -> dict: try: return json.loads(value) except ValueError: raise ValidationError("Not a valid JSON string.") - def _serialize(self, value, attr, data, **kwargs): + def _serialize(self, value, attr, data, **kwargs) -> str: return json.dumps(value) @@ -77,6 +78,32 @@ def validate_account(self, account_id: int): "User is not allowed to create assets for this account." ) + @validates("attributes") + def validate_attributes(self, attributes: dict): + sensors_to_show = attributes.get("sensors_to_show", []) + + # Check type + if not isinstance(sensors_to_show, list): + raise ValidationError("sensors_to_show should be a list.") + for sensor_listing in sensors_to_show: + if not isinstance(sensor_listing, (int, list)): + raise ValidationError( + "sensors_to_show should only contain sensor IDs (integers) or lists thereof." + ) + if isinstance(sensor_listing, list): + for sensor_id in sensor_listing: + if not isinstance(sensor_id, int): + raise ValidationError( + "sensors_to_show should only contain sensor IDs (integers) or lists thereof." + ) + + # Check whether IDs represent accessible sensors + from flexmeasures.data.schemas import SensorIdField + + sensor_ids = flatten_unique(sensors_to_show) + for sensor_id in sensor_ids: + SensorIdField().deserialize(sensor_id) + class GenericAssetTypeSchema(ma.SQLAlchemySchema): """ diff --git a/flexmeasures/data/scripts/data_gen.py b/flexmeasures/data/scripts/data_gen.py index 9da2b336c..e527dc803 100644 --- a/flexmeasures/data/scripts/data_gen.py +++ b/flexmeasures/data/scripts/data_gen.py @@ -37,8 +37,8 @@ def add_default_data_sources(db: SQLAlchemy): for source_name, source_type in ( ("Seita", "demo script"), - ("Seita", "forecasting script"), - ("Seita", "scheduling script"), + ("Seita", "forecaster"), + ("Seita", "scheduler"), ): source = DataSource.query.filter( and_(DataSource.name == source_name, DataSource.type == source_type) diff --git a/flexmeasures/data/services/data_sources.py b/flexmeasures/data/services/data_sources.py index 8f272d9f6..bc6926df0 100644 --- a/flexmeasures/data/services/data_sources.py +++ b/flexmeasures/data/services/data_sources.py @@ -13,6 +13,7 @@ def get_or_create_source( source: Union[User, str], source_type: Optional[str] = None, model: Optional[str] = None, + version: Optional[str] = None, flush: bool = True, ) -> DataSource: if is_user(source): @@ -20,6 +21,8 @@ def get_or_create_source( query = DataSource.query.filter(DataSource.type == source_type) if model is not None: query = query.filter(DataSource.model == model) + if version is not None: + query = query.filter(DataSource.version == version) if is_user(source): query = query.filter(DataSource.user == source) elif isinstance(source, str): @@ -29,11 +32,13 @@ def get_or_create_source( _source = query.one_or_none() if not _source: if is_user(source): - _source = DataSource(user=source, model=model) + _source = DataSource(user=source, model=model, version=version) else: if source_type is None: raise TypeError("Please 
specify a source type") - _source = DataSource(name=source, model=model, type=source_type) + _source = DataSource( + name=source, model=model, version=version, type=source_type + ) current_app.logger.info(f"Setting up {_source} as new data source...") db.session.add(_source) if flush: diff --git a/flexmeasures/data/services/scheduling.py b/flexmeasures/data/services/scheduling.py index f88dc3c23..4118ecaf7 100644 --- a/flexmeasures/data/services/scheduling.py +++ b/flexmeasures/data/services/scheduling.py @@ -125,7 +125,7 @@ def make_schedule( data_source_name=data_source_info["name"], data_source_model=data_source_info["model"], data_source_version=data_source_info["version"], - data_source_type="scheduling script", + data_source_type="scheduler", ) # saving info on the job, so the API for a job can look the data up @@ -273,7 +273,7 @@ def get_data_source_for_job(job: Job | None) -> DataSource | None: ) scheduler_sources = ( DataSource.query.filter_by( - type="scheduling script", + type="scheduler", **data_source_info, ) .order_by(DataSource.version.desc()) diff --git a/flexmeasures/data/services/utils.py b/flexmeasures/data/services/utils.py new file mode 100644 index 000000000..96dd9b9a4 --- /dev/null +++ b/flexmeasures/data/services/utils.py @@ -0,0 +1,66 @@ +from __future__ import annotations + +from typing import Type + +import click +from sqlalchemy import JSON, String, cast, literal + +from flexmeasures import Sensor +from flexmeasures.data import db +from flexmeasures.data.models.generic_assets import GenericAsset, GenericAssetType + + +def get_or_create_model( + model_class: Type[GenericAsset | GenericAssetType | Sensor], **kwargs +) -> GenericAsset | GenericAssetType | Sensor: + """Get a model from the database or add it if it's missing. + + For example: + >>> weather_station_type = get_or_create_model( + >>> GenericAssetType, + >>> name="weather station", + >>> description="A weather station with various sensors.", + >>> ) + """ + + # unpack custom initialization parameters that map to multiple database columns + init_kwargs = kwargs.copy() + lookup_kwargs = kwargs.copy() + if "knowledge_horizon" in kwargs: + ( + lookup_kwargs["knowledge_horizon_fnc"], + lookup_kwargs["knowledge_horizon_par"], + ) = lookup_kwargs.pop("knowledge_horizon") + + # Find out which attributes are dictionaries mapped to JSON database columns, + # or callables mapped to string database columns (by their name) + filter_json_kwargs = {} + filter_by_kwargs = lookup_kwargs.copy() + for kw, arg in lookup_kwargs.items(): + model_attribute = getattr(model_class, kw) + if hasattr(model_attribute, "type") and isinstance(model_attribute.type, JSON): + filter_json_kwargs[kw] = filter_by_kwargs.pop(kw) + elif callable(arg) and isinstance(model_attribute.type, String): + # Callables are stored in the database by their name + # e.g. knowledge_horizon_fnc = x_days_ago_at_y_oclock + # is stored as "x_days_ago_at_y_oclock" + filter_by_kwargs[kw] = filter_by_kwargs[kw].__name__ + else: + # The kw is already present in filter_by_kwargs and doesn't need to be adapted + # i.e. 
it can be used as an argument to .filter_by() + pass + + # See if the model already exists as a db row + model_query = model_class.query.filter_by(**filter_by_kwargs) + for kw, arg in filter_json_kwargs.items(): + model_query = model_query.filter( + cast(getattr(model_class, kw), String) == cast(literal(arg, JSON()), String) + ) + model = model_query.one_or_none() + + # Create the model and add it to the database if it didn't already exist + if model is None: + model = model_class(**init_kwargs) + click.echo(f"Created {model}") + db.session.add(model) + return model diff --git a/flexmeasures/data/tests/test_scheduling_jobs.py b/flexmeasures/data/tests/test_scheduling_jobs.py index 517273c39..c7fc313c6 100644 --- a/flexmeasures/data/tests/test_scheduling_jobs.py +++ b/flexmeasures/data/tests/test_scheduling_jobs.py @@ -28,9 +28,7 @@ def test_scheduling_a_battery(db, app, add_battery_assets, setup_test_data): resolution = timedelta(minutes=15) assert ( - DataSource.query.filter_by( - name="FlexMeasures", type="scheduling script" - ).one_or_none() + DataSource.query.filter_by(name="FlexMeasures", type="scheduler").one_or_none() is None ) # Make sure the scheduler data source isn't there @@ -47,7 +45,7 @@ def test_scheduling_a_battery(db, app, add_battery_assets, setup_test_data): work_on_rq(app.queues["scheduling"], exc_handler=exception_reporter) scheduler_source = DataSource.query.filter_by( - name="Seita", type="scheduling script" + name="Seita", type="scheduler" ).one_or_none() assert ( scheduler_source is not None @@ -125,7 +123,7 @@ def test_assigning_custom_scheduler(db, app, add_battery_assets, is_path: bool): assert finished_job.meta["data_source_info"]["model"] == scheduler_specs["class"] scheduler_source = DataSource.query.filter_by( - type="scheduling script", + type="scheduler", **finished_job.meta["data_source_info"], ).one_or_none() assert ( diff --git a/flexmeasures/data/tests/test_scheduling_jobs_fresh_db.py b/flexmeasures/data/tests/test_scheduling_jobs_fresh_db.py index 5b8e8dfcd..ca125614f 100644 --- a/flexmeasures/data/tests/test_scheduling_jobs_fresh_db.py +++ b/flexmeasures/data/tests/test_scheduling_jobs_fresh_db.py @@ -33,8 +33,7 @@ def test_scheduling_a_charging_station( soc_targets = [dict(datetime=target_datetime.isoformat(), value=target_soc)] assert ( - DataSource.query.filter_by(name="Seita", type="scheduling script").one_or_none() - is None + DataSource.query.filter_by(name="Seita", type="scheduler").one_or_none() is None ) # Make sure the scheduler data source isn't there job = create_scheduling_job( @@ -51,7 +50,7 @@ def test_scheduling_a_charging_station( work_on_rq(app.queues["scheduling"], exc_handler=exception_reporter) scheduler_source = DataSource.query.filter_by( - name="Seita", type="scheduling script" + name="Seita", type="scheduler" ).one_or_none() assert ( scheduler_source is not None diff --git a/flexmeasures/utils/coding_utils.py b/flexmeasures/utils/coding_utils.py index 11182a1f3..5c2f1b3c0 100644 --- a/flexmeasures/utils/coding_utils.py +++ b/flexmeasures/utils/coding_utils.py @@ -1,3 +1,5 @@ +from __future__ import annotations + import functools import time import inspect @@ -120,6 +122,22 @@ def sort_dict(unsorted_dict: dict) -> dict: return sorted_dict +def flatten_unique(nested_list_of_objects: list) -> list: + """Returns unique objects in a possibly nested (one level) list of objects. 
+ + For example: + >>> flatten_unique([1, [2, 3, 4], 3, 5]) + <<< [1, 2, 3, 4, 5] + """ + all_objects = [] + for s in nested_list_of_objects: + if isinstance(s, list): + all_objects.extend(s) + else: + all_objects.append(s) + return list(set(all_objects)) + + def timeit(func): """Decorator for printing the time it took to execute the decorated function.""" diff --git a/flexmeasures/utils/config_defaults.py b/flexmeasures/utils/config_defaults.py index d38fe8c9e..e62b7a82d 100644 --- a/flexmeasures/utils/config_defaults.py +++ b/flexmeasures/utils/config_defaults.py @@ -126,8 +126,8 @@ class Config(object): FLEXMEASURES_REDIS_PASSWORD: Optional[str] = None FLEXMEASURES_JS_VERSIONS: dict = dict( vega="5.22.1", - vegaembed="6.20.8", - vegalite="5.2.0", + vegaembed="6.21.0", + vegalite="5.5.0", # "5.6.0" has a problematic bar chart: see our sensor page and https://github.com/vega/vega-lite/issues/8496 # todo: expand with other js versions used in FlexMeasures ) diff --git a/flexmeasures/utils/flexmeasures_inflection.py b/flexmeasures/utils/flexmeasures_inflection.py index 3c3d07cef..eaa825cad 100644 --- a/flexmeasures/utils/flexmeasures_inflection.py +++ b/flexmeasures/utils/flexmeasures_inflection.py @@ -25,7 +25,7 @@ def humanize(word): def parameterize(word): - """Parameterize the word so it can be used as a python or javascript variable name. + """Parameterize the word, so it can be used as a python or javascript variable name. For example: >>> word = "Acme® EV-Charger™" "acme_ev_chargertm" @@ -55,3 +55,7 @@ def titleize(word): for ac in ACRONYMS: word = re.sub(inflection.titleize(ac), ac, word) return word + + +def join_words_into_a_list(words: list[str]) -> str: + return p.join(words, final_sep="")
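
A minimal usage sketch of how the nested ``sensors_to_show`` attribute and the new ``flatten_unique`` helper introduced above fit together; the sensor IDs are made up for illustration, and FlexMeasures is assumed to be installed.

    from flexmeasures.utils.coding_utils import flatten_unique

    # Nested entries mean "show these sensors together" (e.g. overlaid in one plot),
    # while top-level entries each get their own plot.
    sensors_to_show = [40, 35, 41, [42, 44], 43, 45]

    # Chart specs keep the nesting to know what to overlay; database lookups only need
    # the unique IDs. Note that flatten_unique does not guarantee any particular order.
    sensor_id_allowlist = flatten_unique(sensors_to_show)
    assert sorted(sensor_id_allowlist) == [35, 40, 41, 42, 43, 44, 45]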
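
A sketch of the updated ``DataSource.to_dict`` serialization shown above: the version is rendered after the model name as ``(vX)``, and any type other than "forecaster" or "scheduler" is reported as "other". The example values are illustrative, not taken from the changeset.

    from flexmeasures.data.models.data_sources import DataSource

    forecaster = DataSource("Seita", type="forecaster", model="naive", version="1.2")
    assert forecaster.to_dict()["model"] == "naive (v1.2)"
    assert forecaster.to_dict()["type"] == "forecaster"

    # Any type other than "forecaster" or "scheduler" is collapsed to "other":
    crawler = DataSource("Bot", type="crawling script")
    assert crawler.to_dict()["type"] == "other"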
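
A sketch of the extended ``get_or_create_source`` service (the ``version`` parameter added above). This assumes an application context with a database session; the model and version strings are illustrative.

    from flexmeasures.data.services.data_sources import get_or_create_source

    # Sources are now also distinguished by version, so data computed by different
    # versions of the same model is attributed to different data sources.
    source_v1 = get_or_create_source(
        "Seita", source_type="scheduler", model="StorageScheduler", version="1"
    )
    source_v2 = get_or_create_source(
        "Seita", source_type="scheduler", model="StorageScheduler", version="2"
    )
    assert source_v1 is not source_v2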
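
A usage sketch of the new ``get_or_create_model`` helper in flexmeasures/data/services/utils.py, mirroring its own docstring example. It assumes an application context and a database session; the helper only adds new rows to the session, so committing is left to the caller.

    from flexmeasures.data import db
    from flexmeasures.data.models.generic_assets import GenericAssetType
    from flexmeasures.data.services.utils import get_or_create_model

    # Idempotent setup: the first call creates the row (and echoes it), later calls return it.
    weather_station_type = get_or_create_model(
        GenericAssetType,
        name="weather station",
        description="A weather station with various sensors.",
    )
    db.session.commit()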