Document & refactor scheduling specs for storage flexibility model (#511)

* Better documentation of flexibility model for storage in endpoint; refactor its parameters and handling within the code for readability

Signed-off-by: Nicolas Höning <nicolas@seita.nl>

* add changelog entry

Signed-off-by: Nicolas Höning <nicolas@seita.nl>

* make tests work, include updating older API versions, make prefer_charging_sooner part of storage specs & an optional parameter in API v3

Signed-off-by: Nicolas Höning <nicolas@seita.nl>

* use storage_specs in CLI command, as well

Signed-off-by: Nicolas Höning <nicolas@seita.nl>

* remove default resolution of 15M, for now pass in what you want

Signed-off-by: Nicolas Höning <nicolas@seita.nl>

* various review comments

Signed-off-by: Nicolas Höning <nicolas@seita.nl>

* black

Signed-off-by: Nicolas Höning <nicolas@seita.nl>

* fix tests

Signed-off-by: Nicolas Höning <nicolas@seita.nl>

* always load sensor when checking storage specs

Signed-off-by: Nicolas Höning <nicolas@seita.nl>

* begin to handle source model and version during scheduling

Signed-off-by: Nicolas Höning <nicolas@seita.nl>

* we can get multiple sources from our query (in the old setting, when we use name, but also in the new setting, unless we always include the user_id)

Signed-off-by: Nicolas Höning <nicolas@seita.nl>

* give our two in-built schedulers an official __author__ and __version__

Signed-off-by: Nicolas Höning <nicolas@seita.nl>

* review comments

Signed-off-by: Nicolas Höning <nicolas@seita.nl>

* refactor getting data source for a job to util function; use the actual data source ID for this lookup if possible

Signed-off-by: Nicolas Höning <nicolas@seita.nl>

* pass sensor to check_storage_specs, as we always have it already

Signed-off-by: Nicolas Höning <nicolas@seita.nl>

* wrap Scheduler in classes, unify data source handling a bit more

Signed-off-by: Nicolas Höning <nicolas@seita.nl>

* Support pandas 1.4 (#525)

Add a pandas version check in initialize_index.


* Use initialize_series util function

Signed-off-by: F.N. Claessen <felix@seita.nl>

* Update initialize_index for pandas>=1.4

Signed-off-by: F.N. Claessen <felix@seita.nl>

* flake8

Signed-off-by: F.N. Claessen <felix@seita.nl>

* Use initialize_index or initialize_series in all places where the closed keyword argument was used

Signed-off-by: F.N. Claessen <felix@seita.nl>

* flake8

Signed-off-by: F.N. Claessen <felix@seita.nl>

* mypy: PEP 484 prohibits implicit Optional

Signed-off-by: F.N. Claessen <felix@seita.nl>

* black after mypy

Signed-off-by: F.N. Claessen <felix@seita.nl>

Signed-off-by: F.N. Claessen <felix@seita.nl>

* Stop requiring min/max SoC attributes, which have defaults:
- Default min = 0
- Default max = the highest target value, or np.nan if there are no targets, which subsequently maps to infinity in our solver

Signed-off-by: F.N. Claessen <felix@seita.nl>

* Set up device constraint columns for efficiencies in Charge Point scheduler, too

Signed-off-by: F.N. Claessen <felix@seita.nl>

* Derive flow constraints for battery scheduling, too (copied from Charge Point scheduler)

Signed-off-by: F.N. Claessen <felix@seita.nl>

* Refactor: rename BatteryScheduler to StorageScheduler

Signed-off-by: F.N. Claessen <felix@seita.nl>

* Warn for deprecation of
schedule_battery and schedule_charging_station

Signed-off-by: F.N. Claessen <felix@seita.nl>

* Use StorageScheduler instead of ChargingStationScheduler

Signed-off-by: F.N. Claessen <felix@seita.nl>

* Deprecate ChargingStationScheduler

Signed-off-by: F.N. Claessen <felix@seita.nl>

* Refactor: move StorageScheduler to dedicated module

Signed-off-by: F.N. Claessen <felix@seita.nl>

* Update docstring

Signed-off-by: F.N. Claessen <felix@seita.nl>

* fix test

Signed-off-by: F.N. Claessen <felix@seita.nl>

* flake8

Signed-off-by: F.N. Claessen <felix@seita.nl>

* Lose the v in version strings; prefer versions showing up as 'version: 3' over 'version: v3'.

Even though Scheduler versioning does not necessarily need to follow semantic versioning (see discussion here: semver/semver#235), the v is still redundant.

Signed-off-by: F.N. Claessen <felix@seita.nl>

* Refactor: rename module

Signed-off-by: F.N. Claessen <felix@seita.nl>

* Deal with empty SoC targets

Signed-off-by: F.N. Claessen <felix@seita.nl>

* Stop wrapping DataFrame representations in logging

Signed-off-by: F.N. Claessen <felix@seita.nl>

* Log warning instead of raising UnknownForecastException, and assume zero power values for missing values

Signed-off-by: F.N. Claessen <felix@seita.nl>

* mention scheduler merging in changelog

Signed-off-by: Nicolas Höning <nicolas@seita.nl>

* amend existing data source information to reflect our StorageScheduler

Signed-off-by: Nicolas Höning <nicolas@seita.nl>

* add db upgrade notice to changelog

Signed-off-by: Nicolas Höning <nicolas@seita.nl>

* more specific downgrade command

Signed-off-by: Nicolas Höning <nicolas@seita.nl>

Signed-off-by: Nicolas Höning <nicolas@seita.nl>
Signed-off-by: F.N. Claessen <felix@seita.nl>
Co-authored-by: Felix Claessen <30658763+Flix6x@users.noreply.github.com>
Co-authored-by: F.N. Claessen <felix@seita.nl>
3 people committed Nov 18, 2022
1 parent 35591a3 commit 7715219
Showing 26 changed files with 758 additions and 651 deletions.
7 changes: 5 additions & 2 deletions documentation/changelog.rst
@@ -5,10 +5,12 @@ FlexMeasures Changelog
v0.12.0 | October XX, 2022
============================

.. warning:: Upgrading to this version requires running ``flexmeasures db upgrade`` (you can create a backup first with ``flexmeasures db-ops dump``).

New features
-------------

* Hit the replay button to replay what happened, available on the sensor and asset pages [see `PR #463 <http://www.github.com/FlexMeasures/flexmeasures/pull/463>`_]
* Hit the replay button to visually replay what happened, available on the sensor and asset pages [see `PR #463 <http://www.github.com/FlexMeasures/flexmeasures/pull/463>`_]
* Ability to provide your own custom scheduling function [see `PR #505 <http://www.github.com/FlexMeasures/flexmeasures/pull/505>`_]
* Visually distinguish forecasts/schedules (dashed lines) from measurements (solid lines), and expand the tooltip with timing info regarding the forecast/schedule horizon or measurement lag [see `PR #503 <http://www.github.com/FlexMeasures/flexmeasures/pull/503>`_]
* The asset page also allows to show sensor data from other assets that belong to the same account [see `PR #500 <http://www.github.com/FlexMeasures/flexmeasures/pull/500>`_]
@@ -28,6 +30,7 @@ Infrastructure / Support
* Remove bokeh dependency and obsolete UI views [see `PR #476 <http://www.github.com/FlexMeasures/flexmeasures/pull/476>`_]
* Fix ``flexmeasures db-ops dump`` and ``flexmeasures db-ops restore`` not working in docker containers [see `PR #530 <http://www.github.com/FlexMeasures/flexmeasures/pull/530>`_] and incorrectly reporting a success when `pg_dump` and `pg_restore` are not installed [see `PR #526 <http://www.github.com/FlexMeasures/flexmeasures/pull/526>`_]
* Plugins can save BeliefsSeries, too, instead of just BeliefsDataFrames [see `PR #523 <http://www.github.com/FlexMeasures/flexmeasures/pull/523>`_]
* Improve documentation and code w.r.t. storage flexibility modelling ― prepare for handling other schedulers & merge battery and car charging schedulers [see `PR #511 <http://www.github.com/FlexMeasures/flexmeasures/pull/511>`_]
* Revised strategy for removing unchanged beliefs when saving data: retain the oldest measurement (ex-post belief), too [see `PR #518 <http://www.github.com/FlexMeasures/flexmeasures/pull/518>`_]


@@ -84,7 +87,7 @@ Bugfixes
* The docker-based tutorial now works with UI on all platforms (port 5000 did not expose on MacOS) [see `PR #465 <http://www.github.com/FlexMeasures/flexmeasures/pull/465>`_]
* Fix interpretation of scheduling results in toy tutorial [see `PR #466 <http://www.github.com/FlexMeasures/flexmeasures/pull/466>`_ and `PR #475 <http://www.github.com/FlexMeasures/flexmeasures/pull/475>`_]
* Avoid formatting datetime.timedelta durations as nominal ISO durations [see `PR #459 <http://www.github.com/FlexMeasures/flexmeasures/pull/459>`_]
* Account admins cannot add assets to other accounts anymore; and they are shown a button for asset creation in UI [see `PR #488 <http://www.github.com/FlexMeasures/flexmeasures/pull/488>`_]
* Account admins cannot add assets to other accounts any more; and they are shown a button for asset creation in UI [see `PR #488 <http://www.github.com/FlexMeasures/flexmeasures/pull/488>`_]

Infrastructure / Support
----------------------
46 changes: 29 additions & 17 deletions documentation/plugin/customisation.rst
@@ -15,32 +15,45 @@
but in the background your custom scheduling algorithm is being used.

Let's walk through an example!

First, we need to write a function which accepts arguments just like the in-built schedulers (their code is `here <https://github.com/FlexMeasures/flexmeasures/tree/main/flexmeasures/data/models/planning>`_).
The following minimal example gives you an idea of the inputs and outputs:
First, we need to write a class (inheriting from the base Scheduler) with a `schedule` function which accepts arguments just like the in-built schedulers (their code is `here <https://github.com/FlexMeasures/flexmeasures/tree/main/flexmeasures/data/models/planning>`_).
The following minimal example gives you an idea of the meta information you can add (for labelling your data), as well as the inputs and outputs of such a scheduling function:

.. code-block:: python
from datetime import datetime, timedelta
import pandas as pd
from pandas.tseries.frequencies import to_offset
from flexmeasures.data.models.time_series import Sensor
from flexmeasures.data.models.planning import Scheduler
def compute_a_schedule(
sensor: Sensor,
start: datetime,
end: datetime,
resolution: timedelta,
*args,
**kwargs
):
"""Just a dummy scheduler, advising to do nothing"""
return pd.Series(
0, index=pd.date_range(start, end, freq=resolution, closed="left")
)
class DummyScheduler(Scheduler):
__author__ = "My Company"
__version__ = "2"
def schedule(
self,
sensor: Sensor,
start: datetime,
end: datetime,
resolution: timedelta,
*args,
**kwargs
):
"""
Just a dummy scheduler that always plans to consume at maximum capacity.
(Schedulers return positive values for consumption, and negative values for production)
"""
return pd.Series(
sensor.get_attribute("capacity_in_mw"),
index=pd.date_range(start, end, freq=resolution, closed="left"),
)
.. note:: It's possible to add arguments that describe the asset flexibility and the EMS context in more detail. For example,
for storage assets we support various state-of-charge parameters. For now, the existing schedulers are the best documentation.
for storage assets we support various state-of-charge parameters. For now, the existing in-built schedulers are the best documentation.
We are working on documenting this better, so the learning curve becomes easier.


Finally, make your scheduler be the one that FlexMeasures will use for certain sensors:
@@ -52,8 +65,7 @@
scheduler_specs = {
"module": "flexmeasures.data.tests.dummy_scheduler", # or a file path, see note below
"function": "compute_a_schedule",
"source": "My Company"
"class": "DummyScheduler",
}
my_sensor = Sensor.query.filter(Sensor.name == "My power sensor on a flexible asset").one_or_none()
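The diff above ends before showing how FlexMeasures resolves such ``scheduler_specs``. As a rough sketch only (assuming a plain module/class lookup, a common pattern for this kind of spec; the real resolution logic is not shown in this commit), loading the class could look like:

```python
import importlib


def load_scheduler(scheduler_specs: dict):
    """Resolve the class named in specs like {"module": ..., "class": ...}."""
    module = importlib.import_module(scheduler_specs["module"])
    return getattr(module, scheduler_specs["class"])


# Illustrative only: resolve a well-known class from the standard library
cls = load_scheduler({"module": "collections", "class": "OrderedDict"})
assert cls.__name__ == "OrderedDict"
```

The same mechanism would work for a file path instead of a dotted module name, as the "see note below" in the diff hints, e.g. via ``importlib.util.spec_from_file_location``.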
5 changes: 2 additions & 3 deletions flexmeasures/api/common/schemas/sensor_data.py
@@ -14,6 +14,7 @@
from flexmeasures.data.models.time_series import Sensor
from flexmeasures.api.common.schemas.sensors import SensorField
from flexmeasures.api.common.utils.api_utils import upsample_values
from flexmeasures.data.models.planning.utils import initialize_index
from flexmeasures.data.schemas.times import AwareDateTimeField, DurationField
from flexmeasures.data.services.time_series import simplify_index
from flexmeasures.utils.time_utils import duration_isoformat, server_now
@@ -179,9 +180,7 @@ def dump_bdf(self, sensor_data_description: dict, **kwargs) -> dict:
)

# Convert to desired time range
index = pd.date_range(
start=start, end=end, freq=df.event_resolution, closed="left"
)
index = initialize_index(start=start, end=end, resolution=df.event_resolution)
df = df.reindex(index)

# Convert to desired unit
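The ``initialize_index`` helper used above exists because pandas 1.4 renamed ``date_range``'s ``closed`` keyword to ``inclusive`` (see the "Support pandas 1.4 (#525)" commits in the message). A minimal sketch of such a version-aware wrapper (the real implementation lives in ``flexmeasures.data.models.planning.utils`` and may differ):

```python
from datetime import datetime, timedelta

import pandas as pd


def initialize_index(start, end, resolution, inclusive: str = "left") -> pd.DatetimeIndex:
    """Build a (by default left-closed) DatetimeIndex across pandas versions.

    pandas >= 1.4 renamed date_range's `closed` keyword to `inclusive`,
    so pick whichever keyword the installed version understands.
    """
    pd_version = tuple(int(part) for part in pd.__version__.split(".")[:2])
    kwarg = {"inclusive": inclusive} if pd_version >= (1, 4) else {"closed": inclusive}
    return pd.date_range(start=start, end=end, freq=pd.Timedelta(resolution), **kwarg)


# Left-closed: includes 00:00 through 00:45, excludes the 01:00 endpoint
index = initialize_index(
    datetime(2022, 11, 18), datetime(2022, 11, 18, 1), timedelta(minutes=15)
)
assert len(index) == 4
```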
20 changes: 12 additions & 8 deletions flexmeasures/api/v1_2/implementations.py
@@ -32,13 +32,14 @@
parse_isodate_str,
)
from flexmeasures.data import db
from flexmeasures.data.models.planning.battery import schedule_battery
from flexmeasures.data.models.planning.storage import StorageScheduler
from flexmeasures.data.models.planning.exceptions import (
UnknownMarketException,
UnknownPricesException,
)
from flexmeasures.data.models.time_series import Sensor
from flexmeasures.data.services.resources import has_assets, can_access_asset
from flexmeasures.data.models.planning.utils import ensure_storage_specs
from flexmeasures.utils.time_utils import duration_isoformat


@@ -93,17 +94,20 @@ def get_device_message_response(generic_asset_name_groups, duration):
start = datetime.fromisoformat(
sensor.generic_asset.get_attribute("soc_datetime")
)
end = start + planning_horizon
resolution = sensor.event_resolution

# Schedule the asset
storage_specs = dict(
soc_at_start=sensor.generic_asset.get_attribute("soc_in_mwh"),
prefer_charging_sooner=False,
)
storage_specs = ensure_storage_specs(
storage_specs, sensor, start, end, resolution
)
try:
schedule = schedule_battery(
sensor,
start,
start + planning_horizon,
resolution,
soc_at_start=sensor.generic_asset.get_attribute("soc_in_mwh"),
prefer_charging_sooner=False,
schedule = StorageScheduler().schedule(
sensor, start, end, resolution, storage_specs=storage_specs
)
except UnknownPricesException:
return unknown_prices()
43 changes: 23 additions & 20 deletions flexmeasures/api/v1_3/implementations.py
@@ -39,11 +39,14 @@
parse_isodate_str,
)
from flexmeasures.data import db
from flexmeasures.data.models.data_sources import DataSource
from flexmeasures.data.models.planning.utils import initialize_series
from flexmeasures.data.models.time_series import Sensor
from flexmeasures.data.queries.utils import simplify_index
from flexmeasures.data.services.resources import has_assets, can_access_asset
from flexmeasures.data.services.scheduling import create_scheduling_job
from flexmeasures.data.services.scheduling import (
create_scheduling_job,
get_data_source_for_job,
)
from flexmeasures.utils.time_utils import duration_isoformat


@@ -99,6 +102,7 @@ def get_device_message_response(generic_asset_name_groups, duration):
if event_type not in ("soc", "soc-with-targets"):
return unrecognized_event_type(event_type)
connection = current_app.queues["scheduling"].connection
job = None
try: # First try the scheduling queue
job = Job.fetch(event, connection=connection)
except NoSuchJobError: # Then try the most recent event_id (stored as a generic asset attribute)
@@ -144,19 +148,15 @@ def get_device_message_response(generic_asset_name_groups, duration):
return unknown_schedule("Scheduling job has an unknown status.")
schedule_start = job.kwargs["start"]

schedule_data_source_name = "Seita"
scheduler_source = DataSource.query.filter_by(
name=schedule_data_source_name, type="scheduling script"
).one_or_none()
if scheduler_source is None:
data_source = get_data_source_for_job(job, sensor=sensor)
if data_source is None:
return unknown_schedule(
message + f'no data is known from "{schedule_data_source_name}".'
message + f"no data source could be found for job {job}."
)

power_values = sensor.search_beliefs(
event_starts_after=schedule_start,
event_ends_before=schedule_start + planning_horizon,
source=scheduler_source,
source=data_source,
most_recent_beliefs_only=True,
one_deterministic_belief_per_event=True,
)
@@ -301,11 +301,12 @@ def post_udi_event_response(unit: str, prior: datetime):
start_of_schedule = datetime
end_of_schedule = datetime + current_app.config.get("FLEXMEASURES_PLANNING_HORIZON")
resolution = sensor.event_resolution
soc_targets = pd.Series(
soc_targets = initialize_series(
np.nan,
index=pd.date_range(
start_of_schedule, end_of_schedule, freq=resolution, closed="right"
), # note that target values are indexed by their due date (i.e. closed="right")
start=start_of_schedule,
end=end_of_schedule,
resolution=resolution,
inclusive="right", # note that target values are indexed by their due date (i.e. inclusive="right")
)

if event_type == "soc-with-targets":
@@ -359,16 +360,18 @@ def post_udi_event_response(unit: str, prior: datetime):
soc_targets.loc[target_datetime] = target_value

create_scheduling_job(
sensor_id,
sensor,
start_of_schedule,
end_of_schedule,
resolution=resolution,
belief_time=prior, # server time if no prior time was sent
soc_at_start=value,
soc_targets=soc_targets,
soc_min=soc_min,
soc_max=soc_max,
roundtrip_efficiency=roundtrip_efficiency,
storage_specs=dict(
soc_at_start=value,
soc_targets=soc_targets,
soc_min=soc_min,
soc_max=soc_max,
roundtrip_efficiency=roundtrip_efficiency,
),
job_id=form.get("event"),
enqueue=True,
)
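The ``storage_specs`` dict passed above no longer needs explicit min/max SoC attributes; per the commit message, min defaults to 0 and max defaults to the highest target value, or NaN (mapped to infinity in the solver) when there are no targets. A hedged sketch of that defaulting logic (function and key names are assumptions, not shown in this diff):

```python
import numpy as np
import pandas as pd


def fill_soc_defaults(storage_specs: dict) -> dict:
    """Sketch of the SoC defaulting described in this commit:
    soc_min defaults to 0; soc_max defaults to the highest target value,
    or NaN (treated as unbounded by the solver) when there are no targets.
    """
    specs = dict(storage_specs)
    specs.setdefault("soc_min", 0)
    if "soc_max" not in specs:
        targets = pd.Series(specs.get("soc_targets", []), dtype=float).dropna()
        specs["soc_max"] = float(targets.max()) if not targets.empty else np.nan
    return specs


specs = fill_soc_defaults({"soc_at_start": 0.2, "soc_targets": [0.5, 0.8, None]})
assert specs["soc_min"] == 0 and specs["soc_max"] == 0.8
```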
6 changes: 4 additions & 2 deletions flexmeasures/api/v1_3/tests/test_api_v1_3.py
@@ -88,9 +88,10 @@ def test_post_udi_event_and_get_device_message(
)

# check results are in the database
resolution = timedelta(minutes=15)
job.refresh() # catch meta info that was added on this very instance
data_source_info = job.meta.get("data_source_info")
scheduler_source = DataSource.query.filter_by(
name="Seita", type="scheduling script"
type="scheduling script", **data_source_info
).one_or_none()
assert (
scheduler_source is not None
@@ -100,6 +101,7 @@
.filter(TimedBelief.source_id == scheduler_source.id)
.all()
)
resolution = timedelta(minutes=15)
consumption_schedule = pd.Series(
[-v.event_value for v in power_values],
index=pd.DatetimeIndex([v.event_start for v in power_values], freq=resolution),
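The test above shows the new lookup: the scheduler stores ``data_source_info`` on the job's meta, and ``get_data_source_for_job`` finds the matching DataSource, preferring the actual data source ID when available (per the commit message). A simplified, database-free sketch of that lookup order (names and dict shapes are illustrative; the real helper lives in ``flexmeasures.data.services.scheduling``):

```python
def find_data_source(sources: list, job_meta: dict):
    """Sketch of the lookup order described in this commit: prefer the exact
    data source ID recorded on the job (if any), otherwise match on the
    name/model/version info the scheduler stored in job.meta.
    """
    info = dict(job_meta.get("data_source_info", {}))
    source_id = info.pop("id", None)
    if source_id is not None:
        for source in sources:
            if source["id"] == source_id:
                return source
    for source in sources:
        if all(source.get(key) == value for key, value in info.items()):
            return source
    return None


sources = [
    {"id": 1, "name": "Seita", "model": "StorageScheduler", "version": "1"},
    {"id": 2, "name": "My Company", "model": "DummyScheduler", "version": "2"},
]
meta = {"data_source_info": {"name": "My Company", "model": "DummyScheduler", "version": "2"}}
assert find_data_source(sources, meta)["id"] == 2
```

Matching on attributes rather than a hard-coded name ("Seita" in the old code) is what lets custom schedulers label their own schedules.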
