
Document & refactor scheduling specs for storage flexibility model #511

Merged: 41 commits, Nov 18, 2022. Changes shown from 13 commits.

Commits
ce8169c
Better documentation of flexibility model for storage in endpoint; re…
nhoening Oct 1, 2022
3bed7c0
add changelog entry
nhoening Oct 1, 2022
1fba29e
make tests work, include updating older API versions, make prefer_cha…
nhoening Oct 1, 2022
0e9259d
use storage_specs in CLI command, as well
nhoening Oct 1, 2022
c2b2787
remove default resolution of 15M, for now pass in what you want
nhoening Oct 2, 2022
c0400ae
various review comments
nhoening Oct 28, 2022
4718a8a
black
nhoening Oct 29, 2022
2eef27d
fix tests
nhoening Oct 29, 2022
1f68659
always load sensor when checking storage specs
nhoening Oct 31, 2022
2beb928
begin to handle source model and version during scheduling
nhoening Oct 31, 2022
378caba
we can get multiple sources from our query (in the old setting, when …
nhoening Oct 31, 2022
5b48baf
give our two in-built schedulers an official __author__ and __version__
nhoening Oct 31, 2022
b664910
review comments
nhoening Oct 31, 2022
e9ff60b
refactor getting data source for a job to util function; use the actu…
nhoening Nov 2, 2022
5fe72dc
pass sensor to check_storage_specs, as we always have it already
nhoening Nov 2, 2022
f80171d
wrap Scheduler in classes, unify data source handling a bit more
nhoening Nov 3, 2022
c04f0d7
Merge branch 'main' into refactor-scheduling-storage-specs
nhoening Nov 4, 2022
22cb852
Support pandas 1.4 (#525)
Flix6x Nov 10, 2022
dd47dab
Stop requiring min/max SoC attributes, which have defaults:
Flix6x Nov 10, 2022
add377f
Set up device constraint columns for efficiencies in Charge Point sch…
Flix6x Nov 10, 2022
ccab2ee
Derive flow constraints for battery scheduling, too (copied from Char…
Flix6x Nov 10, 2022
6344cb0
Refactor: rename BatteryScheduler to StorageScheduler
Flix6x Nov 10, 2022
7f9eced
Warn for deprecation of
Flix6x Nov 10, 2022
4bc593c
Use StorageScheduler instead of ChargingStationScheduler
Flix6x Nov 10, 2022
ec40bc0
Deprecate ChargingStationScheduler
Flix6x Nov 10, 2022
6914a4b
Refactor: move StorageScheduler to dedicated module
Flix6x Nov 10, 2022
5e6bd4e
Update docstring
Flix6x Nov 10, 2022
beb2770
fix test
Flix6x Nov 10, 2022
3bf1b97
flake8
Flix6x Nov 10, 2022
ed3284d
Merge remote-tracking branch 'origin/main' into refactor-scheduling-s…
Flix6x Nov 10, 2022
a9899a2
Lose the v in version strings; prefer versions showing up as 'version…
Flix6x Nov 10, 2022
ad22c35
Refactor: rename module
Flix6x Nov 11, 2022
4efe883
Deal with empty SoC targets
Flix6x Nov 11, 2022
09cf700
Stop wrapping DataFrame representations in logging
Flix6x Oct 9, 2022
5a3845d
Log warning instead of raising UnknownForecastException, and assume z…
Flix6x Oct 9, 2022
172753d
mention scheduler merging in changelog
nhoening Nov 16, 2022
78250ef
amend existing data source information to reflect our StorageScheduler
nhoening Nov 16, 2022
ac2ddcc
Merge branch 'main' into refactor-scheduling-storage-specs
nhoening Nov 17, 2022
0bf52dd
add db upgrade notice to changelog
nhoening Nov 17, 2022
e1c2b47
Merge branch 'refactor-scheduling-storage-specs' of github.com:FlexMe…
nhoening Nov 17, 2022
1368fab
more specific downgrade command
nhoening Nov 18, 2022
3 changes: 2 additions & 1 deletion documentation/changelog.rst
@@ -8,7 +8,7 @@ v0.12.0 | October XX, 2022
New features
-------------

* Hit the replay button to replay what happened, available on the sensor and asset pages [see `PR #463 <http://www.github.com/FlexMeasures/flexmeasures/pull/463>`_]
* Hit the replay button to visually replay what happened, available on the sensor and asset pages [see `PR #463 <http://www.github.com/FlexMeasures/flexmeasures/pull/463>`_]
* Ability to provide your own custom scheduling function [see `PR #505 <http://www.github.com/FlexMeasures/flexmeasures/pull/505>`_]
* Visually distinguish forecasts/schedules (dashed lines) from measurements (solid lines), and expand the tooltip with timing info regarding the forecast/schedule horizon or measurement lag [see `PR #503 <http://www.github.com/FlexMeasures/flexmeasures/pull/503>`_]
* The asset page also allows to show sensor data from other assets that belong to the same account [see `PR #500 <http://www.github.com/FlexMeasures/flexmeasures/pull/500>`_]
@@ -23,6 +23,7 @@ Infrastructure / Support

* Reduce size of Docker image (from 2GB to 1.4GB) [see `PR #512 <http://www.github.com/FlexMeasures/flexmeasures/pull/512>`_]
* Remove bokeh dependency and obsolete UI views [see `PR #476 <http://www.github.com/FlexMeasures/flexmeasures/pull/476>`_]
* Improve documentation and code w.r.t. storage flexibility modelling [see `PR #511 <http://www.github.com/FlexMeasures/flexmeasures/pull/511>`_]


v0.11.2 | September 6, 2022
17 changes: 12 additions & 5 deletions documentation/plugin/customisation.rst
@@ -16,7 +16,7 @@ but in the background your custom scheduling algorithm is being used.
Let's walk through an example!

First, we need to write a function which accepts arguments just like the in-built schedulers (their code is `here <https://github.com/FlexMeasures/flexmeasures/tree/main/flexmeasures/data/models/planning>`_).
The following minimal example gives you an idea of the inputs and outputs:
The following minimal example gives you an idea of some meta information you can add for labeling your data, as well as the inputs and outputs of such a scheduling function:

.. code-block:: python

@@ -25,6 +25,10 @@ The following minimal example gives you an idea of the inputs and outputs:
from pandas.tseries.frequencies import to_offset
from flexmeasures.data.models.time_series import Sensor


__author__ = "My Company"
__version__ = "v2"

def compute_a_schedule(
sensor: Sensor,
start: datetime,
@@ -33,14 +37,18 @@ The following minimal example gives you an idea of the inputs and outputs:
*args,
**kwargs
):
"""Just a dummy scheduler, advising to do nothing"""
"""
Just a dummy scheduler that always plans to consume at maximum capacity.
(Schedulers return positive values for consumption, and negative values for production)
"""
return pd.Series(
0, index=pd.date_range(start, end, freq=resolution, closed="left")
sensor.get_attribute("capacity_in_mw"),
index=pd.date_range(start, end, freq=resolution, closed="left"),
)
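As a sanity check, the dummy scheduler above can be exercised without any database, using a stand-in for `Sensor`. Everything here is for illustration only: the `StubSensor` class and its `capacity_in_mw` value of 0.5 are made up, and `inclusive="left"` is the pandas ≥1.4 spelling of the deprecated `closed="left"` (this PR also adds pandas 1.4 support):

```python
from datetime import datetime, timedelta

import pandas as pd


class StubSensor:
    """Minimal stand-in for flexmeasures' Sensor (illustration only)."""

    def get_attribute(self, name: str) -> float:
        return 0.5  # pretend the asset's capacity_in_mw is 0.5


def compute_a_schedule(sensor, start, end, resolution, *args, **kwargs):
    """Dummy scheduler that always plans to consume at maximum capacity."""
    return pd.Series(
        sensor.get_attribute("capacity_in_mw"),
        index=pd.date_range(start, end, freq=resolution, inclusive="left"),
    )


schedule = compute_a_schedule(
    StubSensor(),
    start=datetime(2022, 11, 18, 10),
    end=datetime(2022, 11, 18, 12),
    resolution=timedelta(minutes=15),
)
# two hours at 15-minute resolution -> 8 entries, all at capacity
```

Note the left-closed index: the schedule value for 10:00 covers 10:00–10:15, and the interval ending at 12:00 belongs to the next horizon.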


.. note:: It's possible to add arguments that describe the asset flexibility and the EMS context in more detail. For example,
for storage assets we support various state-of-charge parameters. For now, the existing schedulers are the best documentation.
for storage assets we support various state-of-charge parameters. For now, the existing in-built schedulers are the best documentation.


Finally, register your scheduler as the one that FlexMeasures will use for certain sensors:
@@ -53,7 +61,6 @@ Finally, make your scheduler be the one that FlexMeasures will use for certain s
scheduler_specs = {
"module": "flexmeasures.data.tests.dummy_scheduler", # or a file path, see note below
"function": "compute_a_schedule",
"source": "My Company"
}

my_sensor = Sensor.query.filter(Sensor.name == "My power sensor on a flexible asset").one_or_none()
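With the `"source"` key dropped from `scheduler_specs`, the data source label can instead be derived from the module-level `__author__` and `__version__` shown earlier. A minimal sketch of that idea (`describe_data_source` is a hypothetical helper name, not the FlexMeasures API):

```python
import types


def describe_data_source(module: types.ModuleType) -> dict:
    """Read scheduler meta info from module attributes, with fallbacks."""
    return dict(
        name=getattr(module, "__author__", "Unknown author"),
        version=getattr(module, "__version__", "0"),
    )


# Simulate a plugin module carrying the meta info from the example above
dummy_scheduler = types.ModuleType("dummy_scheduler")
dummy_scheduler.__author__ = "My Company"
dummy_scheduler.__version__ = "v2"
```

This way, schedules saved to the database stay attributable to a specific author and scheduler version, even after the plugin code changes.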
16 changes: 10 additions & 6 deletions flexmeasures/api/v1_2/implementations.py
@@ -39,6 +39,7 @@
)
from flexmeasures.data.models.time_series import Sensor
from flexmeasures.data.services.resources import has_assets, can_access_asset
from flexmeasures.data.models.planning.utils import ensure_storage_specs
from flexmeasures.utils.time_utils import duration_isoformat


@@ -93,17 +94,20 @@ def get_device_message_response(generic_asset_name_groups, duration):
start = datetime.fromisoformat(
sensor.generic_asset.get_attribute("soc_datetime")
)
end = start + planning_horizon
resolution = sensor.event_resolution

# Schedule the asset
storage_specs = dict(
soc_at_start=sensor.generic_asset.get_attribute("soc_in_mwh"),
prefer_charging_sooner=False,
)
storage_specs = ensure_storage_specs(
storage_specs, sensor_id, start, end, resolution
)
try:
schedule = schedule_battery(
sensor,
start,
start + planning_horizon,
resolution,
soc_at_start=sensor.generic_asset.get_attribute("soc_in_mwh"),
prefer_charging_sooner=False,
sensor, start, end, resolution, storage_specs=storage_specs
)
except UnknownPricesException:
return unknown_prices()
37 changes: 23 additions & 14 deletions flexmeasures/api/v1_3/implementations.py
@@ -99,6 +99,7 @@ def get_device_message_response(generic_asset_name_groups, duration):
if event_type not in ("soc", "soc-with-targets"):
return unrecognized_event_type(event_type)
connection = current_app.queues["scheduling"].connection
job = None
try: # First try the scheduling queue
job = Job.fetch(event, connection=connection)
except NoSuchJobError: # Then try the most recent event_id (stored as a generic asset attribute)
@@ -144,19 +145,25 @@ def get_device_message_response(generic_asset_name_groups, duration):
return unknown_schedule("Scheduling job has an unknown status.")
schedule_start = job.kwargs["start"]

schedule_data_source_name = "Seita"
scheduler_source = DataSource.query.filter_by(
name=schedule_data_source_name, type="scheduling script"
).one_or_none()
if scheduler_source is None:
return unknown_schedule(
message + f'no data is known from "{schedule_data_source_name}".'
)
data_source_info = None
if job:
data_source_info = job.meta.get("data_source_info")
if data_source_info is None:
data_source_info = dict(
name="Seita"
) # TODO: change to raise later - all scheduling jobs now get full info
scheduler_sources = DataSource.query.filter_by(
type="scheduling script",
**data_source_info,
).all() # Might be more than one, e.g. per user
if len(scheduler_sources) == 0:
s_info = ",".join([f"{k}={v}" for k, v in data_source_info.items()])
return unknown_schedule(message + f"no data is known from [{s_info}].")

power_values = sensor.search_beliefs(
event_starts_after=schedule_start,
event_ends_before=schedule_start + planning_horizon,
source=scheduler_source,
source=scheduler_sources[-1],
most_recent_beliefs_only=True,
one_deterministic_belief_per_event=True,
)
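The fallback-and-filter logic above can be isolated as a small pure function. This is a sketch only: `resolve_source_filter` is a hypothetical name, and the real code passes these kwargs to a `DataSource` query:

```python
from typing import Optional


def resolve_source_filter(job_meta: Optional[dict]) -> dict:
    """Build DataSource filter kwargs from a scheduling job's meta info.

    Newer jobs record full data_source_info (e.g. name/model/version);
    legacy jobs fall back to the hardcoded source name "Seita".
    """
    info = (job_meta or {}).get("data_source_info")
    if info is None:
        info = dict(name="Seita")
    return dict(type="scheduling script", **info)
```

The same fallback appears again in the v3.0 `get_schedule` endpoint, so a shared helper like this would keep both endpoints consistent.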
@@ -364,11 +371,13 @@ def post_udi_event_response(unit: str, prior: datetime):
end_of_schedule,
resolution=resolution,
belief_time=prior, # server time if no prior time was sent
soc_at_start=value,
soc_targets=soc_targets,
soc_min=soc_min,
soc_max=soc_max,
roundtrip_efficiency=roundtrip_efficiency,
storage_specs=dict(
soc_at_start=value,
soc_targets=soc_targets,
soc_min=soc_min,
soc_max=soc_max,
roundtrip_efficiency=roundtrip_efficiency,
),
job_id=form.get("event"),
enqueue=True,
)
6 changes: 4 additions & 2 deletions flexmeasures/api/v1_3/tests/test_api_v1_3.py
@@ -88,9 +88,10 @@ def test_post_udi_event_and_get_device_message(
)

# check results are in the database
resolution = timedelta(minutes=15)
job.refresh() # catch meta info that was added on this very instance
data_source_info = job.meta.get("data_source_info")
scheduler_source = DataSource.query.filter_by(
name="Seita", type="scheduling script"
type="scheduling script", **data_source_info
).one_or_none()
assert (
scheduler_source is not None
@@ -100,6 +101,7 @@
.filter(TimedBelief.source_id == scheduler_source.id)
.all()
)
resolution = timedelta(minutes=15)
consumption_schedule = pd.Series(
[-v.event_value for v in power_values],
index=pd.DatetimeIndex([v.event_start for v in power_values], freq=resolution),
84 changes: 61 additions & 23 deletions flexmeasures/api/v3_0/sensors.py
@@ -204,7 +204,7 @@ def get_data(self, response: dict):
validate=validate.Range(min=0, max=1),
data_key="roundtrip-efficiency",
),
"value": fields.Float(data_key="soc-at-start"),
"start_value": fields.Float(data_key="soc-at-start"),
"soc_min": fields.Float(data_key="soc-min"),
"soc_max": fields.Float(data_key="soc-max"),
"start_of_schedule": AwareDateTimeField(
@@ -220,6 +220,9 @@ def get_data(self, response: dict):
),
), # todo: allow unit to be set per field, using QuantityField("%", validate=validate.Range(min=0, max=1))
"targets": fields.List(fields.Nested(TargetSchema), data_key="soc-targets"),
"prefer_charging_sooner": fields.Bool(
data_key="prefer-charging-sooner", required=False
),
# todo: add a duration parameter, instead of falling back to FLEXMEASURES_PLANNING_HORIZON
"consumption_price_sensor": SensorIdField(
data_key="consumption-price-sensor", required=False
@@ -241,6 +244,7 @@ def trigger_schedule(  # noqa: C901
unit: str,
prior: datetime,
roundtrip_efficiency: Optional[ur.Quantity] = None,
prefer_charging_sooner: Optional[bool] = True,
consumption_price_sensor: Optional[Sensor] = None,
production_price_sensor: Optional[Sensor] = None,
inflexible_device_sensors: Optional[List[Sensor]] = None,
@@ -251,11 +255,37 @@

.. :quickref: Schedule; Trigger scheduling job

The message should contain a flexibility model.
Trigger FlexMeasures to create a schedule for this sensor.
The assumption is that this sensor is the power sensor on a flexible asset.

In this request, you can describe:

- the schedule (start, unit, prior)
- the flexibility model for the sensor (see below, only storage models are supported at the moment)
- the EMS the sensor operates in (inflexible device sensors, and sensors that put a price on consumption and/or production)

    Note: This endpoint does not support scheduling an EMS with multiple flexible sensors at once. This will happen in another endpoint.
    See https://github.com/FlexMeasures/flexmeasures/issues/485. Until then, it is possible to call this endpoint for one flexible sensor at a time
    (considering already scheduled sensors as inflexible).

Flexibility models apply to the sensor's asset type:

1) For storage sensors (e.g. battery, charge points), the schedule deals with the state of charge (SOC).
The possible flexibility parameters are:

- soc-at-start (defaults to 0)
- soc-unit (kWh or MWh)
- soc-min (defaults to 0)
- soc-max (defaults to max soc target)
- soc-targets (defaults to NaN values)
- roundtrip-efficiency (defaults to 100%)
- prefer-charging-sooner (defaults to True, also signals a preference to discharge later)

2) Heat pump sensors are work in progress.

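    The documented defaults can be summarized in code. This is a sketch only: `apply_storage_defaults` is a hypothetical helper, the real defaults are applied inside FlexMeasures' storage specs handling, and soc-max/soc-targets are omitted here because their defaults depend on each other:

    ```python
    def apply_storage_defaults(flex_model: dict) -> dict:
        """Fill in the documented defaults for a storage flexibility model."""
        defaults = dict(
            soc_at_start=0,
            soc_min=0,
            roundtrip_efficiency=1.0,  # i.e. 100%
            prefer_charging_sooner=True,  # also: prefer discharging later
        )
        return {**defaults, **flex_model}


    specs = apply_storage_defaults({"soc_at_start": 12.1})
    ```

    Fields sent by the client always win over the defaults, so a request only needs to mention the parameters that differ.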
**Example request A**

This message triggers a schedule starting at 10.00am, at which the state of charge (soc) is 12.1 kWh.
This message triggers a schedule for a storage asset, starting at 10.00am, at which the state of charge (soc) is 12.1 kWh.

.. code-block:: json

@@ -324,19 +354,19 @@ def trigger_schedule(  # noqa: C901
# todo: if a soc-sensor entity address is passed, persist those values to the corresponding sensor
# (also update the note in posting_data.rst about flexibility states not being persisted).

# get value
if "value" not in kwargs:
# get starting value
if "start_value" not in kwargs:
return ptus_incomplete()
try:
value = float(kwargs.get("value")) # type: ignore
start_value = float(kwargs.get("start_value")) # type: ignore
except ValueError:
extra_info = "Request includes empty or ill-formatted value(s)."
current_app.logger.warning(extra_info)
return ptus_incomplete(extra_info)
if unit == "kWh":
value = value / 1000.0
start_value = start_value / 1000.0

# Convert round-trip efficiency to dimensionless
# Convert round-trip efficiency to dimensionless (to the (0,1] range)
if roundtrip_efficiency is not None:
roundtrip_efficiency = roundtrip_efficiency.to(
ur.Quantity("dimensionless")
@@ -345,6 +375,7 @@ def trigger_schedule(  # noqa: C901
# get optional min and max SOC
soc_min = kwargs.get("soc_min", None)
soc_max = kwargs.get("soc_max", None)
# TODO: review when we moved away from capacity having to be described in MWh
if soc_min is not None and unit == "kWh":
soc_min = soc_min / 1000.0
if soc_max is not None and unit == "kWh":
@@ -361,7 +392,7 @@ def trigger_schedule(  # noqa: C901
start_of_schedule, end_of_schedule, freq=resolution, closed="right"
), # note that target values are indexed by their due date (i.e. closed="right")
)
# todo: move deserialization of targets into TargetSchema
# todo: move this deserialization of targets into newly-created ScheduleTargetSchema
for target in kwargs.get("targets", []):

# get target value
@@ -411,11 +442,14 @@ def trigger_schedule(  # noqa: C901
end_of_schedule,
resolution=resolution,
belief_time=prior, # server time if no prior time was sent
soc_at_start=value,
soc_targets=soc_targets,
soc_min=soc_min,
soc_max=soc_max,
roundtrip_efficiency=roundtrip_efficiency,
storage_specs=dict(
soc_at_start=start_value,
soc_targets=soc_targets,
soc_min=soc_min,
soc_max=soc_max,
roundtrip_efficiency=roundtrip_efficiency,
prefer_charging_sooner=prefer_charging_sooner,
),
consumption_price_sensor=consumption_price_sensor,
production_price_sensor=production_price_sensor,
inflexible_device_sensors=inflexible_device_sensors,
@@ -518,21 +552,25 @@ def get_schedule(self, sensor: Sensor, job_id: str, duration: timedelta, **kwarg
return unknown_schedule("Scheduling job has an unknown status.")
schedule_start = job.kwargs["start"]

schedule_data_source_name = "Seita"
if "data_source_name" in job.meta:
schedule_data_source_name = job.meta["data_source_name"]
scheduler_source = DataSource.query.filter_by(
name=schedule_data_source_name, type="scheduling script"
).one_or_none()
if scheduler_source is None:
data_source_info = job.meta.get("data_source_info")
if data_source_info is None:
data_source_info = dict(
name="Seita"
) # TODO: change to raise later - all scheduling jobs now get full info
scheduler_sources = DataSource.query.filter_by(
type="scheduling script",
**data_source_info,
).all() # there can be more than one, e.g. different users
if len(scheduler_sources) == 0:
s_info = ",".join([f"{k}={v}" for k, v in data_source_info.items()])
return unknown_schedule(
error_message + f'no data is known from "{schedule_data_source_name}".'
error_message + f"no data is known from [{s_info}]."
)

power_values = sensor.search_beliefs(
event_starts_after=schedule_start,
event_ends_before=schedule_start + planning_horizon,
source=scheduler_source,
source=scheduler_sources[-1],
most_recent_beliefs_only=True,
one_deterministic_belief_per_event=True,
)
6 changes: 4 additions & 2 deletions flexmeasures/api/v3_0/tests/test_sensor_schedules.py
@@ -66,9 +66,10 @@ def test_trigger_and_get_schedule(
)

# check results are in the database
resolution = timedelta(minutes=15)
job.refresh() # catch meta info that was added on this very instance
data_source_info = job.meta.get("data_source_info")
scheduler_source = DataSource.query.filter_by(
name="Seita", type="scheduling script"
type="scheduling script", **data_source_info
).one_or_none()
assert (
scheduler_source is not None
@@ -78,6 +79,7 @@
.filter(TimedBelief.source_id == scheduler_source.id)
.all()
)
resolution = timedelta(minutes=15)
consumption_schedule = pd.Series(
[-v.event_value for v in power_values],
index=pd.DatetimeIndex([v.event_start for v in power_values], freq=resolution),
12 changes: 7 additions & 5 deletions flexmeasures/cli/data_add.py
@@ -958,11 +958,13 @@ def create_schedule(
end_of_schedule=end,
belief_time=server_now(),
resolution=power_sensor.event_resolution,
soc_at_start=soc_at_start,
soc_targets=soc_targets,
soc_min=soc_min,
soc_max=soc_max,
roundtrip_efficiency=roundtrip_efficiency,
storage_specs=dict(
soc_at_start=soc_at_start,
soc_targets=soc_targets,
soc_min=soc_min,
soc_max=soc_max,
roundtrip_efficiency=roundtrip_efficiency,
),
consumption_price_sensor=consumption_price_sensor,
production_price_sensor=production_price_sensor,
)
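Since the API endpoints, the CLI and the tests now all pass a single `storage_specs` dict, normalization can live in one place. Below is a simplified sketch of what a function like `ensure_storage_specs` can do — not the actual implementation; the index spelling assumes pandas ≥1.4, where `inclusive` replaces the deprecated `closed`:

```python
from datetime import datetime, timedelta
from typing import Optional

import pandas as pd


def ensure_storage_specs_sketch(
    specs: Optional[dict],
    start: datetime,
    end: datetime,
    resolution: timedelta,
) -> dict:
    """Return a complete storage_specs dict, filling in defaults.

    SoC targets are indexed by their due date, hence inclusive="right".
    """
    specs = dict(specs or {})
    specs.setdefault("soc_at_start", 0)
    specs.setdefault("prefer_charging_sooner", True)
    specs.setdefault(
        "soc_targets",
        pd.Series(
            float("nan"),
            index=pd.date_range(start, end, freq=resolution, inclusive="right"),
        ),
    )
    return specs
```

Each caller then only supplies what it knows (e.g. the CLI's `soc_at_start` and targets), and every scheduler receives the same complete dict.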