Fallback schedules should get a distinct source #841
Comments
Good idea! Just wanted to add that another benefit of option (1) is that we'll be able to track whether a certain schedule failed via the RQ job dashboard. This option also allows for recursive fallback mechanisms (in case the fallback itself fails). Option (2) is simpler, which is nicer IMO.

As an aside, I think we should link the fallback mechanism to the StorageScheduler, for example, as a class attribute. That way, we can have specific fallbacks for the different schedulers. I was thinking of something like:

```python
class StorageScheduler:
    fallback_scheduler: Scheduler
    # [...]

    def __init__(self, *args, **kwargs):
        self.fallback_scheduler = StorageSchedulerFallback(*args, **kwargs)
        # [...]

    def compute_fallback(self, *args, **kwargs):
        return self.fallback_scheduler.compute(*args, **kwargs)
```
Good idea. Also, I realised that fetching schedules is done on the basis of the data source ID of the Scheduler, so the issue of pointing users who are getting a schedule to the right place is an issue for both options. Luckily, I found that we already store the data source ID on the job after we call the compute function (in `services.scheduling.make_schedule`).
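A minimal sketch of what that bookkeeping could look like, assuming the ID is kept in the RQ job's `meta` dict (the field name `data_source_id` and the helper names are hypothetical, for illustration only):

```python
from rq.job import Job

def record_data_source(job: Job, data_source_id: int) -> None:
    """Store the producing data source's ID on the job (field name assumed)."""
    job.meta["data_source_id"] = data_source_id
    job.save_meta()  # persist the meta dict to Redis

def get_data_source_id(job: Job) -> int | None:
    """Read back which source a GET request should fetch the schedule from."""
    return job.meta.get("data_source_id")
```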
Also, we should add the source ID (maybe the name too) to the response. Perhaps it's also a good idea to add an open field to share unstructured info regarding the state of the schedule.

An alternative that keeps the Job ID but relies on RQ is to set `retry = 1` and, if the job fails, check the number of retries and call the fallback scheduler. We can even set `retry > 1` to allow for multiple retries. The benefit is that the setup part of the main scheduler can stay the same, but instead of calling […]
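For reference, a minimal sketch of the RQ retry mechanism being described here (the queue name and the import path of the enqueued function are assumptions):

```python
from redis import Redis
from rq import Queue, Retry

queue = Queue("scheduling", connection=Redis())  # queue name assumed

# Enqueue the scheduling work with a single automatic retry on failure;
# Retry(max=2) or higher would allow multiple retries.
job = queue.enqueue(
    "flexmeasures.data.services.scheduling.make_schedule",  # import path assumed
    retry=Retry(max=1),
)

# A failure handler could then inspect job.retries_left to decide
# when to trigger the fallback scheduler.
```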
My one fear is that retrying the job will make us lose the error trace. But even if it's not lost, I think creating a new job would be cleaner, so a failed compute call doesn't end up showing under a successful job.
Good point! Let's create a new job in case of failure and link them via meta attributes. Regarding the fetching of the schedules (GET […]), I favor the second more, so as not to mix the two jobs' info. For reference, status code 303 states:

> The 303 (See Other) status code indicates that the server is redirecting the user agent to a different resource, as indicated by a URI in the Location header field, which is intended to provide an indirect response to the original request.

Source: https://datatracker.ietf.org/doc/html/rfc7231#section-6.4.4
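A minimal sketch of such a redirect in Flask, assuming the fallback job's ID is linked via a `fallback_job_id` meta attribute and that the schedule endpoint path follows the pattern below (both are assumptions):

```python
from flask import Response, redirect

def redirect_to_fallback(job, sensor_id: int) -> Response | None:
    """If the job failed and spawned a fallback job, point the client there."""
    fallback_job_id = job.meta.get("fallback_job_id")  # hypothetical meta attribute
    if fallback_job_id is None:
        return None
    return redirect(
        f"/api/v3_0/sensors/{sensor_id}/schedules/{fallback_job_id}",  # path assumed
        code=303,  # See Other: the response lives at a different resource
    )
```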
Closed as completed by PR #846. This is covered by the following test: `flexmeasures/flexmeasures/api/v3_0/tests/test_sensor_schedules.py`, lines 460 to 473 (at commit 78e2023).
In case of an infeasible problem, the `StorageScheduler` falls back to using the `fallback_charging_policy`, while the schedule data is still being attributed to the `StorageScheduler`. It would help debugging to be able to visually distinguish schedule data originating from one or the other, by having fallback schedules saved with their own distinct source.

I propose we refactor `fallback_charging_policy` into its own `FallbackStorageScheduler`, and think about triggering. For example: let the `StorageScheduler.compute` method fail, and trigger the fallback within the `services.scheduling.make_schedule` method.
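A minimal sketch of that proposed triggering, assuming `make_schedule` wraps the compute call and that infeasibility surfaces as an exception (the exception name and signatures here are assumptions, not the actual FlexMeasures API):

```python
def make_schedule(sensor, **scheduler_kwargs):
    """Compute a schedule, falling back to a distinct-source scheduler on failure."""
    scheduler = StorageScheduler(sensor, **scheduler_kwargs)
    try:
        # Results here are attributed to the StorageScheduler's data source.
        return scheduler.compute()
    except InfeasibleProblemError:  # hypothetical exception for infeasible problems
        # The fallback saves its schedule under its own, distinct data source,
        # so fallback schedules are easy to tell apart when debugging.
        fallback = FallbackStorageScheduler(sensor, **scheduler_kwargs)
        return fallback.compute()
```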