Carbon Intensity Forecast Tracking

Tracking differences between the UK National Grid's Carbon Intensity forecast and its eventual recorded value.

What is this?

See the accompanying blog post.

The carbon intensity of electricity is a measure of the $CO_2$ emissions produced per kilowatt-hour (kWh) of electricity consumed. Units are usually grams of $CO_2$ per kWh.

The UK's National Grid Electricity System Operator (NGESO) publishes an API providing the half-hourly carbon intensity (CI) of electricity, together with a 48-hour forecast. The national data is based upon recorded and estimated generation statistics, and values representing the relative CI of different energy sources. The regional data is based upon forecast generation, consumption, and a model describing inter-region interaction.

The forecasts are updated every half hour, but the API does not keep historical forecasts; they're unavailable or overwritten. How reliable are they?

[Figure: Published CI values]

The figure above shows the evolution of the national forecasts for 24 hours' worth of time windows. More recent time windows are darker blue. Each window is forecast about 96 times in the preceding 48 hours, via the fw48h endpoint (left of the dashed line). A further 48 post-hoc "forecasts" and "actual" values, from the pt24h endpoint, are shown to the right of the dashed line.

Basic idea

  • Git scrape the National Grid Carbon Intensity API using GitHub Actions, as inspired by food-scraper.
  • Scraping occurs twice per hour on a cron schedule (docs).
  • Download JSON data from the various endpoints and save it to data/ (a minimal sketch follows this list).
  • Once per day, the data is converted to CSV and parsed into a Pandas dataframe for summarising, plotting and analysis. The plots on this page are updated daily.
  • With summary statistics and plots, we can attempt to estimate the accuracy and reliability of the forecasts, and predict the likelihood of errors.
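
Here's a minimal sketch of the download step in Python, assuming the public API's fw48h route and this repo's data/ filename convention (described under Usage); the real implementation lives in run.py:

```python
# Fetch the national 48-hour forward forecast and save the raw JSON,
# named for the request time (rounded down to the half hour, +1 minute;
# see "Dates and times" below).
import json
from datetime import datetime, timezone
from pathlib import Path

import requests

now = datetime.now(timezone.utc)
stamp = now.replace(minute=(now.minute // 30) * 30 + 1, second=0, microsecond=0)
from_time = stamp.strftime("%Y-%m-%dT%H:%MZ")

url = f"https://api.carbonintensity.org.uk/intensity/{from_time}/fw48h"
payload = requests.get(url, timeout=30).json()

out = Path("data/national_fw48h") / f"{from_time}.json"
out.parent.mkdir(parents=True, exist_ok=True)
out.write_text(json.dumps(payload, indent=2))
```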

Notebooks

Forecast Accuracy - National

For the complete history since the start of this project, see ./charts/stats_history_national.csv.

24 hours

This boxplot shows the range of all published forecast values for each 30-minute time window, defined by its "from" datetime in the API.

[Figure: Published CI values, 24 hours]

The plot below shows forecast percentage error relative to the "actual" values, i.e. $100 \times (\text{forecast} - \text{actual}) / \text{actual}$, for the same time windows.

[Figure: CI error, 24 hours]

7-day summary

These are daily summaries of forecast error from all 48 half-hour windows on each day.
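
Such a summary can be computed with Pandas along these lines; a minimal sketch, assuming a long-format DataFrame with one row per published forecast and illustrative column names "date" and "abs_error":

```python
import pandas as pd

def daily_summary(df: pd.DataFrame) -> pd.DataFrame:
    """Per-day count, mean, sem, and a normal-approximation 95% CI."""
    out = df.groupby("date")["abs_error"].agg(["count", "mean", "sem"])
    out["ci95_low"] = out["mean"] - 1.96 * out["sem"]
    out["ci95_high"] = out["mean"] + 1.96 * out["sem"]
    return out.round(2)
```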

Absolute error ($gCO_2/kWh$)

| date | count | mean | sem | 95% confidence interval |
| --- | --- | --- | --- | --- |
| 2023-10-10 | 450 | 17.45 | 0.54 | (16.39, 18.5) |
| 2023-10-11 | 4320 | 31.78 | 0.35 | (31.1, 32.47) |
| 2023-10-12 | 4320 | 42.74 | 0.33 | (42.08, 43.39) |
| 2023-10-13 | 4320 | 22.34 | 0.19 | (21.97, 22.71) |
| 2023-10-14 | 4320 | 10.17 | 0.12 | (9.93, 10.4) |
| 2023-10-15 | 4274 | 26.66 | 0.28 | (26.1, 27.22) |
| 2023-10-16 | 4272 | 11.64 | 0.16 | (11.34, 11.95) |
| 2023-10-17 | 3861 | 16.47 | 0.18 | (16.11, 16.83) |

Absolute percentage error

| date | mean | sem | 95% confidence interval |
| --- | --- | --- | --- |
| 2023-10-10 | 30.82 | 0.96 | (28.94, 32.7) |
| 2023-10-11 | 26.14 | 0.25 | (25.66, 26.62) |
| 2023-10-12 | 18.45 | 0.13 | (18.2, 18.69) |
| 2023-10-13 | 24.66 | 0.19 | (24.29, 25.03) |
| 2023-10-14 | 15.33 | 0.19 | (14.96, 15.71) |
| 2023-10-15 | 30.67 | 0.42 | (29.85, 31.49) |
| 2023-10-16 | 5 | 0.07 | (4.87, 5.13) |
| 2023-10-17 | 13.75 | 0.2 | (13.36, 14.14) |

30 days

[Figure: CI error, 30 days]

All data summary - absolute error

| | count | mean | median | std | sem |
| --- | --- | --- | --- | --- | --- |
| absolute error | 849545 | 24.4642 | 20 | 19.4646 | 0.0211179 |

Forecast reliability

The next plot shows the relative frequency of all errors to date, by their size. By fitting a distribution, we can estimate the probability of future forecast errors of a certain magnitude, and hence decide whether to rely upon a given forecast.

[Figure: CI forecast error distribution]

By comparing with the published numerical bands representing the CI index, we can decide what magnitude of error is acceptable. For example, to cross two bands (from low to high) in 2023 requires an error of at least 81 $gCO_2/kWh$. Given that the Normal distribution fits the data quite well (see footnote 1), the chance of seeing an error this large is (as of this writing) around 0.34%.
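
A minimal sketch of that estimate with SciPy, assuming a 1-D array of signed forecast errors (errors.csv is a hypothetical file; in this project the errors come from the wrangled CSVs):

```python
import numpy as np
from scipy import stats

errors = np.loadtxt("errors.csv")  # hypothetical: one signed error per line

mu, sigma = stats.norm.fit(errors)

threshold = 81  # gCO2/kWh needed to cross two bands (low -> high) in 2023
p = stats.norm.sf(threshold, mu, sigma) + stats.norm.cdf(-threshold, mu, sigma)
print(f"P(|error| >= {threshold}) ~= {p:.2%}")
```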

If we check the CI forecast at a given time (via an app or the API directly) then, making some assumptions about the independence of the forecasts, the chance of seeing an error large enough to cross two bands (in either direction) is currently reasonably small. Crossing from the middle of one band to an adjacent band naturally requires a smaller error, and is correspondingly more likely.

Note that the bands narrow and their upper bounds reduce, year on year, to 2030. In 2023, to cross from mid-low to moderate, or mid-moderate to high, would require an error of only about 40 $gCO_2/kWh$, with a probability of about 14%. In 2025, these figures will be 32.5 $gCO_2/kWh$ and around 23%, respectively. In 2030, they will be around 17.5 $gCO_2/kWh$ and 52% (assuming current error rates). To cross two bands in 2030, e.g. from very low to moderate, will require an error of about 46 $gCO_2/kWh$, with a 9% probability.

These rates suggest we should hope for some improvement in forecast accuracy if the forecast is to remain reliable as the bands narrow.

Error magnitudes and their probabilities

| error value ($gCO_2/kWh$) | Student's t probability | Normal probability | Laplace probability |
| --- | --- | --- | --- |
| 100 | 0.000712214 | 0.000712446 | 0.0328253 |
| 90 | 0.00232759 | 0.00232821 | 0.0463294 |
| 80 | 0.00682311 | 0.00682455 | 0.065389 |
| 70 | 0.0179719 | 0.0179748 | 0.0922896 |
| 60 | 0.0426215 | 0.0426268 | 0.130257 |
| 50 | 0.0912317 | 0.0912399 | 0.183844 |
| 40 | 0.17677 | 0.176781 | 0.259476 |
| 30 | 0.311134 | 0.311146 | 0.366223 |
| 20 | 0.4996 | 0.499611 | 0.516885 |
| 10 | 0.735721 | 0.735727 | 0.729528 |

Prior work

I'm unsure whether this has been done before. NGESO do not seem to release historic forecasts or figures about their accuracy. If you know more, please open an Issue or get in touch!

Kate Rose Morley created the canonical, elegant design for viewing the UK's live carbon intensity.

The API site shows a graph of the forecast and "actual" values. You can create plots of a custom time range using NGESO datasets: go to the Regional/National "Carbon Intensity Forecast" dataset, click "Explore", choose "Chart", deselect "datetime", and add the regions. The "National" dataset includes the "actual" CI. But these are the final/latest values, and as far as I know they're not statistically compared anywhere. This project aims to track the accuracy of the forecasts as they are published.

APIs and Data

  • The JSON format isn't great for parsing and plotting, and the files are huge. So here they're wrangled (wrangle.py) to CSV.

National

  1. For each actual 30-minute period, defined by its "from" datetime, capture the published forecasts for that period (see the sketch after this list).
  2. Forecasts are published up to 48 hours ahead, so we should expect about 96 future forecasts for one real period, and 48 more from the "past" 24 hours.
  3. Also capture "actual" values by choosing the latest available "actual" value (national data only) up to 24 hours after the window has passed.
  • We can do this for each of the published regions and the National data.
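
A minimal sketch of step 1, assuming the file layout described under Usage (data/national_fw48h/<from-datetime>.json) and the API's JSON shape (data[].from, data[].intensity.forecast):

```python
import json
from pathlib import Path

import pandas as pd

def forecasts_for_window(json_dir: Path, window_from: str) -> pd.DataFrame:
    """One row per scrape whose payload includes a forecast for `window_from`."""
    rows = []
    for path in sorted(json_dir.glob("*.json")):  # filenames sort chronologically
        for entry in json.loads(path.read_text()).get("data", []):
            if entry["from"] == window_from:
                rows.append({"scraped_at": path.stem,
                             "forecast": entry["intensity"]["forecast"]})
    return pd.DataFrame(rows)

# e.g. forecasts_for_window(Path("data/national_fw48h"), "2023-10-14T18:00Z")
```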

Regional

To do!

  • For the regional data, absent "actual" values, we should choose the final available forecast 24 hours after the window has passed (usually this does not change).
  • There are 17 DNO regions, including national. The 48-hour forecasts include an 18th region, "GB", which may approximate the "national" forecast but doesn't match it exactly. (It's unclear what this is.)
  • The earliest regional forecast data is from May 2018.

Dates and times

  • All times are UTC. Seconds are ignored.
  • Throughout, I represent the 30-minute time window defined by a "from" and "to" timestamp in the API using just the "from" datetime. Thus a forecasted datetime given here represents a 30-minute window beginning at that time.
  • If we query the 48h forecast API at a given time, e.g. 18:45, the earliest time window (the 0th entry in the data) begins at the current time rounded down to the nearest half hour: "from" timepoint 0 will be 18:30, representing the window covering the time requested. A wrinkle: if you request exactly 18:30, you'll get the window beginning at 18:00, i.e. (2023-03-10T18:00Z, 2023-03-10T18:30Z], so the code here always requests one minute after the rounded-down half hour (see the sketch after this list).
  • Dates from the API don't seem to wrap around year boundaries (31st December to 1st January).
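
A minimal sketch of that request-time convention (the function name is illustrative):

```python
from datetime import datetime, timedelta, timezone

def request_time(now: datetime) -> datetime:
    """Round down to the half hour, then add one minute.

    18:45 -> 18:31 (returns the 18:30 window); 18:30 -> 18:31, avoiding
    being handed the 18:00 window.
    """
    floored = now.replace(minute=(now.minute // 30) * 30, second=0, microsecond=0)
    return floored + timedelta(minutes=1)

print(request_time(datetime(2023, 3, 10, 18, 45, tzinfo=timezone.utc)))
# 2023-03-10 18:31:00+00:00
```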

Limitations

  • Because GitHub's Actions runners are shared (and free), the cron jobs aren't 100% reliable. Expect occasional missing data.
  • Many factors could contribute to the broad standard deviation of the errors, including missing data (scrapes that didn't succeed).
  • Most statistics assume forecasts are independent. I do not have access to the models themselves, but I think this is likely not the case: forecasts are probably weighted by prior forecasts.

Actual intensity and generation mix

The "actual" CI values are of course indicative rather than precise measurements.

From tracking the pt24h data, these "actual" values are sometimes adjusted post-hoc, i.e. several hours after the relevant time window has passed. This is because some renewable generation data becomes available after the fact, and NGESO update their numbers. We could continue monitoring this, but we have to stop sometime. For the purposes of this project, to give an anchor against which to measure forecast accuracy, I choose the "actual" and "final forecast" values as the latest ones accessible up to 24 hours after the start of the time window, from the pt24h endpoint.
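
A minimal sketch of that anchoring rule, under the same assumed file layout and JSON shape as the earlier sketches:

```python
import json
from datetime import datetime, timedelta
from pathlib import Path

FMT = "%Y-%m-%dT%H:%MZ"

def final_actual(json_dir: Path, window_from: str) -> float | None:
    """Latest "actual" for `window_from` among pt24h scrapes up to 24h after it starts."""
    cutoff = datetime.strptime(window_from, FMT) + timedelta(hours=24)
    actual = None
    for path in sorted(json_dir.glob("*.json")):  # filenames sort chronologically
        if datetime.strptime(path.stem, FMT) > cutoff:
            break
        for entry in json.loads(path.read_text()).get("data", []):
            if entry["from"] == window_from and entry["intensity"].get("actual") is not None:
                actual = entry["intensity"]["actual"]
    return actual
```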

To measure regional forecast accuracy it would be preferable to have a retrospective actual CI value for each region, but the API only provides this at the national level.

Usage

Expects Python 3.10+.

Install

  1. Clone this repository: git clone git@github.com:nmpowell/carbon-intensity-forecast-tracking.git
  2. Set up a local virtual environment using Python 3.10+
    cd carbon-intensity-forecast-tracking/
    python3 -m venv venv                 # use this subdirectory name to piggyback on .gitignore
    source venv/bin/activate
  3. You can install the requirements in this virtual environment in a couple of ways:
    python3 -m pip install --upgrade pip
    python3 -m pip install -r requirements.txt
    # or
    make install
    # or for development
    make install-dev
    make install calls pip-sync, which uses the requirements.txt file to install requirements. To regenerate that file, run pip-compile requirements.in

Run

There are examples of downloading and parsing data in the .github/workflows/scrape_data.yaml and .github/workflows/wrangle.yaml files. For more details, see ./notebook.ipynb.

  1. Activate the venv: source venv/bin/activate
  2. Download JSON files. Example:
    # 48-hour forward forecast from the current window, national data
    python3 run.py download --output_dir "data" --now --endpoint national_fw48h
    Output JSON files are named for the {from} time given: data/<endpoint>/<from-datetime>.json.
  3. Parse the data and produce CSV files: python3 run.py wrangle --input_directory "data/national_fw48h"
  4. Summarise the CSVs: python3 run.py summary --input_directory "data/national_fw48h" --output_directory "data" --endpoint "national_fw48h". Old CSVs are moved to an _archive subdirectory.
  5. Generate plots: python3 run.py graph --input_directory "data" --output_directory "charts"

To copy the scraping functionality of this repo, enable write access for GitHub Actions within your repo: Settings > Actions > General > Workflow permissions > Read and write permissions.

Future work

See TODO.md


Footnotes

  1. Student's t distribution closely follows the Normal when the degrees of freedom are large.
