Optimisation of Wake Steering Strategies Using Q-Learning

🚀 Blog post on personal website: 🔗 Reinforcement Learning for Offshore Wind Farm Optimisation

[Animation: illustration of the optimisation process in the quasi-dynamic environment]

Project Description 📖:

This repository holds a coded implementation of a conference paper published by NREL. As no code was publicly available for the paper, this work replicates some of its key components. The use case demonstrates how even rudimentary Reinforcement Learning (RL) techniques can be applied to the wake-steering control problem, and can even lead to an improvement in performance when compared to traditional optimisation techniques.

The code uses NREL's FLORIS - a control-oriented model traditionally used to investigate steady-state wake interactions in wind farm layouts - as the foundation for the RL-based optimisation.
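To give a sense of the interface, below is a minimal sketch of evaluating farm power under a yaw set-point with FLORIS, assuming the v3 `FlorisInterface` API; the config path, layout, and yaw angles are illustrative and not taken from this repository.

```python
# Minimal FLORIS power-evaluation sketch (assumes FLORIS v3).
# The config path, layout and yaw angles below are illustrative.
import numpy as np
from floris.tools import FlorisInterface

fi = FlorisInterface("inputs/gch.yaml")  # hypothetical example config

# Two-turbine row aligned with a 270-degree wind direction
fi.reinitialize(
    layout_x=[0.0, 630.0],
    layout_y=[0.0, 0.0],
    wind_directions=[270.0],
    wind_speeds=[8.0],
)

# yaw_angles has shape (n_wind_directions, n_wind_speeds, n_turbines);
# yawing the upstream turbine deflects its wake off the downstream one
yaw_angles = np.array([[[20.0, 0.0]]])
fi.calculate_wake(yaw_angles=yaw_angles)

print(fi.get_farm_power())  # total farm power in watts
```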

Performance Overview 🏎️:

Two distinct environments are implemented for the problem. In the first, the q-learning optimisation is carried out in a 'static' environment with no time dependency associated with wake propagation - the conventional strategy adopted by FLORIS. The second environment introduces a temporal component to the optimisation, creating a novel exploration of wake propagation in a RANS-based solver, in effect producing a quasi-dynamic control environment and offering more interesting insight into the formulation of the reward strategy for the problem.

[Animation: reward profiles observed during training]

The above illustrates the reward profiles observed during training in the two environments described above, with further insight into the operation of the quasi-dynamic environment available through the accompanying animation shown in the repository and in the blog post. Through discretising the state space, q-learning has been shown to yield effective results, surpassing the improvements proposed by traditional optimisation techniques and packages.
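For reference, a minimal sketch of the tabular q-learning update over a discretised state-action space is shown below; the binning and hyperparameters are illustrative assumptions, not the repository's exact values.

```python
# Tabular q-learning sketch for a discretised wake-steering problem.
# State/action bins and hyperparameters are illustrative assumptions.
import numpy as np

n_states = 36    # e.g. wind direction discretised into 10-degree bins
n_actions = 7    # e.g. yaw set-points from -15 to +15 degrees in 5-degree steps
alpha, gamma, epsilon = 0.1, 0.95, 0.1

Q = np.zeros((n_states, n_actions))

def select_action(state: int) -> int:
    # epsilon-greedy exploration over the discrete yaw actions
    if np.random.rand() < epsilon:
        return np.random.randint(n_actions)
    return int(np.argmax(Q[state]))

def update(state: int, action: int, reward: float, next_state: int) -> None:
    # standard q-learning bootstrap towards the greedy next-state value;
    # here the reward would be the (normalised) farm power from FLORIS
    td_target = reward + gamma * np.max(Q[next_state])
    Q[state, action] += alpha * (td_target - Q[state, action])
```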

Notes on Code 📓:

Install the Python dependencies for the repository:

$ pip install -r requirements.txt

🏋️ Training was conducted locally on a 2018 MacBook Pro with 8GB of RAM.

Further Work 🔭:

  • Investigate other RL techniques which may eliminate the necessity to discretise the state space and yield better optimisation strategies.
  • Roll out the optimisation to a larger wind farm to validate its practical use.
  • Integrate wind farm metadata (turbine choice, size, spacing etc.) into the model to enable a more generalisable optimisation strategy.

To Do 🧪:

  • Code links and references to be validated since re-organisation.
  • Further validate environments and optimisation scripts.

Resources 💎:
