Deep RL Approach for Integrated Updraft Mapping and Exploitation

This repository provides a reinforcement learning framework for end-to-end integrated thermal updraft localization and exploitation.

Overview

Autonomous soaring constitutes an appealing academic sample problem for investigating machine learning methods within the scope of aerospace guidance, navigation, and control. The stochastic nature of small-scale meteorological phenomena makes the task of localizing and exploiting thermal updrafts well suited to a reinforcement learning approach.

The framework presented here solves the integrated thermal updraft localization and exploitation problem and was developed by researchers at the Institute of Flight Mechanics and Controls (iFR) at the University of Stuttgart.

Figure: Autonomous soaring flight test result

More detailed information about the learning approach, the implementation, and the flight test results can be found in the associated papers listed below.

Getting started

This repository contains the full source code used to train the agent. The glider training environment is an extension of the OpenAI Gym library. It implements a novel three-degrees-of-freedom (3 DoF) model of the aircraft dynamics in the presence of an arbitrary, dynamic wind field.

Prerequisites

To run the training environment, set up a Python 3.8 virtual environment with the following packages: gym (0.17.1), pytorch (1.4), numpy (1.12.3), scipy (1.6.2), pandas (1.1.3), and matplotlib (3.4.3).
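
A quick way to check the setup is to print the installed versions from within the virtual environment and compare them against the pins listed above. This snippet is only a convenience check, not part of the framework:

import gym
import torch
import numpy
import scipy
import pandas
import matplotlib

# Print the installed version of each prerequisite for comparison
# with the versions listed in this README.
for module in (gym, torch, numpy, scipy, pandas, matplotlib):
    print(module.__name__, module.__version__)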

To register the glider module in your virtual environment, run the following command inside this project folder:

pip install -e glider
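
Once the glider package is installed, the environment can be exercised like any other Gym environment. The sketch below is only an illustration: the environment id "glider-v0" is an assumption, so replace it with the id that the glider package actually registers before running it.

import gym
import glider  # importing the package registers the custom glider environment(s) with Gym

# NOTE: "glider-v0" is an assumed id used for illustration only; replace it
# with the id that the glider package actually registers.
env = gym.make("glider-v0")

observation = env.reset()
done = False
while not done:
    # Sample a random action just to step through the 3 DoF glider dynamics.
    action = env.action_space.sample()
    observation, reward, done, info = env.step(action)
env.close()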

Credits

If you would like to use our work or build upon the algorithms in an academic context, please cite:

Notter, S., Gall, C., Müller, G., Ahmad, A., & Fichter, W., "Deep Reinforcement Learning Approach for Integrated Updraft Mapping and Exploitation," AIAA Journal of Guidance, Control, and Dynamics, 2023, doi: 10.2514/1.G007572.

Notter, S., Müller, G., & Fichter, W., "Integrated Updraft Localization and Exploitation: End-to-End Type Reinforcement Learning Approach," CEAS EuroGNC, 2022.