ZENODO
Thesis · 2023
License: CC BY
Data sources: ZENODO; Datacite

Robust model-based deep reinforcement learning for flow control

Authors: Geise, Janis

Abstract

Active flow control, combined with deep reinforcement learning (DRL), has the potential to achieve remarkable drag reductions in fluid-mechanics applications. The high computational demands of CFD simulations currently limit the applicability of DRL to rather simple cases, such as the flow past a cylinder, because of the large number of simulations that have to be carried out throughout the training. One possible approach to reducing the computational requirements is to partially substitute the simulations with models, e.g. deep neural networks; however, model uncertainties and error propagation may lead to unstable training and deteriorated performance compared to the model-free counterpart. The present thesis modifies the model-free training routine for controlling the flow past a cylinder into a model-based one: the policy training alternates between the CFD environment and environment models, which are trained successively over the course of the policy optimization. In order to reduce uncertainties and consequently improve the prediction accuracy, the CFD environment is represented by two model ensembles, responsible for predicting the states and lift force, and the aerodynamic drag, respectively. This approach was shown to yield a performance comparable to the model-free training routine at a Reynolds number of Re = 100 while reducing the overall runtime by up to 68.91%. The performance and stability of the model-based training, however, depend strongly on the initialization, which needs to be investigated further.
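The alternating scheme described in the abstract can be sketched in miniature. The code below is a hypothetical toy, not the thesis' implementation: `cfd_step` stands in for one expensive CFD step with made-up linear dynamics, and a small ensemble of linear surrogate models, fitted by least squares on bootstrap samples, plays the role of the neural-network environment-model ensembles. The point is only to illustrate the pattern of collecting transitions from the true environment, fitting an ensemble to reduce single-model uncertainty, and then replacing further environment calls with cheap model rollouts.

```python
import random

def cfd_step(state, action):
    # Stand-in for one expensive CFD step (hypothetical toy dynamics,
    # not the cylinder-flow simulation used in the thesis).
    return 0.9 * state + 0.1 * action

def fit_linear(transitions):
    # Least-squares fit of next = a*state + b*action via 2x2 normal equations.
    sss = sum(s * s for s, u, y in transitions)
    ssu = sum(s * u for s, u, y in transitions)
    suu = sum(u * u for s, u, y in transitions)
    ssy = sum(s * y for s, u, y in transitions)
    suy = sum(u * y for s, u, y in transitions)
    det = sss * suu - ssu * ssu
    a = (ssy * suu - ssu * suy) / det
    b = (sss * suy - ssu * ssy) / det
    return a, b

class Ensemble:
    """Ensemble of surrogate environment models (here: linear, for brevity)."""
    def __init__(self, n_members=3, seed=0):
        self.n = n_members
        self.rng = random.Random(seed)
        self.members = []

    def fit(self, transitions):
        # Each member is fitted on a bootstrap sample, so the members differ.
        self.members = [
            fit_linear([self.rng.choice(transitions) for _ in transitions])
            for _ in range(self.n)
        ]

    def predict(self, state, action):
        # Averaging the members reduces the uncertainty of any single model.
        return sum(a * state + b * action for a, b in self.members) / self.n

# --- alternating training loop (sketch) ---
rng = random.Random(1)
state, transitions = 1.0, []
for _ in range(50):                       # a few "expensive" CFD steps
    action = rng.uniform(-1.0, 1.0)
    nxt = cfd_step(state, action)
    transitions.append((state, action, nxt))
    state = nxt

model = Ensemble(n_members=3)
model.fit(transitions)

s = 1.0
for _ in range(10):                       # cheap model-based rollout
    s = model.predict(s, action=0.5)
```

In the thesis the policy is optimized on both the CFD environment and these surrogate rollouts in alternation; here the "policy" is just a fixed action, since the sketch only illustrates the environment-model side.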

Keywords

model-based Deep Reinforcement Learning (MB-DRL), Computational Fluid Dynamics (CFD), Deep Reinforcement Learning (DRL), active flow control

  • BIP! impact indicators: selected citations 0; popularity Average; influence Average; impulse Average
  • OpenAIRE UsageCounts: 61 views; 72 downloads