TY  - THES
AU  - Dresia, Kai
TI  - Rocket engine control with deep reinforcement learning
VL  - 20
PB  - Rheinisch-Westfälische Technische Hochschule Aachen
M3  - Dissertation
CY  - Köln
M1  - RWTH-2025-05076
T2  - Forschungsbericht / Deutsches Zentrum für Luft- und Raumfahrt DLR
SP  - 1 online resource : illustrations
PY  - 2025
N1  - Published on the publication server of RWTH Aachen University
N1  - Dissertation, Rheinisch-Westfälische Technische Hochschule Aachen, 2025
AB  - The space industry is transitioning to reusable and cost-efficient launch vehicles. This transformation creates new operational challenges for liquid propellant rocket engines. Reusable launch vehicles require precise and fast thrust control over a wide throttling range, while component aging with each flight can alter engine system dynamics through wear and tear. In addition, the increasing use of low-cost additive manufacturing introduces variability in hardware geometry, creating further uncertainties. The transition to more efficient but more complex engine cycles, such as staged combustion, and the use of clustered engine configurations complicate operations further. Closed-loop control systems offer a promising solution by providing accurate thrust control even as system dynamics change due to engine reuse and manufacturing variability. However, designing classical controllers for liquid propellant rocket engines is challenging because it requires the simultaneous control of multiple coupled variables, while the computational power of space-qualified processors is limited. As a result, there is a growing need for advanced control algorithms suitable for reusable engines. One promising approach is to use neural network-based controllers trained with deep reinforcement learning on a simulation model of the engine: they require minimal computational power during deployment and can learn optimal control policies for complex nonlinear systems. This thesis therefore investigates the suitability of deep reinforcement learning for liquid rocket engine control and compares the results with state-of-the-art model predictive control. Two test cases based on the LOX/LNG expander-bleed LUMEN engine are studied, each presenting different control challenges, including thrust control over a wide throttling range, constraint handling, and maximizing engine efficiency.
A controller was also tested experimentally at the P8.3 test facility in Lampoldshausen, Germany. The controller achieved promising accuracy in controlling multiple variables simultaneously, with an average error of only 1.3 %.
LB  - PUB:(DE-HGF)11 ; PUB:(DE-HGF)3
DO  - 10.57676/bwbm-p528
UR  - https://publications.rwth-aachen.de/record/1012640
ER  -