Manuel Schaller | 8/12/24, 2:00 PM | MS 01: Optimal Control and Machine Learning | Minisymposium Contribution
Extended Dynamic Mode Decomposition is a popular data-driven method to approximate the flow of a dynamical control system through the lens of observable functions. In this talk, we discuss how this framework and corresponding finite-data error bounds may be used in data-driven Model Predictive Control to establish (practical) asymptotic stability. The key ingredients are proportional error...
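The EDMD regression behind this approach can be sketched in a few lines: lift snapshot data through a dictionary of observables and solve a least-squares problem for the Koopman matrix. The function names and the linear-observable example below are illustrative assumptions, not taken from the talk.

```python
import numpy as np

def edmd(X, Y, dictionary):
    """Minimal EDMD sketch (illustrative).

    X, Y: (n_samples, n_dim) snapshot pairs with Y[i] = flow(X[i]).
    dictionary: maps (n_samples, n_dim) -> (n_samples, n_obs).
    Returns K minimizing ||Psi(Y) - Psi(X) K||_F, i.e. K = Psi(X)^+ Psi(Y).
    """
    PsiX = dictionary(X)
    PsiY = dictionary(Y)
    K, *_ = np.linalg.lstsq(PsiX, PsiY, rcond=None)
    return K
```

As a sanity check, for a linear system y = A x with the identity dictionary, the recovered matrix is A transposed (rows of the snapshot matrices are samples).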
Mathias Staudigl | 8/12/24, 2:30 PM | MS 01: Optimal Control and Machine Learning | Minisymposium Contribution
We consider a class of convex risk-neutral PDE-constrained optimization problems subject to pointwise control and state constraints. Due to the many challenges associated with almost sure constraints on pointwise evaluations of the state, we suggest a relaxation via a smooth functional bound with similar properties to well-known probability constraints. First, we introduce and analyze the...
Simon Buchwald | 8/12/24, 3:00 PM | MS 01: Optimal Control and Machine Learning | Minisymposium Contribution
The efficiency of training data is a prominent issue in machine learning. While too little data can lead to insufficient learning, too much data can result in overfitting or can be computationally expensive to generate.
In this talk, we investigate a class of greedy-type algorithms that have previously proven to compute efficient control functions for the reconstruction of operators in...
Donato Vásquez Varas (Johann Radon Institute for Computational and Applied Mathematics (RICAM)) | 8/12/24, 4:00 PM | MS 01: Optimal Control and Machine Learning | Minisymposium Contribution
The synthesis of feedback laws for infinite-horizon problems via machine learning methods instead of classical methods has been a topic of interest in recent years, since such methods have the potential to mitigate the curse of dimensionality. This talk studies two such methods.
The first consists of searching for a feedback law in a finite-dimensional function space (for example...
Frederik Köhne (Universität Bayreuth) | 8/12/24, 4:30 PM | MS 01: Optimal Control and Machine Learning | Minisymposium Contribution
The choice of the step size (or learning rate) in stochastic optimization algorithms, such as stochastic gradient descent, plays a central role in the training of machine learning models. Both theoretical investigations and empirical analyses emphasize that an optimal step size not only requires taking into account the nonlinearity of the underlying problem, but also relies on accounting for...
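The role of the step size described above can be illustrated on a one-dimensional quadratic, where plain gradient descent converges for small steps and diverges for large ones. This toy sketch is an assumption for illustration, not the adaptive scheme of the talk.

```python
def sgd_quadratic(eta, steps=100, w0=0.0, target=3.0):
    """Gradient descent on f(w) = 0.5 * (w - target)**2 with step size eta.

    The error contracts by a factor (1 - eta) per step, so the iteration
    converges for 0 < eta < 2 and diverges otherwise.
    """
    w = w0
    for _ in range(steps):
        grad = w - target  # exact gradient of f at w
        w -= eta * grad
    return w
```

With eta = 0.1 the iterate approaches 3; with eta = 2.5 the error grows geometrically, which is the kind of sensitivity that motivates principled step-size selection.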
Mario Sperl (University of Bayreuth) | 8/13/24, 9:00 AM | MS 01: Optimal Control and Machine Learning | Minisymposium Contribution
In this presentation, we consider interconnected optimal control problems, wherein the interconnection is represented as a graph. We establish a decaying sensitivity condition, where the influence between graph nodes diminishes with their distance, and leverage this assumption to construct a separable approximation of the optimal value function. This approach allows us to identify scenarios in...
Isabel Jacob (TU Darmstadt) | 8/13/24, 9:30 AM | MS 01: Optimal Control and Machine Learning | Contributed Talk
As the use cases for neural networks become increasingly complex, modern neural networks must also become deeper and more intricate to keep up, indicating the need for more efficient learning algorithms. Multilevel methods, traditionally used to solve differential equations using a hierarchy of discretizations, offer the potential to reduce computational effort.
In this talk, we combine...
Jens Püttschneider (TU Dortmund) | 8/13/24, 10:00 AM | MS 01: Optimal Control and Machine Learning | Minisymposium Contribution
System-theoretic dissipativity notions introduced by Jan C. Willems play a fundamental role in the analysis of optimal control problems. They enable the understanding of infinite-horizon asymptotics and turnpike properties. This talk introduces a dissipative formulation for training deep Residual Neural Networks (ResNets) in classification problems. To this end, we formulate the training of...
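For reference, Willems' dissipativity mentioned above can be stated generically: a system with input u and output y is dissipative with respect to a supply rate s if there exists a storage function S >= 0 such that (the symbols S and s are the customary ones, not notation from the talk)

```latex
S\big(x(t_1)\big) \;\le\; S\big(x(t_0)\big)
  + \int_{t_0}^{t_1} s\big(u(t), y(t)\big)\, \mathrm{d}t
  \qquad \text{for all } t_0 \le t_1 .
```

The stored "energy" can grow at most by what is supplied, which is the structure exploited in turnpike and infinite-horizon analyses.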
Hans Harder (University of Paderborn) | 8/13/24, 11:00 AM | MS 01: Optimal Control and Machine Learning | Minisymposium Contribution
The value function plays a crucial role as a measure for the cumulative future reward an agent receives in both reinforcement learning and optimal control. It is therefore of interest to study how similar the values of neighboring states are, i.e. to investigate the continuity of the value function. We do so by providing and verifying upper bounds on the value function's modulus of continuity....
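The modulus of continuity bounded in this work is the standard one; assuming a metric d on the state space (generic notation, not taken from the abstract), it reads

```latex
\omega_V(\delta) \;=\; \sup \big\{\, |V(x) - V(y)| \;:\; d(x,y) \le \delta \,\big\},
```

so that V is uniformly continuous precisely when \(\omega_V(\delta) \to 0\) as \(\delta \to 0\), making \(\omega_V\) a quantitative measure of how similar the values of neighboring states are.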
Yongcun Song | 8/13/24, 11:30 AM | MS 01: Optimal Control and Machine Learning | Minisymposium Contribution
We study the application of well-known physics-informed neural networks (PINNs) for solving nonsmooth PDE-constrained optimization problems. First, we consider a class of PDE-constrained optimization problems where additional nonsmooth regularization is employed for constraints on the control or design variables. For solving such problems, we combine the alternating direction method of...
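For context, the alternating direction method of multipliers (ADMM) referenced above has the standard scaled-form iterations for minimizing f(x) + g(z) subject to Ax + Bz = c (generic notation, not the specific splitting used in the talk):

```latex
x^{k+1} &= \operatorname*{arg\,min}_{x} \; f(x) + \tfrac{\rho}{2}\,\lVert Ax + Bz^{k} - c + u^{k}\rVert^{2}, \\
z^{k+1} &= \operatorname*{arg\,min}_{z} \; g(z) + \tfrac{\rho}{2}\,\lVert Ax^{k+1} + Bz - c + u^{k}\rVert^{2}, \\
u^{k+1} &= u^{k} + Ax^{k+1} + Bz^{k+1} - c .
```

The splitting lets the smooth PDE part and the nonsmooth regularizer be handled in separate, simpler subproblems.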