Aug 12 – 16, 2024
Von-Melle-Park 8
Europe/Berlin timezone

Session: MS 01: Optimal Control and Machine Learning

Aug 12, 2024, 2:00 PM
Seminarraum 205 (Von-Melle-Park 8)



  1. Manuel Schaller
    8/12/24, 2:00 PM
    MS 01: Optimal Control and Machine Learning
    Minisymposium Contribution

    Extended Dynamic Mode Decomposition is a popular data-driven method for approximating the flow of a dynamical control system through the lens of observable functions. In this talk, we discuss how this framework and corresponding finite-data error bounds may be used in data-driven Model Predictive Control to establish (practical) asymptotic stability. The key ingredients are proportional error...

  2. Mathias Staudigl
    8/12/24, 2:30 PM
    MS 01: Optimal Control and Machine Learning
    Minisymposium Contribution

    We consider a class of convex risk-neutral PDE-constrained optimization problems subject to pointwise control and state constraints. Due to the many challenges associated with almost sure constraints on pointwise evaluations of the state, we suggest a relaxation via a smooth functional bound with similar properties to well-known probability constraints. First, we introduce and analyze the...

  3. Simon Buchwald
    8/12/24, 3:00 PM
    MS 01: Optimal Control and Machine Learning
    Minisymposium Contribution

    The efficiency of training data is a prominent issue in machine learning. While too little data can lead to insufficient learning, too much data can result in overfitting or be computationally expensive to generate.
    In this talk, we investigate a class of greedy-type algorithms that have previously proven to compute efficient control functions for the reconstruction of operators in...

  4. Donato Vásquez Varas (Johann Radon Institute for Computational and Applied Mathematics (RICAM))
    8/12/24, 4:00 PM
    MS 01: Optimal Control and Machine Learning
    Minisymposium Contribution

    The synthesis of feedback laws for infinite-horizon problems via machine learning methods rather than classical methods has been a topic of interest in recent years, since such methods have the potential to mitigate the curse of dimensionality. This talk studies two such methods.

    The first consists of searching for a feedback law in a finite-dimensional function space (for example...

  5. Frederik Köhne (Universität Bayreuth)
    8/12/24, 4:30 PM
    MS 01: Optimal Control and Machine Learning
    Minisymposium Contribution

    The choice of the step size (or learning rate) in stochastic optimization algorithms, such as stochastic gradient descent, plays a central role in the training of machine learning models. Both theoretical investigations and empirical analyses emphasize that an optimal step size not only requires taking into account the nonlinearity of the underlying problem, but also relies on accounting for...

  6. Mario Sperl (University of Bayreuth)
    8/13/24, 9:00 AM
    MS 01: Optimal Control and Machine Learning
    Minisymposium Contribution

    In this presentation, we consider interconnected optimal control problems, wherein the interconnection is represented as a graph. We establish a decaying sensitivity condition, where the influence between graph nodes diminishes with their distance, and leverage this assumption to construct a separable approximation of the optimal value function. This approach allows us to identify scenarios in...

  7. Isabel Jacob (TU Darmstadt)
    8/13/24, 9:30 AM
    MS 01: Optimal Control and Machine Learning
    Contributed Talk

    As the use cases for neural networks become increasingly complex, modern neural networks must also become deeper and more intricate to keep up, indicating the need for more efficient learning algorithms. Multilevel methods, traditionally used to solve differential equations using a hierarchy of discretizations, offer the potential to reduce computational effort.

    In this talk, we combine...

  8. Jens Püttschneider (TU Dortmund)
    8/13/24, 10:00 AM
    MS 01: Optimal Control and Machine Learning
    Minisymposium Contribution

    System-theoretic dissipativity notions introduced by Jan C. Willems play a fundamental role in the analysis of optimal control problems. They enable the understanding of infinite-horizon asymptotics and turnpike properties. This talk introduces a dissipative formulation for training deep Residual Neural Networks (ResNets) in classification problems. To this end, we formulate the training of...

  9. Hans Harder (University of Paderborn)
    8/13/24, 11:00 AM
    MS 01: Optimal Control and Machine Learning
    Minisymposium Contribution

    The value function plays a crucial role as a measure for the cumulative future reward an agent receives in both reinforcement learning and optimal control. It is therefore of interest to study how similar the values of neighboring states are, i.e. to investigate the continuity of the value function. We do so by providing and verifying upper bounds on the value function's modulus of continuity....

  10. Yongcun Song
    8/13/24, 11:30 AM
    MS 01: Optimal Control and Machine Learning
    Minisymposium Contribution

    We study the application of well-known physics-informed neural networks (PINNs) for solving non-smooth PDE-constrained optimization problems. First, we consider a class of PDE-constrained optimization problems where additional nonsmooth regularization is employed for constraints on the control or design variables. For solving such problems, we combine the alternating direction method of...

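For the ninth contribution's theme, a standard uncontrolled, discounted special case shows what a modulus-of-continuity bound for the value function can look like; this is a textbook-style estimate added for orientation, not the result of the talk:

```latex
% Illustrative bound: L_\ell-Lipschitz stage cost \ell, L_f-Lipschitz
% dynamics f, discount factor \gamma with \gamma L_f < 1.
\[
  V(x) = \sum_{k=0}^{\infty} \gamma^k \,\ell(x_k), \qquad
  x_{k+1} = f(x_k), \quad x_0 = x,
\]
\[
  |V(x) - V(y)| \le \sum_{k=0}^{\infty} \gamma^k L_\ell \,\|x_k - y_k\|
  \le \frac{L_\ell}{1 - \gamma L_f}\,\|x - y\|,
\]
% using \|x_k - y_k\| \le L_f^k \|x - y\| for trajectories started at x and y.
```

Here the modulus of continuity is linear, i.e. V is Lipschitz; when \(\gamma L_f \ge 1\) only weaker (e.g. Hölder-type) moduli can be expected, which is the regime where explicit upper bounds become interesting.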
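To make the first contribution's starting point concrete, here is a minimal numerical sketch of Extended Dynamic Mode Decomposition on a toy system. Everything in it (the scalar linear dynamics, the monomial dictionary, all constants) is an illustrative assumption of ours, not material from the talk:

```python
import numpy as np

# Toy EDMD sketch, assuming scalar linear dynamics x+ = 0.9 x and a
# monomial observable dictionary psi_k(x) = x^k, k = 0..3. Illustrative only.
rng = np.random.default_rng(0)

def dynamics(x):
    return 0.9 * x  # stands in for sampled flow data of a real system

def dictionary(x, degree=3):
    # evaluate the observables psi_k(x) = x^k column-wise
    return np.vander(x, degree + 1, increasing=True)

# sampled state pairs (x_i, x_i^+)
X = rng.uniform(-1.0, 1.0, size=200)
Y = dynamics(X)

# least-squares approximation of the Koopman operator on the dictionary span:
# K = argmin_K || Psi(Y) - Psi(X) K ||_F
K, *_ = np.linalg.lstsq(dictionary(X), dictionary(Y), rcond=None)

# for this dictionary and linear flow, K is diagonal with entries 0.9**k
print(np.round(np.diag(K), 3))
```

Finite-data error bounds of the kind the abstract mentions quantify how far such a sample-based K is from the Koopman operator restricted to the dictionary span.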
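The step-size trade-off raised in Frederik Köhne's contribution can be illustrated on a noisy quadratic; the toy objective and all constants below are our own assumptions, not the talk's setting:

```python
import numpy as np

# SGD on the noisy quadratic f(w) = 0.5 * a * w**2, where each gradient
# query is corrupted by additive Gaussian noise of level sigma.
rng = np.random.default_rng(1)
a, sigma = 2.0, 0.5  # curvature and gradient-noise level (illustrative)

def run_sgd(eta, steps=2000, w0=5.0):
    w = w0
    for _ in range(steps):
        g = a * w + sigma * rng.normal()  # stochastic gradient of f
        w -= eta * g
    return abs(w)

for eta in (0.4, 0.1, 0.01):
    print(f"eta={eta}: |w| ~ {run_sgd(eta):.3f}")
```

A large step contracts quickly but stalls on a noise plateau that grows with the step size; a small step shrinks the plateau at the cost of slower contraction. This is the sense in which a good step size must account for both the curvature of the problem and the gradient noise.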