Extended Dynamic Mode Decomposition is a popular data-driven method to approximate the flow of a dynamical control system through the lens of observable functions. In this talk, we discuss how this framework and corresponding finite-data error bounds may be used in data-driven Model Predictive Control to establish (practical) asymptotic stability. The key ingredients are proportional error...
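As a concrete illustration of the lifting idea behind this framework, the sketch below fits an EDMD surrogate by least squares on snapshot data; the monomial dictionary, the toy linear system, and all names are illustrative assumptions, not the setup of the talk.

```python
import numpy as np

def edmd(X, Y, observables):
    """Minimal EDMD: fit a linear operator K on lifted snapshot data.

    X, Y : arrays of shape (n_samples, n_states), Y[i] the successor of X[i].
    observables : list of scalar functions of the state (the dictionary).
    """
    # Lift both snapshot matrices through the dictionary.
    PsiX = np.array([[g(x) for g in observables] for x in X])
    PsiY = np.array([[g(y) for g in observables] for y in Y])
    # Least-squares fit of the Koopman matrix: PsiY ≈ PsiX @ K.
    K, *_ = np.linalg.lstsq(PsiX, PsiY, rcond=None)
    return K

# Toy linear system x+ = A x with a small monomial dictionary.
A = np.array([[0.9, 0.1], [0.0, 0.8]])
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(200, 2))
Y = X @ A.T
K = edmd(X, Y, [lambda x: x[0], lambda x: x[1], lambda x: x[0] * x[1]])
```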
We consider a class of convex risk-neutral PDE-constrained optimization problems subject to pointwise control and state constraints. Due to the many challenges associated with almost sure constraints on pointwise evaluations of the state, we suggest a relaxation via a smooth functional bound with similar properties to well-known probability constraints. First, we introduce and analyze the...
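One generic way to build such a smooth bound is to average a smoothed positive part over the constraint-violation function; the softplus smoothing below is a standard illustrative choice and is not claimed to be the specific functional analyzed in the talk.

```python
import numpy as np

def softplus(t, eps=1e-2):
    # Numerically stable eps * log(1 + exp(t / eps)): a smooth upper
    # bound on max(t, 0) that tightens as eps -> 0.
    return eps * np.logaddexp(0.0, t / eps)

def smooth_violation(y, psi, eps=1e-2):
    """Smooth surrogate for the pointwise constraint y(x) <= psi(x).

    y, psi : arrays of state values and bounds on a grid of points.
    The surrogate is ~0 where the constraint holds and grows smoothly
    with the magnitude of the violation, so it can replace the almost
    sure constraint inside expectations.
    """
    return softplus(y - psi, eps).mean()
```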
The efficiency of training data is a prominent issue in machine learning. While too little data can lead to insufficient learning, too much data can result in overfitting or can be computationally expensive to generate.
In this talk, we investigate a class of greedy-type algorithms that have previously been shown to compute efficient control functions for the reconstruction of operators in...
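The common skeleton of such greedy-type algorithms is a selection loop that repeatedly adds the candidate with the largest current error indicator; the sketch below shows this generic loop only, since the talk's concrete indicator and candidate set are not specified here.

```python
import numpy as np

def greedy_select(candidates, error_of, budget):
    """Generic greedy loop: in each round, add the candidate with the
    largest error indicator given the current selection.

    candidates : list of admissible elements (e.g., control functions).
    error_of   : callable(candidate, selected) -> float, a computable
                 error or residual indicator.
    budget     : number of elements to select.
    """
    candidates = list(candidates)
    selected = []
    for _ in range(budget):
        errors = [error_of(c, selected) for c in candidates]
        selected.append(candidates.pop(int(np.argmax(errors))))
    return selected
```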
The synthesis of infinite-horizon feedback laws via machine learning methods instead of classical methods has been a theme of interest in recent years, since these methods have the potential to mitigate the curse of dimensionality. Two such methods are under study in this talk.
The first consists of searching for a feedback law in a finite-dimensional function space (for example...
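A toy version of this first approach is sketched below: the feedback is parametrized in a small function space and the parameters are fit by minimizing a sampled closed-loop cost. The scalar dynamics, the ansatz span{x, x^3}, and the optimizer are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

# Toy setting: scalar dynamics x+ = x + dt*(x + u), quadratic stage cost.
dt, horizon = 0.1, 50
features = lambda x: np.array([x, x**3])  # ansatz: u(x) in span{x, x^3}

def rollout_cost(theta, x0):
    x, cost = x0, 0.0
    for _ in range(horizon):
        u = features(x) @ theta           # feedback law with parameters theta
        cost += dt * (x**2 + u**2)
        x = x + dt * (x + u)
    return cost

# Fit theta by minimizing the cost averaged over sampled initial states.
x0_samples = np.linspace(-1.0, 1.0, 11)
objective = lambda theta: sum(rollout_cost(theta, x0) for x0 in x0_samples)
result = minimize(objective, np.zeros(2), method="Nelder-Mead")
```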
The choice of the step size (or learning rate) in stochastic optimization algorithms, such as stochastic gradient descent, plays a central role in the training of machine learning models. Both theoretical investigations and empirical analyses emphasize that choosing an optimal step size requires not only taking into account the nonlinearity of the underlying problem, but also accounting for...
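For orientation, the sketch below runs plain SGD with a classical Robbins-Monro schedule on a noisy toy quadratic; the schedule constants are exactly the place where problem nonlinearity and noise information would enter. All choices here are illustrative, not the rule studied in the talk.

```python
import numpy as np

def sgd(grad, w0, eta, steps, rng):
    """Plain SGD; eta(t) returns the step size used at iteration t."""
    w = np.asarray(w0, dtype=float)
    for t in range(steps):
        w = w - eta(t) * grad(w, rng)
    return w

# Toy objective f(w) = 0.5 * ||w||^2 with noisy gradient evaluations.
grad = lambda w, rng: w + 0.1 * rng.standard_normal(w.shape)

# A classical Robbins-Monro schedule eta_t = c / (t0 + t); tuning c and
# t0 is where curvature and gradient-noise information would enter.
rng = np.random.default_rng(0)
w = sgd(grad, w0=np.ones(3), eta=lambda t: 1.0 / (10.0 + t), steps=500, rng=rng)
```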
In this presentation, we consider interconnected optimal control problems, wherein the interconnection is represented as a graph. We introduce a decaying-sensitivity condition, under which the influence between graph nodes diminishes with their distance, and leverage this assumption to construct a separable approximation of the optimal value function. This approach allows us to identify scenarios in...
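In schematic terms, such a condition and the separable ansatz it suggests can be written as follows; the constants, the second-order form of the sensitivity bound, and the neighborhood notation are assumptions for illustration, not the talk's definitions.

```latex
% Illustrative formalization; C, rho, and the neighborhoods are assumed.
\[
  \left| \frac{\partial^2 V}{\partial x_i \,\partial x_j}(x) \right|
  \;\le\; C\,\rho^{\,d(i,j)}, \qquad \rho \in (0,1),
\]
% where d(i,j) is the graph distance between nodes i and j; this suggests
\[
  V(x) \;\approx\; \sum_{i} V_i\!\left(x_{\mathcal{N}_\kappa(i)}\right),
\]
% with N_kappa(i) the radius-kappa neighborhood of node i and an
% approximation error decaying like rho^kappa as kappa grows.
```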
As the use cases for neural networks become increasingly complex, modern architectures must become deeper and more intricate to keep up, indicating the need for more efficient learning algorithms. Multilevel methods, traditionally used to solve differential equations using a hierarchy of discretizations, offer the potential to reduce computational effort.
In this talk, we combine...
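One way to carry the discretization-hierarchy analogy over to deep networks is a coarse-to-fine schedule in depth, as in the sketch below: train a shallow ResNet-style model, prolongate it to a deeper one, and continue training. The forward-Euler view, the duplication-based prolongation, and the dimensions are illustrative assumptions.

```python
import numpy as np

def forward(x, blocks):
    # Forward-Euler view of a ResNet: x <- x + h * tanh(W x), with the
    # step size h tied to the depth, as in an ODE discretization.
    h = 1.0 / len(blocks)
    for W in blocks:
        x = x + h * np.tanh(W @ x)
    return x

def prolong(blocks):
    # Refinement in depth: duplicate each block; the finer level then
    # runs twice as many layers with half the step size.
    return [W.copy() for W in blocks for _ in range(2)]

rng = np.random.default_rng(0)
blocks = [0.1 * rng.standard_normal((4, 4)) for _ in range(2)]
# ... train the cheap coarse model here ...
blocks = prolong(blocks)  # 4 blocks; continue training on the finer level
```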
System-theoretic dissipativity notions introduced by Jan C. Willems play a fundamental role in the analysis of optimal control problems. They enable the understanding of infinite-horizon asymptotics and turnpike properties. This talk introduces a dissipative formulation for training deep Residual Neural Networks (ResNets) in classification problems. To this end, we formulate the training of...
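As a rough illustration of how a dissipation inequality can be encoded in training, the sketch below penalizes layerwise violations of S(x_{k+1}) - S(x_k) <= s with the quadratic storage S(x) = ||x||^2; both the storage and the constant supply rate are illustrative assumptions, not the formulation introduced in the talk.

```python
import numpy as np

def dissipation_penalty(states, supply=0.0):
    """Penalty for violations of a layerwise dissipation inequality
    S(x_{k+1}) - S(x_k) <= s with storage S(x) = ||x||^2.

    states : list of layer activations x_0, ..., x_L for one sample.
    supply : supplied rate s (a constant here, purely for illustration).
    """
    penalty = 0.0
    for xk, xk1 in zip(states[:-1], states[1:]):
        gap = np.sum(xk1**2) - np.sum(xk**2) - supply
        penalty += max(gap, 0.0) ** 2
    return penalty
```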
The value function plays a crucial role as a measure of the cumulative future reward an agent receives in both reinforcement learning and optimal control. It is therefore of interest to study how similar the values of neighboring states are, i.e., to investigate the continuity of the value function. We do so by providing and verifying upper bounds on the value function's modulus of continuity....
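For reference, a modulus of continuity is a bound of the following standard form (the norm and state space are left generic here):

```latex
% Standard definition of the object of study:
\[
  |V(x) - V(y)| \;\le\; \omega\!\left(\|x - y\|\right)
  \quad \text{for all states } x, y,
\]
% where omega is continuous and nondecreasing with omega(0) = 0.
```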
We study the application of well-known physics-informed neural networks (PINNs) to solving nonsmooth PDE-constrained optimization problems. First, we consider a class of PDE-constrained optimization problems where additional nonsmooth regularization is employed for constraints on the control or design variables. For solving such problems, we combine the alternating direction method of...
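The splitting structure such a combination exploits is the classical ADMM loop: a smooth subproblem (which a PINN training step could play the role of), a cheap proximal update for the nonsmooth term, and a dual update. The sketch below uses an L1 regularizer and a quadratic stand-in for the smooth subproblem; these choices are assumptions for illustration.

```python
import numpy as np

def soft_threshold(v, tau):
    # Proximal map of tau * ||.||_1, handling the nonsmooth regularizer.
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def admm(smooth_step, u0, alpha, rho, iters):
    """ADMM splitting u = z: `smooth_step` solves the smooth subproblem
    (e.g., a few epochs of PINN training), the z-update is a cheap prox,
    and y is the scaled dual variable."""
    u, z, y = u0.copy(), u0.copy(), np.zeros_like(u0)
    for _ in range(iters):
        u = smooth_step(z - y, rho)              # smooth subproblem
        z = soft_threshold(u + y, alpha / rho)   # prox of the L1 term
        y = y + u - z                            # dual update
    return z

# Stand-in for the smooth subproblem: min_u 0.5*||u - b||^2 + rho/2*||u - v||^2.
b = np.array([1.0, -0.2, 0.05])
smooth_step = lambda v, rho: (b + rho * v) / (1.0 + rho)
u_star = admm(smooth_step, u0=np.zeros(3), alpha=0.1, rho=1.0, iters=50)
```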