Speaker
Description
We address the computational efficiency of A-optimal Bayesian experimental design. A-optimality is a widely used criterion in Bayesian experimental design that seeks the design minimizing the expected conditional variance of the quantity of interest. We propose a novel likelihood-free method for A-optimal experimental design that requires neither sampling from nor approximating the Bayesian posterior distribution, thereby avoiding issues of posterior intractability. Our approach relies on two principal properties of the conditional expectation: the law of total variance and the orthogonal projection property. Using the law of total variance, we express the expected conditional variance in terms of the variance of the conditional expectation. We then exploit the orthogonal projection property to approximate the conditional expectation by regression, eliminating the need to evaluate the likelihood function. To implement the approach, we employ deep artificial neural networks (ANNs) to approximate the nonlinear conditional expectation. For continuous experimental design parameters in particular, we integrate the minimization of the expected conditional variance directly into the training of the ANN approximation; this integration is possible because the two tasks share the same objective function, and it improves the efficiency of the algorithm. Numerical experiments demonstrate that our method substantially reduces the number of computationally expensive forward-model evaluations compared with common likelihood-based approaches, overcoming a significant bottleneck.
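For concreteness, the two properties can be stated as follows; the notation is ours, introduced only for illustration. Let \(\theta\) denote the unknown parameter and \(y = y(\xi)\) the data observed under a design \(\xi\). The law of total variance,

\[ \operatorname{Var}(\theta) = \mathbb{E}_{y}\!\left[\operatorname{Var}(\theta \mid y)\right] + \operatorname{Var}_{y}\!\left(\mathbb{E}[\theta \mid y]\right), \]

shows that minimizing the A-optimality objective \(\mathbb{E}_{y}[\operatorname{Var}(\theta \mid y)]\) over designs is equivalent to maximizing \(\operatorname{Var}_{y}(\mathbb{E}[\theta \mid y])\), since \(\operatorname{Var}(\theta)\) is fixed by the prior. The orthogonal projection property characterizes the conditional expectation as the best mean-square approximation of \(\theta\) by a function of the data,

\[ \mathbb{E}[\theta \mid y] = \operatorname*{arg\,min}_{g} \; \mathbb{E}\!\left[\lVert \theta - g(y) \rVert^{2}\right], \]

so it can be fitted by regression on joint samples \((\theta_i, y_i)\) drawn from the prior and pushed through the forward model, with no likelihood evaluations.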
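The "shared objective function" point can be made concrete: at the optimal regressor g, the regression loss equals the A-optimality objective, so one stochastic descent can train the network and move a continuous design parameter at the same time. Below is a minimal sketch of such a scheme in PyTorch; the forward model, network architecture, and training settings are illustrative assumptions of ours, not the speaker's implementation.

```python
# A minimal sketch (PyTorch), assuming a generic noisy forward model
# y = F(theta, xi) + noise; all specifics here are illustrative.
import torch
import torch.nn as nn

def forward_model(theta, xi):
    # Hypothetical cheap stand-in for an expensive forward model:
    # observations depend nonlinearly on the parameter and the design.
    return torch.sin(xi * theta) + 0.1 * theta ** 2

class CondExpNet(nn.Module):
    """ANN surrogate g(y) ~ E[theta | y] (the L^2 orthogonal projection)."""
    def __init__(self, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(1, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, 1),
        )

    def forward(self, y):
        return self.net(y)

g = CondExpNet()
xi = torch.tensor([1.0], requires_grad=True)  # continuous design parameter
opt = torch.optim.Adam(list(g.parameters()) + [xi], lr=1e-3)

for step in range(5000):
    theta = torch.randn(256, 1)                # samples from the prior
    noise = 0.05 * torch.randn(256, 1)
    y = forward_model(theta, xi) + noise       # joint samples (theta, y)
    # Regression loss: at the optimal g it equals E[Var(theta | y)],
    # so descending it jointly in (g, xi) both trains the surrogate
    # and drives xi toward the A-optimal design.
    loss = ((theta - g(y)) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

print("approximate A-optimal design:", xi.item())
```

The joint descent is justified because the regression loss upper-bounds \(\mathbb{E}_{y}[\operatorname{Var}(\theta \mid y)]\) for every g, with equality at the exact conditional expectation, so minimizing over the network weights and the design together targets the A-optimal design directly. Note that no posterior density or likelihood is evaluated anywhere in the loop; only joint samples are used.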