Welcome to the website of the international conference "Machine Learning in Natural Sciences: From Quantum Physics to Nanoscience and Structural Biology". The conference is hosted by the Cluster of Excellence "CUI: Advanced Imaging of Matter" and aims to bring together the different scientific communities exploring the new possibilities of machine learning, in order to connect and share experiences.
Please read the General Information page before you start the registration process.
Most of the talks have been recorded and are available here: https://lecture2go.uni-hamburg.de/l2go/-/get/l/7162
Alexandre Dauphin (ICFO Barcelona)
Niklas Käming (U Hamburg)
Christof Weitenberg (U Hamburg)
State-of-the-art quantum computers cannot run arbitrarily long quantum algorithms since their decoherence time is limited. The quality of the results unavoidably decays as the execution time increases. This work introduces a method to reduce the depth requirements of a circuit to be executed on a quantum device. The method consists in splitting the circuit into several stages that are applied sequentially: the output of one stage is the input of the next one, and all outputs are recombined at the end to estimate the results of the original algorithm. The output of each stage should have a small number of non-negligible outcomes to avoid an exponential computational cost. To this end, a variational reducer is added at the cut of the circuit. The optimization of the reducer is done by activating the circuit adiabatically. The method is numerically simulated for estimating the probability of sampling a bitstring in the outcomes of a quantum circuit split into a few stages.
Quantum machine learning has emerged as a promising utilization of near-term quantum computation devices. However, algorithmic classes such as variational quantum algorithms have been shown to suffer from barren plateaus due to vanishing gradients in their parameter spaces. We present an approach to quantum algorithm optimization that is based on trainable Fourier coefficients of Hamiltonian system parameters. Our ansatz applies to the extension of discrete quantum variational algorithms to analogue quantum optimal control schemes and is non-local in time. We demonstrate the viability of our ansatz on the objectives of compiling the quantum Fourier transform and preparing ground states of random problem Hamiltonians using quantum natural gradient descent. In comparison to the temporally local discretization ansätze in quantum optimal control and parametrized circuits, our ansatz exhibits faster and more consistent convergence without suffering from vanishing gradients. We uniformly sample objective gradients across the parameter space and find that in our ansatz the variance decays at a decreasing rate with the number of qubits, which indicates the absence of barren plateaus. We propose our ansatz as a viable candidate for near-term quantum machine learning.
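As a concrete illustration of this kind of ansatz, a control field can be written as a truncated Fourier series whose coefficients are the trainable parameters, so that the value of the drive at any time depends on all parameters at once. The minimal numpy sketch below is an assumption about the general form only; it does not reproduce the specific parametrization or the quantum-natural-gradient optimization used in this work.

```python
import numpy as np

def fourier_control(coeffs_a, coeffs_b, T, t):
    """Control field u(t) parametrized by trainable Fourier coefficients.

    u(t) = a_0/2 + sum_k [ a_k cos(2*pi*k*t/T) + b_k sin(2*pi*k*t/T) ].
    The field at any time depends on all coefficients, which makes the
    ansatz non-local in time.
    """
    k = np.arange(1, len(coeffs_b) + 1)
    return (coeffs_a[0] / 2
            + np.sum(coeffs_a[1:] * np.cos(2 * np.pi * k * t / T))
            + np.sum(coeffs_b * np.sin(2 * np.pi * k * t / T)))

# Example: 5 Fourier modes controlling a drive over a protocol of duration T = 1.0
rng = np.random.default_rng(0)
a, b = rng.normal(size=6), rng.normal(size=5)
times = np.linspace(0.0, 1.0, 101)
u = np.array([fourier_control(a, b, 1.0, t) for t in times])
```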
The analysis of the absorption and emission of electromagnetic radiation is a powerful method for exploring the quantum world of atoms and molecules. The ability to use well-defined laser pulses provides an opportunity to study the underlying structure and mechanisms of these microscopic systems with very high resolution. A large number of the techniques developed to date for the characterization of laser pulses rely on dedicated optical setups and, consequently, are commonly employed in ex-situ measurements, i.e., far from the light-target interaction. Such implementations can give rise to inaccuracies in estimating the in-situ properties of ultrashort laser pulses; thus, direct in-situ characterization methods are desirable.
In our work we theoretically investigate the in-situ characterization of few-femtosecond near-infrared laser pulses through strong-field-ionization-driven autocorrelation using a machine learning approach. The process of ionization by a strong field is nonperturbative and nonlinear, and thus it cannot be represented by a simple analytical autocorrelation function of the field. In this context, we employ first-principles quantum-mechanical calculations to model the strong-field ionization of rare gas atoms and produce autocorrelation patterns for a range of laser parameters. Then, in order to retrieve the properties of the laser field driving the ionization, such patterns are used as a database for a machine learning algorithm. We compare two approaches: one based on the Random Forest algorithm and one utilizing our novel machine-learning-based algorithm. We demonstrate that the combination of first-principles calculations and machine learning allows for the retrieval of key parameters of the laser, such as the pulse duration and bandwidth, and is a promising route toward the in-situ characterization of ultrashort low-frequency laser pulses.
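A minimal sketch of the Random-Forest branch of such a retrieval pipeline is given below: a regressor is fitted to simulated "autocorrelation patterns" labelled by the pulse parameters used to generate them. The synthetic data (Gaussian traces whose width plays the role of the pulse duration) and the choice of a single target parameter are illustrative assumptions, not the first-principles database used in this work.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the simulated database: each "autocorrelation pattern"
# is a noisy Gaussian whose width plays the role of the pulse duration.
rng = np.random.default_rng(0)
delays = np.linspace(-50, 50, 200)                       # delay axis in fs
durations = rng.uniform(3.0, 15.0, size=2000)            # target parameter
X = np.exp(-delays**2 / (2 * durations[:, None]**2))
X += 0.01 * rng.normal(size=X.shape)                     # detection noise

X_tr, X_te, y_tr, y_te = train_test_split(X, durations, test_size=0.2, random_state=0)
model = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_tr, y_tr)

rel_err = np.abs(model.predict(X_te) - y_te) / y_te
print(f"mean relative error on pulse duration: {rel_err.mean():.3f}")
```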
Quantum many-body control is a central milestone en route to harnessing quantum technologies. However, the exponential growth of the Hilbert space dimension with the number of qubits makes it challenging to classically simulate quantum many-body systems and consequently, to devise reliable and robust optimal control protocols. Here, we present a novel framework for efficiently controlling quantum many-body systems based on reinforcement learning (RL). We tackle the quantum control problem by leveraging matrix product states (i) for representing the many-body state and (ii) as part of the trainable machine learning architecture for our RL agent. The framework is applied to prepare ground states of the quantum Ising chain, including critical states. It allows us to control systems far larger than neural-network-only architectures permit, while retaining the advantages of deep learning algorithms, such as generalizability and trainable robustness to noise. In particular, we demonstrate that RL agents are capable of finding universal controls, of learning how to optimally steer previously unseen many-body states, and of adapting control protocols on-the-fly when the quantum dynamics is subject to stochastic perturbations.
The estimation of decoherence timescales is important not only as a key performance indicator for quantum technology, but also to measure physical quantities through the change they induce in the relaxation of quantum sensors. Typically, decoherence times are estimated by fitting a signal acquired while sweeping the time delay between qubit preparation and detection over a pre-determined range. Here we describe an adaptive Bayesian approach, based on a simple analytical update rule, to estimate T$_1$, T$_2^*$ and T$_2$ with the fewest measurements, demonstrating a speed-up by a factor of 3-10, depending on the specific experiment, compared to the standard protocols. We also demonstrate that, when the sensing time $\tau$ is the resource to be minimised, a further speed-up by a factor of $\sim$2 can be obtained by maximising the ratio between the Fisher information and the sensing time $\tau$, rather than the Fisher information alone.
We demonstrate the online adaptive protocols on a single electronic spin qubit associated with a nitrogen-vacancy (NV) centre in diamond, implementing Bayesian inference on a hard real-time microcontroller in less than 100 $\mu$s, a time negligible compared to the duration of each measurement. Our protocol can be readily applied to different types of quantum systems.
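A minimal sketch of an adaptive Bayesian T$_1$ estimation of this kind is given below (grid posterior, exponential-decay likelihood, next delay chosen adaptively from the current estimate). It illustrates the idea only; the analytical update rule and the Fisher-information-based delay choice of the actual protocol are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)
T1_true = 40e-6                                     # "unknown" relaxation time
T1_grid = np.linspace(1e-6, 200e-6, 400)
posterior = np.ones_like(T1_grid) / len(T1_grid)    # flat prior

def survival_prob(tau, T1):
    # Probability that the qubit is still in the excited state after delay tau
    return np.exp(-tau / T1)

for shot in range(300):
    # Simple adaptive heuristic: probe near the current T1 estimate,
    # where a single-shot outcome is close to maximally informative.
    tau = float(np.sum(posterior * T1_grid))
    outcome = rng.random() < survival_prob(tau, T1_true)   # simulated single shot
    p = survival_prob(tau, T1_grid)
    posterior *= p if outcome else 1.0 - p                  # Bayesian update
    posterior /= posterior.sum()

T1_est = np.sum(posterior * T1_grid)
print(f"estimated T1 = {T1_est * 1e6:.1f} us (true {T1_true * 1e6:.1f} us)")
```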
Quantum dots must be tuned to a specific charge regime before being used for qubit operation. This calibration procedure requires measuring the stability diagram and finding the proper gate voltages to confine one electron in the dot. Currently, this operation is performed manually, which is time-consuming and therefore not desirable for a large-scale quantum system. To overcome this limitation, autotuning protocols based on machine learning techniques have been suggested. However, this approach generally lacks robustness when applied to noisy diagrams, since a single misclassification can lead to unexpected failures of the tuning procedure. In this work, we propose to train a Bayesian neural network for the task of charge transition detection. The uncertainty provided by this model gives valuable information about the quality of the measurement and the confidence of the detection, which allows us to design a robust procedure for single-dot autotuning. This approach has been successfully validated on experimental stability diagrams from two different quantum dot technologies.
In this talk, I will present the Anomalous Diffusion (AnDi) challenge, a community-driven event aimed at pushing our understanding of diffusion phenomena. Deviations from Brownian motion leading to anomalous diffusion are found in transport dynamics from quantum physics to life sciences. The characterization of anomalous diffusion from the measurement of an individual trajectory is a challenging task. Recently, several new approaches have been proposed, mostly building on the ongoing machine-learning revolution. To perform an objective comparison of methods, we gathered the community and organized an open competition, the AnDi challenge. Participating teams applied their algorithms to a commonly defined dataset including diverse conditions. Although no single method performed best across all scenarios, machine-learning-based approaches achieved superior performance for all tasks. The discussion of the challenge results provides practical advice for users and a benchmark for developers.
We propose a machine learning method to characterize heterogeneous diffusion processes at a single-step level. The machine learning model takes a trajectory of arbitrary length as input and outputs the prediction of a property of interest, such as the diffusion coefficient or the anomalous exponent, at every time step. This way, changes in the diffusive properties along the trajectory emerge naturally in the prediction, allowing us to characterize both homogeneous and heterogeneous processes without any prior knowledge or assumption about the system. In this work, we showcase the power of the method in various scenarios. We illustrate its capacity to characterize trajectories with zero to ten discrete changes in the diffusion coefficient of the diffusing particle, considering different segment lengths and localization noise levels. Additionally, we show how to use the same method to study continuous changes, such as the aging of the diffusion coefficient, as well as changes in the anomalous exponent. Finally, we provide results studying an experimental system of Integrin α5β1 diffusing in the membrane of HeLa cells. We find two diffusive states with changes in both the diffusion coefficient and the anomalous exponent, and we show that one of these can be related to trapping events.
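As an illustration of a model that outputs one prediction per time step for trajectories of arbitrary length, the sketch below uses a small 1D convolutional network in PyTorch trained on toy Brownian trajectories with a switching diffusion coefficient. The architecture and training details are assumptions for illustration and not the model used in this work.

```python
import torch
import torch.nn as nn

class PointwiseRegressor(nn.Module):
    """Maps a trajectory of shape (batch, length, dims) to one prediction per step."""
    def __init__(self, dims=2, hidden=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(dims, hidden, kernel_size=7, padding=3), nn.ReLU(),
            nn.Conv1d(hidden, hidden, kernel_size=7, padding=3), nn.ReLU(),
            nn.Conv1d(hidden, 1, kernel_size=1),
        )

    def forward(self, x):                                    # x: (batch, length, dims)
        return self.net(x.transpose(1, 2)).squeeze(1)        # -> (batch, length)

# Toy training step: Brownian trajectories whose diffusion coefficient switches
# halfway through; the per-step coefficient is the regression target.
model = PointwiseRegressor()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
D = torch.full((200,), 2.0)
D[:100] = 0.5
D = D.expand(8, 200)                                         # target, one value per step
steps = torch.sqrt(2 * D).unsqueeze(-1) * torch.randn(8, 200, 2)
trajectories = torch.cumsum(steps, dim=1)
loss = nn.functional.mse_loss(model(trajectories), D)
loss.backward()
optimizer.step()
```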
Machine learning can enable and accelerate the design of new molecules and materials in multiple ways, e.g. by learning from large amounts of (simulated or experimental) data to predict molecular or materials properties faster, or even by interfacing machine learning algorithms for autonomous decision-making directly with automated high-throughput experiments. This talk will give a brief overview of our research activities on graph neural networks for materials property prediction [1], machine learning accelerated atomistic simulations of photochemical reactions [2,3], as well as on the machine learning based prediction of synthesis conditions for metal-organic frameworks [4].
[1] Reiser et al., Software Impacts 2021, https://www.sciencedirect.com/science/article/pii/S266596382100035X
[2] Friederich et al., Nature Materials 2021, https://www.nature.com/articles/s41563-020-0777-6
[3] Li et al., Chemical Science 2021, https://pubs.rsc.org/en/content/articlehtml/2021/sc/d0sc05610c
[4] Luo et al., Angewandte Chemie 2022, https://onlinelibrary.wiley.com/doi/full/10.1002/anie.202200242
Recent serial crystallography experiments at FELs produce a large amount of data, where typically the ratio of useful images containing crystal diffraction (hit fraction) is about 5-10%, but hit fractions even lower than 0.1% have been observed in some experiments. Demands on data storage could be greatly reduced by rejecting bad images before saving them to disk, but this requires reliable methods for detecting these images that do not rely on expert tuning or intervention during the experiment. Traditional non-machine-learning techniques successfully classify good and bad images based on the number of peaks detected in the images, but they require tuning many parameters that vary from experiment to experiment. Traditional machine learning methods such as artificial neural networks also classify diffraction patterns successfully, but the major challenge is cross-domain performance: a classifier trained on local feature vectors extracted from one dataset cannot necessarily be applied to data collected with different samples and experimental settings. Scale-invariance, good localization, and robustness to noise and artifacts are the main properties that a local feature detector should possess. In this paper, we propose a real-time, automatic, and parameter-free image descriptor as a local feature detector for use as input data for ML models. Our method describes each diffraction pattern by a vector consisting of the number of keypoints (Bragg spots) in different areas of the image. Additionally, we parallelized our novel, parameter-free keypoint detection algorithm to be computationally efficient compared to traditional keypoint detection algorithms. A machine learning model trained with this image descriptor (vector) performs well across different experimental settings (cross-domain) when classifying diffraction patterns as hits or misses. Our initial experimental results show a significant reduction of the domain gap when an ML classifier is trained with the new image descriptor and tested on an unseen dataset.
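A minimal sketch of the structure of such a region-wise descriptor is given below: the image is divided into a grid of tiles and the number of detected spots per tile forms the feature vector. The simple threshold detector used here is only a stand-in for the parameter-free keypoint algorithm described in the abstract.

```python
import numpy as np
from scipy import ndimage

def region_peak_descriptor(image, grid=(4, 4), threshold=None):
    """Describe a diffraction pattern by the number of peak-like spots
    found in each cell of a grid laid over the image."""
    if threshold is None:
        threshold = image.mean() + 5 * image.std()     # crude stand-in detector
    labeled, n_spots = ndimage.label(image > threshold)
    centers = ndimage.center_of_mass(image, labeled, range(1, n_spots + 1))

    counts = np.zeros(grid, dtype=int)
    for y, x in centers:
        iy = min(int(y / image.shape[0] * grid[0]), grid[0] - 1)
        ix = min(int(x / image.shape[1] * grid[1]), grid[1] - 1)
        counts[iy, ix] += 1
    return counts.ravel()       # fixed-length feature vector for an ML classifier

# Example on a random "pattern" with two bright spots
rng = np.random.default_rng(0)
img = rng.normal(size=(256, 256))
img[40, 60] = img[200, 128] = 50.0
print(region_peak_descriptor(img))
```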
In recent years, serial femtosecond crystallography has made remarkable progress in the measurement of macromolecular structures and dynamics using intense femtosecond-duration pulses from an X-ray free-electron laser (FEL). In these experiments, FEL X-ray pulses are fired at a jet of protein crystals, and the resulting diffraction pattern is measured for each pulse. If the pulse hits a protein crystal, the resulting diffraction pattern is recorded; however, most of the time the beam does not hit a crystal. As a result, out of the hundreds of thousands of diffraction patterns, only a small fraction is useful, so there is tremendous potential for data reduction. Diffraction from a protein crystal produces distinctive patterns known as Bragg peaks. Thus, recent years have seen an increased interest in artificial intelligence, or more specifically machine learning, to process diffraction patterns, resulting in considerable data reduction.
Existing statistical methods utilize peak finding to identify and keep diffraction patterns that contain Bragg peaks and to remove patterns which only contain empty shots, resulting in considerable data reduction. Typically, peak finding methods require carefully crafted parameters from domain experts. We therefore leverage the ``You only look once'' (YOLO) object detection network to detect Bragg peaks in diffraction patterns without user-supplied parameters. In addition, we develop a pixel-level labeling mechanism based on feature extractors to extract bounding-box information for training the object detector.
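A sketch of the kind of conversion from pixel-level labels to YOLO-style bounding boxes is shown below, using connected-component labelling; the actual feature extractors used for labelling are not reproduced here.

```python
import numpy as np
from scipy import ndimage

def mask_to_yolo_boxes(peak_mask):
    """Convert a binary pixel-level peak mask into YOLO-style boxes
    (class, x_center, y_center, width, height), all normalized to [0, 1]."""
    h, w = peak_mask.shape
    labeled, _ = ndimage.label(peak_mask)
    boxes = []
    for sl_y, sl_x in ndimage.find_objects(labeled):
        yc = 0.5 * (sl_y.start + sl_y.stop) / h
        xc = 0.5 * (sl_x.start + sl_x.stop) / w
        boxes.append((0, xc, yc,
                      (sl_x.stop - sl_x.start) / w,
                      (sl_y.stop - sl_y.start) / h))
    return boxes    # one entry per line of a YOLO label file

mask = np.zeros((128, 128), dtype=bool)
mask[30:34, 50:55] = True            # a labeled Bragg peak
print(mask_to_yolo_boxes(mask))
```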
There will be food and drinks during the poster session.
We study the effect of adding intra-layer connections in restricted Boltzmann machines (RBMs), in the hidden layer, in the visible layer, or in both layers at the same time. The improvement obtained with these new connections is evaluated via the negative log-likelihood on the MNIST dataset. We have also implemented different ways to calculate the connection updates, some more precise (and more computationally expensive) than others. In all cases we find improvements that, although considerable in some cases, are not an overwhelming change. In light of these results, we find that new training methods like RAPID, which we introduced in [1] and which is motivated by the control of the spin-glass properties of the Ising model, are very useful. We are currently testing RAPID and some variations of it on deeper Boltzmann machine architectures using the "mean field" theory of condensed matter physics.
[1] A. Pozas-Kerstjens, G. Muñoz-Gil, E. Piñol, M. A. García-March, A. Acín, M. Lewenstein, and P. R. Grzybowsky, Mach. Learn.: Sci. Technol. 2, 025026 (2021).
Finding the closest separable state to a given target state is a notoriously difficult task, even more difficult than deciding whether a state is entangled or separable. To tackle this task, we parametrize separable states with a neural network and train it to minimize the distance to a given target state, with respect to a differentiable distance such as the trace distance or Hilbert-Schmidt distance. By examining the output of the algorithm, we obtain an upper bound on the entanglement of the target state and construct an approximation for its closest separable state. We benchmark the method on a variety of well-known classes of bipartite states and find excellent agreement, even up to a local dimension of $d=10$, while providing conjectures and analytic insight for isotropic and Werner states. Moreover, we show our method to be efficient in the multipartite case, considering different notions of separability. Examining three- and four-party GHZ and W states, we recover known bounds and obtain additional ones, for instance for triseparability.
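A minimal sketch of the underlying optimization idea follows: parametrize a separable state as a convex mixture of product states and minimize the squared Hilbert-Schmidt distance to the target. For brevity it uses a generic scipy optimizer over a small number of product terms rather than the neural-network parametrization described in the abstract; the target state and the number of terms are illustrative choices.

```python
import numpy as np
from scipy.optimize import minimize

d, K = 2, 8                         # local dimension and number of product terms

def separable_state(params):
    """Build rho_sep = sum_k p_k |a_k><a_k| (x) |b_k><b_k| from real parameters."""
    w = params[:K]
    vecs = params[K:].reshape(K, 2, 2, d)           # (term, subsystem, re/im, d)
    p = np.exp(w) / np.exp(w).sum()                 # convex weights
    rho = np.zeros((d * d, d * d), dtype=complex)
    for k in range(K):
        a = vecs[k, 0, 0] + 1j * vecs[k, 0, 1]
        b = vecs[k, 1, 0] + 1j * vecs[k, 1, 1]
        a, b = a / np.linalg.norm(a), b / np.linalg.norm(b)
        psi = np.kron(a, b)
        rho += p[k] * np.outer(psi, psi.conj())
    return rho

def hs_distance_sq(params, target):
    diff = separable_state(params) - target
    return np.real(np.trace(diff.conj().T @ diff))

# Target: a Bell state mixed with white noise (entangled for this mixing ratio)
bell = np.zeros(4, dtype=complex)
bell[0] = bell[3] = 1 / np.sqrt(2)
target = 0.6 * np.outer(bell, bell.conj()) + 0.4 * np.eye(4) / 4

rng = np.random.default_rng(0)
res = minimize(hs_distance_sq, rng.normal(size=K + K * 2 * 2 * d), args=(target,))
print("Hilbert-Schmidt distance to the closest separable state found:",
      np.sqrt(res.fun))
```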
We show that a Support Vector Machine with a quantum kernel provides an accurate prediction of the phase transition in quantum many-body models, even when trained far from the critical point.
The surging popularity of machine learning techniques has prompted their application to the study of physical properties, in particular to the detection of phase transitions. Recently, SVMs have been successfully employed for the prediction of the critical point of the 2D classical Ising model [1]. At the same time, machine learning has hybridized with quantum computation to yield quantum machine learning, where neural networks are replaced by variational quantum circuits and classical kernels give way to quantum kernels. These leverage the exponentially large Hilbert space in their favour, providing rich feature spaces where complex datasets can be accurately classified and promising a quantum advantage [2], [3]. However, many proposals ignore the challenge of loading classical data onto quantum memory or deal with synthetic data sets with no practical applications. In this context, we find that the study of quantum many-body systems provides the perfect testbed in the search for a practical quantum speed up, because the ground state wavefunctions that constitute the data set are quantum in origin, and thus classically intractable in large systems.
In this poster we propose to train an SVM with a kernel constructed from the ground states of a given Hamiltonian to identify phase transitions. We also present two quantum algorithms that materialize the implementation of this technique on a quantum computer. In particular, we test the validity of the method with the transition of the Ising chain in a transverse field (ICTF) at different extremes of the ferromagnetic constant $J$. The SVM learns to classify the two phases and is then able to correctly classify a set of ground states spanning uniformly along $J$, giving us an estimate of the critical point $J_c$. We then perform a finite-size scaling analysis to extract the $N \to \infty$ critical point. To benchmark our results, we replicate the results in [4], where a dip in the fidelity, the overlap between adjacent ground states in a uniform sampling of $J$, is used as a signature of the phase transition. We show that quantum-kernel SVMs, despite being trained with samples far from the phase transition, are an excellent tool to accurately predict quantum critical points.
[1] C. Giannetti, B. Lucini and D. Vadacchino, "Machine Learning as a universal tool for quantitative investigations of phase transitions", Nucl. Phys. B 944, 114639 (2019)
[2] M. Schuld and N. Killoran, "Quantum Machine Learning in Feature Hilbert Spaces", Phys. Rev. Lett. 122, 040504 (2019)
[3] Y. Liu, S. Arunachalam and K. Temme, "A rigorous and robust quantum speed-up in supervised machine learning", arXiv:2010.02174 [quant-ph]
[4] Shi-Jian Gu, "Fidelity approach to quantum phase transitions", Int. J. Mod. Phys. B 24, 4371-4458 (2010)
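A minimal classical sketch of the quantum-kernel SVM idea described above: ground states of a small transverse-field Ising chain are obtained by exact diagonalization, the fidelity kernel $|\langle\psi_i|\psi_j\rangle|^2$ is precomputed, and an SVM trained far from the critical point classifies states across the transition. The system size, training windows and the crude estimate of $J_c$ are illustrative choices only, not the finite-size scaling analysis of the poster.

```python
import numpy as np
from sklearn.svm import SVC

N = 8                                              # spins; Hilbert space dimension 2^N
sx = np.array([[0.0, 1.0], [1.0, 0.0]])
sz = np.diag([1.0, -1.0])

def op(single, site):
    """Single-site operator embedded in the full chain."""
    mats = [np.eye(2)] * N
    mats[site] = single
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

def ground_state(J, h=1.0):
    H = sum(-J * op(sz, i) @ op(sz, (i + 1) % N) - h * op(sx, i) for i in range(N))
    _, vecs = np.linalg.eigh(H)
    return vecs[:, 0]

def kernel(states_a, states_b):
    return np.abs(states_a.conj() @ states_b.T) ** 2     # fidelity kernel

# Train far from the critical point J_c = 1 (in units of the transverse field)
J_train = np.concatenate([np.linspace(0.2, 0.5, 10), np.linspace(1.5, 2.0, 10)])
y_train = (J_train > 1.0).astype(int)
train_states = np.array([ground_state(J) for J in J_train])
svm = SVC(kernel="precomputed").fit(kernel(train_states, train_states), y_train)

# Classify ground states spanning the transition to locate J_c (rough estimate)
J_test = np.linspace(0.2, 2.0, 60)
test_states = np.array([ground_state(J) for J in J_test])
pred = svm.predict(kernel(test_states, train_states))
print("estimated critical point:", J_test[np.argmax(pred)])
```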
In supervised Machine Learning (ML) or Deep Learning (DL) projects, a model is trained, validated and tested by selecting the optimal preprocessing parameters, hyperparameters, and model architecture. The model's performance is then optimized based on the preferred performance metric, such as accuracy or F1-score. In most cases, the number and distribution of the input data are kept fixed. However, when new data is added, the distribution changes over time and in turn the model performance degrades. To recover the original performance, the model needs to be updated by re-running the training pipeline.
With Continuous Machine Learning (CML), the model performance can be monitored with dedicated behavioral tests, since the characteristics of the data might change over time. Additionally, these behavioral tests provide insights into the performance on corner cases or especially difficult examples. This can be helpful if the current best model needs to be updated based on evaluation metrics computed on the whole data set: although the overall performance of the successor model may have improved, it might perform worse on certain corner cases that you would otherwise not directly become aware of.
In our work, we train a Wasserstein GAN (wGAN) with FIRST radio galaxy images to generate additional radio galaxy images, because the number of available labeled images is limited. The idea is to improve a classifier by adding generated images to the training data set alongside the real images. In this case, the training data set changes with every new version of the generator. Thus, with every new version of the generator, the question is how much the generated images improved the classifier and what the reasons for the improvement are. Therefore, we implemented automated behavioral tests that always run when the training data set changes due to a new version of the generator. For one behavioral test, we selected radio galaxy sources from the test data set and used their labels, but acquired the images from a different type of radio telescope, LOFAR. Thus, the wGAN and the classifier never see these images in their training and validation data sets. To gain more insight into how the generated images might improve the classifier, we used a Vision Transformer (ViT) as the classifier. The ViT provides attention maps that indicate which parts of the image the classifier bases its decision on. This gives a better understanding of what the model is focusing on and how that changes with additional generated images.
The results show that we can improve the classifier with additional generated images, especially in one class, and the attention maps indicate that the classifier is focusing on the expected structures within the image. Overall, the combination of automated behavioral tests and the ViT as classifier continuously provides insights about the current best model in the iterative development of the wGAN and classifier.
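A sketch of what one such automated behavioral test could look like (pytest-style) is given below: it checks that the per-class accuracy on the held-out LOFAR corner-case set does not regress whenever the training set changes. The imported module, function names, class labels and thresholds are hypothetical placeholders, not the implementation used in this work.

```python
import numpy as np
import pytest

# Hypothetical helpers: load the current best classifier and the held-out
# LOFAR corner-case images that neither the wGAN nor the classifier ever saw.
from radio_pipeline import load_current_classifier, load_lofar_corner_cases

# Placeholder per-class accuracy thresholds for the corner-case set
MIN_ACCURACY_PER_CLASS = {"FRI": 0.80, "FRII": 0.80, "Compact": 0.75, "Bent": 0.70}

@pytest.mark.behavioral
def test_lofar_corner_cases_do_not_regress():
    model = load_current_classifier()
    images, labels = load_lofar_corner_cases()      # labels as class-name strings
    predictions = model.predict(images)
    for cls, threshold in MIN_ACCURACY_PER_CLASS.items():
        mask = labels == cls
        accuracy = np.mean(predictions[mask] == labels[mask])
        assert accuracy >= threshold, (
            f"{cls}: accuracy {accuracy:.2f} dropped below {threshold:.2f} "
            "after the training set was updated with generated images")
```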
Quantum logic gates are the building blocks of quantum circuits and algorithms, where the generation of entanglement is essential to perform quantum computations. The amount of entanglement that a unitary quantum gate can produce from product states can be quantified by the so-called entangling power, which is a function of the gate’s unitary or Choi matrix representation. In this work, I introduce an efficient approach to the practical problem of estimating the entangling power of an unknown two-qubit gate from measurement data. The approach uses a deep neural network trained with noisy data simulating the outcomes of prepare-and-measure experiments on random gates. The training data is restricted to 48 measurement settings, which is significantly less than the 256 dimensions of the ambient space of 16×16 Choi matrices and very close to the minimum number of settings that guarantees the recovery of a two-qubit unitary gate using the compressed sensing technique at an acceptable error rate. This method does not make any prior assumptions about the quantum gate, and it also avoids the need for standard reconstruction tools based on full quantum tomography, which is prone to systematic errors.
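For reference, the target quantity itself can be computed directly when the unitary is known: the entangling power is the linear entropy of the output state averaged over Haar-random product inputs. The Monte Carlo sketch below evaluates it for a CNOT gate; the neural-network estimator from 48 measurement settings is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def haar_qubit_state():
    v = rng.normal(size=2) + 1j * rng.normal(size=2)
    return v / np.linalg.norm(v)

def entangling_power(U, samples=20000):
    """Mean linear entropy of U|a>|b> over Haar-random product input states."""
    ent = 0.0
    for _ in range(samples):
        psi = U @ np.kron(haar_qubit_state(), haar_qubit_state())
        rho_a = psi.reshape(2, 2) @ psi.reshape(2, 2).conj().T   # partial trace over B
        ent += 1.0 - np.real(np.trace(rho_a @ rho_a))            # linear entropy
    return ent / samples

CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)
print(entangling_power(CNOT))    # should approach 2/9 ~ 0.222 for CNOT
```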
Experiments with ultra-short laser pulses applied to a single atom invoke highly non-linear phenomena. Thus, they are strongly sensitive to the laser pulse parameters such as intensity, carrier envelope phase (CEP), frequency, polarization, and the number of cycles. Several techniques for retrieving pulse parameters have been developed, with state-of-the-art precision and accuracy achieved through in situ measurements that fit atomic ionization rates to values predicted by the exact time-dependent Schrödinger equation [1]. Ref. [2] identifies room for one to two orders of magnitude improvement in accuracy through the measurement of 2-D photoelectron momentum distributions. However, recovering laser pulse parameters by comparing these distributions to theoretical predictions is a complicated inverse problem. Here, we propose a machine learning approach, namely, to train a convolutional neural network in a supervised way to interpolate between 2-D momentum distributions for a finite set of laser pulse parameters, and then to use it to predict those parameters given a new momentum distribution. We find that, upon training, the network can predict pulse parameters with satisfactory accuracy, e.g. with a mean absolute error of less than 1% for the intensity. The model may serve as a pretrained architecture for further fine-tuning on real-world data from attosecond laboratories.
[1] M. G. Pullen et al., Phys. Rev. A 87, 053411 (2013)
[2] A. S. Maxwell et al., Phys. Rev. A 103, 043519 (2021)
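A sketch of the kind of supervised CNN regression described above is given below: a small network mapping a 2-D momentum distribution to a few pulse parameters. The layer sizes, the number of target parameters and the random toy data are illustrative assumptions, not the trained model of this work.

```python
import torch
import torch.nn as nn

class MomentumMapRegressor(nn.Module):
    """CNN mapping a 2-D photoelectron momentum distribution to pulse parameters
    (e.g. intensity, CEP, duration); layer sizes are illustrative only."""
    def __init__(self, n_params=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(4),
        )
        self.head = nn.Sequential(nn.Flatten(), nn.Linear(64 * 4 * 4, 128),
                                  nn.ReLU(), nn.Linear(128, n_params))

    def forward(self, x):                 # x: (batch, 1, Np, Np) momentum maps
        return self.head(self.features(x))

model = MomentumMapRegressor()
maps = torch.rand(8, 1, 64, 64)           # stand-in for simulated distributions
targets = torch.rand(8, 3)                # normalized pulse parameters
loss = nn.functional.mse_loss(model(maps), targets)
loss.backward()
```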
The state space of a quantum-mechanical system grows exponentially in the number of its classical degrees of freedom. Thus, efficient approximations are crucial for extracting physical information from this vast space. In the variational approach, computations are performed on trial states determined by a tractable number of parameters. Neural quantum states (NQS) provide a large family of such trial states by using a neural-network ansatz to parametrize probability amplitudes. NQS-based methods can be applied to learning both ground states and dynamics of quantum many-body systems.
In this poster, we present our current efforts in applying NQS methods to simulating strongly correlated quantum systems in and out of equilibrium. In particular, we highlight our recent work on understanding the stability properties of time-evolution algorithms for NQS based on the time-dependent variational Monte Carlo method (Hofmann et al., SciPost Phys. 12, 2022). Furthermore, we will present results of our ongoing research into the application of NQS for representing states in quantum spin liquid systems. Our computational work is based on NetKet, a collaboratively developed open-source software framework providing models and algorithms for machine learning in quantum many-body physics (Carleo et al., SoftwareX 10, 2019; Vicentini et al., arXiv:2112.10526).
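For readers unfamiliar with the ansatz, the sketch below illustrates the basic NQS idea in plain numpy: a restricted Boltzmann machine with complex parameters assigns a (log-)amplitude to each spin configuration. The actual calculations in this work use NetKet rather than this toy implementation.

```python
import numpy as np

class RBMState:
    """Neural quantum state: psi(s) parametrized by an RBM with complex weights."""
    def __init__(self, n_spins, n_hidden, seed=0):
        rng = np.random.default_rng(seed)
        scale = 0.01
        self.a = scale * (rng.normal(size=n_spins) + 1j * rng.normal(size=n_spins))
        self.b = scale * (rng.normal(size=n_hidden) + 1j * rng.normal(size=n_hidden))
        self.W = scale * (rng.normal(size=(n_hidden, n_spins))
                          + 1j * rng.normal(size=(n_hidden, n_spins)))

    def log_amplitude(self, s):
        """log psi(s) for a configuration s of +/-1 spins."""
        theta = self.b + self.W @ s
        return self.a @ s + np.sum(np.log(2 * np.cosh(theta)))

psi = RBMState(n_spins=10, n_hidden=20)
s = np.random.default_rng(1).choice([-1.0, 1.0], size=10)
print(psi.log_amplitude(s))
```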
In this paper, we present a model for detecting GaN pyramids in SEM images which relies heavily on data augmentation, owing to the complexity of the microscopic structures. Because real images are hard to prepare and label, a procedure has been developed to generate synthetic images for training the algorithm. In the next stage, the YOLO algorithm is employed for object detection. A minimum confidence of 70% for detecting real objects has been achieved, and the training and test accuracy and loss show good convergence.
Variational methods aim to approximate the quantum states of interest efficiently. Recently, artificial neural networks have been used as the variational ansatz to represent the wave function. These variational states are known as neural network quantum states (NQSs). The success of these NQSs in finding the ground states of spin systems has motivated researchers to explore their capabilities in other, more complicated systems. One such complicated system is that of magnetic skyrmions, which are topologically nontrivial. These quantum skyrmions are stabilized by the Dzyaloshinskii-Moriya interaction (DMI), an antisymmetric spin exchange energy term. In this work, we study the formation of quantum skyrmions in the ground state of the Heisenberg Hamiltonian in the presence of the DMI term using NQSs. We show that a stable skyrmion phase exists as the ground state and can be obtained with relatively cheap-to-train feed-forward neural networks. We also test the limits of different neural network architectures in describing the skyrmion phase and compare their performance versus cost.
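For reference, a commonly used form of such a Hamiltonian is given below; the signs, the presence of a Zeeman term, and the lattice geometry are conventions that may differ from the model actually studied here.

```latex
H = -J \sum_{\langle i,j\rangle} \mathbf{S}_i \cdot \mathbf{S}_j
    + \sum_{\langle i,j\rangle} \mathbf{D}_{ij} \cdot \left( \mathbf{S}_i \times \mathbf{S}_j \right)
    - \mathbf{B} \cdot \sum_i \mathbf{S}_i
```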
Data in machine learning scenarios is typically scattered over a large number of files. This comes with a number of undesired side effects. First, operating systems are not designed for storing thousands of files in a flat file system. As a result, in the worst case a simple scan of a directory no longer terminates in reasonable time. Implicitly called operations like user name resolution and sorting increase the execution time of a scan significantly when thousands of files are involved. Second, storing small files wastes disc space: a file always occupies at least one disc cluster, so every file smaller than a disc cluster blocks space that is not used. Further, whenever metadata is involved, the connection between the metadata and the stored files has to be implemented by the scientist. This leads to a development overhead for each individual dataset and application. Finally, the processes of storing and sharing data become increasingly inefficient the more files and individual scripts are involved.
For these reasons, digital asset management systems (DAMS) are already popular in other fields, such as photography or music. However, DAMS are hardly used in science, mainly due to a lack of available systems. To close this gap, we present ScienceFiles (SciFi), an embedded DAMS specially developed for scientific data. It combines a traditional relational database and a key-value store to store data and query metadata efficiently. SciFi consists of an extensible framework and a shell which serves as a stand-alone DAMS. For providing access to the data stored in SciFi for machine learning, we extended the Dataset and DataLoader classes of PyTorch. To ensure the usability of our solution, we chose a lightweight design that runs on laptops and lab PCs without requiring special permissions or an installation. In detail, SciFi provides a number of features and advantages over exclusively using a file system.
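Since the SciFi API itself is not spelled out in the abstract, the sketch below only illustrates the general pattern of such an extension: a PyTorch Dataset that reads samples from a single embedded database file (plain SQLite here as a stand-in for SciFi's combined relational/key-value store) instead of thousands of individual files. All table and column names are assumptions.

```python
import io
import sqlite3

import numpy as np
import torch
from torch.utils.data import Dataset, DataLoader

class EmbeddedStoreDataset(Dataset):
    """PyTorch Dataset reading samples from one embedded database file instead of
    thousands of individual files (SQLite is only a stand-in for SciFi here)."""

    def __init__(self, db_path):
        self.db = sqlite3.connect(db_path)
        self.ids = [row[0] for row in
                    self.db.execute("SELECT id FROM assets WHERE label IS NOT NULL")]

    def __len__(self):
        return len(self.ids)

    def __getitem__(self, index):
        blob, label = self.db.execute(
            "SELECT data, label FROM assets WHERE id = ?", (self.ids[index],)
        ).fetchone()
        array = np.load(io.BytesIO(blob))     # sample stored as a serialized array
        return torch.from_numpy(array), label

# Usage with a standard DataLoader (the database file name is a placeholder):
# loader = DataLoader(EmbeddedStoreDataset("experiment.db"), batch_size=32, shuffle=True)
```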
Recently proposed spintronic devices use magnetic skyrmions as bits of information. The reliable detection of those chiral magnetic objects is an indispensable requirement. Yet, the high mobility of magnetic skyrmions leads to their stochastic motion at finite temperatures, which hinders the precise measurement of the topological numbers.
Here, we demonstrate the successful training of artificial neural networks to reconstruct the skyrmion number in confined geometries from time-integrated, dimensionally reduced data.
Our results demonstrate that the topological charge can be recovered from a time-averaged measurement, and hence from a smeared dynamic skyrmion ensemble, which is of immediate relevance to the interpretation of experimental results as well as to skyrmion-based computing and memory concepts.
X-ray free-electron lasers (XFELs) provide a powerful tool to probe atomic and molecular dynamics with exceptional temporal and spatial resolution. For a quantitative comparison between experimental results and their simulated theoretical counterparts, however, a precise characterisation of the X-ray pulse profile is essential. Generally, the pulse profile exhibits a non-uniform photon distribution that depends on space and time coordinates and hence heavily affects the interaction between the X-ray pulse and the target. The determination of the pulse profile, referred to as calibration, constitutes a major experimental challenge, yet it is indispensable.
Here, we propose a calibration scheme utilising charge state distributions (CSD) of light noble gas atoms. The corresponding experiments can be performed with little effort and, subsequently, be used for the calibration. In order to perform the pulse calibration, we employ high-level electronic structure simulations and a machine learning-based numerical optimisation technique called Bayesian optimisation (BO). The application of BO to experimental and theoretical data constitutes the main focus of our work.
We demonstrate that BO can accomplish the optimisation tasks efficiently and with high accuracy. We further show that the proposed calibration based on charge state distributions determines the spatial profile as well as the pulse duration of XFEL pulses with equivalent precision in comparison to a fully experimental determination. Therefore, the presented calibration scheme serves as a comprehensive, efficient, and accurate tool for XFEL pulse characterisation.
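A minimal sketch of the BO ingredient follows: a Gaussian-process surrogate with an expected-improvement acquisition minimizing the mismatch between measured and simulated charge state distributions. The objective function here is a toy placeholder for the electronic-structure simulation, and the parameter names and ranges are illustrative.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def csd_mismatch(pulse_params):
    """Placeholder objective: distance between measured and simulated charge
    state distributions for given pulse parameters (duration, fluence)."""
    duration, fluence = pulse_params
    return (duration - 12.0) ** 2 + (fluence - 3.5) ** 2     # toy minimum

bounds = np.array([[1.0, 30.0], [0.5, 10.0]])                # fs, fluence (arb. units)
rng = np.random.default_rng(0)
X = rng.uniform(bounds[:, 0], bounds[:, 1], size=(5, 2))     # initial random probes
y = np.array([csd_mismatch(x) for x in X])

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
for _ in range(25):
    gp.fit(X, y)
    candidates = rng.uniform(bounds[:, 0], bounds[:, 1], size=(2000, 2))
    mu, sigma = gp.predict(candidates, return_std=True)
    improvement = y.min() - mu
    z = improvement / np.maximum(sigma, 1e-12)
    ei = improvement * norm.cdf(z) + sigma * norm.pdf(z)     # expected improvement
    x_next = candidates[np.argmax(ei)]
    X = np.vstack([X, x_next])
    y = np.append(y, csd_mismatch(x_next))

print("best pulse parameters found:", X[np.argmin(y)])
```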
Unveiling the microscopic origins of quantum many-body phases dominated by the interplay of spin and charge degrees of freedom constitutes one of the central challenges in modern strongly correlated many-body physics. When holes hop through a background of insulating spins, they displace the spins, which in turn induces effective frustration in the magnetic background. However, the precise quantification of this effect in a quantum many-body system is an extremely challenging task.
We use Hamiltonian learning schemes to associate the hole-removed spin background with a purely magnetic Hamiltonian. This approach allows us to quantify the effect of the hole-motion on the spin background, using Fock space snapshots at intermediate temperatures, readily accessible to quantum gas microscopes.
In particular, we study a one-dimensional Fermi-Hubbard system, and reveal effects of charge correlations on the spin correlations through Hamiltonian reconstruction. We next consider a model in mixed-dimensions, where holes are restricted to move in one dimension, but spin couplings are two-dimensional, and establish a quantitative understanding of the interplay of spin and charge through the introduction of frustrating diagonal bonds.
We develop a variational approach to simulating the dynamics of open quantum and classical many-body systems using artificial neural networks. The parameters of a compressed representation of a probability distribution are adapted dynamically according to the Lindblad master equation or the Fokker-Planck equation, respectively, by employing a time-dependent variational principle. We illustrate our approach by solving the dissipative quantum Heisenberg model in one and two dimensions for up to 40 spins and by applying it to the simulation of confinement dynamics in the presence of dissipation [1]. Also, we use normalizing flows to variationally solve diffusive classical dynamics in high dimensions [2].
[1] M. Reh, M. Schmitt, M. Gärttner, Phys. Rev. Lett. 127, 230501 (2021)
[2] M. Reh, M. Gärttner, arXiv:2206.01927
How much does it cost to generate a target quantum state from another reference state?
This is a rather general question that has been discussed in quantum information for obvious reasons. In quantum computation it is desirable to obtain the result with the minimum set of gates. This number is, roughly speaking, the cost, and it is called complexity. In this talk, I will introduce different ways to compute the complexity of preparing ground states, i.e. the target state is the ground state of a given Hamiltonian. We will be interested in the situation where, on the way from the reference to the target, we cross a critical point. We will work through different examples and draw general consequences. We will calculate the complexity exactly for integrable models, like the anisotropic XY model, and for quasi-solvable ones, like the Dicke model, as well as present numerical results for non-integrable Hamiltonians (the ZZXZ model). We will discuss general properties through scaling hypotheses and find optimal ways to reach the target state. All this theory will be applied to real algorithms: we will calculate the circuit complexity for variational quantum eigensolvers (VQE) and adiabatic algorithms (with and without shortcuts). As a take-home message, we will show universal scaling relationships for circuit complexity. For systems of finite size, and depending on the critical exponents, this quantity can be subextensive, extensive or superextensive. In the thermodynamic limit, we will discuss how the complexity diverges.
Being able to efficiently represent mixed quantum states is essential in order to describe the effects of dissipation, such as those arising in Open Quantum Systems, or in order to represent the noisy outcome of a circuit executed on a present-day Quantum Computer.
The challenges in the description of such objects arise from the exponential growth of the Hilbert space and from the need to enforce the positive-definiteness of the resulting matrix.
A compact, physical representation of density matrices in terms of Neural Networks was originally proposed by Torlai and coworkers in 2018, based on the purification of a Restricted Boltzmann Machine, but that approach was limited to shallow networks.
In this talk we will discuss the Gram-Hadamard Density Operator (GHDO), a new deep neural-network architecture that can encode positive semi-definite density operators of exponential rank with polynomial resources.
We will then show how to embed an autoregressive structure in the GHDO to allow direct sampling of the probability distribution.
The application of machine learning techniques in many fields of theoretical physics has been very successful in recent years, leading to great improvements over existing standard methods.
In this work, we demonstrate that deploying deep generative machine learning models for estimating thermodynamic observables in lattice field theory is a promising route for addressing many drawbacks typical of Markov Chain Monte Carlo (MCMC) methods.
More specifically, we show that generative models can be used to estimate the absolute value of the free energy, in contrast to existing MCMC-based methods, which are limited to estimating free energy differences. Moreover, we combine this with two efficient sampling techniques, namely neural importance sampling (NIS) and neural HMC estimation (NHMC), and leverage the fact that certain deep generative models give access to a good approximation of the true Boltzmann distribution. We demonstrate the effectiveness of the proposed method for two-dimensional $\phi^4$ theory and compare it to MCMC-based methods in detailed numerical experiments.
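The key relation behind estimating the absolute free energy with importance sampling from a generative model is $F = -\log Z$ with $Z = \mathbb{E}_{x\sim q}[e^{-S(x)}/q(x)]$, where $q$ is the (normalized) model density. The toy numpy sketch below uses a one-site quartic action and a Gaussian stand-in for the trained flow, and compares the estimate to the exact result obtained by quadrature; it illustrates the estimator only, not the $\phi^4$ simulations reported here.

```python
import numpy as np
from scipy.stats import norm
from scipy.integrate import quad

def action(x):
    return 0.5 * x**2 + 0.1 * x**4            # toy one-site "lattice" action

# Stand-in for a trained flow: a Gaussian q(x) roughly matching exp(-S)/Z
q = norm(loc=0.0, scale=0.9)

rng = np.random.default_rng(0)
x = q.rvs(size=200_000, random_state=rng)
log_w = -action(x) - q.logpdf(x)               # log importance weights
# Numerically stable estimate of Z = E_q[exp(-S)/q] and of F = -log Z
log_Z = np.logaddexp.reduce(log_w) - np.log(len(x))
print("importance-sampling estimate of F:", -log_Z)

Z_exact, _ = quad(lambda t: np.exp(-action(t)), -np.inf, np.inf)
print("exact F by quadrature:            ", -np.log(Z_exact))
```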
Quantum scrambling is the process by which quantum information is spread within the degrees of freedom of many-body quantum systems. As such, understanding which features of a quantum system maximise this information spreading has become a recent topic of crucial importance. Graph theory provides a natural mathematical framework to encode the interactions of a quantum many-body system, and we thus employ it to study the properties of quantum scrambling as we vary the underlying graph of interactions. Predicting whether a particular quantum many-body system features strong quantum scrambling (chaotic system) or not (integrable system) is a delicate issue for which sophisticated, computationally expensive methods are needed. We use (i) a convolutional neural network and (ii) a graph neural network to better understand this integrable-to-chaotic transition and find that surprisingly simple graph-theoretic indices control it. In particular, we show that clustering coefficients can be used to predict the scrambling properties. While still a work in progress, we believe our results pave the way for a better understanding of how to maximize the spreading of quantum information in a controlled way.
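A sketch of the graph-theoretic side of such an analysis is shown below: clustering coefficients computed with networkx serve as features for a simple classifier separating two families of interaction graphs. The graph ensembles and labels are synthetic stand-ins for the chaotic/integrable distinction obtained from the full scrambling diagnostics.

```python
import networkx as nx
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def clustering_feature(graph):
    return [nx.average_clustering(graph)]

# Two synthetic families of interaction graphs standing in for the two classes
graphs, labels = [], []
for _ in range(200):
    graphs.append(nx.watts_strogatz_graph(20, 4, p=0.05,
                                          seed=int(rng.integers(1_000_000))))
    labels.append(0)
    graphs.append(nx.erdos_renyi_graph(20, 0.2, seed=int(rng.integers(1_000_000))))
    labels.append(1)

X = np.array([clustering_feature(g) for g in graphs])
y = np.array(labels)
clf = LogisticRegression().fit(X[::2], y[::2])          # train on every other graph
print("held-out accuracy:", clf.score(X[1::2], y[1::2]))
```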
Scanning tunneling microscopy (STM) is an important tool for imaging surfaces at the atomic scale that allows significant amounts of data to be acquired in comparably short time. Examining large ensembles of molecules in STM images can therefore be a difficult and time-consuming task. We present a method to recognize chirality within experimentally observed self-assembled molecular structures using the convolutional neural network (CNN) based object detection architecture YOLOv5, classifying unit cells in the image into one of the two chiral orientations present on the surface.
To train the neural network, a sufficient amount of correctly labeled images is necessary. To obtain such data and labels, we utilize a method to create realistic-looking synthetic STM images at varying zoom levels, containing lifelike properties such as noise and defects, along with the corresponding labels.
Using this synthetic data, we trained a model capable of classifying synthetic images at sizes ranging from 8nm to 200nm with high performance. Evaluations of the CNN's predictions for real images show that this network trained on synthetic data can categorize chirality in real images.
Experimental studies of charge transport through single molecules often rely on break junction setups, where molecular junctions are repeatedly formed and broken while measuring the conductance, leading to a statistical distribution of conductance values.
Modeling this experimental situation and the resulting conductance histograms is challenging for theoretical methods, as computations need to capture structural changes in experiments, including the statistics of junction formation and rupture.
This type of extensive structural sampling implies that even when evaluating conductance from computationally efficient electronic structure methods, which typically are of reduced accuracy, the evaluation of conductance histograms is too expensive to be a routine task.
Highly accurate quantum transport computations are only computationally feasible for a few selected conformations and thus necessarily ignore the rich conformational space probed in experiments.
To overcome these limitations, we investigate the potential of machine learning for modeling conductance histograms, in particular by Gaussian process regression. We show that by selecting specific structural parameters as features, Gaussian process regression can be used to efficiently predict the zero-bias conductance from molecular structures, reducing the computational cost of simulating conductance histograms by an order of magnitude.
This enables the efficient calculation of conductance histograms even on the basis of computationally expensive first-principles approaches by effectively reducing the number of necessary charge transport calculations. We also provide an outlook on how techniques such as active learning can be used to push the boundaries of machine learning approaches.
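A minimal sketch of the regression step is given below: a Gaussian process mapping a few structural descriptors of the junction to a (log) zero-bias conductance, whose predictions are then histogrammed over the geometries of a pulling trajectory. The synthetic data and the choice of descriptors (electrode separation and tilt angle) are placeholders, not the specific features or first-principles results used in this work.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Synthetic stand-in for first-principles training data: two structural descriptors
# (electrode separation d in Angstrom, molecular tilt angle in degrees) and a toy
# log conductance that decays with separation and depends weakly on the tilt.
rng = np.random.default_rng(0)
d = rng.uniform(8.0, 14.0, size=200)
tilt = rng.uniform(0.0, 60.0, size=200)
features = np.column_stack([d, tilt])
log_G = -0.9 * d - 0.01 * tilt + 0.2 * rng.normal(size=200)

gp = GaussianProcessRegressor(
    kernel=RBF(length_scale=[1.0, 10.0]) + WhiteKernel(),
    normalize_y=True,
).fit(features, log_G)

# Predict log conductance for the many geometries visited along a simulated
# pulling trajectory and histogram them, instead of running a full transport
# calculation for every single snapshot.
trajectory = np.column_stack([np.linspace(8.0, 14.0, 5000),
                              30 + 10 * np.sin(np.linspace(0, 20, 5000))])
log_G_pred, log_G_std = gp.predict(trajectory, return_std=True)
histogram, bin_edges = np.histogram(log_G_pred, bins=50)
```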
The computational technology of highly expressive parametric neural-network functions has allowed machine learning to make a major foray into disciplines of the natural sciences. The neural network functions may be effectively "fitted" to a loss function, given in the form of a variational principle or virial theorem, to provide solutions to quantum mechanical problems. Recently, a few deep neural network models for solving the electronic Schrödinger equation were developed [1-3], demonstrating both outstanding computational efficiency and accurate results.
Here, we present a new quantum-flow-neural-network approach for obtaining variational solutions of the Schrödinger equation. At the core of the method is an invertible neural network composed with a general basis of orthogonal functions [4], which provides a more stable framework for the simultaneous optimization of the ground state and many excited states. We apply our approach to calculations of the vibrational energy levels of polyatomic molecules as well as of electronic energies in a single-active-electron approximation. The results show a considerable improvement of the variational convergence for the ground and excited states.
[1] M. T. Entwistle, Z. Schätzle, P. A. Erdman, J. Hermann, F. Noé, arXiv:2203.09472 (2022)
[2] D. Pfau, J. S. Spencer, A. G. D. G. Matthews, W. M. C. Foulkes, Phys. Rev. Research 2, 033429 (2020), arXiv:1909.02487
[3] J. Hermann, Z. Schätzle, F. Noé, Nat. Chem. 12, 891 (2020), arXiv:1909.08423
[4] K. Cranmer, S. Golkar, and D. Pappadopulo, arXiv:1904.05903
Neural networks are powerful feature extractors - but which features do they extract from their data? And how does the structure of the training data shape the representations they learn? We investigate these questions by introducing several synthetic data models, each of which accounts for a salient feature of modern data sets: low intrinsic dimension of images [1], symmetries and non-Gaussian statistics [2], and finally sequence memory [3]. Using tools from statistics and statistical physics, we will show how the learning dynamics and the representations are shaped by the statistical properties of the training data.
[1] Goldt, Mézard, Krzakala, Zdeborová (2020) Physical Review X 10 (4), 041044 [arXiv:1909.11500]
[2] Ingrosso & Goldt (2022) [arXiv:2202.00565]
[3] Seif, Loos, Tucci, Roldán, Goldt [arXiv:2205.14683]
We investigate the potential of tensor network based machine learning methods to scale to large image and text data sets. For that, we study how the mutual information between a subregion and its complement scales with the subsystem size $L$, similarly to how it is done in quantum many-body physics. We find that for text, the mutual information scales as a power law $L^\nu$ with an exponent close to a volume law, indicating that text cannot be efficiently described by 1D tensor networks. For images, the scaling is close to an area law, hinting that 2D tensor networks such as PEPS could have adequate expressibility. For the numerical analysis, we introduce a mutual information estimator based on autoregressive networks, and we also use convolutional neural networks in a neural estimator method.
The last few decades have seen significant advancements in materials research tools, allowing scientists to rapidly synthesize and characterize large numbers of samples - a major step toward high-throughput materials discovery. Autonomous research systems take the next step, placing synthesis and characterization under the control of machine learning. For such systems, machine learning controls experiment design, execution, and analysis, thus accelerating knowledge capture while also reducing the burden on experts. Furthermore, physical knowledge can be built into the machine learning, reducing the expertise needed by users, with the promise of eventually democratizing science. In this talk I will discuss autonomous systems being developed at NIST, with a particular focus on autonomous control over user facility measurement systems for materials characterization, exploration and discovery.
Fascination with topological materials originates from their remarkable response properties and exotic quasiparticles, which can be utilized in quantum technologies. In particular, large-scale efforts are currently focused on realizing topological superconductors and their Majorana excitations. However, determining the topological nature of superconductors with current experimental probes is an outstanding challenge. This shortcoming has become increasingly pressing due to rapidly developing designer platforms, which are theorized to display very rich topology and are better accessed by local probes than by transport experiments. We introduce a robust machine-learning protocol for classifying the topological states of two-dimensional (2D) chiral superconductors and insulators from local density of states (LDOS) data. Since the LDOS can be measured with standard experimental techniques, our protocol overcomes the almost three-decade-old problem of identifying the topological invariants of 2D superconductors [1].
[1] Paul Baireuther, Marcin Płodzień, Teemu Ojanen, Jakub Tworzydło, Timo Hyart, "Identifying Chern numbers of superconductors from local measurements" arXiv:2112.06777
There will be food and drinks during the poster session.
Since many concepts in theoretical physics are well known to scientists in the form of equations, it is possible to identify such concepts in non-conventional applications of neural networks to physics. In this talk, we examine what is learned by artificial neural networks, especially siamese networks in various physical domains. These networks intrinsically learn physical concepts like energies, or symmetry invariants. The corresponding equations can be retrieved from the networks, opening up avenues to artificial scientific discovery.
Veracity (uncertainty of data quality) and variety (heterogeneity of form and meaning of data) are two of the 4V challenges of Big Data. Both are issues for the FAIRness of materials-science results, concerning in particular interoperability, i.e., the "I" in FAIR. I will address what may enable us to use heterogeneous data for machine learning, e.g. data from different sources or of different quality. I will introduce metrics for measuring data quality and propose methods of unsupervised learning to explore large data spaces.
Inverse design problems in photonics typically operate in very high-dimensional parameter spaces which are notoriously difficult to navigate to find local or global optima. Even worse, it is known from practice that different devices can have comparable performance, leading to multimodal device distributions. This often confuses optimization routines, causing oscillations and failure to converge. Bayesian inference provides a framework to obtain complete device distributions and has been applied with great success to problems in astrophysics and particle physics. However, traditional Bayesian methods typically also suffer from the curse of dimensionality, which makes them unsuitable for problems in photonics with large parameter spaces. Recent developments in machine learning, and specifically in generative modeling, have enabled the exploration of high-dimensional problems with Bayesian inference.
In this talk I will show how we apply this framework to two problems in nanophotonic design with multimodal device distributions. We investigate a slit flanked by periodic corrugations and a stack of alternating layers of different dielectric materials. Using invertible neural networks allows us to identify symmetric solutions in these devices and to generate the complete distribution of devices with a certain behavior. This approach significantly outperforms other traditional generative models like variational autoencoders. From this discussion we motivate a general procedure to benchmark new models for photonic inverse design and to choose an appropriate architecture in the face of an unknown dataset.
References:
1. Frising M, Bravo-Abad J, Prins F, "Tackling Multimodal Device Distributions in Inverse Photonic Design using Invertible Neural Networks" (to be submitted)