2573 Publications

Classical Shadows for Quantum Process Tomography on Near-term Quantum Computers

Quantum process tomography is a powerful tool for understanding quantum channels and characterizing properties of quantum devices. Inspired by recent advances in quantum state tomography using classical shadows [H.-Y. Huang, R. Kueng, and J. Preskill, Nat. Phys. 16, 1050 (2020)], we have developed ShadowQPT, a classical shadow method for quantum process tomography. We introduce two related formulations, with and without ancilla qubits. ShadowQPT stochastically reconstructs the Choi matrix of the device, allowing an a posteriori classical evaluation of the device on arbitrary inputs with respect to arbitrary outputs. Using shadows, we then show how to compute overlaps, generate all k-weight reduced processes, and perform reconstruction via Hamiltonian learning. The latter two tasks are efficient for large systems, as the number of quantum measurements needed scales only logarithmically with the number of qubits. We also develop a number of additional approximations and improvements, including a pair-factorized Clifford shadow and a series of post-processing techniques that significantly enhance the accuracy of the recovered quantum channel. We have implemented ShadowQPT using both Pauli and Clifford measurements on the IonQ trapped-ion quantum computer for quantum processes of up to n = 4 qubits and achieved good performance.
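To illustrate the classical-shadow primitive this abstract builds on (not the authors' ShadowQPT implementation), here is a minimal single-qubit sketch of the Huang–Kueng–Preskill protocol with random Pauli-basis measurements: each measurement outcome yields a snapshot 3U†|b⟩⟨b|U − I, and averaging the snapshots reconstructs the state.

```python
import numpy as np

rng = np.random.default_rng(0)

# Basis-change unitaries: measuring U rho U^dag in the computational basis
# realizes a measurement of rho in the X, Y, or Z eigenbasis.
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
BASES = [H,                            # X basis
         H @ np.diag([1.0, -1.0j]),    # Y basis
         np.eye(2, dtype=complex)]     # Z basis

def shadow_estimate(rho, n_samples=20000):
    """Reconstruct a single-qubit density matrix from random Pauli-basis
    measurements via the shadow inversion rho_hat = mean(3 U^dag|b><b|U - I)."""
    acc = np.zeros((2, 2), dtype=complex)
    eye = np.eye(2)
    for _ in range(n_samples):
        U = BASES[rng.integers(3)]
        # Born-rule outcome probabilities in the rotated basis.
        probs = np.clip(np.real(np.diag(U @ rho @ U.conj().T)), 0.0, None)
        b = rng.choice(2, p=probs / probs.sum())
        ket = U.conj().T[:, [b]]       # column vector U^dag |b>
        acc += 3 * (ket @ ket.conj().T) - eye
    return acc / n_samples
```

The same inversion formula, tensored across qubits (or replaced by the Clifford-ensemble version), underlies the process-tomography setting, where it is applied to the Choi state of the channel.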

Simulating Polaritonic Ground States on Noisy Quantum Devices

The recent advent of quantum algorithms for noisy quantum devices offers a new route toward simulating strong light-matter interactions of molecules in optical cavities for polaritonic chemistry. In this work, we introduce a general framework for simulating electron-photon coupled systems on small, noisy quantum devices. The method is based on the variational quantum eigensolver (VQE) with the polaritonic unitary coupled cluster (PUCC) ansatz. To achieve chemical accuracy, we exploit various symmetries in qubit-reduction methods, such as electron-photon parity, and use recently developed error-mitigation schemes, such as the reference zero-noise extrapolation method. We explore the robustness of the VQE-PUCC approach across a diverse set of regimes for the bond length, cavity frequency, and coupling strength of the H2 molecule.
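As a sketch of the zero-noise-extrapolation idea mentioned above (generic Richardson-style ZNE, not the paper's specific reference-ZNE variant), one measures an observable at several amplified noise levels and extrapolates a polynomial fit back to zero noise. The exponential noise model and numbers below are purely illustrative.

```python
import numpy as np

def zne_extrapolate(scale_factors, noisy_values, order=2):
    """Fit a polynomial in the noise scale factor and evaluate it at zero
    noise (Richardson-style zero-noise extrapolation)."""
    coeffs = np.polyfit(scale_factors, noisy_values, deg=order)
    return np.polyval(coeffs, 0.0)

# Toy noise model: measured energy decays with noise, E(s) = E_exact * exp(-gamma*s).
exact = -1.137          # illustrative ground-state energy (hartree), not from the paper
gamma = 0.05
scales = np.array([1.0, 2.0, 3.0])
noisy = exact * np.exp(-gamma * scales)
mitigated = zne_extrapolate(scales, noisy)
```

For this smooth toy model the extrapolated value recovers the exact energy far more accurately than the raw (scale-1) measurement; on hardware the gain depends on how well the fit model matches the actual noise amplification.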

Ab Initio Calculations of Quantum Light–Matter Interactions in General Electromagnetic Environments

The emerging field of strongly coupled light-matter systems has drawn significant attention in recent years due to the prospect of altering physical and chemical properties of molecules and materials. Because this emerging field draws on ideas from both condensed-matter physics and quantum optics, it has attracted theoreticians from both fields. While the former employ accurate descriptions of the electronic structure of the matter, the description of the electromagnetic environment is often oversimplified. Conversely, the latter often employ sophisticated descriptions of the electromagnetic environment while using simple few-level approximations for the matter. Both approaches are problematic: oversimplified descriptions of the electronic system cannot capture effects such as light-induced structural changes, while oversimplified descriptions of the electromagnetic environment can lead to unphysical predictions because the light-matter interaction strengths are misrepresented. Here we overcome these shortcomings and present the first method that can quantitatively describe both the electronic system and general electromagnetic environments from first principles. We realize this by combining macroscopic QED (MQED) with Quantum Electrodynamical Density-functional Theory. To exemplify this approach, we consider an absorbing spherical cavity and study the impact of different parameters of both the environment and the electronic system on the transition from weak to strong coupling for different aromatic molecules. As part of this work, we also provide an easy-to-use tool to calculate the cavity coupling strengths for simple cavity setups. Our work is a step towards parameter-free ab initio calculations for strongly coupled quantum light-matter systems and will help bridge the gap between theoretical methods and experiments in the field.
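For orientation on the coupling strengths at stake, a textbook single-mode estimate (much cruder than the MQED treatment in the paper, and not the authors' tool) is ħg = d·sqrt(ħω / 2ε₀V) for a transition dipole d in a mode of frequency ω and effective volume V. The parameter values below are illustrative assumptions.

```python
import numpy as np

HBAR = 1.054571817e-34   # J s
EPS0 = 8.8541878128e-12  # F / m
QE = 1.602176634e-19     # C
DEBYE = 3.33564e-30      # C m

def single_mode_coupling_meV(dipole_debye, omega_eV, mode_volume_nm3):
    """Single-mode estimate hbar*g = d * sqrt(hbar*omega / (2 eps0 V)),
    returned in meV. Ignores absorption, mode structure, and retardation,
    all of which the MQED treatment handles properly."""
    d = dipole_debye * DEBYE
    omega = omega_eV * QE / HBAR
    V = mode_volume_nm3 * 1e-27
    hbar_g = d * np.sqrt(HBAR * omega / (2 * EPS0 * V))
    return hbar_g / QE * 1e3
```

For a 10 D dipole at 3 eV in a 100 nm³ nanocavity this gives a coupling on the order of 100 meV, i.e. deep in the regime where few-level or oversimplified-field approximations become unreliable.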

Implicit Adaptive Mesh Refinement for Dispersive Tsunami Propagation

M. Berger, Randall J. LeVeque

We present an algorithm to solve the dispersive depth-averaged Serre–Green–Naghdi equations using patch-based adaptive mesh refinement. These equations augment the nonlinear shallow water equations with higher-order derivative terms. The method has been implemented as a new component of the open-source GeoClaw software, which is widely used for modeling tsunamis, storm surge, and related hazards, improving its accuracy on shorter-wavelength phenomena. We use a formulation that requires solving an elliptic system of equations at each time step, making the method implicit. The adaptive algorithm allows different time steps on different refinement levels and solves the implicit equations level by level. Computational examples illustrate the stability and accuracy of the method on a radially symmetric test case and two realistic tsunami modeling problems, including a hypothetical asteroid impact creating a short-wavelength tsunami for which dispersive terms are necessary. Reproducibility of computational results: this paper has been awarded the "SIAM Reproducibility Badge: Code and data available" in recognition that the authors have followed reproducibility principles valued by SISC and the scientific computing community. Code and data that allow readers to reproduce the results are available at https://github.com/rjleveque/ImplicitAMR-paper and in the supplementary materials (ImplicitAMR-paper.zip [174KB]).
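The structural point, that dispersive terms turn each time step into an elliptic solve, can be seen in a minimal 1-D analogue (an illustrative resolvent problem, not GeoClaw's actual SGN discretization): an update of the form (I − αD²)u_new = u requires inverting a tridiagonal operator every step.

```python
import numpy as np

def implicit_dispersive_step(u, alpha, dx):
    """Solve (I - alpha * D2) u_new = u with homogeneous Dirichlet
    boundaries, the kind of elliptic solve that makes a dispersive
    update implicit (1-D sketch; alpha lumps the dispersive coefficients)."""
    n = len(u)
    main = np.full(n, 1.0 + 2.0 * alpha / dx**2)
    off = np.full(n - 1, -alpha / dx**2)
    A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
    return np.linalg.solve(A, u)
```

On a patch hierarchy, the paper's algorithm performs such solves level by level with level-dependent time steps; a production code would use a banded or iterative solver rather than the dense solve shown here.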


Influence of surface viscosities on the electrodeformation of a prolate viscous drop

H. Nganguia, Y.-N. Young, et al.

Contaminants and other agents are often present at the interface between two fluids, giving rise to rheological properties such as surface shear and dilatational viscosities. The dynamics of viscous drops with interfacial viscosities has attracted growing interest in recent years, owing to the influence of surface rheology on drop deformation and the surrounding flows. We investigate the effects of shear and dilatational viscosities on the electrodeformation of a viscous drop using the Taylor–Melcher leaky dielectric model. A large-deformation analysis yields an ordinary differential equation for the drop shape. Our model elucidates the contribution of each force to the overall deformation of the drop and reveals a rich range of dynamic behaviors reflecting the effects of surface viscosities and their dependence on the rheological and electrical properties of the system. We also examine the physical mechanisms underlying the observed behaviors by analyzing the surface dilatation and surface deformation.
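To show the kind of object a shape ODE of this type produces, here is a deliberately generic relaxation sketch, dD/dt = (D_∞ − D)/τ, integrated by forward Euler. This is not the paper's equation (which couples the deformation to electric and viscous stresses and to the surface viscosities); it only illustrates the integration pattern and the approach to a steady deformation.

```python
import numpy as np

def relax_deformation(d0, d_inf, tau, dt, steps):
    """Forward-Euler integration of an illustrative relaxation-type shape
    ODE, dD/dt = (D_inf - D)/tau, where D is a scalar deformation measure,
    D_inf a steady-state value, and tau a relaxation time (all hypothetical)."""
    d = d0
    history = [d]
    for _ in range(steps):
        d += dt * (d_inf - d) / tau
        history.append(d)
    return np.array(history)
```

In the paper's setting the analogues of D_∞ and τ depend on the electric capillary number and the surface viscosities, which is what produces the reported range of dynamic behaviors.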

December 23, 2023

Prebifurcation enhancement of imbibition-drainage hysteresis cycles

I. Lavi, et al.

The efficient transport of fluids through disordered media requires a thorough understanding of how the driving rate affects two-phase interface propagation. Despite our understanding of front dynamics in homogeneous environments, as well as how medium heterogeneities shape fluid interfaces at rest, little is known about the effects of localized topographical variations on large-scale interface dynamics. To gain physical insights into this problem, we study here oil-air displacements through an “imperfect” Hele-Shaw cell. Combining experiments, numerical simulations, and theory, we show that the flow rate dramatically alters the interface response to a porous constriction as one approaches the Saffman-Taylor instability, strictly under stable conditions. This gives rise to asymmetric imbibition–drainage hysteresis cycles that feature divergent extensions and nonlocal effects, all of which are aptly captured and explained by a minimal free boundary model.


Interpretable neural architecture search and transfer learning for understanding CRISPR–Cas9 off-target enzymatic reactions

Z. Zhang, A. Lamson, M. Shelley, O. Troyanskaya

Finely tuned enzymatic pathways control cellular processes, and their dysregulation can lead to disease. Creating predictive and interpretable models for these pathways is challenging because of the complexity of the pathways and of the cellular and genomic contexts. Here we introduce Elektrum, a deep learning framework that addresses these challenges with data-driven, biophysically interpretable models for determining the kinetics of biochemical systems. First, it uses in vitro kinetic assays to rapidly hypothesize an ensemble of high-quality Kinetically Interpretable Neural Networks (KINNs) that predict reaction rates. It then employs a novel transfer learning step, in which the KINNs are inserted as intermediary layers into deeper convolutional neural networks, fine-tuning the predictions for reaction-dependent in vivo outcomes. Elektrum makes effective use of the limited but clean in vitro data and the complex yet plentiful in vivo data that capture cellular context. We apply Elektrum to predict CRISPR-Cas9 off-target editing probabilities and demonstrate that it achieves state-of-the-art performance, regularizes neural network architectures, and maintains physical interpretability.
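The architectural idea of inserting an interpretable kinetic layer into a larger network can be sketched as follows. The Michaelis–Menten form, the helper names, and the composition below are hypothetical illustrations, not Elektrum's actual KINNs (which are learned from in vitro assays).

```python
import numpy as np

def kinn_layer(substrate, params):
    """Hypothetical kinetically interpretable layer: a Michaelis-Menten
    rate v = Vmax * S / (Km + S), whose parameters (Vmax, Km) retain a
    direct biophysical meaning."""
    vmax, km = params
    return vmax * substrate / (km + substrate)

def model(x, kinn_params, w, b):
    """Toy transfer-learning composition: a learned linear feature map
    produces an effective substrate concentration, which feeds the frozen
    interpretable kinetic layer, followed by a scalar readout."""
    s = np.maximum(x @ w, 0.0)      # learned upstream features (in vivo context)
    v = kinn_layer(s, kinn_params)  # interpretable kinetics, pretrained in vitro
    return v + b
```

Freezing or lightly fine-tuning the kinetic layer while training the surrounding layers is what lets clean in vitro kinetics regularize predictions on noisy in vivo data.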


Explainable Equivariant Neural Networks for Particle Physics: PELICAN

A. Bogatskii, Timothy Hoffman, David W. Miller, Jan T. Offermann, Xiaoyang Liu

PELICAN is a novel permutation-equivariant and Lorentz-invariant or -covariant aggregator network designed to overcome common limitations of architectures applied to particle physics problems. Compared with the many approaches that use non-specialized architectures, which neglect underlying physics principles and require very large numbers of parameters, PELICAN employs a fundamentally symmetry-group-based architecture that offers reduced complexity, increased interpretability, and strong raw performance. We present a comprehensive study of the PELICAN architecture in the context of both tagging (classification) and reconstructing (regression) Lorentz-boosted top quarks, including the difficult task of specifically identifying and measuring the $W$ boson inside the dense environment of the Lorentz-boosted top-quark hadronic final state. We also extend PELICAN to the tasks of identifying quark-initiated vs. gluon-initiated jets and of multi-class identification across five separate target categories of jets. When tested on the standard task of Lorentz-boosted top-quark tagging, PELICAN outperforms existing competitors with much lower model complexity and high sample efficiency. On the less common and more complex task of four-momentum regression, PELICAN also outperforms hand-crafted, non-machine-learning algorithms. We discuss the implications of symmetry-restricted architectures for the wider field of machine learning for physics.
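The permutation-equivariance constraint the abstract refers to can be demonstrated with a minimal Deep Sets style layer: each particle's output mixes its own features with a symmetric aggregate, so relabeling the particles permutes the output identically. (PELICAN itself works on pairwise Lorentz invariants with a richer family of equivariant maps; this sketch omits that.)

```python
import numpy as np

def equivariant_layer(X, a, b):
    """Permutation-equivariant linear map on per-particle features X
    of shape (n_particles, n_features):
    f(X)_i = a * X_i + b * mean_j X_j.
    Permuting the rows of X permutes the rows of f(X) the same way."""
    return a * X + b * X.mean(axis=0, keepdims=True)
```

Restricting a network to maps of this kind is what gives symmetry-based architectures their parameter efficiency: the layer above has two scalars where an unconstrained linear map over all particles would have n² of them.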


Analysis of the human kidney transcriptome and plasma proteome identifies markers of proximal tubule maladaptation to injury

Yumen Men, Emily Su, W. Mao, et al.

Acute kidney injury (AKI) is a major risk factor for long-term adverse outcomes, including chronic kidney disease. In mouse models of AKI, maladaptive repair of the injured proximal tubule (PT) prevents complete tissue recovery. However, evidence for PT maladaptation and its etiological relationship with complications of AKI is lacking in humans. We performed single-nucleus RNA sequencing of 120,985 nuclei in kidneys from 17 participants with AKI and seven healthy controls from the Kidney Precision Medicine Project. Maladaptive PT cells, which exhibited transcriptomic features of dedifferentiation and enrichment in pro-inflammatory and profibrotic pathways, were present in participants with AKI of diverse etiologies. To develop plasma markers of PT maladaptation, we analyzed the plasma proteome in two independent cohorts of patients undergoing cardiac surgery and a cohort of marathon runners, linked it to the transcriptomic signatures associated with maladaptive PT, and identified nine proteins whose genes were specifically up- or down-regulated by maladaptive PT. After cardiac surgery, both cohorts of patients had increased transforming growth factor–β2 (TGFB2), collagen type XXIII-α1 (COL23A1), and X-linked neuroligin 4 (NLGN4X) and had decreased plasminogen (PLG), ectonucleotide pyrophosphatase/phosphodiesterase 6 (ENPP6), and protein C (PROC). Similar changes were observed in marathon runners with exercise-associated kidney injury. Postoperative changes in these markers were associated with AKI progression in adults after cardiac surgery and post-AKI kidney atrophy in mouse models of ischemia-reperfusion injury and toxic injury. Our results demonstrate the feasibility of a multiomics approach to discovering noninvasive markers and associating PT maladaptation with adverse clinical outcomes.


Soft matter roadmap

Jean-Louis Barrat, Andrea J. Liu

Soft materials are usually defined as materials made of mesoscopic entities, often self-organised, that are sensitive to thermal fluctuations and to weak perturbations. Archetypal examples are colloids, polymers, amphiphiles, liquid crystals, and foams. The importance of soft materials in everyday commodity products, as well as in technological applications, is enormous, and controlling or improving their properties is the focus of many efforts. From a fundamental perspective, the possibility of manipulating soft material properties, by tuning interactions between constituents and by applying external perturbations, gives rise to an almost unlimited variety of physical properties. Together with the relative ease of observing and characterising them, this renders soft matter systems powerful model systems for investigating statistical physics phenomena, many of which are also relevant to hard condensed matter systems. Understanding the properties that emerge from mesoscale constituents still poses enormous challenges, which have stimulated a wealth of new experimental approaches, including the synthesis of new systems with, e.g., tailored self-assembling properties, and novel experimental techniques in imaging, scattering or rheology. Theoretical and numerical methods, and coarse-grained models, have become central to predicting the physical properties of soft materials, while computational approaches that also use machine learning tools are playing a progressively greater role in many investigations. This Roadmap intends to give a broad overview of recent and possible future activities in the field of soft materials, with experts covering various developments and challenges in material synthesis and characterisation, instrumental, simulation and theoretical methods, as well as general concepts.
