2005 Publications

Polaritonic Hofstadter Butterfly and Cavity-Control of the Quantized Hall Conductance

Vasil Rokaj, Markus Penz, Michael A. Sentef, Michael Ruggenthaler, A. Rubio
In a previous work [Phys. Rev. Lett. 123, 047202 (2019)], a translationally invariant framework called quantum-electrodynamical Bloch (QED-Bloch) theory was introduced for the description of periodic materials in homogeneous magnetic fields that are strongly coupled to the quantized photon field in the optical limit. For such systems, we show that QED-Bloch theory predicts the existence of fractal polaritonic spectra as a function of the cavity coupling strength. In addition, for the energy spectrum as a function of the relative magnetic flux, we find that a terahertz cavity can modify the standard Hofstadter butterfly. In the limit of no quantized photon field, QED-Bloch theory captures the well-known fractal spectrum of the Hofstadter butterfly and can be used for the description of 2D materials in strong magnetic fields, which are of great experimental interest. As a further application, we consider Landau levels under cavity confinement and show that the cavity alters the quantized Hall conductance and modifies the Hall plateaus.

Nonequilibrium phase transition in a driven-dissipative quantum antiferromagnet

Mona H. Kalthoff, Dante M. Kennes, Andrew J. Millis, Michael A. Sentef
A deeper theoretical understanding of driven-dissipative interacting systems and their nonequilibrium phase transitions is essential both to advance our fundamental physics understanding and to harness technological opportunities arising from optically controlled quantum many-body states. This paper provides a numerical study of dynamical phases and the transitions between them in the nonequilibrium steady state of the prototypical two-dimensional Heisenberg antiferromagnet with drive and dissipation. We demonstrate a nonthermal transition that is characterized by a qualitative change in the magnon distribution, from subthermal at low drive to a generalized Bose-Einstein form including a nonvanishing condensate fraction at high drive. A finite-size analysis reveals static and dynamical critical scaling at the transition, with a discontinuous slope of the magnon number versus driving field strength and critical slowing down at the transition point. Implications for experiments on quantum materials and polariton condensates are discussed.

Few-Femtosecond Dynamics of Free-Free Opacity in Optically Heated Metals

A. Niedermayr, M. Volkov, S. A. Sato, N. Hartmann, Z. Schumacher, S. Neb, A. Rubio, L. Gallmann, U. Keller
Interaction of light with an excited free-electron gas is a fundamental process spanning a large variety of fields in physics. The advent of femtosecond laser pulses and extreme-ultraviolet sources allowed one to put theoretical models to the test. Recent experimental and theoretical investigations of nonequilibrium aluminum, which is considered to be a good real-world representation of an ideal free-electron metal, showed that, despite significant progress, the transient hot-electron/cold-ion state is not well understood. In particular, the role of plasmon broadening, screening, and electron degeneracy remains unclear. Here, we experimentally investigate the free-free opacity in aluminum on the few-femtosecond timescale at laser intensities close to the damage threshold. Few-femtosecond time resolution allows us to track the purely electronic contribution to nonequilibrium absorption and unambiguously separate it from the slower lattice contribution. We support the experiments with ab initio calculations and a nearly free electron model in the Sommerfeld expansion. We find that the simplest independent-particle model with a fixed band structure is sufficient to explain the experimental findings without the need to include changes in screening or electron scattering, contrasting previous observations in 3d transition metals. We further find that electronic heating of a free-electron gas shifts the spectral weight of the absorption to higher photon energies, and we are able to distinguish the influence of the population change and the chemical potential shift based on the comparison of ab initio calculations to a simplified free-electron model. Our findings provide a benchmark for further investigations and modeling of dense nonequilibrium plasma under even more extreme conditions.
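The Sommerfeld-expansion treatment mentioned above can be illustrated with the textbook leading-order result for the chemical potential of a degenerate 3D electron gas, checked against a direct numerical solution of particle-number conservation. Parameter values below are illustrative (order of magnitude for aluminum), not fitted to the experiment.

```python
import numpy as np

# Leading-order Sommerfeld expansion vs. direct numerics for a 3D
# free-electron gas. Illustrative parameters, not the paper's fits.
E_F = 11.0   # Fermi energy in eV
kT = 0.5     # electron temperature in eV (hot, but still degenerate)

E = np.linspace(0.0, 60.0, 200001)
dE = E[1] - E[0]

def electron_number(mu):
    # N(mu, T) with a sqrt(E) density of states and Fermi-Dirac occupation
    occ = 1.0 / (np.exp(np.clip((E - mu) / kT, -700.0, 700.0)) + 1.0)
    return np.sum(np.sqrt(E) * occ) * dE

target = (2.0 / 3.0) * E_F**1.5   # the same integral evaluated at T = 0

# Bisection: find the mu(T) that conserves the electron number
lo, hi = 0.0, 2.0 * E_F
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if electron_number(mid) < target:
        lo = mid
    else:
        hi = mid
mu_numeric = 0.5 * (lo + hi)

# mu(T) ~ E_F * [1 - (pi^2 / 12) * (kT / E_F)^2]
mu_sommerfeld = E_F * (1.0 - (np.pi**2 / 12.0) * (kT / E_F)**2)
```

The electronic heating lowers the chemical potential slightly below the Fermi energy, and the leading Sommerfeld term reproduces the numerical result to high accuracy in this degenerate regime.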

A common framework for discriminability and perceived intensity of sensory stimuli

The perception of sensory attributes is often quantified through measurements of discriminability (the ability to detect small stimulus changes), as well as through direct judgements of appearance or intensity. Despite their ubiquity, the relationship between these two measurements remains controversial and unresolved. Here, we propose a framework in which they arise from different aspects of a common representation. Specifically, we assume that judgements of stimulus intensity (e.g., through rating scales) reflect the mean value of an internal representation, and discriminability reflects the ratio between the derivative of that mean value and the internal noise amplitude, as quantified by Fisher information. Combining the two measurements allows a unique identification of the internal representation's properties. As a central example, we show that Weber's law of perceptual discriminability can coexist with Stevens' power-law scaling of intensity ratings (for all exponents) when the noise amplitude increases in proportion to the representational mean. We extend this result beyond Weber's range by incorporating a more general, physiology-inspired form of noise, and show that the combination of noise properties and discriminability measurements accurately predicts intensity ratings across a variety of sensory modalities and attributes. Our framework unifies two major perceptual measurements, discriminability and rating scales, and proposes a neural interpretation for the underlying representations. It also teases apart two supra-threshold measurements, rating scales and supra-threshold perceptual distance, which were previously thought to measure the same perceptual aspect and whose conflation generated decades of mixed experimental results in the literature.
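The central claim, that Weber's law can coexist with Stevens' power-law ratings when noise grows in proportion to the representational mean, can be checked numerically. A minimal sketch with illustrative parameter values (not fits from the paper):

```python
import numpy as np

# Illustrative parameters, not the paper's fitted values
beta = 0.6   # Stevens power-law exponent
k = 2.0      # response gain
c = 0.1      # noise amplitude as a fraction of the mean

def mean_response(s):
    # Internal representation mean: Stevens' power law for ratings
    return k * s**beta

def noise_sd(s):
    # Noise amplitude proportional to the representational mean
    return c * mean_response(s)

def discriminability(s, ds=1e-6):
    # Square root of Fisher information for a Gaussian channel:
    # d(s) = m'(s) / sigma(s)
    deriv = (mean_response(s + ds) - mean_response(s - ds)) / (2 * ds)
    return deriv / noise_sd(s)

s = np.array([1.0, 2.0, 4.0, 8.0])
d = discriminability(s)
# d(s) is proportional to 1/s, so d(s) * s is constant: Weber's law,
# coexisting with power-law ratings of any exponent beta.
weber_product = d * s
```

Analytically, d(s) = beta / (c * s) for any exponent beta, so discrimination thresholds grow linearly with stimulus intensity regardless of the rating exponent.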


Effects of foveation on early visual representations

Human vision is far from uniform across the visual field. At fixation, we have a region of high acuity known as the fovea, and acuity decreases with distance from the fovea. However, peripheral vision is not just a blurrier version of foveal vision, and finding a precise description of how exactly they differ has been challenging. This thesis presents two investigations into how the processing of visual information changes with location in the visual field, both focused on the early visual system, as well as a description of a software package developed to support studies of the type found in the second study. In the first study, we use functional magnetic resonance imaging (fMRI) to measure how spatial frequency tuning changes with orientation and visual field location in human primary visual cortex (V1). V1 is among the best-characterized regions of the primate brain, and we know that nearly every neuron in V1 is selective for spatial frequency and orientation. We also know that V1 neurons' preferred spatial frequencies decrease with eccentricity, which aligns with the decrease in peak spatial frequency sensitivity found in perception. However, precise descriptions of this relationship have been elusive, due to the difficulty of characterizing tuning properties across the whole visual field. By utilizing fMRI's ability to measure responses across the entire cortex at once to a set of stimuli designed to efficiently map spatial frequency preferences, along with a novel analysis method that fits the responses of all voxels simultaneously, we present a compact description of this property, providing an important building block for future work.

In the second study, we build perceptual pooling models of the entire visual field from simple filter models inspired by retinal ganglion cells and V1 neurons. We then synthesize a large number of images to investigate how the sensitivities and invariances of these models align with those of the human visual system. This allows us to investigate to what extent the change in perception across the visual field can be accounted for by well-understood models of low-level visual processing, rather than requiring more cognitive phenomena or models with millions of parameters. Finally, I describe an open-source software package developed by members of the Simoncelli lab that provides four image synthesis methods in a shared, general framework. These methods were all developed in the lab over the past several decades and have been described in the literature, but their widespread use has been limited by the difficulty of applying them to new models. By leveraging the automatic differentiation built into a popular deep learning library, our package allows these synthesis methods to be used with arbitrary models, providing an important resource for the vision science community. Altogether, this thesis presents a step forward in understanding how visual processing differs across the visual field and, through the effort to share the code, data, and computational environment of the projects, provides resources for future scientists to build on.
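The key enabler described above is differentiability: once a model's responses can be differentiated with respect to its input, synthesis reduces to gradient descent on a response-matching objective. A minimal numpy sketch for a toy linear model, whose gradient is available in closed form (with autodiff the same loop applies to arbitrary nonlinear models; the model and sizes are illustrative, not the package's actual API):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "visual model": a random linear measurement of a 64-pixel image,
# with fewer measurements than pixels (so information is discarded).
# Illustrative stand-in for a differentiable perceptual model.
n_pix, n_meas = 64, 16
M = rng.standard_normal((n_meas, n_pix)) / np.sqrt(n_pix)

target = rng.standard_normal(n_pix)   # reference "image"
x = rng.standard_normal(n_pix)        # random initialization

# Gradient descent on 0.5 * ||M x - M target||^2
for _ in range(2000):
    x -= 0.3 * (M.T @ (M @ x - M @ target))

# x is a model "metamer": responses match, pixels generally do not
resp_err = np.linalg.norm(M @ x - M @ target)
pix_err = np.linalg.norm(x - target)
```

The synthesized image matches the model's responses while remaining physically different from the target, which is exactly the kind of probe used to compare model invariances with human perception.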


Flexible sensory information processing through targeted stochastic co-modulation

Caroline Haimerl

Humans and animals can quickly adapt to new task demands while retaining capabilities developed previously. Such flexible sensory-guided behavior requires reliable encoding of stimulus information in neural populations, and task-specific readout through selective combination of these responses. The former has been the topic of intensive study, but the latter remains largely a mystery. Here we propose that targeted stochastic gain modulation could support flexible readout of task-information from an encoding population. In experiments, we find that responses of neurons in area V1 of monkeys performing a visual orientation discrimination task exhibit low-dimensional comodulation. This modulation fluctuates rapidly, and is stronger in those neurons that are most informative for the behavioral task. We propose a theoretical framework in which this modulation serves as a label to facilitate downstream readout. We demonstrate that the shared modulatory fluctuations found in V1 can be used to decode from the recorded neural activity within a small number of training trials, consistent with observed behavior. Simulations of visual information processing in a hierarchical neural network demonstrate that learned, modulator-induced labels can accompany task-information across several stages to guide readout at a decision stage and thereby fine-tune the network without reorganization of the feedforward weights. This allows the circuit to reach high levels of performance in novel tasks with minimal training, outperforming previously proposed attentional mechanisms based on gain increases, while also being able to instantly revert back to the initial operating regime once task demands change. 
The theory predicts that the modulator label should be maintained across processing stages and indeed we find that the trial-by-trial modulatory signal estimated from V1 populations is also present in the activity of simultaneously recorded MT units, preferentially so if they are task-informative. Overall, these results provide a new framework for how intelligent systems can flexibly and robustly adapt to changes in task structure by adjusting information routing via a shared modulator label.
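A heavily simplified simulation of the proposed mechanism can illustrate the "label" idea. Here the modulator time course is assumed known rather than inferred (in the actual analysis it must be estimated from population activity), and the population sizes and coupling rule are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical encoding population; sizes and the coupling rule are
# illustrative, not fits to the recorded V1 data.
n_neurons, n_trials = 50, 2000
w = rng.standard_normal(n_neurons)      # task sensitivity of each neuron
coupling = np.abs(w)                    # modulator targets informative cells
s = rng.choice([-1.0, 1.0], n_trials)   # stimulus on each trial
g = rng.standard_normal(n_trials)       # shared stochastic modulator

# Responses: baseline plus tuned drive, multiplied by a fluctuating
# gain, plus private noise
drive = 5.0 + w[:, None] * s[None, :]
gain = 1.0 + 0.3 * coupling[:, None] * g[None, :]
r = gain * drive + 0.5 * rng.standard_normal((n_neurons, n_trials))

# Readout: regress each neuron's response on the modulator time course
# (assumed known here; in practice it is inferred from the population)
# and use the estimated couplings as a label for task-informativeness.
r_c = r - r.mean(axis=1, keepdims=True)
est_coupling = r_c @ g / (n_trials * g.var())
label_quality = np.corrcoef(est_coupling, coupling)[0, 1]
```

Because the gain fluctuations are strongest in informative neurons, the regression recovers which neurons carry task information without any labeled training signal, which is what allows a downstream decoder to be set up within few trials.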


Robust and interpretable denoising via deep learning

S. Mohan

In the past decade, convolutional neural networks (CNNs) have achieved state-of-the-art results in denoising. The goal of this work is to advance our understanding of these models and leverage this understanding to improve the current state of the art. We start by showing that CNNs systematically overfit the noise levels in the training set, and propose a new architecture, bias-free CNNs, which generalizes robustly to noise levels outside the training set. Bias-free networks are also locally linear, which enables direct analysis with linear-algebraic tools. We show that the denoising map can be visualized locally as a filter that adapts to both signal structure and noise level. Denoising CNNs, including bias-free CNNs, are typically trained using pairs of noisy and clean data. However, in many domains, like microscopy, clean data is generally not available. We develop a network architecture that performs unsupervised denoising for video data, i.e., we train only using noisy videos. We then build on the unsupervised denoising methodology and propose a new adaptive denoising paradigm, GainTuning, in which CNN models pre-trained on large datasets are adaptively and selectively adjusted for individual test images. GainTuning improves state-of-the-art CNNs on standard image-denoising benchmarks, particularly for test images differing systematically from the training data, either in noise distribution or image type. Finally, we explore the application of deep-learning-based denoising in scientific discovery through a case study in electron microscopy. To ensure that the denoised output is accurate, we develop a likelihood map that quantifies the agreement between real noisy data and denoised output (thus flagging denoising artifacts). In addition, we show that popular metrics for denoising fail to capture scientifically relevant details and propose new metrics to fill this gap.
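The local linearity of bias-free networks follows from positive homogeneity: with no additive bias terms, rescaling the input rescales the output, and the network equals its own Jacobian applied to the input. A toy sketch with dense layers standing in for convolutions (an illustration of the property, not the thesis architecture):

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny bias-free "network": matrix multiplies and ReLU, with no
# additive bias terms. Dense layers stand in for convolutions here.
W1 = rng.standard_normal((32, 16))
W2 = rng.standard_normal((16, 32))

def bias_free_net(x):
    return W2 @ np.maximum(W1 @ x, 0.0)

x = rng.standard_normal(16)

# 1) Positive homogeneity: rescaling the input (e.g., a different
#    noise level) rescales the output by the same factor.
alpha = 3.7
lhs = bias_free_net(alpha * x)
rhs = alpha * bias_free_net(x)

# 2) Local linearity: the network equals its own Jacobian applied to
#    x, so the denoising map is an input-adaptive linear filter A(x).
mask = (W1 @ x > 0).astype(float)
A = W2 @ (mask[:, None] * W1)
```

The matrix A is the locally linear filter that can be inspected directly, which is what makes the visualizations described above possible.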


Simple and statistically sound recommendations for analysing physical theories

Shehu S. AbdusSalam, F. Agocs, Benjamin C. Allanach, Peter Athron, Csaba Balázs, Emanuele Bagnaschi, Philip Bechtle, Oliver Buchmueller, Ankit Beniwal, Jihyun Bhom, et al.

Physical theories that depend on many parameters or are tested against data from many different experiments pose unique challenges to statistical inference. Many models in particle physics, astrophysics and cosmology fall into one or both of these categories. These issues are often sidestepped with statistically unsound ad hoc methods, involving intersection of parameter intervals estimated by multiple experiments, and random or grid sampling of model parameters. Whilst these methods are easy to apply, they exhibit pathologies even in low-dimensional parameter spaces, and quickly become problematic to use and interpret in higher dimensions. In this article we give clear guidance for going beyond these procedures, suggesting, where possible, simple methods for performing statistically sound inference, and recommending readily available software tools and standards that can assist in doing so. Our aim is to provide physicists lacking comprehensive statistical training with recommendations for reaching correct scientific conclusions, with only a modest increase in analysis burden. Our examples can be reproduced with the code publicly available at Zenodo.
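The contrast between interval intersection and a likelihood-based combination can be made concrete with two hypothetical Gaussian measurements of the same parameter (the numbers are illustrative and not taken from the article):

```python
import numpy as np

# Two hypothetical Gaussian measurements of the same parameter theta
mu = np.array([1.0, 4.0])    # central values
sig = np.array([1.0, 1.0])   # 1-sigma uncertainties

# Ad hoc method: intersect the 1-sigma intervals. Here the intervals
# [0, 2] and [3, 5] do not overlap, so the "estimate" is the empty set.
lo, hi = np.max(mu - sig), np.min(mu + sig)
empty = lo > hi

# Statistically sound combination: multiply the likelihoods. For
# Gaussians this reduces to inverse-variance weighting, which yields a
# well-defined estimate (and, separately, quantifies the tension).
w = 1.0 / sig**2
theta_hat = np.sum(w * mu) / np.sum(w)   # 2.5
sigma_hat = 1.0 / np.sqrt(np.sum(w))     # 1/sqrt(2)
```

Even in one dimension the ad hoc method fails outright here, while the likelihood product remains well defined; in higher dimensions the pathologies of intersection and grid sampling only get worse.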


Multiple-Allele MHC Class II Epitope Engineering by a Molecular Dynamics-Based Evolution Protocol

Rodrigo Ochoa, Victoria Alves Santos Lunardelli, Daniela Santoro Rosa, Alessandro Laio, P. Cossio

Epitopes that bind simultaneously to all human alleles of Major Histocompatibility Complex class II (MHC II) are considered one of the key factors for the development of improved vaccines and cancer immunotherapies. To engineer MHC II multiple-allele binders, we developed a protocol called PanMHC-PARCE, based on the unsupervised optimization of the epitope sequence by single-point mutations, parallel explicit-solvent molecular dynamics simulations and scoring of the MHC II-epitope complexes. The key idea is accepting mutations that not only improve the affinity but also reduce the affinity gap between the alleles. We applied this methodology to enhance a Plasmodium vivax epitope for multiple-allele binding. In vitro rate-binding assays showed that four engineered peptides were able to bind with improved affinity toward multiple human MHC II alleles. Moreover, we demonstrated that mice immunized with the peptides exhibited an interferon-gamma cellular immune response. Overall, the method enables the engineering of peptides with improved binding properties that can be used for the generation of new immunotherapies.
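The acceptance rule at the heart of the protocol, improve the average affinity while shrinking the gap between the best- and worst-binding allele, can be sketched as follows. The score convention (lower means stronger predicted binding) and the allele names are illustrative; the actual protocol scores explicit-solvent MD snapshots of the MHC II-epitope complexes.

```python
# Hypothetical acceptance rule in the spirit of PanMHC-PARCE; the
# scoring function and thresholds here are illustrative only.

def accept_mutation(old_scores, new_scores):
    """Accept a single-point mutation if it improves the mean predicted
    affinity across alleles AND shrinks the best-to-worst allele gap."""
    old_vals = list(old_scores.values())
    new_vals = list(new_scores.values())
    mean_improves = sum(new_vals) / len(new_vals) < sum(old_vals) / len(old_vals)
    gap_shrinks = (max(new_vals) - min(new_vals)) < (max(old_vals) - min(old_vals))
    return mean_improves and gap_shrinks

old = {"DRB1*01:01": -6.0, "DRB1*04:01": -3.0, "DRB1*07:01": -5.0}
new = {"DRB1*01:01": -6.2, "DRB1*04:01": -4.5, "DRB1*07:01": -5.1}
accepted = accept_mutation(old, new)   # True: better mean, smaller gap
```

Requiring both conditions biases the search toward pan-allele binders rather than peptides that bind one allele very strongly and the others poorly.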


A direct measurement of the distance to the Galactic center using the kinematics of bar stars

H. W. Leung, J. Bovy, T. Mackereth, J. Hunt, R. R. Lane, J. C. Wilson

The distance to the Galactic center R0 is a fundamental parameter for understanding the Milky Way, because all observations of our Galaxy are made from our heliocentric reference point. The uncertainty in R0 limits our knowledge of many aspects of the Milky Way, including its total mass, the relative mass of its major components, and the orbital parameters of stars employed in chemo-dynamical analyses. While measurements of R0 have been improving for over a century, measurements in the past few years from a variety of methods still span a wide range, roughly 8.0 to 8.5 kpc. The most precise measurements to date have to assume that Sgr A* is at rest at the Galactic center, which may not be the case. In this paper, we use maps of the kinematics of stars in the Galactic bar derived from APOGEE DR17 and Gaia EDR3 data, augmented with spectro-photometric distances from the astroNN neural-network method. These maps clearly display the minimum in the rotational velocity vT and the quadrupolar signature in the radial velocity vR expected for stars orbiting in a bar. From the minimum in vT, we measure R0 = 8.23 ± 0.12 kpc. We validate our measurement using realistic N-body simulations of the Milky Way. We further measure the pattern speed of the bar to be Ωbar = 40.08 ± 1.78 km s⁻¹ kpc⁻¹. Because the bar forms out of the disk, its center is manifestly the barycenter of the bar+disk system, and our measurement is therefore the most robust and accurate measurement of R0 to date.

April 26, 2022