2697 Publications

Steps of actin filament branch formation by Arp2/3 complex investigated with coarse-grained molecular dynamics

Shuting Zhang, Dimitrios Vavylonis

The nucleation of actin filament branches by the Arp2/3 complex involves activation through nucleation promotion factors (NPFs), recruitment of actin monomers, and binding of the complex to the side of an actin filament. Because of the large system size and processes that involve flexible regions and diffuse components, simulations of branch formation using all-atom molecular dynamics are challenging. We applied a coarse-grained model that retains amino-acid-level information and allows molecular dynamics simulations in implicit solvent, with globular domains represented as rigid bodies and flexible regions allowed to fluctuate. We used recent electron microscopy structures of the inactive Arp2/3 complex bound to NPF domains and of the activated Arp2/3 complex bound to the mother actin filament. We studied interactions of the Arp2/3 complex with the activating VCA domain of the NPF Wiskott-Aldrich syndrome protein, with actin monomers, and with an actin filament. We found stable configurations with one or two actin monomers bound along the branch filament direction and with the CA domain of VCA associated with the strong and weak binding sites of the Arp2/3 complex, supporting prior structural studies and validating our approach. We reproduced delivery of actin monomers and CA to the Arp2/3 complex under different conditions, providing insight into mechanisms proposed in previous studies. Simulations of the active Arp2/3 complex bound to a mother actin filament indicate the contribution of each subunit to the binding. Addition of the C-terminal tail of Arp2/3 complex subunit ArpC2, which is missing in the cryo-EM structure, increased binding affinity, indicating a possible stabilizing role for this tail.
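
As a cartoon of the coarse-grained scheme described above (rigid globular domains plus fluctuating flexible regions in implicit solvent), the sketch below runs overdamped Langevin dynamics on a bead chain whose first block translates as a single rigid unit while the tail beads move freely. The 2-D setting, all parameters, and the omission of rigid-body rotation are simplifying assumptions for illustration, not the model of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Bead chain in implicit solvent (overdamped Langevin).  The first block
# of beads is treated as one rigid unit that only translates under its
# net force; the tail beads fluctuate individually.  Rotation of the
# rigid block is omitted for brevity.
n_rigid, n_flex = 5, 5
x = np.cumsum(np.ones((n_rigid + n_flex, 2)), axis=0)  # straight chain
k, b0 = 10.0, np.sqrt(2.0)      # bond spring constant and rest length
dt, kT = 1e-3, 1.0

def bond_forces(x):
    f = np.zeros_like(x)
    d = x[1:] - x[:-1]
    r = np.linalg.norm(d, axis=1, keepdims=True)
    fb = k * (r - b0) * d / r   # harmonic spring along each bond
    f[:-1] += fb                # pulls bead i toward bead i+1 if stretched
    f[1:] -= fb
    return f

for _ in range(5000):
    f = bond_forces(x)
    noise = np.sqrt(2 * kT * dt) * rng.standard_normal(x.shape)
    x[:n_rigid] += dt * f[:n_rigid].mean(0) + noise[:n_rigid].mean(0)  # rigid
    x[n_rigid:] += dt * f[n_rigid:] + noise[n_rigid:]                  # flexible

# internal geometry of the rigid block is preserved; the tail's is not
print(np.linalg.norm(x[1] - x[0]), np.linalg.norm(x[-1] - x[-2]))
```

The rigid-block update applies one common displacement to its beads, which is exactly why its internal geometry cannot drift, while the tail samples thermal fluctuations bead by bead.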

A fast, high-order numerical method for the simulation of single-excitation states in quantum optics

Jeremy Hoskins, J. Kaye, M. Rachh, John C. Schotland

We consider the numerical solution of a nonlocal partial differential equation which models the process of collective spontaneous emission in a two-level atomic system containing a single photon. We reformulate the problem as an integro-differential equation for the atomic degrees of freedom, and describe an efficient solver for the case of a Gaussian atomic density. The problem of history dependence arising from the integral formulation is addressed using sum-of-exponentials history compression. We demonstrate the solver on two systems of physical interest: in the first, an initially excited atom decays into a photon by spontaneous emission, and in the second, a photon pulse is used to excite an atom, which then decays.
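
The sum-of-exponentials idea mentioned above can be illustrated in a few lines: once the memory kernel is written as a sum of terms w_k exp(-lam_k t), the history integral can be updated recursively with O(number of modes) work per step instead of re-integrating the full past. The kernel, source term, and trapezoidal quadrature below are illustrative assumptions, not the paper's solver.

```python
import numpy as np

# Kernel already in sum-of-exponentials form, K(t) = sum_k w_k exp(-lam_k t);
# fitting w_k, lam_k to a given kernel is the compression step elided here.
w = np.array([1.0, 0.5])
lam = np.array([1.0, 3.0])

def f(t):                       # arbitrary smooth source term
    return np.sin(t)

T, n = 4.0, 4000
dt = T / n
ts = np.linspace(0.0, T, n + 1)

# Direct trapezoidal evaluation of I(T) = int_0^T K(T-s) f(s) ds,
# which costs O(n) work every time step if done naively.
g = (w[:, None] * np.exp(-lam[:, None] * (T - ts))).sum(0) * f(ts)
direct = dt * (g.sum() - 0.5 * (g[0] + g[-1]))

# History compression: each exponential mode carries one scalar of state,
# updated recursively, so each step costs O(number of modes).
state = np.zeros(2)
decay = np.exp(-lam * dt)
for i in range(n):
    inc = 0.5 * dt * (decay * f(ts[i]) + f(ts[i + 1]))  # local trapezoid
    state = decay * state + inc
compressed = w @ state

print(direct, compressed)       # agree to quadrature accuracy
```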

Coordinated drift of receptive fields in Hebbian/anti-Hebbian network models during noisy representation learning

Shanshan Qin, S. Farashahi, D. Lipshutz, A. Sengupta, D. Chklovskii, Cengiz Pehlevan

Recent experiments have revealed that neural population codes in many brain areas continuously change even when animals have fully learned and stably perform their tasks. This representational 'drift' naturally leads to questions about its causes, dynamics and functions. Here we explore the hypothesis that neural representations optimize a representational objective with a degenerate solution space, and noisy synaptic updates drive the network to explore this (near-)optimal space causing representational drift. We illustrate this idea and explore its consequences in simple, biologically plausible Hebbian/anti-Hebbian network models of representation learning. We find that the drifting receptive fields of individual neurons can be characterized by a coordinated random walk, with effective diffusion constants depending on various parameters such as learning rate, noise amplitude and input statistics. Despite such drift, the representational similarity of population codes is stable over time. Our model recapitulates experimental observations in the hippocampus and posterior parietal cortex and makes testable predictions that can be probed in future experiments.
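
A minimal instance of the mechanism proposed above (degenerate optimum plus noisy synaptic updates yields drift): Oja's Hebbian rule on isotropic 2-D inputs, for which every unit weight vector is optimal, so sampling noise makes the weight vector diffuse along the circle of solutions. The single-neuron setting and parameters are illustrative assumptions, not the full Hebbian/anti-Hebbian network of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Oja's rule on isotropic 2-D inputs: every unit vector is an optimal
# receptive field, so the solution space is a circle and sampling noise
# makes the weight vector diffuse along it (drift without loss of coding).
eta, steps = 0.02, 20000
w = np.array([1.0, 0.0])
angles = np.empty(steps)
for t in range(steps):
    x = rng.standard_normal(2)      # isotropic input sample
    y = w @ x                       # neuron output
    w += eta * y * (x - y * w)      # Hebbian growth + implicit normalization
    angles[t] = np.arctan2(w[1], w[0])

angles = np.unwrap(angles)
print(np.linalg.norm(w))            # norm stays near 1 (stable tuning)
print(angles[-1] - angles[0])       # preferred direction random-walks
```

The weight norm (the "quality" of the code) is held near 1 by the rule, while the preferred direction wanders: a one-neuron version of stable population coding with drifting receptive fields.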

An empirical model of the Gaia DR3 selection function

Tristan Cantat-Gaudin, Morgan Fouesneau, Hans-Walter Rix, ..., D. Hogg, ..., A. Price-Whelan, et al.

Interpreting and modelling astronomical catalogues requires an understanding of the catalogues' completeness or selection function: objects of what properties had a chance to end up in the catalogue. Here we set out to empirically quantify the completeness of the overall Gaia DR3 catalogue. This task is not straightforward because Gaia is the all-sky optical survey with the highest angular resolution to date and no consistent "ground truth" exists to allow direct comparisons. However, well-characterised deeper imaging enables an empirical assessment of Gaia's G-band completeness across parts of the sky. On this basis, we devised a simple analytical completeness model of Gaia as a function of the observed G magnitude and position over the sky, which accounts for both the effects of crowding and the complex Gaia scanning law. Our model depends on only a single quantity: the median magnitude M10 in a patch of the sky of catalogued sources with astrometric_matched_transits ≤ 10. M10 reflects elementary completeness decisions in the Gaia pipeline and is computable from the Gaia DR3 catalogue itself, and is therefore applicable across the whole sky. We calibrate our model using the Dark Energy Camera Plane Survey (DECaPS) and test its predictions against Hubble Space Telescope observations of globular clusters. We find that our model predicts Gaia's completeness values to a few per cent across the sky. We make the model available as part of the gaiasf Python package built and maintained by the GaiaUnlimited project: this https URL
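
To make the role of M10 concrete, here is a toy selection function in which completeness falls smoothly for magnitudes near a patch's M10. The sigmoid shape, the `width` parameter, and the function name are illustrative assumptions, not the calibrated model of the paper or the gaiasf API.

```python
import numpy as np

def completeness(G, M10, width=0.5):
    """Toy selection function: fraction of sources of magnitude G that make
    it into the catalogue, dropping smoothly around the patch's M10 value.
    The sigmoid shape and `width` are illustrative, not the fitted model."""
    return 1.0 / (1.0 + np.exp((G - M10) / width))

# A crowded patch (lower M10) loses sources at brighter magnitudes.
print(completeness(20.0, M10=20.5))  # sparse field: mostly complete
print(completeness(20.0, M10=19.5))  # crowded field: mostly incomplete
```

The key design point survives the simplification: a single sky-dependent scalar (M10) shifts a universal magnitude-dependent completeness curve, which is what makes the model computable from the catalogue itself.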

Adaptive Tuning for Metropolis Adjusted Langevin Trajectories

Lionel Riou-Durand, Pavel Sountsov, Jure Vogrinc, C. Margossian, Sam Power

Hamiltonian Monte Carlo (HMC) is a widely used sampler for continuous probability distributions. In many cases, the underlying Hamiltonian dynamics exhibit a phenomenon of resonance which decreases the efficiency of the algorithm and makes it very sensitive to hyperparameter values. This issue can be tackled efficiently, either via the use of trajectory length randomization (RHMC) or via partial momentum refreshment. The second approach is connected to the kinetic Langevin diffusion, and has been mostly investigated through the use of Generalized HMC (GHMC). However, GHMC induces momentum flips upon rejections, causing the sampler to backtrack and waste computational resources. In this work we focus on a recent algorithm bypassing this issue, named Metropolis Adjusted Langevin Trajectories (MALT). We build upon recent strategies for tuning the hyperparameters of RHMC which target a bound on the Effective Sample Size (ESS), and adapt them to MALT, thereby enabling the first user-friendly deployment of this algorithm. We construct a method to optimize a sharper bound on the ESS and reduce the estimator variance. Easily compatible with parallel implementation, the resultant Adaptive MALT algorithm is competitive in terms of ESS rate and hits useful tradeoffs in memory usage when compared to GHMC, RHMC and NUTS.
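
A bare-bones sketch of a MALT iteration as described above, under stated assumptions: a standard-normal target, hand-picked (not adaptively tuned) hyperparameters, and the common OBABO splitting of kinetic Langevin dynamics, with the Metropolis correction accumulated over the deterministic leapfrog parts. On rejection the chain simply keeps its position; no momentum flip is needed because momentum is refreshed at the start of each trajectory.

```python
import numpy as np

rng = np.random.default_rng(1)

def U(x):          # standard-normal target: U(x) = |x|^2 / 2
    return 0.5 * x @ x

def grad_U(x):
    return x

def malt_step(x, L=10, h=0.4, gamma=1.0):
    a = np.exp(-gamma * h / 2)
    p = rng.standard_normal(x.shape)     # fresh momentum each trajectory
    x_new, delta = x.copy(), 0.0
    for _ in range(L):
        p = a * p + np.sqrt(1 - a * a) * rng.standard_normal(x.shape)  # O
        e0 = 0.5 * p @ p + U(x_new)
        p -= 0.5 * h * grad_U(x_new)                                   # B
        x_new = x_new + h * p                                          # A
        p -= 0.5 * h * grad_U(x_new)                                   # B
        delta += 0.5 * p @ p + U(x_new) - e0   # leapfrog energy error
        p = a * p + np.sqrt(1 - a * a) * rng.standard_normal(x.shape)  # O
    if rng.random() < np.exp(-delta):    # accept the whole trajectory
        return x_new
    return x                             # reject: keep position, no flip

x = np.zeros(5)
samples = []
for _ in range(3000):
    x = malt_step(x)
    samples.append(x.copy())
samples = np.asarray(samples)
print(samples[500:].var())               # should be close to 1
```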

Eliminating Artificial Boundary Conditions in Time-Dependent Density Functional Theory Using Fourier Contour Deformation

J. Kaye, A. Barnett, L. Greengard, Umberto De Giovannini, A. Rubio

We present an efficient method for propagating the time-dependent Kohn–Sham equations in free space, based on the recently introduced Fourier contour deformation (FCD) approach. For potentials which are constant outside a bounded domain, FCD yields a high-order accurate numerical solution of the time-dependent Schrödinger equation directly in free space, without the need for artificial boundary conditions. Of the many existing artificial boundary condition schemes, FCD is most similar to an exact nonlocal transparent boundary condition, but it works directly on Cartesian grids in any dimension, and runs on top of the fast Fourier transform rather than fast algorithms for the application of nonlocal history integral operators. We adapt FCD to time-dependent density functional theory (TDDFT), and describe a simple algorithm to smoothly and automatically truncate long-range Coulomb-like potentials to a time-dependent constant outside of a bounded domain of interest, so that FCD can be used. This approach eliminates errors originating from the use of artificial boundary conditions, leaving only the error of the potential truncation, which is controlled and can be systematically reduced. The method enables accurate simulations of ultrastrong nonlinear electronic processes in molecular complexes in which the interference between bound and continuum states is of paramount importance. We demonstrate results for many-electron TDDFT calculations of absorption and strong field photoelectron spectra for one- and two-dimensional models, and observe a significant reduction in the size of the computational domain required to achieve high quality results, as compared with the popular method of complex absorbing potentials.
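
The potential-truncation step described above can be illustrated with a smooth switch that blends a soft-core Coulomb potential into a constant outside a chosen radius. The tanh switch, radius, and width below are ad hoc choices standing in for the paper's automatic procedure.

```python
import numpy as np

x = np.linspace(-40.0, 40.0, 2001)
V = -1.0 / np.sqrt(1.0 + x**2)     # soft-core Coulomb model potential

R, w = 20.0, 2.0                   # truncation radius and switch width (ad hoc)
s = 0.5 * (1.0 - np.tanh((np.abs(x) - R) / w))  # smooth switch: 1 inside, 0 outside
C = -1.0 / np.sqrt(1.0 + R**2)     # constant value matched at the radius
V_trunc = s * V + (1.0 - s) * C    # original inside; constant outside

print(V_trunc[1000], V[1000])      # unchanged at the center of the domain
print(V_trunc[0], C)               # flat and equal to C at the edges
```

Once the potential is exactly constant beyond the switch region, a free-space scheme like FCD applies, and the only remaining error is the (controllable) truncation of the tail.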

Generative Models of Multichannel Data from a Single Example—Application to Dust Emission

B. Régaldo-Saint Blancard, Erwan Allys, Constant Auclair, François Boulanger, M. Eickenberg, François Levrier, Léo Vacher, Sixin Zhang

The quest for primordial B-modes in the cosmic microwave background has emphasized the need for refined models of the Galactic dust foreground. Here we aim at building a realistic statistical model of the multifrequency dust emission from a single example. We introduce a generic methodology relying on microcanonical gradient descent models conditioned by an extended family of wavelet phase harmonic (WPH) statistics. To tackle the multichannel aspect of the data, we define cross-WPH statistics, quantifying non-Gaussian correlations between maps. Our data-driven methodology could apply to various contexts, and we have updated the software PyWPH, on which this work relies, accordingly. Applying this to dust emission maps built from a magnetohydrodynamics simulation, we construct and assess two generative models: (1) a (I, E, B) multi-observable input, and (2) a {I

Shedding a PAC-Bayesian Light on Adaptive Sliced-Wasserstein Distances

R. Ohana, Kimia Nadjahi, Alain Rakotomamonjy, Liva Ralaivola

The Sliced-Wasserstein distance (SW) is a computationally efficient and theoretically grounded alternative to the Wasserstein distance. Yet, the literature on its statistical properties -- or, more accurately, its generalization properties -- with respect to the distribution of slices, beyond the uniform measure, is scarce. To bring new contributions to this line of research, we leverage the PAC-Bayesian theory and a central observation that SW may be interpreted as an average risk, the quantity PAC-Bayesian bounds have been designed to characterize. We provide three types of results: i) PAC-Bayesian generalization bounds that hold on what we refer to as adaptive Sliced-Wasserstein distances, i.e. SW defined with respect to arbitrary distributions of slices (among which data-dependent distributions), ii) a principled procedure to learn the distribution of slices that yields maximally discriminative SW, by optimizing our theoretical bounds, and iii) empirical illustrations of our theoretical findings.
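
For readers new to SW, a minimal Monte Carlo estimator (uniform slices, equal sample sizes assumed) makes the "average risk over slices" reading concrete; the adaptive variants studied above replace the uniform slice distribution with a learned one.

```python
import numpy as np

rng = np.random.default_rng(0)

def sliced_wasserstein(X, Y, n_slices=200, p=2):
    """Monte Carlo SW_p between two point clouds of equal size: average
    the closed-form 1-D Wasserstein distance over random slice directions."""
    theta = rng.standard_normal((n_slices, X.shape[1]))
    theta /= np.linalg.norm(theta, axis=1, keepdims=True)  # uniform on sphere
    px = np.sort(X @ theta.T, axis=0)  # sorted 1-D projections = 1-D OT plan
    py = np.sort(Y @ theta.T, axis=0)
    return np.mean(np.abs(px - py) ** p) ** (1.0 / p)

X = rng.standard_normal((500, 3))
Y = rng.standard_normal((500, 3)) + 2.0   # same shape, shifted mean
swxx = sliced_wasserstein(X, X)
swxy = sliced_wasserstein(X, Y)
print(swxx, swxy)   # ~0 for identical clouds; positive for the shifted pair
```

The mean over `theta` is exactly the "average risk" structure the paper exploits: each slice contributes a 1-D transport cost, and the slice distribution plays the role of the PAC-Bayesian posterior.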

Linear optical random projections without holography

R. Ohana, Daniel Hesslow, Daniel Brunner, Sylvain Gigan, Kilian Müller

We introduce what we believe to be a novel method to perform linear optical random projections without the need for holography. Our method consists of a computationally trivial combination of multiple intensity measurements to mitigate the information loss usually associated with the absolute-square non-linearity imposed by optical intensity measurements. Both experimental and numerical findings demonstrate that the resulting matrix consists of real-valued, independent, and identically distributed (i.i.d.) Gaussian random entries. Our optical setup is simple and robust, as it does not require interference between two beams. We demonstrate the practical applicability of our method by performing dimensionality reduction on high-dimensional data, a common task in randomized numerical linear algebra with relevant applications in machine learning.
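The interference-free trick can be sketched as follows: combining two intensity measurements taken with a fixed reference pattern cancels the absolute-square nonlinearity, leaving a readout that is exactly linear in a real-valued input. The simulated transmission matrix, the reference pattern, and this particular two-measurement combination are illustrative assumptions; the experimental scheme may differ in detail.

```python
import numpy as np

rng = np.random.default_rng(0)

n, m = 64, 256
# Complex Gaussian "transmission matrix" of the scattering medium; unknown
# in the experiment, drawn here only to simulate the camera readout.
A = (rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))) / np.sqrt(2)
r = rng.standard_normal(n)            # fixed reference pattern (assumption)

def intensity(v):
    return np.abs(A @ v) ** 2         # all an intensity camera can record

def projection(x):
    """|A(x+r)|^2 - |A(x-r)|^2 = 4 Re[conj(Ax) * (Ar)] elementwise, so this
    combination of two intensity measurements is linear in a real input x."""
    return (intensity(x + r) - intensity(x - r)) / 4.0

x1, x2 = rng.standard_normal(n), rng.standard_normal(n)
ok = np.allclose(projection(x1 + x2), projection(x1) + projection(x2))
print(ok)                             # superposition holds: the map is linear
```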
