2596 Publications

Biomolecular condensates control and are defined by RNA-RNA interactions that arise in viral replication

Dilimulati Aierken, Vita Zhang, R. Sealfon, et al.

Cells must limit RNA–RNA interactions to avoid irreversible RNA entanglement. Cells may prevent deleterious RNA–RNA interactions through genome organization that avoids complementarity; however, RNA viruses generate long, perfectly complementary antisense RNA during replication. How do viral RNAs avoid irreversible entanglement? One possibility is RNA sequestration into biomolecular condensates. To test this, we reconstituted critical SARS-CoV-2 RNA–RNA interactions in Nucleocapsid condensates. We observed that RNAs with a low propensity for RNA–RNA interactions formed rounder, more liquid-like condensates, while those with high sequence complementarity formed a more heterogeneous, networked morphology, independent of RNA structure stability. Residue-resolution molecular simulations and direct sequencing-based detection of RNA–RNA interactions support the conclusion that these properties arise from the degree of trans RNA contacts. We propose that extensive RNA–RNA interactions in cellular and viral replication are controlled via a combination of genome organization, timing, RNA sequence content, RNA production ratios, and emergent biomolecular condensate material properties.

December 25, 2024

Uniqueness, regularity and characteristic flow for a non strictly convex singular variational problem

Jean-Francois Babadjian, G. Francfort

This work addresses the question of uniqueness and regularity of the minimizers of a convex but not strictly convex integral functional with linear growth in a two-dimensional setting. The integrand -- whose precise form derives directly from the theory of perfect plasticity -- behaves quadratically close to the origin and grows linearly once a specific threshold is reached. Thus, in contrast with the only existing literature on uniqueness for functionals with linear growth, namely that pertaining to the generalized least gradient, the integrand is not a norm. We make use of hyperbolic conservation laws hidden in the structure of the problem to tackle uniqueness. Our argument relies strongly on the regularity of a vector field -- the Cauchy stress in the terminology of perfect plasticity -- which allows us to define characteristic lines and then to employ the method of characteristics. Using the detailed structure of the characteristic landscape evidenced in our preliminary study [BF], we show that this vector field is actually continuous, save for possibly two points. The different behaviors of the energy density at zero and at infinity imply an inequality constraint on the Cauchy stress. Under a barrier-type convexity assumption on the set where the inequality constraint is saturated, we show that uniqueness holds for pure Dirichlet boundary data devoid of any regularity properties; this is stronger than uniqueness for a given trace on the whole boundary, since our minimizers can fail to attain the boundary data. We also show a partial regularity result for the minimizer.
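
For intuition, here is a LaTeX sketch of an integrand with the quadratic-to-linear behavior described above; this is the classical Hencky-type density, given only as an illustrative example, and the paper's precise integrand may differ in form and threshold:

% Illustrative energy density: quadratic near the origin,
% linear growth past the threshold |\xi| = 1.
\[
  f(\xi) =
  \begin{cases}
    \tfrac{1}{2}\lvert \xi \rvert^{2}, & \lvert \xi \rvert \le 1,\\[2pt]
    \lvert \xi \rvert - \tfrac{1}{2}, & \lvert \xi \rvert > 1.
  \end{cases}
\]
% f is convex but not strictly convex (it is affine along rays once
% |\xi| > 1), and it is not a norm since it is not positively homogeneous.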


A mixing time bound for Gibbs sampling from log-smooth log-concave distributions

The Gibbs sampler, also known as the coordinate hit-and-run algorithm, is a Markov chain that is widely used to draw samples from probability distributions in arbitrary dimensions. At each iteration of the algorithm, a randomly selected coordinate is resampled from the distribution that results from conditioning on all the other coordinates. We study the behavior of the Gibbs sampler on the class of log-smooth and strongly log-concave target distributions supported on ℝⁿ.
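
As a concrete illustration of the algorithm described above, here is a minimal Python sketch of a random-scan Gibbs sampler for a strongly log-concave target, using a multivariate Gaussian whose coordinate conditionals are available in closed form (the target class studied in the paper is broader):

import numpy as np

def gibbs_gaussian(mu, Sigma, n_iters, rng=None):
    """Random-scan Gibbs sampler for N(mu, Sigma).

    At each iteration, one coordinate is chosen uniformly at random and
    resampled from its Gaussian conditional given all other coordinates.
    """
    rng = np.random.default_rng(rng)
    n = len(mu)
    Lam = np.linalg.inv(Sigma)          # precision matrix
    x = np.array(mu, dtype=float)       # start at the mean
    samples = np.empty((n_iters, n))
    for t in range(n_iters):
        i = rng.integers(n)             # pick a random coordinate
        # Conditional of x_i given x_{-i} for a Gaussian:
        #   mean = mu_i - (1/Lam_ii) * sum_{j != i} Lam_ij (x_j - mu_j)
        #   var  = 1 / Lam_ii
        r = Lam[i] @ (x - mu) - Lam[i, i] * (x[i] - mu[i])
        x[i] = rng.normal(mu[i] - r / Lam[i, i], 1.0 / np.sqrt(Lam[i, i]))
        samples[t] = x
    return samples

# Example: a correlated 2-D Gaussian target.
mu = np.zeros(2)
Sigma = np.array([[1.0, 0.8], [0.8, 1.0]])
draws = gibbs_gaussian(mu, Sigma, n_iters=10_000, rng=0)
print(draws.mean(axis=0), np.cov(draws.T))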


Neural network learns low-dimensional polynomials with SGD near the information-theoretic limit

Jason D. Lee, Kazusato Oko, Taiji Suzuki, D. Wu

We study the problem of gradient descent learning of a single-index target function f∗(x) = σ∗(⟨x, θ⟩) under isotropic Gaussian data in ℝ^d, where the unknown link function σ∗ : ℝ → ℝ has information exponent p (defined as the lowest degree in its Hermite expansion). Prior works showed that gradient-based training of neural networks can learn this target with n ≳ d^{Θ(p)} samples, and such complexity is predicted to be necessary by the correlational statistical query lower bound. Surprisingly, we prove that a two-layer neural network optimized by an SGD-based algorithm (on the squared loss) learns f∗ with a complexity that is not governed by the information exponent. Specifically, for arbitrary polynomial single-index models, we establish a sample and runtime complexity of n ≃ T = Θ(d · polylog d), where Θ(·) hides a constant depending only on the degree of σ∗; this dimension dependence matches the information-theoretic limit up to polylogarithmic factors. More generally, we show that n ≳ d^{(p∗−1)∨1} samples are sufficient to achieve low generalization error, where p∗ ≤ p is the generative exponent of the link function. Core to our analysis is the reuse of minibatches in the gradient computation, which gives rise to higher-order information beyond correlational queries.
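
For reference, a short LaTeX rendering of the information exponent as described above (the standard definition via the Hermite expansion; the notation here is ours):

% Hermite expansion of the link function under the standard Gaussian;
% He_k denotes the k-th probabilist's Hermite polynomial.
\[
  \sigma_*(z) = \sum_{k \ge 0} \frac{c_k}{k!}\,\mathrm{He}_k(z),
  \qquad
  c_k = \mathbb{E}_{z \sim \mathcal{N}(0,1)}\bigl[\sigma_*(z)\,\mathrm{He}_k(z)\bigr].
\]
% The information exponent is the lowest nonvanishing degree:
\[
  p = \min\{\, k \ge 1 : c_k \neq 0 \,\}.
\]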


Comparing noisy neural population dynamics using optimal transport distances

A. Nejatbakhsh, Victor Geadah, A. Williams, D. Lipshutz

Biological and artificial neural systems form high-dimensional neural representations that underpin their computational capabilities. Methods for quantifying geometric similarity in neural representations have become a popular tool for identifying computational principles that are potentially shared across neural systems. These methods generally assume that neural responses are deterministic and static. However, responses of biological systems, and some artificial systems, are noisy and dynamically unfold over time. Furthermore, these characteristics can have substantial influence on a system's computational capabilities. Here, we demonstrate that existing metrics can fail to capture key differences between neural systems with noisy dynamic responses. We then propose a metric for comparing the geometry of noisy neural trajectories, which can be derived as an optimal transport distance between Gaussian processes. We use the metric to compare models of neural responses in different regions of the motor system and to compare the dynamics of latent diffusion models for text-to-image synthesis.
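
The optimal transport distance between Gaussians invoked above has a closed form, the Bures-Wasserstein distance; below is a minimal NumPy/SciPy sketch for two Gaussian marginals, offered as a simplified stand-in for the full Gaussian-process metric developed in the paper:

import numpy as np
from scipy.linalg import sqrtm

def bures_wasserstein_sq(mu1, Sigma1, mu2, Sigma2):
    """Squared 2-Wasserstein distance between N(mu1, Sigma1) and N(mu2, Sigma2):

    W2^2 = ||mu1 - mu2||^2
         + tr(Sigma1 + Sigma2 - 2 (Sigma1^{1/2} Sigma2 Sigma1^{1/2})^{1/2})
    """
    root1 = sqrtm(Sigma1)
    cross = sqrtm(root1 @ Sigma2 @ root1)     # may carry tiny imaginary noise
    mean_term = np.sum((mu1 - mu2) ** 2)
    cov_term = np.trace(Sigma1 + Sigma2 - 2.0 * np.real(cross))
    return mean_term + cov_term

# Example: two noisy 2-D response distributions.
mu_a, Sig_a = np.zeros(2), np.eye(2)
mu_b, Sig_b = np.ones(2), np.array([[2.0, 0.5], [0.5, 1.0]])
print(bures_wasserstein_sq(mu_a, Sig_a, mu_b, Sig_b))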


Learning locally dominant force balances in active particle systems

Dominik Sturm, S. Maddu, Ivo F. Sbalzarini

We use a combination of unsupervised clustering and sparsity-promoting inference algorithms to learn locally dominant force balances that explain macroscopic pattern formation in self-organized active particle systems. The self-organized emergence of macroscopic patterns from microscopic interactions between self-propelled particles can be widely observed in nature. Although hydrodynamic theories help us better understand the physical basis of this phenomenon, identifying a sufficient set of local interactions that shape, regulate and sustain self-organized structures in active particle systems remains challenging. We investigate a classic hydrodynamic model of self-propelled particles that produces a wide variety of patterns, such as asters and moving density bands. Our data-driven analysis shows that propagating bands are formed by local alignment interactions driven by density gradients, while steady-state asters are shaped by a mechanism of splay-induced negative compressibility arising from strong particle interactions. Our method also reveals analogous physical principles of pattern formation in a system where the speed of the particle is influenced by the local density. This demonstrates the ability of our method to reveal physical commonalities across models. The physical mechanisms inferred from the data are in excellent agreement with analytical scaling arguments and experimental observations.
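
The sparsity-promoting inference step can be illustrated with a generic sequentially thresholded least-squares regression over a library of candidate hydrodynamic terms; this is a hedged sketch of the general technique, not the authors' exact pipeline, and the library and variable names are ours:

import numpy as np

def stlsq(Theta, y, threshold=0.1, n_iters=10):
    """Sequentially thresholded least squares: a simple sparsity-promoting
    regression that keeps only the locally dominant terms in a library.

    Theta: (n_samples, n_terms) candidate terms (e.g. density gradients,
           alignment, splay) evaluated on the field data.
    y:     (n_samples,) the term being balanced (e.g. a time derivative).
    """
    xi = np.linalg.lstsq(Theta, y, rcond=None)[0]
    for _ in range(n_iters):
        small = np.abs(xi) < threshold        # prune weak terms
        xi[small] = 0.0
        big = ~small
        if big.any():                         # refit on the surviving terms
            xi[big] = np.linalg.lstsq(Theta[:, big], y, rcond=None)[0]
    return xi

# Toy usage: recover a sparse balance y = 2*t1 - 0.5*t3 from a 4-term library.
rng = np.random.default_rng(0)
Theta = rng.normal(size=(200, 4))
y = 2.0 * Theta[:, 0] - 0.5 * Theta[:, 2] + 0.01 * rng.normal(size=200)
print(stlsq(Theta, y))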


A minimal dynamical system and analog circuit for non-associative learning

M. Smart, S. Shvartsman, Martin Mönnigmann

Learning in living organisms is typically associated with networks of neurons. The use of large numbers of adjustable units has also been a crucial factor in the continued success of artificial neural networks. In light of the complexity of both living and artificial neural networks, it is surprising to see that very simple organisms -- even unicellular organisms that do not possess a nervous system -- are capable of certain forms of learning. Since in these cases learning may be implemented with much simpler structures than neural networks, it is natural to ask how simple the building blocks required for basic forms of learning may be. The purpose of this study is to discuss the simplest dynamical systems that model a fundamental form of non-associative learning, habituation, and to elucidate technical implementations of such systems, which may be used to implement non-associative learning in neuromorphic computing and related applications.
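
As a hedged illustration of how few components such a system needs, here is a minimal two-variable sketch in which a slow leaky memory accumulates with repeated stimulation and divisively suppresses the response; this is one simple habituation model of our own construction, and the specific dynamical system analyzed in the paper may differ:

import numpy as np

def habituation_response(stimulus, dt=0.01, tau_m=5.0, gain=5.0):
    """Minimal habituation model:

        dm/dt = (-m + s) / tau_m      (slow leaky memory of the stimulus)
        r     = s / (1 + gain * m)    (divisively suppressed response)

    Repeated pulses drive m up, so the response to an identical pulse
    shrinks; when stimulation stops, m decays and responsiveness recovers.
    """
    m, rs = 0.0, []
    for s in stimulus:
        m += dt * (-m + s) / tau_m
        rs.append(s / (1.0 + gain * m))
    return np.array(rs)

# Ten identical pulses: peak response decreases pulse by pulse.
t = np.arange(0, 100, 0.01)
pulses = ((t % 10) < 1).astype(float)                # 1-s pulse every 10 s
r = habituation_response(pulses)
print(r[(t % 10) < 1].reshape(10, -1).max(axis=1))   # peak response per pulse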


xVal: A Continuous Numerical Tokenization for Scientific Language Models

Siavash Golkar, Mariel Pettee, M. Eickenberg, A. Bietti, et al.

Due in part to their discontinuous and discrete default encodings for numbers, Large Language Models (LLMs) have not yet been commonly used to process numerically-dense scientific datasets. Rendering datasets as text, however, could help aggregate diverse and multi-modal scientific data into a single training corpus, thereby potentially facilitating the development of foundation models for science. In this work, we introduce xVal, a strategy for continuously tokenizing numbers within language models that results in a more appropriate inductive bias for scientific applications. By training specially-modified language models from scratch on a variety of scientific datasets formatted as text, we find that xVal generally outperforms other common numerical tokenization strategies on metrics including out-of-distribution generalization and computational efficiency.
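
The core idea of xVal, as described in the paper, is to encode every number with a single dedicated token whose embedding is scaled by the (normalized) numeric value, giving nearby numbers nearby embeddings; here is a minimal sketch of that encoding step, where the tensor shapes, names, and normalization convention are our assumptions:

import numpy as np

def xval_embed(tokens, values, embedding_table, num_token_id):
    """Continuously embed a mixed text/number token sequence, xVal-style.

    tokens:  (seq,) int token ids; every number is mapped to num_token_id.
    values:  (seq,) float; the numeric value at number positions, 1.0 elsewhere
             so that scaling is a no-op for ordinary text tokens.
    """
    assert np.all((tokens == num_token_id) | (values == 1.0))
    emb = embedding_table[tokens]       # (seq, dim) ordinary lookup
    return emb * values[:, None]        # scale [NUM] rows by their value

# Toy usage: vocabulary of 100 tokens, id 99 reserved for [NUM].
rng = np.random.default_rng(0)
table = rng.normal(size=(100, 16))
tokens = np.array([5, 99, 7, 99])              # "mass <NUM> radius <NUM>"
values = np.array([1.0, 0.25, 1.0, -3.2])      # numbers pre-normalized
print(xval_embed(tokens, values, table, num_token_id=99).shape)   # (4, 16)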

