2697 Publications

Measuring and modeling the dynamics of mitotic error correction

Gloria Ha, D. Needleman, et al.

Error correction is central to many biological systems and is critical for protein function and cell health. During mitosis, error correction is required for the faithful inheritance of genetic material. When functioning properly, the mitotic spindle segregates an equal number of chromosomes to daughter cells with high fidelity. Over the course of spindle assembly, many initially erroneous attachments between kinetochores and microtubules are fixed through the process of error correction. Despite the importance of chromosome segregation errors in cancer and other diseases, there is a lack of methods to characterize the dynamics of error correction and how it can go wrong. Here, we present an experimental method and analysis framework to quantify chromosome segregation error correction in human tissue culture cells with live cell confocal imaging, timed premature anaphase, and automated counting of kinetochores after cell division. We find that errors decrease exponentially over time during spindle assembly. A coarse-grained model, in which errors are corrected in a chromosome-autonomous manner at a constant rate, can quantitatively explain both the measured error correction dynamics and the distribution of anaphase onset times. We further validated our model using perturbations that destabilized microtubules and changed the initial configuration of chromosomal attachments. Taken together, this work provides a quantitative framework for understanding the dynamics of mitotic error correction.
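The coarse-grained model described in the abstract can be sketched in a few lines: if each chromosome is corrected independently at a constant rate, correction times are i.i.d. exponential, the mean error count decays exponentially, and anaphase onset times follow the distribution of the maximum of those exponentials. The parameters below (initial error count N, rate k) are illustrative assumptions, not the paper's fitted values.

```python
import numpy as np

# Chromosome-autonomous error correction at a constant rate k:
# each of N initially erroneous attachments is fixed after an
# independent exponentially distributed waiting time.
rng = np.random.default_rng(0)
N = 46          # chromosomes per cell (hypothetical initial error count)
k = 0.2         # correction rate per chromosome, 1/min (assumed)
n_cells = 20000

t_correct = rng.exponential(1.0 / k, size=(n_cells, N))

# Mean number of remaining errors at time t decays as N * exp(-k t).
t = 5.0
errors_at_t = (t_correct > t).sum(axis=1).mean()
expected = N * np.exp(-k * t)

# In this toy reading, anaphase onset is when the last error is fixed,
# i.e. the maximum of N i.i.d. exponentials, whose mean is H_N / k
# (H_N = N-th harmonic number).
onset = t_correct.max(axis=1)
mean_onset = onset.mean()
analytic_onset = np.sum(1.0 / np.arange(1, N + 1)) / k
```

The same two observables the paper measures (error counts over time and the anaphase-onset distribution) both follow from the single rate constant k in this picture.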


Structure and dynamics of motor-driven microtubule bundles

Bezia Lemma, et al.

Connecting the large-scale emergent behaviors of active cytoskeletal materials to the microscopic properties of their constituents is a challenge due to a lack of data on the multiscale dynamics and structure of such systems. We approach this problem by studying the impact of depletion attraction on bundles of microtubules and kinesin-14 molecular motors. For all depletant concentrations, kinesin-14 bundles generate comparable extensile dynamics. However, this invariable mesoscopic behavior masks the transition in the microscopic motion of microtubules. Specifically, with increasing attraction, we observe a transition from bi-directional sliding with extension to pure extension with no sliding. Small-angle X-ray scattering shows that the transition in microtubule dynamics is concurrent with a structural rearrangement of microtubules from an open hexagonal to a compressed rectangular lattice. These results demonstrate that bundles of microtubules and molecular motors can display the same mesoscopic extensile behaviors despite having different internal structures and microscopic dynamics. They provide essential information for developing multiscale models of active matter.


Contrastive pre-training for sequence based genomics models

Ksenia Sokolova, Kathleen M. Chen, O. Troyanskaya

In recent years, deep learning has become one of the central approaches in a number of applications, including many tasks in genomics. However, as models grow in depth and complexity, they either require more data or a strategic initialization technique to improve performance. In this project, we introduce cGen, a novel unsupervised, model-agnostic contrastive pretraining method for sequence-based models. cGen can be used before training to initialize weights, reducing the size of the dataset needed. It works by learning the intrinsic features of the reference genome and makes no assumptions about the underlying structure. We show that the embeddings produced by the unsupervised model are already informative for gene expression prediction and that the sequence features provide a meaningful clustering. We demonstrate that cGen improves model performance in various sequence-based deep learning applications, such as chromatin profiling and gene expression prediction. Our findings suggest that using cGen, particularly in areas constrained by data availability, could improve the performance of deep learning genomic models without the need to modify the model architecture.
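The abstract does not spell out cGen's objective, but contrastive pre-training of this kind typically optimizes an InfoNCE-style loss that pulls embeddings of two augmented views of the same sequence together and pushes different sequences apart. The sketch below is a generic SimCLR-style loss on placeholder embeddings, an assumption about the family of objective, not a reproduction of cGen.

```python
import numpy as np

# Generic InfoNCE contrastive loss: z1[i] and z2[i] are embeddings of two
# augmented views of sequence i; the matched pair is the "positive" and all
# other rows act as negatives. Encoder and temperature are illustrative.
def info_nce(z1, z2, temperature=0.1):
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / temperature              # all pairwise similarities
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # positives sit on the diagonal of the similarity matrix
    return -np.mean(np.diag(log_prob))

rng = np.random.default_rng(1)
base = rng.normal(size=(8, 32))
# Loss is low when paired views match, high when pairs are scrambled.
aligned = info_nce(base, base + 0.01 * rng.normal(size=(8, 32)))
shuffled = info_nce(base, rng.permutation(base, axis=0))
```

In an actual genomics setting the rows of `base` would be encoder outputs for reference-genome windows, with augmentations such as shifts or reverse-complementing producing the second view.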

June 12, 2024

Variational bounds and nonlinear stability of an active nematic suspension

We use the entropy method to analyse the nonlinear dynamics and stability of a continuum kinetic model of an active nematic suspension. From the time evolution of the relative entropy, an energy-like quantity in the kinetic model, we derive a variational bound on relative entropy fluctuations that can be expressed in terms of orientational order parameters. From this bound we show that isotropic suspensions are nonlinearly stable for sufficiently low activity, and derive upper bounds on spatiotemporal averages in the unstable regime that are consistent with fully nonlinear simulations. This work highlights the self-organising role of activity in particle suspensions, and places limits on how organised such systems can be.
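Schematically, the entropy-method argument runs as follows; the functionals written here are placeholder notation for the structure of the argument, not the paper's exact expressions.

```latex
% Relative entropy of the orientation distribution \Psi(\mathbf{x},\mathbf{p},t)
% with respect to the uniform isotropic state \Psi_0 = 1/(4\pi):
\mathcal{S}(t) \;=\; \int_V \!\int_{S^2} \Psi \,
    \ln\!\frac{\Psi}{\Psi_0} \,\mathrm{d}\mathbf{p}\,\mathrm{d}\mathbf{x}.
% Its evolution splits, schematically, into an activity-dependent input
% and a nonnegative diffusive dissipation:
\frac{\mathrm{d}\mathcal{S}}{\mathrm{d}t}
    \;=\; \alpha\,\mathcal{A}[\Psi] \;-\; \mathcal{D}[\Psi],
    \qquad \mathcal{D}[\Psi] \ge 0.
% For sufficiently small activity |\alpha| the dissipation dominates,
% \mathcal{S} decays, and the isotropic state is nonlinearly stable;
% in the unstable regime the same balance bounds time-averaged
% orientational order parameters.
```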


Disparate nonlinear neural dynamics measured with different techniques in macaque and human V1

J. Zhou, Matt Whitmire, Yuzhi Chen, Eyal Seidemann

Diverse neuroimaging techniques measure different aspects of neural responses with distinct spatial and temporal resolutions. Relating measured neural responses across different methods has been challenging. Here, we take a step towards overcoming this challenge by comparing the nonlinearity of neural dynamics measured across methods. We used widefield voltage-sensitive dye imaging (VSDI) to measure neural population responses in macaque V1 to visual stimuli with a wide range of temporal waveforms. We found that stimulus-evoked VSDI responses are surprisingly near-additive in time. These results are qualitatively different from the strong sub-additive dynamics previously measured using fMRI and electrocorticography (ECoG) in human visual cortex with a similar set of stimuli. To test whether this discrepancy is specific to VSDI, a signal dominated by subthreshold neural activity, we repeated our measurements using widefield imaging of a genetically encoded calcium indicator (GCaMP6f), a signal dominated by spiking activity, and found that GCaMP signals in macaque V1 are also near-additive. Therefore, the discrepancies in the extent of sub-additivity between the macaque and the human measurements are unlikely due to differences between sub- and supra-threshold neural responses. Finally, we use a simple yet flexible delayed normalization model to capture these different dynamics across measurements (with different model parameters). The model can potentially generalize to a broader set of stimuli, which aligns with the previous suggestion that dynamic gain control is a canonical computation contributing to neural processing in the brain.
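A delayed normalization model of the kind referenced above divides a linear response by a semi-saturation constant plus a delayed, low-pass-filtered copy of itself; slow normalization produces sub-additive (fMRI/ECoG-like) dynamics, while a weak or fast pool leaves responses near-additive (VSDI-like). The sketch below uses illustrative time constants, exponent, and semi-saturation constant, not fitted values from the paper.

```python
import numpy as np

# Delayed divisive normalization: response = linear^n / (sigma^n + pool^n),
# where the pool is the linear response passed through a slower low-pass
# filter. All parameter values here are illustrative assumptions.
def delayed_normalization(stimulus, dt=0.001, tau=0.05, tau_norm=0.1,
                          n=2.0, sigma=0.1):
    t = np.arange(0, 1.0, dt)
    h1 = (t / tau) * np.exp(-t / tau)        # linear impulse response
    h1 /= h1.sum()
    h2 = np.exp(-t / tau_norm)               # slower filter for the pool
    h2 /= h2.sum()
    linear = np.convolve(stimulus, h1)[: len(stimulus)]
    pool = np.convolve(linear, h2)[: len(stimulus)]   # delayed normalization
    return linear**n / (sigma**n + pool**n)

dt = 0.001
short = np.zeros(1000); short[100:200] = 1.0   # 100 ms pulse
long_ = np.zeros(1000); long_[100:400] = 1.0   # 300 ms pulse
r_short = delayed_normalization(short, dt).sum() * dt
r_long = delayed_normalization(long_, dt).sum() * dt
# Sub-additivity: tripling stimulus duration yields less than triple the
# integrated response, because the normalization pool builds up over time.
```

Shrinking `tau_norm` or `sigma`'s influence (e.g., a much weaker pool) pushes the same model toward the near-additive regime, which is how one model family can capture both sets of measurements with different parameters.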


Variational Inference for Uncertainty Quantification: an Analysis of Trade-offs

C. Margossian, L. Pillaud-Vivien, L. Saul

Given an intractable distribution p, the problem of variational inference (VI) is to find the best approximation from some more tractable family Q. Commonly, one chooses Q to be a family of factorized distributions (i.e., the mean-field assumption), even though p itself does not factorize. We show that this mismatch leads to an impossibility theorem: if p does not factorize, then any factorized approximation q∈Q can correctly estimate at most one of the following three measures of uncertainty: (i) the marginal variances, (ii) the marginal precisions, or (iii) the generalized variance (which can be related to the entropy). In practice, the best variational approximation in Q is found by minimizing some divergence D(q,p) between distributions, and so we ask: how does the choice of divergence determine which measure of uncertainty, if any, is correctly estimated by VI? We consider the classic Kullback-Leibler divergences, the more general Rényi divergences, and a score-based divergence which compares ∇log p and ∇log q. We provide a thorough theoretical analysis in the setting where p is a Gaussian and q is a (factorized) Gaussian.
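The Gaussian setting admits a closed-form illustration of the trade-off. For a correlated Gaussian p and factorized Gaussian q, minimizing the reverse KL divergence KL(q‖p) sets each q precision equal to the corresponding diagonal entry of p's precision matrix, while minimizing the forward KL divergence KL(p‖q) matches p's marginal variances; once p is correlated, no single factorized q gets both right. A minimal worked instance:

```python
import numpy as np

# p = N(0, Sigma) with correlation rho; q = N(0, diag(v)) is mean-field.
rho = 0.8
Sigma = np.array([[1.0, rho], [rho, 1.0]])   # covariance of p
Lambda = np.linalg.inv(Sigma)                # precision of p

# Closed-form mean-field optima for the two classic KL divergences:
var_reverse_kl = 1.0 / np.diag(Lambda)       # argmin_q KL(q||p)
var_forward_kl = np.diag(Sigma)              # argmin_q KL(p||q)

# Reverse KL shrinks the marginal variances by a factor of 1 - rho^2,
# so it reports the marginal precisions correctly but not the variances.
shrinkage = var_reverse_kl / np.diag(Sigma)
```

With rho = 0.8 the reverse-KL solution reports only 36% of the true marginal variance, which is the variance-underestimation behavior of mean-field VI made concrete.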


Finite Temperature Minimal Entangled Typical Thermal States Impurity Solver

We present a minimally entangled typical thermal state quantum impurity solver for general multiorbital systems at finite temperatures. We introduce an improved estimator for the single-particle Green's function that strongly reduces the large fluctuations at long imaginary time and low temperature, which were a severe limitation of the original algorithm. In combination with the fork tensor product states Ansatz, we obtain a dynamical mean field theory (DMFT) quantum impurity solver, which we benchmark for single- and three-band models down to low temperatures, including the effect of spin-orbit coupling in a realistic DMFT computation for the Hund's metal Sr2RuO4.


Dynamical correlation functions from complex time evolution

We present an approach to tame the growth of entanglement during time evolution by tensor network methods. It combines time evolution in the complex plane with a perturbative and controlled reconstruction of correlation functions on the real time axis. We benchmark our approach on the single impurity Anderson model. Compared to purely real time evolution, the complex time evolution significantly reduces the required bond dimension to obtain the spectral function. Notably, our approach yields self-energy results with high precision at low frequencies, comparable to numerical renormalization group results, and it successfully captures the exponentially small Kondo energy scale.
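The mechanism can be seen in a toy spectral decomposition: evaluating a correlator C(t) = Σₙ wₙ exp(−i Eₙ t) at the complex time t − iη multiplies each term by exp(−Eₙ η), exponentially suppressing the high-energy contributions that drive entanglement growth, and the known factors exp(Eₙ η) can later be undone in a controlled reconstruction on the real axis. The energies and weights below are arbitrary illustrative numbers, not data from the paper.

```python
import numpy as np

# Toy correlator with three excitation energies and spectral weights.
E = np.array([0.1, 1.0, 10.0])     # excitation energies (illustrative)
w = np.array([0.5, 0.3, 0.2])      # spectral weights (illustrative)
eta = 0.5                          # shift into the complex time plane

t = np.linspace(0, 20, 2001)
C_real = (w[None, :] * np.exp(-1j * E[None, :] * t[:, None])).sum(axis=1)

# At complex time t - i*eta each term is damped by exp(-E_n * eta):
damp = w * np.exp(-E * eta)
C_complex = (damp[None, :] * np.exp(-1j * E[None, :] * t[:, None])).sum(axis=1)

# The E = 10 component retains only exp(-5) of its weight, so the
# complex-time signal is far smoother; since the damping factors are
# known, the real-axis correlator can be reconstructed afterwards.
ratio_high = damp[2] / w[2]
```

In the tensor-network setting, the smoother complex-time signal is what permits a much smaller bond dimension before the real-axis information is recovered perturbatively.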


How Truncating Weights Improves Reasoning in Language Models

Lei Chen, Joan Bruna, A. Bietti

In addition to the ability to generate fluent text in various languages, large language models have been successful at tasks that involve basic forms of logical "reasoning" over their context. Recent work found that selectively removing certain components from weight matrices in pre-trained models can improve such reasoning capabilities. We investigate this phenomenon further by carefully studying how certain global associations tend to be stored in specific weight components or Transformer blocks, in particular feed-forward layers. Such associations may hurt predictions in reasoning tasks, and removing the corresponding components may then improve performance. We analyze how this arises during training, both empirically and theoretically, on a two-layer Transformer trained on a basic reasoning task with noise, on a toy associative memory model, and on the Pythia family of pre-trained models tested on simple reasoning tasks.
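One simple instantiation of "removing components from weight matrices" is rank reduction via the SVD: drop selected singular components of a weight matrix and keep the rest. The sketch below applies this to a random matrix purely for illustration; which components actually store harmful global associations is precisely what the line of work above investigates.

```python
import numpy as np

def truncate_weights(W, keep_rank):
    """Keep only the top `keep_rank` singular components of W."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    s_trunc = s.copy()
    s_trunc[keep_rank:] = 0.0          # zero out the discarded components
    return (U * s_trunc) @ Vt          # best rank-k Frobenius approximation

rng = np.random.default_rng(0)
W = rng.normal(size=(64, 64))          # stand-in for a feed-forward weight
W_k = truncate_weights(W, keep_rank=8)

# By the Eckart-Young theorem this is the optimal rank-8 approximation;
# in a real model, W_k would replace W inside the chosen layer.
approx_error = np.linalg.norm(W - W_k)
```

Variants of the same operation can instead zero out specific rank-1 components (rather than the low-singular-value tail) when particular directions are identified as storing unwanted associations.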


Matrix Product Study of Spin Fractionalization in the 1D Kondo Insulator

J. Chen, M. Stoudenmire, Yashar Komijani, Piers Coleman

The Kondo lattice is one of the classic examples of strongly correlated electronic systems. We conduct a controlled study of the Kondo lattice in one dimension, highlighting the role of excitations created by the composite fermion operator. Using time-dependent matrix product state methods, we compute various correlation functions and contrast them with both large-N mean-field theory and the strong-coupling expansion. We show that the composite fermion operator creates long-lived, charge-e and spin-1/2 excitations, which cover the low-lying single-particle excitation spectrum of the system. Furthermore, spin excitations can be thought of as composed of such fractionalized quasiparticles with a residual interaction which tends to disappear at weak Kondo coupling.
