2573 Publications

Disentangling Recurrent Neural Dynamics with Stochastic Representational Geometry

D. Lipshutz, A. Nejatbakhsh, A. Williams

Uncovering and comparing the dynamical mechanisms that support neural processing remains a key challenge in the analysis of biological and artificial neural systems. However, measures of representational (dis)similarity in neural systems often assume that neural responses are static in time. Here, we show that stochastic shape distances (SSDs; Duong et al., 2023), which were developed to compare noisy neural responses to static inputs and lack an explicit notion of temporal structure, are well equipped to compare noisy dynamics. In two examples, we use SSDs, which interpolate between comparing mean trajectories and second-order fluctuations about mean trajectories, to disentangle recurrent versus external contributions to noisy dynamics.
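
The interpolation between mean trajectories and second-order fluctuations can be made concrete with a small sketch. The snippet below is a simplified illustration only, not the SSD metric of Duong et al. (2023): it skips the shape-alignment step and uses plain Euclidean and Frobenius distances on trial means and per-timepoint covariances, with `alpha` playing the role of the interpolation parameter.

```python
# Minimal illustration of an interpolated comparison of noisy trajectories.
# This is a simplified sketch, NOT the SSD metric of Duong et al. (2023):
# it omits the alignment (Procrustes) step and uses a plain Frobenius
# distance between covariances instead of a proper stochastic shape metric.
import numpy as np

def interpolated_stochastic_distance(X, Y, alpha=0.5):
    """Compare two sets of noisy trajectories.

    X, Y : arrays of shape (trials, time, neurons)
    alpha: 0 -> compare only mean trajectories,
           1 -> compare only second-order fluctuations about the means.
    """
    mean_x, mean_y = X.mean(axis=0), Y.mean(axis=0)   # (time, neurons)
    mean_term = np.linalg.norm(mean_x - mean_y)

    # Second-order fluctuations: per-timepoint noise covariances.
    cov_term = 0.0
    for t in range(X.shape[1]):
        cx = np.cov((X[:, t, :] - mean_x[t]).T)
        cy = np.cov((Y[:, t, :] - mean_y[t]).T)
        cov_term += np.linalg.norm(cx - cy)

    return (1 - alpha) * mean_term + alpha * cov_term

# Example: two simulated systems with identical mean trajectories but different noise.
rng = np.random.default_rng(0)
base = rng.standard_normal((1, 50, 10))
A = base + 0.1 * rng.standard_normal((200, 50, 10))
B = base + 0.5 * rng.standard_normal((200, 50, 10))
print(interpolated_stochastic_distance(A, B, alpha=0.0))  # small: same means
print(interpolated_stochastic_distance(A, B, alpha=1.0))  # large: different noise
```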


Escaping saddle points efficiently with occupation-time-adapted perturbation

Xin Guo, J. Han, Mahan Tajrobehkar, Wenpin Tang

Motivated by the super-diffusivity of self-repelling random walk, which has roots in statistical physics, this paper develops a new perturbation mechanism for optimization algorithms. In this mechanism, perturbations are adapted to the history of states via the notion of occupation time. After integrating this mechanism into the framework of perturbed gradient descent (PGD) and perturbed accelerated gradient descent (PAGD), two new algorithms are proposed: perturbed gradient descent adapted to occupation time (PGDOT) and its accelerated version (PAGDOT). PGDOT and PAGDOT are guaranteed to avoid getting stuck at non-degenerate saddle points, and are shown to converge to second-order stationary points at least as fast as PGD and PAGD, respectively. The theoretical analysis is corroborated by empirical studies in which the new algorithms consistently escape saddle points and outperform not only their counterparts, PGD and PAGD, but also other popular alternatives including stochastic gradient descent, Adam, and several state-of-the-art adaptive gradient methods.
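
To give a rough sense of the mechanism: perturbations are injected when the gradient is small, and their strength is adapted to how long the iterate has occupied its current neighborhood. The sketch below is an illustrative stand-in, not the PGDOT algorithm from the paper; the occupation-time statistic, step sizes, radii, and escape conditions used here are placeholder choices.

```python
# Sketch of gradient descent with a perturbation whose strength is adapted
# to how long the iterate has lingered in its current neighborhood (a crude
# stand-in for the occupation-time adaptation described in the abstract).
# This is NOT the authors' PGDOT algorithm; all constants are illustrative.
import numpy as np

def perturbed_gd(grad, x0, lr=1e-2, grad_tol=1e-3, radius=0.5,
                 base_noise=1e-2, n_steps=500, seed=0):
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    anchor, occupation = x.copy(), 0
    for _ in range(n_steps):
        g = grad(x)
        if np.linalg.norm(g) < grad_tol:
            # Small gradient: possibly near a saddle point. Count how long the
            # iterate has lingered near its previous position and scale the
            # perturbation with that (crude) occupation time.
            occupation = occupation + 1 if np.linalg.norm(x - anchor) < radius else 1
            anchor = x.copy()
            x = x + base_noise * np.sqrt(occupation) * rng.standard_normal(x.shape)
        else:
            x = x - lr * g
            occupation = 0
    return x

# Example: f(x, y) = x^2 - y^2 has a saddle at the origin; unperturbed gradient
# descent started near the x-axis would stall there.
grad_saddle = lambda v: np.array([2.0 * v[0], -2.0 * v[1]])
print(perturbed_gd(grad_saddle, x0=[1e-6, 1e-6]))  # escapes along the -y^2 direction
```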


Computational Design of Phosphotriesterase Improves V-Agent Degradation Efficiency

Jacob Kronenberg, Stanley Chu, D. Renfrew, et al.

Organophosphates (OPs) are a class of neurotoxic acetylcholinesterase inhibitors including widely used pesticides as well as nerve agents such as VX and VR. Current treatment of these toxins relies on reactivating acetylcholinesterase, which remains ineffective. Enzymatic scavengers are of interest for their ability to degrade OPs systemically before they reach their target. Here we describe a library of computationally designed variants of phosphotriesterase (PTE), an enzyme that is known to break down OPs. The mutations G208D, F104A, K77A, A80V, H254G, and I274N broadly improve catalytic efficiency of VX and VR hydrolysis without impacting the structure of the enzyme. The mutation I106A improves catalysis of VR, and L271E abolishes activity, likely due to disruptions of PTE's structure. This study elucidates the importance of these residues and contributes to the design of enzymatic OP scavengers with improved efficiency.


Precision Medicine in Nephrology: An Integrative Framework of Multidimensional Data in the Kidney Precision Medicine Project

Tarek M. El-Achkar, Michael T. Eadon, R. Sealfon

Chronic kidney disease (CKD) and acute kidney injury (AKI) are heterogeneous syndromes defined clinically by serial measures of kidney function. Each condition possesses strong histopathologic associations, such as glomerular obsolescence and acute tubular necrosis, respectively. Despite such characterization, there remains wide variation in patient outcomes and treatment responses. Precision medicine efforts, as exemplified by the Kidney Precision Medicine Project (KPMP), have begun to establish evolving, spatially anchored, cellular and molecular atlases of the cell types, states, and niches of the kidney in health and disease. The KPMP atlas provides molecular context for CKD and AKI disease drivers and will help define subtypes of disease that are not readily apparent from canonical functional or histopathologic characterization but instead are appreciable through advanced clinical phenotyping, pathomic, transcriptomic, proteomic, epigenomic, and metabolomic interrogation of kidney biopsy samples. This perspective outlines the structure of the KPMP, its approach to the integration of these diverse datasets, and its major outputs relevant to future patient care.


Adequacy of the dynamical mean field theory for low density and Dirac materials

The qualitative reliability of the dynamical mean field theory (DMFT) is investigated for systems in which either the actual carrier density or the effective carrier density is low, by comparing the exact perturbative and dynamical mean field expressions of electron scattering rates and optical conductivities. We study two interacting systems: tight binding models in which the chemical potential is near a band edge and Dirac systems in which the chemical potential is near the Dirac point. In both systems it is found that DMFT underestimates the low frequency, near-Fermi surface single particle scattering rate by a factor proportional to the particle density. The quasiparticle effective mass is qualitatively incorrect for the low density tight binding model but not necessarily for Dirac systems. The dissipative part of the optical conductivity is more subtle: in the exact calculation vertex corrections, typically neglected in DMFT calculations, suppress the low frequency optical absorption, compensating for some of the DMFT underestimate of the scattering rate. The role of vertex corrections in calculating the conductivity for Dirac systems is clarified and a systematic discussion is given of the approach to the Galilean/Lorentz invariant low density limit. Relevance to recent calculations related to Weyl metals is discussed.
March 1, 2024
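
For orientation, the quantities compared in this abstract are the single-particle scattering rate, obtained from the imaginary part of the retarded self-energy, and the optical conductivity, which in single-site DMFT is usually evaluated from the particle-hole bubble with vertex corrections dropped. The expressions below are standard textbook forms stated for reference, not formulas taken from the paper; here \(\Sigma^{R}\) is the retarded self-energy, \(A(\epsilon,\nu)\) the single-particle spectral function, \(\Phi(\epsilon)\) the transport function, and \(f\) the Fermi function.

```latex
% Standard definitions (textbook forms, not reproduced from the paper):
% the quasiparticle scattering rate from the retarded self-energy, and the
% DMFT "bubble" optical conductivity in which vertex corrections are neglected.
\begin{align}
  \frac{1}{\tau(\omega)} &= -2\,\mathrm{Im}\,\Sigma^{R}(\omega), \\
  \sigma_{\mathrm{DMFT}}(\omega) &\propto \frac{1}{\omega}
    \int d\epsilon\, \Phi(\epsilon)
    \int \frac{d\nu}{2\pi}\,
    \bigl[f(\nu) - f(\nu+\omega)\bigr]\,
    A(\epsilon,\nu)\, A(\epsilon,\nu+\omega).
\end{align}
```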

Specificity, synergy, and mechanisms of splice-modifying drugs

Yuma Ishigami, Mandy S. Wong, S. Hanson, et al.

Drugs that target pre-mRNA splicing hold great therapeutic potential, but the quantitative understanding of how these drugs work is limited. Here we introduce mechanistically interpretable quantitative models for the sequence-specific and concentration-dependent behavior of splice-modifying drugs. Using massively parallel splicing assays, RNA-seq experiments, and precision dose-response curves, we obtain quantitative models for two small-molecule drugs, risdiplam and branaplam, developed for treating spinal muscular atrophy. The results quantitatively characterize the specificities of risdiplam and branaplam for 5’ splice site sequences, suggest that branaplam recognizes 5’ splice sites via two distinct interaction modes, and contradict the prevailing two-site hypothesis for risdiplam activity at SMN2 exon 7. The results also show that anomalous single-drug cooperativity, as well as multi-drug synergy, are widespread among small-molecule drugs and antisense-oligonucleotide drugs that promote exon inclusion. Our quantitative models thus clarify the mechanisms of existing treatments and provide a basis for the rational development of new therapies.
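
As a point of reference for the kind of concentration-dependent model involved, the sketch below evaluates a generic Hill-type dose-response curve for exon inclusion with an apparent-cooperativity parameter. It is only a schematic of that general functional form; the mechanistic models developed in the paper for risdiplam and branaplam are more detailed and are not reproduced here, and all parameter names and values below are hypothetical.

```python
# Generic Hill-type dose-response curve for exon inclusion as a function of
# drug concentration. This is a common functional form for such fits, offered
# only as an illustration; it is not the specific biophysical model for
# risdiplam or branaplam developed in the paper.
import numpy as np

def exon_inclusion(conc, psi_min=0.05, psi_max=0.95, ec50=1.0, hill_n=1.0):
    """Fraction of transcripts including the exon at drug concentration `conc`.

    psi_min / psi_max : inclusion levels at zero and saturating drug
    ec50              : concentration of half-maximal effect
    hill_n            : apparent cooperativity (n > 1 gives the anomalously
                        steep, cooperative curves discussed in the abstract)
    """
    conc = np.asarray(conc, dtype=float)
    occ = conc**hill_n / (ec50**hill_n + conc**hill_n)
    return psi_min + (psi_max - psi_min) * occ

doses = np.logspace(-2, 2, 9)              # hypothetical concentrations
print(exon_inclusion(doses, hill_n=1.0))   # non-cooperative curve
print(exon_inclusion(doses, hill_n=2.0))   # steeper, cooperative-looking curve
```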


NOVA1 acts as an oncogenic RNA-binding protein to regulate cholesterol homeostasis in human glioblastoma cells

Yuhki Saito, C. Park, et al.

NOVA1 is a neuronal RNA-binding protein identified as the target antigen of a rare autoimmune disorder associated with cancer and neurological symptoms, termed paraneoplastic opsoclonus-myoclonus ataxia. Despite the strong association between NOVA1 and cancer, it has been unclear how NOVA1 function might contribute to cancer biology. In this study, we find that NOVA1 acts as an oncogenic factor in a GBM (glioblastoma multiforme) cell line established from a patient. Interestingly, NOVA1 and Argonaute (AGO) CLIP identified common 3′ untranslated region (UTR) targets, which were down-regulated in NOVA1 knockdown GBM cells, indicating a transcriptome-wide intersection of NOVA1 and AGO–microRNA (miRNA) target regulation. NOVA1 binding to 3′UTR targets stabilized transcripts, including those encoding cholesterol homeostasis-related proteins. Selective inhibition of NOVA1–RNA interactions with antisense oligonucleotides disrupted GBM cancer cell fitness. The precision of our GBM CLIP studies points to both the mechanism and the precise RNA sequence sites needed to selectively inhibit oncogenic NOVA1–RNA interactions. Taken together, we find that NOVA1 is commonly overexpressed in GBM, where it can antagonize AGO2–miRNA actions and consequently up-regulate cholesterol synthesis, promoting cell viability.


Neural Manifold Capacity Captures Representation Geometry, Correlations, and Task-Efficiency Across Species and Behaviors

C. Chou, Luke Arend, Albert J. Wakhloo, Royoung Kim, Will Slatton, S. Chung

The study of the brain encompasses multiple scales, including temporal, spatial, and functional aspects. Integrating understanding across these different levels and modalities requires developing quantification methods and frameworks. Here, we present effective Geometric measures from Correlated Manifold Capacity theory (GCMC) for probing the functional structure in neural representations. We utilize a statistical physics approach to establish analytical connections between neural co-variabilities and downstream read-out efficiency. These effective geometric measures capture both stimulus-driven and behavior-driven structures in neural population activities, while extracting computationally relevant information from neural data into intuitive and interpretable analysis descriptors. We apply GCMC to a diverse collection of datasets with different recording methods, various model organisms, and multiple task modalities. Specifically, we demonstrate that GCMC enables a wide range of multi-scale data analysis, including quantifying the spatial progression of encoding efficiency across brain regions, revealing the temporal dynamics of task-relevant manifold geometry in information processing, and characterizing variances as well as invariances in neural representations throughout learning. Lastly, the effective manifold geometric measures may be viewed as order parameters for phases related to computational efficiency, facilitating data-driven hypothesis generation and latent embedding.
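
To give a flavor of what a geometric descriptor of a neural manifold looks like, the sketch below computes two simple ones, relative radius and participation-ratio dimension, from the cloud of population responses to a single condition. These are illustrative stand-ins only; the effective GCMC measures in the paper derive from a replica mean-field capacity calculation and are not implemented here.

```python
# Two simple geometric descriptors of a "neural manifold" (the cloud of
# population responses to one stimulus/condition): its radius relative to the
# centroid norm and its participation-ratio dimension. Illustrative stand-ins
# only; they are not the effective GCMC measures of the paper.
import numpy as np

def manifold_geometry(responses):
    """responses: array of shape (samples, neurons) for one condition."""
    centroid = responses.mean(axis=0)
    deviations = responses - centroid
    radius = np.linalg.norm(deviations, axis=1).mean() / np.linalg.norm(centroid)
    eigvals = np.linalg.eigvalsh(np.cov(deviations.T))
    dimension = eigvals.sum() ** 2 / (eigvals ** 2).sum()   # participation ratio
    return radius, dimension

# Example on synthetic data: 500 trials, 100 neurons, low-dimensional variability.
rng = np.random.default_rng(1)
latent = rng.standard_normal((500, 5)) @ rng.standard_normal((5, 100))
responses = 10.0 + latent + 0.1 * rng.standard_normal((500, 100))
print(manifold_geometry(responses))   # radius well below 1, dimension near the 5 latent dims
```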


Statistical Component Separation for Targeted Signal Recovery in Noisy Mixtures

B. Régaldo-Saint Blancard, M. Eickenberg

Separating signals from an additive mixture may be an unnecessarily hard problem when one is only interested in specific properties of a given signal. In this work, we tackle simpler "statistical component separation" problems that focus on recovering a predefined set of statistical descriptors of a target signal from a noisy mixture. Assuming access to samples of the noise process, we investigate a method devised to match the statistics of the solution candidate corrupted by noise samples with those of the observed mixture. We first analyze the behavior of this method using simple examples with analytically tractable calculations. Then, we apply it in an image denoising context employing 1) wavelet-based descriptors and 2) ConvNet-based descriptors on astrophysics and ImageNet data. In the case of 1), we show that our method better recovers the descriptors of the target data than a standard denoising method in most situations. Additionally, despite not being constructed for this purpose, it performs surprisingly well in terms of peak signal-to-noise ratio on full signal reconstruction. In comparison, representation 2) appears less suitable for image denoising. Finally, we extend this method by introducing a diffusive stepwise algorithm, which gives a new perspective on the initial method and leads to promising results for image denoising under specific circumstances.
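
The core idea, matching the statistics of a noise-corrupted candidate to those of the observed mixture, can be written down in a few lines. The toy below works on a 1-D signal with band-averaged power-spectrum descriptors and a generic optimizer; the paper's experiments use wavelet- and ConvNet-based descriptors on images, so treat this purely as a schematic under those simplifying assumptions.

```python
# Toy 1-D version of "statistical component separation": find a candidate x
# such that the descriptors of (x + noise sample) match, on average, the
# descriptors of the observed mixture y = signal + noise. Descriptors here are
# band-averaged power spectra; this is only a schematic illustration.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
T = 64
t = np.arange(T)
signal = np.sin(2 * np.pi * 3 * t / T)          # hidden target signal
noise_samples = rng.standard_normal((8, T))     # assumed access to the noise process
y = signal + rng.standard_normal(T)             # observed mixture

def descriptors(x, n_bands=8):
    power = np.abs(np.fft.rfft(x)) ** 2
    return np.array([band.mean() for band in np.array_split(power, n_bands)])

phi_y = descriptors(y)

def loss(x_flat):
    # Average descriptors of the candidate corrupted by known noise samples,
    # and match them to the descriptors of the observed mixture.
    phi_mean = np.mean([descriptors(x_flat + n) for n in noise_samples], axis=0)
    return np.sum((phi_mean - phi_y) ** 2)

x_hat = minimize(loss, np.zeros(T), method="L-BFGS-B").x
print("descriptor error of raw mixture:", np.sum((descriptors(y) - descriptors(signal)) ** 2))
print("descriptor error of estimate   :", np.sum((descriptors(x_hat) - descriptors(signal)) ** 2))
```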


Training self-learning circuits for power-efficient solutions

Menachem Stern, Douglas J. Durian, Andrea J. Liu, et al.

As the size and ubiquity of artificial intelligence and computational machine learning models grow, the energy required to train and use them is rapidly becoming economically and environmentally unsustainable. Recent laboratory prototypes of self-learning electronic circuits, such as “physical learning machines,” open the door to analog hardware that directly employs physics to learn desired functions from examples at a low energy cost. In this work, we show that this hardware platform allows for an even further reduction in energy consumption by using good initial conditions and a new learning algorithm. Using analytical calculations, simulations, and experiments, we show that a trade-off emerges when learning dynamics attempt to minimize both the error and the power consumption of the solution—greater power reductions can be achieved at the cost of decreasing solution accuracy. Finally, we demonstrate a practical procedure to weigh the relative importance of error and power minimization, improving the power efficiency given a specific tolerance to error.
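
The trade-off can be caricatured by minimizing a weighted objective, error plus λ times a power proxy, and sweeping λ. The sketch below does this for a toy linear model with the sum of squared weights standing in for dissipated power; it is not the coupled-learning rule or the physical power measure used by the circuits in the paper.

```python
# Schematic of the error/power trade-off: minimize J = error + lambda * power
# for a toy linear regression, where "power" is a simple quadratic proxy
# (sum of squared weights). Real physical learning machines adjust
# conductances with local rules and a physical dissipated power; this only
# illustrates how weighting the two terms trades accuracy against power.
import numpy as np

rng = np.random.default_rng(3)
X = rng.standard_normal((200, 10))
w_true = rng.standard_normal(10)
y = X @ w_true + 0.1 * rng.standard_normal(200)

def train(lam, lr=0.01, steps=2000):
    w = np.zeros(10)
    for _ in range(steps):
        err_grad = 2 * X.T @ (X @ w - y) / len(y)   # gradient of mean squared error
        power_grad = 2 * w                          # gradient of the power proxy
        w -= lr * (err_grad + lam * power_grad)
    return np.mean((X @ w - y) ** 2), np.sum(w ** 2)

for lam in [0.0, 0.1, 1.0, 10.0]:
    error, power = train(lam)
    print(f"lambda={lam:5.1f}  error={error:.3f}  power={power:.3f}")
```

Sweeping λ traces out the trade-off curve described in the abstract: larger λ yields lower-power solutions at the cost of higher error.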
