2743 Publications

Facilitating analysis of open neurophysiology data on the DANDI Archive using large language model tools

The DANDI Archive is a key resource for sharing open neurophysiology data, hosting over 400 datasets in the Neurodata Without Borders (NWB) format. While these datasets hold tremendous potential for reanalysis and discovery, many researchers face barriers to reuse, including unfamiliarity with access methods and difficulty identifying relevant content. Here we introduce an AI-powered, agentic chat assistant and a notebook generation pipeline. The chat assistant serves as an interactive tool for exploring DANDI datasets. It leverages large language models (LLMs) and integrates with agentic tools to guide users through data access, visualization, and preliminary analysis. The notebook generator analyzes dataset structure with minimal human input, executing inspection scripts and generating visualizations. It then produces an instructional Python notebook tailored to the dataset. We applied this system to 12 recent datasets. Review by neurophysiology data specialists found the generated notebooks to be generally accurate and well-structured, with most notebooks rated as “very helpful.” This work demonstrates how AI can support FAIR principles by leveraging data standards and lowering barriers to data reuse and engagement.
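
The workflow that both tools build on, programmatic access to NWB files hosted on DANDI, can be illustrated with the public dandi and pynwb Python packages. The sketch below is not the system described in the abstract; the dandiset ID is a placeholder, and the code simply downloads one NWB asset and prints the kind of high-level metadata a generated notebook might start from.

```python
# Minimal sketch of programmatic access to an NWB dataset on DANDI, using the
# public `dandi` and `pynwb` packages. The dandiset ID is a placeholder; this
# is not the assistant or notebook pipeline described in the abstract.
from dandi.dandiapi import DandiAPIClient
from pynwb import NWBHDF5IO

DANDISET_ID = "000000"  # hypothetical dandiset identifier

with DandiAPIClient() as client:
    dandiset = client.get_dandiset(DANDISET_ID)
    # List NWB assets so a user (or an LLM tool) can pick a file to inspect.
    nwb_assets = [a for a in dandiset.get_assets() if a.path.endswith(".nwb")]
    nwb_assets[0].download("example.nwb")  # fetch one file for local inspection

# Open the file and print high-level metadata, the kind of structural summary
# a notebook-generation step might begin from.
with NWBHDF5IO("example.nwb", mode="r") as io:
    nwbfile = io.read()
    print(nwbfile.session_description)
    print(list(nwbfile.acquisition.keys()))
```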

A Model-Guided Neural Network Method for the Inverse Scattering Problem

Olivia Tsang, O. Melia, Vasileios Charisopoulos, Jeremy Hoskins, Rebecca Willett

Inverse medium scattering is an ill-posed, nonlinear wave-based imaging problem arising in medical imaging, remote sensing, and non-destructive testing. Machine learning (ML) methods offer increased inference speed and flexibility in capturing prior knowledge of imaging targets relative to classical optimization-based approaches; however, they perform poorly in regimes where the scattering behavior is highly nonlinear. A key limitation is that ML methods struggle to incorporate the physics governing the scattering process, which are typically inferred implicitly from the training data or loosely enforced via architectural design. In this paper, we present a method that endows a machine learning framework with explicit knowledge of problem physics, in the form of a differentiable solver representing the forward model. The proposed method progressively refines reconstructions of the scattering potential using measurements at increasing wave frequencies, following a classical strategy to stabilize recovery. Empirically, we find that our method provides high-quality reconstructions at a fraction of the computational or sampling costs of competing approaches.
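
The continuation strategy described above, refining the estimate as measurements at increasing wave frequencies are brought in, has the following generic shape. This is a minimal sketch with a toy linear forward operator standing in for the differentiable wave solver; it is meant only to show the warm-started loop over frequencies, not the paper's method or its learned components.

```python
import numpy as np

def frequency_continuation(apply_forward, apply_adjoint, data, freqs, q0,
                           step=0.1, iters=200):
    """Warm-started frequency continuation: at each frequency, take gradient
    steps on the least-squares data misfit, initializing from the estimate
    obtained at the previous (lower) frequency."""
    q = q0.copy()
    for k in freqs:                                    # low to high frequency
        for _ in range(iters):
            residual = apply_forward(q, k) - data[k]
            q = q - step * apply_adjoint(residual, k)  # plain gradient step
    return q

# Toy *linear* stand-in for the scattering map, just to show the structure;
# the true inverse medium problem is nonlinear and uses a wave-equation solver.
rng = np.random.default_rng(0)
freqs = [1, 2, 4]
A = {k: rng.normal(size=(40, 20)) / np.sqrt(40) for k in freqs}
q_true = rng.normal(size=20)
data = {k: A[k] @ q_true for k in freqs}
q_hat = frequency_continuation(lambda q, k: A[k] @ q,
                               lambda r, k: A[k].T @ r,
                               data, freqs, q0=np.zeros(20))
print(np.linalg.norm(q_hat - q_true))  # small residual error on the toy problem
```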

Protein Design with Agent Rosetta: A Case Study for Specialized Scientific Agents

Jacopo Teneggi, Tanya Marwah, A. Bietti, P. Douglas Renfrew, Vikram Mulligan, S. Golkar

Large language models (LLMs) are increasingly capable of emulating reasoning and using tools, creating opportunities for autonomous agents that execute complex scientific tasks. Protein design provides a natural case study: existing deep learning models achieve strong results, but they are typically restricted to canonical amino acids and narrow objectives, leaving space for a generalist tool for broad design pipelines. We introduce Agent Rosetta, an LLM agent built on top of the Rosetta suite---the leading physics-based software for heteropolymer design, capable of modeling non-canonical building blocks and geometries. Agent Rosetta is a single-agent, multi-turn framework that iteratively refines heteropolymers to achieve the goals of a user-defined task brief, combining the biophysical knowledge of modern LLMs with the accuracy of Rosetta's physics-based methods. In evaluations, Agent Rosetta achieves performance comparable to specialized deep learning models, especially when combined with inference-time techniques such as best-of-n sampling. Interestingly, we find that prompt engineering alone is insufficient for reliably producing RosettaScripts actions. This underscores the need for building a comprehensive environment that, for example, simplifies the most challenging aspects of RosettaScripts syntax. These results demonstrate that combining frontier LLMs with established domain-specific scientific tools can yield flexible agentic frameworks that not only lower barriers to use but also achieve performance competitive with specialized deep learning models.
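
The inference-time technique mentioned above, best-of-n sampling, is easy to illustrate in isolation: draw several candidate designs and keep the one the scoring function prefers. The propose and score callables below are toy placeholders, not Agent Rosetta's tools or Rosetta's scoring.

```python
import random

def best_of_n(propose, score, n=8):
    """Best-of-n selection: sample n candidates and keep the highest-scoring one."""
    candidates = [propose() for _ in range(n)]
    return max(candidates, key=score)

# Toy usage with placeholder functions: random 20-mer sequences scored by a
# stand-in metric (here, alanine count), not a Rosetta energy.
propose = lambda: "".join(random.choices("ACDEFGHIKLMNPQRSTVWY", k=20))
score = lambda seq: seq.count("A")
print(best_of_n(propose, score))
```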

EmbryoProfiler: A Visual Clinical Decision Support System for IVF

Johannes Knittel, Simon Warchol, D. Needleman, et al.

In-vitro fertilization (IVF) has become standard practice to address infertility, which affects more than one in ten couples in the US. However, current protocols yield relatively low success rates of about 20% per treatment cycle. A critical but complex and time-consuming step is the grading and selection of embryos for implantation. Although incubators with time-lapse microscopy have enabled computational analysis of embryo development, existing automated approaches either require extensive manual annotations or use opaque deep learning models that are hard for clinicians to validate and trust. We present EmbryoProfiler, a visual analytics system collaboratively developed with embryologists, biologists, and machine learning researchers to support clinicians in visually assessing embryo viability from time-lapse microscopy imagery. Our system incorporates a deep learning pipeline that automatically annotates microscopy images and extracts clinically interpretable features relevant for embryo grading. Our contributions include: (1) a semi-automatic, visualization-based workflow that guides clinicians through fertilization assessment, developmental timing evaluation, morphological inspection, and comparative analysis of embryos; (2) innovative interactive visualizations, such as cell-shape plots, designed to facilitate efficient analysis of morphological and developmental characteristics; and (3) an integrated, explainable machine learning classifier offering transparent, clinically-informed embryo viability scoring to predict live birth outcomes. Quantitative evaluation of our classifier and qualitative case studies conducted with practitioners demonstrate that EmbryoProfiler enables clinicians to make better-informed embryo selection decisions, potentially leading to improved clinical outcomes in IVF treatments.

Cryo-electron microscopy ensemble optimization using individual particles and physical constraints

David Silva-Sánchez, E. Thiede, Roy R. Lederman, P. Cossio

Biomolecules are inherently dynamic, and characterizing their conformational ensemble distributions is essential for understanding their dynamics and biological roles. Cryo-electron microscopy (cryo-EM), a technique that images individual biomolecules frozen in a thin layer of amorphous ice, has emerged as a leading method for determining the structure of biomolecules at atomic resolution. Recent advances in cryo-EM reconstruction have made significant progress toward determining structure in heterogeneous conformational landscapes. In contrast to reconstruction, a different class of techniques, referred to as ensemble reweighting, has been used to infer population weights; these methods, however, have yet to be generalized to simultaneously infer the structural heterogeneity itself. Here, we present a method for cryo-EM ensemble optimization that directly infers the optimal set of structures and their associated population weights from cryo-EM images using Bayesian optimization techniques. Our method iterates between optimizing the structures and weights using a likelihood defined in terms of cryo-EM particle images (not reconstructions) and projecting onto the domain of a physical prior through an approach inspired by projected gradient descent. We test the method on several systems, ranging from a four-atom toy model to a large protein system with real cryo-EM data. We find that our approach successfully recovers the structures and their associated weights across a wide range of experimental conditions, even when the number of structures does not match the actual number of metastable states. Our method paves the way for cryo-EM ensemble optimization of flexible biomolecules exhibiting complex, multimodal conformational landscapes.
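
The alternation the abstract describes, a gradient update on the structures and weights followed by projection steps, can be written schematically as below. The likelihood gradients and the physical projection are stand-ins rather than the paper's implementation; only the projection of the population weights onto the probability simplex (nonnegative, summing to one) is spelled out, and even that constraint is an assumption made for illustration.

```python
import numpy as np

def project_simplex(w):
    """Euclidean projection onto the probability simplex (w >= 0, sum w = 1)."""
    u = np.sort(w)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u + (1.0 - css) / (np.arange(len(w)) + 1.0) > 0)[0][-1]
    theta = (1.0 - css[rho]) / (rho + 1.0)
    return np.maximum(w + theta, 0.0)

def alternating_step(x, w, grad_x, grad_w, project_physical,
                     step_x=1e-3, step_w=1e-2):
    """One schematic iteration: projected gradient step on the structures x,
    then on the population weights w. `grad_x`, `grad_w`, and
    `project_physical` are stand-ins for the image-likelihood gradients and
    the physical prior; they are not the paper's implementation."""
    x = project_physical(x - step_x * grad_x(x, w))
    w = project_simplex(w - step_w * grad_w(x, w))
    return x, w

# Quick check of the weight projection on an unconstrained vector.
print(project_simplex(np.array([0.8, 0.5, -0.2])))  # nonnegative, sums to 1
```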

From Shortcut to Induction Head: How Data Diversity Shapes Algorithm Selection in Transformers

Ryotaro Kawata, Yujin Song, A. Bietti, Naoki Nishikawa, Taiji Suzuki, Samuel Vaiter, D. Wu

Transformers can implement both generalizable algorithms (e.g., induction heads) and simple positional shortcuts (e.g., memorizing fixed output positions). In this work, we study how the choice of pretraining data distribution steers a shallow transformer toward one behavior or the other. Focusing on a minimal trigger-output prediction task -- copying the token immediately following a special trigger upon its second occurrence -- we present a rigorous analysis of gradient-based training of a single-layer transformer. In both the infinite and finite sample regimes, we prove a transition in the learned mechanism: if input sequences exhibit sufficient diversity, measured by a low “max-sum” ratio of trigger-to-trigger distances, the trained model implements an induction head and generalizes to unseen contexts; by contrast, when this ratio is large, the model resorts to a positional shortcut and fails to generalize out-of-distribution (OOD). We also reveal a trade-off between the pretraining context length and OOD generalization, and derive the optimal pretraining distribution that minimizes computational cost per sample. Finally, we validate our theoretical predictions with controlled synthetic experiments, demonstrating that broadening context distributions robustly induces induction heads and enables OOD generalization. Our results shed light on the algorithmic biases of pretrained transformers and offer conceptual guidelines for data-driven control of their learned behaviors.
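
One way to read the "max-sum" diversity measure named in the abstract is, per sequence, the largest gap between consecutive trigger occurrences divided by the total of those gaps; low values mean the triggers are spread out. The paper's exact (distribution-level) definition may differ, so the helper below is only an illustrative reading.

```python
import numpy as np

def max_sum_ratio(sequence, trigger):
    """Illustrative per-sequence 'max-sum' statistic: the largest distance
    between consecutive occurrences of the trigger token divided by the sum
    of those distances. The paper's precise definition may differ; this is
    only a plausible reading of the abstract."""
    positions = np.flatnonzero(np.asarray(sequence) == trigger)
    if len(positions) < 2:
        return float("nan")          # need at least two trigger occurrences
    gaps = np.diff(positions)
    return gaps.max() / gaps.sum()

# A diverse placement of the trigger (token 0) gives a lower ratio than a
# sequence where nearly all of the gap mass sits in one place.
print(max_sum_ratio([0, 1, 2, 0, 3, 4, 0, 5, 6, 0], trigger=0))  # equal gaps -> 1/3
print(max_sum_ratio([0, 0, 1, 2, 3, 4, 5, 6, 7, 0], trigger=0))  # one large gap -> 8/9
```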

Emergence of Linear Truth Encodings in Language Models

Shauli Ravfogel, Gilad Yehudai, Tal Linzen, Joan Bruna, A. Bietti

Recent probing studies reveal that large language models exhibit linear subspaces that separate true from false statements, yet the mechanism behind their emergence is unclear. We introduce a transparent, one-layer transformer toy model that reproduces such truth subspaces end-to-end and exposes one concrete route by which they can arise. We study one simple setting in which truth encoding can emerge: a data distribution where factual statements co-occur with other factual statements (and false statements with other false statements), encouraging the model to learn this distinction in order to lower the LM loss on future tokens. We corroborate this pattern with experiments in pretrained language models. Finally, in the toy setting we observe a two-phase learning dynamic: networks first memorize individual factual associations in a few steps, then, over a longer horizon, learn to linearly separate true from false, which in turn lowers language-modeling loss. Together, these results provide both a mechanistic demonstration and an empirical motivation for how and why linear truth representations can emerge in language models.
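
The kind of linear probing the abstract refers to can be reproduced in a few lines: fit a linear classifier on hidden-state vectors labeled true or false and check whether a separating direction exists. The Gaussian features below are synthetic stand-ins so the snippet runs on its own; real probing experiments would use activations extracted from a pretrained language model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Minimal probing sketch: fit a linear classifier on (synthetic) hidden-state
# vectors labeled true/false. The features are Gaussian stand-ins with a
# planted "truth direction", not activations from a real language model.
rng = np.random.default_rng(0)
d, n = 64, 2000
truth_direction = rng.normal(size=d)
labels = rng.integers(0, 2, size=n)                   # 1 = "true", 0 = "false"
hidden = rng.normal(size=(n, d)) + np.outer(2 * labels - 1, truth_direction)

probe = LogisticRegression(max_iter=1000).fit(hidden, labels)
print("probe accuracy:", probe.score(hidden, labels))  # near 1.0 when a linear
                                                       # truth direction exists
```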

Error Breakdown and Sensitivity Analysis of Dynamical Quantities in Markov State Models

Yehor Tuchkov, L. Evans, S. Hanson, E. Thiede

Markov state models (MSMs) are widely employed to analyze the kinetics of complex systems. Despite their effectiveness in many applications, however, MSMs are prone to systematic or statistical errors, often exacerbated by suboptimal hyperparameter choice. In this article, we attempt to understand how these choices affect the error of estimates of mean first-passage times and committors, key quantities in chemical rate theory. We first evaluate the performance of the recently introduced “stopped-process estimator” [Strahan, J. Long-time-scale predictions from short-trajectory data: A benchmark analysis of the trp-cage miniprotein. J. Chem. Theory Comput. 2021, 17, 2948–2963. DOI: 10.1021/acs.jctc.0c00933.] that attempts to reduce error caused by choosing too large a lag time. We then study the effect of statistical errors on Markov state model construction using the condition number, which measures an MSM’s sensitivity to perturbation. This analysis gives insight into which factors cause an MSM to be more or less sensitive to statistical error. Our work highlights the importance of choosing a good sampling measure, the measure from which the initial points are drawn, and has implications for recent work applying a variational principle for evaluating the committor.
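
The two quantities at the center of this analysis, mean first-passage times and a condition number quantifying sensitivity to perturbations of the transition matrix, can be computed for a toy MSM as below. The condition number shown is that of the linear system solved for the passage times; the paper's sensitivity analysis may use a different (and more refined) definition, so this is only an illustrative version.

```python
import numpy as np

def mfpt_and_condition(T, target, lag=1.0):
    """Mean first-passage times to `target` from a row-stochastic MSM
    transition matrix T at lag time `lag`, together with the condition number
    of the linear system being solved, as one simple measure of sensitivity
    to perturbations in T. This is an illustrative definition, not
    necessarily the one used in the paper."""
    n = T.shape[0]
    other = np.setdiff1d(np.arange(n), np.atleast_1d(target))
    A = np.eye(len(other)) - T[np.ix_(other, other)]
    t_other = np.linalg.solve(A, lag * np.ones(len(other)))
    mfpt = np.zeros(n)
    mfpt[other] = t_other
    return mfpt, np.linalg.cond(A)

# Toy three-state chain 0 <-> 1 <-> 2, with state 2 as the target set.
T = np.array([[0.9, 0.1, 0.0],
              [0.1, 0.8, 0.1],
              [0.0, 0.1, 0.9]])
mfpt, kappa = mfpt_and_condition(T, target=2)
print(mfpt, kappa)
```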

Space-time adaptive methods for parabolic evolution equations

We present a family of integral equation-based solvers for the heat equation, reaction-diffusion systems, the unsteady Stokes equation and the incompressible Navier-Stokes equations in two space dimensions. Our emphasis is on the development of methods that can efficiently follow complex solution features in space-time by refinement and coarsening at each time step on an adaptive quadtree. For simplicity, we focus on problems posed in a square domain with periodic boundary conditions. The performance and robustness of the methods are illustrated with several numerical examples.
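
The per-time-step refine/coarsen cycle on an adaptive quadtree can be illustrated with a generic sketch. This is not the integral-equation machinery presented in the paper; the Cell class and the size-based indicator below are toy stand-ins, meant only to show how a local error indicator drives refinement and coarsening of the tree at each step.

```python
from dataclasses import dataclass, field

@dataclass
class Cell:
    x: float
    y: float
    size: float
    children: list = field(default_factory=list)

def adapt(cell, indicator, refine_tol, coarsen_tol, depth=0, max_depth=6):
    """One adaptation pass: refine leaf cells whose indicator exceeds
    refine_tol, and coarsen a parent whose leaf children all fall below
    coarsen_tol. Calling this once per time step mimics the refine/coarsen
    cycle described above."""
    if cell.children:
        for c in cell.children:
            adapt(c, indicator, refine_tol, coarsen_tol, depth + 1, max_depth)
        if all(not c.children and indicator(c) < coarsen_tol for c in cell.children):
            cell.children = []              # merge children back into this cell
    elif indicator(cell) > refine_tol and depth < max_depth:
        h = cell.size / 2.0
        cell.children = [Cell(cell.x + dx, cell.y + dy, h)
                         for dx in (0.0, h) for dy in (0.0, h)]

# Toy usage on the unit square with cell size as a stand-in error indicator.
root = Cell(0.0, 0.0, 1.0)
for _ in range(3):                          # three adaptation passes
    adapt(root, indicator=lambda c: c.size, refine_tol=0.3, coarsen_tol=0.01)
```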
