2024 National Institute for Theory and Mathematics in Biology Annual Meeting

Date & Time
April 4 – 5, 2024

Organizers:
Richard Carthew, Northwestern University

Meeting Goals:
The National Institute for Theory and Mathematics in Biology (NITMB) was created by the National Science Foundation and the Simons Foundation to enable innovative research at the intersection of mathematical and biological sciences. The two overarching goals of the NITMB are to catalyze integration of mathematics into fundamental biological research and to develop new mathematics inspired by biological phenomena and practices. Engagement from the larger research community is an important part of the NITMB’s vision.

The 2024 annual meeting of the NITMB will bring together leading mathematicians, computer scientists, physicists, and biologists who are interested in interdisciplinary research that aligns with the NITMB’s goals.

The meeting will survey progress in several biological fields, including neuroscience, developmental biology, evolutionary biology, and cell biology. Speakers will report on results enabled by methodologies such as data-driven modeling and inference, dynamical systems far from equilibrium, dimension reduction, stochastic optimization, and information theory. The presentations will elucidate a broad spectrum of theoretical and experimental work cutting across traditional boundaries.

  • Meeting Report

    Accompanied by a 4.8-magnitude earthquake and preceding a continent-spanning total solar eclipse by three days, the first NITMB Annual Meeting at the Simons Foundation was held on April 4–5, 2024. Foremost in excitement, though, was the opportunity for mathematicians and biologists to convene and learn about important new advances in mathematical biology. The NITMB was founded in 2023 to create an international nexus for scientists working at the interface between mathematics and biology, two disciplines that have historically overlapped only sporadically. Sponsored in equal parts by the Simons Foundation and the National Science Foundation, the NITMB is a joint venture of Northwestern University and the University of Chicago. The institute is located in downtown Chicago, where it supports a broad variety of convening programs, as well as research aimed at better integrating mathematics with biology. NITMB research aims to broaden the use of mathematics in biological research to advance understanding of living systems, and to develop new mathematics inspired by biology, which may in turn be applied to harness new biological discoveries.

    Almost 100 people attended the meeting in person, including 27 trainees, with another 10 trainees and faculty attending virtually. The meeting brought together pure and applied mathematicians, computer scientists, theoretical physicists and empirical biologists. Although most attendees were faculty and trainees from the NITMB, the meeting also included people from around the United States and Europe. The meeting consisted of a poster session that catalyzed new interactions between disciplines, and eight talks, all of which were enthusiastically received with vigorous questions and engagement.

    Daniel Fisher (Stanford University) kicked off the meeting by talking about his theoretical studies of microbial evolution. Inspired by the laboratory evolution experiments done by Richard Lenski on the bacterium Escherichia coli, Dr. Fisher presented two mathematically based models of evolution. The first model was a numerical analysis of how a population of cells evolves in isolation over time when environmental conditions are invariant. The theory of neutral evolution states that stochastic mutation and genetic drift are responsible for the genetic variation at the population level under such circumstances. Dr. Fisher found that, instead, selection of new mutants with small effects on cell fitness could theoretically account for continual evolution of the population to quasi-steady states, as had been observed in the lab evolution experiments. Dr. Fisher also modeled evolution of co-existing populations of bacteria and a virus that infects the bacteria. Again, he found that there was continual evolution of the two populations, but the dynamics were chaotic in nature. This discovery resonates with empirical observations of fluctuating host-pathogen population dynamics in the natural world.
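
    To give a concrete flavor of this kind of analysis, the sketch below is a minimal, generic Wright–Fisher-style simulation (not Dr. Fisher's model) in which rare beneficial mutations of small effect drive continual adaptation in a constant environment; the population size, mutation rate and effect size are arbitrary illustrative choices.

        # Toy Wright-Fisher simulation with rare, small-effect beneficial mutations.
        # Generic illustration only; N, mu and s are arbitrary, not values from the talk.
        import numpy as np

        rng = np.random.default_rng(0)
        N, mu, s, generations = 10_000, 1e-3, 0.01, 2_000

        log_fitness = np.zeros(N)                      # log-fitness of each individual
        for _ in range(generations):
            # selection: parents are sampled in proportion to their fitness
            w = np.exp(log_fitness - log_fitness.max())
            parents = rng.choice(N, size=N, p=w / w.sum())
            log_fitness = log_fitness[parents]
            # mutation: each offspring gains a small beneficial effect with probability mu
            mutants = rng.random(N) < mu
            log_fitness[mutants] += s

        print(f"mean log-fitness after {generations} generations: {log_fitness.mean():.3f}")

    In this toy setting, mean fitness keeps climbing through the steady supply and fixation of small-effect mutants rather than settling at a neutral plateau, echoing the continual evolution described in the talk.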

    Yogesh Goyal (Northwestern University) spoke about his own work on cell evolution, in his case aimed at understanding how tumor cells evolve under selection during anti-cancer drug treatments. Inspired by the landmark mathematical analysis of bacterial mutagenesis by Luria and Delbrück in 1943, Dr. Goyal described his use of cutting-edge genomic methods to tag and monitor thousands of cancer cell lineages before and after drug selection. He found that rare cells exist in the population before selection that become drug resistant once exposed to the selective agent. Resistance is not genetic in origin but is pre-determined by molecular differences in the initial state of cells. Remarkably, as cells divide, they retain a memory of this initial state for several generations. The nature of these molecular states and their memory mechanism remain to be elucidated. Dr. Goyal’s work exemplifies how mathematical dimension reduction can provide insights into the mysterious high-dimensional states of a cancer cell.
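
    The logic of the Luria–Delbrück argument can be illustrated with a short simulation (a generic sketch, not Dr. Goyal's lineage-tracing analysis): if resistance pre-exists and is inherited through cell divisions, the number of resistant cells varies wildly from culture to culture ("jackpots"), whereas resistance acquired only upon exposure gives Poisson-like variation. All parameter values below are arbitrary.

        # Toy fluctuation test in the spirit of Luria & Delbrueck (1943).
        # Illustrative only; culture sizes and mutation probabilities are arbitrary.
        import numpy as np

        rng = np.random.default_rng(1)
        cultures, divisions, p = 1_000, 14, 1e-4      # 2**14 = 16,384 cells per culture

        def grow_one_culture(rng):
            """Grow a culture from one cell; resistant lineages inherit resistance."""
            total, resistant = 1, 0
            for _ in range(divisions):
                new_mutants = rng.binomial(total - resistant, p)  # mutations at division
                total *= 2
                resistant = 2 * resistant + new_mutants
            return resistant

        n_final = 2 ** divisions
        pre = np.array([grow_one_culture(rng) for _ in range(cultures)])
        # matched-mean comparison: resistance "acquired" only at the moment of exposure
        acq = rng.binomial(n_final, pre.mean() / n_final, size=cultures)

        print(f"heritable, pre-existing:  mean {pre.mean():.1f}  variance {pre.var():.1f}")
        print(f"acquired at exposure:     mean {acq.mean():.1f}  variance {acq.var():.1f}")

    The heritable case shows a variance far in excess of its mean because early-arising variants expand into large clones, which is the statistical signature Luria and Delbrück exploited.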

    Brent Doiron (University of Chicago) talked about his theoretical studies of the nervous system. Neuronal activity is very heterogeneous in response to a specific stimulus; some neurons emit many action potentials while others are silent. Further, trial-to-trial fluctuations of neuronal activity occupy a low-dimensional space, owing to correlated activity of neurons within a population. Using techniques from the theory of random matrices, Dr. Doiron linked these two aspects of neuronal response and showed that the more heterogeneous the neuronal firing rates, the lower the dimension of their population's trial-to-trial variability. This prediction was validated for multiple datasets from numerous brain areas in rodents, primates and humans. Dr. Doiron presented a simple theory in which a more heterogeneous neuronal code leads to better fine-discrimination performance, particularly when the brain is in more heightened states of information processing.
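
    As a purely illustrative way to see how these two quantities can be measured from data, the sketch below computes firing-rate heterogeneity and the participation-ratio dimension of trial-to-trial covariability from a simulated trials-by-neurons response matrix; it is not Dr. Doiron's recurrent-circuit derivation, and all parameters are invented.

        # Illustrative only: measure rate heterogeneity and effective dimension of
        # trial-to-trial covariability from a trials x neurons response matrix.
        import numpy as np

        rng = np.random.default_rng(2)
        trials, neurons = 500, 200

        # toy data: heterogeneous (lognormal) mean rates plus one shared fluctuation mode
        mean_rates = rng.lognormal(mean=1.0, sigma=1.0, size=neurons)
        shared = rng.normal(size=(trials, 1)) * rng.normal(size=(1, neurons))
        responses = mean_rates + shared + rng.normal(scale=0.5, size=(trials, neurons))

        heterogeneity = mean_rates.std() / mean_rates.mean()     # CV of trial-averaged rates

        cov = np.cov(responses, rowvar=False)                    # trial-to-trial covariance
        eigvals = np.linalg.eigvalsh(cov)
        dimension = eigvals.sum() ** 2 / (eigvals ** 2).sum()    # participation ratio

        print(f"rate heterogeneity (coefficient of variation): {heterogeneity:.2f}")
        print(f"effective dimension of covariability: {dimension:.1f} (out of {neurons})")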

    Niall Mangan (Northwestern University) spoke about her work applying mathematics to discover dynamic models that describe time series from biological networks. Machine learning often does not provide mechanistic insights, whereas heuristic model selection based on information theory is often limited in model sampling. Dr. Mangan described an approach that uses sparse optimization to select a subset of nonlinear dynamic network models. Since many biological networks have abundant nonlinearities, this approach is especially attractive for discovering novel interactions controlling dynamic behavior. The sparse optimization algorithm discovers the most parsimonious models from a combinatorially large set of nonlinear network models (roughly 10⁹ models for a simple three-species system). Dr. Mangan applied this innovative data-driven approach to time-series data from a hibernating mammal and successfully found models consistent with metabolic regulation.
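
    The core idea of this kind of sparse model discovery can be sketched in a few lines (a generic SINDy-style illustration, not Dr. Mangan's actual pipeline or data): regress measured derivatives against a library of candidate nonlinear terms and iteratively prune small coefficients so that only a parsimonious model survives. The toy system, library and threshold below are arbitrary.

        # Generic sparse model discovery sketch (sequential thresholded least squares).
        # Toy system: logistic growth dx/dt = x - x^2; library and threshold are arbitrary.
        import numpy as np

        rng = np.random.default_rng(3)
        x = rng.uniform(0.05, 0.95, size=400)
        dxdt = x - x**2 + rng.normal(scale=1e-3, size=x.size)       # "measured" derivatives

        library = np.column_stack([np.ones_like(x), x, x**2, x**3])  # candidate terms
        names = ["1", "x", "x^2", "x^3"]

        coef, *_ = np.linalg.lstsq(library, dxdt, rcond=None)
        for _ in range(10):                                          # threshold, then refit
            small = np.abs(coef) < 0.1
            coef[small] = 0.0
            keep = ~small
            if keep.any():
                coef[keep], *_ = np.linalg.lstsq(library[:, keep], dxdt, rcond=None)

        model = " + ".join(f"{c:.2f}*{n}" for c, n in zip(coef, names) if c != 0.0)
        print("discovered model: dx/dt =", model)

    Repeatedly thresholding and refitting drives negligible terms to exactly zero, which is what makes the recovered model both predictive and interpretable.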

    Arvind Murugan (University of Chicago) related a theoretical study of how mechanisms emerged to enhance biological fidelity beyond the limits set by equilibrium thermodynamics. Processive biochemical reactions often inject chemical energy at each step so that enzymes catalyze the reactions with orders-of-magnitude greater fidelity than they would at equilibrium. This is known as kinetic proofreading. Dr. Murugan described how errors in the processive reaction generate a temporal delay, such that if the reaction is optimized for speed, it will naturally evolve a kinetic proofreading mechanism. Thus, the costs in time and energy of kinetic proofreading would be offset by reactions occurring at greater speed, and therefore possibly greater fitness. Dr. Murugan expanded on this plausible mechanism for how proofreading evolved in DNA replication and protein synthesis, arguing that it generalizes to cell-scale and other higher-scale phenomena as well.
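
    For background, the textbook Hopfield–Ninio estimate (included here only as context, not as Dr. Murugan's speed-based argument) makes the fidelity gain concrete: if right and wrong substrates differ by a binding free energy Δ, an equilibrium step can discriminate at best by the Boltzmann factor, while an energy-consuming proofreading step lets the same Δ be read twice, roughly squaring the discrimination.

        % Textbook kinetic-proofreading estimate (Hopfield 1974; Ninio 1975), shown only
        % as background for the orders-of-magnitude fidelity gain mentioned above.
        \[
          \frac{[\text{wrong}]}{[\text{right}]}\bigg|_{\text{equilibrium}}
          \;\simeq\; e^{-\Delta/k_{\mathrm B}T},
          \qquad
          \frac{[\text{wrong}]}{[\text{right}]}\bigg|_{\text{proofread}}
          \;\simeq\; \left(e^{-\Delta/k_{\mathrm B}T}\right)^{2},
        \]
        % with the improvement paid for in extra time and extra fuel (e.g., nucleotide
        % hydrolysis) per incorporated monomer.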

    The second day of the meeting started with Paul François (Université de Montréal) speaking about complex dynamical systems in development and immunology. Dr. François described using a combination of first-principle theoretical modeling with simple machine-learning autoencoders to build tractable models of biological dynamics. This allowed Dr. François to discover a small number of latent variables acting within each system of study. By inferring dynamics of the latent variables, a latent space could be described for the system. Dr. François applied his modeling to a simple gene regulatory network acting in the fruit fly embryo, and found two latent variables that describe the system. He also modeled the complex cytokine responses of immune T cells when exposed to various antigens, and found the latent space captured key modalities of T cell response. The latent space is consistent with T cell receptors responding to antigens using a kinetic proofreading mechanism to accurately discriminate between different types of antigens.
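
    A minimal sketch of the latent-variable idea is given below (a generic linear autoencoder on simulated data, not Dr. François' models of the fly gene network or T-cell cytokine responses): twenty observed variables that are secretly driven by two hidden ones are compressed to two latent coordinates that suffice to reconstruct the data. Dimensions, rates and step counts are arbitrary.

        # Minimal linear autoencoder with two latent variables, trained by gradient
        # descent on simulated data. Generic illustration only.
        import numpy as np

        rng = np.random.default_rng(4)
        n, d, k = 1_000, 20, 2

        # toy data: 20 observed variables generated from 2 hidden variables plus noise
        hidden = rng.normal(size=(n, k))
        X = hidden @ rng.normal(size=(k, d)) + 0.1 * rng.normal(size=(n, d))
        X = (X - X.mean(axis=0)) / X.std(axis=0)

        E = 0.1 * rng.normal(size=(d, k))       # encoder weights
        D = 0.1 * rng.normal(size=(k, d))       # decoder weights
        for _ in range(3_000):
            Z = X @ E                           # latent coordinates
            R = X - Z @ D                       # reconstruction residual
            D += 0.01 * (Z.T @ R) / n           # gradient steps on mean squared error
            E += 0.01 * (X.T @ R @ D.T) / n

        print(f"variance per coordinate:              {np.mean(X**2):.3f}")
        print(f"residual with {k} latent variables:     {np.mean(R**2):.3f}")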

    Stephanie Palmer (University of Chicago) spoke about how the nervous system encodes predictive features that are most useful for fast and effective movements. Dr. Palmer found that predictive computation begins even at the earliest stages of the visual system, in the retina. Using data from the salamander retina, Dr. Palmer applied techniques from statistical physics and information theory to assess how the retina achieves predictive computation. She compared this to predictive computation performed by fruit flies in flight, observing parallels and differences with the salamander system. Dr. Palmer also discussed using an autoencoder model to characterize the latent space of retinal processing when stimulated by a variety of dynamic natural scenes, yielding a low-dimensional representation of time in the natural scenes.
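
    The notion of predictive information underlying this kind of analysis can be illustrated with a toy calculation (generic, not Dr. Palmer's retinal or fly data): for a simple autoregressive "stimulus", the mutual information between past and future values quantifies how much is, in principle, predictable across a given lag. The coefficient, lag and sample size below are arbitrary.

        # Toy estimate of predictive information I(past; future) for an AR(1) "stimulus".
        # Generic illustration; not an analysis of retinal responses.
        import numpy as np

        rng = np.random.default_rng(5)
        a, T, lag = 0.9, 50_000, 4              # AR(1) coefficient, samples, prediction lag

        x = np.zeros(T)
        for t in range(1, T):                   # x_t = a * x_{t-1} + noise
            x[t] = a * x[t - 1] + rng.normal()

        past, future = x[:-lag], x[lag:]
        r = np.corrcoef(past, future)[0, 1]     # correlation across the prediction lag
        bits = -0.5 * np.log2(1.0 - r**2)       # Gaussian mutual information in bits

        print(f"correlation at lag {lag}: {r:.3f} (theory: {a**lag:.3f})")
        print(f"predictive information:  {bits:.3f} bits")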

    The meeting was concluded by Richard Carthew (Northwestern University), who talked about how growth can distort the genotype-to-phenotype map. Dr. Carthew discussed how slowing animal growth will often suppress defects in cell fate specification caused by mutation. Conversely, accelerating growth will often enhance such mutationally driven defects. These effects are not specific to particular body systems or stages of development. Dr. Carthew described a simple and general model of developmental gene expression, framed in control-theoretic terms, that made predictions about expression dynamics when subjected to growth variation. These predictions were validated by experiments in the fruit fly. The study highlights a deep connection between growth and development.

  • Agenda

    Thursday

    8:30 AM    CHECK-IN & BREAKFAST
    9:30 AM    Daniel Fisher | Ecology and Perpetual Evolution in High Dimensions
    10:30 AM   BREAK
    11:00 AM   Yogesh Goyal | Chance, Necessity, or Free Will: Decision Making in Single Cells
    12:00 PM   LUNCH
    1:00 PM    Brent Doiron | Heterogeneity and Dimension in Recurrent Neuronal Networks
    2:00 PM    BREAK
    2:30 PM    Niall Mangan | Data-Driven Model Discovery Meets Mechanistic Modeling for Biological Systems
    3:30 PM    BREAK
    4:00 PM    Arvind Murugan | The Origin of Non-Equilibrium Order
    5:00 PM    DAY ONE CONCLUDES

    Friday

    8:30 AM    CHECK-IN & BREAKFAST
    9:30 AM    Paul François | Understanding Complex (Biological) Systems in Latent Space
    10:30 AM   BREAK
    11:00 AM   Stephanie Palmer | What Should Biological Systems Throw Away?
    12:00 PM   LUNCH
    1:00 PM    Richard Carthew | Tweaking Mendel’s First Law of Inheritance with Growth
    2:00 PM    MEETING CONCLUDES
  • Abstracts & Slides

    Richard Carthew
    Northwestern University

    Tweaking Mendel’s First Law of Inheritance with Growth

    Expression of simple Mendelian traits rests upon the dominant-recessive features of gene alleles. However, it has long been known that limiting nutrition or lowering temperature can perturb the phenotypes of some dominant or recessive alleles. We study this phenomenon and find evidence for pervasive uncoupling of phenotype from genotype if animal growth is attenuated by limiting either carbohydrate metabolism or protein synthesis. We use a general model of developmental fate lineage restriction that is based on control theory and that simulates gene expression dynamics. The model predicts that crippling activators or repressors of gene expression (i.e., with mutations) has a lesser effect on gene expression (and consequently on mutant phenotypes) when general protein translation or ATP turnover is limiting. These model predictions are experimentally validated in the Drosophila system.
     

    Brent Doiron
    University of Chicago

    Heterogeneity and Dimension in Recurrent Neuronal Networks
    View Slides (PDF)

    We will discuss two distinct features of neuronal response. First, neuronal activity is very heterogeneous — in response to a specific stimulus or behavior some neurons emit many action potentials, and many others are relatively silent. Second, trial-to-trial fluctuations of neuronal response occupy a low dimensional space, owing to significant correlations between the joint activity of neurons within a population. We will link these two aspects of neural representation using a recurrent circuit model and derive the following relation: the more heterogeneous the distribution of trial-averaged responses, the lower the effective dimension of population trial-to-trial covariability. This surprising prediction is tested and validated using multiple population datasets from numerous brain areas in mice, non-human primates and in the motor cortex of human subjects. We present a simple theory whereby a more heterogeneous neuronal code leads to better fine discrimination performance through a lowering of the dimension of population covariability. In line with this result, we show that neural populations across the brain exhibit both more heterogeneous mean responses and lower-dimensional fluctuations when the brain is in more heightened states of information processing. In sum, we present a key organizational principle of neural population response that is widely observed across the nervous system and acts to synergistically improve population representation.
     

    Daniel Fisher
    Stanford University

    Ecology and Perpetual Evolution in High Dimensions
    View Slides (PDF)

    In a simple, constant environment, does evolution continue forever? Does extensive diversification occur via small genetic and ecological differences? What are the general evolutionary consequences of organismic complexity? Hints from long-term laboratory evolution experiments and from genomic data of within-species bacterial diversity motivate considering these questions. Several simple models of evolution with small ecological feedback will be introduced, with the high dimensionality of phenotype space enabling mathematical analysis.
     

    Paul François
    University of Montreal

    Understanding Complex (Biological) Systems in Latent Space
    Many phenomena in biology are considered too complicated or too contingent to be captured by predictive theories similar to those in physics. But complex systems theory has taught us that simple, higher-level laws with few effective parameters can emerge from the interaction of small-scale components. As biology becomes more and more quantitative, one can use a combination of first-principle theoretical modeling and simple machine-learning techniques to build accurate and tractable theories of biological dynamics. Those dynamics can often be best understood in (abstract) latent spaces, giving “physics-like” intuition. Paul François will illustrate the power of such approaches using examples from embryonic development, immunology and machine learning.
     

    Yogesh Goyal
    Northwestern University

    Chance, Necessity or Free Will: Decision Making in Single Cells

    Single cell variations within a genetically homogeneous population of cells can lead to significant differences in cell fate in response to external stimuli. This is particularly relevant in cancer cells, where a small population of cells can evade therapies to develop resistance. In this talk, Yogesh Goyal will present our ongoing work on tracing the origins, nature and manifestations of single cell variations in response to a variety of cytotoxic chemotherapies and targeted therapies in various cancer models. Our experimental and computational designs promise to provide a foundation for controlling single-cell variabilities in cancer and other biological contexts, such as stem cell reprogramming and transdifferentiation.
     

    Niall Mangan
    Northwestern University

    Data-Driven Model Discovery Meets Mechanistic Modeling for Biological Systems
    View Slides (PDF)

    Building models for biological, chemical and physical systems has traditionally relied on domain-specific intuition about which interactions and features most strongly influence a system. Alternatively, machine-learning methods are adept at finding novel patterns in large data sets and building predictive models but can be challenging to interpret in terms of, or integrate with, existing knowledge. Our group balances traditional modeling with data-driven methods and optimization to get the best of both worlds. Recently developed for and applied to dynamical systems, sparse optimization strategies can select a subset of terms from a library that best describes data, automatically inferring potential model structures from a broad but well-defined class. Niall Mangan will discuss their group’s application and development of data-driven methods for model selection to (1) recover chaotic systems models from data with hidden variables and (2) discover models for metabolic and temperature regulation in hibernating mammals. Mangan will briefly discuss current preliminary work and roadblocks in developing new methods for model selection of biological metabolic and regulatory networks.
     

    Arvind Murugan
    University of Chicago

    The Origin of Non-Equilibrium Order
    View Slides (PDF)

    Since at least Schrödinger, physicists have seen life as a non-equilibrium process that has successfully fought the 2nd law of thermodynamics by maintaining order for 4 billion years. While we understand how extant biological Maxwell Demons work, much less is known about how such Demons come into existence in the first place. Using theoretical and experimental work on the molecular machinery that copies DNA-based information, we suggest that surprisingly little might be needed — proofreading mechanisms that maintain non-equilibrium order can potentially arise due to selection for faster replication, even if the order itself is not beneficial in any way. We argue that such order-through-speed mechanisms might also be relevant for any process that creates a wide variance in the distribution of replication times. Our work suggests the intriguing possibility that non-equilibrium order can arise more easily than assumed, as a byproduct of fast self-replication, even before that order is directly functional.
     

    Stephanie Palmer
    University of Chicago

    What Should Biological Systems Throw Away?
    View Slides (PDF)

    Biological systems must selectively encode partial information about the environment, as dictated by the capacity constraints at work in all living organisms. Classical efficient coding theory describes how sensory systems can maximize information transmission given such capacity constraints, but it treats all input features equally. Not all inputs are, however, of equal value to the organism. Our work quantifies whether and how the brain selectively encodes stimulus features, specifically predictive features, that are most useful for fast and effective movements. We have shown that efficient predictive computation starts at the earliest stages of the visual system, in the retina. We borrow techniques from statistical physics and information theory to assess how we get terrific, predictive vision from these imperfect component parts. In broader terms, we aim to build a more complete theory of efficient encoding in the brain, and along the way have found some intriguing connections between formal, mathematical notions of coarse graining in biology and physics.

  • Contacts

    Registration and Travel Assistance
    Ovation Travel Group
    [email protected]
    (917) 408-8384 (24-Hours)
    www.ovationtravel.com

    Meeting Questions and Assistance
    Meghan Fazzi
    Manager, Events and Administration, MPS, Simons Foundation
    [email protected]
    (212) 524-6080
