2005 Publications

Uniqueness and characteristic flow for a non strictly convex singular variational problem

Jean-François Babadjian, G. Francfort

This work addresses the question of uniqueness of the minimizers of a convex but not strictly convex integral functional with linear growth in a two-dimensional setting. The integrand -- whose precise form derives directly from the theory of perfect plasticity -- behaves quadratically close to the origin and grows linearly once a specific threshold is reached. Thus, in contrast with the only existing literature on uniqueness for functionals with linear growth, namely that pertaining to the generalized least gradient, the integrand is not a norm. We make use of hyperbolic conservation laws hidden in the structure of the problem to tackle uniqueness. Our argument relies strongly on the regularity of a vector field -- the Cauchy stress in the terminology of perfect plasticity -- which allows us to define characteristic lines and then to employ the method of characteristics. Using the detailed structure of the characteristic landscape evidenced in our preliminary study [BF], we show that this vector field is actually continuous, save for possibly two points. The different behaviors of the energy density at zero and at infinity imply an inequality constraint on the Cauchy stress. Under a barrier-type convexity assumption on the set where the inequality constraint is saturated, we show that uniqueness holds for pure Dirichlet boundary data, a stronger result than uniqueness for a given trace on the whole boundary, since our minimizers can fail to attain the boundary data.
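
For concreteness, a prototypical energy density with this two-regime behavior (an illustrative choice on our part; the integrand studied in the paper need not take exactly this form) is
$$ f(\xi) \;=\; \begin{cases} \tfrac12\,|\xi|^{2}, & |\xi| \le 1,\\ |\xi| - \tfrac12, & |\xi| > 1, \end{cases} $$
for which the associated stress $\sigma = \nabla f(\xi)$ satisfies $|\sigma| \le 1$, with the constraint saturated exactly where $|\xi| \ge 1$; this is the type of inequality constraint on the Cauchy stress referred to in the abstract.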

December 4, 2023

Sharp error estimates for target measure diffusion maps with applications to the committor problem

Shashank Sule, L. Evans, Maria Cameron

We obtain asymptotically sharp error estimates for the consistency error of the Target Measure Diffusion map (TMDmap) (Banisch et al. 2020), a variant of diffusion maps featuring importance sampling and hence allowing input data drawn from an arbitrary density. The derived error estimates include the bias error and the variance error. The resulting convergence rates are consistent with the approximation theory of graph Laplacians. The key novelty of our results lies in the explicit quantification of all the prefactors on the leading-order terms. We also prove an error estimate for solutions of Dirichlet BVPs obtained using TMDmap, showing that the solution error is controlled by the consistency error. We use these results to study an important application of TMDmap in the analysis of rare events in systems governed by overdamped Langevin dynamics using the framework of transition path theory (TPT). The cornerstone ingredient of TPT is the solution of the committor problem, a boundary value problem for the backward Kolmogorov PDE. Remarkably, we find that the TMDmap algorithm is particularly well suited as a meshless solver for the committor problem due to the cancellation of several error terms in the prefactor formula. Furthermore, significant improvements in bias and variance errors occur when using a quasi-uniform sampling density. Our numerical experiments show that these improvements in accuracy are realizable in practice when using $\delta$-nets as spatially uniform inputs to the TMDmap algorithm.
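
As a concrete illustration of the construction, here is a minimal sketch of the TMDmap graph Laplacian as we read it from Banisch et al. (2020); the function name, the bandwidth convention, and the constants are our assumptions, not a reference implementation.

```python
import numpy as np
from scipy.spatial.distance import cdist

def tmdmap_generator(X, target_density, eps):
    """Sketch of the target-measure diffusion map generator (after Banisch et al. 2020).

    X              : (n, d) array of sample points, drawn from any density.
    target_density : values of the (unnormalized) target density pi at the points.
    eps            : kernel bandwidth (convention here is an assumption).
    """
    K = np.exp(-cdist(X, X, "sqeuclidean") / (4.0 * eps))  # Gaussian kernel matrix
    q = K.sum(axis=1)                                       # kernel density estimate
    w = np.sqrt(target_density) / q                         # importance-sampling weights
    K_pi = K * w[None, :]                                   # right-normalization by weights
    D = K_pi.sum(axis=1)
    return (K_pi / D[:, None] - np.eye(len(X))) / eps       # approximate generator
```

For the committor problem, one would then impose the value 0 on sample points in the reactant set and 1 on points in the product set, and solve the resulting linear system built from the remaining rows of this matrix.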


Adaptive whitening with fast gain modulation and slow synaptic plasticity

Neurons in early sensory areas rapidly adapt to changing sensory statistics, both by normalizing the variance of their individual responses and by reducing correlations between their responses. Together, these transformations may be viewed as an adaptive form of statistical whitening. Existing mechanistic models of adaptive whitening exclusively use either synaptic plasticity or gain modulation as the biological substrate for adaptation; however, on their own, each of these models has significant limitations. In this work, we unify these approaches in a normative multi-timescale mechanistic model that adaptively whitens its responses with complementary computational roles for synaptic plasticity and gain modulation. Gains are modified on a fast timescale to adapt to the current statistical context, whereas synapses are modified on a slow timescale to match structural properties of the input statistics that are invariant across contexts. Our model is derived from a novel multi-timescale whitening objective that factorizes the inverse whitening matrix into basis vectors, which correspond to synaptic weights, and a diagonal matrix, which corresponds to neuronal gains. We test our model on synthetic and natural datasets and find that the synapses learn optimal configurations over long timescales that enable adaptive whitening on short timescales using gain modulation.
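
To make the two-timescale idea concrete, here is a schematic sketch of one plausible implementation consistent with the description above; the parameterization of the whitener, the learning rates, and the slow synaptic update are our illustrative assumptions, not necessarily the authors' objective or update rules.

```python
import numpy as np

def two_timescale_whitening(X, k, eta_g=0.05, eta_w=5e-4, seed=0):
    """Schematic adaptive whitening with fast gains and slow synapses (illustrative only)."""
    rng = np.random.default_rng(seed)
    n = X.shape[1]
    W = rng.standard_normal((n, k)) / np.sqrt(n)   # slow synaptic weights (basis vectors)
    g = np.zeros(k)                                # fast neuronal gains
    for x in X:                                    # one stimulus per time step
        M = np.eye(n) + W @ np.diag(g) @ W.T       # assumed parameterization of the whitener
        r = np.linalg.solve(M, x)                  # whitened response r = M^{-1} x
        z = W.T @ r                                # projections onto the synaptic basis
        g += eta_g * (z**2 - 1.0)                  # fast: drive projection variance toward 1
        W += eta_w * np.outer(r, z)                # slow: Hebbian-like refinement of the basis
        W /= np.linalg.norm(W, axis=0, keepdims=True)
    return W, g
```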


A polar prediction model for learning to represent visual transformations

All organisms make temporal predictions, and their evolutionary fitness level depends on the accuracy of these predictions. In the context of visual perception, the motions of both the observer and objects in the scene structure the dynamics of sensory signals, allowing for partial prediction of future signals based on past ones. Here, we propose a self-supervised representation-learning framework that extracts and exploits the regularities of natural videos to compute accurate predictions. We motivate the polar architecture by appealing to the Fourier shift theorem and its group-theoretic generalization, and we optimize its parameters on next-frame prediction. Through controlled experiments, we demonstrate that this approach can discover the representation of simple transformation groups acting in data. When trained on natural video datasets, our framework achieves better prediction performance than traditional motion compensation and rivals conventional deep networks, while maintaining interpretability and speed. Furthermore, the polar computations can be restructured into components resembling normalized simple and direction-selective complex cell models of primate V1 neurons. Thus, polar prediction offers a principled framework for understanding how the visual system represents sensory inputs in a form that simplifies temporal prediction.
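
The motivating Fourier shift principle can be illustrated with a hand-crafted (non-learned) predictor that advances each Fourier coefficient's phase by its previous phase difference; this is only an illustration of the principle, not the learned polar prediction model itself.

```python
import numpy as np

def polar_extrapolate(frame_prev, frame_curr):
    """Predict the next frame by phase advancement in the Fourier domain.

    For a globally translating image, the Fourier shift theorem says every
    coefficient rotates by a fixed phase per frame; extrapolating that rotation
    gives a prediction of the next frame.
    """
    F_prev = np.fft.fft2(frame_prev)
    F_curr = np.fft.fft2(frame_curr)
    phase_step = np.angle(F_curr) - np.angle(F_prev)                  # per-coefficient rotation
    F_next = np.abs(F_curr) * np.exp(1j * (np.angle(F_curr) + phase_step))
    return np.real(np.fft.ifft2(F_next))
```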


Efficient coding of natural images using maximum manifold capacity representations

The efficient coding hypothesis posits that sensory systems are adapted to the statistics of their inputs, maximizing mutual information between environmental signals and their representations, subject to biological constraints. While elegant, information theoretic quantities are notoriously difficult to measure or optimize, and most research on the hypothesis employs approximations, bounds, or substitutes (e.g., reconstruction error). A recently developed measure of coding efficiency, the "manifold capacity", quantifies the number of object categories that may be represented in a linearly separable fashion, but its calculation relies on a computationally intensive iterative procedure that precludes its use as an objective. Here, we simplify this measure to a form that facilitates direct optimization, use it to learn Maximum Manifold Capacity Representations (MMCRs), and demonstrate that these are competitive with state-of-the-art results on current self-supervised learning (SSL) recognition benchmarks. Empirical analyses reveal important differences between MMCRs and the representations learned by other SSL frameworks, and suggest a mechanism by which manifold compression gives rise to class separability. Finally, we evaluate a set of SSL methods on a suite of neural predictivity benchmarks, and find MMCRs are highly competitive as models of the primate ventral stream.
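
A minimal sketch of the simplified objective as we understand it (unit-normalized embeddings, one centroid per augmentation manifold, nuclear norm of the centroid matrix as the quantity to maximize); implementation details in the paper may differ.

```python
import numpy as np

def mmcr_objective(Z):
    """Simplified manifold-capacity objective (our reading; not a reference implementation).

    Z : (n_images, n_augmentations, d) array of embeddings.
    """
    Z = Z / np.linalg.norm(Z, axis=-1, keepdims=True)   # project embeddings to the unit sphere
    centroids = Z.mean(axis=1)                          # one centroid per augmentation manifold
    return np.linalg.norm(centroids, ord="nuc")         # nuclear norm (sum of singular values)
```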


Comparing neural models using their perceptual discriminability predictions

J. Zhou, Chanwoo Chun, Ajay Subramanian, E. P. Simoncelli

Internal representations are not uniquely identifiable from perceptual measurements: different representations can generate identical perceptual predictions, and similar representations may predict dissimilar percepts. Here, we generalize a previous method ("Eigendistortions", Berardino et al., 2017) to enable comparison of models based on their metric tensors, which can be verified perceptually. Metric tensors characterize sensitivity to stimulus perturbations, reflecting both the geometric and stochastic properties of the representation, and providing an explicit prediction of perceptual discriminability. Brute-force comparison of model-predicted metric tensors would require estimation of human perceptual thresholds along an infeasibly large set of stimulus directions. To circumvent this "perceptual curse of dimensionality", we compute and measure discrimination capabilities for a small set of most-informative perturbations, reducing the measurement cost from thousands of hours (a conservative estimate) to a single trial. We show that this single measurement, made for a variety of different test stimuli, is sufficient to differentiate models, select models that better match human perception, or generate new models that combine the advantages of existing models. We demonstrate the power of this method in comparison of (1) two models for trichromatic color representation, with differing internal noise; and (2) two autoencoders trained with different regularizers.
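
A minimal sketch in the spirit of the eigendistortion method: for a deterministic response model with additive white Gaussian noise, the metric tensor reduces to the Fisher information matrix $J(s) = (\partial f/\partial s)^\top (\partial f/\partial s)$, and the most- and least-informative perturbations are its extremal eigenvectors. The noise model and the finite-difference Jacobian below are simplifying assumptions.

```python
import numpy as np

def metric_tensor(model, s, delta=1e-4):
    """Fisher-style metric tensor of a response model f(s) under additive white noise."""
    f0 = np.ravel(model(s))
    J = np.zeros((f0.size, s.size))
    for i in range(s.size):                       # finite-difference Jacobian, column by column
        ds = np.zeros_like(s)
        ds[i] = delta
        J[:, i] = (np.ravel(model(s + ds)) - f0) / delta
    return J.T @ J

def extremal_perturbations(G):
    """Most- and least-discriminable stimulus directions: extremal eigenvectors of G."""
    w, V = np.linalg.eigh(G)
    return V[:, -1], V[:, 0]
```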


Targeted V1 comodulation supports task-adaptive sensory decisions

Caroline Haimerl, Douglas A. Ruff, Marlene Cohen, Cristina Savin, E. P. Simoncelli

Sensory-guided behavior requires reliable encoding of stimulus information in neural populations, and flexible, task-specific readout. The former has been studied extensively, but the latter remains poorly understood. We introduce a theory for adaptive sensory processing based on functionally targeted stochastic modulation. We show that responses of neurons in area V1 of monkeys performing a visual discrimination task exhibit low-dimensional, rapidly fluctuating gain modulation, which is stronger in task-informative neurons and can be used to decode from neural activity after only a few training trials, consistent with observed behavior. In a simulated hierarchical neural network model, such modulatory labels are learned quickly and can be used to adapt downstream readout, even after several intervening processing stages. Consistent with this, we find that the modulatory signal estimated in V1 is also present in the activity of simultaneously recorded MT units, and is again strongest in task-informative neurons. These results support the idea that co-modulation facilitates task-adaptive hierarchical information routing.


A dynamical model of growth and maturation in Drosophila

John J. Tyson, Amirali Monshizadeh, S. Shvartsman, Alexander W. Shingleton

The decision to stop growing and mature into an adult is a critical point in development that determines adult body size, impacting multiple aspects of an adult’s biology. In many animals, growth cessation is a consequence of hormone release that appears to be tied to the attainment of a particular body size or condition. Nevertheless, the size-sensing mechanism animals use to initiate hormone synthesis is poorly understood. Here, we develop a simple mathematical model of growth cessation in Drosophila melanogaster, which is ostensibly triggered by the attainment of a critical weight (CW) early in the last instar. Attainment of CW is correlated with the synthesis of the steroid hormone ecdysone, which causes a larva to stop growing, pupate, and metamorphose into the adult form. Our model suggests that, contrary to expectation, the size-sensing mechanism that initiates metamorphosis occurs before the larva reaches CW; that is, the critical-weight phenomenon is a downstream consequence of an earlier size-dependent developmental decision, not a decision point itself. Further, this size-sensing mechanism does not require a direct assessment of body size but emerges from the interactions between body size, ecdysone, and nutritional signaling. Because many aspects of our model are evolutionarily conserved among all animals, the model may provide a general framework for understanding how animals commit to maturing from their juvenile to adult form.
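
Not the authors' equations: a purely illustrative toy system showing how a hormone threshold, rather than a direct size readout, can terminate growth when hormone synthesis depends on body size; all functional forms and parameters below are hypothetical.

```python
import numpy as np

def simulate_growth(t_max=10.0, dt=0.01, k_growth=0.3, k_ecd=0.05,
                    half_sat=2.0, threshold=1.0):
    """Toy model: larval mass m grows while ecdysone e accumulates at a
    mass-dependent rate; growth stops once e crosses a fixed threshold."""
    m, e = 0.5, 0.0
    for _ in np.arange(0.0, t_max, dt):
        if e >= threshold:                  # hormone threshold reached: growth cessation
            break
        dm = k_growth * m                   # exponential larval growth (hypothetical)
        de = k_ecd * m / (half_sat + m)     # saturating, size-dependent hormone synthesis
        m, e = m + dm * dt, e + de * dt
    return m, e
```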


Morphogens enable interacting supracellular phases that generate organ architecture

Sichen Yang, Karl H. Palmquist, P. Miller, et al.

During vertebrate organ morphogenesis, large collectives of cells robustly self-organize to form architectural units (bones, villi, follicles) whose form persists into adulthood. Over the past few decades, mechanisms of organ morphogenesis have been developed predominantly through molecular, genetic, and cellular frameworks. More recently, there has been a resurgence of interest in collective cell and tissue mechanics during organ formation. This approach has amplified the need to clarify and unambiguously link events across biological length scales. Doing so may require reassessing canonical models that continue to guide the field. The most recognized model for organ formation centers around morphogens as determinants of gene expression and morphological patterns. The classical view of a morphogen is that morphogen gradients specify differential gene expression in a distinct spatial order. Because morphogen expression colocalizes with emerging feather and hair follicles, the skin has served as a paradigmatic example of such morphogen prepatterning mechanisms.

November 24, 2023

Microscopic Theory, Analysis, and Interpretation of Conductance Histograms in Molecular Junctions

Leopoldo Mejía, P. Cossio, Ignacio Franco

Molecular electronics break-junction experiments are widely used to investigate fundamental physics and chemistry at the nanoscale. Reproducibility in these experiments relies on measuring conductance on thousands of freshly formed molecular junctions, yielding a broad histogram of conductance events. Experiments typically focus on the most probable conductance, while the information content of the conductance histogram has remained unclear. Here we develop a microscopic theory for the conductance histogram by merging the theory of force spectroscopy with that of molecular conductance. The procedure yields analytical equations that accurately fit the conductance histograms of a wide range of molecular junctions and augments the information content that can be extracted from them. Our formulation captures contributions to the conductance dispersion due to conductance changes during the mechanical elongation inherent to the experiments. In turn, the histogram shape is determined by the non-equilibrium stochastic features of junction rupture and formation. The microscopic parameters in the theory capture the junction’s electromechanical properties and can be isolated from separate conductance and rupture force (or junction-lifetime) measurements. The predicted behavior can be used to test the range of validity of the theory, understand the conductance histograms, design molecular junction experiments with enhanced resolution, and design molecular devices with more reproducible conductance properties.
