2697 Publications

How to optimize neuroscience data utilization and experiment design for advancing brain models of visual and linguistic cognition?

Greta Tuckute, Dawn Finzi, Eshed Margalit, Joel Zylberberg, S. Chung, Alona Fyshe, Evelina Fedorenko, Nikolaus Kriegeskorte, Jacob Yates, Kalanit Grill-Spector, Kohitij Kar

In recent years, neuroscience has made significant progress in building large-scale artificial neural network (ANN) models of brain activity and behavior. However, there is no consensus on the most efficient ways to collect data and design experiments to develop the next generation of models. This article explores the controversial opinions that have emerged on this topic in the domain of vision and language. Specifically, we address two critical points. First, we weigh the pros and cons of using qualitative insights from empirical results versus raw experimental data to train models. Second, we consider model-free (intuition-based) versus model-based approaches for data collection, specifically experimental design and stimulus selection, for optimal model development. Finally, we consider the challenges of developing a synergistic approach to experimental design and model building, including encouraging data and model sharing and the implications of iterative additions to existing models. The goal of the paper is to discuss decision points and propose directions for both experimenters and model developers in the quest to understand the brain.

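One concrete flavor of the model-based stimulus selection discussed above is to rank a pool of candidate stimuli by how strongly competing models disagree about them, so that scarce experimental time is spent where it best adjudicates between models. The sketch below is illustrative only; the function name, the disagreement score, and the two-model setup are assumptions, not a procedure from the paper.

```python
import numpy as np

def select_controversial(preds_a, preds_b, k=10):
    """Rank a stimulus pool by disagreement between two candidate models'
    predicted responses (rows: stimuli, columns: response features) and
    return the indices of the k most controversial stimuli.
    Hypothetical sketch, not the paper's method."""
    disagreement = np.linalg.norm(preds_a - preds_b, axis=1)
    return np.argsort(disagreement)[::-1][:k]
```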

Approximate Contraction of Arbitrary Tensor Networks with a Flexible and Efficient Density Matrix Algorithm

Linjian Ma, M. Fishman, M. Stoudenmire, Edgar Solomonik

Tensor network contractions are widely used in statistical physics, quantum computing, and computer science. We introduce a method to efficiently approximate tensor network contractions using low-rank approximations, where each intermediate tensor generated during the contractions is approximated as a low-rank binary tree tensor network. The proposed algorithm has the flexibility to incorporate a large portion of the environment when performing low-rank approximations, which can lead to high accuracy for a given rank. Here, the environment refers to the remaining set of tensors in the network, and low-rank approximations with larger environments can generally provide higher accuracy. For contracting tensor networks defined on lattices, the proposed algorithm can be viewed as a generalization of the standard boundary-based algorithms. In addition, the algorithm includes a cost-efficient density matrix algorithm for approximating a tensor network with a general graph structure by a tree structure, whose computational cost is asymptotically upper-bounded by that of the standard algorithm that uses canonicalization. Experimental results indicate that the proposed technique outperforms existing approximate tensor network contraction algorithms on multiple problems, in terms of both accuracy and efficiency.

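The flavor of the approach can be seen in a stripped-down setting: sequentially contract a chain of matrices while compressing every intermediate with a truncated SVD. The paper's algorithm goes much further (arbitrary graphs, tree tensor network intermediates, environment-aware density-matrix truncation); the sketch below uses only local, environment-free truncation, and all names in it are invented for illustration.

```python
import numpy as np

def truncated_svd(M, max_rank):
    """Factor M ~ A @ B with inner dimension at most max_rank."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    r = min(max_rank, len(s))
    return U[:, :r] * s[:r], Vt[:r]

def contract_chain(mats, max_rank):
    """Contract a chain of matrices left to right, compressing each
    intermediate to a rank <= max_rank factorization (a toy analogue of
    approximating intermediates by low-rank tensor networks)."""
    A, B = truncated_svd(mats[0], max_rank)
    for M in mats[1:]:
        A2, B2 = truncated_svd(B @ M, max_rank)
        A, B = A @ A2, B2
    return A @ B

# Accuracy check against the exact contraction:
rng = np.random.default_rng(0)
mats = [rng.standard_normal((40, 40)) / np.sqrt(40) for _ in range(6)]
exact = np.linalg.multi_dot(mats)
approx = contract_chain(mats, max_rank=15)
print(np.linalg.norm(exact - approx) / np.linalg.norm(exact))
```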

Uniqueness, regularity and characteristic flow for a non strictly convex singular variational problem

Jean-Francois Babadjian, G. Francfort

This work addresses the question of uniqueness and regularity of the minimizers of a convex but not strictly convex integral functional with linear growth in a two-dimensional setting. The integrand -- whose precise form derives directly from the theory of perfect plasticity -- behaves quadratically close to the origin and grows linearly once a specific threshold is reached. Thus, in contrast with the only existing literature on uniqueness for functionals with linear growth, namely that pertaining to the generalized least gradient, the integrand is not a norm. We make use of hyperbolic conservation laws hidden in the structure of the problem to tackle uniqueness. Our argument strongly relies on the regularity of a vector field -- the Cauchy stress in the terminology of perfect plasticity -- which allows us to define characteristic lines and then to employ the method of characteristics. Using the detailed structure of the characteristic landscape evidenced in our preliminary study [BF], we show that this vector field is actually continuous, save possibly at two points. The different behaviors of the energy density at zero and at infinity imply an inequality constraint on the Cauchy stress. Under a barrier-type convexity assumption on the set where the inequality constraint is saturated, we show that uniqueness holds for pure Dirichlet boundary data devoid of any regularity properties. This is stronger than uniqueness for a given trace on the whole boundary, since our minimizers can fail to attain the boundary data. We also show a partial regularity result for the minimizer.

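For orientation, the classical integrand of Hencky perfect plasticity has exactly this quadratic-to-linear structure; the formula below is an illustrative stand-in of that type, not necessarily the exact integrand of the paper. Note that W is convex and C¹ but not a norm, and the associated stress automatically satisfies an inequality constraint.

```latex
% Prototypical quadratic-to-linear energy (Hencky type); illustrative,
% not necessarily the paper's exact integrand.
W(\xi) =
\begin{cases}
  \tfrac{1}{2}\,|\xi|^{2}, & |\xi| \le 1,\\[2pt]
  |\xi| - \tfrac{1}{2},    & |\xi| > 1,
\end{cases}
\qquad
\mathcal{F}(u) = \int_{\Omega} W(\nabla u)\,\mathrm{d}x,
\qquad
\sigma = \nabla W(\nabla u), \quad |\sigma| \le 1 .
```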

A mixing time bound for Gibbs sampling from log-smooth log-concave distributions

The Gibbs sampler, also known as the coordinate hit-and-run algorithm, is a Markov chain that is widely used to draw samples from probability distributions in arbitrary dimensions. At each iteration of the algorithm, a randomly selected coordinate is resampled from the distribution that results from conditioning on all the other coordinates. We study the behavior of the Gibbs sampler on the class of log-smooth and strongly log-concave target distributions supported on ℝⁿ.

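A minimal sketch of the coordinate-resampling scheme described above, specialized to a Gaussian target, which belongs to the log-smooth, strongly log-concave class and has closed-form coordinate conditionals; the names and parameter choices here are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def gibbs_gaussian(precision, mean, n_iters, x0):
    """Random-scan Gibbs sampler for N(mean, precision^{-1}).
    Each iteration redraws one randomly chosen coordinate from its exact
    conditional, which is univariate Gaussian for a Gaussian target."""
    x = x0.astype(float)
    d = x.size
    samples = np.empty((n_iters, d))
    for t in range(n_iters):
        i = rng.integers(d)                        # randomly selected coordinate
        cond_var = 1.0 / precision[i, i]
        resid = x - mean
        resid[i] = 0.0
        cond_mean = mean[i] - cond_var * (precision[i] @ resid)
        x[i] = cond_mean + np.sqrt(cond_var) * rng.standard_normal()
        samples[t] = x
    return samples

# Sanity check: the sample covariance should approach precision^{-1}.
P = np.array([[2.0, 0.8], [0.8, 1.5]])
xs = gibbs_gaussian(P, np.zeros(2), 20000, np.ones(2))
print(np.cov(xs.T))
```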

Neural network learns low-dimensional polynomials with SGD near the information-theoretic limit

Jason D. Lee, Kazusato Oko, Taiji Suzuki, D. Wu

We study the problem of gradient descent learning of a single-index target function f∗(x) = σ∗(⟨x, θ⟩) under isotropic Gaussian data in ℝ^d, where the unknown link function σ∗ : ℝ → ℝ has information exponent p (defined as the lowest degree in its Hermite expansion). Prior works showed that gradient-based training of neural networks can learn this target with n ≳ d^{Θ(p)} samples, and such complexity is predicted to be necessary by the correlational statistical query lower bound. Surprisingly, we prove that a two-layer neural network optimized by an SGD-based algorithm (on the squared loss) learns f∗ with a complexity that is not governed by the information exponent. Specifically, for arbitrary polynomial single-index models, we establish a sample and runtime complexity of n ≃ T = Θ(d · polylog d), where Θ(·) hides a constant depending only on the degree of σ∗; this dimension dependence matches the information-theoretic limit up to polylogarithmic factors. More generally, we show that n ≳ d^{(p∗−1)∨1} samples are sufficient to achieve low generalization error, where p∗ ≤ p is the generative exponent of the link function. Core to our analysis is the reuse of minibatches in the gradient computation, which gives rise to higher-order information beyond correlational queries.

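A toy rendition of this training setup, assuming a cubic Hermite link and a two-layer ReLU network with a frozen second layer; the double gradient step on each minibatch mimics the minibatch reuse highlighted above. Every detail here (architecture, step size, the specific link) is an assumption for illustration, not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)
d, width, batch = 32, 128, 256
theta = rng.standard_normal(d)
theta /= np.linalg.norm(theta)                 # unit-norm index direction
sigma_star = lambda z: z**3 - 3 * z            # Hermite He_3: information exponent 3

W = rng.standard_normal((width, d)) / np.sqrt(d)   # trainable first layer
a = rng.choice([-1.0, 1.0], size=width)            # frozen second layer

def sgd_step(W, X, y, lr):
    H = np.maximum(X @ W.T, 0.0)               # ReLU features, shape (batch, width)
    err = H @ a / width - y                    # squared-loss residuals
    grad = (err[:, None] * (H > 0) * a / width).T @ X / len(X)
    return W - lr * grad

for step in range(2000):
    X = rng.standard_normal((batch, d))        # isotropic Gaussian inputs
    y = sigma_star(X @ theta)
    for _ in range(2):                         # reuse the same minibatch twice
        W = sgd_step(W, X, y, lr=0.05)
```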

Comparing noisy neural population dynamics using optimal transport distances

A. Nejatbakhsh, Victor Geadah, A. Williams, D. Lipshutz

Biological and artificial neural systems form high-dimensional neural representations that underpin their computational capabilities. Methods for quantifying geometric similarity in neural representations have become a popular tool for identifying computational principles that are potentially shared across neural systems. These methods generally assume that neural responses are deterministic and static. However, responses of biological systems, and some artificial systems, are noisy and dynamically unfold over time. Furthermore, these characteristics can have substantial influence on a system's computational capabilities. Here, we demonstrate that existing metrics can fail to capture key differences between neural systems with noisy dynamic responses. We then propose a metric for comparing the geometry of noisy neural trajectories, which can be derived as an optimal transport distance between Gaussian processes. We use the metric to compare models of neural responses in different regions of the motor system and to compare the dynamics of latent diffusion models for text-to-image synthesis.

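A building block worth seeing concretely is the closed-form 2-Wasserstein (Bures) distance between two Gaussian distributions, which the proposed Gaussian-process metric generalizes to whole noisy trajectories; the function below is a generic sketch of that standard formula, not the paper's code.

```python
import numpy as np
from scipy.linalg import sqrtm

def bures_wasserstein(mu1, S1, mu2, S2):
    """W2 distance between N(mu1, S1) and N(mu2, S2):
    W2^2 = |mu1 - mu2|^2 + tr(S1 + S2 - 2 (S2^{1/2} S1 S2^{1/2})^{1/2})."""
    r = sqrtm(S2)
    cross = sqrtm(r @ S1 @ r)
    # sqrtm can return tiny imaginary parts; keep the real part.
    bures = np.trace(S1 + S2 - 2.0 * np.real(cross))
    return float(np.sqrt(np.sum((mu1 - mu2) ** 2) + max(bures, 0.0)))

# Compare two noisy response ensembles via their Gaussian summaries:
rng = np.random.default_rng(0)
A = rng.standard_normal((500, 3))
B = A @ np.diag([1.0, 0.5, 2.0]) + 1.0
print(bures_wasserstein(A.mean(0), np.cov(A.T), B.mean(0), np.cov(B.T)))
```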

Liquid Filled Elastomers: From Linearization to Elastic Enhancement

Juan Casado-Díaz, G. Francfort

Surface tension at cavity walls can play havoc with the mechanical properties of perforated soft solids when the cavities are filled with a fluid. This study is an investigation of the macroscopic elastic properties of elastomers embedding spherical cavities filled with a pressurized liquid in the presence of surface tension, starting with the linearization of the fully nonlinear model and ending with the enhancement properties of the linearized model when many such liquid-filled cavities are present.

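For context, the basic mechanism by which surface tension enters is the Young–Laplace jump condition at the cavity wall; the relation below, stated for a spherical cavity of radius R and surface tension γ, is standard background, not a result of the paper.

```latex
% Young--Laplace pressure jump across the wall of a spherical
% liquid-filled cavity (background relation, not the paper's result):
p_{\mathrm{in}} - p_{\mathrm{out}} = \frac{2\gamma}{R}
```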

Learning locally dominant force balances in active particle systems

Dominik Sturm, S. Maddu, Ivo F. Sbalzarini

We use a combination of unsupervised clustering and sparsity-promoting inference algorithms to learn locally dominant force balances that explain macroscopic pattern formation in self-organized active particle systems. The self-organized emergence of macroscopic patterns from microscopic interactions between self-propelled particles can be widely observed in nature. Although hydrodynamic theories help us better understand the physical basis of this phenomenon, identifying a sufficient set of local interactions that shape, regulate and sustain self-organized structures in active particle systems remains challenging. We investigate a classic hydrodynamic model of self-propelled particles that produces a wide variety of patterns, such as asters and moving density bands. Our data-driven analysis shows that propagating bands are formed by local alignment interactions driven by density gradients, while steady-state asters are shaped by a mechanism of splay-induced negative compressibility arising from strong particle interactions. Our method also reveals analogous physical principles of pattern formation in a system where particle speed is influenced by the local density, demonstrating its ability to reveal physical commonalities across models. The physical mechanisms inferred from the data are in excellent agreement with analytical scaling arguments and experimental observations.

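The clustering-plus-sparse-regression idea reads naturally as a two-stage pipeline; the sketch below is a schematic using k-means and a Lasso fit per regime, with hypothetical names and data layout, rather than the paper's specific algorithms.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import Lasso

def dominant_balances(terms, target, n_regimes=3, alpha=1e-3):
    """terms:  (n_points, n_terms) candidate hydrodynamic terms evaluated at
               each space-time point (gradients, alignment terms, ...).
       target: (n_points,) the quantity the balance should explain.
       Cluster points into regimes, then fit one sparse linear model per
       regime: near-zero coefficients mark terms that are locally inactive,
       and the survivors form the locally dominant force balance."""
    labels = KMeans(n_clusters=n_regimes, n_init=10).fit_predict(terms)
    balances = {}
    for k in range(n_regimes):
        mask = labels == k
        balances[k] = Lasso(alpha=alpha).fit(terms[mask], target[mask]).coef_
    return labels, balances
```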

A minimal dynamical system and analog circuit for non-associative learning

M. Smart, S. Shvartsman, Martin Mönnigmann

Learning in living organisms is typically associated with networks of neurons. The use of large numbers of adjustable units has also been a crucial factor in the continued success of artificial neural networks. In light of the complexity of both living and artificial neural networks, it is surprising to see that very simple organisms -- even unicellular organisms that do not possess a nervous system -- are capable of certain forms of learning. Since in these cases learning may be implemented with much simpler structures than neural networks, it is natural to ask how simple the building blocks required for basic forms of learning may be. The purpose of this study is to discuss the simplest dynamical systems that model a fundamental form of non-associative learning, habituation, and to elucidate technical implementations of such systems, which may be used to implement non-associative learning in neuromorphic computing and related applications.

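To make "how simple can the building blocks be" concrete, a single slow memory variable already reproduces the hallmarks of habituation: a waning response to a repeated stimulus and recovery after rest. The one-variable model and time constants below are illustrative assumptions, not the paper's specific dynamical system or circuit.

```python
import numpy as np

def habituate(stimulus, tau_rise=5.0, tau_decay=50.0, dt=0.1):
    """Minimal habituation sketch: a memory variable h slowly integrates the
    stimulus and divisively suppresses the response. Repeated stimulation
    grows h (the response wanes); rest lets h relax (the response recovers)."""
    h, resp = 0.0, []
    for s in stimulus:
        h += dt * (s / tau_rise - h / tau_decay)   # forward-Euler update
        resp.append(s / (1.0 + h))                 # divisively gated response
    return np.array(resp)

# Pulse train: peak responses shrink pulse by pulse and recover after a pause.
t = np.arange(0.0, 100.0, 0.1)
s = ((t % 10.0) < 1.0).astype(float)
r = habituate(s)
```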