1967 Publications

FMM-accelerated solvers for the Laplace-Beltrami problem on complex surfaces in three dimensions

Dhwanit Agarwal, Michael O'Neil, M. Rachh

The Laplace-Beltrami problem on closed surfaces embedded in three dimensions arises in many areas of physics, including molecular dynamics (surface diffusion), electromagnetics (harmonic vector fields), and fluid dynamics (vesicle deformation). Using classical potential theory, the Laplace-Beltrami operator can be pre-/post-conditioned with integral operators whose kernel is translation invariant, resulting in well-conditioned Fredholm integral equations of the second kind. These equations have the standard Laplace kernel from potential theory, and therefore can be solved rapidly and accurately using a combination of fast multipole methods (FMMs) and high-order quadrature corrections. In this work we detail such a scheme, presenting two alternative integral formulations of the Laplace-Beltrami problem, each of whose solutions can be obtained via FMM acceleration. We then present several applications of the solvers, focusing on the computation of what are known as harmonic vector fields, relevant for many applications in electromagnetics. A battery of numerical results is presented for each application, detailing the performance of the solver in various geometries.
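The well-conditioned second-kind structure is what makes fast iterative solution possible. As a minimal NumPy sketch (not the paper's solver), the following solves a generic second-kind system x + Kx = f by fixed-point iteration, with a random contraction standing in for the discretized integral operator; in an FMM-accelerated solver the dense product `K @ x` would be replaced by an O(N) FMM evaluation:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
# Stand-in for a discretized smooth integral operator K with ||K|| < 1,
# so that (I + K) x = f is a well-conditioned second-kind system.
K = rng.standard_normal((n, n))
K *= 0.5 / np.linalg.norm(K, 2)
f = rng.standard_normal(n)

# Fixed-point (Neumann) iteration: x_{k+1} = f - K x_k, which converges
# geometrically since the spectral norm of K is 0.5. The matvec K @ x is
# the only O(n^2) step; an FMM applies it in O(n) without forming K.
x = np.zeros(n)
for _ in range(100):
    x = f - K @ x

residual = np.linalg.norm(x + K @ x - f)
```

In practice a Krylov method such as GMRES replaces the plain fixed-point loop, but the cost per step is still dominated by the same (FMM-accelerated) matvec.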

November 21, 2021

A Normative and Biologically Plausible Algorithm for Independent Component Analysis

The brain effortlessly solves blind source separation (BSS) problems, but the algorithm it uses remains elusive. In signal processing, linear BSS problems are often solved by Independent Component Analysis (ICA). To serve as a model of a biological circuit, the ICA neural network (NN) must satisfy at least the following requirements: 1. The algorithm must operate in the online setting where data samples are streamed one at a time, and the NN computes the sources on the fly without storing any significant fraction of the data in memory. 2. The synaptic weight update is local, i.e., it depends only on the biophysical variables present in the vicinity of a synapse. Here, we propose a novel objective function for ICA from which we derive a biologically plausible NN, including both the neural architecture and the synaptic learning rules. Interestingly, our algorithm relies on modulating synaptic plasticity by the total activity of the output neurons. In the brain, this could be accomplished by neuromodulators, extracellular calcium, local field potential, or nitric oxide.
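The two requirements above, streaming data and synaptic locality, can be illustrated with the textbook local learning rule, Oja's rule for extracting a single principal direction. This is an analogy for the locality constraint only, not the paper's ICA network: each weight update uses just the presynaptic input x, the postsynaptic output y, and the weight itself.

```python
import numpy as np

rng = np.random.default_rng(0)
# Streamed samples with one high-variance direction (std 3 along axis 0).
scales = np.array([3.0, 1.0, 0.5])
w = rng.standard_normal(3)
w /= np.linalg.norm(w)

eta = 0.01
for _ in range(5000):
    x = scales * rng.standard_normal(3)  # one sample at a time (online)
    y = w @ x                            # postsynaptic activity
    w += eta * y * (x - y * w)           # Oja's rule: only local quantities

alignment = abs(w[0]) / np.linalg.norm(w)
```

The update never touches stored data or non-local variables, which is the biological-plausibility standard the ICA network in the paper is held to.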

Shadow tomography based on informationally complete positive operator-valued measure

Atithi Acharya, Siddhartha Saha, A. Sengupta

Recently introduced shadow tomography protocols use “classical shadows” of quantum states to predict many target functions of an unknown quantum state. Unlike full quantum state tomography, shadow tomography does not insist on accurate recovery of the density matrix for high rank mixed states. Yet, such a protocol makes multiple accurate predictions with high confidence, based on a moderate number of quantum measurements. One particularly influential algorithm, proposed by Huang et al. [Huang, Kueng, and Preskill, Nat. Phys. 16, 1050 (2020)], requires additional circuits for performing certain random unitary transformations. In this paper, we avoid these transformations but employ an arbitrary informationally complete positive operator-valued measure and show that such a procedure can compute k-bit correlation functions for quantum states reliably. We also show that, for this application, we do not need the median of means procedure of Huang et al. Finally, we discuss the contrast between the computation of correlation functions and fidelity of reconstruction of low rank density matrices.
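A minimal single-qubit sketch of estimation from an informationally complete POVM (not the paper's general protocol): the tetrahedral SIC-POVM has effects E_i = (I + n_i·σ)/4, and since Σ_i n_i = 0 and Σ_i n_i n_iᵀ = (4/3)I, the Bloch vector is recovered by linear inversion as r = 3 Σ_i p_i n_i from the outcome frequencies alone, with no random unitaries.

```python
import numpy as np

# Tetrahedral (SIC) POVM directions on the Bloch sphere: E_i = (I + n_i.sigma)/4.
n = np.array([[1, 1, 1], [1, -1, -1], [-1, 1, -1], [-1, -1, 1]]) / np.sqrt(3)

r_true = np.array([0.3, -0.2, 0.5])   # Bloch vector of the "unknown" state
probs = (1 + n @ r_true) / 4          # outcome probabilities Tr(rho E_i)

rng = np.random.default_rng(1)
shots = 200_000
counts = np.bincount(rng.choice(4, size=shots, p=probs), minlength=4)
p_hat = counts / shots

# Linear inversion: sum_i n_i = 0 and sum_i n_i n_i^T = (4/3) I imply that
# r = 3 * sum_i p_i n_i is an unbiased estimate of the Bloch vector.
r_est = 3 * p_hat @ n
```

Any observable linear in the state (e.g. a Pauli correlation function) can then be read off the reconstructed Bloch vector; the statistical error shrinks as 1/sqrt(shots).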

All-sky search for long-duration gravitational-wave bursts in the third Advanced LIGO and Advanced Virgo run

The LIGO Scientific Collaboration, the Virgo Collaboration, the KAGRA Collaboration, R. Abbott, T. D. Abbott, F. Acernese, ..., W. Farr, ..., M. Isi, ..., Y. Levin, et al.

After the detection of gravitational waves from compact binary coalescences, the search for transient gravitational-wave signals with less well-defined waveforms for which matched filtering is not well-suited is one of the frontiers for gravitational-wave astronomy. Broadly classified into "short" (≲ 1 s) and "long" (≳ 1 s) duration signals, these signals are expected from a variety of astrophysical processes, including non-axisymmetric deformations in magnetars or eccentric binary black hole coalescences. In this work, we present a search for long-duration gravitational-wave transients from Advanced LIGO and Advanced Virgo's third observing run from April 2019 to March 2020. For this search, we use minimal assumptions for the sky location, event time, waveform morphology, and duration of the source. The search covers the range of 2–500 s in duration and a frequency band of 24–2048 Hz. We find no significant triggers within this parameter space; we report sensitivity limits on the signal strength of gravitational waves characterized by the root-sum-square amplitude hrss as a function of waveform morphology. These hrss limits improve upon the results from the second observing run by an average factor of 1.8.

A Similarity-preserving Neural Network Trained on Transformed Images Recapitulates Salient Features of the Fly Motion Detection Circuit

Learning to detect content-independent transformations from data is one of the central problems in biological and artificial intelligence. An example of such a problem is unsupervised learning of a visual motion detector from pairs of consecutive video frames. Rao and Ruderman formulated this problem in terms of learning infinitesimal transformation operators (Lie group generators) via minimizing image reconstruction error. Unfortunately, it is difficult to map their model onto a biologically plausible neural network (NN) with local learning rules. Here we propose a biologically plausible model of motion detection. We also adopt the transformation-operator approach but, instead of reconstruction-error minimization, start with a similarity-preserving objective function. An online algorithm that optimizes such an objective function naturally maps onto an NN with biologically plausible learning rules. The trained NN recapitulates major features of the well-studied motion detector in the fly. In particular, it is consistent with the experimental observation that local motion detectors combine information from at least three adjacent pixels, something that contradicts the celebrated Hassenstein-Reichardt model.
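For reference, the Hassenstein-Reichardt model mentioned above is a two-pixel delay-and-correlate circuit: each arm delays one photoreceptor's signal, multiplies it with the undelayed neighbor, and the mirror-image arm is subtracted. A minimal sketch with an assumed drifting sinusoidal stimulus (the choice of frequency, phase lag, and delay is illustrative):

```python
import numpy as np

t = np.arange(0.0, 10.0, 0.01)
omega, phi, delay = 2 * np.pi, np.pi / 4, 25   # phi: phase lag between pixels

def hr_output(direction):
    # Two photoreceptors sampling a drifting sinusoid; `direction` flips
    # which pixel leads (rightward vs. leftward motion).
    a = np.sin(omega * t)
    b = np.sin(omega * t - direction * phi)
    # Hassenstein-Reichardt correlator: delay one arm, multiply with the
    # neighbor, and subtract the mirror-symmetric arm; average over time.
    return np.mean(a[:-delay] * b[delay:] - b[:-delay] * a[delay:])

right, left = hr_output(+1), hr_output(-1)     # opposite signs by symmetry
```

The mean output is sin(omega*delay*dt) * sin(phi), so reversing the motion direction flips its sign; note this classic detector uses only two pixels, which is exactly the feature the trained network above departs from.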

Adaptive Denoising via GainTuning

E. P. Simoncelli, Sreyas Mohan

Deep convolutional neural networks (CNNs) for image denoising are usually trained on large datasets. These models achieve the current state of the art, but they have difficulties generalizing when applied to data that deviate from the training distribution. Recent work has shown that it is possible to train denoisers on a single noisy image. These models adapt to the features of the test image, but their performance is limited by the small amount of information used to train them. Here we propose “GainTuning”, in which CNN models pre-trained on large datasets are adaptively and selectively adjusted for individual test images. To avoid overfitting, GainTuning optimizes a single multiplicative scaling parameter (the “Gain”) of each channel in the convolutional layers of the CNN. We show that GainTuning improves state-of-the-art CNNs on standard image-denoising benchmarks, boosting their denoising performance on nearly every image in a held-out test set. These adaptive improvements are even more substantial for test images differing systematically from the training data, either in noise level or image type. We illustrate the potential of adaptive denoising in a scientific application, in which a CNN is trained on synthetic data, and tested on real transmission electron microscope images. In contrast to the existing methodology, GainTuning is able to faithfully reconstruct the structure of catalytic nanoparticles from these data at extremely low signal-to-noise ratios.
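The key idea, adapting only a handful of gain parameters to a single noisy test signal using an objective that needs no clean data, can be sketched in one dimension. Below, the "denoiser" is a gain-weighted combination of two fixed smoothing filters, and the two gains are chosen by minimizing Stein's unbiased risk estimate (SURE) computed from the noisy signal alone. This is an illustrative stand-in, not the paper's CNN or its exact loss:

```python
import numpy as np

rng = np.random.default_rng(0)
n, sigma = 256, 0.5
clean = np.sin(2 * np.pi * 3 * np.arange(n) / n)
noisy = clean + sigma * rng.standard_normal(n)

def circ_smooth(y, width):
    """Circular moving-average filter of the given width."""
    k = np.zeros(n)
    k[:width] = 1.0 / width
    k = np.roll(k, -(width // 2))        # center the kernel at lag 0
    return np.real(np.fft.ifft(np.fft.fft(y) * np.fft.fft(k)))

# Two fixed "channels"; only their gains g1, g2 are tuned on the test signal.
B = np.column_stack([circ_smooth(noisy, 3), circ_smooth(noisy, 9)])
trace = np.array([n / 3, n / 9])         # trace of each (circulant) filter

# SURE(g) = ||y - B g||^2 + 2 sigma^2 g.trace + const is an unbiased
# estimate of the true risk, so we can minimize it without clean data:
g = np.linalg.solve(B.T @ B, B.T @ noisy - sigma**2 * trace)
denoised = B @ g

mse_noisy = np.mean((noisy - clean) ** 2)
mse_denoised = np.mean((denoised - clean) ** 2)
```

Because only two scalars are fit, the adaptation cannot overfit the noise, mirroring the paper's argument for restricting adaptation to per-channel gains.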

Unsupervised Deep Video Denoising

E. P. Simoncelli, Sreyas Mohan

Deep convolutional neural networks (CNNs) for video denoising are typically trained with supervision, assuming the availability of clean videos. However, in many applications, such as microscopy, noiseless videos are not available. To address this, we propose an Unsupervised Deep Video Denoiser (UDVD), a CNN architecture designed to be trained exclusively with noisy data. The performance of UDVD is comparable to the supervised state-of-the-art, even when trained only on a single short noisy video. We demonstrate the promise of our approach in real-world imaging applications by denoising raw video, fluorescence-microscopy, and electron-microscopy data. In contrast to many current approaches to video denoising, UDVD does not require explicit motion compensation. This is advantageous because motion compensation is computationally expensive, and can be unreliable when the input data are noisy. A gradient-based analysis reveals that UDVD automatically adapts to local motion in the input noisy videos. Thus, the network learns to perform implicit motion compensation, even though it is only trained for denoising.
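The reason training exclusively on noisy data can work at all is the blind-spot principle: if each value is predicted only from its neighbors, independent noise at that location cannot be fit, so minimizing error against the noisy data itself still removes noise. A minimal 1-D NumPy illustration of the principle (not UDVD's architecture):

```python
import numpy as np

rng = np.random.default_rng(0)
n, sigma = 1024, 0.5
clean = np.sin(2 * np.pi * 4 * np.arange(n) / n)
noisy = clean + sigma * rng.standard_normal(n)

# Blind-spot estimate: each sample is predicted from its two neighbors only,
# so the independent noise at the center pixel cannot be reproduced.
denoised = (np.roll(noisy, 1) + np.roll(noisy, -1)) / 2

mse_noisy = np.mean((noisy - clean) ** 2)
mse_blind = np.mean((denoised - clean) ** 2)
```

Here the neighbor average halves the noise variance at a small bias cost; a blind-spot CNN learns a far better neighbor-based predictor, but the exclusion of the center pixel is what makes noisy-only training sound.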

Small brains for big science

As the study of the human brain is complicated by its sheer scale, complexity, and the impracticality of invasive experiments, neuroscience research has long relied on model organisms. The brains of macaque, mouse, zebrafish, fruit fly, nematode, and others have yielded many secrets that advanced our understanding of the human brain. Here, we propose that adding miniature insects to this collection would reduce the costs and accelerate brain research. The smallest insects occupy a special place among miniature animals: despite body sizes comparable to those of unicellular organisms, they retain complex brains comprising thousands of neurons. Their brains possess the advantages of those in insects, such as neuronal identifiability and connectome stereotypy, yet are smaller and hence easier to map and understand. Finally, the brains of miniature insects offer insights into the evolution of brain design.

Mapping Spatial Frequency Preferences Across Human Primary Visual Cortex

E. P. Simoncelli, William Broderick

Neurons in primate visual cortex (area V1) are tuned for spatial frequency, in a manner that depends on their position in the visual field. Several studies have examined this dependency using fMRI, reporting preferred spatial frequencies (tuning curve peaks) of V1 voxels as a function of eccentricity, but their results differ by as much as two octaves, presumably due to differences in stimuli, measurements, and analysis methodology. Here, we characterize spatial frequency tuning at a millimeter resolution within human primary visual cortex, across stimulus orientation and visual field locations. We measured fMRI responses to a novel set of stimuli, constructed as sinusoidal gratings in log-polar coordinates, which include circular, radial, and spiral geometries. For each individual stimulus, the local spatial frequency varies inversely with eccentricity, and for any given location in the visual field, the full set of stimuli span a broad range of spatial frequencies and orientations. Over the measured range of eccentricities, the preferred spatial frequency is well-fit by a function that varies as the inverse of the eccentricity plus a small constant. We also find small but systematic effects of local stimulus orientation, defined in both absolute coordinates and relative to visual field location. Specifically, peak spatial frequency is higher for tangential than radial orientations and for horizontal than vertical orientations.
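The fitted relationship, preferred spatial frequency varying as the inverse of eccentricity plus a constant, is linear in the preferred *period* (1/SF), so it can be recovered with an ordinary linear fit. A sketch with synthetic data; the parameter values and noise level are assumptions for illustration, not the paper's estimates:

```python
import numpy as np

rng = np.random.default_rng(0)
a_true, b_true = 0.12, 0.35      # assumed: preferred period = a*ecc + b (deg)

ecc = np.linspace(1, 12, 40)                        # eccentricity in degrees
sf = 1.0 / (a_true * ecc + b_true)                  # peak SF in cycles/deg
sf *= np.exp(0.02 * rng.standard_normal(ecc.size))  # multiplicative noise

# SF = 1/(a*ecc + b)  =>  the period 1/SF is linear in eccentricity,
# so the two parameters fall out of a degree-1 polynomial fit.
a_fit, b_fit = np.polyfit(ecc, 1.0 / sf, 1)
```

Fitting in the period domain also makes the residuals roughly homoscedastic, which is why the 1/(eccentricity + constant) form is convenient to estimate.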

A Neural Network with Local Learning Rules for Minor Subspace Analysis

D. Chklovskii, Y. Bahroun

The development of neuromorphic hardware and modeling of biological neural networks requires algorithms with local learning rules. Artificial neural networks using local learning rules to perform principal subspace analysis (PSA) and clustering have recently been derived from principled objective functions. However, no biologically plausible networks exist for minor subspace analysis (MSA), a fundamental signal processing task. MSA extracts the lowest-variance subspace of the input signal covariance matrix. Here, we introduce a novel similarity matching objective for extracting the minor subspace, Minor Subspace Similarity Matching (MSSM). Moreover, we derive an adaptive MSSM algorithm that naturally maps onto a novel neural network with local learning rules, and we present numerical results showing that our method converges at a competitive rate.
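MSA targets the lowest-variance directions, the opposite of PCA. A minimal sketch of the objective (projected gradient descent on the Rayleigh quotient, not the MSSM network or its local rules): repeatedly shrink a vector along the covariance and renormalize, so the component along the smallest-eigenvalue direction decays slowest and dominates.

```python
import numpy as np

rng = np.random.default_rng(0)
# Data with variances 5, 2, and 0.2: the minor direction is axis 2.
X = rng.standard_normal((2000, 3)) * np.sqrt([5.0, 2.0, 0.2])
C = X.T @ X / len(X)                 # sample covariance

w = rng.standard_normal(3)
w /= np.linalg.norm(w)
eta = 0.05
for _ in range(2000):
    w -= eta * C @ w                 # shrink fastest along high-variance axes
    w /= np.linalg.norm(w)           # project back onto the unit sphere

alignment = abs(w[2])                # overlap with the true minor direction
```

Each iterate is proportional to (I - eta*C)^k w0, whose largest factor belongs to the smallest eigenvalue, hence convergence to the minor eigenvector; the MSSM network achieves the analogous computation online with purely local updates.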
