2573 Publications

Engineered protein–iron oxide hybrid biomaterial for MRI-traceable drug encapsulation

Lindsay K. Hill, D. Renfrew, R. Bonneau, et al.

Labeled protein-based biomaterials have become popular for various biomedical applications such as tissue-engineered, therapeutic, and diagnostic scaffolds. Labeling of protein biomaterials, including with ultrasmall superparamagnetic iron oxide (USPIO) nanoparticles, has enabled a wide variety of imaging and therapeutic techniques. USPIO-based biomaterials are widely studied in magnetic resonance imaging (MRI), thermotherapy, and magnetically driven drug delivery, providing a means of direct and non-invasive monitoring of implants or drug delivery agents. Whereas most such developments have used polymers or collagen hydrogels, here we use a rationally designed protein as the building block for a meso-scale fiber. While USPIOs have been chemically conjugated to antibodies, glycoproteins, and tissue-engineered scaffolds for targeting or for improved biocompatibility and stability, these constructs have predominantly served as diagnostic agents and often involve harsh conditions for USPIO synthesis. Here, we present an engineered protein–iron oxide hybrid material composed of an azide-functionalized coiled-coil protein with small-molecule binding capacity, conjugated via bioorthogonal azide–alkyne cycloaddition to an alkyne-bearing iron oxide templating peptide, CMms6, for USPIO biomineralization under mild conditions. The coiled-coil protein, dubbed Q, has previously been shown to form nanofibers and, upon small-molecule binding, to further assemble into mesofibers via encapsulation and aggregation. The resulting hybrid material is capable of doxorubicin encapsulation as well as sensitive T2-weighted MRI darkening, providing strong imaging capability uniquely derived from a coiled-coil protein.


An Aligned Orbit for the Young Planet V1298 Tau b

Marshall C. Johnson, T. David, Erik A. Petigura, ..., Rodrigo Luger, ..., D. Foreman-Mackey, et al.

The alignment of planetary orbits with respect to the stellar rotation axis preserves information on their dynamical histories. Measuring this angle for young planets helps illuminate the mechanisms that create misaligned orbits for older planets, as different processes could operate over timescales ranging from a few Myr to a Gyr. We present spectroscopic transit observations of the young exoplanet V1298 Tau b; we update the age of V1298 Tau to be 28±4 Myr based on Gaia EDR3 measurements. We observed a partial transit with Keck/HIRES and LBT/PEPSI, and detected the radial velocity anomaly due to the Rossiter-McLaughlin effect. V1298 Tau b has a prograde, well-aligned orbit, with a sky-projected obliquity λ = 4 (+7, −10) degrees. By combining the spectroscopically measured v sin i⋆ and the photometrically measured rotation period of the host star, we also find that the orbit is aligned in 3D, with ψ = 8 (+4, −7) degrees. Finally, we combine our obliquity constraints with a previous measurement for the interior planet V1298 Tau c to constrain the mutual inclination between the two planets to i_mut = 0° ± 19°. This measurement adds to the growing number of well-aligned planets at young ages, hinting that misalignments may be generated over timescales longer than tens of Myr. The number of measurements, however, is still small, and this population may not be representative of the older planets that have been observed to date. We also present the derivation of the relationship between i_mut, λ, and i for two planets.
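
For orientation, the 3D (true) obliquity follows from spherical geometry once the stellar inclination is known; a minimal sketch of the combination described above, using illustrative placeholder values rather than the measured ones, is:

    import numpy as np

    # Placeholder inputs (illustrative, not the measured V1298 Tau values)
    vsini = 22.6                      # km/s, spectroscopic projected rotation speed
    prot = 2.9                        # days, photometric rotation period
    rstar = 1.3                       # stellar radius in solar radii
    i_orb = np.radians(89.0)          # orbital inclination from the transit fit
    lam = np.radians(4.0)             # sky-projected obliquity from the RM effect

    # Equatorial rotation speed from radius and period, then stellar inclination
    veq = 2 * np.pi * rstar * 696_000.0 / (prot * 86_400.0)   # km/s
    i_star = np.arcsin(np.clip(vsini / veq, 0.0, 1.0))

    # True (3D) obliquity from the spherical law of cosines
    cos_psi = (np.cos(i_star) * np.cos(i_orb)
               + np.sin(i_star) * np.sin(i_orb) * np.cos(lam))
    print(f"psi = {np.degrees(np.arccos(cos_psi)):.1f} deg")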


Spectrally accurate solutions to inhomogeneous elliptic PDE in smooth geometries using function intension

We present a spectrally accurate embedded boundary method for solving linear, inhomogeneous, elliptic partial differential equations (PDE) in general smooth geometries, focusing in this manuscript on the Poisson, modified Helmholtz, and Stokes equations. Unlike several recently proposed methods which rely on function extension, we propose a method which instead utilizes function `intension', or the smooth truncation of known function values. Similar to those methods based on extension, once the inhomogeneity is truncated we may solve the PDE using any of the many simple, fast, and robust solvers that have been developed for regular grids on simple domains. Function intension is inherently stable, as are all steps in the proposed solution method, and can be used on domains which do not readily admit extensions. We pay a price in exchange for improved stability and flexibility: in addition to solving the PDE on the regular domain, we must additionally (1) solve the PDE on a small auxiliary domain that is fitted to the boundary, and (2) ensure consistency of the solution across the interface between this auxiliary domain and the rest of the physical domain. We show how these tasks may be accomplished efficiently (in both the asymptotic and practical sense), and compare convergence to several recent high-order embedded boundary schemes.
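
To illustrate the core idea only (a minimal sketch of smooth truncation followed by a regular-grid solve, not the authors' full scheme with its auxiliary boundary-fitted solve and interface matching; the window and geometry below are hypothetical):

    import numpy as np

    # Periodic box [0, 2*pi)^2 containing a smooth physical domain (a disk here)
    n = 256
    x = np.linspace(0, 2 * np.pi, n, endpoint=False)
    X, Y = np.meshgrid(x, x, indexing="ij")
    r = np.hypot(X - np.pi, Y - np.pi)

    # Smooth truncation ("intension") window: ~1 well inside the disk,
    # rolling off to ~0 before the disk boundary at r0
    r0, width = 2.0, 0.4
    window = 0.5 * (1.0 - np.tanh((r - (r0 - width)) / (0.25 * width)))

    # Inhomogeneity, known on the physical domain, smoothly truncated
    f = np.exp(np.sin(X) * np.cos(Y)) * window

    # Standard spectral Poisson solve on the regular periodic grid: lap(u) = f
    k = np.fft.fftfreq(n, d=1.0 / n)
    KX, KY = np.meshgrid(k, k, indexing="ij")
    k2 = KX**2 + KY**2
    k2[0, 0] = 1.0                    # avoid division by zero for the mean mode
    u_hat = -np.fft.fft2(f) / k2
    u_hat[0, 0] = 0.0                 # fix the constant (mean of f projected out)
    u = np.real(np.fft.ifft2(u_hat))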

May 3, 2022

Second-order magnetic responses in quantum magnets: Magnetization under ac magnetic fields

Tatsuya Kaneko, Yuta Murakami, Shintaro Takayoshi, Andrew J. Millis

We investigate second-order magnetic responses of quantum magnets to ac magnetic fields. We focus on the case where the z component of the spin is conserved in the unperturbed Hamiltonian and the driving field is applied in the xy plane. We find that linearly polarized driving fields induce a second-harmonic response, while circularly polarized fields generate only a zero-frequency response, leading to a magnetization with a direction determined by the helicity. Employing an unbiased numerical method, we demonstrate the nonlinear magnetic effect driven by the circularly polarized field in the XXZ model and show that the magnitude of the magnetization can be predicted by the dynamical spin structure factor in the linear response regime.
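
A schematic way to see the polarization dependence (our shorthand argument from S_z conservation, not the paper's derivation): writing h_± = h_x ± i h_y, any second-order contribution to the conserved magnetization M_z must pair h_+ with h_−,

    \[
        M_z^{(2)}(t) \;=\; \int dt_1\, dt_2\; \chi^{(2)}(t - t_1,\, t - t_2)\, h_+(t_1)\, h_-(t_2) \;+\; \mathrm{c.c.}
    \]

For circular polarization, h_±(t) = h e^{∓iσΩt}, so h_+(t_1) h_−(t_2) depends only on t_1 − t_2 and M_z^{(2)} is time independent, with the helicity σ entering through the frequency argument of the response function and thereby setting the sign of the induced magnetization. For linear polarization, h_+ and h_− each contain both e^{+iΩt} and e^{−iΩt}, so the same pairing produces terms oscillating at 2Ω, i.e., a second-harmonic response.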

An activation to memory differentiation trajectory of tumor-infiltrating lymphocytes informs metastatic melanoma outcomes

Abhinav Jaiswal, Akanksha Verma, O. Troyanskaya, et al.

There is a need for better classification and understanding of tumor-infiltrating lymphocytes (TILs). Here, we applied advanced functional genomics to interrogate 9,000 human tumors and multiple single-cell sequencing sets using benchmarked T cell states, comprehensive T cell differentiation trajectories, human and mouse vaccine responses, and other human TILs. Compared with other T cell states, enrichment of T memory/resident memory programs was observed across solid tumors. Trajectory analysis of single-cell melanoma CD8+ TILs also identified a high fraction of memory/resident memory-scoring TILs in anti-PD-1 responders, which expanded post therapy. In contrast, TILs scoring highly for early T cell activation, but not exhaustion, were associated with non-response. Late/persistent, but not early, activation signatures prognosticate melanoma survival and co-express with dendritic cell and IFN-γ response programs. These data identify an activation-like state associated with poor response and suggest that successful memory conversion, beyond resuscitation of exhaustion, is an under-appreciated aspect of successful anti-tumoral immunity.


A common framework for discriminability and perceived intensity of sensory stimuli

J. Zhou, L. Duong, E. P. Simoncelli

The perception of sensory attributes is often quantified through measurements of discriminability (the ability to detect small stimulus changes), as well as through direct judgements of appearance or intensity. Despite their ubiquity, the relationship between these two measurements remains controversial and unresolved. Here, we propose a framework in which they arise from different aspects of a common representation. Specifically, we assume that judgements of stimulus intensity (e.g., through rating scales) reflect the mean value of an internal representation, and that discriminability reflects the ratio between the derivative of that mean value and the internal noise amplitude, as quantified by Fisher information. A unique identification of internal representation properties can be achieved by combining the two measurements. As a central example, we show that Weber's Law of perceptual discriminability can co-exist with Stevens' power-law scaling of intensity ratings (for all exponents) when the noise amplitude increases in proportion to the representational mean. We extend this result beyond Weber's range by incorporating a more general, physiology-inspired form of noise, and show that the combination of noise properties and discriminability measurements accurately predicts intensity ratings across a variety of sensory modalities and attributes. Our framework unifies two major perceptual measurements, discriminability and rating scales, and proposes a neural interpretation for the underlying representations. Additionally, it teases apart two super-threshold perceptual measurements, rating scales and super-threshold perceptual distance, which were previously thought to measure the same perceptual aspect and which generated decades of mixed experimental results in the literature.
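
To make the central example concrete, the calculation implied above is short (the notation is ours): if intensity ratings track the representational mean μ(s) = k s^α (Stevens' power law) and the internal noise grows in proportion to that mean, σ(s) = c μ(s), then the discriminability defined by the framework is

    \[
        d(s) \;\propto\; \frac{\mu'(s)}{\sigma(s)} \;=\; \frac{k\alpha s^{\alpha-1}}{c\,k\,s^{\alpha}} \;=\; \frac{\alpha}{c\,s},
    \]

so the just-noticeable increment Δs ∝ 1/d(s) ∝ s, i.e., Δs/s is constant (Weber's Law) for every exponent α > 0.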


Effects of foveation on early visual representations

Human vision is far from uniform across the visual field. At fixation, we have a region of high acuity known as the fovea, and acuity decreases with distance from the fovea. However, it is not true that peripheral vision is just a blurrier version of foveal vision, and finding a precise description of how exactly they differ has been challenging. This thesis presents two investigations into how the processing of visual information changes with location in the visual field, both focused on the early visual system, as well as a description of a software package developed to support studies of the type found in the second study. In the first study, we use functional magnetic resonance imaging (fMRI) to measure how spatial frequency tuning changes with orientation and visual field location in human primary visual cortex (V1). V1 is among the best-characterized regions of the primate brain, and we know that nearly every neuron in V1 is selective for spatial frequency and orientation. We also know that V1 neurons' preferred spatial frequencies decrease with eccentricity, which aligns with the decrease in peak spatial frequency sensitivity found in perception. However, precise descriptions of this relationship have been elusive, due to the difficulty of characterizing tuning properties across the whole visual field. By utilizing fMRI's ability to measure responses across the entire cortex at once, together with a set of stimuli designed to efficiently map spatial frequency preferences and a novel analysis method that fits the responses of all voxels simultaneously, we present a compact description of this property, providing an important building block for future work.

In the second study, we build perceptual pooling models of the entire visual field from simple filter models inspired by retinal ganglion cells and V1 neurons. We then synthesize a large number of images to investigate how the sensitivities and invariances of these models align with those of the human visual system. This allows us to assess to what extent the change in perception across the visual field can be accounted for by well-understood models of low-level visual processing, rather than requiring more cognitive phenomena or models with millions of parameters. Finally, I describe an open-source software package developed by members of the Simoncelli lab that provides four image synthesis methods in a shared, general framework. These methods were all developed in the lab over the past several decades and have been described in the literature, but their widespread use has been limited by the difficulty of applying them to new models. By leveraging the automatic differentiation built into a popular deep learning library, our package allows for the use of these synthesis methods with arbitrary models, providing an important resource for the vision science community. Altogether, this thesis presents a step forward in understanding how visual processing differs across the visual field and, with the effort to share the code, data, and computational environment of the projects, provides resources for future scientists to build on.
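
As a rough sketch of the kind of autograd-driven synthesis such a package enables (the model, loss, and settings here are generic placeholders, not the package's actual API or methods):

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    # Placeholder "model": any differentiable map from images to responses.
    # A fixed random convolution stands in for a front-end visual model here.
    torch.manual_seed(0)
    model = nn.Conv2d(1, 16, kernel_size=8, stride=4, bias=False)
    for p in model.parameters():
        p.requires_grad_(False)

    target = torch.rand(1, 1, 64, 64)             # reference image
    with torch.no_grad():
        target_resp = model(target)

    # Synthesize an image whose model responses match those of the target
    # (a "metamer" for this model), with autograd supplying the gradients.
    synth = torch.rand(1, 1, 64, 64, requires_grad=True)
    opt = torch.optim.Adam([synth], lr=1e-2)
    for step in range(500):
        opt.zero_grad()
        loss = F.mse_loss(model(synth), target_resp)
        loss.backward()
        opt.step()
        with torch.no_grad():
            synth.clamp_(0.0, 1.0)                # keep pixel values in range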


Flexible sensory information processing through targeted stochastic co-modulation

Caroline Haimerl

Humans and animals can quickly adapt to new task demands while retaining capabilities developed previously. Such flexible sensory-guided behavior requires reliable encoding of stimulus information in neural populations, and task-specific readout through selective combination of these responses. The former has been the topic of intensive study, but the latter remains largely a mystery. Here we propose that targeted stochastic gain modulation could support flexible readout of task information from an encoding population. In experiments, we find that responses of neurons in area V1 of monkeys performing a visual orientation discrimination task exhibit low-dimensional comodulation. This modulation fluctuates rapidly, and is stronger in those neurons that are most informative for the behavioral task. We propose a theoretical framework in which this modulation serves as a label to facilitate downstream readout. We demonstrate that the shared modulatory fluctuations found in V1 can be used to decode from the recorded neural activity within a small number of training trials, consistent with observed behavior. Simulations of visual information processing in a hierarchical neural network demonstrate that learned, modulator-induced labels can accompany task information across several stages to guide readout at a decision stage and thereby fine-tune the network without reorganization of the feedforward weights. This allows the circuit to reach high levels of performance in novel tasks with minimal training, outperforming previously proposed attentional mechanisms based on gain increases, while also being able to instantly revert to the initial operating regime once task demands change. The theory predicts that the modulator label should be maintained across processing stages and indeed we find that the trial-by-trial modulatory signal estimated from V1 populations is also present in the activity of simultaneously recorded MT units, preferentially so if they are task-informative. Overall, these results provide a new framework for how intelligent systems can flexibly and robustly adapt to changes in task structure by adjusting information routing via a shared modulator label.
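
A toy illustration of the labeling idea (our own simplified construction, not the recorded data or the paper's model): a shared, rapidly fluctuating gain multiplies the task-informative neurons more strongly, so a readout that weights neurons by their co-variation with the modulator automatically singles out the informative subpopulation.

    import numpy as np

    rng = np.random.default_rng(0)
    n_neurons, n_trials = 50, 400

    # Only the first 10 neurons are tuned to the task variable s (two stimuli)
    tuning = np.zeros(n_neurons)
    tuning[:10] = 1.0
    s = rng.choice([-1.0, 1.0], size=n_trials)

    # Shared, rapidly fluctuating modulator m, coupled (by construction)
    # preferentially to the task-informative neurons
    coupling = np.where(tuning > 0, 1.0, 0.1)
    m = rng.normal(size=n_trials)

    rates = 5.0 + np.outer(s, tuning)                    # baseline + stimulus drive
    rates = rates * (1.0 + 0.3 * np.outer(m, coupling))  # targeted gain modulation
    responses = rng.poisson(np.clip(rates, 0.0, None)).astype(float)

    # Weighting neurons by co-variation with the modulator acts as a "label"
    # that picks out the informative subpopulation for downstream readout
    label = ((responses - responses.mean(axis=0)) * m[:, None]).mean(axis=0)
    print("mean label weight, informative neurons:", label[:10].mean())
    print("mean label weight, remaining neurons:  ", label[10:].mean())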


Robust and interpretable denoising via deep learning

S. Mohan

In the past decade, convolutional neural networks (CNNs) have achieved state-of-the-art results in denoising. The goal of this work is to advance our understanding of these models and leverage this understanding to advance the current state-of-the-art. We start by showing that CNNs systematically overfit the noise levels in the training set, and propose a new class of architectures, bias-free CNNs, which generalize robustly to noise levels outside the training set. Bias-free networks are also locally linear, which enables direct analysis with linear-algebraic tools. We show that the denoising map can be visualized locally as a filter that adapts to both signal structure and noise level. Denoising CNNs, including bias-free CNNs, are typically trained using pairs of noisy and clean data. However, in many domains, such as microscopy, clean data is generally not available. We develop a network architecture that performs unsupervised denoising for video data, i.e., we train using only noisy videos. We then build on the unsupervised denoising methodology and propose a new adaptive denoising paradigm. We develop GainTuning, in which CNN models pre-trained on large datasets are adaptively and selectively adjusted for individual test images. GainTuning improves state-of-the-art CNNs on standard image-denoising benchmarks, particularly for test images differing systematically from the training data, either in noise distribution or image type. Finally, we explore the application of deep learning-based denoising in scientific discovery through a case study in electron microscopy. To ensure that the denoised output is accurate, we develop a likelihood map that quantifies the agreement between the real noisy data and the denoised output (thus flagging denoising artifacts). In addition, we show that popular metrics for denoising fail to capture scientifically relevant details and propose new metrics to fill this gap.
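
A minimal sketch of what "bias-free" means architecturally (our illustration, not the exact networks trained in this work): removing every additive term makes the network positively homogeneous in its input and locally linear, which is what underlies the noise-level generalization and linear-algebraic analysis described above.

    import torch
    import torch.nn as nn

    class BiasFreeDenoiser(nn.Module):
        """Small convolutional denoiser with every additive (bias) term removed.

        With ReLU nonlinearities and no biases the map satisfies
        f(a*x) = a*f(x) for a > 0, and is locally linear: f(x) = J(x) x,
        where J(x) is the input-dependent Jacobian.
        """

        def __init__(self, channels=64, depth=5):
            super().__init__()
            layers = [nn.Conv2d(1, channels, 3, padding=1, bias=False), nn.ReLU()]
            for _ in range(depth - 2):
                layers += [nn.Conv2d(channels, channels, 3, padding=1, bias=False), nn.ReLU()]
            layers += [nn.Conv2d(channels, 1, 3, padding=1, bias=False)]
            self.net = nn.Sequential(*layers)

        def forward(self, noisy):
            # Residual form: predict the noise and subtract it from the input
            return noisy - self.net(noisy)

    # Scaling the input scales the output identically (no fixed operating point)
    x = torch.rand(1, 1, 32, 32)
    model = BiasFreeDenoiser()
    out1, out2 = model(x), model(2 * x)
    print(torch.allclose(2 * out1, out2, atol=1e-4))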
