Recent advances in calcium imaging acquisition techniques are creating datasets on the order of terabytes per week. Memory- and computation-efficient algorithms are required to analyze terabytes of data in a reasonable amount of time. This project implements a set of essential methods required in the calcium imaging movie analysis pipeline. Fast and scalable algorithms are implemented for motion correction, movie manipulation, and source and spike extraction. CaImAn also contains routines for the analysis of behavior from video cameras. In summary, CaImAn provides a general-purpose tool for handling large movies, with special emphasis on tools for two-photon and one-photon calcium imaging and behavioral datasets.
A computational toolbox for large-scale calcium imaging data analysis. The code implements the CNMF algorithm for simultaneous source extraction and spike inference from large-scale calcium imaging movies, along with many more features. The code is suitable for the analysis of somatic imaging data; an improved implementation for the analysis of dendritic/axonal imaging data will be added in the future.
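The core idea behind CNMF, factorizing the movie matrix into nonnegative spatial footprints and temporal traces, can be illustrated with a plain (unconstrained) NMF sketch in NumPy. The multiplicative updates, variable names, and toy movie below are illustrative assumptions for exposition only, not CaImAn's actual implementation, which adds spatial and temporal constraints plus spike deconvolution on top of this factorization.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "movie": pixels x frames, generated from 3 nonnegative sources.
n_pixels, n_frames, n_sources = 50, 200, 3
A_true = rng.random((n_pixels, n_sources))   # spatial footprints
C_true = rng.random((n_sources, n_frames))   # temporal traces
Y = A_true @ C_true

# Plain NMF via multiplicative updates; CNMF constrains A and C further.
A = rng.random((n_pixels, n_sources))
C = rng.random((n_sources, n_frames))
eps = 1e-12  # avoid division by zero
for _ in range(200):
    C *= (A.T @ Y) / (A.T @ A @ C + eps)
    A *= (Y @ C.T) / (A @ C @ C.T + eps)

error = np.linalg.norm(Y - A @ C) / np.linalg.norm(Y)
```

After a few hundred iterations the relative reconstruction error on this exactly factorizable toy movie becomes small, showing that the factorization recovers a low-rank nonnegative description of the data.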
Modern imaging methods, such as light, electron, and synchrotron X-ray microscopy, have enabled 3D imaging of large samples at high resolution. As a result, more and more terabyte-scale or even petabyte-scale image volumes are produced. Traditional software running on a single computer can no longer handle them, so distributed computing, especially cloud computing, is usually preferred. At the same time, a variety of image processing pipelines exist for diverse scientific tasks, yet they usually share common operations. Chunkflow is designed to tackle these challenges. The image volume is decomposed into chunks that are distributed across computation nodes. Benefiting from its hybrid cloud architecture, users can run tasks on both local clusters and the public cloud, using CPUs and GPUs at the same time. Currently, over fifty operators can be composed on the command line to build customized pipelines instantly. Users can also easily plug in their own Python code as a new operator. Chunkflow was built in the course of practical projects and has already been used to produce over 18 petabytes of result volumes. The largest scale we have reached is over 3300 GPU instances in Google Cloud across three regions, with Chunkflow remaining robust and reliable.
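The chunk-and-distribute idea can be sketched in a few lines of NumPy: tile a volume into fixed-size chunks, process each chunk independently, and write the results back. The chunk size and the `threshold` operator below are illustrative assumptions; Chunkflow's real operators, overlap/blending handling, and task queues are considerably more involved.

```python
import itertools
import numpy as np

def iter_chunks(shape, chunk_size):
    """Yield slice tuples that tile a volume of `shape` into chunks."""
    ranges = [range(0, s, c) for s, c in zip(shape, chunk_size)]
    for start in itertools.product(*ranges):
        yield tuple(slice(b, min(b + c, s))
                    for b, c, s in zip(start, chunk_size, shape))

def threshold(chunk, level=0.5):
    # Placeholder per-chunk operator; real pipelines chain many operators.
    return (chunk > level).astype(np.uint8)

volume = np.random.default_rng(1).random((64, 64, 64))
result = np.empty(volume.shape, dtype=np.uint8)

# In a distributed setting, each chunk task would run on a separate worker.
for sl in iter_chunks(volume.shape, chunk_size=(32, 32, 32)):
    result[sl] = threshold(volume[sl])
```

Because the placeholder operator is pointwise, the chunked result matches whole-volume processing exactly; operators with spatial context additionally require overlapping chunk margins.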
`plenoptic` is a Python library for model-based stimulus synthesis. It provides tools to help researchers understand their models by synthesizing novel, informative stimuli, which build intuition for which features a model is sensitive to and which it ignores. These synthetic images can then be used in future perceptual or neural experiments for further investigation.
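Model-based stimulus synthesis can be illustrated with a tiny gradient-descent example: start from a near-blank stimulus and optimize it so that a simple "model" produces a target response. The linear-filter model, learning rate, and target value below are hypothetical stand-ins chosen for a closed-form gradient; plenoptic itself automates this kind of optimization for much richer models via PyTorch autograd.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical "model": response is the inner product with a fixed filter.
w = rng.standard_normal(256)
def model(x):
    return w @ x

target = 3.0                          # desired model response
x = rng.standard_normal(256) * 0.01   # start from a near-blank stimulus

# Gradient descent on the squared response error;
# d/dx (w@x - target)^2 = 2 * (w@x - target) * w.
lr = 0.001
for _ in range(500):
    err = model(x) - target
    x -= lr * 2 * err * w
```

The optimized `x` drives the model to the target response; inspecting which pixels changed reveals what this (trivially linear) model cares about, which is the intuition stimulus synthesis provides for complex models.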
Pynapple is a lightweight Python library for neurophysiological data analysis. The goal is to offer a versatile set of tools for studying typical data in the field, i.e., time series (spike times, behavioral events, etc.) and time intervals (trials, brain states, etc.). It also provides users with generic neuroscience functions such as tuning curves and cross-correlograms.
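A cross-correlogram, one of the generic analyses mentioned above, can be sketched directly in NumPy: for every spike in one train, histogram the time lags to spikes in another train. The bin width, window, and toy spike times below are illustrative assumptions; Pynapple wraps such computations around its own time-series objects rather than raw arrays.

```python
import numpy as np

def cross_correlogram(ts1, ts2, bin_width=0.01, window=0.1):
    """Histogram of pairwise lags (ts2 - ts1) within +/- `window` seconds."""
    lags = ts2[None, :] - ts1[:, None]       # all pairwise time differences
    lags = lags[np.abs(lags) <= window]      # keep lags inside the window
    edges = np.arange(-window, window + bin_width, bin_width)
    counts, _ = np.histogram(lags, bins=edges)
    return counts, edges

# Toy spike trains: train 2 fires ~20 ms after every spike of train 1.
ts1 = np.arange(0.0, 10.0, 0.5)
ts2 = ts1 + 0.02
counts, edges = cross_correlogram(ts1, ts2)
peak_bin = int(np.argmax(counts))
```

The correlogram peaks at a small positive lag, reflecting that the second train systematically follows the first by about 20 ms.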
Because of their string-like nature, neurons and blood vessels can be abstracted as curved tubes with centerlines and radii. This representation can be used for morphological analysis, such as measuring path length and branching angle. Given an accurate voxel segmentation, the computation of object centerlines and radii is called skeletonization, and RealNeuralNetworks.jl was developed to perform it. Unlike most related packages, it combines the synaptic connectivity graph with morphological features and can be used to explore the relationship between synaptic connectivity and morphology. RealNeuralNetworks.jl is written in Julia, a recently emerged programming language that is gaining popularity in data science, and its algorithms are implemented from scratch to minimize dependencies and maximize efficiency.
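Once a centerline is extracted, the morphological quantities mentioned above follow from simple geometry. The sketch below (in Python for consistency with the other examples here, though the package itself is Julia) computes path length as the sum of consecutive node distances and a branching angle from the directions to two child nodes; the toy coordinates are invented for illustration, and the real package operates on full skeleton graphs with radii.

```python
import numpy as np

def path_length(nodes):
    """Sum of Euclidean distances between consecutive centerline nodes."""
    diffs = np.diff(nodes, axis=0)
    return np.linalg.norm(diffs, axis=1).sum()

def branching_angle(parent, child_a, child_b):
    """Angle (radians) at `parent` between the directions to two children."""
    u = child_a - parent
    v = child_b - parent
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.arccos(np.clip(cos, -1.0, 1.0))

# Toy centerline along the x-axis: total length is 3.
centerline = np.array([[0., 0., 0.], [1., 0., 0.],
                       [2., 0., 0.], [3., 0., 0.]])
length = path_length(centerline)

# Right-angle branch at the origin.
angle = branching_angle(np.zeros(3),
                        np.array([1., 0., 0.]),
                        np.array([0., 1., 0.]))
```

Combining such per-branch measurements with a synaptic connectivity graph is what lets the package relate morphology to connectivity.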