Visual Computation Workshop Defines Goals for Toolbox Development

On August 20, theoretical and experimental scientists gathered at Stanford University for the Computational Eye and Brain Workshop. The goal of the workshop, organized by Simons Collaboration on the Global Brain (SCGB) Investigators E.J. Chichilnisky, David Brainard, Fred Rieke and Brian Wandell, was to coordinate with the visual computation field during the early development of their SCGB project. The organizers are using their SCGB funds to develop and advance software infrastructure, known as the Image Systems Engineering Toolbox for Biology (ISETBIO), for modeling the physiological optics and neural processing components of early vision. The workshop focused on how to make the ISETBIO resource most useful for others in the field. Though the workshop participants study an array of model organisms and techniques, they are all working to understand visual computation.

ISETBIO is a freely available collaborative software resource based on code developed by Brainard, Heidi Hofer, Wandell and Jon Winawer. It includes a portion of the Image Systems Engineering Toolbox (ISET), which was developed by ImagEval Consulting. ISETBIO offers tools for modeling image formation at the retina, including physiological optics. The developers are working to add computational models of phototransduction and retinal processing, with signals at each key stage of transformation to be made available for viewing and analysis. The code is flexible so that users can tailor it to their specific applications. The workshop organizers envision this resource being accompanied by a repository of validation data and used to generate testable hypotheses about visual physiology and behavior. Ideally, this resource will ultimately encompass portions of the visual system beyond the eye and will be used in diverse disciplines, including science, engineering, medicine and education.
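The staged design described above, in which each transformation of the visual signal remains available for viewing and analysis, can be sketched as a chain of modules. The sketch below is a toy illustration of that architecture only; the function names and the simplistic optics, phototransduction and retinal models are invented for the example and are not the actual ISETBIO (MATLAB) API:

```python
import numpy as np

# Hypothetical sketch of a staged early-vision pipeline in the spirit of
# ISETBIO's modular design. Each stage transforms the signal and its output
# is retained for inspection. All names and models here are illustrative.

def optics_blur(scene, kernel_width=3):
    """Crude stand-in for physiological optics: a moving-average blur."""
    kernel = np.ones(kernel_width) / kernel_width
    return np.convolve(scene, kernel, mode="same")

def phototransduction(retinal_image, gain=0.5):
    """Toy photoreceptor response: a saturating nonlinearity."""
    return np.tanh(gain * retinal_image)

def retinal_processing(cone_signal):
    """Toy center-surround step: signal minus a local average."""
    surround = np.convolve(cone_signal, np.ones(5) / 5, mode="same")
    return cone_signal - surround

def run_pipeline(scene):
    """Run all stages, keeping every intermediate signal for analysis."""
    stages = {"scene": scene}
    stages["retinal_image"] = optics_blur(scene)
    stages["cone_signal"] = phototransduction(stages["retinal_image"])
    stages["rgc_signal"] = retinal_processing(stages["cone_signal"])
    return stages

signals = run_pipeline(np.linspace(0.0, 1.0, 32))
for name, signal in signals.items():
    print(name, signal.shape)
```

Keeping every intermediate signal in a dictionary, rather than returning only the final output, mirrors the stated goal of making each key stage of transformation available to users.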

The organizers asked each participant to answer specific questions that would aid in the development of ISETBIO as a resource for the field, and attendees responded with a wide variety of suggestions for techniques and specific applications. Greg Horwitz of the University of Washington, Austin Roorda of the University of California, Berkeley, and Winawer, who is at New York University (NYU), discussed how to make this resource most useful for understanding visual perception and cortical physiology. Jeremy Freeman of the Howard Hughes Medical Institute’s Janelia Research Campus and Wyeth Bair of the University of Washington discussed the infrastructure and coding tools that should be considered for this resource, especially as the data scale up. Eero Simoncelli, also of NYU, Markus Meister of the California Institute of Technology, and Johannes Burge of the University of Pennsylvania discussed the kinds of computational models that should be implemented. Christof Koch of the Allen Institute for Brain Science described the efforts the institute is making to understand the neuronal computations and cell types involved in visual perception and how the ISETBIO resource can help further our understanding of the visual cortex, both in humans and in mice. The talks were interactive, with considerable discussion that helped to sharpen and clarify the main themes that emerged.

Although the workshop topics spanned a range of complex mathematical models and components of the visual system, ultimately the goal was simple: to get the researchers’ perspectives on how to make this resource as useful as possible.

Several strong recommendations emerged from the workshop. In particular, attendees said, the organizers should:

  • Provide an accessible tutorial to accompany the software, so that users can easily grasp what it does and how to apply it to their research;
  • Incorporate diverse competing models, from the simplest to the most accurate, particularly of the retinal ganglion cell outputs of the eye;
  • Provide validation data, for direct comparison with model output;
  • Create flexible software entry points and data formats, with an accessible modular design, so that users can merge parts of the pipeline with their own models and data;
  • Enable visual estimation and discrimination computations, for comparison to behavior, as well as computations that reconstruct the stimulus from the modeled neural signals, for a variety of applications;
  • Coordinate with related efforts at the Allen Institute and by Wyeth Bair (the iModel framework) to broaden the systems and visual areas covered.
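One of the recommendations above, reconstructing the stimulus from modeled neural signals, is commonly approached with a linear decoder fit by least squares. The toy example below is a generic illustration of that idea, not ISETBIO code; the encoding matrix, noise level and dimensions are invented for the demonstration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear encoding model: 20 stimulus pixels drive 40 model neurons.
# All parameters here are made up for illustration.
n_pixels, n_neurons, n_trials = 20, 40, 500
encoding = rng.normal(size=(n_neurons, n_pixels))

# Simulate noisy neural responses to random training stimuli.
stimuli = rng.normal(size=(n_trials, n_pixels))
responses = stimuli @ encoding.T + 0.1 * rng.normal(size=(n_trials, n_neurons))

# Fit a linear decoder mapping responses back to stimuli (least squares).
decoder, *_ = np.linalg.lstsq(responses, stimuli, rcond=None)

# Reconstruct a held-out stimulus from its simulated neural response.
test_stim = rng.normal(size=n_pixels)
test_resp = encoding @ test_stim + 0.1 * rng.normal(size=n_neurons)
reconstruction = test_resp @ decoder

error = np.linalg.norm(reconstruction - test_stim) / np.linalg.norm(test_stim)
print(f"relative reconstruction error: {error:.3f}")
```

In a pipeline like ISETBIO's, the simulated responses would come from the model's retinal ganglion cell outputs rather than an arbitrary random encoding, allowing reconstruction quality to serve as one measure of how much stimulus information the modeled signals retain.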

Though this may appear to be quite the wish list, Chichilnisky, Brainard, Rieke and Wandell are convinced that this roadmap is key to building a centralized platform for understanding, sharing and applying research on the early visual pathways.