BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Simons Foundation - ECPv6.6.3//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-WR-CALNAME:Simons Foundation
X-ORIGINAL-URL:https://www.simonsfoundation.org
X-WR-CALDESC:Events for Simons Foundation
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:America/New_York
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20100314T070000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20101107T060000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20110313T070000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20111106T060000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20110323T050000
DTEND;TZID=America/New_York:20110323T180000
DTSTAMP:20260417T113143Z
CREATED:20170428T040000Z
LAST-MODIFIED:20211207T155751Z
UID:404-1300856400-1300903200@www.simonsfoundation.org
SUMMARY:The Missing Circuits: Studying Entire Brains
DESCRIPTION:Fundamental gaps remain in our understanding of animal brains\, especially human brains\, in comparison with other organ systems in the body. This is evident from our limited mechanistic understanding of neuropsychiatric disorders and the difficulty of developing therapies\, despite decades of intensive research. One of these gaps is our very partial knowledge of the circuit architecture of brains\, even in the best-studied model organisms. \nA frequently mentioned reason for this gap is the complexity of the circuitry: the astronomical numbers of neurons and synapses are often cited in this context. However\, although brains are complex\, this complexity is not entirely disorganized – classical neuroanatomical studies demonstrate the existence of an intermediate\, “mesoscopic” level of organization\, as can be seen in classical atlases\, which show brain nuclei\, layered structures\, and organized projection patterns. Although whole-brain circuit architecture is more tractable at this mesoscopic scale\, our knowledge of it also remains incomplete. The reasons for this are partly technical (the large volumes of data involved\, which have only recently become possible to store and process\, and the focus on slice physiology\, which precludes the study of many long-range circuits). Background theoretical attitudes also play a role (a “chemical soup” theory of brain function\, an exclusive focus on mechanisms of synaptic plasticity\, or an exclusive focus on hypothesized core operational units such as “canonical microcircuits” in cortex). All of these ideas are useful\, but a deeper understanding of whole-brain circuit architecture may be necessary to provide context for\, and to integrate\, these different perspectives. A move toward a view based on circuit dysregulation (rather than on a “chemical soup” imbalance) is already evident in the conceptual understanding of neuropsychiatric disorders. 
\nWe have argued for the need and feasibility of determining brain-wide circuit architecture\, starting with the mouse and eventually in multiple model organisms chosen to suitably span the phylogenetic tree. This talk will present some historical and theoretical background\, a description of ongoing experimental work\, and intermediate results. Ideas about how to fruitfully analyze whole-brain data sets will also be presented. \nSuggested Reading: \nProposal for a Coordinated Effort for the Determination of Brainwide Neuroanatomical Connectivity in Model Organisms at a Mesoscopic Scale\nOptional Reading: \nWhile not directly related to this talk\, Dr. Mitra’s basic approach to quantitative neuroscience research can be found in the first four chapters of Observed Brain Dynamics \nAbout the Speaker:  \nPartha Mitra is the head of the Mitra Lab at Cold Spring Harbor Laboratory. Mitra and his fellow lab members combine theoretical\, computational and experimental approaches to understand biological complexity. \nThe lab’s goal is to obtain conceptual breakthroughs into how brains work. Despite extensive research\, we are still far from a comprehensive understanding of how the nervous system gives rise to behavioral complexity\, cognition and affect. We do not yet know precisely what goes wrong in human brains in most major neuropsychiatric disorders\, and therapeutic advances remain slow. Part of the difficulty arises from the complexity of the systems involved: neurobiological phenomena have to be studied at the molecular/cellular level\, the neural circuit level\, the behavioral and social levels\, and in multiple species. Given this complexity\, there remain large empirical gaps in our knowledge that can only be filled experimentally. However\, an equally important problem is that of integrating the information thus obtained. 
\nPrevious work in the laboratory was largely theoretical and computational in nature\, and focused on analyzing behavioral and electrophysiological measurements in a number of model organisms. Currently\, the laboratory is focused on the Brain Architecture Project. The basic premise of this project is that\, while great advances have been made at the individual neuron and microcircuit levels\, there is a large gap at the whole-brain level of analysis of neural circuitry. The Mouse Brain Architecture Project seeks to fill this gap experimentally\, by systematically mapping the whole brain meso-circuit of the mouse brain\, and simultaneously addressing the computational and theoretical questions that arise. \nHomepage: http://mitralab.org/
URL:https://www.simonsfoundation.org/event/the-missing-circuits-studying-entire-brains/
CATEGORIES:Simons Science Series
ATTACH;FMTTYPE=image/jpeg:https://sf-web-assets-prod.s3.amazonaws.com/wp-content/uploads/2017/07/10181059/partha-mitra.jpeg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20110209T170000
DTEND;TZID=America/New_York:20110209T180000
DTSTAMP:20260417T113143Z
CREATED:20170428T040000Z
LAST-MODIFIED:20211207T155803Z
UID:402-1297270800-1297274400@www.simonsfoundation.org
SUMMARY:When and How Can We Compute Approximately Optimal Solutions to Intractable Computational Problems?
DESCRIPTION:The discovery of NP-completeness by computer scientists in the 1970s showed that many computational problems in a variety of disciplines do not have efficient algorithms (assuming the classes P and NP are different\, as is widely believed). This was a profound discovery. However\, in practice it often suffices to solve problems approximately: say\, to obtain a solution of cost within 10% of optimum. Can efficient algorithms find approximately optimal solutions? The classical theory of NP-completeness didn’t address or preclude this possibility. Research in the past two decades has resolved the question of approximability for a large subset of NP-complete problems. We know the precise approximation threshold that can be achieved by efficient algorithms\, and also know that improving upon that threshold is no easier than exact optimization. The former is the domain of “approximation algorithms\,” and the latter of the theory of “probabilistically checkable proofs.” This theory makes connections with a host of other disciplines and yields surprising results such as the PCP Theorem\, which states that mathematical proofs can be checked by examining a constant number of bits in them (this constant is independent of the size of the proof). \nSuggested Reading: \nChapter 8 on NP-completeness from Algorithms by Dasgupta\, Papadimitriou & Vazirani \nNP-completeness: A Retrospective \nThe Approximability of NP-hard Problems \nAbout the Speaker: \nSanjeev Arora is the Charles C. Fitzmorris Professor of Computer Science at Princeton University. His research area is Theoretical Computer Science. Specific topics that Arora has worked on include: Computational Complexity\, Probabilistically Checkable Proofs (PCPs)\, computing approximate solutions to NP-hard problems\, geometric embedding of metric spaces\, the unique games conjecture\, the complexity of financial derivatives\, and provable bounds for Machine Learning. \nArora received his PhD from UC Berkeley in 1994. 
In 2012 he received a Simons Investigator award and the AMS-MOS D.R. Fulkerson Prize. Past awards include ACM-Infosys Foundation Award in the Computing Sciences; Best paper\, IEEE Foundations of Computer Science; EATCS-SIGACT Goedel Prize; elected ACM Fellow; Engineering Council Teaching Award for Fall 2008\, Princeton University. \nHomepage: http://www.cs.princeton.edu/~arora/bio.html
URL:https://www.simonsfoundation.org/event/when-and-how-can-we-compute-approximately-optimal-solutions-to-intractable-computational-problems/
CATEGORIES:Simons Science Series
ATTACH;FMTTYPE=image/jpeg:https://sf-web-assets-prod.s3.amazonaws.com/wp-content/uploads/2017/07/10181057/sanjeev-arora.jpeg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20101215T170000
DTEND;TZID=America/New_York:20101215T180000
DTSTAMP:20260417T113143Z
CREATED:20170428T040000Z
LAST-MODIFIED:20211207T155815Z
UID:400-1292432400-1292436000@www.simonsfoundation.org
SUMMARY:Unbiased Reconstruction of Mammalian Regulatory Networks
DESCRIPTION:Deciphering the regulatory networks that control dynamic and specific gene expression responses in mammalian cells remains a major challenge. While models inferred from genomic data have identified candidate regulatory mechanisms\, such models remain largely unvalidated. Here\, we present an unbiased strategy based on systematic gene perturbation and innovative multiplex detection to derive regulatory networks in mammalian cells. \nWe first apply this approach to decipher the network that controls the transcriptional response to pathogens in primary dendritic cells (DCs)\, testing the regulatory function of over a hundred transcription factors\, chromatin modifiers\, and RNA binding proteins. Our approach accurately assigned dozens of known regulators (e.g. NFkB\, IRFs\, and STATs) to their target genes\, discovered dozens of additional functional regulators that were not previously implicated in this response\, and quantified the contribution of each regulator to two major transcriptional programs. We identify a core network of key regulators and fine-tuners\, which uses a combination of coherent feed-forward circuits\, dominant activation\, and cross-inhibition to control response specificity. Among these we discover a tier of chromatin modifiers that specifically repress interferon beta 1 (IFNB1) expression upon bacterial but not viral stimulation\, and a large circuit of cell cycle regulators that was co-opted to regulate the viral response. We then show how a similar strategy can be used to study the global architecture of gene regulation across ~40 cell populations in human hematopoiesis\, from hematopoietic stem cells\, through multiple progenitor and intermediate maturation states\, to terminally differentiated cell types\, implicating dozens of new regulators in hematopoiesis and demonstrating a substantial re-use of gene modules and their regulatory programs in distinct lineages. 
\nOur work establishes a broadly applicable\, comprehensive and unbiased approach to identifying the wiring and function of a regulatory network controlling a major transcriptional response in primary mammalian cells. \nSuggested Reading: \nLearning Module Networks\, Journal of Machine Learning Research 6 (2005) 557–588 \nMinReg: A Scalable Algorithm for Learning Parsimonious Regulatory Networks in Yeast and Mammals\, Journal of Machine Learning Research 7 (2006) 167–189 \nUnbiased Reconstruction of a Mammalian Transcriptional Network Mediating Pathogen Responses\, Ido Amit\, et al.\, Science 326\, 257 (2009) \nAbout the Speaker: \nComputational biologist Aviv Regev joined the Broad Institute as a core faculty member in 2006. Her research centers on understanding how complex molecular networks function and evolve in the face of genetic and environmental changes. In addition to her position at the Broad Institute\, Aviv is an assistant professor in the department of biology at MIT and an Early Career Scientist at the Howard Hughes Medical Institute. In 2008\, she received the Overton Prize from the International Society for Computational Biology and the NIH Director’s Pioneer Award. She is a past recipient of the Burroughs Wellcome Fund Career Award.
URL:https://www.simonsfoundation.org/event/unbiased-reconstruction-of-mammalian-regulatory-networks/
CATEGORIES:Simons Science Series
ATTACH;FMTTYPE=image/jpeg:https://sf-web-assets-prod.s3.amazonaws.com/wp-content/uploads/2017/07/10181056/aviv-regev.jpeg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20101117T170000
DTEND;TZID=America/New_York:20101117T180000
DTSTAMP:20260417T113143Z
CREATED:20170428T040000Z
LAST-MODIFIED:20170428T040000Z
UID:398-1290013200-1290016800@www.simonsfoundation.org
SUMMARY:Fluctuations\, Information and Survival: Some Lessons from Bacteria
DESCRIPTION:Growing (micro)organisms are subject to different types of environmental changes. Some of these fluctuations are regular: for example\, daily variations of light intensity. Others are stochastic\, such as the random appearance of predators or toxins. Bacteria have developed an astonishing panoply of survival strategies in varying environments. In this talk\, Leibler will describe some recent experimental and theoretical studies connected with microbial behavior. \nAbout the Speaker: \nStanislas Leibler is the Gladys T. Perkins Professor and Head of the Laboratory of Living Matter at The Rockefeller University. Dr. Leibler is interested in the quantitative description of microbial systems\, on both the cellular and population levels. \nIn recent years\, the field of molecular biology has moved away from the study of individual components and toward the study of how they interact\, creating a “systemic” approach that seeks an appropriate and quantitative description of cells and organisms. Dr. Leibler’s laboratory is developing both the theoretical and experimental methods necessary for conducting studies on the collective behavior of biomolecules\, cells and organisms. By selecting a number of basic questions on how simple genetic and biochemical networks function in bacteria\, his lab is beginning to understand how individual components can give rise to complex\, collective phenomena. \nRecent research topics in the laboratory include quantitative studies of interacting microorganisms. In particular\, the question of the survival of microbial populations in varying environments is being addressed both experimentally and theoretically. Dr. Leibler and his collaborators are developing new experimental techniques that will facilitate quantitative analysis of long-time population dynamics in microbial populations. 
In parallel\, they are developing statistical methods for so-called inverse problems\, in which the interactions between different components of a biological system are deduced from measured statistical correlations. Long-term dynamics of closed microbial ecosystems are being analyzed by these inverse methods. Similar theoretical approaches are also applied to other types of data\, such as the spiking activity of retinal neuron assemblies or the evolution of protein families. \nHomepage: http://lab.rockefeller.edu/leibler/
URL:https://www.simonsfoundation.org/event/fluctuations-information-and-survival-some-lessons-from-bacteria/
CATEGORIES:Simons Science Series
ATTACH;FMTTYPE=image/jpeg:https://sf-web-assets-prod.s3.amazonaws.com/wp-content/uploads/2017/07/10181054/stanislas-leibler.jpeg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20101013T170000
DTEND;TZID=America/New_York:20101013T180000
DTSTAMP:20260417T113143Z
CREATED:20170428T040000Z
LAST-MODIFIED:20211207T155827Z
UID:396-1286989200-1286992800@www.simonsfoundation.org
SUMMARY:Normalization as a Canonical Neural Computation
DESCRIPTION:It is hypothesized that the computations performed by the brain are modular\, and are repeated across brain regions and modalities to apply similar operations to different problems. A candidate for such a canonical neural computation is normalization\, whereby the response of a neuron is divided by a common factor\, which typically includes the summed activity of the local population of neurons. Normalization was developed to explain responses in primary visual cortex\, and it is now thought to operate throughout the visual system and in multiple other sensory modalities and brain regions. Normalization may underlie operations as diverse as the deployment of visual attention\, the encoding of value in parietal cortex\, and the integration of multisensory information. It is present not only in mammals but also in the neural systems of invertebrates\, suggesting that it is a computation that developed at an early stage of evolution. I will present the normalization model of neural computation\, review some of the empirical tests of the model\, and elaborate on the hypothesis that dysfunctions of normalization may be associated with schizophrenia\, amblyopia\, epilepsy\, and autism spectrum disorders. \nAbout the Speaker: \nDavid Heeger is a Professor of Psychology and Neural Science at New York University\, where he is a member of the Center for Brain Imaging. His research focuses on biological and artificial vision. He has made a number of influential contributions\, including a model of how motion can be measured from optic flow\, a nonlinear model of responses in the visual cortex called the “normalization model”\, and a method for texture synthesis. Since the late 1990s he has been at the forefront of the field of functional magnetic resonance imaging (fMRI). \nHomepage: http://www.cns.nyu.edu/~david/
URL:https://www.simonsfoundation.org/event/normalization-as-a-canonical-neural-computation/
CATEGORIES:Simons Science Series
ATTACH;FMTTYPE=image/jpeg:https://sf-web-assets-prod.s3.amazonaws.com/wp-content/uploads/2017/07/10181051/David-Heeger.jpeg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20100922T170000
DTEND;TZID=America/New_York:20100922T180000
DTSTAMP:20260417T113143Z
CREATED:20170428T040000Z
LAST-MODIFIED:20211207T155838Z
UID:394-1285174800-1285178400@www.simonsfoundation.org
SUMMARY:Strategic Behavior and the Science of Social Networks
DESCRIPTION:The modern ability to carefully measure large-scale social networks has driven new empirical studies and theoretical models of growth\, dynamics\, influence\, and collective behavior in such systems. This emerging science is inherently interdisciplinary\, with key contributions coming from sociologists\, computer scientists\, mathematicians\, physicists\, and economists. \nWhile much of the empirical investigation so far has focused on documenting social network structure or topology\, less is understood about how topology *matters* — that is\, in what ways social network structure influences behavior and collective outcomes. In this talk I will survey some of the progress on this topic\, particularly in settings in which there is some kind of strategic or economic interaction taking place in the network. I will illustrate some of the concepts with results from an extensive series of human-subject experiments in networked interaction conducted at Penn. \nAbout the Speaker: \nMichael Kearns is a professor in the Computer and Information Science Department at the University of Pennsylvania\, where he holds the National Center Chair in Resource Management and Technology. Kearns is the Founding Director of Penn Engineering’s new Networked and Social Systems Engineering (NETS) Program; his co-director is Ali Jadbabaie\, and the program’s curriculum chair is Zack Ives. Kearns has secondary appointments in the Statistics and Operations and Information Management (OPIM) departments of the Wharton School. He is an active member of Penn’s machine learning community PRiML\, and an affiliated faculty member of Penn’s Applied Math and Computational Science graduate program. Until July 2006 Kearns was the co-director of Penn’s interdisciplinary Institute for Research in Cognitive Science. \nKearns also works closely with a quantitative trading group at SAC Capital in New York City. 
\nKearns currently serves as an advisor to the companies Yodle\, Wealthfront (formerly known as kaChing)\, PayNearMe (formerly known as Kwedit)\, Activate Networks\, Convertro\, and RootMetrics. He is also involved in the startup Hunch (recently acquired by eBay)\, and in the seed-stage fund Founder Collective and several of its portfolio companies. Kearns is a member of the scientific advisory board of Opera Solutions. He also occasionally serves as an expert witness/consultant on technology-related legal and regulatory cases. \nHomepage: http://www.cis.upenn.edu/~mkearns/
URL:https://www.simonsfoundation.org/event/strategic-behavior-and-the-science-of-social-networks/
CATEGORIES:Simons Science Series
ATTACH;FMTTYPE=image/png:https://sf-web-assets-prod.s3.amazonaws.com/wp-content/uploads/2017/07/10181050/michael-kearns.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20100512T170000
DTEND;TZID=America/New_York:20100512T180000
DTSTAMP:20260417T113143Z
CREATED:20170428T040000Z
LAST-MODIFIED:20250206T172847Z
UID:392-1273683600-1273687200@www.simonsfoundation.org
SUMMARY:The Power and Weakness of Randomness (When You are Short on Time)
DESCRIPTION:Man has grappled with the meaning and utility of randomness for centuries. Research in the Theory of Computation in the last thirty years has enriched this study considerably. I’ll describe two main aspects of this research on randomness\, demonstrating respectively its power and weakness for making algorithms faster. I will address the role of randomness in other computational settings\, such as space-bounded computation and probabilistic and zero-knowledge proofs. \nAbout the Speaker: \nAvi Wigderson received his Ph.D. in computer science in 1983 from Princeton University. Since then he has held permanent positions at the Hebrew University Computer Science Institute\, where he was the chair from 1992 to 1995\, and at the Institute for Advanced Study School of Math\, where he has headed the Computer Science and Discrete Math Program since 1999. He has held visiting positions at the University of California\, Berkeley\, IBM Research\, the Mathematical Sciences Research Institute\, Princeton University and the Institute for Advanced Study. \nWigderson’s research interests include computational complexity theory\, algorithms\, parallel and distributed computation\, combinatorics and graph theory\, cryptography\, and randomness and pseudorandomness. \nHonors include being a two-time invited speaker at the International Congress of Mathematicians\, where he was presented in 1994 with the Nevanlinna Prize for outstanding contributions to the mathematical aspects of information sciences. Wigderson was an invited speaker at the AMS Gibbs Lectures and the recipient of the Conant Prize. Most recently\, Wigderson received the 2009 Gödel Prize\, which recognizes outstanding papers in theoretical computer science. \nHomepage: http://www.math.ias.edu/avi/
URL:https://www.simonsfoundation.org/event/the-power-and-weakness-of-randomness-when-you-are-short-on-time/
CATEGORIES:Simons Science Series
ATTACH;FMTTYPE=image/jpeg:https://sf-web-assets-prod.s3.amazonaws.com/wp-content/uploads/2017/07/10181048/avi-widgerson.jpeg
END:VEVENT
END:VCALENDAR