Bayes Reading Group: Yuling Yao

Date & Time


Discussion Lead: Yuling Yao [CCM]

 

Topic: Computation and evaluation without the likelihood—Or did you forget to compute it?

 

Abstract: Both computation and model evaluation become hard when the likelihood is intractable: when the sampling model involves differential equations, when the observations have multiple scales, or when the model is conditioned on a summary statistic. I will review some popular likelihood-free inference techniques, including approximate Bayesian computation, synthetic likelihood, fully Bayesian approaches, and normalizing flows. For model evaluation, an intractable likelihood is even more prevalent: even in a fully specified model, many generated quantities have no closed-form predictive density, prohibiting the default log predictive density check in the Bayesian workflow.
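
To make the evaluation problem concrete: the default metric in the Bayesian workflow is the log predictive density, which for a new observation requires integrating the pointwise likelihood over the posterior and is therefore intractable whenever that likelihood has no closed form. A minimal sketch in standard notation (background, not taken from the talk):

% Log predictive density of a new observation \tilde{y} given data y,
% with the usual Monte Carlo approximation from posterior draws \theta^{(s)}.
\[
  \log p(\tilde{y} \mid y)
    = \log \int p(\tilde{y} \mid \theta)\, p(\theta \mid y)\, d\theta
    \approx \log \frac{1}{S} \sum_{s=1}^{S} p\bigl(\tilde{y} \mid \theta^{(s)}\bigr),
  \qquad \theta^{(s)} \sim p(\theta \mid y).
\]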

But is the likelihood really that fragile? I argue that we can often evaluate the intractable likelihood in terms of its score function; we simply forgot to compute it. Inspired by geometric theory, this new score estimate is unbiased, and it sharpens many existing likelihood-free computation and evaluation tools.
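
One standard route to an unbiased score estimate in latent-variable models is Fisher's identity, under which the score of the marginal likelihood equals the posterior expectation of the complete-data score; this is a sketch of the general idea, not necessarily the construction used in the talk:

% Fisher's identity (standard result): the marginal score is the posterior
% expectation of the joint score, so posterior draws z^{(s)} of the latent
% variable give an unbiased Monte Carlo estimate.
\[
  \nabla_\theta \log p(y \mid \theta)
    = \mathbb{E}_{p(z \mid y, \theta)}\!\bigl[ \nabla_\theta \log p(y, z \mid \theta) \bigr]
    \approx \frac{1}{S} \sum_{s=1}^{S} \nabla_\theta \log p\bigl(y, z^{(s)} \mid \theta\bigr),
  \qquad z^{(s)} \sim p(z \mid y, \theta).
\]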

Relevant readings:

[1] Parallel Gaussian Process Surrogate Bayesian Inference with Noisy Likelihood Evaluations

[2] Bayesian aggregation of average data: An application in drug development

[3] Likelihood-Free Inference by Ratio Estimation
