Discussion Lead: Leonardo Egidi
Topic: A Bayesian fairy tale: the mysteries of the mixtures. Priors, likelihoods and other ‘multi-headed’ monsters
The Bayesian model consists of the prior–likelihood pair. A prior–data conflict arises whenever
the prior allocates most of its mass to regions of the parameter space where the likelihood is relatively
low. Once a prior–data conflict is diagnosed, deciding what to do next is difficult. We propose
an automatic prior elicitation scheme involving a two-component mixture of a diffuse and an informative prior
distribution, which favours the diffuse component when a conflict emerges. Using various examples, we show that
these mixture priors can be useful in regression models as a device for regularizing the estimates and
retrieving useful inferential conclusions. According to the ‘in medio stat virtus’ philosophy,
a mixture prior combining the two extremes, the wildly informative prior and the weakly informative prior,
can realistically average over them and represents a sound compromise for obtaining robust inferences.
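The adaptive behaviour described above can be sketched in a conjugate normal setting: the posterior weight on each mixture component is proportional to its prior weight times the marginal likelihood of the data under that component, so conflicting data automatically shift mass to the diffuse component. This is a minimal illustration, not the authors' implementation; all parameter values below are hypothetical.

```python
import math

def posterior_weight(ybar, n, sigma2, w, mu, tau2_inf, tau2_diff):
    """Posterior mixture weight on the informative component for a
    normal mean with known variance sigma2 and a two-component
    normal mixture prior (informative variance tau2_inf, diffuse
    variance tau2_diff), both centred at mu."""
    def marginal(tau2):
        # marginal density of the sample mean under one prior component:
        # ybar ~ N(mu, sigma2/n + tau2)
        var = sigma2 / n + tau2
        return math.exp(-0.5 * (ybar - mu) ** 2 / var) / math.sqrt(2 * math.pi * var)
    num = w * marginal(tau2_inf)
    return num / (num + (1 - w) * marginal(tau2_diff))

# data agreeing with the informative prior keep its weight high
agree = posterior_weight(ybar=0.1, n=20, sigma2=1.0, w=0.5,
                         mu=0.0, tau2_inf=0.1, tau2_diff=100.0)

# strongly conflicting data shift essentially all weight to the diffuse component
conflict = posterior_weight(ybar=5.0, n=20, sigma2=1.0, w=0.5,
                            mu=0.0, tau2_inf=0.1, tau2_diff=100.0)
```

With these (made-up) numbers, the informative component retains most of the weight when the sample mean sits near the prior centre, and is almost entirely discounted when it does not, which is the regularising behaviour the abstract refers to.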
Mixture prior distributions are widely used in many statistical applications, such as clinical
trials, especially to avoid prior–data conflicts with future sets of observations/experiments.
We explicitly prove that the effective sample size (ESS) of a mixture prior rarely exceeds the ESS of
any of its individual component densities.
Please email Shakemia Browne at email@example.com for the Zoom link or to be added to the meeting list.