1. Sequential Gibbs posteriors with applications to principal component analysis (arXiv)
Authors: Steven Winter, Omar Melikechi, David B. Dunson
Summary: The Gibbs posterior is proportional to a prior distribution multiplied by an exponentiated loss function, with a key tuning parameter that weights the information in the loss relative to the prior and provides a means of regularizing the posterior uncertainty. Although Gibbs posteriors provide a principled framework for likelihood-free Bayesian inference, in many situations a single tuning parameter is not enough for adequate uncertainty quantification. In particular, regardless of the value of the parameter, credible regions can have coverage far from the nominal frequentist level, even in large samples. To address this issue, we propose a sequential extension of the Gibbs posterior. We prove that the proposed sequential posterior exhibits concentration and satisfies a Bernstein-von Mises theorem, which holds under easy-to-verify conditions in Euclidean space and on manifolds. As a byproduct, we obtain the first Bernstein-von Mises theorem for traditional likelihood-based Bayesian posteriors on manifolds. All methods are illustrated with an application to principal component analysis.
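
For reference, a minimal sketch of the Gibbs posterior form described in the summary, written in common notation rather than notation quoted from the paper: π is the prior, R_n the empirical risk over n observations, and ω the learning-rate tuning parameter. The sequential update shown second is one plausible reading of the paper's extension, not its exact construction.

```latex
% One-shot Gibbs posterior: prior times exponentiated negative empirical risk,
% with a single learning rate \omega weighting the loss against the prior.
\pi_n(\theta) \;\propto\; \pi(\theta)\,\exp\{-\omega\, n\, R_n(\theta)\}

% Assumed sequential form: each observation (or batch) x_t updates the
% previous posterior with its own weight \omega_t.
\pi_t(\theta) \;\propto\; \pi_{t-1}(\theta)\,\exp\{-\omega_t\, \ell(\theta;\, x_t)\}
```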
2. Direct Gibbs posterior inference on risk minimizers: construction, concentration, and calibration (arXiv)
Authors: Ryan Martin, Nicholas Syring
Abstract: Real-world problems, often expressed as machine learning applications, involve quantities of interest that have real-world meaning, independent of any statistical model. To avoid potential model-misspecification bias and over-complicating the problem formulation, a direct, model-free approach is preferred. Because the traditional Bayesian framework relies on a model for the data-generating process, the desired direct, model-free, posterior-probabilistic inference appears out of reach. Fortunately, the likelihood function is not the only means of linking data and quantities of interest. Loss functions provide an alternative link, where the quantity of interest is defined, or at least can be defined, as a minimizer of the corresponding risk, or expected loss. In this case, using the empirical risk function directly yields what is commonly called the Gibbs posterior distribution. This manuscript investigates the Gibbs posterior construction, its asymptotic concentration properties, and the frequentist calibration of its credible regions. Freed from the constraints of model specification, Gibbs posteriors create new opportunities for probabilistic inference in modern statistical learning problems.
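
To make the construction concrete, here is a minimal, hypothetical sketch of a Gibbs posterior for a risk minimizer: the median, defined as the minimizer of expected absolute-error loss, sampled with a plain random-walk Metropolis step. The learning rate omega, the standard normal prior, and all names in the code are illustrative choices, not the authors' implementation.

```python
# Hypothetical sketch: Gibbs posterior for a risk minimizer (the median,
# which minimizes expected absolute-error loss), sampled via random-walk
# Metropolis. Notation (omega = learning rate, R_n = empirical risk)
# follows common Gibbs-posterior conventions, not code from the paper.
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(200) + 1.0  # toy data; true median is 1.0

def empirical_risk(theta):
    # R_n(theta): average absolute-error loss; its minimizer is the median.
    return np.mean(np.abs(x - theta))

def log_gibbs_posterior(theta, omega=1.0):
    # log pi_n(theta) = log prior - omega * n * R_n(theta), up to a constant.
    log_prior = -0.5 * theta**2  # standard normal prior, unnormalized
    return log_prior - omega * len(x) * empirical_risk(theta)

def metropolis(n_iter=5000, step=0.2):
    theta, samples = 0.0, []
    for _ in range(n_iter):
        prop = theta + step * rng.standard_normal()
        # Accept with probability min(1, pi(prop)/pi(theta)), on the log scale.
        if np.log(rng.uniform()) < log_gibbs_posterior(prop) - log_gibbs_posterior(theta):
            theta = prop
        samples.append(theta)
    return np.array(samples)

draws = metropolis()
print("posterior mean:", draws[1000:].mean())  # discard burn-in
```

With omega fixed at 1 the draws concentrate near the sample median; in practice, the papers above motivate calibrating omega so that the resulting credible regions attain nominal frequentist coverage.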
