
1. Gibbs Sampling with Anticorrelated Gaussian Data Augmentation with Application to the L1-Ball Model (arXiv)
Authors: Yu Zheng, Leo L. Duan
Summary: The L1-ball prior is a recent generalization of the spike-and-slab prior. By mapping a continuous precursor distribution onto the boundary of the L1 ball, it induces exact zeros with positive prior and posterior probabilities. The great flexibility in the choice of precursor and threshold distributions makes it easy to specify models under structured sparsity, such as those with dependent probabilities of zeros or smoothness among the non-zeros. Aiming to significantly speed up the posterior computation, we propose a new data augmentation that leads to a fast block Gibbs sampling algorithm. The latent variable, named the “anti-correlated Gaussian”, cancels the quadratic exponent term of the latent Gaussian distribution, making the parameters of interest conditionally independent so that they can be updated in a block. Compared to existing algorithms such as the No-U-Turn sampler, the new blocked Gibbs sampler has a very low computational cost per iteration and exhibits rapid mixing of the Markov chain. We establish a geometric ergodicity guarantee for the algorithm in linear models. Additionally, we demonstrate useful extensions of the algorithm for posterior estimation in general latent Gaussian models, such as those involving multivariate truncated Gaussians or latent Gaussian processes.
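The anti-correlated Gaussian augmentation needs the full model to state, but the zero-inducing mechanism of the L1-ball prior is easy to show in isolation: project a continuous precursor onto the L1 ball and exact zeros appear with positive probability. Below is a minimal sketch using the standard soft-thresholding projection of Duchi et al. (2008); the function name and the radius are illustrative choices, not taken from the paper.

```python
import numpy as np

def project_l1_ball(v, r):
    """Euclidean projection of v onto the L1 ball of radius r via
    soft-thresholding (Duchi et al., 2008): coordinates whose
    magnitude falls below the threshold tau become exactly zero."""
    if np.abs(v).sum() <= r:
        return v.copy()
    u = np.sort(np.abs(v))[::-1]                    # sorted magnitudes
    css = np.cumsum(u)
    j = np.arange(1, u.size + 1)
    rho = np.nonzero(u - (css - r) / j > 0)[0][-1]  # last active index
    tau = (css[rho] - r) / (rho + 1)
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

rng = np.random.default_rng(0)
beta = rng.normal(size=10)            # continuous Gaussian precursor
theta = project_l1_ball(beta, r=2.0)  # several coordinates are exactly 0
print(theta)
```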
2. Sampling from the Gibbs measure of the continuous random energy model and the hardness threshold (arXiv)
Author: Fu-Hsuan Ho
Summary: The continuous random energy model (CREM) is a toy model of disordered systems introduced by Bovier and Kurkova in 2004, building on earlier work by Derrida and Spohn in the 1980s. In a recent paper, Addario-Berry and Maillard posed the following question: what is the threshold βG such that sampling approximately from the Gibbs measure becomes algorithmically hard for any inverse temperature β > βG? Here, approximate sampling means that the Kullback-Leibler divergence between the law of the algorithm's output and the Gibbs measure is of order o(N) with probability tending to 1 as N → ∞, and algorithmically hard means that the running time, i.e., the number of vertex queries made by the algorithm, grows faster than any polynomial. This work shows that when the covariance function A of the CREM is concave, a recursive sampling algorithm on the renormalized tree approximates the Gibbs measure in running time of order O(N^(1+ε)) for all β > 0. When A is non-concave, the work shows that there is a threshold βG < ∞ at which the following hardness transition occurs: a) for all β ≤ βG, the recursive sampling algorithm approximates the Gibbs measure in running time of order O(N^(1+ε)); b) for all β > βG, a hardness result is established for a large class of algorithms: for any algorithm in this class that approximately samples the Gibbs measure, there exists z > 0 such that the algorithm's running time is at least e^(zN) with probability tending to 1. In other words, it is impossible to approximately sample the Gibbs measure in polynomial time in this regime. Additionally, the work provides a lower bound on the free energy of the CREM that may be of independent interest.
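To make the object concrete, here is a minimal sketch under illustrative assumptions: it builds CREM energies on a binary tree of depth N, with independent Gaussian increments of variance A'(k/N) at level k (so a leaf energy has variance roughly N·A(1)), and then draws a leaf exactly from the Gibbs measure by brute-force enumeration. The brute-force step costs 2^N, which is precisely what the paper's recursive algorithm on the renormalized tree avoids; that algorithm is not reproduced here. The concave choice A(t) = 2t − t², the function names, and the parameter values are assumptions for illustration.

```python
import numpy as np

def crem_leaf_energies(N, A_prime, rng):
    """Energies at the 2^N leaves of a binary tree: the level-k increment
    along each branch is N(0, A'(k/N)), so a leaf energy has variance
    roughly N * A(1) (a time-inhomogeneous branching random walk)."""
    energies = np.zeros(1)
    for k in range(N):
        sd = np.sqrt(A_prime((k + 0.5) / N))       # midpoint of level k
        steps = rng.normal(0.0, sd, size=2 * energies.size)
        energies = np.repeat(energies, 2) + steps  # split every branch
    return energies

def gibbs_sample_exact(energies, beta, rng):
    """Draw one leaf index from mu(v) proportional to exp(beta * X_v) by
    enumeration. Cost 2^N -- the baseline the recursive
    renormalized-tree algorithm reduces to O(N^(1+eps))."""
    w = np.exp(beta * (energies - energies.max()))  # stabilized weights
    return rng.choice(energies.size, p=w / w.sum())

rng = np.random.default_rng(1)
A_prime = lambda t: 2.0 - 2.0 * t   # concave A(t) = 2t - t^2, A(1) = 1
E = crem_leaf_energies(N=12, A_prime=A_prime, rng=rng)
leaf = gibbs_sample_exact(E, beta=2.0, rng=rng)
print(leaf, E[leaf])
```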
3. Insufficient Gibbs Sampling (arXiv)
Authors: Antoine Luciano, Christian P. Robert, Robin J. Ryder
Summary: In some application scenarios, the availability of complete data is limited, often due to privacy concerns; only aggregated, robust, and inefficient statistics derived from the data are accessible. Although these robust statistics are not sufficient, they exhibit reduced sensitivity to outliers and provide increased data protection due to their higher breakdown point. We consider a parametric framework and propose a method to sample from the posterior distribution of parameters conditioned on various robust and inefficient statistics, specifically the pairs (median, MAD) or (median, IQR), or a collection of quantiles. Our approach relies on a Gibbs sampler that simulates latent augmented data, which facilitates simulation from the posterior distribution of parameters for specific families of distributions. A by-product of these samples from the joint posterior distribution of parameters and data given the observed statistics is that Bayes factors based on the observed statistics can be estimated via bridge sampling. We validate and outline the limitations of the proposed method through toy examples and an application to real-world income data.
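The computational idea is a two-block data-augmentation structure: alternate between (1) latent complete data consistent with the observed insufficient statistics and (2) a standard conjugate parameter update given those data. The sketch below illustrates that skeleton for a normal model conditioned on (median, MAD); the data block here simply refreshes and rescales latent draws to match the statistics exactly, a naive stand-in for the paper's exact conditional updates, so this shows the structure only and is not the authors' sampler. Function names and priors are assumptions.

```python
import numpy as np

def mad(x):
    """Median absolute deviation (unscaled)."""
    return np.median(np.abs(x - np.median(x)))

def insufficient_gibbs_sketch(m_obs, s_obs, n, iters=2000, seed=0):
    """Two-block sketch for a N(mu, sigma^2) model observed only through
    (median, MAD). Block 1 refreshes latent data and rescales them so the
    observed statistics hold exactly; block 2 is the standard conjugate
    update of (mu, sigma^2) given complete data (flat prior on mu,
    p(sigma^2) proportional to 1/sigma^2)."""
    rng = np.random.default_rng(seed)
    mu, sig = m_obs, 1.4826 * s_obs        # MAD-consistent starting point
    draws = []
    for _ in range(iters):
        # Block 1: latent data matching the observed (median, MAD).
        # NOTE: refresh-and-rescale is a crude stand-in for the paper's
        # exact conditional updates of the augmented data.
        x = rng.normal(mu, sig, size=n)
        x = m_obs + (x - np.median(x)) * (s_obs / mad(x))
        # Block 2: conjugate draw of (mu, sigma^2) given the latent data.
        s2 = 0.5 * np.sum((x - x.mean()) ** 2)
        sig2 = s2 / rng.gamma(0.5 * (n - 1))   # inverse-gamma draw
        mu = rng.normal(x.mean(), np.sqrt(sig2 / n))
        sig = np.sqrt(sig2)
        draws.append((mu, sig))
    return np.array(draws)

samples = insufficient_gibbs_sketch(m_obs=0.0, s_obs=1.0, n=51)
print(samples.mean(axis=0))                  # rough posterior means
```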