OC-C1: Dani Gamerman
Title: Bayesian inference for spatiotemporal point processes
This talk addresses Bayesian inference for point processes with an unknown form of the intensity function, driven by Gaussian processes under a variety of scenarios. Discretized versions of the intensity function are initially considered. Spatial and spatio-temporal heterogeneity is contemplated through flexible forms for the effects of covariates, and their relevance is illustrated with an epidemiological application. Inference for continuously-varying intensity functions is also presented, based on a customized MCMC scheme using augmented data and Poisson thinning. Illustrations with synthetic and real data are also provided.
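The Poisson thinning step mentioned in the abstract can be sketched in a few lines. The following is a minimal illustration, not code from the talk; the exponential intensity and the bound `lam_max` are invented for the example:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_inhomogeneous_pp(intensity, lam_max, t_max, rng):
    """Simulate an inhomogeneous Poisson process on [0, t_max] by thinning.

    A dominating homogeneous process of rate lam_max is simulated first;
    each candidate point t is then kept with probability intensity(t)/lam_max.
    """
    n = rng.poisson(lam_max * t_max)          # number of candidate points
    candidates = rng.uniform(0.0, t_max, n)   # candidate locations
    keep = rng.uniform(0.0, 1.0, n) < intensity(candidates) / lam_max
    return np.sort(candidates[keep])

# hypothetical smooth intensity, bounded above by lam_max = 50
points = simulate_inhomogeneous_pp(lambda t: 50 * np.exp(-t), 50.0, 1.0, rng)
```

Because the dominating rate bounds the intensity everywhere, the retained points are an exact draw from the inhomogeneous process, with no discretization error.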
C2: Pierre Del Moral
Title: On the Stability and the Uniform Propagation of Chaos Properties of Ensemble Kalman-Bucy Filters
The Ensemble Kalman filter is a sophisticated and powerful data assimilation method for filtering high-dimensional problems arising in fluid mechanics and the geophysical sciences. This Monte Carlo method can be interpreted as a mean-field McKean-Vlasov type particle interpretation of the Kalman-Bucy diffusions. Apart from some recent advances on the stability of nonlinear Langevin-type diffusions with drift interactions, the long-time behaviour of models with interacting diffusion matrices and conditional distribution interaction functions has never been discussed in the literature. One of the main contributions of the talk is to initiate the study of this new class of models. The talk presents a series of new functional inequalities to quantify the stability of these nonlinear diffusion processes. The second contribution of this talk is to provide uniform propagation of chaos properties as well as Lp-mean error estimates w.r.t. the time horizon.
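For readers unfamiliar with the mean-field particle interpretation, a minimal sketch of one stochastic ensemble Kalman analysis step follows. The dimensions, noise levels and linear observation operator are illustrative assumptions, not details from the talk:

```python
import numpy as np

rng = np.random.default_rng(1)

def enkf_update(ensemble, y, H, R, rng):
    """One analysis step of a stochastic Ensemble Kalman filter.

    ensemble : (N, d) array of forecast particles
    y        : (p,) observation
    H        : (p, d) linear observation operator
    R        : (p, p) observation noise covariance

    Each particle is nudged towards a perturbed observation using a Kalman
    gain built from the ensemble sample covariance -- the mean-field
    interaction: particles couple only through their empirical moments.
    """
    N, d = ensemble.shape
    xbar = ensemble.mean(axis=0)
    A = ensemble - xbar                                   # anomalies
    P = A.T @ A / (N - 1)                                 # sample covariance
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)          # Kalman gain
    y_pert = y + rng.multivariate_normal(np.zeros(len(y)), R, N)
    return ensemble + (y_pert - ensemble @ H.T) @ K.T

# toy 2-d state observed in its first coordinate (illustrative values)
ens = rng.normal(size=(200, 2))
H = np.array([[1.0, 0.0]])
R = np.array([[0.1]])
updated = enkf_update(ens, np.array([0.5]), H, R, rng)
```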
C3: Veronika Ročková
Title: Fast Bayesian Factor Analysis via Automatic Rotations to Sparsity
Rotational post hoc transformations have traditionally played a key role in enhancing the interpretability of factor analysis. Regularization methods also serve to achieve this goal by prioritizing sparse loading matrices. In this work, we bridge these two paradigms with a unifying Bayesian framework. Our approach deploys intermediate factor rotations throughout the learning process, greatly enhancing the effectiveness of sparsity-inducing priors. These automatic rotations to sparsity are embedded within a PXL-EM algorithm, a Bayesian variant of parameter-expanded EM for posterior mode detection. By iterating between soft-thresholding of small factor loadings and transformations of the factor basis, we obtain (a) dramatic accelerations, (b) robustness against poor initializations, and (c) better oriented sparse solutions. To avoid the prespecification of the factor cardinality, we extend the loading matrix to have infinitely many columns with the Indian buffet process (IBP) prior. The factor dimensionality is learned from the posterior, which is shown to concentrate on sparse matrices. Our deployment of PXL-EM performs a dynamic posterior exploration, outputting a solution path indexed by a sequence of spike-and-slab priors. For accurate recovery of the factor loadings, we deploy the spike-and-slab LASSO prior, a two-component refinement of the Laplace prior. A companion criterion, motivated as an integral lower bound, is provided to effectively select the best recovery. The potential of the proposed procedure is demonstrated on both simulated and real high-dimensional data, which would render posterior simulation impractical. Supplementary materials for this article are available online.
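The soft-thresholding step mentioned above can be illustrated schematically. This is a generic entrywise soft-threshold operator, not the exact PXL-EM update (which uses an adaptive spike-and-slab LASSO penalty); the matrix and threshold are made up for the example:

```python
import numpy as np

def soft_threshold(B, lam):
    """Entrywise soft-thresholding of a loading matrix.

    Loadings with magnitude below lam are set exactly to zero; larger ones
    are shrunk towards zero by lam. This is the generic operator behind
    the sparsifying M-step updates (illustrative, not the exact PXL-EM rule).
    """
    return np.sign(B) * np.maximum(np.abs(B) - lam, 0.0)

# hypothetical 2x2 loading matrix with one negligibly small entry
B = np.array([[0.3, -1.5], [0.05, 2.0]])
B_sparse = soft_threshold(B, 0.1)
```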
C4: Igor Prünster
Title: Distribution theory for hierarchical processes
Hierarchies of discrete probability measures represent popular nonparametric priors in several applications. This is due to two key properties: (i) they naturally represent multiple heterogeneous populations; (ii) they produce ties across populations, resulting in a shrinkage property often described as "sharing of information". A distribution theory for hierarchical random measures that are generated via normalization, thus encompassing both the hierarchical Dirichlet and hierarchical Pitman–Yor processes, is presented. These results provide a probabilistic characterization of the induced (partially exchangeable) partition structure, including the distribution and the asymptotics of the number of partition sets, and a complete posterior characterization. Moreover, they also serve as building blocks for new simulation algorithms (both marginal and conditional) for Bayesian inference. Other dependent random measures (based on additive and nested constructions) will also be discussed and comparisons drawn.
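The "sharing of information" across populations can be illustrated with the Chinese restaurant franchise representation of the hierarchical Dirichlet process. The sketch below is a generic prior simulation; the group sizes and concentration parameters are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(5)

def chinese_restaurant_franchise(group_sizes, alpha, gamma, rng):
    """Sample cluster labels from a hierarchical Dirichlet process prior.

    Two-level Chinese restaurant franchise: within each group, customers
    sit at tables (local CRP, concentration alpha); each new table orders
    a dish from a global CRP (concentration gamma). Dishes appearing in
    several groups are exactly the cross-population ties described above.
    """
    dish_counts = []          # global counts of tables serving each dish
    labels = []
    for n_j in group_sizes:
        table_counts, table_dish, z = [], [], []
        for _ in range(n_j):
            # pick an existing table proportional to its size, or a new one
            probs = np.array(table_counts + [alpha], dtype=float)
            t = rng.choice(len(probs), p=probs / probs.sum())
            if t == len(table_counts):          # new table: order a dish
                dprobs = np.array(dish_counts + [gamma], dtype=float)
                d = rng.choice(len(dprobs), p=dprobs / dprobs.sum())
                if d == len(dish_counts):       # brand-new dish
                    dish_counts.append(0)
                dish_counts[d] += 1
                table_counts.append(0)
                table_dish.append(d)
            table_counts[t] += 1
            z.append(table_dish[t])
        labels.append(z)
    return labels

labels = chinese_restaurant_franchise([50, 50], alpha=1.0, gamma=1.0, rng=rng)
shared = set(labels[0]) & set(labels[1])   # dishes tied across the two groups
```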
C5: Malay Ghosh
Title: Bayesian Multiple Testing under Sparsity
This talk reviews certain Bayesian procedures that have recently been proposed to address multiple testing under sparsity. Consider the problem of simultaneous testing for the means of independent normal observations. In this talk we study asymptotic optimality properties of certain multiple testing rules in a Bayesian decision-theoretic framework, where the overall loss of a multiple testing rule is taken as the number of misclassified hypotheses. The multiple testing rules considered include spike-and-slab priors as well as a general class of one-group shrinkage priors for the mean parameters. The latter is rich enough to include, among others, the families of three-parameter beta and generalized double Pareto priors, and in particular the horseshoe, the normal-exponential-gamma and the Strawderman-Berger priors. Within the chosen asymptotic framework, the multiple testing rules under study asymptotically attain the risk of the Bayes Oracle. Some classical multiple testing procedures are also evaluated within the proposed Bayesian framework.
Keywords: Bayes Oracle; Benjamini-Hochberg; Empirical Bayes; Generalized Double Pareto; Hierarchical Bayes; Horseshoe; Three Parameter Beta Normal.
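As a schematic illustration of a two-groups testing rule of this kind, consider a normal spike-and-slab model in which each null is rejected when its posterior inclusion probability exceeds 1/2 (the type of rule whose risk is compared with the Bayes Oracle). The mixing weight and slab variance below are illustrative assumptions:

```python
import numpy as np

def normal_pdf(x, sd):
    """Density of a N(0, sd^2) evaluated at x."""
    return np.exp(-x ** 2 / (2 * sd ** 2)) / (sd * np.sqrt(2 * np.pi))

def spike_slab_posterior_prob(x, w, tau2):
    """Posterior probability that each mean is nonzero.

    Two-groups model (illustrative): X_i ~ N(theta_i, 1) with
    theta_i = 0 with probability 1 - w, theta_i ~ N(0, tau2) with
    probability w. The marginal under the slab is N(0, 1 + tau2).
    """
    m0 = normal_pdf(x, 1.0)                   # marginal under the spike
    m1 = normal_pdf(x, np.sqrt(1.0 + tau2))   # marginal under the slab
    return w * m1 / (w * m1 + (1 - w) * m0)

x = np.array([0.1, 4.0, -0.5, 3.2])
probs = spike_slab_posterior_prob(x, w=0.1, tau2=4.0)
reject = probs > 0.5      # reject H0i when inclusion probability > 1/2
```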
C6: Håvard Rue
Title: "Space-varying regression models: Specifications and simulations": 15 years later
In 2003, D. Gamerman, A. Moreira and I published a paper about space-varying regression models in Computational Statistics & Data Analysis. Dani was the driving force behind this paper, and I recall that I was very impressed by the work at the time. The XIV EBEB homage to Dani reminded me of that paper. After re-reading it, it became clear that a lot has happened since then, almost to the point that I considered the problem to be "solved". In this talk, I will discuss the relevant developments since then, and in particular some recent work unifying the prior specification for various varying coefficient models.
C7: Marco Ferreira
Title: Objective Bayesian Analysis for Gaussian Hierarchical Models with Intrinsic Conditional Autoregressive Priors
Bayesian hierarchical models are commonly used for modeling spatially correlated areal data. However, choosing appropriate prior distributions for the parameters in these models is necessary and sometimes challenging. In particular, an intrinsic conditional autoregressive (CAR) hierarchical component is often used to account for spatial association. Vague proper prior distributions have frequently been used for this type of model, but this requires the careful selection of suitable hyperparameters. In this paper, we derive several objective priors for the Gaussian hierarchical model with an intrinsic CAR component and discuss their properties. We show that the independence Jeffreys and Jeffreys-rule priors result in improper posterior distributions, while the reference prior results in a proper posterior distribution. We present results from a simulation study that compares frequentist properties of Bayesian procedures that use several competing priors, including the derived reference prior. We demonstrate that using the reference prior results in favorable coverage, interval length, and mean squared error. Finally, we illustrate our methodology with an application to 2012 housing foreclosure rates in the 88 counties of Ohio.
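The intrinsic CAR component referred to above has precision matrix Q = D - W built from the areal neighbourhood structure; its singularity is what makes the prior improper and motivates the careful posterior-propriety analysis. A minimal sketch, with a four-unit adjacency matrix invented for the example:

```python
import numpy as np

def icar_precision(W):
    """Intrinsic CAR precision matrix Q = D - W from an adjacency matrix W.

    D is the diagonal matrix of neighbour counts. Q is singular (its rows
    sum to zero), so the intrinsic CAR prior is improper.
    """
    return np.diag(W.sum(axis=1)) - W

# four areal units on a line: each unit neighbours the adjacent units
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
Q = icar_precision(W)
```

For a connected neighbourhood graph with k units, Q has rank k - 1; the one-dimensional null space (the constant vector) is the source of the impropriety.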
C8: Sudipto Banerjee
Title: High-Dimensional Bayesian Geostatistics
With the growing capabilities of Geographic Information Systems (GIS) and user-friendly software, statisticians today routinely encounter geographically referenced data containing observations from a large number of spatial locations and time points. Over the last decade, hierarchical spatiotemporal process models have become widely deployed statistical tools for researchers to better understand the complex nature of spatial and temporal variability. However, fitting hierarchical spatiotemporal models often involves expensive matrix computations with complexity increasing in cubic order for the number of spatial locations and temporal points. This renders such models infeasible for large data sets. I will present a focused review of two methods for constructing well-defined, highly scalable spatiotemporal stochastic processes. Both these processes can be used as "priors" for spatiotemporal random fields. The first approach constructs a low-rank process operating on a lower-dimensional subspace. The second approach constructs a Nearest-Neighbor Gaussian Process (NNGP) that ensures sparse precision matrices for its finite realizations. Both processes can be exploited as a scalable prior embedded within a rich hierarchical modeling framework to deliver full Bayesian inference. These approaches can be described as model-based solutions for big spatiotemporal datasets. The models ensure that the algorithmic complexity is on the order of n floating point operations (flops) per iteration, where n is the number of spatial locations. We compare these methods and provide some insight into their methodological underpinnings.
Keywords: Bayesian modeling; Directed Acyclic Graphs; Gaussian Processes; Low-rank models; Scalable models; Spatial stochastic processes.
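The NNGP idea of conditioning each location only on a small set of nearest preceding neighbours (in a fixed ordering) can be sketched with a Vecchia-type factorized likelihood. The exponential covariance and the toy parameters below are assumptions for illustration, not details from the talk; this naive loop also recomputes neighbour sets, whereas real implementations precompute them to obtain the linear-in-n cost:

```python
import numpy as np

rng = np.random.default_rng(2)

def vecchia_loglik(y, coords, phi, sigma2, m):
    """Vecchia/NNGP-style Gaussian log-likelihood sketch.

    Each y_i is conditioned only on its (at most) m nearest *preceding*
    locations in a fixed ordering, so the joint density factorises with
    small conditioning sets. Exponential covariance with range phi and
    variance sigma2 is assumed for illustration.
    """
    n = len(y)
    cov = lambda d: sigma2 * np.exp(-d / phi)
    total = 0.0
    for i in range(n):
        d_prev = np.linalg.norm(coords[:i] - coords[i], axis=1)
        nb = np.argsort(d_prev)[:m]            # nearest previous neighbours
        if nb.size:
            C_nn = cov(np.linalg.norm(coords[nb][:, None] - coords[nb][None], axis=2))
            c_in = cov(d_prev[nb])
            w = np.linalg.solve(C_nn, c_in)    # kriging weights
            mu = w @ y[nb]                     # conditional mean
            var = sigma2 - w @ c_in            # conditional variance
        else:
            mu, var = 0.0, sigma2
        total += -0.5 * (np.log(2 * np.pi * var) + (y[i] - mu) ** 2 / var)
    return total

coords = rng.uniform(size=(6, 2))
y = rng.normal(size=6)
ll_full = vecchia_loglik(y, coords, phi=0.5, sigma2=1.0, m=10)  # full conditioning
ll_m2 = vecchia_loglik(y, coords, phi=0.5, sigma2=1.0, m=2)     # NNGP-style
```

When m is at least n - 1 the factorization is exact, so `ll_full` coincides with the full Gaussian log-likelihood; smaller m trades a little accuracy for scalability.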
C9: Flavio Bambirra Gonçalves
Title: The magic of Retrospective Sampling: tractable exact solutions for intractable infinite-dimensional problems
Statistical models become more and more complex every day. In particular, it is common to consider infinite-dimensional models, in which the (augmented) sample space and/or the parameter space is infinite-dimensional. Such models inevitably lead to serious intractability problems in an inference context and have required, until recently, the use of finite-dimensional approximations, which are a considerable source of error, usually hard to quantify and control. These approximations may actually lead to considerable model decharacterisation. Exact methodologies for intractable infinite-dimensional problems have been an area of intensive investigation in the last few years, with the most promising solutions relying on Retrospective Monte Carlo methods. This talk will introduce the idea of retrospective sampling and discuss some general methodologies to perform exact inference in infinite-dimensional models. These are illustrated by examples concerning Cox processes and SDE-driven models.
C10: Sylvia Frühwirth-Schnatter
Title: Sparse Finite Mixtures for Model-Based Clustering
The talk reviews the concept of sparse finite mixture modelling and its application to model-based clustering, and is based on joint recent work with Gertraud Malsiner-Walli and Bettina Grün (2016, 2017, 2018). Sparse finite mixture models are based on a shrinkage prior on the weight distribution that removes all redundant components automatically and provides an automatic tool to select the number of clusters in a data set. Their implementation via MCMC is straightforward, and the tedious design of moves in common trans-dimensional approaches such as RJMCMC is avoided. The first part of the talk discusses sparse finite mixtures of Gaussian distributions as a special case and shows how this strategy can be extended to sparse finite mixtures where the component densities themselves are estimated semi-parametrically through mixtures of Gaussians. In the second part of the talk, this concept is extended in several directions and examples are presented showing that the framework of sparse finite mixtures works for non-Gaussian distributions, such as skew distributions, or for latent class models. The talk concludes with a comparison between sparse finite mixtures and Dirichlet process mixtures.
References:
Malsiner Walli, Gertraud, Frühwirth-Schnatter, Sylvia and Bettina Grün (2016): Model-based clustering based on sparse finite Gaussian mixtures, Statistics and Computing, 26, 303-324.
Malsiner Walli, Gertraud, Frühwirth-Schnatter, Sylvia and Bettina Grün (2017): Identifying mixtures of mixtures using Bayesian estimation, Journal of Computational and Graphical Statistics, 26, 285-295.
Frühwirth-Schnatter, Sylvia and Malsiner Walli, Gertraud (2018): From here to infinity - sparse finite versus Dirichlet process mixtures in model-based clustering. arXiv:1706.07194v2.
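The mechanism by which the shrinkage prior on the weights empties redundant components can be illustrated directly at the level of the prior: with a symmetric Dirichlet(e0) weight prior and e0 much smaller than 1, far fewer than K components end up occupied. A small prior simulation, with all values chosen purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(4)

def occupied_components(n, K, e0, rng):
    """Count occupied components under a symmetric Dirichlet(e0) weight prior.

    With e0 << 1, most prior mass on the weights concentrates near the
    faces of the simplex, so allocating n observations typically occupies
    far fewer than K components -- the mechanism that empties redundant
    components and yields an automatic estimate of the number of clusters.
    """
    w = rng.dirichlet(np.full(K, e0))   # mixture weights from the prior
    z = rng.choice(K, size=n, p=w)      # component allocations
    return np.unique(z).size

small = occupied_components(500, 10, 0.01, rng)   # sparse prior, e0 = 0.01
large = occupied_components(500, 10, 10.0, rng)   # diffuse prior, e0 = 10
```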