Bayesian Statistics -- Test 2 Review Sheet, Spring 2011

I. Noninformative Bayesian Models for Normal Data
   A. Choices of Vague Priors for mu and sigma
      1. Resulting posterior distributions for mu and sigma^2
      2. How do these compare to the posteriors from the conjugate analysis?

II. Bayesian Linear Regression
   A. Regression Setup (with Matrix-Vector Notation)
      1. Likelihood function for Beta and sigma^2
      2. Least-squares estimates for Beta and sigma^2
   B. Noninformative Analysis for the Regression Situation
      1. Choices of vague priors for Beta and sigma^2
      2. Resulting posterior distributions for Beta and sigma^2
   C. Conjugate Analysis for the Regression Situation
      1. Choices of conjugate priors for Beta and sigma^2
      2. Role of delta, a, and b in the prior specifications
      3. How do the posteriors for Beta and sigma^2 compare to those from the noninformative analysis?
   D. Bayesian Model Selection
      1. Partitioning beta_j into z_j b_j
      2. Role of the indicator vector z
      3. Finding posterior probabilities for each possible value of the vector z
      4. Implementing the approach via Gibbs sampling code
      5. Restricting the model space to a particular subset of models by specifying only certain z vectors
   E. Posterior Predictive Distribution for the Data
      1. What is the definition of the posterior predictive distribution in the regression setting?
      2. Form of the posterior predictive distribution for the normal-error regression model

III. Classes of Bayesian Priors
   A. The Class of Conjugate Priors
      1. Why use a conjugate prior?
      2. Why not use a conjugate prior?
      3. Examples of conjugate prior/likelihood combinations
      4. Existence of conjugate priors for exponential-family distributions
   B. The Class of Uninformative Priors
      1. Proper and improper uniform priors
      2. Lack of invariance of the uniform prior for a Bernoulli probability
      3. Jeffreys priors and the formula for a Jeffreys prior
      4. The invariance property of the Jeffreys prior and what it means
      5. Other noninformative options (reference priors, diffuse proper priors, improper priors)
   C. The Class of Informative Priors
      1. Power priors incorporating previous-data information
      2. Strategies for prior elicitation
      3. Spike-and-slab priors for linear regression

IV. Markov Chain Monte Carlo Techniques
   A. The Monte Carlo Method
      1. Using the law of large numbers to approximate population quantities of interest
      2. Sampling from common distributions in R
   B. MCMC Methodology
      1. When is MCMC useful?
      2. What is a Markov chain, and what is the Markovian property?
   C. Gibbs Sampling
      1. When can we use the Gibbs sampler?
      2. The formal Gibbs algorithm
      3. What is the result of the Gibbs sampling process?
      4. Implementations of Gibbs sampling in R and WinBUGS
      5. Burn-in and convergence diagnostics such as trace plots
   D. Metropolis-Hastings Method
      1. When should we use the Metropolis-Hastings method?
      2. The formal Metropolis-Hastings algorithm
      3. Role of the acceptance ratio in the algorithm
      4. Acceptance rate
      5. Autocorrelation and the role of "thinning"
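For item III.B.3, a quick reminder of the Jeffreys-prior formula, with I(theta) denoting the Fisher information (the Bernoulli case below is the standard example tying this to item III.B.2):

```latex
p(\theta) \propto \sqrt{I(\theta)}, \qquad
I(\theta) = -\,E\!\left[\frac{\partial^2}{\partial\theta^2}\,\log f(x \mid \theta)\right].

% Example: for a Bernoulli probability \theta,
% I(\theta) = \frac{1}{\theta(1-\theta)}, so the Jeffreys prior is
% p(\theta) \propto \theta^{-1/2}(1-\theta)^{-1/2},
% i.e. a Beta(1/2, 1/2) distribution. Unlike the uniform prior,
% this choice is invariant under reparameterization (item III.B.4).
```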
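For IV.A.1, the Monte Carlo idea can be sketched in a few lines. The course code is in R/WinBUGS; this illustrative sketch uses Python, and the choice of E[X^2] for X ~ N(0, 1) as the target quantity is just an example (its true value is 1):

```python
import random

random.seed(1)

# Monte Carlo: by the law of large numbers, the sample average of
# g(X) over i.i.d. draws converges to E[g(X)].
n = 100_000
draws = [random.gauss(0.0, 1.0) ** 2 for _ in range(n)]  # g(x) = x^2
estimate = sum(draws) / n
print(estimate)  # close to the true value E[X^2] = 1
```

The same pattern approximates any posterior quantity once we can draw from the posterior, which is exactly why the MCMC methods in IV.B-D matter.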
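A minimal sketch of the Gibbs algorithm from IV.C (in Python rather than the R/WinBUGS used in class), assuming a toy bivariate normal target with correlation rho = 0.8, chosen because both full conditionals are known univariate normals:

```python
import random

random.seed(2)

# Gibbs sampling: alternate draws from each full conditional.
# For a standard bivariate normal with correlation rho:
#   x | y ~ N(rho * y, 1 - rho^2),  y | x ~ N(rho * x, 1 - rho^2).
rho = 0.8
sd = (1 - rho ** 2) ** 0.5

x, y = 0.0, 0.0
burn_in, n_keep = 1_000, 20_000   # discard burn-in draws (IV.C.5)
xs, ys = [], []
for t in range(burn_in + n_keep):
    x = random.gauss(rho * y, sd)  # draw x given current y
    y = random.gauss(rho * x, sd)  # draw y given the new x
    if t >= burn_in:
        xs.append(x)
        ys.append(y)

# The retained draws approximate the joint target distribution.
mean_x = sum(xs) / len(xs)                              # should be near 0
mean_xy = sum(a * b for a, b in zip(xs, ys)) / len(xs)  # estimates Cov(X,Y) = rho
print(mean_x, mean_xy)
```

The result of the process (IV.C.3) is the retained chain itself: a dependent sample whose empirical distribution approximates the target.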
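A minimal sketch of the random-walk Metropolis-Hastings algorithm from IV.D (again in Python as an illustration), targeting a standard normal through its unnormalized log-density; the proposal step size 2.0 is an arbitrary tuning choice:

```python
import math
import random

random.seed(3)

def log_target(x):
    # Unnormalized log-density of the target; here a standard normal.
    return -0.5 * x * x

# Random-walk Metropolis-Hastings: propose x' = x + N(0, step^2).
# The proposal is symmetric, so the acceptance ratio (IV.D.3)
# reduces to target(x') / target(x).
step = 2.0
x = 0.0
n_iter = 50_000
samples = []
accepted = 0
for _ in range(n_iter):
    proposal = x + random.gauss(0.0, step)
    log_ratio = log_target(proposal) - log_target(x)
    if log_ratio >= 0 or random.random() < math.exp(log_ratio):
        x = proposal          # accept the move
        accepted += 1
    samples.append(x)         # on rejection, the current x is repeated

accept_rate = accepted / n_iter   # the acceptance rate of IV.D.4
sample_mean = sum(samples) / n_iter
print(accept_rate, sample_mean)
```

Successive draws are autocorrelated; thinning (IV.D.5) keeps, say, every k-th draw to reduce that autocorrelation at the cost of fewer retained samples.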