I. Bayesian Linear Regression
   A. Regression Setup (with Matrix-Vector Notation)
      1. Likelihood function for beta and sigma^2
      2. Least-squares estimates for beta and sigma^2
   B. Noninformative Analysis for the Regression Situation
      1. Choices of vague priors for beta and sigma^2
      2. Resulting posterior distributions for beta and sigma^2
   C. Conjugate Analysis for the Regression Situation
      1. Specification of conjugate prior information
      2. Role of the "hypothetical" prior observations
      3. Roles of delta, a, and b in the prior specification
      4. Rules of thumb for weighting the worth of the prior information
      5. Form of the posterior distributions for the precision tau and for beta | tau
   D. Bayesian Model Selection
      1. Partitioning beta_j into z_j b_j
      2. Role of the indicator vector z
      3. Finding posterior probabilities for each possible value of the vector z
      4. Implementing the approach via Gibbs sampling code
      5. Restricting the model space to a particular subset of models by specifying only certain z vectors
   E. Posterior Predictive Distribution for the Data
      1. What is the definition of the posterior predictive distribution in the regression setting?
      2. Form of the posterior predictive distribution for the normal-error regression model

II. Classes of Bayesian Priors
   A. The Class of Conjugate Priors
      1. Why use a conjugate prior?
      2. Why not use a conjugate prior?
      3. Examples of conjugate prior/likelihood combinations
      4. Existence of conjugate priors for exponential-family distributions
   B. The Class of Noninformative Priors
      1. Proper and improper uniform priors
      2. Lack of invariance of the uniform prior for a Bernoulli probability
      3. Jeffreys priors and the formula for a Jeffreys prior
      4. The invariance property of the Jeffreys prior and what it means
      5. Other noninformative options (reference priors, diffuse proper priors, improper priors)
   C. The Class of Informative Priors
      1. Power priors involving previous-data information
      2. Strategies for prior elicitation
      3. Spike-and-slab priors for linear regression
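To make the noninformative regression analysis in Parts I.A-I.B concrete, here is a minimal sketch. It is in Python rather than the course's R, and the data, seed, and sample sizes are all illustrative assumptions, not values from the course. It computes the least-squares estimates and then draws from the standard posteriors under the vague prior p(beta, sigma^2) proportional to 1/sigma^2.

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulated illustrative data (n, p, beta_true, and noise scale are assumptions)
n, p = 100, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])
beta_true = np.array([1.0, 2.0, -0.5])
y = X @ beta_true + rng.normal(scale=1.5, size=n)

# Least-squares estimates (Part I.A.2)
XtX_inv = np.linalg.inv(X.T @ X)
beta_hat = XtX_inv @ X.T @ y
resid = y - X @ beta_hat
s2 = resid @ resid / (n - p)          # usual unbiased estimate of sigma^2

# Posterior draws under p(beta, sigma^2) ∝ 1/sigma^2 (Part I.B):
#   sigma^2 | y        ~  (n - p) s^2 / chi^2_{n-p}
#   beta | sigma^2, y  ~  N(beta_hat, sigma^2 (X'X)^{-1})
draws = 5000
sigma2 = (n - p) * s2 / rng.chisquare(df=n - p, size=draws)
beta = np.array([rng.multivariate_normal(beta_hat, s2_i * XtX_inv)
                 for s2_i in sigma2])

print("posterior mean of beta:   ", beta.mean(axis=0))
print("posterior mean of sigma^2:", sigma2.mean())
```

Note that the posterior for beta is centered at the least-squares estimate beta_hat, so with a vague prior the Bayesian point estimates essentially reproduce the frequentist ones; the posterior draws additionally give interval estimates directly.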
III. Markov Chain Monte Carlo Techniques
   A. The Monte Carlo Method
      1. Using the Law of Large Numbers to approximate population quantities of interest
      2. Sampling from common distributions in R
   B. MCMC Methodology
      1. When is MCMC useful?
      2. What is a Markov chain, and what is the Markovian property?

   ###
   ### NOTE: For Test 2, you should be prepared to answer conceptual questions about
   ### the Gibbs Sampler and M-H Algorithm (e.g., short-answer, multiple-choice, or
   ### True/False questions), but you will not have to *implement* these algorithms
   ### on a data set on Test 2.
   ###

   C. Gibbs Sampling
      1. When can we use the Gibbs Sampler?
      2. The formal Gibbs algorithm
      3. What is the result of the Gibbs Sampling process?
      4. Implementations of Gibbs Sampling in R and WinBUGS (don't worry about coding details)
      5. Burn-in and convergence diagnostics such as trace plots
   D. The Metropolis-Hastings Method
      1. When should we use the Metropolis-Hastings Method?
      2. The formal Metropolis-Hastings algorithm
      3. Role of the acceptance ratio in the algorithm
      4. The acceptance rate
      5. Autocorrelation and the role of "thinning"
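The Gibbs Sampling ideas in Part III.C can be illustrated with a tiny example. This is a sketch in Python rather than R or WinBUGS, purely for illustration; the bivariate-normal target, the value of rho, and the iteration and burn-in counts are assumptions chosen to keep the example small. The sampler cycles through the two full conditional distributions, which is exactly what the formal Gibbs algorithm prescribes.

```python
import numpy as np

rng = np.random.default_rng(1)

# Target: bivariate normal, means 0, variances 1, correlation rho.
# Full conditionals (the distributions the Gibbs Sampler cycles through):
#   x | y ~ N(rho * y, 1 - rho^2)
#   y | x ~ N(rho * x, 1 - rho^2)
rho = 0.8
n_iter, burn_in = 11000, 1000
x, y = 0.0, 0.0                    # arbitrary starting values
samples = np.empty((n_iter, 2))

for t in range(n_iter):
    x = rng.normal(rho * y, np.sqrt(1 - rho**2))   # draw x given current y
    y = rng.normal(rho * x, np.sqrt(1 - rho**2))   # draw y given new x
    samples[t] = (x, y)

samples = samples[burn_in:]        # discard burn-in draws (Part III.C.5)
print("sample correlation:", np.corrcoef(samples.T)[0, 1])
```

The retained draws behave like (dependent) samples from the joint target, so the sample correlation should be close to rho; a trace plot of either coordinate is the usual visual convergence check, and the autocorrelation in the chain is what motivates thinning in Part III.D.5.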