
Friday, September 22, 2017

MCMC Samplers Visualization

A really nice interactive gallery of MCMC samplers


You can choose different algorithms and target distributions, change method parameters, and observe the chain evolve.

This might come in handy next semester, when I teach a Monte Carlo class.

Tuesday, January 12, 2016

MCMC Class Lecture Notes

In the fall of 2015, I taught an applied introductory course in Markov Chain Monte Carlo methods targeted at graduate students from scientific computing, engineering, math, and computer science.

Here's the blurb I put out for the course:
Markov Chain Monte Carlo (MCMC) is one of the most powerful and versatile methods developed in the 20th century. It uses a sequence of random numbers to solve important problems in physics, computational biology, econometrics, political science, Bayesian inference, machine learning, data science, optimization, etc. For many of these problems, simple Monte Carlo ("integration by darts") is inefficient. Often, MCMC is the answer. 
Broadly speaking, MCMC is a collection of sampling methods that allows us to solve problems of integration, optimization, and simulation in high-dimensional spaces. In this course, we will look at the foundations of Monte Carlo and MCMC, introduce and implement different sampling algorithms, and develop statistical concepts and intuition to analyze convergence and error in the simulation. Assignments and labs will consider illustrative examples from statistics, material science, physics, economics, optimization, and Bayesian inference.
The complete set of lecture notes may be downloaded from my Google Sites webpage.
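The blurb contrasts simple Monte Carlo ("integration by darts") with MCMC. As a minimal sketch of the kind of sampler the course builds toward, here is a random-walk Metropolis chain targeting a standard normal density; the step size, chain length, and variable names are illustrative choices, not values taken from the lecture notes.

```python
import numpy as np

def log_target(x):
    # Log of an unnormalized N(0, 1) density; stands in for any target we can evaluate.
    return -0.5 * x**2

def random_walk_metropolis(n_steps=50_000, step=1.0, seed=0):
    rng = np.random.default_rng(seed)
    x = 0.0
    chain = np.empty(n_steps)
    accepted = 0
    for i in range(n_steps):
        proposal = x + step * rng.normal()   # symmetric (random-walk) proposal
        # Metropolis rule: accept with probability min(1, pi(proposal)/pi(x)).
        if np.log(rng.uniform()) < log_target(proposal) - log_target(x):
            x = proposal
            accepted += 1
        chain[i] = x
    return chain, accepted / n_steps

chain, acc_rate = random_walk_metropolis()
print(f"E[X] ~ {chain.mean():.3f}, acceptance rate ~ {acc_rate:.2f}")
```

The chain average approximates the target mean (here, 0), which is exactly the kind of high-dimensional expectation that plain "integration by darts" handles poorly.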

Sunday, October 6, 2013

Metropolis-Hastings

As I mentioned previously, one of my colleagues is running an MCMC seminar this semester, which is essentially a journal club that reads and discusses some of the seminal papers in the field.

The second paper we discussed was Hastings' famous 1970 paper, which generalized the "Metropolis" algorithm and recast it in the more general form now known as the Metropolis-Hastings algorithm.
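Hastings' generalization allows asymmetric proposal distributions by adding the correction factor q(x|y)/q(x,y) = q(x|y)/q(y|x) to the Metropolis acceptance ratio. The sketch below illustrates this on a toy exponential target with a multiplicative lognormal proposal; the target, the proposal, and the parameter values are illustrative assumptions on my part, not taken from the paper.

```python
import numpy as np

def log_target(x):
    # Unnormalized exponential density on x > 0: pi(x) ∝ exp(-x).
    return -x if x > 0 else -np.inf

def mh_asymmetric(n_steps=10_000, sigma=0.5, seed=0):
    rng = np.random.default_rng(seed)
    x = 1.0
    samples = np.empty(n_steps)
    for i in range(n_steps):
        # Asymmetric proposal: multiply the current state by a lognormal factor,
        # so y | x ~ Lognormal(log x, sigma^2).
        y = x * rng.lognormal(mean=0.0, sigma=sigma)
        # Hastings correction log[q(x|y)/q(y|x)]; for this proposal it reduces to log(y/x).
        log_q_ratio = np.log(y) - np.log(x)
        log_alpha = log_target(y) - log_target(x) + log_q_ratio
        if np.log(rng.uniform()) < log_alpha:
            x = y
        samples[i] = x
    return samples

print(mh_asymmetric().mean())  # should land close to 1, the mean of the exponential target
```

Dropping the correction term recovers the original (symmetric-proposal) Metropolis rule, which is precisely the special case Hastings generalized.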

The story behind this man and his famous paper turns out to be every bit as fascinating as that behind the original paper.

From 1966 to 1971, Hastings was an Associate Professor in the Department of Mathematics at the University of Toronto. During this period, he wrote the famous paper mentioned above (which generalised the work of N. Metropolis, A. Rosenbluth, M. Rosenbluth, A. Teller, and E. Teller (1953), "Equation of state calculations by fast computing machines", J. Chem. Phys. 21, 1087-1091). Hastings explains: 
When I returned to the University of Toronto, after my time at Bell Labs, I focused on Monte Carlo methods and at first on methods of sampling from probability distributions with no particular area of application in mind. [University of Toronto Chemistry professor] John Valleau and his associates consulted me concerning their work. They were using Metropolis's method to estimate the mean energy of a system of particles in a defined potential field. With 6 coordinates per particle, a system of just 100 particles involved a dimension of 600. When I learned how easy it was to generate samples from high dimensional distributions using Markov chains, I realised how important this was for Statistics, and I devoted all my time to this method and its variants which resulted in the 1970 paper.
He seems to have been a fascinating character.

He wrote only three peer-reviewed papers in his entire life, and supervised only a single PhD student.