Matthew Graham

Research Interests

I am interested in probabilistic inference and learning from both a machine learning and neuroscience perspective. In particular I will be exploring how sampling-based probabilistic inference and learning might be implemented within biological neural networks.

Publications:
2017
  Continuously tempered Hamiltonian Monte Carlo
Graham, M & Storkey, A 2017, Continuously tempered Hamiltonian Monte Carlo. in The Conference on Uncertainty in Artificial Intelligence (UAI 2017).
Hamiltonian Monte Carlo (HMC) is a powerful Markov chain Monte Carlo (MCMC) method for performing approximate inference in complex probabilistic models of continuous variables. In common with many MCMC methods, however, the standard HMC approach performs poorly in distributions with multiple isolated modes. We present a method for augmenting the Hamiltonian system with an extra continuous temperature control variable which allows the dynamic to bridge between sampling a complex target distribution and a simpler unimodal base distribution. This augmentation both helps improve mixing in multimodal targets and allows the normalisation constant of the target distribution to be estimated. The method is simple to implement within existing HMC code, requiring only a standard leapfrog integrator. We demonstrate experimentally that the method is competitive with annealed importance sampling and simulated tempering methods at sampling from challenging multimodal distributions and estimating their normalising constants.
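The core construction in the abstract can be sketched compactly: the extended Hamiltonian interpolates between the target and base log densities through a smooth inverse-temperature function of one extra continuous coordinate, and an ordinary leapfrog step is applied to the extended state. The Python sketch below is illustrative only; the toy densities, the sigmoidal form of beta(u) and all function names are assumptions rather than the paper's exact construction.

import numpy as np

# Sketch of a continuously tempered leapfrog step (illustrative only).
# The extended state is (x, u): x are the target variables, u is an extra
# temperature-control coordinate mapped to an inverse temperature beta(u)
# in (0, 1) that bridges the base and target log densities.

def log_target(x):             # complex (possibly multimodal) target, toy example
    return np.logaddexp(-0.5 * np.sum((x - 3.0) ** 2),
                        -0.5 * np.sum((x + 3.0) ** 2))

def log_base(x):               # simple unimodal base distribution, toy example
    return -0.5 * np.sum(x ** 2)

def beta(u):                   # smooth inverse-temperature map (assumed form)
    return 1.0 / (1.0 + np.exp(-u))

def log_joint(x, u):           # bridged log density on the extended space
    b = beta(u)
    return b * log_target(x) + (1.0 - b) * log_base(x)

def grad(f, z, eps=1e-6):      # finite-difference gradient, for brevity only
    g = np.zeros_like(z)
    for i in range(z.size):
        dz = np.zeros_like(z); dz[i] = eps
        g[i] = (f(z + dz) - f(z - dz)) / (2 * eps)
    return g

def leapfrog_step(x, u, px, pu, step):
    """One standard leapfrog step on the extended Hamiltonian in (x, u)."""
    z = np.concatenate([x, [u]])
    p = np.concatenate([px, [pu]])
    logp = lambda z: log_joint(z[:-1], z[-1])
    p = p + 0.5 * step * grad(logp, z)       # half step in momenta
    z = z + step * p                         # full step in positions
    p = p + 0.5 * step * grad(logp, z)       # half step in momenta
    return z[:-1], z[-1], p[:-1], p[-1]

Using a sigmoid keeps beta(u) in (0, 1) and differentiable in u, so the temperature control variable is handled by exactly the same leapfrog update as the other coordinates.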
General Information
Organisations: Institute for Adaptive and Neural Computation.
Authors: Graham, Matthew & Storkey, Amos.
Number of pages: 16
Publication Date: 15 Aug 2017
Publication Information
Category: Conference contribution
Original Language: English
  Asymptotically exact inference in differentiable generative models
Graham, M & Storkey, A 2017, Asymptotically exact inference in differentiable generative models. in Proceedings of the 20th International Conference on Artificial Intelligence and Statistics (AISTATS) 2017. Journal of Machine Learning Research: Workshop and Conference Proceedings, pp. 499-508.
Many generative models can be expressed as a differentiable function of random inputs drawn from some simple probability density. This framework includes both deep generative architectures such as Variational Autoencoders and a large class of procedurally defined simulator models. We present a method for performing efficient MCMC inference in such models when conditioning on observations of the model output. For some models this offers an asymptotically exact inference method where Approximate Bayesian Computation might otherwise be employed. We use the intuition that inference corresponds to integrating a density across the manifold corresponding to the set of inputs consistent with the observed outputs. This motivates the use of a constrained variant of Hamiltonian Monte Carlo which leverages the smooth geometry of the manifold to coherently move between inputs exactly consistent with observations. We validate the method by performing inference tasks in a diverse set of models.
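The manifold intuition in the abstract can be made concrete: write the generator as a differentiable function g of the random inputs u, so that conditioning on an observed output y_obs restricts inference to the level set {u : g(u) = y_obs}, and project proposed moves back onto this set. The sketch below uses plain Newton iterations with a minimum-norm correction as a simplified stand-in for the constrained leapfrog integrator in the paper; the toy generator, finite-difference Jacobian and all names are assumptions.

import numpy as np

# Illustrative projection onto the constraint manifold {u : g(u) = y_obs},
# where g is a differentiable generator mapping random inputs to outputs.
# In the constrained HMC setting, a step like this keeps the chain exactly
# consistent with the observations while it moves along the manifold.

def g(u):                                   # toy differentiable generator
    return np.array([u[0] ** 2 + u[1], np.sin(u[1]) + u[2]])

def jacobian(u, eps=1e-6):                  # finite-difference Jacobian of g
    n_out, n_in = g(u).size, u.size
    J = np.zeros((n_out, n_in))
    for j in range(n_in):
        du = np.zeros(n_in); du[j] = eps
        J[:, j] = (g(u + du) - g(u - du)) / (2 * eps)
    return J

def project_to_manifold(u, y_obs, n_iter=20, tol=1e-10):
    """Newton iterations pulling u back onto {u : g(u) = y_obs}."""
    for _ in range(n_iter):
        c = g(u) - y_obs                    # constraint residual
        if np.sum(c ** 2) < tol:
            break
        J = jacobian(u)
        # minimum-norm correction solving J @ delta = -c
        delta = J.T @ np.linalg.solve(J @ J.T, -c)
        u = u + delta
    return u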
General Information
Organisations: Institute for Adaptive and Neural Computation.
Authors: Graham, Matthew & Storkey, Amos.
Number of pages: 10
Pages: 499-508
Publication Date: Apr 2017
Publication Information
Category: Conference contribution
Original Language: English
2016
  Continuously tempered Hamiltonian Monte Carlo
Graham, M & Storkey, A 2016, 'Continuously tempered Hamiltonian Monte Carlo', Advances in Approximate Bayesian Inference, Barcelona, Spain, 9/12/16 - 10/12/16.
Hamiltonian Monte Carlo (HMC) is a powerful Markov chain Monte Carlo (MCMC) method for performing approximate inference in complex probabilistic models of continuous variables. In common with many MCMC methods, however, the standard HMC approach performs poorly in distributions with multiple isolated modes. Based on an approach proposed in the statistical physics literature, we present a method for augmenting the Hamiltonian system with an extra continuous temperature control variable which allows the dynamic to bridge between sampling a complex target distribution and a simpler uni-modal base distribution. This augmentation both helps increase mode-hopping in multi-modal targets and allows the normalisation constant of the target distribution to be estimated. The method is simple to implement within existing HMC code, requiring only a standard leapfrog integrator. It produces MCMC samples from the target distribution which can be used to directly estimate expectations without any importance re-weighting.
General Information
Organisations: Institute for Adaptive and Neural Computation.
Authors: Graham, Matthew & Storkey, Amos.
Publication Date: 9 Dec 2016
Publication Information
Category: Abstract
Original Language: English
  Pseudo-Marginal Slice Sampling
Murray, I & Graham, M 2016, Pseudo-Marginal Slice Sampling. in Proceedings of the 19th International Conference on Artificial Intelligence and Statistics 2016. vol. 51.
Markov chain Monte Carlo (MCMC) methods asymptotically sample from complex probability distributions. The pseudo-marginal MCMC framework only requires an unbiased estimator of the unnormalized probability distribution function to construct a Markov chain. However, the resulting chains are harder to tune to a target distribution than conventional MCMC, and the types of updates available are limited. We describe a general way to clamp and update the random numbers used in a pseudo-marginal method’s unbiased estimator. In this framework we can use slice sampling and other adaptive methods. We obtain more robust Markov chains, which often mix more quickly.
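The clamp-and-update idea described here has a short sketch: write the log of the unbiased estimator as a deterministic function of the parameters theta and the standard-normal draws u that drive it, then alternate a Metropolis update of theta with u clamped and a slice-sampling-style update of u with theta clamped (below, an elliptical slice-sampling move under the Gaussian prior on u). The toy estimator and all names are placeholders for illustration, not the paper's experiments.

import numpy as np

# Sketch of alternating updates for pseudo-marginal slice sampling:
# log_est(theta, u) is the log of an unbiased estimator of the unnormalised
# target, driven by standard-normal random numbers u that are made explicit
# so they can be clamped and updated like ordinary state.

rng = np.random.default_rng(0)

def log_est(theta, u):
    """Log of an unbiased estimator of the unnormalised target (toy stand-in)."""
    return -0.5 * theta ** 2 + 0.1 * np.sum(u * np.cos(theta))

def update_theta(theta, u, step=0.5):
    """Random-walk Metropolis update of theta with the estimator noise u clamped."""
    prop = theta + step * rng.normal()
    if np.log(rng.uniform()) < log_est(prop, u) - log_est(theta, u):
        return prop
    return theta

def update_u(theta, u):
    """Elliptical slice-sampling move on u (standard-normal prior) with theta clamped."""
    nu = rng.normal(size=u.shape)                        # auxiliary draw from the prior
    log_y = log_est(theta, u) + np.log(rng.uniform())    # slice threshold
    phi = rng.uniform(0.0, 2 * np.pi)
    lo, hi = phi - 2 * np.pi, phi
    while True:                                          # shrink the bracket until accepted
        u_prop = u * np.cos(phi) + nu * np.sin(phi)
        if log_est(theta, u_prop) > log_y:
            return u_prop
        lo, hi = (phi, hi) if phi < 0.0 else (lo, phi)
        phi = rng.uniform(lo, hi)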
General Information
Organisations: Neuroinformatics DTC.
Authors: Murray, Iain & Graham, Matthew.
Number of pages: 9
Publication Date: May 2016
Publication Information
Category: Conference contribution
Original Language: English

Projects:
Neural models for sampling-based probabilistic inference and learning (PhD)