Principled multimodal cue integration for perceptual inference (PhD)

How should a human or machine combine information from multiple senses (e.g., vision and audition) to produce an accurate, unified percept of the world? In this multidisciplinary research, we test theoretical models of cue integration in two ways: through experiments on human perception and through implementation in large-scale intelligent computer systems that learn to make sense of multisensory data. In this way, computational experiments improve our understanding of human multisensory perception, and human experiments improve our ability to build intelligent machines. For example, we have developed a computational system that learns, without any operator supervision, to audio-visually identify and track people in a meeting scenario, understanding who said what, where, and when.
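
As a concrete illustration of what "principled integration" means here, the sketch below shows the two core computations in Python: reliability-weighted fusion of two Gaussian cues, and the structure-inference step of asking whether the cues share a common cause at all. This is a minimal, self-contained example under standard Gaussian assumptions, not the published system; the zero-mean prior and the values of prior_var and p_c are illustrative choices, not fitted parameters.

    import numpy as np
    from scipy.stats import norm, multivariate_normal

    def fuse_gaussian_cues(x_v, var_v, x_a, var_a):
        # Precision-weighted average: under Gaussian noise, each cue is
        # weighted by its reliability (inverse variance), which is the
        # statistically optimal combination of two estimates of one source.
        precision = 1.0 / var_v + 1.0 / var_a
        x_hat = (x_v / var_v + x_a / var_a) / precision
        return x_hat, 1.0 / precision

    def p_common_cause(x_v, var_v, x_a, var_a, prior_var=100.0, p_c=0.5):
        # Posterior probability that both cues were generated by one shared
        # source s ~ N(0, prior_var). Under the common-cause model the two
        # cues are jointly Gaussian, correlated through s; under the
        # independent-causes model their likelihood factorises.
        # prior_var and p_c are illustrative values assumed for this sketch.
        cov_common = np.array([[prior_var + var_v, prior_var],
                               [prior_var, prior_var + var_a]])
        like_common = multivariate_normal.pdf(
            [x_v, x_a], mean=[0.0, 0.0], cov=cov_common)
        like_indep = (norm.pdf(x_v, 0.0, np.sqrt(prior_var + var_v)) *
                      norm.pdf(x_a, 0.0, np.sqrt(prior_var + var_a)))
        return p_c * like_common / (
            p_c * like_common + (1.0 - p_c) * like_indep)

    # A more reliable auditory cue pulls the fused estimate toward itself.
    print(fuse_gaussian_cues(x_v=1.0, var_v=4.0, x_a=3.0, var_a=1.0))
    # Nearby cues favour a common cause; discrepant cues favour separate ones.
    print(p_common_cause(1.0, 4.0, 3.0, 1.0))    # ~0.76: probably one source
    print(p_common_cause(1.0, 4.0, 30.0, 1.0))   # ~0.0:  probably two sources

This is the flavour of computation behind the structure-inference papers listed below: the same model comparison decides, for instance, whether a heard voice and a seen face should be bound to the same speaker.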

Related Publications and Presentations

  • Adrian M. Haith, Carl Jackson, Chris Miall, and Sethu Vijayakumar, “Unifying the sensory and motor components of sensorimotor adaptation”, Advances in Neural Information Processing Systems (NIPS), Vancouver, 2009.
  • Timothy Hospedales and Sethu Vijayakumar, “Structure Inference for Bayesian Multisensory Scene Understanding”, IEEE Transactions on Pattern Analysis and Machine Intelligence, 2008, 30(12), 2140-2157.
  • Timothy Hospedales and Sethu Vijayakumar, “Structure Inference for Bayesian Multisensory Perception and Tracking”, International Joint Conference on Artificial Intelligence (IJCAI), 2007.
  • Timothy Hospedales and Sethu Vijayakumar, “Multisensory Oddity Detection as Bayesian Inference”, PLoS ONE, 2009, 4(1), e4209.
  • Timothy Hospedales, Mark C. W. van Rossum, B. Graham, and Mayank Dutia, “Implications of noise and neural heterogeneity for vestibulo-ocular reflex fidelity”, Neural Computation, 2008, 20, 756-788.
