David Reichert, PhD

Publications:
2013
  Charles Bonnet Syndrome
Reichert, DP, Seriès, P & Storkey, AJ 2013, 'Charles Bonnet Syndrome: Evidence for a Generative Model in the Cortex?', PLoS Computational Biology, vol. 9, no. 7, e1003134. DOI: 10.1371/journal.pcbi.1003134
Several theories propose that the cortex implements an internal model to explain, predict, and learn about sensory data, but the nature of this model is unclear. One condition that could be highly informative here is Charles Bonnet syndrome (CBS), where loss of vision leads to complex, vivid visual hallucinations of objects, people, and whole scenes. CBS could be taken as an indication that there is a generative model in the brain, specifically one that can synthesise rich, consistent visual representations even in the absence of actual visual input. The processes that lead to CBS are poorly understood. Here, we argue that a model recently introduced in machine learning, the deep Boltzmann machine (DBM), could capture the relevant aspects of (hypothetical) generative processing in the cortex. The DBM carries both the semantics of a probabilistic generative model and of a neural network. The latter allows us to model a concrete neural mechanism that could underlie CBS, namely, homeostatic regulation of neuronal activity. We show that homeostatic plasticity could serve to make the learnt internal model robust against e.g. degradation of sensory input, but overcompensate in the case of CBS, leading to hallucinations. We demonstrate how a wide range of features of CBS can be explained in the model and suggest a potential role for the neuromodulator acetylcholine. This work constitutes the first concrete computational model of CBS and the first application of the DBM as a model in computational neuroscience. Our results lend further credence to the hypothesis of a generative model in the brain.
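The homeostatic mechanism described in the abstract can be illustrated with a minimal rate-based sketch (an assumption-laden toy, not the paper's DBM; the unit count, target rate, and learning rate below are arbitrary choices): each unit nudges its bias toward a target activity level, and once sensory input is removed the same rule restores activity that no longer reflects any input, the analogue of hallucination.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

n_units, target_rate, eta = 50, 0.2, 0.5
bias = np.zeros(n_units)
w_input = rng.normal(0.0, 1.0, n_units)   # toy feed-forward drive per unit

# Phase 1: sensory input present; homeostasis holds activity at the target.
for _ in range(2000):
    act = sigmoid(w_input + bias)
    bias += eta * (target_rate - act)      # homeostatic bias regulation

# Phase 2: input removed (analogue of visual loss); the same rule
# overcompensates, restoring activity that no longer reflects any input.
for _ in range(2000):
    act = sigmoid(bias)
    bias += eta * (target_rate - act)

spontaneous = sigmoid(bias).mean()
print(round(spontaneous, 2))   # prints 0.2 — target activity restored with no input
```

The point of the sketch is only that a rule stabilising firing rates is agnostic about where activity comes from: with input gone, it drives the network back to the same activity level by internal means alone.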
General Information
Organisations: Institute for Adaptive and Neural Computation.
Authors: Reichert, David P., Seriès, Peggy & Storkey, Amos J.
Number of pages: 19
Publication Date: 18 Jul 2013
Publication Information
Category: Article
Journal: PLoS Computational Biology
Volume: 9
Issue number: 7
ISSN: 1553-734X
Original Language: English
DOI: 10.1371/journal.pcbi.1003134
2011
  A Hierarchical Generative Model of Recurrent Object-Based Attention in the Visual Cortex
Reichert, DP, Seriès, P & Storkey, AJ 2011, 'A Hierarchical Generative Model of Recurrent Object-Based Attention in the Visual Cortex', in T Honkela, W Duch, M Girolami & S Kaski (eds), Artificial Neural Networks and Machine Learning - ICANN 2011, Part I, Lecture Notes in Computer Science, vol. 6791, Springer-Verlag Berlin Heidelberg, Berlin, pp. 18-25, 21st International Conference on Artificial Neural Networks (ICANN 2011), Finland, 14-17 June 2011.

In line with recent work exploring Deep Boltzmann Machines (DBMs) as models of cortical processing, we demonstrate the potential of DBMs as models of object-based attention, combining generative principles with attentional ones. We show: (1) How inference in DBMs can be related qualitatively to theories of attentional recurrent processing in the visual cortex; (2) that deepness and topographic receptive fields are important for realizing the attentional state; (3) how more explicit attentional suppressive mechanisms can be implemented, depending crucially on sparse representations being formed during learning.
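Point (3) of the abstract, attentional suppression operating on sparse representations, can be caricatured in a few lines (a toy illustration under strong assumptions, not the paper's DBM: each object is a hand-written sparse feature pattern, and attention is modeled as a top-down multiplicative gate).

```python
import numpy as np

# Toy "objects": binary feature patterns over an 8-pixel input.
obj_a = np.array([1, 1, 1, 1, 0, 0, 0, 0])
obj_b = np.array([0, 0, 0, 0, 1, 1, 1, 1])
W = np.stack([obj_a, obj_b]).astype(float)   # one sparse top-level unit per object

image = np.clip(obj_a + obj_b, 0, 1)         # both objects present in the input

# Attend to object A: activate its top-level unit only and use the
# resulting top-down prediction as a multiplicative gate on the input.
attended = np.array([1.0, 0.0])
feedback = attended @ W                       # top-down prediction
gated = image * feedback                      # suppress unpredicted features

print(gated)   # only object A's features survive the gate
```

Sparseness matters here because the gate only works if each top-level unit predicts a distinct subset of low-level features; with dense, overlapping representations the feedback would fail to single out one object.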


General Information
Organisations: Neuroinformatics DTC.
Authors: Reichert, David P., Seriès, Peggy & Storkey, Amos J.
Keywords: recognition
Number of pages: 8
Pages: 18-25
Publication Date: 2011
Publication Information
Category: Conference contribution
Original Language: English
  Neuronal Adaptation for Sampling-Based Probabilistic Inference in Perceptual Bistability
Reichert, DP, Seriès, P & Storkey, A 2011, 'Neuronal Adaptation for Sampling-Based Probabilistic Inference in Perceptual Bistability', in J Shawe-Taylor, RS Zemel, PL Bartlett, F Pereira & KQ Weinberger (eds), Advances in Neural Information Processing Systems 24, Curran Associates Inc.
It has been argued that perceptual multistability reflects probabilistic inference performed by the brain when sensory input is ambiguous. Alternatively, more traditional explanations of multistability refer to low-level mechanisms such as neuronal adaptation. We employ a Deep Boltzmann Machine (DBM) model of cortical processing to demonstrate that these two different approaches can be combined in the same framework. Based on recent developments in machine learning, we show how neuronal adaptation can be understood as a mechanism that improves probabilistic, sampling-based inference. Using the ambiguous Necker cube image, we analyze the perceptual switching exhibited by the model. We also examine the influence of spatial attention, and explore how binocular rivalry can be modeled with the same approach. Our work joins earlier studies in demonstrating how the principles underlying DBMs relate to cortical processing, and offers novel perspectives on the neural implementation of approximate probabilistic inference in the brain.
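The core idea, that adaptation helps a sampler escape the modes of a multimodal posterior, can be sketched with a two-unit toy (an illustrative assumption, not the paper's DBM: two mutually inhibiting units stand in for the two Necker cube interpretations, and a hypothetical fatigue variable lowers the effective bias of recently active units).

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

b, J = 8.0, 16.0   # bias and mutual inhibition: two deep modes, (1,0) and (0,1)

def gibbs(n_sweeps, adapt_rate=0.0, decay=0.99):
    h = np.array([1.0, 0.0])   # start in one interpretation
    a = np.zeros(2)            # fatigue (adaptation) per unit
    switches, prev = 0, 0
    for _ in range(n_sweeps):
        for i in (0, 1):
            drive = b - J * h[1 - i] - a[i]
            h[i] = float(rng.random() < sigmoid(drive))
        a = decay * (a + adapt_rate * h)   # active units fatigue, all recover slowly
        mode = int(h[1] > h[0])
        if mode != prev:
            switches, prev = switches + 1, mode
    return switches

plain = gibbs(5000)                      # chain rarely leaves its initial mode
adapted = gibbs(5000, adapt_rate=0.3)    # fatigue drives regular mode switching
print(plain, adapted)
```

Without adaptation the Gibbs chain mixes poorly between the two interpretations; with fatigue, the currently dominant unit's effective bias erodes until the alternative takes over, mimicking perceptual switching.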
General Information
Organisations: Institute for Adaptive and Neural Computation.
Authors: Reichert, David P., Seriès, Peggy & Storkey, Amos.
Publication Date: 2011
Publication Information
Category: Conference contribution
Original Language: English
  Unifying low-level mechanistic and high-level Bayesian explanations of bistable perceptions: neuronal adaptation for cortical inference
Reichert, DP, Seriès, P & Storkey, A 2011, 'Unifying low-level mechanistic and high-level Bayesian explanations of bistable perceptions: neuronal adaptation for cortical inference', poster presented at the 20th Annual Computational Neuroscience Meeting: CNS*2011, Stockholm, Sweden, 23/07/11 - 28/07/11.
Ambiguous images such as the Necker cube evoke bistable perceptions in observers, where the conscious percept alternates between the two possible image interpretations. One classic explanation is that mechanisms like neuronal adaptation underlie the switching phenomenon [1]. On the other hand, one possible high-level explanation [2] is that in performing Bayesian inference, the brain might explore the multimodal posterior distribution over possible image interpretations. For example, sampling from a bimodal distribution could explain the perceptual switching [2], and probabilistic sampling might be a general principle underlying cortical inference [3]. In this computational study of bistable perceptions, we show that both views can be combined: neuronal adaptation such as changes of neuronal excitability and synaptic depression can be understood to improve the sampling algorithm the brain might perform. We use Deep Boltzmann Machines (DBMs) as models of cortical processing [4]. DBMs are hierarchical, probabilistic neural networks that learn to generate or predict the data they are trained on. For inference, one can utilize Markov chain Monte Carlo methods such as Gibbs sampling, corresponding to the model's neurons switching on stochastically. The model then performs a random walk in state space, exploring the various learned interpretations of an image, thus potentially explaining bistable perceptions (cf. [5]). However, in machine learning one often finds that exploring multimodal posterior distributions in high-dimensional spaces can be problematic, as models can get stuck in individual modes ('the Markov chain does not mix'). Very recent machine learning work [6, 7] has devised a class of methods that alleviate this issue by dynamically changing the model parameters, the connection strengths, during sampling. Interestingly, Welling [6] suggested a potential connection to dynamic synapses in biology. Here, we make this connection explicit.
Using a DBM model that has learned to represent toy images of unambiguous cubes, we show how a sampling algorithm similar to [7] can be understood as modeling dynamic changes to neuronal excitability and synaptic strength, making it possible to switch more easily between modes of the posterior distribution, i.e. the two likely interpretations of the ambiguous Necker cube. Unlike [2], who design an ad-hoc abstract inference process, our approach is based on a concrete hierarchical neural network that has learned to represent the images, and utilizes canonical inference methods, with the additional twist of relating the latter to neuronal adaptation. We also make different hypotheses than [2] w.r.t. where in the brain the perceptual switch is realized (namely, gradually throughout the visual hierarchy) and how probability distributions are represented (one sample at a time). Our study naturally follows up on our earlier work [4], where we showed how similar, homeostatic mechanisms on a slower timescale can cause hallucinations. As a final contribution, we demonstrate how spatial attention directed to specific features of the Necker cube can influence the perceptual switching [8].
References
1. Blake R: A neural theory of binocular rivalry. Psychol Rev 1989, 96:145-167.
2. Sundareswara R, Schrater PR: Perceptual multistability predicted by search model for Bayesian decisions. J Vis 2008, 8:1-19.
3. Fiser J, Berkes B, Orban G, Lengyel M: Statistically optimal perception and learning: from behavior to neural representations. Trends Cogn Sci 2010, 14:119-130.
4. Reichert DP, Series P, Storkey AJ: Hallucinations in Charles Bonnet Syndrome Induced by Homeostasis: a Deep Boltzmann Machine Model. Adv Neural Inf Process Syst 2010.
5. Gershman S, Vul E, Tenenbaum J: Perceptual multistability as Markov chain Monte Carlo inference. Adv Neural Inf Process Syst 2009.
6. Welling M: Herding dynamical weights to learn. In Proceedings of the 26th Annual International Conference on Machine Learning. Montreal, Quebec, Canada: ACM; 2009:1121-1128.
7. Breuleux O, Bengio Y, Vincent P: Unlearning for Better Mixing. Universite de Montreal/DIRO; 2010.
8. Kawabata N: Attention and depth perception. Perception 1986, 15:563-572.
General Information
Organisations: Institute for Adaptive and Neural Computation.
Authors: Reichert, David P., Seriès, Peggy & Storkey, Amos.
Publication Date: 2011
Publication Information
Category: Poster
Original Language: English
  Homeostasis causes hallucinations in a hierarchical generative model of the visual cortex: the Charles Bonnet Syndrome
Reichert, DP, Seriès, P & Storkey, A 2011, 'Homeostasis causes hallucinations in a hierarchical generative model of the visual cortex: the Charles Bonnet Syndrome', poster presented at the 20th Annual Computational Neuroscience Meeting: CNS*2011, Stockholm, Sweden, 23/07/11 - 28/07/11.
Hierarchical predictive models of the cortex [1, 2] posit that the prediction of sensory input is a crucial aspect of cortical processing. Evaluating the internally generated predictions against actual input could be a powerful means of learning about causes in the world. During inference itself, rich high-level representations could then be utilized to resolve low-level ambiguities in sensory inputs via feed-back processing. A natural phenomenon to consider in such frameworks is that of hallucinations. In the Charles Bonnet Syndrome (CBS) [3-5], patients suffering from, primarily, eye diseases develop complex visual hallucinations containing vivid and life-like images of objects, animals, people etc. This syndrome is of particular interest as the complex content of the hallucinations rules out explanations based on simple low-level aspects of cortical organization, which are more suited to describe simpler hallucinations such as geometric patterns [6]. Moreover, the primary cause for the syndrome seems to be loss of sensory input in an otherwise healthy brain. Hence, a computational model of CBS needs to be capable of evoking rich internal representations under lack of external input, and elucidate the underlying mechanisms. We explore Deep Boltzmann Machines (DBMs) as models of cortical processing. DBMs are hierarchical, probabilistic neural networks that learn to generate the data they are trained on based on simple Hebbian learning rules. To explain CBS, we propose that homeostatic mechanisms that serve to stabilize neuronal firing rates [7] overcompensate for the loss of sensory input. With a model trained on simple toy images that then had its input removed, we demonstrate that homeostatic adaptation is sufficient to cause spontaneous occurrence of internal representations of the toy objects.
We qualitatively analyze various properties of the model in the light of clinical evidence about CBS, such as an initial latent period before hallucination onset, an occasional localization of imagery to damaged regions of the visual field, and the effects of cortical suppression and lesions. To elucidate the potential role of drowsiness in causing hallucinations, we model acetylcholine as mediating the balance between feed-forward and feed-back processing in the hierarchy. An earlier version of this work was presented to a machine learning audience [8]. Here, we extend it with additional simulations to elaborate on our findings. In particular, we utilize more complex data sets, enforce sparsity to establish a clearer link between loss of input and decrease of cortical activity, and further justify the interpretation of the acetylcholine mechanism from a biological point of view.
References
1. Rao RP, Ballard DH: Predictive coding in the visual cortex: a functional interpretation of some extra-classical receptive-field effects. Nat Neurosci 1999, 2:79-87.
2. Lee TS, Mumford D: Hierarchical Bayesian inference in the visual cortex. J. Opt. Soc. Am. A 2003, 20:1434-1448.
3. Schultz G, Melzack R: The Charles Bonnet Syndrome: 'phantom visual images'. Perception 1991, 20:809-825.
4. Teunisse RJ, Zitman FG, Cruysberg JRM, Hoefnagels WHL, Verbeek ALM: Visual hallucinations in psychologically normal people: Charles Bonnet's Syndrome. The Lancet 1996, 347:794-797.
5. Menon GJ, Rahman I, Menon SJ, Dutton GN: Complex visual hallucinations in the visually impaired: the Charles Bonnet Syndrome. Surv Ophthalmol 2003, 48:58-72.
6. ffytche DH, Howard RJ: The perceptual consequences of visual loss: 'positive' pathologies of vision. Brain 1999, 122:1247-1260.
7. Turrigiano GG: The self-tuning neuron: synaptic scaling of excitatory synapses. Cell 2008, 135:422-435.
8. Reichert DP, Series P, Storkey AJ: Hallucinations in Charles Bonnet Syndrome Induced by Homeostasis: a Deep Boltzmann Machine Model. Adv Neural Inf Process Syst 2010.
General Information
Organisations: Institute for Adaptive and Neural Computation.
Authors: Reichert, David P., Seriès, Peggy & Storkey, Amos.
Publication Date: 2011
Publication Information
Category: Poster
Original Language: English
2010
  Hallucinations in Charles Bonnet Syndrome Induced by Homeostasis: a Deep Boltzmann Machine Model
Reichert, DP, Seriès, P & Storkey, A 2010, 'Hallucinations in Charles Bonnet Syndrome Induced by Homeostasis: a Deep Boltzmann Machine Model', in Advances in Neural Information Processing Systems 23, vol. 23, pp. 2020-2028.
The Charles Bonnet Syndrome (CBS) is characterized by complex vivid visual hallucinations in people with, primarily, eye diseases and no other neurological pathology. We present a Deep Boltzmann Machine model of CBS, exploring two core hypotheses: First, that the visual cortex learns a generative or predictive model of sensory input, thus explaining its capability to generate internal imagery. And second, that homeostatic mechanisms stabilize neuronal activity levels, leading to hallucinations being formed when input is lacking. We reproduce a variety of qualitative findings in CBS. We also introduce a modification to the DBM that allows us to model a possible role of acetylcholine in CBS as mediating the balance of feed-forward and feed-back processing. Our model might provide new insights into CBS and also demonstrates that generative frameworks are promising as hypothetical models of cortical learning and perception.
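The proposed acetylcholine mechanism, mediating the balance of feed-forward and feed-back processing, can be sketched as a simple convex blend (a hypothetical reading, not the paper's actual modification to the DBM; the function and parameter names here are invented for illustration): high "acetylcholine" weights sensory evidence, low values let top-down predictions dominate.

```python
import numpy as np

def drive(bottom_up, top_down, ach):
    """Blend bottom-up evidence and top-down prediction.
    ach in [0, 1]: high values emphasise sensory input,
    low values (e.g. drowsiness) emphasise internal predictions."""
    return ach * bottom_up + (1.0 - ach) * top_down

bottom_up = np.zeros(4)                       # sensory input lost
top_down = np.array([0.9, 0.1, 0.8, 0.2])     # internally generated prediction

print(drive(bottom_up, top_down, ach=0.9))    # near-silent: input dominates
print(drive(bottom_up, top_down, ach=0.1))    # prediction-driven "imagery"
```

Under this reading, lowered acetylcholine combined with absent input lets feedback activity stand in for sensation, consistent with the abstract's account of hallucination formation.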
General Information
Organisations: Institute for Adaptive and Neural Computation.
Authors: Reichert, David P., Seriès, Peggy & Storkey, Amos.
Number of pages: 9
Pages: 2020-2028
Publication Date: 2010
Publication Information
Category: Conference contribution
Original Language: English

Projects:
Hierarchical Probabilistic Inference and Dynamical Prediction with Neuronal Hardware (PhD)