
Past Events (2000 to 2008)

2008

28/10/08  11:00 - 12:00

Cumulative Distribution Networks and the Derivative-sum-product Algorithm

Brendan Frey
Electrical and Computer Engineering
University of Toronto 

I'll describe a new type of graphical model called a cumulative distribution network (CDN), which expresses a joint cumulative distribution as a product of local functions. Each local function can be viewed as providing evidence about possible orderings, or rankings, of variables. Interestingly, the conditional independence properties of CDNs are quite different from those of other graphical models. I'll also describe a message-passing algorithm that efficiently computes conditional cumulative distributions. Due to the unique independence properties of the CDN, these messages do not in general have a one-to-one correspondence with messages exchanged in standard algorithms, such as belief propagation. I'll review results obtained by Jim Huang, demonstrating the application of CDNs to structured ranking learning using a previously studied multi-player gaming dataset. 
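To make the factorisation concrete, here is a toy sketch (our own illustrative example with Gaussian local functions on a three-variable chain, not Frey's construction); the density is the mixed partial derivative of the joint CDF, which the derivative-sum-product algorithm computes by message passing rather than by the finite difference used below:

```python
import numpy as np
from itertools import product
from scipy.stats import norm

def local_cdf(xa, xb):
    # Simplest admissible local function: a product of Gaussian CDFs.
    # Any function that is itself a CDF over its scope would do.
    return norm.cdf(xa) * norm.cdf(xb)

def joint_cdf(x1, x2, x3):
    # CDN on the chain x1 - x2 - x3: F(x) = product of local CDFs.
    return local_cdf(x1, x2) * local_cdf(x2, x3)

def joint_density(x1, x2, x3, h=1e-2):
    # Mixed partial derivative of F with respect to all three variables,
    # approximated by a finite difference over the corners of a cube.
    total = 0.0
    for s in product([0, 1], repeat=3):
        sign = (-1) ** (3 - sum(s))
        total += sign * joint_cdf(x1 + s[0] * h, x2 + s[1] * h, x3 + s[2] * h)
    return total / h ** 3
```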

A buffet lunch will be provided in G02 (Cafe) on the ground floor. 

 

07/10/08  11:00 - 12:00

Sensory coding by spatial patterns of cerebellar Purkinje cell complex spikes

Simon Schultz
Computational Neuroscience
Imperial College London 

Climbing fiber input produces complex spike synchrony across populations of cerebellar Purkinje cells oriented in the parasagittal axis. What is the fine spatial structure of this synchrony and its role in the encoding and processing of sensory information within the olivocerebellar cortical circuit? We investigated these questions using in vivo two-photon calcium imaging in combination with information theoretic analysis. Spontaneous dendritic calcium transients linked to climbing fiber input were observed in multiple neighboring Purkinje cells. Spontaneous synchrony of calcium transients between individual Purkinje cells falls off over approximately two hundred microns mediolaterally, consistent with the presence of cerebellar microzones organized by climbing fiber input. Synchrony was increased following administration of harmaline, consistent with an olivary source. Periodic sensory stimulation also resulted in a transient increase of synchrony following stimulus onset. To examine how synchrony affects the neural population code provided by the spatial pattern of complex spikes, we analyzed its information content. We found that spatial patterns of calcium events from ensembles of seven cells provided on average 59% more information about the stimulus than available by counting the number of events across the pool without taking into account their spatial origin. The olivocerebellar feedback circuit thus may act to report sensory errors by generating a population code across a local pool of climbing fibers within a cerebellar microzone. 
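The spatial-pattern versus pooled-count comparison can be made concrete with a small sketch (a plug-in estimator on a known joint distribution; real spike-train analyses require bias correction, and all names here are illustrative):

```python
import numpy as np

def mutual_information(joint):
    # joint[s, r] = P(stimulus = s, response = r); entries sum to 1.
    ps = joint.sum(axis=1, keepdims=True)
    pr = joint.sum(axis=0, keepdims=True)
    nz = joint > 0
    return float((joint[nz] * np.log2(joint[nz] / (ps * pr)[nz])).sum())

def pool_over_space(joint_pattern, patterns):
    # Collapse each binary spatial pattern to its event count,
    # discarding which cells produced the events.
    counts = patterns.sum(axis=1)
    pooled = np.zeros((joint_pattern.shape[0], counts.max() + 1))
    for r, c in enumerate(counts):
        pooled[:, c] += joint_pattern[:, r]
    return pooled

# By the data-processing inequality the pooled code can never carry more
# information than the spatial pattern; the talk reports ~59% more in the
# pattern for seven-cell ensembles.
```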

A buffet lunch will be provided in G02 and G03 on the ground floor. 

 

23/09/08  11:00 - 12:00

Rapid Neural Coding of Visual Information in the Retina

Tim Gollisch
Visual Coding
Max Planck Institute of Neurobiology 

The neural processing and computations that underlie our visual perception begin in the retina, a neural network at the back of the eyeball. Here, all visual information available to the central brain is transduced into electrical potentials, pre-processed, and encoded into patterns of membrane-potential spikes. These spike patterns must provide the relevant information rapidly when a new image suddenly comes into view, for example, after a visual saccade. In this talk, I will discuss recent experimental findings from the salamander retina where particular neuron types have been identified that rapidly transmit information about the spatial image structure in the relative timing of their spikes. The characteristics of this spike-timing code help us gain insight into the underlying neuronal circuitry and lead us towards a refined model of how these neurons integrate spatiotemporal visual stimuli. 

  

A buffet lunch will be provided in G02 and G03 on the ground floor.

 

16/07/08  11:00 - 12:00

New Applications of PAC-Bayes Analysis

John Shawe-Taylor
Centre for Computational Statistics and Machine Learning 
UCL 

PAC-Bayes techniques provide bounds on the generalisation error of learning systems that are inspired by Bayesian analysis. We will review earlier work and go on to describe extensions of the technology that enable two new applications. The first is to maximum entropy classification, a thresholded linear classifier that regularises by maximising the entropy of the weights. The second application is to fitting non-linear stochastic differential equation models to observations. The analysis is inspired by a variational Bayesian approximate inference algorithm that models the posterior distribution by a time-varying linear stochastic differential equation. The approach provides a lower bound on the expected value of the fit of new data to the posterior marginal distribution. 
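For context, the classical PAC-Bayes theorem that such extensions build on can be stated as follows (the Langford-Seeger form; a standard statement, not the talk's new bounds):

```latex
% For a prior P chosen before seeing the data, with probability at least
% 1 - \delta over an i.i.d. sample S of size m, simultaneously for every
% posterior Q over classifiers:
\mathrm{kl}\left(\widehat{\mathrm{err}}_S(Q)\,\big\|\,\mathrm{err}(Q)\right)
\;\le\; \frac{\mathrm{KL}(Q\,\|\,P) + \ln\frac{2\sqrt{m}}{\delta}}{m},
\qquad
\mathrm{kl}(q\,\|\,p) = q\ln\frac{q}{p} + (1-q)\ln\frac{1-q}{1-p}.
```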

A buffet lunch will be provided in room 4.40 on level 4. 

 

17/06/08  11:00 - 13:00

The richness of the stimulus: Computational analyses of language acquisition

Padraic Monaghan
Department of Psychology 
University of York 

The nativist view claims that the environment contains insufficient information to constrain the language that children nevertheless learn accurately and without apparent effort. However, taking a multimodal view of information sources for constraining language indicates numerous ways in which the language environment is in fact a rich source of constraints to assist in speech segmentation, grammatical categorisation, and syntactic rule learning. The usefulness and potential use of these cues is revealed through combined corpus analysis, computational modelling, and behavioural studies.  

 

27/05/08  11:00 - 12:00

Making the Sky Searchable: Large Scale Astronomical Pattern Recognition

Sam Roweis 
Department of Computer Science 
University of Toronto 

Imagine you have an uncalibrated picture of the night sky and you want to know where the telescope was pointing when the picture was taken. Since several digital catalogues are available, containing (among other data) positions and magnitudes of billions of stars, you should, in principle, be able to find source locations by analyzing the pixels of your image and then exhaustively search the catalogues to find where that pattern of sources occurs. The only catch is that the sky is pretty big and that both images and catalogues are pretty noisy. Nonetheless, by using efficient geometric hashing techniques, our group has built a universal astrometric calibration robot which, roughly speaking, takes as input a picture of the night sky and returns as output the location on the sky at which the picture was taken. This is the first step in a more ambitious effort to learn a probabilistic model which accounts for *every* image of the night sky ever taken (including all professional, amateur and historical pixels) by modeling not only astrometry but also bandpass, time, and instrument properties. 
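To give the flavour of the geometric hashing involved, here is a minimal sketch of a translation-, rotation- and scale-invariant code for a four-star asterism (our own illustrative conventions, not the actual system's):

```python
import numpy as np

def quad_hash(stars):
    stars = np.asarray(stars, dtype=float)        # (4, 2) pixel positions
    # Use the two most widely separated stars, A and B, as the frame.
    d = np.linalg.norm(stars[:, None] - stars[None, :], axis=-1)
    a, b = np.unravel_index(np.argmax(d), d.shape)
    rest = [i for i in range(4) if i not in (a, b)]
    za, zb = complex(*stars[a]), complex(*stars[b])
    def to_frame(p):
        # Similarity transform sending A to 0 and B to 1 (complex plane).
        z = (complex(*p) - za) / (zb - za)
        return z.real, z.imag
    (cx, cy), (dx, dy) = to_frame(stars[rest[0]]), to_frame(stars[rest[1]])
    # A real system would also canonicalise the A/B and C/D orderings.
    return (cx, cy, dx, dy)  # 4-dimensional invariant code for indexing
```

The resulting codes can be stored in a kd-tree built from the catalogue, so that a hash computed from an image quad retrieves candidate sky locations for verification.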

 

11/03/08  11:00 - 12:00

Love at First Light: Neural Circuits Controlling Sexual Behaviour

Gero Miesenböck 
Waynflete Professor of Physiology
Department of Physiology, Anatomy and Genetics
Magdalen College
University of Oxford

Transcription factors encoded by the fruitless (fru) gene are key determinants of sexual behaviour in Drosophila. They are expressed in a minority of central neurons with limited dimorphisms and regulate neural processes that remain largely unknown. Here, we use light-activated ion channels to stimulate fru neurons in the thoracic-abdominal ganglia, enabling direct functional comparisons of homologous circuitry between sexes. Optical stimulation of males or females initiates the unilateral wing vibrations that normally generate the male courtship song. The pattern-generating circuit operates differently in the two sexes, producing wing movement and sound in both but authentic songs only in males and in females expressing male fru product. A song-like motor program is thus present in females but lies dormant because the neural commands required for song initiation are absent. Supplying such commands artificially reveals fru-specific differences in the internal dynamics of the song generator and sets the stage for exploring their physiological basis. 

 

19/02/08  11:00 - 12:00

Functional Sparsity

John Lafferty 
Professor of Computer Science and Machine Learning
Computer Science Department and Machine Learning Department
Carnegie Mellon University

Substantial progress has recently been made on understanding the behavior of sparse linear models in the high dimensional setting, where the number of variables can greatly exceed the number of samples. This problem has attracted the interest of multiple communities, including applied mathematics, signal processing, statistics, and machine learning. But linear models often rely on unrealistically strong assumptions, made more by convenience than conviction. Can we understand the properties of high dimensional functions that enable them to be estimated accurately from sparse data? In this talk we present some progress on this problem, showing that many of the recent results for sparse linear models can be extended to the infinite dimensional setting of nonparametric function estimation. In particular, we present some theory for estimating sparse additive models, together with algorithms that are scalable to high dimensions. We illustrate these ideas with an application to functional sparse coding of natural images. This is joint work with Han Liu, Pradeep Ravikumar, and Larry Wasserman.
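A minimal sketch of backfitting for a sparse additive model, assuming a Nadaraya-Watson smoother and an illustrative penalty; this follows the general recipe (smooth the partial residual, then soft-threshold whole component functions) rather than the authors' exact algorithm:

```python
import numpy as np

def kernel_smoother(x, r, bandwidth=0.3):
    # Nadaraya-Watson smoother of residual r against covariate x.
    w = np.exp(-0.5 * ((x[:, None] - x[None, :]) / bandwidth) ** 2)
    return (w @ r) / w.sum(axis=1)

def sparse_additive_backfit(X, y, lam, n_iter=50):
    n, p = X.shape
    F = np.zeros((n, p))                    # fitted components f_j(x_ij)
    for _ in range(n_iter):
        for j in range(p):
            resid = y - F.sum(axis=1) + F[:, j]   # partial residual
            pj = kernel_smoother(X[:, j], resid)
            pj -= pj.mean()                        # identifiability
            norm = np.sqrt(np.mean(pj ** 2))
            shrink = max(0.0, 1.0 - lam / norm) if norm > 0 else 0.0
            F[:, j] = shrink * pj     # entire component shrunk to zero
    return F
```

Soft-thresholding the empirical norm of each component function, rather than individual coefficients, is what makes whole functions drop out of the model, giving functional sparsity.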

 

12/02/08  11:00 - 12:00

Evidence for smooth (and inertial) dynamics in the evolution of neural state

Maneesh Sahani 
Gatsby Computational Neuroscience Unit
University College London

An often-emphasised property of networks of spiking neurons is their ability to update state rapidly, essentially switching their pattern of firing almost arbitrarily within a single membrane or synaptic time-constant. By contrast, many more abstract network models are based on a smoothly updated firing rate or firing probability. To get from state A to B, such networks must follow a smooth trajectory through intervening states, taking measurable time to do so. I will show experimental evidence that in at least one real neural system (in the motor areas of primate cortex), when running without external driving, the dynamics do indeed appear to follow smooth trajectories in this sense. The trajectories also show tantalising evidence of inertia in the neural state. We conclude that, even though the underlying substrate is clearly spiking, the behaviour of real networks may, at least in some cases, be well-described by smooth dynamical models.


Joint work with Afsheen Afshar, Byron Yu and Krishna Shenoy.


2007

27th November

Chris Holmes

Professor of Biostatistics 
Oxford Centre for Gene Function 
University of Oxford
Bayesian signal processing techniques for inferring regions of copy number variation (CNVs) in the human genome

In recent work we have developed Bayesian Hidden Markov models (HMMs) to infer regions of CNV in the human genome using single nucleotide polymorphism (SNP) data. A CNV is defined as a segment of DNA >1 kb that is present at a variable copy number in comparison to a reference genome. It is believed that up to 10% of the human genome may be copy number variable. The data are typically of order 500k measurements spread over a genome, and study sizes can vary from tens to thousands of samples. In certain scenarios, as for cancer genomes, the data often arise as a cryptic mixture of heterogeneous cell types, and hence a deconvolution stage is necessary to try to infer the hidden mixtures of HMMs. We will illustrate our methods using a number of studies that we're involved in, covering population genetics, cancer genomics, and disease association.


Host: Chris Williams 

3pm, Thursday 22nd November

Tom Mrsic-Flogel 
Lecturer 
Department of Physiology 
UCL
Imaging functional organization and plasticity in primary visual cortex

The vast majority of our knowledge about how the cerebral cortex represents information has been obtained from recordings of one or few neurons at a time or from global mapping methods such as fMRI. These approaches have left unexplored how neuronal activity is distributed in space and time within a cortical column and how hundreds of neurons interact to process sensory information. By taking advantage of recent advances in two-photon laser scanning microscopy, my research aims to understand development, plasticity and function of neuronal circuits in primary visual cortex. We use in vivo two-photon calcium imaging to record activity simultaneously from hundreds of neurons in visual cortex while showing different visual stimuli. This approach enables us to characterize in detail how individual neurons and neuronal subsets interact within a large cortical network in response to different visual features. The same approach is used to describe the maturation of cortical network function after the onset of vision and to assess the role of visual experience in this process.


Host: Jim Bednar

26th October

Karl Friston 
Scientific Director of The Wellcome Trust Centre for Neuroimaging, 
UCL 
Free energy and the brain

The Departments of Psychology and Neurosciences (Centre for Cognitive and Neural Systems, Human Cognitive Neuroscience group) and the School of Informatics (Institute for Adaptive and Neural Computation) are pleased to announce a special joint guest seminar in October to be given by Professor Karl Friston, the Scientific Director of The Wellcome Trust Centre for Neuroimaging at UCL.

Karl will present a theoretical approach, utilising constructs from machine learning and statistical physics, to help understand a range of neurobiological phenomena. It is relevant to those interested in computational modelling, and in the analysis of behavioural and neuroimaging (EEG/MEG and fMRI) data (abstract below).

By formulating the original ideas of Helmholtz on perception in terms of modern-day theories, one arrives at a model of perceptual inference and learning that can explain a remarkable range of neurobiological facts. Using constructs from statistical physics, machine learning and probability theory, the problems of inferring the causes of sensory input and learning the causal structure of their generation can be resolved using exactly the same principles. Furthermore, inference and learning can proceed in a biologically plausible fashion. The ensuing scheme rests on Empirical Bayes and hierarchical models of how sensory input is caused. The use of hierarchical models enables the brain to construct prior expectations in a dynamic and context-sensitive fashion. This scheme provides a principled way to understand many aspects of cortical organization and responses.

In terms of cortical architectures, it predicts that sensory cortices should be arranged hierarchically, that connections should be reciprocal, and that forward and backward connections should show a functional asymmetry (backward connections are both modulatory and driving, whereas forward connections need only be driving). In terms of synaptic physiology it predicts associative plasticity and, for dynamic models, spike timing-dependent plasticity. In terms of electrophysiology it accounts for classical and extra-classical receptive field effects and long-latency or endogenous components of evoked cortical responses. It predicts the attenuation of responses encoding prediction error with perceptual learning and explains many phenomena like repetition suppression, mismatch negativity (MMN) and the P300 in electroencephalography. In psychophysical terms, it accounts for the behavioral correlates of these physiological phenomena, e.g. priming, and global precedence. The final focus of this talk is on perceptual learning as measured with repetition suppression and the implications for empirical studies of coupling among cortical areas.

23rd October

Barry Horwitz 
Chief, Brain Imaging and Modeling Section
Voice, Speech and Language Branch
National Institute on Deafness and other Communication Disorders 
National Institutes of Health

Network Analyses of Auditory and Language Processing

We will discuss two types of neural modeling that, used in conjunction with functional brain imaging data (fMRI, MEG), can help elucidate the neural bases of human cognitive function, including language processing. One type of modeling attempts to determine the brain network interactions that mediate specific cognitive processes. The second type simulates different types of neural data at multiple spatiotemporal scales. To illustrate the latter, we will discuss a large-scale, neurobiologically realistic network model of auditory and visual pattern recognition that relates neuronal dynamics to fMRI and MEG data. Areas included in the model extend from primary sensory cortex to prefrontal cortex. The electrical activities of the model neuronal units were constrained to agree with data from the neurophysiological literature. An fMRI experiment using stimuli and tasks similar to those used in our simulations was performed. The regional integrated synaptic activities of the model were used to determine simulated regional fMRI activities, which generally agreed with the experimentally observed fMRI data. Preliminary results also suggest that our model can be used to simulate MEG data. Our results demonstrate that the model is capable of exhibiting the salient features of both electrophysiological neuronal activities and fMRI and MEG values that are in agreement with empirically observed data.

9th October

Subramanian Ramamoorthy
Institute of Perception, Action and Behaviour, University of Edinburgh

Achieving Robust Autonomy with Dynamically Dexterous Tasks 

Cheetahs and beetles run, dolphins and salmon swim, and bees and birds fly with grace and economy surpassing our technology - with seemingly limited cognitive effort. One of the major goals of robotics is to replicate, in machines, the combination of robustness, dynamical dexterity and efficiency that is common in the biological world.

My approach to solving this problem of robust autonomy is a factored one, involving two major components: (a) task encoding in the language of dynamical systems theory and (b) techniques for learning, search and optimization to enable adaptation to the environment.

The purpose of this talk is threefold - to introduce the design methodology, to illustrate its use through a case study and finally to present some of the major open questions that constitute my current research agenda.

I will begin with an introduction to the problem of motion generation in an imprecisely known environment and a brief discussion of the limitations of many traditional techniques from control theory and machine learning with respect to this problem. Then, I will discuss the specific problem of bipedal walking on irregular terrain. I will show that through a combination of a biologically-inspired dynamical encoding and a manifold approximation algorithm, it becomes possible to achieve an exceptional level of task flexibility and energy efficiency. Finally, I will outline the major directions for future development - use of the geometrical language of dynamical systems theory in more sophisticated problem spaces (task encoding), identification or extraction of suitable abstractions from data (unsupervised learning) and efficient approximation algorithms that leverage the geometrical representations (learning from partial knowledge).


BIO:

Subramanian Ramamoorthy is currently a Lecturer, affiliated with the Institute for Perception, Action and Behaviour in the School of Informatics at The University of Edinburgh. He holds a PhD from The University of Texas at Austin, an ME from the University of Virginia and a BE from Bangalore University.

His research interests include algorithmic and mathematical techniques in machine learning, control and dynamical systems theory, geometry and topology and their use to solve problems in robotics, autonomous agent design and biology.

In addition to his academic experience, he has several years of industrial experience with National Instruments Corp. where he worked in the areas of motion control, dynamic simulation and computer vision.

25th September

Tim Cootes 
Professor of Computer Vision 
Division of Imaging Science and Biomedical Engineering 
University of Manchester

Automatic Construction of Statistical Shape Models using Non-Rigid Registration

Statistical models of shape and appearance have been shown to be powerful tools for image interpretation, as they can explicitly deal with the natural variation in structures of interest. Such models can be built from suitably labelled training sets. Given a model of appearance, we can match it to a new image using efficient optimisation algorithms that seek to minimise the difference between a synthesized model image and the target image.

To construct such models we require a set of points defining a dense correspondence between every image and every other. This can be very time-consuming to achieve manually, and is potentially error prone. There is considerable demand for algorithms to automatically construct such models from training data, with minimal human intervention. Building on work on registering 2D boundaries and 3D surfaces, we have developed methods for registering unlabelled images so as to construct compact models.

This talk will describe the approach, and demonstrate its application to both face images and 3D medical data.

10th July

Johnathan Pillow 
Postdoctoral Research Fellow
Gatsby Computational Neuroscience Unit
University College London
A model-based approach to correlations and multi-neuronal spike coding

A central problem in systems neuroscience is to understand how ensembles of neurons convey information in their collective spiking activity. Correlations, or statistical dependencies between neural responses, are of critical importance to understanding the neural code, as they affect both the amount of information carried by population responses and the manner in which downstream brain areas are able to decode it. We show that multi-neuronal correlations can be understood using a simple, highly tractable computational model. The model captures both the stimulus dependence and detailed spatio-temporal correlation structure in the light responses of a complete population of parasol retinal ganglion cells (27 cells), making it possible to assess how correlations affect the encoding of stimulus-related information. We find that correlations strongly influence the precise timing of spike trains, explaining a large fraction of trial-to-trial response variability in individual neurons that would otherwise be attributed to intrinsic noise. We can assess the importance of correlations by performing Bayesian decoding of multi-neuronal spike trains; we find that exploiting the full correlation structure of the population response preserves 20% more stimulus-related information than decoding under the assumption of independent encoding. These results provide a framework for understanding the role that correlated activity plays in encoding and decoding sensory signals, and should be applicable to the study of population coding in a wide variety of neural circuits.

Host: Peggy Series

19th June

Peter Hancock 
Department of Psychology 
University of Stirling
Probing schizophrenia with a contour integration task

We present results on the task of finding a contour in a field of Gabor patches, where the position of the contour may be signalled by timing differences. The patches making up the contour may appear 0, 20, 40 or 100ms before or after other distracter patches. When the separation between the patches in the contour is sufficient, performance on a 2AFC task is at chance at synchrony, rising to near perfect at 40ms, with symmetrical performance either side of zero. Contours where the elements are aligned are no more detectable than those that are misaligned, implying no synergy between the timing signal and contour detection. However, it is possible to distinguish an aligned contour from an unaligned one given an asynchrony of about 40ms from the distracter patches. Tests on schizophrenic patients using the basic detection task indicate a considerably increased threshold, in excess of 100ms for some individuals. Amongst the student population tested, poor performance on the task was related to high scores on schizotypy and autistic quotient scales.

Host: Richard Shillcock

12th June

Daniel Huttenlocher 

Cornell University 
Computer Science Department

Learning and Recognizing Visual Object Categories Without Detecting Features

Over the past few years there has been substantial progress in the development of systems that can recognize generic categories of objects in images, such as automobiles, bicycles, airplanes, and human faces. Much of this progress can be traced to two underlying technical advances: (i) detectors for locally invariant features of an image, and (ii) the application of techniques from machine learning. Despite recent successes, however, there are some fundamental concerns about methods that rely heavily on feature detection, as local image evidence is often highly ambiguous due to the absence of contextual information.

We are taking a different approach to learning and recognizing visual object categories, in which there is no separate feature detection stage. In our approach, objects are modeled as local image patches with spring-like connections that constrain the spatial relations between patches. Such models are intuitively natural, and their use dates back over 30 years. Until recently such models were largely abandoned due to computational challenges that are addressed by our work. Our approach can be used to learn models from weakly labeled training data, without any specification of the location of objects or their parts. The recognition accuracy for such models is better than when using feature-based techniques with similar forms of spatial constraint.

Dan Huttenlocher is the John P. and Rilla Neafsey Professor of Computing, Information Science and Business at Cornell University, where he holds a joint appointment in the Computer Science Department and the Johnson Graduate School of Management. His research interests are in computer vision, electronic collaboration tools, social and information networks, computational geometry and financial trading systems. In addition to academic posts he has been chief technical officer of Intelligent Markets, a provider of advanced trading systems on Wall Street, and spent more than ten years at Xerox PARC directing work that led to the ISO JBIG2 image-compression standard.

Host: Chris Williams 

5th June

Boris Gutkin 
Recepteurs et Cognition
Departement de Neuroscience
Institut Pasteur

A neurodynamical framework for nicotine addiction. 

We present a neuro-computational model that combines a set of neural circuits at the molecular, cellular and system levels and accounts for several neurobiological and behavioural processes leading to nicotine addiction. We propose that the combination of changes in the nicotinic receptor response expressed by mesolimbic dopaminergic neurons, coupled with dopamine-gated learning in action-selection circuits, suffices to capture the main features of the acquisition of nicotine addiction. Then, we show that an opponent process enhanced by persistent nicotine taking renders self-administration rigid and habitual by inhibiting the learning process which, in nicotine-dependent animals, results in long-term learning impairment in the absence of drug. A further implication of our model is that distinct thresholds on the dosage and duration of nicotine taking exist for the acquisition and persistence of addiction. Our model unites a number of prevalent ideas on nicotine action into a coherent formal network for further understanding of compulsive drug addiction.

Host: Peggy Series

15th May

Tom Kirkwood 
School of Clinical Medical Sciences Gerontology
Henry Wellcome Laboratory for Biogerontology Research
Newcastle General Hospital

Probing the complex causes of ageing

Host: Douglas Armstrong

30th January


Stephen McKenna 
School of Computing
University of Dundee

Cells, Bones and Bodies: Inferring Deformable Part-based Structures from Images

Objects can often be represented by decomposition into their constituent parts in a quite natural way, and it is often descriptions in terms of such parts that we would like to recover from their images. Models of the 2D shape and texture of each part, along with suitable priors on pose and motion, can enable inference of such descriptions from images. In general, such models must account for non-rigid part deformation, changes in texture, and occlusion/overlap by, or at least proximity to, other parts of the object or other objects. Probabilistic methods for part-based modelling and inference will be described with reference to several applications, namely, inferring human body pose from photographs of poorly constrained scenes, inferring femoral and tibial bone contours from clinical x-ray images, and inferring cell membrane structure from confocal scanning laser microscope images of growing plant roots.

Host: Amos Storkey 

6th February


Yee Whye Teh
Gatsby Computational Neuroscience Unit
University College London 

Stick-breaking Construction for the Indian Buffet Process

The Indian buffet process (IBP) is a recently proposed latent feature model where each object is modelled using a potentially unbounded number of binary latent features. It has had a variety of applications, including matrix factorization, causal inference, and psychological choice modelling. However, due to the unbounded nature of the model, standard Markov chain Monte Carlo inference techniques like Gibbs sampling are cumbersome and inefficient for IBPs. In this talk, I will reformulate the IBP model using a stick-breaking construction, and show that this leads to straightforward and efficient MCMC inference for the IBP. Furthermore, we will see that there are interesting and strong connections between the stick-breaking construction for the IBP and the standard stick-breaking construction for the more well-known Dirichlet process.
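A minimal sketch of the construction (truncated for illustration; parameter names are ours): feature probabilities are obtained by breaking a unit stick, mu_k = nu_1 * ... * nu_k with nu_i ~ Beta(alpha, 1), so they decrease stochastically, and each object then possesses feature k independently with probability mu_k.

```python
import numpy as np

def ibp_stick_breaking(n_objects, alpha, truncation=50, rng=None):
    rng = np.random.default_rng(rng)
    nu = rng.beta(alpha, 1.0, size=truncation)
    mu = np.cumprod(nu)               # decreasing feature probabilities
    # Binary feature matrix: object i has feature k w.p. mu[k].
    Z = rng.random((n_objects, truncation)) < mu
    return Z.astype(int), mu
```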

Host: Chris Williams 

13th February


Dr. Wei Wang
Faculty of Life Sciences
University of Manchester

Corticothalamic feedback effects on early visual information processing

Following from the classical work of Hubel and Wiesel, it has been recognized that the orientation and the on- and off-zones of receptive fields of layer 4 simple cells in the visual cortex are linked to the spatial alignment and properties of the cells in the visual thalamus that relay the retinal input. Here we present evidence showing that the orientation and the on- and off-zones of receptive fields of layer 6 simple cells in cat visual cortex that provide feedback to the thalamus are similarly linked to the alignment and properties of the receptive fields of the thalamic cells they contact. However, the pattern of influence linked to on- and off-zones is phase-reversed. Furthermore, enhanced visually driven feedback restructured the spatial profiles of thalamic cell receptive fields and shifted their spatial focus. These data suggest that relatively small changes in the gain of the feedback system, commensurate with those that might be expected to occur naturally, can exert profound effects on thalamic cell response properties.

Host: Jim Bednar

13th March


V. Anne Smith
RCUK fellow
School of Biology
University of St. Andrews

A Bayesian network method to infer neural information flow in the songbird brain

Attempts to reveal neural information flow along anatomical brain pathways have been made using linear computational methods, but neural interactions are known to be nonlinear. In this talk, I will present work we have done showing that a dynamic Bayesian network (DBN) inference algorithm is successful at inferring nonlinear neural information flow networks from electrophysiology data collected from the auditory regions of awake female zebra finches during the presentation of sound stimuli. The inferred networks are correctly restricted to a subset of known anatomical paths. This method presents the potential to investigate differential neural information flow during different behaviour and perception events.

Host: Irina Erchova 

20th March


Eric Mjolsness 
Associate Professor
Department of Information and Computer Science
University of California, Irvine

An algebra of stochastic processes as the formal semantics for a biological modeling language

One route to understanding complex biological systems is through computational models of them. This route often requires modeling a variety of different processes, occurring at widely varying scales, that interact with one another. The burden can be lightened by using a modeling framework which assigns definite mathematical meaning to models of elementary sub-processes, in such a way that the models can be composed flexibly to build up the full dynamics of a complex system. Stochastic processes that generalize the idea of chemical reactions can serve as the required "mathematical meaning" or semantics in such a framework. Elementary processes can be specified using a notation similar to reactions or to production rules in a grammar, so that a collection of rules is mapped to the composite stochastic process given by summing up a corresponding collection of very high-dimensional time-evolution operators for stochastic processes. Differential equations can be included naturally in the framework. Using operator algebra it is then possible to derive simulation algorithms, potentially in great variety for any given model. A modeling language using this semantics will be described, along with applications to the developmental biology of plant stem cell niches.
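To make the idea concrete, here is a minimal Python sketch in which each rule contributes a propensity and a state update, and the composite process is simulated with Gillespie's algorithm (the generic simulator such a semantics licenses, not the talk's operator-algebra machinery; names and the example rule are illustrative):

```python
import numpy as np

def gillespie(state, rules, t_max, rng=None):
    # rules: list of (propensity(state) -> float, apply(state) -> new state)
    rng = np.random.default_rng(rng)
    t, trajectory = 0.0, [(0.0, dict(state))]
    while t < t_max:
        props = np.array([prop(state) for prop, _ in rules])
        total = props.sum()
        if total <= 0:                      # no rule can fire
            break
        t += rng.exponential(1.0 / total)   # waiting time to next event
        i = rng.choice(len(rules), p=props / total)
        state = rules[i][1](state)
        trajectory.append((t, dict(state)))
    return trajectory

# Toy "reaction grammar" with one rule, A -> B at rate 0.5 per A:
rules = [(lambda s: 0.5 * s["A"],
          lambda s: {"A": s["A"] - 1, "B": s["B"] + 1})]
traj = gillespie({"A": 100, "B": 0}, rules, t_max=10.0)
```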

27th March


Klaus Obermayer
Institute for Software Engineering
and Theoretical Computer Science
Faculty of Electrical Engineering and Computer Science
Berlin University of Technology

Correlation- vs. spike timing-dependent plasticity: the "didactic" reorganization of cortical receptive fields


The receptive fields of neurons in primary visual cortex that are eclipsed by inactivation of a circumscribed region of the retina are known to shift to intact retina, apparently due to plastic changes at intracortical connections. When the inactivation occurs in adolescent animals the receptive fields, however, rarely shift towards the closest available region of intact retina. In addition, shift directions are often correlated among neurons recorded in the same animal, even when these neurons are separated by considerable cortical distances. Such directions are contrary to the expectation that a deafferented cortical neuron would potentiate the input connections received from its closest non-deafferented neighbours, because of their superior efficacy. Here I will show - using a computational model of primary visual cortex - that the observed convergent shifts are inconsistent with the common assumption that the underlying plasticity of intracortical connections is dependent on the temporal correlation of pre- and post-synaptic action potentials (correlation dependent plasticity). They are, however, consistent with the hypothesis that this plasticity is dependent upon the temporal order of pre- and post-synaptic action potentials (spike timing-dependent plasticity). Our results show that spike timing-dependent plasticity causes receptive fields to converge by creating competition between neurons for the control of spike timing within the network. The spatial scale of this competition appears to depend on the balance of excitation and inhibition and thus could be controlled by, for example, the mechanism of synaptic scaling. This reveals a novel way by which the capacity of spike timing-dependent plasticity to transfer response properties between neurons can be effectively switched on and off.
joint work with:
J. Young (model)
W. Waleszczyk, C. Wang, M. Calford, B. Dreher (data)
Host: Mark van Rossum

Autumn 2006

3rd October

John Henderson

Department of Psychology
University of Edinburgh

What makes real-world scenes special? Evidence from fMRI

Real-world scenes differ in important ways from other types of meaningful visual stimuli such as objects, faces, and text. Are there scene-specific or scene-preferential cortical areas (domain-specific modules) that support high-level scene analysis? If so, what specific scene properties do these areas compute or represent? I'll talk about an ongoing program of research and recent results from fMRI suggesting that sub-regions of cortex can be identified that preferentially activate to scenes compared to objects and faces. Current evidence is most consistent with the hypothesis that these areas are tuned to the 3D structure of local space.


17th October

Gareth Leng

Professor of Experimental Physiology 
University of Edinburgh

Information Processing in the hypothalamus: peptides and analogue computation

Peptides in the hypothalamus are not like conventional neurotransmitters; their release is not particularly associated with synapses, and their long half-lives mean that they can diffuse to distant targets. Peptides can act on their cells of origin to facilitate the development of patterned electrical activity, they can act on their neighbours to bind the collective activity of a neural population into a coherent signalling entity, and the co-ordinated population output can transmit waves of peptide secretion that act as a patterned hormonal analogue signal within the brain. At their distant targets, peptides can re-programme neural networks, by effects on gene expression, synaptogenesis, and by functionally rewiring connections by priming activity-dependent release.

In this talk, I will show a neural network model of the milk-ejection reflex.

Leng G, Ludwig M. Jacques Benoit Lecture. Information processing in the hypothalamus: peptides and analogue computation. J Neuroendocrinol. 2006 Jun;18(6):379-92.
Ludwig M, Leng G. Dendritic peptide release and peptide-dependent behaviours. Nat Rev Neurosci. 2006 Feb;7(2):126-36.

Spring 2006

7th February

Peter Kind

Biomedical Sciences
Hugh Robson Building
University of Edinburgh

Role of glutamate receptor signaling in pattern formation in the mouse somatosensory system.

A fundamental concept of the structure and function of the cerebral cortex is the localization of sensory and motor systems into distinct topographically arranged domains. Somatosensory information from the periphery is segregated into specific domains in the neocortex that form topological maps. An exceptional example of these maps is the primary somatosensory (SI) cortex of rodents, where periphery-related patterns are present in layer IV (Woolsey and Van der Loos, 1970). Presynaptic serotonin and postsynaptic glutamate signaling are known to regulate barrel formation; however, little is known of the relevant intracellular signaling pathways downstream of these neurotransmitter receptors (Erzurumlu and Kind, 2002; Gaspar et al., 2003). Both NMDA and metabotropic glutamate receptors have been shown to regulate barrel development. Our laboratory has been identifying the intracellular signaling molecules by which these receptors regulate neuronal phenotype through analysis of mice with null mutations of genes whose proteins are members of the NMDA receptor complex (NRC). The research outlined in this talk will highlight two key pathways in barrel formation: the Protein Kinase A (PKA) and SynGAP-MAPK pathways.

21st February

Jenny Read

Royal Society Research Fellow
Institute of Neuroscience
University of Newcastle upon Tyne

Solving the correspondence problem using position and phase disparity

Because we have two eyes set slightly apart, the images they receive are slightly offset from one another, a relationship called position disparity. This requires the visual system to work out which feature in the left eye corresponds to a given feature in the right - the stereo correspondence problem - before the two images can be fused into a single three-dimensional percept. In recent years, neurophysiologists have mapped the response properties of binocular neurons in primary visual cortex and elsewhere, and developed a model that successfully describes many of their properties (the binocular energy model; Ohzawa, DeAngelis, & Freeman, 1990). However, we still have little understanding of how the output from these early neuronal populations is used by the visual system to solve the correspondence problem. Here, we point out a puzzling property of these neurons: they apparently respond best to stimuli in which the two eyes' views have a special relationship called phase disparity, even though this type of disparity never occurs in nature. We exploit this to develop a new method for solving the correspondence problem, based on the response properties of cortical neurons. Precisely because they are tuned to non-physical stimuli, these cells are best activated by false matches. Thus, their activity can help the brain to identify the true match by a process of elimination. We implement a computational algorithm based on these neuronal properties, and show that it successfully solves the correspondence problem.

7th March

Charles Peck 
IBM Research Yorktown Heights

Synchronization and the Binding Problem

Ever since Gray et al. demonstrated that synchronization of neural oscillations can reflect global visual properties, neural synchronization has been proposed as a mechanism for realizing various aspects of consciousness. In particular, synchronization has been proposed as a solution to the binding problem. This talk will discuss the binding problem, its relationship to Rosenblatt's "superposition catastrophe," the requirements synchronization must satisfy to solve this problem, and a minicolumn-scale computational model that satisfies a subset of these requirements. In addition, the talk will consider secondary requirements derived from the constraints of biological nervous systems. Circuits of Wilson-Cowan models consistent with the known connectivity of the neocortical microcircuit are proposed to explore solutions to these requirements. Results from these models will also be presented.

21st March

David Lowe 
Department of Computer Science 
The University of British Columbia

Object and Place Recognition from Invariant Local Features

Within the past few years, invariant local features have been successfully applied to a wide range of recognition and image matching problems. For recognition applications, it has proved particularly important to develop features that are distinctive as well as invariant, so that a single feature can be used to index into a large database of features from previous images. Robust recognition can then be achieved by identifying clusters of features with geometric consistency followed by detailed model fitting. Efficiency can be obtained with approximate nearest-neighbor methods that identify matches in a large database in real time. Recent work will be presented on applications to location recognition, augmented reality, and the detection of image panoramas from unordered sets of images.
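A minimal sketch of the distinctiveness-based matching step, using Lowe's nearest-neighbour ratio test (brute-force search stands in here for the approximate nearest-neighbour methods mentioned above; names are illustrative):

```python
import numpy as np

def ratio_test_matches(query, database, ratio=0.8):
    # query: (nq, d) descriptors from the new image;
    # database: (nd, d) descriptors from previous images.
    matches = []
    for qi, q in enumerate(query):
        d = np.linalg.norm(database - q, axis=1)
        i1, i2 = np.argsort(d)[:2]
        if d[i1] < ratio * d[i2]:   # nearest match is distinctive enough
            matches.append((qi, i1))
    return matches
```

Surviving matches would then be clustered for geometric consistency (e.g. with a Hough-style vote over pose) before detailed model fitting, as the abstract describes.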

4th April

Peter Latham 
Gatsby Computational Neuroscience Unit 
University College London

Computing with population codes

One of the few things in systems neuroscience we are fairly certain of is that variables in the outside world are encoded in population activity. Encoding, however, is only one step -- just as important is computing. For example, to determine whether to cross a street, we need to combine our estimate of the distance to the nearest car with its speed, and then divide one into the other to determine how much time we have to get to the other side. This transformation from distance and speed to time must be done with populations of neurons.

We are beginning to understand how such computations might take place in the brain, and in particular how they might take place efficiently; that is, with very little information loss. Most models, however, have focused on computing the values of variables (time in the above example). It is also important to understand how the brain handles uncertainty. This is especially important when combining multiple cues, as it is critical to weight them according to their reliability. If in the above example we use both sound and vision to estimate the distance to the car, we would need to pay more attention to vision during the day and more to audition on a dark night.

Stated more precisely, networks of neurons need to compute the posterior over the variable of interest. I will discuss two models for how the brain might do this: multisensory integration, where we have good models, and computations like the one described above (division), where we do not.

11th May

David Redish 
University of Minnesota 

The dynamics of representation in the hippocampus: implications for memory, learning, cognition, and decision-making

The hippocampus is known to be a critical neural structure for quickly-learned and flexible learning tasks, including spatial navigation tasks, reversal learning tasks, and episodic memory tasks. The hippocampus has also been implicated in planning, decision-making, and anxiety. From neural ensembles recorded from the hippocampi of behaving rats, it is possible to reconstruct the location encoded by the firing pattern in the ensemble. Using newly developed analysis techniques, we will examine how the encoded location changes at very fast (e.g. ms) timescales. We will show that, even during awake behaviors, the neural ensemble does not always encode the actual location of the rat, but instead, transiently, encodes non-local positions on the maze. These non-local encodings show that the hippocampus can provide signals that could be used for replay, for choice-evaluation, and for correction in the face of errors. We will relate these signals to their underlying brain states and discuss implications for flexible learning strategies. If time permits, I will also compare these signals to neural ensemble recordings taken from the dorsal striatum (which do not show the same fast dynamics seen in hippocampus).

Host: Mark van Rossum ( mvanross@inf.ed.ac.uk).

Autumn 2005


11th October

Yaakov Engel

Alberta Ingenuity Centre for Machine Learning (AICML)
Department of Computing Science
University of Alberta, Canada.

Bayes Meets Bellman: Reinforcement Learning with Gaussian Processes

Reinforcement Learning (RL) is a class of learning problems frequently encountered by intelligent agents, both biological and artificial. The last two decades have witnessed an explosion in RL research, resulting in a large number of new RL algorithms and architectures. However, there remain several fundamental obstacles hindering the widespread application of RL methodology to real-world problems. Probably the most obvious of these is the scarcity of provably convergent, on-line RL algorithms that are capable of dealing with large state-action spaces. The best-studied of these algorithms is the celebrated family of TD(lambda) algorithms and their variants, employed in conjunction with a linear function approximation architecture, using a simple gradient-like stochastic approximation update rule. While machine learning research at large has moved on from studying such gradient-based stochastic-approximation algorithms to more sophisticated approaches, most notably kernel machines, RL practitioners were left restricted to employing TD(lambda), unless they were either prepared to forgo all convergence guarantees or settle for off-line operation.

In this talk I will present the GPTD family of algorithms, which employs Bayesian reasoning and (kernel-based) Gaussian processes (GPs) to efficiently compute a posterior value distribution, given the state-reward trajectory observed so far. GPTD methods are capable of solving problems characterized by large, possibly infinite, state and action spaces. Moreover, they may be used in the on-line setting, they may be extended to control-learning via GPSARSA, and they may be shown to provide optimal linear estimators, in the mean-squared-error sense, even if the Gaussianity assumptions are relaxed. The GP approach to RL delivers an extra bonus by providing, apart from value estimates, an additional value-uncertainty measure via the posterior covariance. This additional piece of information opens up the way to a wide range of possibilities not available before to RL practitioners.
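A minimal sketch of the underlying idea, assuming the deterministic-transition measurement model r_t = V(x_t) - gamma * V(x_{t+1}) + noise and an illustrative squared-exponential kernel; posterior inference then reduces to linear-Gaussian algebra:

```python
import numpy as np

def gptd_posterior(X, rewards, gamma=0.95, sigma=0.1, lengthscale=1.0):
    # X: (T+1, d) visited states; rewards: (T,) observed rewards.
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-0.5 * d2 / lengthscale ** 2)
    T = len(rewards)
    H = np.zeros((T, T + 1))                 # Bellman measurement matrix
    H[np.arange(T), np.arange(T)] = 1.0
    H[np.arange(T), np.arange(T) + 1] = -gamma
    K = k(X, X)
    G = H @ K @ H.T + sigma ** 2 * np.eye(T)
    alpha = np.linalg.solve(G, rewards)
    mean_V = K @ H.T @ alpha                 # posterior mean of V at X
    cov_V = K - K @ H.T @ np.linalg.solve(G, H @ K)  # value uncertainty
    return mean_V, cov_V
```

The diagonal of the posterior covariance is exactly the per-state value-uncertainty measure referred to above.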

18th October

Julie Harris

Vision Lab
School of Psychology
University of St. Andrews

Exploiting the visual environment for human 3-D motion perception

The main problem of vision is understanding what visual information from our environment can be used for seeing, and then discovering if, when and how the human brain uses it. In this talk I will use a specific problem, how we use binocular vision to see objects move in 3-D, to illustrate how human behavioural research can contribute to our understanding of visual processing.

We know that the visual system is exquisitely sensitive to the small differences between the two eyes' images (binocular disparity). This information can be used to find the depth and shape of objects. Can we also exploit the motion disparity that occurs when objects move in depth to help us see objects moving in 3-D, and to help us navigate through the 3-D environment?

I will discuss some recent literature in this area, showing how we are sensitive to small differences in the 3-D direction of moving objects, and how binocular information may be used to achieve this sensitivity. I will then describe my recent work on biases in the perception of 3-D motion direction, demonstrating how binocular information produces large biases. Finally, I will consider how a simpler model based on the use of visual direction can account for the biases displayed by observers.

1st November

Alessandro Treves 
International School for Advanced Studies
Trieste

Frontal Latching Networks and the neuronal basis of infinite recursion

Understanding the neural basis of higher cognitive functions, such as those involved in language, in planning, in logic, what the Greeks would have called "logos", requires as a prerequisite a shift from mere localization, which has been popular with imaging research, to an analysis of network operation. A recent proposal points at infinite recursion as the core of several higher functions, and thus challenges cortical network theorists to describe network behavior that could subserve infinite recursion. Considering a class of reduced models of large semantic associative networks, whose storage capacity can be studied analytically with statistical physics methods, I have simulated their dynamics, once the units are endowed with a simple model of firing frequency adaptation. I find that such models naturally display latching dynamics, i.e. they hop from one attractor to the next following a stochastic process based on the correlations among attractors. I propose that such latching dynamics may be associated with a network capacity for combinatorial recursion. More interestingly, it turns out, from the simulations and from analytical arguments, that infinite latching only occurs after a phase transition, once the network connectivity becomes sufficiently extensive to support structured transition probabilities between global network states. The crucial development endowing a semantic system with a non-random dynamics would thus be an increase in connectivity, perhaps to be identified with the dramatic increase in spine numbers recently observed in the basal dendrites of pyramidal cells in Old World monkeys and particularly in human frontal cortex.

22nd November

Graeme Ackland 
The School of Physics 
College of Science and Engineering 
The University of Edinburgh

Daisyworlds: feedback, adaptation and regulation in the Earth

The Gaia hypothesis suggests that strong coupling between life and its environment brings the system into a state "favourable for life". This appears to be a general principle of computational systems with replicating agents, although sometimes competition between replicators means it breaks down. I will discuss a simple model ecosystem (daisyworld) which shows how life can regulate its environment for its own benefit. I will discuss the extent to which natural selection is essential in evolution and regulation, and the presence of similar phenomena in other ecological areas such as foodwebs and population dynamics.

I will further discuss how the mathematical principles embodied in the system could be used in more abstract applications, such as optimising a function with time-varying inputs. In principle, this acts similarly to a genetic algorithm, but would have greater flexibility to find subsidiary minima. Preliminary work on this topic will be presented and discussed.

Friday 25th November

Barry Dickson

Wired for Sex: The Genetic and Neural Bases for Sexual Behaviour in Drosophila

29th November

Stephen Muggleton 
Head of Computational Bioinformatics Laboratory
Department of Computing
Imperial College London

Machine Learning for Systems Biology

Systems biologists use graph-based descriptions of bio-molecular interactions which describe cellular activities such as gene regulation, metabolism and transcription. Biologists build and maintain these network models based on the results of experiments in wild-type and mutant organisms. This presentation will provide an overview of recent ILP research in this area. Some of the intrinsic interest in the area from a logic-based machine learning perspective includes:

1. the availability of background knowledge on existing known biochemical networks from publicly available resources such as KEGG (used in data sets such as those in the Nature paper by Bryant, King, Muggleton, etc);
2. the availability of training and test data from a variety of sources including micro-array experiments (see for instance the Rosetta compendium of Hughes et al.) and metabolomic data (eg Nicholson et al.) from NMR and mass spectroscopy experiments;
3. the inherent importance of the problem (see Kitano's articles in Nature and Science in 2002) owing to its application in biology and medicine;
4. the inherent relational structure in the form of spatial and temporal interactions of the molecules involved;
5. the naturalness of probabilistic representations to represent, for instance, the availability of genes as discrete or continuous random variables having expression levels as states (used by Friedman et al with Bayes' nets and Angelopoulos with Stochastic Logic Programs).

We will argue that this area requires rich probabilistic logic-based representations which can be machine learned. From a logical perspective the objects within this area include genes, proteins, metabolites, inhibitors and cofactors. The relationships include biochemical reactions in which one set of metabolites is transformed to another, mediated by the involvement of an enzyme. One of the representational challenges is that within various databases the same object can be referred to in several ways, which brings in the problem of identity uncertainty. The available genomic information is also very incomplete concerning the functions and even the existence of genes and metabolites, leading to the necessity of techniques such as logical abduction to introduce novel functions and even the invention of new objects.

Summer 2005

April 19th

Bernd Porr
University of Glasgow
Department of Electronics & Electrical Engineering

Fast learning with heterosynaptic plasticity

Currently all important, low-level, unsupervised network learning algorithms follow the paradigm of Hebb, which correlates input with output activity to change the connection strength of a synapse. However, classical Hebbian learning always carries a potentially destabilising autocorrelation term which prevents the use of high learning rates. I'll introduce a novel non-Hebbian learning rule that utilises input correlations only, effectively implementing strict heterosynaptic learning. This leads to dramatically improved properties. Elimination of the output from the learning rule removes the unwanted, destabilising autocorrelation term, allowing the use of high learning rates.
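As a rough illustration, an input-correlation update of the kind described might look as follows (a schematic guess at the rule's shape, assuming a discrete-time derivative of the reflex input; filter details and constants are omitted, and this is not the speaker's exact formulation):

```python
import numpy as np

def input_correlation_update(weights, u_pred, u0, u0_prev, mu=0.01):
    # weights, u_pred: weights and activities of the predictive inputs;
    # u0, u0_prev: current and previous value of the reflex input.
    du0 = u0 - u0_prev                  # temporal derivative of reflex input
    # Heterosynaptic: only inputs appear, so the output's autocorrelation
    # term of Hebbian learning is absent and high learning rates are safe.
    return weights + mu * du0 * np.asarray(u_pred)
```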

I'll present different applications from the fields of autonomous robotics and control which demonstrate the performance of this new heterosynaptic learning rule.

In addition I'll point out functional similarities with the limbic system and dopamine driven learning.
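
As a toy illustration of the rule described above, the Python sketch below (with entirely synthetic signals and arbitrary constants) changes a plastic weight using only a low-pass trace of a predictive input and the temporal derivative of a fixed reflex input; the output is computed but never enters the update, so no autocorrelation term can arise. This is one plausible reading of an input-correlation-only rule, not the speaker's exact formulation.

```python
import numpy as np

# Sketch of an input-correlation-only rule with synthetic signals: a plastic
# weight w1 changes in proportion to a low-pass trace of a predictive input
# and the temporal derivative of a fixed reflex input x0.  The output v is
# computed but never enters the update.  All constants are arbitrary.

T, dt = 2000, 1.0
mu, tau = 0.1, 30.0                     # learning rate, trace time constant
w0, w1 = 1.0, 0.0                       # fixed reflex weight, plastic weight
x1_trace, x0_prev = 0.0, 0.0

for t in range(T):
    x1 = 1.0 if t % 100 == 20 else 0.0  # predictive pulse, 20 steps early
    x0 = 1.0 if t % 100 == 40 else 0.0  # reflex pulse
    x1_trace += dt * (-x1_trace / tau) + x1
    v = w0 * x0 + w1 * x1               # output: unused by the learning rule
    dx0 = (x0 - x0_prev) / dt
    w1 += mu * x1_trace * dx0           # input-correlation update only
    x0_prev = x0

print(f"plastic weight after training: w1 = {w1:.4f}")
```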

May 3rd

Frank Sengpiel
Reader in Neuroscience
Cardiff School of Biosciences

Plasticity of Binocular Integration in the Primary Visual Cortex

Monocular deprivation (MD) is one of the classical paradigms for the study of developmental plasticity of the cortex. The profound changes in visual cortical anatomy and physiology are commonly viewed as the result of competition between the two eyes for synaptic space in layer 4. I will show results from two sets of experiments that call this textbook view into question: they argue instead in favour of cooperative binocular interactions and support the BCM theory. First, recovery from MD depends on binocularly correlated activity and is disrupted by strabismus. Second, strabismus induced prior to a period of MD provides little protection against the effects of MD.

Additionally, I will address the question of how different types of visual experience are integrated. Although the severity of the effects of continuous MD has made it a useful model of experience-dependent plasticity, the complete loss of visual function in one eye after just a brief period of compromised vision would appear to be maladaptive. We have studied the relative effects of mixed daily normal (binocular) and abnormal (monocular) visual experience and found that long periods of the latter have no visible effect on V1 when brief periods of the former are also given. The results indicate that the visual system has the ability to select one type of input (that which would be expected in a normal environment) over any other.

May 20th

Andrew Blake

Senior Research Scientist, Microsoft Research

Enhanced video for teleconferencing through stereoscopic matching

Technology advances mean that a stereo webcam could be manufactured and sold for essentially the same price as a monocular one. There are two outstanding advantages for teleconferencing in using the stereo modality. First, automatic control of pan/tilt/zoom, which is possible monocularly, is particularly robust in stereo. Second, privacy can be protected by obscuring background elements and replacing them with safer ones. For example, a business conversation held at home could show only the talking head, against a bland video background, with inappropriate elements obscured. The first of these advantages is readily attainable, and we describe progress towards achieving the second.

Two algorithms will be described that are capable of real-time segmentation of foreground from background layers in stereo video sequences. Automatic separation of layers from colour/contrast or from stereo alone is known to be error-prone. Here, colour, contrast and stereo matching information are fused to infer layers accurately and efficiently. The first algorithm, Layered Graph Cut, does not directly solve stereo. Instead, it marginalises the stereo match likelihood over foreground and background hypotheses, and fuses it with a contrast-sensitive colour model that is learned on the fly. Segmentation is then solved efficiently and exactly by binary graph cut. The second algorithm, Layered Dynamic Programming, solves stereo in an extended 6-state space that represents both foreground/background layers and occluded regions. The stereo-match likelihood is similarly fused with the contrast-sensitive colour model.

Both algorithms are evaluated with respect to ground truth segmentation and found to have similar performance but rather different characteristics with respect to computational efficiency. The algorithms are demonstrated in the application of background substitution and shown to give good quality composite video output.
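
As a toy analogue of the colour/stereo fusion just described, and emphatically not the Layered Graph Cut implementation itself, the sketch below sums synthetic per-pixel colour and stereo log-likelihood ratios into unary costs, adds a uniform Ising smoothness term, and solves the binary labelling exactly with a min-cut (via networkx).

```python
import numpy as np
import networkx as nx

# Toy analogue of the fusion step: synthetic per-pixel colour and stereo
# log-likelihood ratios are summed into unary costs, a uniform Ising
# smoothness term couples 4-connected neighbours, and the binary
# foreground/background labelling is found exactly by min-cut.  This is a
# stand-in sketch, not the actual Layered Graph Cut algorithm (which uses
# learned colour models and contrast-sensitive pairwise terms).

rng = np.random.default_rng(1)
H, W, lam = 12, 12, 1.5

colour_llr = rng.normal(-1.0, 0.7, (H, W))   # log p(obs|fg) - log p(obs|bg)
stereo_llr = rng.normal(-1.0, 0.7, (H, W))
colour_llr[3:9, 3:9] += 2.5                  # a square favoured as foreground
stereo_llr[3:9, 3:9] += 2.5
fused = colour_llr + stereo_llr              # fuse cues as independent evidence

G = nx.DiGraph()
for i in range(H):
    for j in range(W):
        p = (i, j)
        cost_fg = max(0.0, -fused[i, j])     # cost of labelling p foreground
        cost_bg = max(0.0, fused[i, j])      # cost of labelling p background
        G.add_edge('s', p, capacity=cost_bg) # paid if p lands on the bg side
        G.add_edge(p, 't', capacity=cost_fg) # paid if p lands on the fg side
        for q in ((i + 1, j), (i, j + 1)):   # smoothness between neighbours
            if q[0] < H and q[1] < W:
                G.add_edge(p, q, capacity=lam)
                G.add_edge(q, p, capacity=lam)

_, (src_side, _) = nx.minimum_cut(G, 's', 't')
seg = np.zeros((H, W), dtype=int)
for node in src_side - {'s'}:
    seg[node] = 1                            # 1 = foreground
print(seg)
```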

June 14th

Sam Roweis

Department of Computer Science, University of Toronto

Neighbourhood Components Analysis

Say you want to do K-Nearest Neighbour classification. Besides selecting K, you also have to choose a distance function, in order to define "nearest". I'll talk about a novel method for *learning* -- from the data itself -- a distance measure to be used in KNN classification. The learning algorithm, Neighbourhood Components Analysis (NCA), directly maximizes a stochastic variant of the leave-one-out KNN score on the training set. It can also learn a low-dimensional linear embedding of labeled data that can be used for data visualization and very fast classification in high dimensions. Of course, the resulting classification model is non-parametric, making no assumptions about the shape of the class distributions or the boundaries between them. If time permits, I'll also talk about newer work on learning the same kind of distance metric for use inside a Gaussian Kernel SVM classifier.

(Joint work with Jacob Goldberger)

Associated paper (pdf file)
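
The NCA objective is compact enough to sketch. Below is a minimal NumPy version on invented data: p_ij is a softmax over negative squared distances in the projected space, and a generic optimiser maximises the expected number of correctly classified points (the actual algorithm uses the analytic gradient; finite differences merely keep the sketch short).

```python
import numpy as np
from scipy.optimize import minimize

# Minimal sketch of the NCA objective on synthetic data: p_ij is a softmax
# over negative squared distances in the space projected by A, and we
# maximise the expected number of correctly classified points under
# stochastic nearest-neighbour assignment.  Data and dimensions are invented.

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (20, 2)) + m for m in ([0, 0], [3, 0])])
y = np.repeat([0, 1], 20)
X = np.hstack([X, rng.normal(0, 5, (40, 3))])   # 3 noisy distractor dims
d_in, d_out = X.shape[1], 2

def neg_nca(A_flat):
    A = A_flat.reshape(d_out, d_in)
    Z = X @ A.T
    D = ((Z[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(D, np.inf)                 # a point never picks itself
    P = np.exp(D.min(1, keepdims=True) - D)    # stabilised softmax numerator
    P /= P.sum(1, keepdims=True)               # p_ij: i picks neighbour j
    return -(P * (y[:, None] == y[None, :])).sum()

A0 = rng.normal(0, 0.1, d_out * d_in)
res = minimize(neg_nca, A0, method='L-BFGS-B')  # finite-difference gradients
print(f"expected correct: {-neg_nca(A0):.1f} -> {-res.fun:.1f} (of 40)")
```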

Spring 2005

January 10th

Mikko Juusola

Physiological Laboratory, University of Cambridge

Neural code: from molecules to networks.

I will briefly discuss three cases.

(1) At the cellular level: how signal encoding in photoreceptors in vivo differs for naturalistic (ordered) and random stimulus patterns.

(2) At the network and molecular level: how adapting feedback makes the transfer of information robust in Drosophila photoreceptor-interneuron synapses in vivo.

(3) At the level of codes: how action potential waveforms carry information about stimulus history in rat cortical slices.

Hosted by Mark van Rossum

February 8th

Roland Baddeley

Department of Experimental Psychology, University of Bristol

Machine learning techniques for interpreting animal communication: decoding cuttlefish signals

Cuttlefish are intelligent molluscs with an amazing ability to change their skin colouration that puts the better-known abilities of the chameleon to shame. They use this ability both for camouflage and for signalling to other cuttlefish. As a system for understanding the nature of animal communication, it has many advantages. In this talk I will show how, using various computational techniques based on Bayesian inference, the nature of camouflage and communication in cuttlefish can be understood.

Hosted by Mark van Rossum 

March 1st

Neil Lawrence

Department of Computer Science, University of Sheffield

Probabilistic Non-linear Component Analysis through Gaussian Process Latent Variable Models

It is known that Principal Component Analysis (PCA) has an underlying probabilistic representation based on a latent variable model. PCA is recovered when the latent variables are integrated out and the parameters of the model are optimised by maximum likelihood. It is less well known that the dual approach of integrating out the parameters and optimising with respect to the latent variables also leads to PCA. The marginalised likelihood in this case takes the form of Gaussian process mappings, with linear covariance functions, from a latent space to an observed space, which we refer to as a Gaussian Process Latent Variable Model (GPLVM). This dual probabilistic PCA is still a linear latent variable model, but by looking beyond the inner product kernel as a covariance function we can develop a non-linear probabilistic PCA.

In the talk we will introduce the GPLVM and illustrate its application on a range of high-dimensional data sets including motion capture data, handwritten digits, a medical diagnosis data set and images.
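
A minimal sketch of the idea on synthetic data, assuming fixed hyperparameters and numerical gradients purely for brevity: treat the latent coordinates as parameters and minimise the negative GP marginal likelihood under an RBF covariance.

```python
import numpy as np
from scipy.optimize import minimize

# Toy GPLVM: optimise 2-D latent coordinates X for a small synthetic data
# set Y by minimising the negative GP marginal likelihood under an RBF
# covariance.  Hyperparameters are fixed and gradients are numerical,
# purely for brevity; a real implementation optimises both analytically.

rng = np.random.default_rng(0)
N, D, q = 30, 5, 2
t = np.linspace(0, 2 * np.pi, N)                 # hidden 1-D structure
Y = np.stack([np.sin(k * t) for k in range(1, D + 1)], axis=1)
Y += rng.normal(0, 0.05, Y.shape)

def neg_log_marginal(x_flat, noise=0.1):
    X = x_flat.reshape(N, q)
    d2 = ((X[:, None] - X[None, :]) ** 2).sum(-1)
    K = np.exp(-0.5 * d2) + noise * np.eye(N)    # RBF kernel + noise
    _, logdet = np.linalg.slogdet(K)
    # D/2 log|K| + 1/2 tr(K^-1 Y Y')  (constants dropped)
    return 0.5 * D * logdet + 0.5 * np.sum(Y * np.linalg.solve(K, Y))

x0 = rng.normal(0, 0.1, N * q)
res = minimize(neg_log_marginal, x0, method='L-BFGS-B')
print(f"negative log marginal likelihood: {neg_log_marginal(x0):.1f} "
      f"-> {res.fun:.1f}")
```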

Hosted by Chris Williams

March 8th

Liam Paninski

Gatsby Computational Neuroscience Unit, University College London

Statistical methods for understanding neural codes

We examine neural encoding models in which a linear filtering stage is followed by a noisy, leaky, integrate-and-fire spike generation mechanism incorporating after-spike currents and spike-dependent conductance modulations. This model provides a biophysically more realistic alternative to models based on Poisson (memoryless) spike generation, and can effectively reproduce a variety of spiking behaviors seen in vivo. We describe the maximum likelihood estimator for the model parameters, given only extracellular spike train responses. Specifically, we prove that the log-likelihood function is concave and thus has an essentially unique global maximum that can be found using gradient ascent techniques. We develop efficient algorithms for computing the maximum likelihood solution based on Fokker-Planck and integral equation techniques, and demonstrate the effectiveness of the resulting estimator on two different in vitro physiological preparations. First, we examine intracellular recordings of pyramidal cortical neurons stimulated with random currents, where the model allows us to "decode" the underlying subthreshold somatic voltage dynamics, given only the superthreshold spike train. Second, we analyze extracellular recordings from retinal ganglion cells stimulated with dynamic light "flicker" stimuli. Here the model provides insight into the biophysical factors underlying the reliability of these neurons' spiking responses. We close by discussing recent extensions to highly biophysically-detailed, conductance-based models.

Hosted by Chris Williams 

Autumn 2004

30th November

Peter Redgrave

Department of Psychology, University of Sheffield

Action selection: Inspiration from Biology

Any system, biological or mechanical, which has more than one sensory or cognitive representation capable of directing potentially conflicting movements faces the problem of what to do next. To avoid chaotic movement, the independent functional systems must achieve sequential priority which gives them sole control over the "final common motor path" in an orderly manner. This issue, often termed action selection, continues to be a major concern of ethology, psychology, neuroscience and robotics. We have recently proposed that part of the vertebrate central nervous system, the basal ganglia, appears to be ideally configured to select between multiple cognitive and sensorimotor systems that have the capacity to promote mutually exclusive behaviours. To test this notion we have implemented a high-level computational model of intrinsic basal ganglia circuitry and its interactions with other regions of the brain. The computational model was then exposed to the rigours of 'real world' action selection by embedding it within the control architecture of a small mobile robot. In a mock foraging task, the robot was required to select appropriate actions under changing sensory and motivational conditions. Our results demonstrate: (i) the computational model of the basal ganglia switches effectively between competing channels depending on the dynamics of relative input 'salience'; (ii) its performance is enhanced by the inclusion of additional biologically inspired circuitry; and (iii) in the robot, the model demonstrates appropriate and clean switching between different actions and is able to generate coherent behavioural sequences. We therefore conclude that a particular functional capability prescribed for a particular biologically inspired architecture has been confirmed. This architecture may serve as a basis from which to develop robust networks to control the behaviour of robots. Significant insights have also been gained concerning the possible role of particular biological features within the context of a selection architecture.

Hosted by Mark van Rossum and Martin Guthrie

16th November

Andrew Zisserman

Visual Geometry Group, Oxford University

Retrieving and recognizing objects in image databases and videos

An approach to object retrieval will be described, which searches for and localizes all the occurrences of an object in an image database or video. The object is represented by a set of viewpoint invariant fragments. These fragments enable recognition to proceed successfully despite changes in viewpoint, illumination and partial occlusion.

Two aspects of the problem will be discussed. The first is efficient retrieval of specific objects. It will be shown that methods developed for text retrieval, such as an inverted file system and document ranking, can be employed to make retrieval immediate, returning a ranked list of key frames/shots in the manner of Google. The method will be demonstrated on feature length films.
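
A toy version of that inverted-file machinery, with random integers standing in for quantized 'visual word' descriptors, looks as follows; frames are ranked by a tf-idf score, and a query only touches frames that share at least one word with it.

```python
import numpy as np
from collections import defaultdict

# Toy version of the text-retrieval machinery with "visual words": random
# integers stand in for quantized descriptors, an inverted file maps each
# word to the frames containing it, and frames are ranked by tf-idf, so a
# query only touches frames sharing at least one word with it.

rng = np.random.default_rng(0)
n_frames, vocab = 200, 1000
frames = [rng.integers(0, vocab, size=50).tolist() for _ in range(n_frames)]

inverted = defaultdict(set)                  # word -> frames containing it
for f, words in enumerate(frames):
    for w in words:
        inverted[w].add(f)

idf = {w: np.log(n_frames / len(fs)) for w, fs in inverted.items()}

def query(q_words, top=5):
    scores = defaultdict(float)
    for w in set(q_words):
        for f in inverted.get(w, ()):
            scores[f] += q_words.count(w) * idf[w]   # tf (in query) x idf
    return sorted(scores.items(), key=lambda kv: -kv[1])[:top]

print(query(frames[17][:20]))                # frame 17 should rank first
```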

The second aspect is the retrieval and recognition of object classes, such as motorbikes or airplanes, where the within-class variation must also be modelled. We will describe the `constellation model' which allows a particular variation in the appearance and configuration of the class. This model has the agreeable property that it can be learnt from a set of training images without requiring any segmentation or manual input.

[Joint work with Rob Fergus, Pietro Perona, Frederik Schaffalitzky and Josef Sivic].

Hosted by Chris Williams

2nd November

Steve Russell

Department of Genetics, University of Cambridge

Functional Genomics Resources for Drosophila

With a high-quality genome sequence and almost a century of genetics resources behind it, the fruit fly, Drosophila melanogaster, is an excellent model system for exploring metazoan biology at a genomics level. Over the past few years we have been developing a set of genomics resources to enable the UK research community to use state-of-the-art methods in their particular areas of interest (www.flychip.org.uk). These resources include genetics tools for whole-genome deficiency mapping, microarrays (spotted cDNA, long oligonucleotide and Affymetrix), genome tile arrays for mapping transcription factor binding sites, and an integrated database, FlyMine. I will discuss the progress in developing some of these resources, highlighting the problems and successes, with particular focus on recent work developing genome-wide chromatin immunopurification strategies with Heat Shock Factor and the chromatin modulator Suppressor of Hairy wing.

Hosted by Douglas Armstrong 

31st August

Rolf Kotter

Motifs in Brain networks

Complex brains have evolved a highly efficient network architecture whose structural connectivity is capable of generating a large repertoire of functional states. We detect characteristic network building blocks (structural and functional motifs) in neuroanatomical data sets and we identify a small set of structural motifs that occur in significantly increased numbers. Our analysis suggests the hypothesis that brain networks maximize both the number and the diversity of functional motifs, while the repertoire of structural motifs remains small. Using functional motif number as a cost function in an optimization algorithm, we obtain network topologies that resemble real brain networks across a broad spectrum of structural measures, including small-world attributes. These results are consistent with the hypothesis that highly evolved neural architectures are organized to maximize functional repertoires and to support highly efficient integration of information.

Hosted by Mark van Rossum

10am, Wednesday 28th July

Max Welling

On the Choice of Regions for Generalized Belief Propagation

Generalized belief propagation (GBP) has proven to be a promising technique for approximate inference tasks in AI and machine learning. However, the choice of a good set of clusters to be used in GBP has remained more of an art than a science to this day. This paper proposes a sequential approach to adding new clusters of nodes and their interactions (i.e. "regions") to the approximation. We first review and analyze the recently introduced region graphs and find that three kinds of operations ("split", "merge" and "death") leave the free energy and (under some conditions) the fixed points of GBP invariant. This leads to the notion of "weakly irreducible" regions as the natural candidates to be added to the approximation. Computational complexity of the GBP algorithm is controlled by restricting attention to regions with small "region-width". Combining the above with an efficient (i.e. local in the graph) measure to predict the improved accuracy of GBP leads to the sequential "region pursuit" algorithm for adding new regions bottom-up to the region graph. Experiments show that this algorithm can indeed perform close to optimally.

Hosted by Chris Williams

Summer 2004

4th May

Keith Stenning

HCRC, Informatics, University of Edinburgh.

and

Michel van Lambalgen

Chair of Logic & Cognitive Science (shared with Frank Veltman), ILLC/Department of Philosophy, University of Amsterdam.

Some neural correlates of reasoning

The process whereby hearers take in successive statements of a discourse and accommodate their interpretation to apparent conflicts is a human analogue of the general biological process whereby animals maintain a model of the current environment in the light of incoming data, and plan their actions on the basis of this model. This process of interpretative accommodation is generally automatic and efficient. We present a default logic for modelling some well known data on accommodation of interpretation during discourse comprehension (the so-called 'suppression effect', Byrne 1989). We then provide a spreading-activation neural implementation of this logic which identifies minimal models of the discourse with stable states of a suitable network. This implementability strongly distinguishes planning logic from classical logic, which presents huge problems of search for neural implementation. We indicate connections with the work of Bienenstock and von der Malsburg on 'fast functional links', which we conjecture provide a mechanism for the on-line construction of these networks as representations of discourses in working memory.

Hosted by David Willshaw 

1st June

Graham Barnes

Department of Optometry and Neuroscience, UMIST

Predictive control of ocular pursuit movements in humans: experimental data leading to the development of a realistic model

An intriguing aspect of smooth pursuit eye movements is that, unlike limb movements, they cannot normally be initiated in the absence of a moving target or, indeed, in complete darkness. Any attempt to generate smooth eye movements volitionally by, for example, imagining a moving target results in a pattern of saccadic movements only. It has generally been assumed that this behaviour results from the fact that pursuit requires visual error information, derived at the retina, in order to control smooth eye movement. Associated with this visual feedback is a large time delay (~100 ms), as is evident when subjects respond to randomised transient target motion stimuli. Yet there is conflicting evidence that, in some situations, subjects can exercise predictive control over pursuit that does not seem to be dependent on immediate visual feedback.

How can this conflict be resolved? It turns out that two factors seem to be important in this process: expectancy and short-term storage. In early experiments we were able to show that repeated presentation of identical target motion stimuli, preceded by regularly timed warning cues, led to replacement of the normal reactive response with the progressive build up of anticipatory smooth eye movements. This, and other evidence, suggested that a short-term store was being built up from prior stimulation and then released at an appropriate time to predict the upcoming target motion. Subsequent experiments have shown that the ability to initiate the smooth movement is heavily dependent on expectancy of target presentation, whereas the ability to scale the response is dependent on short-term storage of pre-motor drive information. In this talk I will show how the volitional control of the initiation, duration, scaling and direction of smooth eye movements rapidly (in ~300ms) takes over the control of pursuit, irrespective of concurrent visual feedback, and how this is used to allow us to generate predictive responses to quite complex sequences of target motion stimuli after only brief exposure to such stimuli. I will then show how this process may be modelled, at a control systems level, making reference to the dynamic characteristics of established neural structures.

Hosted by Mark van Rossum 

8th June

Barak Pearlmutter

Hamilton Institute, NUI Maynooth.

Monaural Source Separation using Spectral Cues

The acoustic environment poses at least two important challenges. First, animals must localise sound sources using a variety of binaural and monaural cues; and second, they must separate sources into distinct auditory streams (the "cocktail party problem"). Binaural cues include interaural intensity and phase disparity. The primary monaural cue is the spectral filtering introduced by the head and pinnae via the head-related transfer function (HRTF), which imposes different linear filters upon sources arising at different spatial locations.

Here we address the second challenge, source separation. We propose an algorithm for exploiting the monaural HRTF to separate spatially localised acoustic sources in a noisy environment. We assume that each source has a unique position in space, and is therefore subject to preprocessing by a different linear filter. We also assume prior knowledge of weak statistical regularities present in the sources. This framework can incorporate various aspects of acoustic transfer functions (echos, delays, multiple sensors, frequency-dependent attenuation) in a uniform fashion, treating them as cues for, rather than obstacles to, separation. To accomplish this, sources are represented sparsely in an overcomplete basis. This framework can be extended to make predictions about the neural representations required to separate acoustic sources.

Hosted by Chris Williams 

15th June

Nicolas Brunel

Neurophysics and Physiology of the Motor System, René Descartes University, CNRS.

Optimal information storage and the distribution of synaptic weights: Perceptron vs Purkinje cell

It is widely believed that synaptic modifications underlie learning and memory. However, few studies have examined what can be deduced about the learning process from the distribution of synaptic weights. We analyse the perceptron, a prototypical feedforward neural network, and obtain the optimal synaptic weight distribution for a perceptron with excitatory synapses. It contains more than 50% silent synapses, and this fraction increases with storage reliability: silent synapses are therefore a necessary byproduct of optimising learning and reliability. Exploiting the classical analogy between the perceptron and the cerebellar Purkinje cell, we fitted the optimal weight distribution to that measured for granule cell to Purkinje cell synapses. The two distributions agreed well, suggesting that the Purkinje cell can learn up to 5 kilobytes of information, in the form of 40,000 input-output associations.
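
A crude numerical cousin of this analysis (the result above is analytic, and the statistics below are arbitrary choices rather than the paper's) is to train a perceptron whose weights are clipped to be non-negative on random associations and simply count the weights driven exactly to zero:

```python
import numpy as np

# A perceptron with excitatory inputs and non-negative weights is trained
# to separate random associations with a margin, and the fraction of
# weights driven exactly to zero ("silent synapses") is counted.  Pattern
# statistics, margin, threshold and learning rate are arbitrary choices.

rng = np.random.default_rng(0)
N, P, kappa, eta = 500, 200, 1.0, 0.01
X = rng.random((P, N))                   # non-negative (excitatory) inputs
labels = rng.choice([-1, 1], size=P)
w = np.ones(N)
theta = 0.5 * N                          # fixed threshold near the mean drive

for epoch in range(1000):
    errors = 0
    for mu in rng.permutation(P):
        if labels[mu] * (w @ X[mu] - theta) < kappa:
            w = np.maximum(w + eta * labels[mu] * X[mu], 0.0)  # clipped update
            errors += 1
    if errors == 0:
        break

print(f"silent synapses: {np.mean(w == 0.0):.1%} after {epoch + 1} epochs")
```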

Hosted by Mark van Rossum 

22nd June

Aapo Hyvarinen

Department of Computer Science, Faculty of Science, University of Helsinki.

Towards predictive computational neuroscience: natural image statistics and the extrastriate cortex

It has been shown that well-known properties of the cells in the primary visual cortex emerge when certain statistical generative models are estimated for natural image input. This offers an "explanation" of some properties of visual neurons that is more detailed and extensive than previous explanations such as space-frequency analysis or scale-space theory. Here, I argue that in addition to explaining properties that have already been experimentally observed, this framework enables us to predict properties of neurons whose function is not understood, e.g. those in the extrastriate cortex. Statistical generative models are especially suitable for predictive computational neuroscience because they offer a framework that is data-driven and strongly constrained by well-known statistical theory. Thus, the prior assumptions of the theorist are much less likely to influence the results than in ordinary computational modelling. The theoretical predictions can then serve as a basis for experimental work, generating exact hypotheses on neurons for which no clear-cut hypotheses have been developed so far. I will show some very preliminary results we have obtained in this direction.

Hosted by Chris Williams

Spring 2004

13th January


Christoph von der Malsburg

Institute for Neural Computing, Ruhr-University Bochum, Germany.

Object Recognition as Paradigm of Brain Organization

Object recognition presents a rich variety of sub-problems, including learning, the structuring of a model database, and efficient retrieval in spite of large image variability. I will report on recent progress in my laboratory on several of these fronts and will discuss the issues involved as exemplary for brain organization.

27th January


Will Penny

Wellcome Department of Imaging Neuroscience, University College London.

Bayesian Methods in Brain Imaging

I will provide a short introduction to Bayesian methods and show how they can be used in the analysis of functional Magnetic Resonance Imaging (fMRI) data. I will present Bayesian General Linear Model (GLM) and Dynamic Causal Model (DCM) approaches and show how they can be used to make inferences about functional specialisation and functional integration in the human brain.

Monday, 2nd February


Laszlo Zaborszky

Center for Molecular and Behavioral Neuroscience Rutgers University

Modulatory inputs to the cortex with special reference to the role of the basal forebrain

In the last two decades, studies employing recording of single neurons and monitoring of neocortical activity, combined with chemical and electron microscopic identification of the synaptic input-output relations of the recorded neurons, have provided an increasingly detailed understanding of the function of specific neurotransmitter systems in modulating cortical activity. First, the electrophysiology of various 'diffuse' brainstem and hypothalamic corticopetal systems will be discussed in relation to cortical activity. This will be followed by a review of the electrophysiology of basal forebrain (BF) cholinergic and GABAergic corticopetal neurons and peptidergic interneurons as they affect cortical activity. Evidence from various studies suggests that BF neurons receive input from the ascending brainstem and hypothalamic modulatory systems. Thus, cholinergic and GABAergic projection neurons of the BF are anatomically in a unique position to integrate the constant flow of cellular and homeostatic states derived from the ascending subcortical systems and to channel this momentarily changing neural pattern to the entire cortical mantle to modulate alertness. Study of the local axonal arborization of electrophysiologically and chemically identified neurons of the BF, and the application of computational anatomical studies, suggest that the BF is not a diffuse structure and that its elements are capable of distinct operations. Against the relatively 'diffuse' termination of the ascending brainstem and hypothalamic axons, the restricted input from the prefrontal cortex to specific BF neurons might be instrumental in communicating state-related changes from BF neurons to specific posterior sensory areas to modulate selective cognitive processes, including sensory processing, cortical plasticity and attention.

Thursday, 19th February


Andrew Fitzgibbon

Department of Engineering Science, University of Oxford.

Image-based rendering using image-based priors

Given a small number of photographs of the same scene from several viewing positions, we want to synthesize the image which would be seen from a new viewpoint. This "novel view synthesis" problem has been widely researched in recent years. However, even the best methods do not yet produce images which look truly real. The primary source of error is in the trade-off between the inherent ambiguity of the problem, and the loss of high-frequency detail due to the regularizations which must be applied to alleviate that ambiguity. In this talk, I shall show how to constrain the generated images to have the same local statistics as natural images, effectively projecting the new view onto the space of real-world images. As this space is a small subspace of the space of all images, the result is strongly regularized synthetic views which preserve high-frequency details.

16th March


Thomas Trappenberg

Associate Professor, Director of Electronic Commerce, Faculty of Computer Science, Dalhousie University, Canada.

On Bubbles and Drifts: Continuous attractor networks and their relation to working memory, path integration, population decoding, attention, and oculomotor control.

Abstract: It has been recognized for many years that brain-style information processing often utilizes cooperation and competition through lateral-inhibition-type on-center-off-surround organizations in the brain. I will review a basic model, the continuous attractor neural network (CANN), highlighting the similarities and some variations of the basic model in relation to specific neuroscientific models, particularly in the areas mentioned in the title. I will then concentrate on two extensions of the basic model studied in recent years: NMDA-style stabilization, and the hetero-associative dynamics often discussed in conjunction with path integration.
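
A minimal ring-network sketch of the basic CANN phenomenon, with illustrative parameters rather than any of the specific models above: on-centre/off-surround coupling lets a transient cue create an activity bump that persists once the cue is removed.

```python
import numpy as np

# Minimal continuous attractor (ring) network with illustrative parameters:
# on-centre/off-surround coupling lets a transient cue create an activity
# bump that persists after the cue is removed.

N, dt, tau = 100, 0.1, 1.0
theta = np.linspace(0, 2 * np.pi, N, endpoint=False)
d = np.angle(np.exp(1j * (theta[:, None] - theta[None, :])))  # ring distance
W = 20.0 * np.exp(-d**2 / 0.18) - 4.0    # local excitation, broad inhibition

u = np.zeros(N)
for step in range(600):
    cue = 3.0 * np.exp(-d[:, 25] ** 2 / 0.18) if step < 100 else 0.0
    r = np.tanh(np.maximum(u, 0.0))      # saturating rectified rate
    u += dt / tau * (-u + W @ r / N + cue)

print("bump centred at neuron", np.argmax(u), "(cue was centred at 25)")
```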

Autumn 2003

7th October


Roland Strauss

Department of Genetics and Neurobiology, Universität Würzburg.

The Control of Drosophila Walking and Orientation Behaviour - Neurogenetic Characterization and Localization of Functional Subunits in the Central Brain and Implementation on Robots

Control of walking, climbing and landmark orientation behaviour by the insect brain is being studied on the basis of an extensive genetic screen of more than 10,000 candidate mutant flies. Functional subunits of behavioural control were surprisingly independent of each other and in many cases coincided with neuroanatomically well-defined neuropilar regions of the central brain. Among seven identified control units for walking are those for the optimisation of step length, the across-body symmetry of steps, and the proper initiation of step length when starting from a resting position. In orientation experiments we identified a brain region necessary for giving up on a previously chosen target as soon as it turns out to be inaccessible, and another region needed to retain the direction towards a landmark which became temporarily invisible during approach. Identified control algorithms are being tested on a six-legged or a wheeled, camera-equipped robot.

21st October


Walter Senn

Department of Physiology, University of Bern.

Adaptive climbing activity as an event-based cortical representation of time

The brain has the ability to represent the passage of time between two behaviorally relevant events. Recordings from different areas in the cortex of monkeys suggest the existence of neurons representing time by increasing (climbing) activity which is triggered by a first event and peaks at the expected time of a second event, e.g. a visual stimulus or a reward. When the typical interval between the two events is changed, the slope of the climbing activity adapts to the new timing. I'll present a model in which the climbing activity results from slow firing rate adaptation in inhibitory neurons. Hebbian synaptic modifications allow the new time interval to be learned by changing the degree of firing rate adaptation. This event-based representation of time is consistent with Weber's law in interval timing, according to which the error in estimating a time interval is proportional to the interval length.

18th November


Richard Connor

Department of Computer Science, University of Strathclyde.

Type projection for semistructured data: towards XML Querying for Dummies

XML's major appeal as a data standard is that information is self-describing. This is the essential feature that allows autonomous information-based interaction, as similar information can be shared and understood by users with a common human context, but without rigorous conformance to a common schema.

Problems come with the automation of queries over such data. There are many mechanisms and models for querying semistructured data, but there are two common problems deriving from the schemaless approach: queries are not written according to intuitive data models, and important semantic errors may remain undetected. The implication is that writing queries over such data is strictly in the expert domain.

This talk examines the post-hoc imposition of data models onto XML data, giving a two-phase query procedure. This can be achieved for certain classes of simple queries. The hypothesis examined is that the process gives a more intuitive model of both construction and behaviour of queries, making querying easier for non-expert users.

25th November


John Shawe-Taylor

Department of Computing Science, University of Southampton.

Estimating the moments of a random vector with applications

A general result is described about the quality of approximation of the mean of a distribution by its empirical estimate, which does not involve the dimension of the feature space. Using the kernel trick, this also gives bounds on the quality of approximation of higher-order moments. A number of applications of interest in learning theory are derived, including a new novelty detection algorithm and rigorous bounds on the Robust Minimax Classification algorithm.
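
The kernel-trick computation behind such bounds is easy to illustrate numerically. In the hedged sketch below, the RKHS distance between the empirical mean embedding of m points and that of a large reference sample (standing in for the truth) is computed purely from kernel evaluations and shrinks as m grows, in a 50-dimensional input space.

```python
import numpy as np

# Numerical illustration of mean estimation with the kernel trick: the RKHS
# distance between the empirical mean embedding of m points and that of a
# large reference sample is computed purely from kernel evaluations.  The
# reference sample stands in for the true mean; all choices are arbitrary.

rng = np.random.default_rng(0)
dim = 50
ref = rng.normal(size=(2000, dim))

def k(A, B, s2=dim):
    # RBF kernel matrix via the |a-b|^2 = |a|^2 + |b|^2 - 2 a.b expansion
    d2 = (A**2).sum(1)[:, None] + (B**2).sum(1)[None, :] - 2 * A @ B.T
    return np.exp(-d2 / (2 * s2))

ref_term = k(ref, ref).mean()
for m in (10, 40, 160, 640):
    xs = rng.normal(size=(m, dim))
    # ||mu_m - mu_ref||^2 = mean K(xs,xs) - 2 mean K(xs,ref) + mean K(ref,ref)
    d2 = k(xs, xs).mean() - 2 * k(xs, ref).mean() + ref_term
    print(f"m={m:4d}  RKHS distance ~ {np.sqrt(max(d2, 0.0)):.4f}")
```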

2nd December


Keith van Rijsbergen

Information Retrieval Group, Department of Computing Science, University of Glasgow.

The geometry of Information Retrieval: or, what's QM got to do with IR?

This talk is about some ongoing and incomplete research. I will try to introduce some basic quantum mechanics, just enough to talk about some issues in IR. For some years now I have worked on set-theoretic, logical, and probabilistic models of IR. I have always found it frustrating that the "language" used to discuss these models is model-dependent; that is, one adopts a language appropriate to the model. Quantum mechanics is based on a vector space of a particular kind and deals with set theory, logic, and probability within the same language. It is possible that this is also an appropriate language for IR. I will not introduce yet another model for IR!


Summer 2003

Tuesday 22nd July


Faculty Room South
University of Edinburgh
David Hume Tower
George Square
Edinburgh
EH1 9JX

Barbara Webb

Perception, Action and Behaviour, University of Edinburgh

Spiking neural network models of insect behaviour

Insect systems offer the opportunity to explore very direct connections between single neuron properties and behaviour. I will describe the models we are currently exploring for sound localisation and other behaviours in the cricket. These models are implemented and tested on robots, and I will also discuss the motivations for this approach.

Tuesday 10th June

Lecture Theatre S1
Department of Psychology
University of Edinburgh
7 George Square
Edinburgh
EH8 9JZ

Stephen Eglen

Adaptive and Neural Computation, University of Edinburgh

Developmental modulation of spontaneous activity in the nervous system: form and function

Spontaneous activity is thought to play a crucial role in the development of connections in the nervous system. In the immature retina, this spontaneous activity is correlated between neighbouring neurons, producing waves of activity. We have used multi-electrode recordings to examine the developmental changes in the spontaneous patterns of waves in the mouse retina. Furthermore, to address the role of visual experience upon the disappearance of waves, we have examined spontaneous activity from retinas of mice reared in the dark. Finally, simple computer models based upon Hebbian rules have been used to investigate how these patterns of spontaneous activity can refine neural connections.


David Donaldson

Department of Psychology, University of Stirling

**Talk postponed, date to be announced**

Tuesday 27th May

Room S1
Department of Psychology 
7 George Square 
Edinburgh
EH8 9JZ

Steve Coombes

Mathematical Biology Group, Loughborough University

Modelling Thalamic Relay Networks

The minimal integrate-and-fire-or-burst (IFB) neuron model can reproduce the salient features of experimentally observed thalamocortical (TC) relay and reticular (RE) neuron response properties. These include the temporal tuning of tonic spiking (i.e., conventional action potentials), bursting, and post-inhibitory rebound firing mediated by a low-threshold calcium current. Here we consider networks of IFB neurons with slow synaptic interactions and show how the dynamics may be described with a smooth firing rate model. When the firing rate of the IFB model is dominated by a refractory process, the equations of motion simplify and may be solved exactly. Within a continuum model we show that a network of RE cells with on-centre excitation can support a fast travelling pulse. In contrast, a network of inhibitory TC cells is found to support a slowly propagating lurching pulse. Waves in biologically realistic two-layered networks of RE-TC cells will also be discussed and explored using numerical simulations. This work is relevant to the modelling of waves in thalamic slices.
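
A single IFB unit is straightforward to simulate. The sketch below uses forward-Euler integration with illustrative, unfitted parameters and reproduces the post-inhibitory rebound burst once a hyperpolarising pulse is released.

```python
import numpy as np

# Forward-Euler simulation of a single integrate-and-fire-or-burst (IFB)
# neuron: a leaky integrate-and-fire unit plus a slowly (de)inactivating
# low-threshold calcium variable h.  Parameters follow the general form of
# the model but are illustrative, not fitted; the run shows a rebound burst
# after a hyperpolarising current pulse is released.

dt = 0.1                                   # ms
V, h = -65.0, 0.0
V_theta, V_reset, V_h = -35.0, -50.0, -70.0
g_L, V_L, g_T, V_T, C = 0.035, -65.0, 0.07, 120.0, 2.0
tau_minus, tau_plus = 20.0, 100.0          # h inactivation / de-inactivation
spikes = []

for step in range(40000):                  # 4 seconds of simulated time
    t = step * dt
    I = -0.5 if 1000.0 < t < 2000.0 else 0.0   # hyperpolarising pulse
    m_inf = 1.0 if V > V_h else 0.0            # instantaneous T-current gate
    dV = (-g_L * (V - V_L) - g_T * m_inf * h * (V - V_T) + I) / C
    dh = (1.0 - h) / tau_plus if V < V_h else -h / tau_minus
    V += dt * dV
    h += dt * dh
    if V >= V_theta:                       # fire-and-reset
        spikes.append(t)
        V = V_reset

print(f"{len(spikes)} spikes, the first at t = "
      f"{spikes[0]:.1f} ms (pulse released at 2000 ms)")
```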

Tuesday 13th May

Lecture Theatre 5
Appleton Tower 
Crichton 
EH8 9LE

Carl van Vreeswijk

CNRS, René Descartes University

Heterogeneity and contrast invariance in a model of V1

Several models have recently been proposed to account for the contrast invariance of simple and complex cell tuning curves in the primary visual cortex. What all these models have in common is that all cells have practically identical tuning curves, up to a translation of their preferred orientation. Experimental studies have shown that there is an enormous difference between the tuning curves of cells, and this heterogeneity has not been taken into account in the models. I present a model of a simple cell network with balanced inhibition and excitation. In this model the average response is contrast invariant by construction, and there is a large heterogeneity of the cells' tuning curves. The contrast invariance of tuning curves of individual cells is also investigated, and it is shown that, while well-tuned cells are very close to contrast invariant, this is true to a much lesser extent for badly tuned cells. Experimentally, the contrast invariance of the latter has not yet been investigated, so this result should be seen as an easily tested prediction.


Tuesday 29th April

Conference Suite
School of Informatics 
4 Buccleuch Place 
EH8 9LW

David MacKay

Department of Physics, Cavendish Laboratory, University of Cambridge.

Hands-free writing

Keyboards are inefficient for two reasons: they do not exploit the predictability of normal language, and they waste the fine analogue capabilities of the user's fingers. I describe a system intended to rectify both these inefficiencies. Dasher is a text-entry system in which a language model plays an integral role and which is driven by continuous gestures. Single-finger writing speeds exceeding 35 words per minute can be achieved. Hands-free writing is also possible, at speeds up to 25 words per minute.
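
The core mechanism is arithmetic coding run as navigation: each possible next letter owns a slice of the current interval, sized by a language model's prediction, and writing means steering into nested slices. The toy below uses a bigram model on a made-up corpus; the final interval width gives the information content of the written string.

```python
import numpy as np
from collections import defaultdict

# Toy of the Dasher principle: each possible next letter owns a slice of
# the current interval, sized by a language model's prediction, and writing
# means steering into nested slices (arithmetic coding run as navigation).
# The bigram model and corpus below are made up.

corpus = "the quick brown fox jumps over the lazy dog " * 20
alphabet = sorted(set(corpus))
counts = defaultdict(lambda: defaultdict(lambda: 1))   # add-one smoothing
for a, b in zip(corpus, corpus[1:]):
    counts[a][b] += 1

def slices(prev):
    # divide [0,1) among possible next letters, proportional to p(c|prev)
    total = sum(counts[prev][c] for c in alphabet)
    lo = 0.0
    for c in alphabet:
        p = counts[prev][c] / total
        yield c, lo, lo + p
        lo += p

lo, hi, prev = 0.0, 1.0, ' '
for ch in "the lazy fox":
    for c, a, b in slices(prev):
        if c == ch:                        # steer into this letter's slice
            lo, hi = lo + (hi - lo) * a, lo + (hi - lo) * b
            break
    prev = ch

print(f"interval width {hi - lo:.3e}, i.e. {-np.log2(hi - lo):.1f} bits")
```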

Spring 2003

Tuesday 1st April

Conference Suite
School of Informatics 
4 Buccleuch Place 
EH8 9LW

Geoff Goodhill

Dept of Neuroscience, Georgetown University Medical Center

Wiring up the brain: How axons detect gradients

A key step in the formation of connections between neurons in the developing brain is the guidance of axons over long distances. An important mechanism for such guidance is the detection of molecular concentration gradients in the environment of the axon. In this talk I will present data from a new experimental assay we have developed which shows that axons are extraordinarily sensitive gradient detectors. I will also present a new computational theory for how axons move in response to gradients, and show that the simulation results match well with our experimental results.

Tuesday 25th March

Joe Whittaker 

A model based view of partial least squares

Multivariate statistics, almost as old as statistics itself, has a long and distinguished history that can certainly be traced back to the turn of the century with the work of Karl Pearson. There are several established paradigms, or frameworks, in which to understand the motivation of particular methods; one of these is the multivariate extension of linear models, and another is the geometric paradigm of principal directions based on the singular value decomposition.

Partial least squares is a relative newcomer to this field, stemming from the work of Herman and then Svante Wold in the 1970s, and later used extensively in food science and by statistical chemists such as Martens and Naes. However, it is not clear how to classify partial least squares within the current repertoire of paradigms for multivariate statistics.

The aim of this talk is first to explain partial least squares within the framework of graphical models.
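
For concreteness, the classical NIPALS iteration for partial least squares is short enough to include; the sketch below runs it on invented data, extracting components as directions in X chosen for covariance with Y and deflating both blocks after each one.

```python
import numpy as np

# Classical NIPALS iteration for partial least squares on synthetic data:
# each component is a direction in X chosen for covariance with Y, found by
# alternating regressions; X and Y are deflated before the next component.

rng = np.random.default_rng(0)
n = 100
latent = rng.normal(size=(n, 2))
X = latent @ rng.normal(size=(2, 6)) + 0.1 * rng.normal(size=(n, 6))
Y = latent @ rng.normal(size=(2, 2)) + 0.1 * rng.normal(size=(n, 2))
X, Y = X - X.mean(0), Y - Y.mean(0)

for comp in range(2):
    u = Y[:, [0]]                          # initial Y-score
    for _ in range(100):
        w = X.T @ u; w /= np.linalg.norm(w)            # X weights
        t = X @ w                                      # X scores
        q = Y.T @ t; q /= np.linalg.norm(q)            # Y weights
        u_new = Y @ q                                  # Y scores
        if np.linalg.norm(u_new - u) < 1e-10:
            break
        u = u_new
    p = X.T @ t / (t.T @ t)                # X loadings
    X = X - t @ p.T                        # deflate X
    Y = Y - t @ (Y.T @ t / (t.T @ t)).T    # deflate Y
    print(f"component {comp + 1}: score covariance {(t.T @ u).item():.3f}")
```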

Tuesday 4th March

Arnaud Destrebecqz 
Université Libre de Bruxelles

Behavioural, imaging and modelling studies of implicit sequence learning

Conference Suite
School of Informatics 
4 Buccleuch Place 
EH8 9LW

Can we learn unconsciously? For about 30 years, this question has been at the centre of many implicit learning studies. It is currently enjoying renewed interest, which can be attributed to the development of studies aimed at discovering the neural correlates of consciousness (e.g. Frith, Perry & Lumer, 1999), and to increased recognition of the crucial role played by learning processes in different fields of human cognition such as language (e.g. Saffran et al., 1997). In a typical implicit learning situation, subjects are faced with a complex environment which is governed by a set of regularities. For example, in a sequential reaction time (SRT) task, subjects simply have to press as fast as possible the key that corresponds to the location of a stimulus on a computer screen. Unknown to them, this location depends on the location of the previous stimuli. Typically, performance improvement shows that subjects learn the structure of the sequence even when they are not able to describe precisely the regularities of the sequential material. This result has been replicated many times since the initial study of Nissen & Bullemer (1987), but different and contradictory interpretations have subsequently been proposed to account for it.

In this controversial context, my talk is specifically targeted towards the study of dissociations between implicit and explicit learning. This question will be addressed through the sequence learning paradigm, which can be seen as involving a fundamental dimension of human cognition (sequence processing is indeed involved in many different skills, such as the execution of complex movements, language processing, the planning of action, or problem solving). Despite the numerous studies published in the field, many important questions remain unresolved: Is the acquired knowledge implicit or explicit? What are the mechanisms of sequence learning? Do implicit and explicit forms of learning depend on different learning systems? In order to address these questions, the approach that I will follow combines (1) the use of behavioural methods that are sensitive enough to differentiate between implicit and explicit influences on performance, (2) the development of a computational model capable of simulating performance in this situation, and (3) the use of brain imaging techniques in order to address the hypothesis that implicit and explicit forms of learning depend on different brain structures. Indeed, given the nature of the questions implied by sequence learning, I believe that a multidisciplinary approach will allow for a significant advance in our understanding of this phenomenon.

Tuesday 4th February

Bartlett Mel 
University of Southern California

On Dendritic Compartmentalization and the Neural Substrate for Learning and Memory

Conference Suite
School of Informatics 
4 Buccleuch Place 
EH8 9LW

One of the key questions facing modern neuroscience is to find the right abstraction for the individual neuron. I will describe recent work in which we have used a detailed biophysical model of a pyramidal cell to test the validity of various single-neuron abstractions, and how we have arrived at the conclusion that a single pyramidal neuron acts like a conventional 2-layer neural network with sigmoidal hidden units. This shift in granularity, where individual branches of neurons may themselves act like separately thresholded "neuronlets", has profound implications for the ways in which learned information may be physically incorporated into neural tissue during learning and development. In particular, structural remodeling at the interface between axons and dendrites takes on new importance as a possible substrate for long-term memory. I will present some of the theoretical tools we have used to quantify structurally-mediated storage capacity, including some recent results regarding the geometry of the 3-D interface between axons and dendrites.
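
The proposed abstraction fits in a few lines. Assuming sigmoidal subunits and arbitrary weights, the sketch below computes a 'pyramidal' response as a sigmoid of a sum of per-branch sigmoids:

```python
import numpy as np

# The proposed abstraction in a few lines: each dendritic branch applies its
# own sigmoid to its local synaptic sum (separately thresholded
# "neuronlets"), and the soma applies a second sigmoid to the sum of branch
# outputs, i.e. a conventional 2-layer network.  Branch count, weights and
# the input pattern are arbitrary.

rng = np.random.default_rng(0)
n_branches, syn_per_branch = 8, 20
W = rng.normal(0.0, 1.0, (n_branches, syn_per_branch))  # weights per branch

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def pyramidal_response(x):
    branch_drive = (W * x).sum(axis=1)    # local sum within each branch
    branch_out = sigmoid(branch_drive)    # branch ("hidden unit") outputs
    return sigmoid(branch_out.sum() - n_branches / 2.0)  # somatic output

x = rng.random((n_branches, syn_per_branch))   # one synaptic input pattern
print(f"somatic output: {pyramidal_response(x):.3f}")
```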

Tuesday 21st January

Conference Suite
School of Informatics
4 Buccleuch Place
EH8 9LW

Nick Chater
Department of Psychology,
University of Warwick

Simplicity as a unifying principle in human cognition

Ernst Mach suggested that a principle of economy, or simplicity, might underpin enquiry into the world, ranging from perceptual processes to scientific investigation. This paper considers how this tradition has been formalized using information theory and Kolmogorov complexity theory, and asks how far it can explain data on perception and cognition.

Tuesday 7th January

Gaddum Lecture Theatre (Room G8)
Division of Neuroscience
1 George Square
EH8 9JZ

Roderick Murray-Smith
Department of Computing Science,
Glasgow University & Hamilton Institute, 
Nat. Univ. of Ireland Maynooth

Adaptive control with Gaussian process priors

Gaussian Process prior models can be used to implement nonlinear adaptive control laws. I will discuss why such models are of interest in this area, discuss their applicability to Fel'dbaum's dual control problem, and present a model-predictive control framework which can incorporate appropriate prior knowledge in the form of derivative 'observations', and which can propagate uncertainty for k-step-ahead predictions. Mixtures of Gaussian Processes, which allow efficient use of large training sets in the form of batch data, are applied to the task of modelling multiple patients standing with the aid of Functional Electrical Stimulation.

Autumn Term 2002

Friday 6th December, 2pm

University of Edinburgh
2 Buccleuch Place
Edinburgh EH8 9LW

Graeme Mitchison
The Laboratory of Molecular Biology, Cambridge

What is quantum computation and what is it good for?

Abstract: Quantum computers are frequently in the news, though so far only very simple examples of such devices have been made. What computations could they carry out and how far would they transcend classical computation? I will give an elementary introduction (no familiarity with quantum mechanics assumed), and conclude with some examples where information is gained in ways that seem paradoxical.

Tuesday 12th November, 2pm

Magnus Rattray 
University of Manchester

Phylogenetic inference using RNA molecules

The structures of tRNA and rRNA molecules evolve on a slower timescale than the DNA sequences encoding them. The conserved structure of these molecules therefore has a strong influence on sequence evolution in genes encoding them. In particular, bases in the helical regions of these molecules evolve according to a compensatory process which is required to maintain the stability of the helices. Models of the substitution process specific to the helical regions of structural RNA molecules have been implemented in a new phylogenetic inference software package. We use Bayesian inference techniques and determine the posterior probability of phylogenetic trees and substitution model parameters. I will present our most recent results on the phylogeny of mammals and describe some of the statistical issues which arise when applying RNA-based methods for Bayesian phylogenetic inference.

Tuesday 29th October

Taylan Cemgil 
University of Nijmegen

Probabilistic Methods for Music Transcription

Automatic music transcription refers to the extraction of a human-readable and interpretable description from a recording of a musical performance. Traditional music notation is such a description: it lists the pitch levels (notes) and corresponding timestamps. Such a representation would be useful in several applications such as interactive music performance, music information retrieval (Music-IR) and content description of musical material in large music databases.

In this talk, I will focus on a subproblem in Music-IR, where I assume that exact timing information of notes is available, for example as a stream of MIDI events from a digital keyboard.

I will present a probabilistic generative model for timing deviations in expressive music performance. The structure of the proposed model will turn out to be a switching state space model (switching Kalman filter). The switch variables correspond to discrete note locations as in a musical score. The continuous hidden variables denote the tempo.

Given the model, we can formulate two well known music recognition problems, namely tempo tracking and automatic transcription (rhythm quantization), as filtering and maximum a posteriori (MAP) state estimation tasks. Unfortunately, exact computation of posterior features such as the MAP state is intractable in this model class, so we resort to Monte Carlo methods for integration and optimization. I have compared Markov chain Monte Carlo (MCMC) methods (such as Gibbs sampling, simulated annealing and iterative improvement) with sequential Monte Carlo methods (particle filters). Simulation results suggest that the sequential methods perform better.
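
If the discrete score positions are held fixed, the continuous part of such a model reduces to a linear-Gaussian tracker. The sketch below therefore runs a plain Kalman filter, with the switching machinery deliberately omitted and all numbers synthetic: the hidden state holds the predicted onset time and the beat period, and noisy onset times are the observations.

```python
import numpy as np

# The continuous half of such a model with the discrete score positions
# held fixed: the hidden state is (predicted onset time, beat period), and
# noisy onset times are the observations.  A plain Kalman filter tracks a
# gradual tempo change; the switching machinery of the full model is
# deliberately omitted and all numbers are synthetic.

rng = np.random.default_rng(0)
n = 32
period = np.linspace(0.50, 0.65, n)              # gradually slowing tempo
onsets = np.cumsum(period) + rng.normal(0, 0.01, n)

A = np.array([[1.0, 1.0], [0.0, 1.0]])           # next onset += period
H = np.array([[1.0, 0.0]])                       # only onsets are observed
Q = np.diag([1e-6, 1e-4])                        # the period drifts slowly
R = np.array([[0.01 ** 2]])                      # onset timing noise

x = np.array([[0.0], [0.5]])                     # start: time 0, rough tempo
P = np.diag([0.1, 0.01])
for y in onsets:
    x, P = A @ x, A @ P @ A.T + Q                # predict the next onset
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)               # Kalman gain
    x = x + K @ (np.array([[y]]) - H @ x)        # correct with the onset
    P = (np.eye(2) - K @ H) @ P

print(f"estimated period {x[1, 0]:.3f} s, true final {period[-1]:.3f} s")
```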

The methods can be applied in both online and batch scenarios (such as tempo tracking and transcription) and are thus potentially useful in a number of music applications such as adaptive automatic accompaniment, score typesetting and music information retrieval.

My papers about the topic can be downloaded from http://www.mbfys.kun.nl/~cemgil/papers.html


Spring Term 2002

Tuesday 11th June

David Wild 
Keck Graduate Institute

Modelling biological responses using gene expression profiling and linear dynamical systems

ABSTRACT

Linear dynamical systems are a subclass of dynamic Bayesian networks used for modelling time series data; they assume the existence of a hidden state variable which evolves with Markovian dynamics and from which we can make noisy measurements. We have applied these methods to the analysis of highly replicated gene expression microarray time series data, with the intention of building testable hypotheses about the causal influences between gene expression events involved in the activation of human T cells.

Tuesday 28 May

Durk Husmeier 
BioSS

Statistical methods for phylogenetic inference and the detection of recombination in DNA sequence alignments

ABSTRACT, Part 1 (first 30 minutes)

I will summarize the statistical approach to phylogenetic inference from DNA sequence alignments. The methods described will be used in the second part of my talk. 

ABSTRACT, Part 2 (last 30 minutes)

The recent advent of multiple-resistant pathogens has led to an increased interest in recombination as an important, and previously underestimated, source of genetic diversification in bacteria and viruses. In my talk, I will describe a statistical method for detecting recombination in multiple DNA sequence alignments. This approach is based on the combination of two probabilistic graphical models: (1) a taxon graph (phylogenetic tree) representing the relationship between the taxa, and (2) a site graph (hidden Markov model) representing interactions between different sites in the DNA sequence alignment. I will compare three different parameter estimation techniques, and will discuss the results obtained on various synthetic and real-world DNA sequence alignments.

Tuesday 30 April

Björn Brembs 
University of Texas, Houston

Operant Conditioning of Feeding Behavior: Rewarding Aplysia with a Steak?

Operant conditioning is a form of associative learning through which an animal learns about the consequences of its behavior. We have developed an appetitive operant conditioning procedure using Aplysia feeding behavior that induces long-term memory. Biophysical changes that accompanied the memory were found in an identified neuron (cell B51) that is considered critical for the expression of the rewarded behavior. Similar cellular changes in B51 were produced by contingent reinforcement of B51 with dopamine in a single-cell analog of the operant procedure. The mechanisms underlying operant conditioning can now be compared to a classical conditioning paradigm developed in the same preparation. Results from both operant and classical conditioning are continuously entered into a computational model of the central pattern generator (CPG) underlying Aplysia feeding behavior.

Monday 29 April

Dr Constance Hammond 
INMED (Mediterranean Neuroscience Institute) 
Marseille

Firing pattern changes of subthalamic nucleus neurons during high frequency stimulation

Deep brain stimulation greatly ameliorates parkinsonian symptoms: akinesia, rigidity and tremor. This treatment consists of the uninterrupted high frequency stimulation (HFS) of both subthalamic nuclei via chronically implanted electrodes. The subthalamic nucleus belongs to the basal ganglia network, which is responsible for the realisation of automatized movements. It contains a homogeneous population of projection neurones that presents two types of discharge pattern depending on membrane potential: a single-spike and a bursting mode. Transition from the tonic to the bursting mode is observed in parkinsonian patients and in animal models of parkinsonism. Since subthalamic lesion also ameliorates parkinsonian symptoms, it has been proposed that HFS would somehow disconnect subthalamic neurones from the network. The aim of our study was to understand the electrophysiological effect of HFS, locally, on the activity of subthalamic neurones recorded in patch clamp in brain slices in vitro.

Thursday 04 April

Dr Ricarda Schubotz 
Max Planck Institute of Cognitive Neuroscience 
Leipzig

Predicting dynamic events like moving targets or human action: fMRI suggests premotor cortex in sensorimotor planning

Findings in the monkey indicate that the lateral premotor cortex is involved not only in motor planning and preparation, but responds also to sensory events. Likewise, many fMRI studies in humans show premotor cortex activations in the absence of motor requirements. Using functional MRI, we have tried to find out what makes the premotor cortex interested in visual and auditory perceptions. As a result, we find evidence for a pragmatic, i.e., action-related, stimulus representation that parallels the functional model for the monkey premotor cortex. Taking the results from several studies together, we suggest that the premotor cortex integrates ongoing sensory patterns and corresponding optional motor plans, no matter whether the former are obviously caused by other beings (human action) or not (target motion). This extends the idea of an "action-perception mirror system" introduced by Rizzolatti and co-workers in the context of observed action and premotor area F5 in the monkey. In this context we are able to show that not Broca's Area (BA 44), but rather ventral BA 6, is the most probable functional homologue of monkey area F5.

Spring Term 2001

Friday 22nd June


Professor Dr Stefan Pollmann 
Cognitive Neurology 
University of Leipzig

fMRI Studies of Attentional Selection Processes

Typically, we can attentively process only a fraction of the stimuli which reach our senses. The mechanisms by which we select the material that will be attentively processed may differ in the kind of information that is selected, such as location or visual dimension (e.g. color or movement). Moreover, selection may be achieved by facilitation of relevant or inhibition of irrelevant stimuli. I will report data from a number of event-related functional magnetic resonance experiments which investigated the functional neuroanatomy of these processes. Particular emphasis will be laid on the control processes which guide attentional selection.

Tuesday 19th June

David Donaldson 
Department of Psychology 
University of Stirling

Identifying and dissociating episodic memory processes using fMRI (or: Why `when', not just `where', is `what' in the brain)

Recent investigations of episodic memory (the ability to remember experiences and events from one's past) have begun to identify a network of regions that are sensitive to successful remembering. The ability to identify and dissociate the neural correlates of cognitive processes involved in memory retrieval has depended in part upon developments in fMRI paradigm design. In particular, the move from `blocked' to `event-related' methods has been of significant benefit for studies of memory. The current talk outlines these developments and introduces a new paradigm, `mixed designs', that allows further specification and dissociation of the neural correlates of episodic memory.

Tuesday 12th June

David Barber 
Institute for Adaptive & Neural Computation 
Division of Informatics, University of Edinburgh

Graphical Models: An Introduction and Applications

Graphical models are a framework for probabilistic modelling, providing a lucid description of the assumed dependencies in the model. This general framework incorporates many previous models as (often rather simple) graphical models, including Hidden Markov Models, latent variable models, and Boltzmann machines, and can be seen as the natural progression from non-linear models (e.g. neural nets) to include stochastic effects.

An elegant aspect of such models has been the development of general algorithms, highlighting the usefulness of marrying concepts from graph and probability theory. Tractability issues will be discussed, together with the deep connection between this framework and statistical mechanics. An example application will discuss the automatic generation of music. Dance floor space will be provided.
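
To make the framework concrete, here is a minimal sketch of the forward algorithm for a hidden Markov model, one of the graphical models mentioned above. All probabilities are invented toy values; this is an illustration of message passing on a simple graphical model, not anything specific to the talk.

import numpy as np

# Toy HMM: 2 hidden states, 2 observation symbols. All values invented.
A = np.array([[0.9, 0.1],     # P(z_t = j | z_{t-1} = i), transition matrix
              [0.2, 0.8]])
B = np.array([[0.7, 0.3],     # P(x_t = k | z_t = i), emission matrix
              [0.1, 0.9]])
pi = np.array([0.5, 0.5])     # initial state distribution

def forward(obs):
    # Likelihood P(x_1..x_T), summing over all hidden state paths.
    alpha = pi * B[:, obs[0]]          # alpha_1(i) = pi_i * P(x_1 | i)
    for x in obs[1:]:
        alpha = (alpha @ A) * B[:, x]  # marginalise out the previous state
    return alpha.sum()

print(forward([0, 1, 1, 0]))           # likelihood of a short sequence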

Tuesday 16th January

Sam Roweis 
Gatsby Computational Neuroscience Unit, University College London 

Scalable Learning of Nonlinear Manifolds from Data

How many numbers does it take to represent a complex object such as an image of a face? Obviously, one number per pixel is enough, but many fewer are actually needed. In fact, there is a thin "submanifold" of faces hiding in the very high dimensional image space. Learning the structure of such manifolds is the problem of nonlinear dimensionality reduction. Its solution allows compression, generation, interpolation and classification of complex objects.

A first step is to do "embedding": given some high dimensional training data, find some low dimensional representation of each point which preserves desired relationships. I will introduce locally linear embedding (LLE), a new unsupervised learning algorithm I developed with Lawrence Saul (AT&T Labs), which uses local symmetries and linear reconstructions to compute low dimensional, neighborhood preserving embeddings of multivariate data. The embeddings of LLE, unlike those generated by multidimensional scaling (MDS) or principal components analysis (PCA), are able to capture the global structure of nonlinear manifolds. In particular, when applied to images of faces, LLE discovers a coordinate representation of facial attributes; applied to documents of text, it colocates words with similar contexts in a continuous semantic space.
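
The three LLE steps are compact enough to sketch directly. The numpy version below (neighbour search, locally linear reconstruction weights, bottom eigenvectors) follows the published recipe in outline only; the parameter values are arbitrary, and a maintained implementation such as sklearn.manifold.LocallyLinearEmbedding would be the practical choice.

import numpy as np

def lle(X, n_neighbors=10, n_components=2, reg=1e-3):
    # X: (n_points, n_dims). Returns an (n_points, n_components) embedding.
    n = X.shape[0]
    # Step 1: k nearest neighbours under Euclidean distance (column 0 is self).
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    knn = np.argsort(D, axis=1)[:, 1:n_neighbors + 1]
    # Step 2: weights that best linearly reconstruct each point from its
    # neighbours, constrained to sum to one.
    W = np.zeros((n, n))
    for i in range(n):
        Z = X[knn[i]] - X[i]                          # centre on x_i
        C = Z @ Z.T
        C += reg * np.trace(C) * np.eye(n_neighbors)  # regularise if singular
        w = np.linalg.solve(C, np.ones(n_neighbors))
        W[i, knn[i]] = w / w.sum()
    # Step 3: bottom eigenvectors of (I - W)^T (I - W); the constant
    # eigenvector (smallest eigenvalue) is discarded.
    M = (np.eye(n) - W).T @ (np.eye(n) - W)
    _, vecs = np.linalg.eigh(M)
    return vecs[:, 1:n_components + 1]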

But we are *more ambitious* than just embedding. We want an explicit mapping between the data and embedding spaces that is valid both on and off the training data. In effect, we want a magic box that has a few knobs which, when turned, generate all variations of the objects in question (e.g. poses and expressions of faces); but no setting of knobs should generate an image that is not a face. We also want to be able to show the box an image and have it recommend knob settings which would generate that object. I will describe how, starting only with a large database of examples, we might build such a box by first applying LLE to the data. Finally, I will discuss some of the work I have done to scale up the algorithm so it works on very large datasets.

Joint work with Lawrence Saul, AT&T Labs -- Research.

 

 

Autumn term 2000

Monday 23rd October

Dr Jim Stone (University of Sheffield) 
Independent Components Analysis and Other Source Separation Methods

 

Overview

The talk will be in two parts. Part 1 will consist of a brief and informal tutorial on independent component analysis (ICA), based on the simple geometric transformations associated with linear mixtures of signals. Part 2 will be an account of a recently developed method which, unlike ICA, is based on temporal predictability, as follows.

Abstract

A measure of temporal predictability is defined, and used to separate linear mixtures of signals. Given any set of statistically independent source signals, it is conjectured here that a linear mixture of those signals has the following property: the temporal predictability of any signal mixture is less than (or equal to) that of any of its component source signals. It is shown that this property can be used to recover source signals from a set of linear mixtures of those signals by finding an un-mixing matrix which maximises a measure of temporal predictability for each recovered signal. This matrix is obtained as the solution to a generalised eigenvalue problem; such problems have scaling characteristics of O(N^3), where N is the number of signal mixtures. In contrast to independent component analysis, the temporal predictability method requires minimal assumptions regarding the probability density functions of source signals. It is demonstrated that the method can separate signal mixtures in which each mixture is a linear combination of source signals with super-Gaussian, sub-Gaussian, and Gaussian probability density functions, as well as mixtures of voices and music.
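
As a rough sketch of this recipe (not the paper's exact predictability measure), one can use exponential moving averages as the short- and long-term predictors and solve the resulting generalised eigenvalue problem. The half-life constants below are illustrative assumptions.

import numpy as np
from scipy.linalg import eigh
from scipy.signal import lfilter

def ema(X, halflife):
    # Exponential moving average along time: y_t = lam*y_{t-1} + (1-lam)*x_t.
    lam = 2.0 ** (-1.0 / halflife)
    return lfilter([1 - lam], [1, -lam], X, axis=1)

def predictability_separation(X, short=2.0, long_=500.0):
    # X: signal mixtures, shape (n_mixtures, n_samples).
    dS = X - ema(X, short)    # short-term prediction errors
    dL = X - ema(X, long_)    # long-term prediction errors
    # Maximise w^T C_long w / w^T C_short w: a generalised eigenproblem.
    vals, W = eigh(dL @ dL.T, dS @ dS.T)
    # Columns of W are un-mixing vectors, ordered by increasing
    # predictability ratio; the recovered signals are the projected rows.
    return W.T @ X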

Context

Almost every signal measured within a physical system is actually a mixture of statistically independent source signals. However, because source signals are usually generated by the motion of mass (e.g. a membrane), the form of physically possible source signals is underwritten by the laws that govern how masses can move over time. This suggests that the most parsimonious explanation for the complexity of a given observed signal is that it consists of a mixture of simpler source signals, each of which is from a different physical source. Here, this observation has been used as a basis for recovering source signals from mixtures of those signals.

Consider two people speaking simultaneously, with each person a different distance from two microphones. Each microphone records a linear mixture of the two voices. The two resultant voice mixtures exemplify three universal properties of linear mixtures of statistically independent source signals:

1) Temporal Predictability (Conjecture): the temporal predictability of any signal mixture is less than (or equal to) that of any of its component source signals,

2) Gaussian Probability Density Function: the central limit theorem ensures that the extent to which the probability density function (pdf) of any mixture approximates a Gaussian distribution is greater than (or equal to) any of its component source signals, and,

3) Statistical Independence: the degree of statistical independence between any two signal mixtures is less than (or equal to) the degree of independence between any two source signals. Property 2 forms the basis of projection pursuit (Friedman, 1987), and properties 2 and 3 are critical assumptions underlying independent component analysis (ICA) (Jutten & Herault, 1988; Bell & Sejnowski, 1995).

All three properties are generic characteristics of signal mixtures. Unlike properties 2 and 3, property 1 (temporal predictability) has received relatively little attention as a basis for source separation. Here, a method explicitly based on a simple measure of temporal predictability is introduced. Preprints of a paper describing this method can be obtained by email request to J.V.Stone@sheffield.ac.uk.

 

Monday 6th November

Professor Mike Denham (University of Plymouth) 
The Role of the Septal-Hippocampal System in a Global Network for Episodic Memory: What meets Where and When.

In a number of recent papers, the proposal has been put forward that there exists a brain-wide, network-based system for episodic memory, which involves the functional interaction of a set of widely spread areas of the brain (e.g. Fuster, 1997; Fletcher et al., 1997). In this talk, I will describe this network in terms of its putative component subsystems and their roles and connectivity, and propose the idea that one particular subsystem, the septo-hippocampal system, plays a key integrating role in the network. I will also examine some of the functions and mechanisms in the septo-hippocampal system which may be needed to support this role.

 

Monday 20th November (12 noon)

Professor Florentin Wörgötter (University of Stirling)
A video-real time chip for stereoscopic depth analysis based on cortical cell behaviour: Models, psychophysics and implementation.

Binocular (stereoscopic) vision allows us to perceive our environment as three-dimensional, but how the neural network of the brain encodes this percept remains an unresolved problem. At the same time, technical implementations which try to solve the stereo problem are far from perfect. Over the last few years we have attempted to contribute to the solution of both problems. Thus, in this talk, I will first derive a formalism which allows stereoscopic depth to be computed using realistic cortical neurons. This formalism has led to a psychophysical prediction which we were able to confirm experimentally. In the second part of the talk I will focus on a different implementation of the stereo problem, describing an algorithm which computes stereoscopic depth in video real time by means of (causal) electronic filters (such as band-pass and low-pass operations). This algorithm has been implemented on an FPGA and is now used in industry. Several movies will be shown which demonstrate its functionality. For a first glimpse look at
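
The FPGA algorithm itself is not reproduced here, but a toy version of filter-based disparity estimation conveys the general idea: band-pass the two images with a complex (Gabor) filter and read disparity off the local phase difference of the responses.

import numpy as np

def gabor(sigma=4.0, omega=0.5):
    # Complex Gabor filter: Gaussian envelope times a complex carrier.
    x = np.arange(-int(4 * sigma), int(4 * sigma) + 1)
    return np.exp(-x**2 / (2 * sigma**2)) * np.exp(1j * omega * x)

def phase_disparity(left_row, right_row, sigma=4.0, omega=0.5):
    g = gabor(sigma, omega)
    L = np.convolve(left_row, g, mode='same')
    R = np.convolve(right_row, g, mode='same')
    dphi = np.angle(L * np.conj(R))  # local phase difference in (-pi, pi]
    return dphi / omega              # disparity estimate in pixels

# Toy check: a sinusoidal pattern shifted by 3 pixels between the eyes.
x = np.arange(256)
left, right = np.sin(0.5 * x), np.sin(0.5 * (x - 3))
print(np.median(phase_disparity(left, right)))  # close to 3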

Monday 4th December (talk will be at noon, not 11am.)

Professor Dave Lee (University of Edinburgh)
Information variable in the nervous system.

The nervous system might follow principles similar to those involved in the sensory guidance of movement, since that is the primary function that it serves. A basic principle of sensory guidance appears to be controlling the closure of gaps (as when reaching for something or directing gaze), using information solely about the time-to-closure of the gap at the current closure-rate. This is called tau of the gap, and is equal to the reciprocal of the relative rate of closure. A number of behavioural experiments support this idea. In the nervous system, information about how gaps are closing presumably resides in variables defined on neural spike trains, the information carriers. In line with the behavioural results on tau, it is hypothesised that one such information variable is the relative rate of flow of electrical energy in spike trains. There is a mathematical argument for the hypothesis and empirically it is supported by single-unit recordings from motor cortex. The hypothesis also offers a way of understanding how the flow of electrical energy along neurons directs body movement at the level of the muscles and at the sensory level.
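
A small numerical illustration of tau follows; the gap trajectory is invented, and sign conventions for tau vary.

import numpy as np

t = np.linspace(0.0, 0.9, 400)
x = 1.0 - t**2              # a gap closing under constant acceleration
xdot = np.gradient(x, t)    # closure rate (negative while the gap shrinks)
tau = x / xdot              # time-to-closure at the current closure rate;
                            # equivalently 1 / (relative rate of closure)
print(tau[100], tau[300])   # |tau| shrinks as closure approaches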

Summer term 2000

Monday 5th June
Manfred Opper (Aston University) 
Sparse Representation for Gaussian Process Models

In Bayesian approaches to statistical modelling the (a priori) uncertainties about model parameters are encoded within probability distributions. In cases where those latent parameters are entire functions over some input domain, distributions over function spaces are needed, the simplest choice being the so-called Gaussian process priors. While the basic idea is simple and can be applied to a variety of important statistical problems (regression, classification) its practical realization for large data sets is somewhat limited by the drastic increase of computational complexity. The talk discusses an approach to overcome these limitations. The method is based on sequential construction of a relevant subsample of the data which fully specifies the prediction of the model. Experimental results on toy examples and large real-world datasets indicate the efficiency of the approach.
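
The sequential selection criterion itself is not reproduced here, but the following sketch shows the general shape of such sparse approximations: predictions are specified entirely by a small subsample of the data, chosen at random below purely for illustration.

import numpy as np

def rbf(A, B, lengthscale=1.0):
    # Squared-exponential kernel between two sets of points.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale**2)

def sparse_gp_mean(X, y, X_test, n_subset=50, noise=0.1, seed=0):
    # Predictive mean of GP regression using only a subsample of the data:
    # O(m^3) in the subset size m rather than O(n^3) in the full data size.
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(X), size=min(n_subset, len(X)), replace=False)
    Xs, ys = X[idx], y[idx]
    K = rbf(Xs, Xs) + noise**2 * np.eye(len(Xs))
    alpha = np.linalg.solve(K, ys)
    return rbf(X_test, Xs) @ alpha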

 

Spring term 2000

Monday 17th January
Dr Zoubin Ghahramani (University College London) 
Bayesian Learning of Model Structure

In any field where models are built from data there is a tension between fitting the data well and keeping the model simple. I will discuss this problem in the context of learning the structure of latent variable models and other probabilistic graphical models encountered in pattern recognition and machine learning. The Bayesian approach allows a principled treatment of the problem of learning model structure. The tension between data fit and model complexity is resolved via Ockham's razor, which arises from averaging over all possible settings of the model parameters. Unfortunately, for most non-trivial problems these averages are intractable, resulting in the use of large-sample limits, local Gaussian approximations, or Markov chain Monte Carlo methods. I will describe a new approach to Bayesian inference based on variational approximations. Variational methods are deterministic, global, generally fast, and the objective function is guaranteed to increase monotonically. The variational optimisation procedure generalises the EM algorithm. The optimal forms of the approximating distributions fall out of the optimisation (and need not be Gaussian). Most importantly, the variational Bayesian approach can be used to compare and adapt model structures since the objective function transparently incorporates the model complexity cost. This approach has been used to automatically infer the most probable number of clusters in data and the intrinsic latent-space dimensionality of each cluster. Joint work with Hagai Attias and Matthew J Beal.
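
As an illustration of the idea (using scikit-learn's variational Gaussian mixture, not the authors' code), a variational Bayesian mixture prunes superfluous components and thereby infers the effective number of clusters.

import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(0)
# Three well-separated 2-D clusters; the model is given ten components.
X = np.vstack([rng.normal(m, 0.3, size=(200, 2)) for m in (-2.0, 0.0, 2.0)])
vb = BayesianGaussianMixture(n_components=10, max_iter=500).fit(X)
# Superfluous components get negligible weight under the variational
# objective, so the surviving count estimates the number of clusters.
print(np.sum(vb.weights_ > 0.01))  # typically 3 for this toy data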

Monday 31st January
Dr Colin Fyfe (University of Paisley) 
A Canonical Correlation Analysis Neural Network for Extracting Depth Information from Images

We review a neural implementation of Canonical Correlation Analysis and display its use on real and artificial data sets. In particular we investigate its performance on Becker and Hinton's random dot stereogram data. We then derive a family of algorithms from both a probabilistic perspective and from specific cases of Becker's Maximisation of Information criteria. This family is characterised by being a simple combination of Hebbian and anti-Hebbian learning. We then put constraints on families of output neurons and show interesting results on Stone's much more complex stereo disparity data set. Finally we extend the method by introducing a nonlinearity and compare with Kernel Canonical Correlation Analysis - a new technique derived from Kernel Principal Component Analysis which is itself based on the kernels used by Support Vector Machines.
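
For reference, the quantity such networks estimate can be computed in batch by the classical linear-algebra route (whiten each view, then take the SVD of the cross-covariance); this is emphatically not the Hebbian implementation discussed in the talk.

import numpy as np

def canonical_correlations(X, Y, k=1, reg=1e-6):
    # X: (n, p), Y: (n, q). Returns the top-k canonical correlations.
    X = X - X.mean(0)
    Y = Y - Y.mean(0)
    Cxx = X.T @ X / len(X) + reg * np.eye(X.shape[1])
    Cyy = Y.T @ Y / len(Y) + reg * np.eye(Y.shape[1])
    Cxy = X.T @ Y / len(X)
    # Whiten each view with the inverse Cholesky factor, then take the
    # SVD of the whitened cross-covariance.
    Wx = np.linalg.inv(np.linalg.cholesky(Cxx))
    Wy = np.linalg.inv(np.linalg.cholesky(Cyy))
    s = np.linalg.svd(Wx @ Cxy @ Wy.T, compute_uv=False)
    return s[:k]  # each correlation lies in [0, 1]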

Monday 14th February
Dr Mike Oram (University of St Andrews) 
Time scale of neural communication

Over the past 15 years there has been increasing evidence that neurones might communicate with each other using codes that operate on the millisecond time scale. I will describe my work using data from the LGN and from primary visual, inferior temporal and motor cortices, which indicates that, when analysed using appropriate methods, codes with millisecond precision carry only information that is already available from measures at a precision of hundreds of milliseconds. The new spike-count-matched model provides insight into the relationship between coarse and fine temporal measures of neural responses.

Recent simulations of neural responses indicate that a simple count of the synchronous inputs that occur by chance provides an accurate estimate of the correlation between the neural inputs. Since correlation between the coarse temporal measures of neural responses increases the available information, these simulations may indicate a previously undocumented role for precisely timed spike patterns in the decoding of neural signals.
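
A toy version of the coarse-versus-fine comparison, with invented firing rates and a naive plug-in information estimator: since the simulated rate is constant within the response window, the fine temporal word carries no extra information about the stimulus, and any apparent excess in the estimate is limited-sampling bias, one reason the choice of analysis method matters.

import numpy as np
from collections import Counter

def plugin_mi(stims, resps):
    # Naive plug-in estimate of I(S;R) in bits from paired samples.
    n = len(stims)
    ps, pr = Counter(stims), Counter(resps)
    pj = Counter(zip(stims, resps))
    return sum(c / n * np.log2(c * n / (ps[s] * pr[r]))
               for (s, r), c in pj.items())

rng = np.random.default_rng(1)
stims, coarse, fine = [], [], []
for trial in range(2000):
    s = trial % 2                        # two alternative stimuli
    p_spike = (0.02, 0.05)[s]            # spike probability per 1 ms bin
    train = rng.random(100) < p_spike    # 100 ms response window
    stims.append(s)
    coarse.append(int(train.sum()))      # total spike count
    fine.append(tuple(train.reshape(10, 10).sum(1)))  # 10 ms resolution word
# The fine code cannot carry more stimulus information than the count here;
# any apparent excess is limited-sampling bias in the plug-in estimator.
print(plugin_mi(stims, coarse), plugin_mi(stims, fine))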

Monday 28th February
Dr Andrew Holmes (University of Glasgow) 
Statistical Parametric Mapping for functional MRI

Statistical Parametric Mapping refers to the construction and assessment of spatially extended statistical processes to test hypotheses about functional neuroimaging data. Many published functional neuroimaging experiments utilize some form of statistical parametric mapping, often using the freely available SPM software.

In this talk, we aim to present an accessible overview of the concepts and theory of Statistical Parametric Mapping, concentrating on statistical issues. We will also consider the ontology and practicalities that have led to the widespread usage of this methodology, and discuss its limitations.
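
The core computation behind such maps can be sketched in a few lines: fit the same general linear model at every voxel and form a t-map for a contrast of interest. Real SPM analyses add haemodynamic modelling, smoothing, and random-field corrections for the spatially extended tests described above; the sketch below is a bare-bones illustration.

import numpy as np

def glm_tmap(Y, X, c):
    # Y: (n_scans, n_voxels) data; X: (n_scans, n_regressors) design
    # matrix; c: (n_regressors,) contrast vector. Returns one t per voxel.
    beta, _, _, _ = np.linalg.lstsq(X, Y, rcond=None)   # per-voxel GLM fit
    resid = Y - X @ beta
    dof = X.shape[0] - np.linalg.matrix_rank(X)
    sigma2 = (resid ** 2).sum(0) / dof                  # residual variance
    var_c = c @ np.linalg.pinv(X.T @ X) @ c             # contrast variance
    return (c @ beta) / np.sqrt(sigma2 * var_c)         # t-statistic map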