Christopher Ball PhD

Research Interests

Colour is a vivid sensation for most people. We usually have no trouble identifying the colour of an object, even though light reflected by the object contains a broad spectrum of wavelengths that is not constant over the object's surface. Additionally, the colour we perceive does not depend only on wavelength: surrounding colours in space and time have a large influence. How does the brain create the colours we perceive? My work is exploring a part of this question by modelling how the visual system could develop a representation of colour based on visual experience.

Publications:
2012
  ImaGen: Generic library for 0D, 1D and 2D pattern distributions
Bednar, JA & Ball, CE 2012, 'ImaGen: Generic library for 0D, 1D and 2D pattern distributions', SciPy2012: Scientific Computing with Python, Austin, United States, 16/07/12 - 21/07/12.
Simulations and other scientific applications require streams of numeric scalar or array values to determine initial conditions or external inputs. In the Topographica brain simulation software project, the ImaGen library was developed as a general means of supplying such data so that subsequent code can ignore the details of how the data was generated. Unless constrained by external data such as bitmap images, the patterns are all resolution-independent, and can be rendered at any desired size and level of detail. A typical use in Topographica is to generate visual input patterns for a computational model of the visual system. In this case, the patterns can either be read from image databases or movies, or generated algorithmically using a library of artificial stimuli that can be positioned, scaled, rotated, and combined arbitrarily and flexibly over time. The same library is used for specifying auditory input patterns, neural-network weight patterns, and in general any 2D, 1D, or 0D (scalar) numerical pattern series. In each case, other code simply calls the pattern-generating object whenever it needs a new pattern, allowing any existing or user-defined pattern to be used for a given purpose. ImaGen thus provides generality and flexibility without adding complexity at the point of use. Using the Param library (see separate presentation at SciPy 2012), every aspect of the patterns can be controlled flexibly to provide a customizable stream of inputs on demand. 
For instance, an ImaGen pattern.Line object could specify some of its Parameters as numbers, with others drawn from specified distributions, to provide a never-ending stream of patterns that vary according to the given distributions:

    from imagen import pattern, numbergen
    import pylab

    a = pattern.Line(xdensity=60, ydensity=60, size=0.3, orientation=0.2,
                     x=numbergen.NormalRandom(mu=0.5, sigma=0.2),
                     y=numbergen.UniformRandom(lbound=-0.5, ubound=0.5, seed=34))
    pylab.imshow(a())
    pylab.imshow(a())

Patterns can be combined easily to construct any arbitrarily complex series of patterns dynamically:

    import numpy

    b = pattern.Composite(generators=[a, pattern.random.UniformRandom()],
                          operator=numpy.multiply)

These capabilities make ImaGen very useful for any visual system modeling project, but also for other neural network and machine-learning systems, and in general for any scientific application needing dynamic and arbitrarily complex yet controllable streams of 0D, 1D, or 2D input patterns. The core rendering functionality of ImaGen is similar to evaluating a NumPy function using numpy.meshgrid, but with a consistent object-based interface using a library of patterns. Some of the image-rendering functionality is similar to that of the GD and Agg libraries, but the focus of ImaGen is on generating NumPy arrays directly rather than RGB color images (though color images are also supported as three 2D NumPy arrays). ImaGen is also a pure Python module with few external dependencies (just NumPy and Param), rather than a wrapper for C code, and is thus more easily integrated into other packages. ImaGen is freely available under a BSD license from: http://ioam.github.com/imagen/
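The abstract's comparison to evaluating a NumPy function over numpy.meshgrid can be made concrete. The following is a minimal, hypothetical sketch in pure NumPy (not the actual ImaGen implementation; the function name and parameters mirror the Line example above but are illustrative only) of resolution-independent rendering: a pattern is a function over a coordinate grid, so the same specification can be rendered at any density.

```python
import numpy as np

def render_line(xdensity=60, ydensity=60, size=0.3, orientation=0.2,
                x=0.0, y=0.0, thickness=0.05):
    """Render an oriented line segment into a 2D array over [-0.5, 0.5]^2."""
    xs = (np.arange(xdensity) + 0.5) / xdensity - 0.5   # pixel-center coordinates
    ys = (np.arange(ydensity) + 0.5) / ydensity - 0.5
    X, Y = np.meshgrid(xs, ys)
    # Rotate coordinates into the pattern's own frame, offset by (x, y).
    c, s = np.cos(orientation), np.sin(orientation)
    u = c * (X - x) + s * (Y - y)        # coordinate along the line
    v = -s * (X - x) + c * (Y - y)       # coordinate across the line
    return ((np.abs(u) <= size / 2) & (np.abs(v) <= thickness / 2)).astype(float)

img = render_line(x=0.1, y=-0.2)                          # 60x60 rendering
hi_res = render_line(xdensity=240, ydensity=240, x=0.1, y=-0.2)  # same pattern, finer grid
```

Because the pattern is defined in continuous coordinates and only sampled at render time, the high-resolution call yields the same line with more detail, which is the sense in which such patterns are resolution-independent.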
General Information
Organisations: Neuroinformatics DTC.
Authors: Bednar, James A. & Ball, Christopher E.
Publication Date: 2012
Publication Information
Category: Poster
Original Language: English
2010
  Orientation-contingent color aftereffects (the McCollough effect) can arise through Hebbian synaptic adaptation of horizontal connections
Ball, CE, Ciroux, JB & Bednar, JA 2010, 'Orientation-contingent color aftereffects (the McCollough effect) can arise through Hebbian synaptic adaptation of horizontal connections', 40th Society for Neuroscience Annual Meeting, San Diego, United States, 13/11/10 - 17/11/10.
General Information
Organisations: Institute for Adaptive and Neural Computation.
Authors: Ball, Christopher E., Ciroux, Julien B. & Bednar, James A.
Publication Date: 2010
Publication Information
Category: Poster
Original Language: English
2009
  Topographica: Computational Modeling of Neural Maps
Ball, CE & Bednar, JA 2009, 'Topographica: Computational Modeling of Neural Maps', 2nd INCF Congress of Neuroinformatics, Pilsen, Czech Republic, 6/09/09 - 8/09/09.
Topographica is designed to make large-scale computational modeling of the development, structure, and function of topographic neural maps practical. Models can be built that focus on structural, functional, or integrative phenomena, either in the visual cortex or in other sensory or motor areas. The Topographica software package currently provides:
- Arbitrary feedforward, lateral, and feedback connectivity patterns, and adaptation/self-organization, using firing rate or simple spiking neuron models;
- Automatic generation of inputs for modelling development and for testing, allowing user control of the statistical environment, based on natural or computer-generated inputs;
- A graphical user interface for designing networks and experiments, with integrated visualization and analysis tools for understanding the results, as well as for validating models through comparison with experimental results.
Rapid, interactive prototyping of multiple, large maps is possible on desktop workstations; automatic scaling to network arrangements and phenomena at larger scales, along with a batch mode for submitting jobs to clusters, allows simulation at greater levels of detail. Freely available from topographica.org and written in Python, Topographica is easily extensible by users, and runs on Windows, Macintosh, and Linux. We invite neural map researchers in all fields to begin using Topographica, to help establish a community of researchers who can share code, models, and approaches. In this demo, we will show how models running in other simulators can be analyzed in a rich, consistent framework using Topographica. This research was supported in part by the US National Institute of Mental Health under Human Brain Project grant R01-MH66991, and by the UK EPSRC-funded Neuroinformatics Doctoral Training Centre at the University of Edinburgh.
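The kind of firing-rate model mentioned above can be illustrated with a toy settling-and-adaptation step: a sheet of rate units receives afferent input through one weight matrix and lateral feedback through another, passes the sum through a rectifier, and then adapts its afferent weights Hebbianly with divisive normalization. This is a hypothetical minimal sketch in NumPy, not the actual Topographica code; all sizes, weight scales, and the learning rate are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_units = 16, 9                                   # toy sizes
W_aff = rng.random((n_units, n_in))                     # afferent connection weights
W_lat = 0.1 * rng.standard_normal((n_units, n_units))   # lateral connection weights

def settle(x, steps=10):
    """Iterate afferent drive plus lateral feedback through a rectifier."""
    r = np.zeros(n_units)
    for _ in range(steps):
        r = np.maximum(W_aff @ x + W_lat @ r, 0.0)
    return r

x = rng.random(n_in)          # one input pattern
r = settle(x)                 # settled sheet activity for this input

# Hebbian adaptation of the afferent weights, with divisive normalization
W_aff += 0.01 * np.outer(r, x)
W_aff /= W_aff.sum(axis=1, keepdims=True)
```

Repeating this settle-then-adapt loop over a stream of input patterns is the basic self-organization cycle that, at scale, produces the topographic maps the abstract describes.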
General Information
Organisations: Institute for Adaptive and Neural Computation.
Authors: Ball, Christopher E. & Bednar, James A.
Publication Date: 2009
Publication Information
Category: Poster
Original Language: English
  A self-organizing model of color, ocular dominance, and orientation selectivity in the primary visual cortex
Ball, C & Bednar, JA 2009, 'A self-organizing model of color, ocular dominance, and orientation selectivity in the primary visual cortex', Society for Neuroscience Annual Meeting 2009, Chicago, United States, 17/10/09 - 21/10/09.
Color-selective cells in macaque monkey V1 are organized into small, spatially separated blobs, and several studies have shown that these tend to be centered within ocular dominance columns. Recent work has further shown that all hues are represented within each blob, that perceptually similar hues are adjacent (often in the form of a pinwheel), and that these color blobs correspond to CO patches (Xiao et al., Neuroimage 2007 35: 771-786; Lu and Roe, Cereb Cortex 2008 18: 516-533). This organization of color preference into blobs is strikingly different from maps of orientation preference and ocular dominance, which consist of large, spatially contiguous patterns. Here we present a developmental model showing how this organization depends on the statistical distribution of colors in sets of natural images. The model consists of fixed subcortical pathways and a model of V1 that develops through Hebbian learning. Each eye and the corresponding LGN region are modelled as sets of L, M, and S photoreceptors coupled with luminosity, red-green opponent, and blue-yellow coextensive ganglion cells. Afferent connections to a LISSOM-based model of V1 from the ganglion cells are initially random, but develop through Hebbian learning. Lateral excitatory and inhibitory connections within V1 are also initially random and can similarly be modified by Hebbian learning. An initial 'prenatal' phase of spontaneous activity results in the formation of realistic ocular dominance and orientation preference maps, while subsequent presentation of natural images after 'eye opening' results in the emergence of color blobs centered in ocular dominance columns, in regions of lower orientation selectivity. Color-selective cells connect laterally at long ranges to cells with similar chromatic preferences, and an activity ('metabolic') measure in the model shows the equivalent of CO patches corresponding to color blobs.
In control simulations, we show that the model results depend crucially on the input's balance of luminance to hue contours, and the relative balance of hues. Depending on these factors, the model can develop either a continuous color map or realistic isolated color blobs. In the case of blobs, the model can develop either a single color preference per blob or pinwheel blobs with all colors, leading to preferences for either primary colors alone or also a range of intermediate colors. These results suggest specific developmental experiments to be run on animals in order to test the assumptions of the model. The simulator and model used for these experiments are freely available from topographica.org.
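The subcortical stage described above, L, M, and S photoreceptors feeding luminosity and opponent ganglion cells, amounts to a linear recombination of cone responses. The sketch below uses common textbook opponent weightings as a hypothetical illustration; the weightings in the actual model are not specified in this abstract.

```python
import numpy as np

def opponent_channels(L, M, S):
    """Recombine L/M/S photoreceptor activations into three ganglion-cell channels.

    The weightings here are standard illustrative choices, not the model's.
    """
    luminosity = (L + M) / 2.0           # achromatic channel
    red_green = L - M                    # red-green opponent channel
    blue_yellow = S - (L + M) / 2.0      # blue-yellow opponent channel
    return luminosity, red_green, blue_yellow

# A uniform reddish patch: strong L response, weaker M, very little S.
L = np.full((4, 4), 0.9)
M = np.full((4, 4), 0.3)
S = np.full((4, 4), 0.1)
lum, rg, by = opponent_channels(L, M, S)
# rg is positive everywhere (toward red); by is negative (toward yellow)
```

Arrays like lum, rg, and by would then serve as the ganglion-cell activity patterns projecting to the model V1, whose afferent weights onto them develop through Hebbian learning.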
General Information
Organisations: Institute for Adaptive and Neural Computation.
Authors: Ball, Christopher & Bednar, James A.
Publication Date: 2009
Publication Information
Category: Poster
Original Language: English
2006
  Motion aftereffects in a self-organizing model of the primary visual cortex
Ball, C & Bednar, J 2006, 'Motion aftereffects in a self-organizing model of the primary visual cortex', 15th Annual Computational Neuroscience Meeting (CNS*2006), Edinburgh, United Kingdom, 15/07/06 - 18/07/06, p. 34.
The LISSOM (Laterally Interconnected Synergetically Self-Organizing Map) model has previously been used to simulate the development of primary visual cortex (V1) maps for orientation and motion processing. In this work, we show that the same self-organizing processes driving the long-term development of the map result in illusory motion over short time scales in the adult. After adaptation to a moving stimulus, the model exhibits both motion aftereffects (illusory motion for a static test pattern, also known as the waterfall illusion or MAE) and the direction aftereffect (systematic changes in direction perception for moving test patterns, the DAE). Together with previous results replicating the tilt aftereffect (TAE) for stationary patterns, these results suggest that a relatively simple computational process underlies common orientation and motion aftereffects. The model predicts that such effects are caused by adaptation of lateral connections between neurons selective for motion direction and orientation.
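The proposed mechanism, adaptation of lateral connections between direction-selective neurons, can be illustrated with a toy ring model. This is a hypothetical minimal sketch, not the LISSOM model itself: direction-tuned units develop stronger mutual inhibition during prolonged exposure to one motion direction, which then biases a population-vector readout of a nearby test direction away from the adapter. Tuning width, learning rate, and readout are arbitrary choices.

```python
import numpy as np

n = 180
prefs = np.linspace(0.0, 360.0, n, endpoint=False)   # preferred directions (deg)

def tuning(stim_deg, kappa=8.0):
    """Firing-rate responses of all units to a stimulus moving in stim_deg."""
    d = np.deg2rad(prefs - stim_deg)
    return np.exp(kappa * (np.cos(d) - 1.0))

def decode(rates):
    """Population-vector readout of the perceived direction, in degrees."""
    ang = np.deg2rad(prefs)
    vec = np.arctan2((rates * np.sin(ang)).sum(), (rates * np.cos(ang)).sum())
    return np.rad2deg(vec) % 360.0

W = np.zeros((n, n))                     # inhibitory lateral weights
adapter, eta = 0.0, 5e-4
for _ in range(50):                      # prolonged exposure to adapting motion
    r = tuning(adapter)
    W += eta * np.outer(r, r)            # Hebbian growth between co-active units
np.fill_diagonal(W, 0.0)

test = 20.0
perceived = decode(np.maximum(tuning(test) - W @ tuning(test), 0.0))
# Strengthened inhibition near the adapted direction suppresses that side of
# the response bump, so the readout is repelled away from the adapter (a DAE).
```

The same repulsive logic applied to orientation-tuned rather than direction-tuned units gives a tilt-aftereffect analogue, which is the sense in which a single adaptive process can underlie both families of aftereffects.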

General Information
Organisations: Institute for Adaptive and Neural Computation.
Authors: Ball, Christopher & Bednar, James.
Publication Date: 2006
Publication Information
Category: Poster
Original Language: English

Projects:
Modeling the development of primate color vision (PhD)