Several recent models of processing in the brain are cast in probabilistic, Bayesian terms and involve reasoning akin to machine learning techniques. Along similar lines, I propose modelling work on the hierarchical aspects of the neocortical architecture, specifically examining how cortical visual processing can be related to, and contrasted with, current machine learning approaches such as Bayesian belief networks and dynamical state estimation with particle filtering. I aim to focus on the properties and capabilities of the neocortex that are not captured by these machine learning approaches, e.g. spiking neuronal networks, synaptic learning rules, and the topologies implicit in axonal delays between neurons. In the long run, I hope to develop task-oriented, self-organising models that can be tested under closed-loop conditions, with possible applications to hardware implementations of neuronal networks.
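To make the machine learning side of the comparison concrete, the following is a minimal sketch of dynamical state estimation with a bootstrap particle filter, one of the approaches mentioned above. The random-walk dynamics, Gaussian noise levels, and particle count are illustrative assumptions chosen for the example, not part of the proposed work.

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter(observations, n_particles=500,
                    process_std=0.5, obs_std=1.0):
    """Estimate a latent 1-D state x_t = x_{t-1} + noise from noisy observations.

    Model and parameters are illustrative assumptions.
    """
    particles = rng.normal(0.0, 1.0, n_particles)  # samples from the initial prior
    estimates = []
    for y in observations:
        # Predict: propagate each particle through the dynamics model.
        particles = particles + rng.normal(0.0, process_std, n_particles)
        # Update: weight particles by the Gaussian observation likelihood.
        weights = np.exp(-0.5 * ((y - particles) / obs_std) ** 2)
        weights /= weights.sum()
        # Resample: draw particles in proportion to their weights.
        idx = rng.choice(n_particles, size=n_particles, p=weights)
        particles = particles[idx]
        # Posterior mean serves as the point estimate at this time step.
        estimates.append(particles.mean())
    return np.array(estimates)

# Simulate a latent random-walk trajectory with noisy observations, then filter.
true_x = np.cumsum(rng.normal(0.0, 0.5, 50))
obs = true_x + rng.normal(0.0, 1.0, 50)
est = particle_filter(obs)
print(f"filter RMSE: {np.sqrt(np.mean((est - true_x) ** 2)):.3f}")
```

The predict–update–resample loop is the generic structure such filters share; relating its sequential, sample-based inference to hierarchical cortical processing is one axis of the proposed comparison.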