
ANC Workshop: Michaelis Michaelides and Matthias Hennig, Chair: Wannisa Matcha

Filed under: ANC Workshop Talk
When: Mar 27, 2018, from 11:00 AM to 12:00 PM
Where: IF 4.31/4.33

Michaelis Michaelides


Fluid approximations to Markov chains


Continuous-time Markov chains (CTMCs) are a useful formalism for modelling a wide variety of systems, but exact analysis becomes expensive for models with large state spaces. Motivated by the success of the "fluid limit" of such models in structured cases (population CTMCs, queueing networks), we propose a method based on machine learning techniques to provide a fluid approximation for a general CTMC. We show that our method recovers the classical fluid approximation results in specific cases, and is applicable to the general CTMC.
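The classical fluid limit mentioned above can be illustrated on a simple population CTMC. The sketch below (an illustrative example, not the method from the talk) compares a Gillespie simulation of a logistic birth-death process with its mean-field ODE; as the population size grows, the stochastic density trajectory concentrates around the ODE solution. All rates and parameter values are chosen for illustration only.

```python
import random

def gillespie_logistic(n_total, x0, lam, mu, t_end, rng):
    """Gillespie simulation of a logistic birth-death CTMC.

    State X in {0, ..., n_total}; birth rate lam*X*(1 - X/n_total),
    death rate mu*X. Returns the final population density X/n_total.
    """
    x, t = x0, 0.0
    while t < t_end and 0 < x < n_total:
        birth = lam * x * (1 - x / n_total)
        death = mu * x
        total = birth + death
        t += rng.expovariate(total)          # time to next event
        if rng.random() < birth / total:     # pick birth or death
            x += 1
        else:
            x -= 1
    return x / n_total

def fluid_logistic(d0, lam, mu, t_end, dt=1e-3):
    """Euler integration of the fluid (mean-field) ODE for the density d:
    d' = lam*d*(1 - d) - mu*d."""
    d = d0
    for _ in range(int(t_end / dt)):
        d += dt * (lam * d * (1 - d) - mu * d)
    return d

rng = random.Random(0)
n = 10_000
stoch = gillespie_logistic(n, int(0.1 * n), lam=2.0, mu=1.0, t_end=10.0, rng=rng)
fluid = fluid_logistic(0.1, lam=2.0, mu=1.0, t_end=10.0)
# Both densities approach the deterministic fixed point 1 - mu/lam = 0.5.
print(stoch, fluid)
```

For structured models like this, the fluid limit is available in closed form; the talk's contribution concerns approximating such a limit for a general CTMC where no such structure is given.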


Matthias Hennig


Optimal encoding in stochastic latent-variable models


We examine the problem of optimal sparse encoding of sensory stimuli by latent variables in stochastic models. Analyzing restricted Boltzmann machines with a communications theory approach, we search for the minimal model size that correctly conveys the correlations in stimulus patterns in an information-theoretic sense. We show that the Fisher information matrix (FIM) reveals the optimal model size. In larger models the FIM reveals that irrelevant parameters are associated with individual latent variables, displaying a surprising amount of order.
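The FIM-based diagnosis of irrelevant parameters can be sketched on a toy RBM small enough to enumerate exactly (an illustrative example with made-up parameters, not the analysis from the talk): the FIM is the covariance of the log-likelihood score over visible configurations, and near-zero eigenvalues mark parameter directions the data cannot constrain.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
nv, nh = 3, 2                                 # tiny RBM: 3 visible, 2 hidden units
W = rng.normal(scale=0.5, size=(nv, nh))
a = rng.normal(scale=0.1, size=nv)            # visible biases
b = rng.normal(scale=0.1, size=nh)            # hidden biases

def suff_stats(v, h):
    # sufficient statistics of the RBM: v_i*h_j couplings, then biases
    return np.concatenate([np.outer(v, h).ravel(), v, h])

# enumerate all joint (v, h) states and their Boltzmann weights
states, weights = [], []
for v in itertools.product([0, 1], repeat=nv):
    for h in itertools.product([0, 1], repeat=nh):
        v, h = np.array(v, float), np.array(h, float)
        energy = -(v @ W @ h + a @ v + b @ h)
        states.append((v, h))
        weights.append(np.exp(-energy))
p_joint = np.array(weights) / sum(weights)

T = np.array([suff_stats(v, h) for v, h in states])
mean_t = p_joint @ T                          # model expectation of statistics

# score per visible configuration: E[t | v] - E[t]
p_v, cond_sum = {}, {}
for (v, h), p, t in zip(states, p_joint, T):
    key = tuple(v)
    p_v[key] = p_v.get(key, 0.0) + p
    cond_sum[key] = cond_sum.get(key, 0.0) + p * t

n_params = T.shape[1]
fim = np.zeros((n_params, n_params))          # FIM = E_v[score score^T]
for key in p_v:
    s = cond_sum[key] / p_v[key] - mean_t
    fim += p_v[key] * np.outer(s, s)

eigvals = np.linalg.eigvalsh(fim)
# near-zero eigenvalues flag irrelevant ('sloppy') parameter directions
print(eigvals)
```

Here the 11 parameters describe a distribution over only 2^3 = 8 visible states, so at least four FIM eigenvalues vanish; the talk's observation is that in oversized models such null directions organize around individual latent variables.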

For well-fit models, we observe the emergence of statistical criticality as diverging generalized susceptibility of the model. In this case, an encoding strategy is adopted where highly informative, but rare stimuli selectively suppress variability in the encoding units. The information content of the encoded stimuli acts as an unobserved variable leading to criticality. Together, these results can explain the stimulus-dependent variability suppression observed in sensory systems, and suggest a simple, correlation-based measure to reduce the size of artificial neural networks.


Joint work with Michael Rule and Martino Sorbaro