I am interested in how an information processing system can organize itself without global monitoring to perform a learning or inference task. Practically, I am lazy and would prefer it if the different processing stages would arrange themselves with as little input as possible. I'm researching methods to make this happen using transactional or adversarial systems, drawing on ideas from prediction markets and machine learning.
Resource-Efficient Feature Gathering at Test Time
Gray, G. & Storkey, A. 2016, 'Resource-Efficient Feature Gathering at Test Time', Reliable Machine Learning in the Wild, Barcelona, Spain, 9/12/16.
Data collection is costly. A machine learning model requires input data to produce an output prediction, but that input is often not cost-free to obtain accurately. For example, in the social sciences it may require collecting samples; in signal processing, it may mean investing in expensive, accurate sensors. The problem of allocating a budget across the collection of different input variables is largely overlooked in machine learning, but it matters under real-world constraints. Given that the noise level on each input feature depends on how much resource has been spent gathering it, and given a fixed budget, we ask how to allocate that budget to maximise our expected reward. At the same time, the optimal model parameters depend on the chosen budget allocation, so searching the space of possible budgets is costly. Using doubly stochastic gradient methods, we propose a solution that scales to expressive models and massive datasets while still providing an interpretable budget allocation for feature gathering at test time.
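The abstract does not include an implementation, so the following is only a rough illustrative sketch of the idea, not the paper's method. It assumes a linear model and a hypothetical noise model where spending budget b_i on feature i yields observation noise with standard deviation 1/sqrt(b_i). The gradient is "doubly stochastic" in that each step samples both a data minibatch and a fresh feature-noise draw, and the budget is kept on the scaled simplex via a softmax parameterisation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: d features, only some of which carry signal.
n, d = 2000, 5
X = rng.normal(size=(n, d))
true_w = np.array([2.0, 1.0, 0.5, 0.0, 0.0])
y = X @ true_w + 0.1 * rng.normal(size=n)

# Budget b lives on the scaled simplex (sum b = B), parameterised
# via softmax logits so that gradient steps stay feasible.
B = 5.0
logits = np.zeros(d)
w = np.zeros(d)

def noise_std(b):
    # Assumed (hypothetical) noise model: spending b_i on feature i
    # gives observation noise with std 1/sqrt(b_i); the +0.1 floor
    # keeps the gradient bounded for starved features.
    return 1.0 / np.sqrt(b + 0.1)

lr_w, lr_b, batch = 0.05, 0.5, 32
for step in range(3000):
    z = logits - logits.max()                 # numerically stable softmax
    b = B * np.exp(z) / np.exp(z).sum()
    idx = rng.integers(n, size=batch)         # stochastic in the data...
    eps = rng.normal(size=(batch, d))         # ...and in the feature noise
    Xn = X[idx] + noise_std(b) * eps          # noisy observed features
    err = Xn @ w - y[idx]
    # Gradient step on the model weights.
    w -= lr_w * Xn.T @ err / batch
    # Reparameterised gradient w.r.t. the noise scale, chained
    # through sigma(b) and the softmax parameterisation.
    g_sigma = (err[:, None] * eps * w).mean(axis=0)
    g_b = g_sigma * (-0.5) * (b + 0.1) ** -1.5
    J = np.diag(b) - np.outer(b, b) / B       # Jacobian of b w.r.t. logits
    logits -= lr_b * (J @ g_b)

print(np.round(b, 2))  # budget typically concentrates on informative features
```

The softmax Jacobian vanishes as the allocation saturates, so the budget updates are self-limiting; in this toy run the allocation drifts away from the zero-weight features, whose noise barely affects the loss.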
Organisations: Institute for Adaptive and Neural Computation.
Authors: Gray, Gavin & Storkey, Amos.
Publication Date: 2016
Original Language: English