I am a third-year PhD student in Statistics at the Oxford-Warwick Statistics Programme (OxWaSP), working under the joint supervision of Professor Yee Whye Teh (Oxford), Dr. Dario Spanò (Warwick) and Dr. Paul Jenkins (Warwick). My research interests lie in Bayesian nonparametric statistics and machine learning. Specifically, I am interested in incorporating dependency structures into Bayesian nonparametric models and in developing novel algorithms for large-scale machine learning. I have applied my work to a range of areas, including topic modelling and population genetics.
Publications
2017
V. Perrone, P. A. Jenkins, D. Spanò, Y. W. Teh, Poisson Random Fields for Dynamic Feature Models, Journal of Machine Learning Research (JMLR), Dec. 2017.
@article{PerJenSpa2017a,
author = {Perrone, V. and Jenkins, P. A. and Spano, D. and Teh, Y. W.},
title = {{P}oisson Random Fields for Dynamic Feature Models},
journal = {Journal of Machine Learning Research (JMLR)},
month = dec,
year = {2017},
url = {https://arxiv.org/abs/1611.07460}
}
X. Lu, V. Perrone, L. Hasenclever, Y. W. Teh, S. J. Vollmer, Relativistic Monte Carlo, Artificial Intelligence and Statistics (AISTATS), Apr. 2017.
Hamiltonian Monte Carlo (HMC) is a popular Markov chain Monte Carlo (MCMC) algorithm that generates proposals for a Metropolis-Hastings algorithm by simulating the dynamics of a Hamiltonian system. However, HMC is sensitive to large time discretizations and performs poorly if there is a mismatch between the spatial geometry of the target distribution and the scales of the momentum distribution. In particular, the mass matrix of HMC is hard to tune well. To alleviate these problems, we propose relativistic Hamiltonian Monte Carlo, a version of HMC based on relativistic dynamics that introduce a maximum velocity on particles. We also derive stochastic gradient versions of the algorithm and show that the resulting algorithms bear interesting relationships to gradient clipping, RMSprop, Adagrad and Adam, popular optimisation methods in deep learning. Based on this, we develop relativistic stochastic gradient descent by taking the zero-temperature limit of relativistic stochastic gradient Hamiltonian Monte Carlo. In experiments we show that the relativistic algorithms perform better than classical Newtonian variants and Adam.
@inproceedings{LuPerHas2016a,
author = {Lu, X. and Perrone, V. and Hasenclever, L. and Teh, Y. W. and Vollmer, S. J.},
title = {Relativistic {M}onte {C}arlo},
booktitle = {Artificial Intelligence and Statistics (AISTATS)},
month = apr,
year = {2017},
url = {https://arxiv.org/pdf/1609.04388v1.pdf}
}
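The abstract above hinges on one idea: replacing the Newtonian kinetic energy with a relativistic one bounds the particle's speed, so no single leapfrog step can move arbitrarily far. A minimal NumPy sketch of that mechanism, assuming the standard relativistic kinetic energy K(p) = m c^2 sqrt(p'p / (m^2 c^2) + 1) and a plain leapfrog integrator (function names and default values are illustrative, not taken from the paper):

```python
import numpy as np

def relativistic_velocity(p, m=1.0, c=1.0):
    # v = dK/dp for K(p) = m c^2 * sqrt(p.p / (m^2 c^2) + 1).
    # Its norm is strictly below c, the "speed of light", however large p gets.
    return p / (m * np.sqrt(p @ p / (m**2 * c**2) + 1.0))

def leapfrog(theta, p, grad_U, eps, n_steps, m=1.0, c=1.0):
    # Standard leapfrog integrator, with the Newtonian velocity p/m replaced
    # by the bounded relativistic velocity above.
    p = p - 0.5 * eps * grad_U(theta)          # initial half momentum step
    for i in range(n_steps):
        theta = theta + eps * relativistic_velocity(p, m, c)
        if i < n_steps - 1:
            p = p - eps * grad_U(theta)        # full momentum step
    p = p - 0.5 * eps * grad_U(theta)          # final half momentum step
    return theta, p
```

Because each position update moves theta by at most eps * c per step, the integrator stays stable even when the gradient momentarily blows up; this is the connection to gradient clipping mentioned in the abstract.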
@software{PerJenSpa2016b,
author = {Perrone, V. and Jenkins, P. A. and Spano, D. and Teh, Y. W.},
title = {{NIPS} 1987-2015 dataset},
year = {2016},
url = {https://archive.ics.uci.edu/ml/datasets/NIPS+Conference+Papers+1987-2015}
}