Gaussian Processes, probabilistic inference, deep generative models
I was a DPhil student supervised by Prof. Yee Whye Teh. My research interests fall under scalable probabilistic inference and interpretable machine learning. My current focus is on deep generative models and representation learning, especially on using deep generative models to learn disentangled factors of variation in data. I am also interested in gradient-based inference for generative models with discrete units, which ties in closely with interpretability. Previously, I worked on scaling up inference for Gaussian processes: in particular, on regression models for collaborative filtering motivated by a scalable GP approximation, and on a method for scaling up the compositional kernel search used by the Automatic Statistician via variational sparse GP methods.
Subsampling is used in convolutional neural networks (CNNs), in the form of pooling or strided convolutions, to reduce the spatial dimensions of feature maps and to allow the receptive fields to grow exponentially with depth. However, unlike convolutions, such subsampling operations are not translation equivariant. Here, we first introduce translation equivariant subsampling/upsampling layers that can be used to construct exact translation equivariant CNNs. We then generalise these layers beyond translations to general groups, thus proposing group equivariant subsampling/upsampling. We use these layers to construct group equivariant autoencoders (GAEs) that allow us to learn low-dimensional equivariant representations. We empirically verify on images that the representations are indeed equivariant to input translations and rotations, and thus generalise well to unseen positions and orientations. We further use GAEs in models that learn object-centric representations on multi-object datasets, and show improved data efficiency and decomposition compared to non-equivariant baselines.
@inproceedings{xu2021group,
title = {Group Equivariant Subsampling},
author = {Xu, Jin and Kim, Hyunjik and Rainforth, Tom and Teh, Yee Whye},
booktitle = {Neural Information Processing Systems (NeurIPS)},
year = {2021}
}
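A minimal 1-D numpy sketch of the input-dependent subsampling idea in this paper. The argmax rule for choosing the sampling phase is an illustrative assumption rather than the paper's exact construction; the point is that the phase is computed from the signal itself and returned alongside the subsampled output, so shifting the input shifts the output rather than changing which samples survive.

import numpy as np

def equivariant_subsample(x, stride=2):
    # Choose the sampling phase from the signal itself (here: the phase of
    # its largest element), so that a circular shift of the input by `stride`
    # shifts the output by exactly one sample.
    phase = int(np.argmax(x)) % stride
    return x[phase::stride], phase

def equivariant_upsample(z, phase, length, stride=2):
    # Invert the subsampling by placing the kept samples back at their phase.
    x = np.zeros(length)
    x[phase::stride] = z
    return x

x = np.array([0.1, 0.3, 2.0, 0.5, 0.7, 0.2])
z, p = equivariant_subsample(x)
z_shift, _ = equivariant_subsample(np.roll(x, 2))
print(np.allclose(np.roll(z, 1), z_shift))   # True: shifted input gives shifted output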
We develop a functional encoder-decoder approach to supervised meta-learning, where labeled data is encoded into an infinite-dimensional functional representation rather than a finite-dimensional one. Furthermore, rather than directly producing the representation, we learn a neural update rule resembling functional gradient descent which iteratively improves the representation. The final representation is used to condition the decoder to make predictions on unlabeled data. Our approach is the first to demonstrate the success of encoder-decoder style meta-learning methods such as conditional neural processes on large-scale few-shot classification benchmarks such as miniImageNet and tieredImageNet, where it achieves state-of-the-art performance.
@inproceedings{xu2019metafun,
title = {MetaFun: Meta-Learning with Iterative Functional Updates},
author = {Xu, Jin and Ton, Jean-Francois and Kim, Hyunjik and Kosiorek, Adam R and Teh, Yee Whye},
booktitle = {International Conference on Machine Learning (ICML)},
year = {2020}
}
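Below is a small numpy sketch of the iterative functional updates, under the simplifying assumptions that the kernel is a fixed RBF and the "local update" is a hand-written rule; in the paper both the update rule and the decoder are learned networks, and the kernel can be replaced by attention.

import numpy as np

def rbf(a, b, lengthscale=1.0):
    # RBF kernel matrix between two sets of scalar inputs.
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / lengthscale) ** 2)

def functional_updates(x_ctx, y_ctx, x_tgt, steps=5, lr=0.5, dim=8):
    # Functional representation r(.) evaluated at context and target points,
    # refined by repeated kernel-smoothed local updates (functional gradient
    # descent in spirit). The final r at the targets conditions the decoder.
    r_ctx = np.zeros((len(x_ctx), dim))
    r_tgt = np.zeros((len(x_tgt), dim))
    K_cc, K_tc = rbf(x_ctx, x_ctx), rbf(x_tgt, x_ctx)
    for _ in range(steps):
        u = np.zeros_like(r_ctx)
        u[:, 0] = r_ctx[:, 0] - y_ctx      # toy local update towards the labels
        r_ctx -= lr * K_cc @ u
        r_tgt -= lr * K_tc @ u             # the same update, applied at targets
    return r_tgt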
2019
H. Kim, A. Mnih, J. Schwarz, M. Garnelo, S. M. A. Eslami, D. Rosenbaum, O. Vinyals, Y. W. Teh, Attentive Neural Processes, in International Conference on Learning Representations (ICLR), 2019.
Neural Processes (NPs) (Garnelo et al., 2018) approach regression by learning to map a context set of observed input-output pairs to a distribution over regression functions. Each function models the distribution of the output given an input, conditioned on the context. NPs have the benefit of fitting observed data efficiently with linear complexity in the number of context input-output pairs, and can learn a wide family of conditional distributions; they learn predictive distributions conditioned on context sets of arbitrary size. Nonetheless, we show that NPs suffer a fundamental drawback of underfitting, giving inaccurate predictions at the inputs of the observed data they condition on. We address this issue by incorporating attention into NPs, allowing each input location to attend to the relevant context points for the prediction. We show that this greatly improves the accuracy of predictions, results in noticeably faster training, and expands the range of functions that can be modelled.
@inproceedings{KimTeh2019a,
author = {Kim, H. and Mnih, A. and Schwarz, J. and Garnelo, M. and Eslami, S. M. A. and Rosenbaum, D. and Vinyals, O. and Teh, Y. W.},
title = {Attentive Neural Processes},
booktitle = {International Conference on Learning Representations (ICLR)},
year = {2019},
month = may
}
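A bare-bones numpy sketch of the cross-attention that replaces the mean-pooled context representation of the original NP: each target location attends to the context points. The paper uses learned embeddings of the inputs as queries and keys, and multi-head attention; this single-head dot-product version is only meant to show the mechanism.

import numpy as np

def softmax(a, axis=-1):
    a = a - a.max(axis=axis, keepdims=True)
    e = np.exp(a)
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(q_tgt, k_ctx, v_ctx):
    # q_tgt: (n_tgt, d) target queries, k_ctx: (n_ctx, d) context keys,
    # v_ctx: (n_ctx, d_v) per-context representations. Each target gets its
    # own weighted summary of the context instead of a single pooled vector.
    scores = q_tgt @ k_ctx.T / np.sqrt(q_tgt.shape[-1])
    return softmax(scores, axis=-1) @ v_ctx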
2018
H. Kim, Y. W. Teh, Scaling up the Automatic Statistician: Scalable Structure Discovery using Gaussian Processes, in Artificial Intelligence and Statistics (AISTATS), 2018.
Automating statistical modelling is a challenging problem in artificial intelligence. The Automatic Statistician takes a first step in this direction by employing a kernel search algorithm with Gaussian Processes (GPs) to provide interpretable statistical models for regression problems. However, this does not scale due to its O(N^3) running time for model selection. We propose Scalable Kernel Composition (SKC), a scalable kernel search algorithm that extends the Automatic Statistician to bigger data sets. In doing so, we derive a cheap upper bound on the GP marginal likelihood that, together with the variational lower bound, sandwiches the marginal likelihood. We show that the upper bound is significantly tighter than the lower bound and thus useful for model selection.
@inproceedings{KimTeh18,
author = {Kim, H. and Teh, Y. W.},
booktitle = {Artificial Intelligence and Statistics (AISTATS)},
title = {Scaling up the Automatic Statistician: Scalable Structure Discovery using Gaussian Processes},
month = apr,
year = {2018}
}
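The sandwich can be sketched with numpy as below, assuming the lower bound is the standard variational sparse-GP (Titsias-style) bound and the upper bound slackens the quadratic-form term by the trace of the Nystrom gap; the paper's exact upper bound may differ in detail, but both quantities below bracket the log marginal likelihood.

import numpy as np

def rbf(a, b, ls=1.0):
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

def log_gauss(y, cov):
    n = len(y)
    _, logdet = np.linalg.slogdet(cov)
    return -0.5 * (logdet + y @ np.linalg.solve(cov, y) + n * np.log(2 * np.pi))

def sandwich_bounds(x, y, z, noise_var=0.1):
    # x, y: training inputs/targets; z: inducing inputs; noise_var: sigma^2.
    n = len(x)
    K = rbf(x, x)
    K_nz, K_zz = rbf(x, z), rbf(z, z) + 1e-8 * np.eye(len(z))
    Q = K_nz @ np.linalg.solve(K_zz, K_nz.T)     # Nystrom approximation of K
    gap = np.trace(K - Q)                        # tr(K - Q) >= 0
    lower = log_gauss(y, Q + noise_var * np.eye(n)) - 0.5 * gap / noise_var
    upper = (-0.5 * np.linalg.slogdet(Q + noise_var * np.eye(n))[1]
             - 0.5 * y @ np.linalg.solve(Q + (noise_var + gap) * np.eye(n), y)
             - 0.5 * n * np.log(2 * np.pi))
    return lower, upper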
A. R. Kosiorek, H. Kim, Y. W. Teh, I. Posner, Sequential Attend, Infer, Repeat: Generative Modelling of Moving Objects, in Advances in Neural Information Processing Systems (NeurIPS), 2018.
We present Sequential Attend, Infer, Repeat (SQAIR), an interpretable deep generative model for videos of moving objects. It can reliably discover and track objects throughout the sequence of frames, and can also generate future frames conditioned on the current frame, thereby simulating expected motion of objects. This is achieved by explicitly encoding object presence, locations and appearances in the latent variables of the model. SQAIR retains all strengths of its predecessor, Attend, Infer, Repeat (AIR, Eslami et al., 2016), including learning in an unsupervised manner, and addresses its shortcomings. We use a moving multi-MNIST dataset to show the limitations of AIR in detecting overlapping or partially occluded objects, and show how SQAIR overcomes them by leveraging the temporal consistency of objects. Finally, we also apply SQAIR to real-world pedestrian CCTV data, where it learns to reliably detect, track and generate walking pedestrians with no supervision.
@inproceedings{koskimposteh18,
title = {Sequential Attend, Infer, Repeat: Generative Modelling of Moving Objects},
author = {Kosiorek, Adam R. and Kim, Hyunjik and Teh, Yee Whye and Posner, Ingmar},
booktitle = {Advances in Neural Information Processing Systems (NeurIPS)},
year = {2018}
}
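A toy sketch of the per-object latent structure mentioned in the abstract (presence, location, appearance) and of how a frame is composed from it. In the actual model these latents are random variables inferred by recurrent networks and rendering uses a spatial transformer; the integer pasting below is only illustrative.

import numpy as np
from dataclasses import dataclass

@dataclass
class ObjectLatent:
    present: bool        # z_pres: is the object in the frame?
    where: tuple         # z_where: top-left (row, col) of the object
    what: np.ndarray     # z_what decoded into an appearance patch

def render_frame(objects, height=64, width=64):
    # Compose the frame by pasting every present object's patch at its location.
    canvas = np.zeros((height, width))
    for obj in objects:
        if not obj.present:
            continue
        r, c = obj.where
        h, w = obj.what.shape
        canvas[r:r + h, c:c + w] += obj.what
    return canvas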
H. Kim, A. Mnih, Disentangling by Factorising, in International Conference on Machine Learning (ICML), 2018.
We define and address the problem of unsupervised learning of disentangled representations on data generated from independent factors of variation. We propose FactorVAE, a method that disentangles by encouraging the distribution of representations to be factorial and hence independent across the dimensions. We show that it improves upon beta-VAE by providing a better trade-off between disentanglement and reconstruction quality. Moreover, we highlight the problems of a commonly used disentanglement metric and introduce a new metric that does not suffer from them.
@inproceedings{KimMnih18,
author = {Kim, H. and Mnih, A.},
booktitle = {International Conference on Machine Learning (ICML)},
title = {Disentangling by Factorising},
year = {2018}
}
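The factorial-encouraging term is a total-correlation penalty; below is a small numpy sketch of the permute-dims trick used to estimate it. The discriminator that turns these permuted samples into a density-ratio estimate, and the weight gamma, are omitted; the schematic objective is reconstruction + KL to the prior + gamma * estimated total correlation.

import numpy as np

def permute_dims(z, rng):
    # z: (batch, latent_dim) samples from q(z). Shuffling each dimension
    # independently across the batch gives samples whose dimensions are
    # (approximately) independent; a classifier distinguishing them from the
    # original samples yields a density-ratio estimate of the total correlation.
    z_perm = z.copy()
    for j in range(z.shape[1]):
        z_perm[:, j] = rng.permutation(z[:, j])
    return z_perm

rng = np.random.default_rng(0)
z = rng.normal(size=(256, 10))
z_perm = permute_dims(z, rng)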
We tackle the problem of collaborative filtering (CF) with side information, through the lens of Gaussian Process (GP) regression. Driven by the idea of using the kernel to explicitly model user-item similarities, we formulate the GP in a way that allows the incorporation of low-rank matrix factorisation, arriving at our model, the Tucker Gaussian Process (TGP). Consequently, TGP generalises classical Bayesian matrix factorisation models, and goes beyond them to give a natural and elegant method for incorporating side information, giving enhanced predictive performance for CF problems. Moreover, we show that it is a novel model for regression, especially well-suited to grid-structured data and problems where the dependence on covariates is close to being separable.
@unpublished{kimluflateh16,
title = {Collaborative Filtering with Side Information: a Gaussian Process Perspective},
author = {Kim, H. and Lu, X. and Flaxman, S. and Teh, Y. W.},
note = {ArXiv e-prints: 1605.07025},
year = {2016}
}
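One way to see the kernel-based view is the generic sketch below, where user-user and item-item similarity each mix a learned low-rank embedding (the matrix-factorisation part) with a kernel on side-information features, and the covariance over all (user, item) pairs has Kronecker (grid) structure. This is an illustration of the modelling idea only, not the paper's exact Tucker Gaussian Process.

import numpy as np

def embedding_kernel(E):
    # Inner products of low-rank embeddings: the kernel view of matrix factorisation.
    return E @ E.T

def rbf_kernel(F, ls=1.0):
    # Similarity computed from side-information features (e.g. age, genre).
    d2 = ((F[:, None, :] - F[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls ** 2)

def rating_covariance(user_emb, item_emb, user_feats, item_feats):
    K_users = embedding_kernel(user_emb) + rbf_kernel(user_feats)
    K_items = embedding_kernel(item_emb) + rbf_kernel(item_feats)
    return np.kron(K_users, K_items)   # covariance over the full user-item grid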
H. Kim, Y. W. Teh, Scalable Structure Discovery in Regression using Gaussian Processes, in Proceedings of the 2016 Workshop on Automatic Machine Learning, 2016.
Automatic Bayesian Covariance Discovery (ABCD) (Lloyd et al., 2014) provides a framework for automating statistical modelling as well as exploratory data analysis for regression problems. However, ABCD does not scale due to its O(N^3) running time. This is undesirable not only because the average size of data sets is growing fast, but also because there is potentially more information in bigger data, implying a greater need for more expressive models that can discover sophisticated structure. We propose a scalable version of ABCD, to encompass big data within the boundaries of automated statistical modelling.
@inproceedings{KimTeh2016a,
author = {Kim, H. and Teh, Y. W.},
booktitle = {Proceedings of the 2016 Workshop on Automatic Machine Learning},
title = {Scalable Structure Discovery in Regression using Gaussian Processes},
year = {2016},
url = {http://www.jmlr.org/proceedings/papers/v64/kim_scalable_2016.html},
pdf = {http://www.jmlr.org/proceedings/papers/v64/kim_scalable_2016.pdf}
}
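The kernel search being made scalable here is the Automatic Statistician's greedy compositional search; a schematic sketch is below, with the model-selection score left abstract (in this work it would be evaluated through the variational lower bound and a cheap upper bound rather than the exact marginal likelihood). The grammar (add or multiply in a base kernel) is the standard one; the base kernel set and search details are simplified.

BASE_KERNELS = ["SE", "LIN", "PER"]      # squared-exponential, linear, periodic

def expand(expr):
    # Grammar step: extend the current expression by adding or multiplying
    # in one base kernel.
    for b in BASE_KERNELS:
        yield f"({expr} + {b})"
        yield f"({expr} * {b})"

def greedy_kernel_search(score, depth=3):
    # `score` fits a (sparse) GP with the given kernel expression and returns
    # a model-selection criterion; greedily keep the best expression per level.
    best = max(BASE_KERNELS, key=score)
    for _ in range(depth - 1):
        challenger = max(expand(best), key=score)
        if score(challenger) <= score(best):
            break
        best = challenger
    return best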