Cong Lu
Deep Reinforcement Learning, Meta-Learning, Bayesian Optimisation
I am a DPhil student supervised by Michael A. Osborne and Yee Whye Teh. My research interests span deep reinforcement learning, meta-learning, and Bayesian optimisation. I am particularly interested in offline reinforcement learning (including generalisation to new tasks and uncertainty quantification for pessimistic MDPs) and in reinforcement learning as probabilistic inference. I obtained my undergraduate degree in Mathematics and Computer Science from the University of Oxford.
Publications
2021
- P. J. Ball, C. Lu, J. Parker-Holder, S. Roberts, Augmented World Models Facilitate Zero-Shot Dynamics Generalization From a Single Offline Environment, International Conference on Machine Learning, 2021.
- X. Wan, V. Nguyen, H. Ha, B. Ru, C. Lu, M. A. Osborne, Think Global and Act Local: Bayesian Optimisation over High-Dimensional Categorical and Mixed Search Spaces, International Conference on Machine Learning, 2021.
- L. Zintgraf, L. Feng, C. Lu, M. Igl, K. Hartikainen, K. Hofmann, S. Whiteson, Exploration in Approximate Hyper-State Space for Meta Reinforcement Learning, International Conference on Machine Learning, 2021.
- T. G. J. Rudner, C. Lu, M. A. Osborne, Y. Gal, Y. W. Teh, On Pathologies in KL-Regularized Reinforcement Learning from Expert Demonstrations, ICLR 2021 RobustML Workshop, 2021.