Cost-Effective Gaussian Processes for Big Data
Research Funding:
This research was proposed as part of a broader research proposal that my former PhD advisor (Associate Professor Kian Hsiang Low) and I secured through an MOE AcRF Tier 2 Grant of SGD 737,461 (2017 - 2020).
Idea and Motivation:
Gaussian process (GP) models are a rich class of Bayesian non-parametric models that perform probabilistic regression by providing Gaussian predictive distributions with formal measures of predictive uncertainty. Unfortunately, a GP model scales poorly with the size of the data: exact inference costs time cubic in the number of training points, which limits its practical use to small datasets. To improve scalability, my research aims to develop cost-effective GP models for big data, with a strong focus on distributed and anytime algorithms built on existing families of sparse GPs. These algorithms either exploit multi-core architectures to parallelize processing effectively or operate under a budgeted resource, trading off gracefully between prediction quality and processing cost.
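To make the scalability issue concrete, here is a minimal NumPy sketch contrasting exact GP regression, which is O(n^3) in the number of training points n, with a simple inducing-point sparse approximation (a DTC/SoR-style predictive mean), which is O(n m^2) for m inducing inputs. This is an illustrative toy example with an assumed squared-exponential kernel and hand-picked hyperparameters, not the specific models developed in the papers below.

```python
import numpy as np

def rbf(A, B, ls=1.0, var=1.0):
    """Squared-exponential kernel k(a, b) = var * exp(-||a - b||^2 / (2 ls^2))."""
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return var * np.exp(-0.5 * d2 / ls**2)

def gp_exact(X, y, Xs, noise=0.1):
    """Exact GP predictive mean and variance: O(n^3) Cholesky of the n x n kernel matrix."""
    K = rbf(X, X) + noise * np.eye(len(X))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    Ks = rbf(Xs, X)
    mean = Ks @ alpha
    v = np.linalg.solve(L, Ks.T)
    var = np.diag(rbf(Xs, Xs)) - np.sum(v**2, 0)
    return mean, var

def gp_sparse(X, y, Xs, Z, noise=0.1):
    """DTC/SoR-style sparse GP predictive mean with m inducing inputs Z:
    only m x m systems are solved, giving O(n m^2) cost instead of O(n^3)."""
    Kuu = rbf(Z, Z) + 1e-8 * np.eye(len(Z))  # jitter for numerical stability
    Kuf = rbf(Z, X)
    Kus = rbf(Z, Xs)
    Sigma = Kuu + Kuf @ Kuf.T / noise
    mu_u = np.linalg.solve(Sigma, Kuf @ y) / noise
    return Kus.T @ mu_u

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, (200, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(200)
Xs = np.linspace(-3, 3, 5)[:, None]        # test inputs
Z = np.linspace(-3, 3, 20)[:, None]        # 20 inducing inputs summarize 200 points
m_exact, _ = gp_exact(X, y, Xs)
m_sparse = gp_sparse(X, y, Xs, Z)
# The sparse predictive mean should track the exact one closely on this smooth function.
print(np.max(np.abs(m_exact - m_sparse)))
```

The sparse GP families unified in the papers below (e.g. via variational inference) generalize this idea: the m inducing variables act as a compact summary of the full dataset, and the choice of approximation governs the quality-versus-cost trade-off.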
Relevant papers:
Revisiting the Sample Complexity of Sparse Spectrum Approximation of Gaussian Processes (NeurIPS-20) Paper
Collective Online Learning via Decentralized Gaussian Processes in Massive Multi-Agent Systems (AAAI-19) Paper
Stochastic Variational Inference for Bayesian Sparse Gaussian Process Regression (IJCNN-19) Paper
A Generalized Stochastic Variational Bayesian Hyperparameter Learning Framework for Sparse Spectrum Gaussian Process Regression (AAAI-17) Paper
A Distributed Variational Inference Framework for Unifying Gaussian Process Regression Models (ICML-16) Paper
A Unifying Framework of Anytime Sparse Gaussian Process Regression Models with Stochastic Variational Inference for Big Data (ICML-15) Paper