
Gaussian processes are often accused of being unable to learn representations the way deep models do. This is untrue: the key lies in kernel learning. Many approaches in the literature engineer sophisticated kernels from simpler ones by relying on the closure properties of kernels, but this process is still handcrafted. I am interested in techniques where the kernel is learnt from the data itself; this can be achieved by reparameterizing the kernel function in terms of basis functions or spectral densities and inferring these quantities through standard Bayesian non-parametric inference.
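To make the idea concrete, below is a minimal NumPy sketch of one standard spectral reparameterization, the spectral mixture kernel of Wilson & Adams (2013), whose spectral density is a mixture of Gaussians; learning the mixture weights, means, and scales from data amounts to learning the kernel. The function names, toy data, and noise level are illustrative assumptions, not code from any of the papers listed below.

import numpy as np

def spectral_mixture_kernel(x1, x2, weights, means, scales):
    # 1-D spectral mixture kernel: k(tau) = sum_q w_q exp(-2 pi^2 tau^2 v_q) cos(2 pi tau mu_q).
    # The kernel is defined through its spectral density (a mixture of Q Gaussians),
    # so the spectral parameters fully determine the covariance function.
    tau = x1[:, None] - x2[None, :]          # pairwise differences
    K = np.zeros_like(tau)
    for w, mu, v in zip(weights, means, scales):
        K += w * np.exp(-2 * np.pi**2 * tau**2 * v) * np.cos(2 * np.pi * tau * mu)
    return K

def log_marginal_likelihood(params, x, y, Q, noise=0.1):
    # GP log marginal likelihood as a function of the spectral parameters.
    # Log-parameterization keeps weights, means, and scales positive.
    w, mu, v = np.exp(params.reshape(3, Q))
    K = spectral_mixture_kernel(x, x, w, mu, v) + noise**2 * np.eye(len(x))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))   # K^{-1} y via Cholesky
    return (-0.5 * y @ alpha
            - np.sum(np.log(np.diag(L)))
            - 0.5 * len(x) * np.log(2 * np.pi))

# Toy data with a single dominant frequency (illustrative only).
rng = np.random.default_rng(0)
x = np.linspace(0, 10, 50)
y = np.sin(2 * np.pi * 0.5 * x) + 0.1 * rng.standard_normal(50)
theta0 = np.log(rng.uniform(0.1, 1.0, size=3 * 2))       # Q = 2 components
print(log_marginal_likelihood(theta0, x, y, Q=2))

Optimizing this objective over the log-spectral parameters gives a point estimate of the kernel; placing priors on those parameters and sampling them with MCMC instead yields the fully Bayesian treatment.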

V. R. Lalchand and C. E. Rasmussen. Approximate Inference for Fully Bayesian Gaussian Process Regression. In Proceedings of the 2nd Symposium on Advances in Approximate Bayesian Inference, volume 118 of Proceedings of Machine Learning Research, pages 1–12. PMLR, 2020.

V. R. Lalchand. Extracting More from Boosted Decision Trees: A High Energy Physics Case Study. Second Workshop on Machine Learning and the Physical Sciences, NeurIPS 2019, Vancouver, Canada.

V. R. Lalchand and A. C. Faul. A Fast and Greedy Subset-of-Data (SoD) Scheme for Sparsification in Gaussian Processes. 38th International Workshop on Bayesian Inference and Maximum Entropy Methods in Science and Engineering, August 2018.

P. Treleaven, M. Galas and V. Lalchand. Algorithmic Trading Review. Communications of the ACM, 56(11), November 2013.