Proximal Gradient Reinforcement Learning, 01/2013 - 05/2015 [UAI-2015, IJCAI-2016]
  • Goal: For 30 years, researchers in reinforcement learning have sought a true stochastic gradient temporal difference (TD) learning method. Another long-standing open problem is a sample complexity analysis of TD learning algorithms.
  • This is the first work to establish a first-order stochastic optimization framework for temporal difference learning, enabling acceleration, regularization, and sample complexity analysis.
  • Prof. Richard Sutton praised this work as "the best attempts to make TD methods with the robust convergence properties of stochastic gradient descent."
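For context, the gradient-TD family that this work recasts as first-order stochastic optimization can be sketched as follows. This is a minimal GTD2-style update with linear function approximation; the function name, step sizes, and features are illustrative, not the accelerated/proximal algorithms from the papers above.

```python
import numpy as np

def gtd2_step(theta, w, phi, phi_next, reward, gamma, alpha, beta):
    """One GTD2-style update with linear function approximation.

    theta: value-function weights; w: auxiliary weights that track the
    expected TD error projected onto the feature space (illustrative sketch).
    """
    delta = reward + gamma * phi_next @ theta - phi @ theta  # TD error
    # Main weights follow a stochastic-gradient correction of the projected
    # Bellman error objective rather than the semi-gradient TD(0) rule.
    theta = theta + alpha * (phi - gamma * phi_next) * (phi @ w)
    # Auxiliary weights solve a least-squares subproblem online.
    w = w + beta * (delta - phi @ w) * phi
    return theta, w
```

Because both updates are genuine stochastic gradient steps, standard first-order tools (proximal operators, acceleration, finite-sample bounds) become applicable, which is the point of the framework.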

Sparse Learning Models, 01/2014 - 03/2016 [UAI-2016, AAAI-2016]
  • Goal: Improve the learning ability of several notable sparse supervised learning models, including the Lasso and the Dantzig Selector.
  • Dantzig Selector with an Approximately Optimal Denoising Matrix: the Dantzig Selector is notable for feature selection and sparse signal recovery. Can the sparse signal recovery ability of the vanilla Dantzig Selector be improved with very little extra effort?
  • Uncorrelated Group Lasso: Group Lasso captures "sparsity among groups"; how can "sparsity inside each group" be captured as well?
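For reference, the vanilla Dantzig Selector mentioned above is a linear program, which is why it admits cheap modifications. Below is a minimal sketch of solving it with `scipy.optimize.linprog`; the helper name and test data are illustrative, and the denoising-matrix variant from the project is not shown.

```python
import numpy as np
from scipy.optimize import linprog

def dantzig_selector(X, y, lam):
    """Solve the Dantzig Selector as a linear program:

        min ||beta||_1   s.t.   ||X^T (y - X beta)||_inf <= lam

    using the standard split with auxiliary variables u >= |beta|.
    """
    n, p = X.shape
    G = X.T @ X
    c = np.concatenate([np.zeros(p), np.ones(p)])  # minimize sum(u)
    I = np.eye(p)
    Z = np.zeros((p, p))
    A_ub = np.block([
        [ I, -I],   #  beta - u <= 0
        [-I, -I],   # -beta - u <= 0
        [ G,  Z],   #  X^T X beta <= X^T y + lam
        [-G,  Z],   # -X^T X beta <= -X^T y + lam
    ])
    b_ub = np.concatenate([np.zeros(2 * p), X.T @ y + lam, lam - X.T @ y])
    bounds = [(None, None)] * p + [(0, None)] * p
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
    return res.x[:p]
```

The constraint matrix makes the "little extra effort" claim concrete: replacing the correlation operator `X^T` in the constraint with a better-chosen denoising matrix changes only `G`, `b_ub`, and nothing else about the solver.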

Transfer Learning, Domain Adaptation and Multi-task Learning with Sparsity and Geometric Structure, 01/2012 - 01/2014 [UMCS-2012]
  • Goal: Explore two types of intrinsic structure in data, sparsity/low-rank structure and manifold geometry, in transfer learning, domain adaptation, and multi-task learning.
  • Sparse Manifold Alignment: aims to reach a better trade-off between preserving cross-domain similarity and maintaining the uniqueness of each task.
  • Manifold learning helps preserve the latent intrinsic structure shared across domains, while sparsity helps prune out domain-specific features. The algorithm is amenable to MapReduce implementation, and the work has been applied to multilingual machine translation, image alignment, social network analysis, etc.
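The manifold-preservation ingredient above typically enters through a graph Laplacian smoothness penalty. Below is a minimal sketch of that building block; the Gaussian similarity kernel and function name are assumptions for illustration, not the paper's exact construction.

```python
import numpy as np

def graph_laplacian(X, sigma=1.0):
    """Unnormalized graph Laplacian L = D - W from a Gaussian similarity graph.

    The quadratic form f^T L f = (1/2) * sum_ij W_ij (f_i - f_j)^2 penalizes
    functions that vary sharply between nearby points, which is the standard
    way a manifold-regularization term preserves local geometry.
    (Illustrative sketch; kernel choice is an assumption.)
    """
    sq_dists = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    W = np.exp(-sq_dists / (2.0 * sigma ** 2))  # pairwise similarities
    np.fill_diagonal(W, 0.0)                    # no self-loops
    return np.diag(W.sum(axis=1)) - W
```

In an alignment setting, a joint Laplacian over both domains plays this role, while an added sparsity penalty on the mapping prunes domain-specific features.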

Sparse Reinforcement Learning, 09/2010 - 09/2014 [UAI-2016, UAI-2012, NIPS-2012, NIPS-2010]
  • Goal: How can modern optimization help design regularized reinforcement learning algorithms?
  • SparseQ: A stochastic variational inequality formulation is used to derive the first sparse Q-learning algorithm.
  • RO-TD: A dual-norm representation is applied to enable regularized off-policy TD learning.
  • ODDS-TD: The Dantzig Selector with an approximately optimal denoising matrix is applied to improve DS-TD; the resulting performance surpasses the earlier DS-TD and BPDN-TD methods for sparse reinforcement learning.
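To make "regularized reinforcement learning" concrete, below is a minimal ISTA-style sketch of an ℓ1-regularized TD(0) step with linear function approximation: a semi-gradient TD update followed by a soft-thresholding proximal step. This is an illustrative sketch of the general recipe, not the SparseQ, RO-TD, or ODDS-TD algorithms themselves.

```python
import numpy as np

def soft_threshold(x, tau):
    """Proximal operator of tau * ||x||_1 (elementwise soft-thresholding)."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def l1_td_step(theta, phi, phi_next, reward, gamma, alpha, lam):
    """One TD(0) semi-gradient step followed by an l1 proximal step.

    Illustrative sketch: the shrinkage drives irrelevant feature weights
    exactly to zero, which is the sense in which the value function
    (or Q-function) becomes sparse.
    """
    delta = reward + gamma * phi_next @ theta - phi @ theta  # TD error
    theta = theta + alpha * delta * phi                      # TD(0) update
    return soft_threshold(theta, alpha * lam)                # sparsify
```

Modern optimization enters exactly here: swapping the proximal operator changes the regularizer, and replacing the semi-gradient step with a true gradient-TD step recovers convergent regularized off-policy variants.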