
Yuansi Chen

Assistant Research Professor of Statistical Science

Selected Publications

Minimax Mixing Time of the Metropolis-Adjusted Langevin Algorithm for Log-Concave Sampling

Journal Article · September 27, 2021. We study the mixing time of the Metropolis-adjusted Langevin algorithm (MALA) for sampling from a log-smooth and strongly log-concave distribution. We establish its optimal minimax mixing time under a warm start. Our main contribution is two-fold. First, f ...

An Almost Constant Lower Bound of the Isoperimetric Coefficient in the KLS Conjecture

Journal Article · November 27, 2020. We prove an almost constant lower bound of the isoperimetric coefficient in the KLS conjecture. The lower bound has the dimension dependency $d^{-o_d(1)}$. When the dimension is large enough, our lower bound is tighter than the previous best bound which ha ...

Domain adaptation under structural causal models

Journal Article · October 29, 2020. Domain adaptation (DA) arises as an important problem in statistical machine learning when the source data used to train a model is different from the target data used to test the model. Recent advances in DA have mainly been application-driven and have la ...

Sampling can be faster than optimization.

Journal Article · Proceedings of the National Academy of Sciences of the United States of America · October 2019. Optimization algorithms and Monte Carlo sampling algorithms have provided the computational foundations for the rapid growth in applications of statistical machine learning in recent years. There is, however, limited theoretical understanding of the relati ...

Fast mixing of Metropolized Hamiltonian Monte Carlo: Benefits of multi-step gradients

Journal Article · May 29, 2019. Hamiltonian Monte Carlo (HMC) is a state-of-the-art Markov chain Monte Carlo sampling algorithm for drawing samples from smooth probability densities over continuous spaces. We study the variant most widely used in practice, Metropolized HMC with the Stö ...
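For readers unfamiliar with the algorithm this entry studies, the following is a minimal, illustrative sketch of one step of Metropolized HMC (leapfrog integration followed by a Metropolis accept/reject correction). It is not code from the paper, and the function names (`hmc_step`, `log_p`, `grad_log_p`) are my own labels:

```python
import numpy as np

def hmc_step(x, log_p, grad_log_p, step, n_leapfrog):
    """One Metropolized HMC step targeting the density exp(log_p)."""
    p = np.random.randn(*x.shape)  # resample Gaussian momentum
    x_new, p_new = x.copy(), p.copy()
    # Leapfrog: half momentum step, alternating full steps, half step at the end.
    p_new = p_new + 0.5 * step * grad_log_p(x_new)
    for i in range(n_leapfrog):
        x_new = x_new + step * p_new
        g = grad_log_p(x_new)
        p_new = p_new + (step if i < n_leapfrog - 1 else 0.5 * step) * g
    # Metropolis correction using the Hamiltonian (potential + kinetic energy),
    # which makes the chain exactly invariant for the target despite discretization error.
    h_old = -log_p(x) + 0.5 * np.sum(p ** 2)
    h_new = -log_p(x_new) + 0.5 * np.sum(p_new ** 2)
    return x_new if np.log(np.random.rand()) < h_old - h_new else x
```

The "multi-step gradients" in the title refer to taking several leapfrog steps (`n_leapfrog > 1`) per proposal, which is the regime the paper's mixing-time analysis addresses.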

The DeepTune framework for modeling and characterizing neurons in visual cortex area V4

Journal Article · November 9, 2018. Deep neural network models have recently been shown to be effective in predicting single-neuron responses in primate visual cortex area V4. Despite their high predictive accuracy, these models are generally difficu ...

Stability and Convergence Trade-off of Iterative Optimization Algorithms

Journal Article · April 4, 2018. The overall performance or expected excess risk of an iterative machine learning algorithm can be decomposed into training error and generalization error. While the former is controlled by its convergence analysis, the latter can be tightly handled by algo ...

Log-concave sampling: Metropolis-Hastings algorithms are fast

Journal Article · Journal of Machine Learning Research, 2019 · January 8, 2018. We consider the problem of sampling from a strongly log-concave density in $\mathbb{R}^d$, and prove a non-asymptotic upper bound on the mixing time of the Metropolis-adjusted Langevin algorithm (MALA). The method draws samples by simulating a Markov chain ...
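As context for this entry and the minimax mixing-time paper above, here is a minimal sketch of one MALA transition (Langevin proposal plus Metropolis correction). This is illustrative only, not the paper's method or analysis; the names `mala_step`, `log_p`, and `grad_log_p` are my own:

```python
import numpy as np

def mala_step(x, log_p, grad_log_p, step):
    """One Metropolis-adjusted Langevin step targeting the density exp(log_p)."""
    # Langevin proposal: a gradient-ascent drift plus Gaussian noise.
    y = x + step * grad_log_p(x) + np.sqrt(2 * step) * np.random.randn(*x.shape)

    def log_q(a, b):
        # Log density (up to a constant) of proposing b from a.
        return -np.sum((b - a - step * grad_log_p(a)) ** 2) / (4 * step)

    # Metropolis-Hastings accept/reject, which corrects the discretization
    # bias of the unadjusted Langevin proposal.
    log_alpha = log_p(y) + log_q(y, x) - log_p(x) - log_q(x, y)
    return y if np.log(np.random.rand()) < log_alpha else x
```

Iterating `mala_step` yields a Markov chain whose stationary distribution is exactly the target; the papers above quantify how many iterations are needed to get close to stationarity.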

Fast MCMC sampling algorithms on polytopes

Journal Article · The Journal of Machine Learning Research · October 23, 2017. We propose and analyze two new MCMC sampling algorithms, the Vaidya walk and the John walk, for generating samples from the uniform distribution over a polytope. Both random walks are sampling algorithms derived from interior point methods. The former is b ...