Rong Ge
Associate Professor of Computer Science
Current Appointments & Affiliations
- Associate Professor of Computer Science, Computer Science, Trinity College of Arts & Sciences 2021
- Assistant Professor of Mathematics, Mathematics, Trinity College of Arts & Sciences 2020
Background
Education, Training, & Certifications
- Ph.D., Princeton University 2013
Previous Appointments & Affiliations
- Assistant Professor of Computer Science, Computer Science, Trinity College of Arts & Sciences 2015 - 2021
In the News
- SEP 21, 2021: Office of Faculty Advancement
Research
Selected Grants
- Collaborative Research: Transferable, Hierarchical, Expressive, Optimal, Robust, Interpretable NETworks (THEORINET) awarded by Simons Foundation 2020 - 2025
- Collaborative Research: Transferable, Hierarchical, Expressive, Optimal, Robust, Interpretable NETworks (THEORINET) awarded by National Science Foundation 2020 - 2025
- CAREER: Optimization Landscape for Non-convex Functions - Towards Provable Algorithms for Neural Networks awarded by National Science Foundation 2019 - 2024
- HDR TRIPODS: Innovations in Data Science: Integrating Stochastic Modeling, Data Representation, and Algorithms awarded by National Science Foundation 2019 - 2023
- Ge Sloan Fellowship 2019 awarded by Alfred P. Sloan Foundation 2019 - 2023
- AF: Large: Collaborative Research: Nonconvex methods and models for learning: Towards algorithms with provable and interpretable guarantees awarded by National Science Foundation 2017 - 2022
Publications & Artistic Works
Selected Publications

Academic Articles
- Frandsen, A., and R. Ge. “Optimization landscape of Tucker decomposition.” Mathematical Programming 193, no. 2 (June 1, 2022): 687–712. https://doi.org/10.1007/s10107-020-01531-z.
- Ge, R., and T. Ma. “On the optimization landscape of tensor decompositions.” Mathematical Programming 193, no. 2 (June 1, 2022): 713–59. https://doi.org/10.1007/s10107-020-01579-x.
- Jin, C., P. Netrapalli, R. Ge, S. M. Kakade, and M. I. Jordan. “On Nonconvex Optimization for Machine Learning.” Journal of the ACM 68, no. 2 (March 1, 2021). https://doi.org/10.1145/3418526.
- Ge, R., H. Lee, and J. Lu. “Estimating normalizing constants for log-concave distributions: Algorithms and lower bounds.” Proceedings of the Annual ACM Symposium on Theory of Computing, June 8, 2020, 579–86. https://doi.org/10.1145/3357713.3384289.
- Wang, X., C. Wu, J. D. Lee, T. Ma, and R. Ge. “Beyond lazy training for over-parameterized tensor decomposition.” Advances in Neural Information Processing Systems 2020-December (January 1, 2020).
- Ge, R., Z. Li, R. Kuditipudi, and X. Wang. “Learning two-layer neural networks with symmetric inputs.” 7th International Conference on Learning Representations, ICLR 2019, January 1, 2019.
- Janzamin, M., R. Ge, J. Kossaifi, and A. Anandkumar. “Spectral learning on matrices and tensors.” Foundations and Trends in Machine Learning 12, no. 5–6 (January 1, 2019): 393–536. https://doi.org/10.1561/2200000057.
- Kuditipudi, R., X. Wang, H. Lee, Y. Zhang, Z. Li, W. Hu, S. Arora, and R. Ge. “Explaining landscape connectivity of low-cost solutions for multilayer nets.” Advances in Neural Information Processing Systems 32 (January 1, 2019).
- Arora, S., R. Ge, Y. Halpern, D. Mimno, A. Moitra, D. Sontag, Y. Wu, and M. Zhu. “Learning topic models — Provably and efficiently.” Communications of the ACM 61, no. 4 (April 1, 2018): 85–93. https://doi.org/10.1145/3186262.
- Anandkumar, A., R. Ge, and M. Janzamin. “Analyzing tensor power method dynamics in overcomplete regime.” Journal of Machine Learning Research 18 (April 1, 2017): 1–40.
- Huang, Q., R. Ge, S. Kakade, and M. Dahleh. “Minimal Realization Problems for Hidden Markov Models.” IEEE Transactions on Signal Processing 64, no. 7 (April 1, 2016): 1896–1904. https://doi.org/10.1109/TSP.2015.2510969.
- Arora, S., R. Ge, A. Moitra, and S. Sachdeva. “Provable ICA with Unknown Gaussian Noise, and Implications for Gaussian Mixtures and Autoencoders.” Algorithmica 72, no. 1 (May 1, 2015): 215–36. https://doi.org/10.1007/s00453-015-9972-2.
- Anandkumar, A., R. Ge, D. Hsu, S. M. Kakade, and M. Telgarsky. “Tensor decompositions for learning latent variable models.” Journal of Machine Learning Research 15 (August 1, 2014): 2773–2832.
- Huang, Q., R. Ge, S. Kakade, and M. Dahleh. “Minimal realization problem for Hidden Markov Models.” 2014 52nd Annual Allerton Conference on Communication, Control, and Computing, Allerton 2014, January 30, 2014, 4–11. https://doi.org/10.1109/ALLERTON.2014.7028428.
- Anandkumar, A., R. Ge, D. Hsu, and S. M. Kakade. “A tensor approach to learning mixed membership community models.” Journal of Machine Learning Research 15 (January 1, 2014): 2239–2312.
- Arora, S., A. Bhaskara, R. Ge, and T. Ma. “Provable bounds for learning some deep representations.” 31st International Conference on Machine Learning, ICML 2014 1 (January 1, 2014): 883–91.
- Arora, S., R. Ge, and A. Moitra. “New algorithms for learning incoherent and overcomplete dictionaries.” Journal of Machine Learning Research 35 (January 1, 2014): 779–806.
- Arora, S., R. Ge, and A. K. Sinop. “Towards a better approximation for SPARSEST CUT?” Proceedings, Annual IEEE Symposium on Foundations of Computer Science, FOCS, December 1, 2013, 270–79. https://doi.org/10.1109/FOCS.2013.37.
- Anandkumar, A., R. Ge, D. Hsu, and S. M. Kakade. “A tensor spectral approach to learning mixed membership community models.” Journal of Machine Learning Research 30 (January 1, 2013): 867–81.
- Arora, S., R. Ge, Y. Halpern, D. Mimno, A. Moitra, D. Sontag, Y. Wu, and M. Zhu. “A practical algorithm for topic modeling with provable guarantees.” 30th International Conference on Machine Learning, ICML 2013, no. PART 2 (January 1, 2013): 939–47.
- Arora, S., R. Ge, A. Moitra, and S. Sachdeva. “Provable ICA with unknown Gaussian noise, with implications for Gaussian mixtures and autoencoders.” Advances in Neural Information Processing Systems 3 (December 1, 2012): 2375–83.
- Arora, S., R. Ge, and A. Moitra. “Learning topic models - Going beyond SVD.” Proceedings, Annual IEEE Symposium on Foundations of Computer Science, FOCS, December 1, 2012, 1–10. https://doi.org/10.1109/FOCS.2012.49.
- Arora, S., R. Ge, S. Sachdeva, and G. Schoenebeck. “Finding overlapping communities in social networks: Toward a rigorous approach.” Proceedings of the ACM Conference on Electronic Commerce, July 10, 2012, 37–54. https://doi.org/10.1145/2229012.2229020.
- Arora, S., R. Ge, R. Kannan, and A. Moitra. “Computing a nonnegative matrix factorization - Provably.” Proceedings of the Annual ACM Symposium on Theory of Computing, June 26, 2012, 145–61. https://doi.org/10.1145/2213977.2213994.
- Dai, D., and R. Ge. “Another sub-exponential algorithm for the simple stochastic game.” Algorithmica 61, no. 4 (December 1, 2011): 1092–1104. https://doi.org/10.1007/s00453-010-9413-1.
- Arora, S., and R. Ge. “New tools for graph coloring.” Lecture Notes in Computer Science 6845 LNCS (September 8, 2011): 1–12. https://doi.org/10.1007/978-3-642-22935-0_1.
- Arora, S., and R. Ge. “New algorithms for learning in presence of errors.” Lecture Notes in Computer Science 6755 LNCS, no. PART 1 (July 11, 2011): 403–15. https://doi.org/10.1007/978-3-642-22006-7_34.
- Arora, S., B. Barak, M. Brunnermeier, and R. Ge. “Computational complexity and information asymmetry in financial products.” Communications of the ACM 54, no. 5 (May 1, 2011): 101–7. https://doi.org/10.1145/1941487.1941511.
- Dai, D., and R. Ge. “New results on simple stochastic games.” Lecture Notes in Computer Science 5878 LNCS (December 1, 2009): 1014–23. https://doi.org/10.1007/978-3-642-10631-6_102.
Other Articles
- Azar, Y., A. Ganesh, R. Ge, and D. Panigrahi. “Online Service with Delay.” ACM Transactions on Algorithms, August 1, 2021. https://doi.org/10.1145/3459925.
Conference Papers
- Anand, K., R. Ge, A. Kumar, and D. Panigrahi. “A Regression Approach to Learning-Augmented Online Algorithms.” In Advances in Neural Information Processing Systems, 36:30504–17, 2021.
- Ge, R., Y. Ren, X. Wang, and M. Zhou. “Understanding Deflation Process in Over-parametrized Tensor Decomposition.” In Advances in Neural Information Processing Systems, 2:1299–1311, 2021.
- Anand, K., and R. Ge. “Customizing ML predictions for online algorithms.” In 37th International Conference on Machine Learning, ICML 2020, PartF168147-1:280–90, 2020.
- Cheng, Y., I. Diakonikolas, R. Ge, and M. Soltanolkotabi. “High-dimensional robust mean estimation via gradient descent.” In 37th International Conference on Machine Learning, ICML 2020, PartF168147-3:1746–56, 2020.
- Cheng, Y., I. Diakonikolas, and R. Ge. “High-dimensional robust mean estimation in nearly-linear time.” In Proceedings of the Annual ACM-SIAM Symposium on Discrete Algorithms, 2755–71, 2019. https://doi.org/10.1137/1.9781611975482.171.
- Frandsen, A., and R. Ge. “Understanding composition of word embeddings via tensor decomposition.” In 7th International Conference on Learning Representations, ICLR 2019, 2019.
- Ge, R., S. M. Kakade, R. Kidambi, and P. Netrapalli. “The step decay schedule: A near optimal, geometrically decaying learning rate procedure for least squares.” In Advances in Neural Information Processing Systems, Vol. 32, 2019.
- Ge, R., Z. Li, R. Kuditipudi, and X. Wang. “Learning two-layer neural networks with symmetric inputs.” In 7th International Conference on Learning Representations, ICLR 2019, 2019.
- Arora, S., R. Ge, B. Neyshabur, and Y. Zhang. “Stronger generalization bounds for deep nets via a compression approach.” In 35th International Conference on Machine Learning, ICML 2018, 1:390–418, 2018.
- Fazel, M., R. Ge, S. M. Kakade, and M. Mesbahi. “Global Convergence of Policy Gradient Methods for the Linear Quadratic Regulator.” In 35th International Conference on Machine Learning, ICML 2018, 4:2385–2413, 2018.
- Ge, R., H. Lee, and A. Risteski. “Beyond log-concavity: Provable guarantees for sampling multi-modal distributions using simulated tempering Langevin Monte Carlo.” In Advances in Neural Information Processing Systems, 2018-December:7847–56, 2018.
- Ge, R., J. D. Lee, and T. Ma. “Learning one-hidden-layer neural networks with landscape design.” In 6th International Conference on Learning Representations, ICLR 2018 Conference Track Proceedings, 2018.
- Jin, C., R. Ge, L. T. Liu, and M. I. Jordan. “On the local minima of the empirical risk.” In Advances in Neural Information Processing Systems, 2018-December:4896–4905, 2018.
- Arora, S., R. Ge, T. Ma, and A. Risteski. “Provable learning of noisy-or networks.” In Proceedings of the Annual ACM Symposium on Theory of Computing, Part F128415:1057–66, 2017. https://doi.org/10.1145/3055399.3055482.
- Azar, Y., A. Ganesh, R. Ge, and D. Panigrahi. “Online service with delay.” In Proceedings of the Annual ACM Symposium on Theory of Computing, Part F128415:551–63, 2017. https://doi.org/10.1145/3055399.3055475.
- Arora, S., R. Ge, Y. Liang, T. Ma, and Y. Zhang. “Generalization and equilibrium in generative adversarial nets (GANs).” In 34th International Conference on Machine Learning, ICML 2017, 1:322–49, 2017.
- Ge, R., C. Jin, and Y. Zheng. “No spurious local minima in nonconvex low rank problems: A unified geometric analysis.” In 34th International Conference on Machine Learning, ICML 2017, 3:1990–2028, 2017.
- Ge, R., and T. Ma. “On the optimization landscape of tensor decompositions.” In Advances in Neural Information Processing Systems, 2017-December:3654–64, 2017.
- Jin, C., R. Ge, P. Netrapalli, S. M. Kakade, and M. I. Jordan. “How to escape saddle points efficiently.” In 34th International Conference on Machine Learning, ICML 2017, 4:2727–52, 2017.
- Anandkumar, A., and R. Ge. “Efficient approaches for escaping higher order saddle points in non-convex optimization.” In Journal of Machine Learning Research, 49:81–102, 2016.
- Arora, S., R. Ge, F. Koehler, T. Ma, and A. Moitra. “Provable algorithms for inference in topic models.” In 33rd International Conference on Machine Learning, ICML 2016, 6:4176–84, 2016.
- Arora, S., R. Ge, R. Kannan, and A. Moitra. “Computing a nonnegative matrix factorization - provably.” In SIAM Journal on Computing, 45:1582–1611, 2016. https://doi.org/10.1137/130913869.
- Ge, R., C. Jin, S. Kakade, P. Netrapalli, and A. Sidford. “Efficient algorithms for large-scale generalized eigenvector computation and canonical correlation analysis.” In 33rd International Conference on Machine Learning, ICML 2016, 6:4009–26, 2016.
- Ge, R., J. D. Lee, and T. Ma. “Matrix completion has no spurious local minimum.” In Advances in Neural Information Processing Systems, 2981–89, 2016.
- Ge, R., and J. Zou. “Rich component analysis.” In 33rd International Conference on Machine Learning, ICML 2016, 3:2238–55, 2016.
- Ge, R., and T. Ma. “Decomposing overcomplete 3rd order tensors using sum-of-squares algorithms.” In Leibniz International Proceedings in Informatics (LIPIcs), 40:829–49, 2015. https://doi.org/10.4230/LIPIcs.APPROX-RANDOM.2015.829.
- Ge, R., Q. Huang, and S. M. Kakade. “Learning mixtures of Gaussians in high dimensions.” In Proceedings of the Annual ACM Symposium on Theory of Computing, 14-17-June-2015:761–70, 2015. https://doi.org/10.1145/2746539.2746616.
- Anandkumar, A., R. Ge, D. Hsu, S. M. Kakade, and M. Telgarsky. “Tensor decompositions for learning latent variable models (A survey for ALT).” In Lecture Notes in Computer Science, 9355:19–38, 2015. https://doi.org/10.1007/978-3-319-24486-0_2.
- Anandkumar, A., R. Ge, and M. Janzamin. “Learning overcomplete latent variable models through tensor methods.” In Journal of Machine Learning Research, Vol. 40, 2015.
- Arora, S., R. Ge, T. Ma, and A. Moitra. “Simple, efficient, and neural algorithms for sparse coding.” In Journal of Machine Learning Research, Vol. 40, 2015.
- Frostig, R., R. Ge, S. M. Kakade, and A. Sidford. “Competing with the empirical risk minimizer in a single pass.” In Journal of Machine Learning Research, Vol. 40, 2015.
- Frostig, R., R. Ge, S. M. Kakade, and A. Sidford. “Un-regularizing: Approximate proximal point and faster stochastic algorithms for empirical risk minimization.” In 32nd International Conference on Machine Learning, ICML 2015, 3:2530–38, 2015.
- Ge, R., F. Huang, C. Jin, and Y. Yuan. “Escaping from saddle points: Online stochastic gradient for tensor decomposition.” In Journal of Machine Learning Research, Vol. 40, 2015.
- Ge, R., and J. Zou. “Intersecting faces: Non-negative matrix factorization with new guarantees.” In 32nd International Conference on Machine Learning, ICML 2015, 3:2285–93, 2015.
Teaching & Mentoring
Recent Courses
- COMPSCI 391: Independent Study 2023
- COMPSCI 394: Research Independent Study 2023
- COMPSCI 532: Design and Analysis of Algorithms 2023
- COMPSCI 330: Introduction to the Design and Analysis of Algorithms 2022
- COMPSCI 394: Research Independent Study 2022
- COMPSCI 330: Introduction to the Design and Analysis of Algorithms 2021
- COMPSCI 394: Research Independent Study 2021
- COMPSCI 532: Design and Analysis of Algorithms 2021
- COMPSCI 590: Advanced Topics in Computer Science 2021