I am an assistant professor in the Department of Industrial and Operations Engineering at the University of Michigan. I am also affiliated with the Michigan Institute for Computational Discovery and Engineering (MICDE), the Michigan Institute for Data Science (MIDAS), and the Michigan Center for Applied and Interdisciplinary Mathematics (MCAIM).
My research focuses on developing efficient and scalable computational methods for structured problems. In particular, I exploit inherent structures in optimization and machine learning problems, such as sparsity, low-rankness, and benign landscapes, to develop computational techniques that scale gracefully and enjoy strong guarantees. See the list of representative papers for more details.
My group’s research has been recognized by multiple awards, including second place in the 2023 INFORMS Junior Faculty Interest Group (JFIG) Paper Competition, the INFORMS Data Mining Best Student Paper Award (winner in 2018, finalist in 2023), and runner-up for the 2022 INFORMS Computing Society Best Student Paper Award. I was also honored to receive the 2024 North Campus Dean’s MLK Spirit Award.
My research is supported by an NSF CAREER Award (CCF-2337776), an NSF grant (DMS-2152776), an ONR grant, a MICDE Catalyst Grant, a MIDAS PODS Grant, a START Grant, and a DEI Faculty Grant.
I received my M.Sc. and Ph.D. degrees in Industrial Engineering and Operations Research from the University of California, Berkeley. I also received an M.Sc. degree in Electrical Engineering from Columbia University and a B.Sc. degree in Electrical Engineering from Sharif University of Technology.
Prospective PhD students for Fall 2025
I plan to recruit two PhD students for Fall 2025. Ideal candidates should have a strong mathematical background and an interest in the theoretical foundations of optimization and machine learning. To get an idea of my group’s research, please review our representative papers.
If you are interested in working with me, please email me with the subject line “PhD Applicant for Fall 2025” and attach your resume.
Contact:
Email: [email protected]
Phone: (734) 763-9744
News:
July 2024: Our paper Triple Component Matrix Factorization: Untangling Global, Local, and Noisy Components is conditionally accepted to appear in the Journal of Machine Learning Research.
May 2024: Delighted to have organized the AIventure workshop for Early College Alliance High School to explore real-life applications of machine learning in engineering.
May 2024: Our paper Convergence of Gradient Descent with Small Initialization for Unregularized Matrix Completion is accepted to appear in COLT 2024.
May 2024: I will serve as an Area Chair for NeurIPS 2024.
April 2024: Our new paper proposes a parametric approach that can solve mixed-integer quadratic programs over trees in quadratic time and memory, significantly outperforming Gurobi: A Parametric Approach for Solving Convex Quadratic Optimization with Indicators Over Trees. A Python implementation of our algorithm is available here. Joint work with Aaresh Bhathena (my student), Andrés Gómez, and Simge Küçükyavuz.
April 2024: In our new paper, we study a personalized variant of PCA: Given observation matrices from different but associated sources, each corrupted by sparse noise, can we recover their common and unique components? Triple Component Matrix Factorization: Untangling Global, Local, and Noisy Components. Joint work with Naichen Shi and Raed Al Kontar.
April 2024: My student, Geyu Liang, will intern at Amazon this summer.
March 2024: My student, Jianhao Ma, will intern at Meta FAIR Labs this summer.
March 2024: Our paper Robust Sparse Mean Estimation via Incremental Learning is accepted to appear in the ICLR 2024 Workshop on Bridging the Gap Between Practice and Theory in Deep Learning.
March 2024: Congrats to my student, Jianhao Ma, for receiving the Rackham Predoctoral Fellowship.
February 2024: Our new paper establishes the first global convergence guarantee for gradient descent on overparameterized matrix completion without any regularization: Convergence of Gradient Descent with Small Initialization for Unregularized Matrix Completion. Joint work with my student Jianhao Ma.
January 2024: Honored to be awarded the 2024 North Campus Dean’s MLK Spirit Award.
December 2023: Thrilled to receive the NSF CAREER Award.
December 2023: I will serve as a member of the Management and Education Committee for MICDE.
December 2023: Revised paper on the landscape of low-rank matrix recovery: Can Learning Be Explained By Local Optimality In Low-rank Matrix Recovery? Joint work with my student Jianhao Ma.
September 2023: Our paper Personalized Dictionary Learning for Heterogeneous Datasets is accepted to appear in NeurIPS 2023.
September 2023: Our paper On the Optimization Landscape of Burer-Monteiro Factorization: When do Global Solutions Correspond to Ground Truth? has won the second place award in the INFORMS Junior Faculty Interest Group (JFIG) Paper Competition.
August 2023: Our paper Heterogeneous Matrix Factorization: When Features Differ by Datasets is selected as one of the finalists for the INFORMS Data Mining Best Student Paper Competition.
August 2023: I will serve as an Area Chair for ICLR 2024.
August 2023: I will serve as an Area Chair for AISTATS 2024.
July 2023: New paper on the solution path of time-varying Markov random fields: Solution Path of Time-varying Markov Random Fields with Discrete Regularization. Joint work with Andrés Gómez.
May 2023: New paper on heterogeneous matrix factorization: Heterogeneous Matrix Factorization: When Features Differ by Datasets. Joint work with Naichen Shi and Raed Al Kontar.
May 2023: New paper on robust sparse mean estimation: Robust Sparse Mean Estimation via Incremental Learning. Joint work with my student Jianhao Ma, Rui Ren Chen, Yinghui He, and Wei Hu.
May 2023: New paper on dictionary learning: Personalized Dictionary Learning for Heterogeneous Datasets. Joint work with my student Geyu Liang, Naichen Shi, and Raed Al Kontar.
May 2023: Our paper Efficient Inference of Spatially-varying Gaussian Markov Random Fields with Applications in Gene Regulatory Networks is accepted to appear in the IEEE/ACM Transactions on Computational Biology and Bioinformatics.
April 2023: Congrats to Jianhao Ma for receiving the 2023 Murty Best Paper Prize for his paper Blessing of Depth in Linear Regression: Deeper Models Have Flatter Landscape Around the True Solution!
April 2023: Delighted to have organized a workshop for Early College Alliance High School to explore real-life applications of machine learning in engineering. See the link for more information.
April 2023: Our paper Preconditioned Gradient Descent for Overparameterized Nonconvex Burer–Monteiro Factorization with Global Optimality Certification is accepted to appear in the Journal of Machine Learning Research.
March 2023: I will serve as an Area Chair for NeurIPS 2023.
February 2023: New paper on the landscape of Burer-Monteiro factorization: On the Optimization Landscape of Burer-Monteiro Factorization: When do Global Solutions Correspond to Ground Truth? Joint work with my student Jianhao Ma.
February 2023: Our paper Global Convergence of Sub-gradient Method for Robust Matrix Recovery: Small Initialization, Noisy Measurements, and Over-parameterization is accepted to appear in the Journal of Machine Learning Research.
January 2023: Our paper Behind the Scenes of Gradient Descent: A Trajectory Analysis via Basis Function Decomposition is accepted to appear in ICLR 2023.
January 2023: Gave a talk at Michigan Medicine on “Scalable Learning of Dynamic Graphical Models with Combinatorial Structures: Beyond Maximum Likelihood Estimation”.
November 2022: Our paper Blessing of Nonconvexity in Deep Linear Models: Depth Flattens the Optimization Landscape Around the True Solution is selected as a Featured Paper (Spotlight, top 3%) in NeurIPS 2022.
October 2022: New paper on dictionary learning: Simple Alternating Minimization Provably Solves Complete Dictionary Learning. Joint work with my student Geyu Liang, Gavin Zhang, and Richard Y. Zhang.
October 2022: Congrats to Jianhao Ma for receiving the NeurIPS 2022 Scholar Award!
October 2022: New paper on the convergence of gradient descent: Behind the Scenes of Gradient Descent: A Trajectory Analysis via Basis Function Decomposition. Joint work with my students Jianhao Ma and Lingjun Guo.
September 2022: Our paper Blessing of Nonconvexity in Deep Linear Models: Depth Flattens the Optimization Landscape Around the True Solution is accepted to appear in NeurIPS 2022.
September 2022: Gave a talk at the CSP Seminar Series. Check out the video of my talk: “Blessing of Nonconvexity in Factorized Models“.
August 2022: Delighted to receive a DEI Faculty Grant on a collaborative partnership with low-income Detroit schools to promote interest in STEM. Joint work with Tiffany Wu from the School of Education.
August 2022: Our paper “A Graph-based Decomposition Method for Convex Quadratic Optimization with Indicators” has received the runner-up prize in the INFORMS Computing Society Best Student Paper Competition. See the press release.
August 2022: I will serve as an Area Chair for AISTATS 2023.
July 2022: New paper on deep linear models: Blessing of Nonconvexity in Deep Linear Models: Depth Flattens the Optimization Landscape Around the True Solution. Joint work with my student Jianhao Ma.
June 2022: New journal paper on spatially-varying graphical models and their application in gene regulatory networks: Efficient Inference of Spatially-varying Gaussian Markov Random Fields with Applications in Gene Regulatory Networks. Joint work with Visweswaran Ravikumar, my student Tong Xu, Prof. Wajd N. Al-Holou, and Prof. Arvind Rao.
June 2022: The paper A Graph-based Decomposition Method for Convex Quadratic Optimization with Indicators is accepted to appear in Mathematical Programming.
June 2022: New journal paper on low-rank matrix optimization: Preconditioned Gradient Descent for Overparameterized Nonconvex Burer–Monteiro Factorization with Global Optimality Certification. Joint work with Gavin Zhang and Richard Y. Zhang.
May 2022: Delighted to receive a Seeding to Accelerate Research Themes (START) grant from the College of Engineering at the University of Michigan. Joint work with Professors Qing Qu, Laura Balzano, Albert Berahas, and Eunshin Byon.
April 2022: Gave a talk at the Data Science Seminar at the Johns Hopkins University.
April 2022: Organized Winter 2022 IOE Seminar Series. See the link for the great lineup of speakers!
March 2022: Delighted to receive a multi-institute research grant from the National Science Foundation as a lead PI to study the scalable inference of spatio-temporal Markov Random Fields. Joint work with Prof. Arvind Rao and Prof. Andrés Gómez.
February 2022: Delighted to receive a research grant from the Office of Naval Research to study the optimization and algorithmic foundations of low-rank matrix factorization.
February 2022: New journal paper on over-parameterized robust matrix recovery: Global Convergence of Sub-gradient Method for Robust Matrix Recovery: Small Initialization, Noisy Measurements, and Over-parameterization. Joint work with my student Jianhao Ma.
October 2021: Received the Outstanding Reviewer Award from NeurIPS 2021.
October 2021: New journal paper on graph-based mixed-integer quadratic optimization: A Graph-based Decomposition Method for Convex Quadratic Optimization with Indicators. Joint work with Peijing Liu, Andrés Gómez, and Simge Küçükyavuz.
September 2021: Three papers are accepted to appear in NeurIPS 2021:
- Scalable Inference of Sparsely-changing Gaussian Markov Random Fields (poster)
- Preconditioned Gradient Descent for Over-parameterized Nonconvex Matrix Factorization (poster)
- Sign-RIP: A Robust Restricted Isometry Property for Low-rank Matrix Recovery (Workshop on Optimization for Machine Learning)
July 2021: Received a Catalyst Grant from MICDE (with Prof. Arvind Rao).
June 2021: Our paper Sample Complexity of Block-Sparse System Identification Problem is accepted to appear in IEEE Transactions on Control of Network Systems.
May 2021: Received a Propelling Original Data Science (PODS) grant from MIDAS (with Prof. Albert Berahas).
April 2021: Our paper On the Absence of Spurious Local Trajectories in Time-varying Nonconvex Optimization has been conditionally accepted to appear in IEEE Transactions on Automatic Control.
March 2021: Our paper Learning Partially Observed Linear Dynamical Systems from Logarithmic Number of Samples is accepted to appear in Learning for Dynamics & Control (L4DC).
February 2021: New paper on robust matrix recovery: Implicit Regularization of Sub-gradient Method in Robust Matrix Recovery: Don’t be Afraid of Outliers. Joint work with my student Jianhao Ma.
February 2021: New paper on the inference of time-varying Markov Random Fields: Scalable Inference of Sparsely-changing Markov Random Fields with Strong Statistical Guarantees. Joint work with Andrés Gómez.
November 2020: Our paper Smoothing Property of Load Variation Promotes Finding Global Solutions of Time-Varying Optimal Power Flow has been conditionally accepted to appear in IEEE Transactions on Control of Network Systems.
November 2020: A revised version of our paper On the Absence of Spurious Local Trajectories in Time-varying Nonconvex Optimization is available online.
October 2020: New paper on the efficient learning of partially-observed linear dynamical systems: Learning Partially Observed Linear Dynamical Systems from Logarithmic Number of Samples.
October 2020: I gave a talk on “Learning and Control of Linear Dynamical Systems in High Dimensions” at the University of Michigan Controls Seminar. The presentation is available online.
October 2020: I will be organizing a session on “Recent Advances in Learning, Optimization, and Control” at the INFORMS Annual Meeting. Check out the schedule here!
September 2020: Here is a (very) short overview of my research, presented at MIDAS Faculty Research Pitch.
September 2020: Our paper Smoothing Property of Load Variation Promotes Finding Global Solutions of Time-Varying Optimal Power Flow has received the 2020 INFORMS ENRE Best Student Paper Award.
August 2020: Our paper Absence of Spurious Local Trajectories in Time-Varying Optimization: A Control-Theoretic Perspective has received an Outstanding Student Paper Award at the IEEE Conference on Control Technology and Applications (CCTA).
July 2020: Our paper Efficient Learning of Distributed Linear-Quadratic Controllers is accepted to appear in the SIAM Journal on Control and Optimization.
June 2020: Our paper Load Variation Enables Escaping Poor Solutions of Time-Varying Optimal Power Flow has received the Best-of-the-Best Conference Paper Award of the 2020 Power & Energy Society General Meeting.
April 2020: Interview with UC Berkeley IEOR.
March 2020: Our paper Exact Guarantees on the Absence of Spurious Local Minima for Non-negative Rank-1 Robust Principal Component Analysis is accepted to appear in the Journal of Machine Learning Research (JMLR), 2020.