Home

 

I am an assistant professor in the Department of Industrial and Operations Engineering at the University of Michigan. I am also affiliated with the Michigan Institute for Computational Discovery and Engineering (MICDE), the Michigan Institute for Data Science (MIDAS), the Michigan Center for Applied and Interdisciplinary Mathematics (MCAIM), and the Controls Group.

My research focuses on developing large-scale computational and statistical methods for structured problems. In particular, I develop new techniques in optimization and machine learning to solve massive-scale and data-driven problems.

My research is supported by the National Science Foundation, the Office of Naval Research, a Michigan Institute for Computational Discovery and Engineering (MICDE) Catalyst Grant, a Michigan Institute for Data Science (MIDAS) Propelling Original Data Science (PODS) Grant, a College of Engineering Seeding To Accelerate Research Themes (START) Grant, and a DEI Faculty Grant.

 

I received my M.Sc. and Ph.D. degrees in Industrial Engineering and Operations Research from the University of California, Berkeley. I also received an M.Sc. degree in Electrical Engineering from Columbia University, and a B.Sc. degree in Electrical Engineering from Sharif University of Technology.

 

Contact:

Email: fattahi@umich.edu
Phone: (734) 763-9744


Prospective PhD students: I am always on the lookout for PhD students to join my research group. If you are interested, please send me an email with your CV and apply to our PhD program.


News:

November 2022: Our paper Blessing of Nonconvexity in Deep Linear Models: Depth Flattens the Optimization Landscape Around the True Solution was selected as a Featured Paper (Spotlight, top 3%) at NeurIPS 2022.

October 2022: New paper on dictionary learning: Simple Alternating Minimization Provably Solves Complete Dictionary Learning. Joint work with my student Geyu Liang, Gavin Zhang, and Richard Y. Zhang.

October 2022: Congrats to Jianhao Ma for receiving the NeurIPS 2022 Scholar Award!

October 2022: Our paper Global Convergence of Sub-gradient Method for Robust Matrix Recovery: Small Initialization, Noisy Measurements, and Over-parameterization has been conditionally accepted to appear in Journal of Machine Learning Research.

October 2022: New paper on the convergence of gradient descent: Behind the Scenes of Gradient Descent: A Trajectory Analysis via Basis Function Decomposition. Joint work with my students Jianhao Ma and Lingjun Guo.

September 2022: Our paper Blessing of Nonconvexity in Deep Linear Models: Depth Flattens the Optimization Landscape Around the True Solution to appear in NeurIPS 2022.

September 2022: Gave a talk at the CSP Seminar Series. Check out the video of my talk: “Blessing of Nonconvexity in Factorized Models”.

August 2022: Delighted to receive a DEI Faculty Grant for a collaborative partnership with low-income Detroit schools to promote interest in STEM. Joint work with Tiffany Wu from the School of Education.

August 2022: Our paper “A Graph-based Decomposition Method for Convex Quadratic Optimization with Indicators” has received the INFORMS Computing Society Best Student Paper Prize (Runner-Up). See the press release.

August 2022: I will serve as an Area Chair for AISTATS 2023.

July 2022: New paper on deep linear models: Blessing of Nonconvexity in Deep Linear Models: Depth Flattens the Optimization Landscape Around the True Solution. Joint work with my student Jianhao Ma.

June 2022: New journal paper on spatially-varying graphical models and their application in gene regulatory networks: Efficient Inference of Spatially-varying Gaussian Markov Random Fields with Applications in Gene Regulatory Networks. Joint work with Visweswaran Ravikumar, my student Tong Xu, Prof. Wajd N. Al-Holou, and Prof. Arvind Rao.

June 2022: The paper A Graph-based Decomposition Method for Convex Quadratic Optimization with Indicators to appear in Mathematical Programming.

June 2022: New journal paper on low-rank matrix optimization: Preconditioned Gradient Descent for Overparameterized Nonconvex Burer–Monteiro Factorization with Global Optimality Certification. Joint work with Gavin Zhang and Richard Y. Zhang.

May 2022: Delighted to receive a Seeding to Accelerate Research Themes (START) grant from the College of Engineering at the University of Michigan. Joint work with Professors Qing Qu, Laura Balzano, Albert Berahas, and Eunshin Byon.

April 2022: Gave a talk at the Data Science Seminar at the Johns Hopkins University.

April 2022: Organized Winter 2022 IOE Seminar Series. See the link for the great lineup of speakers!

March 2022: Delighted to receive a multi-institute research grant from the National Science Foundation as a lead PI to study the scalable inference of spatio-temporal Markov Random Fields. Joint work with Prof. Arvind Rao and Prof. Andres Gomez. 

February 2022: New journal paper on over-parameterized robust matrix recovery: Global Convergence of Sub-gradient Method for Robust Matrix Recovery: Small Initialization, Noisy Measurements, and Over-parameterization. Joint work with my student Jianhao Ma.

February 2022: Delighted to receive a research grant from the Office of Naval Research to study the optimization and algorithmic foundations of low-rank matrix factorization.

October 2021: Received the Outstanding Reviewer Award from NeurIPS 2021. 

October 2021: New journal paper on graph-based mixed-integer quadratic optimization: A Graph-based Decomposition Method for Convex Quadratic Optimization with Indicators. Joint work with Peijing Liu, Andres Gomez, and Simge Küçükyavuz.

September 2021: Three papers to appear in NeurIPS 2021.

July 2021: Received Catalyst Grant from MICDE (with Prof. Arvind Rao).

June 2021: Our paper Sample Complexity of Block-Sparse System Identification Problem to appear in IEEE Transactions on Control of Network Systems.

May 2021: Received Propelling Original Data Science (PODS) grant from MIDAS (with Prof. Albert Berahas).

April 2021: Our paper On the Absence of Spurious Local Trajectories in Time-varying Nonconvex Optimization has been conditionally accepted to appear in IEEE Transactions on Automatic Control.

March 2021: Our paper Learning Partially Observed Linear Dynamical Systems from Logarithmic Number of Samples to appear in 3rd Annual Learning for Dynamics & Control Conference.

February 2021: New paper on robust matrix recovery: Implicit Regularization of Sub-gradient Method in Robust Matrix Recovery: Don’t be Afraid of Outliers. Joint work with my student Jianhao Ma.

February 2021: New paper on the inference of time-varying Markov Random Fields: Scalable Inference of Sparsely-changing Markov Random Fields with Strong Statistical Guarantees. Joint work with Andres Gomez.

November 2020: Our paper Smoothing Property of Load Variation Promotes Finding Global Solutions of Time-Varying Optimal Power Flow has been conditionally accepted to appear in IEEE Transactions on Control of Network Systems.

November 2020: A revised version of our paper On the Absence of Spurious Local Trajectories in Time-varying Nonconvex Optimization is available online.  

October 2020: New paper on the efficient learning of partially-observed linear dynamical systems: Learning Partially Observed Linear Dynamical Systems from Logarithmic Number of Samples.

October 2020: I gave a talk on “learning and control of linear dynamical systems in high dimensions” at the University of Michigan Controls Seminar. The presentation is available online.

October 2020: I will be organizing a session on “Recent Advances in Learning, Optimization, and Control” at INFORMS Annual Meeting. Check out the schedule here!

September 2020: Here is a (very) short overview of my research, presented at MIDAS Faculty Research Pitch.

September 2020: Our paper Smoothing Property of Load Variation Promotes Finding Global Solutions of Time-Varying Optimal Power Flow has received the 2020 INFORMS ENRE Best Student Paper Award.  

August 2020: Our paper Absence of Spurious Local Trajectories in Time-Varying Optimization: A Control-Theoretic Perspective has received an Outstanding Student Paper Award at the IEEE Conference on Control Technology and Applications (CCTA).

July 2020: Our paper “Efficient Learning of Distributed Linear-Quadratic Controllers” to appear in SIAM Journal on Control and Optimization.

June 2020: Our paper “Load Variation Enables Escaping Poor Solutions of Time-Varying Optimal Power Flow” has received the Best-of-the-Best Conference Paper Award at the 2020 Power & Energy Society General Meeting.

April 2020: Interview with UC Berkeley IEOR.

March 2020: Our paper “Exact Guarantees on the Absence of Spurious Local Minima for Non-negative Rank-1 Robust Principal Component Analysis” to appear in Journal of Machine Learning Research (JMLR), 2020.