Software


Solution Path of Time-varying Markov Random Fields

We study the problem of inferring sparse time-varying Markov random fields (MRFs) with
different discrete and temporal regularizations on the parameters. Our algorithm recovers the entire solution path of the problem efficiently and parametrically for all sparsity levels. The recovery of the entire solution path makes our algorithm highly suitable for cross-validation, where parameter estimation is required for varying regularization values.

[Google Drive Link] [paper link]


Personalized Dictionary Learning

In personalized dictionary learning (PerDL), the goal is to learn sparse linear representations from heterogeneous datasets that share some common structure while each retaining its own unique features. The shared and unique features are modeled as a global dictionary and local dictionaries, respectively. We develop a meta-algorithm called Personalized Matching and Averaging (PerMA) that efficiently and provably recovers both the global and the local dictionaries from heterogeneous datasets.
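The shared/unique structure can be pictured with a small synthetic data model. The snippet below only generates data of this form; the dictionary sizes and sparsity level are made up for illustration, and it is not the PerMA algorithm itself.

```python
# Illustrative data model only (assumed notation, not the PerMA algorithm):
# each client i observes Y_i = [D_glob, D_loc_i] @ X_i with sparse codes X_i.
import numpy as np

rng = np.random.default_rng(1)
d, k_glob, k_loc, n_clients, n_samples = 20, 5, 3, 4, 100

D_glob = rng.standard_normal((d, k_glob))
D_glob /= np.linalg.norm(D_glob, axis=0)            # unit-norm atoms shared by all clients

datasets = []
for i in range(n_clients):
    D_loc = rng.standard_normal((d, k_loc))
    D_loc /= np.linalg.norm(D_loc, axis=0)          # atoms unique to client i
    D_i = np.hstack([D_glob, D_loc])                # client-specific dictionary
    X = rng.standard_normal((k_glob + k_loc, n_samples))
    mask = rng.random(X.shape) < 0.3                # each sample uses only a few atoms
    datasets.append(D_i @ (X * mask))

print([Y.shape for Y in datasets])
```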

[Github Link] [paper link]


Heterogeneous Matrix Factorization

In heterogeneous matrix factorization (HMF), the goal is to jointly factor a collection of observation matrices into components shared across all matrices and components unique to each matrix. Our algorithm performs this factorization in a distributed fashion.
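As a point of reference, the sketch below implements a naive centralized baseline for the same shared-plus-unique model; the ranks and generative model are assumptions for illustration, and this is not the paper's distributed algorithm.

```python
# Naive centralized baseline (not the paper's distributed algorithm): estimate a
# shared column space from the concatenated observations, then fit a low-rank
# unique component to each residual. All ranks and sizes are placeholders.
import numpy as np

rng = np.random.default_rng(2)
m, n, r_shared, r_unique = 50, 40, 3, 2

U_true = np.linalg.qr(rng.standard_normal((m, r_shared)))[0]        # common column space
Ys = [U_true @ rng.standard_normal((r_shared, n))
      + rng.standard_normal((m, r_unique)) @ rng.standard_normal((r_unique, n))
      for _ in range(5)]

U, _, _ = np.linalg.svd(np.hstack(Ys), full_matrices=False)
U_shared = U[:, :r_shared]                                          # estimated shared subspace

residual = 0.0
for Y in Ys:
    shared_part = U_shared @ (U_shared.T @ Y)                       # projection onto shared subspace
    u, s, vt = np.linalg.svd(Y - shared_part, full_matrices=False)
    unique_part = (u[:, :r_unique] * s[:r_unique]) @ vt[:r_unique]  # rank-r_unique fit of the rest
    residual += np.linalg.norm(Y - shared_part - unique_part)
print(residual)
```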

[Github Link] [paper link]


Robust Sparse Mean Estimation

In robust sparse mean estimation, the goal is to estimate a sparse mean from a collection of partially corrupted samples drawn from a heavy-tailed distribution. Existing sample-efficient estimators fall short of practical use because they scale poorly with the ambient dimension. We develop a simple algorithm that runs in near-linear time and memory (with respect to the ambient dimension) and recovers the sparse mean with near-optimal sample complexity.
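For comparison, here is a very simple robust baseline: the coordinate-wise median followed by hard thresholding to the k largest coordinates. It is not the estimator from the paper, and the corruption model below is made up for illustration.

```python
# Simple robust baseline (not the paper's estimator): coordinate-wise median
# followed by hard thresholding to the k largest coordinates.
import numpy as np

def sparse_median_estimate(X, k):
    """X: (n, d) samples, possibly corrupted; keep the k largest coordinates."""
    med = np.median(X, axis=0)                  # robust to a small fraction of outliers
    keep = np.argsort(np.abs(med))[-k:]         # indices of the k largest magnitudes
    out = np.zeros_like(med)
    out[keep] = med[keep]
    return out

rng = np.random.default_rng(3)
d, n, k = 1000, 200, 10
mu = np.zeros(d); mu[:k] = 5.0                  # true k-sparse mean
X = rng.standard_t(df=3, size=(n, d)) + mu      # heavy-tailed samples
X[:10] += 100.0                                 # corrupt 5% of the samples
print(np.linalg.norm(sparse_median_estimate(X, k) - mu))
```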

[Github Link] [paper link]


Algorithmic Function Basis Decomposition

We analyze the solution trajectory of gradient-based algorithms via a novel basis function decomposition. Although the solution trajectories of gradient-based algorithms may vary with the learning task, they behave almost monotonically when projected onto an appropriate orthonormal function basis. Our key finding is that gradient-based algorithms monotonically learn the coefficients along a particular orthonormal function basis of deep neural networks, defined by the eigenvectors of the conjugate kernel after training. The released code reproduces this observation.
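A minimal PyTorch sketch of this measurement is given below: train a toy two-layer network, record its predictions along the trajectory, and project them onto the top eigenvectors of the conjugate kernel, i.e. the Gram matrix of the last-hidden-layer features of the trained network. The architecture, task, and hyperparameters are placeholders, not those used in the paper.

```python
# Minimal sketch of the measurement (toy setup, not the paper's experiments).
import numpy as np
import torch
import torch.nn as nn

torch.manual_seed(0)
X = torch.randn(128, 10)
y = torch.sin(X[:, :1])                               # toy regression target

net = nn.Sequential(nn.Linear(10, 64), nn.ReLU(), nn.Linear(64, 1))
opt = torch.optim.SGD(net.parameters(), lr=0.1)

preds = []
for step in range(200):
    opt.zero_grad()
    loss = ((net(X) - y) ** 2).mean()
    loss.backward()
    opt.step()
    preds.append(net(X).detach().numpy().ravel())     # prediction trajectory on the training set

# conjugate kernel of the *trained* network: Gram matrix of last-hidden-layer features
feats = net[:-1](X).detach().numpy()
_, vecs = np.linalg.eigh(feats @ feats.T)
V = vecs[:, ::-1]                                     # eigenvectors, largest eigenvalue first

coeffs = np.stack(preds) @ V[:, :5]                   # coefficients along the top 5 eigenvectors
print(np.round(coeffs[::50], 3))                      # trajectory of the top-eigenvector coefficients
```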

[Github Link] [paper link]


Approximate Closed-form Solution for Graphical Lasso

Graphical Lasso (GL), which is based on L1 regularization, is a popular method for learning the structure of Gaussian graphical models. We develop a highly efficient approximate closed-form solution for GL. Our code is publicly available in Python and MATLAB.
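For context, the exact GL estimator can be computed with scikit-learn's iterative solver, as in the reference snippet below; the synthetic data and regularization value are chosen arbitrarily, and this is not the approximate closed-form method itself.

```python
# Reference only: the exact Graphical Lasso estimator via scikit-learn's iterative
# solver (not the paper's approximate closed-form solution).
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(4)
d = 10
# ground-truth sparse (tridiagonal) precision matrix
Theta_true = np.eye(d) + 0.4 * (np.eye(d, k=1) + np.eye(d, k=-1))
X = rng.multivariate_normal(np.zeros(d), np.linalg.inv(Theta_true), size=500)

gl = GraphicalLasso(alpha=0.05).fit(X)
print(np.round(gl.precision_, 2))                 # estimated sparse precision matrix
```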

[Github Link] [paper link]