Algorithms and software for efficient tensor decompositions
Singh, Navjot
Description
- Title
- Algorithms and software for efficient tensor decompositions
- Author(s)
- Singh, Navjot
- Issue Date
- 2025-08-26
- Director of Research (if dissertation) or Advisor (if thesis)
- Solomonik, Edgar
- Doctoral Committee Chair(s)
- Solomonik, Edgar
- Committee Member(s)
- Fischer, Paul
- Olson, Luke
- Ballard, Grey
- Department of Study
- Computer Science
- Discipline
- Computer Science
- Degree Granting Institution
- University of Illinois Urbana-Champaign
- Degree Name
- Ph.D.
- Degree Level
- Dissertation
- Keyword(s)
- Tensor Computations
- Optimization
- Tensor Decomposition
- Tensor Completion
- Abstract
- Tensors, which generalize vectors and matrices to higher dimensions, provide a powerful framework for representing and analyzing multi-way data arising in science and engineering. Their ability to capture complex, multi-relational structures makes them invaluable in applications ranging from computational chemistry to recommendation systems and knowledge graphs. However, high-dimensional tensor data often exhibit sparsity, missing entries, and non-Gaussian structure, posing significant computational and statistical challenges. Tensor decomposition techniques address these issues by extracting low-dimensional representations, reducing storage and computational cost, and enabling efficient analysis and imputation. This thesis develops new algorithms and scalable software for efficient tensor decompositions, with a focus on real-world applications. On the algorithmic front, we introduce accurate and scalable methods for computing various types of tensor decompositions under diverse data assumptions. On the software side, we design efficient sparse tensor kernels that enable high-performance implementations of these algorithms on distributed-memory systems, while offering an accessible Python interface.

The Canonical Polyadic (CP) decomposition is essential for extracting latent patterns from multi-dimensional data in fields such as signal processing, chemometrics, and machine learning. The first part of this thesis addresses accurate and well-conditioned computation of the CP decomposition for real-world applications. We introduce a new alternating optimization algorithm that achieves theoretically guaranteed superlinear local convergence for tensors of exact CP rank. By leveraging a Mahalanobis norm formulation, we propose a method that interpolates between classical Alternating Least Squares (ALS) and a new Alternating Mahalanobis Distance Minimization (AMDM). We evaluate this hybrid method against ALS and empirically show that the resulting decomposition has improved fit and better-conditioned factors.

Real-world tensors are often large, sparse, incomplete, or contain non-Gaussian entries, frequently in combination, which poses significant computational challenges. To address these issues, the second part of this thesis develops specialized algorithms and scalable distributed-memory software kernels for the accurate and efficient computation of generalized tensor decompositions. We begin by tackling the problem of generalized CP tensor completion, proposing novel algorithms that leverage second-order derivative information alongside efficient sparse tensor computations. To support scalable and user-friendly implementations, we also introduce a suite of sparse tensor kernels with a high-level Python interface. We then extend this framework to Tucker tensor completion by developing an efficient optimization method for updating the core tensor under the least squares loss. Finally, for sparse generalized tensor decomposition, we propose objective function transformations, extend the optimization framework to support nonnegativity constraints, and develop corresponding sparse tensor kernels that enable CP and Tucker decompositions under general loss functions.

The final part of this thesis explores tensor decompositions beyond CP and Tucker. The butterfly matrix decomposition is a hierarchical matrix factorization that approximates a matrix as a product of block-sparse factors, enabling fast application and storage with near-optimal complexity. We study the problem of completing butterfly-structured matrices from partial observations. To address this, we represent the butterfly matrix decomposition as a tensor decomposition of dense tensors, where each block-sparse butterfly factor is encoded as a dense tensor. This formulation allows us to develop a sparse tensor kernel that supports both butterfly and tensor train completion. We empirically show that the proposed butterfly matrix completion method achieves superior accuracy and computational efficiency compared to existing approaches based on quantized tensor trains and traditional low-rank matrix completion.
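To make the classical ALS baseline referenced in the abstract concrete, the following is a minimal illustrative sketch of CP-ALS for a dense 3-way tensor in NumPy. It is not the thesis's own algorithm or software (the thesis targets sparse, distributed-memory settings and a superlinearly convergent AMDM hybrid); the helper names `unfold`, `khatri_rao`, `cp_als`, and `cp_reconstruct` are this sketch's inventions.

```python
import numpy as np

def khatri_rao(X, Y):
    # Column-wise Kronecker product: (J, R) and (K, R) -> (J*K, R)
    R = X.shape[1]
    return np.einsum('jr,kr->jkr', X, Y).reshape(-1, R)

def unfold(T, mode):
    # Mode-n matricization: rows indexed by the chosen mode (C order)
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def cp_als(T, rank, n_iter=300, seed=0):
    """Alternating Least Squares for a rank-`rank` CP model of a 3-way tensor.

    Each sweep solves, in closed form, the linear least-squares problem for
    one factor while the other two are held fixed (normal equations use the
    Hadamard product of Gram matrices).
    """
    rng = np.random.default_rng(seed)
    A, B, C = (rng.standard_normal((s, rank)) for s in T.shape)
    for _ in range(n_iter):
        A = unfold(T, 0) @ khatri_rao(B, C) @ np.linalg.pinv((B.T @ B) * (C.T @ C))
        B = unfold(T, 1) @ khatri_rao(A, C) @ np.linalg.pinv((A.T @ A) * (C.T @ C))
        C = unfold(T, 2) @ khatri_rao(A, B) @ np.linalg.pinv((A.T @ A) * (B.T @ B))
    return A, B, C

def cp_reconstruct(A, B, C):
    # Assemble the rank-R tensor sum_r a_r (outer) b_r (outer) c_r
    return np.einsum('ir,jr,kr->ijk', A, B, C)
```

On a tensor of exact CP rank (the setting in which the thesis proves superlinear local convergence for its method), this plain ALS loop typically drives the relative reconstruction error toward machine precision, though it is only guaranteed to converge linearly at best and can stagnate in "swamps" on ill-conditioned problems.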
- Graduation Semester
- 2025-12
- Type of Resource
- Thesis
- Handle URL
- https://hdl.handle.net/2142/132455
- Copyright and License Information
- Copyright 2025 Navjot Singh
Owning Collections
Graduate Dissertations and Theses at Illinois (primary)