Deep code: representation and learning algorithms for neural networks & their applications to communication codes
Makkuva, Ashok Vardhan
Permalink
https://hdl.handle.net/2142/116250
Description
- Title
- Deep code: representation and learning algorithms for neural networks & their applications to communication codes
- Author(s)
- Makkuva, Ashok Vardhan
- Issue Date
- 2022-07-15
- Director of Research (if dissertation) or Advisor (if thesis)
- Viswanath, Pramod
- Doctoral Committee Chair(s)
- Viswanath, Pramod
- Committee Member(s)
- Hajek, Bruce
- Rayadurgam, Srikant
- Sun, Ruoyu
- Oh, Sewoong
- Department of Study
- Electrical & Computer Engineering
- Discipline
- Electrical & Computer Engineering
- Degree Granting Institution
- University of Illinois at Urbana-Champaign
- Degree Name
- Ph.D.
- Degree Level
- Dissertation
- Keyword(s)
- machine learning
- LSTM
- GRU
- mixture-of-experts
- deep learning
- KO codes
- channel codes
- error-correcting codes
- Reed-Muller codes
- Polar codes
- Language
- eng
- Abstract
- Codes are the backbone of the modern information age. Composed of encoder-decoder pairs, codes are the basic mathematical objects that enable reliable communication. Landmark codes include convolutional, Reed-Muller, turbo, LDPC, and polar codes: each is linear and represents a mathematical breakthrough. Their impact has been enormous; each of these codes has been used in global communication standards over the past six decades. Designing codes, however, is a challenging task driven largely by human ingenuity, and progress in their discovery has accordingly been sporadic. In this thesis, we present a new paradigm for inventing codes by harnessing tools from deep learning. Our major result is the invention of KO codes, a computationally efficient family of deep-learning-driven codes that outperform the state-of-the-art RM and polar codes in the challenging short-to-medium block-length regime. The key technical innovation behind KO codes is a novel family of neural architectures inspired by the computation tree of the Kronecker Operation (KO) central to RM and polar codes. These architectures pave the way for the discovery of a much richer class of hitherto unexplored non-linear codes, and the design technique can be viewed as an instantiation of the classical neural augmentation principle. In the process, we also study a popular neural network model, the Mixture-of-Experts (MoE), that realizes this principle. We provide the first consistent and efficient algorithms with global learning guarantees for learning the parameters of an MoE, a problem that had been open for more than two decades. (Illustrative sketches of the Kronecker-tree encoding idea and of the MoE model follow this record.)
- Graduation Semester
- 2022-08
- Type of Resource
- Thesis
- Handle URL
- https://hdl.handle.net/2142/116250
- Copyright and License Information
- Copyright 2022 Ashok Makkuva
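
The record above only names the KO construction; as a rough sketch of the idea the abstract describes, the PyTorch fragment below generalizes the classical Plotkin/Kronecker kernel (u, v) ↦ (u ⊕ v, v) that underlies RM and polar encoding, replacing the fixed XOR with a small learned map. Class names, layer widths, and activation choices here are illustrative assumptions, not the thesis's actual architecture.

```python
import torch
import torch.nn as nn

class PlotkinNode(nn.Module):
    """One node of the Kronecker (Plotkin) computation tree.

    Classically the node maps a pair of blocks (u, v) to (u XOR v, v);
    here the fixed XOR is replaced by a small learned map g, giving a
    non-linear generalization over the same tree structure.
    """
    def __init__(self, hidden: int = 16):
        super().__init__()
        # g acts position-wise on each pair (u_i, v_i).
        self.g = nn.Sequential(nn.Linear(2, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, u: torch.Tensor, v: torch.Tensor) -> torch.Tensor:
        pair = torch.stack([u, v], dim=-1)        # (..., n, 2)
        combined = self.g(pair).squeeze(-1)       # learned analogue of u XOR v
        return torch.cat([combined, v], dim=-1)   # block length doubles to 2n

class KroneckerTreeEncoder(nn.Module):
    """Encodes 2**depth message symbols with one learned node per tree level."""
    def __init__(self, depth: int):
        super().__init__()
        self.nodes = nn.ModuleList([PlotkinNode() for _ in range(depth)])

    def forward(self, m: torch.Tensor) -> torch.Tensor:
        x, n = m, 1                                # m: (batch, 2**depth)
        for node in self.nodes:
            b = x.size(0)
            x = x.view(b, -1, 2, n)                # group adjacent length-n blocks
            x = node(x[:, :, 0, :], x[:, :, 1, :])
            x = x.reshape(b, -1)
            n *= 2                                 # block length doubles per level
        return x

# Usage: map 8-symbol messages to length-8 real-valued codewords.
enc = KroneckerTreeEncoder(depth=3)
msgs = torch.randint(0, 2, (4, 8)).float()
codewords = enc(msgs)                              # shape (4, 8)
```

Fixing g to compute (u + v) mod 2 would recover the linear Kronecker construction exactly; letting g be trained, jointly with a matching neural decoder (omitted here), is what opens up the richer non-linear code family the abstract refers to.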
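Likewise, the MoE contribution concerns the model class sketched below: a k-expert mixture in which a softmax gate selects among simple regressors. This is a minimal, assumed form of the model (tanh experts, linear gating, hypothetical names), not the thesis's learning algorithm, which is what supplies the consistency and global guarantees mentioned in the abstract.

```python
import torch
import torch.nn as nn

class MixtureOfExperts(nn.Module):
    """Minimal k-expert MoE regressor: E[y|x] = sum_i softmax(W x)_i * tanh(a_i . x)."""
    def __init__(self, dim: int, num_experts: int):
        super().__init__()
        self.gate = nn.Linear(dim, num_experts, bias=False)     # gating weights W
        self.experts = nn.Linear(dim, num_experts, bias=False)  # regressors a_1..a_k

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        weights = torch.softmax(self.gate(x), dim=-1)   # (batch, k) mixing probabilities
        outputs = torch.tanh(self.experts(x))           # (batch, k) expert predictions
        return (weights * outputs).sum(dim=-1)          # mixture-mean prediction

# Usage: a 4-expert mixture over 10-dimensional inputs.
moe = MixtureOfExperts(dim=10, num_experts=4)
y_hat = moe(torch.randn(32, 10))                        # shape (32,)
```

Jointly fitting the gating and expert parameters of even this small model is a non-convex problem, which is why provably consistent algorithms for it remained open for so long.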
Owning Collections
Graduate Dissertations and Theses at Illinois (PRIMARY)
Dissertations and Theses - Electrical and Computer Engineering