Multi-agent reinforcement learning: A mean-field perspective
Zaman, Muhammad Aneeq Uz
Description
- Title
- Multi-agent reinforcement learning: A mean-field perspective
- Author(s)
- Zaman, Muhammad Aneeq Uz
- Issue Date
- 2024-07-24
- Director of Research (if dissertation) or Advisor (if thesis)
- Başar, Tamer
- Doctoral Committee Chair(s)
- Dullerud, Geir
- Committee Member(s)
- Jiang, Nan
- Srikant, Rayadurgam
- Department of Study
- Mechanical Science and Engineering
- Discipline
- Mechanical Engineering
- Degree Granting Institution
- University of Illinois at Urbana-Champaign
- Degree Name
- Ph.D.
- Degree Level
- Dissertation
- Keyword(s)
- Multi-agent Reinforcement Learning
- Mean-field Game Theory
- Policy Gradient methods
- Actor-Critic methods
- Cooperative-Competitive Games
- Abstract
- Recent advances in Reinforcement Learning (RL) have enabled significant progress on complex decision-making problems across domains such as robotics, autonomous driving, and strategy games. Many real-world scenarios, however, involve multiple agents cooperating and competing with one another, necessitating the development of Multi-Agent RL (MARL) algorithms. MARL not only addresses the challenges posed by interactions among agents but also enables robust learning in dynamic environments. Mean-Field Games (MFGs) offer a promising framework for tackling scalability issues in RL with many decision-making agents by considering the limiting case where the number of agents approaches infinity. Originating from seminal works, MFGs have seen extensive research, extensions, and applications in diverse fields. This thesis extends RL techniques to purely competitive games (Chapters 2, 3, and 6), Cooperative-Competitive (CC) games (Chapter 4), and the robust N-agent cooperative control problem (Chapter 5). Within purely competitive games, Chapter 2 deals with a consensus problem in which the agents are split into multiple populations. Although the thesis primarily works within the Linear Quadratic (LQ) framework (which has applications in finance and engineering), Chapter 3 is devoted to a general large-population Markov game in which we relax the assumption, prevalent in the literature, of access to a population (mean-field) simulator. Chapter 4 concerns a CC game in which the agents are divided into multiple teams, with intra-team cooperation and inter-team competition. Chapter 5 pertains to a purely cooperative control problem in which the agents' dynamics and cost functions can be manipulated by an adversary; this chapter takes a min-max approach, characterizing optimal policies under the worst-case adversarial manipulation. Chapter 6 investigates the effect of entropy regularization on the cost functions of competitive agents and how it induces exploratory noise in the agents' control policies (a generic sketch of this LQ mean-field setting is given after the metadata below). Each of these chapters begins with a literature review highlighting existing approaches and gaps in the research, then characterizes the equilibria corresponding to each problem, and finally presents RL algorithms that compute these equilibria in a data-driven manner, together with finite-sample guarantees and numerical validations.
- Graduation Semester
- 2024-12
- Type of Resource
- Thesis
- Handle URL
- https://hdl.handle.net/2142/127124
- Copyright and License Information
- Copyright 2024 Muhammad Aneeq Uz Zaman
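
The abstract refers to an entropy-regularized Linear Quadratic (LQ) mean-field setting. As a point of reference, the following is a minimal generic sketch of that kind of formulation; the matrices A, B, \bar{A}, Q, R, the discount \gamma, and the temperature \tau are illustrative placeholders, not quantities taken from the thesis.

\begin{align*}
  % Dynamics of agent i, coupled to the empirical mean-field \bar{x}_t:
  x^i_{t+1} &= A x^i_t + B u^i_t + \bar{A}\,\bar{x}_t + w^i_t,
  \qquad \bar{x}_t := \tfrac{1}{N}\sum_{j=1}^{N} x^j_t, \\
  % Quadratic cost tracking the mean-field; the entropy bonus with
  % temperature \tau makes the optimal policy stochastic (exploratory):
  J^i(\pi^i) &= \mathbb{E}\Big[\sum_{t=0}^{\infty} \gamma^t \Big(
    \lVert x^i_t - \bar{x}_t \rVert_Q^2 + \lVert u^i_t \rVert_R^2
    - \tau\,\mathcal{H}\big(\pi^i(\cdot \mid x^i_t)\big) \Big)\Big].
\end{align*}

As N \to \infty the empirical average \bar{x}_t is replaced by a deterministic mean-field trajectory; a mean-field equilibrium then requires each agent's policy to be optimal against that trajectory, and the trajectory to be consistent with the behavior those policies induce. In the LQ case with entropy regularization, the optimizing policies are Gaussian, which is the sense in which the regularizer injects exploratory noise.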
Owning Collections
Graduate Dissertations and Theses at Illinois (Primary)