Files in this item

SUN-THESIS-2020.pdf (application/pdf, 2MB)


Title: Environmental curriculum learning for efficiently achieving superhuman play in games
Author(s): Sun, Ray
Advisor(s): Peng, Jian
Department / Program: Computer Science
Discipline: Computer Science
Degree Granting Institution: University of Illinois at Urbana-Champaign
Subject(s): reinforcement learning; curriculum learning; sample efficiency; StarCraft II; Monte Carlo tree search
Abstract: Reinforcement learning has made large strides in training agents to play games, including complex ones such as the arcade-style game Pommerman and the real-time strategy game StarCraft II. To allow agents to grasp the many concepts in these games, curriculum learning has been used to teach agents multiple skills over time. We present Environmental Curriculum Learning (ECL), a new technique for creating a curriculum of environment versions for an agent to learn in sequence. By adding helpful features to the state and action spaces, and then removing these helpers over the course of training, agents can focus on the fundamentals of a game one at a time. Our experiments in Pommerman illustrate the design principles of ECL, and our experiments in StarCraft II show that ECL produces agents with far better final performance than the same training algorithm achieves without it. Our StarCraft II ECL agent exceeds previous score records in a StarCraft II minigame, including human records, while taking far less training time than previous approaches.
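The curriculum structure the abstract describes — a sequence of environment versions in which helper features are progressively removed — can be sketched as follows. This is a minimal illustrative toy, not the thesis's implementation; the names (`HelperEnv`, `make_curriculum`) and the helper features are assumptions for illustration.

```python
# Sketch of the environmental-curriculum idea: train on a sequence of
# environment versions, each exposing fewer helper features, so the agent
# can master the fundamentals of the game one at a time.

class HelperEnv:
    """Toy environment whose observation includes optional helper features."""

    def __init__(self, helpers):
        # `helpers` lists the extra observation features still enabled
        # at this stage of the curriculum.
        self.helpers = list(helpers)

    def observe(self, base_obs):
        # The agent sees the base observation plus any remaining helpers.
        return {"base": base_obs, "helpers": self.helpers}


def make_curriculum(all_helpers):
    """Yield environment versions, removing one helper per stage."""
    remaining = list(all_helpers)
    while True:
        yield HelperEnv(remaining)
        if not remaining:
            return
        # Drop one helper for the next, harder stage.
        remaining = remaining[:-1]


# A two-helper curriculum produces three stages: both helpers,
# one helper, then the unmodified (helper-free) environment.
stages = list(make_curriculum(["enemy-positions", "safe-tiles"]))
for i, env in enumerate(stages):
    print(i, env.observe("frame")["helpers"])
```

In a real training loop, the agent would be trained to a competence threshold on each stage before moving to the next, with the final stage matching the original game.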
Issue Date: 2020-05-11
Rights Information: Copyright 2020 Ray Sun
Date Available in IDEALS: 2020-08-26
Date Deposited: 2020-05
