Files in this item

File: ECE499-Sp2019-chen-Ziao.pdf (323 kB)
Description: (no description provided)
Format: PDF (application/pdf)
Access: Restricted to U of Illinois

Description

Title: Load balancing with reinforcement learning
Author(s): Chen, Ziao
Contributor(s): Lu, Yi
Subject(s): load balancing
reinforcement learning
actor-critic
recurrent network
Abstract: We consider a load balancing problem with task-server affinity and server-dependent task recurrence, motivated by our online Q&A system. Different TAs not only answer different questions at different rates but also generate different numbers of follow-up questions, and similar patterns appear in other human-centered service systems. This makes simple load balancing policies such as random and shortest-queue-first inadequate. We develop an efficient load balancing algorithm using reinforcement learning that consistently outperforms the shortest-queue policy, a well-known static policy widely used in practice. The improvement of our algorithm over the shortest-queue policy is observed to be 1 to 5 times the improvement of shortest-queue over the random policy, with greater improvement for larger buffer sizes. We employ several ideas from state-of-the-art deep reinforcement learning algorithms to improve the stability and speed of convergence. We propose a way of achieving fast convergence over a large state space by transferring a policy learned on a small state space to the larger system. We also propose using a recurrent network in place of the feedforward network in the actor-critic system, which extracts better features from a state because the ordering of tasks matters in a queueing system.
Issue Date: 2019-05
Genre: Other
Type: Text
Language: English
URI: http://hdl.handle.net/2142/104003
Date Available in IDEALS: 2019-06-13
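
The abstract describes the approach only at a high level. As a rough illustration, the sketch below shows what a recurrent actor-critic dispatcher for such a queueing system might look like in PyTorch. All details here are assumptions made for illustration (the state encoded as padded per-server sequences of task-type ids, a GRU summarizing each queue, a one-step advantage actor-critic update), not the thesis implementation.

```python
# Illustrative sketch only; not the thesis code. Assumptions:
# - state: per-server queues given as padded sequences of task-type ids (0 = empty slot),
#   so the GRU can exploit the ordering of tasks in each queue;
# - action: index of the server that receives the next incoming task;
# - arrivals, service model, and reward definition are left to the environment.
import torch
import torch.nn as nn
import torch.nn.functional as F

class RecurrentActorCritic(nn.Module):
    def __init__(self, num_task_types, num_servers, hidden=64):
        super().__init__()
        self.embed = nn.Embedding(num_task_types + 1, 16, padding_idx=0)
        self.gru = nn.GRU(16, hidden, batch_first=True)
        self.actor = nn.Linear(hidden * num_servers, num_servers)  # policy head
        self.critic = nn.Linear(hidden * num_servers, 1)           # value head

    def forward(self, queues):
        # queues: (batch, num_servers, max_queue_len) integer task-type ids, 0-padded
        b, s, l = queues.shape
        x = self.embed(queues.view(b * s, l))     # embed each queue's task sequence
        _, h = self.gru(x)                        # final hidden state summarizes the ordered queue
        h = h.squeeze(0).view(b, -1)              # concatenate per-server summaries
        return F.log_softmax(self.actor(h), dim=-1), self.critic(h).squeeze(-1)

def a2c_step(model, optimizer, state, action, reward, next_state, done, gamma=0.99):
    # One advantage actor-critic update for a batch of single-step transitions.
    log_probs, value = model(state)
    with torch.no_grad():
        _, next_value = model(next_state)
        target = reward + gamma * next_value * (1.0 - done)
    advantage = target - value
    policy_loss = -(log_probs.gather(1, action.unsqueeze(1)).squeeze(1)
                    * advantage.detach()).mean()
    value_loss = F.mse_loss(value, target)
    optimizer.zero_grad()
    (policy_loss + 0.5 * value_loss).backward()
    optimizer.step()
```

A network trained this way on a small configuration (few servers, short buffers) could in principle be used to initialize training for a larger configuration, in the spirit of the policy-transfer idea mentioned in the abstract; the mechanics of that transfer are not specified here.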

