Files in this item

Files:GEIGLE-DISSERTATION-2018.pdf (7MB)
Description:(no description provided)
Format:PDF (application/pdf)

Description

Title:Towards high quality, scalable education: Techniques in automated assessment and probabilistic user behavior modeling
Author(s):Geigle, Chase Aaron
Director of Research:Zhai, ChengXiang
Doctoral Committee Chair(s):Zhai, ChengXiang
Doctoral Committee Member(s):Sundaram, Hari; Zilles, Craig; Cope, Bill; Tang, Jie
Department / Program:Computer Science
Discipline:Computer Science
Degree Granting Institution:University of Illinois at Urbana-Champaign
Degree:Ph.D.
Genre:Dissertation
Subject(s):automated assessment
autograding
virtual lab
data science lab
user behavior modeling
probabilistic user behavior modeling
two-layer hidden Markov models
Dirichlet-multinomial mixture model
Abstract:There are two primary challenges for instructors in offering a high-quality course at large scale. The first is scaling educational experiences to such a large audience. The second is enabling adaptivity of the educational experience. This thesis addresses both challenges by developing new techniques for large-scale automated assessment (addressing scalability) and new models for interpretable user behavior analysis in educational environments (improving the quality of interaction through personalized education). Specifically, I perform a feasibility study of automated assessment of complex assignments, exploring the effectiveness of different types of features. I argue for re-framing automated assessment in these more complex contexts as a ranking problem, and provide a systematic approach for integrating expert, peer, and automated assessment via an active-learning-to-rank formulation that outperforms a traditional randomized training solution. I also present the design and implementation of CLaDS---a Cloud-based Lab for Data Science---which enables students to engage with real-world data science problems at scale at minimal cost ($7.40/student). I discuss our experience deploying seven major text data assignments to students in both on-campus and online courses and show that the general infrastructure of CLaDS can efficiently deliver a wide range of hands-on data science assignments. Understanding student behavior is necessary for improving the quality of scalable education through adaptivity. To this end, I present two general user behavior models for analyzing student interaction log data. The first focuses on the discovery and analysis of action-based roles in community question answering (CQA) platforms using a generative model called the MDMM behavior model.
First, I show distinctions within CQA communities in question-asking behavior (where two distinct types of askers can be identified) and answering behavior (where two distinct roles surrounding answers emerge). Second, I find that where there are statistically significant differences in health metrics across topical groups on StackExchange, there are also statistically significant differences in behavior compositions, suggesting a relationship between behavior composition and health. Third, I show that the MDMM behavior model can be used to demonstrate similar but distinct evolutionary patterns between topical groups. The second model focuses on discovering temporal action patterns of learners in Coursera MOOCs. I present a two-layer hidden Markov model (2L-HMM) to extract a multi-resolution summary of user behavior patterns and their evolution, and show that these patterns can be used to extract latent features that correlate with educational outcomes. Finally, I develop the Piazza Educational Role Mining (PERM) system to close the gap between theory and practice by providing an easy-to-use web-based interface for applying probabilistic user behavior models to Piazza CQA interaction data. PERM allows instructors to easily crawl their courses and run MDMM behavior analyses on them. These analyses give instructors insight into the common user behavior patterns (roles) uncovered by plotting each role's action distribution in a browser. PERM enables instructors to perform deep-dives into an individual role by viewing the concrete sessions that the model has assigned to that role, along with each session's individual actions and associated content. This allows instructors to flexibly combine data-driven statistical inference (through the MDMM behavior model) with a qualitative understanding of the behavior within a role.
PERM also models individual users as mixtures over the discovered roles, and instructors can likewise deep-dive into these mixtures to explore exactly what individual users were doing on the platform.
Issue Date:2018-12-03
Type:Thesis
URI:http://hdl.handle.net/2142/102458
Rights Information:Copyright 2018 Chase Geigle
Date Available in IDEALS:2019-02-06
Date Deposited:2018-12

