Files in this item

File: AZAD-THESIS-2020.pdf (937kB), Restricted to U of Illinois
Description: (no description provided)
Format: PDF (application/pdf)

Description

Title: Lessons learnt developing and deploying grading mechanisms for EiPE code-reading questions in CS1 classes
Author(s): Azad, Sushmita
Advisor(s): Zilles, Craig
Department / Program: Computer Science
Discipline: Computer Science
Degree Granting Institution: University of Illinois at Urbana-Champaign
Degree: M.S.
Genre: Thesis
Subject(s): CS Education
CS1
Code-reading
EiPE
Abstract: Previous research has identified that the ability to understand the high-level purpose of a piece of code is an important developmental skill: harder to master than executing the same piece of code in one’s head for a given input (“code tracing”), but easier to master than writing the code. One way to help students practice this middle-ground skill is to ask them Explain in Plain English (EiPE) questions, which ask them to explain the purpose of a piece of code at a high level. Prior work involving EiPE questions has used scoring rubrics that do not adequately handle the three dimensions of answer quality: correctness, level of abstraction, and ambiguity. These studies have also been carried out in limited experimental settings with manual grading, so they shed little light on how EiPE questions can be deployed in real-world classrooms without an overwhelming grading workload. In this work, we address both issues. First, we describe our efforts to validate a 7-point rubric that our research group developed for scoring student responses to EiPE questions. Second, we describe the deployment of an imperfect NLP-based automatic grading system for EiPE responses on an exam in CS105, a large-enrollment CS1 course at the University of Illinois, finding that the auto-grader’s accuracy is similar to that of the TAs who teach the course. We study allowing students to attempt an EiPE question multiple times in exam settings, without penalty based on the number of attempts used, as a strategy to mitigate potential student dissatisfaction when the imperfect grading system mistakenly rejects a correct answer. We also characterize common student errors and auto-grader failures, and discuss the lessons learnt in this process.
Issue Date: 2020-05-14
Type: Thesis
URI: http://hdl.handle.net/2142/108199
Rights Information: Copyright 2020 Sushmita Azad
Date Available in IDEALS: 2020-08-26
Date Deposited: 2020-05
