Strategies and exercises for assessing code writing and reading ability at scale
Fowler, Max
Description
- Title
- Strategies and exercises for assessing code writing and reading ability at scale
- Author(s)
- Fowler, Max
- Issue Date
- 2024-06-18
- Director of Research (if dissertation) or Advisor (if thesis)
- Zilles, Craig
- Doctoral Committee Chair(s)
- Zilles, Craig
- Committee Member(s)
- Lewis, Colleen
- Karahalios, Karrie
- Porter, Leo
- Department of Study
- Computer Science
- Discipline
- Computer Science
- Degree Granting Institution
- University of Illinois at Urbana-Champaign
- Degree Name
- Ph.D.
- Degree Level
- Dissertation
- Keyword(s)
- computer science education
- explain in plain English
- EiPE
- isomorphic questions
- assessment
- Abstract
- Fair and effective assessment at scale necessitates large numbers of relevant exercises. In computer science, the need for a multitude of exercises is further complicated by the nature of the skills we assess: foundational code-related skills, such as tracing, code reading, and code writing, are tightly coupled, so a range of question types is required to properly practice and assess programming skills. In this dissertation, I present work on two strategies for effective programming skill assessment at scale: a strategy for rapidly authoring isomorphic programming questions and multiple techniques for autograding Explain in Plain English (EiPE) exercises.
- Small, auto-gradable programming exercises provide a useful tool for assessing students' code writing ability. This dissertation presents multiple semesters of experience using a surface-feature permutation strategy to generate isomorphic programming exercises from existing questions (see the sketch after this record). In particular, this strategy is designed to produce item isomorphs: questions for which the underlying solution strategy does not change significantly. The strategy ranges from changing simple features, such as function names, to more complicated changes, such as changing exercise-relevant constants or data types. Using multiple semesters of data from an introductory Python course for non-majors and a linear regression analysis, we show evidence that the strategy can be effective: when comparing original questions and their permutations, the latter differ in score by at most 5 to 11 percentage points. Among the possible features, the more "complex" ones may produce questions that are harder than is ideal for them to be considered isomorphic.
- EiPE exercises present students with pieces of code and ask them to write a high-level description of the code's purpose. This dissertation presents the results of interviews with faculty who use EiPE exercises, examining how they grade such exercises and what they consider them useful for. These interviews suggest that a general grading standard could be applied, one focused on answers being suitably high-level, technically accurate, and unambiguous. As grading EiPE questions can be time intensive for large classes, we also present multiple approaches to autograding them (see the sketches after this record). Two of these methods are based on simple linear classifiers trained on expert-labeled student responses, and two use large language models (LLMs) in the grading pipeline. We find that the majority of these autograders achieve similar accuracy, in the 86 to 88% range, against our expert-established ground truth, and that this performance is comparable to the accuracy of the average course teaching assistant (TA). Together, this dissertation presents two pathways to producing large numbers of reliable questions for assessing code writing and code reading ability at scale.
- Graduation Semester
- 2024-08
- Type of Resource
- Thesis
- Handle URL
- https://hdl.handle.net/2142/125524
- Copyright and License Information
- Copyright 2024 Max Fowler
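As a rough illustration of the surface-feature permutation strategy described in the abstract, here is a minimal sketch, not the dissertation's actual question generator: the question template, feature pools, and helper name are hypothetical. It permutes surface features of a code-writing exercise (function name, operation, constant) while the underlying solution strategy, a simple list transformation, stays the same.

```python
import random

# Hypothetical question template: prompt text plus a reference solution.
PROMPT = ("Write a function {name}(values) that returns a new list in which "
          "every element of values has been {verb} {constant}.")
SOLUTION = "def {name}(values):\n    return [v {op} {constant} for v in values]\n"

# Surface features to permute, from simple (the function name) to more
# complex (the operation and the exercise-relevant constant).
FEATURES = {
    "name": ["process_values", "transform_items", "update_list"],
    "op": [("+", "increased by"), ("*", "multiplied by")],
    "constant": [2, 5, 10],
}

def make_isomorph(rng: random.Random) -> tuple[str, str]:
    """Generate one permuted question as (prompt, reference solution)."""
    name = rng.choice(FEATURES["name"])
    op, verb = rng.choice(FEATURES["op"])
    constant = rng.choice(FEATURES["constant"])
    prompt = PROMPT.format(name=name, verb=verb, constant=constant)
    solution = SOLUTION.format(name=name, op=op, constant=constant)
    return prompt, solution

prompt, solution = make_isomorph(random.Random(0))
print(prompt)
print(solution)
```

Each draw yields a question that reads differently on the surface but is solved with the same single-pass transformation, which is the property the abstract's isomorphism analysis checks.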
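The abstract also mentions linear-classifier autograders for EiPE responses. The following is a minimal sketch of that idea, assuming scikit-learn; the tiny labeled set and the specific features (TF-IDF over unigrams and bigrams with logistic regression) are illustrative assumptions, not the dissertation's dataset or exact model.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Expert-labeled student responses for one EiPE question (1 = acceptable:
# high-level, technically accurate, and unambiguous).
responses = [
    "returns the largest value in the list",
    "finds the maximum element of the input",
    "loops over the list and compares each item",  # too low-level, line by line
    "adds up all the numbers in the list",          # wrong purpose
]
labels = [1, 1, 0, 0]

# Bag-of-words style features feeding a simple linear classifier.
grader = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
grader.fit(responses, labels)

# Grade an unseen response for the same question.
print(grader.predict(["gives back the biggest number in the list"]))
```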
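For the LLM-based graders mentioned in the abstract, a plausible shape of the pipeline is sketched below, assuming the OpenAI Python client; the model name, prompt wording, and `llm_grade` helper are placeholders rather than the dissertation's actual grading pipeline.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def llm_grade(code: str, response: str, model: str = "gpt-4o-mini") -> bool:
    """Ask the model whether a student's description captures the code's purpose."""
    prompt = (
        "You are grading an 'Explain in Plain English' answer.\n"
        f"Code:\n{code}\n\n"
        f"Student description: {response}\n\n"
        "The description must be high-level, technically accurate, and "
        "unambiguous. Reply with exactly CORRECT or INCORRECT."
    )
    completion = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    answer = completion.choices[0].message.content.strip().upper()
    return answer.startswith("CORRECT")
```

In practice, any such grader would be validated against expert-established ground truth, which is how the abstract reports the 86 to 88% accuracy figures.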
Owning Collections
Graduate Dissertations and Theses at Illinois (primary)