What makes an argument effective? Triangulating task characteristics, essay performances, and rating processes in an L2 integrated argumentative writing test
Chuang, Ping-Lin
Permalink
https://hdl.handle.net/2142/125529
Description
- Title
- What makes an argument effective? Triangulating task characteristics, essay performances, and rating processes in an L2 integrated argumentative writing test
- Author(s)
- Chuang, Ping-Lin
- Issue Date
- 2024-06-27
- Director of Research (if dissertation) or Advisor (if thesis)
- Yan, Xun
- Doctoral Committee Chair(s)
- Yan, Xun
- Committee Member(s)
- Bowles, Melissa
- Christianson, Kiel
- Plakans, Lia
- Department of Study
- Linguistics
- Discipline
- Linguistics
- Degree Granting Institution
- University of Illinois at Urbana-Champaign
- Degree Name
- Ph.D.
- Degree Level
- Dissertation
- Keyword(s)
- L2 writing assessment
- Integrated writing
- Argumentation
- Source use
- Rater behavior
- Abstract
- Integrated argumentative tasks are commonly used to assess learners’ writing ability in second language (L2) writing assessment. While abundant research has addressed performance characteristics and rating processes of integrated argumentative tests (e.g., Gebril & Plakans, 2009, 2014), our understanding of the nature of L2 argumentation and the impact of source use on argumentation quality remains relatively limited. This dissertation study thus investigates essay performances and rating processes of an operational integrated argumentative writing test to delineate how features related to task, examinee, and rater interact, in the hope of gathering validity evidence for the use of such tests in an L2 context. Specifically, it examines (1) whether argumentation effectiveness exhibits systematic differences across proficiency levels; (2) how source use influences argumentation effectiveness; (3) whether raters consider source use characteristics when scoring argumentation effectiveness; and (4) what cognitive processes are involved in scoring argumentation effectiveness. To answer these four research questions, I analyzed test takers’ written products and raters’ rating performances qualitatively and quantitatively. I first conducted text analysis to examine features pertaining to argumentation and source use. Then, I designed an offline experiment to examine whether different source use characteristics affect raters’ scoring of argumentation. I further investigated raters’ cognitive processes using eye-tracking methodology. Results from test takers’ writing performances show that (1) both argument structure and reasoning quality differed across proficiency levels and (2) source use characteristics had an impact on argument structure and reasoning quality. The offline experiment further indicates that the quantity and quality of source use could determine the argumentation scores assigned by raters.
Finally, the eye-tracking experiment reveals the cognitive processes involved in scoring integrated argumentative writing performances. Raters paid the most attention to sentences with source text use, and their eye movements were related to the scores and comments they provided. This dissertation provides a deeper understanding of how L2 test takers construct arguments and how raters evaluate integrated argumentative writing performances. It also stands to have practical implications for L2 writing pedagogy and assessment, including how argumentation can be taught in writing courses and how rating scales and rater training programs can be developed to guide the scoring of argumentation effectiveness in academic contexts.
- Graduation Semester
- 2024-08
- Type of Resource
- Thesis
- Copyright and License Information
- Copyright 2024 Ping-Lin Chuang
Owning Collections
Graduate Dissertations and Theses at Illinois PRIMARY