Diagnostic evaluation of logical reasoning capability of large language models
Jiang, Jize
Permalink
https://hdl.handle.net/2142/130063
Description
Title
Diagnostic evaluation of logical reasoning capability of large language models
Author(s)
Jiang, Jize
Issue Date
2025-07-22
Advisor
Zhai, Chengxiang
Department of Study
Siebel School of Computing and Data Science
Discipline
Computer Science
Degree Granting Institution
University of Illinois Urbana-Champaign
Degree Name
M.S.
Degree Level
Thesis
Keyword(s)
Natural Language Processing
Machine Learning
Large Language Model
Logical Reasoning
Model Evaluation
Abstract
Recent advances in large language models (LLMs) have transformed natural language processing, with impressive capabilities across diverse tasks. Critical shortcomings remain, however, particularly in logical reasoning, where models exhibit frequent hallucinations, superficial inference patterns, and inconsistency under linguistic variation. To address these limitations and lay a foundation for future studies, we introduce a diagnostic evaluation framework built on systematically generated synthetic ordering tasks, designed to rigorously probe the logical reasoning capacities of LLMs across varying complexities and under controlled perturbations, including logically equivalent task rephrasings and directional query variations. Evaluating two prominent models, GPT-4.1 and GPT-4o, at full and reduced ("mini") scales, we found that the larger models maintain higher accuracy and robustness than their mini counterparts but still degrade as task complexity increases. We also observed significant sensitivity to linguistic variation, with even the stronger models performing inconsistently across logically equivalent formulations, and pronounced biases toward certain answer types, indicating reliance on heuristic shortcuts rather than genuine logical deduction. This thesis underscores the need for nuanced evaluation beyond simple accuracy metrics, highlights specific vulnerabilities and robustness limitations, and establishes a foundation for future evaluations aimed at improving the reliability and transparency of logical reasoning in LLMs.
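The abstract describes the evaluation design only at a high level. As a purely illustrative sketch, and not the thesis's actual generator, the Python below shows one way such a synthetic ordering task could be constructed: sample a hidden total order over entities, express it as pairwise premises, emit a logically equivalent rephrasing, and pose directional query variants with known gold answers. All names here (make_ordering_task, the height relation, the entity list) are hypothetical.

```python
import random

# A minimal sketch (not the thesis's actual generator) of a synthetic
# ordering task: sample a hidden total order over entities, express it as
# pairwise premises, then pose logically equivalent prompt variants.

ENTITIES = ["Alice", "Bob", "Carol", "Dave", "Erin"]

def make_ordering_task(n_items: int, seed: int = 0):
    """Build one ordering task over n_items entities (the complexity knob)."""
    rng = random.Random(seed)
    items = rng.sample(ENTITIES, n_items)   # hidden ground-truth order
    # Chain of adjacent pairwise facts that uniquely determines the order.
    facts = [f"{a} is taller than {b}." for a, b in zip(items, items[1:])]
    rng.shuffle(facts)                      # presentation order as a perturbation
    premise = " ".join(facts)
    # Logically equivalent rephrasing ("shorter" instead of "taller").
    rephrased = " ".join(f"{b} is shorter than {a}."
                         for a, b in zip(items, items[1:]))
    # Directional query variants over the same ground truth.
    queries = {
        "max": ("Who is the tallest?", items[0]),
        "min": ("Who is the shortest?", items[-1]),
    }
    return premise, rephrased, queries

premise, rephrased, queries = make_ordering_task(n_items=4, seed=42)
print(premise)
for name, (question, answer) in queries.items():
    print(f"[{name}] {question}  (gold: {answer})")
```

In a setup like this, scaling n_items would give the complexity axis described in the abstract, while comparing a model's answers on premise versus rephrased, and on the "max" versus "min" query directions, would probe its robustness to logically equivalent formulations.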