Improving medical report generation and evaluation through prompt engineering
Li, Chenhao
Permalink
https://hdl.handle.net/2142/129604
Description
Title
Improving medical report generation and evaluation through prompt engineering
Author(s)
Li, Chenhao
Issue Date
2025-05-05
Director of Research (if dissertation) or Advisor (if thesis)
Kindratenko, Volodymyr
Department of Study
Electrical & Computer Eng
Discipline
Electrical & Computer Engr
Degree Granting Institution
University of Illinois Urbana-Champaign
Degree Name
M.S.
Degree Level
Thesis
Keyword(s)
Medical Report Generation
Medical Report Evaluation
Prompt Engineering
Vision Language Model
Abstract
The advancement of vision language models (VLMs) has opened new avenues for automating complex clinical tasks such as medical report generation. However, challenges remain in generating clinically accurate and detailed reports and in evaluating them effectively. Our work investigates the role of prompt engineering in improving both the generation and evaluation of medical reports.
We make three main contributions. First, we analyze the effect of various prompting strategies on the performance of different VLMs, including GPT-4o mini, LLaMA 11B, and LLaVA-Med. We introduce a structured prompt design based on clinically relevant anatomical checkpoints that significantly improves report coherence and clinical fidelity. Second, we propose a novel LLM-based evaluation strategy that uses these checkpoints to anchor the assessment of generated reports, providing an interpretable and clinically meaningful metric. Third, we explore two integrated frameworks that combine generation and evaluation, enabling iterative improvement through example-based and self-supervised prompt optimization.
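To illustrate the idea of checkpoint-anchored prompting, the sketch below builds a generation prompt and a matching evaluation rubric from a shared list of anatomical regions. The checkpoint list, wording, and function names are all hypothetical, assumed here for illustration; they are not the thesis's actual prompts or code.

```python
# Illustrative checkpoint list for a chest-CT setting (assumption; the
# thesis's actual checkpoints may differ).
CHECKPOINTS = ["lungs", "pleura", "heart", "mediastinum", "bones"]


def build_generation_prompt(checkpoints):
    """Assemble a structured prompt asking the VLM to report findings
    for each anatomical checkpoint in a fixed order."""
    lines = ["You are a radiologist. Describe the findings for each region:"]
    for i, region in enumerate(checkpoints, 1):
        lines.append(f"{i}. {region.capitalize()}: <findings>")
    lines.append("Then give a one-sentence impression.")
    return "\n".join(lines)


def build_evaluation_prompt(report, reference, checkpoints):
    """Anchor LLM-based evaluation to the same checkpoints: one score
    per region, yielding an interpretable per-region metric rather
    than a single opaque number."""
    rubric = "\n".join(
        f"- {r}: score 0-2 (0=missing, 1=partially correct, 2=correct)"
        for r in checkpoints
    )
    return (
        "Evaluate the generated report against the reference, "
        f"giving one score per region:\n{rubric}\n\n"
        f"Reference report:\n{reference}\n\n"
        f"Generated report:\n{report}"
    )
```

Because both prompts are derived from the same checkpoint list, the evaluation rubric lines up region-by-region with the structure the generator was asked to follow, which is what makes the resulting scores clinically interpretable.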
Experimental results on the CT-RATE dataset demonstrate that prompt engineering can substantially increase both the quality and the evaluability of medical reports. Our findings highlight the potential of prompt-based approaches as efficient and scalable alternatives to fine-tuning, bridging the gap between general-purpose models and specialized clinical applications.