CRABS: A syntactic-semantic pincer strategy for bounding LLM interpretation of Python notebooks
Li, Meng; McPhillips, Timothy; Wang, Dingmin; Tsai, Shin-Rong; Ludäscher, Bertram
Permalink
https://hdl.handle.net/2142/128898
Description
- Title
- CRABS: A syntactic-semantic pincer strategy for bounding LLM interpretation of Python notebooks
- Author(s)
- Li, Meng
- McPhillips, Timothy
- Wang, Dingmin
- Tsai, Shin-Rong
- Ludäscher, Bertram
- Issue Date
- 2025
- Keyword(s)
- notebook understanding
- Large Language Model (LLM)
- YesWorkflow
- data flow
- provenance
- Date of Ingest
- 2025-07-15T17:16:33-05:00
- Abstract
- Recognizing the information flows and operations comprising data science and machine learning Python notebooks is critical for evaluating, reusing, and adapting notebooks for new tasks. Investigating a notebook via re-execution is often impractical due to the challenges of resolving data and software dependencies. While Large Language Models (LLMs) pre-trained on large codebases have demonstrated effectiveness in understanding code without running it, we observe that they fail to understand some realistic notebooks due to hallucinations and long-context challenges. To address these issues, we propose a notebook understanding task yielding an information flow graph and corresponding cell execution dependency graph for a notebook, and demonstrate the effectiveness of a pincer strategy that uses limited syntactic analysis to assist full comprehension of the notebook using an LLM. Our Capture and Resolve Assisted Bounding Strategy (CRABS) employs shallow syntactic parsing and analysis of the abstract syntax tree (AST) to capture the correct interpretation of a notebook between lower and upper estimates of the inter-cell I/O set—the flows of information into or out of cells via variables—then uses an LLM to resolve remaining ambiguities via cell-by-cell zero-shot learning, thereby identifying the true data inputs and outputs of each cell. We evaluate and demonstrate the effectiveness of our approach using an annotated dataset of 50 representative, highly up-voted Kaggle notebooks that together represent 3454 actual cell inputs and outputs. The LLM correctly resolves 1397 of 1425 (98%) ambiguities left by analyzing the syntactic structure of these notebooks. Across the 50 notebooks, CRABS achieves average F1 scores of 98% identifying cell-to-cell information flows and 99% identifying transitive cell execution dependencies. Moreover, 37 of the 50 (74%) individual information flow graphs and 41 of the 50 (82%) cell execution dependency graphs match the ground truth exactly. (An illustrative sketch of the kind of syntactic cell I/O estimation described here appears below, after the record metadata.)
- Has Part
- https://github.com/cirss/crabs/tree/v1.0.0
- Type of Resource
- text
- Genre of Resource
- conference paper
- Language
- eng
- DOI
- https://doi.org/10.48550/arXiv.2507.11742
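The sketch below is a minimal illustration of the kind of shallow AST analysis the abstract describes, not the authors' CRABS implementation (see the repository linked under Has Part). It estimates each cell's variable inputs and outputs from syntax alone; dynamic behavior the AST cannot settle is left as ambiguity of the sort CRABS hands to the LLM. The cell sources and the function name cell_io_estimate are hypothetical.

```python
import ast

def cell_io_estimate(cell_source: str):
    """Roughly estimate a notebook cell's inter-cell I/O from its AST:
    names the cell loads but never defines are treated as inputs;
    names it assigns or imports are treated as outputs. Dynamic features
    (exec, wildcard imports, in-place mutation) escape this analysis,
    leaving ambiguities a semantic pass would need to resolve."""
    tree = ast.parse(cell_source)
    stored, loaded = set(), set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Name):
            if isinstance(node.ctx, ast.Store):
                stored.add(node.id)
            elif isinstance(node.ctx, ast.Load):
                loaded.add(node.id)
        elif isinstance(node, (ast.Import, ast.ImportFrom)):
            # Imported names count as definitions local to the cell.
            stored.update(a.asname or a.name.split(".")[0] for a in node.names)
    return loaded - stored, stored  # (inputs, outputs)

# Hypothetical two-cell notebook
cells = [
    "import pandas as pd\ndf = pd.read_csv('train.csv')",
    "features = df.drop(columns=['target'])\nlabels = df['target']",
]
for i, src in enumerate(cells):
    ins, outs = cell_io_estimate(src)
    print(f"cell {i}: inputs={sorted(ins)}, outputs={sorted(outs)}")
```

Matching each cell's estimated inputs against earlier cells' outputs yields candidate cell-to-cell information flows; in the paper's approach, the LLM is used to confirm which of these candidates are true data dependencies.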
Owning Collections
Student Publications and Research - Information Sciences (primary)
Publications, conference papers, and other research and scholarship of iSchool students.