Files in this item

iconf-ef.pdf (887kB): Main article (PDF)


Title: Access to billions of pages for large-scale text analysis
Author(s): Organisciak, Peter; Capitanu, Boris; Underwood, Ted; Downie, J. Stephen
Subject(s): Non-consumptive research; Feature extraction; Large-scale text analysis; Text mining
Abstract: Consortial collections have led to unprecedented scales of digitized corpora, but the insights that they enable are hampered by the complexities of access, particularly to in-copyright or orphan works. Pursuing a principle of non-consumptive access, we developed the Extracted Features (EF) dataset, a dataset of quantitative counts for every page of nearly 5 million scanned books. The EF includes unigram counts, part of speech tagging, header and footer extraction, counts of characters at both sides of the page, and more. Distributing book data with features already extracted saves resource costs associated with large-scale text use, improves the reproducibility of research done on the dataset, and opens the door to datasets on copyrighted books. We describe the coverage of the dataset and demonstrate its useful application through duplicate book alignment and identification of their cleanest scans, topic modeling, word list expansion, and multifaceted visualization.
Issue Date: 2017-03
Citation Info: Peter Organisciak, Boris Capitanu, Ted Underwood, J. Stephen Downie. “Access to Billions of Pages for Large-Scale Text Analysis.” iConference 2017. Wuhan, China.
Series/Report: iConference 2017
Genre: Conference Paper / Presentation
Date Available in IDEALS: 2017-06-19
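The abstract describes per-page feature records (unigram counts with part-of-speech tags) that can be aggregated without access to the original text. A minimal sketch of that idea, using an illustrative record layout rather than the actual EF schema, might look like this:

```python
from collections import Counter

# Hypothetical page records in the spirit of the Extracted Features (EF)
# dataset: per-page (token, part-of-speech) counts. The field names here
# are illustrative assumptions, not the published EF schema.
pages = [
    {"seq": 1, "token_pos_counts": {("rose", "NN"): 3, ("red", "JJ"): 1}},
    {"seq": 2, "token_pos_counts": {("rose", "VBD"): 1, ("red", "JJ"): 2}},
]

def volume_unigrams(pages):
    """Aggregate page-level (token, POS) counts into volume-level unigram counts."""
    totals = Counter()
    for page in pages:
        for (token, _pos), count in page["token_pos_counts"].items():
            totals[token] += count
    return totals

print(volume_unigrams(pages))  # Counter({'rose': 4, 'red': 3})
```

Because counts like these are non-consumptive (the page text cannot be reconstructed from them), such aggregation is possible even for in-copyright volumes.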

This item appears in the following Collection(s)

  • Illinois Research and Scholarship
    This is the default collection for all research and scholarship developed by faculty, staff, or students at the University of Illinois at Urbana-Champaign.
