Generative artificial intelligence (AI) is increasingly considered a cost-efficient alternative for geospatial and urban surveys, yet there remains a critical need to evaluate how closely AI-generated outputs align with human responses. This paper compares responses from ChatGPT and residents in defining neighborhood boundaries, a long-standing challenge in urban studies that has no single correct answer and typically relies on input from resident surveys. Our analysis examines both the boundaries themselves and the areas that are rarely covered by any boundary. Our results show that ChatGPT tends to generate neighborhood boundaries with less variability in extent and geographic coverage than crowdsourced boundaries, potentially favoring more standardized representations. Additionally, we find that AI-generated boundaries are less likely than human-drawn boundaries to cover areas with lower population density and higher percentages of non-White and Hispanic populations, reflecting potential biases. These findings highlight the need to critically evaluate generative AI's potential to supplement human respondents in urban and spatial applications while carefully considering its limitations, particularly regarding bias and representation.
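To make the comparison concrete, the following is a minimal sketch (not the authors' code) of how one might quantify the variability in extent and the geographic coverage of two sets of neighborhood boundaries using shapely. The polygon coordinates are hypothetical placeholders; real inputs would be geographic polygons drawn by residents or generated by ChatGPT.

```python
# Sketch: summarize variability and coverage of boundary sets with shapely.
# All polygon data below is hypothetical, for illustration only.
from shapely.geometry import Polygon
from shapely.ops import unary_union
from statistics import mean, pstdev


def coverage_stats(boundaries):
    """Summarize a set of boundary polygons: mean area, area spread
    (a simple proxy for variability in extent), and the area of their
    union (total geographic footprint)."""
    areas = [b.area for b in boundaries]
    return {
        "mean_area": mean(areas),
        "area_stdev": pstdev(areas),  # lower = less variability
        "union_area": unary_union(boundaries).area,
    }


# Hypothetical stand-ins for crowdsourced and AI-generated boundaries
# of the same neighborhood.
resident_boundaries = [
    Polygon([(0, 0), (4, 0), (4, 3), (0, 3)]),
    Polygon([(1, 0), (6, 0), (6, 4), (1, 4)]),
    Polygon([(0, 1), (3, 1), (3, 5), (0, 5)]),
]
chatgpt_boundaries = [
    Polygon([(1, 1), (4, 1), (4, 4), (1, 4)]),
    Polygon([(1, 1), (4, 1), (4, 3), (1, 3)]),
]

print("residents:", coverage_stats(resident_boundaries))
print("chatgpt:  ", coverage_stats(chatgpt_boundaries))
```

Under this toy setup, the AI-generated set would show a smaller area spread and a smaller union footprint, mirroring the pattern the paper reports; the paper's actual methodology may differ.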