Virtual reality in speech and voice measurements: Investigating sensory streams and a preventative intervention
Nudelman, Charles J.
This item is only available for download by members of the University of Illinois community. Students, faculty, and staff at the U of I may log in with their NetID and password to view the item. If you are trying to access an Illinois-restricted dissertation or thesis, you can request a copy through your library's Inter-Library Loan office or purchase a copy directly from ProQuest.
Permalink
https://hdl.handle.net/2142/129525
Description
- Title
- Virtual reality in speech and voice measurements: Investigating sensory streams and a preventative intervention
- Author(s)
- Nudelman, Charles J.
- Issue Date
- 2025-04-16
- Director of Research (if dissertation) or Advisor (if thesis)
- Bottalico, Pasquale
- Doctoral Committee Chair(s)
- Bottalico, Pasquale
- Committee Member(s)
- Fogerty, Daniel
- Monson, Brian B.
- Flaherty, Mary M.
- Hunter, Eric J.
- Department of Study
- Speech & Hearing Science
- Discipline
- Speech & Hearing Science
- Degree Granting Institution
- University of Illinois Urbana-Champaign
- Degree Name
- Ph.D.
- Degree Level
- Dissertation
- Keyword(s)
- Virtual reality
- Voice production
- Voice quality
- Multisensory simulation
- Voice therapy
- Voice intervention
- Voice disorders
- Teachers
- Occupational voice
- Professional voice
- Abstract
- The purpose of this work is to investigate a potential method for improving the ecological validity of speech and voice recordings in laboratory and clinical settings. This work focuses on implementing simulation technologies during voice recordings and examining the effects of their sensory features. Additionally, a brief virtual reality (VR) intervention is executed, and its effects are analyzed. The first experiment evaluated the effects of simulated background noise, reverberation, visual VR room size, and VR room occupancy (number of occupants present) on voice acoustic outcome parameters and self-reported vocal status. Forty-one participants recorded reading and spontaneous speech samples in six different simulated acoustic experiences and six different visual VR experiences separately. Linear mixed effects regression models revealed that auralized occupancy (background noise) and reverberation significantly affected the participants’ acoustic voice parameters, while only the auralized occupancy significantly affected their self-reported vocal status ratings. The second experiment involved forty-one participants and evaluated the effects of multisensory simulations on the same outcomes. Linear mixed effects regression models indicated that densely occupied and large VR rooms significantly influenced voice-related outcomes in comparison to sparsely occupied and small VR rooms. A secondary set of statistical models was implemented to analyze results across the first two experiments. These models revealed that multisensory simulations and unisensory (i.e., single-sensory) auditory simulations tended to significantly influence acoustic voice parameters and self-reported vocal status, compared to unisensory visual VR simulations.
From these two studies, it can be concluded that speech and voice production adapt significantly to audiovisual sensory input in VR, both when evaluating different aspects of the sensory input itself and when comparing multisensory simulations to unisensory simulations. Such multisensory simulations could improve voice recordings obtained in traditional contexts, such as laboratory and clinical settings. The third experiment piloted a brief VR intervention for the clinical prevention of voice disorders using an evidence-based voice therapy. Ten pre-professional teachers recorded speech samples in three different conditions: a control condition (conversational speech), a teaching-style condition in a sound booth, and a VR condition with voice-related cues provided by a certified and licensed speech-language pathologist. Linear mixed effects regression models revealed that the VR intervention condition contributed to the adoption of a teaching style of speech for all participants. The VR intervention condition resulted in significantly improved voice acoustic outcomes and significantly increased self-reported vocal discomfort compared to the control condition. Finally, responses on a VR questionnaire indicated that in the VR intervention condition, participants endorsed a strong sense of presence and immersion. This VR intervention demonstrates early feasibility: multisensory simulations can elicit vocal adaptations in a clinician-patient interaction. Future work should explore the efficacy of the VR intervention for speakers with speech and voice disorders.
- Graduation Semester
- 2025-05
- Type of Resource
- Thesis
- Handle URL
- https://hdl.handle.net/2142/129525
- Copyright and License Information
- Copyright 2025 Charles J. Nudelman
Owning Collections
Graduate Dissertations and Theses at Illinois (primary)