Files in this item

File: Hu_Chi.pdf (900kB)
Description: (no description provided)
Format: PDF (application/pdf)

Description

Title: FSM-Based Pronunciation Modeling using Articulatory Phonological Code
Author(s): Hu, Chi
Advisor(s): Hasegawa-Johnson, Mark A.
Department / Program: Electrical & Computer Engineering
Discipline: Electrical & Computer Engineering
Degree Granting Institution: University of Illinois at Urbana-Champaign
Degree: M.S.
Genre: Thesis
Subject(s): articulatory phonology; speech production; speech gesture; finite state machine
Abstract: According to articulatory phonology, the gestural score is an invariant speech representation. Though the timing schemes, i.e., the onsets and offsets, of the gestural activations may vary, the ensemble of these activations tends to remain unchanged, conveying the speech content. The "gestural pattern vector" (GPV) has been proposed to encode the instantaneous gestural activations across all tract variables at each point in time; a gestural score with a particular timing scheme can therefore be approximated by a GPV sequence. In this work, we propose a pronunciation modeling method that uses a finite state machine (FSM) to represent the invariance of a gestural score. Given the "canonical" gestural score of a word with a known activation timing scheme, the plausible activation onsets and offsets are recursively generated and encoded as a weighted FSM. An empirical measure is used to prune out gestural activation timing schemes that deviate too much from the "canonical" gestural score. Speech recognition is achieved by matching the recovered gestural activations against the FSM-encoded gestural scores of different speech contents. In particular, the observation distribution of each GPV is modeled by a tandem model combining an artificial neural network and a Gaussian mixture model. These models are used together with the FSM-based pronunciation models in a Bayesian framework. We carry out pilot word classification experiments using synthesized data from one speaker. The proposed pronunciation model achieves over 90% accuracy on a vocabulary of 139 words with no training observations, outperforming direct use of the "canonical" gestural score.
Issue Date: 2010-08-20
URI: http://hdl.handle.net/2142/16726
Rights Information: Copyright 2010 Chi Hu
Date Available in IDEALS: 2010-08-20
Date Deposited: 2010-08
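
The abstract describes encoding the plausible timing variants of a canonical gestural score as a weighted FSM and scoring recovered GPV sequences against the FSMs of competing words. The following is a minimal illustrative sketch of that idea in Python, not the thesis implementation: each state tracks position in the canonical GPV sequence, self-loops model stretched activations, skip arcs model activations that shrink away, and a Viterbi-style dynamic program returns the minimum-cost matching path. All names and penalty values (build_word_fsm, score, dwell_cost, skip_cost) are hypothetical, and phone-like strings stand in for the actual tract-variable GPVs.

    from math import inf

    def build_word_fsm(canonical_gpvs, dwell_cost=0.1, skip_cost=2.0):
        """Encode timing variants of a canonical GPV sequence as a weighted
        FSM. State i means "currently realizing GPV i"; arc costs act as
        -log probabilities (values here are illustrative, not from the
        thesis)."""
        n = len(canonical_gpvs)
        arcs = {}  # state -> list of (next_state, emitted_gpv, cost)
        for i, gpv in enumerate(canonical_gpvs):
            out = [(i, gpv, dwell_cost),   # self-loop: activation lasts longer
                   (i + 1, gpv, 0.0)]      # advance: last frame of this GPV
            if i + 2 <= n:                 # skip: next activation shrinks away
                out.append((i + 2, gpv, skip_cost))
            arcs[i] = out
        return arcs, n                     # state n is the single final state

    def score(observed_gpvs, fsm):
        """Minimum-cost FSM path that emits the observed GPV sequence;
        paths that cannot match the observations die out, a crude
        stand-in for pruning implausible timing schemes."""
        arcs, final = fsm
        costs = {0: 0.0}
        for obs in observed_gpvs:
            nxt = {}
            for state, cost in costs.items():
                for state2, label, w in arcs.get(state, []):
                    if label == obs:       # arc must emit the observed GPV
                        nxt[state2] = min(nxt.get(state2, inf), cost + w)
            costs = nxt
            if not costs:
                return inf                 # no surviving timing scheme
        return costs.get(final, inf)

A toy classification run in the spirit of the word classification experiments (vocabulary and observations invented for illustration):

    lexicon = {"bad": ["b", "ae", "d"], "ban": ["b", "ae", "n"]}
    fsms = {word: build_word_fsm(gpvs) for word, gpvs in lexicon.items()}
    observed = ["b", "b", "ae", "ae", "ae", "n"]  # stretched activations
    print(min(fsms, key=lambda word: score(observed, fsms[word])))  # -> ban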

