Conditional Maximum Likelihood Estimation for Improving Annotation Performance of N-gram Models Incorporating Stochastic Finite State Grammars

Language models that combine stochastic grammars and N-grams are often used in speech recognition and language understanding systems. One useful aspect of these models is that they can be used to annotate phrases in the text with their constituent grammars; such annotation often plays an important role in subsequent processing of the text. In this paper we present an estimation procedure, under a conditional maximum likelihood objective, that aims at improving the annotation performance of these models over their maximum likelihood estimates. The estimation is carried out using the extended Baum-Welch procedure of Gopalakrishnan et al. We find that conditional maximum likelihood estimation improves the annotation accuracy of these language models by over 7% relative to their maximum likelihood estimates.
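To make the contrast in the abstract concrete, the two training objectives can be sketched as follows. The notation here ($W$ for a word sequence, $C$ for its constituent-grammar annotation, $\theta$ for the model parameters, $r$ indexing training sentences) is assumed for illustration and may differ from the paper's. Maximum likelihood fits the joint distribution of words and annotations, while the conditional objective directly targets the annotation posterior that the annotation task evaluates:

\[
\theta_{\mathrm{ML}} = \arg\max_{\theta} \sum_{r} \log P_{\theta}(W_r, C_r),
\qquad
\theta_{\mathrm{CML}} = \arg\max_{\theta} \sum_{r} \log P_{\theta}(C_r \mid W_r)
= \arg\max_{\theta} \sum_{r} \log \frac{P_{\theta}(W_r, C_r)}{P_{\theta}(W_r)}.
\]

Because the conditional objective is a ratio of likelihoods rather than a single likelihood, ordinary Baum-Welch does not apply directly; the extended Baum-Welch procedure of Gopalakrishnan et al. re-estimates each discrete probability parameter $\theta_i$ from its gradient and a sufficiently large smoothing constant $D$, roughly

\[
\hat{\theta}_i = \frac{\theta_i \left( \frac{\partial F}{\partial \theta_i} + D \right)}
                     {\sum_j \theta_j \left( \frac{\partial F}{\partial \theta_j} + D \right)},
\]

where $F$ denotes the conditional objective and the sum runs over the parameters of the same distribution (a generic statement of the procedure, not necessarily the paper's exact formulation).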

By: Vaibhava Goel

Published in: RC23224, 2004

LIMITED DISTRIBUTION NOTICE:

This report has been submitted for publication outside of IBM and will probably be copyrighted if accepted for publication. It has been issued as a Research Report for early dissemination of its contents. In view of the transfer of copyright to the outside publisher, its distribution outside of IBM prior to publication should be limited to peer communications and specific requests. After outside publication, requests should be filled only by reprints or legally obtained copies of the article (e.g., payment of royalties).
