Approaches to Automatic Quality Estimation of Manual Translations in Crowdsourcing Parallel Corpora Development: A Quality Equivalence and Cohort-Consensus Approach

We address metrics and approaches for the automatic quality analysis and validation of sentence translations produced when manually developing a parallel corpus, focusing specifically on crowdsourced development. We propose a set of metrics that provide corpus developers with estimates of translation quality. Such estimates are particularly necessary when, owing to the circumstances of the data collection, translation quality is expected to vary significantly from person to person as well as from sentence to sentence. Our approach is based on the concepts of quality equivalence and cohort consensus. We also describe our experience and results applying these metrics in the development of a large parallel corpus through crowdsourcing.
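The report's exact metrics are not reproduced here, but the cohort-consensus idea can be illustrated with a minimal sketch: when several crowd workers translate the same source sentence, each translation can be scored by its average lexical agreement with the rest of the cohort, so that outlier (likely low-quality) translations receive low scores. The token-level F1 similarity used below is an illustrative stand-in, not the measure defined in the paper.

```python
from collections import Counter

def token_f1(a: str, b: str) -> float:
    """Token-level F1 overlap between two translations (illustrative similarity)."""
    ca, cb = Counter(a.split()), Counter(b.split())
    overlap = sum((ca & cb).values())  # multiset intersection of tokens
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cb.values())
    recall = overlap / sum(ca.values())
    return 2 * precision * recall / (precision + recall)

def cohort_consensus_scores(translations: list[str]) -> list[float]:
    """Score each translation by its mean similarity to its cohort peers.

    A low score suggests the translation disagrees with the cohort and
    may warrant review or rejection.
    """
    scores = []
    for i, t in enumerate(translations):
        peers = [u for j, u in enumerate(translations) if j != i]
        scores.append(sum(token_f1(t, u) for u in peers) / len(peers))
    return scores
```

For example, given three crowd translations of one source sentence where two largely agree and one is unrelated, the unrelated translation receives the lowest consensus score and can be flagged for the corpus developer.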

By: Juan M. Huerta

Published as: IBM Research Report RC25031, 2010


This report has been submitted for publication outside of IBM and will probably be copyrighted if accepted for publication. It has been issued as a Research Report for early dissemination of its contents. In view of the transfer of copyright to the outside publisher, its distribution outside of IBM prior to publication should be limited to peer communications and specific requests. After outside publication, requests should be filled only by reprints or legally obtained copies of the article (e.g., payment of royalties).
