WIT3

Web Inventory of Transcribed and Translated Talks

2012-02 MT results

Results of baseline SMT systems are reported below for some of the language pairs for which benchmarks are available.

The baseline systems were built with the open-source MT toolkit Moses in a fairly standard configuration; see the WIT3 paper for details.
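
As an illustration only (not the exact setup used here), the sketch below decodes an evaluation set with a trained Moses model from Python. The moses binary and the -f flag are standard Moses usage; the tuned moses.ini and the file names are assumptions.

    # Minimal sketch: decode a tokenized source file with a trained Moses model.
    # Assumes the "moses" decoder binary is on PATH and "moses.ini" points to a
    # tuned model; "tst2010.tok.de" and "tst2010.hyp.en" are hypothetical names.
    import subprocess

    with open("tst2010.tok.de") as src, open("tst2010.hyp.en", "w") as hyp:
        # The Moses decoder reads one source sentence per line from stdin
        # and writes one translation per line to stdout.
        subprocess.run(["moses", "-f", "moses.ini"],
                       stdin=src, stdout=hyp, check=True)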

Performance, in terms of BLEU (%) and TER, was computed with the MultEval software and refers to the tst2010 evaluation set. The automatic translations are linked from the table entries; click an entry to download them.

tst2010   en                      fr                      it
ar        BLEU=23.41 TER=57.01    -                       -
de        BLEU=26.57 TER=52.27    -                       BLEU=14.56 TER=66.96
en        -                       BLEU=29.24 TER=51.13    BLEU=22.54 TER=55.72
nl        BLEU=31.56 TER=47.81    -                       -
zh-cn     BLEU=11.48 TER=75.79    -                       -

(rows: source language; columns: target language)
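
The scores above were produced with MultEval. As an illustration only, the sketch below computes corpus-level BLEU and TER with the sacrebleu Python library, a stand-in for MultEval rather than the tool used here; the file names are hypothetical.

    # Minimal sketch: corpus-level BLEU and TER with sacrebleu (a stand-in for
    # MultEval, which produced the scores above). File names are hypothetical.
    from sacrebleu.metrics import BLEU, TER

    hyps = [line.strip() for line in open("tst2010.hyp.en")]
    # sacrebleu expects a list of reference streams; one reference set here.
    refs = [[line.strip() for line in open("tst2010.ref.en")]]

    print(BLEU().corpus_score(hyps, refs))  # prints the corpus BLEU score line
    print(TER().corpus_score(hyps, refs))   # prints the corpus TER score line

Note that different scorers (tokenization, casing, TER settings) can yield slightly different numbers, so scores computed this way need not match the MultEval figures above exactly.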