One Billion Word Benchmark for Measuring Progress in Statistical Language Modeling

Bibliographic Details
Title: One Billion Word Benchmark for Measuring Progress in Statistical Language Modeling
Authors: Chelba, Ciprian, Mikolov, Tomas, Schuster, Mike, Ge, Qi, Brants, Thorsten, Koehn, Phillipp, Robinson, Tony
Publication Year: 2013
Collection: Computer Science
Subject Terms: Computer Science - Computation and Language
Description: We propose a new benchmark corpus to be used for measuring progress in statistical language modeling. With almost one billion words of training data, we hope this benchmark will be useful to quickly evaluate novel language modeling techniques, and to compare their contribution when combined with other advanced techniques. We show performance of several well-known types of language models, with the best results achieved with a recurrent neural network based language model. The baseline unpruned Kneser-Ney 5-gram model achieves perplexity 67.6; a combination of techniques leads to 35% reduction in perplexity, or 10% reduction in cross-entropy (bits), over that baseline. The benchmark is available as a code.google.com project; besides the scripts needed to rebuild the training/held-out data, it also makes available log-probability values for each word in each of ten held-out data sets, for each of the baseline n-gram models.
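The two relative-reduction figures in the description are consistent with each other, since perplexity and cross-entropy are related by PPL = 2^H. A minimal sanity-check sketch in Python, using only the numbers quoted above (this snippet is illustrative and is not part of the released benchmark):

```python
import math

# Baseline: unpruned Kneser-Ney 5-gram perplexity, as quoted in the description.
ppl_baseline = 67.6
# The description reports a 35% relative perplexity reduction from model combination.
ppl_combined = ppl_baseline * (1 - 0.35)   # ~43.9

# Cross-entropy in bits per word: H = log2(PPL), since PPL = 2**H.
h_baseline = math.log2(ppl_baseline)       # ~6.08 bits/word
h_combined = math.log2(ppl_combined)       # ~5.46 bits/word

rel_reduction = (h_baseline - h_combined) / h_baseline
print(f"{rel_reduction:.1%}")              # ~10.2%, matching the quoted ~10% in bits
```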
Comment: Accompanied by a code.google.com project allowing anyone to generate the benchmark data, and use it to compare their language model against the ones described in the paper
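Since the project releases per-word log-probability values for the baseline n-gram models, corpus perplexity can be recomputed from them. A hedged sketch, assuming a hypothetical input of one log10 p(w_i | context) value per word (the exact layout of the released files may differ):

```python
def perplexity_from_logprobs(logprobs_log10):
    """Corpus perplexity from per-word log10 probabilities.

    Assumed (hypothetical) input: an iterable with one log10 p(w_i | context)
    value per word. Perplexity is 10 raised to the negative mean log10 prob.
    """
    values = list(logprobs_log10)
    avg_log10 = sum(values) / len(values)
    return 10.0 ** (-avg_log10)

# Toy usage with made-up values (not taken from the benchmark data):
print(perplexity_from_logprobs([-1.2, -0.8, -2.0, -1.5]))  # ~23.7
```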
Document Type: Working Paper
Open Access: http://arxiv.org/abs/1312.3005
Accession Number: edsarx.1312.3005
Database: arXiv