pretrained models, Khmer language, word segmentation, part-of-speech (POS) tagging, news categorization
Trained on large corpora, pretrained models (PTMs) can capture different levels of concepts in context and hence generate universal language representations, which greatly benefit downstream natural language processing (NLP) tasks. In recent years, PTMs have been widely used in most NLP applications, especially for high-resource languages such as English and Chinese. However, resource scarcity has hindered the development of PTMs for low-resource languages. This work presents Transformer-based PTMs for the Khmer language for the first time. We evaluate our models on two downstream tasks: part-of-speech (POS) tagging and news categorization. The dataset for the latter task is self-constructed. Experiments demonstrate the effectiveness of the Khmer models. In addition, we find that current Khmer word segmentation technology does not improve performance on these tasks. We release our models and datasets to the community in hopes of facilitating the future development of Khmer NLP applications.
Jiang, Shengyi; Fu, Sihui; Lin, Nankai; and Fu, Yingwen
"Pretrained Models and Evaluation Data for the Khmer Language,"
Tsinghua Science and Technology: Vol. 27: Iss. 4, Article 5.
Available at: https://dc.tsinghuajournals.com/tsinghua-science-and-technology/vol27/iss4/5