Semantically Enriched Cross-Lingual Sentence Embeddings for Crisis-related Social Media Texts

Authors

Lamsal, R., Read, M. R., & Karunasekera, S.

DOI:

https://doi.org/10.59297/zxa19c16

Keywords:

Sentence Encoders, Embedding Models, Cross-lingual Vector Space, Multi-lingual Embeddings, CrisisTransformers

Abstract

Tasks such as semantic search and clustering on crisis-related social media texts enhance our comprehension of crisis discourse, aiding decision-making and targeted interventions. Pre-trained language models have advanced performance in crisis informatics, but their contextual embeddings lack semantic meaningfulness. Although the CrisisTransformers family includes a sentence encoder to address this issue, it remains monolingual, processing only English texts. Furthermore, employing separate models for different languages leads to embeddings in distinct vector spaces, introducing challenges when comparing semantic similarities between multi-lingual texts. Therefore, we propose multi-lingual sentence encoders (CT-XLMR-SE and CT-mBERT-SE) that embed crisis-related social media texts in over 50 languages, such that texts with similar meanings are in close proximity within the same vector space, irrespective of language diversity. Results on sentence encoding and sentence matching tasks are promising, suggesting these models could serve as robust baselines when embedding multi-lingual crisis-related social media texts. The models are publicly available at: https://huggingface.co/crisistransformers.
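
As a usage illustration, the sketch below encodes crisis-related texts in two languages with one of the proposed encoders and compares them by cosine similarity in the shared vector space. It assumes the models load through the sentence-transformers library; the model identifier used here (crisistransformers/CT-XLMR-SE) is an assumption based on the model names in the abstract and should be confirmed on the HuggingFace page above.

```python
# Minimal sketch: cross-lingual semantic similarity with a CrisisTransformers
# sentence encoder. The model ID below is assumed from the paper's naming;
# verify it at https://huggingface.co/crisistransformers.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("crisistransformers/CT-XLMR-SE")

# Crisis-related texts with similar meaning in different languages,
# plus one unrelated sentence as a contrast.
texts = [
    "Floodwaters are rising near the river; residents are being evacuated.",
    "Las aguas de la inundación suben cerca del río; los residentes están siendo evacuados.",
    "The concert last night was fantastic.",
]

# Encode all texts into the same vector space.
embeddings = model.encode(texts, convert_to_tensor=True)

# Pairwise cosine similarities: the English/Spanish pair should score
# much higher than either does against the unrelated third sentence.
scores = util.cos_sim(embeddings, embeddings)
print(scores)
```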

Published

2024-05-14

How to Cite

Lamsal, R., Read, M. R., & Karunasekera, S. (2024). Semantically Enriched Cross-Lingual Sentence Embeddings for Crisis-related Social Media Texts. Proceedings of the International ISCRAM Conference. https://doi.org/10.59297/zxa19c16
