Semantically Enriched Cross-Lingual Sentence Embeddings for Crisis-related Social Media Texts



Sentence Encoders, Embedding Models, Cross-lingual Vector Space, Multi-lingual Embeddings, CrisisTransformers


Tasks such as semantic search and clustering on crisis-related social media texts enhance our comprehension of crisis discourse, aiding decision-making and targeted interventions. Pre-trained language models have advanced performance in crisis informatics, but their contextual embeddings lack semantic meaningfulness. Although the CrisisTransformers family includes a sentence encoder that addresses this semanticity issue, it remains monolingual, processing only English texts. Furthermore, employing separate models for different languages produces embeddings in distinct vector spaces, which complicates comparing semantic similarity across multi-lingual texts. We therefore propose multi-lingual sentence encoders (CT-XLMR-SE and CT-mBERT-SE) that embed crisis-related social media texts in over 50 languages, such that texts with similar meanings lie in close proximity within the same vector space, irrespective of language. Results on sentence-encoding and sentence-matching tasks are promising, suggesting these models could serve as robust baselines for embedding multi-lingual crisis-related social media texts. The models are publicly available at:
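The shared-vector-space property described above can be checked with cosine similarity: an English text and its translation in another language should map to nearby vectors, while unrelated texts should not. The sketch below uses toy vectors as stand-ins for real model output; the embedding values are illustrative assumptions, not output of CT-XLMR-SE or CT-mBERT-SE.

```python
from math import sqrt

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sqrt(sum(x * x for x in a))
    norm_b = sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy 4-dimensional "embeddings" (hypothetical values) standing in
# for real encoder output. In a shared cross-lingual space, an
# English crisis report and its Spanish translation should be close.
en_flood = [0.90, 0.10, 0.20, 0.00]    # "severe flooding downtown"
es_flood = [0.88, 0.12, 0.18, 0.05]    # "inundaciones graves en el centro"
en_other = [0.05, 0.90, 0.10, 0.30]    # unrelated, non-crisis text

print(cosine_similarity(en_flood, es_flood))  # high (near 1)
print(cosine_similarity(en_flood, en_other))  # much lower
```

In practice one would obtain the vectors from the released encoders (e.g. via a sentence-embedding library) rather than hand-written lists; the comparison step is the same.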


How to Cite

Lamsal, R., Read, M. R., & Karunasekera, S. (2024). Semantically Enriched Cross-Lingual Sentence Embeddings for Crisis-related Social Media Texts. ISCRAM Proceedings, 21.
