Exploring Multiple Strategies to Improve Multilingual Coreference Resolution in CorefUD

Authors

  • Ondřej Pražák NTIS – New Technologies for the Information Society, Faculty of Applied Sciences, University of West Bohemia, Technická 8, Plzeň, Czech Republic
  • Miloslav Konopík NTIS – New Technologies for the Information Society, Faculty of Applied Sciences, University of West Bohemia, Technická 8, Plzeň, Czech Republic
  • Pavel Král NTIS – New Technologies for the Information Society, Faculty of Applied Sciences, University of West Bohemia, Technická 8, Plzeň, Czech Republic

DOI:

https://doi.org/10.31577/cai_2026_1_22

Keywords:

Coreference resolution, cross-lingual model, transformers, end-to-end model

Abstract

Coreference resolution, the task of identifying expressions in a text that refer to the same entity, is a critical component of many natural language processing applications. This paper presents an end-to-end neural coreference resolution system evaluated on the CorefUD 1.1 collection, which comprises 17 datasets in 12 languages. The proposed model builds on the standard end-to-end neural coreference architecture. We first establish baseline models, including monolingual and cross-lingual variants, and then propose several extensions to improve performance across diverse linguistic contexts: cross-lingual training, incorporation of syntactic information, a Span2Head model for optimized headword prediction, and advanced singleton modeling. We also experiment with headword span representation and with long-document modeling through overlapping segments. The proposed extensions, particularly the heads-only approach, singleton modeling, and long-document prediction, significantly improve performance on most datasets. In addition, we perform zero-shot cross-lingual experiments that highlight both the potential and the limitations of cross-lingual transfer in coreference resolution. Our findings contribute to the development of robust and scalable systems for multilingual coreference resolution. Finally, we evaluate our model on the CorefUD 1.1 test set and surpass the best model of comparable size from the CRAC 2023 shared task by a large margin.

Published

2026-04-30

How to Cite

Pražák, O., Konopík, M., & Král, P. (2026). Exploring Multiple Strategies to Improve Multilingual Coreference Resolution in CorefUD. Computing and Informatics, 45(1), 22–54. https://doi.org/10.31577/cai_2026_1_22