Relation Extraction from Texts Containing Pharmacologically Significant Information on base of Multilingual Language Models

Anton Selivanov, Artem Gryaznov, Roman Rybka, Alexander Sboev, Sanna Sboeva, Yuliya Klyueva

Research output: Contribution to journal › Conference article › Peer-review

Abstract

In this paper, we estimate the accuracy of relation extraction from texts containing pharmacologically significant information, based on an expanded version of the RDRS corpus, which contains Russian-language internet reviews on medications. The accuracy of relation extraction is estimated and compared for two multilingual language models: XLM-RoBERTa-large and XLM-RoBERTa-large-sag. Earlier research showed XLM-RoBERTa-large-sag to be the most efficient language model for relation extraction on the previous version of the RDRS dataset when ground-truth named entity annotations were used. In the current work we apply a two-step relation extraction approach: automated named entity recognition followed by extraction of relations between the predicted entities. This approach makes it possible to estimate the accuracy of the proposed end-to-end solution to the relation extraction problem, as well as the accuracy at each step of the analysis. As a result, it is shown that the multilingual XLM-RoBERTa-large-sag model achieves a macro-averaged relation extraction f1-score of 86.4% on ground-truth named entities and 60.1% on predicted named entities on the new version of the RDRS corpus, which contains more than 3,800 annotated texts. Consequently, the implemented approach based on the XLM-RoBERTa-large-sag language model sets the state of the art for the considered type of texts in Russian.
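The sketch below illustrates the kind of two-step pipeline the abstract describes: a token-classification model predicts named entities, and a sequence-classification model then labels the relation for each pair of predicted entities. The checkpoints, the binary relation label set, and the pair-encoding scheme are illustrative assumptions only; the paper fine-tunes XLM-RoBERTa-large and XLM-RoBERTa-large-sag on the RDRS corpus, which is not reproduced here.

```python
# Minimal sketch of a two-step relation extraction pipeline:
# (1) named entity recognition, (2) relation classification over entity pairs.
# Checkpoints and the 2-label relation head are placeholders, not the
# authors' actual RDRS-trained models.
from itertools import combinations

import torch
from transformers import pipeline, AutoTokenizer, AutoModelForSequenceClassification

# Assumed placeholder checkpoints (publicly available on the Hugging Face hub).
NER_CHECKPOINT = "FacebookAI/xlm-roberta-large-finetuned-conll03-english"
RE_CHECKPOINT = "FacebookAI/xlm-roberta-large"


def extract_relations(text: str) -> list[tuple[str, str, int]]:
    """Return (entity_1, entity_2, predicted_relation_id) triples."""
    # Step 1: predict entity spans and merge word pieces into entity strings.
    ner = pipeline("ner", model=NER_CHECKPOINT, aggregation_strategy="simple")
    entities = [e["word"] for e in ner(text)]

    # Step 2: classify the relation for every pair of predicted entities.
    # The classification head here is freshly initialised with 2 labels
    # (e.g. "no relation" / "relation"); in the paper it would be trained on RDRS.
    tok = AutoTokenizer.from_pretrained(RE_CHECKPOINT)
    model = AutoModelForSequenceClassification.from_pretrained(RE_CHECKPOINT, num_labels=2)
    triples = []
    for e1, e2 in combinations(entities, 2):
        enc = tok(f"{e1} ; {e2}", text, return_tensors="pt", truncation=True)
        with torch.no_grad():
            relation_id = model(**enc).logits.argmax(dim=-1).item()
        triples.append((e1, e2, relation_id))
    return triples


if __name__ == "__main__":
    print(extract_relations("The headache stopped after I took aspirin."))
```

Evaluating such a pipeline both on ground-truth entities and on predicted entities, as the abstract reports, separates the error contributed by the NER step from the error of the relation classifier itself.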

Original language: English
Journal: Proceedings of Science
Volume: 429
State: Published - 6 Dec 2022
Externally published: Yes
Event: 6th International Workshop on Deep Learning in Computational Physics, DLCP 2022 - Dubna, Russian Federation
Duration: 6 Jul 2022 - 8 Jul 2022
