ASHuR: Evaluation of the Relation Summary-Content Without Human Reference Using ROUGE

Authors

  • Alan Ramírez-Noriega, Universidad Autónoma de Baja California, Facultad de Ciencias Químicas e Ingeniería, Tijuana, Baja California, C.P. 22390
  • Reyes Juárez-Ramírez, Universidad Autónoma de Baja California, Facultad de Ciencias Químicas e Ingeniería, Tijuana, Baja California, C.P. 22390
  • Samantha Jiménez, Universidad Autónoma de Baja California, Facultad de Ciencias Químicas e Ingeniería, Tijuana, Baja California, C.P. 22390
  • Sergio Inzunza, Universidad Autónoma de Baja California, Facultad de Ciencias Químicas e Ingeniería, Tijuana, Baja California, C.P. 22390
  • Yobani Martínez-Ramírez, Universidad Autónoma de Sinaloa, Facultad de Ingeniería Mochis, Los Mochis, Sinaloa, C.P. 81223

Keywords:

Text summarization, summary evaluation, ROUGE, sentence extraction

Abstract

In written documents, a summary is a brief description of the important aspects of a text. The degree of similarity between a summary and the content of its document indicates how reliable the summary is. Several efforts have been made to automate the evaluation of summaries. ROUGE metrics can evaluate a summary automatically, but they require a model summary built by humans. The goal of this study is to find a quantitative relation between an article's content and its summary using ROUGE tests without a human-built model summary. This work proposes ASHuR, a method for automatic text summarization, based on sentence extraction, to evaluate a summary. ASHuR extracts the best sentences of an article based on the frequency of concepts, cue-words, title words, and sentence length. The extracted sentences constitute the essence of the article and serve as the model summary. We performed two experiments to assess the reliability of ASHuR. The first compared ASHuR against similar sentence-extraction approaches; ASHuR ranked first in every applied test. The second compared ASHuR against human-made summaries, yielding a Pearson correlation of 0.86. These assessments show that ASHuR is reliable for evaluating summaries written by users on collaborative sites (e.g. Wikipedia) or for reviewing texts generated by students in online learning systems (e.g. Moodle).
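The pipeline the abstract describes — score each sentence by concept frequency, cue-words, title words, and length; keep the top-ranked sentences as the model summary; then measure ROUGE overlap between that model and a candidate summary — can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the cue-word list, the equal weighting of the four features, and the ROUGE-1 recall formula are all simplifying assumptions.

```python
from collections import Counter
import re

# Hypothetical cue words; the paper uses its own curated list.
CUE_WORDS = {"important", "significant", "conclusion", "results", "propose"}

def tokenize(text):
    return re.findall(r"[a-z']+", text.lower())

def score_sentence(sentence, concept_freq, title_words):
    """Combine the four features with equal (illustrative) weights."""
    words = tokenize(sentence)
    if not words:
        return 0.0
    concept = sum(concept_freq[w] for w in words) / len(words)  # frequency of concepts
    cue = sum(w in CUE_WORDS for w in words)                    # cue-word hits
    title = sum(w in title_words for w in words)                # title-word hits
    length = min(len(words), 25) / 25                           # favor longer sentences, capped
    return concept + cue + title + length

def extract_model_summary(text, title, n=3):
    """Return the n best-scoring sentences; these act as the model summary."""
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
    concept_freq = Counter(tokenize(text))
    title_words = set(tokenize(title))
    ranked = sorted(sentences,
                    key=lambda s: score_sentence(s, concept_freq, title_words),
                    reverse=True)
    return ranked[:n]

def rouge1_recall(model_summary, candidate_summary):
    """Simplified ROUGE-1 recall: unigram overlap over model-summary unigrams."""
    model = Counter(tokenize(" ".join(model_summary)))
    cand = Counter(tokenize(candidate_summary))
    overlap = sum(min(count, cand[w]) for w, count in model.items())
    return overlap / max(sum(model.values()), 1)
```

In use, the extracted model summary replaces the human reference that ROUGE normally needs: a candidate summary that shares many unigrams with the article's best sentences scores near 1.0, while an unrelated one scores near 0.0.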


Published

2018-07-03

How to Cite

Ramírez-Noriega, A., Juárez-Ramírez, R., Jiménez, S., Inzunza, S., & Martínez-Ramírez, Y. (2018). ASHuR: Evaluation of the Relation Summary-Content Without Human Reference Using ROUGE. COMPUTING AND INFORMATICS, 37(2), 509–532. Retrieved from https://www.cai.sk/ojs/index.php/cai/article/view/2018_2_509
