From human to automatic summary evaluation


Abstract:

One of the remaining goals in Intelligent Tutoring Systems is to create applications that evaluate open-ended text in a human-like manner. The aim of this study is to produce a design for a fully automatic summary evaluation system that could stand in for human-like summarisation assessment. To achieve this goal, an empirical study was carried out to identify the underlying cognitive processes. The sample comprised 15 expert raters on summary evaluation with different professional backgrounds in education. Pearson's correlation was calculated to assess the level of inter-rater agreement, and stepwise linear regression was used to identify predictor variables and their weights. In addition, interviews with the subjects provided qualitative information that could not be acquired numerically. Based on this research, a design for a fully automatic summary evaluation environment is described. © Springer-Verlag 2004.
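The abstract names two statistical steps: Pearson correlation for inter-rater agreement and stepwise linear regression for predictor selection. The sketch below (not the authors' code) illustrates both on invented data; the feature names, the number of summaries, the simulated rating model, and the adjusted-R² selection criterion are all assumptions for illustration only.

    import numpy as np
    from itertools import combinations
    from scipy.stats import pearsonr
    import statsmodels.api as sm

    rng = np.random.default_rng(0)

    # Hypothetical data: 30 summaries with invented features; a latent "true"
    # quality drives 15 raters' scores (only the rater count comes from the study).
    n_summaries, n_raters = 30, 15
    features = {
        "content_overlap": rng.random(n_summaries),
        "coherence": rng.random(n_summaries),
        "length_ratio": rng.random(n_summaries),
    }
    true_quality = 4 + 3 * features["content_overlap"] + 2 * features["coherence"]
    ratings = true_quality[:, None] + rng.normal(scale=0.8, size=(n_summaries, n_raters))

    # Inter-rater agreement: mean pairwise Pearson correlation between raters.
    pairs = [pearsonr(ratings[:, i], ratings[:, j])[0]
             for i, j in combinations(range(n_raters), 2)]
    print(f"mean pairwise Pearson r = {np.mean(pairs):.2f}")

    # Forward stepwise linear regression: greedily add the feature that most
    # improves adjusted R^2 when predicting the mean human rating.
    X_all = np.column_stack(list(features.values()))
    names = list(features.keys())
    y = ratings.mean(axis=1)

    selected, remaining = [], list(range(X_all.shape[1]))
    while remaining:
        base = (sm.OLS(y, sm.add_constant(X_all[:, selected])).fit().rsquared_adj
                if selected else 0.0)
        best_gain, best_idx = 0.0, None
        for idx in remaining:
            fit = sm.OLS(y, sm.add_constant(X_all[:, selected + [idx]])).fit()
            if fit.rsquared_adj - base > best_gain:
                best_gain, best_idx = fit.rsquared_adj - base, idx
        if best_idx is None:
            break
        selected.append(best_idx)
        remaining.remove(best_idx)

    final = sm.OLS(y, sm.add_constant(X_all[:, selected])).fit()
    print("selected predictors:", [names[i] for i in selected])
    print("weights:", dict(zip([names[i] for i in selected], final.params[1:])))

On this synthetic data the procedure recovers the two informative features and drops the irrelevant one; in the study itself, the regression serves the analogous role of revealing which rater-used criteria carry predictive weight.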

Year of publication:

2004

Keywords:

    Source:

    Scopus

    Document type:

    Article

    Status:

    Restricted access

    Knowledge areas:

    • Computer science

    Subject areas:

    • Operations of libraries and archives