Literary Language Mashup: Curating Fictions with Large Language Models
Academic Article in Scopus

abstract

  • The artificial generation of text has been a field of study since the early twentieth century, from Markov chains to the Turing test, and has since evolved into automatic summarization and marketing chatbots. The generation of literary texts by computational means has likewise been an area of scholarly inquiry for over six decades, most recently through Large Language Models (LLMs). The literary quality of AI-generated text can be evaluated with GrAImes, an evaluation protocol grounded in literary theory and inspired by the editorial process of book publishers, which situates the assessment within the broader editorial practices of publishing. The protocol requires human judges to validate the generated texts, a process that is resource-intensive in both time and money because of the specialized credentials and expertise required of these evaluators. In this paper, we propose an alternative approach: employing LLMs themselves as evaluators within the GrAImes framework. We apply this methodology to human-written and AI-generated microfictions in Spanish, previously assessed by five literature professors holding PhDs and by sixteen literary enthusiasts, as well as to short stories in both Spanish and English. By comparing the evaluations produced by LLMs with those of human judges, we measure the alignment and divergence between the two perspectives and thereby assess the feasibility of LLMs as auxiliary literary evaluators. Our experiments show that LLMs cannot be regarded as substitutes for human judges in the evaluation of literary microfictions and short stories, with a Krippendorff's alpha reliability coefficient below 0.66; they can, however, serve as a valuable tool that offers an initial perspective on the editorial quality of the texts in question. Overall, this study contributes to the ongoing discourse on the role of artificial intelligence in literature, underlining both its methodological constraints and its potential as a complementary resource for literary evaluation. © 2026 by the authors.
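
  • As an illustration of the LLM-as-evaluator setup described above, the following Python sketch poses one GrAImes-style editorial question to a chat model. It assumes the openai client library and an OpenAI-compatible endpoint; the question wording, the 1-to-5 scale, the model name, and the sample microfiction are illustrative stand-ins, not the actual protocol items.

        # Minimal sketch: asking an LLM to act as a literary judge.
        # The prompt and scale are hypothetical paraphrases of a
        # GrAImes-style questionnaire item, not the published protocol.
        from openai import OpenAI

        client = OpenAI()  # reads OPENAI_API_KEY from the environment

        def score_microfiction(text: str, question: str) -> str:
            response = client.chat.completions.create(
                model="gpt-4o",  # hypothetical choice of judge model
                messages=[
                    {"role": "system",
                     "content": "You are a literary editor assessing a "
                                "microfiction for publication."},
                    {"role": "user",
                     "content": f"{question}\n"
                                f"Answer with a score from 1 to 5 and a "
                                f"one-sentence justification.\n\n"
                                f"Microfiction:\n{text}"},
                ],
            )
            return response.choices[0].message.content

        # Sample call with a well-known Spanish microfiction (Monterroso).
        print(score_microfiction(
            "Cuando despertó, el dinosaurio todavía estaba allí.",
            "Does the text achieve a coherent aesthetic effect?",
        ))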
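
  • The agreement between LLM and human scores reported above can be computed with standard inter-rater reliability tooling. Below is a minimal sketch assuming the open-source krippendorff Python package and invented ordinal ratings; the study's actual data, raters, and scale are not reproduced here.

        # Minimal sketch: Krippendorff's alpha over a raters-by-items matrix.
        # Rows are raters (human judges plus an LLM), columns are texts,
        # and np.nan marks items a rater did not score. All values invented.
        import numpy as np
        import krippendorff  # pip install krippendorff

        reliability_data = np.array([
            [4, 3, 5, 2, 4, np.nan],  # human judge 1
            [4, 2, 5, 3, 3, 4.0],     # human judge 2
            [5, 3, 4, 2, 4, 4.0],     # LLM evaluator
        ])

        alpha = krippendorff.alpha(reliability_data=reliability_data,
                                   level_of_measurement="ordinal")
        print(f"Krippendorff's alpha: {alpha:.3f}")
        # By Krippendorff's own convention, values below about 0.667 are
        # treated as unreliable agreement, the region the study reports.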

publication date

  • January 1, 2026