A new academic study has found that while ChatGPT can produce grammatically polished, coherent argumentative essays, it still falls short in the subtler craft of engaging the reader, a skill that human student writers handle far better.
The research, led by Professor Ken Hyland from the University of East Anglia in collaboration with Professor Kevin Jiang of Jilin University, analysed 145 essays written by UK university students and another 145 generated by OpenAI’s ChatGPT. The findings, now published in the journal Written Communication, show that student-authored essays were markedly stronger in deploying rhetorical strategies like questions, personal commentary, and direct appeals—features that build a persuasive and interactive connection with the reader.
“Since its public release, ChatGPT has created considerable anxiety among teachers worried that students will use it to write their assignments,” said Prof Hyland. “We wanted to see how closely AI can mimic human essay writing, particularly focusing on how writers engage with readers.”
The term “engagement markers” refers to elements that actively involve the reader, such as posing rhetorical questions, including personal asides, or directly addressing the reader with terms like “we” or “you”. These rhetorical flourishes were largely absent in the AI-generated essays, which, although structurally sound, read as more impersonal and detached.
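To make the idea concrete, markers like these can be tallied automatically. The sketch below is a simplified illustration, not the study's actual coding scheme: the pronoun list and the parenthetical heuristic for asides are assumptions chosen for this example.

```python
import re

# Illustrative tally of three engagement-marker types mentioned above:
# rhetorical questions, direct reader address ("we"/"you"), and asides.
# The marker lists and the parenthetical heuristic are assumptions for
# this sketch, not the researchers' methodology.
READER_PRONOUNS = {"we", "us", "our", "you", "your"}

def count_engagement_markers(text: str) -> dict:
    words = re.findall(r"[a-z']+", text.lower())
    return {
        "questions": text.count("?"),                   # rhetorical questions
        "reader_pronouns": sum(w in READER_PRONOUNS for w in words),
        "asides": len(re.findall(r"\([^)]*\)", text)),  # parenthetical asides
    }

essay = ("Why does this matter? Consider what we lose (and we do lose "
         "something) when you remove the writer's voice.")
print(count_engagement_markers(essay))
# → {'questions': 1, 'reader_pronouns': 3, 'asides': 1}
```

A real corpus analysis in this tradition would rely on hand-validated coding of each marker in context rather than surface pattern matching, but the sketch shows why such features are countable and comparable across the two sets of essays.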
“The ChatGPT essays mimicked academic writing conventions but were unable to inject text with a personal touch or to demonstrate a clear stance,” said Hyland. “They tended to avoid questions and limited personal commentary. Overall, they were less engaging, less persuasive, and there was no strong perspective on a topic.”
By contrast, student essays were found to be more interactive, using a wider variety of techniques to guide the reader through complex arguments. This included a significantly higher number of questions and narrative asides—conversational moves that humanise the writing and create a sense of shared intellectual journey between writer and reader.
The study also sheds light on the underlying mechanics of ChatGPT’s output. Because it relies on statistical modelling from large-scale training data, it often generates text that is contextually appropriate but devoid of the nuanced engagement typical of human writing. The AI’s so-called “audience blindness”—its inability to imagine a specific reader—emerges as a fundamental shortcoming in producing persuasive prose.
Despite these gaps, the researchers are not dismissing AI’s role in education. Rather, they argue for its responsible integration into teaching. “When students come to school, college or university, we’re not just teaching them how to write, we’re teaching them how to think and that’s something no algorithm can replicate,” added Hyland.