How AI and Humans Simplify Text Differently

Last year (2025) I spent a good deal of time thinking about the differences between human and AI language processing. A co-authored paper published in Intercultural Pragmatics and a keynote lecture in Taiwan brought the contrasting strategies of text simplification into sharper focus. Here I look back at the key points.

The problem of text simplification

"Easy Japanese" (Yasashii Nihongo) is an initiative to make information accessible to people who are not native speakers of Japanese. NHK's "NEWS WEB EASY", for example, rewrites news articles into simpler Japanese. This rewriting requires multidimensional processing: judging vocabulary difficulty, sentence structure, and information selection all at once.

What happens when we let AI (GPT-4) do this work, and how does it compare with human experts? And what do the differences reveal about the cognitive characteristics of human language processing? We compared 420 NHK news articles across three versions: originals, human-simplified, and AI-simplified (Hasebe and Lee 2025).

Two divergent strategies

The results were clear. AI-simplified texts achieved higher readability scores than human-simplified texts (AI: 3.69, human: 2.97, original: 1.33). Moreover, AI preserved more of the original information: in morpheme counts, the originals had 598, the human versions were reduced to 205, and the AI versions retained 332.
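The scale of the difference is easy to quantify from the reported morpheme counts. A minimal sketch, using only the figures from the paper (the computation itself is just illustrative arithmetic, not part of the study's methodology):

```python
# Morpheme counts reported in Hasebe and Lee (2025)
counts = {"original": 598, "human": 205, "ai": 332}

# Share of the original morphemes each simplified version retains
for version in ("human", "ai"):
    ratio = counts[version] / counts["original"]
    print(f"{version}: retains {ratio:.0%} of the original morphemes")
```

Human simplification keeps roughly a third of the original material, while AI simplification keeps over half, which is the "abstraction versus systematic transformation" contrast in numeric form.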

In other words, AI made texts easier to read while keeping more information; humans cut information heavily to achieve conciseness.

This difference is not accidental. It is rooted in the different processing mechanisms of humans and AI.

Human working memory is limited. We can hold roughly four chunks of information at a time. When simplifying complex text, maintaining overall coherence while rewriting individual sentences imposes a heavy cognitive load. As a result, humans tend to rely on a strategy of stripping away less essential information and summarizing the core – a process we can call "abstraction."

AI transformer architectures, by contrast, use self-attention to process the entire input in parallel. They can grasp relationships between distant parts of the text simultaneously, making it possible to preserve information while transforming only the sentence structure – a "systematic transformation."
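The mechanism behind this parallel processing can be sketched in a few lines. Below is a toy, pure-Python version of scaled dot-product self-attention; a real transformer adds learned query/key/value projections, multiple heads, and positional information, all omitted here for clarity:

```python
import math

def self_attention(X):
    """Toy scaled dot-product self-attention over a short sequence.

    X: list of token vectors. Here queries = keys = values = X
    (the learned projection matrices of a real transformer are omitted).
    Each output vector is a weighted mix of ALL input vectors, so
    relationships between distant tokens are captured in a single step.
    """
    d = len(X[0])
    out = []
    for q in X:
        # similarity of this token to every token, scaled by sqrt(d)
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in X]
        # softmax over the whole sequence
        m = max(scores)
        exps = [math.exp(s - m) for s in scores]
        total = sum(exps)
        weights = [e / total for e in exps]
        # weighted sum of the value vectors
        out.append([sum(w * v[i] for w, v in zip(weights, X)) for i in range(d)])
    return out

# three 2-dimensional "token" vectors
tokens = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
mixed = self_attention(tokens)
```

The point of the sketch is structural: every output position attends to every input position at once, which is what lets the model rephrase a sentence while keeping track of information anywhere else in the text.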

The possibility of complementarity

Interestingly, the human strategy has clear strengths of its own. Human-simplified texts showed the ability to judge the relative importance of information and reorganize the overall structure of a text at a conceptual level. This is not mere reduction but "conceptual synthesis" – extracting what the reader truly needs – and it is qualitatively different from AI's systematic transformation.

In my keynote at a symposium in Taiwan, I discussed how this complementarity could be applied to language education. AI handles the initial systematic transformation; human educators add cultural contextualization and pedagogical considerations. I believe AI should be positioned not as a "supplementary tool" but as a "complementary tool."

At the same time, AI comes with non-determinism, hallucinations, and the risk of unsolicited interpretation. Given these limitations, a process of human verification, correction, and control over AI output remains essential – another dimension in which human-AI collaboration is key.


Hasebe, Y. and J.-H. Lee. 2025. Divergent strategies in text simplification: A comparative analysis of AI and human approaches in language processing. Intercultural Pragmatics 22(2), 203-230.

Hasebe, Y. 2025. Redesigning Japanese language education through human-AI complementarity. Keynote lecture, 2025 International Symposium on Taiwan Japanese Language Education Research.