Perhaps the issue is also to do with situational awareness. In taking up the question, the LLM can’t help but try to help; its dot-pointed taxonomies and numbered frameworks exude a desperation to, quite literally, exhaust all possible avenues of inquiry, often to the point of tedium. It can’t, in other words, read the room and cough up something indexical to the embodied vibe that calls for judgement. While some children might enjoy a prolonged lecture on paleoanthropology, many ask questions that presuppose the return of a concrete answer. Children also know when you’re trying to trick them with rhetoric that sounds persuasive but lacks the ingenuity and precision of genuine conviction. They don’t hold back. They exemplify directness and intersubjective purity. Even if they don’t state it directly, they can quickly digest the dynamics of a given situation. I will never forget the time a friend’s trilingual daughter thrust one of her story books at me to read as she dryly barked in disgust: “Here, this one’s in En-gl-ish.”
And what of irony? Or playfulness? The kind that might allow one to answer the question in a way that enlivens dialogue without sounding condescending. To the extent that children seek honesty and directness, they also know how to mess – psychologically, that is – with unreflective adults. The best measure of this game might well be the facial expression or tone of voice of children who harass their parents ad nauseam with questions – why? why? why? – that they have absolutely no intention of having answered. An LLM, of course, has none of this context. It cannot sense mischievous innuendo. It cannot detect a dialogic swindle. As Charles Barbour writes of the LLM’s inability to fully recognise or construct the conditions of irony: “machines might be able to determine from outward expression that a speaker does not intend what they say. But how could a machine possibly intuit the meaning that is not said? By what set of calculations, no matter how expansive and complex, could it conjure the unsaid up from the said?” 1
There is growing research – not always the same thing as evidence – that contests the above evaluations. 2 Some recent reports contend that AI outputs can demonstrate ideational coherence, reflexivity, and interpersonal positioning. They argue that LLMs can not only make claims but justify them within epistemically structured discourse. The word ideation frequently appears in these studies. So, too, does the most agonising of all word-salad words: workflow. Reading across and between them, there is a shared proposal that, when deployed within meticulously designed evaluation protocols, LLMs can generate novel ideas and responses. But what kind of novelty? And in service of whom? With enough ‘structured ideation’ could an LLM answer the question properly? Not correctly, in the scientific sense, but well – adequately, thoughtfully, with a degree of rhetorical courage proportionate to the occasion?
To converse with a child is to be caught off guard and respond while stumbling to regain your balance. It demands a certain kind of responsibility; not necessarily intellectual or philosophical but relational and altruistic. It demands that we abandon managed uncertainty. That we put aside, for a moment, the kind of diplomatic multi-perspectivism that sounds good but says nothing. This demand – part personal and part social – puts us on the spot. It necessitates that we frame and reframe the dulled habits and safe ontologies that settle like dust over adulthood. It demands that we embarrass ourselves a bit. Not through lack of knowledge but by letting our – please forgive me – guardrails down. It asks that we appreciate that sometimes knowledge is not a matter of expanding the frame, but of fixing it, however provisionally, and saying – bravely, without qualification – There. That is my answer. Now let’s discuss it.
The banality of Generative AI is about more than just slop and the “stochastic parrot” about which Emily Bender wrote almost five years ago now. 3 It is about more than simply the difference between fact and fiction. It is about a new regime of discourse that eschews particularity, even when the situation demands it. It is about the extent to which we are parroting that regime without even knowing we’re doing it. Certainty through volume. Predictability through scaffolding. Didactic surety. The illusion of care. The erosion of intrigue. A world of ideas arranged at the hands of tokens. An expression by N. Katherine Hayles comes to mind: “The human aura is now being subverted by a variety of simulacra.” 4
I should have answered the question with simply, “Hamlet.” I should have said: “Hamlet was the first human.” At least then I would have had to defend myself. It would have made for a better conversation.
- Charles Barbour, ‘Irony Machines: Artificial Intelligence, Literary Language, and the Opacities of Trust.’ Special Issue: AI and the Future of Literary Studies. Australian Literary Studies (2025): 1-14. ↩
- Many of these studies conclude by way of recommendations that emphasise ‘human-AI collaboration’ to achieve ostensibly novel insights. Their experimental frameworks are also considerably constrained and frequently pertain to single disciplinary contexts (e.g. consumer research or advertising). See, for example, Julian De Freitas, Gideon Nave and Stefano Puntoni, ‘Ideation with Generative AI—in Consumer Research and Beyond.’ Journal of Consumer Research 52.1 (2025): 18-31; Carlos Carrasco-Farre, ‘Large Language Models are as persuasive as humans, but how? About the cognitive effort and moral-emotional language of LLM arguments.’ (2024): arXiv:2402.09329. ↩
- Emily M. Bender, Timnit Gebru, Angelina McMillan-Major and Shmargaret Shmitchell, ‘On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?’ In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (FAccT ’21). New York: Association for Computing Machinery (2021): 610-623. ↩
- N. Katherine Hayles, ‘Subversion of the Human Aura: A Crisis in Representation.’ American Literature 95.2 (2023): 255. ↩