Subtitle: It is easy for me, with aphantasia
I found this article on substack.com a few days ago and have been rereading it from time to time since then, unable to figure out what exactly was wrong with it.
I read it carefully – three times or so – and yet it somehow just didn’t click for me.
At some point, I caught myself thinking that I almost never see those «mental images» the author mentions in the text.
(I even decided that I’m the same kind of person as the author, «aphantasic», and studied her other articles on this topic – on the same resource – where I found something else funny, but that’s probably a completely different story.)
I’ve thought about this many times and have almost developed my own rules for determining whether a text was written by artificial intelligence or not.
But more often than not, I was guided simply by a personal sense of some abstract «artificiality».
Perhaps the rule that works best for me is the «comprehensive», over-structured text; this partially echoes the author’s conclusions, but with many «buts».
The main feature that the author constantly refers to is the «mental images». Of course, if we see some vivid images or specific details in an article that are clearly based on personal experience or memories, then most likely it was written by a person. (But I’m afraid that it’ll be possible to imitate those things soon, too.)
Related signs – overuse of institutional buzzwords; the absence of unique, messy, or uncertain personal narratives; smooth but mechanical-sounding satire or persuasion lacking authentic insight or belief – can sometimes be telling, of course, but they are useless when what’s being produced is a scientific article or an analytical text (probably the most common use cases for AI).
«Comprehensive texts» – this is not a reliable criterion on its own; it might only work when you see a text that is too dry and detailed where you expected a light, lively description or a story.
And I completely missed the point about the lack of «not just X but also Y» constructions. I didn’t even understand whether this was supposed to be a sign of artificial intelligence or the opposite. My bad?
But, undoubtedly, this is a good reason for reflection, and a basis for a deeper study of one’s own criteria for identifying AI-generated texts and for working out one’s own position on this delicate and urgent issue.
References
The article: How to Tell if Something is AI-Written / by Hollis Robbins, Aug 2025.
The author: Hollis Robbins is an American academic and essayist. Robbins is a professor of English and also serves as Special Advisor for Humanities at the University of Utah; she was formerly dean of humanities. Her scholarship focuses on African-American literature, and her essays focus on higher education and artificial intelligence.
An article about aphantasia, and a study of it by prof. Adam Zeman: The Shape of Things Unseen (Review). While reading, I doubted whether it was possible to measure the presence of these «mental pictures», but this article (and the study it reviews) claims that it really is possible (prof. Zeman worked on this). The next step is to find that study and try to understand what practical methods were used.