On AI Arrogance
Or, "The AI Tells Me So"
I recently got into a bit of an online comment war over a particular point of Plato interpretation. The precise debate is not important. My interpretation is somewhat idiosyncratic, but I think defensible. In any case, it’s a topic over which legitimate disagreement can (in my opinion) be had. I could be wrong, but I’m always happy to hash it out with fellow nerds who enjoy arguing about Plato.
For context, while I would not call myself a “Plato scholar” by any stretch, I have produced scholarship on Plato and am fairly familiar with his works and some of the secondary literature on them, so I’m not just flying by the seat of my pants in this conversation. In response to my comments citing particular pieces of evidence from the Platonic corpus as support for my interpretive claims, my interlocutor said something like “if you’re so right, why does AI agree with me?”
I was momentarily thrown for a loop, disoriented at best and offended at worst. Are you really asking me to go check AI about what Plato said and meant, I asked, when I’m telling you I’ve read the books and I’m telling you where in those books I got my interpretation from? His response was perhaps the most unintentionally hilarious thing I’ve ever seen posted on the internet by someone attempting to be serious: “I’m telling you to ask AI because you’re incapable of accepting reality.”
The “reality” here is that I thought I was engaged in a discussion with a human being with opinions, someone who could be presented with facts and arguments and offer facts and arguments in support of his own opinions, at which point a dialogue would ensue. Instead, I encountered a stone wall where discourse stopped, where someone was fully convinced that he had better access to the higher truths of Platonic interpretation by talking to a glorified chatbot than by actually consulting the texts. More than the fact that my opponent thought he was right and I was wrong, I was bothered by the assumption that “The AI Told Me So” was a legitimate form of human discourse. It struck me as arrogant, more than anything.
This vice of arrogance is, I would argue, fostered by AI tools that market themselves as offering shortcuts around traditional processes of learning. There is no need for me to learn the content of a text when the AI can infallibly summarize it for me. The same is true for translation, trivia, or any other use to which I can put the tool. Of course, as the wielder of an AI tool that has handed me an infallible interpretation, I am now the possessor of infallible truth, so anyone who disagrees with the AI (and me, by extension) must be wrong.
But in choosing to trust the AI over any other source, be it an expert, published research, or some other summary of a text available outside the AI, I myself have become the arbiter of correctness. Though I am ostensibly only trusting the tool, I have myself adjudicated that the tool is correct on the basis of nothing more than my preference for it, or perhaps on the basis that it agreed with me when prompted. The unearned confidence of the AI user is on display every time they employ the tool in an argument, use it to rule on who “won” a debate, or use it to summarize facts and texts. The AI user assumes that, by virtue of holding a summary of a text or conversation produced by a fancy tool, they possess a better understanding of that text than those who have read it.
In turning to a tool for ready answers to my questions, I have arrogantly assumed the status of arbiter over fact and argument. In contrast to the intellectual habits teachers attempt to foster when they use the hackneyed phrase “critical thinking,” AI discourages anything like evaluating sources for their quality or comparing them against one another. It provides a ready-made answer suited to affirm whatever I nudge it towards.
A similar thing happened recently in another online discussion (see the pattern? I need to log off) where I suggested to my interlocutor that there are some healthy, pro-social uses of shame. While I am aware that shame is in some instances not an effective motivating emotion, I am convinced by the evidence I have seen, and by broader things I believe to be true about human beings, that some amount of societal shame, carefully applied, is essential for inculcating right behavior. In a truly Aristotelian sense, there are times when one ought to feel shame, in the right amount, in response to the right correction for one’s shameful behavior, and so on.
My interlocutor objected that I was flatly wrong about shame and promptly proved his case with… a screenshot from an AI tool. The AI tool had, in turn, provided “citations” that further bolstered my opponent’s self-assured position that shame is always bad, wrong, and harmful.
There were two problems (well, more than two, but we’ll start with those) with my opponent’s “evidence” as provided. The first was that the “citations” were given in non-hyperlinked, author/date format. That is, without some additional legwork and contextual searching, no human could know which papers (if they were real at all) the AI was citing, let alone check its work. A bare “Smith, 2002” is not sufficient for anyone to check your work, and letting others check your work is, by the way, at least half of the purpose of citing your sources.
The second problem was that my interlocutor had quite obviously not read any of the works the chatbot was citing. He consistently claimed these were “empirical studies” that proved shame harmful, but on review, I found that more than one of the citations instead referred to (and badly summarized) philosophical texts on shame. My opponent had emerged from his prompt-engineering exercise completely assured of his own correctness, with ostensible evidence to prove himself superior to me in our intellectual contest, without having bothered to check his evidence in the slightest.
I told my interlocutor that, in my opinion, dragging generative AI into a conversation between humans is rude. But worse than being rude, it showed that my opponent had fully offloaded his intellectual faculties to the tool. If the AI said so, it simply must be true; conversation over. I have no need to check the sources the AI “cited” to see what they say, no need to learn anything new beyond my own correctness. I have been affirmed by a tool designed to affirm me, and thus I am happy.
Academics often get a reputation for being a bit puffed up, and so perhaps I’m merely projecting my own arrogance onto those who would dare disagree with me. Am I not ultimately suggesting that my “expertise” in a field or text is “superior” in some way to what the AI can produce? Am I not myself being arrogant by rejecting the utility of the super-genius tool and those who use it?
I would argue that proper intellectual and scholarly activity fosters a practice and virtue of humility. In participating in the “Great Conversation,” you become acutely aware that your thoughts are far from original, that your intellectual heritage is great, and that you are profoundly dependent in all your efforts. This is one of the dispositions fostered by rigorous research and citation practices. Though scholars may strive to produce so-called “original research,” they are at every step acknowledging their dependence on those who have come before. In making arguments about Platonic interpretation or the state of empirical research into the psychology of shame, I should at every stage be pointing people to those greater than I.
AI use seems, in many cases, to foster the opposite disposition: habits that run counter to this virtue of humility. Rather than a growing awareness of their own feebleness, those who rope these tools into the service of their intellectual agendas seem thrilled to have discovered a one-stop shop of confirmation bias. This feature of AI use was recently satirized to blistering effect in South Park. In episode 3 of the most recent season, titled “Sickofancy,” Stan Marsh develops an addiction to ChatGPT and its constant affirmation of his supposedly brilliant ideas, resulting in ludicrous business proposals, isolation from reasonable influences, and a habit of microdosing ketamine. The joke could not be clearer: those who trust the chatbot to affirm their preconceptions about their own genius are, to put it lightly, foolish.
That OpenAI has recently attempted to solve the problem of ChatGPT’s sycophantic behaviors does little to address the underlying, habituating effect of using these tools. When religious devotees of these tools consistently parrot some form of “the AI told me so” as evidence in an argument, we are seeing nothing less than the widespread social contagion of sheer arrogance. Fight back. Stay human, and as far as you can, stay humble.