Academics Need to Wake Up on AI, Part III
Most of us do not contribute to human knowledge—AI just made it obvious
Alexander Kustov · Apr 15, 2026

Please like, share, comment, and subscribe. It helps grow the newsletter without a financial contribution on your part. Thank you for reading.

In Part I, I argued that AI can already do social science research better than most professors. In Part II, I engaged with over a thousand responses, conceding where critics were right, while standing by my main claim: the academic status quo was already broken, and AI is just forcing the reckoning.1 In this Part III, written collaboratively with AI and my peers over the last month, I move from diagnosis to what academics can and can't actually do about it.

The rather unlikely proximate cause of this third installment on AI was visiting the 2026 International Studies Association (ISA) Annual Convention in Columbus, Ohio: a preeminent multidisciplinary conference of the world's leading international studies professionals. Or so I was told. What I actually witnessed were presentations so rough they would barely get a C in any of my classes: arguments with no thesis or coherence, grammar errors any spell-checker would catch, presenters reading off their slides as if encountering their own bad arguments for the first time. All without any AI involved, as far as I could tell, judging by the typos and inconsistencies. These were not just grad students, but people with PhDs, tenure, and research budgets.

If AI slop is the crisis everyone warns about, I'd like to know what to call what I saw at ISA, or at most other big social science conferences, for that matter.2 The contrast was impossible to ignore: I was sitting through these presentations at precisely the moment I was receiving online death threats and calls for my firing for suggesting AI can do research better than most professors. That juxtaposition crystallized the argument for this piece.
1. Most "slop" has always been, and still is, human slop.

My first thesis was the most provocative thing I've said, and I've adjusted it only slightly since then: agentic AI can already do most social science research tasks better than most professors globally. I still stand by it. In my recent interview with the Chronicle, they put it more bluntly: "AI Is a Better Researcher Than You." If you still don't believe that's true, let's talk in a few years.

But the flip side is just as important. If AI can produce better research output than professors, that's also an indictment of the output those professors were, and still are, producing without AI.

"Slop" was Merriam-Webster's 2025 Word of the Year, defined as low-quality digital content produced by AI. But the ISA conference was a reminder that the vast majority of slop has always been human slop. The academic journal system and the big conferences in much of the humanities and social sciences were slop factories long before anyone had a ChatGPT subscription. Yes, I really mean that most research is slop.3

Some of it is also what the philosopher Harry Frankfurt would call "bullshit": work that is indifferent to whether its claims are true, especially on politically charged topics like immigration, where researchers start with the left-wing conclusion and work backward. But slop is broader than bullshit. It also includes work that makes no claim at all, and work that is supposed to have craft value and simply fails. The researcher who finds a dataset before having a…
This excerpt is published under fair use for community discussion. Read the full article at Popularbydesign.