AI and the temptation to cheat at everything

Updated: Jun 30

AI shortcuts are irresistible to many: from students to companies and government agencies. Too many of them are dead ends. If we aren't careful we'll end up in a world full of discount bullshit, efficiently delivered by stupid machines.


As a university law lecturer, I'm becoming all too familiar with students cheating in assignments using AI tools. The ones I catch are the ones for whom the tool has done a terrible job. It's hard to describe how demoralising it is, after working through dozens of student papers, to notice the unmistakable hallmarks of AI cheating in yet another one.


A crumpled plastic bottle on the sand

AI-generated dross, produced by tools like ChatGPT and DeepSeek, stands out like so much plastic waste on a beach. Papers are littered with 'hallucinated' references to non-existent journal articles, abject nonsense about weirdly obscure bits of law, and hopeless mischaracterisation of scholarship in the field. It's depressing, and it's an insult both to the other students who work hard on their papers, and to the academics on whose hard-won reputations these fabrications trade. As a friend put it to me, after marking a paper full of references to non-existent articles attributed to him: "They must think we're fucking idiots. Or we don't give a shit. Or both."


The problem isn't confined to students. One law scholar keeps a database of cases where judges have detected fabricated legal authorities in lawyers' submissions to court. There are already over a hundred cases there. Just yesterday a friend sent me a link to an article in a pre-print repository by a data scientist at a prominent tech company (I won't name and shame). The article was absolutely riddled with AI hallucination. AI cheating is everywhere.


AI seems to offer incredible shortcuts: around the exhausting (but edifying) intellectual effort required to write a good university assignment, yes; but also shortcuts around anything that requires thought, analysis and judgment.


AI hypers sell bottled capability to companies: outputs and production without (paying for, and investing in) human skill and human work. Then, so very often, the hype-train derails. I'm thinking of the recent embarrassment at the Melbourne Cricket Ground (MCG), where the stadium administrator procured a system called 'EvolvExpress' to provide AI-enabled security scanners. Evolv was supposed to be able to detect weapons from security scans, alerting human staff to then intervene. But at a recent Australian Football game where the MCG deployed Evolv, two men involved in a fracas in the stadium were found to be carrying guns. According to the MCG CEO, "our initial internal investigation identified a breakdown in the thoroughness of the secondary and manual screening process". In other words, like cheating students, the MCG over-relied on a tool that was unfit for the job, and failed to validate results appropriately.


It's not the first time Evolv has landed in hot water. The US competition and consumer regulator, the Federal Trade Commission, recently settled a lawsuit against Evolv for false and misleading claims about its suitability as a security system for US schools. The settlement in that case prohibited Evolv from claiming that the technology could detect weapons and ignore harmless personal items (their previous sales pitch). And yet somehow Evolv still exists and still claims to provide ‘advanced security detection technology’.


It's a familiar story in the world of AI and data-driven automation. Australia's Robodebt scandal, where a deeply stupid algorithm wrongly issued automated debt notices to welfare recipients, is a variation on a theme of ill-considered penny-pinching automation. No one seemed to have thought through the system's fundamental, idiotic assumption: that welfare recipients earn the same from work every month. For people in precarious work, income ebbs and flows from month to month, and deviation from average monthly income is not a sufficient basis for assuming welfare overpayment. The Netherlands, France, the UK and the US have all had their own versions of this story: hastily deployed, badly validated AI tools incorrectly raising allegations of welfare fraud en masse, with disastrous impacts for people in poverty.


What's so weird is that in so many of the AI-related scandals we've seen, it would not have been particularly hard to work out that the tool wasn't suitable for the job, and yet... people are somehow taken in, over and over again.


Mephistopheles dressed in Renaissance clothes stands over a dreaming Faust
Faust Tempted by Mephistopheles - Engraving (Unknown Artist) - Source: https://www.meisterdrucke.us/

At this point it seems fair to say that the lure of cheating is not a fringe sociological bug, found around the margins of AI deployment. It's an essential feature of this new technology. The tech is what it does, and one thing it does very well is tempt people to cheat. At everything.


No one really knows what is going to happen with AI. Still, one future that seems plausible is one where we see the same surrender to weakness and dishonesty that undoes cheating students, but at scale. AI 'doomers' worry about the catastrophes that might attend the advent of superhuman AI or 'artificial general intelligence'. If superhuman artificial general intelligence were to replace human labour, that would create huge upheaval. But what if people lose their jobs, not to superhuman AI, but to the same shabby faux-intelligence, the same artificial stupidity I see in AI-generated law assignments?


In that world of artificial general stupidity, businesses and governments and providers of every product of human intellect will have succumbed to the temptation of AI tools, not because the AI is better than a skilled person, but because it is cheap. Unable to resist the promise of production without work, service providers will cheat themselves, they will cheat their employees, and they will cheat us by using crummy tools to deliver sub-par services.


The irresistible lure of AI cheating in every domain will cheapen everything, literally and figuratively. I hope that doesn't happen, but I fear it will.
