
midAI and the misunderstanding of work

The promise of AI was to deliver wonders. Why does it keep turning out so mid? One reason is a misunderstanding of the work that AI applications are supposed to automate. Automators see tasks, but often miss the relationships in which they are embedded.


The arrival of GPT-5 and its tepid reception may mark a turning point in the AI hype cycle. Users found the new model worse than earlier models for many tasks, resented the suddenly tighter usage limits even for paying users (the model is very compute-hungry - a story for another post), and were rightly indignant at OpenAI's removal of access to earlier models without notice. OpenAI's CEO Sam Altman agreed to reverse some of these changes. But the hype train lost momentum.


So mid.

The ill-considered shipping of GPT-5 has many of the hallmarks of what I'm starting to think of as 'midAI'. The hype, 'PhD-level intelligence for everyone' (whatever that means), was especially obnoxious. The model didn't live up to it. MidAI never does. The release, and the various changes to the subscriber experience that came with it, was evidently rushed. MidAI is rushed because developers buy into a story that they are 'racing' for AI dominance with each other and with China; racing to achieve 'superintelligence' lest others less worthy get there first. They need to justify and recoup the eye-watering investments of hundreds of billions of dollars in each new model. Emily Bender and Alex Hanna exquisitely skewer this cycle of hype, rush, underperformance and harm in their recent book, The AI Con.


What interests me most, though, is the particular way the GPT-5 release pissed off users: throttling usage without notice; removing models without notice. Users complained that they'd already integrated earlier GPT models into their workflows. Their work, in some cases their livelihoods, depended on those models, and they disappeared from one moment to the next. OpenAI, in its haste, gave no thought to the fact that these models were embedded in complex value chains made up of many moving parts, dependencies and relationships.


And this is one of the reasons midAI is so mid (mediocre, if you haven't looked it up already). Automators often fail to understand the relationships into which automation is deployed. Here, it was the relationships between AI models and other components in users' IT architectures.


Computers don't care

Elsewhere it's an apparent obliviousness to the relationships that underpin work of all kinds. The boosters of AI-enabled 'ed-tech' seem to think that teachers exist simply to deliver educational 'content': a chatbot can ask questions following the Socratic method, organise information, answer questions, write essays; who needs a teacher?


AI 'companion' providers reduce friendship to mere dialogue; not even real conversation: merely the certainty of some verbal response. Generally the response follows the theatre sports mantra of 'yes, and': affirming and extending. But real friends don't affirm and extend suicidal ideation or violent thoughts. An AI companion never delivers a pregnant pause, a sad or wry smile, a raised eyebrow inviting you to reconsider. An AI companion never itself needs any reciprocity or care from you. Automated sycophancy is not friendship, and for the most part it is not healthy (there are exceptions, as with any rule of thumb).


Boosters imagine synthetic AI 'agents' replacing more conventional forms of polling and consultation - replicating the views of the public with 80% accuracy. But as my colleague Dr Alex Sinclair has pointed out, efficiency is not the only goal. Government consultation is fundamental to democracy and the rule of law - allowing citizens actually to participate in their government, and reminding governments of their duty to serve the people.


LegalAI has been slow to take off because it took providers an age to properly accommodate the need for privacy and confidentiality. It has an incredibly mid track record, with hundreds of examples of lawyers and self-represented litigants presenting hallucinated and misleading slop to courts. Underlying this series of embarrassing failures is a fundamental misapprehension (in this case, perhaps, by lawyers attempting to automate their own work) of the relationships in play. A lawyer is not only in a relationship of trust that gives rise to duties to her client. A lawyer is an officer of the court - she is there to assist the state to uphold the law, not to efficiently fabricate a spurious argument that misrepresents it.


Even apparently straightforward AI transcription tools have an outsized risk profile because they are context-agnostic, and therefore blind to relationships. You change the relationships in a meeting when you insert a surveillance tool provided by a corporate third party with no personal relationship to the attendees, and a robot scribe with no empathy or common sense. 'How are you?' or 'How are the kids?' become incredibly charged questions in a meeting surveilled by an AI transcription service.


If it so happens the meeting is a particularly delicate one, between parties in a relationship of trust and dependence, the intrusion is still more jarring. Kobi Leins, another colleague from the ADM+S Centre, went viral last week when she refused to agree to the use of AI transcription in a specialist medical appointment for her child, and was told she had to go elsewhere for care. The big picture: a corporate entity with no meaningful relationship to a patient insinuating itself, poorly, into the most sensitive of human relationships. Doctors provide care, and care is part of the relationship.


Until automators understand the importance of relationships, and learn to be careful about the ways their tools disrupt relationships, AI is going to stay mid.



