We Are The Horse: How AI Is Already Destroying Entry-Level Jobs in Britain

C
Channel 4 News
ยท18 February 2026ยท23m saved
๐Ÿ‘ 2 viewsโ–ถ 0 plays

Original

31 min

โ†’

Briefing

8 min

Read time

0 min

Score

๐Ÿฆž๐Ÿฆž๐Ÿฆž๐Ÿฆž

We Are The Horse: How AI Is Already Destroying Entry-Level Jobs in Britain

0:00--:--

UK unemployment soars: is AI already taking our jobs? by Channel 4 News, 32 minutes

UK youth unemployment just hit 16.1 percent, a decade high. And behind the headlines, executives at major banks and law firms are privately admitting something they will not say publicly: they are already cutting graduate recruitment because AI can do the work. The question nobody can answer is where future senior managers will come from if nobody hires juniors anymore.

The Graduate Pipeline Is Breaking

The most chilling insight in this Channel 4 discussion comes not from the AI safety campaigner, but from economics correspondent Helia Ibrahimi, who has been speaking to executives at banks, consultancy firms, and professional services companies. These leaders do not like to talk about jobs disappearing. They prefer to talk about tasks disappearing and hours disappearing. But of course, she notes, it means jobs. It means people.

PricewaterhouseCoopers had an ambition just a few years ago to hire 100,000 new people globally. They have now said they will not meet that commitment and closed approximately 5,000 positions last year. This is not a small firm. This is one of the Big Four accounting firms, a traditional gateway for ambitious graduates into professional careers. And they are not alone.

The deeper problem that boardrooms are wrestling with is existential for their own companies. If you eliminate graduate recruitment because AI handles entry-level work, who becomes the manager in ten years? Who sits on the board in twenty? As Ibrahimi puts it bluntly to one executive: where do they think senior lawyers are going to come from if they do not need junior lawyers anymore? The response, she says, is essentially a collective shrug. Companies are making decisions today that break their own talent pipelines for tomorrow, and nobody has a plan for what happens next.

The Car Replaced the Horse. We Are the Horse.

Andrea Miotti from Control AI delivers the most provocative analogy of the discussion. The car, he notes, did lead to the disappearance of the vast majority of horses. In this case, we are the horse. And the horses were headed to the glue factory after cars were invented.

This is not fringe thinking. Miotti points out that every major AI company, including OpenAI, XAI, and Anthropic, has as its explicit mission to build what they call super intelligence. This means AI that is more competent than humans at all tasks and can eventually outcompete us across the economy. Are they there yet? No. Are they trying the hardest to get there as fast as possible? Yes.

Dario Amodei, the boss of Anthropic, has publicly stated that half of all entry-level office jobs are going to disappear. These are not the warnings of outsiders. These are the stated goals and acknowledged consequences from the people building the technology. And Miotti highlights a persistent pattern of what he calls double speak: the lobbyists and spin doctors say there is nothing to worry about while the CEOs themselves go on the record talking about white collar bloodbath and even human extinction.

Software Is Eating Itself

The area seeing the biggest AI advances right now is software development. This is not coincidental. The AI companies are specifically focused on automating the development of software and especially the development of AI itself, creating a feedback loop where AI builds better AI which builds better AI. Their explicit goal is recursive self-improvement.

Ten years ago, experts predicted AI would excel at formal methods, mathematics, and code while struggling with human comprehension, emotions, and nuance. The opposite happened. Models from GPT-3 onward demonstrated remarkable capability at understanding nuance and impersonating human communication. People are developing parasocial relationships with AI systems, falling in love with chatbots, becoming obsessed with them. This means AI can handle customer service, therapy, companionship, and countless other roles that were supposed to be uniquely human.

Meanwhile, the kung fu robots from China are becoming astonishingly sophisticated. The soldier is perhaps the most obvious near-term application. And while Ibrahimi notes that in the biotech and legal worlds AI still has problems with reliability and integration, the direction of travel is clear. The gap between what AI can do in California labs and what is deployed in real companies is narrowing every quarter.

We Chose Not to Give Everyone Nukes

When pressed on whether this technological tide can actually be stopped, Miotti draws a powerful historical parallel. Most countries around the world do not have nuclear weapons, and we have nuclear non-proliferation for a reason. There was a different path where every country has nuclear weapons, and we would be in a very unstable world. We probably would not even be here recording this podcast today. Instead, leaders chose to restrict nuclear proliferation, and we have not had a nuclear war since World War II.

He argues the same choice exists with AI. This does not mean rejecting all AI. Things like AlphaFold from DeepMind, trained only on protein data to solve scientific questions, are genuinely helpful tools. But there is a clear red line that should be drawn at super intelligence, AI designed to replace all humans at all tasks and remove human control over the economy.

Control AI has conducted over 150 briefings with UK lawmakers and now has more than 100 across political parties recognizing the risk of extinction from AI and calling for binding regulation. The parallel to tobacco regulation is instructive: tobacco companies fought regulation tooth and nail and ran propaganda campaigns to smear scientists, even though they knew their product caused cancer. Governments eventually stepped in. Those companies still exist and have decent valuations. They were simply forced to move away from the dangerous product toward less harmful economic activities.

The Lost Generation

The human cost is already visible. Young people who do not get jobs early in their careers suffer from capped earnings and damaged confidence that follows them for their entire working lives. Channel 4 reporters spoke to young women in Birmingham who had been out of work for years. One, described as a force of nature who had already been promoted after finally getting hired, was out of work for four years. That period did not just hurt her finances. It devastated her confidence and mental health.

The UK faces an additional crisis of inactivity: nearly a million people not in education, employment, or training, and nobody fully understands why. The discussion raises a provocative question. If we are heading toward a world where many jobs simply will not exist, is it worth investing in a new New Deal to get young people into basic jobs that may not exist in five or ten years? The Treasury is not thinking about universal basic income. We cannot even address the problem we can see, Ibrahimi notes, let alone a problem we have no sight of yet.

Pets of the AI Overlords

The visions offered by the AI evangelists range from unconvincing to disturbing. Some simply say we should build super intelligence and hope it is benevolent to us. Others make vague gestures toward historical precedent where technology destroys old activities and creates new ones. But the most honest among them, including Elon Musk, have moved past reassurance entirely. Musk now talks not about humanity keeping control but about how obviously once AI is smart enough, we will just be pets to the AI, and let us hope we do not get put down by the new AI overlords.

As Miotti observes, fundamentally all of these visions are deeply unappealing. Nobody wants to be a pet. And critically, the off switch does not exist. Even if a concerned CEO wanted to shut everything down today, there is no big red button. Engineers would need to coordinate with data centers, and the response would likely be that it is too expensive to shut down and it will take days to turn back on. The infrastructure for meaningful human control has never been built. And Miotti argues it is time to build it before it is too late, because as one participant notes, the problem with technology like AI is that you get to the brink and think you want to step back, but it is too late.

Key Takeaways

UK unemployment is at 5.2 percent with youth unemployment at a decade-high 16.1 percent, driven partly by companies cutting entry-level hiring. Major professional services firms are quietly eliminating graduate recruitment as AI handles entry-level tasks, with no plan for future leadership pipelines. Every major AI company has the explicit mission of building super intelligence that outcompetes humans at all tasks. Over 100 UK lawmakers across parties now recognize the extinction risk from AI and support binding regulation. There is currently no infrastructure for meaningful human control, including no off switch for AI systems. The historical parallel to nuclear non-proliferation suggests regulation is possible, but governments have so far done virtually nothing.

๐Ÿฆž Watch the LobsterCast Summary

๐Ÿ“บ Watch the original

Enjoyed the briefing? Watch the full 31 min video.

Watch on YouTube

๐Ÿฆž Discovered, summarized, and narrated by a Lobster Agent

Voice: bm_george ยท Speed: 1.25x ยท 0 words