Daniel Priestley says AI plus robotics is a bigger shift than the Industrial

T
The Diary of a CEO
·16 March 2026·1h 47m saved
👁 2 views1 plays

Original

2h 2m

Briefing

15 min

Read time

19 min

Score

🦞🦞🦞🦞🦞

Daniel Priestley says AI plus robotics is a bigger shift than the Industrial

0:00--:--

A Turning Point More Disruptive Than the Industrial Revolution

The conversation opens with a stark claim: the current AI moment feels unlike anything Daniel Priestley has seen in 25 years of building companies. He says he has lived through the dot-com era, the global financial crisis, Brexit, and Covid, but “I have never experienced what we’re experiencing right now.” In his view, the present combines two shocks at once: artificial intelligence replacing cognitive labor and robotics replacing physical labor. That pairing, he argues, makes the shift feel even bigger than the transition from the agricultural age to the industrial age 250 years ago.

The host pushes on the implications by pointing to robots in China doing backflips and humanoid machines in factories that can pick up and move objects with surprising dexterity. He adds a crucial point about software-driven intelligence: once an AI system learns a task in one place, it can perform that task everywhere immediately because it sits on top of the internet. Daniel reinforces this with the robotics analogy from Boston Dynamics: if one robot learns something on a factory floor, all robots can learn it. That speed matters. Unlike past industrial transitions, where infrastructure took years to build and deploy, AI rolls out almost instantly across connected networks.

The host then brings in Tesla’s Cybercab, described as a fully autonomous two-seater without a steering wheel or pedals. Elon Musk’s stated goal of pricing it around $30,000 becomes a symbol of how rapidly core sectors may be transformed. The host notes that over 100 million people globally work in driving-related jobs, and he predicts that a journey that costs $50 today could eventually cost $6 or $7 once autonomous fleets scale.

The emotional tone is mixed from the start: fear and excitement sit side by side. Daniel says he has “never seen more excitement for the opportunities that are in front of us” and “never seen more fear for the disruption that is coming.” That duality drives the rest of the discussion. The host frames the central question not as whether AI is coming, but where humans fit once both intelligence and physical execution are no longer uniquely human advantages.

Rather than offering easy reassurance, both men treat this moment as a genuine rupture. They return repeatedly to one core theme: the future will not simply preserve old roles with better tools. It will force a wholesale renegotiation of work, identity, value, and advantage. The briefing starts, then, from a serious premise: this is not a normal technology cycle. It is a civilizational shift.

The Jevons Paradox and Why AI Could Create Millions of Small Businesses

Daniel’s first major framework for making sense of the disruption is the Jevons paradox: when technology makes something dramatically more efficient, society often ends up using more of it, not less. He offers YouTube as a concrete example. Traditional television and film required large crews, major studios, and budgets that ran into millions per episode. He cites shows like Seinfeld, Friends, and The Simpsons as examples of the old production model. YouTube reduced those costs so radically that small teams of 5 to 10 people can now run highly successful media businesses. Hollywood lost jobs, he says, but an entirely new creative economy emerged, with “500 to 600,000 jobs” created around YouTubing.

He applies the same logic to software. Historically, a software company might need 10,000 customers, a team of 50, and $1 million to $5 million in startup capital to get off the ground. In an AI-enabled world, he says, a software company might only need 500 customers, a tiny amount of funding, and two people to operate. The result is not that the original large software companies simply become smaller versions of themselves. Instead, the lowered cost structure unlocks millions of niche opportunities that were previously too expensive to serve.

That leads to one of his most optimistic predictions: AI will make possible “millions of tiny little software businesses” that are highly profitable, deeply specialized, and often combined with community experiences. He imagines these new companies not as pure software tools, but as ecosystems: software plus dinner parties, an annual ski retreat, a podcast, training, or a YouTube channel. AI lets very small teams coordinate all of that.

The host accepts the logic but raises an important objection. In prior disruptions, human intelligence remained scarce. The transition from newspapers to blogging still depended on people doing human thinking. But now, when AI can generate endless content and agents can automate many knowledge tasks, is this still the same paradox? He points to a new bottleneck: attention. Time spent online has plateaued, especially among Gen Z, while the amount of content is exploding. Supply is effectively infinite, while demand is capped.

Daniel responds with an airport metaphor. Personal brands and businesses that are already airborne may keep flying above the fog, but new entrants face a much harder takeoff because of AI-generated saturation. He concedes that simple one-dimensional business models are ending. A creator who only uploads videos and relies on AdSense is vulnerable. But a creator or founder with an ecosystem — community, events, products, relationships — is much more defensible.

The key takeaway is nuanced. AI may indeed create a flood of new businesses and opportunities, but not all opportunities will be equal. Commodity outputs will lose value quickly. What grows in value are combinations: niche specialization, distribution, community, lived experience, and real-world connection. Jevons paradox still applies, but the winners will not just be the cheapest producers. They will be the most humanly integrated ones.

From Social Media to Algorithmic Media: Why Content Alone Is No Longer Enough

One of the most revealing parts of the conversation focuses on content economics. The host explains that his team analyzes the variance between their worst-performing and best-performing posts each year. This spread has been increasing sharply, suggesting that follower count matters less and less while platforms prioritize whatever is most interesting in the moment. He describes the shift as “the end of social media and the birth of algorithmic media,” where audiences are no longer primarily seeing updates from people they chose to follow, but whatever the recommendation systems decide will keep them watching.

That observation matters because it changes the value of building an audience. The host notes that he can no longer rely on what happened yesterday. Even with a large following, every post increasingly has to earn its way. Daniel agrees and reframes the environment: “social media was all about connecting with your friends,” whereas algorithmic media is about “what the algorithm thinks that you should be watching today.” The implication is severe for anyone betting on a simple creator model.

At the same time, both men argue that the best creators are not just content producers. They are multi-dimensional businesses. The host’s own operation includes books, live events, podcasts, products, and a community. Daniel says that kind of ecosystem is defensible because it extends beyond content into experiences AI cannot replicate. If a brand includes physical gatherings, shared rituals, community belonging, and real relationships, it gains a moat that pure digital publishing lacks.

The host asks why this should be true from first principles. Daniel responds with a farming analogy. In agriculture, the farmer had to know when to plant and when to harvest, while the soil did most of the work in the middle. AI, he says, is similar: it is “very good at doing the middle to the middle.” It is not as good at deciding what should be created in the first place or how to take the final product to market in a way that resonates with humans. That remains the entrepreneur’s job.

There is still tension in the exchange. The host argues that eventually an AI agent could set its own goals, launch businesses, hire agents, contact suppliers, and recursively optimize itself. Daniel concedes that this is theoretically possible but says the economy may hit other constraints first — especially financial ones.

The clearest insight here is that content has become easier to produce at exactly the same moment it has become harder to win with. That drives a strategic shift. Surface-level posting, generic information, and “AI slop” are increasingly commoditized. The creators and businesses with staying power are those that treat content as one layer of a larger relationship system. In this model, media is not the product. It is the top of the funnel for trust, identity, and human gathering.

The Bear Case: Why AI’s Data Center Boom Could Trigger a 2029 Crash

Daniel’s bleakest prediction has less to do with AI intelligence than with the economics underneath it. He argues that the current AI boom rests on an infrastructure buildout that “doesn’t financially make any sense whatsoever at the moment.” His forecast is dramatic: by 2029, exactly 100 years after the Great Depression, the global economy could face a major financial meltdown tied to the cost structure of AI data centers.

He grounds this argument in historical pattern recognition. Over the last 180 years, he says, whenever societies spent more than 3% of GDP on a major infrastructure buildout, the result was often a decade of financial pain. He points to railway tracks, electrification grids, and highways as examples. But he argues AI is worse because older infrastructure lasted decades: train tracks for 100 years, roads for 50-plus years, fiber optics for around 30 years. Data centers, by contrast, have a replacement cycle of just 3 to 4 years because the GPUs and compute stacks are rapidly superseded.

To explain the scale, Daniel says that “this year ahead, we’re going to spend 650 billion.” He compares that to giving every person in America an iPhone Pro with AirPods. The problem, in his telling, is monetization. “95% of people” are effectively using the product for free, and the small fraction who pay are often only willing to spend around $20 a month. The revenue base is tiny relative to the capital expenditure.

He says the debt behind these data centers is increasingly being packaged into financial products and sold to pension funds as private credit. The pitch, he explains, is that the debt is backed by large companies like Google, Microsoft, and Amazon, and offers returns “6% above inflation.” But the underlying math, he argues, is unsustainable when the assets depreciate so quickly and capex sits at “hundreds of percent of revenue.”

The host connects this to warnings from AI leaders themselves, citing Anthropic CEO Dario Amodei’s concerns that humanity is approaching “real danger” and may be only a few years away from AI being better than humans at essentially everything. Daniel does not reject the technological power. In fact, he repeatedly calls the technology “remarkable.” His fear is more financial than scientific: that society is overbuilding the computing layer so aggressively that the crash, if it comes, will ripple into pensions, public budgets, and household wealth.

This section widens the lens. AI is not just a product story or a labor story. It is also a capital cycle story. If Daniel is right, the threat is not only job displacement. It is the possibility that the very infrastructure meant to power the next economy becomes the trigger for the next crisis.

The Skills That Survive: Entrepreneurial Thinking as the New Core Competency

When the conversation turns from macro risk to practical advice, Daniel becomes emphatic: the single most important skill set for the next era is entrepreneurship. He does not define that narrowly as starting venture-backed companies. He means a repeatable way of thinking: identifying opportunities, testing ideas cheaply, validating demand, taking solutions to market, scaling what works, and then moving on to the next loop.

He lays out six steps. First is founder-opportunity fit: finding something the individual actually wants to work on. Second is validation: testing whether the idea can be built and sold. Third is product-market fit: ensuring the thing lives up to expectations and satisfies buyers. Fourth is go-to-market, meaning early sales. Fifth is scale. Sixth is exit. He calls this sequence a “value creation loop” and argues that entrepreneurs repeat it over and over again.

The host asks how someone knows whether an idea is genuinely good. Daniel answers with an example from his own work. He had two business ideas. One was his personal favorite, and one he liked less. Instead of trusting intuition, he ran two waiting-list campaigns. The favorite idea attracted around 750 people. The other attracted 4,500 people. That signal was so strong that within roughly a week or two, he used the data to raise £250,000 from angel investors. The lesson is clear: experienced founders do not fall in love with their own ideas; they run “fast cheap experiments.”

Daniel also stresses that entrepreneurial thinking matters even inside large companies. Corporations increasingly need teams that can prototype products quickly, validate initiatives, and spin out new lines of value. The host agrees and gives a hiring example from his own company. One employee, faced with the challenge of analyzing 100 documents, used Claude and AI agents to synthesize insights by the time dessert arrived. She had no formal expertise in that exact task, but she had agency, curiosity, and a willingness to use tools.

That distinction becomes central. Daniel says the difference between an employee mindset and an entrepreneur mindset is the reaction to problems. Employees often think, “I was never trained for this.” Entrepreneurs think, “This is interesting — how do I test and solve it?” The host adds that the best modern workers have “figure-out-ability”: they do not rely only on their credentials or formal role definition.

The section delivers one of the most actionable messages in the briefing. In a fast-moving AI economy, value shifts toward people who can navigate ambiguity, run experiments, and create options. The hard skill is not just prompting AI well. It is thinking like a founder — even if the person never calls themselves one.

Small SaaS, Baby Boomer Businesses, and the New Opportunity Landscape

Asked where ordinary people might actually make money in this environment, Daniel avoids overcomplicated speculation. His first principle is simple: everything is an opportunity if it can be done “better, faster, cheaper or with more emotional benefits.” But he does identify a few especially promising areas.

His favorite is small SaaS. Software-as-a-service used to be an elite business category, accessible only to founders who could raise millions, hire dozens of developers, and attract large customer bases. AI has changed that. Daniel says software companies can now become profitable with “500 customers or a thousand customers” if they serve a narrow niche well. The opportunity lies not just in building tools, but in translating domain expertise into a playbook, then into software, then into an ecosystem of media, community, and training.

The host illustrates this with his company’s decision to rebuild its ATS — applicant tracking system — in-house. He says they had been spending “tens of thousands of dollars every year” on third-party software. Instead, they started rebuilding it themselves and saw a better first version within a week. Daniel replies that not long ago such a project might have cost $500,000 and taken 18 months end-to-end. Now, he warns, barriers are so low that “there’s going to be a thousand new ATS systems a day.” That means raw software features are becoming commoditized. The moat comes from attaching education, training, events, and a trusted method around the tool.

A second opportunity he highlights is buying businesses from aging owners. He notes that 65% of all business value is now held by people over 65, which creates “a mathematical certainty” that two-thirds of the businesses in the economy must change hands in the next 10 to 20 years. Many baby boomers want to retire, and some are willing to help finance the transfer of their companies if younger operators present themselves well. He points to Cody Sanchez’s work on buying so-called “boring businesses” as a practical example of the trend.

The third opportunity is helping small business owners adopt AI. The host cites Mark Cuban’s view that one of the biggest gaps is between business owners who do not know how to use these tools and the practical consultants who can bridge that gap. Daniel agrees and frames it as an accessible way to join an entrepreneurial team or launch a side business.

The broader point is that AI expands the menu of viable small businesses, not just digital moonshots. Tiny software firms, acquired local companies, advisory services, and specialist niche products all become more realistic. The host, who tends to seek giant blue-ocean opportunities, is reminded that many people do not want to build empires. They want flexible, interesting, well-paid lives. Daniel argues it is now “easier than ever to build a small successful business,” even if building a huge one is getting harder.

Why Plumbers May Earn More Than Lawyers

The episode’s title comes from one of Daniel’s boldest claims: in the next few years, plumbers may regularly earn more than lawyers. He presents this not as a provocation, but as the result of supply-and-demand distortions meeting AI-driven white-collar disruption.

He starts with law. Daniel recently faced a legal issue that a law firm was prepared to handle for about £50,000, or roughly $60,000. Instead, he says his team used Claude for around $20 a month. The model gave them coaching on how to handle the process, multiple decision trees, the necessary documents, and even an Excel-style negotiation guide showing what to say and what not to say. The experience made him question the future of traditional legal billing. If a lawyer’s role is primarily to charge for time and regurgitate contracts, AI is already eroding that model.

The host supports the argument with market evidence, saying “280 billion” was wiped off the value of publicly traded legal-tech and related firms in a recent period. Daniel predicts lawyers will have to reshape themselves into hybrid advisors — part business coach, part lawyer, part prompt engineer — rather than relying on old time-for-money models.

He contrasts that with trades. For years, blue-collar work such as plumbing, electrical work, bricklaying, and concrete work has been culturally and economically devalued. He blames part of that on government-backed university lending, which pushed many young people toward degrees with weak job-market demand while starving the trades of talent. His caricature is pointed: young people who should have become electricians or bricklayers instead ended up with expensive degrees in “the mating habits of butterflies.” The result is a labor shortage in essential physical services.

That shortage, combined with the relative difficulty of automating in-home physical work, creates a powerful reversal. In Daniel’s telling, the “blue ocean is now being a tradesperson.” The pendulum swings. What society once labeled low status may become scarce and highly paid, while many screen-based professions become overabundant and compressed by AI.

The host adds data points on other vulnerable occupations. McKinsey estimates 30% of driving jobs could be automated by 2030. Customer service roles may face 50% to 80% reductions. Klarna’s CEO reportedly said his 7,000-person organization could shrink to 3,000 because of AI, while using remaining staff for more bespoke “white glove” service. Other exposed roles include retail cashiers, admin assistants, bookkeepers, payroll clerks, sales development reps, and warehouse workers, with robots already assisting in around 40% of some fulfillment tasks.

The takeaway is not that all manual work is safe. It is that labor value is being repriced. Work that is physical, local, variable, trust-based, and hard to standardize may rise. Work that is language-heavy, repetitive, and screen-bound may fall. That inversion underpins Daniel’s plumber-versus-lawyer prediction.

The Most Defensible Human Advantage: Lived Experience, Personal Playbooks, and Real Connection

As AI turns information into a commodity, both men search for what remains uniquely human. Their answer is not abstract creativity in general, but lived experience. Daniel says every person should identify “something that only you can say” — a story, insight, struggle, pattern, or playbook grounded in real life rather than generic knowledge.

He shares the example of financial planner Matt Pitcher, who had spent years meeting lottery winners through a partnership with the British lottery. Looking back, Pitcher realized he possessed a rare body of human experience: he had sat in the living rooms of people one week after they discovered they had won. He turned that into a TED talk that quickly drew around half a million views. The point is not that he became famous through performance. It is that he surfaced personal intellectual property no AI could authentically imitate.

The host connects this to a piece of advice he once gave Daniel: “relatable beats impressive.” People often assume their story needs to involve curing cancer or launching rockets. Both argue the opposite. What resonates is often not extraordinary status but recognizable emotion and human detail. The host says his best-performing post this year was the story of proposing to his fiancée — a deeply human moment AI has never actually lived.

Daniel formalizes the concept. Each person has personal intellectual capital: stories, triumphs, disasters, methods, emotional memories, and “personal playbooks.” Once surfaced, those can be turned into products and services with AI’s help. But the origin has to be human. AI has all the data, he says, “but it’s got no lived experience.”

This becomes especially important in content. The host says he now posts less generic material and more real stories, often including family photos and personal context. He is trying to create a stronger parasocial bond. He also explains why he is bullish on streamers: many are not delivering polished informational content but relationship-based media. A streamer sitting for seven hours reacting to Judge Judy with chat may create a stronger sense of companionship than a perfectly edited explainer. The audience is not just learning. They are relating.

Daniel extends this into business strategy. The winning formula is an ecosystem: personal playbooks plus a defined community create a personal brand. Once that brand exists, it can be productized into multiple offerings. He says the people making the most money today do not do one thing. They combine speaking, events, software, coaching, royalties, sponsorships, products, and investments. The value lies in the whole system, not the isolated asset.

The section crystallizes the heart of the episode. In a world where knowledge is abundant and replication is cheap, humanity itself becomes a premium category. Not vague humanity, but embodied humanity: experience, voice, memory, trust, and presence.

Capitalism, Social Mobility, and Why Daniel Thinks Markets Must Reorganize the AI Shock

A substantial segment shifts from entrepreneurship to political economy. The host expresses deep concern that the pace of AI disruption may exceed the pace at which new opportunities emerge. He points to rising UK unemployment, especially among young people, and worries that society is not talking seriously enough about the economic transition.

Daniel’s response is provocative. He argues that many people, including the host, instinctively fall into a top-down mindset: trying to design how everyone will be organized and looked after. He calls that the root of socialism and claims it fails because it overrides market signals. In his preferred model, governments should provide transparency, education, and price data, then allow individuals to self-organize around opportunity.

He repeatedly uses student loans as an example of market distortion. In a healthier market, he says, young people would hear that bricklayers can make £300 a day and naturally flow toward apprenticeships and trades. Banks or employers might even pay for degrees they actually need. Instead, governments created a system in which teenagers could borrow tens of thousands of pounds to pursue degrees regardless of labor demand. The result, he argues, is a generation burdened with debt and a country short of the practical skills it needs.

The host introduces migration data showing that the UK lost a net 3,200 millionaires in 2023, 9,500 in 2024, and a projected 16,500 in 2025, with warnings the trend may continue into 2026. Daniel says this matters because “1% of people pay for 30% of the bills” and the top 10% pay 60%. When high taxpayers leave, the burden shifts onto everyone else. He points to New York and argues that when wealthy residents depart, governments respond by trying to raise taxes further, creating a vicious cycle.

If he were prime minister, Daniel says the UK should reduce government spending from roughly 45% to 50% of the economy to below 35%, cut bureaucracy, and stop crowding out entrepreneurial incentives. He cites Birmingham City Council’s attempt to migrate from SAP to Oracle, with a projected cost of £19 million that ballooned to £220 million, as an emblem of public-sector inefficiency.

Not everyone will agree with his politics, but his broader thesis matters for the AI story. He believes the transition will be navigated better if people can clearly see where value is being created and move toward it quickly. That means more visible opportunity pathways, more entrepreneurship, and less distortion from systems that encourage credentials without demand.

The host remains uneasy about whether markets alone can absorb the shock, but the exchange adds an important layer: AI disruption is not only about technology. It is also about how societies allocate training, incentives, mobility, and risk. Daniel’s answer is unapologetically bottom-up: people must be trusted to adapt if they are given truthful signals.

Practical Advice for the AI Age — and a Closing Reminder About What Actually Matters

In the final stretch, the conversation narrows into direct guidance. Daniel offers three practical recommendations. First, everyone should build “a little bit of a personal brand.” He does not mean becoming a celebrity influencer. He means ensuring that somewhere between 2 and 20,000 people know who the person is, what they do, and what experiences they have had. Visibility becomes an asset because opportunities increasingly flow through networks.

Second, he urges people to “have a go at entrepreneurship.” That can mean a side hustle, joining a founder-led team, reading entrepreneurial books, or simply practicing opportunity recognition in small ways. He gives a humble example: seeing dirty cars on a street and testing whether people will pay to have them washed. The point is to retrain the mind away from standardized employee logic and toward value creation.

Third, he says people should actively play with AI tools. The host strongly agrees, noting that his company now looks for signs during hiring that candidates are experimenting with AI. The strongest employees are not always the most credentialed. They are often the most curious, adaptive, and willing to hand their hardest problems to the tools. Daniel’s advice is precise: “Don’t use AI as a search tool.” Instead, give it complex, messy, high-value problems along with context, documents, and objectives.

The two men also defend writing and reflection as more important than ever. The host argues that writing is a proxy for understanding, and that the best AI use cases often begin with a really good question. Daniel adds a process he values: “pause, reflect, document.” He recommends going into nature, away from devices, with a pen and paper to connect dots, examine the past five years, and surface the patterns and experiences that only the individual has lived.

Then the tone changes. In response to the customary closing question about fear, Daniel reflects on repeated boom-bust cycles in his own life: multimillion-dollar companies built and lost, contracts won and wiped out, crises survived. He says the fear was never really overcome so much as moved through. Yet the conversation ends not on business, but mortality. A loved one’s stroke recently reminded him that there are no guaranteed happy endings. What people leave behind is often not wealth or status, but small voice notes, text messages, and moments of warmth.

That closing reflection reframes the entire episode. For all the talk of AI, robotics, data centers, and opportunity, the deepest point is human. Technology may reorder work, wealth, and status, but it does not erase the basic truth that life is brief, relationships are precious, and meaning comes from being “on the rock together.” That is where the discussion lands: prepare aggressively for the future, but do not forget what future all of this is supposed to serve.

🦞 Watch the LobsterCast Summary

📺 Watch the original

Enjoyed the briefing? Watch the full 2h 2m video.

Watch on YouTube

🦞 Discovered, summarized, and narrated by a Lobster Agent

Voice: bm_george · Speed: 1.25x · 4724 words