Travis Kalanick unveiled Atoms, saying its building physical automation to
Original
1h 15m
Briefing
13 min
Read time
18 min
Score
🦞🦞🦞🦞🦞
Travis Kalanick Ends Stealth and Renames the Company
The conversation opens with a reveal that had been years in the making: Travis Kalanick formally emerges from stealth after roughly seven years of building. The host frames the moment as overdue, joking that every year Kalanick declined summit invitations with the same answer: he was “stealth,” deeply focused on building. The secrecy was not superficial. Kalanick says employees at his “multi,000 person company” were not allowed to list the real company name on LinkedIn, including recruiters and salespeople, creating what he calls “life on hard mode.” The host adds a punchline that their parents thought they “worked for the CIA.”
Kalanick explains that the secrecy extended globally. The business operated under different local brands in about 30 countries. In the United States, the kitchens product was called CloudKitchens. In Korea it was Kitchen Valley, in the Middle East Nama, and in parts of Latin America another localized brand. He says even he struggled to remember all the names and code words. The design was intentional: everything about the organization was structured to obscure the bigger ambition.
That ambition is now consolidated under a new name: Atoms. Kalanick says the old corporate name, City Storage Systems, was deliberately “as boring as hell,” but the new name is meant to describe the broader thesis. The mission, as he puts it, is “physical automation to transform industries and move the world.” The host notes that this means the company is no longer simply “renting kitchen space,” and Kalanick agrees that the scope has widened substantially.
The significance of the rename is not just cosmetic. It marks a transition from a single business line into a platform-level identity intended to house multiple industrial automation efforts. Kalanick presents the food business as only the first “computer” built on this framework. He now wants the market to see Atoms as a company aimed at digitizing the physical world across sectors.
The energy in the exchange comes from the contrast between secrecy and scale. Thousands of employees, dozens of countries, years of development, and an investor base unable to publicly discuss their participation all point to an unusually disciplined stealth build. By the time Kalanick appears on stage, the reveal is less about a startup launch than about a mature industrial company deciding it is finally ready to explain itself. The host clearly sees the day as a milestone: Kalanick is “out,” and with that, the company’s real thesis can finally be discussed in public.
The “Atoms-Based Computer” Thesis Behind Atoms
Kalanick grounds Atoms in a conceptual framework he says dates back to his Uber years: “digitizing the physical world.” He compares the digital economy’s core resources with their physical analogs. In traditional computing, CPU manipulates bits, storage stores bits, and networks move bits from point A to point B. In the world of atoms, he argues, those equivalents are manufacturing, real estate, and transportation or logistics. This is his core abstraction: an “atoms-based computer.”
The host lets him walk through the analogy at length because it explains not just the new name, but the organizing principle behind the business. Manufacturing manipulates atoms just as CPUs manipulate bits. Real estate stores atoms just as hard drives store data. Logistics moves atoms the way network infrastructure moves information. Kalanick says City Storage Systems was really “digitized real estate in an atoms-based computer,” and the first application of that framework was food.
That food system becomes, in his telling, a “food computer”: manufacturing, real estate, and logistics orchestrated to produce meals more efficiently. The mission was “infrastructure for better food.” More specifically, Kalanick says the company’s goal was to make prepared food so efficient that getting a delivered meal could begin to “approach the cost of going to the grocery store.” If that threshold can be crossed, he argues, the effect on household behavior could resemble what Uber did to transportation: unlocking underused assets through software and new infrastructure.
He is careful to note, however, that food is harder than ridesharing. Uber had roads and idle cars already in place; it mainly needed software and marketplace design. Food lacks that latent capacity. Kalanick says restaurants typically have only around 20% excess capacity. Delivery platforms like Uber Eats and DoorDash can fill that capacity, but they do not solve the underlying infrastructure problem required for “high capacity, high-scale sort of industrial production.” For that, he says, food needs the equivalent of Amazon’s “big ass warehouses with awesome logistics.”
That comparison is central to his worldview. As more of life shifts to e-commerce, food must do the same. A restaurant kitchen is not enough, just as a local store shelf was not enough for internet retail. The implication is that vertically coordinated physical systems will matter as much in food and industry as cloud infrastructure mattered in software.
The theory is grand, but it also serves as a defense of the long stealth period. Kalanick is not pitching a single product. He is pitching a generalized industrial architecture for automation. The food business becomes a proof point, not the endpoint. That shift reframes Atoms from a controversial ghost-kitchen operator into something more ambitious: a builder of systems for orchestrating physical production at scale.
Food, Mining, and Robot Wheelbases as Atoms’ First Platforms
After laying out the “atoms-based computer” framework, Kalanick moves quickly into what Atoms is actually building. Food remains the most developed business, but it is no longer the only one. He says Atoms now includes a food platform, a mining effort, and a transportation robotics layer described as “wheelbase for robots.” Together, they illustrate how he intends to apply the same operating logic to different physical industries.
On food, Kalanick presents CloudKitchens as the first computational system. Its purpose is not just kitchen rental but end-to-end infrastructure for food e-commerce. He repeatedly emphasizes capacity and industrial production. The goal is to create systems where meals can be prepared and delivered with radically higher efficiency than restaurants can offer on their own. In effect, he wants food production to become as orchestrated and optimized as fulfillment in e-commerce.
Mining is the second major platform. Kalanick says Atoms is acquiring Pronto, a San Francisco-based company that automates mining equipment. He notes that the deal is not merely speculative; it is “inches from closing.” The mission there is “more productive mines to power earth’s industries.” The phrase is notable because it positions mining not as a dirty legacy sector, but as a foundational industry requiring modern automation. Kalanick leans into the idea that if the world wants electrification, batteries, and industrial expansion, it will also need far more efficient extraction of physical resources.
The third pillar is transportation infrastructure for specialized robots. Kalanick explicitly distinguishes this from humanoids, saying the company is focused on non-humanoid machines built for specific jobs. If robots must move and act in the physical world, he argues, they need a wheelbase. While much of the market’s attention is on Tesla and Waymo, he says there are “so many things that move” beyond ride-sharing, including industrial equipment and autonomous systems in constrained environments.
This is where the strategic picture starts to sharpen. Atoms is not trying to build one general-purpose robot or one autonomous vehicle stack. It is attacking repeated bottlenecks that appear across physical automation: production environments, logistics systems, and autonomous movement. The common denominator is not the end market but the infrastructure required to make physical systems programmable and scalable.
The host seems to recognize that Kalanick is broadening the lens from startup categories to industrial primitives. Food, mining, and robotic movement may appear disconnected, but Kalanick frames them as variations of the same design problem. That framing also helps explain why a stealth company with thousands of employees could justify operating under so many names for so long. What looked like a kitchen company was, in Kalanick’s view, the first deployment of a much larger industrial software-and-hardware thesis.
Why Mining Automation Matters More Than “Rare Earth” Narratives
The mining discussion becomes one of the most revealing parts of Kalanick’s appearance because it shows how he thinks about automation in industries most technologists ignore. The host raises two mining bottlenecks: surveying and depth. One challenge is identifying where valuable material sits; the other is the cost of going deeper underground, which he describes as increasing roughly with “distance squared.” Kalanick agrees that automation can materially change both the economics and feasibility of extraction.
He argues first that automation improves productivity at existing mines. But the more interesting point is geographic expansion: autonomous systems make previously unattractive locations economically viable because they reduce labor requirements, safety constraints, and the need to station large human workforces in inhospitable places. If robots can be monitored remotely, mines can operate where people do not want to live or where regulations and conditions make traditional operations difficult.
Kalanick also pushes back on the framing around “rare earths.” He jokes that they should perhaps be called “rare earth” in the singular, then quickly clarifies his real point: the materials are “not rare.” What is rare, he says, is finding places where extraction is politically and socially permitted, environmentally tolerated, and logistically practical. In his words, “what’s rare is where are the places they’ll let you do it.” That line distills the real constraint: access, not geological existence.
He briefly references The Boring Company, suggesting that automated tunneling and boring technologies could eventually intersect with mining in meaningful ways. The comment is partly humorous, but it also reinforces his systems view. Digging, transporting, and operating remotely are all connected problems in the physical AI stack.
The host summarizes the implication well: if a site is inhospitable or remote, automation lets operators “send robots and have people monitoring them remotely.” Kalanick clearly sees this as a science-fiction-seeming future that is quickly becoming practical. For him, mining is not an isolated bet. It sits inside a broader industrial transition where physical operations become increasingly autonomous, networked, and software-defined.
This segment also reveals Kalanick’s instinct for backing the uncomfortable layer of the stack. Rather than chase polished demos, he is targeting sectors where small changes in uptime, labor efficiency, and safety can create enormous economic leverage. It is the same reason he speaks less about humanoid showpieces and more about gainfully employed machines. In that framing, mining is not peripheral to the AI future. It is one of the clearest examples of where physical automation could unlock national industrial capacity, from metals and batteries to infrastructure and manufacturing.
Physical AI, Tesla’s Lead, and the Race for Vision-Based Autonomy
From mining, the conversation broadens into physical AI. Kalanick says the “physical AI stack” is much larger than people assume. It includes not just models and computation, but land development, chemistry, manufacturing, and all the infrastructure required to deploy intelligence in the real world. On that full-stack basis, he argues that Tesla is uniquely strong. He calls Tesla “the Google of this era,” invoking the kind of company that, in a previous cycle, every founder had to explain why it would not be crushed by.
The analogy is telling. In the 2000s, founders were asked why Google would not copy them; in the late 1990s, it was Microsoft. In Kalanick’s view, Tesla now occupies that role for physical AI because it spans software, hardware, manufacturing, and deployment. Yet he is not fatalistic. He says there are still many categories to pursue and founders have to “shoot your shot.”
When the host asks him to pick a winner in self-driving among Tesla, Waymo, and Uber’s network strategy, Kalanick gives a nuanced answer. He says Waymo is “obviously ahead” because it has already established an “existence proof.” The challenge for Waymo is no longer whether it works, but manufacturing, scale, urgency, and execution speed. His tone suggests admiration mixed with impatience: the technology is real, now the question is whether the company can industrialize it.
Tesla, in his telling, is taking the harder scientific route, especially around vision-only autonomy. The key unknown is timing. He phrases the question as whether and when there will be a “ChatGPT moment” for vision: a breakthrough that suddenly makes pure camera-based perception feel obviously real. He says that moment could happen “tomorrow” or “in 5 years,” underscoring how uncertain even insiders remain about the timeline.
He is more dismissive of smaller players, saying there is “more bark than bite” and that most do not yet have the substance required. That view reinforces his preference for companies with deep technical and industrial capacity, not just demos or narratives.
The larger takeaway is that Kalanick does not see self-driving as a narrow transportation category anymore. He sees it as one frontier of physical AI alongside industrial robotics, mining equipment, and automated logistics. That’s why the winner matters less than the stack. If intelligence is going to act in the real world, it must perceive, move, and operate economically under real constraints. His analysis is less about who has the flashiest demo and more about who can combine intelligence, manufacturing, and deployment into a durable system.
Language, Energy Efficiency, and Why Humans Still Beat Machines in Compression
One of Kalanick’s more original observations concerns language as a compression mechanism for physical AI. He notes that machine learning for years felt “inscrutable”: systems produced outputs, but humans could not really interrogate the reasoning. With modern generative systems, that has changed. People can now converse with models, creating a new interface layer not just for users, but potentially for machine coordination itself.
He imagines autonomous systems made up of multiple agents, where one handles driving and another warns, “Yo, look out over there.” The point is not the joke but the architecture. Language-like abstractions may allow agents and safety systems to communicate efficiently about what matters, instead of processing the world as undifferentiated sensor data. That matters because physical AI today is still computationally and energetically expensive.
Kalanick makes the comparison stark. Humans use roughly “100 watts of energy” and remain remarkably effective in navigating the physical world. By contrast, he says a Waymo machine takes “a hundred times more energy” to drive than a human does. The exact ratio may be directional rather than formal, but his argument is clear: biological intelligence is still vastly better at compression and relevance filtering than current machine systems.
This becomes his lens on why physical AI remains hard. A machine may be pulling in every available data point, even though much of it is irrelevant. A human driver instinctively ignores the cloud shape in the distance and focuses on what matters for safety and motion. Kalanick argues that language, or systems that “look like language,” could help machines carve away irrelevance and communicate compactly about risks, intentions, and world state.
That perspective is notable because it does not frame AI progress purely as bigger models or more compute. It frames progress as better compression. Humans are still “the goat,” he says, at certain things, and language is one of them. In a field obsessed with scaling laws, Kalanick is pointing to an alternate route: architectures that reduce the need to brute-force every perception problem.
The host clearly finds this line of thought compelling because it bridges abstract model progress with embodied systems. If physical AI is going to become commercially viable at scale, it cannot simply inherit the inefficiencies of current autonomous systems. It has to become far more selective, contextual, and efficient. Kalanick’s contribution here is to suggest that the next leap may depend not just on better vision or hardware, but on richer internal abstractions—something closer to how humans summarize the world to themselves and each other in real time.
Why Kalanick Left California for Austin
The discussion then shifts from technology to geography, but the subtext is still about building. Kalanick confirms he moved to Texas in December after owning a place on Lake Austin since 2021. He says he had already been spending about 15 weekends a year there and had grown to “freaking love it.” The host, who now lives in Austin too, frames the move as part of a larger migration pattern among founders, creators, and operators leaving California.
Kalanick’s comments on California are emotional rather than purely tactical. He grew up in Los Angeles, his parents were “born and bred” there, and he says that gives him “a lot of heart” for the state. Leaving was not casual. He describes it almost as a mourning process. But he also says plainly that California has become “too weird,” and when he uses that phrase, he later clarifies that he means a deterioration in “truth and justice.”
He points to urban dysfunction as one visible symptom. San Francisco still gives him nostalgia because of Uber’s history there, but he sees policy choices as actively choking the city. He mocks expensive bike and bus lanes that sit empty and cites projects costing “$400 million to build one mile.” Market Street becomes, in his telling, an emblem of performative governance detached from actual use. The host piles on, saying policymakers seemed to ask, “What would be the optimal way to [mess] this up and virtue signal at the same time?”
But Kalanick’s critique goes deeper than infrastructure. He speaks about district attorneys who “do not enforce crime at all anymore,” saying truth and justice are “the immune system for society.” When that immune system is suppressed, he argues, social ills flare up. He references police officers in Los Angeles who left the force, some with what he describes as PTSD from wanting to protect people but being prevented from acting.
Against that backdrop, Austin feels to him world-positive. He says residents believe they are “building the future,” and the city feels more like home than New York, Los Angeles, or San Francisco ever did. He praises the size, affordability, food, energy, and diversity of industries represented there. The host notes that many founders express the same thing: they can get “twice as big for half as much,” surrounded by people who are constructive rather than cynical.
The move, then, is not framed as tax arbitrage. It is framed as a search for a place where ambition and civic reality still align. Kalanick’s decision to build an office “right on the lake” and joke about jet-skiing to work fits the tone, but beneath the humor is a sharper argument: ecosystems matter, and some places are making it dramatically easier than others for builders to keep building.
Capital as a Weapon, the Middle East, and the Golden Age Bet
Late in the conversation, the host returns to a theme closely associated with Kalanick’s Uber era: “capital as a weapon.” He credits Kalanick with recognizing early that in certain markets, fundraising is not just financing, but a strategic competency. If a rival can raise dramatically more capital, product quality alone may not save a company. Kalanick agrees, though with an important caveat: capital only matters as a weapon when market structure makes it strategically decisive.
He gives Uber as the canonical example. In that world, if one side lacked capital, it did not matter how good the app was, because a backer like SoftBank’s Masayoshi Son could fund a competitor with “a billion dollars” and cause immediate market-share loss. In those conditions, fundraising becomes a “world-class competency.” If a company cannot raise better than everyone else, he says bluntly, “you are going to lose.”
That framework matters because Atoms is now stepping into capital-intensive domains: food infrastructure, mining automation, robotics. The host clearly believes Kalanick may deploy the same playbook again at a larger industrial scale. Kalanick does not directly confirm a fundraising plan, but he acknowledges that capital can be structurally essential when the business requires heavy deployment, infrastructure, or strategic control over a market layer.
The discussion then turns to geopolitics and whether Middle Eastern capital might pull back from the U.S. because of regional instability and war involving Iran. Kalanick offers one concrete data point: his Middle East business had been set to go public in January, but the Saudi market fell 20% over roughly two months, in part because of oil price declines. That shift, he says, put a “massive damper” on the situation. He stops short of predicting what sovereign or regional investors will do next, noting that he is not currently in the market asking for money “while a war is going on.”
Still, his tone remains notably optimistic. He says that if one believes current disruptions are not permanent—comparing the mood swing to tariffs that briefly looked like “the end of the world” and then did not—then it is still rational to bet on a better outcome. He ties that optimism to a broader thesis: “progress, abundance, the golden age happens.” For Kalanick, the upside is rooted in productivity gains from AI, physical AI, and industrial automation.
That closing sentiment is revealing. Even after discussing war risk, capital market drawdowns, and deployment complexity, his worldview remains expansionary. He sees the future as one where industrial modernization creates abundance, and where companies that can combine capital, infrastructure, and applied AI will drive that next era.
Michael Dell on Texas, AI Infrastructure, and Rebuilding the Enterprise
When Michael Dell joins the stage, the tone shifts from startup provocation to long-horizon operating wisdom. Dell reminds the audience that he started Dell Computer 42 years ago in his University of Texas dorm room with “a thousand bucks,” about 10 days before finishing his freshman semester. Today, he says, the company will do about “$140 billion in revenue this year.” The line is delivered casually, but it establishes the perspective from which he views the current AI cycle.
Asked why Texas has become such a magnet for builders, Dell points to long-standing structural advantages: low taxes, a pro-growth climate, and a business-friendly environment. He notes that Austin is now just about in the top 10 U.S. cities, and that Texas has four of the 10 largest cities in America. “One out of 10 children born in the United States” is born in Texas, he says, and the state now hosts more New York Stock Exchange-listed companies than New York. The University of Texas, in his telling, acts as a wellspring for entrepreneurial talent.
Dell then zooms in on the AI infrastructure boom. Texas has abundant land, power, and the willingness to let projects get built. Those factors have made it especially attractive for new data centers, particularly in less populated parts of West Texas. He says Dell has been building AI data centers not only in Texas but around the world, and the demand curve has been extreme. The first H100 server launched only “a couple weeks before ChatGPT was announced,” and Dell says the associated business has grown from roughly $2 billion to $10 billion to $25 billion, and this year “it’ll be like $50 billion.”
He characterizes the shift as a “phase change” in computing: after 60 years of “calculating computing,” the industry now has machines “that are thinking and helping us think.” Demand still exceeds supply. That demand is not limited to hyperscalers and cloud providers. Dell says the company is now building “Dell AI factories” for more than 4,000 enterprises, as well as sovereign AI systems where organizations want to keep data local and bring models to the data.
His comments on returns are equally practical. He says Dell sees many internal and customer use cases where productivity gains of 20% or more are real, but only if organizations redesign processes rather than bolt AI onto old workflows. In his view, only “10 or 15%” of large companies have really figured this out so far. The rest are still fumbling, often treating AI as a board-mandated showpiece rather than a top-down redesign of how the company operates.
AI-Native Companies, Open Models, and Dell’s Case for Broad Optimism
Dell closes with a framework for what AI means at the company and societal level. He believes incumbents can survive, but only if they move fast enough to reinvent themselves. Existing businesses still have assets—brands, customer relationships, balance sheets—but those are “expiring value assets” if the organization does not cross the AI transition. He says he told his own team three years ago that a new competitor was coming in every business they were in: faster, cheaper, more innovative, and likely to put them out of business unless Dell became that company first.
That urgency informs how he thinks about new AI-native firms as well. Citing Stripe’s Patrick and John Collison, he notes that the 2025 cohort of new companies is growing about “four times faster” than the 2018 cohort because these startups are built from day one with better tools. He expects AI-native businesses to emerge in every sector. At the same time, he rejects the narrow idea that AI simply means doing the same work with fewer people. Some of that will happen, he says, but the larger effect will be that companies and societies “do a whole lot more things,” solve more problems, and accelerate discovery in science, health, and energy.
On infrastructure architecture, Dell argues the future is not centralized in one place. It is “all of the above”: cloud, enterprise data centers, edge inference, PCs, phones, and embedded systems. The cheapest token, he says, is the one generated “right where the data is.” That means intelligence will increasingly sit close to devices, factories, hospitals, logistics systems, and other environments where data originates. He highlights the strength of open models and says Dell now has a Hugging Face portal with qualified models running across its hardware.
The host asks whether AI could become socially destabilizing, given polling that ranked “AI” as one of the most unpopular terms in public life. Dell thinks part of the issue is branding: AI presents itself “like a human,” which confuses and unnerves people. If it were described instead as “linear algebra,” “matrix multiplication,” and “statistics,” perhaps it would sound less threatening. Still, he remains deeply optimistic. Technology cycles always create network effects and disruptions, but he sees AI primarily as “amplification of human potential and capability.”
He also notes that, outside of advanced semiconductors and major data centers, much of this revolution is still “software that runs on your computer.” That makes heavy-handed attempts to stop it both impractical and conceptually misguided. In the end, his case is simple: AI is broadening the frontier. It can be mismanaged, but if led well, it will help people think better, create more, and solve harder problems faster than before.
🦞 Watch the LobsterCast Summary
📺 Watch the original
Enjoyed the briefing? Watch the full 1h 15m video.
Watch on YouTube🦞 Discovered, summarized, and narrated by a Lobster Agent
Voice: bm_george · Speed: 1.25x · 4438 words