
It’s generous of Kevin Roose, New York Times tech columnist and co-host of the Hard Fork podcast, to pity people who struggle without the benefit of claudeswarms.
I’ve been following AI adoption pretty closely and have never seen such a yawning gap between the inside and the outside.
SF people entrust their lives to multi-agent claudeswarms, consulting chatbots before every decision, pushing themselves to a degree that only science fiction writers have dared to imagine…
– Kevin Roose (@kevinroose) January 25, 2026
In a January 25 post on X, Roose said he had “never seen such a yawning gap” between Silicon Valley insiders like himself and outsiders. He said the people around him “put multi-agent Claudeswarms in charge of their lives, consult chatbots before every decision” and “push themselves to a degree that only science fiction writers have dared to imagine.”
Hard Fork involves a lot of laughter on Roose’s part – mostly directed at his more comically nimble co-host Casey Newton – so it doesn’t escape me that Roose is adding a bit of irony and exaggeration to his condescension in this post. He drops the mask immediately in the next post, however, in which he says he wants to “believe that anyone can learn this stuff,” but worries that “restrictive IT policies may have created a generation of knowledge workers who will never fully catch up.”
Recent episodes of Hard Fork have been particularly enthusiastic about vibecoding, the practice of using AI tools to do rapid software development. Once upon a time, GitHub Copilot and ChatGPT made software engineers’ jaws drop, because they could write code like a person, and you could run the code, and the code worked. Since around 2021, AI coding capabilities have continued to improve, leading some software engineers to prophesy various forms of Armageddon.
For example, Dario Amodei, CEO of Claude maker Anthropic, published one earlier today in the form of a 38-page blog post. “Humanity is on the verge of being entrusted with almost unimaginable power, and it is very difficult to know whether our social, political, and technological systems possess the maturity necessary to exercise it,” Amodei wrote.
Roose and Newton aren’t primarily software engineers, but Roose recently used Claude Code to create an app called Stash, an experience he described on Hard Fork. Stash is a read-later app, like the discontinued Pocket or the still-extant Instapaper. Stash, according to Roose, does “what I used Pocket for. Except now I own it and I can make changes to the app. And I did it, I’d say in about two hours.” Well done. Sincerely.
In another episode of Hard Fork, listeners shared their own stories about what they had vibecoded. Presumably these people didn’t code before, and now they do, which is admittedly pretty cool. One built a tool that lets wallpaper customers calculate how much wallpaper they need to buy. Another built a gamification system for his children’s household chores.
With all due respect to these people and the cool stuff they’re doing with vibecoding, these are just people tinkering for fun. There’s nothing wrong with that, but that’s all it is.
It’s true that most people don’t have the knowledge to perform software engineering tasks, and it’s intriguing to try vibecoding if, like me, you’ve never coded anything. I’ve had LLMs build rudimentary side-scrolling games, render ray-traced 3D environments in JavaScript, and attempt other small experiments that failed. I learned a little about LLMs, but it didn’t change my life.
Then again, like many people, I’m bored of optimization and productivity hacks, and it’s not in my nature to have software ideas that are purely software. In the rare cases where I feel a creative spark involving coding, the coding tends to be only a small part of the idea, and the rest tends to involve far more engagement with the physical world than an LLM can manage. For example, I live in one of those neighborhoods where people go crazy with their Halloween decorations, and I’ve dreamed of setting up some festive animatronics on the lawn, but vibecoding a control system wouldn’t get me far in the process of setting up my monsters. Most of the work would involve walking around my yard with a power drill, wires, and stakes, fussing with my werewolf mannequin, and Claude Code is not about to make that thing stay upright on my lawn.
Roose and other AI fanatics these days talk as if It. Is. Finally. Here. They give the impression that AI is really about to take off and that the rest of us need to catch up.
The next 6 months are going to be really weird. https://t.co/TAtAomZQzb
– Alex Graveley (@alexgraveley) January 25, 2026
When Roose talks about these ignorant “knowledge workers” outside of San Francisco, if he means specifically software engineers struggling to accomplish tasks that could be handled by Claudeswarms (Claudeswarms, in case you’re wondering, appear to be little hives of virtual coders that perform complex coding tasks), I suspect his pity is misplaced. If AI-inclined coders aren’t allowed to use the latest AI tools at work, but they’re also software engineers in their free time, it stands to reason that they can play with AI toys at home if they want.
And there’s little doubt that, half-joking or not, Roose’s account of Bay Area residents handing their lives over to claudeswarms and constantly asking chatbots for life advice is real. This is to be expected. They have plenty of other problems too, like a horrible new habit of injecting peptide solutions purchased online.
It’s not at all surprising that San Francisco residents think AI is about to become the closest thing to a god, because it feels like it’s about to become what many people in San Francisco think a god is: a software engineer. An understandable mistake.
But the rest of us pathetic knowledge workers who aren’t lucky enough to be in San Francisco’s AI haven don’t necessarily believe that software engineers are that powerful, and some of us are counting down the months until next Halloween, when AI won’t be much help in making our latex clowns look scary. It probably never will be, and that’s fine.




