A new test for AI labs: are you even trying to make money?


We are in a unique moment for AI companies building their own foundation models.

First, there is a whole generation of industry veterans who made their names at big tech companies and are now going solo. You also have legendary researchers with immense experience but ambiguous business aspirations. There’s a good chance that at least some of these new labs will become giants the size of OpenAI, but they also have the opportunity to conduct interesting research without worrying too much about commercialization.

The end result? It becomes difficult to tell who is actually trying to make money.

To keep things simple, I offer a sort of sliding scale for any company building a foundation model. It's a five-level scale on which it doesn't matter whether you actually make money – only whether you try to. The idea is to measure ambition, not success.

Think about it in these terms:

  • Level 5: We already make millions of dollars every day, thank you very much.
  • Level 4: We have a detailed, multi-step plan to become the richest human beings on the planet.
  • Level 3: We have many promising product ideas, which will be revealed over time.
  • Level 2: We have the concept of a plan.
  • Level 1: True wealth is when we love ourselves.

The big names are all at Level 5: OpenAI, Anthropic, Gemini, etc. The scale becomes more interesting with the newest generation of labs, which have big dreams but ambitions that can be harder to read.

Importantly, the people behind these labs can usually choose whichever level they want. There is so much money in AI right now that no one is going to grill them over a business plan. Even if the lab is essentially a research project, investors will be happy to fund it. And if you're not particularly motivated to become a billionaire, you may well live a happier life at Level 2 than at Level 5.


Problems arise because it's not always clear where an AI lab sits on the scale — and much of the AI industry's current drama stems from this confusion. Much of the anxiety over OpenAI's conversion from a nonprofit came from the fact that the lab spent years at Level 1, then jumped to Level 5 almost overnight. On the other hand, one could argue that Meta's early AI research was firmly at Level 2, when what the company really wanted was Level 4.

With that in mind, here's a quick look at four of the most prominent contemporary AI labs and where they fall on the scale.

Humans&

Humans& was the big news in AI this week, and part of the inspiration for coming up with this scale in the first place. The founders make a compelling case for the next generation of AI models, with scaling laws giving way to a focus on communication and coordination tools.

But despite all the glowing press, Humans& has been coy about how this would translate into real, monetizable products. The team does seem to want to build products; they just won't commit to anything specific. All they have said is that they will build some kind of AI work tool, replacing products like Slack, Jira, and Google Docs, but also redefining how those tools work at a fundamental level. Workplace software for a post-software workplace!

It's my job to know what that means, and I'm still pretty confused about that last part. But it's just specific enough that I think we can put them at Level 3.

Thinking Machines Lab

This one is genuinely hard to assess! Normally, if the former OpenAI CTO behind ChatGPT raises a $2 billion seed round, you have to assume there is a pretty specific roadmap. Mira Murati doesn't strike me as someone who goes in without a plan, so in 2026 I would have put TML at Level 4.

But then the last two weeks happened. The departure of CTO and co-founder Barret Zoph made headlines, in part because of the unusual circumstances involved. But at least five other employees left alongside Zoph, many citing concerns about the company's direction. Barely a year in, almost half of TML's founding leadership no longer works there. One way to read the events is that they thought they had a solid plan to become a world-class AI lab, only to discover the plan wasn't as solid as they thought. Or in terms of the scale: they wanted a Level 4 lab but realized they were running one at Level 2 or 3.

There isn't enough evidence yet to justify a downgrade, but we're getting closer.

World Labs

Fei-Fei Li is one of the most respected names in AI research, best known for launching the ImageNet challenge that pioneered contemporary deep learning techniques. She currently holds a Sequoia-endowed chair at Stanford, where she co-directs two different AI labs. I won't bore you by running through all her various honors and academic positions, but suffice it to say that if she wanted to, she could spend the rest of her life collecting awards and being told how awesome she is. Her book is pretty good too!

So in 2024, when Li announced she had raised $230 million for a spatial AI company called World Labs, you might have assumed we were operating at Level 2 or lower.

But that was over a year ago, which is a long time in the world of AI. Since then, World Labs has shipped both a full world-generation model and a commercial product built on top of it. Over the same period, we've seen real signs of demand for world models from the video game and special effects industries – and none of the big labs has built anything that can compete. The result looks an awful lot like a Level 4 business, perhaps moving up to Level 5 soon.

Safe Superintelligence (SSI)

Founded by former OpenAI chief scientist Ilya Sutskever, Safe Superintelligence (or SSI) looks like a classic example of a Level 1 startup. Sutskever has gone to great lengths to keep SSI free from commercial pressures, to the point of refusing an acquisition attempt from Meta. There are no product cycles and, aside from the superintelligent foundation model still in development, there appears to be no product at all. With that pitch, he raised $3 billion! Sutskever has always been more interested in the science of AI than the business, and all indications are that this is a genuinely scientific project.

That said, the world of AI is evolving rapidly – and it would be foolish to rule SSI out of the commercial realm entirely. In his recent appearance on the Dwarkesh Podcast, Sutskever gave two reasons why SSI might pivot: either “if the timelines turn out to be long, which they might,” or because “the best and most powerful AI impacting the world has a lot of value.” In other words, if the research goes very well or very poorly, we could see SSI quickly climb a few levels.
