Every week, sometimes every day, a new state-of-the-art AI model is born into the world. As we move into 2025, the pace at which new models are being released is dizzying, if not exhausting. The curve of the roller coaster keeps steepening, and fatigue and wonder have become constant companions. Each release shouts about why this particular model is better than all the others, with endless collections of benchmarks and bar charts filling our feeds as we scramble to keep up.

Charlie Giattino, Edouard Mathieu, Veronika Samborska and Max Roser (2023) – “Artificial Intelligence”. Published online at OurWorldinData.org.
Eighteen months ago, most developers and businesses were using a single AI model. Today, the opposite is true. It is rare to find a large-scale business that limits itself to the capabilities of a single model. Companies are wary of vendor lock-in, especially for a technology that has quickly become a core part of both long-term corporate strategy and short-term revenue. It is increasingly risky for teams to place all their bets on a single large language model (LLM).
But despite this fragmentation, many model providers still champion the idea that AI will be a winner-take-all market. They claim that the expertise and compute required to train best-in-class models are scarce, defensible and self-reinforcing. From their point of view, the bubble of companies building AI models will eventually collapse, leaving one giant artificial general intelligence (AGI) model that will be used for everything. To exclusively own such a model would be to become the most powerful company in the world. The size of this prize has kicked off an arms race for ever more GPUs, with a new zero added to the number of training parameters every few months.

BBC, Hitchhiker's Guide to the Galaxy, TV series (1981). Still image restored for reporting purposes.
We believe this view is mistaken. There will be no single model that rules the universe, not next year and not in the next decade. Instead, the future of AI will be multi-model.
Language models are fuzzy commodities
The Oxford Dictionary of Economics defines commodities as “normal goods that are bought and sold at scale and have interchangeable units.” Language models are commodities in two important senses:
- The models themselves are becoming more interchangeable on a wider set of functions;
- The research expertise needed to produce these models is becoming more distributed and accessible, with frontier labs barely outpacing one another and independent researchers in the open-source community nipping at their heels.

But while language models are commoditizing, they are doing so unevenly. There is a large core of capabilities for which any model, from GPT-4 all the way down to Mistral Small, is perfectly well suited. At the same time, as we move toward the margins and edge cases, we see greater and greater differentiation, with some model providers specializing in code generation, reasoning, retrieval-augmented generation (RAG) or math. This leads to endless hand-wringing, Reddit searching, evaluating and fine-tuning to find the right model for each job.

And so, although language models are commodities, they are more precisely described as fuzzy commodities. For many use cases, AI models will be nearly interchangeable, with metrics like price and latency determining which model to use. But at the edge of capabilities, the opposite happens: models continue to specialize, becoming more differentiated. For example, DeepSeek-V2.5 is stronger than GPT-4o at coding in C#, despite being a fraction of the size and 50 times cheaper.
These twin dynamics – commoditization and specialization – undercut the thesis that a single model will be best suited to handle every use case. Instead, they point toward an increasingly fragmented landscape for AI.
A multi-model orchestra
There is an apt analogy for the market dynamics of language models: The human brain. The structure of our brain has remained unchanged for 100,000 years, and brains are far more similar than they are dissimilar. For most of our time on Earth, most people learned the same things and had similar abilities.
But then something changed. We developed the ability to communicate in language – first in speech, then in writing. Communication protocols enable networks, and as people began to network with one another, we also began to specialize to greater and greater degrees. We were freed from the burden of having to be generalists across every domain, of being self-sufficient islands. Paradoxically, the collective wealth of specialization has also meant that the average person today is a far stronger generalist than any of our ancestors.
At a sufficiently wide aperture, the universe always tends toward specialization. This is true all the way from molecular chemistry, to biology, to human society. Given sufficient variety, distributed systems will always be more computationally efficient than monoliths. We believe the same will be true of AI. The more we can lean on the strengths of multiple models instead of relying on just one, the more those models can specialize, expanding the frontier of capabilities.

An increasingly important pattern for leveraging the strengths of diverse models is routing: dynamically sending each query to the best-suited model, while falling back to cheaper, faster models wherever doing so does not degrade quality. Routing lets us capture the benefits of specialization – higher accuracy with lower cost and latency – without giving up the robustness of generalization.
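The routing pattern described above can be sketched in a few lines. This is a minimal illustration, not a production router: the model names, per-token prices and quality scores below are made-up assumptions, standing in for whatever benchmark or learned estimates a real system would use.

```python
# Quality-threshold routing sketch: send each query to the cheapest model
# whose estimated quality on that task category clears a bar.
# All model names, prices and scores here are illustrative assumptions.

MODELS = {
    "small-model":  {"cost_per_1k_tokens": 0.0002, "quality": {"chat": 0.85, "code": 0.60, "math": 0.55}},
    "medium-model": {"cost_per_1k_tokens": 0.0020, "quality": {"chat": 0.90, "code": 0.80, "math": 0.75}},
    "large-model":  {"cost_per_1k_tokens": 0.0150, "quality": {"chat": 0.95, "code": 0.92, "math": 0.90}},
}

def route(task: str, min_quality: float = 0.75) -> str:
    """Return the cheapest model whose quality estimate for `task` meets the bar."""
    candidates = [
        (spec["cost_per_1k_tokens"], name)
        for name, spec in MODELS.items()
        if spec["quality"].get(task, 0.0) >= min_quality
    ]
    if not candidates:
        # Nothing clears the bar: fall back to the overall strongest model.
        return max(MODELS, key=lambda n: max(MODELS[n]["quality"].values()))
    return min(candidates)[1]  # cheapest qualifying model

print(route("chat"))                    # small-model: a cheap model already clears the bar
print(route("code"))                    # medium-model: code needs a stronger model
print(route("math", min_quality=0.85))  # large-model: only the frontier model qualifies
```

A real router would estimate quality per query (via a learned classifier or evaluation data) rather than per task category, but the cost/quality trade-off it navigates is the same.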
A simple demonstration of the power of routing is the fact that most of the world's leading models are themselves routers: they are built using mixture-of-experts architectures that route each next-token generation to a few dozen specialized sub-models. If it is true that LLMs are proliferating as fuzzy commodities, then routing must become an essential part of every AI stack.
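The mixture-of-experts mechanism can likewise be sketched in miniature. This toy example assumes 4 experts with top-2 gating on a single token vector (real models use far more, and far larger, experts); it shows how a gate scores every expert, selects a few, and runs only those:

```python
# Toy mixture-of-experts layer: a gate scores all experts per token,
# but only the top-k experts actually execute. Sizes are illustrative.
import numpy as np

rng = np.random.default_rng(0)
d_model, n_experts, top_k = 8, 4, 2

gate_w = rng.normal(size=(d_model, n_experts))              # gating network weights
experts = [rng.normal(size=(d_model, d_model)) for _ in range(n_experts)]

def moe_layer(x: np.ndarray) -> np.ndarray:
    """Route one token vector through its top-k experts."""
    logits = x @ gate_w                                     # one score per expert
    top = np.argsort(logits)[-top_k:]                       # indices of the k best experts
    weights = np.exp(logits[top] - logits[top].max())
    weights /= weights.sum()                                # softmax over the selected experts only
    # Weighted sum of the chosen experts' outputs; the other experts never run.
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

token = rng.normal(size=d_model)
out = moe_layer(token)
print(out.shape)  # (8,)
```

The point of the sketch is the compute saving: the gate makes a routing decision for every token, so most of the network's parameters sit idle on any given forward pass, exactly the dynamic the article argues for at the level of whole models.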
The counterargument is that LLMs will plateau as they approach human intelligence – that as capabilities saturate, we will consolidate around one general model, the same way we consolidated around AWS or the iPhone. Neither of those platforms (nor their competitors) has 10X'd its capabilities in the past few years – so we might as well get comfortable in their ecosystems. We believe, however, that AI will not stop at human-level intelligence; it will continue far past any limits we can even imagine. As it does, it will become ever more fragmented and specialized, just as every other natural system does.
We cannot overstate what a good thing the fragmentation of AI models is. Fragmented markets are efficient markets: they give power to buyers, maximize innovation and minimize costs. And to the extent that we can lean on networks of smaller, more specialized models rather than sending everything through the internals of one giant model, we move toward a much safer, more interpretable and more steerable future for AI.
The greatest inventions have no owners. Ben Franklin's heirs do not own electricity. Turing's estate does not own all computers. AI is undoubtedly one of humanity's greatest inventions; we believe its future will be – and should be – multi-model.
Zack Kass is vice president of go-to-market at OpenAI.
Tomás Hernando Kofman is co-founder and CEO of Not Diamond.
DataDecisionMakers
Welcome to the VentureBeat community!
DataDecisionMakers is where experts, including the technical people who do data work, can share insights and innovation related to data.
If you want to read about cutting-edge ideas and information, best practices, and the future of data and data technology, join us at DataDecisionMakers.
You might even consider contributing an article of your own!