Chinese artificial intelligence company DeepSeek shook markets this week with claims that its new AI model outperforms OpenAI's and cost a fraction of the price to build.
The claims, in particular that DeepSeek's large language model cost just $5.6 million to train, have prompted concerns over the eye-watering sums that tech giants are currently spending on the computing infrastructure required to train and run advanced AI workloads.
But not everyone is convinced by DeepSeek's claims.
CNBC asked industry experts for their views on DeepSeek and how it actually compares to OpenAI, creator of the viral chatbot ChatGPT, which sparked the AI revolution.
What is DeepSeek?
Last week, DeepSeek released R1, its new reasoning model that rivals OpenAI's o1. A reasoning model is a large language model that breaks a prompt into smaller pieces and considers multiple approaches before generating an answer. It is designed to process complex problems in a way similar to humans.
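The two ideas behind reasoning models, breaking a problem into intermediate steps and weighing several candidate solutions, can be illustrated with a toy sketch. The functions below are illustrative stand-ins, not DeepSeek's or OpenAI's actual implementation; real reasoning models generate their intermediate steps with the language model itself.

```python
from collections import Counter

def solve_step_by_step(problem: tuple[int, int, int]) -> list[str]:
    """Solve a toy problem, (a + b) * c, as a chain of intermediate
    steps rather than producing the answer in one shot."""
    a, b, c = problem
    step1 = a + b
    step2 = step1 * c
    return [f"step 1: {a} + {b} = {step1}",
            f"step 2: {step1} * {c} = {step2}",
            f"answer: {step2}"]

def majority_answer(candidates: list[int]) -> int:
    """Consider several candidate solutions and keep the most common
    final answer, a simple form of self-consistency voting."""
    return Counter(candidates).most_common(1)[0][0]

trace = solve_step_by_step((2, 3, 4))
print(trace[-1])                      # answer: 20
print(majority_answer([20, 20, 19]))  # 20
```

In a real reasoning model both the step decomposition and the candidate answers come from sampling the model itself, but the control flow, decompose, sample several attempts, then select, follows this shape.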
DeepSeek was founded in 2023 by Liang Wenfeng, co-founder of High-Flyer, an AI-focused quantitative hedge fund, to focus on large language models and achieving artificial general intelligence, or AGI.
AGI as a concept loosely refers to the idea of AI that equals or surpasses human intellect across a wide range of tasks.
Much of the technology behind R1 isn't new. What is notable, however, is that DeepSeek is the first to deploy it in a high-performing model, and with significantly reduced energy requirements.
"The takeaway is that there are many possibilities to develop this industry. The high-end chip/capital intensive way is one technological approach," said Xiaomeng Lu, director of the geo-technology practice at Eurasia Group.
"But DeepSeek proves we are still in the early stage of AI development, and the path established by OpenAI may not be the only route to highly capable AI."
How is it different from OpenAI?
DeepSeek has two main systems that have garnered buzz in the AI community: V3, the large language model that powers its products, and R1, its reasoning model.
Both models are open-source, meaning their underlying code is free and publicly available for other developers to customize and redistribute.
DeepSeek's models are much smaller than many other large language models. V3 has a total of 671 billion parameters, or variables that the model learns during training. And while OpenAI doesn't disclose parameter counts, experts estimate its latest model has at least a trillion.
In terms of performance, DeepSeek says its R1 model achieves results comparable to OpenAI's o1 on reasoning tasks, citing benchmarks including AIME 2024, Codeforces, GPQA Diamond, MATH-500, MMLU and SWE-bench Verified.
In a technical report, the company said its V3 model cost just $5.6 million to train, a fraction of the billions of dollars that notable Western AI labs such as OpenAI and Anthropic have spent to train and run their foundational AI models. It isn't yet clear exactly how much DeepSeek has spent overall, however.
If the training costs are accurate, though, it means the model was developed at a fraction of the cost of rival models from OpenAI, Anthropic, Google and others.
Daniel Newman, CEO of tech analyst firm The Futurum Group, said these developments suggest "a massive breakthrough," although he expressed some doubts about the exact figures.
"I believe DeepSeek's breakthroughs indicate a meaningful inflection for scaling laws and are a real necessity," he said. "Having said that, there are still a lot of questions and uncertainties around the full picture of costs as it pertains to the development of DeepSeek."
Meanwhile, Paul Triolo, senior vice president for China and technology policy at advisory firm DGA Group, noted that it was difficult to draw a direct comparison between DeepSeek's model costs and those of major U.S. developers.
"The $5.6 million figure for DeepSeek V3 was only for one training run, and the company stressed that this did not represent the total cost of R&D to develop the model," he said. "The total cost was probably significantly higher, but still lower than the amounts spent by major U.S. AI companies."
DeepSeek wasn't immediately available for comment when contacted by CNBC.
How DeepSeek and OpenAI prices compare
Both DeepSeek and OpenAI disclose pricing for their models' compute on their websites.
DeepSeek says R1 costs 55 cents per 1 million tokens of input ("tokens" referring to each individual unit of text processed by the model) and $2.19 per 1 million tokens of output.
By comparison, OpenAI's pricing page for o1 shows the company charges $15 per 1 million input tokens and $60 per 1 million output tokens. For GPT-4o mini, OpenAI's smaller, low-cost language model, the company charges 15 cents per 1 million input tokens.
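The per-million-token rates above make the cost gap easy to quantify. This short sketch uses only the prices cited in this article; actual API bills also depend on factors such as cached-input discounts and hidden reasoning tokens, which are not modeled here.

```python
# Per-1M-token API prices cited in the article (USD).
PRICES = {
    "deepseek-r1": {"input": 0.55, "output": 2.19},
    "openai-o1":   {"input": 15.00, "output": 60.00},
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Cost in USD of one request, given per-million-token rates."""
    p = PRICES[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

# Example workload: 1M input tokens and 1M output tokens.
r1 = request_cost("deepseek-r1", 1_000_000, 1_000_000)  # ≈ 2.74
o1 = request_cost("openai-o1", 1_000_000, 1_000_000)    # ≈ 75.00
print(f"R1: ${r1:.2f}, o1: ${o1:.2f}, ratio: {o1 / r1:.0f}x")
```

On this illustrative workload, o1 comes out roughly 27 times more expensive than R1 at the listed rates.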
Skepticism over chips
DeepSeek's reveal of R1 has already led to heated public debate over the veracity of its claims, not least because its models were built despite U.S. export controls restricting the shipment of advanced AI chips to China.
DeepSeek claims it achieved its breakthrough using mature Nvidia chips, including the H800 and A100, which are less advanced than the cutting-edge H100, which can't be exported to China.
However, in comments to CNBC last week, Scale AI CEO Alexandr Wang said he believed DeepSeek used banned chips, a claim DeepSeek denies.

Nvidia has since come out and said that the GPUs DeepSeek used were fully export-compliant.
The real deal or not?
Industry experts seem to broadly agree that what DeepSeek has achieved is impressive, although some have urged skepticism over some of the Chinese firm's claims.
"DeepSeek is legitimately impressive, but the level of hysteria is an indictment of so many," one prominent tech figure wrote on X.
"The $5 million number is bogus. It is pushed by a Chinese hedge fund to slow investment in American AI startups, service their own shorts against American titans like Nvidia, and hide sanction evasion."
Seena Rejal, chief commercial officer of NetMind, a London-headquartered startup that offers access to DeepSeek's AI models through a distributed GPU network, said he saw no reason not to believe DeepSeek.
"Even if it's off by a certain factor, it is still coming in as hugely efficient," Rejal told CNBC in a phone interview earlier this week. "The logic of what they've explained is very reasonable."
However, some have claimed DeepSeek's technology may not have been built from scratch.
"DeepSeek makes the same mistakes o1 makes, a strong indication the technology was ripped off," billionaire investor Vinod Khosla said on X, without giving more detail.
That is a claim OpenAI itself has alluded to, telling CNBC in a statement Wednesday that it is reviewing reports DeepSeek may have "inappropriately" used output data from its models to develop its own AI model, a method referred to as "distillation."
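The mechanics of distillation, training a small "student" model on the outputs of a larger "teacher" model rather than on original data, can be sketched with a toy example. Everything below is a deliberately simplified stand-in (linear functions instead of neural networks); it is not how DeepSeek or OpenAI's models actually work, only an illustration of the training pattern at issue.

```python
def teacher(x: float) -> float:
    """Stand-in for a large proprietary model: the student never sees
    its internals, only the outputs it returns for queries."""
    return 3.0 * x + 1.0

# 1. Query the teacher to build a synthetic training set.
xs = [float(i) for i in range(10)]
ys = [teacher(x) for x in xs]

# 2. Fit the student (a least-squares line) to the teacher's outputs.
n = len(xs)
mean_x, mean_y = sum(xs) / n, sum(ys) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
intercept = mean_y - slope * mean_x

def student(x: float) -> float:
    """Small model that mimics the teacher's behavior."""
    return slope * x + intercept

# The student now reproduces the teacher on unseen inputs.
print(student(100.0))  # ≈ teacher(100.0) = 301.0
```

The point of contention in the article is step 1: OpenAI suggests DeepSeek may have generated such a synthetic training set from its models' outputs, which OpenAI's terms prohibit.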
"We take aggressive, proactive countermeasures to protect our technology and will continue working closely with the U.S. government to protect the most capable models being built here," an OpenAI spokesperson told CNBC.
Commoditizing AI
However the scrutiny surrounding DeepSeek shakes out, AI scientists broadly agree that it marks a positive step for the industry.
Yann LeCun, chief AI scientist at Meta, said DeepSeek's success represents a victory for open-source AI models, not necessarily a win for China over the U.S. Meta is behind the popular open-source AI model Llama.
"To people who see the performance of DeepSeek and think 'China is surpassing the U.S. in AI,' you are reading this wrong," he wrote.
"DeepSeek has profited from open research and open source (e.g., PyTorch and Llama from Meta). They came up with new ideas and built them on top of other people's work. Because their work is published and open source, everyone can profit from it. That is the power of open research and open source."
WATCH: Why DeepSeek is putting America's AI lead in jeopardy

– CNBC's Katrina Bishop and Hayden Field contributed to this report