Nvidia reveals AI foundation models running on RTX AI PCs

Nvidia today announced foundation models that run locally on Nvidia RTX AI PCs, powering digital humans, content creation, productivity and development.

GeForce has long been a critical platform for AI developers. The first GPU-accelerated deep learning network, AlexNet, was trained on the GeForce GTX 580 in 2012 – and last year, over 30% of published AI research papers referenced the use of GeForce RTX. Jensen Huang, CEO of Nvidia, announced the news during a CES 2025 keynote address.

Now, with generative AI and RTX AI PCs, anyone can be a developer. A new wave of low-code and no-code tools, such as AnythingLLM, ComfyUI, Langflow and LM Studio, enables enthusiasts to use AI models in complex workflows through simple graphical user interfaces.

NIM microservices connected to these GUIs will make it effortless to access and deploy the latest generative AI models. Nvidia AI Blueprints, built on NIM microservices, provide easy-to-use, predefined reference workflows for digital humans, content creation and more.

To meet the growing demand from AI developers and enthusiasts, all major PC manufacturers and system builders are launching NIM-ready RTX AI PCs.

“AI is advancing at light speed, from perception AI to generative AI and now agentic AI,” Huang said. “NIM microservices and AI Blueprints give PC developers and enthusiasts the building blocks to explore the magic of AI.”

The NIM microservices will also be available with Nvidia Digits, a personal AI supercomputer that gives AI researchers, data scientists and students around the world access to the power of Nvidia Grace Blackwell. Project Digits introduces the new Nvidia GB10 Grace Blackwell Superchip, offering a petaflop of AI computing performance for prototyping, tuning and running large AI models.

Making AI NIMble

As AI gets smarter

Foundation models, neural networks trained on immense amounts of raw data, are the building blocks for generative AI.

Nvidia will release a pipeline of NIM microservices for RTX AI PCs from leading model developers such as Black Forest Labs, Meta, Mistral and Stability AI. Use cases span large language models (LLMs), vision language models, image generation, speech, models for retrieval-augmented generation (RAG), PDF extraction and computer vision.

“Making FLUX an Nvidia NIM microservice allows more users to use and experience AI while delivering incredible performance,” said Robin Rombach, CEO of Black Forest Labs, in a statement.

Nvidia today also announced the Llama Nemotron family of open models that provide high accuracy on a wide range of agentic tasks. The Llama Nemotron Nano model will be offered as a NIM microservice for RTX AI PCs and workstations, and will excel at agentic AI tasks such as instruction following, function calling, chat, coding and math. NIM microservices include the core components for running AI on PCs and are optimized for deployment across Nvidia GPUs – whether in RTX PCs and workstations or in the cloud.

Developers and enthusiasts will be able to download, install and run these NIM microservices on Windows 11 PCs with Windows Subsystem for Linux (WSL).

“AI is driving Windows 11 PC innovation at a rapid pace, and Windows Subsystem for Linux (WSL) offers a cross-platform environment for AI development on Windows 11 together with Windows Copilot Runtime,” said Pavan Davuluri, corporate vice president of Windows at Microsoft, in a statement. “Nvidia NIM microservices, optimized for Windows PCs, give developers ready-to-use AI models for their Windows applications, accelerating the deployment of AI capabilities to Windows users.”

The NIM microservices, which run on RTX AI PCs, will be compatible with leading AI development and agent frameworks, including AI Toolkit for VSCode, AnythingLLM, ComfyUI, CrewAI, Flowise AI, LangChain, Langflow and LM Studio. Developers can connect applications and workflows built on these frameworks to AI models running via NIM microservices through industry-standard endpoints, allowing them to use the latest technology with a unified interface across the cloud, data centers, workstations and PCs.
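To give a sense of what that unified interface looks like in practice, here is a minimal sketch of querying a locally running NIM language model through an OpenAI-compatible endpoint. It assumes a NIM microservice is already serving on localhost port 8000; the port and model identifier are illustrative, not official details from the announcement.

```python
# Minimal sketch: query a locally running NIM LLM microservice through an
# OpenAI-compatible endpoint. Assumes the service is already serving on
# localhost:8000; the model name below is a placeholder.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # assumed local NIM endpoint
    api_key="not-needed-locally",          # local deployments typically ignore the key
)

response = client.chat.completions.create(
    model="meta/llama-3.1-8b-instruct",    # placeholder model identifier
    messages=[{"role": "user", "content": "Summarize what NIM microservices are."}],
    max_tokens=200,
)
print(response.choices[0].message.content)
```

Because the endpoint follows the same conventions whether the model runs on an RTX PC, a workstation or in the cloud, the same client code can be pointed at any of them by changing the base URL.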

Enthusiasts will be able to experience a range of NIM microservices using an upcoming release of the Nvidia ChatRTX tech demo.

Putting a face on agentic AI

Nvidia AI Blueprints

To demonstrate how enthusiasts and developers can use NIM to build AI agents and assistants, Nvidia today previewed Project R2X, a vision-enabled PC avatar that can put information at a user's fingertips, assist with desktop apps and video conference calls, read and summarize documents, and more.

The avatar is rendered using Nvidia RTX Neural Faces, a new generative AI algorithm that complements traditional rasterization with fully generated pixels. The face is then animated with a new model based on Nvidia Audio2Face-3D that improves lip and tongue movement. R2X can be connected to cloud AI services such as OpenAI's GPT-4o and xAI's Grok, and to NIM microservices and AI Blueprints, such as PDF retrievers or alternative LLMs, through developer frameworks such as CrewAI, Flowise AI and Langflow.

AI Blueprints come to PCs

A wafer full of Nvidia Blackwell chips.

NIM microservices are also available to PC users via AI Blueprints – reference AI workflows that can run locally on RTX PCs. With these blueprints, developers can create podcasts from PDF documents, generate stunning images guided by 3D scenes and more.

The PDF-to-podcast blueprint extracts text, images and tables from a PDF to create a podcast script that users can edit. It can also generate a full audio recording from the script using voices available in the blueprint or based on a user's voice sample. Additionally, users can have a real-time conversation with an AI podcast host to learn more.

The blueprint uses NIM microservices such as Mistral-Nemo-12B-Instruct for language, Nvidia Riva for text-to-speech and automatic speech recognition, and the NeMo Retriever collection of microservices for PDF extraction.
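As a rough sketch of the flow described above (not the blueprint's actual code), the pipeline boils down to extracting text from the PDF, asking a language model to draft a podcast script, and handing that script to a text-to-speech service. The snippet below assumes a local NIM endpoint on localhost:8000 and uses a placeholder model identifier; the speech-synthesis step is left as a comment since it depends on how Riva is deployed.

```python
# Sketch of a PDF-to-podcast flow: extract text, draft a script with an LLM,
# then (not shown) synthesize audio with a text-to-speech service.
from pypdf import PdfReader
from openai import OpenAI

def extract_text(pdf_path: str) -> str:
    """Pull raw text from every page of the PDF."""
    reader = PdfReader(pdf_path)
    return "\n".join(page.extract_text() or "" for page in reader.pages)

def draft_podcast_script(document_text: str) -> str:
    """Ask the language model to turn the document into a two-host podcast script."""
    client = OpenAI(base_url="http://localhost:8000/v1", api_key="local")  # assumed endpoint
    prompt = (
        "Rewrite the following document as a conversational two-host podcast script:\n\n"
        + document_text[:8000]  # keep the prompt within a modest context window
    )
    response = client.chat.completions.create(
        model="mistralai/mistral-nemo-12b-instruct",  # placeholder model identifier
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

script = draft_podcast_script(extract_text("report.pdf"))
# A text-to-speech service such as Riva would turn the script into audio here.
print(script)
```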

The AI Blueprint for 3D-guided generative AI gives artists finer control over image generation. While AI can generate stunning images from simple text prompts, controlling image composition with words alone can be challenging. With this blueprint, creators can arrange simple 3D objects in a 3D renderer like Blender to guide AI image generation.

The artist can create 3D assets by hand or generate them using AI, place them in the scene and position the 3D view camera. Then, a prepackaged workflow powered by the FLUX NIM microservice will use the current composition to generate high-quality images that match the 3D scene.
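Conceptually, the rendered view of the 3D scene travels alongside the text prompt to the image-generation service. The sketch below illustrates that idea only; the request and response fields and the endpoint URL are hypothetical, not the FLUX NIM microservice's actual API.

```python
# Illustrative only: send a rendered guide image from the 3D scene, plus a text
# prompt, to an image-generation endpoint. Field names and URL are hypothetical.
import base64
import requests

def generate_from_scene(render_path: str, prompt: str, endpoint: str) -> bytes:
    """Post a rendered viewport image and a prompt; return the generated image bytes."""
    with open(render_path, "rb") as f:
        guide_image = base64.b64encode(f.read()).decode()
    payload = {"prompt": prompt, "guide_image": guide_image}  # hypothetical fields
    response = requests.post(endpoint, json=payload, timeout=120)
    response.raise_for_status()
    return base64.b64decode(response.json()["image"])  # hypothetical response field

image_bytes = generate_from_scene(
    "scene_render.png",
    "a cozy cabin in a snowy forest at dusk",
    "http://localhost:8000/v1/generate",  # placeholder local endpoint
)
with open("generated.png", "wb") as out:
    out.write(image_bytes)
```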

Nvidia NIM microservices and AI Blueprints will be available starting in February. NIM-ready RTX AI PCs will be available from Acer, ASUS, Dell, GIGABYTE, HP, Lenovo, MSI, Razer and Samsung, and from local system builders Corsair, Falcon Northwest, LDLC, Maingear, Mifcom, Origin PC, PCS and Scan.


