Beyond LLMs: How large quantitative models can optimize enterprise AI




Although large language models (LLMs) and generative AI have dominated enterprise AI conversations over the past year, there are other ways enterprises can benefit from AI. One alternative is large quantitative models (LQMs).

LQMs are trained to optimize for specific goals and parameters relevant to a business or application, such as material properties or financial risk metrics. This is in contrast to the broader language comprehension and generation tasks of LLMs. One of the major proponents and commercial vendors of LQMs is SandboxAQ, which announced today that it has raised $300 million in a new funding round. The company was originally part of Alphabet and was spun off as an independent business in 2022.

The funding is a testament to the company's success to date and, more importantly, to its growth prospects as it looks to solve enterprise AI use cases. SandboxAQ has established partnerships with leading consulting firms, including Accenture, Deloitte and EY, to distribute its enterprise solutions.

The main advantage of LQMs is their ability to deal with complex domain-specific problems in industries where fundamental physics and quantitative relationships are critical.

“It's about creating great products at the companies that use our AI,” SandboxAQ CEO Jack Hidary told VentureBeat. “And so if you want to create a drug, a diagnostic tool, a new substance or you want to do risk management at a big bank, that's where quantitative models shine.”

Why LQMs matter for enterprise AI

LQMs have different objectives and work differently than LLMs. Unlike LLMs, which process textual data scraped from the internet, LQMs generate their own data from mathematical equations and physical principles. The aim is to tackle the quantitative challenges an enterprise may face.

“We generate data and receive data from quantitative sources,” explained Hidary.
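As a rough illustration of that idea, the sketch below, which is not SandboxAQ's actual pipeline and whose equation, constants and linear fit are stand-ins, synthesizes training data from a known physical law instead of scraping text, then fits a simple model to it:

```python
# A minimal sketch of equation-driven training data: instead of scraping
# text, synthesize examples from a known physical law and fit a model.
import numpy as np

rng = np.random.default_rng(0)

# Ideal gas law: P = nRT / V  (n and R fixed for illustration)
n, R = 1.0, 8.314
T = rng.uniform(250.0, 400.0, size=10_000)   # temperature in kelvin
V = rng.uniform(0.01, 0.1, size=10_000)      # volume in cubic metres
P = n * R * T / V                            # "ground truth" from the equation

# Fit a simple linear model on an engineered feature (T/V). A real LQM
# would use far richer physics and a neural network, but the data source
# is the same: the equation itself, not internet text.
X = np.column_stack([T / V, np.ones_like(T)])
coeffs, *_ = np.linalg.lstsq(X, P, rcond=None)
print(f"recovered nR ≈ {coeffs[0]:.3f} (true value {n * R:.3f})")
```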

This approach enables improvements in areas where traditional methods have stalled. For example, in battery development, where lithium-ion technology has been dominant for 45 years, LQMs can simulate millions of chemical compounds without physical prototyping.

Similarly, in pharmaceutical development, where traditional approaches face a high failure rate in clinical trials, LQMs can study molecular structures and interactions at the electronic level.

In financial services, meanwhile, LQMs address the limitations of traditional modeling approaches.

“Monte Carlo simulation is no longer sufficient to handle the complexity of structured instruments,” said Hidary. Monte Carlo simulation is a classic computational technique that uses repeated random sampling to obtain numerical results. With SandboxAQ's LQM approach, a financial services firm can scale in a way that Monte Carlo simulation cannot. Hidary noted that some financial portfolios can be exceedingly complex, with all manner of structured instruments and options.

“If I have a portfolio and I want to know the tail risk of changes in this portfolio,” Hidary said, “what I'd like to do is create 300 to 500 million versions of that portfolio with small changes to it, and then I want to look at the tail risk.”
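For context, a bare-bones Monte Carlo tail-risk calculation looks something like the sketch below. The portfolio weights, return distribution and scenario count are invented for illustration, and Hidary's point is precisely that real portfolios of structured instruments overwhelm this kind of sampling:

```python
# A toy Monte Carlo tail-risk estimate for a three-asset portfolio with
# assumed (illustrative) weights, daily mean returns and covariance.
import numpy as np

rng = np.random.default_rng(42)

weights = np.array([0.4, 0.35, 0.25])              # hypothetical weights
mean_returns = np.array([0.0005, 0.0003, 0.0007])  # assumed daily means
cov = np.array([[1.0e-4, 2.0e-5, 1.5e-5],
                [2.0e-5, 8.0e-5, 1.0e-5],
                [1.5e-5, 1.0e-5, 1.2e-4]])         # assumed daily covariance

# Simulate many joint return scenarios and aggregate to portfolio level.
scenarios = rng.multivariate_normal(mean_returns, cov, size=500_000)
portfolio_returns = scenarios @ weights

# Tail risk: 99% value-at-risk and expected shortfall of simulated losses.
var_99 = -np.quantile(portfolio_returns, 0.01)
es_99 = -portfolio_returns[portfolio_returns <= -var_99].mean()
print(f"99% VaR: {var_99:.4%}, 99% expected shortfall: {es_99:.4%}")
```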

How SandboxAQ is using LQMs to improve cybersecurity

SandboxAQ's LQM technology aims to enable enterprises to create new products, materials and solutions, rather than simply optimizing existing processes.

Among the enterprise verticals in which the company has been innovating is cybersecurity. In 2023, the company released its first cryptography management technology, Sandwich. That has since been expanded into the company's AQtive Guard enterprise solution.

The software can analyze enterprise files, applications and network traffic to identify the encryption algorithms being used. This includes detecting the use of outdated or broken encryption algorithms such as MD5 and SHA-1. SandboxAQ feeds this information into a management model that alerts the chief information security officer (CISO) and compliance teams to potential vulnerabilities.
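Conceptually, the detection step resembles the sketch below, which walks a directory and flags references to MD5 and SHA-1 in source files. AQtive Guard's actual inventory goes much deeper, covering binaries, network traffic, certificates and key material, so treat this only as an illustration of the idea:

```python
# A minimal sketch of scanning a codebase for references to broken hash
# algorithms (MD5, SHA-1). Illustrative only; a real inventory would also
# cover configs, binaries, certificates and live traffic.
import re
from pathlib import Path

WEAK_PATTERNS = {
    "MD5": re.compile(r"\bmd5\b", re.IGNORECASE),
    "SHA-1": re.compile(r"\bsha-?1\b", re.IGNORECASE),
}

def scan_for_weak_crypto(root: str) -> list[tuple[str, int, str]]:
    """Return (file, line number, algorithm) for each suspicious reference."""
    findings = []
    for path in Path(root).rglob("*.py"):   # extend globs for other file types
        for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
            for name, pattern in WEAK_PATTERNS.items():
                if pattern.search(line):
                    findings.append((str(path), lineno, name))
    return findings

if __name__ == "__main__":
    for file, lineno, algo in scan_for_weak_crypto("."):
        print(f"{file}:{lineno}: possible use of deprecated {algo}")
```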

While an LLM could be used for the same purpose, the LQM takes a different approach. LLMs are trained on broad, unstructured internet data, which may include some information about encryption algorithms and their vulnerabilities. In contrast, SandboxAQ's LQMs are built from structured, quantitative data about encryption algorithms, their properties and known vulnerabilities. The LQMs use this data to build models and knowledge graphs specifically for cryptographic analysis, rather than relying on general language understanding.
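To make the distinction concrete, here is a minimal, hand-assembled knowledge graph of cryptographic facts of the kind such a model could query directly. The specific edges and the use of networkx are illustrative assumptions, not SandboxAQ's implementation:

```python
# A tiny knowledge graph of cryptographic algorithms and known weaknesses,
# queried by edge type rather than by free-text language understanding.
import networkx as nx

g = nx.DiGraph()
g.add_edge("MD5", "collision attacks", relation="vulnerable_to")
g.add_edge("SHA-1", "collision attacks", relation="vulnerable_to")
g.add_edge("MD5", "SHA-256", relation="superseded_by")
g.add_edge("SHA-1", "SHA-256", relation="superseded_by")

def weaknesses(algorithm: str) -> list[str]:
    """List the threats an algorithm is recorded as vulnerable to."""
    return [target for _, target, data in g.out_edges(algorithm, data=True)
            if data.get("relation") == "vulnerable_to"]

print(weaknesses("MD5"))    # ['collision attacks']
print(weaknesses("SHA-1"))  # ['collision attacks']
```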

Looking ahead, SandboxAQ is also working on a remediation model that can automatically recommend and implement updates to the encryption in use.

Quantum dimensions without a quantum computer or transformers

The original idea behind SandboxAQ was to combine AI techniques with quantum computing.

Hidary and his team realized early on that real quantum computers would not be readily available or powerful enough in the near term. Instead, SandboxAQ uses quantum principles implemented through optimized GPU infrastructure. Through a partnership, SandboxAQ has extended Nvidia's CUDA capabilities to handle quantum techniques.

SandboxAQ also does not use transformers, which are the basis of almost all LLMs.

“The models we train are neural network models and knowledge graphs, but they are not transformers,” said Hidary. “You can generate from equations, but you can also have quantitative data coming from sensors or other types of sources and networks.”

Although LQMs are different from LLMs, Hidary does not see it as an either/or choice for enterprises.

“Use LLMs for what they're good at, and then bring in LQMs for what they're good at,” he said.


