In a new lawsuit, Microsoft accuses a group of developers of building tools to abuse its cloud AI service.


Microsoft has taken legal action against a group that the company claims intentionally developed and used tools to bypass the safety guardrails of its cloud AI products.

According to a complaint the company filed in December in the U.S. District Court for the Eastern District of Virginia, a group of 10 unnamed defendants allegedly used stolen customer credentials and custom-designed software to break into the Azure OpenAI Service, Microsoft's fully managed service powered by ChatGPT maker OpenAI's technologies.

In the complaint, Microsoft accuses the defendants, whom it refers to only as "Does," of violating the Computer Fraud and Abuse Act, the Digital Millennium Copyright Act, and a federal racketeering law by illegally accessing its software and servers to create "offensive" and "harmful and illegal content." Microsoft did not provide specific details about the abusive content that was produced.

The company is seeking injunctive and "other equitable" relief, as well as damages.

In the complaint, Microsoft says it discovered in July 2024 that customers' Azure OpenAI Service credentials, specifically API keys, the unique strings of characters used to authenticate an app or user, were being used to generate content that violated the service's acceptable use policy. Later, through an investigation, Microsoft found that the API keys had been stolen from paying customers.
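To illustrate why a leaked API key is so damaging: the key is the only secret a request needs, so whoever holds it can authenticate exactly as the paying customer would. The sketch below shows, in broad strokes, how an Azure OpenAI Service request carries its key in the `api-key` header. The resource name, deployment name, and key value are hypothetical placeholders, and the API version string is an assumption; real keys are secrets and must never be hard-coded.

```python
# Sketch: how an Azure OpenAI Service API key authenticates a request.
# "example-resource", "gpt-4o", and "EXAMPLE_KEY" are hypothetical
# placeholders, not real values.

def build_azure_openai_request(resource: str, deployment: str, api_key: str,
                               api_version: str = "2024-06-01"):
    """Return the URL and headers for a chat-completions call."""
    url = (f"https://{resource}.openai.azure.com/openai/deployments/"
           f"{deployment}/chat/completions?api-version={api_version}")
    # The key travels in the 'api-key' header. Nothing else identifies
    # the caller, which is why a stolen key grants full access to the
    # victim customer's service quota.
    headers = {"api-key": api_key, "Content-Type": "application/json"}
    return url, headers

url, headers = build_azure_openai_request("example-resource", "gpt-4o",
                                          "EXAMPLE_KEY")
```

In practice, Azure also supports Microsoft Entra ID tokens instead of static keys, which is one mitigation against exactly this kind of theft.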

"The precise manner in which Defendants obtained all of the API Keys used to carry out the misconduct described in this Complaint is unknown," Microsoft's complaint states, "but it appears that Defendants have engaged in a pattern of systematic API Key theft that enabled them to steal Microsoft API Keys from multiple Microsoft customers."

Microsoft alleges that the defendants used stolen Azure OpenAI Service API keys belonging to U.S.-based customers to create a "hacking-as-a-service" scheme. To pull off this scheme, according to the complaint, the defendants created a client-side tool called de3u, as well as software for processing and routing communications from de3u to Microsoft's systems.

De3u allowed users to leverage stolen API keys to generate images with DALL-E, one of the OpenAI models available to Azure OpenAI Service customers, without having to write their own code, Microsoft claims. De3u also attempted to prevent the Azure OpenAI Service from revising the prompts used to generate images, according to the complaint, which can happen, for instance, when a text prompt contains words that trigger Microsoft's content filtering.

Screenshot of the de3u tool, from Microsoft's complaint. Image credits: Microsoft

A repo containing the de3u project code, hosted on GitHub, a Microsoft-owned company, was no longer accessible at press time.

"These features, combined with Defendants' unlawful programmatic API access to the Azure OpenAI service, enabled Defendants to reverse engineer Microsoft's content and anti-abuse measures," the complaint states. "Defendants knowingly and intentionally accessed the Azure OpenAI Service without authorization, and caused damage and loss as a result of that conduct."

In a blog post published Friday, Microsoft said the court has authorized it to seize a website that was "instrumental" to the defendants' operation. The seizure, Microsoft said, will allow the company to gather evidence, decipher how the defendants' alleged services were monetized, and disrupt any additional technical infrastructure it finds.

Microsoft also said it has taken unspecified countermeasures and added "additional security mitigations" to the Azure OpenAI Service targeting the activity it observed.



