Oversight Board calls Meta's uneven AI-moderation enforcement "incoherent and unjustifiable"


As Meta's platforms fill with AI-generated content, the company still has a lot of work to do to consistently enforce its manipulated media policy. The Oversight Board has once again criticized the social media company's handling of such posts, writing in its latest decision that Meta's failure to consistently enforce its own rules is "incoherent and unjustifiable."

If that sounds familiar, it's because this isn't the first time. Last year, the Oversight Board used the word "incoherent" to describe Meta's approach to manipulated media. The board had previously called on Meta to update its rules after a misleading video of Joe Biden went viral on Facebook. In response, Meta said it would use labels to identify AI-generated content, and that it would apply more prominent labels in high-risk situations. These labels, like the one below, note when a post was created or edited with AI.


An example of the label Meta applies when it determines a piece of AI-manipulated content is "high risk." (Screenshot: Meta)

That approach is still falling short, according to the board. "The Board is concerned that, despite the growing prevalence of manipulated content across formats, Meta's enforcement of its manipulated media policy is inconsistent," the latest decision says. "Meta's failure to automatically apply the label to all instances of the same manipulated media is incoherent and unjustifiable."

The statement came in a decision concerning a post that claimed to show audio of two politicians in Iraqi Kurdistan. The alleged "recorded conversation" included a discussion of rigging the upcoming elections and other "sinister plans" for the region. According to the board, the post was reported to Meta for misinformation, but the company closed the case "without human review." Meta later labeled some instances of the audio clip, but not the one that was originally reported.

The case, according to the board, is not an outlier. Meta reportedly told the board that it can only automatically identify and label "static images," not audio or video. That means multiple copies of the same audio or video clip may not receive the same treatment, which the board notes can cause further confusion. The Oversight Board also criticized Meta for often relying on third parties to identify AI-manipulated video and audio, as it did in this case.

"Given that Meta is one of the leading technology and artificial intelligence companies in the world, with its resources and the broad reach of Meta platforms, the Board reiterates that Meta should prioritize investing in technology to identify and label manipulated video and audio at scale," the board wrote. "It is unclear to the Board why a company with this technical expertise and these resources outsources the identification of likely manipulated media in high-risk situations to media outlets or trusted partners."

In its recommendations to Meta, the board said the company should adopt a "clear process" for consistently labeling "identical or similar content" in situations where it applies a "high risk" label to a post. The board also recommended that these labels appear in a language that matches the user's settings on Facebook, Instagram, and Threads.

Meta did not respond to a request for comment. The company has 60 days to respond to the board's recommendations.


