Meta says it has fixed the error that showed violent content in Instagram Reels feeds


Meta, Instagram's parent company, apologized Thursday for the violent, graphic content some users saw in their Instagram Reels feeds. Meta attributed the problem to an error that the company says has been addressed.

“We have fixed an error that caused some users to see content in their Instagram Reels feed that should not have been recommended,” a Meta spokesperson said in a statement to CNET. “We apologize for the mistake.”

Meta went on to say the incident was an error unrelated to any content policy changes the company has made. Earlier this year, Instagram made some significant changes to its user and content policies, but those changes did not specifically address the filtering of graphic or inappropriate content appearing in feeds.

Meta recently changed its own content moderation model, dismantling its fact-checking department in favor of community-driven moderation. Amnesty International warned earlier this month that Meta's changes could increase the risk of inciting violence.

Read more: Instagram may launch Reels as a standalone app, report says

Meta says the most graphic or disturbing images it flags are removed or replaced with a warning label that users must click through to see the image. Some content, Meta says, is also filtered out for users younger than 18. The company says it develops its policies on violent and graphic imagery with the help of international experts and that refining those policies is an ongoing process.

Users posted on social media and message boards, including Reddit, about some of the unwanted images they saw on Instagram, presumably because of the glitch. The images included shootings, beheadings, people struck by vehicles and other violent acts.

Brooke Erin Duffy, a social media researcher and associate professor at Cornell University, said she was unconvinced by Meta's claim that the issue of violent content was unrelated to its policy changes.

“Content moderation systems, whether they are powered by AI or human labor, are never foolproof,” Duffy told CNET. “And while many speculated that Meta's moderation overhaul (announced last month) would create heightened risks and vulnerabilities, yesterday's ‘glitch’ provided evidence of the costs of a less restrained platform.”

Duffy added that although content moderation on social media platforms is imperfect, “the platforms' moderation guidelines served as safety mechanisms for users, especially those from marginalized communities. Meta's replacement of its existing system with the Community Notes feature is a step backward in terms of user protection.”
