Meta exempts top advertisers from its standard content moderation process


Meta has exempted some of its top advertisers from its standard content moderation process, shielding its multibillion-dollar business amid internal concerns that the company's systems were unfairly penalizing top brands.

According to internal documents from 2023 seen by the Financial Times, the owner of Facebook and Instagram introduced a series of “guardrails” that “protect high spenders”.

The previously undisclosed memos said Meta would “suppress detections” based on how much an advertiser spends on the platform, and that some top spenders would instead be reviewed by humans.

Another document suggested that a group dubbed “P95 spenders” – those who spend more than $1,500 a day – would be “exempt from advertising restrictions” but would “eventually be sent for human review”.

The memos predate this week's announcement by chief executive Mark Zuckerberg that Meta is ending its third-party fact-checking program and scaling back its automated moderation, as it prepares for Donald Trump's return as president.

The 2023 documents show Meta found that its automated systems had incorrectly flagged some high-spending accounts for violating company rules.

The company told the FT that high-spending accounts are disproportionately subject to incorrect flags for potential breaches. It did not respond to questions about whether any of the measures described in the documents were temporary or ongoing.

Ryan Daniels, a Meta spokesperson, said the FT's reporting was “inaccurate” and “based on a selective reading of documents that clearly state that this effort is designed to address what we have been talking about publicly: preventing errors in enforcement”.

Advertising makes up the majority of Meta's annual revenue, which was forecast to reach $135 billion in 2023.

The tech giant routinely screens ads using a combination of artificial intelligence and human moderators, in an effort to remove content that violates its standards, such as scams or harmful material.

In a document titled “high spender mistake prevention”, Meta said it had seven guardrails protecting business accounts that bring in more than $1,200 in revenue over a 56-day period, as well as users who spend more than $960 on advertising at one time.

It wrote that the guardrails help the company “determine whether detection should proceed with enforcement” and are designed to “suppress detection . . . based on indicators, such as the level of ad spend”.

It gave as an example businesses in the “top 5 percent of revenue”.

Meta told the FT that it uses “higher spend” as a guardrail because it often means a company's ads have greater reach, so the consequences are more severe if the business or its ads are mistakenly removed.

The company also confirmed that it prevents some high-spending accounts from being disabled by its automated systems, sending them for human review instead, when it has concerns about the accuracy of those systems.

However, it said all businesses remain subject to the same advertising standards and no advertiser is exempt from its rules.

In the “high spender mistake prevention” memo, the company rated the different guardrails as “low”, “medium” or “high” in terms of how “defensible” they were.

Meta staff rated the guardrails tied to spending levels as having “low” defensibility.

Other guardrails, such as using a business's integrity history to help decide whether a detected policy violation should go to automated enforcement, were labeled “high” defensibility.

Meta said the term “defensible” referred to how difficult it would be to explain the concept of the guardrails to external stakeholders if they were misconstrued.

The 2023 documents do not name the high spenders covered by the company's guardrails, but the spending thresholds suggest that thousands of advertisers could qualify for the exemptions.

Estimates from market intelligence firm Sensor Tower suggest that the top 10 US spenders on Facebook and Instagram include Amazon, Procter & Gamble, Temu, Shein, Walmart, NBCUniversal and Google.

Meta has posted record profits in recent quarters and its stock is trading near all-time highs, following the company's recovery from a post-pandemic slump in the global advertising market.

But Zuckerberg has warned of threats to the business, from the rise of AI to competition from ByteDance's TikTok, which has surged in popularity among younger users.

A person familiar with the documents said the company “prioritises revenue and profits over user loyalty and health”, adding that concerns had been raised internally about the circumvention of the standard moderation process.

Zuckerberg said on Tuesday that the complexity of Meta's content moderation system had introduced “too many mistakes and too much censorship”.

His comments came after Trump last year accused Meta of censoring conservative speech and suggested that if the company interfered in the 2024 election, Zuckerberg would “spend the rest of his life in prison”.

Internal documents also indicate that Meta has considered pursuing additional exemptions for certain high-spending advertisers.

In one memo, Meta staff suggested providing stronger protections from moderation to so-called “platinum and gold spenders”, which together generate more than half of the company's advertising revenue.

“False integrity enforcement against High Value Advertisers costs Meta revenue (and) undermines our credibility,” the memo read.

It proposed the option of a blanket exemption for these advertisers from certain enforcement actions, except in “very rare cases”.

The memo indicates that staff concluded platinum and gold spenders were “not the most appropriate sector” for a blanket exemption, since the company's tests estimated that 73 percent of enforcement actions against them were valid.

Internal documents also show that Meta detected more AI-generated accounts among its high-spending user categories.

Meta has previously come under scrutiny for carving out exemptions for prominent users. In 2021, documents leaked by Facebook whistleblower Frances Haugen showed the company had an internal system called “cross-check”, designed to review content from politicians, celebrities and journalists to ensure posts were not removed by mistake.

According to Haugen's documents, this was sometimes used to shield certain users from enforcement even when they broke Facebook's rules, a practice known as “whitelisting”.

Meta's Oversight Board – a “Supreme Court”-style independent body funded by the company to weigh in on its most difficult moderation decisions – found that the cross-check process left harmful content online. It called for an overhaul of the system, which Meta has since begun to implement.


