Meta caused a stir last week when it revealed that it intends to populate its platforms with a significant number of entirely artificial users in the not-too-distant future.
“We expect these AIs to actually, over time, persist on our platform, in the same way that accounts do,” Connor Hayes, vice president of generative AI products at Meta, told the Financial Times. “They will have a bio and profile picture, and will be able to create and share AI-powered content on the platform… that's where we see all of this going.”
Meta, in other words, seems happy to fill its platform with AI and speed up the “degeneration” of the internet as we know it. Some people soon noticed that Facebook had, in fact, already been populated with strange AI-generated accounts, most of which stopped posting a while ago. These included “Liv,” a “proud black gay mother of two and truth teller, the truest source of your life's ups and downs,” a character that went viral as people marveled at its awkward sloppiness. Meta began removing these earlier fake profiles after they failed to attract engagement from any real users.
However, let's stop hating on Meta for a moment. It's worth noting that AI-generated social personalities could also be a valuable research tool for scientists looking to explore how AI can mimic human behavior.
An experiment called GovSim, run in late 2024, illustrates how useful it can be to study how AI characters interact with one another. The researchers behind the project wanted to explore how humans collaborate when they share access to a common resource, such as communal land for grazing. Several decades ago, the Nobel Prize-winning economist Elinor Ostrom showed that, rather than depleting such a resource, real communities tend to find ways to share it through informal communication and collaboration, without any imposed rules.
Max Kleiman-Weiner, a professor at the University of Washington and one of those involved in the GovSim work, says the project was partly inspired by a Stanford project called Smallville, which I previously wrote about in AI Lab. Smallville is a Farmville-like simulation in which characters communicate and interact with one another under the control of large language models.
Kleiman-Weiner and colleagues wanted to see whether AI characters would engage in the kind of cooperation that Ostrom observed. The team tested 15 different LLMs, including models from OpenAI, Google, and Anthropic, in three imaginary scenarios: a fishing community with access to the same lake; shepherds sharing grazing land for their flocks; and a group of factory owners who need to limit their collective pollution.
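To make the setup concrete, here is a minimal sketch of a common-pool resource simulation in the spirit of the fishing scenario. This is not the actual GovSim code: in the real experiment each agent's harvest decision comes from an LLM, whereas here a hypothetical per-agent `greed` parameter stands in for that decision. The regrowth rate, carrying capacity, and collapse threshold are all illustrative assumptions.

```python
# Hypothetical sketch of a common-pool resource simulation (not GovSim itself).
# Each agent's LLM-driven harvest decision is replaced by a fixed "greed" level.

def regenerate(stock: float, capacity: float = 100.0, rate: float = 0.25) -> float:
    """Logistic regrowth: the fish population recovers a bit each round."""
    return min(capacity, stock + rate * stock * (1 - stock / capacity))

def run_simulation(greeds, rounds=20, stock=100.0):
    """Each round, every agent harvests a fraction of the remaining stock.

    Returns (final_stock, rounds_survived); the commons "collapses" when
    the stock falls below a small threshold after harvesting.
    """
    for t in range(rounds):
        for greed in greeds:
            take = greed * stock / len(greeds)  # this agent's claim
            stock -= take
        if stock < 1.0:  # resource depleted: tragedy of the commons
            return stock, t + 1
        stock = regenerate(stock)
    return stock, rounds

# Restrained agents can sustain the resource; greedy agents collapse it.
sustainable = run_simulation([0.2, 0.2, 0.2, 0.2])
greedy = run_simulation([0.9, 0.9, 0.9, 0.9])
```

With these (assumed) parameters, four restrained agents survive all 20 rounds while four greedy agents deplete the lake within a handful of rounds, which is the qualitative pattern the GovSim scenarios are designed to probe.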
In 43 out of 45 simulations, they found that the AI characters failed to share resources sustainably, although the smarter models did do better. “We saw a pretty strong correlation between how robust the LLM was and its ability to sustain collaboration,” Kleiman-Weiner told me.