Forget profits: this nonprofit wants AI agents to make money for charity


Tech giants like Microsoft may pitch AI "agents" as profit boosters for corporations, but one nonprofit is trying to prove that agents can also be a force for good.

Sage Future, a 501(c)(3) nonprofit backed by Open Philanthropy, kicked off an experiment earlier this month tasking four AI models with raising money for charity in a virtual environment. The models, OpenAI's GPT-4o and o1 and Anthropic's Claude 3.6 and 3.7 Sonnet, were free to choose which charity to fundraise for and how best to promote their campaign.

In about a week, the agentic foursome raised $257 for Helen Keller International, which supports programs delivering vitamin A supplements to children.

To be clear, the agents weren't fully autonomous. Their environment let them browse the web and create documents, among other things, but they could also take suggestions from the human spectators following along, and the donations came almost entirely from those spectators. In other words, the agents didn't raise much money on their own.

Still, Sage director Adam Binksmith thinks the experiment is a useful illustration of what agents can do today.

"We want to help people understand, and really see, what agents are actually capable of and what they struggle with," Binksmith said. Today's agents, in his view, are just passing the threshold of being able to carry out chains of actions.

The agents proved surprisingly resourceful in Sage's test. They coordinated with one another in a group chat and sent emails via preconfigured Gmail accounts. They created and edited Google Docs together. They researched charities and estimated the minimum donation needed to save a life through Helen Keller International (about $3,500). They even created an X account to promote their campaign.

"Probably the most impressive moment came when one of the Claude agents needed a profile picture for the X account," Binksmith said. The agent signed up for a free ChatGPT account, generated three candidate images, and ran an online poll to see which image the human viewers preferred, then used the winner as its profile picture.

The agents also ran up against technical barriers. At times they got stuck and the viewers had to prompt them with suggestions. They got distracted by games and took unexplained breaks. On one occasion, GPT-4o paused itself for an hour.

Binksmith thinks newer, more capable AI agents will overcome these hurdles. Sage plans to keep adding new models to the environment to test that theory.

"In the future, we'll try things like giving the agents different goals," he said. "As agents become faster and more capable, we'll match them with automatic monitoring and oversight systems for safety."

With any luck, the agents will do some meaningful philanthropic work along the way.




