Google's AI Overviews Explain Made-Up Idioms With Confident Nonsense


Language can seem almost infinitely complex, with inside jokes and idioms that carry meaning for only a small group of people and look meaningless to everyone else. Thanks to generative AI, even the nonsense got a meaning this week, as the internet blew up like a trout out of a stream over the ability of AI Overviews on Google Search to define phrases that have never been uttered before.

What, you've never heard the phrase "blew up like a trout out of a stream"? Of course not, I just made it up. But Google's AI Overview result told me it's a "colloquial way of saying something exploded or became a sensation quickly," likely referring to the eye-catching colors and markings of the fish. No, it doesn't make sense.

Where the trend started

The trend seems to have started on Threads, where author and screenwriter Meaghan Wilson Anastasios shared what happened when she searched "peanut butter platform heels." Google returned a result referencing a (not real) scientific experiment in which peanut butter was used to demonstrate the creation of diamonds under high pressure.

It moved to other social media sites, like Bluesky, where people shared Google's interpretations of phrases like "you can't lick a badger twice." The game: Search for a novel, nonsensical phrase with "meaning" at the end.

Things took off from there.

A Bluesky post from Sharon Su (@doodlyrose.com) reads "wait this is amazing," with a screenshot of a Google search in which an AI Overview confidently breaks down a made-up proverb about good intentions and baking, phrase by phrase, as if it were a real saying.

Screenshot by Jon Reed/CNET

A Bluesky post from Livia Gershon (@liviagershon.bsky.social) reads "just amazing," with a screenshot of a Google AI Overview claiming the idiom "you can't catch a camel in London" is "a humorous way of saying something is impossible or extremely difficult to achieve," because trying to catch a camel and bring it to London is so absurd that it serves as a metaphor for a task that's nearly impossible or pointless.

Screenshot by Jon Reed/CNET

This meme is interesting for more reasons than comic relief. It shows how large language models might strain to provide an answer that sounds correct, rather than one that is correct.

"They are designed to generate fluent, plausible-sounding responses, even when the input is completely nonsensical," said Yafang Li, assistant professor at the Fogelman College of Business and Economics at the University of Memphis. "They are not trained to verify the truth. They are trained to complete the sentence."

Like glue on pizza

The fake meanings of made-up sayings bring back memories of the all-too-true stories about Google's AI Overviews giving incredibly wrong answers to basic questions, like when it suggested putting glue on pizza to help the cheese stick.

This trend seems at least a bit more harmless because it doesn't center on actionable advice. I mean, I for one hope nobody tries to lick a badger once, much less twice. The problem behind it, however, is the same: A large language model, like Google's Gemini behind AI Overviews, tries to answer your question and offer a plausible response, even if what it gives you is nonsense.

A Google spokesperson said AI Overviews are designed to display information supported by top web results, and that they have an accuracy rate comparable to other search features.

"When people do nonsensical or 'false premise' searches, our systems will try to find the most relevant results based on the limited web content available," the Google spokesperson said. "This is true of Search overall, and in some cases, AI Overviews will also trigger in an effort to provide helpful context."

This particular case is a "data void," where there isn't a lot of relevant information available for the search query. The spokesperson said Google is working to limit when AI Overviews appear on searches without enough information, and to prevent them from providing misleading, satirical or unhelpful content. Google uses information about queries like these to better understand when AI Overviews should and should not appear.

You won't always get a made-up definition if you search for the meaning of a fake phrase. When drafting the heading of this section, I searched "like glue on pizza," and it didn't trigger an AI Overview.

The problem doesn't appear to be universal across LLMs. I asked ChatGPT for the meaning of "you can't lick a badger twice" and it told me the phrase "isn't a standard idiom, but it definitely sounds like the kind of quirky, rustic proverb someone might use." It did, however, try to offer a definition anyway, essentially: "If you do something reckless or provoke danger once, you might not survive to do it again."

Read more: AI Essentials: 27 Ways to Make Gen AI Work for You, According to Our Experts

Drawing meaning out of nowhere

This phenomenon is an entertaining example of LLMs' tendency to make stuff up, what the AI world calls "hallucinating." When a generative AI model hallucinates, it produces information that sounds like it could be plausible or accurate but isn't rooted in reality.

LLMs are "not fact generators," Li said; they just predict the next logical pieces of language based on their training.

A majority of AI researchers in a recent survey reported that they doubt AI's accuracy and trustworthiness issues will be solved soon.

The fake definitions show not just the inaccuracy of LLMs but their confident inaccuracy. When you ask a person the meaning of a phrase like "you can't get a turkey from a Cybertruck," you probably expect them to say they haven't heard it and that it doesn't make sense. LLMs often react with the same confidence as if you're asking about a real idiom.

In this case, Google says the phrase means Tesla's Cybertruck "is not designed or capable of delivering Thanksgiving turkeys or other similar items" and highlights "its distinct, futuristic design that is not conducive to carrying bulky goods." Burn.

This humorous trend carries an ominous lesson: Don't trust everything you see from a chatbot. It might make things up out of thin air, and it won't necessarily indicate when it's uncertain.

"This is a perfect moment for educators and researchers to use these scenarios to teach people how meaning is generated and how AI works and why it matters," Li said. "Users should always stay skeptical and verify claims."

Be careful what you search for

Since you can't trust an LLM to be skeptical on your behalf, you need to encourage it to take what you say with a grain of salt.

"When users enter a prompt, the model assumes it's valid and then proceeds to generate the most likely accurate answer for it," Li said.

The solution is to introduce skepticism into your prompt. Don't ask for the meaning of an unfamiliar phrase or idiom. Ask whether it's real. Li suggested asking, "Is this a real idiom?"

“This can help the model recognize the phrase instead of just guessing,” she said.
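If you want to see the difference that framing makes, here's a minimal sketch of what the two prompts look like side by side. It's not from the article or from Li: it assumes the OpenAI Python SDK with an API key set in your environment, and the model name is purely illustrative.

```python
# Minimal sketch (illustrative, not from the article): compare a leading prompt,
# which invites a confident made-up definition, with a skeptical prompt that
# gives the model explicit room to say the phrase isn't real.
# Assumes the OpenAI Python SDK and OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

phrase = "you can't lick a badger twice"

# Leading question: asks for a meaning as if the idiom were real.
leading_prompt = f'What does the idiom "{phrase}" mean?'

# Skeptical question: asks whether the idiom is real before asking for a meaning.
skeptical_prompt = f'Is "{phrase}" a real idiom? If not, say so before guessing at a meaning.'

for prompt in (leading_prompt, skeptical_prompt):
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"> {prompt}\n{response.choices[0].message.content}\n")
```

Run both and compare: the only change is the framing of the question, which is exactly the kind of built-in skepticism Li recommends.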




