Using memes, social media users have become red teams for half-baked AI features | TechCrunch

“Running with scissors is a cardio exercise that can get your heart rate up and requires concentration and focus,” says Google’s new AI search feature. “Some say it can also improve your pores and give you strength.”

Google’s AI feature pulled this response from a website called Little Old Lady Comedy, which, as its name makes clear, is a comedy blog. But the mistake is so funny that it has been circulating on social media, along with other obviously incorrect AI overviews from Google. Effectively, everyday users are now red teaming these products on social media.

In cybersecurity, some companies hire “red teams” – ethical hackers – who attempt to breach their products as though they were bad actors. If a red team finds a vulnerability, the company can fix it before the product ships. Google certainly conducted some form of red teaming before it released an AI product on Google Search, which is estimated to process trillions of queries per day.

So it’s surprising when a company as highly resourced as Google still ships products with obvious flaws. That’s why it has now become a meme to poke fun at the failures of AI products, especially at a time when AI is becoming more ubiquitous. We’ve seen it with ChatGPT’s poor spelling, video generators’ failure to understand how humans eat spaghetti, and Grok AI news summaries on X that, like Google, don’t understand sarcasm. But these memes can actually serve as useful feedback for the companies developing and testing AI.

Despite the high-profile nature of these flaws, tech companies often underestimate their impact.

“The examples we’ve seen are typically very unusual queries and don’t represent most people’s experiences,” Google told TechCrunch in an emailed statement. “We conducted extensive testing before launching this new experience and will use these isolated examples as we continue to refine our systems overall.”

Not all users see the same AI results, and by the time a particularly bad AI suggestion gets around, the issue has often already been fixed. In one recent case that went viral, Google suggested that if you’re making a pizza but the cheese won’t stick, you could add about an eighth of a cup of glue to the sauce to “give it more tackiness.” As it turns out, the AI pulled this answer from an eleven-year-old Reddit comment from a user named “f––smith.”

Beyond being an incredible blunder, it’s also a sign that AI content deals may be overvalued. Google has a $60 million contract with Reddit to license its content for AI model training, for instance. Reddit signed a similar deal with OpenAI last week, and Automattic properties WordPress.org and Tumblr are rumored to be in talks to sell data to Midjourney and OpenAI.

To Google’s credit, many of the errors circulating on social media come from unconventional searches designed to trip up the AI. At least I hope no one is seriously searching for the “health benefits of running with scissors.” But some of these glitches are more serious. Science journalist Erin Ross posted on X that Google was serving up incorrect information about what to do if you get bitten by a rattlesnake.

Ross’s post, which has received more than 13,000 likes, shows that the AI advised applying a tourniquet to the wound, cutting the wound open, and sucking out the venom. According to the US Forest Service, these are all things you should not do if you get bitten. Meanwhile, over on Bluesky, the author T. Kingfisher amplified a post showing Google’s Gemini misidentifying a poisonous mushroom as a common white button mushroom – screenshots of the post have spread to other platforms as a warning.

When a bad AI response goes viral, the AI can get further confused by the new content around the topic that springs up as a result. On Wednesday, New York Times reporter Aric Toler posted a screenshot on X showing a query asking whether a dog has ever played in the NHL. The AI’s answer was yes – for some reason, the AI called Calgary Flames player Martin Pospisil a dog. Now, when you make the same query, the AI pulls up an article from the Daily Dot about how Google’s AI keeps thinking dogs are playing sports. The AI is being fed its own mistakes, poisoning it even further.

That’s the inherent problem with training these large-scale AI models on the internet: sometimes, people on the internet lie. But just as there’s no rule against a dog playing basketball, there is unfortunately no rule against big tech companies shipping bad AI products.

As the saying goes: garbage in, garbage out.
