The worst branding blunders of the AI era—so far
https://www.fastcompany.com/91147959/worst-brand-mistakes-of-the-ai-era-so-far
From Toys ‘R’ Us to McDonald’s to Figma, brands are showing just how badly things can go wrong when you rush to incorporate the latest AI bells and whistles.
By Rob Walker
Even the humans who created AI can never know, with the naked eye alone, the true reality of everything that enters their subjective field of view; they color and distort each thing with imagined guesses, write off as nonexistent whatever sits in a blind spot, too close up, or beyond their range of sight, inflate the fragment they saw into the whole, and shrug off any past or future not captured in the one or two scenes they happened to perceive, so misunderstanding and quarrels run rampant.
An AI built by such beings has no idea what reality, fact, or truth even is; it merely picks the optimum out of the data it is given, so it is literally nothing more than a virtual reality mimicking human speech, and it is all too plain that people cannot rely on it once it strays beyond the confines of the factory's machine frame.
If truth existed in virtual reality, a person could drink the water of a desert mirage and still live.
Branded is a weekly column devoted to the intersection of marketing, business, design, and culture.
Plenty of brands seem eager to signal their artificial intelligence chops these days—maybe too eager. Consider Toys “R” Us. It set out to grab attention at the recent Cannes Lions festival, and beyond, with a bold example of AI as a creative tool. And what it touted as the first brand video generated with AI certainly got a strong reaction. In short, many found it creepy and off-putting, as well as a slight to human ad creatives. Jeff Beer, Fast Company’s senior staff editor covering advertising and branding, pronounced it an “abomination.”
[Photo: Toys “R” Us]
In fact, the spot’s dreamy depiction of the chain’s origin story, made with OpenAI’s text-to-video tool, Sora, became just the latest example of a brand scrambling to embrace—and being seen to embrace—AI’s potential, and basically stepping on a rake. It should be (yet another) reminder of what brands have to lose in the rush to do something, anything, involving AI. Whatever the ambition, it ended up the most recent high-profile entry on the roster of the biggest brand mistakes of the AI era. So far.
But it certainly has a wide variety of company. Just a few weeks ago, McDonald’s pulled the plug on an experiment with AI handling drive-through orders. The system’s botched interpretations of certain orders—mistakenly accepting that customers had asked for hundreds of McNuggets or ice cream with bacon on it—went viral on social media. The burger giant announced it would “explore voice-ordering solutions more broadly,” essentially conceding that the technology’s not ready for prime time just yet. (McDonald’s wasn’t the only brand burned by the incident; the episode was also a bad look for IBM, McDonald’s tech partner on the effort.)
Earlier this year, a Canadian tribunal ruled that Air Canada would have to repay one of its customers who received erroneous information about its bereavement policy rules from the airline’s chatbot. Air Canada’s defense involved an argument that the chatbot was in effect a separate legal entity “responsible for its own actions.” The amount in dispute was around $600 (plus tribunal fees)—which just makes the brand-mistake cost seem even more ridiculous.
In one of the most high-profile AI debacles to date, Sports Illustrated was found to have published AI-generated articles attributed to fake “authors.” The scandal wreaked havoc on an already struggling but storied sports journalism brand; the CEO of its operating entity was fired in the aftermath. (Authentic Brands Group, owner of SI’s intellectual property rights, later signed a licensing agreement with a different operator.) Much of the automated content was dubious and strange, and the debacle became an object lesson for brands on the need to be honest and transparent about AI experiments.
And of course the companies actually fueling the AI tech boom have hardly been immune to brand mistakes as they’ve battled each other for customers and attention. Quite the contrary. The much ballyhooed OpenAI has practically become a household name—and its notorious gaffes have been part of that story. Its technology infamously dreamed up imaginary case law that was cited (and exposed as fake) in actual legal proceedings. The company was also accused of generating an unauthorized imitation of Scarlett Johansson’s voice for its ChatGPT product, stepping on a creative-community nerve about generative AI copying without permission; its denial was undercut by CEO Sam Altman’s tweeting “her” to promote the release, seemingly a direct reference to the movie Her, in which Johansson voiced a fictional AI assistant.
Google AI searches recommended making spaghetti with gasoline. [Source Photos: Getty Images]
Anxious not to be left behind, Google has scrambled to add AI to its search arsenal, and its AI Overviews product has definitely gotten attention—particularly for doling out dubious (and soon viral) advice involving eating rocks and adding glue to a pizza recipe.
But Microsoft, another participant in the AI scrum, arguably gets the first-mover advantage nod in the brief history of AI gaffes. Way back in 2016, it debuted Tay, a social media chatbot powered by AI and supposedly designed to converse with humans and learn from those interactions. Unfortunately, a number of those humans promptly trained Tay to spew racist and antisemitic views; it was shuttered the next day. (Microsoft has more recently looked like a winner in the AI race, but its Bing search engine has produced its share of attention-grabbing “hallucinations.”)
In fairness, AI has come a long way in a short period of time, and will presumably continue to improve. But that doesn’t change the challenge of today’s feature becoming tomorrow’s glitch. Smaller-scale examples keep piling up, too, from Snapchat’s AI help bot alarming users by seeming to quit its job, to Adobe accidentally ticking off some of its photographer customers by noting Photoshop users could “skip the photo shoot” thanks to AI, to Figma disabling an AI design tool that apparently copied the design of Apple’s Weather app. It also won’t change the underlying risk for brands—the rush to brag about incorporating the latest AI bells and whistles can end up making them look not just clueless but untrustworthy when things go sideways. That’s a problem for the brand, not the technology. After all, each of these gaffes resulting from the current AI scramble can be attributed partly, if not mostly, to poor human judgment. And fixing that might take a while.