Why you shouldn’t trust AI search engines
Plus: The original startup behind Stable Diffusion has launched a generative AI for video.
By Melissa Heikkilä
February 14, 2023
[Image: Pinocchio with an AI-powered search nose. Stephanie Arnett/MITTR | Envato]
The reason AI can't be trusted is that it has no ability to distinguish or judge whether the information it retrieves is true or false, right or wrong.
This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.
Last week was the week chatbot-powered search engines were supposed to arrive. The big idea is that these AI bots would upend our experience of searching the web by generating chatty answers to our questions, instead of just returning lists of links as searches do now. Only … things really did not go according to plan.
Approximately two seconds after Microsoft let people poke around with its new ChatGPT-powered Bing search engine, users started finding that it responded to some questions with incorrect or nonsensical answers, such as conspiracy theories. Google had an embarrassing moment when scientists spotted a factual error in the company’s own advertisement for its chatbot Bard, which subsequently wiped $100 billion off parent company Alphabet’s market value.
What makes all of this even more shocking is that it came as a surprise to precisely no one who has been paying attention to AI language models.
Here’s the problem: the technology is simply not ready to be used like this at this scale. AI language models are notorious bullshitters, often presenting falsehoods as facts. They are excellent at predicting the next word in a sentence, but they have no knowledge of what the sentence actually means. That makes it incredibly dangerous to combine them with search, where it’s crucial to get the facts straight.
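To make that point concrete, here is a minimal sketch (not from the article) using the open GPT-2 model via the Hugging Face transformers library; the prompt and generation settings are illustrative assumptions. The model simply samples plausible next words, and nothing in the process checks whether the continuation is true.

```python
# Illustrative sketch: a language model only predicts likely next words;
# it never verifies whether the text it produces is factually correct.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# Hypothetical prompt chosen purely for illustration.
prompt = "The James Webb Space Telescope took the very first picture of"

outputs = generator(prompt, max_new_tokens=15, num_return_sequences=3, do_sample=True)

for out in outputs:
    # Each continuation reads fluently, but it is just a high-probability
    # word sequence, not a fact-checked answer.
    print(out["generated_text"])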
OpenAI, the creator of the hit AI chatbot ChatGPT, has always emphasized that it is still just a research project, and that it is constantly improving as it receives people’s feedback. That hasn’t stopped Microsoft from integrating it into a new version of Bing, albeit with caveats that the search results might not be reliable.
Google has been using natural-language processing for years to help people search the internet using whole sentences instead of keywords. However, until now the company has been reluctant to integrate its own AI chatbot technology into its signature search engine, says Chirag Shah, a professor at the University of Washington who specializes in online search. Google’s leadership has been worried about the “reputational risk” of rushing out a ChatGPT-like tool. The irony!
The recent blunders from Big Tech don’t mean that AI-powered search is a lost cause. One way Google and Microsoft have tried to make their AI-generated search summaries more accurate is by offering citations. Linking to sources allows users to better understand where the search engine is getting its information, says Margaret Mitchell, a researcher and ethicist at the AI startup Hugging Face, who used to co-lead Google’s AI ethics team.
This might even help give people a more diverse take on things, she says, by nudging them to consider more sources than they might have done otherwise.
But that does nothing to address the fundamental problem that these AI models make up information and confidently present falsehoods as fact. And when AI-generated text looks authoritative and cites sources, that could ironically make users even less likely to double-check the information they’re seeing.
“A lot of people don’t check citations. Having a citation gives something an air of correctness that might not actually be there,” Mitchell says.
But the accuracy of search results is not really the point for Big Tech, says Shah. Though Google invented the technology that is fueling the current AI hype, the acclaim and attention are fixed firmly on the buzzy startup OpenAI and its patron, Microsoft. “It is definitely embarrassing for Google. They’re in a defensive position now. They haven’t been in this position for a very long time,” says Shah.
Meanwhile, Microsoft has gambled that expectations around Bing are so low that a few errors won’t really matter. Microsoft has less than 10% of the market share for online search, and gaining even a couple of percentage points would be a huge win for it, Shah says.
There’s an even bigger game beyond AI-powered search, adds Shah. Search is just one of the areas where the two tech giants are battling each other. They also compete in cloud computing services, productivity software, and enterprise software. Conversational AI becomes a way to demonstrate cutting-edge tech that translates to these other areas of the business.
Shah reckons companies are going to spin early hiccups as learning opportunities. “Rather than taking a careful approach to this, they’re going in a very bold fashion. Let the [AI system] make mistakes, because now the cat is out of the bag,” he says.
Essentially, we—the users—are now doing the work of testing this technology for free. “We’re all guinea pigs at this point,” says Shah.
Deeper Learning
The original startup behind Stable Diffusion has launched a generative AI for video
Runway, the generative AI startup that co-created last year’s breakout text-to-image model Stable Diffusion, has released an AI model that can transform existing videos into new ones by applying any style specified by a text prompt or reference image. If 2022 saw a boom in AI-generated images, the people behind Runway think 2023 will be the year of AI-generated video. Read more from Will Douglas Heaven here.
Why this matters: Unlike Meta’s and Google’s text-to-video systems, Runway’s model was built with customers in mind. “This is one of the first models to be developed really closely with a community of video makers,” says Runway CEO and cofounder Cristóbal Valenzuela. “It comes with years of insight about how filmmakers and VFX editors actually work on post-production.” Valenzuela thinks his model brings us a step closer to having full feature films generated with an AI system.
Bits and Bytes
ChatGPT is everywhere. Here’s where it came from
ChatGPT has become the fastest-growing internet service ever, reaching 100 million users just two months after its launch in December. But OpenAI’s breakout hit did not come out of nowhere. Will Douglas Heaven explains how we got here. (MIT Technology Review)
How AI algorithms objectify women’s bodies
A new investigation shows how AI tools rate photos of women as more sexually suggestive than similar images of men. This is an important story about how AI algorithms reflect the (often male) gaze of their creators. (The Guardian)
How Moscow’s smart-city project became an AI surveillance dystopia
Cities around the world are embracing technologies that purport to help with security or mobility. But this cautionary tale from Moscow shows just how easy it is to transform these technologies into tools for political repression. (Wired)
ChatGPT is a blurry JPEG of the internet
I like this analogy. ChatGPT is essentially a low-resolution snapshot of the internet, and that’s why it often spews nonsense. (The New Yorker)
Correction: The newsletter version of this story incorrectly stated Google lost $100 million off its share price. It was in fact $100 billion. We apologize for the error.