Friday, November 18, 2022

New Meta AI demo writes racist and inaccurate scientific literature, gets pulled - Ars Technica

An AI-generated illustration of robots making science. (Credit: Ars Technica)

On Tuesday, Meta AI unveiled a demo of Galactica, a large language model designed to "store, combine and reason about scientific knowledge." While intended to accelerate writing scientific literature, adversarial users running tests found it could also generate realistic nonsense. After several days of ethical criticism, Meta took the demo offline, reports MIT Technology Review.

Large language models (LLMs), such as OpenAI's GPT-3, learn to write text by studying millions of examples and understanding the statistical relationships between words. As a result, they can author convincing-sounding documents, but those works can also be riddled with falsehoods and potentially harmful stereotypes. Some critics call LLMs "stochastic parrots" for their ability to convincingly spit out text without understanding its meaning.
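As a rough illustration of that next-word prediction loop, here is a minimal sketch (not from the article) of generating text with a small causal language model through Hugging Face's transformers library. The checkpoint name and generation settings are assumptions chosen for brevity; any autoregressive LLM behaves the same way, and nothing in the loop checks whether the output is true.

```python
# Minimal sketch: autoregressive text generation with a small LLM.
# The model name below is an assumption (a small publicly hosted checkpoint);
# swap in any causal language model available via transformers.
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "facebook/galactica-125m"  # assumed small public checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

prompt = "The Transformer architecture was introduced in"
inputs = tokenizer(prompt, return_tensors="pt")

# Greedy decoding: repeatedly pick the statistically most likely next token.
# Fluent-sounding output does not imply factual output.
outputs = model.generate(**inputs, max_new_tokens=40, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```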

Enter Galactica, an LLM aimed at writing scientific literature. Its authors trained Galactica on "a large and curated corpus of humanity’s scientific knowledge," including over 48 million papers, textbooks and lecture notes, scientific websites, and encyclopedias. According to Galactica's paper, Meta AI researchers believed this purported high-quality data would lead to high-quality output.

A screenshot of Meta AI's Galactica website before the demo ended. (Credit: Meta AI)

Starting on Tuesday, visitors to the Galactica website could type in prompts to generate documents such as literature reviews, wiki articles, lecture notes, and answers to questions, according to examples provided by the website. The site presented the model as "a new interface to access and manipulate what we know about the universe."

While some people found the demo promising and useful, others soon discovered that anyone could type in racist or potentially offensive prompts, generating authoritative-sounding content on those topics just as easily. For example, someone used it to author a wiki entry about a fictional research paper titled "The benefits of eating crushed glass."

Even when Galactica's output wasn't offensive to social norms, the model could contradict well-understood scientific facts, spitting out inaccuracies, such as incorrect dates or animal names, that require deep knowledge of the subject to catch.

As a result, Meta pulled the Galactica demo Thursday. Afterward, Meta's Chief AI Scientist Yann LeCun tweeted, "Galactica demo is off line for now. It's no longer possible to have some fun by casually misusing it. Happy?"

The episode recalls a common ethical dilemma with AI: When it comes to potentially harmful generative models, is it up to the general public to use them responsibly, or for the publishers of the models to prevent misuse?

Where the industry practice falls between those two extremes will likely vary between cultures and as deep learning models mature. Ultimately, government regulation may end up playing a large role in shaping the answer.
