The race to understand the thrilling, dangerous world of language AI

Among other things, this is what Gebru, Mitchell, and five other scientists warned about in their paper, which calls LLMs “stochastic parrots.” “Language technology can be very, very useful when it is appropriately scoped and situated and framed,” says Emily Bender, a professor of linguistics at the University of Washington and one of the coauthors of the paper. But the general-purpose nature of LLMs—and the persuasiveness of their mimicry—entices companies to use them in areas they aren’t necessarily equipped for.

In a recent keynote at one of the largest AI conferences, Gebru tied this hasty deployment of LLMs to consequences she’d experienced in her own life. Gebru was born and raised in Ethiopia, where an escalating war has ravaged the northernmost Tigray region. Ethiopia is also a country where 86 languages are spoken, nearly all of them unaccounted for in mainstream language technologies.

Despite these linguistic deficiencies, Facebook relies heavily on LLMs to automate its content moderation globally. When the war in Tigray first broke out in November, Gebru watched the platform flounder as it tried to get a handle on the flurry of misinformation. This is emblematic of a persistent pattern that researchers have observed in content moderation: communities that speak languages not prioritized by Silicon Valley suffer the most hostile digital environments.

Gebru noted that this isn’t where the harm ends, either. When fake news, hate speech, and even death threats aren’t moderated out, they are then scraped as training data to build the next generation of LLMs. And those models, parroting back what they’re trained on, end up regurgitating these toxic linguistic patterns on the internet.

In many cases, researchers haven’t investigated thoroughly enough to know how this toxicity might manifest in downstream applications. But some scholarship does exist. In her 2018 book Algorithms of Oppression, Safiya Noble, an associate professor of information and African-American studies at the University of California, Los Angeles, documented how biases embedded in Google search perpetuate racism and, in extreme cases, perhaps even motivate racial violence.

“The consequences are pretty severe and significant,” she says. Google isn’t just the primary knowledge portal for average citizens. It also provides the information infrastructure for institutions, universities, and state and federal governments.

Google already uses an LLM to optimize some of its search results. With its latest announcement of LaMDA and a recent proposal it published in a preprint paper, the company has made clear it will only increase its reliance on the technology. Noble worries this could make the problems she uncovered even worse: “The fact that Google’s ethical AI team was fired for raising very important questions about the racist and sexist patterns of discrimination embedded in large language models should have been a wake-up call.”

BigScience

The BigScience project began in direct response to the growing need for scientific scrutiny of LLMs. In observing the technology’s rapid proliferation and Google’s attempted censorship of Gebru and Mitchell, Wolf and several colleagues realized it was time for the research community to take matters into its own hands.

Inspired by open scientific collaborations like CERN in particle physics, they conceived of an open-source LLM that could be used to conduct critical research independent of any company. In April of this year, the group received a grant to build it using the French government’s supercomputer.

At tech companies, LLMs are often built by only half a dozen people who have primarily technical expertise. BigScience wanted to bring in hundreds of researchers from a broad range of countries and disciplines to participate in a truly collaborative model-construction process. Wolf, who is French, first approached the French NLP community. From there, the initiative snowballed into a global operation encompassing more than 500 people.

The collaborative is now loosely organized into a dozen working groups and counting, each tackling different aspects of model development and investigation. One group will measure the model’s environmental impact, including the carbon footprint of training and running the LLM and factoring in the life-cycle costs of the supercomputer. Another will focus on developing responsible ways of sourcing the training data—seeking alternatives to simply scraping data from the web, such as transcribing historical radio archives or podcasts. The goal here is to avoid toxic language and nonconsensual collection of private information.
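To make the scale of that accounting concrete, a common back-of-the-envelope method multiplies the hardware’s energy use by the data center’s overhead and the local grid’s carbon intensity. The sketch below illustrates that arithmetic only; the function name, default figures, and example run are assumptions for illustration, not BigScience’s actual methodology or numbers.

```python
# Rough illustration of estimating a training run's carbon footprint.
# All figures are placeholder assumptions, not BigScience's measurements.

def training_emissions_kg(gpu_count: int,
                          hours: float,
                          gpu_power_kw: float = 0.4,          # assumed average draw per GPU
                          pue: float = 1.2,                    # data-center overhead (power usage effectiveness)
                          grid_kgco2_per_kwh: float = 0.06):   # carbon intensity of a relatively clean grid
    """Energy drawn by the accelerators, scaled by facility overhead,
    multiplied by the grid's carbon intensity."""
    energy_kwh = gpu_count * hours * gpu_power_kw * pue
    return energy_kwh * grid_kgco2_per_kwh

# Hypothetical example: 384 GPUs running for 90 days.
print(f"{training_emissions_kg(384, 90 * 24):,.0f} kg CO2")
```

A full life-cycle accounting, as the working group intends, would also have to amortize the emissions from manufacturing and eventually disposing of the supercomputer’s hardware, which a simple formula like this leaves out.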

How do I know if egg freezing is for me?

The tool is currently being trialed in a group of research volunteers and is not yet widely available. But I’m hoping it represents a move toward more transparency and openness about the real costs and benefits of egg freezing. Yes, it is a remarkable technology that can help people become parents. But it might not be the best option for everyone.

Read more from Tech Review’s archive

Anna Louie Sussman had her eggs frozen in Italy and Spain because services in New York were too expensive. Luckily, there are specialized couriers ready to take frozen sex cells on international journeys, she wrote.

Michele Harrison was 41 when she froze 21 of her eggs. By the time she wanted to use them, two years later, only one was viable. Although she did have a baby, her case demonstrates that egg freezing is no guarantee of parenthood, wrote Bonnie Rochman.

What happens if someone dies with eggs in storage? Frozen eggs and sperm can still be used to create new life, but it’s tricky to work out who can make the decision, as I wrote in a previous edition of The Checkup.

Meanwhile, the race is on to create lab-made eggs and sperm. These cells, which might be made from a person’s blood or skin cells, could potentially solve a lot of fertility problems—should they ever prove safe, as I wrote in a feature for last year’s magazine issue on gender.

Researchers are also working on ways to mature eggs from transgender men in the lab, which could allow them to store and use their eggs without having to pause gender-affirming medical care or go through other potentially distressing procedures, as I wrote last year.

From around the web

The World Health Organization is set to decide whether covid still represents a “public health emergency of international concern.” It will probably decide to keep this status, because of the current outbreak in China. (STAT)  

Researchers want to study the brains, genes, and other biological features of incarcerated people to find ways to stop them from reoffending. Others warn that this approach is based on shoddy science and racist ideas. (Undark)

A watermark for chatbots can expose text written by an AI

For example, since OpenAI’s chatbot ChatGPT was launched in November, students have already started cheating by using it to write essays for them. News website CNET has used ChatGPT to write articles, only to have to issue corrections amid accusations of plagiarism. Building the watermarking approach into such systems before they’re released could help address such problems. 

In studies, these watermarks have already been used to identify AI-generated text with near certainty. Researchers at the University of Maryland, for example, were able to spot text created by Meta’s open-source language model, OPT-6.7B, using a detection algorithm they built. The work is described in a paper that’s yet to be peer-reviewed, and the code will be available for free around February 15. 

AI language models work by predicting and generating one word at a time. After each word, the watermarking algorithm randomly divides the language model’s vocabulary into words on a “greenlist” and a “redlist” and then prompts the model to choose words on the greenlist. 

The more greenlisted words in a passage, the more likely it is that the text was generated by a machine. Text written by a person tends to contain a more random mix of words. For example, for the word “beautiful,” the watermarking algorithm could classify the word “flower” as green and “orchid” as red. The AI model with the watermarking algorithm would be more likely to use the word “flower” than “orchid,” explains Tom Goldstein, an assistant professor at the University of Maryland, who was involved in the research. 
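To make that mechanism concrete, here is a minimal sketch of the greenlist/redlist idea described above. It is an illustration of the general technique, not the Maryland team’s code: the toy vocabulary, the hash-based seeding, the score boost, and all names are assumptions.

```python
# Minimal sketch of greenlist/redlist watermarking as described above.
# Toy vocabulary and parameters; not the researchers' actual implementation.
import hashlib
import random

VOCAB = ["flower", "orchid", "garden", "sunset", "melody", "painting"]
GREEN_FRACTION = 0.5   # share of the vocabulary placed on the greenlist
GREEN_BOOST = 2.0      # how strongly greenlisted words are favored

def split_vocab(previous_word):
    """Deterministically split the vocabulary into a greenlist and a redlist,
    seeded by the previous word so a detector can reproduce the same split."""
    seed = int(hashlib.sha256(previous_word.encode()).hexdigest(), 16)
    rng = random.Random(seed)
    shuffled = list(VOCAB)
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * GREEN_FRACTION)
    return set(shuffled[:cut]), set(shuffled[cut:])

def pick_next_word(previous_word, scores):
    """Nudge the model toward greenlisted words by boosting their scores."""
    greenlist, _ = split_vocab(previous_word)
    adjusted = {word: score + (GREEN_BOOST if word in greenlist else 0.0)
                for word, score in scores.items()}
    return max(adjusted, key=adjusted.get)

def green_ratio(text):
    """Detector: the share of words that fall on the greenlist implied by the
    word before them. Watermarked text scores noticeably higher than chance."""
    words = text.split()
    hits = sum(1 for prev, cur in zip(words, words[1:])
               if cur in split_vocab(prev)[0])
    return hits / max(len(words) - 1, 1)
```

A real detector would turn that ratio into a statistical test over the full vocabulary of a large model, but the principle is the same: the generator and the detector share the rule for rebuilding each greenlist, while a human writer, unaware of it, lands on green and red words roughly at random.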

The Download: watermarking AI text, and freezing eggs


That’s why the team behind a new decision-making tool hope it will help to clear up some of the misconceptions around the procedure—and give would-be parents a much-needed insight into its real costs, benefits, and potential pitfalls. Read the full story.

—Jessica Hamzelou

This story is from The Checkup, MIT Technology Review’s weekly newsletter giving you the inside track on all things health and biotech. Sign up to receive it in your inbox every Thursday.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Elon Musk held a surprise meeting with US political leaders 
Allegedly in the interest of ensuring Twitter is “fair to both parties.” (Insider $)
+ Kanye West’s presidential campaign advisors have been booted off Twitter. (Rolling Stone $)
+ Twitter’s trust and safety head is Musk’s biggest champion. (Bloomberg $) 

2 We’re treating covid like flu now
Annual covid shots are the next logical step. (The Atlantic $)

3 The worst thing about Sam Bankman-Fried’s spell in jail? 
Being cut off from the internet. (Forbes $)
+ Most crypto criminals use just five exchanges. (Wired $)
+ Collapsed crypto firm FTX has objected to a new investigation request. (Reuters)

4 Israel’s tech sector is rising up against its government
Tech workers fear its hardline policies will harm startups. (FT $)

5 It’s possible to power the world solely using renewable energy
At least, according to Stanford academic Mark Jacobson. (The Guardian)
+ Tech bros love the environment these days. (Slate $)
+ How new versions of solar, wind, and batteries could help the grid. (MIT Technology Review)

6 Generative AI is wildly expensive to run 
And that’s why promising startups like OpenAI need to hitch their wagons to the likes of Microsoft. (Bloomberg $)
+ How Microsoft benefits from the ChatGPT hype. (Vox)
+ BuzzFeed is planning to make quizzes supercharged by OpenAI. (WSJ $) 
+ Generative AI is changing everything. But what’s left when the hype is gone? (MIT Technology Review)

7 It’s hard not to blame self-driving cars for accidents
Even when it’s not technically their fault. (WSJ $)

8 What it’s like to swap Google for TikTok
It’s great for food suggestions and hacks, but hopeless for anything work-related. (Wired $)
+ The platform really wants to stay operational in the US. (Vox)
+ TikTok is mired in an eyelash controversy. (Rolling Stone $)

9 CRISPR gene editing kits are available to buy online
But there’s no guarantee these experiments will actually work. (Motherboard)
+ Next up for CRISPR: Gene editing for the masses? (MIT Technology Review)

10 Tech workers are livestreaming their layoffs
It’s a candid window into how these notoriously secretive companies treat their staff. (The Information $)
