The race to understand the thrilling, dangerous world of language AI

Among other things, this is what Gebru, Mitchell, and five other scientists warned about in their paper, which calls LLMs “stochastic parrots.” “Language technology can be very, very useful when it is appropriately scoped and situated and framed,” says Emily Bender, a professor of linguistics at the University of Washington and one of the coauthors of the paper. But the general-purpose nature of LLMs—and the persuasiveness of their mimicry—entices companies to use them in areas they aren’t necessarily equipped for.

In a recent keynote at one of the largest AI conferences, Gebru tied this hasty deployment of LLMs to consequences she’d experienced in her own life. Gebru was born and raised in Ethiopia, where an escalating war has ravaged the northernmost Tigray region. Ethiopia is also a country where 86 languages are spoken, nearly all of them unaccounted for in mainstream language technologies.

Despite these linguistic deficiencies, Facebook relies heavily on LLMs to automate its content moderation globally. When the war in Tigray first broke out in November, Gebru watched the platform flounder as it tried to get a handle on the flurry of misinformation. This is emblematic of a persistent pattern researchers have observed in content moderation: communities that speak languages not prioritized by Silicon Valley suffer the most hostile digital environments.

Gebru noted that this isn’t where the harm ends, either. When fake news, hate speech, and even death threats aren’t moderated out, they are then scraped as training data to build the next generation of LLMs. And those models, parroting back what they’re trained on, end up regurgitating these toxic linguistic patterns on the internet.

In many cases, researchers haven’t investigated thoroughly enough to know how this toxicity might manifest in downstream applications. But some scholarship does exist. In her 2018 book Algorithms of Oppression, Safiya Noble, an associate professor of information and African-American studies at the University of California, Los Angeles, documented how biases embedded in Google search perpetuate racism and, in extreme cases, perhaps even motivate racial violence.

“The consequences are pretty severe and significant,” she says. Google isn’t just the primary knowledge portal for average citizens. It also provides the information infrastructure for institutions, universities, and state and federal governments.

Google already uses an LLM to optimize some of its search results. With its latest announcement of LaMDA and a recent proposal it published in a preprint paper, the company has made clear it will only increase its reliance on the technology. Noble worries this could make the problems she uncovered even worse: “The fact that Google’s ethical AI team was fired for raising very important questions about the racist and sexist patterns of discrimination embedded in large language models should have been a wake-up call.”

BigScience

The BigScience project began in direct response to the growing need for scientific scrutiny of LLMs. Observing the technology’s rapid proliferation and Google’s attempted censorship of Gebru and Mitchell, Wolf and several colleagues realized it was time for the research community to take matters into its own hands.

Inspired by open scientific collaborations like CERN in particle physics, they conceived of an open-source LLM that could be used to conduct critical research independent of any company. In April of this year, the group received a grant to build it using the French government’s supercomputer.

At tech companies, LLMs are often built by only half a dozen people who have primarily technical expertise. BigScience wanted to bring in hundreds of researchers from a broad range of countries and disciplines to participate in a truly collaborative model-construction process. Wolf, who is French, first approached the French NLP community. From there, the initiative snowballed into a global operation encompassing more than 500 people.

The collaborative is now loosely organized into a dozen working groups and counting, each tackling different aspects of model development and investigation. One group will measure the model’s environmental impact, including the carbon footprint of training and running the LLM and factoring in the life-cycle costs of the supercomputer. Another will focus on developing responsible ways of sourcing the training data—seeking alternatives to simply scraping data from the web, such as transcribing historical radio archives or podcasts. The goal here is to avoid toxic language and nonconsensual collection of private information.
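
To give a sense of what that environmental accounting involves, the standard estimate in the machine-learning carbon-footprint literature multiplies the hardware’s energy draw by the datacenter’s overhead and by the carbon intensity of the local grid. Here is a minimal Python sketch of that formula; every number below is a made-up placeholder rather than a BigScience figure, though it is true that France’s nuclear-heavy grid has unusually low carbon intensity:

    # Rough operating-emissions estimate used in ML carbon-footprint studies:
    #   CO2 = accelerator-hours x average power x PUE x grid carbon intensity
    gpu_hours = 1_000_000      # placeholder: total accelerator-hours of training
    avg_power_kw = 0.3         # placeholder: average draw per accelerator, in kW
    pue = 1.5                  # placeholder: datacenter power usage effectiveness
    grid_kgco2_per_kwh = 0.06  # approximate figure for France's low-carbon grid

    energy_kwh = gpu_hours * avg_power_kw * pue
    co2_tonnes = energy_kwh * grid_kgco2_per_kwh / 1000
    print(f"~{co2_tonnes:.0f} tonnes of CO2")  # ~27 tonnes with these placeholders

The life-cycle costs the group plans to factor in, such as manufacturing and eventually disposing of the supercomputer’s hardware, fall outside this simple operating-emissions formula, which is part of why the working group’s task is harder than the arithmetic suggests.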

The EU wants to put companies on the hook for harmful AI


The new bill, called the AI Liability Directive, will add teeth to the EU’s AI Act, which is set to become EU law around the same time. The AI Act would require extra checks for “high risk” uses of AI that have the most potential to harm people, including systems for policing, recruitment, or health care. 

The new liability bill would give people and companies the right to sue for damages after being harmed by an AI system. The goal is to hold developers, producers, and users of the technologies accountable, and require them to explain how their AI systems were built and trained. Tech companies that fail to follow the rules risk EU-wide class actions.

For example, job seekers who can prove that an AI system for screening résumés discriminated against them can ask a court to force the AI company to grant them access to information about the system so they can identify those responsible and find out what went wrong. Armed with this information, they can sue. 

The proposal still needs to snake its way through the EU’s legislative process, which will take a couple of years at least. It will be amended by members of the European Parliament and EU governments and will likely face intense lobbying from tech companies, which claim that such rules could have a “chilling” effect on innovation. 

Whether or not it succeeds, this new EU legislation will have a ripple effect on how AI is regulated around the world.

In particular, the bill could have an adverse impact on software development, says Mathilde Adjutor, Europe’s policy manager for the tech lobbying group CCIA, which represents companies including Google, Amazon, and Uber.  

Under the new rules, “developers not only risk becoming liable for software bugs, but also for software’s potential impact on the mental health of users,” she says. 

Imogen Parker, associate director of policy at the Ada Lovelace Institute, an AI research institute, says the bill will shift power away from companies and back toward consumers—a correction she sees as particularly important given AI’s potential to discriminate. And the bill will ensure that when an AI system does cause harm, there’s a common way to seek compensation across the EU, says Thomas Boué, head of European policy for tech lobby BSA, whose members include Microsoft and IBM. 

However, some consumer rights organizations and activists say the proposals don’t go far enough and will set the bar too high for consumers who want to bring claims. 


China is betting big on another gas engine alternative: methanol cars


Today, the leading company making methanol from carbon dioxide is Carbon Recycling International (CRI), an Icelandic firm. Geely invested in CRI in 2015, and the two companies have partnered to build the world’s largest CO2-to-fuel factory in China. Once it’s running, it could recycle 160,000 tons of CO2 emissions from steel plants every year.

The potential for clean production is what makes methanol desirable as a fuel: it’s not just a more efficient way to use energy but also a way to remove existing CO2 from the air. To reach carbon neutrality by 2060, as China has promised, the country can’t put all its eggs in one basket, such as EVs. Popularizing the use of methanol fuel, along with its clean production, may enable China to hit its target sooner.

Can methanol move beyond its dirty roots?

But the future is not all bright and green. Currently, the majority of methanol in China is still made by burning coal. In fact, the ability to power cars with coal instead of oil, which China doesn’t have much of, was a major reason the country pursued methanol in the first place. Today, the Chinese provinces that lead in methanol-car experiments are also the ones that have abundant coal resources.  

But as Bromberg says, unlike gas and diesel, methanol at least has the potential to be green. Its production may still carry a high carbon footprint today, just as most EVs in China still run on electricity generated from coal. But there is a path to transition from coal-based methanol to methanol made from renewables.

“If that is not an intention—if people are not going to pursue low-carbon methanol—you really don’t want to implement methanol at all,” Bromberg says.

Methanol fuel also has other potential drawbacks. It has a lower energy density than gasoline or diesel (roughly half that of gasoline per liter), so cars need bigger, heavier fuel tanks, or drivers have to refuel more often. The same limitation effectively rules methanol out as an aviation fuel.
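
For a rough sense of that gap, here is a back-of-the-envelope sketch in Python. The heating values are approximate textbook figures rather than numbers from the article, and the 50-liter tank is purely illustrative:

    # Approximate lower heating values, in MJ per liter (textbook ballpark figures)
    GASOLINE_MJ_PER_L = 31.1
    METHANOL_MJ_PER_L = 15.6

    tank_gasoline_l = 50  # a typical passenger-car tank, for illustration
    energy_mj = tank_gasoline_l * GASOLINE_MJ_PER_L

    # Volume of methanol needed to carry the same energy
    tank_methanol_l = energy_mj / METHANOL_MJ_PER_L
    print(f"{tank_methanol_l:.0f} L of methanol ~ {tank_gasoline_l} L of gasoline")
    # prints "100 L of methanol ~ 50 L of gasoline": about twice the volume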

What’s more, methanol is severely toxic when ingested and moderately so when inhaled or when people are exposed to it in large amounts. The potential harm was a big concern during the pilot program, though the researchers concluded that methanol proved no more toxic to participants than gas. 

Beyond China, other countries, like Germany and Denmark, are also exploring the potential of methanol fuels. China, though, is at least one step ahead of the rest, even if it remains an open question whether it will replicate its success with EVs or repeat the experience of the US, the last major auto industry to bet on methanol and walk away.

In 1982, California offered subsidies for car manufacturers to build over 900 methanol cars in a pilot program, and the Reagan administration later pushed for the Alternative Motor Fuels Act to promote the fuel’s use. But a lack of advocacy and the falling price of gasoline cut further research into methanol short. Pilot drivers, while generally satisfied with their cars’ performance, complained about how hard methanol was to find and about the cars’ smaller range compared with gas vehicles. California officially ended the use of methanol cars in 2005, and there has been no such experimentation in the US since.


Can we find ways to live beyond 100? Millionaires are betting on it.


But to test the same treatments in people, we’d need to run clinical trials for decades, which would be very difficult and extremely expensive. So the hunt is on for chemical clues in the blood or cells that might reveal how quickly a person is aging. Quite a few “aging clocks,” which purport to give a person’s biological age rather than their chronological age, have been developed. But none are reliable enough to test anti-aging drugs—yet. 
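
For readers curious what an aging clock actually is under the hood: the best-known methylation clocks, such as Horvath’s, are essentially penalized linear regressions that map DNA methylation levels at many CpG sites to an age estimate. Below is a minimal sketch on synthetic data; real clocks involve far more careful data collection and validation than this toy suggests:

    import numpy as np
    from sklearn.linear_model import ElasticNet

    rng = np.random.default_rng(42)
    n_people, n_sites = 200, 500

    # Synthetic stand-in for a methylation dataset: a handful of CpG
    # sites drift linearly with age, the rest are pure noise
    ages = rng.uniform(20, 90, n_people)
    weights = np.zeros(n_sites)
    weights[:20] = rng.normal(0, 0.01, 20)
    methylation = np.outer(ages, weights) + rng.normal(0, 0.1, (n_people, n_sites))

    # Elastic-net regression: the model family behind published clocks
    clock = ElasticNet(alpha=0.01, max_iter=5000).fit(methylation, ages)
    predicted_age = clock.predict(methylation)  # the "biological age" readout
    print(f"mean absolute error: {np.abs(predicted_age - ages).mean():.1f} years")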

As I leave to head back to my own slightly less posh but still beautiful hotel, I’m handed a gift bag. It’s loaded up with anti-aging supplements, a box with a note saying it contains an AI longevity assistant, and even a regenerative toothpaste. At first glance, I have absolutely no idea if any of them are based on solid science. They might be nothing more than placebos.

Ultimately, of all the supplements, drugs, and various treatments being promoted here, the workout is the one most likely to work, judging from the evidence we have so far. It’s obvious, but regular exercise is key to gaining healthy years of life. Workouts designed to strengthen our muscles seem to be particularly beneficial for keeping us healthy, especially in later life. They can even help keep our brains young.
 
I’ll be penning a proper write-up of the conference when I’m back home, so if your curiosity has been piqued, keep an eye out for that next week! In the meantime, here’s some related reading:

  • I wrote about what aging clocks can and can’t tell us about our biological age earlier this year.
  • Anti-aging drugs are being tested as a way to treat covid. The idea is that, by rejuvenating the immune system, we might be able to protect vulnerable older people from severe disease.
  • Longevity scientists are working to extend the lifespan of pet dogs. There’ll be benefits for the animals and their owners, but the eventual goal is to extend human lifespan, as I wrote in August.
  • The Saudi royal family could become one of the most significant investors in anti-aging research, according to this piece by my colleague Antonio Regalado. The family’s Hevolution Foundation plans to spend a billion dollars a year on understanding how aging works, and how to extend healthy lifespan.
  • While we’re on the subject of funding, most of the investment in the field has been poured into Altos Labs—a company focusing on ways to tackle aging by reprogramming cells to a more youthful state. The company has received financial backing from some of the wealthiest people in the world, including Jeff Bezos and Yuri Milner, Antonio explains.

From around the web

An experimental Alzheimer’s drug appears to slow cognitive decline. It’s huge news, given the decades of failed attempts to treat the disease. But the full details of the study have not yet been published, and it is difficult to know how much of an impact the drug might have on the lives of people with the disease. (STAT)

Bionic pancreases could successfully treat type 1 diabetes, according to the results of a clinical trial. The credit-card-sized device, worn on the abdomen, constantly monitors a person’s blood sugar levels and delivers insulin when needed. (MIT Technology Review)

We’re headed for a dementia epidemic in US prisons. There’s a growing number of older inmates, and the US penal system doesn’t have the resources to look after them. (Scientific American)

Unvaccinated people are 14 times more likely to develop monkeypox disease than those who receive the Jynneos vaccine, according to the US Centers for Disease Control and Prevention. But the agency doesn’t yet know how the vaccine affects the severity of disease in those who do become unwell, or whether protection differs for people given fractional doses. (The New York Times $)

Don’t call them minibrains! In last week’s Checkup, I covered organoids—tiny clumps of cells meant to mimic full-grown organs. They’ve mainly been used for research, but we’ve started to implant them into animals to treat disease, and humans are next. Arguably the best-known organoids are those made from brain cells, which have been referred to as minibrains. A group of leading scientists in the field say this wrongly implies that the cells are capable of complex mental functions, like the ability to think or feel pain. They ask that we use the less-catchy but more accurate term “neural organoid” instead. (Nature)

That’s it for this week. Thanks for reading!

—Jess

