

Generative AI risks concentrating Big Tech’s power. Here’s how to stop it.



This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

If regulators don’t act now, the generative AI boom will concentrate Big Tech’s power even further. That’s the central argument of a new report from research institute AI Now. And it makes sense. To understand why, consider that the current AI boom depends on two things: large amounts of data, and enough computing power to process it.  

Both of these resources are only really available to big companies. And although some of the most exciting applications, such as OpenAI’s chatbot ChatGPT and Stability.AI’s image-generation AI Stable Diffusion, are created by startups, they rely on deals with Big Tech that give them access to its vast data and computing resources. 

“A couple of big tech firms are poised to consolidate power through AI rather than democratize it,” says Sarah Myers West, managing director of the AI Now Institute, a research nonprofit. 

Right now, Big Tech has a chokehold on AI. But Myers West believes we’re actually at a watershed moment. It’s the start of a new tech hype cycle, and that means lawmakers and regulators have a unique opportunity to ensure that the next decade of AI technology is more democratic and fair. 

What separates this tech boom from previous ones is that we have a better understanding of all the catastrophic ways AI can go awry. And regulators everywhere are paying close attention. 

China just unveiled a draft bill on generative AI calling for more transparency and oversight, while the European Union is negotiating the AI Act, which will require tech companies to be more transparent about how generative AI systems work. It’s also planning a bill to make them liable for AI harms.

The US has traditionally been reluctant to regulate its tech sector. But that’s changing. The Biden administration is seeking input on ways to oversee AI models such as ChatGPT—for example, by requiring tech companies to produce audits and impact assessments, or by mandating that AI systems meet certain standards before they are released. It’s one of the most concrete steps the administration has taken to curb AI harms.

Meanwhile, Federal Trade Commission chair Lina Khan has also highlighted Big Tech’s advantage in data and computing power and vowed to ensure competition in the AI industry. The agency has dangled the threat of antitrust investigations and crackdowns on deceptive business practices. 

This new focus on the AI sector is partly influenced by the fact that many members of the AI Now Institute, including Myers West, have spent time at the FTC. 

Myers West says her stint taught her that AI regulation doesn’t have to start from a blank slate. Instead of waiting for AI-specific regulations such as the EU’s AI Act, which will take years to put into place, regulators should ramp up enforcement of existing data protection and competition laws.

Because AI as we know it today is largely dependent on massive amounts of data, data policy is also artificial-intelligence policy, says Myers West. 

Case in point: ChatGPT has faced intense scrutiny from European and Canadian data protection authorities, and it has been blocked in Italy for allegedly scraping personal data off the web illegally and misusing it. 

The call for regulation is not just coming from government officials. Something interesting has happened. After decades of fighting regulation tooth and nail, today most tech companies, including OpenAI, claim they welcome it.  

The big question everyone’s still fighting over is how AI should be regulated. Though tech companies claim they support regulation, they’re still pursuing a “release first, ask questions later” approach when it comes to launching AI-powered products. They are rushing to release image- and text-generating AI models as products even though these models have major flaws: they make up nonsense, perpetuate harmful biases, infringe copyright, and contain security vulnerabilities.

The White House’s proposal to tackle AI accountability with post-launch measures such as algorithmic audits is not enough to mitigate AI harms, AI Now’s report argues. Stronger, swifter action is needed to ensure that companies first prove their models are fit for release, Myers West says.

“We should be very wary of approaches that do not put the burden on companies. There are a lot of approaches to regulation that essentially put the onus on the broader public and on regulators to root out AI-enabled harms,” she says. 

And importantly, Myers West says, regulators need to take action swiftly. 

“There need to be consequences for when [tech companies] violate the law.” 

Deeper Learning

How AI is helping historians better understand our past

This is cool. Historians have started using machine learning to examine historical documents smudged by centuries spent in mildewed archives. They’re using these techniques to restore ancient texts, and making significant discoveries along the way. 

Connecting the dots: Historians say the application of modern computer science to the distant past helps draw broader connections across the centuries than would otherwise be possible. But there is a risk that these computer programs introduce distortions of their own, slipping bias or outright falsifications into the historical record. Read more from Moira Donovan here.

Bits and bytes

Google is overhauling Search to compete with AI rivals  
Threatened by Microsoft’s relative success with AI-powered Bing search, Google is building a new search engine that uses large language models, and upgrading its existing search engine with AI features. It hopes the new search engine will offer users a more personalized experience. (The New York Times)

Elon Musk has created a new AI company to rival OpenAI 
Over the past few months, Musk has been trying to hire researchers to join his new AI venture, X.AI. Musk was one of OpenAI’s cofounders, but he was ousted in 2018 after a power struggle with CEO Sam Altman. Musk has accused OpenAI’s chatbot ChatGPT of being politically biased and says he wants to create “truth-seeking” AI models. What does that mean? Your guess is as good as mine. (The Wall Street Journal)

Stability.AI is at risk of going under
Stability.AI, the creator of the open-source image-generating AI model Stable Diffusion, just released a new version of the model whose results are slightly more photorealistic. But the business is in trouble. It’s burning through cash fast and struggling to generate revenue, and staff are losing faith in the CEO. (Semafor)

Meet the world’s worst AI program
The bot, known as Martin, depicted as a turtleneck-wearing Bulgarian man with bushy eyebrows, a thick beard, and a slightly receding hairline, is designed to be absolutely awful at chess. While other AI bots are programmed to dazzle, Martin is a reminder that even dumb AI systems can still surprise, delight, and teach us. (The Atlantic)


The Download: child online safety laws, and ClimateTech is coming

August 2022

Matt Kaeberlein is what you might call a dog person. He has grown up with dogs and describes his German shepherd, Dobby, as “really special.” But Dobby is 14 years old—around 98 in dog years.

Kaeberlein is co-director of the Dog Aging Project, an ambitious research effort to track the aging process of tens of thousands of companion dogs across the US. He is one of a handful of scientists on a mission to improve, delay, and possibly reverse that process to help them live longer, healthier lives.

And dogs are just the beginning. One day, this research could help to prolong the lives of humans. Read the full story.

—Jessica Hamzelou

We can still have nice things

A place for comfort, fun and distraction in these weird times. (Got any ideas? Drop me a line or tweet ’em at me.)

+ All hail the unsung women of indie sleaze.
+ It’s officially October!
+ This list of sartorial advice has been entertaining us at MIT Technology Review—how many points do you agree with?
+ Put down the expired milk, it’s got a whole lot more to give. 🥛
+ Some top tips for remembering your dreams more fully: should you want to, that is.



Everything you need to know about artificial wombs

The technology would likely be used first on infants born at 22 or 23 weeks who don’t have many other options. “You don’t want to put an infant on this device who would otherwise do well with conventional therapy,” Mychaliska says. At 22 weeks gestation, babies are tiny, often weighing less than a pound. And their lungs are still developing. When researchers looked at babies born between 2013 and 2018, survival among those who were resuscitated at 22 weeks was 30%. That number rose to nearly 56% at 23 weeks. And babies born at that stage who do survive have an increased risk of neurodevelopmental problems, cerebral palsy, mobility problems, hearing impairments, and other disabilities. 

Selecting the right participants will be tricky. Some experts argue that gestational age shouldn’t be the only criterion. One complicating factor is that prognosis varies widely from center to center, and it’s improving as hospitals learn how best to treat these preemies. At the University of Iowa Stead Family Children’s Hospital, for example, survival rates are much higher than average: 64% for babies born at 22 weeks. They’ve even managed to keep a handful of infants born at 21 weeks alive. “These babies are not a hopeless case. They very much can survive. They very much can thrive if you are managing them appropriately,” says Brady Thomas, a neonatologist at Stead. “Are you really going to make that much of a bigger impact by adding in this technology, and what risks might exist to those patients as you’re starting to trial it?”

Prognosis also varies widely from baby to baby depending on a variety of factors. “The girls do better than the boys. The bigger ones do better than the smaller ones,” says Mark Mercurio, a neonatologist and pediatric bioethicist at the Yale School of Medicine. So “how bad does the prognosis with current therapy need to be to justify use of an artificial womb?” That’s a question Mercurio would like to see answered.

What are the risks?

One ever-present concern in the tiniest babies is brain bleeds. “That’s due to a number of factors—a combination of their brain immaturity, and in part associated with the treatment that we provide,” Mychaliska says. Babies in an artificial womb would need to be on a blood thinner to prevent clots from forming where the tubes enter the body. “I believe that places a premature infant at very high risk for brain bleeding,” he says.  

And it’s not just about the baby. To be eligible for EXTEND, infants must be delivered via cesarean section, which puts the pregnant person at higher risk for infection and bleeding. Delivery via a C-section can also have an impact on future pregnancies.  

So if it works, could babies be grown entirely outside the womb?

Not anytime soon. Maybe not ever. In a paper published in 2022, Flake and his colleagues called this scenario “a technically and developmentally naive, yet sensationally speculative, pipe dream.” The problem is twofold. First, fetal development is a carefully choreographed process that relies on chemical communication between the pregnant parent’s body and the fetus. Even if researchers understood all the factors that contribute to fetal development—and they don’t—there’s no guarantee they could recreate those conditions. 

The second issue is size. The artificial womb systems being developed require doctors to insert a small tube into the infant’s umbilical cord to deliver oxygenated blood. The smaller the umbilical cord, the more difficult this becomes.

What are the ethical concerns?

In the near term, there are concerns about how to ensure that researchers are obtaining proper informed consent from parents who may be desperate to save their babies. “This is an issue that comes up with lots of last-chance therapies,” says Vardit Ravitsky, a bioethicist and president of the Hastings Center, a bioethics research institute. 



The Download: brain bandwidth, and artificial wombs



Elon Musk wants more bandwidth between people and machines. Do we need it?

Last week, Elon Musk made the bold assertion that sticking electrodes in people’s heads is going to lead to a huge increase in the rate of data transfer out of, and into, human brains.

The occasion of Musk’s post was the announcement by Neuralink, his brain-computer interface company, that it was officially seeking the first volunteer to receive an implant that contains more than twice as many electrodes as previous versions, to collect more data from more nerve cells.

The entrepreneur mentioned a long-term goal of vastly increasing “bandwidth” between people, or people and machines, by a factor of 1,000 or more. But what does he mean, and is it even possible? Read the full story.

—Antonio Regalado

This story is from The Checkup, MIT Technology Review’s weekly biotech newsletter. Sign up to receive it in your inbox every Thursday.

Everything you need to know about artificial wombs

Earlier this month, US Food and Drug Administration advisors met to discuss how to move research on artificial wombs from animals into humans.

These medical devices are designed to give extremely premature infants a bit more time to develop in a womb-like environment before entering the outside world. They have been tested with hundreds of lambs (and some piglets), but animal models can’t fully predict how the technology will work for humans. 


Copyright © 2021 Seminole Press.