It’s a tricky one. The scientists behind the work argue that there’s nothing really human about these rats. Throughout the study, the team examined the rats to see if those with human cells were any smarter, or experienced more suffering, than rats that didn’t receive organoid transplants. They found no sign of human traits or behaviors.
But the whole point of implanting human cells is to get some insight into what happens in the human brain. So there’s a trade-off here. Essentially, the animals need to represent what happens in humans without becoming too human themselves. And if the rats don’t show any human behaviors, can they really tell us that much about human disease?
“The question is: What percentage of animal cells would be needed in the brain to reduce animal behavior and see a different type of behavior?” asks Jeantine Lunshof, a philosopher and ethicist at the Wyss Institute for Biologically Inspired Engineering at Harvard University.
This raises another question. What would it take for us to accept that an animal is no longer a typical member of its own species? Many of the discussions on this topic focus on moral status. Most people would agree that humans have a greater moral status than other animals—and that it is not acceptable to treat people the same way we treat animals, whether for research or in other contexts.
It can be difficult to pinpoint exactly what it is about us that makes us special, but the consensus is that it has something to do with our brains, which are larger and more complex than those of other animals. It is our brains that allow us to think, feel, dream, rationalize, form social bonds, plan our futures, and, more generally, experience consciousness and self-awareness. Could rodents with human brain cells have these same experiences?
It’s an important question for bioethicists like Julian Koplin at Monash University in Victoria, Australia. “If we’re talking about humanizing the brains of non-human animals … by introducing human brain organoids and allowing them to integrate into the animal brain,” he says, “I think we do need to start thinking about whether this could have any follow-on effect for the moral status of the research animal.”
In the current study, the answer appears to be no. But that doesn’t mean we won’t see “humanized” or “enhanced” rats in the future, according to Koplin and other bioethicists who specialize in this field.
We need to tread carefully.
In this study, scientists put human brain organoids into a region of the rats’ brains that helps them sense their environment. But there’s no reason they couldn’t put the same organoids into regions that play a role in cognition or consciousness—which might make cognitive enhancement more likely.
Then there’s the question of how much of the rat’s brain is made up of human cells. Transplanting bigger organoids might mean that the rat is technically “more human” at the cellular level—but that’s not what’s important. What matters is how, if at all, its mental state changes.
The mental changes aren’t just about how “human” the rats’ mental states become, either. “You might have an animal that thinks in a very different way to the way we do, but is acutely susceptible to suffering, or is really intelligent in ways that are not familiar to us as humans,” says Koplin.
So far, we’ve focused on rats. But what would happen if the organoids were put into baby monkeys instead? Non-human primates have brains that look and work much more like ours, so they’d be better models for studying human disease. But “it does raise the possibility that you will create a humanized primate,” says Julian Savulescu, a bioethicist at the National University of Singapore.
Savulescu is also concerned about cloning. The cells that make up organoids contain a person’s DNA. What would happen if a large chunk of a monkey’s brain were made up of cells with an individual’s genetic code?
“If you were to introduce an advanced organoid into a developing primate, you may well essentially create a clone of an existing person,” he says. “Not only would it be humanized—it would be a clone of somebody that’s already in existence.” This would be the very bottom of an ethical slippery slope, says Savulescu.
There are a lot of questions here, and few definitive answers. No one really knows how to measure moral status, or the point at which animals with human cells become special—or even some kind of new animal.
But the debate provides plenty of food for thought. To read more, check out these articles from Tech Review’s archive:
In this piece from 2016, Antonio Regalado describes researchers’ attempts to grow human organs in pigs and sheep. The aim here is to create new organs for people who need transplants.
A Spanish stem-cell biologist told a reporter that the pope had given his blessing to this kind of research. But the Vatican later disputed the claim and called it “absolutely unfounded.”
A few years later, that same biologist went on to create embryos that are part human and part monkey, as reported by El País. Antonio explained why the research was so controversial.
In this recent piece, Hannah Thomasy explores eight technologies that are helping us understand the mysteries of the human brain and how we form memories.
And you can read more about how our brains make our minds in this piece from Lisa Feldman Barrett, which was featured in last year’s Mind issue.
From around the web
Could an algorithm help people who choose to end their own lives? The founder of this nonprofit thinks so. (MIT Technology Review)
Monkeypox cases have been declining for a couple of months now. But there are several ways things could play out from here. (Nature)
Covid boosters have been approved for children as young as five in the US. (Reuters)
Long covid is an enduring problem. Almost half of those who get sick with covid still haven’t fully recovered months later. (New York Times)
Watch this game of Pong. And then realize that it is being played by brain cells in a dish. (Neuron)
Human creators stand to benefit as AI rewrites the rules of content creation
A game-changer for content creation
Among the AI-related technologies to have emerged in the past several years is generative AI—deep-learning algorithms that allow computers to generate original content, such as text, images, video, audio, and code. And demand for such content will likely jump in the coming years—Gartner predicts that by 2025, generative AI will account for 10% of all data created, compared with 1% in 2022.
“Théâtre D’opéra Spatial” is an example of AI-generated content (AIGC), created with the Midjourney text-to-art generator program. Several other AI-driven art-generating programs have also emerged in 2022, capable of creating paintings from single-line text prompts. The diversity of technologies reflects a wide range of artistic styles and different user demands. DALL-E 2 and Stable Diffusion, for instance, are focused mainly on western-style artwork, while Baidu’s ERNIE-ViLG and Wenxin Yige produce images influenced by Chinese aesthetics. At Baidu’s deep learning developer conference Wave Summit+ 2022, the company announced that Wenxin Yige has been updated with new features, including turning photos into AI-generated art, image editing, and one-click video production.
Meanwhile, AIGC can also include articles, videos, and various other media offerings such as voice synthesis. A technology that generates audible speech indistinguishable from the voice of the original speaker, voice synthesis can be applied in many scenarios, including voice navigation for digital maps. Baidu Maps, for example, allows users to customize its voice navigation to their own voice just by recording nine sentences.
Recent advances in AI technologies have also created generative language models that can fluently compose texts with just one click. They can be used for generating marketing copy, processing documents, extracting summaries, and other text tasks, unlocking creativity that other technologies such as voice synthesis have failed to tap. One of the leading generative language models is Baidu’s ERNIE 3.0, which has been widely applied in various industries such as health care, education, technology, and entertainment.
“In the past year, artificial intelligence has made a great leap and changed its technological direction,” says Robin Li, CEO of Baidu. “Artificial intelligence has gone from understanding pictures and text to generating content.” Going one step further, Baidu App, a popular search and newsfeed app with over 600 million monthly users, including five million content creators, recently released a video editing feature that can produce a short video accompanied by a voiceover created from data provided in an article.
Improving efficiency and growth
As AIGC becomes increasingly common, it could make content creation more efficient by getting rid of repetitive, time-intensive tasks for creators such as sorting out source assets and voice recordings and rendering images. Aspiring filmmakers, for instance, have long had to pay their dues by spending countless hours mastering the complex and tedious process of video editing. AIGC may soon make that unnecessary.
Besides boosting efficiency, AIGC could also drive business growth in content creation amid rising demand for personalized digital content that users can interact with dynamically. InsightSLICE forecasts that the global digital creation market will grow an average of 12% annually between 2020 and 2030, reaching $38.2 billion. With content consumption fast outpacing production, traditional development methods will likely struggle to meet such increasing demand, creating a gap that could be filled by AIGC. “AI has the potential to meet this massive demand for content at a tenth of the cost and a hundred times or thousands of times faster in the next decade,” Li says.
AI with humanity as its foundation
AIGC can also serve as an educational tool by helping children develop their creativity. StoryDrawer, for instance, is an AI-driven program designed to boost children’s creative thinking, which often declines as the focus in their education shifts to rote learning.
The Download: the West’s AI myth, and Musk v Apple
While the US and the EU may differ on how to regulate tech, their lawmakers seem to agree on one thing: the West needs to ban AI-powered social scoring.
As they understand it, social scoring is a practice in which authoritarian governments—specifically China—rank people’s trustworthiness and punish them for undesirable behaviors, such as stealing or not paying back loans. Essentially, it’s seen as a dystopian superscore assigned to each citizen.
The reality? While there have been some contentious local experiments with social credit scores in China, there is no countrywide, all-seeing social credit system with algorithms that rank people.
The irony is that while US and European politicians try to ban systems that don’t really exist, systems that do rank and penalize people are already in place in the West—and are denying people housing and jobs in the process. Read the full story.
Melissa’s story is from The Algorithm, her weekly AI newsletter covering all of the industry’s most interesting developments. Sign up to receive it in your inbox every Monday.
I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.
1 Apple has reportedly threatened to pull Twitter from the App Store
According to Elon Musk. (NYT $)
+ Musk has threatened to “go to war” with the company after it decided to stop advertising on Twitter. (WP $)
+ Apple’s reluctance to advertise on Twitter right now isn’t exactly unique. (Motherboard)
+ Twitter’s child protection team in Asia has been gutted. (Wired $)
2 Another crypto firm has collapsed
Lender BlockFi has filed for bankruptcy, and is (partly) blaming FTX. (WSJ $)
+ The company is suing FTX founder Sam Bankman-Fried. (FT $)
+ It looks like the much-feared “crypto contagion” is spreading. (NYT $)
3 AI is rapidly becoming more powerful—and dangerous
That’s particularly worrying when its growth is too much for safety teams to handle. (Vox)
+ Do AI systems need to come with safety warnings? (MIT Technology Review)
+ This AI chat-room game is gaining a legion of fans. (The Guardian)
4 A Pegasus spyware investigation is in danger of being compromised
It’s the target of a disinformation campaign, security experts have warned. (The Guardian)
+ Cyber insurance won’t protect you from theft of your data. (The Guardian)
5 Google gave the FBI geofence data for its January 6 investigation
Google identified more than 5,000 devices near the US Capitol during the riot. (Wired $)
6 Monkeypox isn’t going anywhere
But it’s not on the rise, either. (The Atlantic $)
+ The World Health Organization says it will now be known as mpox. (BBC)
+ Everything you need to know about the monkeypox vaccines. (MIT Technology Review)
7 What it’s like to be the unwitting face of a romance scam
James Scott Geras’ pictures have been used to catfish countless women. (Motherboard)
What’s next in cybersecurity
One of the reasons cyber hasn’t played a bigger role in the war, according to Carhart, is because “in the whole conflict, we saw Russia being underprepared for things and not having a good game plan. So it’s not really surprising that we see that as well in the cyber domain.”
Moreover, Ukraine, under the leadership of Zhora and his cybersecurity agency, has been working on its cyber defenses for years, and it has received support from the international community since the war started, according to experts. Finally, an interesting twist in the online conflict between Russia and Ukraine has been the rise of the decentralized, international cyber coalition known as the IT Army, which has scored some significant hacks, showing that future wars can also be fought by hacktivists.
Ransomware runs rampant again
This year, beyond the usual corporations, hospitals, and schools, government agencies in Costa Rica, Montenegro, and Albania all suffered damaging ransomware attacks too. In Costa Rica, the government declared a national emergency, the first time a country has done so in response to a ransomware attack. And in Albania, the government expelled Iranian diplomats from the country following a destructive cyberattack, a first in the history of cybersecurity.
These types of attacks were at an all-time high in 2022, a trend that will likely continue next year, according to Allan Liska, a researcher who focuses on ransomware at cybersecurity firm Recorded Future.
“[Ransomware is] not just a technical problem like an information stealer or other commodity malware. There are real-world, geopolitical implications,” he says. In the past, for example, a North Korean ransomware called WannaCry caused severe disruption to the UK’s National Health Service and hit an estimated 230,000 computers worldwide.
Luckily, it’s not all bad news on the ransomware front. According to Liska, there are some early signs that point to “the death of the ransomware-as-a-service model,” in which ransomware gangs lease out hacking tools. The main reason, he said, is that whenever a gang gets too big, “something bad happens to them.”
For example, the ransomware groups REvil and DarkSide/BlackMatter were hit by governments; Conti, a Russian ransomware gang, unraveled internally when a Ukrainian researcher appalled by Conti’s public support of the war leaked internal chats; and the LockBit crew also suffered the leak of its code.
“We are seeing a lot of the affiliates deciding that maybe I don’t want to be part of a big ransomware group, because they all have targets on their back, which means that I might have a target on my back, and I just want to carry out my cybercrime,” Liska says.