This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.
AI literacy might be ChatGPT’s biggest lesson for schools
This year millions of people have tried—and been wowed by—artificial intelligence systems. That’s in no small part thanks to OpenAI’s chatbot ChatGPT.
When it launched last November, the chatbot became an instant hit among students, many of whom started using it to write essays and homework. Alarmed by an influx of AI-generated essays, schools around the world moved swiftly to ban the use of the technology.
But there’s an unexpected upside: ChatGPT has forced schools to quickly adapt and start teaching kids an ad hoc curriculum of AI 101. The big hope is that educators and policymakers will realize just how important it is to teach the next generation critical thinking skills around AI. Read the full story.
—Melissa Heikkilä
Melissa’s story is from The Algorithm, her weekly AI newsletter. Sign up to receive it in your inbox every Monday.
Read more about AI:
+ ChatGPT is about to revolutionize the economy. We need to decide what that looks like. New large language models will transform many jobs. Whether they will lead to widespread prosperity or not is up to us. Read the full story.
+ We are hurtling toward a glitchy, spammy, scammy, AI-powered internet. Large language models are full of security vulnerabilities, yet they’re being embedded into tech products on a vast scale. Read the full story.
+ What if we could just ask AI to be less biased? Instead of making the training data less biased, researchers are experimenting with simply asking the model to give you less biased answers. Read the full story.
Podcast: Concerning AI ethics
The best definitions of AI are vague, largely lack consensus, and represent a huge challenge for lawmakers and legal scholars looking to regulate it. But back-to-back breakthroughs and the rapid adoption of generative AI tools are making it feel a lot more real to everybody else.
The latest episode of our podcast, In Machines We Trust, digs into the ethics of such tools, and what they could mean for the future of legal decisions. Listen to it on Apple Podcasts or wherever you get your podcasts.
The must-reads
I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.
1 Elon Musk is working on a Twitter AI project
Despite recently joining a call for an industry-wide halt to AI training. (Insider $)
+ Twitter technically no longer exists—it’s merged with Musk’s X Corp. (Bloomberg $)
+ Musk joked that his dog is in charge of Twitter. (WP $)
+ We’re witnessing the brain death of Twitter. (MIT Technology Review)
2 China is attempting to manipulate its covid legacy
Its officials are withholding data and censoring dissident voices. (WSJ $)
3 Bitcoin is on the rise again
And market manipulation could be the root cause. (The Guardian)
+ El Salvador’s bitcoin holdings are still way, way down, though. (Bloomberg $)
+ Crypto regulation is on the agenda for the next G7 summit. (Reuters)
4 Secret Pentagon intelligence was leaked by a meme group
Authorities are racing to work out how the classified documents were procured. (NYT $)
+ They contain intel collected by the NSA and CIA, among other agencies. (NY Mag $)
5 We’re learning more about dark matter
Researchers have managed to map it in unprecedented detail. (BBC)
6 Abortion pills are perfectly safe
Despite what some pro-life groups would have you believe. (Vox)
7 What the rise of generative AI means for porn
It’s becoming increasingly easy to create erotic images that people are willing to pay for. (WP $)
+ Even AI has trouble spotting whether pictures are AI-generated. (WSJ $)
+ AI music is infiltrating streaming services. (FT $)
+ ChatGPT is fueling a new wave of spam on Reddit. (Motherboard)
+ The viral AI avatar app Lensa undressed me—without my consent. (MIT Technology Review)
8 Underground wells are the new batteries
They’re surprisingly good at storing thermal energy. (Wired $)
+ This geothermal startup showed its wells can be used like a giant underground battery. (MIT Technology Review)
9 Why Big Tech’s platforms are so hard to replace
Despite Twitter’s wild last six months, users are still logging on. (NPR)
10 TikTok’s latest craze? Water
Watertokers are turning to elaborate syrup concoctions to up their daily H2O intake. (Fast Company $)
Quote of the day
“I wish I could just shoot down these programs.”
—An anonymous video game artist living in China vents her frustration to Rest of World about image-generating AI models, which are forcing human workers to put in extra-long hours to compete.
This artist is dominating AI-generated art. And he’s not happy about it.
September 2022
Greg Rutkowski is a Polish digital artist who uses classical styles to create dreamy landscapes. His distinctive style has been used in some of the world’s most popular fantasy games, including Dungeons and Dragons and Magic: The Gathering.
Now he’s become a hit in the new world of text-to-image AI generation. His name is one of the most commonly used prompts in the open-source AI art generator Stable Diffusion.
But this and other open-source programs are built by scraping images from the internet, often without permission or proper attribution to artists. As a result, they are raising tricky questions about ethics and copyright. And artists like Rutkowski have had enough. Read the full story.
+ It’s the little things in life that make a difference: here’s how the experts do it.
+ All hail Cloudflare’s wall of lava lamps!
+ I’m not sure about this pixelated hoodie—yours for just $2,500.
+ Forget everything you know, a rainbow is not actually an arch at all: it’s a circle.
+ The world’s deepest living fish has a sweet lil face.
Matt Kaeberlein is what you might call a dog person. He has grown up with dogs and describes his German shepherd, Dobby, as “really special.” But Dobby is 14 years old—around 98 in dog years.
Kaeberlein is co-director of the Dog Aging Project, an ambitious research effort to track the aging process of tens of thousands of companion dogs across the US. He is one of a handful of scientists on a mission to improve, delay, and possibly reverse that process to help them live longer, healthier lives.
And dogs are just the beginning. One day, this research could help to prolong the lives of humans. Read the full story.
—Jessica Hamzelou
We can still have nice things
A place for comfort, fun and distraction in these weird times. (Got any ideas? Drop me a line or tweet ’em at me.)
+ All hail the unsung women of indie sleaze.
+ It’s officially October!
+ This list of sartorial advice has been entertaining us at MIT Technology Review—how many points do you agree with?
+ Put down the expired milk, it’s got a whole lot more to give. 🥛
+ Some top tips for remembering your dreams more fully: should you want to, that is.
The technology would likely be used first on infants born at 22 or 23 weeks who don’t have many other options. “You don’t want to put an infant on this device who would otherwise do well with conventional therapy,” Mychaliska says.

At 22 weeks gestation, babies are tiny, often weighing less than a pound. And their lungs are still developing. When researchers looked at babies born between 2013 and 2018, survival among those who were resuscitated at 22 weeks was 30%. That number rose to nearly 56% at 23 weeks. And babies born at that stage who do survive have an increased risk of neurodevelopmental problems, cerebral palsy, mobility problems, hearing impairments, and other disabilities.
Selecting the right participants will be tricky. Some experts argue that gestational age shouldn’t be the only criterion. One complicating factor is that prognosis varies widely from center to center, and it’s improving as hospitals learn how best to treat these preemies. At the University of Iowa Stead Family Children’s Hospital, for example, survival rates are much higher than average: 64% for babies born at 22 weeks. They’ve even managed to keep a handful of infants born at 21 weeks alive. “These babies are not a hopeless case. They very much can survive. They very much can thrive if you are managing them appropriately,” says Brady Thomas, a neonatologist at Stead. “Are you really going to make that much of a bigger impact by adding in this technology, and what risks might exist to those patients as you’re starting to trial it?”
Prognosis also varies widely from baby to baby depending on a variety of factors. “The girls do better than the boys. The bigger ones do better than the smaller ones,” says Mark Mercurio, a neonatologist and pediatric bioethicist at the Yale School of Medicine. So “how bad does the prognosis with current therapy need to be to justify use of an artificial womb?” That’s a question Mercurio would like to see answered.
What are the risks?
One ever-present concern in the tiniest babies is brain bleeds. “That’s due to a number of factors—a combination of their brain immaturity, and in part associated with the treatment that we provide,” Mychaliska says. Babies in an artificial womb would need to be on a blood thinner to prevent clots from forming where the tubes enter the body. “I believe that places a premature infant at very high risk for brain bleeding,” he says.
And it’s not just about the baby. To be eligible for EXTEND, infants must be delivered via cesarean section, which puts the pregnant person at higher risk of infection and bleeding. Delivery via C-section can also affect future pregnancies.
So if it works, could babies be grown entirely outside the womb?
Not anytime soon. Maybe not ever. In a paper published in 2022, Flake and his colleagues called this scenario “a technically and developmentally naive, yet sensationally speculative, pipe dream.” The problem is twofold. First, fetal development is a carefully choreographed process that relies on chemical communication between the pregnant parent’s body and the fetus. Even if researchers understood all the factors that contribute to fetal development—and they don’t—there’s no guarantee they could recreate those conditions.
The second issue is size. The artificial womb systems being developed require doctors to insert a small tube into the infant’s umbilical cord to deliver oxygenated blood. The smaller the umbilical cord, the more difficult this becomes.
What are the ethical concerns?
In the near term, there are concerns about how to ensure that researchers are obtaining proper informed consent from parents who may be desperate to save their babies. “This is an issue that comes up with lots of last-chance therapies,” says Vardit Ravitsky, a bioethicist and president of the Hastings Center, a bioethics research institute.
Last week, Elon Musk made the bold assertion that sticking electrodes in people’s heads is going to lead to a huge increase in the rate of data transfer out of, and into, human brains.
The occasion of Musk’s post was the announcement by Neuralink, his brain-computer interface company, that it was officially seeking the first volunteer to receive an implant containing more than twice as many electrodes as previous versions, in order to collect more data from more nerve cells.
The entrepreneur mentioned a long-term goal of vastly increasing “bandwidth” between people, or people and machines, by a factor of 1,000 or more. But what does he mean, and is it even possible? Read the full story.
—Antonio Regalado
This story is from The Checkup, MIT Technology Review’s weekly biotech newsletter. Sign up to receive it in your inbox every Thursday.
Everything you need to know about artificial wombs
Earlier this month, US Food and Drug Administration advisors met to discuss how to move research on artificial wombs from animals into humans.
These medical devices are designed to give extremely premature infants a bit more time to develop in a womb-like environment before entering the outside world. They have been tested with hundreds of lambs (and some piglets), but animal models can’t fully predict how the technology will work for humans.