This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.
What is death?
Just as birth certificates note the time we enter the world, death certificates mark the moment we exit it. This practice reflects traditional notions about life and death as binaries. We are here until, suddenly, like a light switched off, we are gone.
But while this idea of death is pervasive, evidence is building that it is an outdated social construct, not really grounded in biology. Dying is in fact a process—one with no clear point demarcating the threshold across which someone cannot come back.
Scientists and many doctors have already embraced this more nuanced understanding of death. And as society catches up, the implications for the living could be profound. Read the full story.
—Rachel Nuwer
‘What is death?’ is part of our mini-series The Biggest Questions, which explores how technology is helping probe some of the deepest, most mind-bending mysteries of our existence.
Read more:
+ Why is the universe so complex and beautiful? For some reason the universe is full of stars, galaxies, and life. But it didn’t have to be this way. Read the full story.
+ How did life begin? AI is helping chemists unpick the mysteries around the origins of life and detect signs of it on other worlds. Read the full story.
+ Are we alone in the universe? Scientists are training machine-learning models and designing instruments to hunt for life on other worlds. Read the full story.
+ Is it possible to really understand someone else’s mind? How we think, feel and experience the world is a mystery to everyone but us. But technology may be starting to help us understand the minds of others. Read the full story.
Text-to-image AI models can be tricked into generating disturbing images
What’s happened: Popular text-to-image AI models can be prompted to ignore their safety filters and generate disturbing images. A group of researchers managed to get two popular text-to-image models, Stability AI’s Stable Diffusion and OpenAI’s DALL-E 2, to disregard their policies and create images of naked people, dismembered bodies, and other violent and sexual scenarios.
How they did it: This new jailbreaking method, called “SneakyPrompt,” uses reinforcement learning to craft written prompts that look like garbled nonsense to us but that the models recognize as hidden requests for disturbing images. It essentially works by turning the way text-to-image AI models function against them.
Why it matters: The research highlights the vulnerability of existing AI safety filters and should serve as a wake-up call for the AI community to bolster security measures across the board, experts say. It also demonstrates how difficult it is to prevent these models from generating such content, as it’s included in the vast troves of data they’ve been trained on. Read the full story.
—Rhiannon Williams
The pain is real. The painkillers are virtual reality.
Plenty of children—and adults—hate needles. But virtual reality could help. Smileyscope, a device for kids that recently received FDA clearance, lessens the pain of a blood draw or IV insertion by sending the user on an underwater adventure. Inside this watery deep-sea reality, the swipe of an alcohol wipe becomes cool waves washing over the arm, and the pinch of the needle becomes a gentle fish nibble.
But how Smileyscope works is not entirely clear. It’s more complex than just distraction, and not all stimuli are equally effective. But the promise of VR has led companies to work on devices to address a much tougher problem: chronic pain. Read the full story.
—Cassandra Willyard
This story is from The Checkup, our weekly health and biotech newsletter. Sign up to receive it in your inbox every Thursday.
The must-reads
I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.
1 Elon Musk endorsed an antisemitic post on X
Leaving its executives racing to suppress the damage. (NYT $)
+ IBM has pulled its ads from X after they appeared next to antisemitic posts. (WP $)
+ Musk’s comments are resonating with the far right, unsurprisingly. (Motherboard)

2 Osama bin Laden’s “Letter to America” has exploded on social media
Videos of American users endorsing parts of the 9/11 manifesto have gone viral. (WP $)
+ TikTok says it’s aggressively working to remove the clips. (NYT $)
+ The Guardian newspaper has deleted its version of the letter from its site. (404 Media)

3 SpaceX has pushed back its giant rocket launch
A component in need of replacement has delayed the launch until Saturday. (Ars Technica)

4 The first CRISPR medicine has been approved in the UK
The treatment, called Casgevy, edits the cells of people with sickle cell disease before infusing them back in. (Wired $)
+ Remarkably, the therapy effectively cures the disease. (New Scientist $)
+ Here’s how CRISPR is changing lives. (MIT Technology Review)

5 Data broker LexisNexis sold surveillance tools to US border enforcement
Social media oversight, face recognition, and geolocation data, among others. (The Intercept)

6 OpenAI has steamrollered the AI industry
And startup founders are struggling to avoid becoming roadkill. (Insider $)
+ Google has delayed releasing its OpenAI-challenging Gemini system. (The Information $)
+ Inside the mind of OpenAI’s chief scientist. (MIT Technology Review)

7 Climate-proofing our homes is a nightmare
Extreme weather events are on the rise—and our homes are vulnerable. (The Verge)
+ The quest to build wildfire-resistant homes. (MIT Technology Review)

8 Vietnamese immigrants rely on YouTube for their news
Even when it’s not always clear if that news is from reliable sources. (The Markup)

9 Reddit is the best place for product reviews now
Fake reviews and SEO-bait lists aren’t helpful. Honest assessments from real people are. (Vox)

10 Meet the inventor of the lickable TV
Net-licks and chill? (The Guardian)
Quote of the day
“We fly, we break some things, we learn some things, and then we go back and fly again.”
—William Gerstenmaier, the vice president of build and flight reliability at SpaceX, explains the company’s approach to inevitable rocket launch setbacks to Bloomberg.
The big story
Responsible AI has a burnout problem
October 2022
Margaret Mitchell had been working at Google for two years before she realized she needed a break. Only after she spoke with a therapist did she understand the problem: she was burnt out.
Mitchell, who now works as chief ethics scientist at the AI startup Hugging Face, is far from alone in her experience. Burnout is becoming increasingly common in responsible AI teams.
All the practitioners MIT Technology Review interviewed spoke enthusiastically about their work: it is fueled by passion, a sense of urgency, and the satisfaction of building solutions for real problems. But that sense of mission can be overwhelming without the right support. Read the full story.
+ The adorable tale of how this couple met will warm your heart.
+ Why sex in space is such a tricky business.
+ Let it go—why Frozen’s legacy refuses to die.
+ Why not treat yourself to a White Toreador tequila cocktail this weekend?
+ Uncanny Valley makeup is the stuff of nightmares, quite frankly.
A new model teaches robots to ask for help when orders are unclear
A new training model, dubbed “KnowNo,” teaches robots to ask for our help when orders are unclear. At the same time, it ensures they seek clarification only when necessary, minimizing needless back-and-forth. The result is a smart assistant that tries to make sure it understands what you want without bothering you too much.
Andy Zeng, a research scientist at Google DeepMind who helped develop the new technique, says that while robots can be powerful in many specific scenarios, they are often bad at generalized tasks that require common sense.
For example, when asked to bring you a Coke, the robot first needs to understand that it has to go to the kitchen, look for the refrigerator, and open the fridge door. Conventionally, these smaller substeps had to be manually programmed, because otherwise the robot would not know that people usually keep their drinks in the kitchen.
That’s something large language models (LLMs) could help to fix, because they have a lot of common-sense knowledge baked in, says Zeng.
Now when the robot is asked to bring a Coke, an LLM, which has a generalized understanding of the world, can generate a step-by-step guide for the robot to follow.
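As a rough illustration of that idea, here is a minimal Python sketch of the kind of planning prompt such a system might use. The prompt wording, the skill names, and the hard-coded reply are all invented for this example; they are not drawn from Google DeepMind’s actual code, which would call a real language model and map each step onto an executable robot skill.

```python
# Illustrative sketch only: the prompt format, skill names, and the
# canned reply are invented. A real robot stack would send `prompt`
# to an actual LLM and parse its answer into executable skills.

def make_planning_prompt(task: str, skills: list[str]) -> str:
    """Build a prompt asking an LLM to decompose a task into substeps."""
    return (
        f"You control a home robot with these skills: {', '.join(skills)}.\n"
        f"Task: {task}\n"
        "List the steps to complete the task, one per line."
    )

prompt = make_planning_prompt(
    "bring the user a Coke",
    ["go to <room>", "open <object>", "pick up <object>", "hand <object> to user"],
)
print(prompt)

# A plausible response from the model, hard-coded here for illustration:
print("1. go to kitchen\n2. open fridge\n3. pick up Coke\n4. hand Coke to user")
```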
The problem with LLMs, though, is that there’s no way to guarantee that their instructions are possible for the robot to execute. Maybe the person doesn’t have a refrigerator in the kitchen, or the fridge door handle is broken. In these situations, robots need to ask humans for help.
KnowNo makes that possible by combining large language models with statistical tools that quantify confidence levels.
When given an ambiguous instruction like “Put the bowl in the microwave,” KnowNo first generates multiple possible next actions using the language model. Then it creates a confidence score predicting the likelihood that each potential choice is the best one.
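To make that mechanism concrete, here is a minimal Python sketch of the decide-or-ask step. The candidate actions, scores, and threshold below are invented; the published KnowNo method calibrates its cutoff statistically with conformal prediction rather than using a fixed number, so treat this purely as an illustration of the logic.

```python
# Toy version of KnowNo's decide-or-ask step. All numbers and the
# score_actions stub are invented; the real system scores candidates
# with an LLM and calibrates the cutoff via conformal prediction.

def score_actions(instruction: str, candidates: list[str]) -> list[float]:
    """Stand-in for LLM-derived confidence scores over candidate actions."""
    # Hypothetical scores for "Put the bowl in the microwave" when both
    # a metal and a plastic bowl are visible: genuinely ambiguous.
    return [0.48, 0.45, 0.07]

def plan_or_ask(instruction: str, candidates: list[str], threshold: float = 0.3) -> str:
    scores = score_actions(instruction, candidates)
    # Keep every action whose score clears the threshold.
    plausible = [a for a, s in zip(candidates, scores) if s >= threshold]
    if len(plausible) == 1:
        return f"Execute: {plausible[0]}"  # confident enough to act alone
    if not plausible:
        return "Ask the human: I'm not sure any of my options are right."
    # More than one plausible action: ask for clarification, don't guess.
    return "Ask the human: did you mean to " + " or ".join(plausible) + "?"

candidates = [
    "put the metal bowl in the microwave",
    "put the plastic bowl in the microwave",
    "put the cup in the microwave",
]
print(plan_or_ask("Put the bowl in the microwave", candidates))
```

Because two bowls score above the cutoff here, the sketch asks for clarification; had one action dominated, it would simply act, which is the behavior the researchers describe.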
Read the full story.
—June Kim
Medical microrobots that travel inside the body are (still) on their way
The human body is a labyrinth of vessels and tubing, full of barriers that are difficult to break through. That poses a serious hurdle for doctors. Illness is often caused by problems that are hard to visualize and difficult to access. But imagine if we could deploy armies of tiny robots into the body to do the job for us. They could break up hard-to-reach clots, deliver drugs to even the most inaccessible tumors, and even help guide embryos toward implantation.
We’ve been hearing about the use of tiny robots in medicine for years, maybe even decades. And they’re still not here. But experts are adamant that medical microrobots are finally coming, and that they could be a game changer for a number of serious diseases. Read the full story.
We haven’t always been right (RIP, Baxter), but we’ve often been early to spot important areas of progress (we put natural-language processing on our very first list in 2001; today this technology underpins large language models and generative AI tools like ChatGPT).
Every year, our reporters and editors nominate technologies that they think deserve a spot, and we spend weeks debating which ones should make the cut. Here are some of the technologies we didn’t pick this time—and why we’ve left them off, for now.
New drugs for Alzheimer’s disease
Patients with Alzheimer’s disease have long lacked treatment options. Several new drugs have now been shown to slow cognitive decline, albeit modestly, by clearing harmful plaques from the brain. In July, the FDA approved Leqembi, from Eisai and Biogen, and Eli Lilly’s donanemab could soon be next. But the drugs come with serious side effects, including brain swelling and bleeding, which in some cases can be fatal. They’re also hard to administer: patients receive doses via IV and must undergo regular MRIs to check for brain swelling. These drawbacks gave us pause.
Sustainable aviation fuel
Alternative jet fuels made from cooking oil, leftover animal fats, or agricultural waste could reduce emissions from flying. They have been in development for years, and scientists are making steady progress, with several recent demonstration flights. But production and use will need to ramp up significantly for these fuels to make a meaningful climate impact. While they do look promising, there wasn’t a key moment or “breakthrough” that merited a spot for sustainable aviation fuels on this year’s list.
Solar geoengineering
One way to counteract global warming could be to release particles into the stratosphere that reflect the sun’s energy and cool the planet. That idea is highly controversial within the scientific community, but a few researchers and companies have begun exploring whether it’s possible by launching a series of small-scale high-flying tests. One such launch prompted Mexico to ban solar geoengineering experiments earlier this year. It’s not really clear where geoengineering will go from here or whether these early efforts will stall out. Amid that uncertainty, we decided to hold off for now.