So what is making this golden age for sample return missions possible? The launches are cheaper, for one, as is the hardware used to build the probes and landers. Instruments like spectrometers, which can identify the presence of different elements and compounds, are smaller, more resilient, and use much less power. The autonomous technology used to navigate these worlds has also improved tremendously—OSIRIS-REx in particular benefited from its onboard natural feature tracking (NFT) system, which mapped the surface in real time to keep the probe clear of Bennu’s hazardous boulders. NFT is poised to help future robotic missions run smoothly and safely, sample return or otherwise.
Engineers are also coming up with novel ideas for how to actually collect and store these samples. Perseverance is going old-school with a drill kit to gather intact cores of rock from the ground. OSIRIS-REx came up with a pogo-stick-like “touch and go” collection system that brought the spacecraft down for a few-second hop off Bennu and used compressed air to waft small rubble into the collection container. Hayabusa2 literally shot bullets into Ryugu. MMX will use simple pneumatics to collect sandy material off Phobos.
For a Venus mission, scientists have been considering a spacecraft that can dip into the atmosphere and bottle up some gas. Cryogenic technologies will enable better storage of extraterrestrial volatiles—frozen compounds, such as water ice, that vaporize readily when warmed. Basically, every world has a unique environment and set of circumstances that dictate the best approach for sample collection, and our technologies are finally at the point where sampling methods that once seemed too difficult to attempt are reasonable to pull off.
These aren’t investigations you can do with just a probe on the ground. There is simply no substitute for the kinds of analyses you can run with laboratory equipment here on Earth. Say we found evidence of DNA on Mars—Perseverance has no way to sequence it, and so far there’s no way any Martian probe could be fitted with the necessary equipment to do so. If we wanted to study rock samples to understand the history of Mars’s magnetic field, a rover just doesn’t have the ability to run those sorts of tests.
From paper to practice
So how exactly does a sample return mission go from idea to execution? “For a sample return mission, it’s about accessibility to get there and accessibility to come back,” says Richard Binzel, an MIT astronomer and co-investigator of OSIRIS-REx.
Certain destinations like the moon and Mars have always been at the forefront of planetary scientists’ minds, especially as we’ve learned more about the history of water on both bodies. But beyond these places, sample returns are harder to justify.
In Binzel’s view, sample returns are still too difficult to pull off for all but the most important questions. These revolve around the origins of the solar system and of the chemistry that led to life on Earth. “How far back can we go and get a time capsule of the beginning of everything that is the Earth, and us?” he says. “It’s all about volatiles.” In the context of planetary science, this can mean water ice, nitrogen, carbon dioxide, ammonia, hydrogen, methane, or sulfur dioxide—the ingredients for life. If there are no volatiles—and therefore no indication that a world was once habitable or still might be—a sample return mission seems highly unlikely.
Once the target is selected, however, the engineers take over to figure out how best to collect the sample and bring it back. From there, the scientists must simply play the cards they’re dealt and hope the material that comes back is suitable enough to study.
The payoffs can be huge. Between 1969 and 1972, Apollo astronauts brought back 842 pounds of moon rocks. Over 50 years later, people are still studying them and publishing papers detailing new insights. “We’re reanalyzing and remeasuring and using newly developed techniques to look at the samples, and coming up with new questions,” says Bosak. “It’s the gift that keeps on giving.”
The fact that these samples can be passed down from generation to generation, letting future scientists use new technologies and insights to narrow down their investigations and pursue questions no one has yet thought of, means there’s a powerful legacy that’s worth going after. When Perseverance descends to Mars and visits Jezero Crater this month, it will be collecting material that scientists on Earth will study for decades—perhaps hundreds of years.
A new training model, dubbed “KnowNo,” aims to address this problem by teaching robots to ask for our help when orders are unclear. At the same time, it ensures they seek clarification only when necessary, minimizing needless back-and-forth. The result is a smart assistant that tries to make sure it understands what you want without bothering you too much.
Andy Zeng, a research scientist at Google DeepMind who helped develop the new technique, says that while robots can be powerful in many specific scenarios, they are often bad at generalized tasks that require common sense.
For example, when asked to bring you a Coke, the robot first needs to understand that it has to go into the kitchen, look for the refrigerator, and open the fridge door. Conventionally, these smaller substeps had to be manually programmed, because otherwise the robot would not know that people usually keep their drinks in the kitchen.
That’s something large language models (LLMs) could help to fix, because they have a lot of common-sense knowledge baked in, says Zeng.
Now when the robot is asked to bring a Coke, an LLM, which has a generalized understanding of the world, can generate a step-by-step guide for the robot to follow.
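As a rough illustration of that planning step, the sketch below shows how a system might hand a vague request to a language model and get back a list of substeps. This is a minimal, hypothetical example: the ask_llm function is a stand-in for whatever LLM interface a real robot stack would use, not any particular product’s API.

```python
# Minimal, hypothetical sketch of LLM-based task decomposition for a robot.
# ask_llm() is a placeholder for a real language-model call, not an actual API.

def ask_llm(prompt: str) -> str:
    """Stand-in for a call to a large language model."""
    raise NotImplementedError("connect this to a real LLM service")

def plan_steps(request: str) -> list[str]:
    """Ask the LLM to break a high-level request into concrete substeps."""
    prompt = (
        "You control a home robot. List the short, concrete actions needed to "
        f"fulfill this request, one per line.\nRequest: {request}\nActions:"
    )
    reply = ask_llm(prompt)
    # For "bring me a Coke", the reply might contain lines like
    # "go to the kitchen", "open the refrigerator", "pick up a Coke", "return to the user".
    return [line.strip("- ").strip() for line in reply.splitlines() if line.strip()]
```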
The problem with LLMs, though, is that there’s no way to guarantee that their instructions are possible for the robot to execute. Maybe the person doesn’t have a refrigerator in the kitchen, or the fridge door handle is broken. In these situations, robots need to ask humans for help.
KnowNo makes that possible by combining large language models with statistical tools that quantify confidence levels.
When given an ambiguous instruction like “Put the bowl in the microwave,” KnowNo first generates multiple possible next actions using the language model. Then it creates a confidence score predicting the likelihood that each potential choice is the best one.
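That two-step description (generate candidate actions, then score them) maps onto a simple decision rule: act on your own only when a single option is confident enough, and otherwise ask. The sketch below is a loose illustration of that idea rather than the authors’ implementation; score_actions is an assumed placeholder for the LLM scoring step, and in the KnowNo work the confidence threshold is calibrated statistically (via conformal prediction) rather than hand-picked as it is here.

```python
# Illustrative sketch of a KnowNo-style "ask only when unsure" rule.
# score_actions() is an assumed placeholder for the LLM confidence-scoring step,
# and the threshold below is hand-picked; KnowNo calibrates it statistically.

def score_actions(instruction: str, candidates: list[str]) -> dict[str, float]:
    """Placeholder: return a confidence score in [0, 1] for each candidate action."""
    raise NotImplementedError("replace with LLM-based scoring")

def choose_or_ask(instruction: str, candidates: list[str], threshold: float = 0.8) -> str:
    scores = score_actions(instruction, candidates)
    # Keep every action the model considers plausible enough.
    plausible = [a for a, s in scores.items() if s >= threshold]

    if len(plausible) == 1:
        return plausible[0]  # confident in a single action: proceed without asking

    # Ambiguous (or nothing plausible): ask the human to pick instead of guessing.
    options = plausible or candidates
    print(f"I'm not sure what you meant by {instruction!r}. Did you mean:")
    for i, action in enumerate(options, start=1):
        print(f"  {i}. {action}")
    choice = int(input("Enter a number: "))
    return options[choice - 1]
```

For the bowl example above, the candidates might be “put the metal bowl in the microwave” and “put the plastic bowl in the microwave”; if neither clearly wins, the robot asks which bowl you meant instead of guessing.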
Medical microrobots that travel inside the body are (still) on their way
The human body is a labyrinth of vessels and tubing, full of barriers that are difficult to break through. That poses a serious hurdle for doctors. Illness is often caused by problems that are hard to visualize and difficult to access. But imagine if we could deploy armies of tiny robots into the body to do the job for us. They could break up hard-to-reach clots, deliver drugs to the most inaccessible tumors, and even help guide embryos toward implantation.
We’ve been hearing about the use of tiny robots in medicine for years, maybe even decades. And they’re still not here. But experts are adamant that medical microrobots are finally coming, and that they could be a game changer for a number of serious diseases. Read the full story.
We haven’t always been right (RIP, Baxter), but we’ve often been early to spot important areas of progress (we put natural-language processing on our very first list in 2001; today this technology underpins large language models and generative AI tools like ChatGPT).
Every year, our reporters and editors nominate technologies that they think deserve a spot, and we spend weeks debating which ones should make the cut. Here are some of the technologies we didn’t pick this time—and why we’ve left them off, for now.
New drugs for Alzheimer’s disease
Alzheimer’s patients have long lacked treatment options. Several new drugs have now been shown to slow cognitive decline, albeit modestly, by clearing out harmful plaques in the brain. In July, the FDA approved Leqembi by Eisai and Biogen, and Eli Lilly’s donanemab could soon be next. But the drugs come with serious side effects, including brain swelling and bleeding, which can be fatal in some cases. Plus, they’re hard to administer—patients receive doses via an IV and must undergo regular MRIs to check for brain swelling. These drawbacks gave us pause.
Sustainable aviation fuel
Alternative jet fuels made from cooking oil, leftover animal fats, or agricultural waste could reduce emissions from flying. They have been in development for years, and scientists are making steady progress, with several recent demonstration flights. But production and use will need to ramp up significantly for these fuels to make a meaningful climate impact. While they do look promising, there wasn’t a key moment or “breakthrough” that merited a spot for sustainable aviation fuels on this year’s list.
Solar geoengineering
One way to counteract global warming could be to release particles into the stratosphere that reflect the sun’s energy and cool the planet. That idea is highly controversial within the scientific community, but a few researchers and companies have begun exploring whether it’s possible by launching a series of small-scale, high-altitude tests. One such launch prompted Mexico to ban solar geoengineering experiments earlier this year. It’s not really clear where geoengineering will go from here or whether these early efforts will stall out. Amid that uncertainty, we decided to hold off for now.