The letter, which was organized by the Stanford University microbiologist David Relman and the University of Washington virologist Jesse Bloom, takes aim at a recent joint study of covid origins undertaken by the World Health Organization and China, which concluded that a bat virus likely reached humans via an intermediate animal and that a lab accident was “extremely unlikely.”
That conclusion was not scientifically justified, according to the authors of the new letter, since no trace of how the virus first jumped to humans has been found and the possibility of a laboratory accident received only a cursory look. Just a handful of the 313 pages of the WHO origins report and its annexes are devoted to the subject.
Marc Lipsitch, a well-known Harvard University epidemiologist who is among the signers of the letter, said he had not expressed a view on the origin of the virus until recently, choosing instead to focus on improving the design of epidemiological studies and vaccine trials—in part because the debate over the lab theory has become so controversial. “I stayed out of it because I was busy dealing with the outcome of the pandemic instead of the origin,” he says. “[But] when the WHO comes out with a report that makes a specious claim about an important topic … it’s worth speaking out.”
Several of those signing the letter, including Lipsitch and Relman, have in the past called for greater scrutiny of “gain of function” research, in which viruses are genetically modified to make them more infectious or virulent. Experiments to engineer pathogens were also ongoing at the Wuhan Institute of Virology, China’s leading center for studying bat viruses similar to SARS-CoV-2. Some see the fact that covid-19 first appeared in the same city in which the lab is located as circumstantial evidence that a laboratory accident could be to blame.
Lipsitch has previously estimated the risk of a pandemic caused by accidental release from a high-security biolab at between 1 in 1,000 and 1 in 10,000 per year, and he has warned that the proliferation of thousands of such labs around the globe is a major concern.
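To see why the sheer number of labs matters, here is a back-of-the-envelope aggregation of Lipsitch's per-lab estimate. The figure of 1,000 labs is an illustrative round number, not one taken from Lipsitch or the letter.

```python
# Illustrative only: aggregate a per-lab, per-year accident risk across
# many labs. The lab count of 1,000 is a hypothetical assumption.

def any_release_prob(per_lab_risk: float, n_labs: int) -> float:
    """P(at least one accidental release in a year), assuming independent labs."""
    return 1 - (1 - per_lab_risk) ** n_labs

for per_lab_risk in (1 / 1_000, 1 / 10_000):
    p = any_release_prob(per_lab_risk, 1_000)
    print(f"per-lab risk {per_lab_risk:.4%} -> aggregate yearly risk {p:.1%}")
```

Even at the optimistic end of the range, the aggregate probability grows quickly with the number of labs, which is the substance of the proliferation concern.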
Even though Chinese scientists have said no such leak occurred in this case, the letter writers say that can only be established through a more independent investigation. “A proper investigation should be transparent, objective, data-driven, inclusive of broad expertise, subject to independent oversight, and responsibly managed to minimize the impact of conflicts of interest,” they write. “Public health agencies and research laboratories alike need to open their records to the public. Investigators should document the veracity and provenance of data from which analyses are conducted and conclusions drawn.”
The chief scientist for emerging disease at the Wuhan Institute of Virology, Shi Zhengli, said in an email that the letter’s suspicions were misplaced and would damage the world’s ability to respond to pandemics. “It’s definitely not acceptable,” Shi said of the group’s call to see her lab’s records. “Who can provide an evidence that does not exist?”
“It’s really sad to read this ‘Letter’ written by these 18 prominent scientists,” Shi wrote in her email. “The hypothesis of a lab leaking is just based on the expertise of a lab which has long been working on bat coronaviruses which are phylogenetically related to SARS-CoV-2. This kind of claim will definitely damage the reputation and enthusiasm of scientists who are dedicated to work on the novel animal viruses which have potential spillover risk to human populations and eventually weaken the ability of humans to prevent the next pandemic.”
The discussion around the lab leak hypothesis has already become highly political. In the US, it has been embraced most loudly by Republican lawmakers and conservative media figures, including Fox News host Tucker Carlson. The resulting polarization has had a chilling effect on scientists, some of whom have been reluctant to express their own concerns, says Relman.
“We felt motivated to say something because science is not living up to what it can be, which is a very fair and rigorous and open effort to gain greater clarity on something,” he says. “For me, part of the purpose was to create a safe space for other scientists to say something of their own.”
“Ideally, this is a relatively uncontroversial call for being as clear-eyed as possible in testing several viable hypotheses for which we have little data,” says Megan Palmer, a biosecurity expert at Stanford University who is not affiliated with the letter group. “When politics are complex and stakes are high, a reminder from prominent experts may be what is needed to compel careful consideration by others.”
These robots know when to ask for help
A new training model, dubbed “KnowNo,” teaches robots to ask for our help when orders are unclear. At the same time, it ensures they seek clarification only when necessary, minimizing needless back-and-forth. The result is a smart assistant that tries to make sure it understands what you want without bothering you too much.
Andy Zeng, a research scientist at Google DeepMind who helped develop the new technique, says that while robots can be powerful in many specific scenarios, they are often bad at generalized tasks that require common sense.
For example, when asked to bring you a Coke, the robot needs to first understand that it needs to go into the kitchen, look for the refrigerator, and open the fridge door. Conventionally, these smaller substeps had to be manually programmed, because otherwise the robot would not know that people usually keep their drinks in the kitchen.
That’s something large language models (LLMs) could help to fix, because they have a lot of common-sense knowledge baked in, says Zeng.
Now when the robot is asked to bring a Coke, an LLM, which has a generalized understanding of the world, can generate a step-by-step guide for the robot to follow.
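As a rough illustration of that decomposition step, the snippet below fakes the model call with a canned plan; `query_llm` is a hypothetical stand-in, not a real API.

```python
# Hypothetical sketch: an LLM decomposing a high-level command into substeps.
# query_llm stands in for a real language-model call; it returns a canned
# plan here so the example stays self-contained.

def query_llm(prompt: str) -> list[str]:
    return [
        "go to the kitchen",
        "find the refrigerator",
        "open the fridge door",
        "pick up a Coke",
        "bring it to the user",
    ]

plan = query_llm("Break the command 'bring me a Coke' into robot substeps.")
for i, step in enumerate(plan, 1):
    print(f"{i}. {step}")
```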
The problem with LLMs, though, is that there’s no way to guarantee that their instructions are possible for the robot to execute. Maybe the person doesn’t have a refrigerator in the kitchen, or the fridge door handle is broken. In these situations, robots need to ask humans for help.
KnowNo makes that possible by combining large language models with statistical tools that quantify confidence levels.
When given an ambiguous instruction like “Put the bowl in the microwave,” KnowNo first generates multiple possible next actions using the language model. Then it creates a confidence score predicting the likelihood that each potential choice is the best one.
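A minimal sketch of that "ask only when unsure" rule, assuming made-up candidate actions and confidence scores (in the real system both come from the language model, and the threshold is calibrated with conformal prediction rather than hand-picked):

```python
# Sketch of a KnowNo-style decision rule. The scores and threshold below are
# hypothetical; KnowNo calibrates the threshold so the set of retained
# options meets a target success rate.

def prediction_set(scores: dict[str, float], threshold: float) -> list[str]:
    """Keep every candidate action whose confidence clears the threshold."""
    return [action for action, s in scores.items() if s >= threshold]

def decide(scores: dict[str, float], threshold: float = 0.25) -> str:
    options = prediction_set(scores, threshold)
    if len(options) == 1:
        return f"execute: {options[0]}"  # unambiguous: just act
    return "ask human: " + " / ".join(sorted(options))  # ambiguous: clarify

# "Put the bowl in the microwave" when two bowls are on the counter:
scores = {
    "put the metal bowl in the microwave": 0.45,
    "put the plastic bowl in the microwave": 0.40,
    "put the bowl in the sink": 0.05,
}
print(decide(scores))  # two plausible actions survive, so the robot asks
```

When only a single action clears the threshold, the robot acts on its own; when several do, it surfaces exactly those options to the human instead of guessing.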
The Download: inside the first CRISPR treatment, and smarter robots
Medical microrobots that travel inside the body are (still) on their way
The human body is a labyrinth of vessels and tubing, full of barriers that are difficult to break through. That poses a serious hurdle for doctors. Illness is often caused by problems that are hard to visualize and difficult to access. But imagine if we could deploy armies of tiny robots into the body to do the job for us. They could break up hard-to-reach clots, deliver drugs to even the most inaccessible tumors, and even help guide embryos toward implantation.
We’ve been hearing about the use of tiny robots in medicine for years, maybe even decades. And they’re still not here. But experts are adamant that medical microrobots are finally coming, and that they could be a game changer for a number of serious diseases.
5 things we didn’t put on our 2024 list of 10 Breakthrough Technologies
We haven’t always been right (RIP, Baxter), but we’ve often been early to spot important areas of progress (we put natural-language processing on our very first list in 2001; today this technology underpins large language models and generative AI tools like ChatGPT).
Every year, our reporters and editors nominate technologies that they think deserve a spot, and we spend weeks debating which ones should make the cut. Here are some of the technologies we didn’t pick this time—and why we’ve left them off, for now.
New drugs for Alzheimer’s disease
Alzheimer’s patients have long lacked treatment options. Several new drugs have now been shown to slow cognitive decline, albeit modestly, by clearing out harmful plaques in the brain. In July, the FDA approved Leqembi, from Eisai and Biogen, and Eli Lilly’s donanemab could soon be next. But the drugs come with serious side effects, including brain swelling and bleeding, which can be fatal in some cases. They’re also hard to administer: patients receive doses via IV and must undergo regular MRIs to check for brain swelling. These drawbacks gave us pause.
Sustainable aviation fuel
Alternative jet fuels made from cooking oil, leftover animal fats, or agricultural waste could reduce emissions from flying. They have been in development for years, and scientists are making steady progress, with several recent demonstration flights. But production and use will need to ramp up significantly for these fuels to make a meaningful climate impact. While they do look promising, there wasn’t a key moment or “breakthrough” that merited a spot for sustainable aviation fuels on this year’s list.
Solar geoengineering

One way to counteract global warming could be to release particles into the stratosphere that reflect the sun’s energy and cool the planet. That idea is highly controversial within the scientific community, but a few researchers and companies have begun exploring whether it’s possible by launching a series of small-scale high-flying tests. One such launch prompted Mexico to ban solar geoengineering experiments earlier this year. It’s not really clear where geoengineering will go from here or whether these early efforts will stall out. Amid that uncertainty, we decided to hold off for now.