For the last 40 years, this voluntary guideline has served as an important stop sign for embryonic research. It has provided a clear signal to the public that scientists wouldn’t grow babies in labs. To researchers, it gave clarity about what research they could pursue.
Now, however, a key scientific body is ready to do away with the 14-day limit. The action would come at a time when scientists are making remarkable progress in growing embryonic cells and watching them develop. Researchers, for example, can now create embryo-like structures starting even from stem cells, and some hope to follow these synthetic embryo models well past the old two-week line.
If both natural and artificial embryos are allowed to keep developing past two weeks, the end of the self-imposed limit could unleash impressive but ethically charged new experiments on extending human development outside the womb.
The International Society for Stem Cell Research has prepared draft recommendations to move such research out of a category of “prohibited” scientific activities and into a class of research that can be permitted after ethics review and depending on national regulations, according to several people familiar with its thinking.
A spokesperson for the ISSCR, an influential professional society with 4,000 members, declined to comment on the change, saying its new guidelines would be released this spring.
Because embryo research doesn’t receive federal funding in the US, and laws differ widely around the world, the ISSCR has taken on outsize importance as the field’s de facto ethics regulator. The society’s rules are relied on by universities and by scientific journals to determine what kinds of research they can publish.
The existing ISSCR guidelines, issued in 2016, are being updated because of an onrush of new, boundary-busting research. For instance, some labs are attempting to create human-animal chimeras through experiments including mixing human cells into monkey embryos. Researchers are also continuing to explore genetic modification of human embryos, using gene-editing tools like CRISPR.
Many labs are also working on realistic artificial models of human embryos constructed from stem cells. For instance, last week, the biologist Magdalena Zernicka-Goetz posted a preprint describing how her lab coaxed stem cells to self-assemble into a version of a human blastocyst, as a week-old embryo is known.
Though scientists are keen to explore whether such lab-created mimicry can be pushed further, the 14-day rule stands in the way. In many cases, the embryo models must also be destroyed before two weeks elapse.
The 14-day limit arose after the birth of the first test-tube babies in the 1970s. “It was ‘Oh, we can create human embryos outside the body—we need rules,’” says Josephine Johnston, a scholar with the Hastings Center, a nonprofit bioethics organization. “It was a political decision to show the public there is a framework for this research, that we aren’t growing babies in labs.”
The rule stood unchallenged for many years. That was in part because scientists couldn’t grow embryos for more than four or five days anyway, which was sufficient for in vitro fertilization.
Tetsuya Ishii, a bioethics and legal researcher at Hokkaido University, says some countries, including Japan, have put the 14-day limit into law. Others, like Germany, ban embryo research altogether. That means a guideline change could do the most to open up new avenues of competition among countries without such national restrictions, particularly between scientists in the US and China.
Scientists are motivated to grow embryos longer in order to study—and potentially manipulate—the development process. But such techniques raise the possibility of someday gestating animals outside the womb until birth, a concept called ectogenesis.
According to Ishii, new experiments “might ignite abortion debates,” especially if the researchers develop human embryos to the point where they take on recognizable characteristics like a head, beating heart cells, or the beginning of limbs.
During the Trump administration, embryologists endeavored to keep a low profile for the startling technical advances in their labs. Fears of a presidential tweet or government action to impede research helped keep discussion of changing the 14-day rule in the background. For instance, the ISSCR guidelines were completed in December, according to one person, but they still have not been published.
These robots know when to ask for help
A new training model, dubbed “KnowNo,” aims to address this problem by teaching robots to ask for our help when orders are unclear. At the same time, it ensures they seek clarification only when necessary, minimizing needless back-and-forth. The result is a smart assistant that tries to make sure it understands what you want without bothering you too much.
Andy Zeng, a research scientist at Google DeepMind who helped develop the new technique, says that while robots can be powerful in many specific scenarios, they are often bad at generalized tasks that require common sense.
For example, when asked to bring you a Coke, the robot needs to first understand that it needs to go into the kitchen, look for the refrigerator, and open the fridge door. Conventionally, these smaller substeps had to be manually programmed, because otherwise the robot would not know that people usually keep their drinks in the kitchen.
That’s something large language models (LLMs) could help to fix, because they have a lot of common-sense knowledge baked in, says Zeng.
Now when the robot is asked to bring a Coke, an LLM, which has a generalized understanding of the world, can generate a step-by-step guide for the robot to follow.
The problem with LLMs, though, is that there’s no way to guarantee that their instructions are possible for the robot to execute. Maybe the person doesn’t have a refrigerator in the kitchen, or the fridge door handle is broken. In these situations, robots need to ask humans for help.
KnowNo makes that possible by combining large language models with statistical tools that quantify confidence levels.
When given an ambiguous instruction like “Put the bowl in the microwave,” KnowNo first generates multiple possible next actions using the language model. Then it creates a confidence score predicting the likelihood that each potential choice is the best one.
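The decision rule described above can be sketched in a few lines of Python. This is a minimal illustration of the idea, not KnowNo's actual implementation: the function names, the threshold value, and the candidate actions are all assumptions made for the example. The robot acts only when exactly one candidate clears the confidence bar; otherwise it asks a human to clarify.

```python
# Illustrative sketch of a KnowNo-style "ask when unsure" decision rule.
# All names and the 0.6 threshold are assumptions for this example,
# not the actual KnowNo API or parameters.

def prediction_set(options, scores, threshold=0.6):
    """Keep every candidate action whose confidence clears the threshold."""
    return [opt for opt, s in zip(options, scores) if s >= threshold]

def decide(options, scores, threshold=0.6):
    """Act if exactly one candidate is confident; otherwise ask for help."""
    plausible = prediction_set(options, scores, threshold)
    if len(plausible) == 1:
        return ("act", plausible[0])   # unambiguous: execute the action
    if len(plausible) == 0:
        return ("ask", options)        # nothing is confident enough: clarify
    return ("ask", plausible)          # several plausible readings: clarify

# "Put the bowl in the microwave" with two candidate interpretations:
options = ["place plastic bowl in microwave", "place metal bowl in microwave"]
print(decide(options, [0.9, 0.1]))   # clear winner: act
print(decide(options, [0.7, 0.65]))  # ambiguous: ask a human
```

The key design point is that the ask/act decision falls out of the size of the confident set, so the robot bothers the user only when more than one interpretation is genuinely plausible.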
The Download: inside the first CRISPR treatment, and smarter robots
The news: A new robot training model, dubbed “KnowNo,” aims to teach robots to ask for our help when orders are unclear. At the same time, it ensures they seek clarification only when necessary, minimizing needless back-and-forth. The result is a smart assistant that tries to make sure it understands what you want without bothering you too much.
Why it matters: While robots can be powerful in many specific scenarios, they are often bad at generalized tasks that require common sense. That’s something large language models could help to fix, because they have a lot of common-sense knowledge baked in. Read the full story.
Medical microrobots that travel inside the body are (still) on their way
The human body is a labyrinth of vessels and tubing, full of barriers that are difficult to break through. That poses a serious hurdle for doctors. Illness is often caused by problems that are hard to visualize and difficult to access. But imagine if we could deploy armies of tiny robots into the body to do the job for us. They could break up hard-to-reach clots, deliver drugs to even the most inaccessible tumors, and even help guide embryos toward implantation.
We’ve been hearing about the use of tiny robots in medicine for years, maybe even decades. And they’re still not here. But experts are adamant that medical microbots are finally coming, and that they could be a game changer for a number of serious diseases. Read the full story.
5 things we didn’t put on our 2024 list of 10 Breakthrough Technologies
We haven’t always been right (RIP, Baxter), but we’ve often been early to spot important areas of progress (we put natural-language processing on our very first list in 2001; today this technology underpins large language models and generative AI tools like ChatGPT).
Every year, our reporters and editors nominate technologies that they think deserve a spot, and we spend weeks debating which ones should make the cut. Here are some of the technologies we didn’t pick this time—and why we’ve left them off, for now.
New drugs for Alzheimer’s disease
Alzheimer’s patients have long lacked treatment options. Several new drugs have now been proved to slow cognitive decline, albeit modestly, by clearing out harmful plaques in the brain. In July, the FDA approved Leqembi by Eisai and Biogen, and Eli Lilly’s donanemab could soon be next. But the drugs come with serious side effects, including brain swelling and bleeding, which can be fatal in some cases. Plus, they’re hard to administer—patients receive doses via an IV and must receive regular MRIs to check for brain swelling. These drawbacks gave us pause.
Sustainable aviation fuel
Alternative jet fuels made from cooking oil, leftover animal fats, or agricultural waste could reduce emissions from flying. They have been in development for years, and scientists are making steady progress, with several recent demonstration flights. But production and use will need to ramp up significantly for these fuels to make a meaningful climate impact. While they do look promising, there wasn’t a key moment or “breakthrough” that merited a spot for sustainable aviation fuels on this year’s list.
Solar geoengineering
One way to counteract global warming could be to release particles into the stratosphere that reflect the sun’s energy and cool the planet. That idea is highly controversial within the scientific community, but a few researchers and companies have begun exploring whether it’s possible by launching a series of small-scale high-flying tests. One such launch prompted Mexico to ban solar geoengineering experiments earlier this year. It’s not really clear where geoengineering will go from here or whether these early efforts will stall out. Amid that uncertainty, we decided to hold off for now.