The probability that parts of the booster could hit populated land is admittedly quite low—it’s much more likely to land in the ocean somewhere. But that probability is not zero. Case in point: the CZ-5B booster’s debut last year for a mission on May 5, 2020. The same problem arose back then as well: the core booster ended up in an uncontrolled orbit before eventually reentering Earth’s atmosphere. Debris landed in villages across Ivory Coast. It was enough to elicit a notable rebuke from the NASA administrator at the time, Jim Bridenstine.
The same story is playing out this time, and we’re playing the same waiting game because of how difficult it is to predict when and where this thing will reenter. The first reason is the booster’s speed: it’s currently traveling at nearly 30,000 kilometers per hour, orbiting the planet about once every 90 minutes. The second reason has to do with the amount of drag the booster is experiencing. Although technically it’s in space, the booster is still interacting with the upper edges of the planet’s atmosphere.
That drag varies from day to day with changes in upper-atmosphere weather, solar activity, and other phenomena. In addition, the booster isn’t just zipping around smoothly and punching through the atmosphere cleanly—it’s tumbling, which creates even more unpredictable drag.
Given those factors, we can establish a window for when and where we think the booster will reenter Earth’s atmosphere. But a change of even a few minutes in the reentry time can put the debris footprint thousands of kilometers away. “It can be difficult to model precisely, meaning we are left with some serious uncertainties when it comes to the space object’s reentry time,” says Thomas G. Roberts, an adjunct fellow at the CSIS Aerospace Security Project.
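To put those figures in perspective, here is a back-of-the-envelope sketch using the speed quoted above (roughly 30,000 kilometers per hour, one orbit every ~90 minutes). The timing offsets chosen below are arbitrary illustrations, not predictions:

```python
# Back-of-the-envelope: how far does a reentry timing error move the
# debris footprint? Figures from the article: the booster travels at
# roughly 30,000 km/h, circling Earth about once every 90 minutes.

SPEED_KMH = 30_000          # approximate orbital speed
KM_PER_MILE = 1.609344

speed_km_per_min = SPEED_KMH / 60   # ~500 km of ground track per minute

for minutes in (2, 10, 45):
    shift_km = speed_km_per_min * minutes
    shift_miles = shift_km / KM_PER_MILE
    print(f"{minutes:>3} min of timing error -> ~{shift_km:,.0f} km "
          f"(~{shift_miles:,.0f} miles) along the ground track")
```

At ~500 kilometers per minute, a timing window that is still hours wide spans multiple complete orbits, which is why early predictions can only name broad swaths of the planet.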
This also depends on how well the structure of the booster holds up to heating caused by friction with the atmosphere. Some materials might hold up better than others, but drag will increase as the structure breaks up and melts. The flimsier the structure, the more it will break up, and the more drag will be produced, causing it to fall out of orbit more quickly. Some parts may hit the ground earlier or later than others.
By the morning of reentry, the estimate of when it will land should have narrowed to just a few hours. Several different groups around the world are tracking the booster, but most experts are following data provided by the US Space Force through its Space Track website. Jonathan McDowell, an astrophysicist at the Harvard-Smithsonian Center for Astrophysics, hopes that by the morning of reentry, the timing window will have shrunk to just a couple of hours, during which the booster will orbit Earth perhaps two more times. At that point we should have a sharper sense of the route those orbits will take and which regions of Earth may be at risk from a shower of debris.
The Space Force’s missile early warning systems will already be tracking the infrared flare from the disintegrating rocket when reentry starts, so it will know where the debris is headed. Civilians won’t know for a while, of course, because that data is sensitive—it will take a few hours to work through the bureaucracy before an update is made to the Space Track site. If the remnants of the booster have landed in a populated area, we might already know thanks to reports on social media.
In the 1970s, uncontrolled reentries like this were a common hazard after missions. “Then people started to feel it wasn’t appropriate to have large chunks of metal falling out of the sky,” says McDowell. NASA’s 77-ton Skylab space station was something of a wake-up call—its widely watched uncontrolled deorbit in 1979 led to large debris hitting Western Australia. No one was hurt and there was no property damage, but the world was eager to avoid any similar risks from large spacecraft uncontrollably reentering the atmosphere (not a problem with smaller boosters, which just burn up safely).
As a result, after the core booster gets into orbit and separates from the secondary boosters and payload, many launch providers quickly do a deorbit burn that brings it back into the atmosphere and sets it on a controlled crash course for the ocean, eliminating the risk it would pose if left in space. This can be accomplished with either a restartable engine or an added second engine designed for deorbit burns specifically. The remnants of these boosters are sent to a remote part of the ocean, such as the South Pacific Ocean Uninhabited Area, where other massive spacecraft like Russia’s former Mir space station have been dumped.
Another approach, used during space shuttle missions and currently used by large boosters like Europe’s Ariane 5, is to avoid putting the core stage in orbit entirely by simply switching it off a few seconds early, while it’s still in Earth’s atmosphere. Smaller engines then fire to take the payload the short extra distance to space, while the core booster is dumped in the ocean.
None of these options are cheap, and they create some new risks (more engines mean more points of failure), but “it’s what everyone does, since they don’t want to create this type of debris risk,” says McDowell. “It’s been standard practice around the world to avoid leaving these boosters in orbit. The Chinese are an outlier of this.”
Why? “Space safety is just not China’s priority,” says Roberts. “With years of space launch operations under its belt, China is capable of avoiding this weekend’s outcome, but chose not to.”
The past few years have seen a number of rocket bodies from Chinese launches that have been allowed to fall back to land, destroying buildings in villages and exposing people to toxic chemicals. “It’s no wonder that they would be willing to roll the dice on an uncontrolled atmospheric reentry, where the threat to populated areas pales in comparison,” says Roberts. “I find this behavior totally unacceptable, but not surprising.”
McDowell also points to what happened during the space shuttle Columbia disaster, when damage to the wing caused the spacecraft to become unstable and break apart during reentry. Nearly 38,500 kilograms of debris landed in Texas and Louisiana. Large chunks of the main engine ended up in a swamp—had the vehicle broken up a couple of minutes earlier, those parts could have hit a major city, slamming into skyscrapers in, say, Dallas. “I think people don’t appreciate how lucky we were that there weren’t casualties on the ground,” says McDowell. “We’ve been in these risky situations before and been lucky.”
But you can’t always count on luck. The CZ-5B variant of the Long March 5B is slated for two more launches in 2022 to help build out the rest of the Chinese space station. There’s no indication yet whether China plans to change its blueprint for those missions. Perhaps that will depend on what happens this weekend.
Everything you need to know about artificial wombs
The technology would likely be used first on infants born at 22 or 23 weeks who don’t have many other options. “You don’t want to put an infant on this device who would otherwise do well with conventional therapy,” Mychaliska says. At 22 weeks gestation, babies are tiny, often weighing less than a pound. And their lungs are still developing. When researchers looked at babies born between 2013 and 2018, survival among those who were resuscitated at 22 weeks was 30%. That number rose to nearly 56% at 23 weeks. And babies born at that stage who do survive have an increased risk of neurodevelopmental problems, cerebral palsy, mobility problems, hearing impairments, and other disabilities.
Selecting the right participants will be tricky. Some experts argue that gestational age shouldn’t be the only criterion. One complicating factor is that prognosis varies widely from center to center, and it’s improving as hospitals learn how best to treat these preemies. At the University of Iowa Stead Family Children’s Hospital, for example, survival rates are much higher than average: 64% for babies born at 22 weeks. They’ve even managed to keep a handful of infants born at 21 weeks alive. “These babies are not a hopeless case. They very much can survive. They very much can thrive if you are managing them appropriately,” says Brady Thomas, a neonatologist at Stead. “Are you really going to make that much of a bigger impact by adding in this technology, and what risks might exist to those patients as you’re starting to trial it?”
Prognosis also varies widely from baby to baby depending on a variety of factors. “The girls do better than the boys. The bigger ones do better than the smaller ones,” says Mark Mercurio, a neonatologist and pediatric bioethicist at the Yale School of Medicine. So “how bad does the prognosis with current therapy need to be to justify use of an artificial womb?” That’s a question Mercurio would like to see answered.
What are the risks?
One ever-present concern in the tiniest babies is brain bleeds. “That’s due to a number of factors—a combination of their brain immaturity, and in part associated with the treatment that we provide,” Mychaliska says. Babies in an artificial womb would need to be on a blood thinner to prevent clots from forming where the tubes enter the body. “I believe that places a premature infant at very high risk for brain bleeding,” he says.
And it’s not just about the baby. To be eligible for EXTEND, infants must be delivered via cesarean section, which puts the pregnant person at higher risk for infection and bleeding. Delivery via a C-section can also have an impact on future pregnancies.
So if it works, could babies be grown entirely outside the womb?
Not anytime soon. Maybe not ever. In a paper published in 2022, Flake and his colleagues called this scenario “a technically and developmentally naive, yet sensationally speculative, pipe dream.” The problem is twofold. First, fetal development is a carefully choreographed process that relies on chemical communication between the pregnant parent’s body and the fetus. Even if researchers understood all the factors that contribute to fetal development—and they don’t—there’s no guarantee they could recreate those conditions.
The second issue is size. The artificial womb systems being developed require doctors to insert a small tube into the infant’s umbilical cord to deliver oxygenated blood. The smaller the umbilical cord, the more difficult this becomes.
What are the ethical concerns?
In the near term, there are concerns about how to ensure that researchers are obtaining proper informed consent from parents who may be desperate to save their babies. “This is an issue that comes up with lots of last-chance therapies,” says Vardit Ravitsky, a bioethicist and president of the Hastings Center, a bioethics research institute.
The Download: brain bandwidth, and artificial wombs
Last week, Elon Musk made the bold assertion that sticking electrodes in people’s heads is going to lead to a huge increase in the rate of data transfer out of, and into, human brains.
The occasion of Musk’s post was the announcement by Neuralink, his brain-computer interface company, that it was officially seeking the first volunteer to receive an implant that contains more than twice as many electrodes as previous versions, to collect more data from more nerve cells.
The entrepreneur mentioned a long-term goal of vastly increasing “bandwidth” between people, or people and machines, by a factor of 1,000 or more. But what does he mean, and is it even possible? Read the full story.
This story is from The Checkup, MIT Technology Review’s weekly biotech newsletter. Sign up to receive it in your inbox every Thursday.
Everything you need to know about artificial wombs
Earlier this month, US Food and Drug Administration advisors met to discuss how to move research on artificial wombs from animals into humans.
These medical devices are designed to give extremely premature infants a bit more time to develop in a womb-like environment before entering the outside world. They have been tested with hundreds of lambs (and some piglets), but animal models can’t fully predict how the technology will work for humans.
Why embracing complexity is the real challenge in software today
The reason we can’t just wish away or “fix” complexity is that every solution—whether it’s a technology or methodology—redistributes complexity in some way. Solutions reorganize problems. When microservices emerged (a software architecture approach where an application or system is composed of many smaller parts), they seemingly solved many of the maintenance and development challenges posed by monolithic architectures (where the application is one single interlocking system). However, in doing so microservices placed new demands on engineering teams; they require greater maturity in terms of practices and processes. This is one of the reasons why we cautioned people against what we call “microservice envy” in a 2018 edition of the Technology Radar, with CTO Rebecca Parsons writing that microservices would never be recommended for adoption on Technology Radar because “not all organizations are microservices-ready.” We noticed there was a tendency to look to adopt microservices simply because it was fashionable.
This doesn’t mean the solution is poor or defective. It’s more that we need to recognize the solution is a tradeoff. At Thoughtworks, we’re fond of saying “it depends” when people ask questions about the value of a certain technology or approach. It’s about how it fits with your organization’s needs and, of course, your ability to manage its particular demands. This is an example of essential complexity in tech—it’s something that can’t be removed and which will persist however much you want to get to a level of simplicity you find comfortable.
In terms of microservices, we’ve noticed increasing caution about rushing to embrace this particular architectural approach. Some of our colleagues even suggested the term “monolith revivalists” to describe those turning away from microservices back to monolithic software architecture. While it’s unlikely that the software world is going to make a full return to monoliths, frameworks like Spring Modulith—a framework that helps developers structure code in such a way that it becomes easier to break apart a monolith into smaller microservices when needed—suggest that practitioners are becoming more keenly aware of managing the tradeoffs of different approaches to building and maintaining software.
Because technical solutions have a habit of reorganizing complexity, we need to carefully attend to how this complexity is managed. Failing to do so can have serious implications for the productivity and effectiveness of engineering teams. At Thoughtworks we have a number of concepts and approaches that we use to manage complexity. Sensible defaults, for instance, are starting points for a project or piece of work. They’re not things that we need to simply embrace as a rule, but instead practices and tools that we collectively recognize are effective for most projects. They give individuals and teams a baseline to make judgements about what might be done differently.
One of the benefits of sensible defaults is that they can guard you against the allure of novelty and hype. As interesting or exciting as a new technology might be, sensible defaults can anchor you in what matters to you. This isn’t to say that new technologies like generative AI shouldn’t be treated with enthusiasm and excitement—some of our teams have been experimenting with these tools and seen impressive results—but instead that adopting new tools needs to be done in a way that properly integrates with the way you work and what you want to achieve. Indeed, there is a wealth of approaches to GenAI, from high-profile tools like ChatGPT to self-hosted LLMs. Using GenAI effectively is as much a question of knowing the right way to implement it for you and your team as it is about technical expertise.
Interestingly, the tools that can help us manage complexity aren’t necessarily new. One thing that came up in the latest edition of Technology Radar was something called risk-based failure modeling, a process used to understand the impact, likelihood, and detectability of the various ways that a system can fail. This has origins in failure modes and effects analysis (FMEA), a practice that dates back to the period following World War II, used in complex engineering projects in fields such as aerospace. This signals that there are some challenges that endure; while new solutions will always emerge to combat them, we should also be comfortable looking to the past for tools and techniques.
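FMEA-style analyses are commonly summarized with a risk priority number (RPN): the product of severity, occurrence, and detection scores, used to rank which failure modes deserve attention first. The sketch below is a minimal illustration of that idea; the failure modes, 1–10 scales, and scores are hypothetical examples, not drawn from any specific Thoughtworks process:

```python
# Minimal FMEA-style sketch: rank failure modes by risk priority number.
# Severity, occurrence, and detection are each scored 1-10 (10 = worst);
# RPN = severity * occurrence * detection. All values below are invented
# for illustration.

from dataclasses import dataclass

@dataclass
class FailureMode:
    name: str
    severity: int    # impact if it happens
    occurrence: int  # how often it is likely to happen
    detection: int   # how hard it is to detect (10 = nearly invisible)

    @property
    def rpn(self) -> int:
        return self.severity * self.occurrence * self.detection

modes = [
    FailureMode("payment service times out", severity=8, occurrence=4, detection=3),
    FailureMode("stale cache serves old prices", severity=6, occurrence=5, detection=7),
    FailureMode("database failover loses writes", severity=10, occurrence=2, detection=5),
]

# Highest RPN first: the moderate-severity but hard-to-detect failure
# can outrank the catastrophic-but-rare one.
for m in sorted(modes, key=lambda m: m.rpn, reverse=True):
    print(f"RPN {m.rpn:>3}  {m.name}")
```

Note how the ranking surfaces a non-obvious result: a failure that is hard to detect can score higher than one that is more severe, which is exactly the kind of tradeoff reasoning this technique is meant to prompt.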
McKinsey’s argument that the productivity of development teams can be successfully measured caused a stir across the software engineering landscape. While having the right metrics in place is certainly important, prioritizing productivity in our thinking can cause more problems than it solves when it comes to complex systems and an ever-changing landscape of solutions. Technology Radar called this out in an edition with the theme “How productive is measuring productivity?”, which highlighted the importance of focusing on developer experience with the help of tools like DX DevEx 360.
Focusing on productivity in the way McKinsey suggests can cause us to mistakenly see coding as the “real” work of software engineering, overlooking things like architectural decisions, tests, security analysis, and performance monitoring. This is risky—organizations that adopt such a view will struggle to see tangible benefits from their digital projects. This is why the key challenge in software today is embracing complexity; not treating it as something to be minimized at all costs but a challenge that requires thoughtfulness in processes, practices, and governance. The key question is whether the industry realizes this.
This content was produced by Thoughtworks. It was not written by MIT Technology Review’s editorial staff.