The UK is spooking everyone with its new covid-19 strain. Here’s what scientists know.


The situation could prove to be a false alarm. Sometimes virus variants appear to spread more easily but are in fact being propelled by chance, such as a superspreader event.

British teams, and some abroad, are now racing to carry out the lab experiments necessary to determine whether the new variant really infects human cells more easily, and whether vaccines will stop it. Those studies will involve exposing the new strain to blood plasma from covid-19 survivors or vaccinated people, to see whether their antibodies can block it.

Viruses frequently mutate or develop small changes in their genetic code. Since the start of the pandemic, scientists sequencing samples of the coronavirus have been tracking those changes to gain insight into how, and where, the pathogen has been spreading. 

One reason the mutated virus was spotted in the UK might be that the country has pursued such “genomic epidemiology” aggressively. For example, British labs contributed fully 45% of the 275,000 coronavirus sequences deposited to the global GISAID database, according to a threat assessment brief from the European Centre for Disease Prevention and Control.  

According to the COVID-19 Genomics Consortium UK, the coalition of labs that has been sequencing viruses, the variant first appeared in a sample collected on September 20 in Kent, followed a day later by one in London.

Distinct signature

While mutations in the coronavirus are seen all the time, the new variant raised alarms because it appeared at the same time as a sharp increase in cases in the southeast of England, where the infection rate has recently quadrupled. About half those cases were found to be caused by the new variant. 

The genetic code of the variant also caught scientists’ attention because of how much it differed from the original version. According to a preliminary characterization posted to the website virological.org by the COVID-19 Genomics Consortium UK, the variant possesses a “distinct” genetic signature featuring “an unusually large number of genetic changes,” particularly in its spike protein, which are more likely to alter its function.

The mutations seen in the new variant have all been spotted previously, according to comments posted online by Francois Balloux, a computational biologist at University College London, but apparently not in this combination. They include one that causes the spike protein to bind more effectively to human cells, another linked to escape from human immune responses, and a third adjacent to a biologically critical component of the pathogen.

During this pandemic, spreading variants of the virus have tended to pick up one or two new mutations a month. The UK scientists say they were surprised to find a variant that has accumulated a unique pattern of more than a dozen changes to important genes, a clue, they suggested, that the strain might be the result of evolutionary adaptation.

Dodging the immune response?

In the UK group’s preliminary report, Andrew Rambaut, a biologist at the University of Edinburgh, and his colleagues say they think the variant might have evolved inside a person who is immunocompromised and who became chronically infected with the coronavirus. Such people, in some cases, have been given multiple rounds of treatment with antibody and antiviral drugs. That could select for viruses that survive such treatment.

If the changed virus is able to “evade” the usual immune response, that may also explain why it’s spreading faster, since it could infect some covid-19 survivors and would therefore have more hosts to infect. According to the British scientific reports, four of about 1,000 people infected by the new variant had previously had covid-19, although the scientists were not able to say whether that figure was out of the ordinary.

It would not come as a total surprise to learn the covid-19 virus is evolving enough to infect people a second time, despite immunity to the original germ. Other coronaviruses, like those that cause the common cold, are known to reinfect people frequently, possibly because of such shape-shifting.

Another way viruses can change significantly is if they establish themselves in another species—even zoo tigers can catch covid—and then jump back to people. That was seen in Denmark, which this fall reported transmission of the covid virus between humans and mink and back again, a situation deemed so dangerous that the country ordered all the mink on commercial fur farms to be culled.

Now the world will learn if it’s possible to stop the new variant from spreading. That won’t be easy. The existing forms of covid-19 are already transmitting quickly despite social distancing and masks. If the new variant is really 70% more easily spread, it could soon become the dominant form of the disease. 

British authorities over the weekend faced some criticism that they were raising alarms over the new strain to justify strict lockdown measures before Christmas, including stay-at-home orders for millions of people. But officials took to the air to encourage people to abide by the restrictions. “The new variant is out of control and we need to bring it under control,” Matt Hancock, the health secretary, told the BBC. He urged his countrymen to “act like you have the virus.” 



Everything you need to know about artificial wombs


The technology would likely be used first on infants born at 22 or 23 weeks who don’t have many other options. “You don’t want to put an infant on this device who would otherwise do well with conventional therapy,” says George Mychaliska, a pediatric surgeon at the University of Michigan who is developing one such device. At 22 weeks’ gestation, babies are tiny, often weighing less than a pound, and their lungs are still developing. When researchers looked at babies born between 2013 and 2018, survival among those who were resuscitated at 22 weeks was 30%. That number rose to nearly 56% at 23 weeks. And babies born at that stage who do survive have an increased risk of neurodevelopmental problems, cerebral palsy, mobility problems, hearing impairments, and other disabilities.

Selecting the right participants will be tricky. Some experts argue that gestational age shouldn’t be the only criterion. One complicating factor is that prognosis varies widely from center to center, and it’s improving as hospitals learn how best to treat these preemies. At the University of Iowa Stead Family Children’s Hospital, for example, survival rates are much higher than average: 64% for babies born at 22 weeks. They’ve even managed to keep a handful of infants born at 21 weeks alive. “These babies are not a hopeless case. They very much can survive. They very much can thrive if you are managing them appropriately,” says Brady Thomas, a neonatologist at Stead. “Are you really going to make that much of a bigger impact by adding in this technology, and what risks might exist to those patients as you’re starting to trial it?”

Prognosis also varies widely from baby to baby depending on a variety of factors. “The girls do better than the boys. The bigger ones do better than the smaller ones,” says Mark Mercurio, a neonatologist and pediatric bioethicist at the Yale School of Medicine. So “how bad does the prognosis with current therapy need to be to justify use of an artificial womb?” That’s a question Mercurio would like to see answered.

What are the risks?

One ever-present concern in the tiniest babies is brain bleeds. “That’s due to a number of factors—a combination of their brain immaturity, and in part associated with the treatment that we provide,” Mychaliska says. Babies in an artificial womb would need to be on a blood thinner to prevent clots from forming where the tubes enter the body. “I believe that places a premature infant at very high risk for brain bleeding,” he says.  

And it’s not just about the baby. To be eligible for EXTEND, infants must be delivered via cesarean section, which puts the pregnant person at higher risk for infection and bleeding. Delivery via a C-section can also have an impact on future pregnancies.  

So if it works, could babies be grown entirely outside the womb?

Not anytime soon. Maybe not ever. In a paper published in 2022, Flake and his colleagues called this scenario “a technically and developmentally naive, yet sensationally speculative, pipe dream.” The problem is twofold. First, fetal development is a carefully choreographed process that relies on chemical communication between the pregnant parent’s body and the fetus. Even if researchers understood all the factors that contribute to fetal development—and they don’t—there’s no guarantee they could recreate those conditions. 

The second issue is size. The artificial womb systems being developed require doctors to insert a small tube into the infant’s umbilical cord to deliver oxygenated blood. The smaller the umbilical cord, the more difficult this becomes.

What are the ethical concerns?

In the near term, there are concerns about how to ensure that researchers are obtaining proper informed consent from parents who may be desperate to save their babies. “This is an issue that comes up with lots of last-chance therapies,” says Vardit Ravitsky, a bioethicist and president of the Hastings Center, a bioethics research institute. 

The Download: brain bandwidth, and artificial wombs

Elon Musk wants more bandwidth between people and machines. Do we need it?


Last week, Elon Musk made the bold assertion that sticking electrodes in people’s heads is going to lead to a huge increase in the rate of data transfer out of, and into, human brains.

The occasion of Musk’s post was the announcement by Neuralink, his brain-computer interface company, that it was officially seeking the first volunteer to receive an implant that contains more than twice as many electrodes as previous versions, to collect more data from more nerve cells.

The entrepreneur mentioned a long-term goal of vastly increasing “bandwidth” between people, or people and machines, by a factor of 1,000 or more. But what does he mean, and is it even possible? Read the full story.

—Antonio Regalado

This story is from The Checkup, MIT Technology Review’s weekly biotech newsletter. Sign up to receive it in your inbox every Thursday.

Everything you need to know about artificial wombs

Earlier this month, US Food and Drug Administration advisors met to discuss how to move research on artificial wombs from animals into humans.

These medical devices are designed to give extremely premature infants a bit more time to develop in a womb-like environment before entering the outside world. They have been tested with hundreds of lambs (and some piglets), but animal models can’t fully predict how the technology will work for humans. 

Why embracing complexity is the real challenge in software today


Redistributing complexity

The reason we can’t just wish away or “fix” complexity is that every solution—whether it’s a technology or methodology—redistributes complexity in some way. Solutions reorganize problems. When microservices emerged (a software architecture approach where an application or system is composed of many smaller parts), they seemingly solved many of the maintenance and development challenges posed by monolithic architectures (where the application is one single interlocking system). However, in doing so microservices placed new demands on engineering teams, requiring greater maturity in terms of practices and processes. This is one of the reasons we cautioned people against what we call “microservice envy” in a 2018 edition of the Technology Radar, with CTO Rebecca Parsons writing that microservices would never be recommended for adoption on Technology Radar because “not all organizations are microservices-ready.” We noticed a tendency to adopt microservices simply because they were fashionable.

This doesn’t mean the solution is poor or defective. It’s more that we need to recognize the solution is a tradeoff. At Thoughtworks, we’re fond of saying “it depends” when people ask questions about the value of a certain technology or approach. It’s about how it fits with your organization’s needs and, of course, your ability to manage its particular demands. This is an example of essential complexity in tech—it’s something that can’t be removed and will persist however much you want to reach a level of simplicity you find comfortable.

In terms of microservices, we’ve noticed increasing caution about rushing to embrace this particular architectural approach. Some of our colleagues even suggested the term “monolith revivalists” to describe those turning away from microservices back to monolithic software architecture. While it’s unlikely that the software world is going to make a full return to monoliths, frameworks like Spring Modulith—a framework that helps developers structure code in such a way that it becomes easier to break apart a monolith into smaller microservices when needed—suggest that practitioners are becoming more keenly aware of managing the tradeoffs of different approaches to building and maintaining software.
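To make that idea concrete, here is a minimal sketch of the kind of boundary check Spring Modulith enables, assuming a hypothetical Spring Boot application class Application whose top-level packages (say, orders and inventory) each represent one module of the monolith:

```java
// A minimal sketch of Spring Modulith's module verification, assuming a
// hypothetical Spring Boot application whose top-level packages each
// represent one module of the monolith.
import org.junit.jupiter.api.Test;
import org.springframework.modulith.core.ApplicationModules;

class ModularityTests {

    // ApplicationModules.of(...) derives the module structure from the
    // application's package layout. "Application" is a placeholder for
    // your @SpringBootApplication class.
    ApplicationModules modules = ApplicationModules.of(Application.class);

    @Test
    void verifiesModuleBoundaries() {
        // verify() fails the test if one module reaches into another
        // module's internals rather than going through its public API.
        modules.verify();
    }
}
```

Because the test fails whenever one module reaches into another’s internals, the seams between modules stay clean, and it is exactly those clean seams that make a later extraction into separate services tractable.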

Supporting practitioners with concepts and tools

Because technical solutions have a habit of reorganizing complexity, we need to carefully attend to how this complexity is managed. Failing to do so can have serious implications for the productivity and effectiveness of engineering teams. At Thoughtworks we have a number of concepts and approaches that we use to manage complexity. Sensible defaults, for instance, are starting points for a project or piece of work. They’re not things that we need to simply embrace as a rule, but instead practices and tools that we collectively recognize are effective for most projects. They give individuals and teams a baseline to make judgements about what might be done differently.

One of the benefits of sensible defaults is that they can guard you against the allure of novelty and hype. As interesting or exciting as a new technology might be, sensible defaults can anchor you in what matters to you. This isn’t to say that new technologies like generative AI shouldn’t be treated with enthusiasm and excitement—some of our teams have been experimenting with these tools and seen impressive results—but instead that adopting new tools needs to be done in a way that properly integrates with the way you work and what you want to achieve. Indeed, there is a wealth of approaches to GenAI, from high-profile tools like ChatGPT to self-hosted LLMs. Using GenAI effectively is as much a question of knowing the right way to implement it for you and your team as it is about technical expertise.

Interestingly, the tools that can help us manage complexity aren’t necessarily new. One thing that came up in the latest edition of Technology Radar was risk-based failure modeling, a process used to understand the impact, likelihood, and detectability of the various ways that a system can fail. This has its origins in failure modes and effects analysis (FMEA), a practice that dates back to the period following World War II and has been used in complex engineering projects in fields such as aerospace. This signals that some challenges endure; while new solutions will always emerge to combat them, we should also be comfortable looking to the past for tools and techniques.
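As a toy illustration of how such risk scoring works, the sketch below applies the classic FMEA scheme: each failure mode is rated on 1-to-10 scales for severity, occurrence, and detectability (rated so that harder-to-detect failures score higher), and the product of the three, the risk priority number, ranks which failures deserve attention first. The failure modes and ratings here are invented for the example, not drawn from any particular tool:

```java
import java.util.Comparator;
import java.util.List;

// A toy illustration of FMEA-style risk scoring: each failure mode is rated
// on 1-10 scales and the product (the risk priority number, or RPN) is used
// to rank which failures deserve attention first. The modes and ratings
// below are invented examples.
public class FailureModeAnalysis {

    record FailureMode(String description, int severity, int occurrence, int detection) {
        // Classic FMEA risk priority number: higher means riskier.
        // "detection" is rated so that harder-to-detect failures score higher.
        int riskPriorityNumber() {
            return severity * occurrence * detection;
        }
    }

    public static void main(String[] args) {
        List<FailureMode> modes = List.of(
            new FailureMode("Payment service times out under load", 7, 6, 4),
            new FailureMode("Stale cache serves incorrect prices", 8, 3, 8),
            new FailureMode("Nightly batch job silently skips records", 9, 2, 9)
        );

        // Rank failure modes from highest to lowest RPN.
        modes.stream()
             .sorted(Comparator.comparingInt(FailureMode::riskPriorityNumber).reversed())
             .forEach(m -> System.out.printf("RPN %3d  %s%n",
                     m.riskPriorityNumber(), m.description));
    }
}
```

The point of the exercise is less the arithmetic than the conversation it forces: teams have to make their assumptions about impact, frequency, and observability explicit before deciding where to invest.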

Learning to live with complexity

McKinsey’s argument that the productivity of development teams can be successfully measured caused a stir across the software engineering landscape. While having the right metrics in place is certainly important, prioritizing productivity in our thinking can cause more problems than it solves when it comes to complex systems and an ever-changing landscape of solutions. Technology Radar called this out in an edition with the theme “How productive is measuring productivity?”, which highlighted the importance of focusing on developer experience with the help of tools like DX DevEx 360.

Focusing on productivity in the way McKinsey suggests can cause us to mistakenly see coding as the “real” work of software engineering, overlooking things like architectural decisions, tests, security analysis, and performance monitoring. This is risky—organizations that adopt such a view will struggle to see tangible benefits from their digital projects. This is why the key challenge in software today is embracing complexity: not treating it as something to be minimized at all costs, but as a challenge that requires thoughtfulness in processes, practices, and governance. The key question is whether the industry realizes this.

This content was produced by Thoughtworks. It was not written by MIT Technology Review’s editorial staff.
