The obvious problem with the lab-leak theory, though, is that there remains no concrete evidence for it. Chan has no particular view about how exactly an accident might have happened—whether a student got sick in a bat cave, say, or secret research to infect mice with a novel virus went awry. After reading Chan’s posts, I noticed that many of her claims don’t even relate to direct evidence at all; more often, they revolve around its absence. She tends to point out things that Chinese researchers didn’t do or say, important facts they did not quickly reveal, the infected market animal they never found, or a database that’s no longer online. She’s plainly suggesting there is a cover-up—and, therefore, a plot to conceal the truth.
Last February, when leading scientists convened to analyze the virus genome, they ended up publishing two letters. One, in The Lancet, dismissed the lab-accident possibility outright as a “conspiracy theory” (its authors included a scientist who funded research at the Wuhan lab). The other was the “Proximal Origins” letter in Nature Medicine, coauthored by Kristian Andersen, an evolutionary biologist at the Scripps Research Institute in La Jolla, California. Andersen and his coauthors looked at the genome of the virus and marshaled arguments for why it was very likely a natural occurrence—backed by evidence that it was similar to others found in nature.
The 30,000 genetic letters in that genome remain the most widely studied clue to the virus’s origin. Coronaviruses frequently swap parts—a phenomenon called recombination. Andersen found that all the components of the virus had been seen before in samples collected over the years from animals. Evolution could have produced it, he believed. The Wuhan Institute had been genetically engineering bat viruses for scientific experiments, but the SARS-CoV-2 genome did not match any of the favorite “chassis” viruses used in those experiments, and it did not contain any other obvious sign of engineering.
According to Clarivate, an analytics company, the Nature Medicine letter was the 55th most cited article of 2020, with over 1,300 citations in the journals tracked. Email records would later show that starting in January 2020, the letter had been the subject of urgent, high-level messages and conference calls among the letter's authors; Anthony Fauci, head of the National Institute of Allergy and Infectious Diseases; top virologists; and the head of the Wellcome Trust, a major biomedical research funding organization in the United Kingdom. Early on, the authors had worried that the virus looked suspicious, before quickly coming together around a scientific analysis supporting a natural cause. Initially one of their aims was to quash rumors that the virus was a bioweapon or the result of engineering gone wrong, but they ended up going further, writing: “We do not believe that any type of laboratory-based scenario is plausible.”
Working from her home in Massachusetts, Chan soon found a way to revive the lab-accident theory by looking for differences with SARS, a similar virus that broke out in 2002 but caused only about 8,000 illnesses. With Shing Zhan, a bioinformatics specialist at the University of British Columbia, Chan looked at the early human cases of covid and saw that the new virus hadn’t mutated as fast as SARS had. If it were an animal virus from a market, she thought, its genome would show signs of adjusting more quickly to fit its brand-new human host. She prepared an analysis arguing that the virus was “pre-adapted” to humans and offered some theories as to why. Maybe it had been spreading undetected in people elsewhere in China. Or maybe, she thought, it had been growing in a lab somewhere, perhaps multiplying in human cells or in transgenic mice that had had human genes spliced into them.
The chance that a non-engineered virus could have “adapted to humans while being studied in a laboratory,” she wrote, “should be considered, regardless of how likely or unlikely.”
On May 2, 2020, Chan posted a preprint paper, coauthored with Deverman and Zhan, to the website bioRxiv, an online venue for quickly communicating results that haven’t yet been reviewed by other scientists. “Our observations suggest that by the time SARS-CoV-2 was first detected in late 2019, it was already pre-adapted to human transmission,” they wrote. The Broad Institute communications department also pointed Chan to examples of how to compose a “tweetorial,” a daisy chain of posts, with pictures, that present a compact scientific argument to a wider public. She posted her first tweetorial the following day.
For journalists suspicious about China’s handling of the virus, the thread—and those that followed—was dynamite. Here was an actual scientist at America’s biggest gene center explaining why the official story might be wrong. “Coronavirus did NOT come from animals in Wuhan market,” screamed a Daily Mail headline, in what became Chan’s first breakout into the public conversation.
While her report was a media success, what the Daily Mail described as Chan’s “landmark paper” has still never been formally accepted by a scientific journal. Chan says that’s because of censorship due to her raising the lab-origin possibility. Eisen of UC Davis, however, thinks Chan’s expectations for how the covid-19 virus should have behaved remain conjecture. He doesn’t think we’ve traced enough outbreaks in enough molecular detail to really know what’s normal. And, he notes, covid-19 has continued to change and adapt.
“My colleagues said, This is a conspiracy—don’t bother. I said, No, I am going to treat this like any other paper,” says Eisen, who took time to study the manuscript. “I think it’s interesting what she tried to do, but I am not convinced by the conclusion, and I think the inferences were wrong. I do commend her for posting it. Many of the people pushing the lab-origin theory are not making claims based on logic, but she presented her evidence. I don’t agree with it, but that is science.”
Wrong or right, though, the word Chan used—“pre-adapted”—sent shivers up the spines of people like author Nicholson Baker. “We were dealing with a disease that was exceptionally good, right out of the gate, at chewing up human airways,” says Baker, who got in touch with Chan to learn more. Several months later, in January of this year, Baker would publish a lengthy report in New York magazine saying he’d become convinced a laboratory accident was to blame. He cited a variety of sources, including Chan.
Chan wasn’t done knocking holes in the natural-origins narrative. She next took on four papers, rapidly published early in 2020 (two of them in Nature), describing viruses in pangolins—endangered scale-covered mammals sometimes eaten as delicacies in China—that shared similarities with SARS-CoV-2. If researchers could find all the components of the pandemic virus in wild animals, especially ones illegally trafficked as food, they could cinch the case for a spillover from nature, given the way coronaviruses swap parts. The pangolin papers were a promising start. To the authors of “Proximal Origins,” these similar viruses offered “strong” and “parsimonious” evidence for natural emergence.
Chan and Zhan noticed that all the papers described the same batch of animals—even though some failed to acknowledge the overlap. One even relabeled the data, which made it appear novel. To Chan, that wasn’t just sloppy work or scientific misconduct. There could, she believed, have been “coordination” between the overlapping authors of all these papers, some of whom had published together before. She created the hashtag #pangolinpapers—calling to mind the Panama Papers, documents that exposed secret offshore financial dealings.
Maybe, she thought, researchers were now laundering data to make it seem that nature was swimming with similar viruses.
Chan started emailing authors and journals to get the raw data she needed to more fully analyze what they had done. Making such data available is usually a condition of publication, but it can still be hard to obtain. After what she calls months of stonewalling, Chan finally lost her cool and blasted an accusation out from her browser. “I need the scientists + editors who are directly or indirectly covering up severe research integrity issues surrounding some of the key SARS-2-like viruses to stop and think for a bit,” she posted to Twitter. “If your actions obscure SARS2 origins, you’re playing a hand in the death of millions of people.”
Eddie Holmes, a prominent Australian virologist and coauthor of one of those papers (as well as “Proximal Origins”), called the tweet “one of the most despicable things I read on the origins issue.” He felt accused, but he wondered what he was being accused of, since his paper had correctly accounted for its pangolin data sources. Holmes then circulated an intricate timeline Chan had prepared of the publication dates and past connections between the authors. The chart’s dense web of arrows and connections bore an unmistakable resemblance to an obsessive’s cork board covered with red string and thumbtacks.
How the idea of a “transgender contagion” went viral—and caused untold harm
The results were in line with what one might expect given those sources: 76.5% of parents surveyed “believed their child was incorrect in their belief of being transgender.” More than 85% said their child had increased their internet use and/or had trans friends before identifying as trans. The youths themselves had no say in the study, and there’s no telling if they had simply kept their parents in the dark for months or years before coming out. (Littman acknowledges that “parent-child conflict may also explain some of the findings.”)
Arjee Restar, now an assistant professor of epidemiology at the University of Washington, didn’t mince words in her 2020 methodological critique of the paper. Restar noted that Littman chose to describe the “social and peer contagion” hypothesis in the consent document she shared with parents, opening the door for biases in who chose to respond to the survey and how they did so. She also highlighted that Littman asked parents to offer “diagnoses” of their child’s gender dysphoria, which they were unqualified to do without professional training. It’s even possible that Littman’s data could contain multiple responses from the same parent, Restar wrote. Littman told MIT Technology Review that “targeted recruitment [to studies] is a really common practice.” She also called attention to the corrected ROGD paper, which notes that a pro-gender-affirming parents’ Facebook group with 8,000 members posted the study’s recruitment information on its page—although Littman’s study was not designed to be able to discern whether any of them responded.
But politics is blind to nuances in methodology. And the paper was quickly seized on by those already pushing back against increasing acceptance of trans people. In 2014, a few years before Littman published her ROGD paper, Time magazine had put Laverne Cox, the trans actress from Orange Is the New Black, on its cover and declared a “transgender tipping point.” By 2016, bills across the country aiming to bar trans people from bathrooms matching their gender identity had failed, and the one that succeeded, in North Carolina, cost its Republican governor, Pat McCrory, his job.
Yet by 2018 a renewed backlash was well underway—one that zeroed in on trans youth. The debate about trans youth competing in sports went national, as did a heavily publicized Texas custody battle between a mother who supported her trans child and a father who didn’t. Groups working to further marginalize trans people, like the Alliance Defending Freedom and the Family Research Council, began “printing off bills and introducing them to state legislators,” says Gillian Branstetter, a communications strategist at the American Civil Liberties Union.
The ROGD paper was not funded by anti-trans zealots. But it arrived at exactly the time people with bad intentions were looking for science to buoy their opinions. The paper “laundered what had previously been the rantings of online conspiracy theorists and gave it the resemblance of serious scientific study,” Branstetter says. She believes that if Littman’s paper had not been published, a similar argument would have been made by someone else. Despite its limitations, it has become a crucial weapon in the fight against trans people, largely through online dissemination. “It is astonishing that such a blatantly bad-faith effort has been taken so seriously,” Branstetter says.
Littman plainly rejects that characterization, saying her goal was simply to “find out what’s going on.” “This was a very good-faith attempt,” she says. “As a person I am liberal; I’m pro-LGBT. I saw a phenomenon with my own eyes and I investigated, found that it was different than what was in the scientific literature.”
One reason for the success of Littman’s paper is that it validates the idea that trans kids are new. But Jules Gill-Peterson, an associate professor of history at Johns Hopkins and author of Histories of the Transgender Child, says that is “empirically untrue.” Trans children have only recently started to be discussed in mainstream media, so people assume they weren’t around before, she says, but “there have been children transitioning for as long as there has been transition-related medical technology,” and children were socially transitioning—living as a different gender without any medical or legal interventions—long before that.
Many trans people are young children when they first observe a dissonance between how they are identified and how they identify. The process of transitioning is never simple, but the explanation of their identity might be.
Inside the software that will become the next battle front in US-China chip war
EDA software is a small but mighty part of the semiconductor supply chain, and it’s mostly controlled by three Western companies. That gives the US a powerful point of leverage, similar to the way it moved last month to restrict access to lithography machines—another crucial tool for chipmaking. So how did the industry become so American-centric, and why can’t China just develop its own alternative software?
What is EDA?
Electronic design automation (also known as electronic computer-aided design, or ECAD) is the specialized software used in chipmaking. It’s like the CAD software that architects use, except it’s more sophisticated, since it deals with billions of minuscule transistors on an integrated circuit.
There’s no single dominant software program that represents the best in the industry. Instead, a series of software modules is used throughout the design flow: logic design, debugging, component placement, wire routing, optimization of timing and power consumption, verification, and more. Because modern-day chips are so complex, each step requires a different software tool.
How important is EDA to chipmaking?
Although the global EDA market was valued at only around $10 billion in 2021, making it a small fraction of the $595 billion semiconductor market, it’s of unique importance to the entire supply chain.
The semiconductor ecosystem today can be seen as a triangle, says Mike Demler, a consultant who has been in the chip design and EDA industry for over 40 years. On one corner are the foundries, or chip manufacturers like TSMC; on another corner are intellectual-property companies like ARM, which make and sell reusable design units or layouts; and on the third corner are the EDA tools. All three together make sure the supply chain moves smoothly.
From the name, it may sound as if EDA tools matter only to chip design firms, but they are also used by chip manufacturers to verify that a design is feasible before production. There’s no way for a foundry to make a single chip as a prototype; it has to commit to months of production, and each run fabricates hundreds of chips on the same semiconductor wafer. It would be an enormous waste if those chips were found to have design flaws. Manufacturers therefore rely on a special type of EDA tool to do their own validation.
What are the leading companies in the EDA industry?
There are only a few companies that sell software for each step of the chipmaking process, and they have dominated this market for decades. The top three companies—Cadence (American), Synopsys (American), and Mentor Graphics (American but acquired by the German company Siemens in 2017)—control about 70% of the global EDA market. Their dominance is so strong that many EDA startups specialize in one niche use and then sell themselves to one of these three companies, further cementing the oligopoly.
What is the US government doing to restrict EDA exports to China?
US companies’ outsize influence on the EDA industry makes it easy for the US government to squeeze China’s access. In its latest announcement, it pledged to add certain EDA tools to its list of technologies banned from export. The US will coordinate with 41 other countries, including Germany, to implement these restrictions.
Bright LEDs could spell the end of dark skies
Specifications in the current proposal provide a starting point for planning, including a color temperature cutoff of 3,000 K, in line with Pittsburgh’s dark-sky ordinance, which passed last fall. However, Martinez says that is the maximum, and as they look for consultants, they’ll take into account which ones demonstrate dark-sky expertise. The city is also considering—budget and infrastructure permitting—a “network lighting management system,” a kind of “smart” lighting that would allow it to control lighting levels and know when there is an outage.
Martinez says there will be citywide engagement and updates on the status as critical milestones are reached. “We’re in the evaluation period right now,” she says, adding that the next milestone is authorization of a new contract. She acknowledges there is some “passionate interest in street lighting,” and that she too is anxious to see the project come to fruition: “Just because things seem to go quiet doesn’t mean work is not being done.”
While they aren’t meeting with light pollution experts right now, Martinez says the ones they met with during the last proposal round—Stephen Quick and Diane Turnshek of CMU—were “instrumental” in adopting the dark-sky ordinance.
In recent months, Zielinska-Dabkowska says, her “baby” has been the first Responsible Outdoor Light at Night Conference, an international gathering of more than 300 lighting professionals and light pollution researchers held virtually in May. Barentine was among the speakers. “It’s a sign that all of this is really coming along, both as a research subject but also something that attracts the interest of practitioners in outdoor lighting,” he says of the conference.
There is more work to be done, though. The IDA recently released a report summarizing the current state of light pollution research. The 18-page report includes a list of knowledge gaps to be addressed. One is the overall effectiveness of government policies on light pollution. Another is how much light pollution comes from sources other than city streetlights, which a 2020 study found accounted for only 13% of Tucson’s light pollution. It is not clear what makes up the rest, but Barentine suspects the next biggest source in the US and Europe is commercial lighting, such as flashy outdoor LED signs and parking lot lighting.
Working with companies to reduce light emissions can be challenging, says Clayton Trevillyan, Tucson’s chief building officer. “If there is a source of light inside the building, technically it’s not regulated by the outdoor lighting code, even if it is emitting light outside,” Trevillyan says. In some cases, he says, in order to get around the city’s restrictions, businesses have suspended illuminated signs inside buildings but aimed them outside.
For cities trying to implement a lighting ordinance, Trevillyan says, the biggest roadblocks they’ll face are “irrelevant” arguments, specifically claims that reducing the brightness of outdoor lighting will cut down on advertising revenue and make the city more vulnerable to crime. The key to successfully enforcing the dark-sky rules, he says, is to educate the public and refuse to give in to people seeking exceptions or exploiting loopholes.
Light pollution experts generally say there is no substantial evidence that more light amounts to greater safety. In Tucson, for example, Barentine says, neither traffic accidents nor crime appeared to increase after the city started dimming its streetlights at night and restricting outdoor lighting in 2017. Last year, researchers at the University of Pennsylvania analyzed crime rates alongside 300,000 streetlight outages over an eight-year period. They concluded there is “little evidence” of any impact on crime rates on the affected streets—in fact, perpetrators seemed to seek out better-lit adjacent streets. Barentine says there is some evidence that “strategically placed lighting” can help decrease traffic collisions. “Beyond that, things get murky pretty quickly,” he says.