Hubble’s aging hardware was last serviced directly in 2009 by space shuttle astronauts, and engineers estimated back then that it would last until around 2016. “After a few years in flight with all the refurbishes, engineers reevaluated the survivability and reliability of the instruments and started pushing everything much further out,” says Tom Brown, the head of the Hubble Space Telescope mission office at the Space Telescope Science Institute in Baltimore. “The most recent estimates say that there’s an excellent chance we’re going to be doing science like we do today until at least 2026, and perhaps the whole decade. It’s looking pretty good right now.”
Hubble has been used in practically every kind of astronomy investigation: studying planets and moons in our own solar system; peering at distant stars, galaxies, supernovas, nebulas, and other astrophysical phenomena; studying the origins and expansion of the universe.
Its work in exoplanet science in the last decade has been especially surprising, considering that when the telescope was launched in 1990, we were still five years away from detecting the first exoplanet orbiting a sun-like star. Hubble isn’t useful for actually finding exoplanets but, rather, for follow-up observations that can characterize planets and their atmospheres once they’re found. When the James Webb Space Telescope launches later this year, the two observatories combined might finally help scientists identify an Earth-like world that’s truly hospitable to life.
The JWST is often promoted as Hubble’s successor, but that isn’t quite right. Hubble can observe the universe in visible and ultraviolet wavelengths, while JWST’s focus is on infrared observations, which help us study early-universe objects and characterize the chemistry on other worlds. Being situated in space, Hubble doesn’t have to worry about interference caused by Earth’s atmosphere, which is especially detrimental to ultraviolet observations (the ozone layer blocks out most UV radiation).
This is also critical when we need eyes to study poorly understood phenomena. Take the 2017 detection of gravitational waves produced by the collision of two neutron stars. Hubble was able to observe the event’s aftermath, providing data outside the infrared spectrum that was used to define the shape and evolution of the merger in crisper detail.
Four major scientific instruments are currently active onboard Hubble, so even if one or two things stop working, there is still a ton of major science the rest of the observatory can do. The telescope is also built with a lot of redundancy, so single hardware and software failures don’t necessarily stop individual instruments from working.
That being said, there are no plans for a new service mission. If there’s a catastrophic failure that takes Hubble entirely offline, it’s hard to see NASA greenlighting a repair mission for an observatory that’s over three decades old.
So what replaces Hubble when it’s finally ready to retire? Brown says other nations have nascent plans to put other missions in orbit that could take up the visible and UV investigations currently run by Hubble. India’s Astrosat space telescope currently does UV observations from space, but with a much smaller aperture. China is looking to launch a space telescope called Xuntian in 2024, and state media says it will observe an area of space 300 times larger than Hubble can.
The true successor to Hubble might be NASA’s proposed Large Ultraviolet Optical Infrared Surveyor space telescope, or LUVOIR, a general-purpose observatory capable of observing in multiple wavelengths (including infrared, optical, and ultraviolet). But if funded, LUVOIR wouldn’t launch until 2039 at the earliest.
It’s possible Hubble will stay on until it can be truly replaced, but most astronomers are bracing for a big knowledge gap when it finally stops working. “Hubble is really the premier game for doing ultraviolet and optical astronomy,” says Brown. “So much of astronomy, especially when it comes to understanding temperature and chemistry in outer space, hinges on the information you can really get from it. I fear the space community is really going to feel the loss when Hubble stops working.”
How the idea of a “transgender contagion” went viral—and caused untold harm
The results were in line with what one might expect given those sources: 76.5% of parents surveyed “believed their child was incorrect in their belief of being transgender.” More than 85% said their child had increased their internet use and/or had trans friends before identifying as trans. The youths themselves had no say in the study, and there’s no telling if they had simply kept their parents in the dark for months or years before coming out. (Littman acknowledges that “parent-child conflict may also explain some of the findings.”)
Arjee Restar, now an assistant professor of epidemiology at the University of Washington, didn’t mince words in her 2020 methodological critique of the paper. Restar noted that Littman chose to describe the “social and peer contagion” hypothesis in the consent document she shared with parents, opening the door for biases in who chose to respond to the survey and how they did so. She also highlighted that Littman asked parents to offer “diagnoses” of their child’s gender dysphoria, which they were unqualified to do without professional training. It’s even possible that Littman’s data could contain multiple responses from the same parent, Restar wrote. Littman told MIT Technology Review that “targeted recruitment [to studies] is a really common practice.” She also called attention to the corrected ROGD paper, which notes that a pro-gender-affirming parents’ Facebook group with 8,000 members posted the study’s recruitment information on its page—although Littman’s study was not designed to be able to discern whether any of them responded.
But politics is blind to nuances in methodology. And the paper was quickly seized by those who were already pushing back against increasing acceptance of trans people. In 2014, a few years before Littman published her ROGD paper, Time magazine had put Laverne Cox, the trans actress from Orange Is the New Black, on its cover and declared a “transgender tipping point.” By 2016, bills across the country that aimed to bar trans people from bathrooms matching their gender identity had failed, and one that succeeded, in North Carolina, cost its Republican governor, Pat McCrory, his job.
Yet by 2018 a renewed backlash was well underway—one that zeroed in on trans youth. The debate about trans youth competing in sports went national, as did a heavily publicized Texas custody battle between a mother who supported her trans child and a father who didn’t. Groups working to further marginalize trans people, like the Alliance Defending Freedom and the Family Research Council, began “printing off bills and introducing them to state legislators,” says Gillian Branstetter, a communications strategist at the American Civil Liberties Union.
The ROGD paper was not funded by anti-trans zealots. But it arrived at exactly the time people with bad intentions were looking for science to buoy their opinions. The paper “laundered what had previously been the rantings of online conspiracy theorists and gave it the resemblance of serious scientific study,” Branstetter says. She believes that if Littman’s paper had not been published, a similar argument would have been made by someone else. Despite its limitations, it has become a crucial weapon in the fight against trans people, largely through online dissemination. “It is astonishing that such a blatantly bad-faith effort has been taken so seriously,” Branstetter says.
Littman plainly rejects that characterization, saying her goal was simply to “find out what’s going on.” “This was a very good-faith attempt,” she says. “As a person I am liberal; I’m pro-LGBT. I saw a phenomenon with my own eyes and I investigated, found that it was different than what was in the scientific literature.”
One reason for the success of Littman’s paper is that it validates the idea that trans kids are new. But Jules Gill-Peterson, an associate professor of history at Johns Hopkins and author of Histories of the Transgender Child, says that is “empirically untrue.” Trans children have only recently started to be discussed in mainstream media, so people assume they weren’t around before, she says, but “there have been children transitioning for as long as there has been transition-related medical technology,” and children were socially transitioning—living as a different gender without any medical or legal interventions—long before that.
Many trans people are young children when they first observe a dissonance between how they are identified and how they identify. The process of transitioning is never simple, but the explanation of their identity might be.
Inside the software that will become the next battle front in US-China chip war
EDA software is a small but mighty part of the semiconductor supply chain, and it’s mostly controlled by three Western companies. That gives the US a powerful point of leverage, similar to the way it wanted to restrict access to lithography machines—another crucial tool for chipmaking—last month. So how has the industry become so American-centric, and why can’t China just develop its own alternative software?
What is EDA?
Electronic design automation (also known as electronic computer-aided design, or ECAD) is the specialized software used in chipmaking. It’s like the CAD software that architects use, except it’s more sophisticated, since it deals with billions of minuscule transistors on an integrated circuit.
There’s no single dominant software program that represents the best in the industry. Instead, a series of software modules are often used throughout the whole design flow: logic design, debugging, component placement, wire routing, optimization of time and power consumption, verification, and more. Because modern-day chips are so complex, each step requires a different software tool.
How important is EDA to chipmaking?
Although the global EDA market was valued at only around $10 billion in 2021, making it a small fraction of the $595 billion semiconductor market, it’s of unique importance to the entire supply chain.
The semiconductor ecosystem today can be seen as a triangle, says Mike Demler, a consultant who has been in the chip design and EDA industry for over 40 years. On one corner are the foundries, or chip manufacturers like TSMC; on another corner are intellectual-property companies like ARM, which make and sell reusable design units or layouts; and on the third corner are the EDA tools. All three together make sure the supply chain moves smoothly.
From the name, it may sound as if EDA tools are only important to chip design firms, but they are also used by chip manufacturers to verify that a design is feasible before production. There’s no way for a foundry to make a single chip as a prototype; it has to invest months of time and production capacity, and each run fabricates hundreds of chips on the same semiconductor wafer. It would be an enormous waste if they were found to have design flaws. Therefore, manufacturers rely on a special type of EDA tool to do their own validation.
What are the leading companies in the EDA industry?
There are only a few companies that sell software for each step of the chipmaking process, and they have dominated this market for decades. The top three companies—Cadence (American), Synopsys (American), and Mentor Graphics (American but acquired by the German company Siemens in 2017)—control about 70% of the global EDA market. Their dominance is so strong that many EDA startups specialize in one niche use and then sell themselves to one of these three companies, further cementing the oligopoly.
What is the US government doing to restrict EDA exports to China?
US companies’ outsize influence on the EDA industry makes it easy for the US government to squeeze China’s access. In its latest announcement, it pledged to add certain EDA tools to its list of technologies banned from export. The US will coordinate with 41 other countries, including Germany, to implement these restrictions.
Bright LEDs could spell the end of dark skies
Specifications in the current proposal provide a starting point for planning, including a color temperature cutoff of 3,000 K in line with Pittsburgh’s dark-sky ordinance, which passed last fall. However, Martinez says that is the maximum, and as they look for consultants, they’ll be taking into account which ones show dark-sky expertise. The city is also considering—budget and infrastructure permitting—a “network lighting management system,” a kind of “smart” lighting that would allow them to control lighting levels and know when there is an outage.
Martinez says there will be citywide engagement and updates on the status as critical milestones are reached. “We’re in the evaluation period right now,” she says, adding that the next milestone is authorization of a new contract. She acknowledges there is some “passionate interest in street lighting,” and that she too is anxious to see the project come to fruition: “Just because things seem to go quiet doesn’t mean work is not being done.”
While they aren’t meeting with light pollution experts right now, Martinez says the ones they met with during the last proposal round—Stephen Quick and Diane Turnshek of CMU—were “instrumental” in adopting the dark-sky ordinance.
In recent months, Zielinska-Dabkowska says, her “baby” has been the first Responsible Outdoor Light at Night Conference, an international gathering of more than 300 lighting professionals and light pollution researchers held virtually in May. Barentine was among the speakers. “It’s a sign that all of this is really coming along, both as a research subject but also something that attracts the interest of practitioners in outdoor lighting,” he says of the conference.
There is more work to be done, though. The IDA recently released a report summarizing the current state of light pollution research. The 18-page report includes a list of knowledge gaps to be addressed in several areas, including the overall effectiveness of government policies on light pollution. Another is how much light pollution comes from sources other than city streetlights, which a 2020 study found accounted for only 13% of Tucson’s light pollution. It is not clear what makes up the rest, but Barentine suspects the next biggest source in the US and Europe is commercial lighting, such as flashy outdoor LED signs and parking lot lighting.
Working with companies to reduce light emissions can be challenging, says Clayton Trevillyan, Tucson’s chief building officer. “If there is a source of light inside the building, technically it’s not regulated by the outdoor lighting code, even if it is emitting light outside,” Trevillyan says. In some cases, he says, in order to get around the city’s restrictions, businesses have suspended illuminated signs inside buildings but aimed them outside.
For cities trying to implement a lighting ordinance, Trevillyan says, the biggest roadblocks they’ll face are “irrelevant” arguments, specifically claims that reducing the brightness of outdoor lighting will cut down on advertising revenue and make the city more vulnerable to crime. The key to successfully enforcing the dark-sky rules, he says, is to educate the public and refuse to give in to people seeking exceptions or exploiting loopholes.
Light pollution experts generally say there is no substantial evidence that more light amounts to greater safety. In Tucson, for example, Barentine says, neither traffic accidents nor crime appeared to increase after the city started dimming its streetlights at night and restricting outdoor lighting in 2017. Last year, researchers at the University of Pennsylvania analyzed crime rates alongside 300,000 streetlight outages over an eight-year period. They concluded there is “little evidence” of any impact on crime rates on the affected streets—in fact, perpetrators seemed to seek out better-lit adjacent streets. Barentine says there is some evidence that “strategically placed lighting” can help decrease traffic collisions. “Beyond that, things get murky pretty quickly,” he says.