Inside effective altruism, where the far future counts a lot more than the present
Longtermism sees history differently: as a forward march toward inevitable progress. MacAskill references the past often in What We Owe the Future, but only in the form of case studies on the life-improving impact of technological and moral development. He discusses the abolition of slavery, the Industrial Revolution, and the women’s rights movement as evidence of how important it is to continue humanity’s arc of progress before the wrong values get “locked in” by despots. What are the “right” values? MacAskill has a coy approach to articulating them: he argues that “we should focus on promoting more abstract or general moral principles” to ensure that “moral changes stay relevant and robustly positive into the future.”
Climate change, already underway worldwide and already hitting the under-resourced harder than the elite, is notably not a core longtermist cause, as philosopher Emile P. Torres points out in his critiques. While it poses a threat to millions of lives, longtermists argue, it probably won’t wipe out all of humanity; those with the wealth and means to survive can carry on fulfilling our species’ potential. Tech billionaires like Thiel and Larry Page already have plans and real estate in place to ride out a climate apocalypse. (MacAskill, in his new book, names climate change as a serious worry for those alive today, but he considers it an existential threat only in the “extreme” form where agriculture won’t survive.)
“To come to the conclusion that in order to do the most good in the world you have to work on artificial general intelligence is very strange.”
The final mysterious feature of EA’s version of the long view is how its logic ends up in a specific list of technology-based far-off threats to civilization that just happen to align with many of the original EA cohort’s areas of research. “I am a researcher in the field of AI,” says Gebru, “but to come to the conclusion that in order to do the most good in the world you have to work on artificial general intelligence is very strange. It’s like trying to justify the fact that you want to think about the science fiction scenario and you don’t want to think about real people, the real world, and current structural issues. You want to justify how you want to pull billions of dollars into that while people are starving.”
Some EA leaders seem aware that criticism and change are key to expanding the community and strengthening its impact. MacAskill and others have made it explicit that their calculations are estimates (“These are our best guesses,” MacAskill offered on a 2020 podcast episode) and said they’re eager to improve through critical discourse. Both GiveWell and CEA have pages on their websites titled “Our Mistakes,” and in June, CEA ran a contest inviting critiques on the EA forum; the Future Fund has launched prizes of up to $1.5 million for critical perspectives on AI.
“We recognize that the problems EA is trying to address are really, really big and we don’t have a hope of solving them with only a small segment of people,” GiveWell board member and CEA community liaison Julia Wise says of EA’s diversity statistics. “We need the talents that lots of different kinds of people can bring to address these worldwide problems.” Wise also spoke on the topic at the 2020 EA Global Conference, and she actively discusses inclusion and community power dynamics on the CEA forum. The Center for Effective Altruism supports a mentorship program for women and nonbinary people (founded, incidentally, by Carrick Flynn’s wife) that Wise says is expanding to other underrepresented groups in the EA community, and CEA has made an effort to facilitate conferences in more locations worldwide to welcome a more geographically diverse group. But these efforts appear to be limited in scope and impact; CEA’s public-facing page on diversity and inclusion hasn’t even been updated since 2020. As the tech-utopian tenets of longtermism take a front seat in EA’s rocket ship and a few billionaire donors chart its path into the future, it may be too late to alter the DNA of the movement.
Politics and the future
Despite the sci-fi sheen, effective altruism today is a conservative project, consolidating decision-making behind a technocratic belief system and a small set of individuals, potentially at the expense of local and intersectional visions for the future. But EA’s community and successes were built around clear methodologies that may not transfer into the more nuanced political arena that some EA leaders and a few big donors are pushing toward. According to Wise, the community at large is still split on politics as an approach to pursuing EA’s goals, with some dissenters believing politics is too polarized a space for effective change.
But EA is not the only charitable movement looking to political action to reshape the world; the philanthropic field generally has been moving into politics for greater impact. “We have an existential political crisis that philanthropy has to deal with. Otherwise, a lot of its other goals are going to be hard to achieve,” says Inside Philanthropy’s Callahan, using a definition of “existential” that differs from MacAskill’s. But while EA may offer a clear rubric for determining how to give charitably, the political arena presents a messier challenge. “There’s no easy metric for how to gain political power or shift politics,” he says. “And Sam Bankman-Fried has so far demonstrated himself not the most effective political giver.”
Bankman-Fried has articulated his own political giving as “more policy than politics,” and has donated primarily to Democrats through his short-lived Protect Our Future PAC (which backed Carrick Flynn in Oregon) and the Guarding Against Pandemics PAC (which is run by his brother Gabe and publishes a cross-party list of its “champions” to support). Ryan Salame, the co-CEO with Bankman-Fried of FTX, funded his own PAC, American Dream Federal Action, which focuses mainly on Republican candidates. (Bankman-Fried has said Salame shares his passion for preventing pandemics.) Guarding Against Pandemics and the Open Philanthropy Action Fund (Open Philanthropy’s political arm) spent more than $18 million to get an initiative on the California state ballot this fall to fund pandemic research and action through a new tax.
IBM wants to build a 100,000-qubit quantum computer
Quantum computing holds and processes information in a way that exploits the unique properties of fundamental particles: electrons, atoms, and small molecules can exist in multiple energy states at once, a phenomenon known as superposition, and the states of particles can become linked, or entangled, with one another. This means that information can be encoded and manipulated in novel ways, opening the door to a swath of classically impossible computing tasks.
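Superposition and entanglement can be illustrated numerically. The following is a minimal sketch (using NumPy, not a quantum-computing library): a single qubit in an equal superposition, and a two-qubit Bell state whose measurement outcomes are perfectly correlated.

```python
import numpy as np

# Single-qubit basis states |0> and |1> as 2-dimensional vectors.
zero = np.array([1.0, 0.0])
one = np.array([0.0, 1.0])

# Superposition: an equal mix of |0> and |1>.
plus = (zero + one) / np.sqrt(2)

# Measurement probabilities are the squared amplitudes: 50/50.
probs = plus ** 2

# Entanglement: the two-qubit Bell state (|00> + |11>) / sqrt(2).
bell = (np.kron(zero, zero) + np.kron(one, one)) / np.sqrt(2)

# Squared amplitudes over the outcomes (00, 01, 10, 11):
# measuring finds both qubits 0 or both 1, each with probability 1/2,
# and never mixed outcomes — the qubits' states are linked.
bell_probs = bell ** 2
```

It is this correlated, multi-state structure that lets quantum machines encode information in ways classical bits cannot.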
As yet, quantum computers have not achieved anything useful that standard supercomputers cannot do. That is largely because they haven’t had enough qubits and because the systems are easily disrupted by tiny perturbations in their environment that physicists call noise.
Researchers have been exploring ways to make do with noisy systems, but many expect that quantum systems will have to scale up significantly to be truly useful, so that they can devote a large fraction of their qubits to correcting the errors induced by noise.
IBM is not the first to aim big. Google has said it is targeting a million qubits by the end of the decade, though error correction means only 10,000 will be available for computations. Maryland-based IonQ is aiming to have 1,024 “logical qubits,” each of which will be formed from an error-correcting circuit of 13 physical qubits, performing computations by 2028. Palo Alto–based PsiQuantum is likewise aiming for a million-qubit quantum computer, but it has not revealed its time scale or its error-correction requirements.
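The gap between physical and logical qubits is worth making concrete. Using only the figures cited above, the back-of-envelope arithmetic looks like this:

```python
# Error-correction overhead, using the companies' own stated figures.

# IonQ: 1,024 logical qubits, each built from 13 physical qubits.
ionq_physical = 1024 * 13   # = 13,312 physical qubits in total

# Google: 1,000,000 physical qubits, of which only 10,000 are usable
# for computation — roughly a 100:1 error-correction overhead.
google_overhead = 1_000_000 / 10_000   # = 100.0
```

So a headline qubit count can overstate usable computing power by one to two orders of magnitude, depending on the error-correction scheme.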
Because of those requirements, citing the number of physical qubits is something of a red herring—the particulars of how they are built, which affect factors such as their resilience to noise and their ease of operation, are crucially important. The companies involved usually offer additional measures of performance, such as “quantum volume” and the number of “algorithmic qubits.” In the next decade advances in error correction, qubit performance, and software-led error “mitigation,” as well as the major distinctions between different types of qubits, will make this race especially tricky to follow.
Refining the hardware
IBM’s qubits are currently made from rings of superconducting metal, which follow the same rules as atoms when operated at millikelvin temperatures, just a tiny fraction of a degree above absolute zero. In theory, these qubits can be operated in a large ensemble. But according to IBM’s own road map, quantum computers of the sort it’s building can only scale up to 5,000 qubits with current technology. Most experts say that’s not big enough to yield much in the way of useful computation. To create powerful quantum computers, engineers will have to go bigger. And that will require new technology.
How it feels to have a life-changing brain implant removed
Burkhart’s device was implanted in his brain around nine years ago, a few years after he was left unable to move his limbs following a diving accident. He volunteered to trial the device, which enabled him to move his hand and fingers. But it had to be removed seven and a half years later.
His particular implant was a small set of 100 electrodes, carefully inserted into a part of the brain that helps control movement. It worked by recording brain activity and sending these recordings to a computer, where they were processed using an algorithm. This was connected to a sleeve of electrodes worn on the arm. The idea was to translate thoughts of movement into electrical signals that would trigger movement.
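Conceptually, the decoding chain described above maps recorded neural activity to an intended movement, then to a stimulation pattern for the sleeve. This toy sketch is purely illustrative (the decoder weights, movement labels, and sleeve patterns are all hypothetical, and the real system learns its mapping from training sessions):

```python
import numpy as np

rng = np.random.default_rng(0)
N_ELECTRODES = 100  # the implant described above records from 100 electrodes
MOVEMENTS = ["rest", "hand_open", "hand_close"]

# Hypothetical linear decoder weights (learned during training in reality).
weights = rng.standard_normal((len(MOVEMENTS), N_ELECTRODES))

def decode(features):
    """Pick the movement whose weight vector best matches one window of features."""
    scores = weights @ features
    return MOVEMENTS[int(np.argmax(scores))]

def stimulation_pattern(movement):
    """Map a decoded movement to a (made-up) set of sleeve electrodes to activate."""
    patterns = {"rest": [], "hand_open": [3, 7, 12], "hand_close": [1, 5, 9]}
    return patterns[movement]

features = rng.standard_normal(N_ELECTRODES)  # one window of recorded activity
move = decode(features)
sleeve = stimulation_pattern(move)
```

The real pipeline is far more sophisticated, but the shape is the same: record, classify, stimulate.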
Burkhart was the first to receive the implant, in 2014; he was 24 years old. Once he had recovered from the surgery, he began a training program to learn how to use it. Three times a week for around a year and a half, he visited a lab where the implant could be connected to a computer via a cable leading out of his head.
“It worked really well,” says Burkhart. “We started off just being able to open and close my hand, but after some time we were able to do individual finger movements.” He was eventually able to combine movements and control his grip strength. He was even able to play Guitar Hero.
“There was a lot that I was able to do, which was exciting,” he says. “But it was also still limited.” Not only was he only able to use the device in the lab, but he could only perform lab-based tasks. “Any of the activities we would do would be simplified,” he says.
For example, he could pour a bottle out, but it was only a bottle of beads, because the researchers didn’t want liquids around the electrical equipment. “It was kind of a bummer it wasn’t changing everything in my life, because I had seen how beneficial it could be,” he says.
At any rate, the device worked so well that the team extended the trial. Burkhart was initially meant to have the implant in place for 12 to 18 months, he says. “But everything was really successful … so we were able to continue on for quite a while after that.” The trial was extended on an annual basis, and Burkhart continued to visit the lab twice a week.
The Download: brain implant removal, and Nvidia’s AI payoff
Leggett told researchers that she “became one” with her device. It helped her to control the unpredictable, violent seizures she routinely experienced, and allowed her to take charge of her own life. So she was devastated when, two years later, she was told she had to have the implant removed because the company that made it had gone bust.
The removal of this implant, and others like it, might represent a breach of human rights, ethicists say in a paper published earlier this month. And the issue will only become more pressing as the brain implant market grows in the coming years and more people receive devices like Leggett’s. Read the full story.
You can read more about what happens to patients when their life-changing brain implants are removed against their wishes in the latest issue of The Checkup, Jessica’s weekly newsletter giving you the inside track on all things biotech. Sign up to receive it in your inbox every Thursday.
If you’d like to read more about brain implants, why not check out:
+ Brain waves can tell us how much pain someone is in. The research could open doors for personalized brain therapies to target and treat the worst kinds of chronic pain. Read the full story.
+ An ALS patient set a record for communicating via a brain implant. Brain interfaces could let paralyzed people speak at almost normal speeds. Read the full story.
+ Here’s how personalized brain stimulation could treat depression. Implants that track and optimize our brain activity are on the way. Read the full story.