Some argue that governments should also create separate targets to ensure that carbon removal (sometimes called “negative emissions”) does not count toward emissions reduction goals.
“Failure to make such a separation has already hampered climate policy, exaggerating the expected future contribution of negative emissions in climate models, while also obscuring the extent and pace of the investment needed to deliver negative emissions,” McLaren and others argued in Frontiers in Climate in 2019.
Sweden did a version of this, setting a goal of cutting emissions at least 85% below 1990 levels by 2045 and relying largely on carbon removal to get the rest of the way to zero. The European Union included a similar provision in the proposed European Climate Law, limiting the role of carbon removal to 225 million tons, a little more than 2 percentage points of the overall goal of a 55% reduction in emissions by 2030.
“It is now set in stone that the overwhelming majority of the EU’s mitigation efforts will need to be done by reducing emissions, with carbon removal helping to go the extra mile,” wrote Frances Wang and Mark Preston Aragonès, both of the ClimateWorks Foundation.
Early stage and high risk
Sally Benson, a professor of energy resources engineering at Stanford, says the money flowing into carbon-removal startups today reminds her of clean tech in the 2000s, when investment poured into technologies that were very early stage and high risk.
Many of those investments didn’t pay off, as companies developing advanced biofuels and alternative solar materials failed in the marketplace.
“I do worry a little bit that that’s where we are with the carbon removal technologies,” she said in an email. “Some of the ones that are most mature and likely to succeed and make a material difference, like BECCS [bioenergy with carbon capture and storage], are getting a lot less attention compared to less mature technologies like direct air capture and mineralization.”
But she stresses that these are likely to be crucial technologies in the future, and “we’ve got to start somewhere.”
Yann LeCun has a bold new vision for the future of AI
Melanie Mitchell, an AI researcher at the Santa Fe Institute, is also excited to see a whole new approach. “We really haven’t seen this coming out of the deep-learning community so much,” she says. She also agrees with LeCun that large language models cannot be the whole story. “They lack memory and internal models of the world that are actually really important,” she says.
Natasha Jaques, a researcher at Google Brain, thinks that language models should still play a role, however. It’s odd for language to be entirely missing from LeCun’s proposals, she says: “We know that large language models are super effective and bake in a bunch of human knowledge.”
Jaques, who works on ways to get AIs to share information and abilities with each other, points out that humans don’t have to have direct experience of something to learn about it. We can change our behavior simply by being told something, such as not to touch a hot pan. “How do I update this world model that Yann is proposing if I don’t have language?” she asks.
There’s another issue, too. If they were to work, LeCun’s ideas would create a powerful technology that could be as transformative as the internet. And yet his proposal doesn’t discuss how his model’s behavior and motivations would be controlled, or who would control them. This is a weird omission, says Abhishek Gupta, the founder of the Montreal AI Ethics Institute and a responsible-AI expert at Boston Consulting Group.
“We should think more about what it takes for AI to function well in a society, and that requires thinking about ethical behavior, amongst other things,” says Gupta.
Yet Jaques notes that LeCun’s proposals are still very much ideas rather than practical applications. Mitchell says the same: “There’s certainly little risk of this becoming a human-level intelligence anytime soon.”
LeCun would agree. His aim is to sow the seeds of a new approach in the hope that others build on it. “This is something that is going to take a lot of effort from a lot of people,” he says. “I’m putting this out there because I think ultimately this is the way to go.” If nothing else, he wants to convince people that large language models and reinforcement learning are not the only ways forward.
“I hate to see people wasting their time,” he says.
The Download: Yann LeCun’s AI vision, and smart cities’ unfulfilled promises
“We’re addicted to being on Facebook.”
—Jordi Berbera, who runs a pizza stand in Mexico City, tells Rest of World why he has turned to selling his wares through the social network instead of through more conventional food delivery apps.
The big story
“Am I going crazy or am I being stalked?” Inside the disturbing online world of gangstalking
Jenny’s story is not linear, the way that we like stories to be. She was born in Baltimore in 1975 and had a happy, healthy childhood—her younger brother Danny fondly recalls the treasure hunts she would orchestrate. In her late teens, she developed anorexia and depression and was hospitalized for a month. Despite her struggles, she graduated high school and was accepted into a prestigious liberal arts college.
There, things went downhill again. Among other issues, chronic fatigue led her to drop out. When she was 25, she flipped her car on Florida’s Sunshine Skyway Bridge in an apparent suicide attempt. At 30, after experiencing delusions that she was pregnant, she was diagnosed with schizophrenia. She was hospitalized for half a year and began treatment, regularly receiving shots of an antipsychotic drug. “It was like having my older sister back again,” Danny says.
On July 17, 2017, Jenny jumped from the tenth floor of a parking garage at Tampa International Airport. After her death, her family searched her hotel room and her apartment, but the 42-year-old didn’t leave a note. “We wanted to find a reason for why she did this,” Danny says. And so, a week after his sister’s death, Danny—a certified ethical hacker—decided to look for answers on Jenny’s computer. He found she had subscribed to hundreds of gangstalking groups across Facebook, Twitter, and Reddit: online communities where self-described “targeted individuals” say they are being monitored, harassed, and stalked 24/7 by governments and other organizations, and where the internet legitimizes their fears. Read the full story.
The US Supreme Court has overturned Roe v. Wade. What does that mean?
Access to legal abortion is now subject to state laws, allowing each state to decide whether to ban, restrict, or allow abortion. Some parts of the country are much stricter than others—Arkansas, Oklahoma, and Kentucky are among the 13 states with trigger laws that made abortion illegal immediately after the ruling. In total, around half of states are likely to either ban or limit access to the procedure, with many of them refusing to make exceptions even for pregnancies resulting from rape or incest, or involving fetuses with genetic abnormalities. Many specialized abortion clinics may be forced to close their doors in the coming days and weeks.
While overturning Roe v. Wade will not spell an end to abortion in the US, it is likely to reduce the number of abortions and force people seeking one to turn to other methods. Those living in states that ban or heavily restrict abortion may consider traveling to areas that will continue to allow it, although crossing state lines can be time-consuming and prohibitively expensive, especially for people facing financial hardship.
The decision also makes it more likely that anti-abortion activists will use surveillance and data collection to track and identify people seeking abortions. That information could be used to criminalize them, making it particularly dangerous to leave home and cross state lines.
Vigilante volunteers already stake out abortion clinics in states including Mississippi, Florida and North Carolina, filming people’s arrival on cameras and recording details about them and their cars. While they deny the data is used to harass or contact people seeking abortions, experts are concerned that footage filmed of clients arriving and leaving clinics could be exploited to target and harm them, particularly if law enforcement agencies or private groups were to use facial recognition to identify them.
Another option is to order so-called abortion pills to discreetly end a pregnancy at home. The pills, which are safe and widely prescribed by doctors, are significantly less expensive than surgical procedures, and already account for the majority of abortions in the US.