Machine-learning project takes aim at disinformation
There’s nothing new about conspiracy theories, disinformation, and untruths in politics. What is new is how quickly malicious actors can spread disinformation when the world is tightly connected across social networks and internet news sites. We can give up on the problem and rely on the platforms themselves to fact-check stories or posts and screen out disinformation—or we can build new tools to help people identify disinformation as soon as it crosses their screens.
Preslav Nakov is a computer scientist at the Qatar Computing Research Institute in Doha specializing in speech and language processing. He leads a project using machine learning to assess the reliability of media sources. That allows his team to gather news articles alongside signals about their trustworthiness and political biases, all in a Google News-like format.
“You cannot possibly fact-check every single claim in the world,” Nakov explains. Instead, focus on the source. “I like to say that you can fact-check the fake news before it was even written.” His team’s tool, called the Tanbih News Aggregator, is available in Arabic and English and gathers articles in areas such as business, politics, sports, science and technology, and covid-19.
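To make the source-focused idea concrete, here is a minimal sketch of how a shared link might be rated by its domain rather than by its individual claim. It assumes a hypothetical, pre-built table of media profiles; the domains and labels below are illustrative placeholders, not Tanbih’s actual data.

```python
# Minimal sketch: rate a link by its source rather than its claim.
# MEDIA_PROFILES is an illustrative stand-in for a database of
# continuously re-analyzed outlets (thousands of them in a real system).
from urllib.parse import urlparse

MEDIA_PROFILES = {
    "example-news.com":    {"factuality": "high", "bias": "center"},
    "example-tabloid.com": {"factuality": "low",  "bias": "extreme right"},
}

def profile_for(url: str) -> dict:
    """Return the pre-computed profile for a URL's domain, if we have one."""
    domain = urlparse(url).netloc.lower().removeprefix("www.")
    return MEDIA_PROFILES.get(domain, {"factuality": "unknown", "bias": "unknown"})

print(profile_for("https://www.example-tabloid.com/shock-story"))
# {'factuality': 'low', 'bias': 'extreme right'}
```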
Business Lab is hosted by Laurel Ruma, editorial director of Insights, the custom publishing division of MIT Technology Review. The show is a production of MIT Technology Review, with production help from Collective Next.
This podcast was produced in partnership with the Qatar Foundation.
Show notes and links
Qatar Computing Research Institute
“Even the best AI for spotting fake news is still terrible,” MIT Technology Review, October 3, 2018
Full transcript
Laurel Ruma: From MIT Technology Review, I’m Laurel Ruma, and this is Business Lab, the show that helps business leaders make sense of new technologies coming out of the lab and into the marketplace. Our topic today is disinformation. From fake news, to propaganda, to deep fakes, it may seem like there’s no defense against weaponized news. However, scientists are researching ways to quickly identify disinformation to not only help regulators and tech companies, but also citizens, as we all navigate this brave new world together.
Two words for you: spreading infodemic.
My guest is Dr. Preslav Nakov, who is a principal scientist at the Qatar Computing Research Institute. He leads the Tanbih project, which was developed in collaboration with MIT. He’s also the lead principal investigator of a QCRI MIT collaboration project on Arabic speech and language processing for cross language information search and fact verification. This episode of Business Lab is produced in association with the Qatar Foundation. Welcome, Dr. Nakov.
Preslav Nakov: Thanks for having me.
Laurel Ruma: So why are we deluged with so much online disinformation right now? This isn’t a new problem, right?
Nakov: Of course, it’s not a new problem. It’s not the case that it’s the first time in the history of the universe that people are telling lies or media are telling lies. We had the yellow press, we had all these tabloids for years. It became a problem because of the rise of social media, when it suddenly became possible to have a message that you can send to millions and millions of people. And not only that, you could now tell different things to different people. So, you could microprofile people and you could deliver them a specific personalized message that is designed, crafted for a specific person with a specific purpose to press a specific button on them. The main problem with fake news is not that it’s false. The main problem is that the news actually got weaponized, and this is something that Sir Tim Berners-Lee, the creator of the World Wide Web, has been complaining about: that his invention was weaponized.
Laurel: Yeah, Tim Berners-Lee is obviously distraught that this has happened, and it’s not just in one country or another. It is actually around the world. So is there an actual difference between fake news, propaganda, and disinformation?
Nakov: Sure, there is. I don’t like the term “fake news.” This is the term that has picked up: it was declared “word of the year” by several dictionaries in different years, shortly after the previous presidential election in the US. The problem with fake news is that, first of all, there’s no clear definition. I have been looking into dictionaries, how they define the term. One major dictionary said, “we are not really going to define the term at all, because it’s something self-explanatory—we have ‘news,’ we have ‘fake,’ and it’s news that’s fake; it’s compositional; it was used in the 19th century—there is nothing to define.” Different people put different meaning into this. To some people, fake news is just news they don’t like, regardless of whether it is false. But the main problem with fake news is that it really misleads people, and sadly, even certain major fact-checking organizations, to only focus on one thing, whether it’s true or not.
I prefer, and most researchers working on this prefer, the term “disinformation.” And this is a term that is adopted by major organizations like the United Nations, NATO, the European Union. And disinformation is something that has a very clear definition. It has two components. First, it is something that is false, and second, it has a malicious intent: intent to do harm. And again, the vast majority of research, the vast majority of efforts, many fact-checking initiatives, focus on whether something is true or not. And it’s typically the second part that is actually important. The part whether there is malicious intent. And this is actually what Sir Tim Berners-Lee was talking about when he first talked about the weaponization of the news. The main problem with fake news—if you talk to journalists, they will tell you this—the main problem with fake news is not that it is false. The problem is that it is a political weapon.
And propaganda. What is propaganda? Propaganda is a term that is orthogonal to disinformation. Again, disinformation has two components. It’s false and it has malicious intent. Propaganda also has two components. One is, somebody is trying to convince us of something. And second, there is a predefined goal. Now, we should pay attention. Propaganda is not true; it’s not false. It’s not good; it’s not bad. That’s not part of the definition. So, if a government has a campaign to persuade the public to get vaccinated, you can argue that’s for a good purpose, or let’s say Greta Thunberg trying to scare us that hundreds of species are getting extinct every day. This is a propaganda technique: appeal to fear. But you can argue that’s for a good purpose. So, propaganda is not bad; it’s not good. It’s not true; it’s not false.
Laurel: But propaganda has the goal to do something. And by forcing that goal, it is really appealing to that fear factor. So that is the distinction between disinformation and propaganda: the fear.
Nakov: No, fear is just one of the techniques. We have been looking into this. So, a lot of research has been focusing on binary classification. Is this true? Is this false? Is this propaganda? Is this not propaganda? We have looked a little bit deeper. We have been looking into what techniques have been used to do propaganda. And again, you can talk about propaganda, you can talk about persuasion or public relations, or mass communication. It’s basically the same thing. Different terms for about the same thing. And regarding propaganda techniques, there are two kinds. The first kind are appeals to emotions: it can be appeal to fear, it can be appeal to strong emotions, it can be appeal to patriotic feelings, and so on and so forth. And the other half are logical fallacies: things like black-and-white fallacy. For example, you’re either with us or against us. Or bandwagon. Bandwagon is like, oh, the latest poll shows that 57% are going to vote for Hillary, so we are on the right side of history, you have to join us.
There are several other propaganda techniques. There is red herring, there is intentional obfuscation. We have looked into 18 of those: half of them appeal to emotions, and half of them use certain kinds of logical fallacies, or broken logical reasoning. And we have built tools to detect those in texts, so that you can really show them to the user and make this explicit, so that people can understand how they are being manipulated.
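To make this concrete, here is a minimal sketch of how such fine-grained highlighting could be wired up with the Hugging Face transformers library. The model checkpoint is a hypothetical placeholder for a classifier fine-tuned on propaganda-technique labels; it is not Tanbih’s actual model.

```python
# Minimal sketch: highlight spans of propaganda techniques in a text.
# The checkpoint name is a placeholder for a hypothetical fine-tuned
# token-classification model with technique labels such as
# "Bandwagon" or "Appeal_to_Fear".
from transformers import pipeline

tagger = pipeline(
    "token-classification",
    model="my-org/propaganda-technique-tagger",  # placeholder checkpoint
    aggregation_strategy="simple",               # merge word pieces into spans
)

text = ("The latest poll shows 57% support us, so join the winning side "
        "before it is too late.")

for span in tagger(text):
    # Each span carries a technique label, a confidence score, and character
    # offsets that a reader-facing UI could use for highlighting.
    print(f'{span["entity_group"]:<16} {span["score"]:.2f}  '
          f'"{text[span["start"]:span["end"]]}"')
```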
Laurel: So in the context of the covid-19 pandemic, the director general of the World Health Organization said, and I quote, “We’re not just fighting an epidemic; we’re fighting an infodemic.” How do you define infodemic? What are some of those techniques that we can use to also avoid harmful content?
Nakov: Infodemic, this is something new. Actually, about a year ago, last February, MIT Technology Review had a great article about exactly that. The covid-19 pandemic has given rise to the first global social media infodemic. And again, around the same time, the World Health Organization, back in February, had on their website a list of top five priorities in the fight against the pandemic, and fighting the infodemic was number two in the list of the top five priorities. So, it’s definitely a big problem. What is the infodemic? It’s a merger of a pandemic and the pre-existing disinformation that was already present in social media. It’s also a blending of political and health disinformation. Before that, the political part and, let’s say, the anti-vaxxer movement were separate. Now, everything is blended together.
Laurel: And that’s a real problem. I mean, the World Health Organization’s concern should be fighting the pandemic, but then its secondary concern is fighting disinformation. Finding hope in that kind of fear is very difficult. So one of the projects that you’re working on is called Tanbih. And Tanbih is a news aggregator, right? That uncovers disinformation. So the project itself has a number of goals. One is to uncover stance, bias, and propaganda in the news. The second is to promote different viewpoints and engage users. But then the third is to limit the effect of fake news. How does Tanbih work?
Nakov: Tanbih started indeed as a news aggregator, and it has grown into something much larger than that: a mega-project at the Qatar Computing Research Institute. It spans people from several groups in the institute, and it is developed in cooperation with MIT. We started the project with the aim of developing tools that we can actually put in the hands of the final users. And we decided to do this as part of a news aggregator, think of something like Google News. As users are reading the news, we are signaling to them when something is propagandistic, and we’re giving them background information about the source. What we are doing is analyzing media in advance and building media profiles. So we are showing users to what extent the content is propagandistic. We are telling them whether the news is from a trustworthy source or not, whether it is biased: left, center, right bias. Whether it is extreme: extreme left, extreme right. Also, whether it is biased with respect to specific topics.
And this is something that is very useful. So, imagine that you are reading some article that is skeptical about global warming. If we tell you, look, this news outlet has always been very biased in the same way, then you’ll probably take it with a grain of salt. We are also showing the perspective of reporting, the framing. If you think about it, covid-19, Brexit, any major event can be reported from different perspectives. For example, let’s take covid-19. It has a health aspect, that’s for sure, but it also has an economic aspect, even a political aspect, it has a quality-of-life aspect, it has a human rights aspect, a legal aspect. Thus, we are profiling the media and we are letting users see what their perspective is.
Regarding the media profiles, we are further exposing them as a browser plugin, so that as you are visiting different websites, you can actually click on the plugin and you can get very brief background information about the website. And you can also click on a link to access a more detailed profile. And this is very important: the focus is on the source. Again, most research has been focusing on “is this claim true or not?” And is this piece of news true or not? That’s only half of the problem. The other half is actually whether it is harmful, which is typically ignored.
The other thing is that we cannot possibly fact-check every single claim in the world. Not manually, not automatically. Manually, that’s out of the question. There was a study from MIT Media Lab about two years ago, where they have done a large study on many, many tweets. And it has been shown that false information goes six times farther and spreads much faster than real information. There was another study that is much less famous, but I find it very important, which shows that 50% of the lifetime spread of some very viral fake news happens in the first 10 minutes. In the first 10 minutes! Manual fact-checking takes a day or two, sometimes a week.
Automatic fact-checking? How can we fact-check a claim? Well, if we are lucky, if the claim is that the US economy grew 10% last year, that claim we can automatically check easily, by looking into Wikipedia or some statistical table. But if they say, there was a bomb in this little town two minutes ago? Well, we cannot really fact-check it, because to fact-check it automatically, we need to have some information from somewhere. We want to see what the media are going to write about it or how users are going to react to it. And both of those take time to accumulate. So, basically we have no information to check it. What can we do? What we are proposing is to move at a higher granularity, to focus on the source. And this is what journalists are doing. Journalists are looking into: are there two independent trusted sources that are claiming this?
So we are analyzing media. Even if bad people put a claim in social media, they are probably going to put a link to a website where one can find a whole story. Yet, they cannot create a new fake news website for every fake claim that they are making. They are going to reuse them. Thus, we can monitor what are the most frequently used websites, and we can analyze them in advance. And, I like to say that we can fact-check the fake news before it was even written. Because the moment when it’s written, the moment when it’s put in social media and there’s a link to a website, if we have this website in our growing database of continuously analyzed websites, we can immediately tell you whether this is a reliable website or not. Of course, reliable websites might have also poor information, good websites might sometimes be wrong as well. But we can give you an immediate idea.
Beyond the news aggregator, we started looking into doing analytics, but we are also developing tools for media literacy that show people the fine-grained propaganda techniques highlighted in the text: the specific places where propaganda is happening and its specific type. And finally, we are building tools that can support fact-checkers in their work. Those are again problems that are typically overlooked, but extremely important for fact-checkers. Namely, what is worth fact-checking in the first place. Consider a presidential debate. There are more than 1,000 sentences that have been said. You, as a fact-checker, can check maybe 10 or 20 of those. Which ones are you going to fact-check first? What are the most interesting ones? We can help prioritize this. Or there are millions and millions of tweets about covid-19 on a daily basis. Which of those would you like to fact-check as a fact-checker?
The second problem is detecting previously fact-checked claims. One problem with fact-checking technology these days is quality, but the second part is lack of credibility. Imagine an interview with a politician. Can you put the politician on the spot? Imagine a system that automatically does speech recognition, that’s easy, and then does fact-checking. And suddenly you say, “Oh, Mr. X, my AI tells me you are now 96% likely to be lying. Can you elaborate on that? Why are you lying?” You cannot do that. Because you don’t trust the system. You cannot put the politician on the spot in real time or during a political debate. But if the system comes back and says: he just said something that has been fact-checked by this trusted fact-checking organization. And here’s the claim that he made, and here’s the claim that was fact-checked, and see, we know it’s false. Then you can put him on the spot. This is something that can potentially revolutionize journalism.
Laurel: So getting back to that point about analytics. To get into the technical details of it, how does Tanbih use artificial intelligence and deep neural networks to analyze that content, if it’s coming across so much data, so many tweets?
Nakov: Tanbih initially was not really focusing on tweets. Tanbih has been focusing primarily on mainstream media. As I said, we are analyzing entire news outlets, so that we are prepared. Because again, there’s a very strong connection between social media and websites. It’s not enough just to put a claim on the Web and spread it. It can spread, but people are going to perceive it as a rumor because there’s no source, there’s no further corroboration. So, you still want to look into a website. And then, as I said, by looking into the source, you can get an idea whether you want to trust this claim among other information sources. And the other way around: when we are profiling media, we are analyzing the text of what the media publish.
So, we would say, “OK, let’s look into a few hundred or a few thousand articles by this target news outlet.” Then we would also look into how this medium represents itself in social media. Many of those websites also have social media accounts: how do people react to what they have published on Twitter, on Facebook? And then if the media have other kinds of channels, for example, if they have a YouTube channel, we will go to it and analyze that as well. So we’ll look into not only what they say, but how they say it, and this is something that comes from the speech signal. If there is a lot of appeal to emotions, we can detect some of it in text, but some of it we can actually get from the tone.
We are also looking into what others write about this medium, for example, what is written about them in Wikipedia. And we are putting all this together. We are also analyzing the images that are put on this website. We are analyzing the connections between the websites. The relationship between a website and its readers, the overlap in terms of users between different websites. And then we are using different kinds of graph neural networks. So, in terms of neural networks, we’re using different kinds of models. It’s primarily deep contextualized text representation based on transformers; that’s what you typically do for text these days. We are also using graph neural networks and we’re using different kinds of convolutional neural networks for image analysis. And we are also using neural networks for speech analysis.
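As a rough illustration of the text branch only, the sketch below encodes a few of an outlet’s articles with a pretrained transformer, averages them into a single source vector, and passes that to a small, untrained classifier head. The checkpoint and the three factuality classes are assumptions made for illustration, not the actual Tanbih architecture.

```python
# Rough sketch of the text branch of a source-profiling model:
# encode articles with a pretrained transformer, pool them into one
# source-level vector, then classify source factuality.
# Checkpoint and labels (low / mixed / high) are illustrative assumptions.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")

articles = [
    "Officials confirmed the bridge will reopen next month after repairs.",
    "SHOCKING: a miracle cure the government does not want you to know about!",
]

with torch.no_grad():
    batch = tokenizer(articles, padding=True, truncation=True, return_tensors="pt")
    token_states = encoder(**batch).last_hidden_state    # (articles, tokens, dim)
    article_vecs = token_states.mean(dim=1)              # one vector per article
    source_vec = article_vecs.mean(dim=0, keepdim=True)  # one vector per source

# Small head mapping the source vector to factuality classes (untrained here).
head = torch.nn.Linear(encoder.config.hidden_size, 3)
print(torch.softmax(head(source_vec), dim=-1))
```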
Laurel: So what do we learn by studying this kind of disinformation region by region or by language? How can that actually help governments and healthcare organizations fight disinformation?
Nakov: We can basically give them aggregated information about what is going on, based on a schema that we have been developing for analysis of the tweets. We have designed a very comprehensive schema. We have been looking not only into whether a tweet is true or not, but also into whether it’s spreading panic, or promoting a bad cure, or xenophobia, or racism. We are automatically detecting whether the tweet is asking an important question that maybe a certain government entity might want to answer. For example, one such question last year was: is covid-19 going to disappear in the summer? It’s something that maybe health authorities might want to answer.
Other things have been offering advice or discussing action taken, and possible cures. So we have been looking into not only negative things, things that you might act on, try to limit, things like panic or racism, xenophobia—things like “don’t eat Chinese food,” “don’t eat Italian food.” Or things like blaming the authorities for their action or inaction, which governments might want to pay attention to and see to what extent it is justified and if they want to do something about it. Also, an important thing a policy maker might want is to monitor social media and detect when there is discussion of a possible cure. And if it’s a good cure, you might want to pay attention. If it’s a bad cure, you might also want to tell people: don’t use that bad cure. And discussion of action taken, or a call for action. If there are many people that say “close the barbershops,” you might want to see why they are saying that and whether you want to listen.
Laurel: Right. Because the government wants to monitor this disinformation for the explicit purpose of helping everyone not take those bad cures, right. Not continue down the path of thinking this propaganda or disinformation is true. So is it a government action to regulate disinformation on social media? Or do you think it’s up to the tech companies to kind of sort it out themselves?
Nakov: So that’s a good question. Two years ago, I was invited by the Inter-Parliamentary Union’s Assembly. They had invited three experts and there were 800 members of parliament from countries around the world. And for three hours, they were asking us questions, basically going around the central topic: what kinds of legislation can they, the national parliaments, pass so that they get a solution to the problem of disinformation once and for all. And, of course, the consensus at the end was that that’s a complex problem and there’s no easy solution.
Certain kinds of legislation definitely play a role. In many countries, certain kinds of hate speech are illegal. And in many countries, there are certain kinds of regulations when it comes to elections and advertisements at election time that apply to regular media and also extend to the web space. And there have been a lot of recent calls for regulation in the UK, in the European Union, even in the US. That’s a very heated debate, but this is a complex problem, and there’s no easy solution. There are important players there, and those players have to work together.
So certain legislation? Yes. But you also need the cooperation of the social media companies, because the disinformation is happening on their platforms. And they’re in a very good position, the best position actually, to limit the spread or to do something. Or to teach their users, to educate them, that probably they should not spread everything that they read. And then the non-government organizations, journalists, all the fact-checking efforts, this is also very important. And I hope that the efforts that we as researchers are putting into building such tools will also be helpful in that respect.
One thing that we need to pay attention to is that when it comes to regulation through legislation, we should not think necessarily what can we do about this or that specific company. We should think more in the long term. And we should be careful to protect free speech. So it’s kind of a delicate balance.
In terms of fake news and disinformation, the only case where somebody has declared victory, and the only solution that we have actually seen work, is the case of Finland. Back in May 2019, Finland officially declared that it had won the war on fake news. It took them five years. They started working on that after the events in Crimea; they felt threatened and they started a very ambitious media literacy campaign. They focused primarily on schools, but also targeted universities and all levels of society. But, of course, primarily schools. They were teaching students how to tell whether something is fishy. If it makes you too angry, maybe something is not correct. How to do, let’s say, reverse image search to check whether this image that is shown is actually from this event or from somewhere else. And in five years, they declared victory.
So, to me, media literacy is the best long-term solution. And that’s why I’m particularly proud of our tool for fine-grained propaganda analysis, because it really shows the users how they are being manipulated. And I can tell you that my hope is that after people have interacted a little bit with a platform like this, they’ll learn those techniques. And next time they are going to recognize them by themselves. They will not need the platform. And it happened to me and several other researchers who have worked on this problem, it happened to us, and now I cannot read the news properly anymore. Each time I read the news, I spot these techniques because I know them and I can recognize them. If more people can get to that level, that will be good.
Maybe social media companies can do something like that when a user registers on their platform: they could ask the new users to take a short digital literacy course, and then pass something like an exam. And then, of course, maybe we should have government programs like that. The case of Finland shows that, if the government intervenes and puts in place the right programs, fake news is something that can be solved. I hope that fake news is going to go the way of spam. It’s not going to be eradicated. Spam is still there, but it’s not the kind of problem that it was 20 years ago.
Laurel: And that’s media literacy. And even if it does take five years to eradicate this kind of disinformation or just improve society’s understanding of media literacy and what is disinformation, elections happen fairly frequently. And so that would be a great place to start thinking about how to stop this problem. Like you said, if it becomes like spam, it becomes something that you deal with every day, but you don’t actually think about or worry about anymore. And it’s not going to completely turn over democracy. That seems to me a very attainable goal.
Laurel: Dr. Nakov, thank you so much for joining us today on what’s been a fantastic conversation on the Business Lab.
Nakov: Thanks for having me.
Laurel: That was Dr. Preslav Nakov, a principal scientist at the Qatar Computing Research Institute, who I spoke with from Cambridge, Massachusetts, the home of MIT and MIT Technology Review, overlooking the Charles River.
That’s it for this episode of Business Lab. I’m your host, Laurel Ruma. I’m the Director of Insights, the custom publishing division of MIT Technology Review. We were founded in 1899 at the Massachusetts Institute of Technology. And you can find us in print, on the web, and at events each year around the world. For information about us and the show, please check out our website at technologyreview.com.
The show is available wherever you get your podcasts.
If you enjoyed this podcast, we hope that you’ll take a moment to rate and review us. Business Lab is a production of MIT Technology Review. This episode was produced by Collective Next.
This podcast episode was produced by Insights, the custom content arm of MIT Technology Review. It was not produced by MIT Technology Review’s editorial staff.
Inside the quest to engineer climate-saving “super trees”
Fifty-three million years ago, the Earth was much warmer than it is today. Even the Arctic Ocean was a balmy 50 °F—an almost-tropical environment that looked something like Florida, complete with swaying palm trees and roving crocodiles.
Then the world seemed to pivot. The amount of carbon in the atmosphere plummeted, and things began to cool toward today’s “icehouse” conditions, meaning that glaciers can persist well beyond the poles.
What caused the change was, for decades, unclear. Eventually, scientists drilling into Arctic mud discovered a potential clue: a layer of fossilized freshwater ferns up to 20 meters thick. The site suggested that the Arctic Ocean may have been covered for a time in vast mats of small-leaved aquatic Azolla ferns. Azollas are among the fastest-growing plants on the planet, and the scientists theorized that if such ferns coated the ocean, they could have consumed huge quantities of carbon, helping scrub the atmosphere of greenhouse gasses and thereby cooling the planet.
Patrick Mellor, paleobiologist and chief technology officer of the biotech startup Living Carbon, sees a lesson in the story about these diminutive ferns: photosynthesis can save the world. Certain fluke conditions seem to have helped the Azollas along, though. The arrangement of continental plates at the time meant the Arctic Ocean was mostly enclosed, like a massive lake, which allowed a thin layer of fresh river water to collect atop it, creating the kind of conditions the ferns needed. And crucially, when each generation of ferns died, they settled into saltier water that helped inhibit decay, keeping microbes from releasing the ferns’ stored carbon back into the atmosphere.
Mellor says we can’t wait millions of years for the right conditions to return. If we want plants to save the climate again, we have to prod them along. “How do we engineer an anthropogenic Azolla event?” he says. “That’s what I wanted to do.”
At Living Carbon, Mellor is trying to design trees that grow faster and grab more carbon than their natural peers, as well as trees that resist rot, keeping that carbon out of the atmosphere. In February, less than four years after he co-founded it, the company made headlines by planting its first “photosynthesis-enhanced” poplar trees in a strip of bottomland forests in Georgia.
This is a breakthrough, clearly: it’s the first forest in the United States that contains genetically engineered trees. But there’s still much we don’t know. How will these trees affect the rest of the forest? How far will their genes spread? And how good are they, really, at pulling more carbon from the atmosphere?
Living Carbon has already sold carbon credits for its new forest to individual consumers interested in paying to offset some of their own greenhouse gas emissions. It is also working with larger companies, to which it plans to deliver credits in the coming years. But academics who study forest health and tree photosynthesis question whether the trees will be able to absorb as much carbon as advertised.
Even Steve Strauss, a prominent tree geneticist at Oregon State University who briefly served on Living Carbon’s scientific advisory board and is conducting field trials for the company, told me in the days before the first planting that the trees might not grow as well as natural poplars. “I’m kind of a little conflicted,” he said, “that they’re going ahead with this—all the public relations and the financing—on something that we don’t know if it works.”
Roots of an idea
In photosynthesis, plants pull carbon dioxide out of the atmosphere and use the energy from sunlight to turn it into sugars. They burn some sugars for energy and use some to build more plant matter—a store of carbon.
A research group based at the University of Illinois Urbana-Champaign supercharged this process, publishing their results in early 2019. They solved a problem presented by RuBisCO, an enzyme many plants use to grab atmospheric carbon. Sometimes the enzyme accidentally bonds with oxygen, a mistake that yields something akin to a toxin. As the plant processes this material, it must burn some of its sugars, thereby releasing carbon back to the sky. A quarter or more of the carbon absorbed by plants can be wasted through this process, known as photorespiration.
The researchers inserted genes into tobacco plants that helped them turn the toxin-like material into more sugar. These genetically tweaked plants grew 25% larger than controls.
The breakthrough offered good news for the world’s natural landscapes: if this genetic pathway yields more productive crops, we’ll need less farmland, sparing forests and grasslands that otherwise would have to be cleared. As for the plants’ ability to remove atmospheric carbon over the long term, the new trick doesn’t help much. Each year, much of the carbon in a crop plant’s biomass gets returned to the atmosphere after it’s consumed, whether by microbes or fungi or human beings.
Still, the result caught the attention of Maddie Hall, a veteran of several Silicon Valley startups who was interested in launching her own carbon-capture venture. Hall reached out to Donald Ort, the biologist who’d led the project, and learned that the same tweaks might work in trees—which stay in the ground long enough to serve as a potential climate solution.
Late in 2019, Hall settled on the name for her startup: Living Carbon. Not long afterward, she met Mellor at a climate conference. Mellor was then serving as a fellow with the Foresight Institute, a think tank focused on ambitious future technologies, and had become interested in plants like Pycnandra acuminata. This tree, native to the South Pacific islands of New Caledonia, pulls huge quantities of nickel out of the soil. That’s likely a defense against insects, but as nickel has natural antifungal properties, the resulting wood is less prone to decay. Mellor figured if he could transfer the correct gene into more species, he could engineer his Azolla event.
When Mellor and Hall met, they realized their projects were complementary: put the genes together and you’d get a truly super tree, faster-growing and capable of more permanent carbon storage. Hall tapped various contacts in Silicon Valley to collect $15 million in seed money, and a company was born.
In some ways, Living Carbon’s goal was simple, at least when it came to photosynthesis: take known genetic pathways and place them in new species, a process that’s been conducted with plants for nearly 40 years. “There’s a lot of mystification of this stuff, but really it’s just a set of laboratory techniques,” Mellor says.
Since neither Mellor nor Hall had substantial experience with genetic transformation, they enlisted outside scientists to do some of the early work. The company focused on replicating Ort’s enhanced-photosynthesis pathway in trees, targeting two species: poplars, which are popular with researchers because of their well-studied genome, and loblolly pines, a common timber species. By 2020, the tweaked trees had been planted in a grow room, a converted recording studio in San Francisco. The enhanced poplars quickly showed results even more promising than Ort’s tobacco plants. In early 2022, Living Carbon’s team posted a paper on the preprint server bioRxiv claiming that the best-performing tree showed 53% more above-ground biomass than controls after five months. (A peer-reviewed version of the paper appeared in the journal Forests in April.)
Through the loophole
Plant genetics research can be a long scientific slog. What works in a greenhouse, where conditions can be carefully controlled, may not work as well in outdoor settings, where the amounts of light and nutrients a plant receives vary. The standard next step after a successful greenhouse result is a field trial, which allows scientists to observe how genetically engineered (GE) plants might fare outside without actually setting them fully loose.
US Department of Agriculture (USDA) regulations for GE field trials aim to minimize “gene drift,” in which the novel genes might spread into the wild. Permits require that biotech trees be planted far from species with which they could potentially reproduce, and in some cases the rules dictate that any flowers be removed. Researchers must check the field site after the study to ensure no trace of the GE plants remain.
Before planting trees in Georgia, Living Carbon launched its own field trials. The company hired Oregon State’s Strauss, who had given Living Carbon the poplar clone it had used in its gene transfer experiments. In the summer of 2021, Strauss planted the redesigned trees in a section of the university’s property in Oregon.
Strauss has been conducting such field trials for decades, often for commercial companies trying to create better timber technologies. It’s a process that requires patience, he says: most companies want to wait until a “half rotation,” or midway to harvest age, before determining whether a field trial’s results are promising enough to move forward with a commercial planting. Living Carbon’s trees may never be harvested, which makes setting a cutoff date difficult. But when we spoke in February, less than two years into the field trial and just before Living Carbon’s initial planting, Strauss said it was too early to determine whether the company’s trees would perform as they had in the greenhouse. “There could be a negative,” he said. “We don’t know.”
Strauss has been critical of the US regulatory requirements for field trials, which he sees as costly, a barrier that scares off many academics. The framework behind its rules emerged in the 1980s when, rather than wait on the slow grind of the legislative process, the Reagan administration adapted existing laws to fit new genetic technologies. For the USDA, the chosen tool was its broad authority over “plant pests,” a term meant to describe anything that might injure a plant—whether an overly hungry animal, a parasitic bacterium, or a weed that might outcompete a crop.
At the time, gene transfer in plants was almost entirely accomplished with the help of Agrobacterium tumefaciens. This microbe attacks plants by inserting its own genes, much like a virus. But scientists found they could convince the bacterium to deliver whatever snippets of code they desired. Since Agrobacterium itself is considered a plant pest, the USDA decided it had the authority to regulate the interstate movement and environmental release of any plant that had had its genes transformed by the microbe. This meant nearly comprehensive regulation of GE plants.
In 1987, just one year after the USDA established its policy, a team of Cornell researchers announced the successful use of what’s become known as a “gene gun”—or, less colorfully, “biolistics”—in which bits of DNA are literally blasted into a plant cell, carried by high-velocity particles. No plant pest was involved. This created a loophole in the system, a way to produce GE plants that the current laws did not cover.
Since then, more than 100 GE plants, mostly modified crop plants, have thus escaped the USDA’s regulatory scrutiny.
Agrobacterium remains a common method of gene transfer, and it’s how Living Carbon produced the trees discussed in its paper. But Mellor knew going to market with trees considered potential plant pests “would be a long and depressing path,” he says, one with tests and studies and pauses to collect public comment. “It would take years, and we just wouldn’t survive.”
Once Living Carbon saw that its trees had promise, it dove through the loophole, creating new versions of its enhanced trees via biolistics. In formal letters to the USDA the company explained what it was doing; the agency replied that, because the resulting trees had not been exposed to and did not contain genes from a plant pest, they were not subject to regulations.
Other federal agencies also have authority over biotechnology. The Environmental Protection Agency regulates biotech plants that produce their own pesticides, and the Food and Drug Administration examines anything humans might consume. Living Carbon’s trees do not fit into either of these categories, so they could be planted without any further formal studies.
A year after Living Carbon announced its greenhouse results—before the data from the field trial had any meaning, according to Strauss—the company sent a team to Georgia to plant the first batch of seedlings outside strictly controlled fields. Mellor indicated that this would double as one more study site, where the trees would be measured to estimate the rate of biomass accumulation. The company could make an effort to start soaking up carbon even as it was verifying the efficacy of its trees.
Out in the wild
Experiments with genetically modified trees have historically evoked a strong response from anti-GE activists. In 2001, around 800 specimens growing in Strauss’s test plots at Oregon State University were chopped down or otherwise mutilated.
In 2015, in response to the news that the biotech firm ArborGen had created a loblolly pine with “increased wood density,” protesters descended on the company’s South Carolina headquarters. (The company had taken advantage of the same loophole as Living Carbon; ArborGen has said the pine was never commercially planted.) But after the New York Times wrote about Living Carbon’s first planting in February, there were no notable protests.
One reason could be that the risk is far from clear-cut. Several forest ecologists I spoke to indicated that trees that grow substantially faster than other species could outcompete rivals, potentially making Living Carbon’s “super tree” a weed. None of these scientists, though, seemed particularly worried about that happening.
“I think it’d be difficult to on purpose make a tree that was a weed—that was able to invade and take over a forest,” said Sean McMahon, a forest ecologist with the Smithsonian Tropical Research Institute. “I think it’d be impossible by accident to do it. I’m really not worried about a tree that takes over the world. I just think you’re going to break [the tree].”
He pointed out that the timber industry has been working with scientists for decades, hoping to engineer fast-growing trees. “This is a billion-dollar industry, and if they could make trees grow to harvest in five years, they would,” he said. But there tend to be tradeoffs. A faster-growing tree, for example, might be more vulnerable to pests.
The other reason for the quiet reception of these trees may be climate change: in a ravaged world, people may be more willing to tolerate risk. Keolu Fox, a geneticist at the University of California San Diego, is a co-director of science at Lab to Land, a nonprofit that is studying the potential for biotechnology to accelerate conservation goals on threatened lands, particularly in California. “We’re now talking about editing natural lands—that’s desperation,” Fox says. He thinks this desperation is appropriate, given the state of the climate crisis, though he’s not entirely convinced by Living Carbon’s approach.
Mellor suggests that gene drift should not be a problem: Living Carbon is planting only female trees, so the poplars don’t produce any pollen. That will not prevent wild-growing male trees from fertilizing the transgenic poplars, though the amount of resulting gene drift will likely be small and easily contained, Living Carbon says, especially given the company’s ability to avoid planting its trees near species that could fertilize them. But Mellor says he prefers to focus on other issues. Yes, some companies, like Monsanto, have used transgenic crops in exploitative ways, but that doesn’t mean transgenic technologies are inherently bad, he says. “Purity” is a silly standard, he says, and by trying to keep plants pure we’re missing the chance for needed innovations.
Living Carbon’s poplars seem to grow faster and survive droughts better than their natural counterparts, Mellor says. The rest of their genes match. “So, if, say, that competitively replaces the non-photosynthesis-enhanced version, is that a problem?” he asks. “And what kind of a problem is that? That’s the question now.”
Plant or pest?
In 2019, before Living Carbon was formed, the USDA announced its intention to update its regulatory approach to transgenic plants. The new rules went into effect in August 2020, just after Living Carbon submitted letters seeking exemption for its trees; the letters were reviewed and the trees were grandfathered in under the old rules.
Any further biotechnology the company develops will be analyzed using the new approach, which focuses on what traits are inserted into plants rather than how they get there. There are still ways to avoid scrutiny: products whose genetic modification could be accomplished through conventional breeding, for example, are not subject to regulation—a loophole watchdog groups find problematic. But according to USDA spokespeople, Living Carbon’s core technology—fast-growing trees, produced through genetic insertion—does not appear to qualify for such exemptions. If Living Carbon wants to make even a slight genetic tweak to its trees, the new product will require further examination.
The USDA’s first step is to determine whether there is “a plausible pathway to increased plant pest risk.” If the answer is yes, the company will need permits to move or plant such trees until the USDA can complete a full regulatory review.
Because the agency has not yet reviewed a tree with enhanced photosynthesis, officials declined to comment on whether the trait might constitute a pest risk. Even if it does not, the process might miss other risks: a 2019 report from the National Academies of Sciences, Engineering, and Medicine pointed out that pest risk is a narrow metric that does not capture all of the potential threats to forest health.
Nor does the USDA process offer a seal of approval suggesting the trees will actually work.
“One of the things that concerns me is [Living Carbon is] just focusing on carbon acquisition,” says Marjorie Lundgren, a researcher at Lancaster University in the UK who has studied tree species with natural adaptations leading to increased photosynthetic efficiency. She notes that trees need more than just carbon and sunlight to grow; they need water and nitrogen, too. “The reason they have such a high growth rate is because in the lab, you can just super-baby them—you can give them lots of water and fertilizer and everything they need,” she says. “Unless you put resources in, which is time and money, and not great for the environment, either, then you’re not going to have those same outcomes.”
Living Carbon’s paper acknowledges as much, citing nitrogen as a potential challenge and noting that how the trees move carbon may become a limiting factor. The extra sugars produced through what the company calls “enhanced photosynthesis” must be transported to the right places, something trees haven’t typically evolved to do.
The final, peer-reviewed version of the paper was amended to note the need to compare the grow-room results with field trials. And, as it happened, in April—the month the paper was published—Strauss sent Living Carbon an annual report with exciting news. He had noted statistically significant differences in height and drought tolerance between Living Carbon’s trees and the controls. He also found “nearly” significant differences in volume and diameter for some lines of engineered trees.
Capturing the carbon
Living Carbon seems aware of the general public distrust of genetic technologies. Hall, the CEO, has said the company does not want to be “the Monsanto of trees” and is registered as a public benefit corporation. That allows it to decline ethically dubious projects without worrying about being sued by shareholders for passing up profits.
The company advertises its focus on “restoring land that has been degraded or is underperforming.” On its website, the pitch to potential carbon-credit buyers emphasizes that the tree-planting projects serve to restore ecosystems.
One hope is that Mellor’s metal-accumulating trees will be able to restore soils at abandoned mining sites. Brenda Jo McManama, a campaign organizer with the Indigenous Environmental Network, lives amid such landscapes in West Virginia. She has been fighting GE trees for almost a decade and remains opposed to the technology, but she understands the appeal of such remediating trees. One key problem: they remain experimental.
McManama notes, too, that landowners are allowed to harvest the wood from Living Carbon’s trees. This is not a problem for the climate—lumber still stores carbon—but it undercuts the idea that this is all about ecosystems. “Under their breath, it’s like, ‘Yeah, this will be a tree plantation,’” she says.
The initial planting site in Georgia, for example, belongs to Vince Stanley, whose family owns tens of thousands of acres of timber in the area. Stanley told the New York Times that the appeal of the trees was that he would be able to harvest them sooner than traditional trees.
Living Carbon contests the idea that it is creating “plantations,” which by definition would mean monocultures; it has planted 12 different species on Stanley’s land. The company indicated that it is “interested” in partnering with timber companies; as Hall has noted, the top 10 in the US each own at least 1 million acres. But the Stanley site in Georgia is currently the only project that is technically classified as “improved forestry management.” (And even there, the company notes, the existing forest was regenerating very slowly due to wet conditions.)
Living Carbon funds its plantings—and makes its profits—by selling credits for the extra carbon the trees absorb. Currently, the company is offering “pre-purchases,” in which companies make a commitment to buy a future credit, paying a small portion of the fee up front to help Living Carbon survive long enough to deliver results.
The company has found that these buyers are more interested in projects with ecosystem benefits, which is why the first project, in Georgia, has become an outlier. There has been a subsequent planting in Ohio; this and all currently planned plantings are not near sawmills or in active timber harvesting regions. Thus, the company does not expect those trees to be harvested.
Wherever they plant trees—whether atop an old minefield or in a timber-producing forest—Living Carbon will pay the landowner an annual per-acre fee and cover the cost of plant site preparation and planting. At the end of the contract, after 30 or 40 years, the landowner can do whatever they want with the trees. If the trees grow as well as is hoped, Living Carbon assumes that even on timber land, their size would mean they’d be turned into “long-duration wood products,” like lumber for construction, rather than shredded to make pulp or paper.
Until recently, Living Carbon was also selling small-scale credits to individual consumers. When we spoke in February, Mellor pointed me toward Patch, a software company with a carbon-credit sales platform. The Georgia project was marketed there as “biotech-enhanced reforestation.” The credits were offered as a monthly subscription, at a price of $40 per metric ton of carbon removed.
When I pressed Mellor for details about how the company calculated this price, given the lack of any solid data on the trees’ performance, he told me something the company had not acknowledged in any public-facing documentation: 95% of the saplings at the Georgia site were not photosynthesis-enhanced. The GE poplar trees were planted in randomized experimental plots, with controls for comparison, and contribute only a small amount to the site’s projected carbon savings. Despite the advertising, then, customers were really paying for a traditional reforestation project with a small experiment tucked inside.
A spokesperson for Living Carbon clarified that this planting makeup was dictated by the standards of the American Carbon Registry, the organization that independently certified the resulting credits, and that subsequent plantings have included a higher proportion of enhanced trees. By partnering with a new credit registry, Living Carbon hopes its 2024 plantings will be closer to 50% photosynthesis-enhanced.
That carbon credits can be offered for the Georgia site at all serves as a reminder: old-fashioned trees, without any new genes, already serve as a viable carbon drawdown technology. “There’s 80,000 species of trees in the world. Maybe you don’t have to throw nickel in them and CRISPR them,” said McMahon, of the Smithsonian Tropical Research Institute. “Maybe just find the ones that actually grow fast [and] store carbon a long time.” Or, he added, pass regulation to protect existing forests, which he said could help the climate more than even a massive adoption of high-tech trees.
Grayson Badgley, an ecologist at the nonprofit CarbonPlan, notes that the cost of the credits on Patch was on the high side for a reforestation project. CarbonPlan examines the efficacy of various carbon removal strategies, a necessary intervention given that carbon markets are ripe for abuse. Several recent investigations have shown that offset projects can dramatically inflate their benefits. One major regulatory group, the Integrity Council for the Voluntary Carbon Market, recently announced a new set of rules, and Verra, a US nonprofit that certifies offset projects, also plans to phase out its old approach to forestry projects.
Given the increasingly shaky reputation of carbon markets, Badgley finds Living Carbon’s lack of transparency troubling. “People should know exactly what they’re buying when they plug in their credit card number,” he says.
Living Carbon says it began phasing out direct-to-consumer sales in late 2022, and that the final transaction was made late February, not long after the Georgia planting. (In total, subscribers funded 600 trees—a small portion of the 8,900 transgenic trees Living Carbon had planted as of late May.) I purchased a credit for research purposes in early February; as of March 1, when I canceled the subscription, I had received no details clarifying the makeup of the Georgia planting, nor any updates noting that the program was ending. I was also struck by the fact that in February, before Strauss delivered his data, Living Carbon was already touting field trial results on its website, ones that were even more impressive than its grow-room results. After I inquired about the source of these figures, the company removed them from the website.
The company says it’s fully transparent with the large-scale buyers who make up the core of its business strategy. What seemed to me like problematic embellishments and elisions were, according to spokespeople, the growing pains of a young startup with an evolving approach that is still learning how to communicate about its work.
They also pointed out that many of the problems with forestry carbon credits come from the projects meant to protect forests against logging. Such credits are granted based on a counterfactual: how many trees would be destroyed in the absence of protection? That’s impossible to know with any precision. How much extra carbon Living Carbon’s trees absorb will be measured much more clearly. And if the trees don’t work, Living Carbon won’t be able to deliver its promised credits or get paid for them. “The risk that in the end [the trees] won’t deliver the amount of carbon that’s expected is on us—it’s not on the climate,” a company spokesperson said.
Pines and pollen
Living Carbon has bigger plans in the works (which will likely need to undergo USDA scrutiny). Mellor hopes the photosynthesis-enhanced loblolly pines will be ready for deployment within two years, which would open opportunities for more collaboration with timber companies. Experiments with metal-accumulating trees are underway, with funding from the US Department of Energy. Last year, the company launched a longer-term project that aims to engineer algae to produce sporopollenin, a biopolymer that coats spores and pollen and can last 100 times longer than other biological materials—and maybe longer than that, the company says. This could create a secure, long-term way to store carbon.
Living Carbon is not alone in this field. Lab to Land, the nonprofit targeting California ecosystems, is considering how carbon markets might drive demand for deep-rooted grasses that store carbon. But Lab to Land is moving far more slowly than Living Carbon—it’s at least a decade away from the deployment of any biotechnology, one of the co-directors of science told me—and, as it progresses, it is building multiple councils to consider the ethics of biotechnology.
A Living Carbon spokesperson suggested that “every scientist is in a way a bioethicist,” and that the company operates with careful morals. As a startup, Living Carbon can’t afford to dither—it needs to make a profit—and Hall says the planet can’t afford to dither, either. To solve climate change, we have to start trying potential technology now. She sees the current plantings as further studies that will help the company and the world understand these trees.
Even with the new data, Steve Strauss remained circumspect about the trees’ long-term prospects. Living Carbon has only provided enough funding for the Oregon field tests to extend just beyond the current growing season; Strauss indicated that were this his company, he’d “want more time.”
Still, Strauss was the one academic scientist I spoke to who seemed enthused about Living Carbon’s plantings. He said they’d made a breakthrough, though one that is less scientific than social—a first step beyond the confines of test-plot fields. As a longtime proponent of genetic engineering, he thinks research into biotechnical solutions to climate change has been stalled for too long. The climate crisis is growing worse. Now someone is pushing forward. “Maybe this isn’t the ideal thing,” he told me when we first spoke in February. “And maybe they’re pushing this one product too hard, too fast. But I’m sort of glad it’s happening.”
Boyce Upholt is a writer based in New Orleans.
This unlikely fuel could power cleaner trucks and ships
Shipping out
Companies trying to cut their climate impacts in the marine shipping sector are looking to alternative fuels, including methanol and ammonia. Amogy’s system could be a better option than combustion engines, though, since it would limit pollution that can trap heat in the atmosphere and harm human health and the environment.
I’ll note here that ammonia itself isn’t very pleasant to be around, and in fact it can be toxic. Proponents argue that safety protocols for handling it are pretty well established in industry, and professionals will be able to transport and use the chemical safely.
Amogy’s systems aren’t quite big enough for ships yet. The company is working on one more demonstration that will help it get closer to a commercial system: a tugboat, which it plans to launch later this year in upstate New York.
Eventually, the company plans to make modules that can fit together, making the systems large enough to power ships. Amogy’s first commercial maritime system will be deployed with Southern Devall, which transports ammonia on barges today in the US.
Global ammonia production topped 200 million metric tons in 2022, most of it used for fertilizer. The problem is, the vast majority of that was produced using fossil fuels.
For Amogy’s systems to cut emissions significantly, they’ll need to be powered by ammonia that’s made without producing a lot of greenhouse-gas emissions, likely using renewable electricity or maybe carbon capture systems.
According to Amogy’s estimates, supply for these low-carbon ammonia sources could reach 70 million tons by 2030. But those projects will need to make it out of the planning stages and actually start producing ammonia before it can be used in fertilizers, tractors, or tugboats.
Related reading
- Making low-carbon ammonia could require a whole lot of green hydrogen.
Another thing
There’s a lot of money flowing into ocean chemistry. A new initiative called Carbon to Sea is injecting $50 million over the next five years into a technique called ocean alkalinity enhancement. The basic idea is that adding alkaline substances into seawater could help the oceans suck up more carbon dioxide from the atmosphere, combating climate change.
Effective infrastructure enables universal data intelligence
Infrastructure modernization
As data growth accelerates and data strategies are refined, organizations are under pressure to modernize their data infrastructure in a way that is cost-effective, secure, scalable, socially responsible, and compliant with regulations.
Organizations with legacy infrastructures often own hardware from multiple vendors, particularly if IoT and OT data is involved. Their challenge, then, is to create a seamless, unified system that takes advantage of automation to optimize routine processes and apply AI and machine learning to that data for further insights.
“That’s one of my focus areas at Hitachi Vantara,” says Patel. “How do we combine the power of the data coming in from OT and IoT? How can we provide insights to people in a heterogeneous environment if they don’t have time to go from one machine to another? That’s what it means to create a seamless data plane.”
Social responsibility includes taking a hard look at the organization’s carbon footprint and finding data infrastructure solutions that support emissions reduction goals. Hitachi Vantara estimates that emissions attributable to data storage infrastructure can be reduced as much as 96% via a combination of changing energy sources, upgrading infrastructure and hardware, adopting software to manage storage, and automating workflows—while also improving storage performance and cutting costs.
The hybrid cloud approach
While many organizations follow a “cloud-first” approach, a more nuanced strategy is gaining momentum among forward-thinking CEOs. It’s more of a “cloud where it makes sense” or “cloud smart” strategy.
In this scenario, organizations take a strategic approach to where they place applications, data, and workloads, based on security, financial and operational considerations. There are four basic building blocks of this hybrid approach: seamless management of workloads wherever they are located; a data plane that delivers suitable capacity, cost, performance, and data protection; a simplified, highly resilient infrastructure; and AIOps, which provides an intelligent automated control plane with observability across IT operations.
“I think hybrid is going to stay for enterprises for a long time,” says Patel. “It’s important to be able to do whatever you want with the data, irrespective of where it resides. It could be on-prem, in the cloud, or in a multi-cloud environment.”
Clearing up cloud confusion
The public cloud is often viewed as a location: a go-to place for organizations to unlock speed, agility, scalability, and innovation. That place is then contrasted with legacy on-premises infrastructure environments that don’t provide the same user-friendly, as-a-service features associated with cloud. Some IT leaders assume the public cloud is the only place they can reap the benefits of managed services and automation to reduce the burden of operating their own infrastructure.