Heirloom Carbon Technologies says it could do carbon dioxide removal for $50 a ton once it reaches commercial scale, which would come in well below the estimates for other industrial approaches. Its goal is to remove 1 billion tons of the main greenhouse gas fueling climate change by 2035.
The San Francisco–based company will announce on May 26 that it has raised an undisclosed amount of seed funding from major investors including Breakthrough Energy Ventures, Lowercarbon Capital, and Prelude Ventures. (Industry sources say it’s in the millions.)
In addition, the payment processing company Stripe, which has been funding demonstration projects in the technology, will announce that it plans to purchase nearly 250 tons of carbon removal from the company at $2,054 per ton.
Noah Deich, president of Carbon180, a research firm that advocates for the removal and reuse of carbon, says the company could help address a core challenge in carbon removal: technical approaches like those offered by direct-air-capture companies such as Climeworks and Carbon Engineering promise permanent results but cost a lot, while natural solutions like soil and forest offsets are cheap but often raise concerns about how reliable and durable the carbon removal is. If Heirloom hits its cost targets, it could offer permanent removal at relatively affordable prices, Deich says. (Heirloom’s CEO, Shashank Samala, took part in Carbon180’s entrepreneur-in-residence fellowship program.)
But the technology is at an early stage and the company will face numerous technical and market challenges along the way, including finding more buyers—like Stripe—willing to pay high prices for carbon removal for years to come.
A novel approach to carbon removal
The venture is getting attention in part because the process, described in a paper published in Nature Communications last year, was developed by prominent researchers exploring the use of minerals to capture and store carbon. Those include Greg Dipple at the University of British Columbia and Jennifer Wilcox, who is now principal deputy assistant secretary for fossil energy in the Biden administration. The lead author of the paper was Noah McQueen, a graduate student of Wilcox’s and now head of research at Heirloom.
Preventing the planet from warming by 2 ˚C could require pulling 10 billion tons of carbon dioxide from the atmosphere each year by 2050 and 20 billion annually by 2100, according to a 2018 study. But only a handful of mostly early-stage startups are actively working on this today, exploring a variety of means like creating machines that directly grab carbon dioxide molecules out of the air, converting biowaste into oil that is injected underground, or developing systems to incentivize or validate natural approaches like reforestation or agricultural practices that may take up more carbon in soils.
A number of scientists and nonprofits have also researched the possibility of accelerating the processes by which various minerals—particularly those rich in silicate, calcium, and magnesium—pull carbon dioxide out of air or rainwater. Some are grinding up and spreading out materials like olivine, while others are putting to use the already pulverized by-products of mining operations, even including asbestos.
Heirloom is taking a very different route, however.
How it works
The company will cook materials such as ground limestone, which is mostly calcium carbonate, a compound of calcium and carbon dioxide, at temperatures of 400 to 900 ˚C, high enough for it to break down and release the greenhouse gas. This is similar to the first step in producing cement. (It could use other feedstocks as well, such as magnesite, which was the focus of the Nature Communications paper.)
Heirloom eventually intends to rely on electricity-driven kilns. That means the process can run on clean renewable energy sources and produces a stream of carbon dioxide free from fossil-fuel impurities. That carbon dioxide can then be relatively easily captured, compressed, and injected underground, storing it away basically forever.
The leftover oxide minerals, which would be calcium oxide if the process starts with limestone, can be spread out in thin layers across sheets, stacked vertically, and exposed to the open air. Think lunch trays on cafeteria racks.
The minerals are highly reactive, eager to bond with carbon dioxide in the air. With some additional enhancements, the company’s researchers believe, most of the material will bond with the greenhouse gas in as little as two weeks, a process that would normally take around a year.
The startup won’t discuss the enhancements, but they might include automated ways of mixing the materials to continually expose them to open air.
That reaction converts the calcium oxide back into calcium carbonate, the main component of limestone, at which point the cycle can simply begin again. The company believes it can reuse the materials at least 10 times, possibly dozens, before they degrade too much to capture enough carbon dioxide.
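The chemistry of that loop also makes it easy to estimate how much carbon dioxide a batch of material can move. The sketch below is my own back-of-the-envelope illustration, not Heirloom’s figures; it assumes pure calcium carbonate and complete conversion on every cycle:

```python
# Mass balance for the limestone loop:
# CaCO3 -> CaO + CO2 (calcination in the kiln),
# then CaO + CO2 -> CaCO3 (carbonation in open air).
M_CACO3 = 100.09  # g/mol, calcium carbonate
M_CAO = 56.08     # g/mol, calcium oxide (the leftover spread on trays)
M_CO2 = 44.01     # g/mol, carbon dioxide

def co2_per_ton_limestone(tons_limestone: float) -> float:
    """Tons of CO2 the material can absorb from air in one pass,
    assuming pure CaCO3 and complete conversion.
    (The leftover CaO weighs M_CAO / M_CACO3 ~ 0.56 tons per ton.)"""
    return tons_limestone * M_CO2 / M_CACO3

def co2_over_cycles(tons_limestone: float, cycles: int) -> float:
    """Cumulative CO2 captured if the same material is cycled
    `cycles` times (ignoring degradation between cycles)."""
    return co2_per_ton_limestone(tons_limestone) * cycles

print(round(co2_per_ton_limestone(1.0), 2))  # ~0.44 tons CO2 per ton of limestone
print(round(co2_over_cycles(1.0, 10), 1))    # ~4.4 tons over 10 reuse cycles
```

In other words, each ton of limestone can only ever hold about 0.44 tons of CO2 at a time, which is why reusing the material many times matters so much to the economics.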
Scaling carbon removal
All of this is very expensive today, as reflected in the price Stripe is paying. The payments company will announce on Wednesday that it will spend nearly $2.8 million to purchase carbon removal credits from six projects, plus another $5.25 million when (or if) those efforts complete certain milestones. The other recipients include CarbonBuilt, Running Tide, Seachange, Mission Zero, and the Future Forest Company, which is planning a mineral-weathering field trial that involves spreading basalt rock along a forest floor.
Heirloom’s Samala says these early, high-priced purchases are crucial for helping emerging carbon removal companies scale up and cut costs.
“Deployment is what makes this cheaper, unleashes new markets, and drives down costs further,” he says.
But finding more buyers willing to bear such costs is going to be a serious challenge for all carbon removal companies—particularly given the availability of cheap forest and soil offsets that allow buyers to claim they’re balancing out their emissions, whether or not such programs are reliable.
Meanwhile, the world needs to provide support for many more carbon removal research groups and startups, says Nan Ransohoff, head of climate at Stripe.
We have to “radically increase the number of projects” if we want to have “any shot” of hitting those 2050 carbon removal targets, Ransohoff says. “Ten gigatons is a lot—it’s just a massive number, and even in the best scenario, all the companies we have today aren’t going to get us there.”
Driving down costs
Heirloom is confident it can drive down the costs significantly because it’s avoiding expensive sorbents and the energy-intensive fans that blow air through the system in other approaches to direct air capture. In addition, it intends to rely heavily on robots, software, and other automation to speed up and slash the costs of the process, drawing on Samala’s earlier experience as the cofounder of Tempo Automation.
Heirloom will be leveraging several other advances under way as well, including improvements in electricity-driven heat technology, the declining costs of renewable energy, and the increasingly decarbonized grids across the world, says Clea Kolster, director of science at Lowercarbon Capital.
But Heirloom’s ultimate costs, and its ability to rapidly scale up, will depend a lot on how much and how quickly those technologies continue to improve.
As it stands, generating the necessary temperatures from electricity with today’s technologies can be 5 to 10 times as expensive as directly burning coal or natural gas, says Addison Stark, director of the energy and environment program at advisory firm Clark Street Associates, who coauthored a recent paper in Joule on the topic. In addition, if the source of the electricity itself isn’t carbon-free, it undermines any carbon removal benefits.
Another question is how much and how reliably Heirloom will be able to cut down the time it takes for the oxides to bond with carbon dioxide, which will dramatically affect the economics, says Jeremy Freeman, executive director at CarbonPlan, which analyzes the scientific integrity of carbon removal efforts and helped evaluate the projects that applied for Stripe’s program.
Heirloom will also have to raise a far larger round of funding to eventually build a demonstration plant.
The company’s main business model will be selling carbon removal credits to corporations or individuals, through either voluntary offset systems or government-based carbon programs. Heirloom is banking on demand growing over time as its costs decline and as public policies provide carrots or sticks that make it more attractive, or more necessary, for companies and governments to pay for carbon removal.
Why detecting AI-generated text is so difficult (and what to do about it)
This tool is OpenAI’s response to the heat it’s gotten from educators, journalists, and others for launching ChatGPT without any way to detect text it has generated. However, the tool is still very much a work in progress, and it is woefully unreliable: OpenAI says it correctly identifies only 26% of AI-written text as “likely AI-written.”
While OpenAI clearly has a lot more work to do to refine its tool, there’s a limit to just how good it can make it. We’re extremely unlikely to ever get a tool that can spot AI-generated text with 100% certainty. AI-generated text is hard to detect because the whole point of AI language models is to generate fluent, human-seeming text, and the model is mimicking text created by humans, says Muhammad Abdul-Mageed, a professor who oversees research in natural-language processing and machine learning at the University of British Columbia.
We are in an arms race to build detection methods that can match the latest, most powerful models, Abdul-Mageed adds. New AI language models are more powerful and better at generating even more fluent language, which quickly makes our existing detection tool kit outdated.
OpenAI built its detector by creating a whole new AI language model akin to ChatGPT that is specifically trained to detect outputs from models like itself. Although details are sparse, the company apparently trained the model with examples of AI-generated text and examples of human-generated text, and then asked it to spot the AI-generated text. We asked for more information, but OpenAI did not respond.
Last month, I wrote about another method for detecting text generated by an AI: watermarks. These act as a sort of secret signal in AI-produced text that allows computer programs to detect it as such.
Researchers at the University of Maryland have developed a neat way of applying watermarks to text generated by AI language models, and they have made it freely available. These watermarks would allow us to tell with almost complete certainty when AI-generated text has been used.
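The general idea behind such watermarks can be sketched in a few lines. The toy below is my own illustration of a “green list” scheme in the spirit of the Maryland work, not the researchers’ actual code; the vocabulary, the hash, and the hard-coded green fraction are all simplifying assumptions:

```python
import hashlib
import random

VOCAB = [f"tok{i}" for i in range(1000)]  # toy vocabulary (assumption)

def green_list(prev_token: str, fraction: float = 0.5) -> set:
    """Seed an RNG from the previous token, so the 'green' half of the
    vocabulary is reproducible by anyone who knows the scheme."""
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16) % (2**32)
    rng = random.Random(seed)
    return set(rng.sample(VOCAB, int(len(VOCAB) * fraction)))

def watermarked_generate(n: int, start: str = "tok0") -> list:
    """Toy 'model': at each step, pick a token only from the green list.
    A real model would softly bias its sampling toward green tokens."""
    out, prev = [], start
    rng = random.Random(42)
    for _ in range(n):
        tok = rng.choice(sorted(green_list(prev)))
        out.append(tok)
        prev = tok
    return out

def green_fraction(tokens: list, start: str = "tok0") -> float:
    """Detector: recompute each green list and count hits. Human text
    should score near the green fraction (0.5); watermarked text near 1.0."""
    prev, hits = start, 0
    for tok in tokens:
        if tok in green_list(prev):
            hits += 1
        prev = tok
    return hits / len(tokens)
```

Because the detector only needs the hashing scheme, not the model itself, any computer program in on the secret can check a passage: human-written text should land near a 50% green score by chance, while watermarked text scores far above it.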
The trouble is that this method requires AI companies to embed watermarking in their chatbots right from the start. OpenAI is developing these systems but has yet to roll them out in any of its products. Why the delay? One reason might be that it’s not always desirable to have AI-generated text watermarked.
One of the most promising ways ChatGPT could be integrated into products is as a tool to help people write emails or as an enhanced spell-checker in a word processor. That’s not exactly cheating. But watermarking all AI-generated text would automatically flag these outputs and could lead to wrongful accusations.
The original startup behind Stable Diffusion has launched a generative AI for video
Set up in 2018, Runway has been developing AI-powered video-editing software for several years. Its tools are used by TikTokers and YouTubers as well as mainstream movie and TV studios. The makers of The Late Show with Stephen Colbert used Runway software to edit the show’s graphics; the visual effects team behind the hit movie Everything Everywhere All at Once used the company’s tech to help create certain scenes.
In 2021, Runway collaborated with researchers at the University of Munich to build the first version of Stable Diffusion. Stability AI, a UK-based startup, then stepped in to pay the computing costs required to train the model on much more data. In 2022, Stability AI took Stable Diffusion mainstream, transforming it from a research project into a global phenomenon.
But the two companies no longer collaborate. Getty is now taking legal action against Stability AI—claiming that the company used Getty’s images, which appear in Stable Diffusion’s training data, without permission—and Runway is keen to keep its distance.
Gen-1 represents a new start for Runway. It follows a smattering of text-to-video models revealed late last year, including Make-a-Video from Meta and Phenaki from Google, both of which can generate very short video clips from scratch. It is also similar to Dreamix, a generative AI from Google revealed last week, which can create new videos from existing ones by applying specified styles. But at least judging from Runway’s demo reel, Gen-1 appears to be a step up in video quality. Because it transforms existing footage, it can also produce much longer videos than most previous models. (The company says it will post technical details about Gen-1 on its website in the next few days.)
Unlike Meta and Google, Runway has built its model with customers in mind. “This is one of the first models to be developed really closely with a community of video makers,” says Runway CEO Cristóbal Valenzuela. “It comes with years of insight about how filmmakers and VFX editors actually work on post-production.”
Gen-1, which runs on the cloud via Runway’s website, is being made available to a handful of invited users today and will be launched to everyone on the waitlist in a few weeks.
Last year’s explosion in generative AI was fueled by the millions of people who got their hands on powerful creative tools for the first time and shared what they made. Valenzuela hopes that putting Gen-1 into the hands of creative professionals will soon have a similar impact on video.
“We’re really close to having full feature films being generated,” he says. “We’re close to a place where most of the content you’ll see online will be generated.”
When my dad was sick, I started Googling grief. Then I couldn’t escape it.
I am a mostly visual thinker, and thoughts pose as scenes in the theater of my mind. When my many supportive family members, friends, and colleagues asked how I was doing, I’d see myself on a cliff, transfixed by an omniscient fog just past its edge. I’m there on the brink, with my parents and sisters, searching for a way down. In the scene, there is no sound or urgency and I am waiting for it to swallow me. I’m searching for shapes and navigational clues, but it’s so huge and gray and boundless.
I wanted to take that fog and put it under a microscope. I started Googling the stages of grief, and books and academic research about loss, from the app on my iPhone, perusing personal disaster while I waited for coffee or watched Netflix. How will it feel? How will I manage it?
I started, intentionally and unintentionally, consuming people’s experiences of grief and tragedy through Instagram videos, various newsfeeds, and Twitter testimonials. It was as if the internet secretly teamed up with my compulsions and started indulging my own worst fantasies; the algorithms were a sort of priest, offering confession and communion.
Yet with every search and click, I inadvertently created a sticky web of digital grief. Ultimately, it would prove nearly impossible to untangle myself. My mournful digital life was preserved in amber by the pernicious personalized algorithms that had deftly observed my mental preoccupations and offered me ever more cancer and loss.
I got out—eventually. But why is it so hard to unsubscribe from and opt out of content that we don’t want, even when it’s harmful to us?
I’m well aware of the power of algorithms—I’ve written about the mental-health impact of Instagram filters, the polarizing effect of Big Tech’s infatuation with engagement, and the strange ways that advertisers target specific audiences. But in my haze of panic and searching, I initially felt that my algorithms were a force for good. (Yes, I’m calling them “my” algorithms, because while I realize the code is uniform, the output is so intensely personal that they feel like mine.) They seemed to be working with me, helping me find stories of people managing tragedy, making me feel less alone and more capable.
In reality, I was intimately and intensely experiencing the effects of an advertising-driven internet, which Ethan Zuckerman, the renowned internet ethicist and professor of public policy, information, and communication at the University of Massachusetts at Amherst, famously called “the Internet’s Original Sin” in a 2014 Atlantic piece. In the story, he explained the advertising model that brings revenue to content sites that are most equipped to target the right audience at the right time and at scale. This, of course, requires “moving deeper into the world of surveillance,” he wrote. This incentive structure is now known as “surveillance capitalism.”
Understanding how exactly to maximize the engagement of each user on a platform is the formula for revenue, and it’s the foundation for the current economic model of the web.