Are computers ready to solve this notoriously unwieldy math problem?
In a sense, the computer and the Collatz conjecture are a perfect match. For one, as Jeremy Avigad, a logician and professor of philosophy at Carnegie Mellon, notes, the notion of an iterative algorithm is at the foundation of computer science—and Collatz sequences are an example of an iterative algorithm, proceeding step-by-step according to a deterministic rule. Similarly, showing that a process terminates is a common problem in computer science. “Computer scientists generally want to know that their algorithms terminate, which is to say, that they always return an answer,” Avigad says. Heule and his collaborators are leveraging that technology in tackling the Collatz conjecture, which is really just a termination problem.
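The deterministic rule in question is simple to state: halve an even number, triple an odd number and add one, and repeat. A minimal Python sketch of the iteration (the 10,000-step cutoff is an arbitrary illustrative bound, not part of the conjecture):

```python
def collatz_steps(n: int, limit: int = 10_000) -> int:
    """Count iterations of the Collatz rule until n reaches 1.

    Returns the step count, or -1 if `limit` is exceeded --
    a cutoff can only observe termination, never prove it.
    """
    steps = 0
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
        if steps > limit:
            return -1
    return steps

# Every starting value ever tested eventually reaches 1; for example,
# 27 famously takes a long, erratic path before settling.
print(collatz_steps(27))  # 111 steps
```

The conjecture asserts that this loop halts for every positive integer, which is exactly the kind of termination claim computer scientists routinely care about.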
“The beauty of this automated method is that you can turn on the computer, and wait.”
Heule’s expertise is with a computational tool called a “SAT solver”—or a “satisfiability” solver, a computer program that determines whether there is a solution for a formula or problem given a set of constraints. Though crucially, in the case of a mathematical challenge, a SAT solver first needs the problem translated, or represented, in terms that the computer understands. And as Yolcu, a PhD student with Heule, puts it: “Representation matters, a lot.”
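To make the idea concrete, here is a deliberately naive satisfiability check in plain Python. It decides the same yes/no question a SAT solver answers, by trying every assignment; real solvers such as Heule's use far smarter search, and the integer clause encoding below simply borrows the common DIMACS convention:

```python
from itertools import product

def brute_force_sat(num_vars, clauses):
    """Decide satisfiability of a CNF formula by exhaustive search.

    Clauses are lists of nonzero ints (DIMACS style): literal k means
    variable k is True, -k means variable k is False. Returns a
    satisfying assignment as a tuple of booleans, or None.
    """
    for assignment in product([False, True], repeat=num_vars):
        def lit_true(lit):
            value = assignment[abs(lit) - 1]
            return value if lit > 0 else not value
        # The formula is satisfied if every clause has a true literal.
        if all(any(lit_true(l) for l in clause) for clause in clauses):
            return assignment
    return None  # unsatisfiable: no assignment meets the constraints

# (x1 or x2) and (not x1 or x2) and (not x2 or x1)
print(brute_force_sat(2, [[1, 2], [-1, 2], [-2, 1]]))  # (True, True)
```

Exhaustive search doubles in cost with each variable, which is precisely why representation matters: a clever encoding can make an enormous search space tractable for a real solver.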
A longshot, but worth a try
When Heule first mentioned tackling Collatz with a SAT solver, Aaronson thought, “There is no way in hell this is going to work.” But he was easily convinced it was worth a try, since Heule saw subtle ways to transform this old problem that might make it pliable. He’d noticed that a community of computer scientists was using SAT solvers to successfully find termination proofs for an abstract representation of computation called a “rewrite system.” It was a longshot, but he suggested to Aaronson that transforming the Collatz conjecture into a rewrite system might make it possible to get a termination proof for Collatz (Aaronson had previously helped transform the Riemann hypothesis into a computational system, encoding it in a small Turing machine). That evening, Aaronson designed the system. “It was like a homework assignment, a fun exercise,” he says.
Aaronson’s system captured the Collatz problem with 11 rules. If the researchers could get a termination proof for this analogous system, applying those 11 rules in any order, that would prove the Collatz conjecture true.
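Aaronson's actual 11 rules are not reproduced in this article, but a toy string-rewrite system illustrates what "terminates under any order of rule application" means. The two rules below are hypothetical stand-ins, and the bounded search can only suggest termination, not prove it:

```python
def rewrite_terminates(s, rules, max_steps=1000):
    """Apply rewrite rules (first matching rule, leftmost occurrence)
    until no rule applies. Returns the number of steps taken, or -1
    if the step bound is exhausted first.
    """
    for step in range(max_steps):
        for lhs, rhs in rules:
            if lhs in s:
                s = s.replace(lhs, rhs, 1)
                break
        else:
            return step  # no rule applies: the system has halted
    return -1  # bound hit -- termination neither observed nor refuted

# A toy two-rule system (NOT Aaronson's): "ab" -> "b", "bb" -> "b".
# Each rule shortens the string, so it must halt.
print(rewrite_terminates("aabb", [("ab", "b"), ("bb", "b")]))  # 3
```

A termination proof for a rewrite system shows that *every* possible sequence of rule applications halts, which is a far stronger statement than any finite run can check; that is what the researchers asked the SAT solver to certify.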
Heule tried with state-of-the-art tools for proving the termination of rewrite systems, which didn’t work—it was disappointing if not so surprising. “These tools are optimized for problems that can be solved in a minute, while any approach to solve Collatz likely requires days if not years of computation,” says Heule. This provided motivation to hone their approach and implement their own tools to transform the rewrite problem into a SAT problem.
Aaronson figured it would be much easier to solve the system minus one of the 11 rules—leaving a “Collatz-like” system, a litmus test for the larger goal. He issued a human-versus-computer challenge: The first to solve all subsystems with 10 rules wins. Aaronson tried by hand. Heule tried by SAT solver: He encoded the system as a satisfiability problem—with yet another clever layer of representation, translating the system into the computer’s lingo of variables that can be either 0 or 1—and then let his SAT solver run on the cores, searching for evidence of termination.
They both succeeded in proving that the system terminates with the various sets of 10 rules. Sometimes it was a trivial undertaking, for both the human and the program. Heule’s automated approach took at most 24 hours. Aaronson’s approach required significant intellectual effort, taking a few hours or even a day—one set of 10 rules he never managed to prove, though he firmly believes he could have, with more effort. “In a very literal sense I was battling a Terminator,” Aaronson says—“at least a termination theorem prover.”
Yolcu has since fine-tuned the SAT solver, calibrating the tool to better fit the nature of the Collatz problem. These tricks made all the difference—speeding up the termination proofs for the 10-rule subsystems and reducing runtimes to mere seconds.
“The main question that remains,” says Aaronson, “is, What about the full set of 11? You try running the system on the full set and it just runs forever, which maybe shouldn’t shock us, because that is the Collatz problem.”
As Heule sees it, most research in automated reasoning turns a blind eye to problems that require lots of computation. But based on his previous breakthroughs he believes these problems can be solved. Others have transformed Collatz into a rewrite system before, but it’s the strategy of wielding a fine-tuned SAT solver at scale, with formidable compute power, that might gain traction toward a proof.
So far, Heule has run the Collatz investigation using about 5,000 cores (the processing units powering computers; consumer computers have four or eight cores). As an Amazon Scholar, he has an open invitation from Amazon Web Services to access “practically unlimited” resources—as many as one million cores. But he’s reluctant to use significantly more.
“I want some indication that this is a realistic attempt,” he says. Otherwise, Heule feels he’d be wasting resources and trust. “I don’t need 100% confidence, but I really would like to have some evidence that there’s a reasonable chance that it’s going to succeed.”
Supercharging a transformation
“The beauty of this automated method is that you can turn on the computer, and wait,” says the mathematician Jeffrey Lagarias, of the University of Michigan. He’s toyed with Collatz for about fifty years and become keeper of the knowledge, compiling annotated bibliographies and editing a book on the subject, “The Ultimate Challenge.” For Lagarias, the automated approach brought to mind a 2013 paper by the Princeton mathematician John Horton Conway, who mused that the Collatz problem might be among an elusive class of problems that are true and “undecidable”—but at once not provably undecidable. As Conway noted: “… it might even be that the assertion that they are not provable is not itself provable, and so on.”
“If Conway is right,” Lagarias says, “there will be no proof, automated or not, and we will never know the answer.”
Fostering innovation through a culture of curiosity
And so I think a big part of it as a company, by setting these ambitious goals, it forces us to say if we want to be number one, if we want to be top tier in these areas, if we want to continue to generate results, how do we get there using technology? And so that really forces us to throw away our assumptions because you can’t follow somebody, if you want to be number one you can’t follow someone to become number one. And so we understand that the path to get there, it’s through, of course, technology and the software and the enablement and the investment, but it really is by becoming goal-oriented. And if we look at these examples of how do we create the infrastructure on the technology side to support these ambitious goals, we ourselves have to be ambitious in turn because if we bring a solution that’s also a me too, that’s a copycat, that doesn’t have differentiation, that’s not going to propel us, for example, to be a top 10 supply chain. It just doesn’t pass muster.
So I think at the top level, it starts with the business ambition. And then from there we can organize ourselves at the intersection of the business ambition and the technology trends to have those very rich discussions and being the glue of how do we put together so many moving pieces because we’re constantly scanning the technology landscape for new advancing and emerging technologies that can come in and be a part of achieving that mission. And so that’s how we set it up on the process side. As an example, I think one of the things, and it’s also innovation, but it doesn’t get talked about as much, but for the community out there, I think it’s going to be very relevant is, how do we stay on top of the data sovereignty questions and data localization? There’s a lot of work that needs to go into rethinking what your cloud, private, public, edge, on-premise look like going forward so that we can remain cutting edge and competitive in each of our markets while meeting the increasing guidance that we’re getting from countries and regulatory agencies about data localization and data sovereignty.
And so in our case, as a global company that’s listed in Hong Kong and we operate all around the world, we’ve had to really think deeply about the architecture of our solutions and apply innovation in how we can architect for a longer term growth, but in a world that’s increasingly uncertain. So I think there’s a lot of drivers in some sense, which is our corporate aspirations, our operating environment, which has continued to have a lot of uncertainty, and that really forces us to take a very sharp lens on what cutting edge looks like. And it’s not always the bright and shiny technology. Cutting edge could mean going to the executive committee and saying, Hey, we’re going to face a challenge about compliance. Here’s the innovation we’re bringing about architecture so that we can handle not just the next country or regulatory regime that we have to comply with, but the next 10, the next 50.
Laurel: Well, and to follow up with a bit more of a specific example, how does R&D help improve manufacturing in the software supply chain as well as emerging technologies like artificial intelligence and the industrial metaverse?
Art: Oh, I love this one because this is the perfect example of there’s a lot happening in the technology industry and there’s so much back to the earlier point of applied curiosity and how we can try this. So specifically around artificial intelligence and industrial metaverse, I think those go really well together with what are Lenovo’s natural strengths. Our heritage is as a leading global manufacturer, and now we’re looking to also transition to services-led, but applying AI and technologies like the metaverse to our factories. I think it’s almost easier to talk about the inverse, Laurel, which is if we… Because, and I remember very clearly we’ve mapped this out, there’s no area within the supply chain and manufacturing that is not touched by these areas. If I think about an example, actually, it’s very timely that we’re having this discussion. Lenovo was recognized just a few weeks ago at the World Economic Forum as part of the global lighthouse network on leading manufacturing.
And that’s based very much on applying AI and metaverse technologies and embedding them into every aspect of what we do about our own supply chain and manufacturing network. And so if I pick a couple of examples on the quality side within the factory, we’ve implemented a combination of digital twin technology around how we can design to cost, design to quality in ways that are much faster than before, where we can prototype in the digital world where it’s faster and lower cost and correcting errors is more upfront and timely. So we are able to much more quickly iterate on our products. We’re able to have better quality. We’ve taken advanced computer vision so that we’re able to identify quality defects earlier on. We’re able to implement technologies around the industrial metaverse so that we can train our factory workers more effectively and better using aspects of AR and VR.
And we’re also able to, one of the really important parts of running an effective manufacturing operation is actually production planning, because there’s so many thousands of parts that are coming in, and I think everyone who’s listening knows how much uncertainty and volatility there have been in supply chains. So how do you take such a multi-thousand dimensional planning problem and optimize that? Those are things where we apply smart production planning models to keep our factories fully running so that we can meet our customer delivery dates. So I don’t want to drone on, but I think literally the answer was: there is no place, if you think about logistics, planning, production, scheduling, shipping, where we didn’t find AI and metaverse use cases that were able to significantly enhance the way we run our operations. And again, we’re doing this internally and that’s why we’re very proud that the World Economic Forum recognized us as a global lighthouse network manufacturing member.
Laurel: It’s certainly important, especially when we’re bringing together computing and IT environments in this increasing complexity. So as businesses continue to transform and accelerate their transformations, how do you build resiliency throughout Lenovo? Because that is certainly another foundational characteristic that is so necessary.
The Download: covid’s origin drama, and TikTok’s uncertain future
This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.
Newly revealed coronavirus data has reignited a debate over the virus’s origins
This week, we’ve seen the resurgence of a debate that has been swirling since the start of the pandemic—where did the virus that causes covid-19 come from?
For the most part, scientists have maintained that the virus probably jumped from an animal to a human at the Huanan Seafood Market in Wuhan at some point in late 2019. But some claim that the virus leaped from humans to animals, rather than the other way around. And many continue to claim that the virus somehow leaked from a nearby laboratory that was studying coronaviruses in bats.
Data collected in 2020—and kept from public view since then—potentially adds weight to the animal theory. It highlights a potential suspect: the raccoon dog. But exactly how much weight it adds depends on who you ask. Read the full story.
This story is from The Checkup, Jessica’s weekly biotech newsletter. Sign up to receive it in your inbox every Thursday.
Read more of MIT Technology Review’s covid reporting:
+ Our senior biotech editor Antonio Regalado investigated the origins of the coronavirus behind covid-19 in his five-part podcast series Curious Coincidence.
+ Meet the scientist at the center of the covid lab leak controversy. Shi Zhengli has spent years at the Wuhan Institute of Virology researching coronaviruses that live in bats. Her work has come under fire as the world tries to understand where covid-19 came from. Read the full story.
+ This scientist now believes covid started in Wuhan’s wet market. Here’s why. Michael Worobey of the University of Arizona believes that a spillover of the virus from animals at the Huanan Seafood market was almost certainly behind the origin of the pandemic. Read the full story.
I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.
1 TikTok’s future in the US is hanging in the balance
Banning it is a colossal challenge, and officials still lack the legal authority to do so. (WP $)
+ TikTok CEO Shou Zi Chew was grilled by a congressional committee. (FT $)
+ He told lawmakers the company would earn their trust. (WSJ $)
+ Meanwhile, TikTok paid for influencers to travel to DC to lobby its cause. (Wired $)
2 A crypto fugitive has been arrested in Montenegro
Do Kwon has been on the run since the TerraUSD stablecoin collapsed last year. (WSJ $)
+ Want to mine Bitcoin? Get yourself to Texas. (Reuters)
+ What’s next for crypto. (MIT Technology Review)
3 Twitter’s getting rid of its legacy blue checks
On the entirely serious date of April 1. (The Verge)
+ The platform’s still an unattractive prospect for advertisers. (Vox)
4 Chatbots are having tough conversations for us
ChatGPT is adept at writing scripts for sensitive talks with kids and colleagues. (NYT $)
+ OpenAI has given ChatGPT access to the web’s live data. (The Verge)
+ How Character.AI became a billion-dollar unicorn. (WSJ $)
+ The inside story of how ChatGPT was built from the people who made it. (MIT Technology Review)
5 Jack Dorsey’s Block has been accused of fraudulent transactions
The payments company denied the claims, which also allege it inflated its user numbers. (FT $)
+ Dorsey doesn’t have a track record of caring about this kind of thing. (The Information $)
6 Homeowners associations are secretly installing surveillance systems
The system tracks license plates and follows residents’ movements. (The Intercept)
7 Inside the tricky ethics of using DNA to solve crimes
A new database could help to protect users’ privacy. (Wired $)
+ The citizen scientist who finds killers from her couch. (MIT Technology Review)
8 There are plenty of reasons to be optimistic about the climate
Healthier, more sustainable diets are a good place to start. (Scientific American)
+ Taking stock of our climate past, present, and future. (MIT Technology Review)
9 TikTok keeps hectoring us
It seems we just can’t get enough of being aggressively told what to do. (Vox)
10 Don’t get scammed by a deepfake
CallerID can’t be trusted to protect you from rogue AI calls. (Gizmodo)
Quote of the day
“Wait, I need content.”
—TikTok fashion creator Kristine Thompson refuses to miss a content opportunity during a trip to the US Capitol to lobby against a potential TikTok ban, she tells the New York Times.
The big story
This sci-fi blockchain game could help create a metaverse that no one owns
Dark Forest is a vast universe, and most of it is shrouded in darkness. Your mission, should you choose to accept it, is to venture into the unknown, avoid being destroyed by opposing players who may be lurking in the dark, and build an empire of the planets you discover and can make your own.
But while the video game seemingly looks and plays much like other online strategy games, it doesn’t rely on the servers running other popular online strategy games. And it may point to something even more profound: the possibility of a metaverse that isn’t owned by a big tech company. Read the full story.
We can still have nice things
A place for comfort, fun and distraction in these weird times. (Got any ideas? Drop me a line or tweet ’em at me.)
+ If underwater terrors are your thing, Joe Romiero takes some seriously impressive shark pictures and videos.
+ Try as it might, Ted Lasso’s British dialogue falls wide of the mark.
+ Let’s have a good old snoop around some celebrities’ bedrooms.
+ Why we can’t get enough of those fancy candles.
+ Interviewing animals with a tiny microphone, it doesn’t get much better than that.
Taking stock of our climate past, present, and future
Before you say anything, I do know that it is, in fact, nearly April. But this week has the distinct feeling of a sort of climate change New Year’s to me. Not only is it the spring equinox this week, which is celebrated as the new year in some cultures (Happy Nowruz!), but we also saw a big UN climate report drop on Monday, which has me in a very contemplative mood.
The report comes from the UN Intergovernmental Panel on Climate Change (IPCC), a group of scientists that releases reports about the state of climate change research.
The IPCC works in seven-year cycles, give or take. Each cycle, the group looks at all the published literature on climate change and puts together a handful of reports on different topics, leading up to a synthesis report that sums it all up. This week’s release was one of those synthesis reports. It follows one from 2014, and we should see another one around 2030.
Because these reports are a sort of summary of existing research, I’ve been thinking about this moment as a time to reflect. So for the newsletter this week, I thought we could get in the new year’s spirit and take a look at where we’ve come from, where we are, and where we’re going on climate change.
Climate past: 2014
Let’s start in 2014. The concentration of carbon dioxide in the atmosphere was just under 400 parts per million. The song “Happy” by Pharrell Williams was driving me slowly insane. And in November, the IPCC released its fifth synthesis report.
Some bits of the 2014 IPCC synthesis report feel familiar. Its authors clearly laid out the case that human activity was causing climate change, adaptation wasn’t going to cut it, and the world would need to take action to limit greenhouse-gas emissions. I saw all those same lines in this year’s report.
But there are also striking differences.
First, we were in a different place politically. World leaders hadn’t yet signed the Paris agreement, the landmark treaty that set a goal to limit global warming to 2 °C (3.6 °F) above preindustrial levels, with a target of 1.5 °C (2.7 °F). The 2014 assessment report laid the groundwork for that agreement.