Is 57 a prime number? There’s a game for that.

The Greek mathematician Euclid may very well have proved, circa 300 BCE, that there are infinitely many prime numbers. But it was the British mathematician Christian Lawson-Perfect who, more recently, devised the computer game “Is this prime?”

Launched five years ago, the game surpassed three million tries on July 16—or, more to the point, it hit run 2,999,999—after a Hacker News post generated a surge of about 100,000 attempts.

The aim of the game is to sort as many numbers as possible into “prime” or “not prime” in 60 seconds (as Lawson-Perfect originally described it on The Aperiodical, a mathematics blog of which he’s a founder and editor).

A prime number is a whole number with precisely two divisors, 1 and itself.

“It’s very simple, but infuriatingly difficult,” says Lawson-Perfect, who works in the e-learning unit in Newcastle University’s School of Mathematics and Statistics. He created the game in his spare time, but it’s proved useful on the job: Lawson-Perfect writes e-assessment software (systems that evaluate learning). “The system I make is designed to randomly generate a maths question, and take an answer from the student, which it automatically marks and gives feedback on,” he says. “You could view the primes game as a kind of assessment”—he’s used it when doing outreach sessions in schools.

He made the game slightly easier with keyboard shortcuts—the y and n keys click the corresponding yes-no buttons on the screen—in order to save mouse-moving time.

Give it a whirl:

Primality-checking algorithms

Prime numbers have practical utility in computing—such as with error-correcting codes and encryption. But while prime factorization is hard (hence its value in encryption), primality checking is easier, if tricky. The Fields Medal–winning, German-born mathematician Alexander Grothendieck famously mistook 57 for prime (the “Grothendieck prime”). When Lawson-Perfect analyzed data from the game, he found that various numbers exhibited a certain “Grothendieckyness.” The number most often mistaken for a prime was 51, followed by 57, 87, 91, 119, and 133—Lawson-Perfect’s nemesis (he also devised a handy primality-checking service: https://isthisprime.com/2).

The most minimalistic algorithm for checking a number’s primeness is trial division—divide the number by every number up to its square root (the product of two numbers greater than the square root would be greater than the number in question).
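In code, that naive check might look something like the following minimal Python sketch (not the game’s actual implementation):

```python
from math import isqrt

def is_prime_trial_division(n: int) -> bool:
    """Check primality by trying every possible divisor up to sqrt(n)."""
    if n < 2:
        return False
    for d in range(2, isqrt(n) + 1):
        if n % d == 0:
            return False   # found a divisor other than 1 and n
    return True

print(is_prime_trial_division(57))  # False: 57 = 3 * 19
print(is_prime_trial_division(59))  # True
```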

However, this naïve method is not very efficient, and neither are some other techniques devised over the centuries—as the German mathematician Carl Friedrich Gauss observed in 1801, they “require intolerable labor even for the most indefatigable calculator.”

The algorithm Lawson-Perfect coded up for the game is called the Miller-Rabin primality test, which builds on the Fermat primality test—a very efficient but not ironclad method based on the 17th-century result known as Fermat’s little theorem. The Miller-Rabin test works surprisingly well. As far as Lawson-Perfect is concerned, it’s “basically magic”—“I don’t really understand how it works, but I’m confident I could if I spent the time to look at it more deeply,” he says.

Since the test uses randomness, it produces a probabilistic result, which means that sometimes the test lies. “There is a chance of uncovering an imposter, a composite number that is trying to pass as prime,” says Carl Pomerance, a mathematician at Dartmouth College and coauthor of the book Prime Numbers: A Computational Perspective. The chances of an imposter slipping through the algorithm’s clever checking mechanism are maybe one in a trillion, though, so the test is “pretty safe.”
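For readers who want to see the shape of it, here is a minimal Python sketch of a Miller-Rabin test—not Lawson-Perfect’s actual code—showing how repeated random rounds drive the error probability down:

```python
import random

def miller_rabin(n: int, rounds: int = 20) -> bool:
    """Return False if n is definitely composite, True if n is prime
    with overwhelming probability (error chance at most 4**-rounds)."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):       # handle small numbers directly
        if n % p == 0:
            return n == p
    # Write n - 1 as 2**s * d with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)    # random base, as in Fermat's test
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False                  # base a witnesses that n is composite
    return True                           # no witness found: probably prime

print(miller_rabin(57))          # False (57 = 3 * 19)
print(miller_rabin(2**31 - 1))   # True (a Mersenne prime)
```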

But as far as clever primality checking algorithms go, the Miller-Rabin test is “the tip of the iceberg,” says Pomerance. Notably, 19 years ago, three computer scientists—Manindra Agrawal, Neeraj Kayal, and Nitin Saxena, all at the Indian Institute of Technology Kanpur—announced the AKS primality test (again building upon Fermat’s method), which finally provided a test for unequivocally proving that a number is prime, with no randomization and (theoretically, at least) with impressive speed. Alas, fast in theory doesn’t always translate to fast in real life, so the AKS test isn’t useful for practical purposes.

The unofficial world record

But practicality isn’t always the point. Occasionally Lawson-Perfect receives email from people keen to share their high scores in the game. Recently a player reported 60 primes in 60 seconds, but the record is more likely 127. (Lawson-Perfect doesn’t track high scores; he knows there are some cheaters, with computer-aided attempts that produce spikes in the data.)

The 127 score was achieved by Ravi Fernando, a mathematics graduate student at the University of California, Berkeley, who posted the result in July 2020. It’s still his personal best and, he reckons, the “unofficial world record.”

Since last summer, Fernando hasn’t played the game much with the default settings, but he has tried with customized settings, selecting for larger numbers and allowing longer time limits—he scored 240 with a five-minute limit. “Which took a lot of guesswork, because the numbers got into the high four-digit range and I’ve only ever memorized primes up to the low 3,000s,” he says. “I suppose some would argue even that is excessive.”

Fernando’s research is in algebraic geometry, which involves primes to some extent. But, he says, “my research has more to do with why I stopped playing the game than why I started” (he started his PhD in 2014). Plus, he figures 127 would be very hard to beat. And, he says, “it just feels right to stop at a prime-number record.”

These robots know when to ask for help

A new training model, dubbed “KnowNo,” aims to teach robots to ask for our help when orders are unclear. At the same time, it ensures they seek clarification only when necessary, minimizing needless back-and-forth. The result is a smart assistant that tries to make sure it understands what you want without bothering you too much.

Andy Zeng, a research scientist at Google DeepMind who helped develop the new technique, says that while robots can be powerful in many specific scenarios, they are often bad at generalized tasks that require common sense.

For example, when asked to bring you a Coke, the robot first needs to understand that it has to go into the kitchen, look for the refrigerator, and open the fridge door. Conventionally, these smaller substeps had to be manually programmed, because otherwise the robot would not know that people usually keep their drinks in the kitchen.

That’s something large language models (LLMs) could help to fix, because they have a lot of common-sense knowledge baked in, says Zeng. 

Now when the robot is asked to bring a Coke, an LLM, which has a generalized understanding of the world, can generate a step-by-step guide for the robot to follow.

The problem with LLMs, though, is that there’s no way to guarantee that their instructions are possible for the robot to execute. Maybe the person doesn’t have a refrigerator in the kitchen, or the fridge door handle is broken. In these situations, robots need to ask humans for help.

KnowNo makes that possible by combining large language models with statistical tools that quantify confidence levels. 

When given an ambiguous instruction like “Put the bowl in the microwave,” KnowNo first generates multiple possible next actions using the language model. Then it creates a confidence score predicting the likelihood that each potential choice is the best one.
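A rough sketch of that decision flow might look like the following; the function names and the fixed threshold are hypothetical stand-ins (the real system calibrates its confidence cutoff with the statistical tools mentioned above rather than hard-coding it):

```python
from typing import Callable

def decide_next_action(
    instruction: str,
    propose_actions: Callable[[str], dict[str, float]],  # LLM proposals: action -> confidence score
    threshold: float = 0.8,   # placeholder; KnowNo calibrates this cutoff statistically
) -> str:
    scores = propose_actions(instruction)
    # Keep every candidate action whose confidence clears the threshold.
    candidates = [action for action, score in scores.items() if score >= threshold]
    if len(candidates) == 1:
        return candidates[0]              # confident enough: act without asking
    # Zero or several plausible actions remain: ask the human to disambiguate.
    options = candidates or list(scores)
    return ask_human_to_choose(instruction, options)

def ask_human_to_choose(instruction: str, options: list[str]) -> str:
    print(f"'{instruction}' is ambiguous. Which did you mean?")
    for i, option in enumerate(options, 1):
        print(f"  {i}. {option}")
    return options[int(input("> ")) - 1]
```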


The Download: inside the first CRISPR treatment, and smarter robots

The news: A new robot training model, dubbed “KnowNo,” aims to teach robots to ask for our help when orders are unclear. At the same time, it ensures they seek clarification only when necessary, minimizing needless back-and-forth. The result is a smart assistant that tries to make sure it understands what you want without bothering you too much.

Why it matters: While robots can be powerful in many specific scenarios, they are often bad at generalized tasks that require common sense. That’s something large language models could help to fix, because they have a lot of common-sense knowledge baked in. Read the full story.

—June Kim

Medical microrobots that travel inside the body are (still) on their way

The human body is a labyrinth of vessels and tubing, full of barriers that are difficult to break through. That poses a serious hurdle for doctors. Illness is often caused by problems that are hard to visualize and difficult to access. But imagine if we could deploy armies of tiny robots into the body to do the job for us. They could break up hard-to-reach clots, deliver drugs to the most inaccessible tumors, and even help guide embryos toward implantation.

We’ve been hearing about the use of tiny robots in medicine for years, maybe even decades. And they’re still not here. But experts are adamant that medical microbots are finally coming, and that they could be a game changer for a number of serious diseases. Read the full story.

—Cassandra Willyard


5 things we didn’t put on our 2024 list of 10 Breakthrough Technologies

We haven’t always been right (RIP, Baxter), but we’ve often been early to spot important areas of progress (we put natural-language processing on our very first list in 2001; today this technology underpins large language models and generative AI tools like ChatGPT).  

Every year, our reporters and editors nominate technologies that they think deserve a spot, and we spend weeks debating which ones should make the cut. Here are some of the technologies we didn’t pick this time—and why we’ve left them off, for now. 

New drugs for Alzheimer’s disease

Alzheimer’s patients have long lacked treatment options. Several new drugs have now been shown to slow cognitive decline, albeit modestly, by clearing out harmful plaques in the brain. In July, the FDA approved Leqembi, from Eisai and Biogen, and Eli Lilly’s donanemab could soon be next. But the drugs come with serious side effects, including brain swelling and bleeding, which can be fatal in some cases. Plus, they’re hard to administer—patients receive doses via an IV and must undergo regular MRIs to check for brain swelling. These drawbacks gave us pause.

Sustainable aviation fuel 

Alternative jet fuels made from cooking oil, leftover animal fats, or agricultural waste could reduce emissions from flying. They have been in development for years, and scientists are making steady progress, with several recent demonstration flights. But production and use will need to ramp up significantly for these fuels to make a meaningful climate impact. While they do look promising, there wasn’t a key moment or “breakthrough” that merited a spot for sustainable aviation fuels on this year’s list.  

Solar geoengineering

One way to counteract global warming could be to release particles into the stratosphere that reflect the sun’s energy and cool the planet. That idea is highly controversial within the scientific community, but a few researchers and companies have begun exploring whether it’s possible by launching a series of small-scale high-flying tests. One such launch prompted Mexico to ban solar geoengineering experiments earlier this year. It’s not really clear where geoengineering will go from here or whether these early efforts will stall out. Amid that uncertainty, we decided to hold off for now. 
