What are quantum-resistant algorithms—and why do we need them?

Luckily, symmetric-key encryption methods are not in danger, because they work very differently and can be secured by simply increasing the size of the keys they use. The best known quantum attack on symmetric ciphers, Grover’s algorithm, offers only a quadratic speedup over classical brute-force search, so doubling the key length restores the original security margin (unless mathematicians come up with a way for quantum computers to do better). Increasing the key size, however, can’t protect existing public-key encryption algorithms, because the math problems they rest on are exactly the ones Shor’s algorithm solves efficiently. New algorithms are needed.
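
To make the arithmetic concrete, here is a minimal sketch (not from the original article) of why doubling the key length is considered enough, assuming Grover’s quadratic speedup is the best quantum attack available:

```python
# Sketch: effective brute-force cost of a k-bit symmetric key.
# A classical attacker needs on the order of 2^k guesses; Grover's
# algorithm cuts that to roughly 2^(k/2) quantum search steps, so
# doubling the key length restores the original security margin.
for key_bits in (128, 192, 256):
    print(f"{key_bits}-bit key: ~2^{key_bits} classical guesses, "
          f"~2^{key_bits // 2} Grover iterations")
```

In other words, a 256-bit key still leaves a quantum attacker facing roughly 2^128 search steps, the same margin a 128-bit key provides against classical attackers today.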

What are the repercussions if quantum computers break encryption we currently use?

Yeah, it’s bad. If public-key encryption were suddenly broken without a replacement, digital security would be severely compromised. For example, websites use public-key encryption to maintain secure internet connections, so sending sensitive information through websites would no longer be safe. Cryptocurrencies also depend on public-key encryption to secure their underlying blockchain technology, so the data on their ledgers would no longer be trustworthy.

There is also concern that hackers and nation-states might be hoarding highly sensitive government or intelligence data that they can’t currently decipher, in order to decrypt it later once quantum computers become available. The strategy is sometimes called “harvest now, decrypt later.”

How is work on quantum-resistant algorithms progressing?

In the US, NIST has been looking for new algorithms that can withstand attacks from quantum computers. The agency started taking public submissions in 2016, and the candidates have since been narrowed down to four finalists and three backup algorithms. The new algorithms are built on mathematical problems, such as those involving structured lattices, that are believed to remain hard even for quantum computers running Shor’s algorithm.
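
For a flavor of what one of these schemes looks like in code, here is a hedged sketch of a key exchange with CRYSTALS-Kyber, one of the four finalists, using the Open Quantum Safe project’s liboqs-python bindings; the package name and algorithm identifier are assumptions about your local install:

```python
# Hedged sketch: post-quantum key exchange with the Kyber KEM, using
# the Open Quantum Safe project's liboqs-python bindings. Assumes the
# oqs package is installed and built with Kyber512 enabled.
import oqs

with oqs.KeyEncapsulation("Kyber512") as receiver, \
     oqs.KeyEncapsulation("Kyber512") as sender:
    public_key = receiver.generate_keypair()   # receiver publishes this
    # Sender derives a shared secret plus a ciphertext to send back.
    ciphertext, secret_sent = sender.encap_secret(public_key)
    # Receiver recovers the same secret from the ciphertext.
    secret_received = receiver.decap_secret(ciphertext)
    assert secret_sent == secret_received      # both sides now share a key
```

The pattern mirrors classical key exchange: one side publishes a public key, the other encapsulates a shared secret against it, and only the holder of the private key can recover that secret.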

Project lead Dustin Moody says NIST is on schedule to complete standardization of the four finalists in 2024, which involves creating guidelines to ensure that the new algorithms are used correctly and securely. Standardization of the remaining three algorithms is expected in 2028.

The work of vetting candidates for the new standard falls mostly to mathematicians and cryptographers from universities and research institutions. They submit proposals for post-quantum cryptographic schemes and look for ways to attack them, sharing their findings by publishing papers and building on one another’s methods of attack.

The AI myth Western lawmakers get wrong


This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

While the US and the EU may differ on how to regulate tech, their lawmakers seem to agree on one thing: the West needs to ban AI-powered social scoring.

As they understand it, social scoring is a practice in which authoritarian governments—specifically China—rank people’s trustworthiness and punish them for undesirable behaviors, such as stealing or not paying back loans. Essentially, it’s seen as a dystopian superscore assigned to each citizen. 

The EU is currently negotiating a new law called the AI Act, which will ban member states, and maybe even private companies, from implementing such a system.

The trouble is, it’s “essentially banning thin air,” says Vincent Brussee, an analyst at the Mercator Institute for China Studies, a German think tank.

Back in 2014, China announced a six-year plan to build a system rewarding actions that build trust in society and penalizing the opposite. Eight years on, it has only just released a draft law that tries to codify past social credit pilots and guide future implementation.

There have been some contentious local experiments, such as one in the small city of Rongcheng in 2013, which gave every resident a starting personal credit score of 1,000 that could be raised or lowered depending on how their actions were judged. Residents are now able to opt out, and the local government has removed some controversial criteria.

But these have not gained wider traction elsewhere and do not apply to the entire Chinese population. There is no countrywide, all-seeing social credit system with algorithms that rank people.

As my colleague Zeyi Yang explains, “the reality is, that terrifying system doesn’t exist, and the central government doesn’t seem to have much appetite to build it, either.” 

What has been implemented is mostly pretty low-tech. It’s a “mix of attempts to regulate the financial credit industry, enable government agencies to share data with each other, and promote state-sanctioned moral values,” Zeyi writes. 

Kendra Schaefer, a partner at Trivium China, a Beijing-based research consultancy, who compiled a report on the subject for the US government, couldn’t find a single case in which data collection in China led to automated sanctions without human intervention. The South China Morning Post found that in Rongcheng, human “information gatherers” would walk around town and write down people’s misbehavior using a pen and paper. 

The myth originates from a pilot program called Sesame Credit, developed by Chinese tech company Alibaba. This was an attempt to assess people’s creditworthiness using customer data at a time when the majority of Chinese people didn’t have a credit card, says Brussee. The effort became conflated with the social credit system as a whole in what Brussee describes as a “game of Chinese whispers.” And the misunderstanding took on a life of its own. 

The irony is that while US and European politicians depict this as a problem stemming from authoritarian regimes, systems that rank and penalize people are already in place in the West. Algorithms designed to automate decisions are being rolled out en masse and used to deny people housing, jobs, and basic services. 

For example, in Amsterdam, authorities have used an algorithm to rank young people from disadvantaged neighborhoods according to their likelihood of becoming a criminal. They claim the aim is to prevent crime and to offer better, more targeted support.

But in reality, human rights groups argue, it has increased stigmatization and discrimination. The young people who end up on this list face more stops from police, home visits from authorities, and more stringent supervision from school and social workers.

It’s easy to take a stand against a dystopian algorithm that doesn’t really exist. But as lawmakers in both the EU and the US strive to build a shared understanding of AI governance, they would do better to look closer to home. Americans do not even have a federal privacy law that would offer some basic protections against algorithmic decision making. 

There is also a dire need for governments to conduct honest, thorough audits of the way authorities and companies use AI to make decisions about our lives. They might not like what they find—but that makes it all the more crucial for them to look.   

Deeper Learning

A bot that watched 70,000 hours of Minecraft could unlock AI’s next big thing

Research company OpenAI has built an AI that binged on 70,000 hours of videos of people playing Minecraft in order to play the game better than any AI before. It’s a breakthrough for a powerful new technique, called imitation learning, that could be used to train machines to carry out a wide range of tasks by watching humans do them first. It also raises the potential that sites like YouTube could be a vast and untapped source of training data. 
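
The supervised core of imitation learning, often called behavioral cloning, is simple to state in code. The sketch below is purely illustrative; the network sizes and data are toy stand-ins, not OpenAI’s actual setup:

```python
# Minimal behavioral-cloning sketch: learn to map observations to the
# actions a human demonstrator took. Shapes and names are illustrative.
import torch
import torch.nn as nn

obs_dim, n_actions = 64, 8           # toy observation/action spaces
policy = nn.Sequential(
    nn.Linear(obs_dim, 128), nn.ReLU(),
    nn.Linear(128, n_actions),       # logits over discrete actions
)
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Stand-in for (observation, human action) pairs mined from videos.
observations = torch.randn(256, obs_dim)
actions = torch.randint(0, n_actions, (256,))

for step in range(100):
    loss = loss_fn(policy(observations), actions)  # imitate the human
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

OpenAI’s system additionally trained an inverse-dynamics model to infer which actions were being taken in unlabeled videos before imitating them; the sketch covers only the imitation step.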

Why it’s a big deal: Imitation learning can be used to train AI to control robot arms, drive cars, or navigate websites. Some people, such as Meta’s chief AI scientist, Yann LeCun, think that watching videos will eventually help us train an AI with human-level intelligence. Read Will Douglas Heaven’s story here.

Bits and Bytes

Meta’s game-playing AI can make and break alliances like a human

Diplomacy is a popular strategy game in which seven players compete for control of Europe by moving pieces around on a map. The game requires players to talk to one another and spot when others are bluffing. Meta’s new AI, called Cicero, managed to trick humans in order to win.

It’s a big step forward toward AI that can help with complex problems, such as planning routes around busy traffic and negotiating contracts. But I’m not going to lie—it’s also an unnerving thought that an AI can so successfully deceive humans. (MIT Technology Review) 

We could run out of data to train AI language programs 

The trend of creating ever bigger AI models means we need even bigger data sets to train them. The trouble is, we might run out of suitable data by 2026, according to a paper by researchers from Epoch, an AI research and forecasting organization. This should prompt the AI community to come up with ways to do more with existing resources. (MIT Technology Review)

Stable Diffusion 2.0 is out

The open-source text-to-image AI Stable Diffusion has been given a big facelift, and its outputs are looking a lot sleeker and more realistic than before. It can even do hands. The pace of Stable Diffusion’s development is breathtaking. Its first version only launched in August. We are likely going to see even more progress in generative AI well into next year. 



Human creators stand to benefit as AI rewrites the rules of content creation


A game-changer for content creation

Among the AI-related technologies to have emerged in the past several years is generative AI—deep-learning algorithms that allow computers to generate original content, such as text, images, video, audio, and code. And demand for such content will likely jump in the coming years—Gartner predicts that by 2025, generative AI will account for 10% of all data created, compared with 1% in 2022. 

Screenshot of Jason Allen’s work “Théâtre D’opéra Spatial,” Discord 

“Théâtre D’opéra Spatial” is an example of AI-generated content (AIGC), created with the Midjourney text-to-art generator. Several other AI-driven art-generating programs also emerged in 2022, capable of creating paintings from single-line text prompts. The diversity of these technologies reflects a wide range of artistic styles and user demands. DALL-E 2 and Stable Diffusion, for instance, focus mainly on Western-style artwork, while Baidu’s ERNIE-ViLG and Wenxin Yige produce images influenced by Chinese aesthetics. At Baidu’s deep learning developer conference Wave Summit+ 2022, the company announced that Wenxin Yige has been updated with new features, including turning photos into AI-generated art, image editing, and one-click video production.
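
To illustrate just how little a single-line text prompt demands of the user, here is a hedged sketch of generating an image with the open-source Stable Diffusion model through Hugging Face’s diffusers library; the model ID and prompt are illustrative, and Midjourney and Baidu’s tools expose their own interfaces:

```python
# Sketch: one-prompt image generation with Stable Diffusion via the
# diffusers library. Model ID and prompt are illustrative choices.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

image = pipe("a grand space-opera theater, dramatic lighting").images[0]
image.save("space_opera_theater.png")
```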

Meanwhile, AIGC also encompasses articles, videos, and other media, including synthesized speech. Voice synthesis, a technology that generates audible speech indistinguishable from the voice of the original speaker, can be applied in many scenarios, including voice navigation for digital maps. Baidu Maps, for example, allows users to customize its voice navigation to their own voice just by recording nine sentences.

Recent advances in AI technologies have also created generative language models that can fluently compose texts with just one click. They can be used for generating marketing copy, processing documents, extracting summaries, and other text tasks, unlocking creativity that other technologies such as voice synthesis have failed to tap. One of the leading generative language models is Baidu’s ERNIE 3.0, which has been widely applied in various industries such as health care, education, technology, and entertainment.
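
In practice, driving such a model takes only a few lines. Here is a minimal sketch using Hugging Face’s transformers pipeline, with GPT-2 standing in for a commercial model like ERNIE 3.0, which is accessed through Baidu’s own platform:

```python
# Sketch: one-click text generation with a generative language model.
# GPT-2 is a freely available stand-in for models like ERNIE 3.0.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
draft = generator(
    "Introducing our new noise-canceling headphones:",
    max_new_tokens=40,
    num_return_sequences=1,
)[0]["generated_text"]
print(draft)  # a first-pass marketing blurb to edit by hand
```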

“In the past year, artificial intelligence has made a great leap and changed its technological direction,” says Robin Li, CEO of Baidu. “Artificial intelligence has gone from understanding pictures and text to generating content.” Going one step further, Baidu App, a popular search and newsfeed app with over 600 million monthly users, including five million content creators, recently released a video editing feature that can produce a short video, complete with a voiceover, from the contents of an article.

Improving efficiency and growth

As AIGC becomes increasingly common, it could make content creation more efficient by eliminating repetitive, time-intensive tasks for creators, such as sorting source assets, processing voice recordings, and rendering images. Aspiring filmmakers, for instance, have long had to pay their dues by spending countless hours mastering the complex and tedious process of video editing. AIGC may soon make that unnecessary.

Besides boosting efficiency, AIGC could also drive business growth in content creation amid rising demand for personalized digital content that users can interact with dynamically. InsightSLICE forecasts that the global digital creation market will grow an average of 12% annually between 2020 and 2030, hitting $38.2 billion. With content consumption fast outpacing production, traditional development methods will likely struggle to meet such demand, creating a gap that AIGC could fill. “AI has the potential to meet this massive demand for content at a tenth of the cost and a hundred times or thousands of times faster in the next decade,” Li says.
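
As a quick sanity check on that forecast (my arithmetic, not InsightSLICE’s), compounding 12% annual growth backward from the 2030 figure implies a market of roughly $12 billion in 2020:

```python
# Sketch: implied 2020 market size if $38.2B in 2030 reflects
# 12% average annual growth over ten years.
base_2020 = 38.2 / (1.12 ** 10)
print(f"Implied 2020 market size: ~${base_2020:.1f}B")  # ~$12.3B
```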

AI with humanity as its foundation

AIGC can also serve as an educational tool by helping children develop their creativity. StoryDrawer, for instance, is an AI-driven program designed to boost children’s creative thinking, which often declines as the focus in their education shifts to rote learning. 

The Download: the West’s AI myth, and Musk v Apple


While the US and the EU may differ on how to regulate tech, their lawmakers seem to agree on one thing: the West needs to ban AI-powered social scoring.

As they understand it, social scoring is a practice in which authoritarian governments—specifically China—rank people’s trustworthiness and punish them for undesirable behaviors, such as stealing or not paying back loans. Essentially, it’s seen as a dystopian superscore assigned to each citizen.

The reality? While there have been some contentious local experiments with social credit scores in China, there is no countrywide, all-seeing social credit system with algorithms that rank people.

The irony is that while US and European politicians try to ban systems that don’t really exist, systems that do rank and penalize people are already in place in the West—and are denying people housing and jobs in the process. Read the full story.

—Melissa Heikkilä

Melissa’s story is from The Algorithm, her weekly AI newsletter covering all of the industry’s most interesting developments. Sign up to receive it in your inbox every Monday.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Apple has reportedly threatened to pull Twitter from the App Store
According to Elon Musk. (NYT $)
+ Musk has threatened to “go to war” with the company after it decided to stop advertising on Twitter. (WP $)
+ Apple’s reluctance to advertise on Twitter right now isn’t exactly unique. (Motherboard)
+ Twitter’s child protection team in Asia has been gutted. (Wired $)

2 Another crypto firm has collapsed
Lender BlockFi has filed for bankruptcy, and is (partly) blaming FTX. (WSJ $)
+ The company is suing FTX founder Sam Bankman-Fried. (FT $)
+ It looks like the much-feared “crypto contagion” is spreading. (NYT $)

3 AI is rapidly becoming more powerful—and dangerous
That’s particularly worrying when its growth is too much for safety teams to handle. (Vox)
+ Do AI systems need to come with safety warnings? (MIT Technology Review)
+ This AI chat-room game is gaining a legion of fans. (The Guardian)

4 A Pegasus spyware investigation is in danger of being compromised 
It’s the target of a disinformation campaign, security experts have warned. (The Guardian)
+ Cyber insurance won’t protect you from theft of your data. (The Guardian)

5 Google gave the FBI geofence data for its January 6 investigation 
Google identified more than 5,000 devices near the US Capitol during the riot. (Wired $)

6 Monkeypox isn’t going anywhere
But it’s not on the rise, either. (The Atlantic $)
+ The World Health Organization says it will now be known as mpox. (BBC)
+ Everything you need to know about the monkeypox vaccines. (MIT Technology Review)

7 What it’s like to be the unwitting face of a romance scam
James Scott Geras’ pictures have been used to catfish countless women. (Motherboard)
