The Download: text-to-video AI, and China’s big methanol bet

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Meta’s new AI can turn text prompts into videos

What’s happened: Meta has unveiled an AI system that generates short videos based on text prompts. Make-A-Video lets you type in a string of words, like “A dog wearing a superhero outfit with a red cape flying through the sky,” and then generates a five-second clip that, while pretty accurate, has the aesthetics of a trippy old home video.

How it works: Meta combined data from three open-source image and video data sets to train its model. Standard text-image data sets of labeled still images helped the AI learn what objects are called and what they look like. And a database of videos helped it learn how those objects are supposed to move in the world. 

Why it matters: Although the effect is rather crude, the system offers an early glimpse of what’s coming next for generative artificial intelligence, and it is the next obvious step from the text-to-image AI systems that have caused huge excitement this year. But it also raises some big ethical questions. Read the full story.

—Melissa Heikkilä

China is betting big on another gas engine alternative: methanol cars

As the Chinese government works to reach ambitious carbon goals, the country has become a global leader in the adoption of electric vehicles. But that’s not the only greener car alternative it’s pursuing.

While methanol fuel has been discussed and piloted in China for a decade, its adoption has long lagged. Now the government is trying to accelerate the uptake of methanol cars, and other state efforts over the last year, such as drafting methanol car standards and supporting relevant industries, reaffirm its commitment to the alternative fuel.

This matters because, just like EVs, the technology could become both a commercial success and a political boost to China’s climate-tech ambitions. Read the full story.

—Zeyi Yang

Can we find ways to live beyond 100? Millionaires are betting on it.

Scientists and biotech companies have been networking with uber-wealthy investors at a swanky conference in Switzerland this week, making the case for longevity science and anti-aging strategies. My colleague Jess Hamzelou, our senior biomedicine reporter, joined them, and got a peek at some of the most cutting-edge work in the field. Read about what she discovered.

Jess’s story is from The Checkup, her new weekly newsletter giving you the inside track on all things health and tech-related. Sign up to receive it in your inbox every Thursday.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Hurricane Ian has left vast swathes of Florida underwater
As the storm heads toward South Carolina, Biden has warned it could become the deadliest hurricane in Florida’s history. (The Guardian)
+ Coral reefs are an effective natural defense against hurricanes. (Vox)
+ The storm is a potent mix of powerful and unpredictable. (The Atlantic $)
+ It could be on course to join the list of storms as severe as Katrina. (New Yorker $)

2 Iran is ramping up internet blackouts and censorship 
Thus far, it’s not achieving the government’s desired outcome. (Slate $)
+ A niche tech publisher is shining a light on China’s surveillance machine. (The Atlantic $)

3 What makes plastic so useful also makes it a nightmare to recycle
A new method of breaking it down could help. (Economist $)
+ A French company is using enzymes to recycle one of the most common single-use plastics. (MIT Technology Review)

4 Why Russia’s cyber war never really materialized 
The attacks it did land didn’t deliver the intended consequences. (FT $)
+ Here’s how the war in Ukraine could end. (New Yorker $)
+ Russian men are reportedly pretending to have HIV to escape conscription. (Rest of World)

5 Jack Dorsey tried to get Elon Musk a place on Twitter’s board
But the other members saw the appointment as too risky. (CNBC)
+ The former CEO also tried to get Musk and CEO Parag Agrawal off on the right foot. (WSJ $)
+ Musk wanted to search for ‘Trump’ in his hunt for bot data. (Bloomberg $)
+ Musk also toyed with appointing Oprah to Twitter’s board. (The Information $)

6 The Arctic Ocean is rapidly becoming more acidic
Unsurprisingly, climate change is the culprit. (Motherboard)
+ China, the world’s biggest greenhouse gas emitter, is suffering. (Vox)

7 Reboosting the Hubble Telescope would give it a new lease of life
NASA and SpaceX think they could do just that. (BBC)
+ NASA has taken stunning photos of Jupiter’s moon Europa. (New Scientist $) 

8 Brace yourself for a new wave of at-home tests
They’re not just for covid, either. (Neo.Life)
+ Genome sequencing has never been so cheap, or easy. (Wired $)

9 Why voice notes are so controversial 
Send yours with caution. (WSJ $)
+ Lasers can send a whispered audio message directly to one person’s ear. (MIT Technology Review)

10 AI is creating horrible new Pokémon 
Don’t say I didn’t warn you. (WP $)
+ This artist is dominating AI-generated art. And he’s not happy about it. (MIT Technology Review)

Quote of the day

“I guess you learn who your real friends are when you can’t get allocation in their seed round”

—Maia Bittner, an angel investor, jokes on Twitter about the pitfalls of investing in friends’ startups, Bloomberg reports.

The big story

Meet the wannabe kidfluencers struggling for stardom

December 2019 

On YouTube, children can become millionaires—seemingly overnight, without trying. The highest paid of them, eight-year-old Ryan Kaji, made $22 million in 2018 by playing with toys on his channel Ryan ToysReview (now Ryan’s World). There are now thousands of similarly famous child YouTubers: babies who have been vlogged since the moment of their birth, 10-year-old streamers showing off video-game tricks, teenage girls giving acne advice from their bedrooms.

Why do so many kids want to be YouTubers? Do they only seek fame, or is there more to it: creativity, community, and a future career? How are their parents helping them? And what happens if, after spending thousands of dollars or dropping out of school, it doesn’t work out? Read the full story.

—Amelia Tait

We can still have nice things

A place for comfort, fun and distraction in these weird times. (Got any ideas? Drop me a line or tweet ’em at me.)

+ Francis Ford Coppola’s excellent chiller Bram Stoker’s Dracula is back in movie theaters this Halloween. Enjoy the opulent 4K trailer here.
+ These scallops can’t get enough of bright lights.
+ Cher’s sprawling home is every bit as lavish as you’d expect.
+ Nope, it’s not a joke, they really are turning The Matrix into a dance show.
+ Controversial take klaxon: are these really the best songs of the 90s?



The AI myth Western lawmakers get wrong

China just announced a new social credit law. Here’s what it means.


This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

While the US and the EU may differ on how to regulate tech, their lawmakers seem to agree on one thing: the West needs to ban AI-powered social scoring.

As they understand it, social scoring is a practice in which authoritarian governments—specifically China—rank people’s trustworthiness and punish them for undesirable behaviors, such as stealing or not paying back loans. Essentially, it’s seen as a dystopian superscore assigned to each citizen. 

The EU is currently negotiating a new law called the AI Act, which will ban member states, and maybe even private companies, from implementing such a system.

The trouble is, it’s “essentially banning thin air,” says Vincent Brussee, an analyst at the Mercator Institute for China Studies, a German think tank.

Back in 2014, China announced a six-year plan to build a system rewarding actions that build trust in society and penalizing the opposite. Eight years on, it’s only just released a draft law that tries to codify past social credit pilots and guide future implementation. 

There have been some contentious local experiments, such as one in the small city of Rongcheng in 2013, which gave every resident a starting personal credit score of 1,000 that could be raised or lowered depending on how their actions were judged. People are now able to opt out, and the local government has removed some controversial criteria.
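Mechanically, the Rongcheng scheme described above is just a points ledger. Here is a minimal sketch of that mechanism; the specific actions and point values are entirely hypothetical, since the source describes only the 1,000-point starting score, not a public rulebook.

```python
# Illustrative points-ledger sketch of a Rongcheng-style local credit score.
# Every resident starts at 1,000 points; recorded actions add or subtract.
# The action names and point values below are hypothetical examples.

START_SCORE = 1000

ADJUSTMENTS = {
    "volunteer_work": +5,        # hypothetical reward
    "traffic_violation": -5,     # hypothetical penalty
    "charitable_donation": +10,  # hypothetical reward
}

def updated_score(recorded_actions):
    """Apply a list of recorded actions to the starting score."""
    score = START_SCORE
    for action in recorded_actions:
        score += ADJUSTMENTS.get(action, 0)  # unknown actions change nothing
    return score
```

Notably, as the reporting below points out, the real-world version of this was low-tech: the "recorded actions" came from human information gatherers with pen and paper, not from automated surveillance.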

But these have not gained wider traction elsewhere and do not apply to the entire Chinese population. There is no countrywide, all-seeing social credit system with algorithms that rank people.

As my colleague Zeyi Yang explains, “the reality is, that terrifying system doesn’t exist, and the central government doesn’t seem to have much appetite to build it, either.” 

What has been implemented is mostly pretty low-tech. It’s a “mix of attempts to regulate the financial credit industry, enable government agencies to share data with each other, and promote state-sanctioned moral values,” Zeyi writes. 

Kendra Schaefer, a partner at Trivium China, a Beijing-based research consultancy, who compiled a report on the subject for the US government, couldn’t find a single case in which data collection in China led to automated sanctions without human intervention. The South China Morning Post found that in Rongcheng, human “information gatherers” would walk around town and write down people’s misbehavior using a pen and paper. 

The myth originates from a pilot program called Sesame Credit, developed by Chinese tech company Alibaba. This was an attempt to assess people’s creditworthiness using customer data at a time when the majority of Chinese people didn’t have a credit card, says Brussee. The effort became conflated with the social credit system as a whole in what Brussee describes as a “game of Chinese whispers.” And the misunderstanding took on a life of its own. 

The irony is that while US and European politicians depict this as a problem stemming from authoritarian regimes, systems that rank and penalize people are already in place in the West. Algorithms designed to automate decisions are being rolled out en masse and used to deny people housing, jobs, and basic services. 

For example, in Amsterdam, authorities have used an algorithm to rank young people from disadvantaged neighborhoods according to their likelihood of becoming criminals. They claim the aim is to prevent crime and to offer better, more targeted support.

But in reality, human rights groups argue, it has increased stigmatization and discrimination. The young people who end up on this list face more stops from police, home visits from authorities, and more stringent supervision from school and social workers.

It’s easy to take a stand against a dystopian algorithm that doesn’t really exist. But as lawmakers in both the EU and the US strive to build a shared understanding of AI governance, they would do better to look closer to home. Americans do not even have a federal privacy law that would offer some basic protections against algorithmic decision making. 

There is also a dire need for governments to conduct honest, thorough audits of the way authorities and companies use AI to make decisions about our lives. They might not like what they find—but that makes it all the more crucial for them to look.   

Deeper Learning

A bot that watched 70,000 hours of Minecraft could unlock AI’s next big thing

Research company OpenAI has built an AI that binged on 70,000 hours of videos of people playing Minecraft in order to play the game better than any AI before. It’s a breakthrough for a powerful technique called imitation learning, which could be used to train machines to carry out a wide range of tasks by watching humans do them first. It also suggests that sites like YouTube could be a vast and untapped source of training data.

Why it’s a big deal: Imitation learning can be used to train AI to control robot arms, drive cars, or navigate websites. Some people, such as Meta’s chief AI scientist, Yann LeCun, think that watching videos will eventually help us train an AI with human-level intelligence. Read Will Douglas Heaven’s story here.
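At its core, the imitation learning described above (often called behavior cloning in its simplest form) is supervised learning on recorded (observation, action) pairs. The toy sketch below uses a synthetic linear "expert" standing in for human gameplay footage; it illustrates the general technique only, not OpenAI's actual Minecraft pipeline.

```python
import numpy as np

# Toy behavior cloning: fit a policy to imitate an "expert" purely from
# recorded (observation, action) pairs. The expert here is a hypothetical
# fixed linear rule standing in for a human demonstrator.

rng = np.random.default_rng(0)

def expert_action(obs):
    # Hypothetical expert policy: a fixed linear map from observation to action.
    return obs @ np.array([0.5, -0.3])

# 1. Collect demonstrations (analogous to scraping hours of gameplay video).
observations = rng.normal(size=(500, 2))
actions = expert_action(observations)

# 2. Fit the cloned policy by least squares:
#    minimize ||observations @ weights - actions||^2 over weights.
weights, *_ = np.linalg.lstsq(observations, actions, rcond=None)

# 3. The cloned policy now reproduces the expert on unseen observations.
test_obs = rng.normal(size=(10, 2))
error = np.abs(test_obs @ weights - expert_action(test_obs)).max()
```

In the real Minecraft work the "observations" are video frames, the "actions" are keyboard and mouse inputs inferred from footage, and the policy is a large neural network rather than a linear map, but the training signal is the same: copy what the human did.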

Bits and Bytes

Meta’s game-playing AI can make and break alliances like a human

Diplomacy is a popular strategy game in which seven players compete for control of Europe by moving pieces around on a map. The game requires players to talk to each other and spot when others are bluffing. Meta’s new AI, called Cicero, managed to deceive human players in order to win.

It’s a big step forward toward AI that can help with complex problems, such as planning routes around busy traffic and negotiating contracts. But I’m not going to lie—it’s also an unnerving thought that an AI can so successfully deceive humans. (MIT Technology Review) 

We could run out of data to train AI language programs 

The trend of creating ever bigger AI models means we need even bigger data sets to train them. The trouble is, we might run out of suitable data by 2026, according to a paper by researchers from Epoch, an AI research and forecasting organization. This should prompt the AI community to come up with ways to do more with existing resources. (MIT Technology Review)

Stable Diffusion 2.0 is out

The open-source text-to-image AI Stable Diffusion has been given a big facelift, and its outputs are looking a lot sleeker and more realistic than before. It can even do hands. The pace of Stable Diffusion’s development is breathtaking. Its first version only launched in August. We are likely going to see even more progress in generative AI well into next year. 



Human creators stand to benefit as AI rewrites the rules of content creation

A game-changer for content creation

Among the AI-related technologies to have emerged in the past several years is generative AI—deep-learning algorithms that allow computers to generate original content, such as text, images, video, audio, and code. And demand for such content will likely jump in the coming years—Gartner predicts that by 2025, generative AI will account for 10% of all data created, compared with 1% in 2022. 

Screenshot of Jason Allen’s work “Théâtre D’opéra Spatial,” Discord 

“Théâtre D’opéra Spatial” is an example of AI-generated content (AIGC), created with the Midjourney text-to-art generator program. Several other AI-driven art-generating programs have also emerged in 2022, capable of creating paintings from single-line text prompts. The diversity of technologies reflects a wide range of artistic styles and different user demands. DALL-E 2 and Stable Diffusion, for instance, are focused mainly on western-style artwork, while Baidu’s ERNIE-ViLG and Wenxin Yige produce images influenced by Chinese aesthetics. At Baidu’s deep learning developer conference Wave Summit+ 2022, the company announced that Wenxin Yige has been updated with new features, including turning photos into AI-generated art, image editing, and one-click video production.

Meanwhile, AIGC can also include articles, videos, and various other media offerings such as voice synthesis. A technology that generates audible speech indistinguishable from the voice of the original speaker, voice synthesis can be applied in many scenarios, including voice navigation for digital maps. Baidu Maps, for example, allows users to customize its voice navigation to their own voice just by recording nine sentences.

Recent advances in AI technologies have also created generative language models that can fluently compose texts with just one click. They can be used for generating marketing copy, processing documents, extracting summaries, and other text tasks, unlocking creativity that other technologies such as voice synthesis have failed to tap. One of the leading generative language models is Baidu’s ERNIE 3.0, which has been widely applied in various industries such as health care, education, technology, and entertainment.

“In the past year, artificial intelligence has made a great leap and changed its technological direction,” says Robin Li, CEO of Baidu. “Artificial intelligence has gone from understanding pictures and text to generating content.” Going one step further, Baidu App, a popular search and newsfeed app with over 600 million monthly users, including five million content creators, recently released a video editing feature that can produce a short video accompanied by a voiceover created from data provided in an article.

Improving efficiency and growth

As AIGC becomes increasingly common, it could make content creation more efficient by getting rid of repetitive, time-intensive tasks for creators such as sorting out source assets and voice recordings and rendering images. Aspiring filmmakers, for instance, have long had to pay their dues by spending countless hours mastering the complex and tedious process of video editing. AIGC may soon make that unnecessary. 

Besides boosting efficiency, AIGC could also increase business growth in content creation amid rising demand for personalized digital content that users can interact with dynamically. InsightSLICE forecasts that the global digital creation market will on average grow 12% annually between 2020 and 2030 and hit $38.2 billion. With content consumption fast outpacing production, traditional development methods will likely struggle to meet such increasing demand, creating a gap that could be filled by AIGC. “AI has the potential to meet this massive demand for content at a tenth of the cost and a hundred times or thousands of times faster in the next decade,” Li says.
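The InsightSLICE forecast above can be sanity-checked with compound-growth arithmetic: 12% annual growth over the ten years from 2020 to 2030 multiplies the market by roughly 3.1, implying a 2020 baseline of about $12.3 billion for the projected $38.2 billion.

```python
# Compound growth implied by the InsightSLICE forecast: 12% average annual
# growth from 2020 to 2030, reaching $38.2 billion by the end of the decade.
rate = 0.12
years = 10
target_2030 = 38.2  # billions of dollars

growth_factor = (1 + rate) ** years         # roughly 3.1x over the decade
implied_2020 = target_2030 / growth_factor  # implied 2020 market size

print(f"growth factor: {growth_factor:.2f}x")
print(f"implied 2020 market: ${implied_2020:.1f}B")
```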

AI with humanity as its foundation

AIGC can also serve as an educational tool by helping children develop their creativity. StoryDrawer, for instance, is an AI-driven program designed to boost children’s creative thinking, which often declines as the focus in their education shifts to rote learning. 

The Download: the West’s AI myth, and Musk v Apple


While the US and the EU may differ on how to regulate tech, their lawmakers seem to agree on one thing: the West needs to ban AI-powered social scoring.

As they understand it, social scoring is a practice in which authoritarian governments—specifically China—rank people’s trustworthiness and punish them for undesirable behaviors, such as stealing or not paying back loans. Essentially, it’s seen as a dystopian superscore assigned to each citizen.

The reality? While there have been some contentious local experiments with social credit scores in China, there is no countrywide, all-seeing social credit system with algorithms that rank people.

The irony is that while US and European politicians try to ban systems that don’t really exist, systems that do rank and penalize people are already in place in the West—and are denying people housing and jobs in the process. Read the full story.

—Melissa Heikkilä

Melissa’s story is from The Algorithm, her weekly AI newsletter covering all of the industry’s most interesting developments. Sign up to receive it in your inbox every Monday.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Apple has reportedly threatened to pull Twitter from the App Store
According to Elon Musk. (NYT $)
+ Musk has threatened to “go to war” with the company after it decided to stop advertising on Twitter. (WP $)
+ Apple’s reluctance to advertise on Twitter right now isn’t exactly unique. (Motherboard)
+ Twitter’s child protection team in Asia has been gutted. (Wired $)

2 Another crypto firm has collapsed
Lender BlockFi has filed for bankruptcy, and is (partly) blaming FTX. (WSJ $)
+ The company is suing FTX founder Sam Bankman-Fried. (FT $)
+ It looks like the much-feared “crypto contagion” is spreading. (NYT $)

3 AI is rapidly becoming more powerful—and dangerous
That’s particularly worrying when its growth is too much for safety teams to handle. (Vox)
+ Do AI systems need to come with safety warnings? (MIT Technology Review)
+ This AI chat-room game is gaining a legion of fans. (The Guardian)

4 A Pegasus spyware investigation is in danger of being compromised 
It’s the target of a disinformation campaign, security experts have warned. (The Guardian)
+ Cyber insurance won’t protect you from theft of your data. (The Guardian)

5 Google gave the FBI geofence data for its January 6 investigation 
Google identified more than 5,000 devices near the US Capitol during the riot. (Wired $)

6 Monkeypox isn’t going anywhere
But it’s not on the rise, either. (The Atlantic $)
+ The World Health Organization says it will now be known as mpox. (BBC)
+ Everything you need to know about the monkeypox vaccines. (MIT Technology Review)

7 What it’s like to be the unwitting face of a romance scam
James Scott Geras’ pictures have been used to catfish countless women. (Motherboard)


Copyright © 2021 Seminole Press.