The Algorithm: AI-generated art raises tricky questions about ethics, copyright, and security

Thanks to his distinctive style, the Polish digital artist Greg Rutkowski is now one of the most commonly used prompts in the new open-source AI art generator Stable Diffusion, which launched late last month. His name, invoked around 93,000 times, is a far more popular prompt than those of some of the world’s most famous artists, like Picasso.
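
For readers unfamiliar with what “using a name as a prompt” looks like in practice, here is a minimal sketch using Hugging Face’s open-source diffusers library. The model ID points at the original open-source release; the prompt wording is illustrative, not taken from the story, and a GPU is assumed.

```python
# Minimal sketch: generating an image from a text prompt with Stable Diffusion
# via the open-source diffusers library. The prompt appends an artist's name
# as a style cue, which is the practice described above.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",  # the original open-source release
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")  # assumes a CUDA-capable GPU

prompt = "a dragon over a burning castle, fantasy, in the style of Greg Rutkowski"
image = pipe(prompt).images[0]
image.save("dragon.png")
```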

But he’s not happy about it. He thinks it could threaten his livelihood—and he was never given the choice of whether to opt in or out of having his work used this way. 

The story is yet another example of AI developers rushing to roll out something cool without thinking about the humans who will be affected by it. 

Stable Diffusion is free for anyone to use, providing a great resource for AI developers who want to use a powerful model to build products. But because these open-source programs are built by scraping images from the internet, often without permission and proper attribution to artists, they are raising tricky questions about ethics, copyright, and security. 

Artists like Rutkowski have had enough. It’s still early days, but a growing coalition of artists is figuring out how to tackle the problem. In the future, we might see the art sector shift toward pay-per-play or subscription models like those used in the film and music industries. If you’re curious and want to learn more, read my story.

And it’s not just artists: We should all be concerned about what’s included in the training data sets of AI models, especially as these technologies become a more crucial part of the internet’s infrastructure.

In a paper that came out last year, AI researchers Abeba Birhane, Vinay Uday Prabhu, and Emmanuel Kahembwe analyzed a smaller data set similar to the one used to build Stable Diffusion. Their findings are distressing. Because the data is scraped from the internet, and the internet is a horrible place, the data set is filled with explicit rape images, pornography, malign stereotypes, and racist and ethnic slurs. 

A website called Have I Been Trained lets people search for images used to train the latest batch of popular AI art models. Even innocent search terms get lots of disturbing results. I tried searching the database for my ethnicity, and all I got back was porn. Lots of porn. It’s a depressing thought that the only thing the AI seems to associate with the word “Asian” is naked East Asian women. 
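
Have I Been Trained is a web tool, but the underlying idea, keyword search over the scraped image-caption pairs, is simple to sketch. Below is a hypothetical example assuming the caption metadata has been exported to a local Parquet file; the file name and column names are placeholders, not the site’s actual implementation.

```python
# Hypothetical sketch of searching scraped training data by caption keyword.
# "captions.parquet", "caption", and "url" are placeholder names.
import pandas as pd

df = pd.read_parquet("captions.parquet")  # columns: url, caption

term = "asian"
hits = df[df["caption"].str.contains(term, case=False, na=False)]
print(f"{len(hits)} of {len(df)} captions mention {term!r}")
print(hits[["url", "caption"]].head(10).to_string(index=False))
```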

Not everyone sees this as a problem for the AI sector to fix. Emad Mostaque, the founder of Stability.AI, which built Stable Diffusion, said on Twitter that he considers the ethics debate around these models “paternalistic silliness that doesn’t trust people or society.”

But there’s a big safety question. Free open-source models like Stable Diffusion and the large language model BLOOM give malicious actors tools to generate harmful content at scale with minimal resources, argues Abhishek Gupta, the founder of the Montreal AI Ethics Institute and a responsible-AI expert at Boston Consulting Group.

The sheer scale of the havoc these systems enable will blunt the effectiveness of traditional controls, such as capping how many images a person can generate or blocking dodgy content from being generated, Gupta says. Think deepfakes or disinformation on steroids. When a powerful AI system “gets into the wild,” Gupta says, “that can cause real trauma … for example, by creating objectionable content in [someone’s] likeness.”
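
The controls Gupta mentions are easy to build for a hosted service; his point is that they stop mattering once the model weights are freely downloadable. A minimal sketch of one such control, a per-user daily generation cap, follows; all names and the limit value are hypothetical.

```python
# Sketch of a per-user daily generation cap, the kind of control a hosted
# image-generation API can enforce but a downloadable open-source model cannot.
import time
from collections import defaultdict

DAILY_LIMIT = 50  # hypothetical cap on images per user per day

_usage: dict[str, list[float]] = defaultdict(list)

def allow_generation(user_id: str, now: float | None = None) -> bool:
    """Return True if the user is still under today's generation cap."""
    now = time.time() if now is None else now
    day_ago = now - 24 * 3600
    # Drop requests older than 24 hours, then check the cap.
    _usage[user_id] = [t for t in _usage[user_id] if t > day_ago]
    if len(_usage[user_id]) >= DAILY_LIMIT:
        return False
    _usage[user_id].append(now)
    return True
```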

We can’t put the cat back in the bag, so we really ought to be thinking about how to deal with these AI models in the wild, Gupta says. This includes monitoring how the AI systems are used after they have been launched, and thinking about controls that “can minimize harms even in worst-case scenarios.” 

Deeper Learning

There’s no Tiananmen Square in the new Chinese image-making AI



The Download: metaverse fashion, and looser covid rules in China


Fashion creator Jenni Svoboda is designing a beanie with a melted cupcake top, sprinkles, and doughnuts for ears. But this outlandish accessory isn’t destined for the physical world—Svoboda is designing for the metaverse. She’s working in a burgeoning, if bizarre, new niche: fashion stylists who create or curate outfits for people in virtual spaces.

Metaverse stylists are increasingly sought-after as frequent users seek help dressing their avatars—often in experimental, wildly creative looks that defy personal expectations, societal standards, and sometimes even physics. 

Stylists like Svoboda are among those shaping the metaverse fashion industry, which is already generating hundreds of millions of dollars. But while it can seem outlandish, even obscene, to the casual observer to spend so much money on virtual clothes, there are deeper, more personal reasons why people are hiring professionals to curate their virtual outfits. Read the full story.

—Tanya Basu

Making sense of the changes to China’s zero-covid policy

On December 1, 2019, the first known covid-19 patient started showing symptoms in Wuhan. Three years later, China is the last country in the world holding on to strict pandemic control restrictions. However, after days of intense protests that shocked the world, it looks as if things could finally change.

Beijing has just announced wide-ranging relaxations of its zero-covid policy, including allowing people to quarantine at home instead of in special facilities for the first time.

Uber’s facial recognition is locking Indian drivers out of their accounts 


Uber checks that a driver’s face matches what the company has on file through a program called “Real-Time ID Check.” It was rolled out in the US in 2016, in India in 2017, and then in other markets. “This prevents fraud and protects drivers’ accounts from being compromised. It also protects riders by building another layer of accountability into the app to ensure the right person is behind the wheel,” Joe Sullivan, Uber’s chief security officer, said in a statement in 2017.

But the company’s driver verification procedures are far from seamless. Adnan Taqi, an Uber driver in Mumbai, ran into trouble with it when the app prompted him to take a selfie around dusk. He was locked out for 48 hours, a big dent in his work schedule—he says he drives 18 hours straight, sometimes as much as 24 hours, to be able to make a living. Days later, he took a selfie that locked him out of his account again, this time for a whole week. That time, Taqi suspects, it came down to hair: “I hadn’t shaved for a few days and my hair had also grown out a bit,” he says. 

More than a dozen drivers interviewed for this story detailed instances of having to find better lighting to avoid being locked out of their Uber accounts. “Whenever Uber asks for a selfie in the evenings or at night, I’ve had to pull over and go under a streetlight to click a clear picture—otherwise there are chances of getting rejected,” said Santosh Kumar, an Uber driver from Hyderabad. 

Others have struggled with scratches on their cameras and low-budget smartphones. The problem isn’t unique to Uber. Drivers with Ola, which is backed by SoftBank, face similar issues. 

Some of these struggles can be explained by natural limitations in face recognition technology. The software starts by converting your face into a set of points, explains Jernej Kavka, an independent technology consultant with access to Microsoft’s Face API, which is what Uber uses to power Real-Time ID Check. 
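
Since the story says Real-Time ID Check is powered by Microsoft’s Face API, a selfie check of this kind reduces to two REST calls per Microsoft’s public v1.0 documentation: detect a face in the fresh selfie to get a transient face ID, then verify it against a face on file. A rough sketch follows; the endpoint region, key, and stored reference ID are placeholders, and Uber’s actual pipeline is not public.

```python
# Rough sketch of a face-verification round trip against Microsoft's Face API
# (v1.0). ENDPOINT, KEY, and REFERENCE_FACE_ID are placeholders; Uber's real
# pipeline is not public.
import requests

ENDPOINT = "https://YOUR_REGION.api.cognitive.microsoft.com"  # placeholder
KEY = "YOUR_KEY"                                              # placeholder
REFERENCE_FACE_ID = "stored-face-id-on-file"                  # placeholder

headers = {"Ocp-Apim-Subscription-Key": KEY,
           "Content-Type": "application/octet-stream"}

# Step 1: detect a face in the fresh selfie and get a transient face ID.
with open("selfie.jpg", "rb") as f:
    detect = requests.post(f"{ENDPOINT}/face/v1.0/detect",
                           params={"returnFaceId": "true"},
                           headers=headers, data=f.read()).json()
if not detect:
    raise SystemExit("No face detected, e.g. low light or a scratched lens.")

# Step 2: verify the detected face against the face on file.
verify = requests.post(f"{ENDPOINT}/face/v1.0/verify",
                       headers={"Ocp-Apim-Subscription-Key": KEY},
                       json={"faceId1": detect[0]["faceId"],
                             "faceId2": REFERENCE_FACE_ID}).json()
print(verify)  # e.g. {"isIdentical": true, "confidence": 0.82}
```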

Adnan Taqi holds up his phone in the driver’s seat of his car. Variations in lighting and facial hair have likely caused him to lose access to the app. (Photo: Selvaprakash Lakshmanan)

“With excessive facial hair, the points change and it may not recognize where the chin is,” Kavka says. The same thing happens when there is low lighting or the phone’s camera doesn’t have a good contrast. “This makes it difficult for the computer to detect edges,” he explains.
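
Kavka’s point about lighting and contrast can be made concrete: brightness and contrast are cheap to measure on the device before a selfie is ever uploaded, which is one way an app could warn a driver to find a streetlight first. Here is a sketch using OpenCV; the thresholds are arbitrary illustrations, not values any vendor publishes.

```python
# Sketch of a client-side selfie quality pre-check, measuring brightness and
# contrast with OpenCV before upload. Thresholds are arbitrary illustrations.
import cv2

MIN_BRIGHTNESS = 60.0   # mean gray level, 0-255 (arbitrary threshold)
MIN_CONTRAST = 30.0     # gray-level standard deviation (arbitrary threshold)

def selfie_quality_ok(path: str) -> bool:
    """Reject obviously dark or flat images before sending them for verification."""
    img = cv2.imread(path)
    if img is None:
        return False  # unreadable file
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    brightness = float(gray.mean())  # low at night without a streetlight
    contrast = float(gray.std())     # low when edges are hard to detect
    return brightness >= MIN_BRIGHTNESS and contrast >= MIN_CONTRAST

if not selfie_quality_ok("selfie.jpg"):
    print("Too dark or low-contrast; try again under better lighting.")
```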

But the software may be especially brittle in India. In December 2021, tech policy researchers Smriti Parsheera (a fellow with the CyberBRICS project) and Gaurav Jain (an economist with the International Finance Corporation) posted a preprint paper auditing four commercial facial processing tools—Amazon’s Rekognition, Microsoft Azure’s Face, Face++, and FaceX—for their performance on Indian faces. When the software was applied to a database of 32,184 election candidates, Microsoft’s Face failed to even detect the presence of a face in more than 1,000 images, an error rate of more than 3%—the worst among the four.

It could be that the Uber app is failing drivers because its software was not trained on a diverse enough range of Indian faces, Parsheera says. But she notes there may be other issues at play as well. “There could be a number of other contributing factors like lighting, angle, effects of aging, etc.,” she wrote. “But the lack of transparency surrounding the use of such systems makes it hard to provide a more concrete explanation.”

The Download: Uber’s flawed facial recognition, and police drones


One evening in February last year, a 23-year-old Uber driver named Niradi Srikanth was getting ready to start another shift, ferrying passengers around the south Indian city of Hyderabad. He pointed the phone at his face to take a selfie to verify his identity. The process usually worked seamlessly. But this time he was unable to log in.

Srikanth suspected it was because he had recently shaved his head. After further attempts to log in were rejected, Uber informed him that his account had been blocked. He is not alone. In an MIT Technology Review survey of 150 Uber drivers in the country, almost half said they had been either temporarily or permanently locked out of their accounts because of problems with their selfies.

Hundreds of thousands of India’s gig economy workers are at the mercy of facial recognition technology, with few legal, policy, or regulatory protections. For workers like Srikanth, getting blocked from or kicked off a platform can have devastating consequences. Read the full story.

—Varsha Bansal

I met a police drone in VR—and hated it

Police departments across the world are embracing drones, deploying them for everything from surveillance and intelligence gathering to chasing criminals. Yet none of them seem to be trying to find out how encounters with drones leave people feeling—or whether the technology will help or hinder policing work.

A team from University College London and the London School of Economics is filling in the gaps, studying how people react when meeting police drones in virtual reality, and whether they come away feeling more or less trusting of the police. 

MIT Technology Review’s Melissa Heikkilä came away from her encounter with a VR police drone feeling unnerved. If others feel the same way, the big question is whether these drones are effective tools for policing in the first place. Read the full story.

Melissa’s story is from The Algorithm, her weekly newsletter covering AI and its effects on society. Sign up to receive it in your inbox every Monday.
