The Download: Algorithms’ shame trap, and London’s safer road crossings


This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

How algorithms trap us in a cycle of shame

Working in finance at the beginning of the 2008 financial crisis, mathematician Cathy O’Neil got a firsthand look at how much people trusted algorithms—and how much destruction they were causing. Disheartened, she moved to the tech industry, but encountered the same blind faith. After leaving, she wrote a book in 2016 that dismantled the idea that algorithms are objective. 

O’Neil showed how every algorithm is trained on historical data to recognize patterns, and how those algorithms break down in damaging ways. Algorithms designed to predict the chance of re-arrest, for example, can unfairly burden people, typically people of color, who are poor, live in the wrong neighborhood, or have untreated mental-health problems or addictions.

Over time, she came to realize another significant factor that was reinforcing these inequities: shame. Society has been shaming people for things they have no choice or voice in, such as weight or addiction problems, and weaponizing that humiliation. The next step, O’Neil recognized, was fighting back. Read the full story.

—Allison Arieff

London is experimenting with traffic lights that put pedestrians first

The news: For pedestrians, walking in a city can be like navigating an obstacle course. Transport for London, the public body behind transport services in the British capital, has been testing a new type of crossing designed to make getting around the busy streets safer and easier.

How does it work? Instead of waiting for the “green man” as a signal to cross the road, pedestrians will encounter green as the default setting when they approach one of 18 crossings around the city. The light changes to red only when the sensor detects an approaching vehicle—a first in the UK.
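
In control terms, the design simply inverts the usual default. Here is a minimal sketch of that logic in Python; the function names, polling interval, and phase length are hypothetical illustrations, not Transport for London's actual controller:

```python
import random
import time

def vehicle_approaching() -> bool:
    """Stand-in for the crossing's vehicle sensor (hypothetical)."""
    return random.random() < 0.2

def crossing_controller(steps: int = 20, vehicle_phase_s: float = 10.0) -> None:
    """Pedestrian-priority logic: the walk signal is green by default and
    turns red only while the sensor reports an approaching vehicle."""
    for _ in range(steps):
        if vehicle_approaching():
            print("pedestrian signal: RED (vehicle phase)")
            time.sleep(vehicle_phase_s)  # let the vehicle clear, then fall back to green
        else:
            print("pedestrian signal: GREEN (default)")
        time.sleep(1.0)  # sensor polling interval (arbitrary)

if __name__ == "__main__":
    crossing_controller(steps=5, vehicle_phase_s=1.0)
```

Real signal controllers add clearance times and fail-safes; the point of the sketch is only the inverted default.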

How’s it been received? After a nine-month trial, the data is encouraging: the crossings have virtually no impact on traffic, save pedestrians time, and make them 13% more likely to comply with traffic signals. Read the full story.

—Rachael Revesz

Check out these stories from our new Urbanism issue. You can read the full magazine for yourself and subscribe to get future editions delivered to your door for just $120 a year.

– How social media filters are helping people to explore their gender identity.
– The limitations of tree-planting as a way to mitigate climate change.

Podcast: Who watches the AI that watches students?

A boy wrote about his suicide attempt. He didn’t realize his school’s software was watching. While schools commonly use AI to sift through students’ digital lives and flag keywords that may be considered concerning, critics ask: at what cost to privacy? We delve into this story, and the wider world of school surveillance, in the latest episode of our award-winning podcast, In Machines We Trust.

Check it out here.

ICYMI: Our TR35 list of innovators for 2022

In case you missed it yesterday, our annual TR35 list of the most exciting young minds aged 35 and under is now out! Read it online here or subscribe to read about them in the print edition of our new Urbanism issue here.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 There’s now a crazy patchwork of abortion laws in the US
Overturning Roe has triggered a legal quagmire—including some abortion laws that contradict others within the same state. (FT $)
+ Protestors are doxxing the Supreme Court on TikTok. (Motherboard)
+ Planned Parenthood’s abortion scheduling tool could share data. (WP $)
+ Here’s the kind of data state authorities could try to use to prosecute. (WSJ $)
+ Tech firms need to be transparent about what they’re asked to share. (WP $)
+ Here’s what people in the trigger states are Googling. (Vox)

2 Chinese students were lured into spying for Beijing
The recent graduates were tasked with translating hacked documents. (FT $)
+ The FBI accused him of spying for China. It ruined his life. (MIT Technology Review)

3 Why it’s time to adjust our expectations of AI
Researchers are getting fed up with the hype. (WSJ $)
+ Meta still wants to build intelligent machines that learn like humans, though. (IEEE Spectrum)
+ Yann LeCun has a bold new vision for the future of AI. (MIT Technology Review)
+ Understanding how the brain’s neurons really work will aid better AI models. (Economist $)

4 Bitcoin is facing its biggest drop in more than 10 years
The age of freewheeling growth really is coming to an end. (Bloomberg $)
+ The crash threatens the millions in funds stolen by North Korea. (Reuters)
+ The cryptoapocalypse could worsen before it levels out. (The Guardian)
+ The EU is one step closer to regulating crypto. (Reuters)

5 Singapore’s new online safety laws are a thinly veiled power grab
Empowering its authoritarian government to exert even greater control over civilians. (Rest of World)

6 Recommendation algorithms require effort to work properly
Telling them what you like makes it more likely they’ll present you with decent suggestions. (The Verge)

7 China’s on a mission to find an Earth-like planet
But what they’ll find is anyone’s guess. (Motherboard)
+ The ESA’s Gaia probe is shining a light on what’s floating in the Milky Way. (Wired $) 

8 Inside YouTube’s meta world of video critique
Video creators analyzing other video creators makes for compelling watching. (NYT $)
+ Long-form videos are helping creators to stave off creative burnout. (NBC)

9 Time-pressed daters are vetting potential suitors over video chat
To get the lay of the land before committing to an IRL meet-up. (The Atlantic $)

10 How fandoms shaped the internet
For better—and for worse. (New Yorker $)

Quote of the day

“This is no mere monkey business.”

—A lawsuit filed by Yuga Labs, the creators of the Bored Ape NFT collection, against conceptual artist Ryder Ripps claims Ripps copied their distinctive simian artwork, Gizmodo reports.

The big story

This restaurant duo wants a zero-carbon food system. Can it happen?

September 2020

When Karen Leibowitz and Anthony Myint opened The Perennial, the most ambitious and expensive restaurant of their careers, they had a grand vision: they wanted it to be completely carbon-neutral. Their “laboratory of environmentalism in the food world” opened in San Francisco in January 2016, and its pièce de résistance was serving meat with a dramatically lower carbon footprint than normal. 

Myint and Leibowitz realized they were on to something much bigger—and that the easiest, most practical way to tackle global warming might be through food. But they also realized that what has been called the “country’s most sustainable restaurant” couldn’t fix the broken system by itself. So in early 2019, they dared themselves to do something else that nobody expected. They shut The Perennial down. Read the full story.

—Clint Rainey

We can still have nice things

A place for comfort, fun and distraction in these weird times. (Got any ideas? Drop me a line or tweet ’em at me.)

+ A look inside the UK’s blossoming trainspotting scene (don’t worry, it’s nothing to do with the Irvine Welsh novel of the same name.)
+ This is the very definition of a burn.
+ A solid science joke.
+ This amusing Twitter account compiles some of the strangest public Spotify playlists out there. (Shout-out to Rappers With Memory Problems.)
+ Have you been lucky enough to see any of these weird and wonderful buildings in person?




Uber’s facial recognition is locking Indian drivers out of their accounts 



Uber checks that a driver’s face matches what the company has on file through a program called “Real-Time ID Check.” It was rolled out in the US in 2016, in India in 2017, and then in other markets. “This prevents fraud and protects drivers’ accounts from being compromised. It also protects riders by building another layer of accountability into the app to ensure the right person is behind the wheel,” Joe Sullivan, Uber’s chief security officer, said in a statement in 2017.

But the company’s driver verification procedures are far from seamless. Adnan Taqi, an Uber driver in Mumbai, ran into trouble with it when the app prompted him to take a selfie around dusk. He was locked out for 48 hours, a big dent in his work schedule—he says he drives 18 hours straight, sometimes as much as 24 hours, to be able to make a living. Days later, he took a selfie that locked him out of his account again, this time for a whole week. That time, Taqi suspects, it came down to hair: “I hadn’t shaved for a few days and my hair had also grown out a bit,” he says. 

More than a dozen drivers interviewed for this story detailed instances of having to find better lighting to avoid being locked out of their Uber accounts. “Whenever Uber asks for a selfie in the evenings or at night, I’ve had to pull over and go under a streetlight to click a clear picture—otherwise there are chances of getting rejected,” said Santosh Kumar, an Uber driver from Hyderabad. 

Others have struggled with scratches on their cameras and low-budget smartphones. The problem isn’t unique to Uber. Drivers with Ola, which is backed by SoftBank, face similar issues. 

Some of these struggles can be explained by natural limitations in face recognition technology. The software starts by converting your face into a set of points, explains Jernej Kavka, an independent technology consultant with access to Microsoft’s Face API, which is what Uber uses to power Real-Time ID Check. 

[Photo: Adnan Taqi holds up his phone in the driver’s seat of his car. Variations in lighting and facial hair have likely caused him to lose access to the app. Credit: Selvaprakash Lakshmanan]

“With excessive facial hair, the points change and it may not recognize where the chin is,” Kavka says. The same thing happens when there is low lighting or the phone’s camera doesn’t have a good contrast. “This makes it difficult for the computer to detect edges,” he explains.
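
Uber’s production system is built on Microsoft’s Face API, which can’t be inspected here, but the failure modes Kavka describes can be reproduced with any off-the-shelf detector. Below is a rough sketch using OpenCV’s bundled Haar-cascade face detector; the contrast threshold and the file path are arbitrary assumptions for illustration, not values from Uber’s pipeline:

```python
import cv2

# OpenCV's bundled Haar-cascade detector is a generic stand-in here,
# not the Microsoft Face API that Uber actually uses.
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def check_selfie(path: str, min_contrast: float = 25.0) -> str:
    """Illustrates the two failure modes described above: low contrast
    (edges are hard to find) and no detectable face at all."""
    image = cv2.imread(path)
    if image is None:
        return "rejected: could not read image"
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    if gray.std() < min_contrast:  # crude contrast proxy; threshold is arbitrary
        return "rejected: too dark or low-contrast for edge detection"
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return "rejected: no face detected (lighting, angle, occlusion)"
    return "accepted: face found"

print(check_selfie("selfie.jpg"))  # "selfie.jpg" is a placeholder path
```

A check like this fails before any identity matching even begins, which is consistent with drivers finding that a streetlight or a shave changes the outcome.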

But the software may be especially brittle in India. In December 2021, tech policy researchers Smriti Parsheera (a fellow with the CyberBRICS project) and Gaurav Jain (an economist with the International Finance Corporation) posted a preprint paper that audited four commercial facial processing tools—Amazon’s Rekognition, Microsoft Azure’s Face, Face++, and FaceX—for their performance on Indian faces. When the software was applied to a database of 32,184 election candidates, Microsoft’s Face failed to even detect the presence of a face in more than 1,000 images—an error rate of more than 3%, the worst among the four.

It could be that the Uber app is failing drivers because its software was not trained on a diverse range of Indian faces, Parsheera says. But she says there may be other issues at play as well. “There could be a number of other contributing factors like lighting, angle, effects of aging, etc.,” she wrote. “But the lack of transparency surrounding the use of such systems makes it hard to provide a more concrete explanation.”



The Download: Uber’s flawed facial recognition, and police drones



One evening in February last year, a 23-year-old Uber driver named Niradi Srikanth was getting ready to start another shift, ferrying passengers around the south Indian city of Hyderabad. He pointed the phone at his face to take a selfie to verify his identity. The process usually worked seamlessly. But this time he was unable to log in.

Srikanth suspected it was because he had recently shaved his head. After further attempts to log in were rejected, Uber informed him that his account had been blocked. He is not alone. In a survey of 150 Uber drivers in India conducted by MIT Technology Review, almost half had been either temporarily or permanently locked out of their accounts because of problems with their selfies.

Hundreds of thousands of India’s gig economy workers are at the mercy of facial recognition technology, with few legal, policy or regulatory protections. For workers like Srikanth, getting blocked from or kicked off a platform can have devastating consequences. Read the full story.

—Varsha Bansal

I met a police drone in VR—and hated it

Police departments across the world are embracing drones, deploying them for everything from surveillance and intelligence gathering to chasing criminals. Yet none of them seem to be trying to find out how encounters with drones leave people feeling—or whether the technology will help or hinder policing work.

A team from University College London and the London School of Economics is filling in the gaps, studying how people react when meeting police drones in virtual reality, and whether they come away feeling more or less trusting of the police. 

MIT Technology Review’s Melissa Heikkilä came away from her encounter with a VR police drone feeling unnerved. If others feel the same way, the big question is whether these drones are effective tools for policing in the first place. Read the full story.

Melissa’s story is from The Algorithm, her weekly newsletter covering AI and its effects on society. Sign up to receive it in your inbox every Monday.



I met a police drone in VR—and hated it



The research is important because police departments are racing ahead and starting to use drones anyway, for everything from surveillance and intelligence gathering to chasing criminals.

Last week, San Francisco approved the use of robots, including drones that can kill people in certain emergencies, such as when dealing with a mass shooter. In the UK, most police drones have thermal cameras that can be used to detect how many people are inside houses, says Pósch, one of the researchers. This has been used for all sorts of things: catching human traffickers or rogue landlords, and even targeting people holding suspected parties during covid-19 lockdowns.

Virtual reality will let the researchers test the technology in a controlled, safe way among lots of test subjects, Pósch says.

Even though I knew I was in a VR environment, I found the encounter with the drone unnerving. My opinion of these drones did not improve, even though I’d met a supposedly polite, human-operated one (there are even more aggressive modes for the experiment, which I did not experience).

Ultimately, it may not make much difference whether drones are “polite” or “rude,” says Christian Enemark, a professor at the University of Southampton who specializes in the ethics of war and drones and is not involved in the research. That’s because the use of drones itself is a “reminder that the police are not here, whether they’re not bothering to be here or they’re too afraid to be here,” he says.

“So maybe there’s something fundamentally disrespectful about any encounter.”

Deeper Learning

GPT-4 is coming, but OpenAI is still fixing GPT-3

The internet is abuzz with excitement about AI lab OpenAI’s latest iteration of its famous large language model, GPT-3. The latest demo, ChatGPT, answers people’s questions via back-and-forth dialogue. Since its launch last Wednesday, the demo has passed 1 million users. Read Will Douglas Heaven’s story here.

GPT-3 is a confident bullshitter and can easily be prompted to say toxic things. OpenAI says it has fixed a lot of these problems with ChatGPT, which answers follow-up questions, admits its mistakes, challenges incorrect premises, and rejects inappropriate requests. It even refuses to answer some questions, such as how to be evil, or how to break into someone’s house. 



