This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.
How the covid pop-up window is wreaking havoc on daily life in China
In 2020, China rolled out a contact tracing program that assigns a QR code to everyone in the country. It shows your covid status and allows you to enter public venues or take public transportation. Part of China’s stringent zero-covid policy, the system has persisted, and some of the once-lauded features that kept deaths comparatively low in the country now feel more burdensome than beneficial to its citizens.
For example, the more than 20 million people who live in or visit Beijing are now plagued by a pop-up window that can appear on their phones at random and disrupt all their plans. The persistent pop-up masks the QR code, blocking access to just about everywhere in China, and won’t go away unless the user immediately takes a PCR test.
The problem is that, despite being touted as a high-tech pandemic solution, the app’s risk-identifying mechanism tends to cast a wider-than-necessary net. No one knows why they receive the pop-up window or when it will appear, and there’s no way to prepare for it. Read the full story.
This story is from China Report, our new weekly newsletter getting you up to speed on everything that’s happening in China. Sign up to receive it in your inbox every Tuesday.
Podcast: I Was There When AI Mastered Chess
In the late 1990s, IBM’s Deep Blue computer beat Garry Kasparov—the reigning world champion of chess. It paved the way for a revolution in automation. In the latest episode of MIT Technology Review’s In Machines We Trust podcast, we meet Kasparov and hear the battle with Deep Blue told from his side of the chessboard. Listen to it on Apple Podcasts, or wherever else you usually listen.
I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.
1 Elon Musk’s deal to buy Twitter appears to be back on
The billionaire has offered to complete the deal at the originally proposed price, potentially as soon as this week. (NYT $)
+ A successful deal would make Musk’s to-do list even longer. (WSJ $)
+ It’s probably no coincidence this happened days before the court case. (FT $)
+ Twitter could end up folded into a superapp called ‘X’. (Bloomberg $)
2 It’s not looking good for financial markets
Inflation in the US appears to be on track to slow, but at what price? (Economist $)
+ The UN has accused rich nations of risking a recession that would harm the developing world. (The Guardian)
3 The dearth of Uber drivers is over
It follows two years of global driver shortages. (FT $)
5 Adderall users are considering switching drugs
Pharmacies can’t keep up with the steep demand for it, and patients are suffering. (Motherboard)
6 How Ukraine’s tech workers built a new normal
Many displaced employees carried on working from other countries. Now, they’re returning home. (Rest of World)
+ It’s tough for displaced Ukrainians to prove they own their homes. (Slate $)
+ Russia is increasingly relying on its private army of mercenaries. (LA Times)
7 The dream of a decentralized web
Advocates for DWeb are resigned to fighting an uphill battle when there aren’t vast amounts of money to be made. (The Atlantic $)
+ A big tech company is working to free the internet from big tech companies. (MIT Technology Review)
8 Here’s what quantum computing could do for us
But putting the theory into practice is the biggest challenge. (Vox)
+ What are quantum-resistant algorithms—and why do we need them? (MIT Technology Review)
9 YouTube was never neutral
Its powerful recommendation algorithm shaped the attention economy as we know it. (New Yorker $)
+ Hated that video? YouTube’s algorithm might push you another just like it. (MIT Technology Review)
10 America’s chess grandmaster may have cheated over 100 times
The plot thickens! (WSJ $)
Quote of the day
“Games tell us about the stories we want to tell about conflict.”
—Ian Kikuchi, co-curator at a new exhibition exploring war in video games, tells the Financial Times how games can rewrite the history of war by exaggerating the role of the individual.
The big story
The Atlantic’s vital currents could collapse. Scientists are racing to understand the dangers.
Scientists and technicians are searching for clues about one of the most important forces in the planet’s climate system: a network of ocean currents known as the Atlantic Meridional Overturning Circulation (AMOC). Critically, they want to better understand how global warming is changing it, and how much more it could shift in the coming decades—even whether it could collapse.
The problem is the Atlantic circulation seems to be weakening, transporting less water and heat. Because of climate change, melting ice sheets are pouring fresh water into the ocean at the higher latitudes, and the surface waters are retaining more of their heat. Warmer and fresher waters are less dense and thus not as prone to sink, which may be undermining one of the currents’ core driving forces. Read the full story.
We can still have nice things
+ Is there anything more iconic than The Matrix’s green code? I don’t think so.
+ How big is infinity, really? Answers on a postcard.
+ These Pokémon town cardboard models are super cute.
+ Optical illusions are guaranteed to get your head in a spin.
+ There’s some real domestic falcon drama going down in Melbourne (thanks Kirsten!).
Uber’s facial recognition is locking Indian drivers out of their accounts
Uber checks that a driver’s face matches what the company has on file through a program called “Real-Time ID Check.” It was rolled out in the US in 2016, in India in 2017, and then in other markets. “This prevents fraud and protects drivers’ accounts from being compromised. It also protects riders by building another layer of accountability into the app to ensure the right person is behind the wheel,” Joe Sullivan, Uber’s chief security officer, said in a statement in 2017.
But the company’s driver verification procedures are far from seamless. Adnan Taqi, an Uber driver in Mumbai, ran into trouble with it when the app prompted him to take a selfie around dusk. He was locked out for 48 hours, a big dent in his work schedule—he says he drives 18 hours straight, sometimes as much as 24 hours, to be able to make a living. Days later, he took a selfie that locked him out of his account again, this time for a whole week. That time, Taqi suspects, it came down to hair: “I hadn’t shaved for a few days and my hair had also grown out a bit,” he says.
More than a dozen drivers interviewed for this story detailed instances of having to find better lighting to avoid being locked out of their Uber accounts. “Whenever Uber asks for a selfie in the evenings or at night, I’ve had to pull over and go under a streetlight to click a clear picture—otherwise there are chances of getting rejected,” said Santosh Kumar, an Uber driver from Hyderabad.
Others have struggled with scratches on their cameras and low-budget smartphones. The problem isn’t unique to Uber. Drivers with Ola, which is backed by SoftBank, face similar issues.
Some of these struggles can be explained by natural limitations in face recognition technology. The software starts by converting your face into a set of points, explains Jernej Kavka, an independent technology consultant with access to Microsoft’s Face API, which is what Uber uses to power Real-Time ID Check.
“With excessive facial hair, the points change and it may not recognize where the chin is,” Kavka says. The same thing happens when there is low lighting or the phone’s camera doesn’t have a good contrast. “This makes it difficult for the computer to detect edges,” he explains.
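Kavka’s point about contrast and edges can be illustrated with a toy sketch. This is purely illustrative, pure-Python code, not Uber’s or Microsoft’s actual pipeline; the threshold and synthetic image are made up for the example:

```python
def count_strong_edges(img, threshold=50):
    """Count horizontal intensity jumps above the threshold -- a crude
    stand-in for the edge-detection step Kavka describes."""
    return sum(
        1
        for row in img
        for a, b in zip(row, row[1:])
        if abs(a - b) > threshold
    )

# Synthetic "face": a bright 40x40 region on a dark 100x100 background.
img = [[200 if 30 <= r < 70 and 30 <= c < 70 else 0 for c in range(100)]
       for r in range(100)]

# Simulate dusk lighting by compressing the pixel range (less contrast).
low_contrast = [[p // 10 + 20 for p in row] for row in img]

print(count_strong_edges(img))           # well-lit: 80 edge pixels found
print(count_strong_edges(low_contrast))  # low contrast: 0 -- edges vanish
```

The same intensity boundary is present in both images, but once the pixel range is compressed, the jumps fall below the detection threshold—mirroring why selfies taken at dusk or on low-contrast cameras get rejected.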
But the software may be especially brittle in India. In December 2021, tech policy researchers Smriti Parsheera (a fellow with the CyberBRICS project) and Gaurav Jain (an economist with the International Finance Corporation) posted a preprint paper that audited four commercial facial processing tools—Amazon’s Rekognition, Microsoft Azure’s Face, Face++, and FaceX—for their performance on Indian faces. When the software was applied to a database of 32,184 election candidates, Microsoft’s Face failed to even detect the presence of a face in more than 1,000 images, yielding an error rate of more than 3%—the worst among the four.
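As a quick sanity check, the figures above are consistent: treating “more than 1,000” failed detections as a lower bound over the 32,184-image database gives an error rate just above 3%:

```python
# Lower-bound detection error rate for the audit described above.
total_images = 32_184        # election-candidate database size
failed_detections = 1_000    # "more than 1,000" images -- lower bound

error_rate = failed_detections / total_images
print(f"{error_rate:.1%}")   # -> 3.1%, consistent with "more than 3%"
```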
It could be that the Uber app is failing drivers because its software was not trained on a diverse range of Indian faces, Parsheera says. But she says there may be other issues at play as well. “There could be a number of other contributing factors like lighting, angle, effects of aging, etc.,” she explained in writing. “But the lack of transparency surrounding the use of such systems makes it hard to provide a more concrete explanation.”
The Download: Uber’s flawed facial recognition, and police drones
One evening in February last year, a 23-year-old Uber driver named Niradi Srikanth was getting ready to start another shift, ferrying passengers around the south Indian city of Hyderabad. He pointed the phone at his face to take a selfie to verify his identity. The process usually worked seamlessly. But this time he was unable to log in.
Srikanth suspected it was because he had recently shaved his head. After further attempts to log in were rejected, Uber informed him that his account had been blocked. He is not alone. In a survey conducted by MIT Technology Review of 150 Uber drivers in the country, almost half had been either temporarily or permanently locked out of their accounts because of problems with their selfies.
Hundreds of thousands of India’s gig economy workers are at the mercy of facial recognition technology, with few legal, policy or regulatory protections. For workers like Srikanth, getting blocked from or kicked off a platform can have devastating consequences. Read the full story.
I met a police drone in VR—and hated it
Police departments across the world are embracing drones, deploying them for everything from surveillance and intelligence gathering to even chasing criminals. Yet none of them seem to be trying to find out how encounters with drones leave people feeling—or whether the technology will help or hinder policing work.
A team from University College London and the London School of Economics is filling in the gaps, studying how people react when meeting police drones in virtual reality, and whether they come away feeling more or less trusting of the police.
MIT Technology Review’s Melissa Heikkilä came away from her encounter with a VR police drone feeling unnerved. If others feel the same way, the big question is whether these drones are effective tools for policing in the first place. Read the full story.
Melissa’s story is from The Algorithm, her weekly newsletter covering AI and its effects on society. Sign up to receive it in your inbox every Monday.
I met a police drone in VR—and hated it
It’s important because police departments are racing way ahead and starting to use drones anyway, for everything from surveillance and intelligence gathering to chasing criminals.
Last week, San Francisco approved the use of robots, including drones that can kill people in certain emergencies, such as when dealing with a mass shooter. In the UK most police drones have thermal cameras that can be used to detect how many people are inside houses, says Pósch. This has been used for all sorts of things: catching human traffickers or rogue landlords, and even targeting people holding suspected parties during covid-19 lockdowns.
Virtual reality will let the researchers test the technology in a controlled, safe way among lots of test subjects, Pósch says.
Even though I knew I was in a VR environment, I found the encounter with the drone unnerving. My opinion of these drones did not improve, even though I’d met a supposedly polite, human-operated one (there are even more aggressive modes for the experiment, which I did not experience).
Ultimately, it may not make much difference whether drones are “polite” or “rude,” says Christian Enemark, a professor at the University of Southampton, who specializes in the ethics of war and drones and is not involved in the research. That’s because the use of drones itself is a “reminder that the police are not here, whether they’re not bothering to be here or they’re too afraid to be here,” he says.
“So maybe there’s something fundamentally disrespectful about any encounter.”
GPT-4 is coming, but OpenAI is still fixing GPT-3
The internet is abuzz with excitement about AI lab OpenAI’s latest iteration of its famous large language model, GPT-3. The latest demo, ChatGPT, answers people’s questions via back-and-forth dialogue. Since its launch last Wednesday, the demo has attracted more than 1 million users. Read Will Douglas Heaven’s story here.
GPT-3 is a confident bullshitter and can easily be prompted to say toxic things. OpenAI says it has fixed a lot of these problems with ChatGPT, which answers follow-up questions, admits its mistakes, challenges incorrect premises, and rejects inappropriate requests. It even refuses to answer some questions, such as how to be evil, or how to break into someone’s house.