The emails span a period from October 2018 through February 2020, beginning when Clearview AI CEO Hoan Ton-That was introduced to NYPD deputy inspector Chris Flanagan. After initial meetings, Clearview AI entered into a vendor contract with NYPD in December 2018 on a trial basis that lasted until the following March.
The documents show that many individuals at NYPD had access to Clearview during and after this time, from department leadership to junior officers. Throughout the exchanges, Clearview AI encouraged more use of its services. (“See if you can reach 100 searches,” its onboarding instructions urged officers.) The emails show that trial accounts for the NYPD were created as late as February 2020, almost a year after the trial period was said to have ended.
We reviewed the emails, and talked to top surveillance and legal experts about their contents. Here’s what you need to know.
NYPD lied about the extent of its relationship with Clearview AI and the use of its facial recognition technology
The NYPD previously told BuzzFeed News and the New York Post that it had “no institutional relationship” with Clearview AI, “formally or informally.” The department did disclose that it had trialed Clearview AI, but the emails show that the technology was used over a sustained time period by a large number of people who completed a high volume of searches in real investigations.
In one exchange, a detective working in the department’s facial recognition unit said, “App is working great.” In another, an officer on the NYPD’s identity theft squad said that “we continue to receive positive results” and have “gone on to make arrests.” (We have removed full names and email addresses from these images; other personal details were redacted in the original documents.)
Albert Fox Cahn, executive director at the Surveillance Technology Oversight Project, a nonprofit that advocates for the abolition of police use of facial recognition technology in New York City, says the records clearly contradict NYPD’s previous public statements on its use of Clearview AI.
“Here we have a pattern of officers getting Clearview accounts—not for weeks or months, but over the course of years,” he says. “We have evidence of meetings with officials at the highest level of the NYPD, including the facial identification section. This isn’t a few officers who decide to go off and get a trial account. This was a systematic adoption of Clearview’s facial recognition technology to target New Yorkers.”
Further, NYPD’s description of its facial recognition use, which is required under a recently passed law, says that “investigators compare probe images obtained during investigations with a controlled and limited group of photographs already within possession of the NYPD.” Clearview AI is known for its database of over 3 billion photos scraped from the web.
NYPD is working closely with immigration enforcement, and officers referred Clearview AI to ICE
The documents contain multiple emails from the NYPD that appear to be referrals to aid Clearview in selling its technology to the Department of Homeland Security. Two police officers had both NYPD and Homeland Security affiliations in their email signatures, while another officer identified as a member of a Homeland Security task force.
New York is designated as a sanctuary city, meaning that local law enforcement limits its cooperation with federal immigration agencies. In fact, NYPD’s facial recognition policy statement says that “information is not shared in furtherance of immigration enforcement” and “access will not be given to other agencies for purposes of furthering immigration enforcement.”
“I think one of the big takeaways is just how lawless and unregulated the interactions and surveillance and data sharing landscape is between local police, federal law enforcement, immigration enforcement,” says Matthew Guariglia, an analyst at the Electronic Frontier Foundation. “There just seems to be so much communication, maybe data sharing, and so much unregulated use of technology.”
Cahn says the emails immediately ring alarm bells, particularly since a great deal of law enforcement information funnels through central systems known as fusion centers.
“You can claim you’re a sanctuary city all you want, but as long as you continue to have these DHS task forces, as long as you continue to have information fusion centers that allow real-time data exchange with DHS, you’re making that promise into a lie.”
Many officers asked to use Clearview AI on their personal devices or through their personal email accounts
At least four officers asked for access to Clearview’s app on their personal devices or through personal emails. Department devices are closely regulated, and it can be difficult to download applications to official NYPD mobile phones. Some officers clearly opted to use their personal devices when department phones were too restrictive.
Clearview replied to one such request: “Hi William, you should have a setup email in your inbox shortly.”
Jonathan McCoy is a digital forensics attorney at Legal Aid Society and took part in filing the freedom of information request. He found the use of personal devices particularly troublesome: “My takeaway is that they were actively trying to circumvent NYPD policies and procedures that state that if you’re going to be using facial recognition technology, you have to go through FIS (facial identification section) and they have to use the technology that’s already been approved by the NYPD wholesale.” NYPD does already have a facial recognition system, provided by a company called Dataworks.
The Download: Introducing our TR35 list, and the death of the smart city
Spoiler alert: our annual Innovators Under 35 list isn’t actually about what a small group of smart young people have been up to (although that’s certainly part of it). It’s really about where the world of technology is headed next.
As you read about the problems this year’s winners have set out to solve, you’ll also glimpse the near future of AI, biotech, materials, computing, and the fight against climate change.
To connect the dots, we asked five experts—all judges or former winners—to write short essays about where they see the most promise, and the biggest potential roadblocks, in their respective fields. We hope the list inspires you and gives you a sense of what to expect in the years ahead.
Read the full list here.
The Urbanism issue
The modern city is a surveillance device. It can track your movements via your license plate, your cell phone, and your face. But go to any city or suburb in the United States and there’s a different type of monitoring happening, one powered by networks of privately owned doorbell cameras, wildlife cameras, and even garden-variety security cameras.
The latest print issue of MIT Technology Review examines why, independently of local governments, we have built our neighborhoods into panopticons: everyone watching everything, all the time. Here is a selection of some of the new stories in the edition, guaranteed to make you wonder whether smart cities really are so smart after all:
– How groups of online neighborhood watchmen are taking the law into their own hands.
– Why Toronto wants you to forget everything you know about smart cities.
– Bike theft is a huge problem. Specialized parking pods could be the answer.
– Public transport wants to kill off cash—but it won’t be as disruptive as you think.
Toronto wants to kill the smart city forever
Most Quayside watchers have a hard time believing that covid was the real reason for ending the project. Sidewalk Labs never really painted a compelling picture of the place it hoped to build.
The new Waterfront Toronto project has clearly learned from the past. Renderings of the new plans for Quayside—call it Quayside 2.0—released earlier this year show trees and greenery sprouting from every possible balcony and outcropping, with nary an autonomous vehicle or drone in sight. The project’s highly accomplished design team—led by Alison Brooks, a Canadian architect based in London; the renowned Ghanaian-British architect David Adjaye; Matthew Hickey, a Mohawk architect from the Six Nations First Nation; and the Danish firm Henning Larsen—all speak of this new corner of Canada’s largest city not as a techno-utopia but as a bucolic retreat.
In every way, Quayside 2.0 promotes the notion that an urban neighborhood can be a hybrid of the natural and the manmade. The project boldly suggests that we now want our cities to be green, both metaphorically and literally—the renderings are so loaded with trees that they suggest foliage is a new form of architectural ornament. In the promotional video for the project, Adjaye, known for his design of the Smithsonian Museum of African American History, cites the “importance of human life, plant life, and the natural world.” The pendulum has swung back toward Howard’s garden city: Quayside 2022 is a conspicuous disavowal not only of the 2017 proposal but of the smart city concept itself.
To some extent, this retreat to nature reflects the changing times, as society has gone from a place of techno-optimism (think: Steve Jobs introducing the iPhone) to a place of skepticism, scarred by data collection scandals, misinformation, online harassment, and outright techno-fraud. Sure, the tech industry has made life more productive over the past two decades, but has it made it better? Sidewalk never had an answer to this.
“To me it’s a wonderful ending because we didn’t end up with a big mistake,” says Jennifer Keesmaat, former chief planner for Toronto, who advised the Ministry of Infrastructure on how to set this next iteration up for success. She’s enthusiastic about the rethought plan for the area: “If you look at what we’re doing now on that site, it’s classic city building with a 21st-century twist, which means it’s a carbon-neutral community. It’s a totally electrified community. It’s a community that prioritizes affordable housing, because we have an affordable-housing crisis in our city. It’s a community that has a strong emphasis on green space and urban agriculture and urban farming. Are those things that are derived from Sidewalk’s proposal? Not really.”
Rewriting what we thought was possible in biotech
What ML and AI in biotech broadly need to engage with are the gaps unique to the study of health. Success stories like neural nets that learned to identify dogs in images were built with the help of high-quality image labeling that people were in a good position to provide. Even attempts to generate or translate human language are easily verified and audited by experts who speak a particular language.
Instead, much of biology, health, and medicine is very much in the stage of fundamental discovery. How do neurodegenerative diseases work? What environmental factors really matter? What role does nutrition play in overall human health? We don’t know yet. In health and biotech, machine learning is taking on a different, more challenging task—one that will require less engineering and more science.
Marzyeh Ghassemi is an assistant professor at MIT and a faculty member at the Vector Institute (and a 35 Innovators honoree in 2018).