The new lawsuit that shows facial recognition is officially a civil rights issue
Williams’s wrongful arrest, which was first reported by the New York Times in August 2020, was based on a bad match from the Detroit Police Department’s facial recognition system. Two more cases of false arrest stemming from facial recognition have since been made public. Both of those men are also Black, and both have taken legal action.
Now Williams is following in their path and going further—not only by suing the department for his wrongful arrest, but by trying to get the technology banned.
On Tuesday, the ACLU and the University of Michigan Law School’s Civil Rights Litigation Initiative filed a lawsuit on behalf of Williams, alleging that the arrest violated his Fourth Amendment rights and Michigan’s civil rights law.
The suit requests compensation, greater transparency about the use of facial recognition, and an end to the Detroit Police Department’s use of facial recognition technology, whether direct or indirect.
What the lawsuit says
The documents filed on Tuesday lay out the case. In March 2019, the DPD ran a grainy photo of a Black man with a red cap, taken from Shinola’s surveillance video, through its facial recognition system, made by a company called DataWorks Plus. The system returned a match with an old driver’s license photo of Williams. Investigating officers then included Williams’s license photo in a photo lineup, and a Shinola security contractor (who wasn’t actually present at the time of the theft) identified Williams as the thief. The officers obtained a warrant, which requires multiple sign-offs from department leadership, and Williams was arrested.
The complaint argues that the false arrest of Williams was a direct result of the facial recognition system, and that “this wrongful arrest and imprisonment case exemplifies the grave harm caused by the misuse of, and reliance upon, facial recognition technology.”
The case contains four counts, three of which focus on the lack of probable cause for the arrest while one focuses on the racial disparities in the impact of facial recognition. “By employing technology that is empirically proven to misidentify Black people at rates far higher than other groups of people,” it states, “the DPD denied Mr. Williams the full and equal enjoyment of the Detroit Police Department’s services, privileges, and advantages because of his race or color.”
Facial recognition technology’s difficulties in identifying darker-skinned people are well documented. After the killing of George Floyd in Minneapolis in 2020, some cities and states announced bans and moratoriums on the police use of facial recognition. But many others, including Detroit, continued to use it despite growing concerns.
“Relying on subpar images”
When MIT Technology Review spoke with Williams’s ACLU lawyer, Phil Mayor, last year, he stressed that problems of racism within American law enforcement made the use of facial recognition even more concerning.
“This isn’t a one-bad-actor situation,” Mayor said. “This is a situation in which we have a criminal legal system that is extremely quick to charge, and extremely slow to protect people’s rights, especially when we’re talking about people of color.”
Eric Williams, a senior staff attorney at the Economic Equity Practice in Detroit, says cameras have many technological limitations, not least that they are hard-coded with color ranges for recognizing skin tone and often simply cannot process darker skin.
“I think every Black person in the country has had the experience of being in a photo and the picture turns up either way lighter or way darker,” says Williams, who is a member of the ACLU of Michigan’s lawyers committee but is not working on the Robert Williams case. “Lighting is one of the primary factors when it comes to the quality of an image. So the fact that law enforcement is relying, to some degree … on really subpar images is problematic.”
There have been cases that challenged biased algorithms and artificial-intelligence technologies on the basis of race. Facebook, for example, underwent a massive civil rights audit after its targeted advertising algorithms were found to serve ads on the basis of race, gender, and religion. YouTube was sued in a class action lawsuit by Black creators who alleged that its AI systems profile users and censor or discriminate against content on the basis of race. YouTube was also sued by LGBTQ+ creators who said that content moderation systems flagged the words “gay” and “lesbian.”
Some experts say it was only a matter of time until the use of biased technology by a major institution like the police was met with legal challenges.
IBM wants to build a 100,000-qubit quantum computer
Quantum computing holds and processes information in a way that exploits the unique properties of fundamental particles: electrons, atoms, and small molecules can exist in multiple energy states at once, a phenomenon known as superposition, and the states of particles can become linked, or entangled, with one another. This means that information can be encoded and manipulated in novel ways, opening the door to a swath of classically impossible computing tasks.
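The state-vector picture behind superposition and entanglement can be sketched in a few lines of Python using NumPy. This is a minimal illustration of the math, not a model of IBM’s hardware: amplitudes are complex numbers, and measurement probabilities are their squared magnitudes.

```python
import numpy as np

# A single qubit in equal superposition of |0> and |1>.
# Measuring it gives either outcome with probability 0.5.
plus = np.array([1, 1]) / np.sqrt(2)
probs = np.abs(plus) ** 2            # [0.5, 0.5]

# Two entangled qubits (a Bell state): all amplitude sits on |00> and |11>,
# so measuring one qubit immediately fixes the outcome of the other.
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)
bell_probs = np.abs(bell) ** 2       # [0.5, 0.0, 0.0, 0.5]
```

Because an n-qubit state vector has 2ⁿ amplitudes, classical simulation blows up exponentially, which is exactly the resource quantum hardware exploits.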
As yet, quantum computers have not achieved anything useful that standard supercomputers cannot do. That is largely because they haven’t had enough qubits and because the systems are easily disrupted by tiny perturbations in their environment that physicists call noise.
Researchers have been exploring ways to make do with noisy systems, but many expect that quantum systems will have to scale up significantly to be truly useful, so that they can devote a large fraction of their qubits to correcting the errors induced by noise.
IBM is not the first to aim big. Google has said it is targeting a million qubits by the end of the decade, though error correction means only 10,000 will be available for computations. Maryland-based IonQ is aiming to have 1,024 “logical qubits,” each of which will be formed from an error-correcting circuit of 13 physical qubits, performing computations by 2028. Palo Alto–based PsiQuantum, like Google, is also aiming to build a million-qubit quantum computer, but it has not revealed its time scale or its error-correction requirements.
Because of those requirements, citing the number of physical qubits is something of a red herring—the particulars of how they are built, which affect factors such as their resilience to noise and their ease of operation, are crucially important. The companies involved usually offer additional measures of performance, such as “quantum volume” and the number of “algorithmic qubits.” In the next decade advances in error correction, qubit performance, and software-led error “mitigation,” as well as the major distinctions between different types of qubits, will make this race especially tricky to follow.
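The overheads implied by the figures quoted above can be made concrete with some simple arithmetic (the ratios are taken directly from the companies’ stated plans; the helper function is just for illustration):

```python
def physical_qubits_needed(logical_qubits: int, physical_per_logical: int) -> int:
    """Physical qubits required for a given error-correction overhead."""
    return logical_qubits * physical_per_logical

# IonQ's stated plan: 1,024 logical qubits, each built from 13 physical qubits.
print(physical_qubits_needed(1024, 13))   # 13312 physical qubits

# Google's stated plan implies a much heavier overhead:
# 1,000,000 physical qubits yielding ~10,000 usable ones, i.e. ~100:1.
print(1_000_000 // 10_000)                # 100 physical qubits per usable qubit
```

The two plans differ by nearly an order of magnitude in overhead, which is one reason raw physical-qubit counts are hard to compare across vendors.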
Refining the hardware
IBM’s qubits are currently made from rings of superconducting metal, which follow the same rules as atoms when operated at millikelvin temperatures, just a tiny fraction of a degree above absolute zero. In theory, these qubits can be operated in a large ensemble. But according to IBM’s own road map, quantum computers of the sort it’s building can only scale up to 5,000 qubits with current technology. Most experts say that’s not big enough to yield much in the way of useful computation. To create powerful quantum computers, engineers will have to go bigger. And that will require new technology.
How it feels to have a life-changing brain implant removed
Burkhart’s device was implanted in his brain around nine years ago, a few years after he was left unable to move his limbs following a diving accident. He volunteered to trial the device, which enabled him to move his hand and fingers. But it had to be removed seven and a half years later.
His particular implant was a small set of 100 electrodes, carefully inserted into a part of the brain that helps control movement. It worked by recording brain activity and sending these recordings to a computer, where they were processed using an algorithm. This was connected to a sleeve of electrodes worn on the arm. The idea was to translate thoughts of movement into electrical signals that would trigger movement.
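The pipeline described above — record neural activity, decode the intended movement, drive the electrode sleeve — can be sketched as follows. Everything here is hypothetical (the function names, the linear decoder, and the 128-electrode sleeve size are assumptions for illustration; the actual system used its own proprietary decoding algorithm):

```python
import numpy as np

def decode_intent(spike_counts: np.ndarray, weights: np.ndarray) -> int:
    """Hypothetical linear decoder: map activity recorded from the
    100-electrode implant to one of several candidate movements."""
    scores = weights @ spike_counts        # one score per movement class
    return int(np.argmax(scores))          # most likely intended movement

def stimulation_pattern(intent: int, n_sleeve_electrodes: int = 128) -> np.ndarray:
    """Hypothetical mapping from a decoded intent to an on/off pattern
    for the electrode sleeve worn on the arm (size assumed, not reported)."""
    rng = np.random.default_rng(intent)    # stand-in for a learned mapping
    return (rng.random(n_sleeve_electrodes) > 0.5).astype(int)

# Example: a 100-channel recording, 3 candidate movements (open, close, pinch).
rng = np.random.default_rng(0)
weights = rng.standard_normal((3, 100))
counts = rng.poisson(5.0, size=100)        # simulated spike counts
intent = decode_intent(counts, weights)
pattern = stimulation_pattern(intent)
```

Real decoders are trained on sessions like Burkhart’s lab visits, which is why the training program described below took months of repeated calibration.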
Burkhart was the first to receive the implant, in 2014; he was 24 years old. Once he had recovered from the surgery, he began a training program to learn how to use it. Three times a week for around a year and a half, he visited a lab where the implant could be connected to a computer via a cable leading out of his head.
“It worked really well,” says Burkhart. “We started off just being able to open and close my hand, but after some time we were able to do individual finger movements.” He was eventually able to combine movements and control his grip strength. He was even able to play Guitar Hero.
“There was a lot that I was able to do, which was exciting,” he says. “But it was also still limited.” He could use the device only in the lab, and only for lab-based tasks. “Any of the activities we would do would be simplified,” he says.
For example, he could pour a bottle out, but it was only a bottle of beads, because the researchers didn’t want liquids around the electrical equipment. “It was kind of a bummer it wasn’t changing everything in my life, because I had seen how beneficial it could be,” he says.
At any rate, the device worked so well that the team extended the trial. Burkhart was initially meant to have the implant in place for 12 to 18 months, he says. “But everything was really successful … so we were able to continue on for quite a while after that.” The trial was extended on an annual basis, and Burkhart continued to visit the lab twice a week.
The Download: brain implant removal, and Nvidia’s AI payoff
Leggett told researchers that she “became one” with her device. It helped her to control the unpredictable, violent seizures she routinely experienced, and allowed her to take charge of her own life. So she was devastated when, two years later, she was told the implant had to be removed because the company that made it had gone bust.
The removal of this implant, and others like it, might represent a breach of human rights, ethicists say in a paper published earlier this month. And the issue will only become more pressing as the brain implant market grows in the coming years and more people receive devices like Leggett’s. Read the full story.
You can read more about what happens to patients when their life-changing brain implants are removed against their wishes in the latest issue of The Checkup, Jessica’s weekly newsletter giving you the inside track on all things biotech. Sign up to receive it in your inbox every Thursday.
If you’d like to read more about brain implants, why not check out:
+ Brain waves can tell us how much pain someone is in. The research could open doors for personalized brain therapies to target and treat the worst kinds of chronic pain. Read the full story.
+ An ALS patient set a record for communicating via a brain implant. Brain interfaces could let paralyzed people speak at almost normal speeds. Read the full story.
+ Here’s how personalized brain stimulation could treat depression. Implants that track and optimize our brain activity are on the way. Read the full story.