The study supplies the latest evidence that Facebook has not resolved its ad discrimination problems since ProPublica first brought the issue to light in October 2016. At the time, ProPublica revealed that the platform allowed advertisers of job and housing opportunities to exclude certain audiences characterized by traits like gender and race. Such groups receive special protection under US law, making this practice illegal. It took two and a half years and several legal skirmishes for Facebook to finally remove that feature.
But a few months later, the US Department of Housing and Urban Development (HUD) filed a new lawsuit, alleging that Facebook’s ad-delivery algorithms were still excluding audiences for housing ads without the advertiser specifying the exclusion. A team of independent researchers including Korolova, led by Northeastern University’s Muhammad Ali and Piotr Sapieżyński, corroborated those allegations a week later. They found, for example, that houses for sale were being shown more often to white users and houses for rent were being shown more often to minority users.
Korolova wanted to revisit the issue with her latest audit because the burden of proof for job discrimination is higher than for housing discrimination. While any skew in the display of ads based on protected characteristics is illegal in the case of housing, US employment law deems it justifiable if the skew is due to legitimate qualification differences. The new methodology controls for this factor.
“The design of the experiment is very clean,” says Sapieżyński, who was not involved in the latest study. While some could argue that car and jewelry sales associates do indeed have different qualifications, he says, the differences between delivering pizza and delivering groceries are negligible. “These gender differences cannot be explained away by gender differences in qualifications or a lack of qualifications,” he adds. “Facebook can no longer say [this is] defensible by law.”
The release of this audit comes amid heightened scrutiny of Facebook’s AI bias work. In March, MIT Technology Review published the results of a nine-month investigation into the company’s Responsible AI team, which found that the team, first formed in 2018, had neglected to work on issues like algorithmic amplification of misinformation and polarization because of its blinkered focus on AI bias. The company published a blog post shortly after, emphasizing the importance of that work and saying in particular that Facebook seeks “to better understand potential errors that may affect our ads system, as part of our ongoing and broader work to study algorithmic fairness in ads.”
“We’ve taken meaningful steps to address issues of discrimination in ads and have teams working on ads fairness today,” said Facebook spokesperson Joe Osborn in a statement. “Our system takes into account many signals to try and serve people ads they will be most interested in, but we understand the concerns raised in the report… We’re continuing to work closely with the civil rights community, regulators, and academics on these important matters.”
Despite these claims, however, Korolova says she found no noticeable change between the 2019 audit and this one in the way Facebook’s ad-delivery algorithms work. “From that perspective, it’s actually really disappointing, because we brought this to their attention two years ago,” she says. She’s also offered to work with Facebook on addressing these issues, she says. “We haven’t heard back. At least to me, they haven’t reached out.”
In previous interviews, the company said it was unable to discuss the details of how it was working to mitigate algorithmic discrimination in its ad service because of ongoing litigation. The ads team said its progress has been limited by technical challenges.
Sapieżyński, who has now conducted three audits of the platform, says those technical challenges are beside the point. “Facebook still has yet to acknowledge that there is a problem,” he says. While the team works out the technical kinks, he adds, there’s also an easy interim solution: it could turn off algorithmic ad targeting specifically for housing, employment, and lending ads without affecting the rest of its service. It’s really just an issue of political will, he says.
Christo Wilson, another researcher at Northeastern who studies algorithmic bias but didn’t participate in Korolova’s or Sapieżyński’s research, agrees: “How many times do researchers and journalists need to find these problems before we just accept that the whole ad-targeting system is bankrupt?”
The Download: Introducing our TR35 list, and the death of the smart city
Spoiler alert: our annual Innovators Under 35 list isn’t actually about what a small group of smart young people have been up to (although that’s certainly part of it). It’s really about where the world of technology is headed next.
As you read about the problems this year’s winners have set out to solve, you’ll also glimpse the near future of AI, biotech, materials, computing, and the fight against climate change.
To connect the dots, we asked five experts—all judges or former winners—to write short essays about where they see the most promise, and the biggest potential roadblocks, in their respective fields. We hope the list inspires you and gives you a sense of what to expect in the years ahead.
Read the full list here.
The Urbanism issue
The modern city is a surveillance device. It can track your movements via your license plate, your cell phone, and your face. But go to any city or suburb in the United States and there’s a different type of monitoring happening, one powered by networks of privately owned doorbell cameras, wildlife cameras, and even garden-variety security cameras.
The latest print issue of MIT Technology Review examines why, independently of local governments, we have built our neighborhoods into panopticons: everyone watching everything, all the time. Here is a selection of some of the new stories in the edition, guaranteed to make you wonder whether smart cities really are so smart after all:
– How groups of online neighborhood watchmen are taking the law into their own hands.
– Why Toronto wants you to forget everything you know about smart cities.
– Bike theft is a huge problem. Specialized parking pods could be the answer.
– Public transport wants to kill off cash—but it won’t be as disruptive as you think.
Toronto wants to kill the smart city forever
Most Quayside watchers have a hard time believing that covid was the real reason for ending the project. Sidewalk Labs never really painted a compelling picture of the place it hoped to build.
The new Waterfront Toronto project has clearly learned from the past. Renderings of the new plans for Quayside—call it Quayside 2.0—released earlier this year show trees and greenery sprouting from every possible balcony and outcropping, with nary an autonomous vehicle or drone in sight. The project’s highly accomplished design team—led by Alison Brooks, a Canadian architect based in London; the renowned Ghanaian-British architect David Adjaye; Matthew Hickey, a Mohawk architect from the Six Nations First Nation; and the Danish firm Henning Larsen—all speak of this new corner of Canada’s largest city not as a techno-utopia but as a bucolic retreat.
In every way, Quayside 2.0 promotes the notion that an urban neighborhood can be a hybrid of the natural and the man-made. The project boldly suggests that we now want our cities to be green, both metaphorically and literally—the renderings are so loaded with trees that they suggest foliage is a new form of architectural ornament. In the promotional video for the project, Adjaye, known for his design of the Smithsonian’s National Museum of African American History and Culture, cites the “importance of human life, plant life, and the natural world.” The pendulum has swung back toward Ebenezer Howard’s garden city: Quayside 2022 is a conspicuous disavowal not only of the 2017 proposal but of the smart city concept itself.
To some extent, this retreat to nature reflects the changing times, as society has gone from a place of techno-optimism (think: Steve Jobs introducing the iPhone) to a place of skepticism, scarred by data collection scandals, misinformation, online harassment, and outright techno-fraud. Sure, the tech industry has made life more productive over the past two decades, but has it made it better? Sidewalk never had an answer to this.
“To me it’s a wonderful ending because we didn’t end up with a big mistake,” says Jennifer Keesmaat, former chief planner for Toronto, who advised the Ministry of Infrastructure on how to set this next iteration up for success. She’s enthusiastic about the rethought plan for the area: “If you look at what we’re doing now on that site, it’s classic city building with a 21st-century twist, which means it’s a carbon-neutral community. It’s a totally electrified community. It’s a community that prioritizes affordable housing, because we have an affordable-housing crisis in our city. It’s a community that has a strong emphasis on green space and urban agriculture and urban farming. Are those things that are derived from Sidewalk’s proposal? Not really.”
Rewriting what we thought was possible in biotech
What machine learning and AI in biotech broadly need to engage with are the gaps unique to the study of health. Success stories like neural nets that learned to identify dogs in images were built with the help of high-quality image labels that people were well positioned to provide. Even attempts to generate or translate human language are easily verified and audited by experts who speak the language in question.
Instead, much of biology, health, and medicine is very much in the stage of fundamental discovery. How do neurodegenerative diseases work? What environmental factors really matter? What role does nutrition play in overall human health? We don’t know yet. In health and biotech, machine learning is taking on a different, more challenging, task—one that will require less engineering and more science.
Marzyeh Ghassemi is an assistant professor at MIT and a faculty member at the Vector Institute (and a 35 Innovators honoree in 2018).