Tech

An AI saw a cropped photo of AOC. It autocompleted her wearing a bikini.



Language-generation algorithms are known to embed racist and sexist ideas. They’re trained on the language of the internet, including the dark corners of Reddit and Twitter that may include hate speech and disinformation. Whatever harmful ideas are present in those forums get normalized as part of their learning.

Researchers have now demonstrated that the same can be true for image-generation algorithms. Feed one a photo of a man cropped right below his neck, and 43% of the time, it will autocomplete him wearing a suit. Feed the same algorithm a cropped photo of a woman, even a famous woman like US Representative Alexandria Ocasio-Cortez, and 53% of the time, it will autocomplete her wearing a low-cut top or bikini. This has implications not just for image generation, but for all computer-vision applications, including video-based candidate assessment algorithms, facial recognition, and surveillance.

Ryan Steed, a PhD student at Carnegie Mellon University, and Aylin Caliskan, an assistant professor at George Washington University, looked at two algorithms: OpenAI’s iGPT (a version of GPT-2 that is trained on pixels instead of words) and Google’s SimCLR. While each algorithm approaches learning images differently, they share an important characteristic—they both use completely unsupervised learning, meaning they do not need humans to label the images.

Both models appeared in 2020, when unsupervised pretraining was still a relatively new approach in computer vision. Previous computer-vision algorithms mainly used supervised learning, which involves feeding them manually labeled images: cat photos with the tag “cat” and baby photos with the tag “baby.” But in 2019, researcher Kate Crawford and artist Trevor Paglen found that these human-created labels in ImageNet, the most foundational image dataset for training computer-vision models, sometimes contain disturbing language, like “slut” for women and racial slurs for minorities.

The latest paper demonstrates an even deeper source of toxicity. Even without these human labels, the images themselves encode unwanted patterns. The issue parallels what the natural-language processing (NLP) community has already discovered. The enormous datasets compiled to feed these data-hungry algorithms capture everything on the internet. And the internet has an overrepresentation of scantily clad women and other often harmful stereotypes.

To conduct their study, Steed and Caliskan cleverly adapted a technique that Caliskan previously used to examine bias in unsupervised NLP models. These models learn to manipulate and generate language using word embeddings, a mathematical representation of language that clusters words commonly used together and separates words commonly found apart. In a 2017 paper published in Science, Caliskan measured the distances between the different word pairings that psychologists were using to measure human biases in the Implicit Association Test (IAT). She found that those distances almost perfectly recreated the IAT’s results. Stereotypical word pairings like man and career or woman and family were close together, while opposite pairings like man and family or woman and career were far apart.
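To make that measurement concrete, here is a minimal sketch of the distance test in Python. The toy three-dimensional vectors are illustrative stand-ins for real learned embeddings (which typically have hundreds of dimensions), and the function names are ours, not taken from Caliskan’s paper.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

def association(word_vec, attr_a, attr_b):
    """Mean similarity to attribute set A minus mean similarity to set B.
    Positive: the word sits closer to A; negative: closer to B."""
    return (np.mean([cosine_similarity(word_vec, v) for v in attr_a])
            - np.mean([cosine_similarity(word_vec, v) for v in attr_b]))

# Toy 3-d vectors standing in for real learned embeddings (illustrative only).
emb = {
    "man":    np.array([0.9, 0.1, 0.2]),
    "woman":  np.array([0.1, 0.9, 0.2]),
    "career": np.array([0.8, 0.2, 0.1]),
    "family": np.array([0.2, 0.8, 0.1]),
}

career, family = [emb["career"]], [emb["family"]]
print(association(emb["man"], career, family))    # positive: "man" nearer "career"
print(association(emb["woman"], career, family))  # negative: "woman" nearer "family"
```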

iGPT is also based on embeddings: it clusters or separates pixels based on how often they co-occur within its training images. Those pixel embeddings can then be used to compare how close or far two images are in mathematical space.

In their study, Steed and Caliskan once again found that those distances mirror the results of the IAT. Photos of men cluster close to ties and suits, while photos of women sit farther from those items. The researchers got the same results with SimCLR, even though it uses a different method for deriving embeddings from images.
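The comparison itself is simple once the embeddings exist. Below is a minimal sketch under the assumption that each photo has already been mapped to a feature vector by a vision model such as iGPT or SimCLR; the placeholder vectors and variable names are purely illustrative.

```python
import numpy as np

def cosine_distance(a, b):
    """1 minus cosine similarity: smaller means the model places them closer."""
    return 1.0 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

# Placeholder vectors standing in for real encoder outputs; in practice each
# would come from, e.g., iGPT's pooled features or SimCLR's encoder.
man_photo   = np.array([0.8, 0.3, 0.1])
woman_photo = np.array([0.1, 0.9, 0.3])
suit_photo  = np.array([0.9, 0.2, 0.2])

print(cosine_distance(man_photo, suit_photo))    # small: clustered together
print(cosine_distance(woman_photo, suit_photo))  # larger: farther apart
```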

These results have concerning implications for image generation. Other image-generation algorithms, like generative adversarial networks, have led to an explosion of deepfake pornography that almost exclusively targets women. iGPT in particular adds yet another way for people to generate sexualized photos of women.

But the potential downstream effects are much bigger. In the field of NLP, unsupervised models have become the backbone for all kinds of applications. Researchers begin with an existing unsupervised model like BERT or GPT-2 and use a tailored dataset to “fine-tune” it for a specific purpose. This semi-supervised approach, a combination of unsupervised and supervised learning, has become a de facto standard.
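As a rough sketch of that recipe, the example below fine-tunes a pretrained BERT model for a two-class task with the Hugging Face transformers library. The model name, the single toy example, and the label convention are assumptions chosen for brevity; a real pipeline would iterate over a full labeled dataset.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Load weights learned during unsupervised pretraining, plus a fresh,
# randomly initialized two-way classification head for the new task.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)
model.train()

# A single labeled example standing in for the small task-specific dataset.
batch = tokenizer(["great product, would buy again"],
                  return_tensors="pt", padding=True, truncation=True)
labels = torch.tensor([1])  # hypothetical convention: 1 = positive

# One supervised gradient step: the "fine-tuning" half of the recipe.
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
loss = model(**batch, labels=labels).loss
loss.backward()
optimizer.step()
```

Whatever associations the pretrained weights already encode carry straight into every model fine-tuned on top of them, which is why biases measured at the pretraining stage matter downstream.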

The computer-vision field is beginning to see the same trend. Steed and Caliskan worry about what these baked-in biases could mean when the algorithms are used for sensitive applications such as policing or hiring, where models are already analyzing candidates’ video recordings to decide if they’re a good fit for the job. “These are very dangerous applications that make consequential decisions,” says Caliskan.

Deborah Raji, a Mozilla fellow who co-authored an influential study revealing the biases in facial recognition, says the study should serve as a wake-up call to the computer-vision field. “For a long time, a lot of the critique on bias was about the way we label our images,” she says. Now this paper is saying “the actual composition of the dataset is resulting in these biases. We need accountability on how we curate these data sets and collect this information.”

Steed and Caliskan urge greater transparency from the companies developing these models: they should open-source them and let the academic community continue its investigations. The researchers also encourage their peers to do more testing before deploying a vision model, for example by using the methods they developed for this paper. And finally, they hope the field will develop more responsible ways of compiling and documenting what’s included in training datasets.

Caliskan says the goal is ultimately to gain greater awareness and control when applying computer vision. “We need to be very careful about how we use them,” she says, “but at the same time, now that we have these methods, we can try to use this for social good.”

Tech

Investing in women pays off



“Starting a business is a privilege,” says Burton O’Toole, who worked at various startups before launching and later selling AdMass, her own marketing technology company. The company gave her access to the HearstLab program in 2016, but she soon discovered that she preferred the investment aspect and became a vice president at HearstLab a year later. “To empower some of the smartest women to do what they love is great,” she says. But in addition to rooting for women, Burton O’Toole loves the work because it’s a great market opportunity. 

“Research shows female-led teams see two and a half times higher returns compared to male-led teams,” she says, adding that women and people of color tend to build more diverse teams and therefore benefit from varied viewpoints and perspectives. She also explains that companies with women on their founding teams are likely to get acquired or go public sooner. “Despite results like this, just 2.3% of venture capital funding goes to teams founded by women. It’s still amazing to me that more investors aren’t taking this data more seriously,” she says. 

Burton O’Toole—who earned a BS from Duke in 2007 before getting an MS and PhD from MIT, all in mechanical engineering—has been a “data nerd” for as long as she can remember. In high school she wanted to become an actuary. “Ten years ago, I never could have imagined this work; I like the idea of doing something in 10 more years I couldn’t imagine now,” she says.

When starting a business, Burton O’Toole says, “women tend to want all their ducks in a row before they act. They say, ‘I’ll do it when I get this promotion, have enough money, finish this project.’ But there’s only one good way. Make the jump.”

Tech

Preparing for disasters, before it’s too late



All too often, the work of developing global disaster and climate resiliency happens only after a disaster—such as a hurricane, earthquake, or tsunami—has already ravaged entire cities and torn communities apart. But Elizabeth Petheo, MBA ’14, says her recent work has focused on preparedness.

It’s hard to get attention for preparedness efforts, explains Petheo, a principal at Miyamoto International, an engineering and disaster risk reduction consulting firm. “You can always get a lot of attention when there’s a disaster event, but at that point it’s too late,” she adds. 

Petheo leads the firm’s projects and partnerships in the Asia-Pacific region and advises globally on international development and humanitarian assistance. She also works on preparedness in the Asia-Pacific region with the United States Agency for International Development. 

“We’re doing programming on the engagement of the private sector in disaster risk management in Indonesia, which is a very disaster-prone country,” she says. “Smaller and medium-sized businesses are important contributors to job creation and economic development. When they go down, the impact on lives, livelihoods, and the community’s ability to respond and recover effectively is extreme. We work to strengthen their own understanding of their risk and that of their surrounding community, lead them through an action-planning process to build resilience, and link that with larger policy initiatives at the national level.”

Petheo came to MIT with international leadership experience, having managed high-profile global development and risk mitigation initiatives at the World Bank in Washington, DC, and having led major humanitarian responses and teams in Sri Lanka and Haiti with US government agencies and international organizations. But she says her time at Sloan helped prepare her for the next phase of her career. “Sloan was the experience that put all the pieces together,” she says.

Petheo has maintained strong connections with MIT. In 2018, she received the Margaret L.A. MacVicar ’65, ScD ’67, Award in recognition of her role starting and leading the MIT Sloan Club in Washington, DC, and her work as an inaugural member of the Graduate Alumni Council (GAC). She is also a member of the Friends of the MIT Priscilla King Gray Public Service Center.

“I believe deeply in the power and impact of the Institute’s work and people,” she says. “The moment I graduated, my thought process was, ‘How can I give back, and how can I continue to strengthen the experience of those who will come after me?’”

Tech

The Download: a curb on climate action, and post-Roe period tracking


The US Supreme Court just gutted the EPA’s power to regulate emissions


Why’s it so controversial?: Geoengineering was long a taboo topic among scientists, and some argue it should remain one. There are questions about its potential environmental side effects, and concerns that the impacts will be felt unevenly across the globe. Some feel it’s too dangerous to ever try or even to investigate, arguing that merely discussing the possibility could ease the pressure to address the underlying causes of climate change.

But it’s going ahead?: Despite the concerns, as the threat of climate change grows and major nations fail to make rapid progress on emissions, growing numbers of experts are seriously exploring the potential effects of these approaches. Read the full story.

—James Temple

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 The belief that AI is alive refuses to die
People want to believe the models are sentient, even when their creators deny it. (Reuters)
+ It’s unsurprising wild religious beliefs find a home in Silicon Valley. (Vox)
+ AI systems are being trained twice as quickly as they were just last year. (IEEE Spectrum)

2 The FBI added the missing cryptoqueen to its most-wanted list
It’s offering a $100,000 reward for information leading to the arrest of Ruja Ignatova, whose crypto scheme defrauded victims out of more than $4 billion. (BBC)
+ A new documentary on the crypto Ponzi scheme is in the works. (Variety)

3 Social media platforms turn a blind eye to dodgy telehealth ads
Which has played a part in the boom in prescription drug abuse. (Protocol)
+ The doctor will Zoom you now. (MIT Technology Review)

4 We’re addicted to China’s lithium batteries
Which isn’t great news for other countries building electric cars. (Wired $)
+ This battery uses a new anode that lasts 20 times longer than lithium. (IEEE Spectrum)
+ Quantum batteries could, in theory, allow us to drive a million miles between charges. (The Next Web)

5 Far-right extremists are communicating over radio to avoid detection
Making it harder to monitor them and their violent activities. (Slate $)
+ Many of the rioters who stormed the Capitol were carrying radio equipment. (The Guardian)

6 Bro culture has no place in space 🚀
So says NASA’s former deputy administrator, who’s sick and tired of misogyny in the sector. (CNN)

7 A US crypto exchange is gaining traction in Venezuela
It’s helping its growing community battle hyperinflation, but isn’t as decentralized as they believe it to be. (Rest of World)
+ The vast majority of NFT players won’t be around in a decade. (Vox)
+ Exchange Coinbase is working with ICE to track and identify crypto users. (The Intercept)
+ If RadioShack’s edgy tweets shock you, don’t forget it’s a crypto firm now. (NY Mag)

8 It’s time we learned to love our swamps
Draining them prevents them from absorbing CO2 and filtering out our waste. (New Yorker $)
+ The architect making friends with flooding. (MIT Technology Review) 

9 Robots love drawing too 🖍️
Though I’ll bet they don’t get as frustrated as we do when they mess up. (Input)

10 The risky world of teenage brains
Making potentially dangerous decisions is an important part of adolescence, and our brains reflect that. (Knowable Magazine)

Quote of the day

“They shamelessly celebrate an all-inclusive pool party while we can’t even pay our rent!”
