
How beauty filters took over

There are thousands of distortion filters available on major social platforms, with names like La Belle, Natural Beauty, and Boss Babe. Even the goofy Big Mouth on Snapchat, one of social media’s most popular filters, is made with distortion effects.

In October 2019, Facebook banned distortion effects because of “public debate about potential negative impact.” Awareness of body dysmorphia was rising, and a filter called FixMe, which allowed users to mark up their faces as a cosmetic surgeon might, had sparked a surge of criticism for encouraging plastic surgery. But in August 2020, the effects were re-released with a new policy banning filters that explicitly promoted surgery. Effects that resize facial features, however, are still allowed. (When asked about the decision, a spokesperson directed me to Facebook’s press release from that time.)

When the effects were re-released, Rocha decided to take a stand and began posting condemnations of body shaming online. She committed to stop using deformation effects herself unless they were clearly humorous or dramatic rather than beautifying, and she says she didn’t want to “be responsible” for the harmful effects some filters were having on women: some, she says, have looked into getting plastic surgery to make themselves look like their filtered selves.

“I wish I was wearing a filter right now”

Krista Crotty is a clinical education specialist at the Emily Program, a leading center on eating disorders and mental health based in St. Paul, Minnesota. Much of her job over the past five years has focused on educating patients about how to consume media in a healthier way. She says that when patients present themselves differently online and in person, she sees an increase in anxiety. “People are putting up information about themselves—whether it’s size, shape, weight, whatever—that isn’t anything like what they actually look like,” she says. “In between that authentic self and digital self lives a lot of anxiety, because it’s not who you really are. You don’t look like the photos that have been filtered.”

“There’s just somewhat of a validation when you’re meeting that standard, even if it’s only for a picture.”

For young people, who are still working out who they are, navigating between a digital and authentic self can be particularly complicated, and it’s not clear what the long-term consequences will be.

“Identity online is kind of like an artifact, almost,” says Claire Pescott, the researcher from the University of South Wales. “It’s a kind of projected image of yourself.”

Pescott’s observations of children have led her to conclude that filters can have a positive impact on them. “They can kind of try out different personas,” she explains. “They have these ‘of the moment’ identities that they could change, and they can evolve with different groups.”

A screenshot from the Instagram Effects gallery. These are some of the top filters in the “selfies” category.

But she doubts that all young people are able to understand how filters affect their sense of self. And she’s concerned about the way social media platforms grant immediate validation and feedback in the form of likes and comments. Young girls, she says, have particular difficulty differentiating between filtered photos and ordinary ones.

Pescott’s research also revealed that while children are now often taught about online behavior, they receive “very little education” about filters. Their safety training “was linked to overt physical dangers of social media, not the emotional, more nuanced side of social media,” she says, “which I think is more dangerous.”

Bailenson expects that we can learn about some of these emotional unknowns from established VR research. In virtual environments, people’s behavior changes with the physical characteristics of their avatar, a phenomenon called the Proteus effect. Bailenson found, for example, that people who had taller avatars were more likely to behave confidently than those with shorter avatars. “We know that visual representations of the self, when used in a meaningful way during social interactions, do change our attitudes and behaviors,” he says.

But sometimes those actions can play on stereotypes. A well-known study from 1988 found that athletes who wore black uniforms were more aggressive and violent while playing sports than those wearing white uniforms. And this translates to the digital world: one recent study showed that video game players who used avatars of the opposite sex actually behaved in a way that was gender stereotypical.

Bailenson says we should expect to see similar behavior on social media as people adopt masks based on filtered versions of their own faces, rather than entirely different characters. “The world of filtered video, in my opinion—and we haven’t tested this yet—is going to behave very similarly to the world of filtered avatars,” he says.

Selfie regulation

Considering the power and pervasiveness of filters, there is very little hard research about their impact—and even fewer guardrails around their use.

I asked Bailenson, who is the father of two young girls, how he thinks about his daughters’ use of AR filters. “It’s a real tough one,” he says, “because it goes against everything that we’re taught in all of our basic cartoons, which is ‘Be yourself.’”

Bailenson also says that playful use is different from real-time, constant augmentation of ourselves, and understanding what these different contexts mean for kids is important.

“Even though we know it’s not real… We still have that aspiration to look that way.”

What few regulations and restrictions there are on filter use rely on companies to police themselves. Facebook’s filters, for example, have to go through an approval process that, according to the spokesperson, uses “a combination of human and automated systems to review effects as they are submitted for publishing.” They are reviewed for certain issues, such as hate speech or nudity, and users are also able to report filters, which then get manually reviewed.

The company says it consults regularly with expert groups, such as the National Eating Disorders Association and the JED Foundation, a mental-health nonprofit.

“We know people may feel pressure to look a certain way on social media, and we’re taking steps to address this across Instagram and Facebook,” said a statement from Instagram. “We know effects can play a role, so we ban ones that clearly promote eating disorders or that encourage potentially dangerous cosmetic surgery procedures… And we’re working on more products to help reduce the pressure people may feel on our platforms, like the option to hide like counts.”

Facebook and Snapchat also label filtered photos to show that they’ve been transformed—but it’s easy to get around the labels by simply applying the edits outside of the apps, or by downloading and reuploading a filtered photo.

Labeling might be important, but Pescott says she doesn’t think it will dramatically improve an unhealthy beauty culture online.

“I don’t know whether it would make a huge amount of difference, because I think it’s the fact we’re seeing it, even though we know it’s not real. We still have that aspiration to look that way,” she says. Instead, she believes that the images children are exposed to should be more diverse, more authentic, and less filtered.

There’s another concern, too, especially since the majority of users are very young: the amount of biometric data that TikTok, Snapchat and Facebook have collected through these filters. Though both Facebook and Snapchat say they do not use filter technology to collect personally identifiable data, a review of their privacy policies shows that they do indeed have the right to store data from the photographs and videos on the platforms. Snapchat’s policy says that snaps and chats are deleted from its servers once the message is opened or expires, but stories are stored longer. Instagram stores photo and video data as long as it wants or until the account is deleted; Instagram also collects data on what users see through its camera.

Meanwhile, these companies continue to concentrate on AR. In a speech to investors in February 2021, Snapchat co-founder Evan Spiegel said, “Our camera is already capable of extraordinary things. But it is augmented reality that’s driving our future.” He added that the company is “doubling down” on augmented reality in 2021, calling the technology “a utility.”

And while both Facebook and Snapchat say that the facial detection systems behind filters don’t connect back to the identity of users, it’s worth remembering that Facebook’s smart photo tagging feature—which looks at your pictures and tries to identify people who might be in them—was one of the earliest large-scale commercial uses of facial recognition. And TikTok recently paid $92 million to settle a lawsuit alleging that the company had misused facial recognition for ad targeting. A spokesperson from Snapchat said, “Snap’s Lens product does not collect any identifiable information about a user and we can’t use it to tie back to, or identify, individuals.”

And Facebook in particular sees facial recognition as part of its AR strategy. In a January 2021 blog post titled “No Looking Back,” Andrew Bosworth, the head of Facebook Reality Labs, wrote: “It’s early days, but we’re intent on giving creators more to do in AR and with greater capabilities.” The company’s planned release of AR glasses is highly anticipated, and it has already teased the possible use of facial recognition as part of the product.

In light of all the effort it takes to navigate this complex world, Sophia and Veronica say they just wish they were better educated about beauty filters. Besides their parents, no one ever helped them make sense of it all. “You shouldn’t have to get a specific college degree to figure out that something could be unhealthy for you,” Veronica says.


Why detecting AI-generated text is so difficult (and what to do about it)

This tool is OpenAI’s response to the heat it’s gotten from educators, journalists, and others for launching ChatGPT without any way to detect text it has generated. However, it is still very much a work in progress, and it is woefully unreliable: OpenAI says its detector correctly identifies only 26% of AI-written text as “likely AI-written.”

While OpenAI clearly has a lot more work to do to refine its tool, there’s a limit to just how good it can make it. We’re extremely unlikely to ever get a tool that can spot AI-generated text with 100% certainty. It’s really hard to detect AI-generated text because the whole point of AI language models is to generate fluent and human-seeming text, and the model is mimicking text created by humans, says Muhammad Abdul-Mageed, a professor who oversees research in natural-language processing and machine learning at the University of British Columbia.

We are in an arms race to build detection methods that can match the latest, most powerful models, Abdul-Mageed adds. New AI language models are more powerful and better at generating even more fluent language, which quickly makes our existing detection tool kit outdated. 

OpenAI built its detector by creating a whole new AI language model akin to ChatGPT that is specifically trained to detect outputs from models like itself. Although details are sparse, the company apparently trained the model with examples of AI-generated text and examples of human-generated text, and then asked it to spot the AI-generated text. We asked for more information, but OpenAI did not respond. 
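OpenAI hasn’t published details, but the general recipe it describes (collect labeled examples of human-written and AI-generated text, then train a model to tell them apart) can be sketched in a few lines. The toy below uses scikit-learn and invented placeholder sentences rather than a fine-tuned language model, so it illustrates only the shape of the approach, not OpenAI’s actual detector.

```python
# Minimal sketch of classifier-based detection: train on labeled examples of
# human-written (0) and AI-generated (1) text, then score new text.
# The training sentences are invented placeholders for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "honestly the ending of that movie wrecked me, did not see it coming",
    "As an AI language model, I can provide a structured overview of the topic.",
    "we got rained out so the hike turned into three hours of diner coffee",
    "In conclusion, there are several key factors to consider when evaluating this issue.",
]
labels = [0, 1, 0, 1]

detector = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
detector.fit(texts, labels)

sample = "In summary, it is important to weigh the advantages and disadvantages carefully."
prob_ai = detector.predict_proba([sample])[0][1]
print(f"Estimated probability the sample is AI-generated: {prob_ai:.2f}")
```

A bag-of-words model like this is far too crude to be useful in practice, which is part of why even OpenAI’s much larger classifier still misses most AI-written text.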

Last month, I wrote about another method for detecting text generated by an AI: watermarks. These act as a sort of secret signal in AI-produced text that allows computer programs to detect it as such. 

Researchers at the University of Maryland have developed a neat way of applying watermarks to text generated by AI language models, and they have made it freely available. These watermarks would allow us to tell with almost complete certainty when AI-generated text has been used. 
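The Maryland scheme works at the level of the language model’s vocabulary: at each step, a hash of the preceding token pseudorandomly splits the vocabulary into a “green” and a “red” list, the generator softly favors green tokens, and a detector that knows the hashing rule simply counts green tokens and checks whether there are far more than chance would allow. As a rough, hypothetical sketch of the detection statistic only (using whitespace tokens and a stand-in hash rather than the researchers’ released code), it might look something like this:

```python
import hashlib

GREEN_FRACTION = 0.5  # fraction of the vocabulary marked "green" at each step


def is_green(prev_token: str, token: str) -> bool:
    # Stand-in for the paper's hashing: the previous token seeds a pseudorandom
    # split that puts roughly GREEN_FRACTION of all tokens on the green list.
    digest = hashlib.sha256(f"{prev_token}|{token}".encode()).digest()
    return digest[0] < GREEN_FRACTION * 256


def green_z_score(text: str) -> float:
    # Count how many tokens land on the green list and compare the count with
    # the binomial expectation for ordinary, unwatermarked text.
    tokens = text.split()
    pairs = list(zip(tokens, tokens[1:]))
    hits = sum(is_green(prev, tok) for prev, tok in pairs)
    n = len(pairs)
    mean = n * GREEN_FRACTION
    std = (n * GREEN_FRACTION * (1 - GREEN_FRACTION)) ** 0.5
    return (hits - mean) / std  # a large z-score suggests watermarked text


# Ordinary text should hover near z = 0; text generated with the matching
# green-list bias would score several standard deviations higher.
print(green_z_score("the quick brown fox jumps over the lazy dog again and again"))
```

Because each token of unwatermarked text is essentially an independent coin flip under this rule, a long passage packed with green tokens produces a z-score that is all but impossible by chance, which is why watermark detection can be so confident.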

The trouble is that this method requires AI companies to embed watermarking in their chatbots right from the start. OpenAI is developing these systems but has yet to roll them out in any of its products. Why the delay? One reason might be that it’s not always desirable to have AI-generated text watermarked. 

One of the most promising ways ChatGPT could be integrated into products is as a tool to help people write emails or as an enhanced spell-checker in a word processor. That’s not exactly cheating. But watermarking all AI-generated text would automatically flag these outputs and could lead to wrongful accusations.


The original startup behind Stable Diffusion has launched a generative AI for video

Set up in 2018, Runway has been developing AI-powered video-editing software for several years. Its tools are used by TikTokers and YouTubers as well as mainstream movie and TV studios. The makers of The Late Show with Stephen Colbert used Runway software to edit the show’s graphics; the visual effects team behind the hit movie Everything Everywhere All at Once used the company’s tech to help create certain scenes.  

In 2021, Runway collaborated with researchers at the University of Munich to build the first version of Stable Diffusion. Stability AI, a UK-based startup, then stepped in to pay the computing costs required to train the model on much more data. In 2022, Stability AI took Stable Diffusion mainstream, transforming it from a research project into a global phenomenon. 

But the two companies no longer collaborate. Getty is now taking legal action against Stability AI—claiming that the company used Getty’s images, which appear in Stable Diffusion’s training data, without permission—and Runway is keen to keep its distance.

Gen-1 represents a new start for Runway. It follows a smattering of text-to-video models revealed late last year, including Make-a-Video from Meta and Phenaki from Google, both of which can generate very short video clips from scratch. It is also similar to Dreamix, a generative AI from Google revealed last week, which can create new videos from existing ones by applying specified styles. But at least judging from Runway’s demo reel, Gen-1 appears to be a step up in video quality. Because it transforms existing footage, it can also produce much longer videos than most previous models. (The company says it will post technical details about Gen-1 on its website in the next few days.)   

Unlike Meta and Google, Runway has built its model with customers in mind. “This is one of the first models to be developed really closely with a community of video makers,” says Valenzuela. “It comes with years of insight about how filmmakers and VFX editors actually work on post-production.”

Gen-1, which runs on the cloud via Runway’s website, is being made available to a handful of invited users today and will be launched to everyone on the waitlist in a few weeks.

Last year’s explosion in generative AI was fueled by the millions of people who got their hands on powerful creative tools for the first time and shared what they made with them. Valenzuela hopes that putting Gen-1 into the hands of creative professionals will soon have a similar impact on video.

“We’re really close to having full feature films being generated,” he says. “We’re close to a place where most of the content you’ll see online will be generated.”


When my dad was sick, I started Googling grief. Then I couldn’t escape it.

I am a mostly visual thinker, and thoughts pose as scenes in the theater of my mind. When my many supportive family members, friends, and colleagues asked how I was doing, I’d see myself on a cliff, transfixed by an omniscient fog just past its edge. I’m there on the brink, with my parents and sisters, searching for a way down. In the scene, there is no sound or urgency and I am waiting for it to swallow me. I’m searching for shapes and navigational clues, but it’s so huge and gray and boundless. 

I wanted to take that fog and put it under a microscope. I started Googling the stages of grief, and books and academic research about loss, from the app on my iPhone, perusing personal disaster while I waited for coffee or watched Netflix. How will it feel? How will I manage it?

I started, intentionally and unintentionally, consuming people’s experiences of grief and tragedy through Instagram videos, various newsfeeds, and Twitter testimonials. It was as if the internet secretly teamed up with my compulsions and started indulging my own worst fantasies; the algorithms were a sort of priest, offering confession and communion. 

Yet with every search and click, I inadvertently created a sticky web of digital grief. Ultimately, it would prove nearly impossible to untangle myself. My mournful digital life was preserved in amber by the pernicious personalized algorithms that had deftly observed my mental preoccupations and offered me ever more cancer and loss. 

I got out—eventually. But why is it so hard to unsubscribe from and opt out of content that we don’t want, even when it’s harmful to us? 

I’m well aware of the power of algorithms—I’ve written about the mental-health impact of Instagram filters, the polarizing effect of Big Tech’s infatuation with engagement, and the strange ways that advertisers target specific audiences. But in my haze of panic and searching, I initially felt that my algorithms were a force for good. (Yes, I’m calling them “my” algorithms, because while I realize the code is uniform, the output is so intensely personal that they feel like mine.) They seemed to be working with me, helping me find stories of people managing tragedy, making me feel less alone and more capable. 

In my haze of panic and searching, I initially felt that my algorithms were a force for good. They seemed to be working with me, making me feel less alone and more capable. 

In reality, I was intimately and intensely experiencing the effects of an advertising-driven internet, which Ethan Zuckerman, the renowned internet ethicist and professor of public policy, information, and communication at the University of Massachusetts at Amherst, famously called “the Internet’s Original Sin” in a 2014 Atlantic piece. In the story, he explained the advertising model that brings revenue to content sites that are most equipped to target the right audience at the right time and at scale. This, of course, requires “moving deeper into the world of surveillance,” he wrote. This incentive structure is now known as “surveillance capitalism.” 

Understanding how exactly to maximize the engagement of each user on a platform is the formula for revenue, and it’s the foundation for the current economic model of the web. 
