Hyper-realistic beauty filters are here to stay

This article is from The Technocrat, MIT Technology Review’s weekly tech policy newsletter about power, politics, and Silicon Valley. To receive it in your inbox every Friday, sign up here.

You might think that the latest viral example, the Bold Glamour beauty filter on TikTok, isn’t relevant to you. But I’d like to kindly disagree. Let me explain why we should all care about these sorts of augmented-reality (AR) filters, regardless of whether we use them or not. 

The Bold Glamour filter, now used over 16 million times since its release last month, contours your cheekbone and jawline in a sharp but subtle line. It also highlights the tip of your nose, the area under your eyebrows, and the apples of your cheeks. In addition, it lifts your eyebrows, applies a shimmer to your eyelids, and gives you thick, long, black eyelashes. It has, as the name implies, a glamorous effect.

The aesthetic itself is impressive, but what's really remarkable is how well it works. The filter doesn't glitch when your face moves or when something like a waving hand crosses the frame, the way filters usually do.

“You guys. This is a problem. You can’t even tell it’s a filter anymore,” lamented user @rosaura_alvrz as she patted her face to test the filter in a review on TikTok. 

Professional filter and AR creator Florencia Solari says Bold Glamour likely employs machine learning, and though it’s not the first time an AI filter has made waves, she says, “The experience with these filters is so seamless, and can achieve such a convincing level of reality, that it’s not a surprise people are freaking out.”

And, indeed, people are freaking out. So how concerned should we be about this distorted-reality world?

First, some context. For years, augmented-reality filters on social media sites like Snap, Instagram, and TikTok have allowed users to easily edit their pictures and videos with preset characteristics that often perpetuate specific beauty standards like plump lips, hollow cheeks, thin noses, and wide eyes. 

Beauty filters, in conjunction with influencer culture and algorithmic amplification, have led to a rapid narrowing of beauty standards in a way that prioritizes whiteness and thinness.  

Young people love using filters (the latest numbers I have from Meta show that over 600 million people have used at least one of its AR products), but there’s minimal research into the effects on our mental health, identity, and behavior. 

The research that we do have indicates some serious risks. Girls are more likely than boys to use filters for beautification rather than for play starting at an early age, and social media is known to have negative effects on the mental health and body image of young people. According to a survey conducted by beauty brand Dove, 80% of girls had used filters or photo editing to change their appearance online by the age of 13. 

Still, filters have been around for years. So why are we talking about this now? Bold Glamour could usher in a new age of high-tech, hyper-realistic beauty filters. 

We’re likely to see these filters increasingly make use of recent advances in machine learning, specifically generative adversarial networks (known as GANs), in combination with the facial detection technology that’s standard to face filters. (Read this great story by Jess Weatherbed and Mia Sato that goes into the tech in depth!) The results are so ultra-realistic it’s going to become harder and harder to distinguish what’s real from what’s not. 
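To make that concrete, here is a minimal sketch, in Python, of how such a filter could be wired together: a face detector finds the face in each frame, and a learned image-to-image generator (the kind a GAN training setup would produce) retouches just that region before it's pasted back. This is an illustration, not TikTok's actual pipeline, which isn't public; the FaceRetoucher network, its weights, and the apply_filter helper are hypothetical, and OpenCV's stock Haar cascade stands in for the far more sophisticated face tracking a production filter would use.

```python
# Illustrative only: a hypothetical FaceRetoucher generator plus OpenCV face
# detection, sketching how a GAN-style beauty filter could be structured.
import cv2
import torch
import torch.nn as nn


class FaceRetoucher(nn.Module):
    """Toy image-to-image network standing in for a trained GAN generator."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Tanh(),
        )

    def forward(self, x):
        # Predict a small residual "retouch" and add it to the input face crop.
        return torch.clamp(x + 0.1 * self.net(x), 0.0, 1.0)


# Stock Haar cascade as a stand-in for a production face tracker.
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
generator = FaceRetoucher().eval()  # a real filter would load trained weights


def apply_filter(frame_bgr):
    """Detect faces, run each crop through the generator, paste the result back."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in detector.detectMultiScale(gray, 1.1, 5):
        crop = frame_bgr[y:y + h, x:x + w]
        # BGR uint8 -> RGB float tensor in [0, 1], shape (1, 3, H, W).
        tensor = torch.from_numpy(crop[..., ::-1].copy()).float()
        tensor = tensor.permute(2, 0, 1).unsqueeze(0) / 255.0
        with torch.no_grad():
            out = generator(tensor).squeeze(0)
        # Back to BGR uint8 and composite over the original frame.
        out = (out.permute(1, 2, 0).numpy()[..., ::-1] * 255).astype("uint8")
        frame_bgr[y:y + h, x:x + w] = out
    return frame_bgr
```

In a real system, the detector would be a dense face-mesh tracker and the generator would be trained on paired "before and after" images of the desired look, which is part of why an effect like Bold Glamour can stay locked to the face even when a hand passes in front of it.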

Last month, TikTok released new generative AI tools that help people create filters for the platform, though the company would not tell us whether Bold Glamour does indeed make use of generative AI. 

Rather than answering our questions, a TikTok spokesperson provided a statement that read, “Being true to yourself is celebrated and encouraged on TikTok. Creative Effects are a part of what makes it fun to create content, empowering self-expression and creativity. Transparency is built into the effect experience, as all videos using them are clearly marked by default.”

But there’s a fraught debate about whether filters enable self-expression or cause users, particularly young girls, to hold themselves to unattainable ideals. Florencia Solari, the professional filter creator, has thought about this a lot. (I spoke with her in more depth for a story this past summer.)

“As a technologist, I see this as part of the natural evolution of filters as a whole … if we look at this in terms of innovation, it’s really exciting that we’re able to build this kind of thing,” she says. But as generative AI finds its way into more parts of our lives, having serious discussions about its impacts is also important. 

“It’s good that people are freaking out. It’s raising awareness. Is that really beauty? Do we really want young girls to dream about becoming the clone of a clone?” she asks.

What else I’m reading

Most Estonians voted online in a recent parliamentary election.

  • Estonia has long had one of the world’s most digitally advanced governments, but online voting at this scale remains a remarkable outlier among elections worldwide. Over 50% of the votes cast came in through encrypted “i-voting,” as J.D. Capelouto writes in Semafor. 

A great investigation by Wired and Lighthouse Reports gets deep under the hood of a welfare algorithm used in the Netherlands and finds that it discriminates based on ethnicity and gender. 

Twitter quietly updated its policies to prohibit threatening tweets. 

  • Brian Merchant of the LA Times writes that Elon Musk’s dream of free speech on Twitter is dead. Merchant writes, “It died as it lived: confusingly, underwhelmingly, and at the vainglorious whims of the man who dreamt it.”

What I learned this week

A much-debated surveillance program that allows the NSA and FBI to monitor communications for intelligence gathering without warrants has been reined in over the past few years, according to a new government report shown to the New York Times. The program and the corresponding statute that legalizes it, called Section 702, came out of an initially secret wiretapping program under the Bush administration after 9/11. 

Section 702 is set to expire at the end of December 2023 unless Congress renews it, a prospect that has already been subject to much debate. Last year, the FBI reported that it conducted fewer than 3.4 million searches as part of the program in 2021, but it set about limiting the program after a judge accused it of widespread malfeasance. 

The new report is not public and does not provide specific numbers but describes a “dramatic decrease” in the number of searches since 2021. You’re likely to hear much more about this in the coming months!