Since the internet began, there has been debate about how websites can ensure that explicit content is only accessed by consenting adults. Whether it’s explicit songs on Spotify or outright violence on TikTok, much of this content has been accessible to anyone.

With recent legislation such as the UK’s Online Safety Act and similar laws in the US, major internet platforms, including Reddit, Spotify, and YouTube, have introduced AI-powered age verification and estimation tools. Pornhub, whose parent company Aylo owns and operates a number of studios and streaming platforms, has also begun reassessing whether to comply with the age verification laws that have led it to block access in over a dozen US states.

The processes for both age verification and estimation involve sending sensitive personal information to the platform you’re trying to access. Age estimation requires one or more photos of your face, which the platform’s software uses to estimate your age. Age verification is more precise, but it requires submitting a photo of your government-issued ID, one of the most sensitive documents you can hand over, to the platform.

These tools typically use AI-powered facial recognition, similar to the technology that law enforcement and apps like Google Photos have used for years. And as we’ve seen with other AI tools, the results can be harmful for users when there’s no human oversight or a reasonable appeals process. If the AI tool estimates your age incorrectly, it can prevent you from accessing content, or worse.

You can’t simply opt out, either. Declining to use these tools typically carries a penalty. Most platforms will merely block an account from viewing 18+ content, but some, like Spotify, will deactivate or delete your account if your age is estimated incorrectly or if you refuse the age check altogether.

A Spotify spokesperson tells us that users are given “a 90-day period to allow sufficient time for those who are over the minimum age to take the steps required to pass the ID check. If they do not participate in the age check during this time, their account may be deleted.”

As with any tool that walls people off from certain content, it’s hard not to wonder how far these companies could, or would, take systems that are ostensibly meant for child safety. Companies like Spotify currently promise to delete the photos and IDs their users upload for age estimation or verification, but will that always be the case? Can you really trust these companies to keep your data safe?

Should You Trust AI With Your Face?

It may sound like paranoia, but it is a fair question. Your government ID and selfies definitely count as sensitive data. Platforms come and go, and they don’t always tidy up after themselves when they leave. The internet is littered with defunct websites that are an expired domain or misconfigured storage bucket away from spilling all of their users’ personally identifiable information. When 23andMe filed for bankruptcy, it left many users concerned for the safety of their genetic data for this exact reason.

So if, for instance, Reddit is committed to keeping its users’ data safe now as an active website, will it be as conscientious about that data if it shuts down? When good intentions are not backed up with action, the results can be disastrous. The Tea app, which was ostensibly created to help keep women safer while dating, ended up doing the exact opposite when 72,000 of its users’ selfies and identification photos were leaked in a hack.

“When used for age estimation, facial scanning is often inaccurate. It’s in the name: age estimation.”
– Adam Schwartz, privacy litigation director at the EFF

Even when companies claim to delete sensitive data or never retain it, this data can still be at risk. For example, the recent Discord hack exposed age verification information, including 70,000 government IDs. The hack was accomplished by breaching a third-party company, 5CA, which Discord contracted to bolster its customer service. 

Tech Companies Say ‘Trust Us’

The companies adding age verification and estimation to their platforms at least claim to take user data seriously. We reached out to Google for comment on how it plans to keep users’ age estimation and verification data safe on YouTube and got this back: “Google uses advanced security measures to protect user data against threats, and you can choose the privacy settings that are right for you, including deleting your data.” That statement doesn’t say much about what those security measures actually are.

While Google is building and operating its age verification technology in-house, Spotify has partnered with digital identity firm Yoti to manage these processes. In response to questions about the steps it’s taking to keep user data safe, a Spotify spokesperson says, “Spotify does not keep any data (face scans or ID verification) provided by the user for the age checks conducted through Yoti’s integration. The data is provided directly by the user to Yoti, and Yoti immediately deletes the data after the age check is complete. The user is informed of this. Spotify keeps limited data on the outcome of the age assurance check—only the age in years (not date of birth), the method of age check (facial age estimation or ID verification), and the date of the check.”

Additionally, the spokesperson says that Spotify has implemented “pseudonymization, encryption, access, and retention policies” to “guard against unauthorized access and unnecessary retention of personal data in our systems.” Yoti also makes some of the details of its data encryption and security measures available, including the use of TLS 1.2 by default and TLS 1.3 where supported.


Privacy Experts Say ‘Not So Fast’

However, online privacy experts don’t think any amount of “advanced security measures” is enough to justify the use of these technologies. In a statement provided to PCMag, Adam Schwartz, privacy litigation director at the Electronic Frontier Foundation (EFF), warns that “algorithmic face scans” are “dangerous, whether used to estimate our age, our name, or other demographics.” He adds that the EFF is in favor of “a ban on government use of this technology, and strict regulation… for corporate use.”

Schwartz highlights several flaws associated with the use of facial recognition technology. “When used for age estimation, facial scanning is often inaccurate,” he says. “It’s in the name: age estimation. That means these face scans will regularly mistake adults for adolescents, and wrongfully deny them access to restricted websites.”

There is also the issue of discrimination, whether it’s Google Photos mislabeling a photo of Black software engineer Jacky Alciné and his friend as “gorillas” or the many examples of facial recognition software incorrectly identifying suspects for the police. Face scanning, as Schwartz puts it, is “more likely to err in estimating the age of people of color and women.” As a result, he adds, “these face scans will have an unfair disparate impact.”

Schwartz says that age estimation scans “create new threats to our privacy and information security,” pointing out “our faces are unique, immutable, and constantly on display—creating risk of biometric tracking across innumerable virtual and real-life contexts” and that “at least one age verification vendor has already experienced a reported breach.”

Platforms Have Struggled, Too

Privacy advocates, such as the EFF, aren’t the only ones with concerns about age verification and estimation. Content platforms have also felt the strain of compliance with these new laws. “We have publicly supported age verification of users for years,” a representative for Aylo (parent company of Pornhub and other adult content services) tells us in a statement, “but we believe that any law to this effect must preserve user safety and privacy, and must effectively protect children from accessing content intended for adults.”

However, the spokesperson says that the regulations enacted by governments like the UK have been “ineffective, haphazard, and dangerous.” Asking platforms like Aylo’s Pornhub to collect the sensitive PII required for age verification “[puts] user safety in jeopardy.” “Moreover,” they continue, “as experience has demonstrated, unless properly enforced, users will simply access non-compliant sites or find other methods of evading these laws.”

The representative details Aylo’s experience complying with these laws, specifically citing the company’s efforts to comply with regulations in the UK and the state of Louisiana. In both cases, Aylo found that its traffic dropped by nearly 80% after implementing age verification measures. “These people did not stop looking at porn,” the representative explains. “They just migrated to darker corners of the internet that don’t ask users to verify age, that don’t follow the law, that don’t take user safety seriously, and that often don’t even moderate content. In practice, the laws have just made the internet more dangerous for adults and children.”

Aylo didn’t just air its grievances with age verification laws, though. The representative proposes what Aylo and many privacy advocates believe would be the “best solution to make the internet safer, preserve user privacy, and prevent children from accessing adult content”: perform age verification on the device itself.

“These people did not stop looking at porn… In practice, the laws have just made the internet more dangerous for adults and children.”
– Spokesperson for Aylo, owners of Pornhub

The spokesperson points out that “the technology to accomplish this exists today” and that many internet-capable devices “offer free and easy-to-use parental control features that can prevent children from accessing adult content without risking the disclosure of sensitive user data.” All that’s missing is “the political and social will to make it happen.”

Where’s the Oversight?

Because the process is handed off to AI, users often have little recourse when their age is incorrectly estimated. As mentioned above, Spotify’s policy in these cases is to either delete your account or require your government-issued ID to rectify the issue. That’s on top of broader concerns, such as the risk that the vast datasets these AI systems require to operate could be exposed in a catastrophic breach.

Age estimation and verification are not without their workarounds and alternatives. Discord users have bypassed the platform’s age estimation process by uploading photos of video game characters. A good VPN can also help you reach age-restricted content on websites like YouTube just as easily as it unlocks region-locked content on those same sites. YouTube alternatives like NewPipe exist, too, and can sometimes offer an equivalent or better experience, though most won’t match the interface polish of the major platforms they aim to replace.

On the other hand, finding alternatives to the web’s most popular services is difficult, and you can’t be blamed for wanting to use the platforms and tools that everyone else uses, even knowing the risks. It’s just important to recognize that companies will always prioritize their own needs and safety over yours, and that uploading sensitive information to the internet will always carry risk, no matter how safe these companies claim your data is in their hands.

About Our Expert

Zephin Livingston

Contributor

Experience

I’ve been covering the tech industry, with a focus on cybersecurity and AI, since 2021. While most of my work has been on the reporting side, I have also written product reviews, how-to guides, and articles on consumer cybersecurity technology. In addition to PCMag, my work has been published at WIRED, eWeek, Little Village Magazine, IT Business Edge, eSecurity Planet, and FinTech Futures.

I primarily cover cybersecurity and AI, focusing on Windows and Android, although I also have experience with macOS, iOS, and Linux. I also cover video games and the communities built within or around them. 

Most of my work is done on an Acer laptop, though I also use my Samsung Galaxy phone from time to time. On the gaming side of things, I have a PS5 and a Switch. 
