
Meta tests facial recognition for spotting “celeb-bait” ad scams and easier account recovery

Meta, Facebook’s owner, announced on Monday that it is expanding its tests of facial recognition to fight celebrity scam ads.

Monika Bickert, Meta’s VP of content policy, wrote in a blog post that some of the tests aim to bolster its existing anti-scam measures, such as the automated scans (using machine learning classifiers) run as part of its ad review system, to make it harder for fraudsters to fly under its radar and dupe Facebook and Instagram users to click on bogus ads.

“Scammers often try to use images of public figures, such as content creators or celebrities, to bait people into engaging with ads that lead to scam websites where they are asked to share personal information or send money,” she wrote, adding that this scheme, known as “celeb bait,” violates Meta’s policies and is bad for people who use its products. “Of course, many legitimate ads feature celebrities. But because celeb-bait ads are often designed to look real, it’s not always easy to detect them.”

The tests appear to use facial recognition as a backstop for checking ads that existing Meta systems have flagged as suspect because they contain the image of a public figure at risk of so-called celeb-bait.

“We will try to use facial recognition technology to compare faces in the ad against the public figure’s Facebook and Instagram profile pictures,” Bickert wrote. The feature will only be used to combat scam ads, Meta claims. Bickert said the company immediately deletes any facial data generated for this one-time comparison, regardless of whether the system finds a match, and does not use the facial data for any other purpose. It’s an interesting time for Meta to push facial recognition-based measures to combat fraud, as the company is also attempting to gather as much data as it can from users to train its commercial AI models.

In the coming weeks, Meta said, it will start displaying in-app notifications to a larger group of public figures who’ve been hit by celeb-bait — letting them know they’re being enrolled in the system.

“Public figures enrolled in this protection can opt-out in their Accounts Center anytime,” Bickert noted.

Meta is also testing the use of facial recognition for spotting celebrity imposter accounts — for example, where scammers seek to impersonate public figures on the platform in order to expand their opportunities for fraud — again by using AI to compare profile pictures on a suspicious account against a public figure’s Facebook and Instagram profile pictures.

“We hope to test this and other new approaches soon,” Bickert added.

Video selfies plus AI for account unlocking

Additionally, Meta has announced that it’s trialling the use of facial recognition applied to video selfies to enable faster account unlocking for people who have been locked out of their Facebook/Instagram accounts after they’ve been taken over by scammers (such as if a person were tricked into handing over their passwords).

This looks intended to appeal to users by promoting the apparent utility of facial recognition tech for identity verification — with Meta implying it will be a quicker and easier way to regain account access than uploading an image of a government-issued ID (the usual route for unlocking access now).

“Video selfie verification expands on the options for people to regain account access, only takes a minute to complete and is the easiest way for people to verify their identity,” Bickert said, arguing this method of verification will be harder for hackers to abuse than traditional document-based verification. She said that as soon as a user uploads a video selfie, it is encrypted and stored securely. “We immediately delete any facial data generated after this comparison regardless of whether there’s a match or not.”

Conditioning users to upload and store a video selfie for ID verification could be one way for Meta to expand its offerings in the digital identity space — if enough users opt in to uploading their biometrics.

No tests in UK or EU — for now

Meta says it is running all these facial recognition tests globally — but notably not in the UK or the EU, where strict data protection laws apply. In the case of biometrics used for ID verification, data protection rules in the EU require explicit consent from the individual. The testing appears to be part of a wider PR campaign Meta has been waging in Europe to pressure local legislators to weaken privacy protections for citizens. This time, the cause it’s invoking to press for unfettered data processing for AI is not a (self-serving) notion of data diversity or claims of lost economic growth but the more straightforward goal of combating scammers.

“We are engaging with the U.K. regulator, policymakers and other experts while testing moves forward,” Meta spokesman Andrew Devoy told TechCrunch. “We’ll continue to seek feedback from experts and make adjustments as the features evolve.”

However, while the use of facial recognition for a narrow security purpose might be acceptable to some — and, indeed, might be possible for Meta to undertake under existing data protection rules — using people’s data to train commercial AI models is a whole other kettle of fish.


Editorial Staff

Founded in 2020, Millenial Lifestyle Magazine is both a print and digital magazine offering our readers the latest news, videos, and thought-pieces on various Millenial Lifestyle topics.
