Meta, the parent company of Facebook and Instagram, is launching a new facial recognition trial aimed at curbing the rising use of celebrity images in fraudulent advertisements.
Designed to tackle “celeb-bait” scams, the trial, set to begin in December, will enrol around 50,000 public figures, whose Facebook profile pictures will be compared against images used in suspected scam ads.
If a match is found, Meta will block the offending advertisement. Celebrities involved will be notified and given the option to opt out of the programme.
The initiative will be implemented globally, with exceptions in regions such as the UK, the EU, South Korea, and select U.S. states where regulatory clearance is still pending.
Monika Bickert, Meta’s vice president of Content Policy, noted that the new feature is one of the company’s tactics to protect public figures from scams that exploit their likeness without consent. “We aim to provide as much protection as possible,” Bickert stated, adding that public figures can easily opt out if they prefer not to participate.
The announcement follows a challenging period for Meta in addressing privacy issues. In 2021, the company shut down its previous facial recognition system amid societal concerns, and more recently it faced a $1.4 billion settlement in Texas over accusations of improperly collecting biometric data.
Despite these setbacks, Meta is pushing forward with this targeted use of facial recognition technology, aiming to strike a balance between fighting the growing threat of scam ads and respecting privacy boundaries.
Meta has promised to delete all facial data used in the comparison process immediately after each check, ensuring that it will not be stored or reused for other purposes.
In addition to tackling scams, the company is also testing facial recognition as a way to help users regain access to compromised accounts, offering a more secure alternative to traditional document-based verification methods.