In a significant move to combat the rise of scams built on false celebrity endorsements, Meta, the parent company of Facebook and Instagram, has announced the introduction of facial recognition technology. The initiative aims to protect users and celebrities alike from fraudsters who exploit the identities of famous figures to promote misleading advertisements. These scams have affected notable personalities including Elon Musk and the financial advisor Martin Lewis, both of whom have reported their likenesses being misappropriated for fraudulent investment schemes, particularly in the growing realm of cryptocurrency.
Celebrity scams have concerned Meta for several years. Martin Lewis, who has been particularly vocal about the issue, says he receives countless notifications about such fraudulent activity daily and has described the heavy emotional toll these violations of his likeness take. During an appearance on BBC Radio 4’s Today programme, Lewis spoke of the distress these unauthorized uses of his identity cause him. He previously took legal action against Facebook over the problem, eventually dropping the case when the platform agreed to implement tools allowing users to report scams directly. These steps included a commitment from Facebook to provide monetary support to Citizens Advice to assist those affected by financial fraud.
To enhance its current ad review system, which uses artificial intelligence (AI) to identify potentially false endorsements, Meta is adding facial recognition capabilities. The technology works by comparing images flagged as suspicious in advertisements with the profile pictures of celebrities on Facebook and Instagram. If a match is confirmed and the advertisement is deemed fraudulent, it will be automatically removed. Meta has said that “early testing” of the facial recognition system has yielded encouraging results, and as part of the initiative the company plans to send in-app alerts to a growing number of public figures affected by what has become known in the industry as “celeb-bait.”
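Conceptually, this kind of check can be illustrated as a nearest-match search over face embeddings. The sketch below is purely illustrative and assumes hypothetical pre-computed embedding vectors; Meta’s actual models, features, and thresholds are not public.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two face-embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def matches_public_figure(ad_embedding, profile_embeddings, threshold=0.85):
    """1:N search: return the public figure whose profile embedding best
    matches the flagged ad image, or None if no match clears the threshold.
    (Hypothetical threshold; real systems tune this against error rates.)"""
    best_name, best_score = None, threshold
    for name, embedding in profile_embeddings.items():
        score = cosine_similarity(ad_embedding, embedding)
        if score > best_score:
            best_name, best_score = name, score
    return best_name
```

The threshold embodies the key trade-off: set too low, legitimate ads featuring lookalikes are wrongly removed; set too high, convincing scam imagery slips through.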
The problem of celebrity scams has evolved significantly. Scammers have shifted from simple impersonations to deepfake technology: sophisticated methods that generate hyper-realistic video and audio mimicking the expressions and speech patterns of actual celebrities. This evolution has made scams more convincing and consequently riskier for consumers, prompting urgent calls for regulatory oversight, particularly from figures like Martin Lewis. He recently urged the UK government to give the regulator Ofcom enhanced powers to tackle misleading ads, especially after a faked interview with a government official was used to defraud individuals.
Meta has acknowledged the adaptive nature of scammers, stating that they continuously refine their tactics to evade established safeguards. The company has committed to sharing its strategies against these purveyors of fraud in order to bolster industry-wide defenses and support a united front against online scams.
In addition to addressing celebrity scams, Meta has announced plans to use facial recognition technology to help users locked out of their social media accounts regain access. Currently, recovering access requires uploading official identification documents, a process that can be cumbersome. The new method involves a video selfie, with facial recognition used to validate it against pre-existing profile images and so expedite account restoration. This added layer of technology raises questions about privacy and security, as Meta has faced scrutiny over its past uses of facial recognition, which led to the suspension of such efforts in 2021.
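Unlike the one-to-many search used against scam ads, account recovery is a one-to-one check: does the person in the video selfie match this account’s existing photos? The sketch below illustrates one plausible shape for such a check, assuming hypothetical per-frame embeddings and a made-up majority rule; it is not Meta’s actual verification logic.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two face-embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def verify_selfie(frame_embeddings, profile_embedding, threshold=0.85):
    """1:1 verification: require a majority of video-selfie frames to
    match the account's profile-photo embedding. Threshold and the
    majority rule are illustrative assumptions, not Meta's policy."""
    matches = sum(
        1 for frame in frame_embeddings
        if cosine_similarity(frame, profile_embedding) >= threshold
    )
    return matches > len(frame_embeddings) / 2
```

Requiring agreement across multiple frames of live video, rather than a single still, is one reason a selfie video is harder to spoof with a stolen photograph.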
While Meta asserts that the technology will uphold strict data security, encrypting video selfies and securely storing the data used for identity verification, the feature will not roll out in jurisdictions without regulatory approval, which for now excludes the UK and EU. This cautious approach highlights the ongoing tension between technological innovation and user privacy rights as companies navigate the complexities of modern digital security. As social media continues to play a pivotal role in everyday communication, strategies like these underscore the imperative of maintaining user trust while countering the ever-evolving tactics of online scammers.