On Oct. 1, well-known actor Tom Hanks posted on Instagram that an ad for dental plans used a deepfake version of him. He warned his followers to “beware” of the misleading advertisement. “I have nothing to do with it,” Hanks said.
Deepfakes are created when an artificial neural network is trained on source images of someone’s face, then used to project that face onto a video of another person. Unauthorized use of someone’s likeness has already become a topic of discussion in Hollywood. Gayle King, a host of CBS Mornings, exposed a weight loss advertisement that used her image without permission. “I’ve never heard of this product or used it! Please don’t be fooled by these AI videos,” King said.
Popular YouTuber Jimmy Donaldson, better known as MrBeast, is another victim of ads that use AI to mislead people. A TikTok ad featured a deepfake Donaldson telling viewers that he was offering $2 iPhones. While some adults may be able to tell that an ad isn’t trustworthy, it can be harder for children, and scammers are aware of this. Scams like this make it easy for kids to give up their parents’ financial information.
When Donaldson became aware of the ad, he posted the video to X, formerly Twitter. “Are social media platforms ready to handle the rise of AI deep fakes? This is a serious problem,” Donaldson wrote.
Celebrity endorsement has been a marketing strategy for decades, since well-known people lend legitimacy to the services or products they are involved with. Beats by Dr. Dre demonstrated this early on by sending headphones to artists like Lady Gaga. Now, consumers looking at celebrity-endorsed products will have to make certain they are not being tricked.
As social media becomes increasingly saturated with images of celebrities and influencers, there is more source material available for making realistic deepfakes. The technology is so easy to access and use that similar incidents are likely in the future.