Artificial intelligence is rapidly transforming the landscape of fraud, making it easier than ever for scammers to impersonate real people, create fake identities and replicate legitimate websites, according to cybersecurity experts and a demonstration conducted by CBS News California Investigates.

From identity theft involving rideshare drivers to fake businesses created to secure bank loans and credit cards, experts estimate that roughly half of these scams now involve AI tools, including deepfake technology.

Traditional advice for spotting scams is no longer reliable, as scammers grow increasingly sophisticated. Requesting a video chat was once an easy way to unmask a catfishing scammer; now, fraudsters can morph into anyone they want with ease.

Soups Ranjan, CEO of fraud prevention company Sardine, warned that AI-driven fraud is poised for rapid growth. “AI-generated fraud is going to be the big growth industry of all time,” Ranjan said, adding that “it is really easy nowadays to create a deepfake video of someone else.”

During a demonstration, Ranjan and his team showed how readily available apps can transform a person’s appearance in real time. Using a consumer app, they altered consumer investigator Kristine Lazar’s image to resemble pop star Taylor Swift, creating a deepfake within minutes. The video would likely fool anyone who does not know Lazar personally or recognize her mannerisms.

The same technology can be used in more sophisticated schemes, including impersonating high-profile figures such as Elon Musk or posing as someone during video-based identity verification checks, experts said.

Fraudsters can also generate fake identification documents using widely accessible online tools. In the demonstration, a fabricated passport was created with publicly available software, not tools from the dark web, and populated with a mix of false and real personal information.
“This is a really fast-growing website that’s used to basically create fake digital identity documents,” said Matt Vega, chief of staff at Sardine.

Even individuals who take steps to protect their personal information may still be vulnerable. Vega said small digital traces, such as a social media post, can expose key details.

“You basically have wiped the internet of your date of birth,” Vega said. “However, if you go back a few years on your Facebook, someone accidentally wished you a happy birthday on the wrong date, and that allowed me to identify what your birthday was.”

He added that scammers can combine that information with data obtained from breaches to create documents that could pass many verification systems. “I can create a very realistic-looking document that can be used almost to get approved 100% of the time,” Vega said.

AI tools are also being used to clone legitimate websites. By taking screenshots of a real site, scammers can quickly generate a near-identical version designed to steal user credentials or financial information. “It doesn’t matter what security system they’re using,” Vega said. “Using AI, [we can] create basically a copy replica of it within minutes.”

Despite advances in detection technology, experts say the average consumer remains highly vulnerable. While companies like Sardine are developing tools to identify deepfakes in real time, individuals must rely on spotting subtle warning signs: glitches in the video, unnatural facial movement or a lack of blinking can all indicate that the content is fake.

The rapid evolution of AI-driven fraud underscores the need for heightened vigilance online, experts say, as scammers continue to exploit increasingly sophisticated technology.
Deep fake demonstration shows sophistication of AI editing tools scammers use to commit identity theft

Source: https://www.cbsnews.com/losangeles/news/deep-fakes-ai-tools-scammers-identity-theft-cybersecurity/