## AI Voice Cloning Fuels Rise in Phone Scams Targeting High-Profile Figures
**Rome, Italy** – A new wave of sophisticated phone scams leveraging advanced AI-generated “deepfake” audio is targeting wealthy individuals and business leaders, raising serious concerns about the potential for fraud and misinformation.
Earlier this year, several prominent Italian businessmen were contacted by someone impersonating Defence Minister Guido Crosetto and seeking financial assistance to free Italian journalists who had supposedly been kidnapped. The voice, a convincing imitation of Crosetto's, asked that a large sum of money be wired to an overseas bank account. Crosetto became aware of the scam only when targeted individuals reached out to him.
This incident highlights the alarming advances in AI technology capable of creating ultra-realistic voice clones that are indistinguishable from real human voices. Research from Queen Mary University of London confirms that AI can generate highly convincing fake audio: participants in a study frequently mistook AI-generated voices for genuine ones and even rated them as more trustworthy.
One victim of the deepfake scam was Massimo Moratti, former owner of the Inter Milan football club, who wired the requested funds before authorities could intervene and freeze the transfer. Moratti has since filed a legal complaint, stating, “It all seemed real. They were good. It could happen to anyone.”
Experts warn that this type of fraud is on the rise. Resemble AI, a California-based AI company, estimates that global losses due to deepfake scams reached over $547 million in the first half of this year alone, a sharp increase from previous quarters.
Beyond financial fraud, the potential for misuse extends to fake news, political manipulation, and the creation of non-consensual sexual content. The proliferation of deepfakes online is expected to explode in the coming years, with DeepMedia estimating that eight million deepfakes will be created and shared online by the end of 2025.
Governments are beginning to respond to the threat. In the United States, it is now a federal crime to publish intimate images of a person without their consent, including AI-generated deepfakes. Australia has also announced a ban on applications used to create deepfake nude images.
As AI technology continues to advance, experts are urging caution and increased awareness to combat the growing threat of deepfake fraud and its potential to undermine trust in digital communication.