Senator Hassan Demands Answers From Voice-Cloning Companies After FBI Reports $893 Million In AI Scam Losses
A sitting US senator just put four AI voice-cloning companies on notice. The numbers behind the letters show why they matter.
On April 16, Senator Maggie Hassan, ranking member of the Joint Economic Committee, sent formal requests to ElevenLabs, LOVO, Speechify and VEED, asking how each company prevents scammers from turning their tools into engines of impersonation fraud. The same week, the FBI’s Internet Crime Complaint Center reported that AI-related scams accounted for $893 million in losses in 2025, across more than 22,000 complaints.
What Senator Hassan Is Asking Voice-Cloning Companies
Hassan’s letters want specifics. How do the four companies monitor for scam misuse? How do they confirm a person has consented before their voice is cloned? How do they catch attempts to imitate public figures or minors? Do they watermark AI-generated audio and keep provenance data?
These are the questions a prosecutor or a forensic examiner would ask when a scam call ends up in a case file. They are also the ones Consumer Reports flagged in March 2025, when its investigators tested six voice-cloning products and found that four of them erected no meaningful barriers to cloning someone’s voice without their consent.
Hassan quoted that finding in her letters. She also cited the AI Fraud Accountability Act, a bipartisan Senate bill introduced by Senators Lisa Blunt Rochester and Tim Sheehy that would make it a federal crime, punishable by up to three years in prison, to use digital impersonation to defraud.
ElevenLabs told Axios, which first reported the letters, that it has “an extensive array of safeguards,” including blocking the cloning of celebrity and public-figure voices and applying automated and human review to flagged content. The other three companies had not responded publicly as of this writing.
The New Hampshire Grandparent Scam Case Hassan Cited
Hassan did not cite the $893 million in the abstract. She pointed to a specific case. In June 2025, a New York man was sentenced to prison for his role in what prosecutors called an elaborate grandparent scam that stole roughly $20,000 from three New Hampshire families. The victims told the Union Leader that the callers used AI to mimic a loved one’s voice and convince family members to send bail money. One victim described hearing her son’s voice on the call, full of terror.
That is what a voice clone does when it lands in the wrong hands. It turns an audio sample, sometimes three seconds of a voicemail or a podcast clip, into a synthetic version of someone a victim trusts. By the time the family calls to verify, the money is gone.
Why AI Voice Detection Alone Cannot Close The Gap
Detection is the first thing people think of when voice clones come up. Detection work matters. It narrows down what is suspicious. But detection produces probability scores. A tool can tell you a clip is likely synthetic. It cannot tell you, to the standard a court expects, where the clip came from, what device produced it or whether the person on the caller ID actually spoke. That second set of questions is device-level digital forensics. It is what happens after a detection score raises a flag.
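To make that distinction concrete, here is a minimal sketch of what a detection tool actually hands an investigator. The function, threshold and score here are all hypothetical, not any real detector's API; the point is that the output is a triage signal, not an answer to the provenance questions.

```python
# Minimal sketch: a synthetic-audio detector returns a probability,
# and the most you can do with it is triage. Everything here is
# hypothetical and for illustration; no real detector is being called.

def triage_clip(synthetic_probability: float, threshold: float = 0.8) -> str:
    """Turn a detector's score into a triage decision.

    The score estimates how likely the clip is machine-generated.
    It says nothing about where the clip came from, what device
    produced it, or who placed the call. Those questions belong to
    device-level forensics, downstream of this flag.
    """
    if synthetic_probability >= threshold:
        return "flag for forensic examination"
    return "no flag (not the same thing as authentic)"

print(triage_clip(0.93))  # -> flag for forensic examination
```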
On a grandparent-scam call, the forensic chain runs through the caller’s number and routing records, any recorded audio from the victim’s voicemail or call app, the device the victim used to receive the call, and the financial records that show where the money went. None of that is visible to a detection tool looking at the audio alone. All of it is the evidentiary substrate a prosecution or civil claim eventually has to stand on. In a recent column on audio evidence, I covered why live deepfake audio pushes the evidentiary problem past what detection can solve.
Hassan’s letters sit at the front of that chain. They ask whether the companies that generated the synthetic voice kept records an investigator could use. Provenance tags. Watermarks. Account logs. Whether the firms know who the user was, and whether they can produce that information when a fraud is documented.
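What would such records look like in practice? A rough sketch follows, with illustrative field names rather than any vendor's actual schema, of the minimum a generation log would need to carry for an investigator to tie a scam clip back to an account.

```python
# Hypothetical shape for the provenance records Hassan's letters ask about.
# Field names are illustrative; no vendor's actual schema is implied.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class GenerationRecord:
    account_id: str         # who ran the generation
    consent_record_id: str  # the consent the cloned speaker gave, if any
    created_at: datetime    # when the clip was generated
    watermark_id: str       # identifier embedded in the output audio

# If a flagged clip carries a readable watermark_id, a vendor that kept
# records like this can look up the account behind it. If it kept none,
# the investigative chain stops at the audio file.
record = GenerationRecord(
    account_id="acct_4821",
    consent_record_id="consent_0093",
    created_at=datetime(2025, 6, 2, 14, 30, tzinfo=timezone.utc),
    watermark_id="wm_7fe2",
)
print(f"Clip traceable to account: {record.account_id}")
```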
The FBI’s $893 Million AI Scam Breakdown
The IC3 report breaks the $893 million down by category. Investment fraud with an AI nexus accounted for about $632 million. Business email compromise with an AI element, about $30 million. Romance and confidence scams with a likely AI element, about $19 million. Tech and customer support scams, about $19 million. The rest is split across smaller categories.
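For readers checking the arithmetic, the named categories account for roughly $700 million of the $893 million total, which leaves about $193 million in the smaller categories. A quick sketch, using the rounded figures as reported:

```python
# IC3-reported AI-nexus losses by category, in millions of dollars,
# using the rounded figures cited above.
categories = {
    "investment fraud": 632,
    "business email compromise": 30,
    "romance and confidence scams": 19,
    "tech and customer support scams": 19,
}
total_reported = 893
named = sum(categories.values())   # 700
smaller = total_reported - named   # 193
print(f"Named categories: ${named}M; smaller categories: ~${smaller}M")
```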
Elder fraud complaints with an AI component totaled about $352 million in 2025 on their own. That category includes grandparent scams, impersonation of law enforcement and fake calls from utility companies demanding immediate payment. McAfee has publicly estimated that three seconds of someone’s voice is enough to build a clone. Podcast audio, voicemail greetings, social media videos and recorded webinars all clear that bar.
What To Expect Next On AI Voice-Clone Legislation
Hassan set a deadline for the companies to respond. Whatever comes back will become the record Congress uses to decide whether voluntary safeguards are working. The FBI’s 2025 numbers are the baseline. The 2026 numbers will tell the rest of the story.
Families asked to wire bail money should hang up and call the relative at a known number, not the one on caller ID. Banks should treat any urgent wire request tied to a phone call as a high-risk transaction and require a callback channel that does not route through a phone that could already be compromised. Attorneys handling a case where a voice call is part of the evidence should preserve the original recording, the carrier’s call records and the devices involved. An audio file alone is a starting point for a forensic examination, not the end of one.
The senator has names on the table now. Four of them. The scammers using these tools already had names for the voices they cloned. The rest of us are about to find out how much of that chain was traceable, and how much was not.