I cloned my own voice to prank a friend, and... Wow, it was a gut-dropping moment when I understood just how dangerous this tool is for precisely this type of scam.
It's one thing to hear about it, but to actually experience it... Terrifying.
We neeeed a guide 👋
Check out ElevenLabs.
Mind sharing more info about the prank? Sounds like an interesting story
Oh, it was nothing more than showing off the technology, really. It wasn't a committed bit.
I cloned my voice then left a voicemail that said something like: "hey buddy it's me. My car broke down and I'm at... Actually I don't know where I'm at. I walked to the gas station and borrowed this guy's phone. He said he'll give me a ride into town if I can get him $50. Could you venmo it to him at @franks_diner? I'll pay you back as soon as I can find my phone. ... By the way this is really me, definitely not a bot pretending to be me."
Do you guys remember when the T-1000 did this?
What's wrong with Wolfie? I can hear him barking...
Your parents are dead
In Terminator 1 the T-800 made a scam call to Sarah to find out where she was. It deepfaked the voice of Sarah’s mother, and she fell for it.
As someone who has an uncanny ability to recognize voices, I'm skeptical about how good these really are. Of course, most people don't share that ability.
Meanwhile, I could probably be fooled by a picture.
Hmm, I understand your sentiment, but how would you know? Of course you'd pick out the bad dupes, but this technology is getting so good that I fear the convincing ones would go unnoticed; the only ones you ever catch are the detectable ones, which just reinforces the bias.
I always thought being able to recognize voices is a common skill? Is it not?
Very much not, in my experience.
Yeah, does a familiar voice mean a famous person or personal friend?
For me, it could be either. Some of us recognize people by their voices more than by their faces.
I don’t have examples handy, but I’ve listened to samples of various AI-generated clones (one paper had samples trained on I believe 10 s, 30 s, 1 min, and 5 min of audio), and each one progressively sounded better. The 10-second one basically sounded like a voice call whose bit rate dropped out mid-word, and as long as the script used words with phonemes similar to the training clip, the voice sounded pretty close. This is just my experience, though; it might sound pretty bad to you, while to me it sounded pretty reasonable, as if recorded under bad audio conditions.
https://github.com/CorentinJ/Real-Time-Voice-Cloning
This is the main one I’ve seen examples of. You’ll have to find the samples yourself; I believe they were in the actual paper?
That code was state of the art (for free software) when the author first published it with his master's thesis four years ago, but it hasn't improved a lot since then and I wouldn't recommend it today. See the Heads Up section of the readme. Coqui (a free software Mozilla spinoff) is better but also is sadly still nowhere near as convincing as the proprietary stuff.
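If you want to try Coqui, the zero-shot cloning flow is only a few lines with their Python package. I'm sketching this from memory, so treat the model name and arguments as assumptions and check their README before relying on it:

    # pip install TTS   (Coqui's package)
    from TTS.api import TTS

    # YourTTS: multilingual, multi-speaker model that does zero-shot cloning
    # from a short reference clip of the target voice.
    tts = TTS("tts_models/multilingual/multi-dataset/your_tts")

    # speaker_wav is a few seconds of clean audio of the voice to mimic;
    # language must be one the model supports (e.g. "en").
    tts.tts_to_file(
        text="Hey, it's me. My car broke down, could you call me back?",
        speaker_wav="reference_clip.wav",
        language="en",
        file_path="cloned_output.wav",
    )

Even with a clean reference clip the result is noticeably more robotic than the proprietary services, which is the gap I was complaining about.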
Wait, it’s been 4 years? Time really goes by. Yeah, with most AI things I assumed those with more time and resources would create better models. Open-source AI is at a great disadvantage when it comes to dataset size and compute power.
Good luck criminals. I ignore nearly every call.
Yeah, but they'll call your family. A friend of mine was recently affected by this: a scammer used a clone of her voice to ask for around $300 to fix her car because she had supposedly gotten stranded in the middle of nowhere. So they call up your parents, and to your mom it's like "Oh no! My baby! Of course I'll help you!" and she sends them $300 thinking it's you.
Yeah my family knows better. I don’t call anyone either plus I’ve got all of my family on DEFCON 1 when it comes to asking for money. Had someone try and scam my mom via Facebook pretending to be my sister. I have family members contacting me ALL the time with issues with their stuff so they don’t trust anything at all.
This all stems from myself getting scammed nearly 20 years ago via email so I’ve educated everyone immensely.
Anyone know how many hours of training data it takes to build a convincing model of someone’s voice? It was tens of hours when I did a bit of research a year ago… The article says social media is the likely source of training data for these scams, but that seems unlikely at this point.
A current state-of-the-art AI model from Microsoft can achieve acceptable quality with about 3 seconds of audio. Commercially available stuff like ElevenLabs needs about 30 minutes. Quality will obviously vary heavily, but then again they're working from a low-quality phone call, so maybe that's not so important.
That’s downright scary :-) I think it took longer in the last Mission Impossible.
30 minutes is still pretty minimal for the kind of targeted attack it sounds like this is used for. I suppose we all need to work with our families on code words or something.
I went in thinking the article was a bit alarmist, but that’s clearly not the case. Thanks for the insight.
With that little, they may be able to recreate the timbre of someone's voice, but speech carries a multitude of other identifiers and idiosyncrasies that they're unlikely to capture from so little audio: personal vocabulary (we don't all choose the same words and phrasings for things), specific pronunciations (e.g. "library" vs "libary"), voice inflections, etc. Obviously, the more training data you have, the better the output.
ElevenLabs only needs 1 minute, but it also works with even shorter clips.
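If anyone's curious what that looks like in practice, the instant-clone flow over their REST API is roughly the following. I'm going from memory of their docs, so the endpoints and field names here are assumptions to verify against the current API reference:

    import requests

    API_KEY = "your-xi-api-key"  # placeholder
    BASE = "https://api.elevenlabs.io/v1"

    # 1) Create an instant clone from a short sample of the target voice.
    with open("sample.mp3", "rb") as f:
        resp = requests.post(
            f"{BASE}/voices/add",
            headers={"xi-api-key": API_KEY},
            data={"name": "demo-clone"},
            files={"files": f},
        )
    voice_id = resp.json()["voice_id"]

    # 2) Synthesize arbitrary text in that voice.
    audio = requests.post(
        f"{BASE}/text-to-speech/{voice_id}",
        headers={"xi-api-key": API_KEY},
        json={"text": "Hey, it's me. Can you call me back?"},
    )
    with open("cloned.mp3", "wb") as out:
        out.write(audio.content)

The point being that the barrier is a minute of audio and an API key, not any real expertise.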
I literally just cloned someone's voice for a presentation on AI and did it using maybe 30 total minutes of audio....
Took me about an hour and it was free. Hardest part was clipping the audio to get the 'good bits.'
The voice was absolutely convincing.
It's no wonder actors are taking an interest, given the level of tech Disney and everybody else must have access to.
Wow! That’s really impressive.
The most advanced model I know of just needs half an hour of your voice or something.
Someone else mentioned that Microsoft has one capable of working with far less material.
But 30 minutes is definitely short enough to make this sort of scam/attack feasible in my mind.
BADONK!