- cross-posted to:
- technology@beehaw.org
- technology@lemmy.ml
cross-posted from: https://sh.itjust.works/post/18066953
On Tuesday, Microsoft Research Asia unveiled VASA-1, an AI model that can create a synchronized animated video of a person talking or singing from a single photo and an existing audio track. In the future, it could power virtual avatars that render locally and don’t require video feeds—or allow anyone with similar tools to take a photo of a person found online and make them appear to say whatever they want.
Great potential for abuse, scams, etc.
So fascinating yet so scary to realize how quickly AI can take over. A few minor tweaks here and there and it will be hard to know whether the video you are watching is fake or not.
Nah it’s actually pretty obvious if you know where to look.
Watch the "person's" teeth as they talk. Warning: it can actually be kinda gross once you start watching for it.
For now you can tell. Next year you may not be able to.
Every month for the last year, the field has made more progress in AI than researchers expected to make in the next couple of years combined, and the rate of progress is accelerating. It's coming much sooner than anyone thinks.
It’s weird to always see these dismissals about how easy it is to pinpoint generated media, like we haven’t already seen an insane jump in ability in just the last year. There is no future where this tech doesn’t start to become a problem with its realism, and personally I think it’s much closer than most seem to think it is.
Just don’t practice on Rudy Giuliani
I am getting 100% uncanny valley chills from these videos.
Maybe the outcome of all this deep fake shit is that video and photo evidence will be inadmissible in court. Maybe that’s the goal?
This is the best summary I could come up with:
On Tuesday, Microsoft Research Asia unveiled VASA-1, an AI model that can create a synchronized animated video of a person talking or singing from a single photo and an existing audio track.
In the future, it could power virtual avatars that render locally and don’t require video feeds—or allow anyone with similar tools to take a photo of a person found online and make them appear to say whatever they want.
To show off the model, Microsoft created a VASA-1 research page featuring many sample videos of the tool in action, including people singing and speaking in sync with pre-recorded audio tracks.
The examples also include some more fanciful generations, such as Mona Lisa rapping to an audio track of Anne Hathaway performing a “Paparazzi” song on Conan O’Brien.
While the Microsoft researchers tout potential positive applications like enhancing educational equity, improving accessibility, and providing therapeutic companionship, the technology could also easily be misused.
“We are opposed to any behavior to create misleading or harmful contents of real persons, and are interested in applying our technique for advancing forgery detection,” write the researchers.
The original article contains 797 words, the summary contains 183 words. Saved 77%. I’m a bot and I’m open source!
We’re almost at Mission: Impossible.