arstechnica , English
@arstechnica@mastodon.social

Microsoft’s VASA-1 can deepfake a person with one photo and one audio track

YouTube videos of 6K celebrities helped train AI model to animate photos in real time.

https://arstechnica.com/information-technology/2024/04/microsofts-vasa-1-can-deepfake-a-person-with-one-photo-and-one-audio-track/?utm_brand=arstechnica&utm_social-type=owned&utm_source=mastodon&utm_medium=social

jcriecke ,
@jcriecke@urbanists.social

@arstechnica silly as it sounds, THIS is a serious use case for blockchain. If a video isn’t authenticated by your official address then it’s assumed to be fake.
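The scheme described above can be sketched as a simple hash registry: a creator publishes the SHA-256 digest of their video from a verified on-chain address, and a video is treated as authentic only if its recomputed hash appears in that record. A minimal sketch, assuming a registry keyed by creator address (all names and data here are hypothetical, not a real blockchain API):

```python
import hashlib

def content_hash(data: bytes) -> str:
    """Return the hex SHA-256 digest of the raw video bytes."""
    return hashlib.sha256(data).hexdigest()

# Stand-in for hashes published on-chain from a creator's official address.
# In a real system this lookup would query the chain, not a local dict.
onchain_registry = {
    "@creator-official": {content_hash(b"authentic video bytes")},
}

def is_authenticated(creator: str, video: bytes) -> bool:
    """A video counts as authentic only if its hash was published by the
    creator's known address; everything else is presumed fake."""
    return content_hash(video) in onchain_registry.get(creator, set())

print(is_authenticated("@creator-official", b"authentic video bytes"))   # True
print(is_authenticated("@creator-official", b"deepfaked video bytes"))   # False
```

Note the default here is distrust: an unregistered video fails verification rather than passing, which matches the comment's "assumed to be fake" stance.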

18Rabbit ,
@18Rabbit@mastodon.social

@arstechnica @catsalad I do not like the moving teeth and non-moving hair, but I’m sure they’ll sort that out. Scammers of the world, rejoice.

aapis ,
@aapis@mastodon.world

@arstechnica time to start bullying the microdick employees in my circles

Secret_Squirrel ,
@Secret_Squirrel@mastodon.social

@arstechnica Can we not do this? Please?

LibrationPointThree ,
@LibrationPointThree@ioc.exchange

@arstechnica must use all facial expressions all the time!

FaithfullJohn ,
@FaithfullJohn@mastodon.scot

@arstechnica there is zero point to this technology other than deception and the destruction of trust. 😭🤬🤮

logickinlambda ,
@logickinlambda@mastodon.social

@arstechnica

I doubt it has any legitimate purpose, but I’m expecting more scam calls and misinformation with fake faces.

Please prove me wrong.

#internetsecurity #internetsafety #networksecurity #privacy #scam

binaryequation ,
@binaryequation@freeradical.zone

@arstechnica Something like this was used on my work picture for a silly celebration video at work. Most people thought the video was hilarious, but I felt pretty fucking violated.
