Thursday, September 28, 2023

Deepfaked: faking myself in the interest of science

Deepfakes are freaky, and everywhere. The ability of AI to quickly and easily plaster one face onto another body, clone a voice, or create an entirely new scene or video, has taken the world by storm. My feature for Nature looks into the major ways scientists are trying to combat the harm.

 https://www.nature.com/articles/d41586-023-02990-y

And here's a podcast: https://www.nature.com/articles/d41586-023-03042-1

Here are some fun facts/stories that didn't make it in:

I tried to use DALL-E to generate images for my son's storybook. It was hilariously bad, incorporating weird misunderstandings. Plenty of people are making very realistic fake content out there, but it isn't me! Here for example is "a pencil and watercolour drawing of a penguin fighting a big scary green monster". Huh?

[Image: DALL-E's attempt at the penguin-versus-monster prompt]
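For anyone who wants to try the same experiment, here's a minimal sketch of how you'd request that image through OpenAI's API. The Python client, model name and image size here are illustrative assumptions, not a record of what I actually did:

    # Minimal sketch: generate the penguin image via OpenAI's image API.
    # Assumes the official 'openai' Python package (v1+) and an API key
    # in the OPENAI_API_KEY environment variable.
    from openai import OpenAI

    client = OpenAI()

    response = client.images.generate(
        model="dall-e-3",  # illustrative choice of model
        prompt=("a pencil and watercolour drawing of a penguin "
                "fighting a big scary green monster"),
        n=1,
        size="1024x1024",
    )

    print(response.data[0].url)  # link to the generated image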
One of the classic errors made by image-generating AI is that it has trouble producing legible text, as if it doesn't know what letters look like. Here, for example, is a generated front cover of Nature magazine...

[Image: an AI-generated Nature front cover]

I also tried to clone my voice using the free version of LyreBird. To do this, I recorded 10 minutes of me talking, and read out a disclosure to prove I was cloning my own voice. This was pretty darn good: the kids say it sounds just like me. The free version has a very limited vocabulary, though, so my 'voice' keeps saying "gibber" when it hits a word it doesn't know.

Check out episode 1 of Black Mirror's latest season ("Joan Is Awful"), in which "Streamberry" (a thinly disguised fictional version of Netflix) listens in on your life through your phones, computers and so on, then uses generative AI to create a TV show based on you (making for cheap content: no sets or scriptwriters required). Even better: they use generative AI to plaster a famous actor's face onto yours (to improve watchability). Even even better: they make you slightly worse and more awful than you are in real life (because studies show that makes the viewing more compelling). In the show, everyone tries to sue Streamberry: the person whose life was stolen for the pilot, the actress whose face was plastered onto hers. All the lawyers say: sorry, you signed your rights away. This is all terrifyingly accurate. The lawyers I spoke to thought this was hilarious... and scary.

Also check out a show called "Deep Fake Love", a reality show in which they take couples keen to test the strength of their love, split them into two groups, and then show them videos of their partners cheating on them, WITHOUT disclosing that those videos are deepfakes. They eventually get around to telling them the videos MIGHT be deepfakes, and that it's their job to tell fact from fiction (for cash). Psychological torture! Great! Surely the next step will be a 'Love is Blind' where at least one of the participants isn't a real person but just a chatbot... (can I patent that idea?)

On the other hand, here's the current level of generated video tech: Synthetic Summer.

[Video: "Synthetic Summer"]

Overall, I'm more intrigued than scared by deepfakes. Most of the real damage (e.g. faked images of warfare, election propaganda, privacy violations...) seems to me to be an extension of things that already happen without deepfake tech. I do think the four main prongs of attack against misuse will be reasonably effective (secure media provenance tracking; synthetic media detection; legislation to improve accountability; education), while on the other hand plenty of people will still believe whatever they want to believe, no matter whether they're told it's fake. So: same problems as always, but with a different face.
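To give a flavour of what "provenance tracking" means in practice: the idea is that the capturing device (or editing software) cryptographically signs the media, so anyone can later check it hasn't been tampered with. Here's a toy sketch of just that core step, using the Python 'cryptography' package; the file name and key handling are made up for illustration, and real provenance standards such as C2PA embed far richer metadata than this:

    # Toy sketch of media provenance: sign the bytes at capture, verify later.
    # Uses the third-party 'cryptography' package; 'photo.jpg' is a
    # placeholder file name for illustration.
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # 1. The camera/app holds a private key and signs the image bytes.
    device_key = Ed25519PrivateKey.generate()
    with open("photo.jpg", "rb") as f:
        media_bytes = f.read()
    signature = device_key.sign(media_bytes)

    # 2. Anyone with the matching public key can check the file is unaltered.
    #    verify() raises InvalidSignature if even one pixel has changed.
    device_key.public_key().verify(signature, media_bytes)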


