Avatars are Us
Surreal, but real
In today's busy world, wouldn't we all like to be in two places at once?
Hermione Granger, in the Harry Potter books, manages to attend two classes at the same time by using the 'Time-Turner', a device that lets her turn back time and sit in on classes that have already taken place.
For those of us without access to magic, there is...AI.
Our friend Andy asked, "Have you seen how you can make a video avatar of yourself online?"
I hadn't, so I went online to investigate personal avatars. Sure enough, they exist. HeyGen and Synthesia are the top hits. The avatars they create are not cute cartoon images for use in gaming or messaging. They look and talk like you. The only limitation on how good these avatars can be is the amount, type and quality of video of yourself you provide for their training – like any AI tool, the better the training data, the better the product.
HeyGen and Synthesia are targeting online content creators who don't have enough time to record videos, then rerecord the errors out of them. Not to mention, online avatars are much cheaper than professional video footage. You feed your avatar text and it voices the text, looking and sounding just like you. If you want the avatar to sound even more like you, you can provide a voice recording and the avatar will lip sync to it. There's another layer to this: you can get ChatGPT to create your content for you, so you can have an avatar that looks and sounds like you saying things you didn't have to spend the time thinking up yourself.
Your avatar can be out there presenting videos any time and all the time. There can be many online yous, who are just like you. Your online yous can speak in any language, with the internet handling the translation.
What are the risks of such avatars?
The obvious risk is someone else creating a video of you (or someone more important than you) and providing the avatar with their own message. There could be many yous, or David Seymours, or Gwyneth Paltrows, out there promoting messages that neither you, David, nor Gwyneth espouse.
It's unlikely fake videos will be created through avatar models being stolen – it would be really bad for business if HeyGen and Synthesia were that leaky. However, HeyGen only requires 2-5 minutes of uploaded video to create an avatar. If there's 2-5 minutes of upper-body video of you recorded online, someone else can access that footage and create an avatar of you. Then they can make the avatar of you say...whatever they want.
The other obvious risk: how can we now trust any online video content? We already know written online content can be unreliable and voices can be cloned. Now any and all online content is potentially unreliable. The only time we can be sure a person is their real self is when they are...in person.
Of course, the avatars aren't perfect, yet. There are issues with lip syncing – an avatar's speech can be just 'off', alerting a watcher. However, that's not going to last. In the same way the AI summary at the top of Google searches started out poor and is rapidly improving, and ChatGPT started out producing horribly verbose text and is becoming more concise, video avatars are going to improve more swiftly than we can imagine, until we can't tell which image is a real person speaking and which is computer generated.
So have I tried out video avatars yet? No, for a few reasons. I considered having a go before writing this blog, but I've got a head cold – I don't want an avatar of myself sounding like its nose is blocked. I'm also not sure I want the creepiness of seeing a video of myself performing. Perhaps that's just vanity and I should get over it. If any of you have tried out online video avatars, I'd be really interested to hear what your experience has been.
For now, I'll leave you with the thought...are all those videos of Donald Trump real? Or is he just an avatar?