From finding data in just a few seconds to creating eerily realistic faces and voices, it's clear that AI has become a powerful part of modern society - and is here to stay.
But how does it affect our relationships with one another? It's a topic of interest for University of Tasmania senior lecturer Joey Crawford, an expert in leadership behaviour.
Dr Crawford wants to know how human relationships are changing as a result of our "hugely close" relationships with technologies instead of people.
ACM's The Examiner sat down with Dr Crawford to discuss.
What different applications are we seeing come out of AI, especially deepfakes?
There are a few big ones in the AI world. Earlier on, people wrote essays and marketing strategies through generative AI; more recently they've moved beyond that, and there are opportunities for augmented-reality AI.
Put some headphones on and you can look at a girlfriend, a friend or whatever it is, that's been generated by an artificial intelligence - and that's becoming quite popular.
We're seeing that deepfakes are being used to make life easier for some people, and harder for others. In minutes, I can put some information into an AI generator - my CV, a recording of my voice, previous speeches I've done - and it will generate a new speech for me that sounds like me.
I can then put that into a different AI and it will generate a visual that looks like me - so I can very quickly make a presentation for the world without actually needing to do any work.
We're only seeing the beginning of what this technology can do - should we be more concerned or embrace it?
I would say be open to the opportunities that it offers with a little degree of caution. We're entering a post-truth, or post-plagiarism era, where we won't know what's real and what's not.
Learning to get really clear on who the sources are, and which sources are actually the core truth we're looking for, is really important.
There are lots of reasons for concern now - it's very easy to generate deepfakes of two people engaging in something nefarious. It's very easy for me to say I don't like you, so I'm going to go out and create some kind of video - whether it's pornographic or whether it's something I can blackmail you with.
People who are law-abiding, upstanding and ethical can now be compromised in these kinds of deepfake spaces.
Should governments be taking AI more seriously? Should AI usage be legislated?
I think governments will never be able to move fast enough, but they definitely should be responding to this. It's really blurry and grey as to what constitutes a real problem here, and what constitutes evidence.
We're seeing in schools now that young people, who are quite vulnerable, are experiencing heavy volumes of these technologies because they're native to this environment, where AIs are engaging with them on a regular basis.
Is this technology changing how we communicate with people?
If I comment on Replika - Replika isn't a deepfake, but the tool allows you to jump into a virtual world, and its primary job is to provide you with social companionship and happy conversations.
That's really cool sometimes, but part of the human experience is not always happy. Relationships aren't all about everyone agreeing all the time - they're about challenge, or people learning from each other.
So for young people engaging with tools like that, I could see a world where they might learn very quickly that a normal human should be agreeing with them. A 13-year-old boy who has a relationship with an AI knows it will do whatever he tells it to do.
But what happens when you try to move into a real-world circumstance where those things aren't available to you? When you go on a first date and you say the wrong thing, you can't just backspace because you're nervous or uncomfortable.
There are no ramifications for being mean to an AI or really direct and to the point.
For people who have low emotional intelligence or low self-awareness, I'd be curious to see to what extent they learn that behaviour from the AI and then try to replicate it in human life.