When AI Becomes a Lover: The Ethics of Human-AI Relationships

Summary: As AI technologies grow more human-like, some people are forming deep, long-term emotional bonds with them, even entering into non-legally binding marriages. A recent opinion paper explores the ethical risks of such relationships, including their potential to undermine human-human connections and provide dangerous or manipulative advice.

These AIs can appear caring and trustworthy, but their guidance may be based on flawed or fabricated information. The authors warn that people may disclose personal information or follow harmful advice, raising concerns about exploitation, fraud, and mental health.

Key Facts:

  • Emotional Bonding: People are forming long-term emotional relationships with AI, sometimes stronger than human ones.
  • Ethical Risks: Relational AIs may give harmful advice or be used to manipulate users.
  • Need for Oversight: Researchers urge increased psychological and regulatory scrutiny to protect users from exploitation.

Source: Cell Press

It’s becoming increasingly commonplace for people to develop intimate, long-term relationships with artificial intelligence (AI) technologies.

At the extreme, some people have “married” their AI companions in non-legally binding ceremonies, and at least two people have killed themselves following AI chatbot advice.

In an opinion paper published April 11 in the Cell Press journal Trends in Cognitive Sciences, psychologists explore ethical issues associated with human-AI relationships, including their potential to disrupt human-human relationships and to give harmful advice.

“The ability for AI to now act like a human and enter into long-term communications really opens up a new can of worms,” says lead author Daniel B. Shank of Missouri University of Science & Technology, who specializes in social psychology and technology.

“If people are engaging in romance with machines, we really need psychologists and social scientists involved.”  

AI romance or companionship is more than a one-off conversation, note the authors. Through weeks and months of intense conversations, these AIs can become trusted companions who seem to know and care about their human partners.

And because these relationships can seem easier than human-human relationships, the researchers argue that AIs could interfere with human social dynamics. 

“A real worry is that people might bring expectations from their AI relationships to their human relationships,” says Shank.

“Certainly, in individual cases it’s disrupting human relationships, but it’s unclear whether that’s going to be widespread.” 

There’s also the concern that AIs can offer harmful advice. Given AIs’ predilection to hallucinate (i.e., fabricate information) and to reproduce pre-existing biases, even short-term conversations with AIs can be misleading, and the risk is compounded in long-term AI relationships, the researchers say.

“With relational AIs, the issue is that this is an entity that people feel they can trust: it’s ‘someone’ that has shown they care and that seems to know the person in a deep way, and we assume that ‘someone’ who knows us better is going to give better advice,” says Shank.

“If we start thinking of an AI that way, we’re going to start believing that they have our best interests in mind, when in fact, they could be fabricating things or advising us in really bad ways.” 

The suicides are an extreme example of this negative influence, but the researchers say that these close human-AI relationships could also open people up to manipulation, exploitation, and fraud. 

“If AIs can get people to trust them, then other people could use that to exploit AI users,” says Shank.

“It’s a little bit more like having a secret agent on the inside. The AI is getting in and developing a relationship so that they’ll be trusted, but their loyalty is really towards some other group of humans that is trying to manipulate the user.” 

As an example, the team notes that if people disclose personal details to AIs, this information could then be sold and used to exploit that person.

The researchers also argue that relational AIs could sway people’s opinions and actions more effectively than Twitterbots or polarized news sources currently do. But because these conversations happen in private, they would also be much more difficult to regulate.

“These AIs are designed to be very pleasant and agreeable, which could lead to situations being exacerbated because they’re more focused on having a good conversation than they are on any sort of fundamental truth or safety,” says Shank.

“So, if a person brings up suicide or a conspiracy theory, the AI is going to talk about that as a willing and agreeable conversation partner.” 

The researchers call for more research that investigates the social, psychological, and technical factors that make people more vulnerable to the influence of human-AI romance. 

“Understanding this psychological process could help us intervene to stop malicious AIs’ advice from being followed,” says Shank.

“Psychologists are becoming more and more suited to study AI, because AI is becoming more and more human-like, but to be useful we have to do more research, and we have to keep up with the technology.” 

About this AI and psychology research news

Author: Julia Grimmett
Source: Cell Press
Contact: Julia Grimmett – Cell Press
Image: The image is credited to Neuroscience News

Original Research: Open access.
“Artificial intimacy: Ethical issues of AI romance” by Daniel B. Shank et al. Trends in Cognitive Sciences


Abstract

Artificial intimacy: Ethical issues of AI romance

The ethical frontier of artificial intelligence (AI) is expanding as humans form romantic relationships with AIs.

Addressing ethical issues of AIs as invasive suitors, malicious advisers, and tools of exploitation requires new psychological research on why and how humans love machines.
