Author of Honorable Influence and Mindful Marketing: Business Ethics that Stick; founder of Mindful Marketing
A colleague recently shared an article with me and several others that she found disheartening: Married women in China who find their real-life relationship with their spouse lacking are spending the equivalent of thousands of U.S. dollars a year on AI boyfriends. The digital rendezvous often occur in otome games like Love and Romance, Light and Night, and Beyond the World.
I wasn’t entirely surprised by the article, as the issue has been on my radar, along with other AI-related concerns, for about two years, and over the past several months, I’ve been tracking related stories such as these:
- The increase in AI relationships could lead to a rise in divorces.
- People are having “children” with chatbot partners.
- Parents are turning to chatbots to mind their young children.
- Adult children are leaning on AI to substitute for their own communication with aging parents.
Still, in doing research for this piece, I was stunned by some of the usage statistics:
- Since 2014, more than 660 million residents of China have used Xiaoice, the world’s “most popular chatbot,” which Microsoft “uniquely designed as an AI companion with an emotional connection to satisfy the human need for communication, affection, and social belonging.”
- Nearly 20% of high schoolers report that they or someone they know has had a romantic relationship with AI.
- Nearly 20% of U.S. adults have used AI to simulate a romantic partner, and among young adults ages 18 to 30, 31% of men and 23% of women have used AI in this way.
- Since its launch in 2017, the AI companion Replika has had 30 million users, while the similar product Character AI has 20 million active users. Over half of Character AI’s users are age 18 to 24, and around a fourth are 25 to 34.
Writing for Greater Good Magazine, Sahar Habib Ghazi says statistics like these suggest that “AI-human romance isn’t niche--it’s mainstream, especially among young adults.”
Since there have been people, there have been interpersonal relationships. Of course, some reasons for those relationships have been very practical, e.g., procreation, protection; however, humans also have simply sought each other’s company and companionship.
In more recent times, researchers have empirically studied humans’ sociological and psychological behaviors and developed theories to describe them. Maslow’s classic Hierarchy of Needs suggests that the desire for belonging is among the most basic of all human desires, preempted only by physiological needs (e.g., air, food, water) and the need for safety.
Indeed, most people want to be around other people, if not all or most of the time, then at least some of the time. In fact, it’s so unusual for anyone to spurn social interaction entirely that the rare individual who does earns the label hermit or recluse.
With technology, even a recluse can get a ‘social fix’ through one-way interactions, such as by following influencers or watching TV shows with favorite actors, or regularly listening to a particular podcast. In these cases, the followers/viewers/listeners don’t really know the ‘celebrity others,’ yet the former often feel a sense of connection to the latter.
There also are ways to fulfill social needs without any people. Perhaps the most popular substitutes are pets, which many people regularly enjoy. Harvard Health reports that “pets can provide their owners with more than companionship,” and Psychology Today suggests that pets can be “friends.”
Similarly, farmers sometimes bond with the livestock for which they care, e.g., a lead cow. Some people even gain a sense of social interaction by nurturing immobile living beings, i.e., plants, which can help them feel less lonely.
Together these examples form a continuum on which people might find satisfaction of social needs, ranging from extensive human contact, to relatively little, to none.
There also are countless cases in which people use other things not to meet social needs but to shift their focus from them, e.g., work, hobbies, media. For instance, someone might immerse themselves in their job to help take their mind off feelings of loneliness.
Given the many ways of meeting and masking social needs currently and historically, is there any reason not to accept AI as a relationship alternative? After all, it can produce more human-like interaction than virtually any of the secondary options. Some would even say better-than-human.
There are advantages and disadvantages to AI relationships. The following two lists are not exhaustive, but they capture some of the main pros and cons.
Pros:
- Readily available: No person is accessible all the time to talk, listen, etc. Chatbots are available 24/7. They’re also extremely fast, and they don’t get tired.
- Nonjudgmental: For many people, it’s hard to simply listen to others’ disclosures without sharing their opinions of them. Chatbots typically refrain from such appraisals, which can be especially helpful for people who experience social anxiety or mental health challenges.
- Very smart: Of course, AI makes mistakes, but the vast repository of information it can draw from and assimilate means it doesn’t suffer from ignorance and inexperience to the extent that many people do. What's more, AI’s ability to sensitively apply its expansive knowledge base means it can seem “more ‘human’ than many people.”
- Adaptable: As people, we can adapt to others’ needs, but it’s hard: as we do, we often must stretch ourselves or set aside our own needs. AI doesn’t have those limitations; it can be 100% accommodating.
Cons:
- Dependency: Given that AI is so readily available, accommodating, and reluctant to ever say “no,” there’s risk of dependency and even addiction. In fact, some humans who have found themselves spending far too much time with AI companions have turned to the app I Am Sober to help break their obsessive-compulsive behavior.
- Data vulnerability: There’s risk involved with any of the information we share on websites or enter into apps, but the risk is greatly magnified when considering the very sensitive information individuals are likely to reveal to their AI companions, whose discretion is only as great as that of the companies behind them.
- Manipulation: Along with potential misuse of users’ data is the potential for users to be unknowingly manipulated into buying products that a chatbot’s parent company wants to promote. It’s hard to imagine that companies won’t seek to monetize those intimate relationships – something that would almost never happen with a human partner.
- Unrealistic expectations: In keeping with the previous point, AI’s varied advantages over people can cause its users to show little tolerance for human imperfections. Instead, they expect the people in their lives to offer support at an AI level.
- Not true love: Although those who use AI companions may experience a “form of ‘love,’” it’s not likely real love given that genuine love involves the desire to nurture another’s well-being, and chatbots “don’t have well-being to nurture.” By the same token, AI can “replicate” some dimensions of love, but what it’s offering is just that – imitation, not genuine love.
- Mistakes: As time goes on, AI seems to be making fewer mistakes and having less frequent hallucinations; however, the nature of the mistakes has sometimes been disastrous, such as when AI has offered to serve as a suicide coach and write troubled teens’ farewell letters.
Another possible con I was going to list for AI companions was their inability to offer physical expression, e.g., a touch, a hug, a kiss. However, it probably shouldn’t be surprising that some technologically savvy companies have integrated AI into sex dolls to create lifelike sex robots.
Also, while writing this piece, I learned of a platform called Moltbook, a website where AI agents interact with each other. Humans can only observe; they cannot enter the conversations. The dialogue is both interesting and disconcerting. It portends a time when AI agents might go rogue, working against their human principals, not for them. If this prediction is in any way a real possibility, engaging a bot as a companion seems even more precarious.
Although my secondary research for pieces like this is helpful, it’s often even more valuable for me to gain insights directly from experts. In this instance, I reached out to Dr. John King, an associate professor of counseling at Liberty University, who is a Licensed Professional Counselor, a National Certified Counselor, and a former pastor. I asked for his perspective on human-AI relationships.
Dr. King has “seen firsthand the devastation affecting a generation” – addiction to gaming and particularly online pornography, especially among young men. He’s also witnessed a rise in mental illness from addiction to phones and related technologies, which he believes has resulted in “a second pandemic: Generalized Anxiety Disorder.”
He adds, “When ethics and morality lag behind technological advancement, it seems inevitable that AI‑based romantic relationships will further increase mental‑health struggles, particularly among adolescents and young adults whose brains are still developing.”
Dr. King, whose Christian faith informs his professional perspective, believes that because God created people for relationship with Him and other people, trusting technology for companionship risks idolatry and will inevitably result in harm. For these reasons he hopes parents, religious leaders, educators, and government officials “will have the wisdom to address these issues proactively.”
Dr. King isn’t opposed to AI use. Like many of us, he uses AI for certain methodical tasks like proofreading; however, he stops well short of suggesting AI as a soulmate.
His perspective speaks to me, as I find it increasingly hard to envision the rewards of human-AI relationships outweighing the risks, either to the individual or to society.
As is the case with the four “pros” I outlined above, discussion of benefits of human-AI relationships almost always focuses on what the human user gains from the interaction. Benefits like 24/7 access are certainly appealing; however, the exclusive emphasis on getting misses the entire other half of healthy relationships – giving.
To at least some extent, the more people get their social needs met through AI, the less human support they give to others. Perhaps some individuals can effectively manage both types of relationships simultaneously, but it seems more likely that human-bot relationship time comes at the expense of human-human relationship time.
However, there’s another important concern beyond simple social need supply and demand. Humans are wired to give. Often the greatest satisfaction and fulfillment in life comes from giving: parents caring for children, spouses supporting each other, friends loving friends, neighbors helping neighbors, people uplifting strangers.
When individuals are engaged in AI relationships, to whom are they giving? The answer to that rhetorical question – no one – may be the foremost flaw of human-AI relationships.
Is there a place for human-AI relationships? Should companies offer them? Given some of the benefits mentioned above, I hesitate to answer “no” unequivocally. However, it seems AI organizations and the entities that regulate them should think very carefully about who has access to AI companions, for what reasons, and under what conditions.
For instance, age restrictions are an absolute necessity: minimum ages certainly, and perhaps maximum ones, or some type of cognitive test to protect people whom cognitive decline makes susceptible to manipulation. Should AI relationships be regulated like some pharmaceuticals and require a prescription, or should they be subject to outside monitoring?
I wish I had better insights. What I do feel certain about is that companies that make AI relationships easily available, without setting limits or carefully considering the likely individual and societal tolls, are courting Single-Minded Marketing.
Learn more about the Mindful Matrix.
Check out the book, Mindful Marketing: Business Ethics that Stick