Mindful Marketing

Does the World Need "Moral Police" for AI?

11/1/2025

by David Hagenbuch - professor of marketing at Messiah University, founder of Mindful Marketing, and author of Honorable Influence and Mindful Marketing: Business Ethics that Stick

OpenAI’s recent announcement that it would free ChatGPT to engage in erotica sparked backlash and prompted CEO Sam Altman to defend the decision: “we are not the elected moral police of the world.” At first glance, it’s difficult to deny the AI visionary’s disclaimer, but his statement raises important questions: Should anyone be appointed to monitor AI morality, and if so, who?
 
In an October 14 post on X, Altman shared, “In December, as we roll out age-gating more fully and as part of our ‘treat adult users like adults’ principle, we will allow even more, like erotica for verified adults.”
 
The CEO then defended the firm’s controversial new initiative with a follow-up X post on October 15: “But we are not the elected moral police of the world. In the same way that society differentiates other appropriate boundaries (R-rated movies, for example) we want to do a similar thing here.”
 
It’s doubtful many people have ever used the term “moral police,” or something similar, in a positive way. Instead, Altman’s implication seems akin to the way “police” is used pejoratively in this office scenario:
  • Joe: “Bob, are you going to the bathroom again?!”
  • Bob: “Who are you, the bathroom police?”
 
On one hand, Altman’s defense is reasonable. Who can possibly judge the ethical actions of the planet’s 8 billion-plus residents? Besides being logistically impossible, some may say it’s philosophically undesirable, as many different worldviews and multitudes of corresponding moral standards drive individuals’ actions.
 
Furthermore, as two research colleagues and I found in a recent study we conducted about marketing ethics, people’s opinions of what constitutes sex-related indecency tend to vary more than perspectives on issues involving other values like fairness and honesty.
 
It’s also easy to understand the financial incentive OpenAI has to enable more adult content. The adage “sex sells” rings true even when the ‘personal interaction’ for sale doesn’t involve a real person.
 
In a BBC interview, Parmy Olson of Bloomberg Opinion reported that about 30% of prompts typed into AI assistants were of a romantic or sexual nature. She also stated that chatbot companies that restrict adult content “lose millions of users.”
 
Regardless of whether the percentage of X-rated interactions is 5% or 35% of total chatbot use, it’s likely a multibillion-dollar market opportunity for OpenAI and a very significant amount of revenue for any company to reject, especially one that’s not yet profitable.
 
In 2024, OpenAI had about $5 billion in losses on $3.7 billion in revenue. The firm’s revenues are expected to reach $12.7 billion in 2025, but OpenAI isn’t projected to make money until it reaches revenues of $125 billion in 2029.


Despite these logistical and financial reasons for refraining from acting as “moral police,” Altman’s hands-off approach to ethics is alarming. ChatGPT accounts for 80.92% of chatbot use worldwide. It’s very troubling that a world leader in artificial intelligence seems to decline ethical accountability for such a transformational technology.
 
Again, it’s true that opinions about decency vary greatly, especially when it comes to sexual liberties; however, the specific issue here isn’t the most important point. If OpenAI doesn’t want to act as “moral police” on this issue, can we expect it to accept moral responsibility for any ethical issue involving AI?
 
Perhaps Altman believes that AI will handle moral issues involving AI. That belief seemed wishful to me eight years ago; now it appears increasingly naïve.
 
In September 2017, I wrote an article for Business Insider in which I posed for the first time the question, “Can machines learn to be moral?” It was five years before ChatGPT’s public release, so the essay was largely speculative, based on the little experience most of us had with AI at the time. Still, it was hard to imagine how an algorithm could make subjective decisions about what’s decent, fair, honest, respectful, or responsible.
 
Over the years since, I’ve increasingly used AI, including chatbots, as many people have. In a May 2024 Mindful Marketing article, I proposed that “Questions are the Key to AI and Ethics” and ultimately concluded that we can’t rely on AI for moral accountability; rather, it’s up to people to hit the brakes, or press pause, when AI-related ethical issues arise.
 
Even as AI grows more adept at completing nonmoral tasks (ones not involving ethical issues), there are still strong signs that moral accountability must come from outside the algorithms. For example:
  • Just a few days ago, a deepfake of Nvidia CEO Jensen Huang promoting a cryptocurrency scam performed better than the firm’s real GTC 2025 keynote, fooling tens of thousands of viewers.
  • Chatbots have been implicated in the deaths of teenagers who have taken their own lives, as AI has offered to write farewell notes and serve as a “suicide coach.”
  • Nudify apps leverage AI to digitally remove clothing from photos of unsuspecting individuals. The apps’ users, who have included middle schoolers, often share the lewd results with others, causing great emotional and social pain for the victims.
 
If AI has a sense of right and wrong, why hasn’t it stopped these acts, which most reasonable people would call ‘reprehensible’? What’s worse, AI not only failed to pump the brakes, or hit pause; it’s what made these actions possible. Examples like these have led me to conclude:
 
If left unchecked by humans, AI will continue to enable unethical behavior.
 
So, contrary to Altman’s abdication of ethical accountability, we do need “moral police,” badly. The question, then, is who should they be?
 
I asked a similar question in an April 2025 article, “Who Will be the Adult in the Room with AI?” and suggested several sets of complementary accountability partners: legal systems, industry associations, organizations, and individuals.
 
In a recent talk I’ve given titled, “Exercising Our Moral Leadership,” I’ve continued to press the theme of broad-based moral responsibility, arguing that each of us needs to lean on others for support in our ethical decision making, ranging from clarification of factual information to serving as moral sounding boards and accountability partners.
 
Ultimately, I believe ethics is a team sport – every stakeholder employing thoughtful analysis and engaging in constructive conversations. Still, the idea that ‘ethics is everyone’s job’ can seem like a platitude if people don’t genuinely embrace moral responsibility and seriously prepare themselves for ethical decision-making, which involves intentional actions like:
  • Adopting a moral anchor, or a set of moral standards, such as the Golden Rule or specific moral principles like decency, fairness, honesty, respect, and responsibility.
  • Thoroughly knowing one’s field, because making ethical decisions also demands a firm grasp of objective, factual information.
  • Looking beyond immediate circumstances to consider long-term consequences and impacts on secondary stakeholders.
  • Questioning the consensus, because task definitions and time pressures sometimes lead teams to choose what can be done over what should be done.
 
Artificial intelligence appears to be as transformative a technology as our world has ever seen. Most of us have experienced firsthand how it can help us work more efficiently and effectively. We’ve also collectively witnessed how, if left unchecked, it can enable actions that are morally suspect.
 
We do need moral police for AI. You and I should be part of that patrol. Leading the force should be those who are most knowledgeable about AI – police chiefs at Alphabet, Microsoft, Nvidia, and . . . OpenAI.
 
It’s understandable that Altman wants to respect personal liberties and make OpenAI profitable. However, allowing AI to police itself is asking for moral anarchy by way of Single-Minded Marketing.