In an October 14 post on X, Altman shared, “In December, as we roll out age-gating more fully and as part of our ‘treat adult users like adults’ principle, we will allow even more, like erotica for verified adults.”
The CEO then defended the firm’s controversial new initiative with a follow-up X post on October 15: “But we are not the elected moral police of the world. In the same way that society differentiates other appropriate boundaries (R-rated movies, for example) we want to do a similar thing here.”
It’s doubtful many people have ever used the term “moral police,” or something similar, in a positive way. Instead, Altman’s implication seems akin to the way “police” is used pejoratively in this office scenario:
- Joe: “Bob, are you going to the bathroom again?!”
- Bob: “Who are you, the bathroom police?”
On one hand, Altman’s defense is reasonable. Who can possibly judge the ethical actions of the planet’s 8 billion-plus residents? Such judgment is logistically impossible, and some would say it’s philosophically undesirable as well, since many different worldviews, and multitudes of corresponding moral standards, drive individuals’ actions.
Furthermore, as two research colleagues and I found in a recent study we conducted about marketing ethics, people’s opinions of what constitutes sex-related indecency tend to vary more than perspectives on issues involving other values like fairness and honesty.
It’s also easy to understand the financial gains OpenAI stands to realize by enabling more adult content. The adage “sex sells” rings true even when the ‘personal interaction’ for sale doesn’t involve a real person.
In a BBC interview, Parmy Olson of Bloomberg Opinion reported that about 30% of prompts typed into AI assistants were of a romantic or sexual nature. She also stated that chatbot companies that restrict adult content “lose millions of users.”
Regardless of whether the percentage of X-rated interactions is 5% or 35% of total chatbot use, it likely represents a multibillion-dollar market opportunity for OpenAI, and a very significant amount of revenue for any company to forgo, especially one that’s not yet profitable.
In 2024, OpenAI had about $5 billion in losses on $3.7 billion in revenue. The firm’s revenues are expected to reach $12.7 billion in 2025, but OpenAI isn’t projected to make money until it reaches revenues of $125 billion in 2029.
Despite these logistical and financial reasons for declining to act as “moral police,” Altman’s hands-off approach to ethics is alarming. ChatGPT accounts for 80.92% of chatbot use worldwide. It’s very troubling that a world leader in artificial intelligence seems to be declining ethical accountability for such a transformational technology.
Again, it’s true that opinions about decency vary greatly, especially when it comes to sexual liberties; however, the specific issue here isn’t the most important point. If OpenAI doesn’t want to act as “moral police” on this issue, can we expect it to accept moral responsibility for any ethical issue involving AI?
Perhaps Altman believes that AI will handle moral issues involving AI. That belief seemed wishful to me eight years ago; now it appears increasingly naïve.
In September 2017, I wrote an article for Business Insider in which I first posed the question, “Can machines learn to be moral?” It was five years before ChatGPT’s public release, so the essay was largely speculative, based on the little experience most of us had with AI at the time. Still, it was hard to imagine how an algorithm could make subjective decisions about what’s decent, fair, honest, respectful, or responsible.
Over the years since, I’ve increasingly used AI, including chatbots, as many people have. In a Mindful Marketing article in May of 2024, I proposed that “Questions are the Key to AI and Ethics” and ultimately concluded that we can’t rely on AI for moral accountability; rather, it’s up to people to hit the brakes, or press pause, when AI-related ethical issues arise.
Even as AI grows more adept at completing nonmoral tasks (ones not involving ethical issues), there are still strong signs that moral accountability must come from outside the algorithms. For example:
- Just a few days ago, a deepfake of Nvidia CEO Jensen Huang promoting a cryptocurrency scam drew more viewers than the firm’s real GTC 2025 keynote, fooling tens of thousands of people.
- Chatbots have been implicated in the deaths of teenagers who have taken their own lives, as AI has offered to write farewell notes and serve as a “suicide coach.”
- Nudify apps leverage AI to digitally remove clothing in photos of unsuspecting individuals. The apps’ users, who have included middle schoolers, often share the lewd results with others, causing great emotional and social pain for the victims.
If AI has a sense of right and wrong, why hasn’t it stopped these acts, which most reasonable people would call ‘reprehensible’? Worse, AI not only failed to pump the brakes or hit pause; it’s what enabled these actions in the first place. Examples like these have led me to conclude:
If left unchecked by humans, AI will continue to enable unethical behavior.
So, contrary to Altman’s abdication of ethical accountability, we do need “moral police,” badly. The question, then, is who should they be?
I asked a similar question in an April 2025 article, “Who Will be the Adult in the Room with AI?” and suggested several sets of complementary accountability partners: legal systems, industry associations, organizations, and individuals.
In a recent talk I’ve given, titled “Exercising Our Moral Leadership,” I’ve continued to press the theme of broad-based moral responsibility, arguing that each of us needs to lean on others for support in our ethical decision-making, support that ranges from clarifying factual information to serving as moral sounding boards and accountability partners.
Ultimately, I believe ethics is a team sport – every stakeholder employing thoughtful analysis and engaging in constructive conversations. Still, the idea that ‘ethics is everyone’s job’ can seem like a platitude if people don’t genuinely embrace moral responsibility and seriously prepare themselves for ethical decision-making, which involves intentional actions like:
- Adopting a moral anchor, or a set of moral standards, such as the Golden Rule or specific moral principles like decency, fairness, honesty, respect, and responsibility.
- Thoroughly knowing one’s field, because making ethical decisions also demands a firm grasp of objective, factual information.
- Looking beyond immediate circumstances to consider long-term consequences and impacts on secondary stakeholders.
- Questioning the consensus, because task definitions and time pressures sometimes lead teams to choose what can be done over what should be done.
Artificial intelligence appears to be as transformative a technology as our world has ever seen. Most of us have experienced firsthand how it can help us work more efficiently and effectively. We’ve also collectively witnessed how, if left unchecked, it can enable actions that are morally suspect.
We do need moral police for AI. You and I should be part of that patrol. Leading the force should be those who are most knowledgeable about AI – police chiefs at Alphabet, Microsoft, Nvidia, and . . . OpenAI.
It’s understandable that Altman wants to respect personal liberties and make OpenAI profitable. However, allowing AI to police itself is asking for moral anarchy by way of Single-Minded Marketing.