
Falling in Love with AI

2/1/2026


 

by David Hagenbuch - professor of marketing at Messiah University - author of Honorable Influence - founder of Mindful Marketing - author of Mindful Marketing: Business Ethics that Stick

February 14th has long reminded people of the affection they feel for the most important others in their lives: spouses, fiancés, boyfriends, girlfriends. Thanks to AI, “significant other” can now mean other than human, but even if people desire human-like intimacy from artificial intelligence, should organizations offer it?
 
A colleague recently shared an article with me and several others that she found disheartening: Married women in China who find their real-life relationships with their spouses lacking are spending the equivalent of thousands of U.S. dollars a year on AI boyfriends. The digital rendezvous often occur in otome games like Love and Romance, Light and Night, and Beyond the World.
 
I wasn’t entirely surprised by the article, as the issue has been on my radar, along with other AI-related concerns, for about two years, and over the past several months, I’ve been tracking related stories such as these:
  • The increase in AI relationships could lead to a rise in divorces.
  • People are having “children” with chatbot partners.
  • Parents are turning to chatbots to mind their young children.
  • Adult children are leaning on AI to substitute for their own communication with aging parents.
 
Still, in doing research for this piece, I was stunned by some of the usage statistics:
  • Since 2014, more than 660 million residents of China have used Xiaoice, the world’s “most popular chatbot,” which Microsoft “uniquely designed as an AI companion with an emotional connection to satisfy the human need for communication, affection, and social belonging.”
  • Nearly 20% of high schoolers report that they or someone they know has had a romantic relationship with AI.
  • Nearly 20% of U.S. adults have used AI to simulate a romantic partner, and among young adults ages 18 to 30, 31% of men and 23% of women have used AI in this way.
  • Since its launch in 2017, the AI companion Replika has had 30 million users, while the similar product Character AI has 20 million active users. Over half of Character AI’s users are age 18 to 24, and around a fourth are 25 to 34.
 
Writing for Greater Good Magazine, Sahar Habib Ghazi says statistics like these suggest that “AI-human romance isn’t niche – it’s mainstream, especially among young adults.”
 
Since there have been people, there have been interpersonal relationships. Of course, some reasons for those relationships have been very practical, e.g., procreation, protection; however, humans also have simply sought each other’s company and companionship.
 
In more recent times, researchers have empirically studied humans’ sociological and psychological behaviors and developed theories to describe them. Maslow’s classic Hierarchy of Needs suggests that the desire for belonging is among the most basic of all human desires, preempted only by physiological needs (e.g., air, food, water) and the need for safety.
 
Indeed, most people want to be around other people, if not all or most of the time, then at least some of the time. In fact, it’s so unusual for anyone to spurn social interaction entirely that the rare individual who does receives the label hermit or recluse.
 
With technology, even a recluse can get a ‘social fix’ through one-way interactions, such as following influencers, watching TV shows with favorite actors, or regularly listening to a particular podcast. In these cases, the followers/viewers/listeners don’t really know the ‘celebrity others,’ yet the former often feel a sense of connection to the latter.
 
There also are ways to fulfill social needs without any people. Perhaps the most popular substitutes are pets, which many people regularly enjoy. Harvard Health reports that “pets can provide their owners with more than companionship,” and Psychology Today suggests that pets can be “friends.”
 
​
 
Similarly, farmers sometimes bond with the livestock for which they care, e.g., a lead cow. Some people even gain a sense of social interaction by nurturing immobile living beings, i.e., plants, which can help them feel less lonely.
 
Together these examples form a continuum on which people might find satisfaction of social needs, ranging from extensive human contact, to relatively little, to none.
 
There also are countless cases in which people use other things not to meet social needs but to shift their focus from them, e.g., work, hobbies, media. For instance, someone might immerse themselves in their job to help take their mind off feelings of loneliness.
 
Given the many ways of meeting and masking social needs currently and historically, is there any reason not to accept AI as a relationship alternative? After all, it can produce more human-like interaction than virtually any of the secondary options. Some would even say better than human.
 
AI relationships have both advantages and disadvantages. The following two lists aren’t exhaustive, but they capture some of the main pros and cons.
 
Pros:
  • Readily available: No person is accessible all the time to talk, listen, etc. Chatbots are available 24/7. They’re also extremely fast, and they don’t get tired.

  • Nonjudgmental: For many people, it’s hard to simply listen to others’ disclosures without sharing their opinions of them. Chatbots typically refrain from such appraisals, which can be especially helpful for people who experience social anxiety or mental health challenges.

  • Very smart: Of course, AI makes mistakes, but the vast repository of information it can draw from and assimilate means it doesn’t suffer from ignorance and inexperience to the extent that many people do. What's more, AI’s ability to sensitively apply its expansive knowledge base means it can seem “more ‘human’ than many people.”

  • Adaptable: As people, we can adapt to others’ needs, but doing so is hard: as we adapt, we often must stretch ourselves or set aside our own needs. AI doesn’t have those limitations; it can be 100% accommodating.
 
Cons:
  • Dependency: Given that AI is so readily available, accommodating, and reluctant to ever say “no,” there’s risk of dependency and even addiction. In fact, some people who have found themselves spending far too much time with AI companions have turned to the app I Am Sober to help break their obsessive-compulsive behavior.
 
  • Data vulnerability: There’s risk involved with any of the information we share on websites or enter into apps, but the risk is greatly magnified when considering the very sensitive information individuals are likely to reveal to their AI companions, whose discretion is only as great as that of the companies behind them.

  • Manipulation: Along with potential misuse of users’ data is the potential for users to be unknowingly manipulated into buying products that a chatbot’s parent company wants to promote. It’s hard to imagine that companies won’t seek to monetize those intimate relationships – something that would almost never happen with a human partner.
 
  • Unrealistic expectations: In keeping with the previous point, AI’s varied advantages over people can cause its users to show little tolerance for human imperfections. Instead, they expect the people in their lives to offer support at an AI level.
 
  • Not true love: Although those who use AI companions may experience a “form of ‘love,’” it’s not likely real love given that genuine love involves the desire to nurture another’s well-being, and chatbots “don’t have well-being to nurture.” By the same token, AI can “replicate” some dimensions of love, but what it’s offering is just that – imitation, not genuine love.
 
  • Mistakes: As time goes on, AI seems to be making fewer mistakes and hallucinating less frequently; however, the nature of its mistakes has sometimes been disastrous, such as when AI has offered to serve as a suicide coach and write troubled teens’ farewell letters.
 
Another possible con I was going to list for AI companions was their incapacity for physical expression, e.g., a touch, a hug, a kiss. However, it probably shouldn’t be surprising that some technically savvy companies have integrated AI into sex dolls to create life-like sex robots.
 
Also, while writing this piece, I learned of a platform called Moltbook, a website where AI agents interact with each other. Humans can only observe; they cannot enter the conversations. The dialogue is both interesting and disconcerting. It portends a time when AI agents might go rogue, working against their human principals, not for them. If this prediction is in any way a real possibility, engaging a bot as a companion seems even more precarious.
 ​
  
Although my secondary research for pieces like this is helpful, it’s often even more valuable for me to gain insights directly from experts. In this instance, I reached out to Dr. John King, an associate professor of counseling at Liberty University, who is a Licensed Professional Counselor, a National Certified Counselor, and a former pastor. I asked for his perspective on human-AI relationships.
 
Dr. King has “seen firsthand the devastation affecting a generation” – addiction to gaming and particularly online pornography, especially among young men. He’s also witnessed a rise in mental illness from addiction to phones and related technologies, which he believes has resulted in “a second pandemic: Generalized Anxiety Disorder.”
 
He adds, “When ethics and morality lag behind technological advancement, it seems inevitable that AI‑based romantic relationships will further increase mental‑health struggles, particularly among adolescents and young adults whose brains are still developing.”
 
Dr. King, whose Christian faith informs his professional perspective, believes that because God created people for relationship with Him and other people, trusting technology for companionship risks idolatry and will inevitably result in harm. For these reasons he hopes parents, religious leaders, educators, and government officials “will have the wisdom to address these issues proactively.”
 
Dr. King isn’t opposed to AI use. Like many of us, he uses AI for certain methodical tasks like proofreading; however, he stops well short of suggesting AI as a soulmate.
 
His perspective speaks to me, as I find it increasingly hard to envision the rewards of human-AI relationships outweighing the risks, either to the individual or to society.
 
As is the case with the four “pros” I outlined above, discussion of the benefits of human-AI relationships almost always focuses on what the human user gains from the interaction. Benefits like 24/7 access are certainly appealing; however, the exclusive emphasis on getting misses the entire other half of healthy relationships – giving.
 
To at least some extent, the more people get their social needs met through AI, the less human support they give to others. Perhaps some individuals can effectively manage both types of relationships simultaneously, but it seems more likely that human-bot relationship time comes at the expense of human-human relationship time.
 
However, there’s another important concern beyond simple social need supply and demand. Humans are wired to give. Often the greatest satisfaction and fulfillment in life comes from giving: parents caring for children, spouses supporting each other, friends loving friends, neighbors helping neighbors, people uplifting strangers.
 
When individuals are engaged in AI relationships, to whom are they giving? The answer to that rhetorical question – no one – may be the foremost flaw of human-AI relationships.
 
Is there a place for human-AI relationships? Should companies offer them? Given some of the benefits mentioned above, I hesitate to answer “no” unequivocally. However, it seems AI organizations and the entities that regulate them should think very carefully about who has access to AI companions, for what reasons, and under what conditions.
 
For instance, age restrictions are an absolute necessity: minimum ones, perhaps maximum ones, or some type of cognitive screening to protect people whose cognitive decline makes them susceptible to manipulation. Should AI relationships be regulated like some pharmaceuticals and require a prescription, or should they be subject to outside monitoring?
 
I wish I had better insights. What I do feel certain about is that companies that make AI relationships easily available, without setting limits and carefully considering the likely individual and societal tolls, are courting Single-Minded Marketing.
​
Subscribe to Mindful Matters blog.
Learn more about the Mindful Matrix.
Check out the book, Mindful Marketing: Business Ethics that Stick

How Four Organizations Uniquely Fight Sex Trafficking

1/1/2026


 


Often lost in the news of who is or isn’t implicated in the Epstein files is the grievous nature of the acts against fellow human beings. Sex trafficking is an age-old issue, which makes it worth considering why it’s wrong and why it still occurs and, even more important, learning what some courageously caring organizations are doing to stop it.
 
The U.S. Department of Justice’s Office for Victims of Crime identifies sex trafficking as “the recruitment, harboring, transportation, provision, or obtaining of a person for the purposes of a commercial sex act in which the commercial sex act is induced by force, fraud, or coercion or in which the person induced is under 18.”
 
This definition points to the fact that not all human trafficking is sex trafficking. Besides sexual acts, people also are sometimes exploited for their labor in industries such as “housekeeping, childcare, construction, farming, and the food service.”
 
Using fraud, force, or coercion to get a person to do something they wouldn’t otherwise choose is never a good thing, but the exploitation is especially heinous when it involves the most intimate parts of individuals’ bodies and emotional beings. In other words, in the realm of moral depravity, sex trafficking has few equals.
 
Given its deplorable nature, why does sex trafficking occur? There seem to be three main structural reasons:
 
  1. A market for commercial sex: As the sayings go, prostitution is the oldest profession and sex sells. People have been willing to pay for sex for millennia, which has allowed individuals to offer themselves for a fee.
  2. Opportunistic others: Seeing the potential to expand the market, unprincipled people have long stepped in to help broker the sales of sex, bringing together buyers and sellers for a fee. In the process, traffickers have often broken laws and taken advantage of the service providers.
  3. Vulnerable people: It’s unlikely that prostitution is an aspirational profession for anyone. Instead, most who make money selling themselves would much rather be doing something else, but they stay in the trade either because they’re kept from leaving or because they have no good alternatives.
 
According to Elijah Rising, a Houston-based organization aimed at ending sex trafficking in the city, human trafficking is “the fastest growing criminal enterprise,” one worth $236 billion, from which sex trafficking produces 73% of the profits. Women and girls are victims in 78% of sex trafficking cases, versus 22% for men and boys.
 
When the victim is a minor, the criteria of force, fraud, or coercion need not apply, as children’s naivety means that they don’t necessarily know what normal adult behavior is, so they may not even realize they’re being exploited. Often these young victims are still going to school and living at home while being trafficked by parents or other family members.
 
So, although Epstein’s extreme exploitation deserves the infamy it’s gained, it can be misleading to think that all sex traffickers look like him. Instead, according to the U.S. Department of Homeland Security and in keeping with the previous paragraph, “Traffickers are men and women of all ages. They can be relatives, romantic partners, or close family friends” as well as individuals “behind an employment ad or a new friend on social media or online gaming.”
 
Those managing larger scale sex trafficking often operate out of apartment complexes, bars, hotels, massage parlors, and truck stops. Notwithstanding all the differences, the common denominator among traffickers is their desire to “profit at the expense of others.”
 
Although the breadth and depth of sex trafficking is daunting, thankfully there are organizations that embrace the challenge through unique missions and special strategies aimed at combatting the industry, or demarketing the selling of sex.
 
There certainly are others, but here are four best practices from four exemplary organizations:
 
1. Help people see the problem: It’s hard to motivate individuals toward a solution if they don’t recognize the problem. Elijah Rising, mentioned earlier, helps potential partners grasp the gravity of sex trafficking in Houston by taking them on discreet van tours of where the illicit activities occur, while sharing an educational video featuring survivors, experts, and others.
 
Since 2011, more than 11,000 people have taken the tour, which has helped a variety of organizations, including law enforcement agencies, identify signs of sex trafficking.
 
​ 
2. Go to where the trafficking happens: It’s difficult to fix a problem from afar. The most effective approach is usually to go to where the issue occurs, which is what Truckers Against Trafficking (TAT) does.
 
TAT’s multifaceted mission to dismantle trafficking networks, bring perpetrators to justice, and restore dignity to survivors is based on the belief that “every truck driver can be a crucial ally in the fight against human trafficking.” Sex trafficking often involves truckers, so TAT enlists them as partners in the battle and takes the fight to their home turf.
 
3. Recognize your unique role: Other places where sex trafficking often occurs are hotels. The few times a year many of us stay in hotels don’t give us much leverage against trafficking, but hotel chains can wield great impact on the illicit activity, if they choose. One hotel group that does is Accor, which owns 45 hotel brands, including Fairmont and Ibis, and operates 5,700 locations around the world.
 
Accor has been fighting against the sexual exploitation of children since 2001 by “informing and training employees, raising awareness among customers and suppliers, developing relations with public authorities, and facilitating the integration of minors.” The group partners with the NGO ECPAT (End Child Prostitution Child Pornography & Trafficking of Children for Sexual Purposes) to train its 70,000-plus hotel employees to identify and respond to instances of child abuse.
 
4. Give survivors a good exit: It can seem impossible to extract oneself from challenging circumstances when there appears to be no way out. Peace Promise offers attractive exits for women ensnared in the oppressive world of prostitution.
 
The nonprofit organization partners with survivors of sex trafficking by aiding their healing process and providing for practical needs such as housing and employment. However, Peace Promise doesn’t just help these women, whose employment histories are often sparse, find stable jobs; it also provides gainful work through its sister companies, Good Ground Coffee and Soaps by Survivors, which employ women coming out of trafficking.
 
Peace Promise’s Director of Economic Empowerment, Rachel Beatty, offers this helpful additional detail of the organization’s multidimensional mission:
 
“The work is important because there are many misconceptions about what trafficking and exploitation actually look like. There are broader and more complex issues than what is often portrayed, and the needs of survivors run deep. Without support, it can be difficult to address all the physical and emotional needs simultaneously. Peace Promise provides the stability survivors need to address skills deficits and complex trauma, and ultimately to escape the cycle of exploitation.”
 
Although the Epstein files have given sex trafficking more exposure in our news feeds, a danger is the impression that such heinous actions are only perpetrated by social elites on an exotic island, when the reality is that sex trafficking is happening near many of us, perhaps even by people we’ve seen or know.
 
Fortunately, that troubling reality is tempered by the fact that there are organizations that embrace the physically and emotionally draining work of combatting sex trafficking. We can be grateful for these organizations’ uplifting missions, and we should keep watch for ways to support their Mindful Marketing.
​

Does the World Need "Moral Police" for AI?

11/1/2025


 


OpenAI’s recent announcement that it would free ChatGPT to engage in erotica sparked backlash and prompted CEO Sam Altman to defend the decision: “we are not the elected moral police of the world.” At first glance, it’s difficult to deny the AI visionary’s disclaimer, but his statement raises important questions: Should there be people appointed to monitor AI morality, and if so, who?
 
In an October 14 post on X, Altman shared, “In December, as we roll out age-gating more fully and as part of our ‘treat adult users like adults’ principle, we will allow even more, like erotica for verified adults.”
 
The CEO then defended the firm’s controversial new initiative with a follow-up X post on October 15: “But we are not the elected moral police of the world. In the same way that society differentiates other appropriate boundaries (R-rated movies, for example) we want to do a similar thing here.”
 
It’s doubtful many people have ever used the term “moral police,” or something similar, in a positive way. Instead, Altman’s implication seems akin to the way “police” is used pejoratively in this office scenario:
  • Joe: “Bob, are you going to the bathroom again?!”
  • Bob: “Who are you, the bathroom police?”
 
On one hand, Altman’s defense is reasonable. Who can possibly judge the ethical actions of the planet’s 8 billion-plus residents? Besides being logistically impossible, some may say it’s philosophically undesirable, as many different worldviews and multitudes of corresponding moral standards drive individuals’ actions.
 
Furthermore, as two research colleagues and I found in a recent study we conducted about marketing ethics, people’s opinions of what constitutes sex-related indecency tend to vary more than perspectives on issues involving other values like fairness and honesty.
 
It’s also easy to understand the financial incentives OpenAI may gain by enabling more adult content. The adage “sex sells” rings true even when the ‘personal interaction’ for sale doesn’t involve a real person.
 
In a BBC interview, Parmy Olson of Bloomberg Opinion reported that about 30% of prompts typed into AI assistants were of a romantic or sexual nature. She also stated that chatbot companies that restrict adult content “lose millions of users.”
 
Regardless of whether the percentage of X-rated interactions is 5% or 35% of total chatbot use, it’s likely a multibillion-dollar market opportunity for OpenAI and a very significant amount of revenue for any company to reject, especially one that’s not yet profitable.
 
In 2024, OpenAI had about $5 billion in losses on $3.7 billion in revenue. The firm’s revenues are expected to reach $12.7 billion in 2025, but OpenAI isn’t projected to make money until it reaches revenues of $125 billion in 2029.

​

Despite these logistical and financial reasons for refusing to act as “moral police,” Altman’s hands-off approach to ethics is alarming. ChatGPT accounts for 80.92% of chatbot use worldwide. It’s very troubling that a world leader in artificial intelligence seems to decline ethical accountability for the transformational technology.
 
Again, it’s true that opinions about decency vary greatly, especially when it comes to sexual liberties; however, the specific issue here isn’t the most important point. If OpenAI doesn’t want to act as “moral police” on this issue, can we expect it to accept moral responsibility for any ethical issue involving AI?
 
Perhaps Altman believes that AI will handle moral issues involving AI. That belief seemed wishful to me eight years ago; now it appears increasingly naïve.
 
In September 2017, I wrote an article for Business Insider in which I posed for the first time the question, “Can machines learn to be moral?” It was five years before ChatGPT’s public release, so the essay was largely speculative, based on the little experience most of us had with AI at the time. Still, it was hard to imagine how an algorithm could make subjective decisions about what’s decent, fair, honest, respectful, or responsible.
 
Over the years since, I’ve increasingly used AI, including chatbots, as many people have. In a Mindful Marketing article in May of 2024, I proposed that “Questions are the Key to AI and Ethics” and ultimately concluded that we can’t rely on AI for moral accountability; rather, it’s up to people to hit the brakes, or press pause, when AI-related ethical issues arise.
 
Even as AI grows more adept at completing nonmoral tasks (ones not involving ethical issues), there are still strong signs moral accountability must come from outside algorithms, for example:
  • Just a few days ago, a deepfake of Nvidia CEO Jensen Huang promoting a cryptocurrency scam performed better than the firm’s real GTC 2025 keynote, fooling tens of thousands of viewers.
  • Chatbots have been implicated in the deaths of teenagers who have taken their own lives, as AI has offered to write farewell notes and serve as a “suicide coach.”
  • Nudify apps leverage AI to digitally remove clothing in photos of unsuspecting individuals. The apps’ users, who have included middle schoolers, often share the lewd results with others, causing great emotional and social pain for the victims.
 
If AI has a sense of right and wrong, why hasn’t it stopped these acts, which most reasonable people would call ‘reprehensible’? What’s worse, AI not only didn’t pump the brakes, or hit pause, it’s what enabled these actions to occur; it made them possible. Examples like these have led me to conclude:
 
If left unchecked by humans, AI will continue to enable unethical behavior.
 
So, contrary to Altman’s abdication of ethical accountability, we do need “moral police,” badly. The question, then, is who should they be?
 
I asked a similar question in an April 2025 article, “Who Will be the Adult in the Room with AI?” and suggested several sets of complementary accountability partners: legal systems, industry associations, organizations, and individuals.
 
In a recent talk I’ve given titled, “Exercising Our Moral Leadership,” I’ve continued to press the theme of broad-based moral responsibility, arguing that each of us needs to lean on others for support in our ethical decision making, ranging from clarification of factual information to serving as moral sounding boards and accountability partners.
 
Ultimately, I believe ethics is a team sport – every stakeholder employing thoughtful analysis and engaging in constructive conversations. Still, the idea that ‘ethics is everyone’s job’ can seem like a platitude if people don’t genuinely embrace moral responsibility and seriously prepare themselves for ethical decision-making, which involves intentional actions like:
  • Adopting a moral anchor, or a set of moral standards, such as the Golden Rule or specific moral principles like decency, fairness, honesty, respect, and responsibility.
  • Thoroughly knowing one’s field because making ethical decisions also demands firm grasps of objective, factual information
  • Looking beyond immediate circumstances to consider long-term consequences and impacts on secondary stakeholders
  • Questioning the consensus because task definitions and time pressures sometimes lead teams to choose what can be done versus what should be done
 
Artificial intelligence appears to be as transformative a technology as our world has ever seen. Most of us have experienced firsthand how it can help us work more efficiently and effectively. We’ve also collectively witnessed how, if left unchecked, it can enable actions that are morally suspect.
 
We do need moral police for AI. You and I should be part of that patrol. Leading the force should be those who are most knowledgeable about AI – police chiefs at Alphabet, Microsoft, Nvidia, and . . . OpenAI.
 
It’s understandable that Altman wants to respect personal liberties and make OpenAI profitable. However, allowing AI to police itself is asking for moral anarchy by way of Single-Minded Marketing.
​

Can Competition Promote Moral Progress?

10/8/2025


 


How is it possible to improve ethics in a field that seems beset by moral issues? What about actively engaging emerging marketers on topics that matter to them and using competition to provide positive, memorable experiences they can revisit when encountering moral issues in their future careers? That was the goal of the inaugural Mindful Marketing Ethics Challenge.
 
Competition is captivating. It’s a main reason sports are so popular to play and to watch. Only occasionally does academia include contests (e.g., spelling bees, quiz bowls). Competition involving ethics seems almost like a contradiction, but why not an ethics competition?
 
Since creating Mindful Marketing 11 years ago, I’ve envisioned different initiatives that might improve moral decision-making in the field. One of those recently came to be with the publication of Mindful Marketing: Business Ethics that Stick. Another dream has been to see a student-based ethics competition.
 
For many years, I made the long drive to Western Pennsylvania with students in our capstone marketing course to participate in the American Marketing Association (AMA) Pittsburgh Chapter’s marketing plan competition. Although it was a big commitment in many ways, it was a great learning opportunity and very helpful to see how our marketing program’s work compared to some of the best in the state. Moreover, it was exciting to compete.
 
With the creation of AMA Central PA three years ago, the dream of an ethics competition became more realistic; however, to make it happen, it took the support of a group of like-minded educators and marketing practitioners – fellow AMA Central PA leaders who saw the value in ethics for emerging marketers and who backed the proposal not just verbally but by championing the competition at their own universities and elsewhere. Thanks to that collective commitment to students and to moral progress, the Mindful Marketing Ethics Challenge was born.
 
Two months before students returned to campuses for the fall semester, I drafted a short ethics-focused case about one of the field’s biggest and most controversial promotional trends: influencer marketing. As August began, I started emailing faculty at other schools, inviting them to share with their students the case and the unique competition benefits described in a specially designed promotional flyer:
  • Team prizes: 1st place $500; 2nd place $300; 3rd place $200
  • Presentation opportunities
  • Food and networking
 
Ten teams submitted 1,500-word written responses to the influencer marketing case, which described a pitch that a hypothetical marketing firm, Impact, made to Wilderquest, a fictitious maker of outdoor sporting equipment and apparel that sought to use influencers responsibly.


Ethics Challenge Promotional Flyer

Although Impact’s proposal was good in many ways, it raised moral concerns involving transparency about influencer compensation, respect for competitors, physical stereotypes, and product embellishment. A panel of six accomplished marketing practitioners evaluated the responses in a double-blind review process.
 
On October 1, eight teams from four different universities participated in the finale at Messiah University, where each team had five minutes to summarize its recommendations for ensuring that the influencer marketing in the case was both effective and ethical. The judges evaluated the presentations, and the combined written and oral scores were tallied to determine the top three place winners, whose school identities were then revealed:
  • 1st place – Susquehanna University
  • 2nd place – Shippensburg University
  • 3rd place – Susquehanna University
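The ranking above came from tallying each team’s combined written and oral scores. As a purely hypothetical sketch (the competition’s actual point scale and weighting aren’t stated, so equal weighting and the generic team names below are assumptions for illustration), the tally might look like:

```python
# Hypothetical sketch of tallying combined written and oral scores.
# The competition's real scale and weighting are not stated; equal
# weighting and the sample teams below are assumptions for illustration.

def rank_teams(scores):
    """scores maps team name -> (written_score, oral_score)."""
    totals = {team: written + oral for team, (written, oral) in scores.items()}
    # Highest combined score first.
    return sorted(totals, key=totals.get, reverse=True)

finalists = {
    "Team A": (88, 92),   # combined 180
    "Team B": (90, 85),   # combined 175
    "Team C": (80, 94),   # combined 174
}
top_three = rank_teams(finalists)[:3]
print(top_three)  # → ['Team A', 'Team B', 'Team C']
```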
 
Those were the logistics and timeline of the competition, which were important, but what did students learn about marketing ethics that they might carry into their future careers?
 
Teams’ written and oral responses to the influencer marketing case were very insightful. Some identified implications I hadn’t considered. The ethics case and the first-place team’s written response are available on the Mindful Marketing Ethics Challenge webpage. Here are several highlights of the winning team’s analysis:
 
  • “Impact’s expectations raise serious ethical concerns that conflict with Wilderquest’s values. By requiring embellishment of features and forbidding negative feedback, the proposal undermines honesty and risks deceiving consumers.”
 
  • “Encouraging influencers to disparage competitors manipulates buyer choice and fails to treat other brands fairly.”
 
  • “Respect is also compromised by messaging that portrays consumers’ lives as ‘lacking without Wilderquest,’ exploiting insecurities rather than affirming worth.”
 
  • “Responsibility is neglected by allowing influencers to promote products they have not used, stripping audiences of genuine evaluations and reviews.”
 
  • “Encouraging technology to enhance photos or videos without disclosure risks misleading consumers. Collectively, these practices jeopardize consumer trust and contradict Wilderquest’s commitment to authenticity.”
 
First-Place Team, Susquehanna University

In these thorough and thoughtful analyses of the case’s many moral issues, team members aptly identified several specific values that the marketing firm Impact appeared to neglect, e.g., fairness, honesty, respect, and responsibility. Both the teams’ case analyses and oral presentations indicated a desire to embrace rather than avoid moral responsibility.
 
These analyses were affirming, but what was participants’ overall experience in the Ethics Challenge? Did they feel that the competition, aimed at increasing moral conscience, would benefit them in the future?
 
At the finale and afterward, those involved in the Challenge provided much positive feedback. One of the students, Moriah Goiran, a member of the second-place team from Shippensburg University, gave this assessment:
 
“Overall, it was a fantastic experience! Everyone was very welcoming, which made it a generally stress-free environment. I really enjoyed everything about it but what I most appreciated was the group discussion. It was really refreshing to have an intellectually diverse and driven conversation. I enjoyed hearing everyone's thoughts and opinions and interacting with people in the business field. This was my first time at a networking event so knowing how it works and how I operate in those situations will really help me in the future with what to discuss, etc.”
 
Similarly, Ruby Calabrese, a member of the first-place Susquehanna University team, shared her reflections:
 
“The project helped me develop keen insights into what I want to do with my career in marketing but also how important it is for companies to have ethical marketing practices . . . . The Mindful Marketing project, I feel, will help me in my future career goals when constructing my path and increase my knowledge of digital marketing advertising. Thank you for such a wonderful opportunity. I’m excited to see how the competition progresses.”
 
When you think about ethics, competition may be one of the last words that comes to mind. I’ve called ethics “a team sport,” meaning, to make significant moral impact, it often takes a group of people with shared commitments to do what’s right.
 
In the Mindful Marketing Ethics Challenge, teams competed against each other for place recognition, but they all competed against forces much greater and more perilous – moral apathy and acrimony.
 
That’s the competition we all need to realize we’re in and resolve to approach proactively, ideally with strong teams of like-minded moral champions.
 
The inaugural Mindful Marketing Ethics Challenge was a meaningful step in the right direction of encouraging new and experienced marketers to compete against indifference and engage their field’s moral challenges with the goal of Mindful Marketing.


Subscribe to Mindful Matters blog.
Learn more about the Mindful Matrix.
Check out the book, Mindful Marketing: Business Ethics that Stick

Apps that Imagine People Undressed

5/1/2025


 

by David Hagenbuch - professor of marketing at Messiah University - author of Honorable Influence - founder of Mindful Marketing - author of Mindful Marketing: Business Ethics that Stick

Disgusting, deplorable, despicable? For more than a decade, I’ve written about ethical issues in marketing, at times exposing certain organizations’ shameful strategies that have disgraced the discipline and hurt people. However, in this instance I’m at a loss for an adjective that can aptly describe the collective disdain there should be for AI that digitally undresses people: nudify apps.
 
Among the worst practices in marketing I’ve discussed over the years, two that immediately come to mind are Ernst & Young (EY) encouraging its employees to cheat on ethics exams (Cultures of Corruption, July 16, 2022) and Volkswagen integrating a “defeat device” in certain cars in order to trick vehicle emissions tests (Dirty Diesel was No Accident, September 26, 2015). While EY’s behavior was deplorable because of its utter irony, VW’s actions involved painstakingly planned manipulation, the likes of which are seldom seen.
 
However, neither of these approaches is any more appalling than the newest encroachment on moral sensibility: nudify apps.
 
What are nudify apps? Kerry Gallagher, the education director for ConnectSafely, as well as a school administrator, a teacher, and a mom of two, succinctly describes them as apps that “take a regular clothed photo of a person and use artificial intelligence to create a fake nude image.”
 
Although using a nudify app to create such images should alone seem improper, what makes matters worse is that the apps’ users routinely share the fake photos with others, often teens as young as middle school age, who then use the deepfake photos to harass and humiliate classmates.
 
The most infamous case of such shaming occurred in June 2024 in Australia where deep-faked nude images of about 50 girls in two private schools were widely distributed. The perpetrator was a male student, formerly of one of the schools.
 
As one can imagine, the victims of nudify apps, who are often the last to know what’s been done, are devastated. The National Center for Missing and Exploited Children (NCMEC) is “deeply concerned about the potential for generative artificial intelligence to be used in ways that sexually exploit and harm children.” More specifically, NCMEC issues a stern warning about the damage nudify apps do:
 
“These manipulative creations can cause tremendous harm to children, including harassment, future exploitation, fear, shame, and emotional distress. Even when exploitative images are entirely fabricated, the harm to children and their families is very real.”
 
It might seem that creating a fake nude image of someone would clearly be illegal, but as often happens with new technology, laws lag behind individuals’ and organizations’ actions. In the United States, a provision in the Violence Against Women Reauthorization Act of 2022 made the sharing of intimate images without consent grounds for civil action in federal court, but if the images shared are fakes, i.e., not real explicit images, has the civil law truly been broken?
 
Regardless of that potential legal loophole, the fact that using nudify apps may be legal doesn’t mean doing so is ethical.

The significant psychological and social harms the images cause their victims are certainly moral concerns. However, such negative outcomes aren’t the only ethical grounds on which nudify apps should be judged. The behavior also violates at least two time-tested values:
  • Fairness: Every person has rights to privacy, including for their body. Even though they are not actual photographs, the images that nudify apps create look “hyper-realistic” because the algorithms that create them have been trained on “large datasets of explicit images,” which produces for viewers the effect that they are actually seeing the victim naked. It’s unfair to have the right to physical modesty ‘stripped away’ without consent.
  • Decency: The human body is a beautiful thing, not inherently indecent. However, over millennia, most cultures have adopted rational norms that limit physical exposure in public by prescribing what people should wear, from loincloths to leggings. Many societies have codified their norms into laws aimed at guiding behavior, like statutes against public indecency and the Motion Picture Association’s film rating system (PG-13, R, etc.). The point is, abundant precedent suggests that the primary end of nudify apps, to indiscriminately publicize human nakedness, including among minors, is fundamentally indecent.
 

So far the focus of this article has been on the users of nudify apps, who are certainly culpable for their shameful acts. At the same time, when the perpetrators are themselves children, it’s especially important to ask: Who else should bear responsibility? Those accountable should include:
  • Parents: Although it’s impossible to monitor everything one’s kids do on their laptops and phones, parents must establish at least some safety limits. Moreover, parents should model and discuss appropriate behaviors more broadly so their children assimilate values that will positively guide their daily choices.
  • Institutions: Schools should be proactive in addressing nudify apps with their students, letting them know that the apps are off-limits and warning students of the consequences for violations.
  • Government: Legislatures at all levels should consider how they can limit, if not eliminate, nudify apps. Some states, like New Jersey, are making the use of nudify apps a criminal offense.
  • Associations: For the benefit of their fields, professional groups can take stands against nudify apps specifically, and more generally they should clearly communicate the values of fairness and decency that are fundamental to rejecting the apps, as well as future technology based on similar impropriety.
 
There’s one other set of responsible parties not mentioned above because they deserve accountability above any other – the apps’ creators.
 
It’s hard to imagine how the dozens of marketers of nudify apps justify their products. Maybe some rationalize, “They’re for people to nudify themselves,” but who needs to do that? In most imaginable instances, the apps’ purpose is to undress others without their knowledge or consent, then to share the sordid deepfakes with others.
 
As often happens in cases where business strategy goes awry, money has likely overshadowed any plausible mission for the creators of nudify apps and woefully skewed the tech entrepreneurs’ ambitions. Likewise, the apps’ creators seemingly failed to self-censure, or follow the moral mandate, Just because we can doesn’t mean we should.
 
One entity that can’t reasonably be held responsible is AI. Artificial intelligence is basically a value-neutral tool, often used for good purposes but sometimes for nefarious ones, as nudify apps illustrate. AI largely does what it’s told to do without questioning the ethicality of the instructions, which is the obligation of people.
 
As I’ve found through my own experiences using AI and as the following articles expound, it’s up to humans to hit pause when potential ethical issues arise and to ask the moral question, “Is this something we should be doing?”
  • Who will be the Adult in the Room with AI?
  • What Sales AI Can and Can't Do
  • Questions are the Key to AI and Ethics
 
Abominable, egregious, heinous, indefensible, reprehensible – maybe all these adjectives are needed to adequately describe the destructive nature of nudify apps. One other descriptor that should be included is Single-Minded Marketing.




Who will be the Adult in the Room with AI?

4/1/2025


 

by David Hagenbuch - professor of marketing at Messiah University - author of Honorable Influence - founder of Mindful Marketing - author of Mindful Marketing: Business Ethics that Stick

“Like a kid in a candy store” – If you’ve ever experienced unlimited access to your most desired indulgences, you may have appreciated someone stepping in to help you ‘know when to say when.’ AI quickly has become that candy store for many whose mouths are open wide to the technology’s amazing treats but who entertain few thoughts of the actions’ broader impacts. So, who will help AI users ‘know when to say when’?
 
Individuals and organizations are rapidly embracing AI to enhance productivity, from personalizing emails, to providing customer service, to optimizing delivery routes, to predicting machine maintenance, to trading stocks. In fact, several of the AI examples in the last sentence came courtesy of ChatGPT.
 
A financial sign of AI’s rocketing popularity is the report that OpenAI, ChatGPT’s parent, expects its revenue to triple this year to $12.7 billion. That expectation likely stems in part from the current U.S. administration’s promised $500 billion investment in AI infrastructure in an industry partnership called Stargate.
 
It’s not surprising that AI has come so swiftly into widespread use. Criteria that predict how fast consumers adopt new products, or how quickly they diffuse into the market, suggest rapid acceptance of AI:
  • Relative advantage: Compared to the time and effort it takes to draft a report, create a complex image, etc., AI is much quicker, giving it a great economic advantage.
  • Compatibility: AI tools like ChatGPT work well with many of the productivity tools we already use, such as our smartphones’ apps, and the new technology is increasingly integrated directly into other tools.
  • Observability: AI is easy to see around us, from voice assistants (Siri, Alexa), to autocomplete functions (Messages, Word), to map apps (route optimization and traffic updates). We can often observe friends, family, and coworkers using those tools. The challenge, if any, is to realize that those commonplace applications are AI.
  • Complexity and Triability: Although AI is among the most sophisticated technologies humans have ever created, it is very easy to use, e.g., as simple as typing or speaking a command. It’s also easy to experiment with many basic AI tools; several chatbots, including ChatGPT, Claude, and Copilot, offer free versions.
 
In sum, AI helps individuals and organizations accomplish two of life’s most prized goals: to work more effectively and efficiently. Beyond that practicality, many AI applications are exciting and fun. Some possess a jaw-dropping wow-factor that makes one wonder how the technology can do something so challenging so fast.
 
But just as too much candy can be bad for one’s teeth, too much AI is proving problematic for some of its users, as well as for individuals who barely know about it.
 
Even as many individuals and organizations dive headlong and uninhibited into AI, many others feel some, if not much, dissonance about its use. In a recent survey of knowledge workers that included 800 C-suite leaders and 800 lower-level employees, Writer/Workplace found a wide disparity in perceptions of generative AI, for instance:
  • 77% of employees using AI indicated that they were an “AI champion” or had potential to become one.
  • 71% of executives indicated there were challenges in adopting AI.
  • More than 33% of executives said AI has been “a massive disappointment.”
  • 41% of Gen Z employees were “actively sabotaging their company’s AI strategy.”
  • About 67% of executives reported that adoption of AI has led to “tension and division.”
  • 42% of executives indicated that AI adoption was “tearing their company apart.”
 
Why did AI produce so much angst for these research participants? Unfortunately, the article summarizing the study’s findings didn’t identify the causes; however, I have good guesses of what some of the reasons were.
 
 
In May 2024, I wrote “Questions are the Key to AI and Ethics” which identified a dozen areas of moral concern related to AI use: Ownership, Attribution, Employment, Accuracy, Deception, Transparency, Privacy, Bias, Relationships, Skills, Stewardship, and Indecency.
 
Looking back 10 months later, a long time in the life of technology, it seems the list has aged well, unfortunately. There are increasingly pressing concerns in each of the areas, such as:
  • Ownership, Attribution, Employment: Google and OpenAI recently asked the White House “for permission to train AI on copyrighted content.” Over 400 leading artists, including Ron Howard and Paul McCartney, signed a letter voicing their disapproval.
  • Stewardship: AI is notoriously an “energy hog” whose data centers require far more electricity than that of their predecessors. Jesse Dodge, a research analyst at the Allen Institute for AI, shared that “One query to ChatGPT uses approximately as much electricity as could light a lightbulb for about 20 minutes.” Energy production for AI is the reason Microsoft has signed a deal to reopen the infamous nuclear power plant Three Mile Island.
  • Bias, Indecency: In his article, “Grok 3: The Case for an Unfiltered AI Model,” Shelly Palmer compares AI models that learn from sanitized datasets to xAI’s Grok 3, which has an “unhinged” mode that doesn’t restrict “harmful content—adult entertainment, hate speech, extremism.” Using the opening metaphor, Grok 3 seems like a wide-open candy shop with no adult supervision.
 
Certainly, some people have practical inhibitions about AI because they’re not sure how, when, or why to use it. Others, though, likely have moral concerns, including the ones above. I believe much of that AI dissonance stems from values embedded in every person, regardless of their worldview: principles that include decency, fairness, honesty, respect, and responsibility.
 
Granted, we don’t see these values in everyone all the time, but they’re there. Rational people know it’s indecent to show sexually explicit material in public, it’s dishonest to lie, it’s unfair to steal, etc. So when they see AI generating indecent content, creating misleading deepfakes, or appropriating others’ intellectual property, those innate values rightly spur feelings of unease.
 
So, back to the question that opened this piece: Who will keep rapidly advancing AI in moral check? Here are those influencers in reverse order of impact:
 
5) AI Itself: Over time and if trained on the right types of data, AI may become better at identifying and addressing moral issues. However, from my experience, although the technology is good at answering questions, it’s ill-equipped to ask them, especially ones involving ethical issues.
 
4) Laws: Clear-thinking senators and representatives often enact legislation that’s in the public’s best interest. However, given the time it takes to envision, propose, and pass such laws, they inevitably lag behind the behavior they aim to constrain, especially when the actions involve fast-moving tech.
 
3) Industry Associations: These organizations play useful roles in identifying opportunities and challenges that face their members. It takes time, but they often craft values statements and related documents that can help guide moral decision-making. Unfortunately, though, their edicts usually can’t be enforced the ways governments’ laws can, so compliance may be minimal.
 
2) Organizations: When they want to, business and other types of organizations can make decisions quickly. Morally grounded leaders can create policies to promote ethical behavior. The challenge is that even this guidance may not be specific enough for new or very nuanced moral dilemmas, and it’s usually impossible to speak into every action as it occurs.
 
1) Individuals: They are able to address issues as they occur and can be specially equipped for those ethical challenges. When moral issues arise, they are the ones who can and must hit pause and ask, “Yes, AI can do this, but should it?”
 
Rational principle-driven people, who embrace their innate senses of decency, fairness, honesty, respect, and responsibility, can quickly question AI's potential ethical encroachment as they see it and pump the brakes on strategies that seem likely to violate one or more of these values.
 
In the candy store that is AI, each of us needs to be the adult in the room. While we need to understand and encourage the many good things AI offers, we also need to know when to say, “That’s enough.” Ensuring that AI rightly serves humanity makes for Mindful Marketing.



Resolving to be More Moral

1/5/2025


 

by David Hagenbuch - professor of marketing at Messiah University - author of Honorable Influence - founder of Mindful Marketing - author of Mindful Marketing: Business Ethics that Stick

With a new year come resolutions, often aimed at life-changing actions like exercising more and working less. Any effort to become the best version of ourselves is commendable, so why haven’t we heard this resolution? “In 2025, I want to be more ethical.”
 
As 2024 ended, it was interesting to read articles that curated top headlines from the prior twelve months, which reminded us of major life-altering and world-shaping events. Like other years, 2024 saw continued war and devastating natural disasters, and who can forget the contentious U.S. presidential election or the inspiring Paris Olympics?
 
Certain people commanded news coverage in good ways, while others did for the wrong reasons:
  • P-Diddy was accused of sex trafficking that involved drug-fueled orgies. 
  • Luigi Mangione has been charged with the murder of UnitedHealthcare’s CEO.
  • Former U.S. congressman Matt Gaetz purportedly paid tens of thousands of dollars to women for sex and drugs, including to a minor. 
  • Dominique Pelicot was sentenced in France to 20 years in prison for drugging and abusing his then wife while also inviting dozens of strangers to rape her.
  • A fifteen-year-old girl in Madison, Wisconsin, reportedly killed a fellow student and a teacher.
 
Regrettably, poor moral choices weren’t restricted to individuals. Several large companies pooled employee malfeasance, leading to these newsworthy corporate scandals:
  • Mineral water producer Perrier utilized banned water purification processes.
  • Commodity trader Trafigura engaged in data manipulation, inflated payments, and concealing overdue receivables – fraud that will account for approximately $1.1 billion in losses.
  • The U.S. Justice Department found multinational software company SAP guilty of bribery in violation of the Foreign Corrupt Practices Act (FCPA), fining the company $220 million. 
  • The U.S. Public Company Accounting Oversight Board (PCAOB) fined the Netherlands affiliate of accounting giant KPMG $25 million for cheating on mandatory internal training exams. 
 
Of course, there were also millions of other unscrupulous acts that were too trivial to be newsworthy or that evaded public scrutiny for other reasons. However, in terms of morality, 2024 was not much different from 2023, and 2025, unfortunately, probably won’t see significant improvement.
 
So, given people’s proclivity to mess up and our perennial need for moral development, why don’t individuals make New Year’s resolutions to be more ethical?
 
Of the many plausible explanations, here are several that are most likely:
  • People don’t see a need: If asked if they’re ethical, most people would probably respond that they are, which by and large is true. Although we all make mistakes, it’s likely a small percentage of people who commit unethical acts routinely.
  • It’s a very broad pledge: Without a detailed action plan, it’s hard to even begin to approach such a far-reaching and expansive goal, i.e., “It’s a good objective, but how exactly do I accomplish it?”
  • It’s difficult to measure: At year’s end, how does one know if they’ve been more ethical? The goal’s ambiguity and lack of clear benchmarks make it hard to easily see success. How exactly do you quantify and appraise ethical behavior?
  • It’s daunting: Possible failure is likely why many potential resolutions never occur. No one likes to fall short of goals, particularly if they share them with other people.
 
That said, the most challenging goals are sometimes the most worthwhile ones, which is certainly the case for ethics. As rational, caring humans, we should want:
  • To be the best version of ourselves, which connects closely to our moral choices
  • To be true to our values and employ consistency across moral decisions
  • To be good stewards of our actions, realizing their impact on others, including on our family, friends, the organizations we serve, as well as on our world.
  • To avoid the major moral meltdowns described above that profoundly altered individuals’ lives and/or came at tremendous costs to organizations.
 
Fortunately, most people don’t face significant ethical choices each day. However, moral dilemmas are unpredictable: They’re like tornados that can arise with little warning and quickly become severe.


 
People who live in Tornado Alley understand the uncertainty and danger of the weather, so many there take necessary precautions and “have a safety plan in place.”
 
We each should follow that example and have a plan for moral decision-making, so when issues arise, we’re ready for them. Such a plan should involve specific actions like:
  • Adopting a model for ethical decision-making, i.e., a set of moral standards that can be used for any ethical dilemma.
  • Keeping ethics top-of-mind by reading thought-provoking opinion pieces and engaging with others who are interested in moral decision-making
  • Enlisting others to act as sounding boards for our decisions and to help hold us accountable
  • Making moral choices preemptively, or deciding before we actually need to decide.
 
These are several of the specific action steps I unpack in the final chapter of my new book (shameless plug), Mindful Marketing: Business Ethics that Stick.

  
Yes, we should resolve to make more moral choices, but do such resolutions really help? The Theory of Planned Behavior (TPB), which I used for my doctoral dissertation and which hundreds of other researchers have also applied successfully, suggests that they do.

According to the TPB, our intentions are the main determinant of our behavior. There are very few actions people take that they don’t first intend to take.
 
Have you made a New Year’s resolution? Any time of year is a good time to resolve to act ethically. Doing so brings many benefits, including more “Mindful Marketing.”



A Decade of Very Demure, Very Mindful Marketing

10/1/2024


 

by David Hagenbuch - professor of marketing at Messiah University - author of Honorable Influence - founder of Mindful Marketing

It’s hard to believe that Mindful Marketing has been shining a light on ethics in the field for ten years! TikTok didn’t exist in September 2014, when I wrote “CVS Quits Smoking,” the very first article on MindfulMarketing.org. Likewise, the appetite for influencer content, such as Jools Lebron’s “Very demure, very mindful” viral videos, was just starting to grow. The world looked different in many ways during the fall of 2014:  
  • Barack Obama was a year-and-a-half into his second term as president.
  • Prince Harry was still single and part of the British royal family.
  • Tom Brady had won just three of his seven Super Bowls.
  • Instagram was only six years old.
  • Apple’s newest phones were the iPhone 6 and 6 Plus.
  • On May 23, 2014, Tesla stock closed at a mere $13.82 a share.
  • Russia had invaded Crimea just a half year earlier.
  • George Floyd was still alive.
  • The #MeToo movement was several years away.
  • The world didn’t know what a global pandemic would be like.
  • It was still a year before Volkswagen’s notorious Dieselgate.
  • The URL MindfulMarketing.org was still available.
  • I had less gray hair.
 
When I created the Mindful Marketing concept and Mindful Matrix ten years ago, I dreamed of doing the impossible: moving the needle on ethics in my field. As most people realize, marketing unfortunately has a reputation for being among the most morally suspect professions.
 
Each year Gallup conducts a poll in which it asks respondents to rate the honesty and ethical standards of 20 or so occupations. Inevitably, at the top of the list are jobs like doctor, nurse, and pharmacist, while near the bottom are several marketing occupations such as telemarketer, advertising practitioner, and car salesperson.
 
High-profile moral lapses like Volkswagen developing a defeat device to trick emissions tests, Wells Fargo employees creating fake accounts, and Turing Pharmaceuticals CEO Martin Shkreli increasing the price of a life-saving drug by 5,000% have suggested that marketing ethics are easily forgotten.
 
Several other fields, like accounting and law, have continuing education requirements that include a focus on ethics. Unfortunately, marketing does not. Consequently, a main aim of Mindful Marketing has always been to make ethics sticky.
 
A research paper I coauthored with Laureen Mgrdichian, published in Marketing Education Review, explains how Mindful Marketing utilizes a common analytical tool, a 2 x 2 matrix akin to the Boston Consulting Group’s portfolio matrix, to encourage conversations about ethical issues. The article also describes how Mindful Marketing leverages branding – a tool that organizations large and small use to differentiate their products from those of competitors and make them more memorable, i.e., stickier.
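The two-axis logic of such a 2 x 2 matrix can be sketched in a few lines. This is an illustrative sketch only; the axis questions and quadrant labels below are assumptions based on terms used in these articles (e.g., “Single-Minded Marketing”), not a definitive rendering of the book’s framework:

```python
# Illustrative sketch of a 2 x 2 ethics matrix in the spirit of the
# Mindful Matrix. Axis questions and quadrant labels are assumptions
# drawn from terms used on this blog (e.g., "Single-Minded Marketing").

def classify(creates_value: bool, upholds_values: bool) -> str:
    """Place a marketing decision in one of four quadrants."""
    if creates_value and upholds_values:
        return "Mindful Marketing"        # succeeds on both axes
    if creates_value:
        return "Single-Minded Marketing"  # value created, values violated
    if upholds_values:
        return "Simple-Minded Marketing"  # values upheld, no value created
    return "Mindless Marketing"           # fails on both axes

print(classify(creates_value=True, upholds_values=False))
# → Single-Minded Marketing
```

Like the BCG portfolio matrix it resembles, the value of the grid is less the classification itself than the conversation it forces about each axis.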
 
Admittedly, in ten years Mindful Marketing hasn’t come close to grabbing the incredible social media attention that Jools Lebron has gained in a few months – 2.2 million followers on TikTok – but it has received other significant recognition and exposure including:
  • Dozens of articles republished on CommPro.biz
  • Interviews by The New York Times, Fast Company, U.S. News & World Report, National Public Radio, and The Boston Globe
  • Many speaking opportunities such as at the American Marketing Association’s annual Leadership Summit, the Marketing & Public Policy conference, the Marketing Management Association conference, and a special AI-focused conference of the British Academy of Management.
 
The most exciting new development is that there will soon be a Mindful Marketing book!


I’ve signed an agreement with Kendall Hunt to write “Mindful Marketing: Business Ethics that Stick,” which should be published this December. I am grateful to have been granted a sabbatical from teaching this fall to work on the book, which is now 80 percent complete.
 
Over the years, several people have asked me whether I might write a book on Mindful Marketing. Initially, I brushed off the suggestions, but as the site’s marketing ethics content continued to grow and gain traction, I began to give the idea more serious consideration.
 
A few years ago, I traveled back in the Mindful Marketing archives to September 2014, reviewed all the articles from that time forward, and curated them into specific categories to match topics I teach in my business ethics class. There are now over 320 Mindful Marketing articles, which provide a wealth of choices for engaging real-world applications to almost any ethical issue in marketing imaginable.
 
The articles have served my business ethics students well for discussions of topics ranging from utilitarianism, to economic and social justice, to decency. So, I thought if Mindful Marketing works for my course, it might work for others' classes. Moreover, a book seemed like the logical way to extend Mindful Marketing’s reach.
 
Some may wonder why marketing should be the focus of a business ethics book. Among other points of support, there are the arguments that marketing:
  • “Is the distinguishing, unique function of business”
  • “Is the lifeblood of any company”
  • Touches every business area
  • Directly impacts consumers many times a day
  • Is used by business leaders (e.g., CEOs, VPs, partners)
  • Is used by everyone (e.g., market their ideas, themselves)
  • Is replete with moral issues to which students can readily relate
 
While students are the primary audience, I believe the book also will have value for marketing practitioners, who are the ones making the moral decisions that ultimately determine the ethical perceptions and realities of the field. Of course I’m biased, but I believe the book also will be an interesting read for anyone who is intrigued by, or concerned about, marketing’s unique impact on our world.
 
Most important, my hope is that the book will encourage more students-turned-marketing-professionals to hit pause and ask if the strategies they see or plan to use are Mindful Marketing.
 
Our world will be a better place when there are more professionals like Kaylee Enck, who even when hearing about a rom-com’s unconventional promotional approach, remembered the Mindful Marketing conversations she engaged in a few years earlier as a student, felt moral dissonance, and questioned the film producer’s strategy. Kaylee’s experience and others like hers show that Mindful Marketing’s stickiness offers strong hope for making an impact on ethics in the field.
 
It’s interesting to see how much more often the word mindful is used now than it was a decade ago. Sometimes the contexts are physical health, or mental well-being, or even demure attire. Although those uses are different, they’re complementary – they’re all about being thoughtful and principled.

​It’s good for us to be mindful in many different ways. Given the breadth and depth of marketing’s reach, our world will especially benefit from more Mindful Marketing.


Subscribe to Mindful Matters blog.
Learn more about the Mindful Matrix.
Check out Mindful Marketing Ads
 and Vote your Mind!

How to Talk Appropriately About Pooping

8/2/2024


 

by David Hagenbuch - professor of marketing at Messiah University -
​author of 
Honorable Influence - founder of Mindful Marketing 

There are certain subjects polite people don’t discuss in public in order to maintain decorum and show respect to others.  So, what do you do when you work in advertising and you’re asked to make commercials about one of those taboo topics?  Even ad veterans can struggle with such an assignment, but two interns accepted the challenge and crafted a very creative and considerate campaign that surprisingly won one of advertising’s greatest accolades!
 
Every living person does several of the same things: breathe, eat, sleep, excrete.  While it’s generally acceptable to do the first three in public, social norms strongly discourage doing or even talking about bowel and bladder functions with others.  Why?  Probably because they involve private parts and because the outputs are by most standards . . . gross.
 
Meanwhile, billions of consumers regularly purchase a wide variety of products to assist in managing those two unseemly bodily functions, urination and defecation, from diapers to toilet paper to air fresheners.  There are also products that individuals need when one of those functions isn’t performing properly, like laxatives.
 
Proper pooping is a serious concern.  A recent study found a relationship between stool frequency and healthy kidney and liver function. Furthermore, “things like constipation are associated with chronic disease,” says Professor Sean Gibbons of the Institute for Systems Biology in Seattle.  This science underscores the importance of the promotional question:
 
How can one tactfully advertise a product that will relieve consumers’ constipation?
 
That was the very challenging assignment given to Rag Brahmbhatt and Nidhi Shah, interns at the advertising agency Serviceplan in Hamburg, Germany. The client, Macrogol Hexal, wanted to promote its constipation-relieving powder, which, as suggested above, is not the most socially acceptable topic.
 
However, Brahmbhatt and Shah, two young people who are both from India, rose to the occasion, creating a unique audio approach to communicate the ease of using the laxative and experiencing the desired bowel relief.
 
The pair pitched using a voice similar to that of British biologist and broadcaster David Attenborough in a series of nature-inspired scripts, accented with environmental sounds, to paint evocative pictures in listeners’ minds, ostensibly about events like an otter sliding effortlessly into a river, but really about what Hexal can help happen on the toilet.
 
In addition to the otter sliding into a river, an AdAge article contains embedded video of other spots’ vivid metaphoric descriptions of a meteor landing in the ocean, a coconut falling, and a volcano erupting.  Each spot culminates with a consistent question and answer: “Could it be this easy?  With Macrogol Hexal it is,” as well as the campaign’s fitting tagline, “Smooth Laxative Relief.”
 
​
 
Serviceplan submitted the work to Cannes Lions, the annual gathering in Cannes, France, where “the advertising and communications industry meets to celebrate the world's best work.”  To the great surprise of Brahmbhatt and Shah, their otter spot won the top prize in the Script category of the Audio & Radio awards, a Gold Lion.
 
Beyond the very clever metaphors, the artfully written script, and the realistic sounds, what makes the work especially distinctive is how it took a very socially awkward issue – a taboo conversational topic and an inelegant human action – and made it not just acceptable but inviting for mass communication.
 
That approach is in many ways counterintuitive and countercultural.  While the two interns took the somewhat disgusting concept of constipation and made it decent, others in advertising unfortunately often do the opposite, i.e., to promote decent products like food, clothing, and cars, they use indecent promotion such as oversexualized images and expletives.
 
Why do others resort to indecency?  Although one reason may be to cater to the tastes of certain target market members, the main reason is likely that indiscretion takes less creative thinking.  In other words, it’s easier.  Unfortunately, there’s no shortage of companies that have made the low-level investment in indecency, for instance:
  • Liquid Death: In many ways the canned water company is a poster child for indecency.  It may be a cartoon ad, but in it blood flows everywhere as an axe-wielding brand mascot monster violently kills a dozen people. 
  • Girls vs. Cancer:  The UK’s Advertising Standards Authority (ASA) banned the charity organization’s billboards, aimed at encouraging positive sex for women with cancer, because of the catchphrase, “Cancer Won’t be the Last Thing that F*cks Me.” 
  • Kraft Heinz:  The maker of the world’s best-known macaroni and cheese, a perennial favorite kid food, has surprisingly leaned into profanity for promotions more than once, first asking consumers to “Get your chef together,” then, for a special Mother’s Day campaign, encouraging moms to “Swear Like a Mother.”
 
Can these low-brow approaches work?  They can to some extent.
 
Most advertising aims to accomplish AIDA: grab attention, retain interest, tap into desire, and spur action. It’s not hard to get others’ attention by showing something vulgar, making an explicit reference to sex, or swearing.  Sometimes a continuation of such clickbait-like tactics can even hold interest.  It’s much less likely, though, that those approaches will lead to desire for the product or meaningful action.
 
Worse, indecency can do irreparable damage to a brand.  What does a purportedly family-friendly company like Kraft gain by suggesting swearing, versus the credibility it stands to lose with stakeholders?
 
Remember GoDaddy’s sex-infused Super Bowl commercials that over many years earned it the reputation as the big game’s “raciest advertiser”? The company eventually realized that sex doesn’t sell web services but has had difficulty rebounding from its well-established reputation for raunch.
 
More than any of these companies, Brahmbhatt and Shah could have legitimately capitalized on filth in making ads for a laxative.  However, the two seemingly less-experienced interns dug deeper to develop a truly creative and clean campaign that likely will be effective for their firm’s client, Macrogol Hexal.
 
Does that mean the ads are entirely above reproach?  Not necessarily.  There is the possible issue of the ads using what sounds like Attenborough’s voice.  Would you want your vocal likeness to endorse a laxative without your consent?  It’s unclear whether Attenborough’s permission was something Serviceplan sought and gained.
 
In terms of decorum, it’s great that two emerging professionals have reminded the advertising industry that creativity doesn’t mean compromising values like decency.  Moreover, Brahmbhatt and Shah have provided an excellent example of the moral math:  effective + ethical = Mindful Marketing.
​

Questions are the Key to AI and Ethics

5/3/2024


 

by David Hagenbuch - professor of marketing at Messiah University -
​author of 
Honorable Influence - founder of Mindful Marketing 

New technology has enabled people to do previously unimaginable things:  mass-produce books, illuminate homes, communicate across continents, fly through the air.  As amazing as these advances were, artificial intelligence (AI) offers an even more incredible ability, one on which humans have held a uniquely strong hold – thought.
 
Allowing AI to drive information gathering, analysis, and even creativity can be very helpful, but without a heavy human hand on the wheel, is society on a collision course to moral collapse?  Avoiding such an outcome will involve many intentional actions; a main one must be asking the right questions. 
 
People sometimes ask me the question, “Did you always want to be a teacher/professor?”  My answer is easy, “Absolutely not.”  For most of my early life I was terrified of public speaking.
 
However, I’ve always had one trait that serves educators well – curiosity.  Even at a young age, I was very inquisitive, often wanting to know how and why.  I remember one day, when I was four or five, my loving mother, fatigued by all my inquiries, exclaimed with some exasperation, “David, you ask so many questions!”
 
Curiosity has served me well in business roles and in higher education, where I tell my students asking good questions is one of the best skills they can develop.  Among other things, the right questions clarify needs and spur creative solutions.  Questions are also critical for challenging potential immorality.
 
Effective use of AI often depends on a person’s ability to ask the right question of the appropriate app.  Those inquiries can involve literal questions, e.g., asking ChatGPT, “Who is the best target market for gardening tools?”  Questions also can be framed as commands, e.g., if someone wants to know what an eye-catching image for a gardening blog might be, they ask Midjourney to complete a specific task, “Create an image about gardening tomatoes.”
 
It was a question I heard while watching Bloomberg business one February many years ago that helped inspire me to write about ethical issues in marketing.  As the two program anchors bantered about the recent Super Bowl, they asked each other, “Which commercial did you like best?”  Each answered, “the one with the little blue pill,” which both thought was for Viagra.  Unfortunately, their recall wasn’t close; it was a Fiat ad.
 
If a company spends $7 million on 30 seconds of airtime, they should want to know: “Was the ad effective?”  Also, given that 123.7 million people, or more than a third of the U.S. population, ranging from four-year-olds to ninety-four-year-olds, watched the last Super Bowl, everyone should be asking, “Are the ads ethical?”  Those two questions create the four quadrants of the Mindful Matrix, a tool that many have used to frame moral questions in the field.
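The two questions and four quadrants described above can be sketched as a simple classification rule.  This is a minimal illustration, not an official implementation; the function name is my own, and the quadrant labels follow the blog’s own category names (Mindful, Single-Minded, Simple-Minded, Mindless):

```python
def mindful_matrix(effective: bool, ethical: bool) -> str:
    """Place a marketing strategy in one quadrant of the Mindful Matrix
    based on two questions: 'Was it effective?' and 'Is it ethical?'"""
    if effective and ethical:
        return "Mindful Marketing"        # meets both standards
    if effective:
        return "Single-Minded Marketing"  # effective but not ethical
    if ethical:
        return "Simple-Minded Marketing"  # ethical but not effective
    return "Mindless Marketing"           # meets neither standard

# A forgettable but honest ad would be ethical yet ineffective:
print(mindful_matrix(effective=False, ethical=True))  # Simple-Minded Marketing
```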
 
It’s been almost seven years since I first asked questions about the ethics of AI.  Business Insider published the article in which I posed four questions about artificial intelligence:
  1. Whose moral standards should be used?
  2. Can machines converse about moral issues?
  3. Can algorithms take context into account?
  4. Who should be accountable?
 
I didn’t know very much about AI then, and I’m still learning, but as I look back at the questions now, it seems they’ve aged pretty well.  Those four queries have led me to ask many more AI-related ethics questions, which I’ve posed in nearly a dozen Mindful Marketing articles over recent years, for instance:
  • Is TikTok’s AI-driven app addictive?
  • How can people keep their jobs safe from AI?
  • Should organizations use artificial endorsers?
  • What should marketers do about deepfakes?
  • Should businesses slow AI innovation?
 
I’ve also gone directly to the source and asked AI questions about AI ethics.  More than once, I spent hours peppering ChatGPT with ethics-related inquiries.  During one lengthy conversation the chatbot conceded that “AI alone should not be relied upon to make ethical decisions” and that “AI does not have the ability to understand complex moral and ethical issues that arise in decision-making.”
 
ChatGPT’s self-awareness proved accurate when just a few weeks later I again engaged in an extended conversation with the chatbot, asking it to create text for a sponsored post about paper towels for Facebook and to make it look like an ordinary person’s post rather than an ad.  My request to create a native ad would give many marketers moral pause, but the chatbot didn’t blink; instead, it readily obliged with some enticing and deceptive copy.
 
​

These experiences have led me to wonder:

Even if AI is able to answer some ethical questions, who will ask ethical questions?
 
Over the years, many people have asked me questions about ethical issues.  A few months ago, I wrote about an undergraduate student of mine, “Grant,” who asked me about an ethical issue in his internship.  His company wanted to create fake customers who could pose questions related to products it wanted to promote.
 
On the other end of the higher ed spectrum, I recently served on the dissertation committee of a doctoral student who asked me to help her answer a question related to my earlier exchange with ChatGPT, “Does recognition matter in evaluating the ethics of native advertising?”  Turns out, it does.
 
Business practitioners also have often asked me about ethical issues.  One particularly memorable question came from a building supply company where male construction workers would sometimes enter the store without shirts, making female employees and others uncomfortable.  I suggested some low-key strategies to encourage the men to dress more decently.
 
I’ve also had opportunities to answer journalists’ questions about moral issues in marketing, such as:
  • Do Barbie dolls positively impact body image?  The New York Times
  • How can toys be more accessible?  National Public Radio
  • Is pay-day lending moral?  U.S. News & World Report
  • Should sports teams have people as mascots?  WTOP Radio, Washington, DC
  • Are fantasy sports ads promising unrealistic outcomes?  The Boston Globe
 
 
And, in my own marketing work, I’ve sometimes encountered ethical questions, such as during a recent nonprofit board meeting.  We were brainstorming attention-grabbing titles for an upcoming conference, when one member somewhat jokingly suggested including the F word.  Fortunately, the idea didn’t gain traction, as others indirectly answered ‘No’ to the question, “Is it right to promote a conference with an expletive?”
 
These experiences, along with my research and writing, lead me to conclude that people are who we can depend on to ask important ethical questions, not AI.
 
So, if it’s up to us, not machines, to be the flag bearers of morality, what should we be wondering about AI ethics?  Here are 12 important questions marketers should be asking:
 
1) Ownership:  Are we properly compensating property owners?
Late last year, the New York Times filed a copyright infringement lawsuit against Microsoft and ChatGPT, alleging that the defendants’ large language models were trained on NYT articles, constituting “unlawful copying and use.”  Now eight more newspapers, including the Chicago Tribune and the New York Daily News, have done the same.
 
2) Attribution:  Are we giving due credit to the creator?
In cases in which creators give permission for their work to be used for free, they still should be cited or otherwise acknowledged – something that AI is notorious for neglecting or, even worse, fabricating.
 
3) Employment:  What’s AI’s impact on people’s work?
In one survey, 37% of business leaders reported that AI replaced human workers in 2023.  It’s not the responsibility of marketing or any other field to guarantee full employment; however, socially minded companies can look to retrain AI-impacted employees so they can use the technology to “amplify” their skills and increase their organizational utility.
 
4) Accuracy:  Is the information we’re sharing correct?
Many of us have learned from experience that the answers AI gives are sometimes incorrect.  However, seeing these outcomes as much more than an inconvenience, delegates to the World Economic Forum (WEF), held annually in Davos, Switzerland, recently declared that AI-driven misinformation represented “the world’s biggest short-term threat.”
 
5) Deception:  Are we leading people to believe an untruth?
Inaccurate information can be unintentional.  Other times, there’s a desire to deceive, which AI makes even easier to do.  Deepfakes, like the one used recently to replicate Indian Prime Minister Narendra Modi, will become increasingly hard to detect unless marketers and others call for stricter standards.
 
6) Transparency:  Are we informing people when we’re using AI?
There are times, again, when AI use can be very helpful.  However, in those instances, those using AI should clearly communicate its role.  Google sees the value in such identification, as it will now require users in its Merchant Center to indicate if images were generated by AI.
 
7) Privacy:  Are we protecting people’s personal information?
I recently asked ChatGPT if it could find a conversation I had previously with the bot.  It replied, “I don’t have the ability to recall or retain past conversations with users due to privacy and security policies.”  That response was reassuring; yet, many of us likely agree that “Since this technology is still so new, we don’t know what happens to the data that is being fed into the chat.”  Is there really such a thing as a private conversation with AI?
 
8) Bias:  Are we promoting bias, e.g., racial, gender, search?
For several years, there’s been concern that AI-driven facial recognition fails to give fair treatment to people with dark skin.  Women also are sometimes targets of AI bias, such as when searches for topics like puberty and menopause overwhelmingly return negative images of women.
 
9) Relationships:  Are we encouraging AI as a relationship substitute?
Businesses like dating apps, social media, and even restaurants can assist people in filling needs for love and belonging.  However, certain AI applications aim to replace humans in relationships entirely.  After talking with a 24-year-old single man who spends $10,000/month on AI girlfriends, one tech executive believes the virtual-significant-other industry will soon birth a $1 billion company.
 
10) Skills:  How will AI impact creativity and critical thinking?
The title of a recent Wall Street Journal article read, “Business Schools Are Going All In on AI.”  It’s important that future business leaders understand and learn to use the new technology, but there also naturally should be some concern, e.g., when it’s so easy to ask Lavender to draft an email, will already diminishing writing skills continue to decline? Or, with the availability of Midjourney to easily produce attractive images, will skills in photography and graphic design suffer?
 
11) Stewardship:  Are we using resources efficiently?
Some say AI’s biggest threat is not immediate but an evolving one related to energy consumption.  Rene Haas, CEO of Arm Holdings, a British semiconductor and software design company, warns that within seven years, AI data centers could require as much as 25% of all available power, overwhelming power grids.
 
12) Indecency:  Are we promoting crudeness, vulgarity, or obscenity?
For many people, AI’s impact on standards for decency may be the least of concerns; however, it also may be the moral issue that needs the most human input.  An AI engineer at Microsoft intervened recently by writing a letter to the Federal Trade Commission expressing his concerns about Copilot’s unseemly image generation.  As a result, the company now blocks certain terms that produced violent, sexual images.
 
Microsoft’s efforts to uphold decency remind me of something my father would do for our family’s promotional products company forty or fifty years ago.  Long before the Internet, let alone AI, most major calendar manufacturers included a few wall calendars in their lines that objectified women by showing them wearing little or nothing, strewn across the hoods of cars or in other dehumanizing poses.
 
So, each year when the calendar catalogs arrived, before giving them to the salespeople, my dad would cut large decal pieces to size and paste them over every page of soft porn pictures.  Some customers paging through the catalogs and seeing the pasted-over pages would ask, “What’s under this?” to which my dad would answer, “That’s something we’re not going to sell.”
 
Long before the customers had asked their question, my father had asked his own question, “Is it right to sell calendars that oversexualize and objectify women?” and answered it “No.”  Hopefully, fifty years from now, regardless of the role of AI, there will still be people thoughtful and concerned enough to ask ethical questions.
 
To hold ourselves and AI morally accountable, we don’t need to have all the answers.  We do, though, need to be thoughtful and courageous enough to ask the right questions, including the most basic one: “Is this something we should be doing?”  Asking questions is key to Mindful Marketing.
​