Founder of Mindful Marketing and author of Honorable Influence and Mindful Marketing: Business Ethics that Stick
Individuals and organizations are rapidly embracing AI to enhance productivity, from personalizing emails to providing customer service, optimizing delivery routes, predicting machine maintenance, and trading stocks. In fact, several of the AI examples in the last sentence came courtesy of ChatGPT.
A financial sign of AI’s rocketing popularity is the report that OpenAI, ChatGPT’s parent, expects its revenue to triple this year to $12.7 billion. That expectation likely stems in part from the current U.S. administration’s promised $500 billion investment in AI infrastructure in an industry partnership called Stargate.
It’s not surprising that AI has come so swiftly into widespread use. Criteria that predict how fast consumers adopt new products, or how quickly they diffuse into the market, suggest rapid acceptance of AI:
- Relative advantage: Compared to the time and effort it takes to draft a report, create a complex image, etc., AI is much quicker, giving it a great economic advantage.
- Compatibility: AI tools like ChatGPT work well with many of the productivity tools we already use, such as our smartphones’ apps, and the new technology is increasingly integrated directly into other tools.
- Observability: AI is easy to see around us, from voice assistants (Siri, Alexa), to autocomplete functions (Messages, Word), to map apps (route optimization and traffic updates). We can often observe friends, family, and coworkers using those tools. The challenge, if any, is to realize that those commonplace applications are AI.
- Complexity and Trialability: Although AI is among the most sophisticated technologies humans have ever created, it is very easy to use — often as simple as typing or speaking a command. It’s also easy to experiment with many basic AI tools; several chatbots, including ChatGPT, Claude, and Copilot, offer free versions.
In sum, AI helps individuals and organizations accomplish two of life’s most prized goals: to work more effectively and efficiently. Beyond that practicality, many AI applications are exciting and fun. Some possess a jaw-dropping wow-factor that makes one wonder how the technology can do something so challenging so fast.
But just as too much candy can be bad for one’s teeth, too much AI is proving problematic for some of its users, as well as for individuals who barely know about it.
Even as many individuals and organizations dive headlong and uninhibited into AI, many others feel some, if not much, dissonance about its use. A recent Writer/Workplace survey of knowledge workers, which included 800 C-suite leaders and 800 lower-level employees, found a wide disparity in perceptions of generative AI. For instance:
- 77% of employees using AI indicated that they were an “AI champion” or had potential to become one.
- 71% of executives indicated there were challenges in adopting AI.
- More than 33% of executives said AI has been “a massive disappointment.”
- 41% of Gen Z employees were “actively sabotaging their company’s AI strategy.”
- About 67% of executives reported that adoption of AI has led to “tension and division.”
- 42% of executives indicated that AI adoption was “tearing their company apart.”
Why did AI produce so much angst for these research participants? Unfortunately, the article summarizing the study’s findings didn’t identify the causes; however, I have good guesses about some of the reasons.
In May 2024, I wrote “Questions are the Key to AI and Ethics” which identified a dozen areas of moral concern related to AI use: Ownership, Attribution, Employment, Accuracy, Deception, Transparency, Privacy, Bias, Relationships, Skills, Stewardship, and Indecency.
Looking back 10 months later, a long time in the life of technology, it seems the list has aged well, unfortunately. There are increasingly pressing concerns in each of the areas, such as:
- Ownership, Attribution, Employment: Google and OpenAI recently asked the White House “for permission to train AI on copyrighted content.” Over 400 leading artists, including Ron Howard and Paul McCartney, signed a letter voicing their disapproval.
- Stewardship: AI is notoriously an “energy hog” whose data centers require far more electricity than their predecessors’. Jesse Dodge, a research analyst at the Allen Institute for AI, shared that “One query to ChatGPT uses approximately as much electricity as could light a lightbulb for about 20 minutes.” Energy production for AI is the reason Microsoft has signed a deal to reopen the infamous nuclear power plant Three Mile Island.
- Bias, Indecency: In his article, “Grok 3: The Case for an Unfiltered AI Model,” Shelly Palmer compares AI models that learn from sanitized datasets to xAI’s Grok 3, which has an “unhinged” mode that doesn’t restrict “harmful content—adult entertainment, hate speech, extremism.” Using the opening metaphor, Grok 3 seems like a wide-open candy shop with no adult supervision.
Certainly, some people have practical inhibitions about AI because they’re not sure how, when, or why to use it. Others, though, likely have moral concerns, including the ones above. I believe much of that AI dissonance stems from values embedded in every person, regardless of their worldview: principles that include decency, fairness, honesty, respect, and responsibility.
Granted, we don’t see these values in everyone all the time, but they’re there. Rational people know it’s indecent to show sexually explicit material in public, it’s dishonest to lie, it’s unfair to steal, etc. So when they see AI generating indecent content, creating misleading deepfakes, or appropriating others’ intellectual property, those innate values rightly spur feelings of unease.
So, back to the question that opened this piece: Who will keep rapidly advancing AI in moral check? Here are those influencers in reverse order of impact:
5) AI Itself: Over time, and if trained on the right types of data, AI may become better at identifying and addressing moral issues. In my experience, however, although the technology is good at answering questions, it’s ill-equipped to ask them, especially ones involving ethical issues.
4) Laws: Clear-thinking senators and representatives often enact legislation that’s in the public’s best interest. However, given the time it takes to envision, propose, and pass such laws, they inevitably lag behind the behavior they aim to constrain, especially when the actions involve fast-moving tech.
3) Industry Associations: These organizations play useful roles in identifying opportunities and challenges that face their members. It takes time, but they often craft values statements and related documents that can help guide moral decision-making. Unfortunately, though, their edicts usually can’t be enforced the ways governments’ laws can, so compliance may be minimal.
2) Organizations: When they want to, businesses and other types of organizations can make decisions quickly. Morally grounded leaders can create policies to promote ethical behavior. The challenge is that even this guidance may not be specific enough for new or very nuanced moral dilemmas, and it’s usually impossible to speak into every action as it occurs.
1) Individuals: They are able to address issues as they occur and can be specially equipped for those ethical challenges. When moral issues arise, they are the ones who can and must hit pause and ask, “Yes, AI can do this, but should it?”
Rational, principle-driven people who embrace their innate senses of decency, fairness, honesty, respect, and responsibility can quickly question AI's potential ethical encroachment as they see it and pump the brakes on strategies that seem likely to violate one or more of these values.
In the candy store that is AI, each of us needs to be the adult in the room. While we need to understand and encourage the many good things AI offers, we also need to know when to say, “That’s enough.” Ensuring that AI rightly serves humanity makes for Mindful Marketing.
Learn more about the Mindful Matrix.
Check out the book, Mindful Marketing: Business Ethics that Stick.