Author of Honorable Influence, founder of Mindful Marketing, and author of Mindful Marketing: Business Ethics that Stick
A few months ago, a cringingly cute TikTok video went viral. In what seemed to be low-resolution surveillance footage from someone’s backyard, a collection of fun-loving bunnies playfully bounced on a large trampoline. Few things could be more wholesomely entertaining . . . or contrived.
The problem was that the rabbit roundup never really happened. Especially observant viewers noticed some non-lifelike peculiarities, e.g., a pair of ears protruding from one bunny’s backside and another rabbit disappearing mid-bounce. No, the video wasn’t real; rather, it was the product of Google’s Veo 3, a realistic, AI-driven video generator.
Most of us are familiar with deepfake videos, which have become increasingly common on social media. Based on my viewing habits, YouTube sends me a steady stream of short clips featuring animals I find fascinating: crocodiles, snakes, gorillas, and sharks. Some of the clips are real, but interspersed among them are ones too far-fetched to be actual animals, and, like the bunnies, they sometimes contain video abnormalities that point to fabrication.
For me, these animal videos are just entertainment, which may make the deceptive ones less problematic. In fact, for the purpose of entertainment, people often want to be deceived – every time we go to a movie, play, or musical, we pay to watch actors pretend to be people they’re not, in situations and settings that aren’t real. Most consider those kinds of mutual deceits morally acceptable.
However, to be “mutual,” the deceit should involve informed consent, meaning that the viewer (A) knows what they’re seeing is imaginary and (B) agrees to watch it. I believe YouTube’s animal videos uphold B but not A: I certainly agree to watch them, but I don’t always know whether what I’m seeing is real.
AI is an incredible tool with an ever-growing assortment of applications for individuals and organizations, including the creation of visuals like complex graphics, realistic photos, and convincing videos.
With any tool, especially one as powerful as AI, comes the duty to wield it responsibly. While many use AI with discernment, others don’t. Individuals in the latter group may have one or more of the following motivations, which range from relatively benign to troublingly malicious:
- Experimenting with the new tech
- Looking to gain likes and shares
- Charting a quick path to monetization
- Seeking to deceive and mislead
Unfortunately, the latter categories seem to produce new examples continually. For instance, deepfake investment schemes, which often combine forged images, voice, and video, have already become so pervasive that prominent organizations and institutions, including JP Morgan, the Securities and Exchange Commission, and the state of New York, have issued warnings and guidance.
As troubling as these carefully orchestrated schemes of the criminally minded are, the democratized use of deception by ordinary people in their daily lives is just as disturbing. One such broad-based indiscretion is employees’ use of AI to create fake receipts, for meals or entire business trips they never took, and submit them for reimbursement.
The use of AI to visually deceive is increasingly a temptation for everyone.
What can be done to stem the tide of misleading machine-generated optics? There’s no single solution; rather, individuals and organizations can start by embracing the following two approaches.
1. Set Standards: Rather than figuring things out after the fact, it’s almost always better to establish guidelines that proactively steer behavior in positive directions. In almost every area of life, we encounter such rules, which inform us of everything from how fast we can drive to which tax deductions we can claim. Why should AI use be any different?
In “Questions are the Key to AI and Ethics,” an article I wrote in May of 2024, I suggested several specific standards for AI use, including acknowledging and compensating the human creators from whose work AI borrows, protecting privacy, avoiding racial and gender bias, and respecting relationships. I also encouraged transparency in terms of informing people when AI is being used.
A leader in encouraging standards for visual creations is Adobe, a company renowned for digital design. The firm’s Content Authenticity Initiative (CAI) has been the impetus behind an open system for attaching provenance metadata to digital media, one that over 4,500 organizations have embraced. Adobe explains how its Content Authenticity app works:
“Like a nutrition label for digital content, Content Credentials provide creation information about who made the content and when, and what type of edits happened along the way. Unlike other provenance solutions, they’re built on a trust model wherein they’re securely attached to content and validated by the tool used to attach them. They create a verifiable record of the creative process, bring information to the forefront, and help people understand the origins of digital content.”
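To make the idea concrete, here is a minimal sketch in Python of the kind of information a Content Credentials-style provenance record captures. The field names and values are simplified assumptions for illustration only, not the actual C2PA schema, and a real credential would be cryptographically signed and validated by the tool that attaches it.

```python
import json
from datetime import datetime, timezone

# Illustrative provenance record: a simplified stand-in for a
# Content Credentials manifest, not the real C2PA schema.
provenance = {
    "title": "bunnies_on_trampoline.mp4",        # hypothetical asset name
    "claim_generator": "ExampleVideoTool/1.0",    # tool that attached the record
    "created": datetime.now(timezone.utc).isoformat(),
    "actions": [
        {"action": "created", "source_type": "AI-generated"},
        {"action": "edited", "detail": "color correction"},
    ],
    # In a real system this would be a cryptographic signature that lets
    # viewers verify the record hasn't been tampered with.
    "signature": "<added by the signing tool>",
}

print(json.dumps(provenance, indent=2))
```

The value of such a record is less in any one field than in the chain it documents: who made the content, when, and what happened to it along the way.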
2. Use Labeling: As the Adobe example suggests, one particularly important standard for AI-generated visuals is labeling. As with food, there’s not necessarily anything wrong with including hot and spicy ingredients in a dish, but a menu should provide an appropriate alert, so diners know what they’re going to consume.
Adobe’s Content Credentials encourage optional labeling. As of September 1, 2025, China has made AI labeling mandatory: audio and visual content distributed on Chinese platforms must carry both technical identifiers (e.g., metadata, watermarks) and visible labels (i.e., ones evident to average consumers).
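For creators who want to honor both kinds of disclosure, here is a minimal sketch using Python’s Pillow library: it stamps a visible “AI-generated” notice on an image and writes the same flag into the file’s PNG text metadata. The key names (ai_generated, ai_generator) and file names are illustrative assumptions, not fields required by any particular regulation or standard.

```python
from PIL import Image, ImageDraw
from PIL.PngImagePlugin import PngInfo

def label_ai_image(src_path: str, dst_path: str, notice: str = "AI-generated") -> None:
    """Add both a visible label and a technical identifier to an image."""
    img = Image.open(src_path).convert("RGBA")

    # Visible label: draw the notice in the lower-left corner so an
    # average viewer sees it without inspecting the file.
    draw = ImageDraw.Draw(img)
    draw.text((10, img.height - 20), notice, fill=(255, 255, 255, 255))

    # Technical identifier: record the same information as PNG text
    # metadata, readable by platforms and tools. Key names are illustrative.
    meta = PngInfo()
    meta.add_text("ai_generated", "true")
    meta.add_text("ai_generator", "example-model-name")

    img.save(dst_path, pnginfo=meta)

# Example usage (hypothetical file names):
# label_ai_image("generated.png", "generated_labeled.png")
```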
Should AI labeling be law? Ideally, such labeling happens through self-regulation, outside the legal process. That’s the kind of responsibility Pinterest showed last March when it decided to start labeling generative AI content. Given that Pinterest showcases many human-made items, like food and crafts, it’s especially helpful to know whether what’s pictured on its site is real.
As I’ve said before, ethics is a team sport that plays out best when all stakeholders commit themselves to doing what’s right and to supporting others in doing the same. In terms of AI-created visuals, two of the most important things team members can do are to 1) proactively set AI standards and, particularly, to 2) label AI-generated content so consumers know when they’re seeing it. That labeling might occur through a visible watermark or through provenance metadata stored in a data file header, a separate metadata file, etc.
People shouldn’t believe everything they see. They also shouldn’t need to suspend belief each time they see something new. Individuals and organizations that help consumers understand what’s real and what’s not are critical team players for creating Mindful Marketing.
Learn more about the Mindful Matrix.
Check out the book, Mindful Marketing: Business Ethics that Stick