“Write an introduction for a report on the sale of a bakery to a private equity firm.”
“Draft a 500-word speech for the CEO of a software company that celebrates year-end results.”
“Create an ad campaign on a new dog food product…. Now make it funny.”
Each of these requests can be fulfilled within seconds, with flawless grammar and writing structure.
Artificial intelligence is software programmed to “think” intelligently. Trained on enormous amounts of information, it “learns,” processes new data and draws conclusions from what it is given.
From a business and professional services perspective, AI applications can save hours. They can research a topic, summarize the findings into a report, synthesize that report into a press release, and then further condense it into social media content. But as businesses from health care and finance to agriculture and marketing embrace the efficiency and cost savings of AI tools, they must consider the reputational risks.
It is important to note that there are many AI apps and algorithms with differing capabilities. Some tools are for video, others for image development, coding or written content. The following are risks associated with businesses using AI to generate written content.
Incorrect conclusions based on false or fictitious information.
Generative AI gathers a massive set of inputs found online and creates a new response that sounds uncomfortably similar to the most intelligent humans.
While AI is quickly improving its ability to filter misinformation—or qualify uncertain or controversial content—there are cautionary tales highlighting the value of expert review and contextual understanding of a situation.
A lawyer who relied on ChatGPT to prepare a court filing in a suit against an airline found themselves in a precarious position when the AI invented court cases that did not exist and asserted that they were real.
Poor assumptions based on limited context.
Strong spokespersons know to avoid bluffing: it is only a matter of time before the facts are exposed and credibility is lost. But it seems AI does not always have the same discipline.
We asked ChatGPT to “generate a quote from Nike about the strike in front of Yorkdale Mall,” and it produced the following:
"At Nike, we recognize and respect the right of individuals to peacefully express their opinions. We are committed to engaging in open dialogue and working toward resolutions that uphold the values of fairness, respect, and inclusivity. Our focus remains on fostering positive change and creating opportunities that empower athletes and communities alike."
Note: Both the scenario and the strike were fictitious. But had the strike been real, the quote ChatGPT produced rests on dangerous assumptions. In this case, it asserts:
- The strike is connected to Nike.
- There is a problem between management and employees.
- The strike was in response to unfair and inequitable employee treatment.
AI is a creative storyteller that combines true and false information to fill in details. AI is unafraid of bluffing as it cannot feel the potential consequences. It most certainly is not a dependable, risk-preventing corporate spokesperson.
Missed opportunity to connect with the target market.
AI uses language processing tools that generate, classify, and summarize text. When millions of dollars have been invested in building a well-established online brand, AI does a good job crafting content that could pass as human-developed. But can it produce an authentic voice for brands in niche sectors with smaller promotional budgets? We tested this question on our own brands to see whether what AI produced properly reflected our services and the needs of our clients.
When asked to create an Instagram post about Wendolyn Reputation Management, AI produced a post that could jeopardize the firm’s reputation and waste business development dollars.
“At Wendolyn Reputation Management, we understand the importance of maintaining a positive online presence. 🌐💼 Whether you're an individual or a business, your reputation can greatly impact your success. That's why we're here to help!”
The AI-generated post makes potential clients think that Wendolyn Reputation Management:
- Focuses primarily on online reputations. Untrue.
- Targets individuals as well as businesses. Untrue.
- Is a colloquial-sounding brand that uses cringy emojis and overused exclamation marks over carefully crafted descriptors. Untrue!!!!
Knightlabs provides market research and social media marketing for health and wellness organizations. Did AI perform any better when asked to write a Tweet about its services?
No. Critical details about the company, such as contact information, were incorrect. It also used inaccurate terminology regarding Knightlabs’ services. For example:
- “AI development” (i.e., creating AI applications) should be “using AI algorithms” for research and analysis.
- “Digital transformation” (i.e., embedding technologies across the business) should be “digital marketing.”
AI is great at picking up industry buzzwords but less adept at applying them in the proper context. And, once again, AI loves its emojis, running the risk that Knightlabs’ clients will think the company is either stuck in time or run by a group of entrepreneurial, geeky teens.
With this understanding, are the cost savings and efficiencies worth the reputational and business risks? The answer comes down to a risk assessment that weighs the importance of the content against the consequences of getting it wrong.