In the year since ChatGPT exploded on the internet, the speed at which artificial intelligence (AI) and related technologies are developing and improving is head-spinning. A few significant highlights from 2023:

  • Google Bard, Microsoft Bing Chat AI and Apple's AI Chat, among others, came rapidly on the scene as competing chatbots.
  • President Biden recently issued an executive order to help manage AI risks and establish new standards for AI safety and security.
  • At least 25 states and the District of Columbia have introduced AI-related legislation since ChatGPT made its public debut, according to data from the National Conference of State Legislatures.

In the legal realm, AI is now a daily news topic and law firms are paying attention. In the past year, law firms have been forming AI task forces, establishing leadership roles to manage and train on AI, creating AI practice groups, and embracing these new and emerging technologies for everyday use.

For the first time, the International Legal Technology Association (ILTA) included an AI tools usage question in its annual survey, and found that 15% of firms reported using generative AI tools such as ChatGPT, Dall-E or Harvey for business-related tasks. That number will undoubtedly increase in next year's survey.

When we consider how the use of AI affects legal PR and communications, we have to look at it as an industrywide global phenomenon. A recent online conference, the Virtual AI in PR Summit, provided an overview of the latest AI trends in public relations, and specifically, the impact of AI on communications. Here are some of the key points and takeaways from several of the speakers, who provided current best practices, tips, concerns and case studies.

State of AI in PR Survey

Linda Zebian, senior director of communications and community at Muck Rack, provided an overview about how PR pros are adopting generative AI and their top concerns. She reviewed Muck Rack's recent State of AI in PR 2023 survey and its key findings:

  • For the first time, AI came in among the top five responses about successful skills needed for PR professionals, and 61% of public relations professionals currently use AI or are interested in using AI in their workflow.
  • Crafting pitches, writing press releases and writing social copy are the top three ways that PR pros currently use AI, although agency professionals are using it for research, strategy and planning purposes more than in-house or brand professionals.
  • Responses about the likely future use of AI suggest that it might be most effective for PR pros in research and list building, monitoring and measuring, and writing.
  • The top concerns expressed about using AI are that its output could be unscrutinized and lower the quality of conversations, followed closely by concerns that younger or newer PR pros won't learn the principles of the profession and will rely too heavily on it. Another top risk that AI poses to the PR profession, respondents said, was that clients might think that they don’t need human content creators anymore.

Because PR pros must provide journalists with ethical, accurate information, Zebian advised taking responsibility for what generative AI puts out before sending anything to the media or to a client.

AI and Global Public Relations

Johna Burke, CEO of the International Association for the Measurement and Evaluation of Communication (AMEC), spoke about the need for PR practitioners to focus on critical thinking about AI and understand its usage beyond borders.

In Europe, there is a movement to require PR agencies to disclose any AI usage to clients and describe how it's being used, Burke said. Europe and other regions are ahead of the curve on AI regulation; ChatGPT, for example, is blocked in Italy and China. Accountability for content and stricter copyright laws are driving this increased governance.

Uploading content that infringes copyright, or that contains confidential, proprietary corporate information, can be traced back to the individual who uploaded it. Burke offered words of caution, stressing the need for global strategists to understand the implications, risks and rewards of using AI technologies.

ChatGPT Potential

Erik Rolfsen, senior media relations specialist at the University of British Columbia, provided insights about harnessing the advanced potential of ChatGPT to elevate PR campaigns and strategies, and the importance of powerful prompts.

Rolfsen pointed out that ChatGPT is very good at generating ideas and suggesting content, headlines, questions and titles; creating structural outlines for articles, plans and presentations; and summarizing, proofreading and editing content that is fed into it. It will even create spreadsheet formulas for Excel.

He noted that ChatGPT is not good with numbers because it does not actually compute, and it can't detect sarcasm or irony. Its lack of real-time knowledge (at this time) and its tendency to "hallucinate," making up content out of thin air, are also barriers to accuracy.

Rolfsen stressed that when giving ChatGPT a prompt, clear directions are critical to getting the response you are looking for. He suggests:

  • Provide context in the instructions.
  • Explain the objective and the audience.
  • Be specific.
  • Include constraints or requirements.
  • Describe desired tone, word count or format.
  • Keep narrowing and revising the prompt based on each response until the answers become more helpful.
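Rolfsen's checklist can be illustrated with a short, hypothetical example. The helper below is an illustration, not part of his talk: it simply assembles the elements he names (context, objective, audience, constraints, tone and format) into one structured prompt you could paste into ChatGPT. All names and sample text are invented for demonstration.

```python
def build_prompt(context, objective, audience, constraints, tone, word_count):
    """Assemble a structured chatbot prompt from Rolfsen's suggested elements."""
    return "\n".join([
        f"Context: {context}",
        f"Objective: {objective}",
        f"Audience: {audience}",
        f"Constraints: {constraints}",
        f"Tone: {tone}. Length: about {word_count} words.",
    ])

# Hypothetical example for a law firm announcement:
prompt = build_prompt(
    context="Our midsize law firm is launching a new AI practice group.",
    objective="Draft a press release announcing the launch.",
    audience="Legal trade journalists.",
    constraints="Include a placeholder quote from the managing partner; avoid jargon.",
    tone="Professional but approachable",
    word_count=400,
)
print(prompt)
```

If the first response misses the mark, revise individual elements (a tighter objective, an added constraint) and resubmit, rather than starting from a blank prompt each time.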

Rolfsen also warned that if you pivot to a new topic within a prompt thread, ChatGPT won’t understand the change in direction. Be sure to open a new chat for each topic.

The Dark Side of AI

Dave Fleet, who oversees Global Digital Crisis at Edelman, touched on the darker side of AI, describing five macro-level threats and risks where it has been used for nefarious purposes.

  1. Misinformation and disinformation. Threats from disinformation are headlining news around the globe, most recently concerning the Israel-Hamas war. Users are sharing old, unrelated videos and fake news to stir outrage. Technology can create and manipulate hyper-realistic deep fake media, voices and images, making it difficult for consumers to differentiate between fact and fiction. Emerging tools, techniques and tactics are becoming exponentially more sophisticated, he said.
  2. Digital media manipulation. Fleet described the heightened risks of mass-generated inauthentic content as a close cousin of misinformation. This includes search engine manipulation, fake Amazon reviews and fake advocacy, such as fabricated company actions or contentious statements that could be mistaken for a valid corporate stance and have spurred boycotts and backlash.
  3. Cybersecurity. The growing use of AI is escalating cyber threats, which demand greater readiness and more sophisticated tools. For example, AI can produce personalized phishing messages in a familiar tone and voice, making it harder for individuals to recognize threats aimed at exploiting weaknesses.
  4. Copyright and ownership. Responsibility for AI-generated content remains ambiguous, and many lawsuits over its ownership are pending. In the U.S., a federal court has ruled that works created solely by AI are not eligible for copyright protection, and authors are suing AI platforms for using their material to train AI models. This is a volatile space where, without guidelines, inadvertent infringement may occur. Some companies are rolling out indemnification policies for AI use with regard to copyright. Fleet said it's becoming a key concern.
  5. Emergent crises. One example of an emergent risk is unintended bias. In higher education, for example, research this year has shown that AI text detector tools flag writing by non-native English speakers at much higher rates than material by native English speakers. Non-native speakers tend to use less complex language and simpler sentence structure, which are markers these detection systems are trained to flag.

Fleet provided several ways to detect and combat these negative threats:

  • Establish how you will handle the use and implications of generative AI.
  • Revisit crisis plans and pay particular attention to disinformation and cyber readiness for AI-fueled threats. “What was fit five years ago is not fit nowadays,” he said.
  • Explore new AI tools in communications to understand them.
  • Stay on top of latest developments.
  • Prioritize governance and proper protocols and training for using these platforms.

All these speakers agreed that AI makes many things faster, easier and more efficient, but emphasized that PR practitioners have a responsibility to clients for accuracy, transparency and clarity in all content, whether AI-driven or not. It’s critically important to verify and validate AI data and content, and not rely on it to replace your own critical thinking, strategy and planning. Keep learning about its burgeoning uses and stay aware of new developments and emerging threats. AI can be an exciting, fun and effective communications tool to support PR goals, when fully understood.

To learn more about AI in PR or for assistance with developing a comprehensive PR strategy, reach out to Vivian Hood at vhood@jaffepr.com.