How to create a generative AI policy your marketing team will actually use
While AI adoption has exploded, AI policies haven’t kept pace. Here's how to create one your team will follow.
Author: Colleen Weinkam & Diane Hammons
Last updated: 08/26/25
Generative AI tools are reshaping everything. But without the right guardrails, they can create risk. A clear AI policy helps your team explore safely. Here’s how to build one your team will actually follow.
If your healthcare marketing team is using generative AI tools — or even just curious about them — you’re not alone.
In the WG Content 2024 State of Content Planning survey, more than half of respondents (55%) said they use AI for content planning. That’s a significant number — and it likely underrepresents informal or “shadow” use across teams.
Yet while AI adoption has exploded, AI policies haven’t kept pace. “We often see organizations holding off on developing a policy because they’re trying to get it exactly right,” says Diane Hammons, director of digital engagement at WG Content. “But while they’re waiting, people are already using these tools. That’s a risky gap.”
If your team is thinking about creating a generative AI policy — or you’re unsure where to start — here’s a marketer-friendly guide, grounded in real-world experience and aligned with healthcare’s compliance needs.
Generative AI tools like ChatGPT, Perplexity and Claude are easy to access, which means your team is likely already experimenting, whether formally or not. A policy helps you get ahead of that experimentation.
“It’s not just about control,” Diane says. “It’s about enabling responsible use. A policy helps your team understand what’s allowed, what’s not and why. It builds a shared foundation.”
That’s especially important in regulated industries like healthcare, where patient privacy and brand integrity are always at stake. Even if your marketing team doesn’t work with protected health information (PHI) directly, you may be handling sensitive patient stories, internal data or early-stage campaign work that isn’t ready to be shared with public models.
As Diane puts it: “You want to trust your people, but you also want to prepare them. A generative AI policy is part of that education.”
Want more tips on smart AI use in healthcare marketing? Check out this guide.
A good AI policy isn’t just a list of rules. It’s a tool for shared understanding.
A policy helps you understand what’s allowed, what’s not and why.
Diane Hammons, director of digital engagement, WG Content
Effective AI policies should:
“No one likes to be told ‘just do it,’” Diane says. “People respond much better when you explain how the decisions were made — and why the policy may need to change as the environment changes.”
Don’t forget to keep the policy visible and active. Post it in a shared workspace. Review it regularly. Host quick refreshers or office hours to answer questions.
Every organization is different, but the process usually starts with two questions:
“At WG Content, we started with a risk and asset analysis,” Diane explains. “What are our must-protect areas? What would happen if something leaked? Then we looked at how AI might help — and what our guiding principles needed to be.”
From there, build your policy around:
Curious how other marketers are experimenting safely? Meet Genie, our internal AI assistant.
Get more healthcare marketing tips and best practices: Subscribe to the WG Content newsletter.
It might feel safe to say, “We don’t use AI, period.” But that position could quickly become a liability.
“If you don’t leave any margin for responsible, intentional use, you risk being left behind,” Diane says. “It’s better to build smart use cases and educate your team than to ban tools outright and pretend the risk doesn’t exist.”
Other mistakes to avoid:
An effective AI policy isn’t just a defense mechanism. It’s a growth enabler.
“When your team understands how to use AI safely, they gain back time, clarity and confidence,” Diane says. “It’s not just about speed. It’s about improving the workday — brainstorming better, digesting complexity faster and seeing new angles.”
Even small uses of AI — like drafting meta descriptions, summarizing research or outlining articles — can have a big impact when applied responsibly.
And when your whole organization is aligned, the benefits ripple outward: fewer bottlenecks, more consistency, and a shared culture of curiosity and responsibility.
If your team is unsure where to start, try our WGC Catalyst consulting services.
We’re also partnering with Loop, an AI consultancy, to offer AI adoption workshops that cover:
Explore WGC Catalyst consulting services.
A generative AI policy isn’t a blocker. It’s an on-ramp. It’s how your team safely explores the future while protecting what matters most.
As Diane puts it: “AI is a teammate. It’s not replacing us. But it is part of the lineup now.”
Visibility and education are key. Post the policy in your team’s shared workspace, cover it in onboarding and host short refresher sessions. The more accessible and practical the policy feels, the more likely people are to follow it.
Strong policies are cross-functional. Include compliance, legal, HR and frontline communicators to make sure your guidelines are realistic, enforceable and aligned with organizational values.
AI technology evolves quickly, so your policy should too. Review it at least every six months, or whenever major tools, regulations or internal processes change.