How to create a generative AI policy your marketing team will actually use



Key takeaways:

  • A clear, flexible AI policy empowers your team to explore responsibly before shadow AI usage becomes a problem.
  • Marketers, not just IT professionals, should understand what AI policies protect — and where they can advocate for tools that help.
  • Your policy doesn’t need to be perfect to start. But it does need to start.


If your healthcare marketing team is using generative AI tools — or even just curious about them — you’re not alone.

In the WG Content 2024 State of Content Planning survey, more than half of respondents (55%) said they use AI for content planning. That’s a significant number — and it likely underrepresents informal or “shadow” use across teams.

Yet while AI adoption has exploded, AI policies haven’t kept pace. “We often see organizations holding off on developing a policy because they’re trying to get it exactly right,” says Diane Hammons, director of digital engagement at WG Content. “But while they’re waiting, people are already using these tools. That’s a risky gap.”

If your team is thinking about creating a generative AI policy — or you’re unsure where to start — here’s a marketer-friendly guide, grounded in real-world experience and aligned with healthcare’s compliance needs.

Generative AI tools like ChatGPT, Perplexity and Claude are easy to access, which means your team is likely already experimenting, whether formally or not. A policy helps you get ahead of that experimentation.

“It’s not just about control,” Diane says. “It’s about enabling responsible use. A policy helps your team understand what’s allowed, what’s not and why. It builds a shared foundation.”

That’s especially important in regulated industries like healthcare, where patient privacy and brand integrity are always at stake. Even if your marketing team doesn’t work with protected health information (PHI) directly, you may be handling sensitive patient stories, internal data or early-stage campaign work that isn’t ready to be shared with public models.

As Diane puts it: “You want to trust your people, but you also want to prepare them. A generative AI policy is part of that education.”

Want more tips on smart AI use in healthcare marketing? Check out this guide.

A good AI policy isn’t just a list of rules. It’s a tool for shared understanding.


Effective AI policies should:

  • Explain the why, not just the what
  • Clarify approved tools and use cases
  • Offer education on terms like “training,” “uploading” and “prompts”
  • Acknowledge that the policy will evolve

“No one likes to be told ‘just do it,’” Diane says. “People respond much better when you explain how the decisions were made — and why the policy may need to change as the environment changes.”

Don’t forget to keep the policy visible and active. Post it in a shared workspace. Review it regularly. Host quick refreshers or office hours to answer questions.

How to build a generative AI policy

Every organization is different, but the process usually starts with two questions:

  1. What do we need to protect?
  2. Where can AI help us work better?

“At WG Content, we started with a risk and asset analysis,” Diane explains. “What are our must-protect areas? What would happen if something leaked? Then we looked at how AI might help — and what our guiding principles needed to be.”

From there, build your policy around:

  • Business needs (speed, efficiency, brainstorming, summaries, etc.)
  • Human oversight (ensuring accuracy, empathy, brand alignment)
  • Tool assessment (What’s secure enough? What offers value?)
  • Guiding principles (e.g., ensuring humans are always in the loop)

Curious how other marketers are experimenting safely? Meet Genie, our internal AI assistant.

It might feel safe to say, “We don’t use AI, period.” But that position could quickly become a liability.

“If you don’t leave any margin for responsible, intentional use, you risk being left behind,” Diane says. “It’s better to build smart use cases and educate your team than to ban tools outright and pretend the risk doesn’t exist.”

Other mistakes to avoid:

  • Failing to educate non-technical users about AI terms and risks
  • Assuming IT owns the policy (marketers should help shape it)
  • Not involving cross-functional partners like legal, HR and compliance
  • Setting and forgetting — policies should evolve with tech and regulations

An effective AI policy isn’t just a defense mechanism. It’s a growth enabler.

“When your team understands how to use AI safely, they gain back time, clarity and confidence,” Diane says. “It’s not just about speed. It’s about improving the workday — brainstorming better, digesting complexity faster and seeing new angles.”

Even small uses of AI — like drafting meta descriptions, summarizing research or outlining articles — can have a big impact when applied responsibly.

And when your whole organization is aligned, the benefits ripple outward: fewer bottlenecks, more consistency, and a shared culture of curiosity and responsibility.

If your team is unsure where to start, try our WGC Catalyst consulting services.

We’re also partnering with Loop, an AI consultancy, to offer AI adoption workshops that cover:

  • AI literacy and terminology
  • How to build smart use cases
  • Policy foundations and governance
  • Prompts, tools and safeguards

Explore WGC Catalyst consulting services.

A generative AI policy isn’t a blocker. It’s an on-ramp. It’s how your team safely explores the future while protecting what matters most.

As Diane puts it: “AI is a teammate. It’s not replacing us. But it is part of the lineup now.”

Visibility and education are key. Post the policy in your team’s shared workspace, cover it in onboarding and host short refresher sessions. The more accessible and practical the policy feels, the more likely people are to follow it.

Strong policies are cross-functional. Include compliance, legal, HR and frontline communicators to make sure your guidelines are realistic, enforceable and aligned with organizational values.

AI technology evolves quickly, so your policy should too. Review it at least every six months, or whenever major tools, regulations or internal processes change.

Want more insights on all things content?

Sign up for WG Content’s newsletter, Content Counts.
