Let me tell you how this all started. 

Someone asked me—half joking, half serious—why we added an Acceptable Use of AI Policy Builder to Cyber Liability Essentials, our lowest-tier product. 

Why not put it in the high-end package and charge extra? 

Here’s the simple answer: because your smallest clients are going to cause your biggest AI-related messes. And they’re going to do it faster than you can say “ChatGPT, write me a contract.” 

I’d been talking to a few MSP owners—people just like you—who confessed they didn’t have time to slog through a 4-page AI acceptable use worksheet for those small, budget-allergic clients who think “policies” are something for other businesses. 

So what happens? Those clients go rogue. 

And according to IBM’s latest Cost of a Data Breach Report, 97% of organizations that suffered an AI-related breach lacked proper AI access controls. Ninety-seven percent. 

That’s not a statistic—that’s a warning label. 

 

The Myth of “It’s Just a Small Client” 

Here’s the trap: 

You think your big clients carry the most risk because they have more data, more systems, more everything. But your little guys? 

They have the same attack surface, minus the budget, the discipline, and the common sense. 

These are the folks who will: 

  • Feed sensitive client data into a public AI chatbot because it’s “easier than Googling.” 
  • Let the intern write marketing copy with AI and accidentally publish confidential M&A details. 
  • Use a free AI coding tool to “save dev time” and unknowingly embed malicious open-source code into the company website. 

 

And when it blows up? They won’t blame themselves. 

They’ll blame you. 

 

Why We Built It Into the Low-End Package 

If you’re waiting for your smallest clients to ask for an AI policy, you’re already too late. 

They don’t know they need one. They won’t want to pay for one. And when they break something, it’ll be your problem. 

That’s why we built the Acceptable Use of AI Policy Builder right into Cyber Liability Essentials. 

It’s dead simple. You give it to your client, they do the heavy lifting of customizing it, and you step into the trusted advisor role when they inevitably have questions. 

You’re not just covering them—you’re covering yourself. 

Because when the lawsuits, insurance denials, and regulatory questions come, the difference between “your fault” and “not your problem” is whether you can prove you warned them and gave them the tools to act. 

 

The Real Opportunity Here 

Yes, this is about avoiding disasters. 

But it’s also about planting your flag as the AI risk expert for your clients—big and small. 

The MSPs who make AI governance part of every client conversation now will be the ones getting the security stack expansions, the compliance add-ons, and the long-term contracts. The ones who don’t? They’ll be the ones explaining to a lawyer why they let the receptionist run the company chatbot strategy. 

 

Bottom line: AI risk is already here. Your smallest clients will be your first AI breach headline. You can either wait for it to happen… or make sure every single one of your clients—especially the little ones—has an AI plan in place today. 

 

That’s why we put the Acceptable Use of AI Policy Builder in the Essentials tier. Get after it.