When you first rolled out M365 you pictured streamlined workflows, faster decisions, fewer papercuts.
You didn’t sign up for a public AI model quietly siphoning off your internal agreements, pricing decks, and customer lists as training material for some future hostile AI.
Welcome to the new normal.
Yes: Copilot + ChatGPT now operate hand-in-glove. And unless you’ve locked things down? Your team is giving hackers material for free.
Because every time someone feeds ChatGPT or Copilot a prompt with internal context (project plans, emails, client data, vendor quotes), you’re creating exposure. You’re publishing your data into a system you can’t fully control.
And the bad guys? They’ve already figured it out.
They’re not just hunting your network. They’re hunting your prompts, your documents, the knowledge your people think is “just for an AI draft.”
Because when you ask, “What if we get locked out of our own data?”, the answer arrives faster than you’d like.
Copilot + ChatGPT = Productivity Super Engine… And Hacker Super-Portal
Here’s how the combo works:
- Copilot is hooked into your M365 context: your chats, emails, files.
- ChatGPT can now access that context (if configured) through its integration with Copilot.
- That means ChatGPT isn’t just guessing based on public web; it is reasoning with your data.
- Great for writing, summarizing, analyzing. Terrible if the wrong eyes get access.
- Prompt-sharing with public models? Potentially still enabled by default. That means your prompts and context could be used to further train models that you have no real control over.
Let’s translate: if someone on your team uploads an internal draft to ChatGPT, or runs a Copilot prompt that’s routed through ChatGPT, you may have just put your data into someone else’s training set. Or worse, exposed sensitive information that a hacker can retrieve or exploit. (Research shows these models can leak or infer internal data via side channels.)
What You Must Do Right Now
- Turn off prompt/data sharing to public models if possible
Yes, you should do this. Whether you can do it completely depends on your licensing and how your tenant is configured. In ChatGPT or OpenAI settings, for example, you may find a toggle like “Improve the model for everyone” or “Use my inputs for model training” that you can disable.
For Copilot: Microsoft provides enterprise controls, but some data sharing may still occur via telemetry.
Action steps:
- In your ChatGPT or OpenAI workspace (if you use it via the Copilot integration), go to Settings > Data controls and turn off “Improve the model for everyone” (or its equivalent) so your prompts aren’t used for training.
- In the Microsoft 365 admin center, Purview, and Defender for Cloud Apps: review the settings for Copilot and AI-assisted features. Ensure any “model training on text/voice” options are disabled if present.
- Consider blocking consumer versions of Copilot/ChatGPT for your users and enforcing enterprise versions, where controls and auditing are stronger.
- Enable ChatGPT inside Copilot — but only after you’ve set guardrails
When you enable ChatGPT in Copilot, you unlock real value: context from your emails, chats, and docs becomes available to the model. But you have to orchestrate it properly:
- Define what data Copilot/ChatGPT may access.
- Segment sensitive content so it never enters prompts without review.
- Audit what kinds of prompts your users are sending.
- Require team members to tag or classify data before feeding it into the AI (there’s a sketch of what such a gate could look like at the end of this section).
- Use each tool for what it’s good at
- Let Copilot handle routine tasks: “Generate the QBR deck from the past 3 months of meetings.” “Summarize this Excel file for the board.”
- Let ChatGPT handle creative/strategic tasks: “Analyze our positioning vs competitors. Where are we vulnerable and what should we do next?” “Draft our new training manual based on our policies and tone.”
This division keeps strategy in human hands and tactics in the machine.
If you blur the lines? You end up with generic drafts, untracked data flow, and exposure.
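If it helps to picture what “classify before it touches the AI” looks like in practice, here’s a minimal sketch of a pre-prompt classification gate in Python. It’s illustrative only: the sensitivity labels, the keyword patterns, and the allow_prompt helper are assumptions, not a Microsoft Purview or OpenAI API.

```python
import re

# Hypothetical pre-prompt "classification gate" -- a minimal sketch, not a
# Microsoft Purview or OpenAI feature. Label names, patterns, and the
# allow_prompt helper are placeholders for your own classification scheme.
BLOCKED_LABELS = {"Confidential", "Highly Confidential"}       # example sensitivity labels
SENSITIVE_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),                       # SSN-like number
    re.compile(r"\b(pricing|term sheet|acquisition)\b", re.I),  # example keywords
]

def allow_prompt(text: str, sensitivity_label: str | None = None) -> bool:
    """Return True only if the content looks safe to hand to an AI assistant."""
    if sensitivity_label in BLOCKED_LABELS:
        return False
    return not any(p.search(text) for p in SENSITIVE_PATTERNS)

draft = "Q3 pricing deck and renewal terms for our largest account"
if allow_prompt(draft):
    print("OK to include in the prompt")
else:
    print("Blocked: route for human review before any AI use")
```

The design point is simple: content gets checked against your own classification scheme before it ever reaches a prompt, and anything that fails goes to a person instead of the model.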
Try These Prompts Now
If you have Microsoft 365 Copilot licenses:
- “Read through my recent emails and chats and provide a comprehensive analysis of my communication style: identify my core values, strengths, weaknesses, skills, and areas for improvement.”
- “Get me up to speed on the latest plans related to [project/initiative]. Help me think through what to do next.”
- “Reflecting on our [project], what went well and what didn’t? Draft a brief ‘lessons learned’ summary as if we were documenting a post‑mortem.”
- “Look at the last 5 work days, identify all meetings about [topic], and give me the total number of hours I spent on it.”
For all users (including those without Copilot licenses):
- “Look at the attached project plan and give me five substantive ways to make it better; include rationale and specific text to insert into the plan.”
- “Use the attached spreadsheet with customer feedback to create a polished executive report that helps upper management decide where to prioritize resources.”
- “We have a draft press release [document]. Find a couple of similar announcements online and suggest how we can make ours stand out.”
- “As a financial compliance analyst, prepare a summary comparing Dodd‑Frank, Basel III and MiFID II capital adequacy and reporting requirements for banks.”
But remember: every prompt is a data event. If your governance is weak, your exposure is high.
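To make “every prompt is a data event” concrete, here’s a minimal sketch of a prompt audit log, assuming your prompts pass through something you control (an internal gateway or wrapper). The field names, the email redaction rule, and the log_prompt_event helper are illustrative assumptions, not a built-in Copilot or ChatGPT feature.

```python
import hashlib
import json
import re
from datetime import datetime, timezone

def log_prompt_event(user: str, tool: str, prompt: str, path: str = "prompt_audit.jsonl") -> None:
    """Append a redacted, hashed record of a prompt for later governance review."""
    redacted = re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", "[EMAIL]", prompt)   # crude example redaction
    event = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "tool": tool,
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),  # tamper-evident reference
        "prompt_redacted": redacted,
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(event) + "\n")

log_prompt_event("jdoe", "copilot", "Summarize the vendor quotes from alice@contoso.com")
```

Even a log this simple answers the questions that matter when something leaks: who sent what, to which tool, and when.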
The Primary Risk: Hackers Aren’t Waiting in the Network — They’re Targeting the AI Endpoint
You may worry about ransomware, phishing, network intrusion—but here’s a stealthier threat: your internal data being used by AI systems you don’t monitor.
A hacker who gets access can monitor your prompts and responses and infer your IP, your projects, your next moves. Side-channel research backs this up.
A prompt leak + training‑data exposure = your secrets become someone else’s model.
There’s no regulator yet asking you to protect “AI prompt logs,” but you’ll wish one did when an adversary uses those logs to craft a spear-phish that lands.
Your Next Step: Let’s Find Out What’s Already Leaking
Before you dive into policies, controls, and wizards—start with visibility.
Get an AI Exposure / Readiness Assessment. We’ll help you discover:
- Where AI models are in use without oversight.
- What data is already flowing into public or semi-public models.
- Which teams or users are at risk of leaking sensitive information.
Once you have that map, you set the rules. Communicate them. Audit them. Stop being the weak link.
Schedule a 15-minute meeting with our team.
We’ll show you what you don’t know is happening, so you can fix it before the hackers or the headlines do.


