
Tony Leary
Chief Information Security Officer, Kerv
Published 24/06/25
Last week, I hosted a webinar focused on helping organisations get their house in order before diving into AI adoption.
It was great to see so much engagement from professionals across sectors who are all trying to navigate the same challenge: How do we use tools like Microsoft Copilot safely and effectively?
The main message we shared was this: AI is only useful if the security and governance foundations are already in place. Without them, you’re just building risk into your environment.
The Real Starting Point Isn’t AI
Many organisations come to us asking how to roll out Microsoft Copilot. But when we ask about access controls, data classification, or governance policies, there’s often a pause. The truth is, AI is a layer on top. You have to get the basics right first.
We talked about cyber hygiene. Before you give AI tools access to your environment, you need to know:
- What data is where?
- Who has access to it?
- Is that access appropriate?
- Are the risks understood and managed?
If you can’t answer those questions, then you’re not ready to roll out AI, no matter how exciting the opportunity seems.
What Came Up in the Webinar
There were some great questions from attendees, and many pointed to common challenges we see every day:
“We don’t even know what AI we’re already using.”
This is a typical scenario. AI is already baked into the Microsoft ecosystem, in products like Edge, Azure and Dynamics, and it’s often working behind the scenes. ‘Shadow IT’ is also a risk: there are plenty of public AI tools to tempt users. Step one is to take an inventory and understand your current exposure.
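As a rough illustration of that inventory step, a short script can scan an exported web proxy or firewall log for traffic to known public AI tools. This is a minimal sketch: the domain watch-list, the CSV layout (`user` and `domain` columns), and the file name are all assumptions to adapt to your own logging setup, not a definitive implementation.

```python
import csv
from collections import Counter

# Hypothetical watch-list of public AI tool domains; extend for your environment.
AI_DOMAINS = {
    "chatgpt.com",
    "chat.openai.com",
    "gemini.google.com",
    "claude.ai",
    "perplexity.ai",
}

def shadow_ai_usage(log_path):
    """Count requests per user that hit the AI domain watch-list.

    Assumes a CSV export with 'user' and 'domain' columns, a common
    shape for proxy logs, though your tooling may differ.
    """
    hits = Counter()
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            if row["domain"].strip().lower() in AI_DOMAINS:
                hits[row["user"]] += 1
    return hits
```

Even a crude count like this gives you a starting conversation: who is already using public AI tools, and what data might they be pasting into them?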
“Our SharePoint access is a mess. How do we fix that before Copilot?”
This is where many organisations struggle. We discussed using Restricted SharePoint Search, sensitivity labels and permission clean-ups. If Microsoft Copilot can access all SharePoint content, then every permission issue becomes a potential data leak.
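To make the permission clean-up concrete, one low-tech starting point is to review an exported permissions report and flag anything granted to broad, ‘everyone’-style groups. The column names and group names below are assumptions based on a typical export; a real review would lean on tools like Microsoft Purview or dedicated SharePoint reporting rather than this sketch.

```python
import csv

# Broad-access principals that often signal over-sharing; adjust to your tenant.
BROAD_GROUPS = {"everyone", "everyone except external users", "all users"}

def flag_broad_permissions(report_path):
    """Return rows from a permissions export granted to broad groups.

    Assumes a CSV with 'site', 'principal' and 'permission' columns,
    as produced by a hypothetical permissions export.
    """
    flagged = []
    with open(report_path, newline="") as f:
        for row in csv.DictReader(f):
            if row["principal"].strip().lower() in BROAD_GROUPS:
                flagged.append(row)
    return flagged
```

Every row this flags is somewhere Copilot could surface content to users who were never meant to see it, which is exactly why the clean-up has to come before the rollout.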
“We’ve got the licences, but no plan yet.”
Buying licences is easy. Preparing your environment is not. We recommended creating a governance framework that aligns with your risk appetite and regulatory obligations. Even something as simple as a usage policy can prevent confusion later.
“How do we make sure people use it properly?”
Training and culture came up several times. You need a clear communication plan that explains what AI can do, what it should not do, and how to escalate concerns. This isn’t just an IT rollout. It’s an organisational change.
Five Things to Do Now
If you’re not sure where to start, here’s a simple checklist:
- Engage your stakeholders. IT, legal, compliance, HR, and end users all need to be part of this conversation.
- Define acceptable use. Create an internal AI policy that’s practical and easy to understand.
- Understand your current risks. Tools like Defender can gather application inventory data from your devices. Web access logs can show what public tools may be in use.
- Map your data and permissions. Use tools like Microsoft Purview or a basic audit to understand where your risks are.
- Don’t rush. You don’t need to switch on Copilot for everyone at once. Start with a pilot, test the impact, and iterate.
Final Thoughts
This webinar reinforced something we see a lot: leaders want to adopt AI but aren’t sure if they’re ready. The good news is that you don’t need to solve everything at once. Start by improving your visibility and control, then build from there.
Our role at Kerv Consult is to make transformation safe, structured, and manageable. If you’d like help assessing your current environment or planning a secure rollout, get in touch.