Preparing for AI: The CISO’s role in security, ethics and compliance

As generative AI (GenAI) tools become embedded in the fabric of enterprise operations, they bring transformative promise, but also considerable risk.
For CISOs, the challenge lies in facilitating innovation while securing data, maintaining compliance across borders, and preparing for the unpredictable nature of large language models and AI agents.
The stakes are high; a compromised or poorly governed AI tool could expose sensitive data, violate global data laws, or make critical decisions based on false or manipulated inputs.
To mitigate these risks, CISOs must rethink their cyber security strategies and policies across three core areas: data use, data sovereignty, and AI safety.
Data use: Understanding the terms before sharing vital information
The most pressing risk in AI adoption is not malicious actors but ignorance. Too many organisations integrate third-party AI tools without fully understanding how their data will be used, stored, or shared. Most AI platforms are trained on vast swathes of public data scraped from the internet, often with little regard for the source.
While the larger players in the industry, like Microsoft and Google, have started embedding more ethical safeguards and transparency into their terms of service, much of the fine print remains opaque and subject to change.
For CISOs, this means rewriting data-sharing policies and procurement checklists. AI tools should be treated as third-party vendors with high-risk access. Before deployment, security teams must audit AI platform terms of use, assess where and how enterprise data might be retained or reused, and ensure opt-outs are in place where possible.
Investing in external consultants or AI governance specialists who understand these nuanced contracts can also protect organisations from inadvertently sharing proprietary information. In essence, data used with AI must be treated like a valuable export which is carefully considered, tracked, and regulated.
Data sovereignty: Guardrails for a borderless technology
One of the hidden dangers in AI integration is the blurring of geographical boundaries when it comes to data. What complies with data laws in one country may not in another.
For multinationals, this creates a minefield of potential regulatory breaches, particularly under acts such as DORA and the forthcoming UK Cyber Security and Resilience Bill as well as frameworks like the EU’s GDPR or the UK Data Protection Act.
CISOs must adapt their security strategies to ensure AI platforms align with regional data sovereignty requirements, which means reviewing where AI systems are hosted, how data flows between jurisdictions, and whether appropriate data transfer mechanisms such as standard contractual clauses or binding corporate rules are in place.
Where AI tools do not offer adequate localisation or compliance capabilities, security teams must consider applying geofencing, data masking, or even local AI deployments.
Policy updates should mandate that data localisation preferences be enforced for sensitive or regulated datasets, and AI procurement processes should include clear questions about cross-border data handling. Ultimately, ensuring data remains within the bounds of compliance is a legal issue as well as a security imperative.
Safety: Designing resilience into AI deployments
The final pillar of AI security lies in safeguarding systems from the growing threat of manipulation, be it through prompt injection attacks, model hallucinations, or insider misuse.
While still an emerging threat category, prompt injection has become one of the most discussed vectors in GenAI security. By cleverly crafting input strings, attackers can override expected behaviours or extract confidential information from a model. In more extreme examples, AI models have even hallucinated bizarre or harmful outputs, with one system reportedly refusing to be shut down by developers.
For CISOs, the response must be twofold. First, internal controls and red-teaming exercises, like traditional penetration testing, should be adapted to stress-test AI systems. Techniques like chaos engineering can help simulate edge cases and uncover flaws before they’re exploited.
Second, there needs to be a cultural shift in how vendors are selected. Security policies should favour AI providers who demonstrate rigorous testing, robust safety mechanisms, and clear ethical frameworks. While such vendors may come at a premium, the potential cost of trusting an untested AI tool is far greater.
To reinforce accountability, CISOs should also advocate for contracts that place responsibility on AI vendors for operational failures or unsafe outputs. A well-written agreement should address liability, incident response procedures, and escalation routes in the event of a malfunction or breach.
From gatekeeper to enabler
As AI becomes a core part of business infrastructure, CISOs must evolve from being gatekeepers of security to enablers of safe innovation. Updating policies around data use, strengthening controls over data sovereignty, and building a layered safety net for AI deployments will be essential to unlocking the full potential of GenAI without compromising trust, compliance, or integrity.
The best defence to the rapid changes caused by AI is proactive, strategic adaptation rooted in knowledge, collaboration, and an unrelenting focus on responsibility.
Elliott Wilkes is CTO at Advanced Cyber Defence Systems. A seasoned digital transformation leader and product manager, Wilkes has over a decade of experience working with both the American and British governments, most recently as a cyber security consultant to the Civil Service.