Vetting AI Tools for Business Use

Artificial intelligence tools, agents, and copilots can deliver real productivity gains, but they also introduce new cyber and data risks.

Before approving any AI tool for use in your organisation, apply the same level of scrutiny you would to any supplier that touches your data.

AI tools should never bypass security controls simply because they are useful or innovative. That's why vetting AI tools before approval is essential.

Before You Approve an AI Tool, Ask These Questions

1. Data & Privacy (The Most Important Question)

Start with data. Always.

Ask:

  • What data will the AI tool access, process, or store?
  • Does it handle personal, confidential, client, or employee data?
  • Are prompts, files, or outputs stored or logged by the provider?
  • Is business data excluded from training AI models by default?
  • Can data be deleted permanently on request?

 

Only approve tools where data handling is clear, documented, and defensible.
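
If you want these answers captured consistently rather than buried in emails, even a small script can record them and surface the red flags automatically. The sketch below is purely illustrative, in Python; the field names and the rules it applies are assumptions based on the questions above, not any formal standard.

```python
from dataclasses import dataclass

# Hypothetical record of a vendor's answers to the data & privacy questions above.
# Field names and the red-flag rules are illustrative assumptions, not a standard.
@dataclass
class DataHandlingReview:
    tool_name: str
    handles_personal_data: bool              # personal, client, or employee data?
    prompts_or_files_retained: bool          # does the provider store prompts or outputs?
    excluded_from_training_by_default: bool
    deletion_on_request: bool
    retention_policy_documented: bool

    def red_flags(self) -> list[str]:
        """Return the data-handling red flags raised by this vendor's answers."""
        flags = []
        if not self.excluded_from_training_by_default:
            flags.append("Business data may be used to train the vendor's models")
        if not self.deletion_on_request:
            flags.append("Data cannot be permanently deleted on request")
        if not self.retention_policy_documented:
            flags.append("No clear data retention or deletion policy")
        if self.handles_personal_data and self.prompts_or_files_retained:
            flags.append("Personal data retained in vendor-held prompts or logs")
        return flags


review = DataHandlingReview(
    tool_name="Example Copilot",             # hypothetical tool under review
    handles_personal_data=True,
    prompts_or_files_retained=True,
    excluded_from_training_by_default=False,
    deletion_on_request=True,
    retention_policy_documented=False,
)
for flag in review.red_flags():
    print(f"🚩 {flag} ({review.tool_name})")
```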

 

🚩 Red flags

  • “We may use your data to improve our models”
  • No clear data retention or deletion policy
  • Training opt‑out only available on expensive enterprise tiers
  • The vendor cannot clearly explain where data is processed or stored

2. Access Control & Identity Management

AI tools must integrate with your existing security controls — not work around them.

Ask:

  • Does the tool support Single Sign‑On (SSO)?
  • Can access be restricted by role or department?
  • Is Multi‑Factor Authentication (MFA) supported or enforced?
  • Can access be revoked immediately if required?

 

Only approve tools that can be controlled and disabled quickly.
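
"Can access be revoked immediately?" is worth rehearsing, not just asking. Here is a minimal Python sketch of what an instant-revocation run-book step could look like. The admin URL, endpoint, and token variable are hypothetical placeholders; real tools expose this through their own admin API, SCIM deprovisioning, or the identity provider behind SSO.

```python
import os

import requests  # third-party HTTP client (pip install requests)

# Hypothetical admin API for an approved AI tool. The base URL, endpoint, and
# token variable are placeholders; real vendors expose revocation through their
# own admin API, SCIM deprovisioning, or the identity provider behind SSO.
ADMIN_API = "https://admin.example-ai-tool.com/v1"
API_TOKEN = os.environ["AI_TOOL_ADMIN_TOKEN"]


def revoke_access(user_email: str) -> bool:
    """Deactivate a user's access to the AI tool and report whether it worked."""
    response = requests.post(
        f"{ADMIN_API}/users/{user_email}/deactivate",
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        timeout=10,
    )
    return response.ok


if __name__ == "__main__":
    # Rehearse the "switch it off instantly" scenario before you ever need it.
    if revoke_access("leaver@example.com"):
        print("Access revoked")
    else:
        print("Revocation failed: escalate to the vendor and your IT team")
```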

 

🚩 Red flags

  • Shared logins or personal accounts
  • No MFA
  • No admin view of who is using the tool

3. Vendor Security & Maturity

You are not just buying a tool — you are trusting a supplier.

Ask:

  • Does the vendor publish security documentation?
  • Do they reference recognised standards (e.g. ISO 27001 or SOC 2)?
  • Is there a defined incident response and notification process?
  • Are sub‑processors clearly disclosed?

 

Only approve vendors who treat security as a core responsibility.

 

🚩 Red flags

  • “We’re too new for formal security”
  • No commitment to incident notification
  • Vague or defensive answers to basic security questions

4. AI‑Specific Risks & Behaviour

AI introduces risks that traditional software does not.

Ask:

  • Can the AI be manipulated through crafted prompts (prompt injection)?
  • Can it access internal systems, files, or emails automatically?
  • Are outputs reviewed by humans before action?
  • Are permissions limited to what is strictly necessary?

 

AI should support decisions — not make them autonomously.
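
One way to make "outputs reviewed by humans before action" concrete is an approval gate between the model's suggestion and anything that actually executes. This is a minimal Python sketch using only the standard library; the refund action is a made-up placeholder for whatever the AI is allowed to trigger in your environment.

```python
def human_approval_gate(suggested_action: str, details: str) -> bool:
    """Ask a person to approve an AI-suggested action before it runs."""
    print(f"AI suggests: {suggested_action}\n{details}")
    decision = input("Approve this action? [y/N] ").strip().lower()
    return decision == "y"


def send_refund(customer_id: str, amount: float) -> None:
    # Placeholder for a real business action (a refund, an email, a config change).
    print(f"Refunding {amount:.2f} to customer {customer_id}")


# The AI proposes; a person decides. Nothing executes without the gate.
if human_approval_gate("send_refund", "customer_id=C123, amount=49.99"):
    send_refund("C123", 49.99)
else:
    print("Action rejected; record the decision for audit")
```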

 

🚩 Red flags

  • Autonomous agents acting without approval
  • Over‑permissioned access “for convenience”
  • No safeguards against misuse or hallucinations

5. Monitoring, Logging & Governance

If you can’t see how a tool is being used, you can’t govern it.

Ask:

  • Can usage be monitored or audited?
  • Are prompts and actions logged?
  • Can risky behaviour be detected or restricted?
  • Is there a named internal owner for the tool?

 

Only approve tools where accountability is clear.
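
Even when a vendor's own audit trail is thin, you can keep your own record of prompts and responses if they pass through an integration you control. Below is a minimal Python sketch using the standard library's logging and json modules; the wrapper hook is an assumption, since a tool used directly through its own web interface won't give you this visibility.

```python
import json
import logging
from datetime import datetime, timezone

# One JSON line per AI interaction, written to a local file so usage can be
# audited later. The file name and field names are illustrative choices.
audit_log = logging.getLogger("ai_audit")
audit_log.setLevel(logging.INFO)
audit_log.addHandler(logging.FileHandler("ai_audit.jsonl"))


def log_ai_interaction(user: str, tool: str, prompt: str, response: str) -> None:
    """Record who asked what, of which tool, and a preview of what came back."""
    audit_log.info(json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "tool": tool,
        "prompt": prompt,
        "response_preview": response[:200],  # avoid storing huge outputs in full
    }))


log_ai_interaction("jane@example.com", "Example Copilot",
                   "Summarise this supplier contract", "The contract covers...")
```

Note that the audit log itself now holds potentially sensitive prompts, so it needs the same retention and access controls as any other system storing that kind of data.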

 

🚩 Red flags

  • No audit trail
  • “Black box” behaviour
  • No clear internal ownership

6. Legal, Compliance & Reputation

Always assume you may need to explain your AI usage to a regulator, client, or board.

Ask:

  • Does usage align with GDPR and data protection obligations?
  • Is AI‑generated content clearly identifiable?
  • Could misuse cause regulatory or reputational damage?

 

If you wouldn’t defend it publicly, don’t approve it internally.

 

🚩 Red flags

  • Legal terms conflict with privacy obligations
  • AI output indistinguishable from human decisions
  • No transparency around automated processing

Do Not Approve an AI Tool If:

  • You don’t know where your data goes
  • You can’t switch it off instantly
  • You can’t explain the risk to leadership
  • It bypasses existing security controls

 

AI adoption without governance is just shadow IT at scale.

 

You can find out more in the AI Cyber Security Code of Practice on GOV.UK.

The Final Approval Question

Before approving any AI tool, ask:

 

“If this AI tool were compromised tomorrow, would we know what data was exposed, and could we stop it immediately?”

 

If the answer isn’t a confident yes, the tool should not be approved.
