Keeping up with AI

  • Writer: Carlos Phoenix
  • Feb 23
  • 3 min read

Carlos Phoenix is a CISO and advisor with over 20 years' experience. Previously, he was the Product CISO at VMware and spent 15 years in consulting and audit at Deloitte, KPMG, Cognizant, and Coalfire. He holds multiple security and compliance patents, has authored NIST publications, and has contributed to regulatory standards for PCI and NERC CIP. Carlos is a Certified Information Systems Security Professional (CISSP), a Certified Information Systems Auditor (CISA), and PRINCE2-certified in project management.


It’s normal for many security teams to feel like they’re arriving late to the AI party. Businesses are racing to implement AI faster than security teams can secure the underlying systems. These businesses feel pressure from all sides to build fast, and the signal is often clear: they are prioritizing the deployment of AI over fostering team buy-in, training staff on safe practices, or building AI security governance in parallel. The only choice for security teams seeking to embed controls and understand the risks is to match the pace and get up to speed fast.

This is not new to security teams. A great CISO knows the value of moving fast while understanding how to communicate the risk. A valuable skill is applying professional judgement to who owns which risk and on what terms. Since risk and business direction go hand in hand, setting expectations is vital. Once a pattern emerges, security leaders must be nimble and draw parallels with previous lessons — internal ones or ones learned by industry peers — that can be leveraged to ensure new technology is onboarded safely.

For some organizations, the CISO may need to write up the risk and ensure there is a business sign-off accepting the security team’s inability to keep pace with AI adoption. In other instances, the company will move quickly under the impression that security can be bolted on afterwards, which happens all too often.

It is paramount that security professionals highlight these patterns and reach an understanding with the business, because choices are being made now that will absolutely impact the future security of the business. The CISO’s strength lies not in waiting for the perfect controls but in managing AI security along a path of learning.

One strategy to keep up with AI security is to define an agile approach that favors the bold. Point out key threats such as privacy fines, or the reputation and real costs of a breach. Another approach is guiding the company to take smaller steps in rapid succession, while ensuring those steps include a sprinkling of security controls.

Experimenting with security controls alongside the business can be uncomfortable due to the known unknowns, but it is also essential to get more clarity on what the business is pushing to do. This conversation should be familiar to CISOs aware of how nascent security programs take hold. Deploying AI security can feel more like a “just-in-time” provisioning approach, meaning you find ways to implement something, rather than waiting for the perfect, all-encompassing solution.

Looking at methods and controls that are broad in their application will help. Breaking this into orders of magnitude uncovers what can be done immediately, followed by more precise controls and a deepening understanding of what options exist. One tactic is to request prototypes of experiments requiring AI security tools to see what might stick. Another approach is to use existing vendor capabilities to establish guardrails before refining and customizing the solution. It is paramount that the risk be understood, because first- or second-order controls will still fall short of the real target.

The goal is to have key elements that are known to security professionals: budget, staff, and clarity about risk. AI usage should make room for an AI security budget. AI teams will need to hire AI security staff. AI risk should be signed off by the business and revisited as the risk becomes better understood. The cost of these investments should be revisited to reflect how AI usage benefits the business.

Security teams must participate in the conversation because AI security is not immune to traditional concepts around security best practices. Yes, the exact controls and types of resources (people, process, or technology) may not look the same as your traditional security configuration, but security has adapted to each technology evolution before AI. A cool-headed risk conversation can drive the business towards funding, staffing, and smart security decisions. That is how we keep pace with the rocket ride of AI.

 