
Who Watches the Watchman?

  • Writer: Carlos Phoenix
  • Feb 18
  • 3 min read

Updated: Feb 23



Carlos Phoenix is a CISO and Advisor with over 20 years of experience. Previously, he was the Product CISO at VMware and spent 15 years in consulting and audit at Deloitte, KPMG, Cognizant, and Coalfire. He holds multiple security and compliance patents, has published NIST publications, and has contributed to regulatory standards for PCI and NERC CIP. Carlos holds the Certified Information Systems Security Professional (CISSP), Certified Information Systems Auditor (CISA), and PRINCE2 project management certifications.


The original Latin phrase “Quis custodiet ipsos custodes?” translates to “Who watches the watchman?” This is something to bear in mind as we enter this new realm of AI.

For several years, the security community has stressed the importance of automation, configuration drift detection, and a focus on global parameters to drive application security. Human in the Loop (HITL) is not a new control, yet here we are brewing up another cup of alphabet soup. Security professionals know manual controls are not ideal because they are inconsistent compared with automated controls. AI offers so much flexibility, via both prompts and image generation, that we must deliberately return to the basics of manual controls. The challenge is not the manual nature of the controls, but the fact that this is the baseline where AI security must begin.
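To see why automation is preferred over manual review, consider a minimal sketch of configuration drift detection: an approved baseline is compared against a deployed configuration on a schedule, and any deviation is flagged without relying on a person remembering to check. The setting names and values below are hypothetical, not drawn from any particular product.

```python
# A minimal sketch of automated configuration drift detection. The baseline
# parameters and their values are hypothetical examples.

APPROVED_BASELINE = {
    "tls_min_version": "1.2",
    "mfa_required": True,
    "log_retention_days": 365,
}

def detect_drift(current_config: dict) -> list[str]:
    """Return a human-readable finding for every setting that has drifted."""
    findings = []
    for key, expected in APPROVED_BASELINE.items():
        actual = current_config.get(key)
        if actual != expected:
            findings.append(f"{key}: expected {expected!r}, found {actual!r}")
    return findings

if __name__ == "__main__":
    # An automated check runs on a schedule and reports drift consistently,
    # which is exactly what a manual review struggles to do.
    deployed = {"tls_min_version": "1.0", "mfa_required": True}
    for finding in detect_drift(deployed):
        print("DRIFT:", finding)
```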

Humans are the reason we have systems. We have applications to help humans get work done. Our workflow is engineered to support the human who is receiving goods or services. AI usage must still have a role for humans, however, and this is where risk enters the equation.

As AI evolves, the exact role for HITL is blurred. Agentic AI has introduced more autonomy, with AI agents empowered to act, if not outright designed to do so. As humans are removed from processes, the human raison d'être comes into question. Much of the pushback behind low or delayed AI adoption, seen across social media, news, and academic publications, stems from the perception of humans as an afterthought. When AI is used appropriately as a tool to supplement workflow, it's inherently dependent on the Human In The Loop. Tools, by nature, require sentient operators. As AI systems are employed to replace humans, we lose control of, and visibility into, those systems.

We are rapidly approaching a reality where AI systems monitor other AI systems, making it harder (yet even more vital) to implement human oversight. The CISO will not embrace relinquishing critical security controls, nor should they. As AI systems are abstracted away from GenAI prompt-style usage, the Human In The Loop will need to carefully monitor trends and rely on tools that enable monitoring and alerting.
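One way to keep the human above the monitoring layer is a simple escalation gate: an AI monitor scores another system's actions, and anything above a risk threshold is held for human review rather than auto-approved. This is only a sketch under assumptions; the threshold value, the source of the risk score, and the review queue are all hypothetical.

```python
# A minimal sketch of a Human In The Loop escalation gate for AI-on-AI
# monitoring. Threshold, scores, and queue behavior are hypothetical.

from dataclasses import dataclass, field

HUMAN_REVIEW_THRESHOLD = 0.7  # assumed policy value, set by governance

@dataclass
class ReviewQueue:
    pending: list = field(default_factory=list)

    def escalate(self, action: str, risk_score: float) -> None:
        # In practice this would page an analyst or open a ticket.
        self.pending.append((action, risk_score))

def gate_agent_action(action: str, monitor_risk_score: float,
                      queue: ReviewQueue) -> str:
    """Decide whether an agent action proceeds or waits for a human."""
    if monitor_risk_score >= HUMAN_REVIEW_THRESHOLD:
        queue.escalate(action, monitor_risk_score)
        return "held_for_human_review"
    return "approved"

if __name__ == "__main__":
    queue = ReviewQueue()
    print(gate_agent_action("rotate production credentials", 0.9, queue))
    print(gate_agent_action("summarize audit log", 0.2, queue))
    print("awaiting human review:", queue.pending)
```

The design choice is that the human sits above the monitoring layer by default; the automation only decides what is routine enough to skip them.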

Especially in the Agentic AI framework, the limitations around access control and separation of duties will require stronger controls to mitigate the risk of a system expanding beyond the scope it was initially designed to fulfill. Yet the access controls that came in the form of a matrix were structured around a human organization (departments, groups, types of users, etc.). Without the ability to create a matrix aligned to an organizational chart (people and relationship focused), access control becomes harder to manage because the AI system is not just a box in the org chart. We created the concepts of groups, roles, and permissions to help tailor technology to the humans accessing systems. AI systems absent a human structure leave us in uncharted waters. The necessity to monitor will introduce more questions: does an AI monitoring other AI systems place humans above the AI monitoring layer? How do we develop governance layers to keep from erecting teetering towers of AIs monitoring other AIs?
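The mismatch is easier to see in a toy role matrix. In this minimal sketch, with hypothetical role and permission names, each human maps cleanly to one row of an org-chart-aligned matrix, while an agent that spans several workflows either fits no row or, if granted every row, accumulates exactly the over-broad scope the matrix was meant to prevent.

```python
# A toy role matrix aligned to a human org chart. Role and permission
# names are hypothetical examples.

ROLE_MATRIX = {
    "finance_analyst": {"read_invoices", "create_report"},
    "it_admin":        {"reset_password", "read_invoices"},
    "hr_specialist":   {"read_employee_records"},
}

def is_allowed(role: str, permission: str) -> bool:
    """Classic role check: the subject inherits whatever its row grants."""
    return permission in ROLE_MATRIX.get(role, set())

# A human maps cleanly onto one row of the matrix.
print(is_allowed("finance_analyst", "read_invoices"))   # True

# An agent acting across finance, IT, and HR fits no single row; granting
# it every row recreates the over-broad scope the matrix was supposed to
# prevent, which is the governance gap described above.
agent_scope = set().union(*ROLE_MATRIX.values())
print(sorted(agent_scope))
```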

Human In The Loop should be non-negotiable for all serious businesses. When AI is used as a tool to supplement work, then each user is naturally in the HITL role and can monitor their own usage. Given the current state of AI and its rapid and at times uncontrolled evolution, security teams need to consider the presence or absence of humans early and often, and think about the dangers of layers of unbroken AI-on-AI surveillance. It’s only human.

 
 