Why AI Coding Agents Need Security Controls

By Oculi Team

85% of developers now regularly use AI coding assistants, according to the JetBrains 2025 Developer Ecosystem Survey of over 24,000 respondents. A peer-reviewed study published in Science in January 2026 found that 29% of Python functions in the US are now AI-written, up from 5% in 2022, based on analysis of over 30 million GitHub commits.

These are not code completion tools anymore. AI coding agents like Claude Code, Cursor, and Windsurf execute shell commands, modify files, call external APIs, and interact with internal systems. They operate with the same permissions as the engineer who invoked them. And in most organizations, security teams have no visibility into what these agents are doing.

The gap

The adoption curve has outpaced security controls. According to an ISACA poll of 3,270 respondents, only 15% of organizations have a formal AI usage policy in place. Cisco's 2025 Cybersecurity Readiness Index, surveying 8,000 businesses, found that 60% of IT teams cannot even see the prompts employees make to generative AI tools.

That means your engineers are using AI agents that can read credentials, execute destructive commands, and push code to production, while your security team has no audit trail, no policy enforcement, and no way to demonstrate compliance.

What traditional tools miss

EDR does not distinguish between a developer running a shell command and an AI agent running one on their behalf. SIEM ingestion does not capture which policy should have applied to a given agent action. There is no equivalent of a firewall or proxy for AI agent tool calls.

The result is a blind spot. Apiiro reported that AI-generated code introduced over 10,000 new security findings per month by June 2025, a 10x increase from December 2024. These findings are not hypothetical. They represent real vulnerabilities entering real codebases through agent-assisted workflows that no existing security tool was designed to monitor.

What security teams actually need

Three capabilities are missing from the current toolchain:

  • Visibility. A complete record of every action an AI agent takes: what tool was called, what arguments were passed, what the outcome was.
  • Policy enforcement. The ability to define what agents can and cannot do, enforced before execution, not discovered after the fact.
  • Audit trails. Structured logs that satisfy compliance requirements, support incident investigations, and provide proof of control.
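To make the first two capabilities concrete, here is a minimal sketch of a pre-execution policy check over an agent tool call. This is an illustration, not Oculi's implementation: the tool names, deny patterns, and record fields are all assumptions invented for the example.

```python
import fnmatch
import json
import time

# Hypothetical deny rules, keyed by tool name. Patterns are illustrative only.
DENY_PATTERNS = {
    "shell": ["rm -rf *", "curl * | sh*"],
    "file_read": ["*/.aws/credentials", "*/.ssh/id_*"],
}

def check_tool_call(tool: str, argument: str) -> dict:
    """Evaluate one agent tool call against deny rules, before execution."""
    blocked = any(fnmatch.fnmatch(argument, p) for p in DENY_PATTERNS.get(tool, []))
    record = {
        "timestamp": time.time(),
        "tool": tool,
        "argument": argument,
        "decision": "deny" if blocked else "allow",
    }
    # A real system would ship this record to durable, append-only storage
    # rather than printing it; one JSON object per action gives SIEM-friendly logs.
    print(json.dumps(record))
    return record
```

The key property is ordering: the decision and the log record are produced before the command ever runs, so a denied action leaves evidence even though it never executed.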

What we are building

Oculi is a security layer that sits between AI coding agents and the systems they interact with. It intercepts every tool call and enforces security policies before execution. Every action is logged with full context for audit and compliance purposes.

It deploys alongside existing agent tooling in minutes. No changes to developer workflows. No SDK integration. Policies are defined in code, version-controlled, and enforceable across the organization.
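As a sketch of what "policies defined in code" can mean in practice (the schema below is hypothetical, not Oculi's actual policy syntax), a version-controlled policy file might declare per-tool rules like this:

```python
from dataclasses import dataclass, field

# Illustrative policy-as-code sketch. Field names, actions, and patterns
# are assumptions invented for this example.
@dataclass
class ToolPolicy:
    tool: str
    action: str                      # "allow", "deny", or "require_approval"
    patterns: list = field(default_factory=list)

# Policies live in the repo, so changes go through code review like any other diff.
POLICIES = [
    ToolPolicy("shell", "deny", ["rm -rf *", "*--force*"]),
    ToolPolicy("file_read", "require_approval", ["*/.env", "*/secrets/*"]),
    ToolPolicy("git_push", "require_approval", ["*production*"]),
]
```

Keeping policy in the repository means the security team gets review, history, and rollback for free, and the same rules can be enforced identically across every engineer's agent.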

If you are responsible for securing AI agent usage at your org, we are working with a small cohort of security teams during early access. Apply here.