
Key Facts

  • More than half of organizations have deployed AI agents to some extent, with more expected in the next two years.
  • Four in 10 tech leaders regret not establishing a stronger governance foundation from the start.
  • Three principal risks: shadow AI, gaps in ownership and accountability, and lack of explainability.
  • Three guidelines: default human oversight, baked-in security with logging, and explainable outputs.

Quick Summary

As AI adoption accelerates, organizations are turning to AI agents in search of significant ROI. More than half have already deployed them to some extent, and many more plan to follow in the next two years. Early adopters, however, are reevaluating their strategies: four in 10 tech leaders regret not building a stronger governance foundation from the outset.

Key risks include shadow AI, where unauthorized tools bypass IT oversight; gaps in ownership and accountability for autonomous actions; and a lack of explainability in how agents achieve goals. To address these, organizations should implement three core guidelines: default human oversight with assigned owners and approval paths; baked-in security through compliant platforms, role-based permissions, and comprehensive logging; and explainable outputs with logged inputs and reasoning traces.

By prioritizing these measures, companies can harness AI agents' potential to accelerate processes while minimizing exposure to new risks, ensuring both speed and security in their deployments.

The Rise of AI Agents in Organizations

As AI use evolves in large organizations, leaders seek developments that deliver major ROI. The latest trend is the adoption of AI agents, autonomous systems designed to handle complex tasks and adapt to new information.

More than half of organizations have deployed AI agents to some extent, with additional adopters expected in the next two years. This rapid uptake reflects their appeal for improving efficiency beyond traditional automation, which excels at repetitive, rule-based processes.

However, many early adopters are now reevaluating their approach. Four in 10 tech leaders regret not establishing a stronger governance foundation initially, highlighting the need for policies, rules, and best practices to ensure responsible, ethical, and legal AI use.

As adoption accelerates, organizations must balance exposure risks with guardrails to secure AI implementation, fostering both innovation and safety.

"The strength of AI agents lies in their autonomy."

— João Freitas, GM and VP of engineering for AI and automation at PagerDuty

Key Risks in AI Agent Deployment

AI agents introduce risks that organizations must address for safer adoption. Three areas deserve particular attention.

Shadow AI and Unauthorized Use

The first risk is shadow AI: employees using unauthorized tools without permission, bypassing approved processes. Shadow AI has existed since the first AI tools emerged, but agent autonomy lets unsanctioned systems operate outside IT's view, introducing fresh security risks.

IT teams should create sanctioned channels for experimentation and innovation so employees can use AI efficiently within controlled environments.

Gaps in Ownership and Accountability

Secondly, organizations must close gaps in AI ownership and accountability. The strength of AI agents lies in their autonomy, but unexpected actions require clear responsibility for resolution.

Without defined owners, teams struggle to prepare for incidents or processes gone wrong, underscoring the need for structured accountability.

Lack of Explainability

The third risk is the absence of explainability for AI agent actions. These goal-oriented systems can achieve objectives in unclear ways, making it hard for engineers to trace or roll back problematic decisions.

AI agents must expose explainable logic so they can integrate safely with existing systems and so problematic decisions can be diagnosed and rolled back.

Addressing these risks will not delay adoption but will enhance overall security.

Guidelines for Responsible AI Adoption

Once risks are identified, organizations should implement guidelines and guardrails for safe AI agent usage. Following these three steps minimizes potential pitfalls.

1. Make Human Oversight the Default

Agent capabilities evolve quickly, but human oversight remains essential whenever agents act, decide, or pursue goals that affect key systems. Keep humans in the loop by default, especially for business-critical cases.

Teams must understand potential actions and intervention points. Start conservatively, gradually increasing agency levels. Operations, engineering, and security professionals should define their supervisory roles, with each agent assigned a specific human owner for oversight and accountability.

Give teams the ability to flag or override agent behavior when outcomes turn negative. While traditional automation handles structured tasks, AI agents tackle complex, adaptive ones, so gate high-impact decisions behind approval paths early on to prevent scope creep, as in the sketch below.
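
Below is a minimal Python sketch of that approval-path pattern. The names (AgentAction, Impact, the owner field) are illustrative assumptions rather than any platform's API: low-impact actions run autonomously, while high-impact ones are routed to the agent's assigned human owner.

```python
# Minimal sketch of a human-approval gate for high-impact agent actions.
# All names here are hypothetical, not tied to any specific agent platform.
from dataclasses import dataclass
from enum import Enum


class Impact(Enum):
    LOW = "low"    # safe to execute autonomously
    HIGH = "high"  # requires explicit human approval


@dataclass
class AgentAction:
    agent_id: str
    owner: str        # assigned human owner, accountable for this agent
    description: str
    impact: Impact


def execute(action: AgentAction, approved_by: str | None = None) -> str:
    """Run low-impact actions directly; queue high-impact ones for the owner."""
    if action.impact is Impact.HIGH and approved_by is None:
        # Route to the agent's owner instead of acting autonomously.
        return f"queued for approval by {action.owner}: {action.description}"
    return f"executed: {action.description}"


# Low-impact work proceeds; high-impact work waits for a human decision.
print(execute(AgentAction("agent-7", "alice@example.com", "restart staging pod", Impact.LOW)))
print(execute(AgentAction("agent-7", "alice@example.com", "scale prod database", Impact.HIGH)))
```

One conservative way to increase agency levels gradually is to start with every action type in the high-impact bucket and relax the gate per action type as trust builds.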

2. Bake in Security

New tools must not introduce fresh risks. Choose agentic platforms that comply with recognized standards such as SOC 2 or FedRAMP. Limit agent access: align each agent's permissions with its owner's scope and avoid granting extended tool privileges.

Role-based access keeps each agent's privileges scoped and auditable. Maintain complete logs of every action so incidents can be traced effectively; the sketch below shows one way to combine the two.
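
A minimal sketch of pairing role-based checks with an append-only action log follows. The roles, permissions, and in-memory log are hypothetical stand-ins; a real deployment would rely on the platform's own RBAC and durable audit storage.

```python
# Minimal sketch: role-based permission checks plus an append-only action log.
# Roles and permissions are illustrative, not a real platform's scheme.
import json
import time

# Permissions mirror the human owner's scope: an agent never holds
# privileges its owner lacks.
ROLE_PERMISSIONS = {
    "responder": {"read_alerts", "acknowledge_alert"},
    "operator": {"read_alerts", "acknowledge_alert", "restart_service"},
}

ACTION_LOG: list[str] = []  # in practice, ship to durable audit storage


def perform(agent_id: str, role: str, permission: str, target: str) -> bool:
    """Check the agent's role, then log the attempt whether or not it runs."""
    allowed = permission in ROLE_PERMISSIONS.get(role, set())
    ACTION_LOG.append(json.dumps({
        "ts": time.time(),
        "agent": agent_id,
        "role": role,
        "permission": permission,
        "target": target,
        "allowed": allowed,
    }))
    return allowed


perform("agent-7", "responder", "restart_service", "checkout-api")  # denied, but logged
perform("agent-7", "operator", "restart_service", "checkout-api")   # allowed and logged
print("\n".join(ACTION_LOG))
```

Logging denied attempts alongside allowed ones matters: unauthorized requests are often the first signal of shadow AI or a misconfigured scope.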

3. Make Outputs Explainable

AI must never be a black box. Surface the reasoning behind actions so engineers can access the decision context and its traces. Log the inputs and outputs of every action to give a clear view of the agent's logic, which is invaluable when issues arise; one possible trace structure is sketched below.
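
The sketch below shows one possible shape for such a trace; the DecisionTrace structure and its fields are assumptions for illustration, not a vendor format. Each step records its inputs, stated reasoning, and output, so a decision can be reconstructed or rolled back.

```python
# Minimal sketch of a reasoning trace: every step records inputs, reasoning,
# and output so engineers can reconstruct how the agent reached a decision.
import json
from dataclasses import dataclass, field, asdict


@dataclass
class TraceStep:
    inputs: dict
    reasoning: str  # the agent's stated rationale for this step
    output: str


@dataclass
class DecisionTrace:
    agent_id: str
    goal: str
    steps: list[TraceStep] = field(default_factory=list)

    def record(self, inputs: dict, reasoning: str, output: str) -> None:
        self.steps.append(TraceStep(inputs, reasoning, output))

    def dump(self) -> str:
        # asdict recurses through nested dataclasses for clean JSON output.
        return json.dumps(asdict(self), indent=2)


trace = DecisionTrace("agent-7", "reduce alert noise on checkout-api")
trace.record(
    inputs={"alerts_last_hour": 42, "duplicates": 31},
    reasoning="31 of 42 alerts share one fingerprint; grouping them cuts noise",
    output="grouped 31 alerts into one incident",
)
print(trace.dump())  # full decision context, available for review or rollback
```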

These practices establish transparency and control.

Securing Success with AI Agents

AI agents offer immense opportunities to accelerate and improve organizational processes. Yet, neglecting security and governance could expose companies to new vulnerabilities.

As these agents become more prevalent, organizations need systems to measure performance and intervene when problems arise. By embedding human oversight, robust security, and explainability from the start, leaders can achieve the right balance of speed and protection.

Prioritizing these elements not only mitigates risks but also builds trust in AI deployments, paving the way for ethical and efficient innovation. In an era of accelerating AI adoption, strong guardrails are key to turning potential nightmares into strategic advantages for site reliability engineering and beyond.

"AI agents offer a huge opportunity for organizations to accelerate and improve their existing processes."

— João Freitas, GM and VP of engineering for AI and automation at PagerDuty