15 April 2026 15:50 - 16:10
Who owns an AI agent when it acts?
As AI agents move from recommendation to execution, questions of ownership become unavoidable. When a system takes actions on its own (triggering workflows, changing state, or affecting customers), it is no longer clear where responsibility sits when something goes wrong.
We’ll discuss how teams assign accountability for agent decisions, what being “on call” means when behavior is non-deterministic, and how organizations reconcile autonomy with existing operational and incident-response models.
The focus is on clarifying where responsibility sits once systems act independently, and on what must change in engineering, governance, and operations to support that reality.
Key takeaways:
→ How teams define ownership and accountability for agent-driven actions.
→ What on-call responsibility looks like when systems behave probabilistically.
→ Why autonomy forces changes to incident response, escalation, and governance models.