AIM302 · Advanced · Breakout session · AI & Machine Learning

Agentic AI Meets Responsible AI - Science, Strategy and Practice

What this session is about

AI agents offer powerful capabilities — and introduce fundamentally new risks that require more than traditional controls. This session explores responsible agentic AI through three lenses: the science, the framework, and a real-world customer story. Understand the scientific frontiers that make agents different — from emergent behaviour and agent-to-agent trust to the challenges of governing systems that plan, negotiate, and act autonomously. Learn the four areas of the AWS Responsible AI framework where agents change the rules, and hear how one of Australia's leading health insurers is putting responsible AI into practice — from strategy to governance to real-world trade-offs.

Playbook

Editorial commentary · what to actually do about this on Monday

The concept
Agents introduce fundamentally new risks: emergent behaviour, agent-to-agent trust, autonomous action. Old controls (designed for predictive ML: input → score → decision) don't fit.
Why it matters
Your existing AI risk framework was designed for the predictive era. Agents *plan*, *negotiate*, *act*. The audit surface is different.
The hard parts
How do you audit a system that plans? Logs of what happened don't explain *why*. The reasoning is the artefact you need to capture, and the reasoning is in natural language and tool calls, not structured logs.
Playbook moves
(1) Capture agent reasoning (tool calls, plans, decisions, intermediate scratchpads), not just outputs. (2) Build an agent-specific risk taxonomy — most existing AI risk frameworks won't translate. (3) Create a clear escalation path for agent decisions that exceed predefined authority levels.
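Moves (1) and (3) above can be sketched in a few lines. This is a minimal illustration under stated assumptions, not an AWS API or the speaker's implementation — every name here (`AgentTrace`, `AUTHORITY_LIMITS`, `requires_escalation`, the claim IDs) is hypothetical:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical trace record: captures the agent's reasoning artefacts
# (plan, tool calls, scratchpad, decisions), not just the final output.
@dataclass
class AgentTrace:
    task: str
    plan: list[str] = field(default_factory=list)
    tool_calls: list[dict] = field(default_factory=list)
    scratchpad: list[str] = field(default_factory=list)
    decisions: list[dict] = field(default_factory=list)

    def log_tool_call(self, tool: str, args: dict, result: str) -> None:
        # Timestamped entry so an auditor can replay the sequence later.
        self.tool_calls.append({
            "ts": datetime.now(timezone.utc).isoformat(),
            "tool": tool, "args": args, "result": result,
        })

# Hypothetical authority policy: each action has a minimum tier,
# and an agent acting above its tier must escalate to a human.
AUTHORITY_LIMITS = {"read": 0, "write": 1, "approve_refund": 2}

def requires_escalation(action: str, agent_tier: int) -> bool:
    # Unknown actions escalate by default (fail closed).
    return AUTHORITY_LIMITS.get(action, 99) > agent_tier

trace = AgentTrace(task="assess claim C-123")
trace.plan.append("1. fetch claim record")
trace.log_tool_call("fetch_claim", {"id": "C-123"}, "ok")
trace.decisions.append({
    "action": "approve_refund",
    "escalated": requires_escalation("approve_refund", agent_tier=1),
})
```

The design point is that the trace and the escalation check live in the same record: when a decision is escalated, the reviewer sees the plan and tool calls that led to it, not just the proposed action.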
The surprise
The hardest agent risk isn't "agent does bad thing" — it's "agent confidently fabricates evidence to *justify* the bad thing." Your auditing must catch the justification narrative, not just the action. Agents can produce coherent-looking reasoning for incorrect actions, and that reasoning is what humans will trust.
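One way to operationalise catching a fabricated justification: cross-check every piece of evidence the agent cites against what its tools actually returned. A toy sketch — the verbatim-substring rule and all names are illustrative assumptions; a real system would need structured evidence IDs or fuzzier matching:

```python
# Toy grounding check (illustrative): a justification is only trusted if
# every evidence string it cites appears in the recorded tool-call results.
def justification_grounded(cited_evidence: list[str], tool_results: list[str]) -> bool:
    observed = " ".join(tool_results)
    return all(claim in observed for claim in cited_evidence)

tool_results = ["policy P-9 covers dental up to $500"]

# Grounded citation: the tools actually returned this fact.
ok = justification_grounded(["covers dental up to $500"], tool_results)

# Fabricated citation: the agent's narrative cites a figure no tool returned.
fabricated = justification_grounded(["covers dental up to $2000"], tool_results)
```

The check is deliberately asymmetric: it can't prove an action was right, only flag justifications that cite evidence the system never observed — which is exactly the failure mode described above.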

Independent editorial perspective — not an official AWS or speaker statement. Designed for executives evaluating what to brief their teams on next.
