Most security tools find the problem and hand it to a human. Plerion closes the loop. In this talk, we'll show how Pleri, our AI security engineer powered by Amazon Bedrock, takes a critical cloud risk from detection to remediation without the alert-ticket-backlog cycle. Watch a top risk get prioritized, a ticket filed, a PR opened, and a code-level remediation land in your environment. Redefine what it means to have an AI teammate that does the work rather than just alerting and reporting.
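The detect → prioritize → ticket → PR loop the session demos can be sketched in a few lines. This is a hypothetical illustration only: the names (`Risk`, `file_ticket`, `open_pr`) are assumptions for the sake of the sketch, not Plerion's or Amazon Bedrock's actual APIs, and the tracker/PR calls are stand-ins for real integrations.

```python
from dataclasses import dataclass

# Hypothetical sketch of an automated remediation loop: all class and
# function names here are illustrative assumptions, not a real product API.

@dataclass
class Risk:
    resource: str
    severity: int   # higher = more critical
    fix_patch: str  # code-level remediation the agent proposes

def prioritize(risks):
    """Pick the single most critical risk by severity."""
    return max(risks, key=lambda r: r.severity)

def file_ticket(risk):
    """Stand-in for a real issue-tracker integration (e.g. Jira)."""
    return f"TICKET: remediate {risk.resource} (severity {risk.severity})"

def open_pr(risk):
    """Stand-in for opening a pull request carrying the proposed patch."""
    return f"PR: apply patch to {risk.resource}\n{risk.fix_patch}"

def close_the_loop(risks):
    """Detection feed in, ticket and PR out -- no human hand-off in between."""
    top = prioritize(risks)
    return file_ticket(top), open_pr(top)

risks = [
    Risk("s3://public-bucket", severity=9, fix_patch="block_public_access = true"),
    Risk("iam/over-privileged-role", severity=6, fix_patch="remove AdministratorAccess"),
]
ticket, pr = close_the_loop(risks)
print(ticket)
```

The design point the abstract makes is that ticket and PR come out of the same prioritization step, so nothing sits in a backlog between detection and fix.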
What this session is about
Playbook
Editorial commentary · what to actually do about this on Monday
Independent editorial perspective — not an official AWS or speaker statement. Designed for executives evaluating what to brief their teams on next.
Live updates related to this session
Sourced via Parallel AI Monitor — continuous web watch on 21 topical streams.
- digitalapplied.com high confidence Agent safety & prompt injection
Prompt Injection in Production Agents: 2026 Taxonomy
- onereach.ai high confidence Agent identity & delegation
From AI Agent Sprawl to Unified AI Operations
Google and Microsoft have jointly proposed a new W3C standard called WebMCP (Web Model Context Protocol). This standard aims to allow websites to expose structured, callable tools directly to AI agents through a native browser API, fundamentally changing how agents discover and i…
- microsoft.com high confidence Agent safety & prompt injection
When prompts become shells: RCE vulnerabilities in AI agent ...
Security Disclosure: Microsoft disclosed two critical vulnerabilities in the Semantic Kernel framework that enable Remote Code Execution (RCE) and sandbox escapes via prompt injection. 1) CVE-2026-26030: A vulnerability in the In-Memory Vector Store's filter function (using unsaf…
- ndss-symposium.org high confidence Agent safety & prompt injection
Prompt Injection Attack to Tool Selection in LLM Agents
- arxiv.org high confidence Agent safety & prompt injection
[2602.21012] International AI Safety Report 2026 - arXiv.org
External links matched to this session via topic relevance. The KB does not endorse third-party content; verify before citing.