Palo Alto Networks is calling its new Prisma SD-WAN Troubleshooting Agent autonomous. The product page says it handles network blackouts and application brownouts without human intervention. The company's blog post headline reads "autonomous resolution." But buried in the same blog post is a one-line constraint that collapses the claim: "Critically, the system incorporates human-in-the-loop oversight at all times and generates a clear remediation plan."
That is the story. Not autonomous resolution — autonomous plan generation, with a human signing off before anything actually changes.
The product, announced this month on Palo Alto's blog, uses a hierarchy of domain-specific agents trained on internal knowledge bases, remediation playbooks, and configuration guides via retrieval-augmented generation (RAG). When something goes wrong on an SD-WAN edge node, the system can presumably find the right page in the internal wiki faster than an on-call engineer can. Whether that is "agentic" depends on what you are selling.
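To make the distinction concrete, here is a minimal sketch of what a retrieve-and-suggest pipeline with a mandatory approval gate looks like. Everything in it is hypothetical: the playbook entries, the keyword-overlap scoring (a stand-in for real RAG retrieval), and the function names are illustrative assumptions, not Palo Alto's implementation. The point is structural: the system can rank playbooks all day, but nothing executes until `approve` returns true.

```python
# Hypothetical sketch of a retrieve-and-suggest remediation loop.
# Keyword overlap stands in for RAG retrieval over a knowledge base;
# names and playbooks are invented for illustration.

PLAYBOOKS = {
    "bgp-flap": "Check BGP session timers; verify WAN link stability; restart peering if needed.",
    "tunnel-down": "Validate IPsec keys; confirm remote endpoint reachability; re-negotiate tunnel.",
    "app-brownout": "Inspect path quality metrics; shift traffic to the secondary WAN link.",
}

def retrieve_playbook(alert: str) -> tuple[str, str]:
    """Return the playbook whose terms best overlap the alert text."""
    alert_terms = set(alert.lower().split())
    def score(item):
        key, text = item
        doc_terms = set(key.replace("-", " ").split()) | set(text.lower().split())
        return len(alert_terms & doc_terms)
    return max(PLAYBOOKS.items(), key=score)

def propose_remediation(alert: str, approve) -> str:
    """Generate a plan, but change nothing until a human signs off."""
    key, plan = retrieve_playbook(alert)
    if not approve(plan):  # the human-in-the-loop gate the blog post buries
        return f"plan '{key}' proposed, awaiting approval"
    return f"plan '{key}' approved for execution"
```

In this framing, the "agent" is the `retrieve_playbook` half; the `approve` callback is the NOC engineer, and it sits on the critical path of every change.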
Palo Alto claims the agent cuts mean time to resolution (MTTR) from hours to minutes. No customer is named. No case study is cited. The claim floats free of any empirical anchor, which is the specific problem with most agent infrastructure marketing right now. The promise sounds transformative. The evidence trail is a blog post.
The broader context comes from Gartner. In an August 2025 report — "Agentic NetOps Will Revolutionize Network Operations," by analysts Jonathan Forest and Andrew Lerner — the firm predicted that 50% of organizations will use agentic NetOps with minimal human involvement by 2030, up from nearly 0% in 2025. That prediction anchors the entire vendor narrative, and it is the number every subsequent press release cites.
The problem is that the prediction is gated. The full text is not publicly available, and every source echoing the 50% figure traces back to either Palo Alto's blog or the same Gartner document. NTT DATA's insights page cites it. NetBrain's blog cites it. Selector AI's analysis cites it. None of them have independent data. NetBrain, notably, appears as a Representative Vendor in the same Gartner report it cites — a commercial relationship that the analyst blog does not flag.
Selector AI, which has no obvious stake in the Palo Alto announcement, is more candid. "One of the biggest risks in the current market," its blog reads, "is superficial agent washing, where vendors rebrand chat interfaces or scripts as autonomous intelligence." That is a precise description of what the Palo Alto headline does versus what the blog body says.
NTT DATA, citing its own IDC study, claims nearly 40% of organizations are already evaluating agentic AI in network operations. The study was commissioned by NTT. Actual adoption across all vendors, by NTT's own estimate, sits below 1% of organizations. So there is a gap between evaluation pipelines and production deployment — which is exactly where agent infrastructure tends to stall.
The honest version of what Palo Alto has built is a search-and-suggest engine for network remediation. It probably does find the right playbook faster than a human googling through internal docs. That is useful. It is not the same as an autonomous agent that fixes things while the NOC team sleeps.
The gap between those two things — search tool versus autonomous actor — is where the AI-replaces-IT narrative lives right now. The vendors are selling the latter. Their own documentation describes the former. Someone in the NOC still has to say yes.