News • May 15, 2026
Agentic Workflow Governance: Who Actually Decides What Your AI Agent Does?
Campbell Brown, Meta's former news chief, says the people shaping what AI tells us are too far from the consequences. For B2B leaders buying agentic workflows in 2026, that distance is already costing money.
The Agents Behind the Curtain—and the Governance Gap They Expose
In May 2026, Campbell Brown—who led Meta's news partnerships until 2023—told TechCrunch something the AI industry doesn't want to hear: the people deciding what AI surfaces are too insulated from the fallout. She'd watched it inside Meta. Engineers and policy teams in Menlo Park debated which news to prioritize, rarely engaging with the business owners, journalists, or skeptical public who would live with the results.
The same dynamic now runs through every B2B agentic workflow deployment. Who controls your AI agent in B2B workflows? Who defines what's useful, what's compliant, and what's dangerous? These aren't philosophical questions—they're the operational questions that determine whether your agent closes deals or creates liability.
The gap between those building agentic systems and the businesses relying on them isn't just cultural. It's a risk surface, a revenue lever, and a trust problem simultaneously. And without deliberate agentic workflow governance, it widens by default.
From Boardrooms to Backchannels: What Curation Gaps Actually Cost
Meta wasn't the first to face this tension, and your vendor won't be the last to create it.
In 2025, a Salesforce internal audit found its AI-powered sales assistant had failed to surface critical compliance updates for UK clients after a data pipeline overlooked a new FCA regulation. The result: a £1.2 million lost bid and a public apology. In South Africa, a major telecom's procurement bot—built by a Silicon Valley vendor—routinely misread B-BBEE compliance requirements because its training data never included local government PDFs. In San Diego, a logistics firm's AI quoting engine lowballed insurance risks in hurricane zones, inheriting a geographic blind spot from a Boston-based consultancy's risk model.
These aren't edge cases. They're the predictable output of agentic AI governance for business operations that was never properly scoped. The builders had debated safety, fairness, and relevance in the abstract—without a line of sight to the regulatory, commercial, or geographic contexts where their systems would operate.
For UK buyers, GDPR and FCA compliance aren't optional configuration items—they're baseline requirements that must be embedded at the design stage, not patched in after a breach. For US buyers focused on sales velocity and process control, an agent with miscalibrated risk logic doesn't just slow pipeline—it creates audit exposure. For South African businesses operating with leaner teams and tighter budgets, a vendor's geographic blind spot isn't an inconvenience—it's a procurement failure you pay for twice.
The Gap Is Experiential, Not Just Technical
Brown's point wasn't about Silicon Valley groupthink alone. It was about the consequences of an experiential gap: what happens when the people making curation decisions have never run your sales team, filed under your regulatory regime, or operated in your market.
Inside a typical agentic workflow build, decisions get made like this:
- Product managers default to data sources that are accessible or 'uncontroversial'—not necessarily relevant to your context.
- Engineers focus on hallucination and prompt injection, but rarely have visibility into your compliance exposure.
- Corporate policy teams weigh brand risk, not the deal you'll lose when the agent misses a critical local requirement.
Your SDRs, finance analysts, and compliance officers inherit outputs shaped by debates they were never part of. As one UK SaaS founder told us in 2024: 'Our AI onboarding agent feels like it was built for a different country, by people who've never run a sales team.' That's not a product complaint—it's a governance failure.
Building trustworthy AI agents for B2B companies requires closing this experiential gap before deployment, not after the first expensive mistake.
What B2B Buyers Must Demand From Agentic AI Implementation
The answer isn't to wait for tech giants to self-correct. Governance-first agentic AI implementation is the only approach that protects revenue and manages risk at scale. Here's what that looks like in practice:
Demand AI agent transparency. Who is making choices about what your agent surfaces, what it ignores, and what guardrails it applies? If your vendor can't answer that question in plain language, that's your answer.
Require participatory design. Your subject-matter experts—compliance leads, RevOps managers, regional sales heads—must be in scoping sessions before launch, not consulted after something breaks.
Insist on data provenance. If your agentic workflow pulls from third-party APIs, regulatory feeds, or news sources, ask who curates those lists, how often they're updated, and how fast a critical gap can be patched. AI workflow audit capability should be a contractual requirement, not a nice-to-have.
Set hard requirements for local context. Operating in South Africa means your agent must understand B-BBEE and POPIA. Operating in the UK means FCA and GDPR are non-negotiable. Operating across US regions means geographic risk variance must be modeled explicitly. 'We can configure that later' is not a governance framework.
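One way to make those local-context requirements enforceable rather than aspirational is to encode them as a machine-checkable deployment gate. The sketch below is illustrative only: the region codes, requirement sets, and the `check_deployment` helper are assumptions for the example, not any vendor's real API.

```python
# Illustrative governance gate: refuse to ship an agent unless its
# configured data sources cover every regulatory requirement for the
# regions it will operate in. All names here are hypothetical.

REGIONAL_REQUIREMENTS = {
    "UK": {"FCA", "GDPR"},
    "ZA": {"B-BBEE", "POPIA"},
    "US": {"state_risk_models"},
}

def check_deployment(regions, covered_requirements):
    """Return the set of missing requirements; an empty set means pass."""
    required = set()
    for region in regions:
        required |= REGIONAL_REQUIREMENTS.get(region, set())
    return required - set(covered_requirements)

# Example: an agent configured without POPIA coverage fails the gate.
missing = check_deployment(["UK", "ZA"], {"FCA", "GDPR", "B-BBEE"})
assert missing == {"POPIA"}
```

The point isn't the code; it's the contract it implies. "We can configure that later" becomes impossible to say once a missing requirement blocks the release.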
The agentic workflow consultants who will win in 2026 aren't the ones with the slickest demos—they're the ones who treat governance as the product, not an afterthought. At funnnl, every engagement starts with co-design sessions involving real end users, not just product owners. We audit curation decisions for lived context, not just technical accuracy. AI agent oversight and compliance for RevOps teams isn't a bolt-on service—it's the foundation.
The Deeper Question: Agency Over the Agent Itself
There's a harder problem now coming into focus. As agentic workflows become genuinely autonomous—making decisions, not just recommendations—the governance stakes rise sharply. Agentic workflow risk management for enterprises isn't only about preventing bad outputs. It's about establishing who holds authority over the agent's decision logic, and how that authority is exercised in real time.
In 2025, a US healthcare network deployed a procurement agent to evaluate RFPs. The agent applied scoring rules designed by a Texas vendor—without human review. When a critical supplier was disqualified due to a misinterpretation of local licensing law, the fallout was swift: lost contracts, board scrutiny, and an expensive audit. The agent had followed its design exactly. Its design had simply never been scrutinized by the people who would pay for its mistakes.
How to audit AI agent decisions in real time isn't a technical question—it's a governance question. It requires defined ownership, documented decision logic, and a continuous review process that doesn't wait for a postmortem. As the AI layer becomes the operational layer, the gap between builder intent and buyer consequence becomes existential. One-off training sessions don't close it. Shared, ongoing agency over the agent does.
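What "documented decision logic plus continuous review" can look like in practice: every autonomous decision lands in an append-only log that records which rule fired, what the agent saw, and the named human accountable for that rule, so a reviewer can flag anomalies before a postmortem is needed. A minimal sketch, with every field name and the `flag_for_review` threshold as assumptions rather than a prescribed schema:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    agent: str            # which agent acted
    rule_id: str          # which piece of decision logic fired
    inputs: dict          # what the agent saw
    outcome: str          # what it decided
    owner: str            # the human accountable for this rule
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

audit_log: list[DecisionRecord] = []

def record_decision(rec: DecisionRecord) -> DecisionRecord:
    audit_log.append(rec)  # in production: an append-only store
    return rec

def flag_for_review(log, rule_id, threshold=3):
    """Flag a rule once it has driven `threshold` disqualifications."""
    hits = [r for r in log
            if r.rule_id == rule_id and r.outcome == "disqualify"]
    return len(hits) >= threshold
```

The `owner` field is the governance move: no rule runs without a named person who answers for it, which is exactly what the healthcare procurement agent above lacked.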
The B2B Advantage: You Can Set the Terms
Brown's critique is a warning, but B2B buyers hold leverage that consumer users don't. You can set contractual terms, demand AI agent transparency, and shape the governance layer to reflect your operational reality—not a vendor's defaults.
The agentic workflows that buyers actually trust in 2026 won't be the most automated; they'll be the most governed. They'll embed lived experience, local expertise, and continuous oversight into every decision the agent makes. They'll be designed for your context, not Silicon Valley's assumptions about it.
At funnnl, bridging the gap between builder intent and buyer consequence is the core of how we design agentic workflows. The closer your agent is to your operational reality, the more value it creates—and the less risk it carries.
The question is no longer just who decides what AI tells you. It's whether the right people—your people—are in the room every time that decision gets made.

