Reference Architecture · May 2026
Building AI Agents
Right.
A Reference Architecture
Neet (Clement Wong)
Director, Solutions Engineering COE, APAC — Cloudflare
Presented to Foris / Crypto.com · May 2026
Act 1 — The Shared Bet · 0:30–2:30

You bought ai.com.
That's a platform bet.

Your bet — Foris / ai.com
  • Spent $70M because the agentic future is a category, not a feature
  • Launched at the Super Bowl — ai.com as big as crypto.com
  • Building onchain — agents owned by users, not platforms
My bet — OpenClaw
  • Same thesis. Different stack.
  • 138,000 GitHub stars · 1M+ weekly downloads
  • Built on Cloudflare. Open-source. Production.

Let me show you the architecture I landed on — and the decisions you'll need to make building yours.

Act 2 — Reference Architecture · 2:30–4:00

OpenClaw on Cloudflare —
The Three Layers

👁️
Layer 3 — Observe Everything
Full audit trail · Session replay · Anomaly detection · "If you can't see it, you can't trust it."
🔒
Layer 2 — Scope Permissions
Tool registry · Sandboxed execution · Human-in-the-loop for high-stakes actions
🔑
Layer 1 — Connect Securely
Per-agent identity · JIT credentials · Mutual TLS · Zero Trust network egress
Cloudflare AI Gateway · Workers · Zero Trust · KV / R2 / Durable Objects

Framework: Volker Rath (Cloudflare) — Connect · Scope · Observe

Layer 1 — Connect Securely · 4:00–5:30

The question isn't who the user is.
It's who the agent is.

❌ The Default (most platforms)
  • Shared service account for all agents
  • Static API keys that never rotate
  • No per-agent audit trail
  • One compromised agent = all agents compromised
✓ The OpenClaw Approach
  • Unique credentials per agent — issued at task time, revoked after
  • JIT access — no standing permissions anywhere
  • Mutual TLS — both sides authenticate
  • Zero Trust egress — tunnels, not open gateways
Your agents should never have more access than a junior employee on their first day.
Layer 1 — Connect Securely · 5:30–6:30

How OpenClaw Connects —
The Flow

1. Task assigned · Intent captured
2. Agent requests credentials · JIT token ⏱
3. Agent connects to tool · mTLS
4. Tool validates identity · Scoped access only
5. Task completes · Auto-revoke 🗑
6. Audit record created · Immutable log

⚠️ If you're building your platform: This is the first thing to wire in — before you hit 100 agents. The cost of adding per-agent identity later is exponentially higher than building it now.
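The six-step flow above can be sketched in code. A minimal TypeScript sketch, assuming a simple in-memory store: `CredentialStore`, `issue`, `validate`, and `revoke` are illustrative names, not OpenClaw or Cloudflare APIs, and mTLS and real token signing are elided.

```typescript
// Hypothetical sketch only — names are illustrative, not a real API.

type Credential = {
  agentId: string;
  scopes: string[];  // tools this token may reach
  expiresAt: number; // JIT: short-lived by construction
  revoked: boolean;
};

type AuditEvent = { agentId: string; action: string; ts: number };

class CredentialStore {
  private creds = new Map<string, Credential>();
  private audit: AuditEvent[] = []; // append-only in this sketch

  // Step 2: agent requests credentials for an assigned task.
  issue(agentId: string, scopes: string[], ttlMs: number): string {
    const tokenId = `${agentId}:${Date.now()}:${Math.random().toString(36).slice(2)}`;
    this.creds.set(tokenId, {
      agentId,
      scopes,
      expiresAt: Date.now() + ttlMs,
      revoked: false,
    });
    this.log(agentId, `issue scopes=[${scopes.join(",")}]`);
    return tokenId;
  }

  // Step 4: the tool validates identity and scope before serving the call.
  validate(tokenId: string, tool: string): boolean {
    const c = this.creds.get(tokenId);
    const ok = !!c && !c.revoked && Date.now() < c.expiresAt && c.scopes.includes(tool);
    this.log(c?.agentId ?? "unknown", `validate tool=${tool} ok=${ok}`);
    return ok;
  }

  // Step 5: task completes, credential revoked immediately.
  revoke(tokenId: string): void {
    const c = this.creds.get(tokenId);
    if (c && !c.revoked) {
      c.revoked = true;
      this.log(c.agentId, "revoke");
    }
  }

  // Step 6: every transition leaves an audit record.
  private log(agentId: string, action: string): void {
    this.audit.push({ agentId, action, ts: Date.now() });
  }

  auditTrail(): readonly AuditEvent[] {
    return this.audit;
  }
}
```

Note that revocation here is a state flip, not a deletion: the credential record survives so the audit trail stays complete.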

Layer 2 — Scope Permissions · 6:30–9:00

Agents should be powerful.
Not omnipotent.

If an agent can call any tool, it will eventually call the wrong one — through prompt injection, hallucination, or user error. The model can't enforce scope. Only infrastructure can.

Tool Registry
Every callable tool must be explicitly registered. Unregistered = unreachable. The tool simply doesn't exist to the agent.
Sandboxed Execution
Every tool runs in isolated Workers with explicit data bindings. The sandbox IS the permission model — not a wrapper around it.
Human-in-the-Loop
For anything irreversible — sending funds, publishing, modifying accounts — the agent queues the action. A human approves. Only then does it execute.

🏦 For Foris: HITL for all financial-agent interactions needs to be in your v1 spec. Not v2. V1.
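The registry-plus-HITL pattern is simple to state in code. A minimal TypeScript sketch under stated assumptions: `ToolRegistry`, `call`, and `approveNext` are hypothetical names, and the actual sandbox (isolated Workers with explicit data bindings) is out of scope here.

```typescript
// Hypothetical sketch — an explicit tool allowlist plus an approval
// queue for irreversible actions. Not an OpenClaw or Cloudflare API.

type ToolHandler = (params: Record<string, unknown>) => string;

class ToolRegistry {
  private tools = new Map<string, { handler: ToolHandler; requiresApproval: boolean }>();
  private pending: { tool: string; params: Record<string, unknown> }[] = [];

  register(name: string, handler: ToolHandler, requiresApproval = false): void {
    this.tools.set(name, { handler, requiresApproval });
  }

  // Unregistered = unreachable: the call fails before any handler runs.
  call(name: string, params: Record<string, unknown>): string {
    const entry = this.tools.get(name);
    if (!entry) throw new Error(`tool "${name}" is not registered`);
    if (entry.requiresApproval) {
      // Queue for a human; nothing executes yet.
      this.pending.push({ tool: name, params });
      return "QUEUED_FOR_APPROVAL";
    }
    return entry.handler(params);
  }

  // A human approves the oldest queued action; only then does it execute.
  approveNext(): string {
    const item = this.pending.shift();
    if (!item) throw new Error("nothing pending");
    return this.tools.get(item.tool)!.handler(item.params);
  }
}
```

The design point: the HITL gate lives in the dispatch path, so no prompt, however adversarial, can route around it.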

Layer 3 — Observe Everything · 9:00–11:00

If you can't see it,
you can't trust it.

What OpenClaw Observes

  • Every prompt sent — what the agent asked, what context was included
  • Every response received — what the model said, token count, latency, cost
  • Every tool call — which tool, what params, response, duration
  • Rate anomalies — if an agent makes 10× normal calls, we catch it
  • Full session replay — reconstruct any agent session exactly
📋 Audit Entry Example
agent: finance-bot-7
tool: send_payment
params: amount=500 USDC
status: HITL_APPROVED
approver: user:clement
ts: 2026-05-14T10:42:01Z
⚠️ Anomaly Alert
Agent finance-bot-3: 15× normal API call rate detected. Auto-throttled.
If a regulator asks "what did this agent access and why?" — your answer should be a log entry, not a discussion.
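The anomaly alert above boils down to a per-agent baseline comparison. A sketch with hypothetical names (`RateAnomalyDetector`) and a deliberately naive fixed-ratio rule; production detection would be more robust than a single multiplier:

```typescript
// Hypothetical sketch of the "10× normal" check: compare an agent's
// calls in the current window to its own rolling baseline.

class RateAnomalyDetector {
  private baseline = new Map<string, number>(); // avg calls per window, per agent
  private readonly factor: number;

  constructor(factor = 10) {
    this.factor = factor;
  }

  recordBaseline(agentId: string, avgCallsPerWindow: number): void {
    this.baseline.set(agentId, avgCallsPerWindow);
  }

  // Per-agent, not aggregate: each agent is judged against its own normal.
  check(agentId: string, callsThisWindow: number): "ok" | "throttle" {
    const normal = this.baseline.get(agentId);
    if (normal === undefined) return "ok"; // no baseline yet: observe only
    return callsThisWindow >= normal * this.factor ? "throttle" : "ok";
  }
}
```

Judging each agent against its own baseline is what makes the check per-agent rather than aggregate: one noisy agent cannot hide inside fleet-wide averages.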
Act 3 — The Foris Equation · 11:00–14:00

What changes when
you're Foris?

① Regulatory Stack
MAS DTSP + Agentic AI Governance

Dual compliance. Most AI platforms don't face this. Yours does.

MAS specifically mandates: scope, audit, lifecycle, data segregation for all agentic AI.

Active enforcement
② Asset Exposure
Data vs. Assets

Non-financial AI compromised → data exfiltration

Foris agent compromised → asset movement.

Different class of problem. Entirely.

③ Ecosystem Context
The $CLAWD Signal

$16M market cap fake token — same playbook that will target YOUR platform.

You've been here before — 2022 breach. You know what sophisticated attackers look like.

You're not just building an AI platform. You're building a financial AI platform. Different requirements. Higher stakes. And better positioned to get it right because you already understand the threat model.

Landscape · 14:00–15:00

The AI Agent Landscape —
Where You Fit

General AI Agents
OpenClaw Open-source · 138K stars · Cloudflare-native
OpenAI Agents Enterprise · Closed
Lindy / LangFlow Workflow automation
Crypto-Native AI Agents
Virtuals Protocol Base network · Agent economy
ElizaOS Open-source crypto agent framework
CLANKER Emerging protocol
ai.com — Your Position
Consumer-product-first · Not token/protocol-first
Onchain by design · Users own their agents
Edge-powered for scale · Cloudflare as complement, not competitor

Cloudflare at the edge is complementary to your onchain architecture — not competing with it. You handle decentralized logic. We handle global delivery, security, and distribution.

Ecosystem Data · 15:00–16:00

What the Ecosystem
Is Teaching Us

OpenClaw's growth has been explosive — and the security issues scaled just as fast.

42,900
Exposed agent instances — public internet, no auth, 82 countries
15,200+
With RCE vulnerabilities — pre-built attack vectors
341+
Malicious skills on ClawHub — supply chain poisoning is real
CVE-2026-25253
Auth token theft → Full RCE. One vuln, all agents.
$CLAWD
$16M market cap fake token on Solana — rug pull via AI hype. In your world.

Every exploit in the OpenClaw ecosystem is a preview of what targets YOUR platform. You now know what the industry learned the hard way. Use that.

Act 4 — Platform Requirements · 16:00–18:00

9 Guardrails →
Product Requirements

⬡ Layer 1 — Connect
Unique credentials per agent · Non-negotiable
JIT access, no standing permissions · Design principle
mTLS between agents and tools · Defense in depth
⬡ Layer 2 — Scope
Tool registry, explicit allowlist · Unregistered = unreachable
Sandboxed execution · Isolation by design, not a wrapper
HITL gates for financial actions · v1 requirement, not v2
⬡ Layer 3 — Observe
Full audit logging · Immutable, queryable, replayable
Rate limiting + anomaly detection · Per-agent, not aggregate
Skill supply chain governance · Who can publish? What vetting?

These aren't security controls bolted onto a product. They're product features your platform can't ship without.

Starting Points · 18:00–19:00

If I were starting
your platform today

Three things to wire in before you ship anything externally:

Agent Identity Model
Build per-agent identity into your protocol layer. Not optional. Every agent gets its own cryptographic identity. Affects your credential model, audit trail, and trust architecture. Hard — and expensive — to add later.
Build this first
Audit Trail Architecture
Define your audit schema before you write your first agent handler. What gets logged? How stored? How queried? Make it immutable and replayable from day one. Your future regulator will thank you.
Before your first MAS query
Human-in-the-Loop for Financial Actions
HITL in the protocol, not the UI. The agent literally cannot execute a financial transaction without human approval. This is your regulatory requirement AND your user trust model.
Before your first incident
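For the audit-trail point specifically, "immutable and replayable from day one" can be made concrete with a hash-chained append-only log. A sketch: the field names mirror the audit entry shown on the Layer 3 slide; everything else (`AuditLog`, `canon`) is hypothetical.

```typescript
// Hypothetical sketch: each entry stores the hash of its predecessor,
// so replaying the chain detects any after-the-fact edit.
import { createHash } from "node:crypto";

type AuditEntry = {
  agent: string;
  tool: string;
  params: string;
  status: string;
  approver?: string;
  ts: string;
  prevHash: string; // hash of the previous entry, "GENESIS" for the first
  hash: string;     // SHA-256 over this entry's fields + prevHash
};

// Canonical serialization so append and verify hash identical bytes.
const canon = (e: Omit<AuditEntry, "hash">): string =>
  JSON.stringify([e.agent, e.tool, e.params, e.status, e.approver ?? null, e.ts, e.prevHash]);

class AuditLog {
  private entries: AuditEntry[] = [];

  append(e: Omit<AuditEntry, "prevHash" | "hash">): AuditEntry {
    const prevHash = this.entries.length
      ? this.entries[this.entries.length - 1].hash
      : "GENESIS";
    const partial = { ...e, prevHash };
    const entry = {
      ...partial,
      hash: createHash("sha256").update(canon(partial)).digest("hex"),
    };
    this.entries.push(entry);
    return entry;
  }

  // Replaying the chain verifies it: editing any entry breaks a hash link.
  verify(): boolean {
    let prev = "GENESIS";
    for (const e of this.entries) {
      if (e.prevHash !== prev) return false;
      if (e.hash !== createHash("sha256").update(canon(e)).digest("hex")) return false;
      prev = e.hash;
    }
    return true;
  }
}
```

In production you would anchor the chain in write-once storage rather than memory, but the schema decision is the part that is hard to retrofit.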
Close + Q&A · 19:00–20:00
"You're building the future of AI agents.
The architecture you choose now
is the advantage you keep."
Trust isn't a feature you add. It's an architecture you choose from day one.
Neet (Clement Wong)