Confidential · Internal Use Only · Magic Transit & Magic Firewall

Selling Motion Playbooks

Discovery questions, objection handling, and messaging for the three SHAPE selling motions. Built from the Network Protection GTM Guide (March 2026). Use these before every discovery call.

🔧
Selling Motion 01 — SHAPE
Hardware Retirement
Target: Accounts running Arbor, Radware, or F5 on-premise DDoS appliances — especially those approaching a hardware refresh cycle or experiencing capacity concerns.
Opening Line

"Your hardware handles everyday traffic just fine. But when a modern DDoS attack hits — a 10, 20, or 30+ Tbps flood — your appliance is overwhelmed in seconds. Matching a 7.3 Tbps attack with hardware would require 292 enterprise firewalls at $7.3M+. There's a better way."

300×
Modern attacks vs. max hardware capacity (31 Tbps vs. 100 Gbps)
292
Firewalls needed to match a single 7.3 Tbps attack — at $7.3M+
50%
Reduction in network maintenance costs reported by Cloudflare customers
477 Tbps
Cloudflare DDoS mitigation capacity — vs. 25–100 Gbps for hardware
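A whiteboard version of the capacity math behind these figures, assuming a top-end enterprise appliance rated at 25 Gbps and roughly $25K per unit (illustrative assumptions consistent with the ranges in this playbook):

\[
\frac{7.3\ \text{Tbps}}{25\ \text{Gbps per appliance}} = 292\ \text{appliances},
\qquad
292 \times \$25\text{K} \approx \$7.3\text{M}
\]

And that buys raw throughput only: no racks, power, cooling, or staff to operate 292 boxes.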
Discovery Questions
Q1 How are you currently protecting your public-facing infrastructure — data centers, gateways, VPNs? Confirms hardware ownership and where it sits in the stack.
Q2 What's your current DDoS appliance setup — vendor, deployment model, capacity? Identifies the incumbent (Arbor / Radware / F5) and sizes the opportunity.
Q3 When are you next up for a hardware refresh cycle? Key compelling event. If within 12 months, the CapEx conversation is already open internally.
Q4 What's the maximum bandwidth your current appliance can handle before it saturates? Sets up the capacity gap math. Any number below 1 Tbps is a vulnerability.
Q5 How much staff time goes into managing, patching, and maintaining these devices? Surfaces operational burden — OpEx reduction is often the CFO's strongest motivator.
Q6 Has your team ever had to manually activate or tune your DDoS response during an active attack? On-demand activation is a hardware weakness. Cloudflare is always-on.
🎯 "Have you ever experienced an attack that approached or exceeded your hardware's rated capacity — or had any near-misses where your team was concerned about being overwhelmed?" Key probe. A "yes" or a hesitation here is your strongest discovery signal.
🎯 "Are you looking to reduce CapEx or shift infrastructure costs to OpEx as part of a broader modernization initiative?" Connects to CIO / CFO priorities. CapEx-to-OpEx is often an executive-level mandate.
Signals to Listen For
"We have appliances in our data centers for DDoS protection."
"We're coming up on a hardware refresh cycle."
"We're concerned about capacity — our hardware maxes out at X Gbps."
"We spend a lot of time managing and patching these devices."
"Our hardware is end-of-life or nearing it."
"We're looking to reduce our on-prem infrastructure footprint."
"We've been hit by an attack that slowed things down."
"The board is pushing us to reduce CapEx."
Key Messages

Capacity The math doesn't work for hardware

Hardware maxes out at 25–100 Gbps. Modern attacks exceed 31 Tbps — 300x the limit. No hardware vendor offers an upgrade path to terabit-scale protection. The ISP link saturates before your appliance ever sees a packet.

Speed Hardware requires humans in the loop

On-premise appliances require manual intervention to activate or tune during an attack, leaving you exposed in the window between attack start and activation. Cloudflare detects and mitigates in <3 seconds — automatically, 24/7.

Cost Eliminate the CapEx refresh trap

Every 3–5 years, hardware forces a $250K–$500K+ refresh cycle — and that's before power, cooling, and staffing. Cloudflare shifts that spend to predictable OpEx. Customers report a 50% reduction in network maintenance costs.
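For the CFO conversation, the annualized version of the refresh trap, using only the figures above (cycle length and refresh cost bound the range):

\[
\frac{\$250\text{K}}{5\ \text{years}} = \$50\text{K/yr}
\quad\text{to}\quad
\frac{\$500\text{K}}{3\ \text{years}} \approx \$167\text{K/yr}
\]

That is hardware amortization alone, before power, cooling, patching labor, or SOC staffing.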

Operations No more hardware operations

No hardware to rack, configure, maintain, patch, or replace. Auto-patching from Cloudflare's threat intelligence — derived from 20% of all Internet traffic — means you're protected from zero-days before your hardware vendor even publishes a firmware update.

Objection Handling
"Our hardware has been reliable for years — why change what works?"

It's reliable against yesterday's attacks. The threat landscape has fundamentally changed — modern volumetric attacks are 300x your hardware's rated capacity. Your appliance doesn't fail because it's broken. It fails because it was built for a world where the largest attacks measured in gigabits, not terabits. The reliability window is closing.

Say This

"Your hardware is excellent at what it was designed for. The problem is the attacks it was designed for no longer represent the threat. Cloudflare stopped a 31.4 Tbps attack in 35 seconds. No hardware can touch that — not because of brand, but because of physics."

"We just refreshed our hardware — we're not going to rip it out."

Don't push for full replacement. Pivot to the Network Extension / dual-vendor motion. Cloudflare can sit in front of the existing hardware as the first layer of defense, absorbing volumetric attacks at the edge while the hardware handles protocol-level filtering. When the next refresh cycle comes, the conversation is already open.

Say This

"We're not asking you to rip anything out. Cloudflare can act as your upstream DDoS layer — stopping terabit-scale attacks before they reach your hardware. Your existing investment stays fully utilized, and you add the capacity you can't get from hardware alone."

"We have a strong relationship with Arbor/Radware — they know our environment."

Acknowledge the relationship. Don't attack it. Lead with what they can't give you: upstream protection before the ISP link, terabit-scale capacity, and always-on mitigation. Their vendor knows the environment but cannot change the laws of physics on link saturation.

Say This

"That relationship matters and we respect it. What Arbor can't solve — and isn't designed to solve — is what happens when the attack saturates your ISP link before your appliance sees a single packet. That's a topology problem, not a vendor problem. Cloudflare solves it upstream."

"The migration is too complex — we can't risk disruption."

Reference Melbourne Airport: deployed in under 36 hours with zero disruption. Cloudflare's Professional Services team builds a migration blueprint. Traffic shifts via BGP — your existing infrastructure stays live until you're ready to cut over. Zero disruption is the default, not the exception.

Say This

"Melbourne Airport replaced a complex multi-vendor hardware environment with Magic Transit in under 36 hours — during live operations. Our deployment runs in parallel with your existing infrastructure. You cut over on your schedule, not ours."

Who to Engage
Head of Network Security / VP IT
Primary champion. Lead with capacity gap and operational burden reduction. They shoulder the "nothing bad happening" responsibility.
CIO
Frame as infrastructure modernization and CapEx-to-OpEx transformation. This is their language.
CFO
Lead with: hardware refresh costs eliminated, a 50% reduction in network maintenance costs, and $1M+ average downtime cost vs. the Cloudflare subscription.
Proof Point
Melbourne Airport — Critical Infrastructure

Melbourne Airport deployed Cloudflare Magic Transit in under 36 hours, replacing a complex multi-vendor hardware environment. Result: 100% uptime, zero DDoS attacks reached infrastructure. The deployment ran in parallel with existing systems — zero disruption to live operations.


☁️
Selling Motion 02 — SHAPE
Prolexic Takeout
Target: Accounts using Akamai Prolexic or similar cloud scrubbing center services. Lead with the capacity math — the largest 2025 attack exceeded Prolexic's entire infrastructure by 43%. Check for Prolexic renewals and coordinate with the Jet Ski SPIFF program.
Opening Line

"Akamai Prolexic works like a life vest that sinks in deep water. It handles everyday attacks. But it has only ~22 Tbps of capacity. The largest 2025 attack was 31.4 Tbps — 43% beyond Prolexic's limit. When the real attack hits, Prolexic is overwhelmed. And it adds latency during every mitigation event — even the ones it can handle."

22×
Cloudflare capacity vs. Prolexic — 477 Tbps vs. ~22 Tbps
+43%
Largest 2025 attack exceeded Prolexic's full capacity
7×
More mitigation locations — 335+ cities vs. ~50 scrubbing centers
35s
Time to autonomously mitigate the world-record 31.4 Tbps attack
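Every headline ratio above falls straight out of the published capacity figures; the arithmetic is worth doing live in front of the prospect:

\[
\frac{477\ \text{Tbps}}{22\ \text{Tbps}} \approx 21.7 \approx 22\times,
\qquad
\frac{31.4 - 22}{22} \approx 43\%,
\qquad
\frac{335\ \text{cities}}{{\sim}50\ \text{centers}} \approx 7\times
\]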
Discovery Questions
Q1 How long have you been using Prolexic, and when does your current contract come up for renewal? Identifies the compelling event. Prolexic renewals are a primary SPIFF trigger — coordinate with the Jet Ski team.
Q2 Have you noticed any latency or performance degradation during a mitigation event when Prolexic is active? Backhaul latency is Prolexic's Achilles heel. Any "yes" is a pain point you can solve immediately.
Q3 How was the onboarding and professional services experience with Prolexic — any complexity or lengthy timelines? Prolexic is known for complex, expensive PS engagements. Contrast with Cloudflare's faster deployment.
Q4 Do you use Prolexic always-on, or do you switch to on-demand during an attack? On-demand mode exposes them during the activation window. Always-on with Prolexic adds persistent latency. Both are problems.
Q5 How many scrubbing locations does your current coverage span, and are there any regions where you feel underprotected? Prolexic's ~50 locations vs. Cloudflare's 335+ is a coverage gap, especially in APAC and LATAM.
Q6 Have you reviewed the capacity of your current solution against modern attack sizes — specifically attacks in the 20–30+ Tbps range? Sets up the world-record attack stat. Most Prolexic customers have not done this math.
🎯 "Were you aware the largest DDoS attack ever recorded was 31.4 Tbps — and that it exceeded Akamai Prolexic's entire rated capacity by 43%? How does that factor into your resilience planning?" This is the single most powerful question in this motion. Let the silence sit after you ask it.
🎯 "If Prolexic experienced an outage or was overwhelmed during an active attack, what's your fallback? What does your recovery look like?" Single-vendor dependency is a risk. Opens the door to either full replacement or dual-vendor motion.
Signals to Listen For
"We use Akamai Prolexic (or a similar cloud scrubbing service)."
"We've noticed latency during mitigation — traffic has to route to a scrubbing center."
"We're not sure 22 Tbps is enough for modern attacks."
"The Prolexic onboarding was complex and expensive."
"Our Prolexic renewal is coming up and we're evaluating options."
"We've had issues getting consistent coverage across all our regions."
"Prolexic isn't available in some locations we need coverage."
"Our Akamai costs have been increasing year over year."
Key Messages

Capacity 477 Tbps vs. 22 Tbps — the math is simple

The largest attack in history was 31.4 Tbps. Prolexic's entire infrastructure is rated at ~22 Tbps. That attack would have exceeded Prolexic by 43%. Cloudflare stopped it in 35 seconds. There is no version of Prolexic that handles modern hyper-volumetric attacks.

Latency Zero backhaul — mitigation at the edge

Prolexic routes traffic to centralized scrubbing centers, adding latency during every mitigation event. Cloudflare mitigates inline at 335+ locations worldwide — zero backhaul, zero added latency. Your users never feel the difference.
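To make the backhaul penalty concrete, here is a physics-only illustration (the 6,000 km detour is a hypothetical distance, not a measured figure). Light in fiber travels at roughly 200,000 km/s, so a mitigation event that detours traffic 6,000 km each way to a scrubbing center adds:

\[
\frac{2 \times 6{,}000\ \text{km}}{200{,}000\ \text{km/s}} = 60\ \text{ms of added round-trip latency}
\]

Inline mitigation at the local edge adds none of this, which is the entire argument in one number.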

Coverage 335+ cities vs. ~50 scrubbing centers

Cloudflare operates in roughly 7x as many locations as Prolexic. In regions where Prolexic has no scrubbing center — particularly APAC and LATAM — traffic must backhaul significant distances. Cloudflare has local mitigation everywhere.

Operations Days to onboard, not weeks

Prolexic requires a complex, expensive professional services engagement — often weeks of work and significant PS spend. Cloudflare onboards in days, with a straightforward BGP configuration. Melbourne Airport: fully live in under 36 hours.

Objection Handling
"We've been with Akamai for years — why switch?"

The capacity gap is structural, not a version upgrade. Prolexic's architecture is built on centralized scrubbing — which cannot scale to match distributed anycast mitigation. This isn't a feature comparison. It's a fundamental architectural difference.

Say This

"Prolexic's architecture is based on routing traffic to ~50 scrubbing centers. Cloudflare mitigates at 335+ locations simultaneously. The largest attack in 2025 was larger than Prolexic's entire network. That's not a Cloudflare marketing claim — that's publicly verified attack data."

"Akamai has a strong SOC and managed services team — we rely on them."

Akamai competes hard on SOC and professional services — engage Cloudflare PS early so this is never a gap. Magic Transit is fully managed, with automatic mitigation. For customers who need SOC augmentation, Cloudflare's TAM and managed service options cover it. Don't lead with price — lead with capacity and performance.

Say This

"Cloudflare's automated mitigation means your SOC team spends less time firefighting DDoS, not more. We have TAMs, professional services, and managed support options. But the key difference is: our system doesn't need a human to turn it on during an attack — it's already blocking."

"We just renewed our Prolexic contract — we're locked in."

Acknowledge the contract. Explore the dual-vendor / Network Extension motion in the meantime — Cloudflare can add upstream protection on top of Prolexic today. When the renewal window opens, you're already in the account with proven value. Coordinate with the Jet Ski team on SPIFF timing.

Say This

"We understand — timing matters. In the meantime, we can add Cloudflare as an upstream layer above Prolexic, providing capacity that Prolexic can't. When your renewal comes up, you'll have real-world data on Cloudflare's performance rather than making a decision blind."

"Migration risk — we can't afford disruption switching from Prolexic."

Cloudflare offers a side-by-side evaluation setup before any cutover. Traffic migrates via BGP in a phased approach. Prolexic stays fully active until the moment you're ready to complete the transition. Reference Melbourne Airport: zero disruption, under 36 hours.

Say This

"We run Prolexic and Cloudflare side by side during evaluation. You don't cut over until you've validated every aspect of Cloudflare's performance against your specific traffic patterns. The migration blueprint is zero-disruption by design."

Who to Engage
Head of Network Security
Lead with capacity and latency. They've felt the Prolexic backhaul pain during attacks. The 31.4 Tbps stat hits hardest with them.
CISO
Lead with residual risk — the gap between Prolexic's capacity and modern attack sizes creates board-level exposure. Especially relevant in Finance and Healthcare.
CFO
Akamai Prolexic is expensive — lead with a total-cost comparison including PS fees, renewal pricing trends, and avoided-downtime ROI.
Proof Point
World Record — 31.4 Tbps Autonomous Mitigation

In November 2025, Cloudflare autonomously detected and mitigated the largest DDoS attack ever recorded — 31.4 Tbps — in 35 seconds. Zero customer impact. Prolexic's entire rated capacity is ~22 Tbps — this attack would have exceeded it by 43%. No scrubbing center architecture can handle what Cloudflare's anycast network can absorb.

Sales Program Alert

An active SPIFF and buyout program for Akamai Prolexic customers is running in Q2 FY26. Coordinate with the Jet Ski team on all active Prolexic renewal opportunities. Jet Ski Specialists are available for all $100K+ deals.


🛡️
Selling Motion 03 — SHAPE
Network Extension
Target: Accounts seeking dual-vendor DDoS protection, compliance-driven resilience (DORA, HIPAA, NIS2, NERC CIP), or a defense-in-depth strategy that doesn't require replacing their current solution. This is the land motion — lead here when there's resistance to full replacement.
Opening Line

"Smart homeowners use multiple locks — a deadbolt, a security chain, a monitored alarm. If one fails, the others hold. Your network protection works the same way. A single vendor — no matter how good — is a single point of failure. Cloudflare as your second layer doesn't replace what already works. It ensures that when your primary solution is overwhelmed, you don't go down."

$1M+
Average cost per downtime event — one avoided attack pays for years of dual-vendor coverage
DORA / NIS2
EU regulations increasingly requiring documented dual-vendor network resilience
<1hr
Time for Visma to restore all services after Magic Transit was activated during an active attack
Discovery Questions
Q1 Are you subject to any regulatory frameworks that require network resilience — DORA, NIS2, HIPAA, NERC CIP, FFIEC? Compliance is a forcing function. DORA and NIS2 increasingly require documented dual-vendor resilience. This is non-negotiable for regulated industries.
Q2 Is your current DDoS protection single-vendor? What is your failover plan if that vendor experiences an outage or is overwhelmed? Forces them to confront the single point of failure. Most organizations have no answer to this question.
Q3 Has your security team discussed a multi-vendor or defense-in-depth approach to network protection? Tests for internal appetite. If the CISO or Head of Network Security has already raised this, you have a champion.
Q4 If your primary protection vendor went down during a major attack, how long before your team detected it — and how long before traffic was rerouted? Surfaces manual activation gap. If the answer is "minutes" or "we'd have to call them," that's your opening.
Q5 Are there situations where your current solution requires manual intervention — and how does your team handle that at 2am on a Sunday? Operational reality check. Staffing constraints make manual activation a real vulnerability.
Q6 Are you growing through M&A or expanding into new regions where your current vendor doesn't have strong coverage? M&A creates network integration challenges. New regions may have coverage gaps in the existing provider's footprint.
🎯 "What happens to your network — and your business — if your current protection vendor experiences an outage or is overwhelmed by an attack that exceeds their capacity?" The highest-impact discovery question for this motion. The answer determines urgency.
🎯 "Does your compliance team or auditors have visibility into your network resilience posture — and have they asked about dual-vendor documentation?" Connects the compliance obligation to a concrete internal ask. Great for regulated industries.
Signals to Listen For
"We're concerned about a single point of failure in our protection chain."
"Our compliance framework (DORA, HIPAA, NIS2) requires dual-vendor network resilience."
"We want a second layer of protection without replacing our current solution."
"We need automatic failover — we can't rely on manual intervention during an attack."
"We're growing through M&A and need to consolidate network protection."
"Our auditors have been asking about our resilience documentation."
"We want to reduce our dependency on a single vendor."
"We had an incident where our provider was slow to respond."
Key Messages

Resilience No single point of failure

If your primary DDoS vendor is overwhelmed or experiences an outage, Cloudflare automatically takes over — in an Active/Passive or Active/Active configuration. No manual activation. No human required. Your network stays up regardless of what happens to your primary vendor.

Compliance DORA, NIS2, HIPAA, NERC CIP

Regulators are increasingly requiring documented dual-vendor network resilience. Cloudflare provides the compliance documentation, architecture diagrams, and SLA evidence your auditors need. Stop treating this as a security decision — it's a regulatory obligation.

Non-Disruptive Sits in front — nothing gets replaced

Cloudflare deploys in front of your existing stack. Your current solution stays fully in place. Active/Passive mode uses Cloudflare as the primary absorbing layer. Active/Active mode load-balances across both. No rip-and-replace. No disruption. No political risk of removing an incumbent.

ROI One avoided attack pays for years

Average cost of a major DDoS outage: $1M+. Revenue loss per hour: $100K–$540K. Cloudflare's dual-vendor layer costs a fraction of that. One avoided outage pays for years of protection. This is not a cost-center decision — it's risk-management math.
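A one-line version of that math, using the per-hour figures above (the outage duration is an illustrative assumption):

\[
\$540\text{K/hr} \times 2\ \text{hours} \approx \$1.08\text{M} \;\gg\; \text{annual dual-vendor subscription cost}
\]

Even at the bottom of the range, a day-long incident at $100K/hr clears $2.4M.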

Objection Handling
"We already have DDoS protection — we don't need another vendor."

Every solution has a capacity ceiling and a failure mode. The question isn't whether your current solution is good — it's whether it's sufficient for the worst-case scenario. One vendor, no matter the quality, is a single point of failure for your most critical infrastructure.

Say This

"Smart organizations don't rely on a single firewall, a single data center, or a single ISP. Network protection is no different. Cloudflare isn't here to replace your existing solution — it's here to be the layer that catches what falls through."

"What if Cloudflare goes down?"

Cloudflare's anycast architecture means no single data center failure impacts protection — traffic automatically routes to the next nearest of 335+ locations. In Active/Active mode with your incumbent, if Cloudflare experiences any issue, your existing solution carries full load. Neither vendor is a single point of failure.

Say This

"In an Active/Active architecture, if Cloudflare has any disruption, your existing solution immediately carries full load — and vice versa. You've eliminated the single point of failure in both directions. That's the definition of resilience."

"This adds complexity — we're trying to simplify our vendor stack."

Acknowledge the simplification goal. Counter with the operational reality of the alternative: a single vendor failure during a major attack is far more complex to manage than a pre-configured failover. Cloudflare reduces operational complexity during an attack — which is the moment that matters most.

Say This

"Adding one vendor now is simpler than the operational chaos of explaining to your board why your single-vendor protection failed during your biggest attack of the year. Complexity is the wrong frame — resilience is the right one."

"We can't justify the budget for a second vendor."

Reframe around risk cost, not product cost. Average downtime event: $1M+. Revenue loss per hour: $100K–$540K. One avoided major attack pays for multiple years of dual-vendor coverage. For regulated industries, add regulatory fine exposure. The math is not close.

Say This

"The average cost of a major DDoS outage is over $1M. One event. Cloudflare's dual-vendor coverage is a fraction of that. You're not budgeting for a second vendor — you're budgeting for the insurance that means that $1M event never happens."

Who to Engage
CISO
Primary driver for compliance-framed resilience. DORA, NIS2, NERC CIP — they own the regulatory obligation. Lead with risk exposure and compliance documentation.
Head of Network Security
Owns the "nothing bad happening" mandate. Single point of failure resonates personally — their job is on the line if primary protection fails during a major attack.
CFO
ROI conversation: $1M+ per incident vs. dual-vendor subscription cost. One avoided outage justifies years of coverage. Frame as risk insurance, not product spend.
Proof Points
Melbourne Airport — Defense-in-Depth

Melbourne Airport added Cloudflare as a second protection layer in under 36 hours without disrupting existing services. Defense-in-depth strategy fully implemented overnight — no disruption, no rip-and-replace, full resilience from day one.

Visma — Technology

During an active attack, Visma activated Magic Transit and restored all services in under 1 hour. Without a second layer, recovery would have depended entirely on the primary vendor's response time — and the attack was already underway.


ADVANCE — Universal Differentiators
Five Reasons Cloudflare Wins
Apply these across all three selling motions. When a prospect pushes back on price or asks "why Cloudflare?", anchor on these five points. They are the ADVANCE layer of the TSA framework.
Fastest

No performance tradeoff

Inline mitigation at every Cloudflare data center — zero backhaul, zero added latency. Hardware forces you to choose between speed and protection. Prolexic's scrubbing centers add latency during every mitigation event. "Other solutions require traffic to detour to a scrubbing center. Cloudflare mitigates at the edge — your users never feel the difference."

Most Capacity

The only solution for modern attacks

477 Tbps across 335+ cities. No hardware vendor comes close (25–100 Gbps max). No scrubbing center comes close (Prolexic ~22 Tbps). The largest attack ever recorded was 31.4 Tbps — Cloudflare stopped it in 35 seconds. "The largest DDoS attack in history was 31.4 Tbps. Prolexic's entire capacity is 22 Tbps. Your hardware maxes at 100 Gbps. We stopped that attack in 35 seconds. That's the math."

Always-On

No downtime, no manual activation

Protection is active 24/7 from day one. Detection and mitigation in <3 seconds. Competitors default to "on-demand" because always-on adds latency in their architectures. Not ours. "On-demand means you're exposed in the window between attack start and activation. We're already blocking when the attack begins — automatically."

Auto-Patching

Protection before you know you need it

Cloudflare's threat intelligence from 20% of all Internet traffic means we identify and patch new attack vectors before they're publicly known. Hardware firmware updates take months. Cloudflare patches automatically — often before zero-days are disclosed. "Attack vectors evolve weekly. Hardware patches take months. We update our defenses from the signal of 20% of all Internet traffic."

Less OpEx

Fewer resources, less overhead

No hardware to rack, configure, maintain, patch, or replace every 3–5 years. No large SOC team required for manual mitigation. Customers report a 50% reduction in network maintenance costs. "One customer reduced network maintenance costs by 50%. Another recovered in under an hour during an active attack. The operational savings alone often justify the investment."