Transcript
NETWORK FABRIC IMPLICATIONS FOR DATA CENTER DESIGN Debbie Montano –
[email protected] Nov 2, 2010
SESSION ABSTRACT
What defines a network fabric?
Relationship of switch fabric to network fabric.
Network fabric versus a tree-shaped network design for modern data center architectures.
Data center traffic flow changes: with the advent of modern applications, service-oriented architectures, web browsers and cloud computing, traffic flows have changed.
Data center design: different topologies to support new traffic patterns and larger data center infrastructure.
Copyright © 2010 Juniper Networks, Inc.
www.juniper.net
FRAMING THE DISCUSSION
What issues are you having in your data centers?
What problems do you feel could be (or should be) addressed by a new or improved design for your data center network?
What is a network fabric? Why are you interested in it?
TOPICS
EVOLUTION
Switch fabric: inside a single switch/router
Extending switching beyond a single chassis
Virtual Chassis for switches
The network fabric
SWITCHING FUNDAMENTALS
CROSSBAR SWITCH FOR TELEPHONE SWITCHING (ALSO CALLED CROSS-POINT SWITCH OR MATRIX SWITCH)
100-point, 6-wire crossbar switch. Manufactured by Western Electric, April 1970.
CROSSBAR SWITCH - HISTORICAL
A crossbar switch is an assembly of individual switches between multiple inputs and multiple outputs, arranged in a matrix. If the crossbar has M inputs and N outputs, the matrix has M x N cross-points, or places where the "bars" cross. At each crosspoint is a switch; when closed, it connects one of the M inputs to one of the N outputs. A given crossbar is a single-layer, non-blocking switch.
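As a concrete illustration of the M x N crosspoint matrix described above, here is a minimal Python model of a crossbar. This is purely a sketch; the class and method names are my own, not from the presentation:

```python
class Crossbar:
    """Minimal model of an M x N crossbar: one on/off switch per crosspoint."""

    def __init__(self, m_inputs, n_outputs):
        self.m, self.n = m_inputs, n_outputs
        # M x N matrix of crosspoints; True = switch closed
        self.closed = [[False] * n_outputs for _ in range(m_inputs)]

    def connect(self, i, j):
        """Close the crosspoint joining input i to output j.

        Non-blocking: this succeeds whenever input i and output j are
        themselves free, regardless of any other connections in place.
        """
        if any(self.closed[i]) or any(row[j] for row in self.closed):
            raise ValueError("input or output already in use")
        self.closed[i][j] = True

    def crosspoints(self):
        return self.m * self.n


xb = Crossbar(16, 16)      # same shape as the 16 x 16 SF crossbar later in the deck
xb.connect(0, 5)
print(xb.crosspoints())    # 256 crosspoints for a 16 x 16 crossbar
```

Note how the cost grows as M x N: doubling both port counts quadruples the crosspoints, which is exactly the scaling problem Clos networks address in the next slides.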
CLOS NETWORK
Charles Clos published "A study of non-blocking switching networks" in the Bell System Technical Journal, March 1953.
A type of multistage switching network.
Clos networks are used when circuit-switching needs exceed the capacity of the largest feasible single crossbar switch.
Fewer crosspoints are required than if the switch were implemented as a single crossbar.
3 stages: ingress stage, middle stage, and egress stage.
Defined by three integers n, m, and r:
n = the number of sources feeding into each ingress-stage crossbar switch, and the number of outputs from each egress-stage switch
r = the number of ingress switches and the number of egress switches
m = the number of outputs from each ingress switch, the number of middle-stage switches, and the number of inputs to each egress switch
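The crosspoint savings mentioned above can be checked numerically. The sketch below (helper names are my own) counts crosspoints for a 3-stage Clos network defined by (n, m, r) and compares the total with the equivalent single crossbar of N = n*r ports:

```python
def clos_crosspoints(n, m, r):
    """Total crosspoints in a 3-stage Clos network defined by (n, m, r)."""
    ingress = r * (n * m)   # r ingress switches, each n x m
    middle = m * (r * r)    # m middle switches, each r x r
    egress = r * (m * n)    # r egress switches, each m x n
    return ingress + middle + egress


def single_crossbar_crosspoints(n, r):
    """The equivalent single crossbar has N = n*r inputs and N outputs."""
    return (n * r) ** 2


# Example: strict-sense non-blocking middle stage, m = 2n - 1
n, r = 8, 8
m = 2 * n - 1
print(clos_crosspoints(n, m, r))          # 2880
print(single_crossbar_crosspoints(n, r))  # 4096
```

Even at this small size the Clos design needs fewer crosspoints (2880 vs 4096), and the gap widens rapidly as the port count N grows.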
CLOS – THREE STAGE NETWORK
CLOS – THREE STAGE NETWORK
A single ingress switch has n inputs and m outputs, and can connect any of its n inputs to any of its m outputs.
CLOS – THREE STAGE NETWORK
Each middle switch connects to all r ingress switches and all r egress switches, with one connection to each.
CLOS NETWORK VARIATIONS
Strict-sense non-blocking Clos networks (m ≥ 2n - 1): a connection can always be made without having to rearrange existing calls.
Rearrangeably non-blocking Clos networks (m ≥ n): existing calls may have to be reassigned to a different middle-stage switch.
Clos networks with more stages: replace the center-stage crossbar switches with 3-stage Clos networks, giving 5 stages; repeat for even more stages.
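The two non-blocking conditions above fit in a few lines of code. This is an illustrative classifier (function name is mine), not anything from the slides:

```python
def classify_clos(n, m):
    """Classify a 3-stage Clos network by its middle-stage count m.

    n = inputs per ingress switch, m = number of middle-stage switches.
    """
    if m >= 2 * n - 1:
        # Enough spare middle switches to route around any worst case
        # without touching established calls.
        return "strict-sense non-blocking"
    if m >= n:
        # A path always exists, but existing calls may need to be
        # rerouted through a different middle switch to find it.
        return "rearrangeably non-blocking"
    return "blocking"


print(classify_clos(n=8, m=15))  # strict-sense non-blocking (15 >= 2*8 - 1)
print(classify_clos(n=8, m=8))   # rearrangeably non-blocking
print(classify_clos(n=8, m=7))   # blocking
```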
SWITCH FABRIC & SWITCH/ROUTER ARCHITECTURE
FABRIC…
SWITCH/ROUTER ARCHITECTURE SUPPORTS NETWORK SCALE
Forwarding plane: move the bits. Efficient, scalable & reliable intelligent transport.
Control plane: direct the bits. Scalable, reliable and flexible control plane.
Services plane: give value to the bits. Open, service-enabling across the control and forwarding planes.
ROUTER ARCHITECTURE (T-SERIES)
[Diagram: redundant Routing Engines above switching planes, with FPCs hosting PICs]
Separate control and forwarding planes.
Control plane implemented on Routing Engines (REs).
Data plane implemented on Flexible PIC Concentrators (FPCs) and Physical Interface Cards (PICs).
Crossbar switch fabric.
Distributed Packet Forwarding Engines on each FPC.
CONTROL PLANE: ROUTING ENGINE (RE) OVERVIEW
The Routing Engine handles all routing protocol processes, plus the software processes that control the router's interfaces, chassis components, system management, and user access to the router.
The RE interacts with the Packet Forwarding Engine (PFE).
The RE creates one or more routing tables, derives the forwarding table (the active routes), and forwards it to the PFE.
The PFE's forwarding table is updated without interrupting forwarding performance.
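As an illustrative sketch of what "deriving the forwarding table" means, the toy code below keeps only the most-preferred route per prefix and hands that reduced table to the forwarding plane. The preference values loosely follow Junos defaults (lower is preferred), but the data layout, addresses, and function names are my own assumptions:

```python
# Routing table (RIB): every candidate route per prefix, from all protocols.
routing_table = {
    "10.0.0.0/8": [
        {"next_hop": "192.0.2.1", "preference": 170},  # e.g. a BGP route
        {"next_hop": "192.0.2.2", "preference": 10},   # e.g. OSPF internal
    ],
    "172.16.0.0/12": [
        {"next_hop": "192.0.2.3", "preference": 5},    # e.g. a static route
    ],
}


def derive_forwarding_table(rib):
    """Keep only the active route per prefix: lowest preference wins."""
    return {
        prefix: min(routes, key=lambda r: r["preference"])["next_hop"]
        for prefix, routes in rib.items()
    }


fib = derive_forwarding_table(routing_table)
print(fib["10.0.0.0/8"])  # 192.0.2.2 (the more-preferred OSPF route wins)
```

The forwarding plane only ever sees this reduced table, which is why the RE can recompute routes without interrupting the PFE's packet forwarding.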
BUILDING BLOCK—THE SF CROSSBAR SWITCH
16 x 16 crossbar: any input to any output, in a non-blocking design.
Cornerstone technology of the T1600, MX, and TX Matrix Plus.
SWITCH FABRIC – 5 PLANES
Non-blocking connectivity between Packet Forwarding Engines (PFEs).
N+1 redundancy: 4 operationally independent parallel switch planes plus 1 identical hot-spare switch plane.
WHAT IS A TX MATRIX PLUS ROUTING NODE?
A matrix built with T1600 technology: 25 Tbps, 16 x T1600 routing nodes, and 128 FPC slots with 100G/slot capacity.
TX MATRIX PLUS
CLOS TOPOLOGY – 3 STAGE CLOS FABRIC
T SERIES MULTI-CHASSIS SWITCH FABRIC PLANES
4 operationally independent, identical switch planes (plus 1 backup).
Each plane contains a 3-stage Clos fabric, implemented using Juniper Switch Fabric ASICs.
Packet Forwarding Engines (PFEs) distribute cells equally across the 4 switch planes on a cell-by-cell basis, not packet-by-packet.
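The cell-by-cell distribution above can be sketched as a simple round-robin spray. This is an illustrative model of the idea, not Juniper's actual implementation:

```python
from itertools import cycle


def spray_cells(packet_cells, num_planes=4):
    """Distribute a packet's cells across switch planes cell-by-cell
    (round-robin), rather than pinning a whole packet to one plane."""
    planes = [[] for _ in range(num_planes)]
    plane_ids = cycle(range(num_planes))
    for cell in packet_cells:
        planes[next(plane_ids)].append(cell)
    return planes


# A 10-cell packet lands almost evenly across 4 planes: sizes 3, 3, 2, 2
cells = [f"cell{i}" for i in range(10)]
sizes = [len(p) for p in spray_cells(cells)]
print(sizes)  # [3, 3, 2, 2]
```

Spraying per cell rather than per packet keeps all planes evenly loaded even when packet sizes vary, which is why the planes can be treated as one aggregate fabric.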
TX MATRIX PLUS: SWITCHING ARCHITECTURE
[Diagram: TX Matrix Plus central routing control with main and standby RE/CBs and F2 SIBs, connected by an optical data plane to T1600 line-card chassis (LCC 00-15) with F13 SIBs, across switch planes 0-4]
TX Matrix Plus: 23″ enclosure; 3 chassis, 2 planes/chassis; central routing entity; 3-stage Clos switch fabric; single management interface.
T1600 routing nodes: fiber interconnect; REs provide local chassis management; distributed packet forwarding.
TX MATRIX PLUS – SWITCH PLANE CONNECTIONS
ENABLING TECHNOLOGY COMPONENTS
[Diagram: JCS1200 and TX Matrix Plus / T1600 partitioned into PSD1-PSD4, serving Mobile Packet Core, Core Private 1, Core Private 2, and Public networks]
Shared chassis, blades, interfaces, power, links.
Physically secure, individually managed networks.
Shared "pool" of slots; each slot is a "router"; up to 25 Tbps of capacity.
Transcend "physical" router boundaries.
Shared uplinks, integrated optics, 100GE, high-density 10GE, services.
ROUTER CONNECTIVITY WITH JCS1200 PLATFORM
[Diagram: JCS platform hosting master and backup Routing Engines (RE m / RE b) on an internal LAN, connected through external LANs and switches to the router's control boards and local Routing Engines; the router chassis holds SIBs, FPCs, and PICs forming the backup plane and planes A-D on the Fabric ASIC]
Switch fabric for data.
SIMPLIFYING THE DATA CENTER
DATA CENTER OBJECTIVES SUMMARY
Do more with less, and do it now.
Provide more services to end customers, and provide them now, securely.
Charge customers for only what they use, when they use it.
Offer more reliable service at lower cost to end customers.
Make the data center more profitable / economical.
GARTNER'S DEFINITION OF CLOUD AND THE CRITICAL ATTRIBUTES OF CLOUD SERVICES
Gartner defines cloud computing as "a style of computing where scalable and elastic IT-enabled capabilities are provided 'as a service' to external customers using Internet Technologies."
5 attributes that support outcomes:
1. Service Based: consumer concerns are abstracted from provider concerns through service interfaces.
2. Scalable and Elastic: services scale on demand to add or remove resources as needed.
3. Shared: services share a generalized pool of resources to build economies of scale.
4. Metered by Use: services are tracked with usage metrics to enable multiple payment models.
5. Internet Technologies: services are delivered through use of Internet identifiers, formats, and protocols.
HISTORY LESSON
In the 1980s (and prior), campus computing was centralized in a "computing center" with mainframes and minis. The computing center was relatively efficient in energy, space, and compute resources (time-sharing a common device), but "they" controlled it and everyone had to play by their rules.
The 1990s brought the PC revolution, and resources were devolved to the end user, with PCs eventually becoming servers for workgroups or individuals. The PC provided freedom and control to end users, but at the cost of efficiency: multiple servers live in all sorts of locations, consuming resources even when idle.
CURRENT STATE OF PLAY
Servers are now everywhere, often performing a single function or no function at all. As security has become a greater concern, firewalls and other security devices have been scattered everywhere to protect these servers, with inconsistent security policies, managed by "no one."
The result: little scale and high complexity and resource consumption (real estate, power, cooling, management).
RETURN OF THE COMPUTING CENTER
Now called the Data Center:
A place to consolidate compute and storage resources.
Non-stop operation of a critical resource.
Applying green concepts to reduce the resource impact.
Using virtualization to provide local control of slices of resources to the users.
BIGGEST IMPACT TO THE NETWORK
Due to: virtual machine mobility, resource pooling, and cloud computing.
POOLING RESOURCES – SERVICE PRODUCTION
Larger pools are more efficient and elastic; the network interconnects the pool.
Problem: today's data center network limits scalability.
SERVER VIRTUALIZATION AND WORKLOAD MOBILITY
What is it? One of the most important technical evolutions occurring in the computing and data center environment today.
Virtualization partitions individual servers into multiple virtual machines (VMs), each running its own local OS. It opens up agility and service-acceleration opportunities, and creates new networking, security, and management challenges.
VMs can be Intel x86-based (Dell, IBM, HP) or based on other server platforms (IBM p-series, Oracle/Sun).
The basis of virtualization is the hypervisor, a layer of code on the physical server on which VMs run. Hypervisor examples: VMware ESX, Microsoft Hyper-V, Linux KVM, Citrix Xen.
Virtualization creates a new tier of the DC network inside the virtualized servers (the vSwitch, a new access tier owned by the server admin), extending the network "edge" one more hop, into the virtual domain.
DATA CENTER NETWORK COMPLEXITY GROWS WITH SERVER GROWTH
Source: Yankee Group 2010
EVOLUTION OF THE DATA CENTER NETWORK – LOCAL AREA
20 years ago, the Ethernet switch was introduced to solve the LAN problem, and the Ethernet switch became the basic building block of the network.
EVOLUTION OF THE DATA CENTER NETWORK – FROM LOCAL AREA TO DATA CENTER
And so we wired the data center the same way.
EVOLUTION OF THE DATA CENTER NETWORK – DATA CENTER
Redundancy introduces loops; Spanning Tree Protocol blocks the loops.
EVOLUTION OF THE DATA CENTER NETWORK – DATA CENTER
Client and SOA server applications: up to 75% of traffic.
EVOLUTION OF THE DATA CENTER NETWORK – DATA CENTER
Today's architectural challenges:
1. The requirement to run Spanning Tree
2. The requirement for east/west traffic (up to 75% of traffic) to also go north & south
3. Multiple networks
4. Inability to scale without complexity
EVOLUTION OF THE DATA CENTER NETWORK – DATA CENTER
Factor in cloud computing, with up to 75% of traffic flowing east-west, and there is a need to RETHINK the data center network.
3 STEPS TO A CLOUD READY DATA CENTER
SIMPLIFY – JUNIPER'S VISION: LEGACY NETWORK
[Diagram: legacy multi-tier network with SSL VPN, firewall, IPSec VPN, and IPS appliances; L2/L3 switch tiers over an L2 switch access layer; servers and FC SAN storage]
THE MULTI-TIER LEGACY NETWORK IS A BARRIER
The challenge: the multi-tier legacy network is too slow, too complex, too expensive.
Up to 50% of the ports interconnect switches, not servers or storage.
Up to 75% of traffic flows east-west.
Spanning Tree disables up to 50% of bandwidth.
Unnecessary layers add hops and latency.
Complexity limits scale.
JUNIPER APPROACH: SIMPLIFY
[Diagram: legacy network with SSL VPN, firewall, IPSec VPN, and IPS appliances, L2/L3 switch tiers, servers, and FC SAN storage]
1. Consolidate & virtualize security appliances
2. Virtualize the access layer
3. Collapse core & aggregation layers
4. Connect data centers
5. Simplify management
JUNIPER APPROACH: SIMPLIFY
1. Consolidate & virtualize security appliances
SRX5800, a dynamic services gateway:
Offers high capacity; consolidates multiple security devices (SSL VPN, firewall, IPSec VPN, IPS) into one.
Simpler: one hop, lower latency; rapid provisioning; lower power, cooling, and space.
JUNIPER APPROACH: SIMPLIFY
2. Virtualize the access layer
EX4200 with Virtual Chassis:
Reduces latency across racks; lowest-in-class latency of 1.96 microseconds.
Reduces uplinks; 1/10th the switches to manage.
EoR chassis features at ToR economics; deployment flexibility, EoR or ToR.
JUNIPER APPROACH: SIMPLIFY
3. Collapse core & aggregation layers
EX8200 (EX8216) with industry-leading line-rate density:
Eliminates the aggregation layer.
Simplifies the architecture; reduces space, power, and cooling requirements.
JUNIPER APPROACH: SIMPLIFY
4. Connect data centers
MX & M Series: powerful, reliable routers for the edge, with powerful multicast, for inter-data center mobility.
MX in the core: MPLS and VPLS extend VLANs, enabling mobility.
JUNIPER APPROACH: SIMPLIFY
5. Simplify management
NSM, STRM, AIM:
Single pane of glass; centralized automation; open API for 3rd-party integration; support of the full network lifecycle.
SIMPLIFY—JUNIPER'S VISION: DATA CENTER FABRIC
[Diagram: MX Series at the edge, SRX5800 security services, EX8216 data center fabric, servers and FC SAN storage]
SIMPLIFY—JUNIPER'S VISION: DATA CENTER FABRIC
[Diagram: MX Series at the edge, virtualized security & application services (SRX5800), the Stratus Project fabric, servers and storage]
VIRTUAL CHASSIS
JUNIPER VIRTUAL CHASSIS TECHNOLOGY
Enables users to interconnect multiple individual switches into a single logical device.
Greatly simplifies data center and campus core network configuration, management, and troubleshooting.
Minimizes or eliminates the need for Spanning Tree Protocol (STP) while increasing network performance by enabling full utilization of all network uplinks.
VIRTUAL CHASSIS TECHNOLOGY COMPARISON WITH STACKABLES
Virtual Chassis provides chassis-like capabilities not found in a typical stackable switch:
Chassis extension via 10 Gigabit Ethernet
Modular uplinks
Dedicated master & standby Routing Engines
Graceful Routing Engine Switchover (GRES)
Redundant and hot-swappable power supply units (PSUs)
Field-serviceable fan trays and redundant fans
Licensing per Routing Engine (RE), not per switch
Uses chassis modular configuration & numbering
VIRTUAL CHASSIS TECHNOLOGY COMPARISON WITH STACKABLES
Superior backplane capacity: Virtual Chassis 128 Gbps versus 10-80 Gbps for a typical stackable.
Configuration flexibility: chassis extension via 10GbE; modular uplinks.
Chassis-like HA: dedicated master & standby Routing Engines; Graceful Routing Engine Switchover (GRES); non-stop routing (NSR)/ISSU (roadmap); redundant & hot-swappable internal PSUs; field-serviceable fan tray with redundant fans.
Operational simplicity: licensing per RE, not per switch; uses chassis module configuration & numbering / LCD.
Cost: comparable ($$$) for both.
EX4200 VIRTUAL CHASSIS CABLING
Option 1: Daisy-chain ring (wiring closets). The longest Virtual Chassis cable spans the entire Virtual Chassis; max height or width is 5 meters.
Option 2: Braided ring (data center top-of-rack, wiring closets). The longest Virtual Chassis cable spans just three switches; max height or width is 25 meters.
EX4200 VIRTUAL CHASSIS CABLING
Option 3: Extended Virtual Chassis, across wiring closets, data center racks, or rows.
Dedicated Virtual Chassis ports within each location; GbE or 10GbE Virtual Chassis extension links between locations.
Extend the height and/or width of the Virtual Chassis via GbE or 10GbE uplinks, up to the distance of the optics (40 km), with a maximum ring circumference of 100 km.
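The distance limits above reduce to a simple pair of checks. This helper is purely illustrative (the function name and structure are my own):

```python
def extended_vc_within_limits(link_lengths_km, max_link_km=40, max_ring_km=100):
    """Validate an extended Virtual Chassis ring against the stated limits:
    each GbE/10GbE extension link is bounded by the optics' reach (40 km),
    and the total ring circumference may not exceed 100 km."""
    return (all(length <= max_link_km for length in link_lengths_km)
            and sum(link_lengths_km) <= max_ring_km)


print(extended_vc_within_limits([40, 40, 20]))  # True: circumference exactly 100 km
print(extended_vc_within_limits([50, 10]))      # False: one link exceeds the 40 km reach
```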
EX4200: CHASSIS-CLASS MAINTENANCE
[Diagram: Virtual Chassis with master RE, backup RE, and line-card member switches]
1) Issue recycle command
2) Attach new switch
3) RE downloads software & config
VIRTUAL CHASSIS TECHNOLOGY COST BENEFITS
Configuration: campus or data center switch aggregation with full device redundancy, 48 GbE SFP ports, four 10GbE XFP uplinks.

                       Traditional Chassis   EX4200 with Virtual Chassis   Savings with Virtual Chassis
Space Requirements     15 Rack Units         2 RU                          86%
Power Requirements     1060 W                216 W                         80%
Cooling Requirements   4480 BTU/hr           743 BTU/hr                    83%
Deployment Cost        $126,500              $37,000                       71%
Sparing Cost           $66,000               $18,500                       72%
IMPROVING PERFORMANCE AND EFFICIENCY
[Diagram: traffic between Server 1 (Rack 1) and Server 2 (Rack 2) through the aggregation/core switches takes up to 160 µs*; the direct path between EX4200 access switches takes ~2.6-72 µs*. Each server runs a VMware hypervisor hosting several VMs, with unused capacity on each server.]
*Depending on network design, equipment and traffic
EX8216 CHASSIS OVERVIEW
Passive mid-plane; current switch fabric capacity of 6.2 Tbps, with future scalability to 12.4 Tbps.
21 RU height, 25″ depth; up to two chassis per standard rack; switch fabrics located in the back.
Targeted at data center, cloud computing, and campus core deployments.
LCD panel. Three shipping options, each with eight SFs and two fan trays:
Base configuration: (1) RE and (2) 3000 W AC power supplies
Redundant configurations: (2) REs and (6) 3000 W AC supplies, or (2) REs and (6) 2000 W AC supplies
EX8216 SWITCH FABRIC
Proven Juniper switch fabric technology, used in the MX Series and T Series.
Resilient design: eight active, load-balanced switch fabric modules in the back of the chassis; 10GbE line-rate performance is maintained with a single SF failure; hot swappable.
Credit-based fabric with 8,192 WRED virtual output queues per system.
Single-tier, low-latency crossbar: no head-of-line blocking; efficient multicast replication.
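Virtual output queueing is how a fabric avoids head-of-line blocking: one queue per (input, output) pair, so a congested output cannot stall traffic bound elsewhere. The following is a toy model of that idea, not the EX8216 implementation:

```python
from collections import defaultdict, deque


class VirtualOutputQueues:
    """Toy model of virtual output queueing: one queue per (input, output)
    pair, so cells stuck behind a congested output never block cells from
    the same input that are destined to other, uncongested outputs."""

    def __init__(self):
        self.queues = defaultdict(deque)  # (in_port, out_port) -> cells

    def enqueue(self, in_port, out_port, cell):
        self.queues[(in_port, out_port)].append(cell)

    def dequeue_for(self, out_port):
        """Serve any non-empty queue destined to out_port, or None."""
        for (i, o), q in self.queues.items():
            if o == out_port and q:
                return q.popleft()
        return None


voq = VirtualOutputQueues()
voq.enqueue(0, 1, "A")   # input 0 -> output 1
voq.enqueue(0, 2, "B")   # input 0 -> output 2
# Even if output 1 is congested and never served, output 2 still drains:
print(voq.dequeue_for(2))  # B
```

With a single FIFO per input instead, cell "B" would have had to wait behind "A", which is exactly the head-of-line blocking the crossbar design avoids.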
EX8216: HIGHLY SCALABLE SWITCHING FABRIC
[Diagram: eight switch fabric (SF) modules connected through the mid-plane to line cards 0-15, with 320 Gbps to each line card]
128 x 10GbE ports; 1,920 Mpps throughput; wire-rate multicast replication.
EX8200 LINE CARD ARCHITECTURE (8-PORT 10GbE)
[Diagram: (8) 10GbE ports feeding four PFE2 packet forwarding engines, each with a packet processor and switch fabric interface, plus a line card CPU; 320 Gbps to the switch fabric]
Line-card features: congestion management (512 Mb buffer/port, 100 ms of buffering), traffic scheduler, multicast replication, L2 and L3 (IPv4 & v6), access control lists, QoS marking, rate limiting, port mirroring, GRE tunneling, MPLS (2-label).
EX8200 VIRTUAL CHASSIS
EX8200 EXTERNAL ROUTING ENGINE (XRE200)
Enables EX8200 Virtual Chassis technology; extends Virtual Chassis to the core; the most available single-control-plane implementation.
Simplifies management and reduces complexity: reduces the number of managed logical core devices and eliminates the need for Spanning Tree.
Flexible connectivity to the EX8200 RE: 10/100/1000BASE-T or 1000BASE-X SFP (up to 40 km).
Control plane offload: the XRE200 runs routing, multicast, and LAG protocols, while the EX8200 RE provides chassis management, monitoring, and bring-up functions.
PUTTING IT ALL TOGETHER IN THE DATA CENTER: VIRTUAL CHASSIS AND VIRTUALIZE SECURITY
NETWORK FABRIC
DEFINING A NETWORK FABRIC
Any port in the network is directly connected to any other port, significantly reducing transmission latency.
Packet processing is done once to reach the final destination.
The fabric has a single, shared state table for all ports.
The network is flat; all ports in the device are connected in a single tier.
All ports in the fabric can be managed from a single point.
DEFINING THE IDEAL DATA CENTER NETWORK
Typical tree configuration versus flat, any-to-any connectivity.
DEFINING THE IDEAL DATA CENTER NETWORK
Flat, any-to-any connectivity from a single device (N=1): the switch fabric.
Switch fabric data plane: flat, any-to-any. Control plane: single device, shared state.
This delivers the performance and simplicity of a single switch; however, a single switch does not scale and is a single point of failure.
THE IDEAL NETWORK – A FABRIC
From Virtual Chassis to Stratus: a network fabric has the simplicity of a single switch and the scalability of a network.
HOW IS A TRUE FABRIC DIFFERENT FROM SOLUTIONS FROM OTHER VENDORS?
True fabric: building a fabric to behave like a single switch, for L2 & L3.
Data plane flat and any-to-any: Yes. Control plane a single device with shared state: Yes.
Other vendors (using TRILL): making multiple switches try to behave like a fabric, L2 only.
Data plane flat and any-to-any: No. Control plane a single device with shared state: No.
BENEFITS OF A NETWORK FABRIC
Technical benefits:
Better network performance: fewer packet processing steps, plus the elimination of STP, mean all network traffic can traverse the network much faster; in large networks, this turns out to be orders of magnitude faster.
Simplified network design: today, fabric at the access and core layers allows network managers to remove the aggregation tier from the architecture, creating a flatter network. In the future, the network will look like a single network device, dramatically simplifying the design by reducing the network to a single flat tier.
A network that scales faster: since all the devices in the network contribute to the "fabric," scaling the network is as simple as installing new devices. Each new device simply adds to the fabric and becomes part of the network's virtual "single-device" architecture.
BENEFITS OF A NETWORK FABRIC
Business benefits:
Better ROI for virtualization technology.
Lower capital expenditure: since all the ports are active, fewer network devices are required, reducing the capital outlay to build or refresh the data center network.
Lower operational costs: fewer managed devices, and far fewer device interactions exposed to the customer, result in simplified management. Network managers can spend more time on strategic initiatives and less time managing the day-to-day operations of the network.
Better application performance: since no port is ever more than one hop away from any other, application traffic traverses the network much faster than with a traditional network architecture, significantly enhancing the performance of latency-sensitive applications.
More scalable networks: the biggest inhibitor to scaling networks is the complexity that comes from managing multiple devices. With fabric technology, multiple devices can be operated as a single logical device, significantly decreasing complexity and allowing networks to scale much further.
A NETWORK FABRIC
Any port in the network is directly connected to any other port.
Packet processing is done once to reach the final destination.
The fabric has a single, shared state table for all ports.
The network is flat; all ports in the device are connected in a single tier.
All ports in the fabric can be managed from a single point.