Transcript
www.tc.tuwien.ac.at

Optical Technologies for Datacenters and Cloud Computing
Dr. Slavisa Aleksic, Institute of Telecommunications, Vienna University of Technology
Lecture at the University of Zagreb, Faculty of Electrical Engineering and Computing (FER), 2014/10/20
Outline
1. Introduction
2. Concept of Cloud Computing
3. Carrier Clouds
4. Performance and Privacy Aspects
5. Optical Technologies within Datacenters
6. Optical Technologies for Core Networks

© Slavisa Aleksic
1. Introduction
The Internet
• Search for Information
• Media
• Internet Forums
• Entertainment
• Gaming
• E-Shopping
• E-Government
• E-Health
• E-Learning
• Smart Grids
• Telemedicine
• Telepresence
• Communication
Introduction to Cloud Computing

End-to-end connection:
• Signal transmission delay
• Data processing delay
• Data (packet) losses
• Encapsulation overhead (stratum, overlay, tunneling)
• Signaling overhead

[Figure: clients (end-user devices) connect through the Internet (access, aggregation and core networks) to a cloud data center hosting servers, virtual machines, virtual storage and virtual switches; virtualization enables VM mobility and consolidation. A cloud data center may contain several tens of thousands or even hundreds of thousands of servers!]
2. Concept of Cloud Computing
Definition of Cloud Computing

NIST definition: "Cloud computing is a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g. networks, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction."

Essential characteristics of cloud computing services:
• On-demand self-service: computing capabilities such as server time and network storage are provisioned to users automatically.
• Broad network access: capabilities can be accessed over the network via thin or thick client platforms (PCs, laptops, smartphones, tablet PCs, PDAs).
• Resource pooling: resources are pooled to serve multiple consumers using a multi-tenant model and are dynamically assigned and reassigned according to consumer demand.
• Rapid elasticity: capabilities can be rapidly and elastically provisioned and often appear to be unlimited.
• Measured service: resource usage can be monitored, controlled and reported.

NIST: National Institute of Standards and Technology, U.S. Department of Commerce
Cloud Service Models

• Software as a Service (SaaS): consumers use the provider's applications running on a cloud infrastructure. The applications are accessible from various client devices through a thin client interface.
• Platform as a Service (PaaS): consumers can deploy onto the cloud infrastructure consumer-created or acquired applications built using programming languages and tools supported by the provider.
• Infrastructure as a Service (IaaS): providers provision to customers processing, storage, networks and other fundamental computing and communication resources, on which customers are able to deploy and run arbitrary software, including operating systems and applications.
3. Carrier Clouds
Carrier Cloud

In the carrier cloud, the cloud infrastructure is owned and cloud services are provided by a communication service provider (CSP).

[Figure: cloud consumers connect over the communication service provider's network via communication paths to geo-distributed data centers; a cloud service provider operates the data centers.]

Main benefits of the carrier cloud:
• CSPs have control over the network (performance improvement)
• Geo-distributed data centers are easily realizable (small data centers close to consumers)
• Low latencies are achievable
• Carrier-grade service level agreements (SLAs) are achievable
• Application awareness across fixed and mobile networks (core and access)
• Supports private, public and hybrid cloud services
• Considers network status in conjunction with the status of cloud resources
Requirements for Carrier Clouds
• Network functions virtualization
• Software-defined networking (SDN)
• Efficient resource sharing
• Integration with the existing carrier VPN network
• Application transparency
• Facilities and skilled personnel

[Figure: points of presence of a network provider in the City of Vienna]

VPN: Virtual Private Network
Network Virtualization
• Overlay Network
• Virtual Local Area Network (VLAN)
• Virtual Private Network (VPN)
• Software-Defined Networking (SDN)

[Figure: a virtual topology of virtual links and virtual switches overlaid on the physical path from the customer edge device through the Internet to a cloud data center, where servers host virtual machines, virtual storage and virtual switches; storage is attached via SAN switches in a storage area network (SAN).]
Software-Defined Networking (SDN)

SDN is a complementary technology to network virtualization that provides:
• Unbundling of network control software from network hardware
• A standardized programming interface for application developers

Flow Service SDN
• Enables flow-level programmability and security (e.g. OpenFlow)
• Switches and routers can be programmed to support various specialized packet processing platforms for control of the flows (e.g. detecting and scrubbing denial-of-service attacks)

Virtualization SDN
• Aims to make orchestration of moves, adds and changes easy to manage for networking equipment (switches, routers, firewalls and VPN aggregators)

Infrastructure SDN
• Exposes the programmability of network resources to software applications
• Beneficial wherever resources are constrained (e.g. because of expense, geographic, or power/space limitations) or subject to dynamic changes (failures, traffic changes)
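The flow-level programmability mentioned above can be sketched as a match-action flow table. This is a deliberately simplified, hypothetical model in the spirit of OpenFlow, not the real OpenFlow protocol or any controller API; the field names, priorities and actions are illustrative:

```python
# Minimal sketch of an OpenFlow-style match-action flow table.
# Field names ("ip_dst", "tcp_dst") and action strings are illustrative.

class FlowTable:
    def __init__(self):
        self.rules = []  # list of (priority, match_dict, action)

    def add_rule(self, priority, match, action):
        self.rules.append((priority, match, action))
        # Higher-priority rules are consulted first.
        self.rules.sort(key=lambda r: -r[0])

    def lookup(self, packet):
        for _, match, action in self.rules:
            if all(packet.get(k) == v for k, v in match.items()):
                return action
        return "drop"  # table-miss behaviour chosen for this sketch

table = FlowTable()
# Redirect traffic aimed at a (hypothetical) server under DoS attack.
table.add_rule(20, {"ip_dst": "10.0.0.5", "tcp_dst": 80}, "send_to_scrubber")
# Default rule: forward everything else to port 1.
table.add_rule(1, {}, "output:1")

print(table.lookup({"ip_dst": "10.0.0.5", "tcp_dst": 80}))   # send_to_scrubber
print(table.lookup({"ip_dst": "10.0.0.7", "tcp_dst": 443}))  # output:1
```

A real SDN controller would install such rules on the switch over a southbound protocol; the point here is only that forwarding decisions become data, programmable per flow.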
4. Performance and Privacy Aspects
Performance and Privacy Aspects

As the number of servers in the cloud increases and the usage of cloud services grows, providing high performance to cloud consumers becomes challenging:
• The traditional data center network design is not scalable
• Bandwidth demand is constantly increasing
• It is difficult to offer and meet strong SLAs because of large and nondeterministic latencies and data losses
• The total energy consumption of the ICT infrastructure increases
• There are legal and regulatory issues that restrict the flexibility in implementing and optimizing cloud infrastructures
• Many customers are concerned about the security of their data
Optimizing DC Network Architecture

In a traditional data center network:
• The L2/L3 boundary is on the WAN router
• L2 VLANs are not scalable and cannot meet high tenant-scale requirements (4K VLAN limit)
• L2 forwarding is per-VLAN, so load balancing is not efficient

These limitations can be addressed by:
• Optimized L2 forwarding – e.g. E-VPN provides L2 multipoint bridging with control and data planes similar to L3 VPN, giving better scalability and load balancing
• Optimized L3 routing – moving the L3 gateway deeper into the data center reduces the L2 domain and thus achieves better ARP scale

E-VPN: Ethernet Virtual Private Network
ARP: Address Resolution Protocol
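The "4K VLAN limit" follows directly from the 12-bit VLAN ID field in the IEEE 802.1Q tag. Overlay encapsulations such as VXLAN (not on this slide, mentioned here only for comparison) widen the tenant identifier to 24 bits. A quick check of the arithmetic:

```python
# The 802.1Q VLAN tag carries a 12-bit VLAN ID, which caps the number of
# isolated L2 segments -- the "4K VLAN limit" mentioned above.
vlan_ids = 2 ** 12
print(vlan_ids)  # 4096 (IDs 0 and 4095 are reserved in practice)

# For comparison, VXLAN's 24-bit network identifier (VNI) lifts this limit:
vxlan_vnis = 2 ** 24
print(vxlan_vnis)  # 16777216
```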
Latency

Latency-sensitive applications like
• telephony,
• online gaming, and
• video conferencing
benefit from local data centers that are closer to end users.

[Figure: response time between PlanetLab nodes and Amazon's Virginia and Singapore data centers vs. geographic distance]

Similarly, services like content distribution networks (CDNs) aim to push static content towards the users at the edge of the network. Layer 1 VPNs that provide dynamic connection provisioning directly in the optical domain are able to reduce network latencies close to the minimum value.

Source: Y. Ben-David, S. Hasan, P. Pearce, "Location Matters: Limitations of Global-Scale Datacenters"
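The distance dependence of response time is dominated by propagation delay in fiber: with a group index around 1.47, light covers roughly 1 km per 5 µs. A rough sketch (the distances below are illustrative, not from the cited study):

```python
# Rough one-way propagation delay in standard single-mode fiber.
# Assumes a group index of ~1.47; fiber routes are rarely straight lines,
# so real delays are somewhat larger.
C = 299_792.458  # speed of light in vacuum [km/s]
N = 1.47         # approximate group index of silica fiber

def fiber_rtt_ms(distance_km):
    """Round-trip propagation time in milliseconds (fiber only, no processing)."""
    one_way_s = distance_km * N / C
    return 2 * one_way_s * 1e3

print(round(fiber_rtt_ms(100), 2))    # metro-scale data center: ~1 ms RTT
print(round(fiber_rtt_ms(15000), 1))  # intercontinental scale: ~147 ms RTT
```

This is why a local data center 100 km away can serve a telephony or gaming flow within a ~1 ms propagation budget, while a data center on another continent cannot, regardless of how fast its servers are.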
Security and Privacy Issues

How to transmit and store data? Where to store data?
Survey: at the top of IT decision makers' concerns.
Source: Alcatel-Lucent, "Global Cloud IT Decision Maker Study", September 2011

• Data are typically transmitted through the core network using secure VPNs:
  - IPsec VPNs,
  - SSL VPNs,
  - PPTP VPNs secured with MPPE, and
  - L2TP VPNs secured using IPsec
• Data transmitted within the data center are usually not encrypted – a possible security leak
• Data privacy, integrity and cryptographic isolation can only be guaranteed when using strong end-to-end encryption and authentication of each packet or frame
Legal Concerns and Constraints

How data can be stored: privacy laws
• US: HIPAA, FERPA, and GLBA
• EU: EUDPD

Where data can be stored: transborder data laws
• The EUDPD flatly forbids the transfer of data to jurisdictions not explicitly approved (e.g. via the US-EU Safe Harbor Framework), constraining where EU data can be stored globally.

Contradiction:
• The USA PATRIOT Act gives the US government broad powers to collect private data stored inside the USA, and outside the USA if stored by US companies.

GLBA (Gramm-Leach-Bliley Act): governs sensitive financial data
FERPA (Family Educational Rights and Privacy Act): governs student records
EUDPD (European Union's Data Protection Directive): governs sensitive private data
HIPAA (Health Insurance Portability and Accountability Act): governs patient information

Source: Y. Ben-David, S. Hasan, P. Pearce, "Location Matters: Limitations of Global-Scale Datacenters"
Future Directions
• NFV and Cross-Layer SDN
• Security and Legal Constraints
• Energy Efficiency
• Performance and Scalability
5. Optical Technologies within Datacenters
Data Center Traffic

• Data center: a large dedicated cluster of servers connected through an interconnection network.
• Data center traffic*:
  1. increases with a compound annual growth rate (CAGR) of 31%
  2. is composed by 76% of traffic between servers in the same data center
  3. has cloud computing traffic as its main contributor: 2/3 of the total

* Cisco white paper: "Cisco Global Cloud Index: Forecast and Methodology, 2011-2016," 2012.
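A 31% CAGR compounds quickly. A short check of what that rate implies over the five-year horizon of the cited 2011-2016 forecast, starting from a normalized traffic volume of 1.0:

```python
# Compound annual growth: volume after `years` years at the given CAGR.
def grow(initial, cagr, years):
    return initial * (1 + cagr) ** years

# 31% CAGR over the 2011-2016 forecast period:
print(round(grow(1.0, 0.31, 5), 2))  # ~3.86x growth in five years
```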
Inter-Datacenter Network

• Exponential growth in data center traffic poses a significant challenge for the interconnection network.
• Current data center interconnection networks use:
  • a fat-tree 3-tier architecture
  • point-to-point interconnects
  • electronic packet switching
Limits of Current Data Center Interconnects

• When considering future data center requirements, architectures based on point-to-point interconnects and electronic packet switching will lead to:
  • limited network scalability
  • high energy consumption
• Solution: optically switched interconnects
  • wavelength division multiplexing (WDM) links
  • optical switching
• Optically switched interconnect architectures for data centers have recently been proposed and can be categorized into:
  1. hybrid solutions: packet switching in the electronic domain and circuit switching in the optical domain
  2. optical circuit switching solutions
  3. optical burst/packet switching solutions
High-Performance Computing

[Figure, left: power consumption (0.1-100 MW, log scale) of top HPC systems vs. performance Rmax in PFLOP/s; the K Computer draws more than 12 MW at 10.5 PFLOP/s, with BlueGene/Q also marked. Figure, right: energy efficiency Rmax/Pcons (0.1-2.1 GFLOP/s/W) vs. performance efficiency Rmax/Rpeak (0.3-1) for the same systems. Data taken from TOP500: http://www.top500.org/]

ITRS forecast for 2020:
• Single processors with ~100 TFLOP/s
• ~230 Tbit/s of I/O per processor
• Power per processor limited to 200 W
• Requirement for interconnects: energy efficiency in the order of hundreds of fJ/bit
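The hundreds-of-fJ/bit requirement follows directly from the forecast numbers above: even if the entire 200 W processor budget went into I/O, moving 230 Tbit/s would leave well under 1 pJ per bit, and the interconnect can only claim a fraction of that budget. As a sketch:

```python
# Energy-per-bit budget implied by the ITRS 2020 forecast figures above.
power_w = 200          # total power budget per processor [W]
io_rate_bps = 230e12   # required off-chip bandwidth [bit/s]

joules_per_bit = power_w / io_rate_bps
femtojoules_per_bit = joules_per_bit * 1e15
print(round(femtojoules_per_bit))  # ~870 fJ/bit if ALL power went to I/O
```

Since compute, memory and leakage also draw from the same 200 W, the interconnect itself must land in the hundreds of fJ/bit or below, which is the figure the slide quotes.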
Interconnecting Technologies

[Figure: timeline (1992-2008) of interconnect standards, grouped into backplane/intramodule interfaces (e.g. TFI-5, optical backplane interconnects), chip-to-chip interfaces (e.g. PCI 32/33, PCI 64/66, PCI-X, PCI Express, HyperTransport, RIO, Infiniband, XAUI, Interlaken, CEI-28G) and data-path interfaces (e.g. UTOPIA L1/L2, SPI-3/4/5, SFI-4/5, CSIX, GMII/XGMII/XLGMII/CGMII, XLAUI/CAUI, XSBI), alongside storage and network interconnects (SCSI-3, SCSI Ultra, Serial SCSI, proprietary FC, 10G FC, GbE, DDR).]

PCI: Peripheral Component Interconnect; SPI: System Packet Interface; SFI: SERDES Framer Interface; XGMII: 10 Gbit Media Independent Interface; FC: Fibre Channel; RIO: Rapid I/O; CSIX: Common Switch Interface; XAUI: 10 Gigabit Attachment Unit Interface; XSBI: 10 Gigabit Serial Bus Interface; GbE: Gigabit Ethernet; DDR: Double Data Rate; HT: HyperTransport; TFI: TDM Fabric Interface; SCSI: Small Computer System Interface
Point-to-Point Interconnects: Overview

• 2 – 100 Gbit/s aggregate data rate over 4 to 12 lanes
• Up to 25 Gbit/s per lane
• Up to 100 cm over PCB traces
• Up to 20 m over copper cables
• Up to 300 m over optical multimode fibers
Optical vs. Electrical Interconnects

Advantages of optical interconnects:
• Loss independent of frequency
• High data rates over long distances (high bandwidth-length product)
• Resistance to electromagnetic interference
• High physical interconnect density (bandwidth density)

[Figure: comparison of optical and electrical interconnects]
Optical Switches

[Table: survey of optical switch technologies, grouped by switching effect (electro-optic, acousto-optic, thermo-optic, electro-mechanical) and by port count (low, moderate, high), listing for each switch type its switching speed, insertion loss, crosstalk and polarization-dependent loss (PDL). Electro-optic devices (Y-branch digital LiNbO3 and PLZT switches, Mach-Zehnder LiNbO3, directional couplers, silicon ring resonators, liquid crystal, SOA gates, electroholographic switches, AWG-based NxN switches for N = 4, 8, 16 or 32) offer the fastest switching (sub-ns to hundreds of ns); acousto-optic (TeO2 and LiNbO3 crystals) and thermo-optic devices (polymer Y-branch, silica Mach-Zehnder) are slower; electro-mechanical devices (bubble switches, MEMS from 8x8 up to 64x64) switch in the µs-to-ms range but reach the highest port counts. Insertion losses range from below 1 dB to ~15 dB, crosstalk isolation from ~10 dB to 70 dB, and PDL from very low up to ~2 dB.]
Energy Consumption of High-Performance Switches

[Figure, left (single switch devices): energy per bit in pJ/bit vs. aggregate switch capacity (10-10,000 Gbit/s) at a 10 Gbit/s line data rate, comparing electronic crosspoint switches (8x8, 16x16, 144x144), electronic buffered packet switches (4x4 to 32x32), SOA-based switches (6x6 to 32x32) and optical 3D-MEMS switches (24x24, 80x80, 160x160). Figure, right (large switching fabrics): energy per bit in nJ/bit vs. aggregate capacity (1-100 Tbit/s), comparing electronic buffered packet switches, electronic circuit switches based on large crosspoint switches, optical SOA-based switches with synchronizers, FDLs, TWCs and 3R regeneration, and optical circuit switches based on 3D MEMS with TWCs.]

Source: S. Aleksic, "Energy Efficiency of Electronic and Optical Network Elements", IEEE Journal of Selected Topics in Quantum Electronics (invited), 17 (2011), 3, 296-308. Source: OMM Inc.
Switch Architecture and Network Topology

3-stage Clos (Fat-Tree):
• Can be strict-sense non-blocking (p ≥ 2n – 1)
• Typically best performance (highest bandwidth, lowest latency)

Mesh or 3D Torus:
• Blocking network, cost-effective for large systems
• Good performance for applications with locality

Switch architectures and their blocking type:
• Crossbar: strict-sense non-blocking
• Spanke: strict-sense non-blocking
• Cantor: strict-sense non-blocking
• Benes: rearrangeably non-blocking
• Classical logN, Banyan: blocking

Network topologies and their blocking type:
• Clos (Fat-Tree, Multistage): strict-sense non-blocking if p ≥ 2n – 1
• d-dim Symmetric Mesh: rearrangeable, blocking if p > 2
• d-dim Symmetric Torus: rearrangeable, blocking if p > 2
• d-dim Hypercube: rearrangeable

* p is the number of edge switches; n is the number of ports of a single switching element
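The strict-sense non-blocking condition quoted above is the classical Clos result: a 3-stage Clos network with n input ports per ingress switch is strict-sense non-blocking when the number of middle-stage paths (the slide's p) is at least 2n − 1. A minimal numeric sketch of the condition:

```python
# Classical Clos condition: a 3-stage Clos network with n inputs per
# ingress switch and m middle-stage switches is strict-sense
# non-blocking iff m >= 2*n - 1 (Clos, 1953).
def clos_strict_sense_nonblocking(n, m):
    return m >= 2 * n - 1

# With n = 4 inputs per edge switch, 7 middle-stage switches suffice:
print(clos_strict_sense_nonblocking(4, 7))  # True
# With only 6, an existing connection may need to be rearranged:
print(clos_strict_sense_nonblocking(4, 6))  # False
```

Intuitively, 2n − 1 middle switches guarantee that among the at most n − 1 busy inputs and n − 1 busy outputs, at least one middle switch remains free for any new connection.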
Network Topologies

3-stage Clos (Fat-Tree):
• Can be strict-sense non-blocking (p ≥ 2n – 1)
• Typically best performance (highest bandwidth, lowest latency)

Mesh or 3D Torus:
• Blocking network, cost-effective for large systems
• Good performance for applications with locality

[Figure: number of links vs. number of nodes (both log scale, 100 to 100,000 nodes) for full mesh, Clos, 3D torus and 2D torus topologies. A full mesh of 100,000 nodes requires about 5 billion links, whereas the K Computer's TOFU interconnect connects 80,000 nodes with about 960,000 links.]

Source: A. D. Hospodor, E. L. Miller, "Interconnection Architectures for Petabyte-Scale High-Performance Storage Systems", 21st IEEE / 12th NASA Goddard Conference on Mass Storage Systems and Technologies, April 2004, pages 273-281.
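The orders-of-magnitude gap between topologies follows from simple link-count formulas. A sketch, assuming bidirectional links and a d-dimensional torus in which every node has 2d neighbors (so d links per node after sharing):

```python
# Link counts for common interconnect topologies with n nodes.
def full_mesh_links(n):
    # every node pair is directly connected
    return n * (n - 1) // 2

def torus_links(n, dims):
    # d-dimensional torus: 2*dims neighbours per node, each link
    # shared by two nodes -> dims links per node
    return n * dims

print(full_mesh_links(100_000))  # 4999950000 -> ~5 billion links
print(torus_links(100_000, 3))   # 300000 links in a 3D torus
```

This is why a full mesh is only drawn at the top of the plot as a bound: no real system wires billions of links, while torus and Clos topologies keep the link count within a small multiple of the node count.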
The Wiring Problem

[Figure-only slide illustrating the wiring problem of large interconnects]
High-Speed Interconnects

[Figure: three interconnect architectures.
1. Point-to-point optical interconnects: N ports, each with E/O and O/E conversion, around a large electronic switch.
2. SOA-based optically switched interconnects: E/O-converted signals are combined in an optical coupler and gated by active/inactive SOAs towards the O/E receivers.
3. AWG-based optically switched interconnects: tuneable wavelength converters (TWCs) select the output port through an AWG, or alternatively through a MEMS switch.]

To implement large-scale optically switched interconnects, multi-stage architectures, 3R signal regeneration and optical amplification can be used.

E/O and O/E: Electro-Optical and Opto-Electrical Conversion; AWG: Arrayed Waveguide Grating; TWC: Tuneable Wavelength Converter; SOA: Semiconductor Optical Amplifier
Scalability of Optical Interconnects

Physical layer simulations were carried out taking into account the most relevant effects, such as:
• attenuation,
• dispersion,
• noise accumulation,
• nonlinear effects, and
• crosstalk.

[Figure, left (scalability): eye closure penalty in dB vs. number of switch ports (4-64) for SOA switches, 3D-MEMS switches, and AWG switches with and without in-band crosstalk. Figure, right (required number of fibers): fiber count vs. aggregate capacity (1-100 Tb/s) for point-to-point interconnects (4xSDR/4xDDR/4xQDR InfiniBand, 10G XAUI, 40G parallel, WDM) and optically switched interconnects; some point-to-point options require more than 100,000 fibers, while WDM and optically switched interconnects reduce the fiber count by roughly 80-95%.]
Energy Efficiency of Optical Interconnects

[Figure: energy efficiency in nJ/bit. Top: point-to-point interconnects (10G XAUI, 4xSDR/4xDDR/4xQDR InfiniBand, 40G parallel, 40G WDM) combined with an electronic switch consume 0.8 – 1.18 nJ/bit. Bottom: optically switched interconnects (SOA-, AWG- and 3D-MEMS-based, with optical amplification and signal regeneration) consume 0.26 – 0.61 nJ/bit.]

• Replacing electronic switches with optical ones can save more than 50% of the power
• 3D-MEMS-based interconnects consume less than 50% of the energy consumed by SOA-based interconnects
• Optically switched interconnects have the potential to significantly improve the energy efficiency of large-scale systems

Source: N. Fehratovic, S. Aleksic, "Power Consumption and Scalability of Optically Switched Interconnects for High-Capacity Network Elements", Optical Fiber Communication Conference and Exposition (OFC 2011), Los Angeles, CA, USA, IEEE/OSA, (2011), ISBN: 978-1-55752-906-0, JWA84-1 - JWA84-3.
6. Optical Technologies for Core Networks
Optical Transmission and Signal Processing

• Considering the state of the art in photonic technology, complex optical signal processing is hardly realizable.
• Optical random access memories (ORAMs) at high data rates are not feasible yet.
• A crucial precondition for implementing complex optical switching and data processing systems is high-density photonic integration.
• If we were able to implement an optical packet router with the same complexity and performance as current high-performance electronic routers, it would probably consume a comparable amount of energy to its electronic counterpart.
• Simple optical functions such as filtering, splitting, combining, reflection, diffraction and interference can be used to increase the energy efficiency of network elements.
Optimizing DC Network Architecture: Optical Switching

[Figure: data center network with Internet, core and aggregation tiers above top-of-the-rack (ToR) switches, annotated with traffic shares (~25%, ~65%, >70%) and the peak capacity per server in Gbps.]

M. Fiorani et al., "Energy-Efficient Elastic Optical Interconnect Architecture for Data Centers", IEEE Communications Letters, 2014, DOI: 10.1109/LCOMM.2014.2339322.
M. Fiorani et al., "Hybrid Optical Switching for Data Center Networks", Journal of Electrical and Computer Engineering, 2014.

ToR: Top-of-the-Rack
Multi-Layer SDN

Current SDN implementations focus mainly on Ethernet networks for data centers.
• Extend the SDN concept to transport networks on layer 0/1 (e.g. DWDM, OTN, HOS, EON), layer 2 (e.g. Ethernet) and layer 2.5 (e.g. MPLS-TP), where there is currently a lack of standards and products providing automated provisioning across these layers.

Modern optical transport networks already provide a relatively high level of flexibility and controllable attributes. Many optical transport systems available on the market today implement a path computation element (PCE).
Optical SDN

Additional standardization is required to allow SDN controllers to directly manage adaptable optical transmission and switching components such as:
• variable bandwidth transceivers (VBTs)
• colorless, directionless and contentionless (CDC) reconfigurable optical add/drop multiplexers (ROADMs)
• flex-grid wavelength selective switches (WSSs)

Parameters that can be controlled: modulation scheme, symbol rate, number of optical carriers, FEC overhead, number of spectral slices, wavelength, add/drop channels, switching/forwarding, port data rate.

Configurable optical network elements: future software-controlled VBTs, future software-controlled CDC ROADMs, future software-controlled flex-grid WSSs.
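How these parameters interact can be sketched as a parameter set a controller might push to a hypothetical software-controlled VBT. The field names, value sets and the rate formula (polarization-multiplexed carriers, net rate = line rate divided by 1 + FEC overhead) are illustrative assumptions, not any standardized interface:

```python
# Hypothetical parameter set an SDN controller might push to a
# software-controlled variable bandwidth transceiver (VBT).
# Field names and allowed values are illustrative only.
ALLOWED_MODULATION = {"QPSK", "16QAM", "64QAM"}
BITS_PER_SYMBOL = {"QPSK": 2, "16QAM": 4, "64QAM": 6}

def make_vbt_config(modulation, symbol_rate_gbaud, n_carriers, fec_overhead):
    if modulation not in ALLOWED_MODULATION:
        raise ValueError("unsupported modulation scheme")
    # Line rate for polarization-multiplexed carriers (factor 2),
    # reduced by the FEC overhead to obtain the net client rate.
    line_rate = 2 * BITS_PER_SYMBOL[modulation] * symbol_rate_gbaud * n_carriers
    net_rate = line_rate / (1 + fec_overhead)
    return {
        "modulation": modulation,
        "symbol_rate_gbaud": symbol_rate_gbaud,
        "n_carriers": n_carriers,
        "fec_overhead": fec_overhead,
        "net_rate_gbps": round(net_rate, 1),
    }

cfg = make_vbt_config("16QAM", 32, 1, 0.25)
print(cfg["net_rate_gbps"])  # 204.8 Gb/s net from one PM-16QAM 32 GBaud carrier
```

The point of the sketch is the slide's message: once modulation scheme, symbol rate, carrier count and FEC overhead are exposed as controllable parameters, the controller can trade reach against capacity per configured lightpath.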
Thank you for listening. Questions?

Contact:
Dr. Slavisa Aleksic
Vienna University of Technology
Institute of Telecommunications
Favoritenstrasse 9/E389, A-1040 Vienna
Tel: +43/1/58801-38831
Fax: +43/1/58801-938831
e-mail: [email protected]