General Problems
- Cloud services require agility from the data center: the capacity to assign any server to any service.
- Data centers built on conventional network architectures cannot fulfill that demand.
- Different branches of the network tree provide different capacity: switches at the core layer are oversubscribed by factors of 1:80 to 1:240, while those at lower layers are oversubscribed by 1:5 or more (see the worked example after this list).
- The architecture does not prevent a traffic flood in one service from affecting the others, so services commonly suffer collateral damage.
- Conventional networks achieve scale by assigning servers topologically significant IP addresses and dividing them into VLANs => migrating a VM requires reconfiguration and human involvement => limits the speed of deployment.
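As a concrete reminder of what these oversubscription ratios mean, here is a small worked example; the port counts and link speeds below are hypothetical, chosen only to illustrate the calculation.

```python
# Illustrative oversubscription arithmetic (hypothetical port counts, not from the paper).
# Oversubscription = capacity offered to servers / capacity toward the rest of the network.

server_ports, server_port_gbps = 20, 1   # 20 server-facing ports at 1 Gbps each
uplinks, uplink_gbps = 2, 2              # 2 uplinks at 2 Gbps each

oversub = (server_ports * server_port_gbps) / (uplinks * uplink_gbps)
print(f"oversubscription = 1:{oversub:g}")   # -> 1:5, i.e. servers can use only 1/5 of their NIC rate
```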
Realizing this vision concretely translates into building a network that meets the following three objectives:
- Uniform high capacity
- Performance isolation
- Layer-2 semantics
For compatibility, changes to current network hardware are avoided; only the software and operating system on data center servers are modified.
A layer-2.5 shim in the server's network stack works around limitations of the network devices.
VL2 consists of a network built from low-cost switch ASICs arranged into a Clos topology [2] that provides extensive path diversity between servers. To cope with traffic volatility, VL2 adopts Valiant Load Balancing (VLB) to spread traffic across all available paths without any centralized coordination or traffic engineering.
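A minimal sketch of the VLB idea described above, assuming a toy set of intermediate switches: each flow independently bounces off a randomly chosen intermediate switch, so traffic spreads over all available paths with no central coordinator. The switch names are illustrative.

```python
import random

# Hypothetical intermediate switches in a small Clos fabric.
INTERMEDIATE_SWITCHES = ["int-1", "int-2", "int-3", "int-4"]

def vlb_pick_path(dst_tor):
    """Valiant Load Balancing, chosen once per flow: bounce off a random
    intermediate switch (independent of source and destination), then take
    the downward path to the destination ToR."""
    bounce = random.choice(INTERMEDIATE_SWITCHES)   # destination-independent, no coordination
    return [bounce, dst_tor]

# Two flows to the same destination ToR may take different up-paths.
print(vlb_pick_path(dst_tor="tor-7"))
print(vlb_pick_path(dst_tor="tor-7"))
```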
Problems in production data centers
To limit overheads (packet flooding, ARP broadcasts), servers are partitioned into VLANs. However, this approach suffers from three limitations:
- Limited server-to-server capacity: because servers are located in different VLANs, idle servers cannot be assigned to overloaded services.
- Fragmentation of resources: spreading a service outside a single layer-2 domain frequently requires reconfiguring IP addresses and VLAN trunks. Operators avoid this by reserving resources for each service to absorb demand spikes and failures, which in turn incurs significant cost and disruption.
- Poor reliability and utilization: if an aggregation switch or access router fails, there must be sufficient idle capacity on its counterpart device to carry the load, so each device and link can be run at no more than 50% of its maximum utilization.
Traffic: 1) The ratio of traffic between servers inside the data center to traffic entering/leaving the data center is about 4:1. 2) Computation is focused where high-speed access to data is fast and cheap; even though data is distributed across multiple data centers, computation does not straddle them because of the cost of long-haul links. 3) Demand for bandwidth between servers inside a data center is growing faster than demand for bandwidth to external hosts. 4) The network is a bottleneck to computation.
Flow distribution: most bytes are carried in flows of around 100 MB even though the total data transferred can be several GB, because files are broken into chunks and stored across many servers. A machine has about 10 concurrent flows more than 50% of the time, but at least 5% of the time it has more than 80 concurrent flows.
Traffic matrix: highly variable and unpredictable, so routes cannot be engineered for any particular matrix (this is the volatility VLB is meant to cope with).
Failure characteristics: a failure is defined as an event in which a system or component is unable to perform its required function for more than 30 s. Most failures are small in size (involving few devices) but downtime can be significant (95% of failures are resolved within 10 min, yet 0.09% last more than 10 days). VL2 moves from 1:1 redundancy to n:m redundancy.
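A small worked comparison (with illustrative device counts, not the paper's) of why the shift from 1:1 to n:m redundancy matters:

```python
# With 1:1 redundancy a paired backup must absorb the full load of the failed
# device, so each device can safely run at no more than 50% utilization.
safe_utilization_1to1 = 1 / 2                       # 0.5

# With n:m redundancy the load is spread over n active intermediate switches,
# so losing one of them removes only 1/n of the capacity.
n = 10                                              # illustrative number of intermediate switches
capacity_after_one_failure = (n - 1) / n            # 0.9

print(safe_utilization_1to1, capacity_after_one_failure)
```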
VL2
Design principles:
- Randomizing to cope with volatility: use VLB to do destination-independent (e.g., random) traffic spreading across multiple intermediate nodes.
- Building on proven networking technology: use ECMP forwarding with anycast addresses to enable VLB with minimal control-plane messaging or churn.
- Separating names from locators: same idea as PortLand; application addresses (AAs) are decoupled from location addresses (LAs).
- Embracing end systems: implement functionality in the servers' network stack (the layer-2.5 shim agent) rather than in the switches.
Scale-out topology
- Add a layer of intermediate switches interconnecting the aggregation switches => increases the bandwidth available between any two aggregation switches. This is an example of a Clos network.
- VLB: take a random path up to a random intermediate switch, then the path down to the destination ToR switch.
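A rough sketch of the path diversity this topology provides, assuming (as in the paper's Clos design) that each ToR connects to two aggregation switches and every aggregation switch connects to every intermediate switch; the switch counts are illustrative.

```python
# Counting distinct ToR-to-ToR paths in an illustrative VL2-style Clos:
# ToR -> aggregation -> intermediate -> aggregation -> ToR.
aggs_per_tor = 2        # each ToR has uplinks to 2 aggregation switches
n_intermediate = 4      # illustrative size of the intermediate layer

# Independent choices: source-side aggregation switch, intermediate switch,
# destination-side aggregation switch.
paths = aggs_per_tor * n_intermediate * aggs_per_tor
print(paths)            # 2 * 4 * 2 = 16 distinct paths for VLB/ECMP to spread over
```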
VL2 Addressing and Routing
- Packet forwarding, address resolution, and access control are all handled via the directory service.
- Random traffic spreading over multiple paths: VLB distributes traffic across a set of intermediate nodes, and ECMP distributes it across equal-cost paths.
- ECMP issues: switches support only 16-way ECMP, so VL2 defines several anycast addresses, each assigned to a subset of intermediate switches; a switch cannot retrieve the five-tuple when a packet is encapsulated with multiple IP headers, so the agent places a hash of the five-tuple in the outer header for switches to use (see the sketch after this list).
- Backwards compatibility
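A hedged sketch of the forwarding path implied by these bullets: the sender's agent resolves the destination AA to the LA of its ToR via the directory service, encapsulates the packet toward a randomly chosen anycast intermediate address, and carries a hash of the five-tuple so switches can still spread encapsulated traffic. The field names and layout are a simplification of mine, not the paper's exact packet format.

```python
import hashlib
import random

# Hypothetical anycast LAs, each shared by a subset of intermediate switches so
# the 16-way ECMP limit is not exceeded.
INTERMEDIATE_ANYCAST_LAS = ["20.0.0.1", "20.0.0.2", "20.0.0.3"]

def flow_hash(five_tuple):
    """Stable hash of (src, dst, proto, sport, dport); stands in for the hash
    the agent exposes to switches for encapsulated packets."""
    return hashlib.sha1(repr(five_tuple).encode()).hexdigest()[:8]

def encapsulate(payload, five_tuple, dst_tor_la):
    """Simplified VL2-agent-style encapsulation: outer header toward an anycast
    intermediate LA, inner header toward the destination ToR's LA."""
    return {
        "outer_dst": random.choice(INTERMEDIATE_ANYCAST_LAS),  # VLB bounce point
        "flow_hash": flow_hash(five_tuple),                    # lets switches do per-flow spreading
        "inner_dst": dst_tor_la,
        "payload": payload,
    }

print(encapsulate(b"data", ("10.1.1.1", "10.2.2.2", 6, 1234, 80), dst_tor_la="30.0.0.7"))
```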
Store, lookup and update AA-to-LA mapping
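The notes above do not record how the directory system works internally; the following is only a minimal sketch of the interface this heading implies, with class and method names of my own choosing (the real VL2 directory is a replicated, partitioned service).

```python
class DirectoryService:
    """Toy AA-to-LA mapping store; does not model replication or caching."""

    def __init__(self):
        self._aa_to_la = {}          # application address -> locator address (its ToR)

    def update(self, aa, la):
        """Register or change the LA for an AA, e.g. after a VM migrates."""
        self._aa_to_la[aa] = la

    def lookup(self, aa):
        """Resolve an AA to its current LA; the server-side agent caches this result."""
        return self._aa_to_la.get(aa)

directory = DirectoryService()
directory.update("10.1.0.5", "30.0.0.7")   # AA 10.1.0.5 now lives behind ToR LA 30.0.0.7
print(directory.lookup("10.1.0.5"))
```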
Evaluation
- Uniform high capacity: measured using aggregate goodput and goodput efficiency (achieved goodput relative to the maximum achievable).
- VLB fairness: evaluates the effectiveness of VL2's implementation of VLB in splitting traffic evenly across the network.
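A small sketch of how these two measurements could be computed; the exact definitions of goodput efficiency and the fairness statistic used in the paper may differ, so treat these formulas as approximations of mine.

```python
def goodput_efficiency(aggregate_goodput_gbps, max_achievable_gbps):
    """Fraction of the maximum achievable goodput that was actually delivered."""
    return aggregate_goodput_gbps / max_achievable_gbps

def vlb_split_ratio(bytes_per_link):
    """Crude evenness check: most-loaded over least-loaded link; close to 1.0
    means VLB split traffic evenly across the network."""
    return max(bytes_per_link) / min(bytes_per_link)

print(goodput_efficiency(9.4, 10.0))           # illustrative numbers, not paper results
print(vlb_split_ratio([101, 98, 100, 99]))     # ~1.03 -> nearly even split
```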