VLANs = Two Subnets
[Figure: a data center network partitioned into two VLANs, each VLAN corresponding to one subnet]
• Important notes on VLANs:
VLANs are assigned to switch ports. There is no “VLAN”
assignment done on the host (usually).
VLANs separate broadcast domains! E.g., without VLANs an ARP
broadcast would be seen on all subnets.
In order for a host to be a part of a VLAN, it must be
assigned an IP address that belongs to the proper subnet.
Remember: VLAN = Subnet
Assigning a host to the correct VLAN is a 2-step process:
1. Connect the host to the correct port on the switch.
2. Assign to the host the correct IP address, depending on its
VLAN membership.
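Since VLAN membership ultimately reduces to subnet membership (step 2 above), the host-side check is just an address-in-subnet test. A minimal Python sketch; the VLAN-to-subnet mapping below is hypothetical, not from the slides:

```python
import ipaddress

# Hypothetical VLAN-to-subnet mapping; in practice this comes from the
# network design / address plan, not from the host itself.
VLAN_SUBNETS = {
    10: ipaddress.ip_network("10.0.10.0/24"),
    20: ipaddress.ip_network("10.0.20.0/24"),
}

def vlan_for_host(ip: str):
    """Return the VLAN whose subnet contains this host address, if any."""
    addr = ipaddress.ip_address(ip)
    for vlan_id, subnet in VLAN_SUBNETS.items():
        if addr in subnet:
            return vlan_id
    return None

# The host's IP address decides which VLAN it logically belongs to.
print(vlan_for_host("10.0.20.37"))   # -> 20
print(vlan_for_host("192.168.1.5"))  # -> None (no matching VLAN/subnet)
```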
802.1Q Trunk
[Figure: two switches, each carrying VLAN X and VLAN Y, connected through trunk ports over an 802.1Q trunk]
VLAN Tagging
The IEEE standard that defines how Ethernet frames should be
tagged when moving across switch trunks.
This means that switches from different vendors are able to
exchange VLAN traffic.
Edge Ports
Edge ports are not tagged; they are just “members” of a VLAN.
You only need to tag frames on switch-to-switch links (trunks),
when transporting multiple VLANs.
A trunk can transport both tagged and untagged VLANs,
as long as the two switches agree on how to handle those.
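The “agreement” between the two switches is commonly expressed as a native (untagged) VLAN configured on each end of the trunk; that detail is not spelled out in the slides, so the sketch below is an illustrative model, not a vendor configuration:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Frame:
    dst_mac: str
    src_mac: str
    vlan_id: Optional[int] = None   # None means the frame arrived untagged

def classify_on_trunk(frame: Frame, allowed_vlans: set, native_vlan: int):
    """Decide which VLAN an incoming frame on a trunk belongs to.

    Tagged frames keep their VID (if that VLAN is allowed on the trunk);
    untagged frames fall back to the trunk's native VLAN.
    """
    vid = frame.vlan_id if frame.vlan_id is not None else native_vlan
    return vid if vid in allowed_vlans else None   # None = drop the frame

trunk_vlans, native = {10, 20, 30}, 10
print(classify_on_trunk(Frame("aa:aa:aa:aa:aa:01", "bb:bb:bb:bb:bb:02", vlan_id=20), trunk_vlans, native))  # 20
print(classify_on_trunk(Frame("aa:aa:aa:aa:aa:01", "bb:bb:bb:bb:bb:02"), trunk_vlans, native))              # 10 (untagged -> native)
print(classify_on_trunk(Frame("aa:aa:aa:aa:aa:01", "bb:bb:bb:bb:bb:02", vlan_id=99), trunk_vlans, native))  # None (not allowed)
```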
802.1Q Tagged Frame / 802.1Q Header
• A 4-byte tag header containing a tag protocol identifier
(TPID) and tag control information (TCI) with the following
elements:
TPID
• A 2-byte TPID with a fixed value of 0x8100.
• This value indicates that the frame carries the
802.1Q/802.1p tag information.
TCI
• A TCI containing the following elements:
- Three-bit user priority (8 priority levels, 0 through 7)
- One-bit canonical format indicator (CFI): 0 = canonical, 1 = noncanonical;
signals the bit ordering of addresses in the encapsulated frame
(www.faqs.org/rfcs/rfc2469.html - “A Caution On the Canonical
Ordering of Link-Layer Addresses”)
- Twelve-bit VLAN identifier (VID): uniquely identifies the VLAN to which
the frame belongs, defining 4,096 VLAN IDs, with 0 and 4095 reserved.
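To make the layout above concrete, here is a minimal Python sketch that packs and unpacks the 4-byte 802.1Q tag (TPID + TCI); the field names come from the definitions above, everything else (function names, example values) is illustrative:

```python
import struct

TPID = 0x8100  # fixed value indicating an 802.1Q/802.1p tagged frame

def pack_dot1q_tag(priority: int, cfi: int, vid: int) -> bytes:
    """Build the 4-byte 802.1Q tag: 2-byte TPID + 2-byte TCI.

    TCI layout: 3-bit priority | 1-bit CFI | 12-bit VLAN ID.
    """
    assert 0 <= priority <= 7 and cfi in (0, 1) and 0 < vid < 4095
    tci = (priority << 13) | (cfi << 12) | vid
    return struct.pack("!HH", TPID, tci)  # network byte order

def unpack_dot1q_tag(tag: bytes):
    """Return (priority, cfi, vid) from a 4-byte 802.1Q tag."""
    tpid, tci = struct.unpack("!HH", tag)
    if tpid != TPID:
        raise ValueError("not an 802.1Q tag")
    return (tci >> 13) & 0x7, (tci >> 12) & 0x1, tci & 0x0FFF

tag = pack_dot1q_tag(priority=5, cfi=0, vid=100)
print(tag.hex())              # 8100a064
print(unpack_dot1q_tag(tag))  # (5, 0, 100)
```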
How are servers connected? Layer 3
[Figure: racks of servers aggregated by access routers (AR), interconnected by core routers]
– Typically a hierarchical inter-connection structure
– core routers at the top of the hierarchy
• Is such an architecture good enough?
Data Center Costs
• Total cost varies
– upwards of $1/4 B for a mega data center
– server costs dominate
– network costs significant
• Long provisioning timescales:
– new servers purchased quarterly at best
*3-yr amortization for servers, 15 yr for infrastructure; 5% cost of money
Source: The Cost of a Cloud: Research Problems in Data Center Networks.
Sigcomm CCR 2009. Greenberg, Hamilton, Maltz, Patel.
Data Center Workloads and Failures
• Workloads often unpredictable:
– Multiple services run concurrently within a DC
– Demand for new services may spike unexpectedly
• A spike of demand for a new service means success!
• But this is when success spells trouble (if not prepared)!
• Failures of servers are the norm
– GFS, MapReduce, etc., resort to dynamic re-assignment of
chunkservers and jobs/tasks (worker servers) to deal with
failures; data is often replicated across racks, …
– The “traffic matrix” between servers is constantly changing
Overall Data Center Design Goal
Agility – Any service, Any Server
• Turn the servers into a single large fungible pool
– Let services “breathe”: dynamically expand and contract
their footprint as needed
• Benefits
– Increase service developer productivity
– Lower cost
– Achieve high performance and reliability
These are the three motivators for most data center
infrastructure projects!
Achieving Agility …
• Workload Management
– means for rapidly installing a service’s code on a server
– dynamic cluster scheduling and server assignment
• E.g., MapReduce, …
– virtual machines, disk images
• Storage Management
– means for a server to access persistent data
– distributed file systems (e.g., GFS)
• Network Management
– means for communicating with other servers, regardless of
where they are in the data center
– Achieve high performance and reliability
A Scalable, Commodity Data Center Network Architecture
• Main Goal: addressing the limitations of today’s data center network architecture
– single point of failure
– oversubscription of links higher up in the topology
• trade-offs between cost and providing
Fat-Tree Based DC Architecture
• Inter-connect racks (of servers) using a fat-tree topology
• Fat-Tree: a special type of Clos network (after C. Clos)
• K-ary fat tree: three-layer topology (edge, aggregation and core)
– each pod consists of (k/2)² servers & 2 layers of k/2 k-port switches
– each edge switch connects to k/2 servers & k/2 aggregation switches
– each aggregation switch connects to k/2 edge & k/2 core switches
– (k/2)² core switches: each connects to k pods
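To make the k-ary fat-tree parameters above concrete, here is a small Python sketch that computes the component counts for a given (even) k; the function name and the k = 48 example are illustrative, not from the slides:

```python
def fat_tree_sizes(k: int) -> dict:
    """Component counts for a k-ary fat-tree built from k-port switches."""
    assert k % 2 == 0, "k must be even"
    half = k // 2
    return {
        "pods": k,
        "edge_switches": k * half,          # k/2 edge switches per pod
        "aggregation_switches": k * half,   # k/2 aggregation switches per pod
        "core_switches": half * half,       # (k/2)^2 core switches
        "servers_per_pod": half * half,     # (k/2)^2 servers per pod
        "total_servers": k * half * half,   # k^3 / 4 servers overall
    }

# Example: k = 48 (48-port commodity switches)
print(fat_tree_sizes(48))
# {'pods': 48, 'edge_switches': 1152, 'aggregation_switches': 1152,
#  'core_switches': 576, 'servers_per_pod': 576, 'total_servers': 27648}
```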