BRKDCT-1345 - Deploying UCS M Servers With Highly Distributed Applications
BRKDCT-1345 © 2015 Cisco and/or its affiliates. All rights reserved. Cisco Public 3
Cloud Scale Architectures
Cloud-Scale Inverts Computing Architecture
[Figure: core enterprise workloads (SCM, ERP/legacy, CRM, email, financial) run many applications on a single server through a hypervisor; cloud-scale workloads (online gaming, mobile, IoT, eCommerce, content) run a single application across many servers.]
BRKDCT-1345 © 2015 Cisco and/or its affiliates. All rights reserved. Cisco Public
Cloud-Scale Application Components
[Figure: a single cloud-scale application decomposed into components distributed across many servers.]
BRKDCT-1345 © 2015 Cisco and/or its affiliates. All rights reserved. Cisco Public
Sample Cloud Scale Application Consumer
• Cloud scale applications distribute the workload across multiple component nodes
• These nodes have various system requirements
[Figure: sample cloud-scale application with consumer-facing nodes backed by an object data store.]
BRKDCT-1345 © 2015 Cisco and/or its affiliates. All rights reserved. Cisco Public
Compute Infrastructure Requirements
• Manager Node
  Dual Socket/8-16 Core
  2.5GHz or better
  128-512GB Memory
  1/10Gbps Ethernet
  300GB-4TB HDD (RAID)
  Redundancy at HW & app level
• App Node
  Single or Dual Socket/4-18 Core
  2.0-2.5GHz
  16-128GB Memory
  1Gbps Ethernet
  50-100GB HDD
  Redundancy handled at app level
• Web Node
  Single Socket/2-4 Core
  1.0-2.0GHz
  8-16GB Memory
  1Gbps Ethernet
  20-100GB HDD
  Redundancy at app level
• Db Node
  Single or Dual Socket/4-24 Core
  2.0-3.0GHz
  32-256GB Memory
  1Gbps Ethernet
  100-250GB HDD
  Redundancy handled at app level
• Content Node
  Single Socket/2-4 Core
  2.0-3.7GHz
  16-32GB Memory
  1/10Gbps Ethernet
  50-200GB HDD
  Redundancy at app level
BRKDCT-1345 © 2015 Cisco and/or its affiliates. All rights reserved. Cisco Public
Storage Infrastructure Requirements
• Object Store
  1-500TB Storage
  SSD Options
  JBOD/RAID capabilities
  1-40Gbps Network BW
  FC/FCoE initiator capabilities
  Dual Socket/24-48 Cores
  2.0-2.5GHz
  Redundancy at HW level
• Application Data
  High Performance I/O – Application Acceleration
  Data Optimization
  Various Workloads
  High Availability
  Scalability
  FC or iSCSI connectivity
BRKDCT-1345 © 2015 Cisco and/or its affiliates. All rights reserved. Cisco Public
Cisco System Link Technology
Extending the UCS Fabric inside the server
[Figure: the UCS fabric extended from the fabric interconnects through the chassis System Link ASIC to the compute cartridges.]
BRKDCT-1345 © 2015 Cisco and/or its affiliates. All rights reserved. Cisco Public
M-Series Design Considerations
• UCS M-Series was designed to complement the compute infrastructure
requirements in the data center
• The goal of the M-Series is to offer smaller compute nodes to meet the needs
of scale out applications, while taking advantage of the management
infrastructure and converged innovation of UCS
• By disaggregating the server components, UCS M-Series helps provide a
component life cycle management strategy as opposed to a server-by-server
strategy
• UCS M-Series provides a platform for flexible and rapid deployment of compute, storage, and networking resources
BRKDCT-1345 © 2015 Cisco and/or its affiliates. All rights reserved. Cisco Public
UCS M-Series Modular Servers
UCS M-Series: True Server Disaggregation
• Lightweight Compute Cartridge
  Two independent Intel Xeon E3 servers
  No adapters or HDDs
• Shared Local Resources
  Four shared SSDs in the chassis
  Shared dual 40Gb connectivity
  Shared network and storage resources
• Compute Density
  16 Intel Xeon E3 compute nodes in a 2RU chassis
  Each cartridge holds two independent compute nodes
BRKDCT-1345 © 2015 Cisco and/or its affiliates. All rights reserved. Cisco Public
UCS M-Series Specifics
2 RU chassis (front and rear views)
Aggregate capacity per chassis: 16 servers, 64 cores, 512 GB memory
Compute cartridge: 2 server nodes
CPU: Intel Xeon E3 1275Lv3, 1240Lv3, 1220Lv3
Memory: 8 GB UDIMMs, 32 GB max per cartridge
Disks: 2 or 4 SSDs; SATA (240 GB, 480 GB) or SAS (400 GB, 800 GB, 1.6 TB)
RAID Controller: Cisco 12 Gb Modular RAID Controller with 2 GB Flash-Backed Write Cache (FBWC)
Network: 2 x 40 Gbps
Power: 2 x 1400W
BRKDCT-1345 © 2015 Cisco and/or its affiliates. All rights reserved. Cisco Public
Typical Rack Deployment
UCS M-Series Overview
[Figure: typical rack deployment – M-Series chassis uplinked to UCS 62xx (10 Gb) Fabric Interconnects. Front view: 2 RU chassis with 8 cartridges; rear view: 2 x 1400 Watt power supplies and 2 x 40 Gb uplinks.]
BRKDCT-1345 © 2015 Cisco and/or its affiliates. All rights reserved. Cisco Public
System Architecture Diagram
[Figure: system architecture – 2 x 40 Gb uplinks and redundant power supplies at the rear; shared resources (storage controller and 4 x SSD drives) connected across the midplane to the compute resources in the cartridge slots.]
BRKDCT-1345 © 2015 Cisco and/or its affiliates. All rights reserved. Cisco Public
UCS M4308 Chassis
• 2U Rack-Mount Chassis (3.5” H x 30.5” L x 17.5” W)
• 3rd Generation UCS VIC ASIC (System Link Technology)
• 1 Chassis Management Controller - CMC (Manages Chassis resources)
• 8 cartridge slots (x4 PCIe Gen3 lanes per slot)
  Slots and ASIC adaptable for future use
• Four 2.5” SFF Drive Bays (SSD only)
• 1 internal x8 PCIe Gen3 connection to the Cisco 12 Gb SAS RAID card
• 1 internal x8 PCIe Gen3 slot, ½ height, ½ width (future use)
• 6 hot swappable fans, accessed by top rear cover removal
• 2 external Ethernet management ports, 1 external serial console port (connected to CMC for out-of-band troubleshooting)
• 2 AC Power Module bays - (1+1 redundant, 220V only)
• 2 QSFP 40Gbps ports (data and management - server and chassis)
BRKDCT-1345 © 2015 Cisco and/or its affiliates. All rights reserved. Cisco Public
UCS M142 Cartridge
• The UCS M142 cartridge contains 2 distinct Xeon E3 servers
• Each server is independently manageable and has its own memory, CPU, and management controller (CIMC)
• The cartridge connects to the midplane for access to power and network (host PCIe interface: x2 PCIe Gen3 lanes per server)
• ~15.76 Gbps I/O bandwidth per server
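The per-server bandwidth figure follows from the cartridge's PCIe width; a quick back-of-the-envelope check in Python (the 8 GT/s signaling rate and 128b/130b encoding are standard PCIe Gen3 parameters, not values taken from the slide):

```python
# Rough check of the ~15.76 Gbps per-server figure quoted above.
GT_PER_LANE = 8.0            # gigatransfers/s per PCIe Gen3 lane
ENCODING = 128.0 / 130.0     # usable payload fraction after 128b/130b encoding
LANES_PER_SERVER = 2         # x2 PCIe Gen3 lanes per M142 server

usable_gbps = GT_PER_LANE * ENCODING * LANES_PER_SERVER
print(f"{usable_gbps:.2f} Gbps")   # ~15.75 Gbps, in line with the slide's ~15.76
```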
BRKDCT-1345 © 2015 Cisco and/or its affiliates. All rights reserved. Cisco Public
System Link Technology
System Link Technology Overview
• System Link Technology is built on proven Cisco VIC technology
[Figure: the operating system on each server sees its own PCIe Ethernet devices (eth0, eth1, ...) presented by the System Link ASIC.]
BRKDCT-1345 © 2015 Cisco and/or its affiliates. All rights reserved. Cisco Public
Cisco Innovation -Converged Network Adapter (M71KR-Q/E)
• The initial converged network adapter from Cisco combined two physical PCIe devices on one card:
  • Intel/Broadcom Ethernet adapter
  • QLogic/Emulex HBA
BRKDCT-1345 © 2015 Cisco and/or its affiliates. All rights reserved. Cisco Public
Cisco Innovation – The Cisco VIC (M81KR, 1280, & 1380)
• The Cisco VIC was an extension of the first converged network adapters, but it creates two new types of PCIe devices:
  • vNIC
  • vHBA
BRKDCT-1345 © 2015 Cisco and/or its affiliates. All rights reserved. Cisco Public
Cisco VIC technology vs. SR-IOV
• The operation of the VIC is often confused with SR-IOV.
• SR-IOV allows for the creation of virtual functions on a physical PCIe card.
  • The main difference is that a virtual function does not allow direct configuration and must use the configuration of the primary physical card it is created on.
  • In addition, SR-IOV devices require that the operating system be SR-IOV aware to communicate with the virtual endpoints.
• VIC technology differs because each vNIC or vHBA is a PCIe physical function with full independent configuration options for each device, requiring no OS dependencies.
• It is important to note that VIC technology is SR-IOV capable. For operating systems that can use and/or require SR-IOV support, the capability exists and is supported on the card.
BRKDCT-1345 © 2015 Cisco and/or its affiliates. All rights reserved. Cisco Public
System Link Technology – 3rd Generation of Innovation
• System Link Technology provides the same capabilities as a VIC to configure PCIe devices for use by the server
• The difference with System Link is that it is an ASIC within the chassis and not a PCIe card
• The ASIC is core to the M-Series platform and provides access to I/O resources
• The ASIC connects devices to the compute resources through the system midplane
• System Link provides the ability to access the shared chassis resources
[Figure: SCSI commands from the server are carried through the ASIC to a virtual drive.]
BRKDCT-1345 © 2015 Cisco and/or its affiliates. All rights reserved. Cisco Public
System Link Technology – 3rd Generation of Innovation
• The same ASIC is used in the 3rd Generation VIC
[Figure: the ASIC connects the cartridges to the network uplinks and shared storage.]
BRKDCT-1345 © 2015 Cisco and/or its affiliates. All rights reserved. Cisco Public
Introduction of the sNIC
• The SCSI NIC (sNIC) is the PCIe device that provides access to the storage components of the UCS M-Series chassis
• The sNIC presents the virtual drive to the operating system as a local SCSI device (e.g. LUN 0)
[Figure: SCSI commands flow from the sNIC to the virtual drive.]
BRKDCT-1345 © 2015 Cisco and/or its affiliates. All rights reserved. Cisco Public
Chassis Storage Components
• The chassis storage components consist of:
• Cisco Modular 12 Gb SAS RAID Controller with 2GB Flash
• Drive Mid-plane
• 4 SSD Drives
BRKDCT-1345 © 2015 Cisco and/or its affiliates. All rights reserved. Cisco Public
Storage Controller - Drive Groups
• RAID configuration groups drives together to form a RAID volume
• Drive groups define the operation of the RAID volume
• Depending on the RAID level, a group can be as small as 1 drive (RAID 0) or, in the case of M-Series, up to 4 drives (RAID 0, 1, 5, 10)
[Figure: example – single-drive RAID 0 drive groups (Drive Group 1, Drive Group 3, Drive Group 4).]
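To make the drive-group rules above concrete, here is a minimal sketch in Python. It is illustrative only, not the UCS Manager API; the minimum drive counts are standard RAID properties, and the 4-drive ceiling reflects the four SSD bays in the chassis:

```python
# Illustrative model of an M-Series drive group; not a Cisco tool or API.
MIN_DRIVES = {"RAID0": 1, "RAID1": 2, "RAID5": 3, "RAID10": 4}
CHASSIS_SSD_BAYS = 4

def validate_drive_group(raid_level: str, drive_slots: list[int]) -> None:
    """Raise if the proposed drive group is not buildable in the chassis."""
    if raid_level not in MIN_DRIVES:
        raise ValueError(f"unsupported RAID level {raid_level}")
    if len(drive_slots) < MIN_DRIVES[raid_level]:
        raise ValueError(f"{raid_level} needs >= {MIN_DRIVES[raid_level]} drives")
    if len(set(drive_slots)) != len(drive_slots) or max(drive_slots) > CHASSIS_SSD_BAYS:
        raise ValueError("drive slots must be unique and within the 4 chassis bays")

validate_drive_group("RAID1", [1, 2])   # OK: mirrored pair
validate_drive_group("RAID0", [3])      # OK: single-drive RAID 0 group
```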
BRKDCT-1345 © 2015 Cisco and/or its affiliates. All rights reserved. Cisco Public
Storage Controller - Virtual Drives
• After creating a drive group on a controller, one must then create a virtual drive to be presented as a LUN (drive) to the operating system
[Figure: Drive Group 1 (RAID 1, 1.6TB) and Drive Group 2 (RAID 0, 1.6TB) on the storage controller.]
BRKDCT-1345 © 2015 Cisco and/or its affiliates. All rights reserved. Cisco Public
Storage Capabilities
• The Cisco Modular Storage Controller supports:
  • 16 drive groups
  • 64 virtual drives across all drive groups at initial ship
  • 64 virtual drives in a single drive group
• UCS M-Series supports:
  • 2 virtual drives per server (service profile)
  • Service profile mobility within the chassis
• All virtual drives in a drive group have the same r/w and stripe size settings
• Drive groups configured as fault-tolerant RAID groups will support:
  • UCSM reporting of a degraded virtual drive
  • UCSM reporting of a failed drive
  • Hot swap capabilities from the back of the chassis
  • Automatic RAID rebuild
  • UCS Manager status reporting of the RAID rebuild
• Maximum flexibility:
  • Virtual drives can be different sizes within the drive group
  • Drive groups can have different numbers of virtual drives
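The limits above lend themselves to a simple sanity check. A hedged sketch in Python (illustrative only, not a Cisco tool), using the initial-ship numbers from this slide:

```python
# Check a proposed storage layout against the initial-ship limits quoted above.
MAX_DRIVE_GROUPS = 16
MAX_VDS_TOTAL = 64             # across all drive groups at initial ship
MAX_VDS_PER_GROUP = 64
MAX_VDS_PER_SERVICE_PROFILE = 2

def check_layout(vds_per_group: dict[str, int], vds_per_profile: dict[str, int]) -> bool:
    if len(vds_per_group) > MAX_DRIVE_GROUPS:
        return False
    if sum(vds_per_group.values()) > MAX_VDS_TOTAL:
        return False
    if any(n > MAX_VDS_PER_GROUP for n in vds_per_group.values()):
        return False
    return all(n <= MAX_VDS_PER_SERVICE_PROFILE for n in vds_per_profile.values())

# 16 boot mirrors in one group, 16 data volumes split across two more groups,
# and two virtual drives per server-node service profile: within the limits.
print(check_layout({"dg1": 16, "dg2": 8, "dg3": 8},
                   {f"node{i}": 2 for i in range(16)}))   # True
```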
BRKDCT-1345 © 2015 Cisco and/or its affiliates. All rights reserved. Cisco Public
Mapping Disk resources to M-Series Servers
• From the perspective of the server in the cartridge, the storage controller and the virtual drive are local resources
• Through the use of policies and service profiles, UCS Manager creates the sNIC in the System Link Technology as a PCIe endpoint for the server
• I/O from the operating system is sent through the configured sNIC
• The sNIC delivers the SCSI commands to the virtual drive through the PCIe architecture in the System Link Technology
• The end result is SCSI drives attached to the servers as LOCAL storage
[Figure: the host PCIe interface maps virtual drives to OS devices – vd5 (200GB) seen as /dev/sda or C:\, vd29 (100GB) seen as /dev/sdb or D:\ – carved from Drive Group 1 (RAID 1, 1.6TB) and Drive Group 2 (RAID 0, 1.6TB).]
BRKDCT-1345 © 2015 Cisco and/or its affiliates. All rights reserved. Cisco Public
SCSI Packet Flow
[Figure: a SCSI read issued on /dev/sda (LUN 0, 100GB) by a server node is steered by System Link to that server's virtual drive on the shared controller (RAID 0 Drive Group 3, 1.6TB).]
Example virtual drive mapping:
Cartridge 1 Server 1 -> vd25
Cartridge 1 Server 2 -> vd16
Cartridge 2 Server 1 -> vd9
Cartridge 2 Server 2 -> vd22
Cartridge 3 Server 1 -> vd12
Cartridge 3 Server 2 -> vd29
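Conceptually, the per-server mapping above is a lookup table maintained by the System Link ASIC. A minimal sketch of that idea in Python, reusing the mapping on this slide (illustrative only):

```python
# The ASIC's job, conceptually: map (cartridge, server) to a virtual drive ID
# so every server's "local" LUN lands on its own slice of the shared SSDs.
VD_MAP = {
    (1, 1): "vd25", (1, 2): "vd16",
    (2, 1): "vd9",  (2, 2): "vd22",
    (3, 1): "vd12", (3, 2): "vd29",
}

def route_scsi(cartridge: int, server: int, op: str = "READ") -> str:
    vd = VD_MAP[(cartridge, server)]
    return f"{op} on /dev/sda of cartridge {cartridge} server {server} -> {vd}"

print(route_scsi(3, 2))   # READ on /dev/sda of cartridge 3 server 2 -> vd29
```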
BRKDCT-1345 © 2015 Cisco and/or its affiliates. All rights reserved. Cisco Public
System Link vs. MR-IOV
• MR-IOV is a PCIe specification that allows multiple end-host CPUs to access a PCIe endpoint connected through a multi-root aware (MRA) PCIe switch
  • The same endpoint is shared between the hosts and must be able to identify and communicate with each host directly
  • Protocol changes are introduced to PCIe to support MR-IOV
  • Operating systems must understand and support these protocol changes to be MR-IOV aware
• System Link DOES NOT require MR-IOV to support multi-host access to storage devices
  • System Link Technology provides a single root port that connects the ASIC to the storage subsystem
  • The sNIC and SCSI commands from the host are translated by System Link directly to the controller, so from the controller's perspective it is only communicating with one host (the System Link Technology ASIC)
  • From the perspective of the M-Series server, the PCIe endpoint is the sNIC adapter and the storage device is the virtual drive provided through the mapping in the System Link Technology ASIC
[Figure: with MR-IOV, the MRA PCIe switch shares one PCIe endpoint among multiple CPUs and the same endpoint must serve each host; with System Link, the sNIC is the PCIe endpoint for the CPU and the storage controller is a PCIe endpoint for the ASIC only.]
BRKDCT-1345 © 2015 Cisco and/or its affiliates. All rights reserved. Cisco Public
Mapping Network resources to the M-Series Servers
• The System Link Technology provides the network interface connectivity for all of the servers
• Virtual NICs (vNICs) are created for each server and are mapped to the appropriate fabric through the service profile on UCS Manager
• Servers can have up to 4 vNICs
• The operating system sees each vNIC as a 10Gbps Ethernet interface
• The interfaces can be rate limited and provide QoS marking in hardware
• Interfaces are 802.1Q capable
• Fabric Failover is supported, so in the event of a failure traffic is automatically moved to the second fabric
[Figure: each server's host PCIe interface exposes eth0/eth1, mapped through the ASIC to Fabric Interconnect A and Fabric Interconnect B.]
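As a rough illustration of the vNIC attributes called out above (rate limiting, QoS marking, 802.1Q VLANs, fabric failover), here is a hedged sketch in Python. It is a toy data model, not the UCS Manager object model, and the field names are assumptions:

```python
from dataclasses import dataclass, field

# Toy model of a vNIC definition as a service profile might express it.
@dataclass
class VNic:
    name: str
    fabric: str                          # "A" or "B"
    fabric_failover: bool = True         # fail over to the other fabric
    rate_limit_mbps: int | None = None   # hardware rate limiting
    qos_cos: int | None = None           # 802.1p marking applied in hardware
    allowed_vlans: list[int] = field(default_factory=list)  # 802.1Q trunking

MAX_VNICS_PER_SERVER = 4

def define_server_nics(nics: list[VNic]) -> list[VNic]:
    if len(nics) > MAX_VNICS_PER_SERVER:
        raise ValueError("M-Series servers support up to 4 vNICs")
    return nics

web_node = define_server_nics([
    VNic("eth0", fabric="A", rate_limit_mbps=1000, qos_cos=0, allowed_vlans=[10]),
    VNic("eth1", fabric="B", rate_limit_mbps=1000, qos_cos=0, allowed_vlans=[20]),
])
```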
BRKDCT-1345 © 2015 Cisco and/or its affiliates. All rights reserved. Cisco Public
Networking Capabilities
• The System Link ASIC supports 1024 virtual devices. Scale for a particular M-Series chassis will depend on the number of uplinks to the fabric interconnect
• At initial ship, System Link supports Ethernet traffic only. The M-Series servers can connect to external storage volumes such as NFS, CIFS, HTTPS, or iSCSI (FCoE to be supported in a future release)
BRKDCT-1345 © 2015 Cisco and/or its affiliates. All rights reserved. Cisco Public
Service Profile Mobility
[Figure: a UCS service profile template (unified device management, network policy) moved between compute nodes within the chassis.]
BRKDCT-1345 © 2015 Cisco and/or its affiliates. All rights reserved. Cisco Public
UCS Manager for M-Series
• Version 2.5 (M release) at initial shipping
  • M-Series only
  • Ethernet only
  • Support for 2 LUNs per host
  • Support for 4 vNICs per host
• Merge of B-, C-, and M-Series into a single build will come in release 3.1(1) or later
• UCS Central manageability planned for the next release after initial shipping
BRKDCT-1345 © 2015 Cisco and/or its affiliates. All rights reserved. Cisco Public
Mapping application architecture to infrastructure
Application Profiles
• Service profiles allow you to build an application profile for each type of node within the application stack
• The service profile defines the node type, network connectivity, and storage configuration for the node
[Figure: service profile templates (network policy, storage policy) map to application profiles.]
BRKDCT-1345 © 2015 Cisco and/or its affiliates. All rights reserved. Cisco Public
Application Profiles
• Cloud applications are built to withstand the loss of multiple nodes, but the individual nodes should be striped across the chassis
• Striping the nodes also allows for better distribution of the network and storage resources
[Figure: service profile templates (network policy, storage policy, server policy) map to application profiles striped across chassis.]
BRKDCT-1345 © 2015 Cisco and/or its affiliates. All rights reserved. Cisco Public
Mapping Applications to Shared Resources
• Web Node
  50GB Drive, RAID 1
  2x1Gbps Ethernet
• Content Node
  50GB Drive, RAID 1
  200GB Drive
  1x1Gbps & 1x10Gbps Ethernet
• App Node
  50GB Drive, RAID 1
  200GB Drive
  2x1Gbps & 1x10Gbps Ethernet
• Db Node
  50GB Drive, RAID 1
  250GB Drive
  2x1Gbps & 1x10Gbps Ethernet
• Shared Resources Example
  2x800GB SAS SSD
  2x1.6TB SAS SSD
  2x40Gbps Network Connections
BRKDCT-1345 © 2015 Cisco and/or its affiliates. All rights reserved. Cisco Public
Storage Resource Configuration
• Disk groups will be used by storage profile configurations to specify how the resources are consumed by the server nodes
• Create a RAID 1 disk group with the 2x800GB drives to host the 50GB RAID 1 volume required by each server node
• Create 2 separate RAID 0 disk groups to accommodate the 200GB and 250GB drive requirements of the other server nodes
• Use specific drive numbers for the RAID 0 groups so that you can control the mapping of specific applications to specific drives
• Disk Group 1: 800GB RAID 1 – available for 16x 50GB drives
• Disk Group 2: 1.6TB single-drive RAID 0 – available for 4x 200GB drives and 2x 250GB drives
• Disk Group 3: 1.6TB single-drive RAID 0 – available for 4x 200GB drives and 2x 250GB drives
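A quick capacity check of the layout above (illustrative Python; the sizes are the ones on this slide, and RAID 1 usable capacity is taken as the size of one mirrored drive):

```python
# Does the proposed carve-up actually fit the three disk groups?
def fits(group_usable_gb: float, volumes_gb: list[float]) -> bool:
    return sum(volumes_gb) <= group_usable_gb

# Disk Group 1: 2 x 800GB in RAID 1 -> ~800GB usable, hosting 16 x 50GB boot volumes
print(fits(800, [50] * 16))                 # True (800 == 800)

# Disk Groups 2 and 3: 1 x 1.6TB in RAID 0 -> ~1600GB usable each,
# hosting 4 x 200GB plus 2 x 250GB volumes
print(fits(1600, [200] * 4 + [250] * 2))    # True (1300 <= 1600)
```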
BRKDCT-1345 © 2015 Cisco and/or its affiliates. All rights reserved. Cisco Public
Mapping Disk resources to M-Series Servers
• Create a storage profile for each application type that defines the number of drives, the size of the drives, and which disk group to use
• The storage policy is referenced by the service profile to specify which resources are available for the compute node
[Figure: the App Node storage policy allocates vd5 (50GB, seen as /dev/sda or C:\) from Disk Group 1 (RAID 1, 800GB) and vd29 (200GB, seen as /dev/sdb or D:\) from a RAID 0 disk group (1.6TB).]
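A hedged sketch of the idea in Python. The profile and disk-group names follow this example, but the structure is a toy model, not the UCS Manager storage profile schema, and which data volume comes from which RAID 0 group is an assumption:

```python
# Toy representation of per-application storage profiles: each entry says how
# many virtual drives to create, how big, and from which disk group.
STORAGE_PROFILES = {
    "web-node":     [("boot", 50, "DiskGroup1")],
    "content-node": [("boot", 50, "DiskGroup1"), ("data", 200, "DiskGroup2")],
    "app-node":     [("boot", 50, "DiskGroup1"), ("data", 200, "DiskGroup3")],
    "db-node":      [("boot", 50, "DiskGroup1"), ("data", 250, "DiskGroup3")],
}

def drives_for(profile_name: str) -> list[str]:
    """Expand a storage profile into the virtual drives a service profile gets."""
    return [f"{name}:{size_gb}GB from {group}"
            for name, size_gb, group in STORAGE_PROFILES[profile_name]]

print(drives_for("app-node"))
# ['boot:50GB from DiskGroup1', 'data:200GB from DiskGroup3']
```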
BRKDCT-1345 © 2015 Cisco and/or its affiliates. All rights reserved. Cisco Public
Demo
Creating Storage Policies
Application Networking Needs
• A cloud scale application will typically have many network segments and requirements
• Within the construct of the service profile we can connect the server node to the required networks
• Networks may need to be added or upgraded over the life of the application
[Figure: example segments – DMZ (1Gbps), Internal (1Gbps), Analytics Data (1Gbps), App Cluster (10Gbps), Content (10Gbps), and Application Data over iSCSI/FCoE to the object data store.]
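To show how the segments above might map onto per-node vNICs, here is a minimal sketch in Python. The segment-to-node assignments are assumptions drawn from the figure, not a stated design:

```python
# Assumed mapping of application node types to the network segments they need.
NODE_NETWORKS = {
    "web-node":     [("DMZ", 1), ("Internal", 1)],            # (segment, Gbps)
    "app-node":     [("Internal", 1), ("App Cluster", 10)],
    "db-node":      [("App Cluster", 10), ("Application Data", 10)],
    "content-node": [("Internal", 1), ("Content", 10)],
}

def vnic_plan(node_type: str) -> list[str]:
    """One vNIC per required segment, named and rate-labelled."""
    return [f"eth{i}: {seg} @ {gbps}Gbps"
            for i, (seg, gbps) in enumerate(NODE_NETWORKS[node_type])]

print(vnic_plan("app-node"))   # ['eth0: Internal @ 1Gbps', 'eth1: App Cluster @ 10Gbps']
```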
BRKDCT-1345 © 2015 Cisco and/or its affiliates. All rights reserved. Cisco Public
Demo
Creating Network Policies
Application Profiles
• The service profile is used to combine network, storage, and server policies to map to the needs of a specific application
• The profile defines the server, and it is applied to a compute resource
• Once the service profile is defined, other service profiles can easily be created through cloning or templates
• Through templates, changes can be made to multiple compute nodes at one time
[Figure: service profiles (network policy, storage policy, server policy) map to application profiles.]
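A minimal sketch of composing a service profile from the three policy types and stamping out copies from a template (illustrative Python only; UCS Manager exposes this through its GUI, CLI, and XML API rather than anything resembling this code):

```python
from dataclasses import dataclass, replace

# Toy composition of a service profile from named policies.
@dataclass(frozen=True)
class ServiceProfile:
    name: str
    network_policy: str
    storage_policy: str
    server_policy: str

def clone_from_template(template: ServiceProfile, count: int) -> list[ServiceProfile]:
    """Template-style cloning: same policies, unique names."""
    return [replace(template, name=f"{template.name}-{i+1}") for i in range(count)]

app_template = ServiceProfile("app-node", "app-net", "app-storage", "app-server")
profiles = clone_from_template(app_template, 3)
print([p.name for p in profiles])   # ['app-node-1', 'app-node-2', 'app-node-3']
```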
BRKDCT-1345 © 2015 Cisco and/or its affiliates. All rights reserved. Cisco Public
Demo
Creating Profiles
Scaling and Maintaining the infrastructure
The “Scale” in Cloud-Scale
BRKDCT-1345 © 2015 Cisco and/or its affiliates. All rights reserved. Cisco Public
Scale out automation
• UCS Manager allows servers to be qualified into resource pools depending on available resources
• Service profile templates can be assigned to consume resources from the pools
• When new chassis are added to a system, the resources can immediately be added to pools and have service profiles assigned
[Figure: Web Server Template (network, storage, server policies) draws from a Web Server Pool (1.1GHz + 8GB); App Server Template from an App Server Pool (2.0GHz + 32GB); Db Server Template from a Db Server Pool (2.7GHz + 64GB).]
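A sketch of the pool-qualification idea (illustrative Python; the CPU and memory thresholds are the ones on this slide, everything else is assumed):

```python
# Qualify discovered compute nodes into pools by their resources, then let a
# service profile template consume from the matching pool.
POOL_RULES = {
    "web-pool": {"min_ghz": 1.1, "min_mem_gb": 8},
    "app-pool": {"min_ghz": 2.0, "min_mem_gb": 32},
    "db-pool":  {"min_ghz": 2.7, "min_mem_gb": 64},
}

def qualify(node: dict) -> list[str]:
    """Return every pool whose thresholds this node meets."""
    return [pool for pool, r in POOL_RULES.items()
            if node["ghz"] >= r["min_ghz"] and node["mem_gb"] >= r["min_mem_gb"]]

new_cartridge_nodes = [{"name": "ch2-cart5-srv1", "ghz": 2.7, "mem_gb": 32}]
for n in new_cartridge_nodes:
    print(n["name"], "->", qualify(n))   # ch2-cart5-srv1 -> ['web-pool', 'app-pool']
```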
BRKDCT-1345 © 2015 Cisco and/or its affiliates. All rights reserved. Cisco Public
Maintenance and Elasticity
• Storage and data are specific to the chassis and not to the server node
• If there is a failure of a critical component, you can move the application to a new server node if one is available
• It is also possible to repurpose a server node during peak times
BRKDCT-1345 © 2015 Cisco and/or its affiliates. All rights reserved. Cisco Public
Component Life-Cycle Management
• Server life-cycle is separate from storage, power, and networking
• You can upgrade memory, processors, and cartridges without the need to change network cabling or rebuild/recover drive data
• As new cartridges are released they can be installed in existing chassis, providing a longer life-cycle for the platform
• As the upstream network is upgraded from 10Gbps to 40Gbps, the chassis platform remains the same
• Drive and controller life-cycle are separated from the individual server nodes
• Drive failures for RAID-protected volumes do not require downtime on the server node for repair
• If a chassis replacement were required, the data from the drives would be recoverable in the replacement chassis
BRKDCT-1345 © 2015 Cisco and/or its affiliates. All rights reserved. Cisco Public
Incorporating Storage into the infrastructure
Object Store
• Within cloud scale applications, object-based network file systems are becoming an important part of the architecture
• The Cisco C3160 dense storage server can be loaded with a cloud scale file system (e.g. Ceph, Swift, Gluster, etc.) to provide this component of the architecture
• This can be added to a UCS infrastructure as an appliance device on the fabric interconnect
• Longer term, the storage server platform will become integrated into UCS Manager
BRKDCT-1345 © 2015 Cisco and/or its affiliates. All rights reserved. Cisco Public
Performance Application Storage
• Many applications are starting to take advantage of high speed application storage
• The Invicta appliance, storage blade, and scale out system provide this type of performance storage infrastructure
• Today the Invicta appliance can be added to an M-Series infrastructure as a network appliance
• The roadmap provides the opportunity to connect to a storage blade, appliance, or scale out system in the future
• The system also provides the possibility for the storage on these systems to be included in a service profile, providing more application automation and flexibility
BRKDCT-1345 © 2015 Cisco and/or its affiliates. All rights reserved. Cisco Public
Demo
Inserting a C3160 into the architecture
Summary
• The UCS M-Series platform provides a dense scale-out platform for cloud scale applications
• Shared resources like networking and local storage make it easy to map applications to specific resources within the infrastructure
• Service profiles provide a means to map the shared resources and abstract the application from the physical infrastructure
• The disaggregation of the server components provides an infrastructure that separates component life-cycles and maintenance and provides system elasticity
• As advanced storage capabilities become an important part of the UCS infrastructure, these components can be utilized within the infrastructure to build a complete cloud architecture
BRKDCT-1345 © 2015 Cisco and/or its affiliates. All rights reserved. Cisco Public
Call to Action
• Visit the World of Solutions for
  – Cisco Data Centre – M4308 M-Series Modular Server
  – Technical Solution Clinics
• Meet the Engineer
• Lunchtime Table Topics
• DevNet Zone
• DevNet Zone
• Recommended Reading: for reading material and further resources for this
session, please visit www.pearson-books.com/CLMilan2015
BRKDCT-1345 © 2015 Cisco and/or its affiliates. All rights reserved. Cisco Public 77
Complete Your Online Session Evaluation
• Please complete your online session
evaluations after each session.
Complete 4 session evaluations
& the Overall Conference Evaluation
(available from Thursday)
to receive your Cisco Live T-shirt.
BRKDCT-1345 © 2015 Cisco and/or its affiliates. All rights reserved. Cisco Public 78