CCNA Cloud CLDFND 210-451 Official Cert Guide

Contents at a Glance
Introduction xxi

Part I Cloud Concepts


Chapter 1 What Is Cloud Computing? 3

Chapter 2 Cloud Shapes: Service Models 29

Part II Cloud Deployments


Chapter 3 Cloud Heights: Deployment Models 57
Chapter 4 Behind the Curtain 87

Part III Server Virtualization for Cloud


Chapter 5 Server Virtualization 119

Chapter 6 Infrastructure Virtualization 149

Chapter 7 Virtual Networking Services and Application Containers 187

Part IV Cloud Storage


Chapter 8 Block Storage Technologies 221

Chapter 9 File Storage Technologies 265

Part V Architectures for Cloud


Chapter 10 Network Architectures for the Data Center: Unified Fabric 301

Chapter 11 Network Architectures for the Data Center: SDN and ACI 363

Chapter 12 Unified Computing 407

Chapter 13 Cisco Cloud Infrastructure Portfolio 457

Chapter 14 Integrated Infrastructures 493

Chapter 15 Final Preparation 517

Glossary 523

Appendix A Answers to Pre-Assessments and Quizzes 539

Appendix B Memory Tables 543

Appendix C Answers to Memory Tables 561

Index 578

Appendix D Study Planner CD



Contents
Introduction xxi
Part I Cloud Concepts
Chapter 1 What Is Cloud Computing? 3
“Do I Know This Already?” Quiz 3
Foundation Topics 7
Welcome to the Cloud Hype 7
Historical Steps Toward Cloud Computing 9
The Many Definitions of Cloud Computing 11
The Data Center 12
Common Cloud Characteristics 14
On-Demand Self-Service 14
Rapid Elasticity 16
Resource Pooling 17
Measured Service 19
Broad Network Access 20
Multi-tenancy 21
Classifying Clouds 22
Around the Corner: Agile, Cloud-Scale Applications, and DevOps 24
Further Reading 26
Exam Preparation Tasks 27
Review All the Key Topics 27
Complete the Tables and Lists from Memory 27
Define Key Terms 27

Chapter 2 Cloud Shapes: Service Models 29


“Do I Know This Already?” Quiz 29
Foundation Topics 32
Service Providers and Information Technology 32
Service-Level Agreement 34
Cloud Providers 34
Infrastructure as a Service 36
Regions and Availability Zones 38
IaaS Example: Amazon Web Services 39
Platform as a Service 43
PaaS Example: Microsoft Azure 45
Software as a Service 49
SaaS Examples 50
Around the Corner: Anything as a Service 52
Further Reading 53

Exam Preparation Tasks 54


Review All the Key Topics 54
Complete the Tables and Lists from Memory 54
Define Key Terms 54

Part II Cloud Deployments


Chapter 3 Cloud Heights: Deployment Models 57
“Do I Know This Already?” Quiz 57
Foundation Topics 61
Public Clouds 61
Risks and Challenges 62
Security 62
Control 63
Cost 64
Private Clouds 65
Community Clouds 67
Hybrid Clouds 69
Cisco Intercloud 70
Cisco Intercloud Fabric 73
Intercloud Fabric Architecture 74
Intercloud Fabric Services 76
Intercloud Fabric Use Cases 83
Around the Corner: Private Cloud as a Service 83
Further Reading 83
Exam Preparation Tasks 84
Review All the Key Topics 84
Complete the Tables and Lists from Memory 84
Define Key Terms 84

Chapter 4 Behind the Curtain 87


“Do I Know This Already?” Quiz 87
Foundation Topics 89
Cloud Computing Architecture 89
Cloud Portal 90
Cloud Orchestrator 94
Cloud Meter 97
Cloud Infrastructure: Journey to the Cloud 99
Consolidation 100
Virtualization 102
Standardization 103

Automation 103
Orchestration 104
Application Programming Interfaces 105
CLI vs API 106
RESTful APIs 111
Around the Corner: OpenStack 115
Further Reading 116
Exam Preparation Tasks 117
Review All the Key Topics 117
Complete the Tables and Lists from Memory 117
Define Key Terms 117

Part III Server Virtualization for Cloud


Chapter 5 Server Virtualization 119
“Do I Know This Already?” Quiz 119
Foundation Topics 122
Introduction to Servers and Operating Systems 122
What Is a Server? 122
Server Operating Systems 124
Server Virtualization History 125
Mainframe Virtualization 126
Virtualization on x86 127
Server Virtualization Definitions 128
Hypervisor 129
Hypervisor Types 130
Virtual Machines 130
Virtual Machine Manager 132
Hypervisor Architectures 132
VMware vSphere 133
Microsoft Hyper-V 133
Linux Kernel-based Virtual Machine 134
Multi-Hypervisor Environments 135
Server Virtualization Features 136
Virtual Machine High Availability 136
Virtual Machine Live Migration 137
Resource Load Balancing 140
Virtual Machine Fault Tolerance 140
Other Interesting Features 141

Cloud Computing and Server Virtualization 142


Self-Service on Demand 142
Resource Pooling 143
Elasticity 144
Around the Corner: Linux Containers and Docker 144
Further Reading 145
Exam Preparation Tasks 146
Review All Key Topics 146
Complete the Tables and Lists from Memory 146
Define Key Terms 146

Chapter 6 Infrastructure Virtualization 149


“Do I Know This Already?” Quiz 149
Foundation Topics 152
Virtual Machines and Networking 152
An Abstraction for Virtual Machine Traffic Management 152
The Virtual Switch 154
Distributed Virtual Switch 157
Virtual Networking on Other Hypervisors 158
Networking Challenges in Server Virtualization Environments 159
Cisco Nexus 1000V 161
Cisco Nexus 1000V Advanced Features 166
Cisco Nexus 1000V: A Multi-Hypervisor Platform 168
Virtual eXtensible LAN 171
VXLAN in Action 173
How Does VXLAN Solve VLAN Challenges? 177
Standard VXLAN Deployment in Cisco Nexus 1000V 177
VXLAN Gateways 180
Around the Corner: Unicast-Based VXLAN 181
Further Reading 184
Exam Preparation Tasks 185
Review All the Key Topics 185
Complete the Tables and Lists from Memory 185
Define Key Terms 185

Chapter 7 Virtual Networking Services and Application Containers 187


“Do I Know This Already?” Quiz 187
Foundation Topics 190
Virtual Networking Services 190
Service Insertion in Physical Networks 190

Virtual Services Data Path 192


Cisco Virtual Security Gateway 193
Cisco Adaptive Security Virtual Appliance 197
Cisco Cloud Services Router 1000V 199
Citrix NetScaler 1000V 201
Cisco Virtual Wide Area Application Services 205
vPath Service Chains 208
Virtual Application Containers 210
Around the Corner: Service Insertion Innovations 217
Further Reading 218
Exam Preparation Tasks 219
Review All the Key Topics 219
Complete the Tables and Lists from Memory 219
Define Key Terms 219

Part IV Cloud Storage


Chapter 8 Block Storage Technologies 221
“Do I Know This Already?” Quiz 221
Foundation Topics 224
What Is Data Storage? 224
Hard Disk Drives 225
RAID Levels 226
Disk Controllers and Disk Arrays 228
Volumes 231
Accessing Blocks 233
Advanced Technology Attachment 234
Small Computer Systems Interface 235
Fibre Channel Basics 237
Fibre Channel Topologies 238
Fibre Channel Addresses 239
Fibre Channel Flow Control 241
Fibre Channel Processes 241
Fabric Shortest Path First 243
Fibre Channel Logins 245
Zoning 246
SAN Designs 247
Virtual SANs 250
VSAN Definitions 251
VSAN Trunking 253

Zoning and VSANs 254


VSAN Use Cases 255
Internet SCSI 256
Cloud Computing and SANs 258
Block Storage for Cloud Infrastructure 258
Block Storage as a Service 259
Around the Corner: Solid-State Drives 260
Further Reading 261
Exam Preparation Tasks 262
Review All the Key Topics 262
Complete the Tables and Lists from Memory 262
Define Key Terms 263

Chapter 9 File Storage Technologies 265


“Do I Know This Already?” Quiz 265
Foundation Topics 268
What Is a File? 268
File Locations 269
Main Differences Between Block and File Technologies 270
Building a File System 271
File Namespace 272
Linux File Naming Rules 272
Windows File Naming Rules 273
Volume Formatting 274
Extended Filesystems 274
FAT and NTFS 278
Permissions 281
Linux Permissions 281
NTFS Permissions 282
Accessing Remote Files 285
Network File System 286
Common NFS Client Operations 287
Common NFS NAS Operations 289
Server Message Block 289
Common SMB Client Operations 292
Common SMB NAS Operations 292
Other File Access Protocols 293
Cloud Computing and File Storage 294
File Storage for Cloud Infrastructure 294

File Hosting 294


OpenStack Manila 295
Around the Corner: Object Storage 297
Further Reading 298
Exam Preparation Tasks 299
Review All the Key Topics 299
Complete the Tables and Lists from Memory 299
Define Key Terms 299
Part V Architectures for Cloud
Chapter 10 Network Architectures for the Data Center: Unified Fabric 301
“Do I Know This Already?” Quiz 301
Foundation Topics 304
Attributes of Data Center Networks 304
The Three-Tier Design 305
Device Virtualization 307
Why Use VDCs? 309
Creating VDCs 310
Allocating Resources to VDCs 312
Virtual PortChannels 313
Link Aggregation 315
Creating vPCs 317
Adding vPCs to the Three-Tier Design 319
Fabric Extenders 320
Top-of-Rack Designs 320
End-of-Row and Middle-of-Row Designs 321
Enter the Nexus 2000 322
High-available Fabric Extender Topologies 325
Overlay Transport Virtualization 326
Layer 2 Extension Challenges 327
I Want My OTV! 329
Configuring OTV 332
OTV Site Designs 335
I/O Consolidation 336
Data Center Bridging 338
Priority-based Flow Control 338
Enhanced Transmission Selection 339
Data Center Bridging Exchange 340
Fibre Channel over Ethernet 341
FCoE Definitions 341

Deploying I/O Consolidation 343


I/O Consolidation Designs 346
FabricPath 349
Address Learning with FabricPath 351
Configuring FabricPath 352
FabricPath and Spanning Tree Protocol 354
Introduction to Spine-Leaf Topologies 356
Around the Corner: VXLAN Fabrics 358
Further Reading 360
Exam Preparation Tasks 361
Review All the Key Topics 361
Complete the Tables and Lists from Memory 361
Define Key Terms 361

Chapter 11 Network Architectures for the Data Center: SDN and ACI 363
“Do I Know This Already?” Quiz 363
Foundation Topics 366
Cloud Computing and Traditional Data Center Networks 366
The Opposite of Software-Defined Networking 367
Network Programmability 369
Network Management Systems 369
Automated Networks 370
Programmable Networks 371
SDN Approaches 374
Separation of the Control and Data Planes 375
The OpenFlow Protocol 376
OpenDaylight 378
Software-based Virtual Overlays 381
Application Centric Infrastructure 382
Problems Not Addressed by SDN 382
ACI Architecture 383
ACI Policy Model 385
Concerning EPGs 388
Concerning Contracts 389
Cisco APIC 391
Fabric Management 392
Integration 394
Visibility 395
A Peek into ACI’s Data Plane 396
Integration with Virtual Machine Managers 398

Around the Corner: OpenStack Neutron 399


Further Reading 403
Exam Preparation Tasks 404
Review All the Key Topics 404
Complete the Tables and Lists from Memory 404
Define Key Terms 404

Chapter 12 Unified Computing 407


“Do I Know This Already?” Quiz 407
Foundation Topics 410
Physical Servers in a Virtual World 410
X86 Microarchitecture 411
Physical Server Formats 413
Server Provisioning Challenges 414
Infrastructure Preparation 415
Pre-Operating System Installation Operations 417
Introducing the Cisco Unified Computing System 418
UCS Fabric Interconnects 419
UCS Manager 424
UCS B-Series 426
UCS C-Series 430
UCS Virtual Interface Cards 432
UCS Server Identity 436
Building a Service Profile 437
Policies 442
Cloning 443
Pools 444
Templates 445
UCS Central 449
Cloud Computing and UCS 451
Around the Corner: OpenStack Ironic 453
Further Reading 453
Exam Preparation Tasks 454
Review All the Key Topics 454
Complete the Tables and Lists from Memory 454
Define Key Terms 454

Chapter 13 Cisco Cloud Infrastructure Portfolio 457


“Do I Know This Already?” Quiz 457
Foundation Topics 460

Cisco MDS 9000 Series Multilayer Directors and Fabric Switches 460
Cisco Nexus Data Center Switches 462
Cisco Nexus 1000V Series Switches 462
Cisco Nexus 1100 Cloud Services Platforms 463
Cisco Nexus 2000 Series Fabric Extenders 464
Cisco Nexus 3000 Series Switches 466
Cisco Nexus 5000 Series Switches 469
Cisco Nexus 7000 Series Switches 471
Cisco Nexus 9000 Series Switches 475
Cisco Prime Data Center Network Manager 478
Cisco Unified Computing System 479
Cisco UCS 6200 and 6300 Series Fabric Interconnects 480
Cisco UCS 5100 Series Blade Server Chassis 481
Cisco UCS 2200 Series Fabric Extenders 481
Cisco UCS B-Series Blade Servers 482
Cisco UCS C-Series Rack Servers 482
Cisco UCS Invicta 483
Cisco UCS M-Series Modular Servers 484
Cisco Virtual Networking Services 486
Cisco Adaptive Security Virtual Appliance 486
Cisco Cloud Services Router 1000V 487
Citrix NetScaler 1000V 488
Cisco Virtual Wide-Area Application Services 489
Virtual Security Gateway 490
Exam Preparation Tasks 491
Review All the Key Topics 491
Complete the Tables and Lists from Memory 491
Define Key Terms 491

Chapter 14 Integrated Infrastructures 493


“Do I Know This Already?” Quiz 493
Foundation Topics 497
Modular Data Centers 497
Pool of Devices 497
Custom PODs vs. Integrated Infrastructures 501
FlexPod 503
Vblock 506
VSPEX 508
UCS Integrated Infrastructure for Red Hat OpenStack 510

Around the Corner: Hyperconvergence 510


Further Reading 512
Before We Go 512
Exam Preparation Tasks 514
Review All the Key Topics 514
Define Key Terms 514

Chapter 15 Final Preparation 517


Tools for Final Preparation 517
Pearson Cert Practice Test Engine and Questions 517
Companion Website 517
Pearson IT Certification Practice Test Engine and Questions 518
Install the Software 518
Activate and Download the Practice Exam 519
Activating Other Exams 520
Assessing Exam Readiness 520
Premium Edition eBook and Practice Tests 520
Premium Edition 520
The Cisco Learning Network 520
Memory Tables 521
Chapter-Ending Review Tools 521
Suggested Plan for Final Review/Study 521
Using the Exam Engine 522
Summary 522

Glossary 523

Appendix A Answers to Pre-Assessments and Quizzes 539

Appendix B Memory Tables 543

Appendix C Answers to Memory Tables 561

Index 578

Appendix D Study Planner CD



Icons Used in This Book

(The original pages display the gallery of icon artwork used in this book’s figures,
representing elements such as clouds, branch offices, end users, PCs, laptops, servers,
workstations, mainframes, databases, routers, workgroup and multilayer switches, bridges,
firewalls, the Cisco ASA 5500, server load balancers, FC storage, Wide Area Application
Engines, the Cisco Nexus 1000V VSM, Nexus 2000 Fabric Extenders, Nexus 5000, Nexus
7000, Cisco MDS Multilayer Directors and Multilayer Fabric Switches, and Cisco UCS
components such as the UCS 5108 Blade Server Chassis, UCS C-Series servers, and UCS
6200 Series Fabric Interconnects.)

Command Syntax Conventions


The conventions used to present command syntax in this book are the same conventions
used in the IOS Command Reference. The Command Reference describes these conventions
as follows (a combined example appears after the list):

■ Boldface indicates commands and keywords that are entered literally as shown. In
actual configuration examples and output (not general command syntax), boldface
indicates commands that are manually input by the user (such as a show command).
■ Italic indicates arguments for which you supply actual values.
■ Vertical bars (|) separate alternative, mutually exclusive elements.
■ Square brackets ([ ]) indicate an optional element.
■ Braces ({ }) indicate a required choice.
■ Braces within brackets ([{ }]) indicate a required choice within an optional element.
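
As a combined illustration, consider the following hypothetical syntax entry (a made-up
example for this introduction, not an excerpt from an actual Cisco command reference):

    show interface [{ethernet slot/port | vlan vlan-id}]

Here, show, interface, ethernet, and vlan would be printed in boldface because they are
keywords entered literally, whereas slot/port and vlan-id would be italicized because you
supply their values. The square brackets mark the interface selection as optional, and the
braces within them indicate that, if the option is used, exactly one of the two alternatives
must be chosen.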

Introduction
Working as an information technology professional for many years, I have pursued a con-
siderable number of certifications. However, I have always reserved a special place in my
heart for my first one: Cisco Certified Network Associate (CCNA).

Back in 1999, I was thrilled to discover that having obtained this certification was going
to radically change my career for the better. Undoubtedly, I was being recognized by the
market as a tested network professional, and better job opportunities immediately started
to appear.

What surprised me the most was that the CCNA certification did not dwell too much on
products. Instead, it focused on foundational networking concepts, which I still use today
on a daily basis. Smartly, Cisco had already realized that technologies may quickly change,
but concepts remain consistent throughout the years, like genes that are passed through
countless generations of life forms.

Fast-forwarding 17 years, the world has turned its attention to cloud computing and all
the promises it holds to make IT easy and flexible. But in contrast to the late 1990s, the
explosion of information and opinions that currently floods the Internet causes more
confusion than enlightenment for professionals interested in understanding any IT-related
topic with reasonable depth.

Bringing method and objectivity to such potential chaos, Cisco has launched a brand-new,
associate-level certification: CCNA Cloud. And fortunately, the invitation to write this
book has given me not only the opportunity to systematically explore cloud computing,
but also the personal satisfaction of positively contributing to my favorite certification.

Goals and Methods


Obviously, the primary objective of this book is to help you pass the CCNA Cloud
CLDFND 210-451 Exam. However, as previously mentioned, it is also designed to facili-
tate your learning of foundational concepts underlying cloud computing that will carry
over into your professional job experience; this book is not intended to be an exercise in
rote memorization of terms and technologies.

With the intention of giving you a holistic view of cloud computing and a more reward-
ing learning experience, the order in which I present the material is designed to provide
a logical progression of explanations from basic concepts to complex architectures.
Nevertheless, if you are interested in covering specific gaps in your preparation for
the exam, you can also read the chapters out of the proposed sequence.

Each chapter roughly follows this structure:

■ A description of the business and technological context of the explained technology,
approach, or architecture.
■ An explanation of the challenges addressed by such technology, approach, or
architecture.
■ A detailed analysis that immerses the reader in the main topic of the chapter, including
its characteristics, possibilities, results, and consequences.

■ A thorough explanation of how this technology, approach, or architecture is applicable
to real-world cloud computing environments.
■ A section called “Around the Corner” that points out related topics, trends, and technol-
ogies that you are not specifically required to know for the CCNA Cloud CLDFND 210-
451 exam, but are very important for your knowledge as a cloud computing professional.

Who Should Read This Book?


CCNA Cloud certification candidates are the target audience for this book. However, it is
also designed to offer a proper introduction to fundamental concepts and technologies for
engineers, architects, developers, analysts, and students who are interested in cloud computing.

Strategies for Exam Preparation


Whether you want to read the book in sequence or pick specific chapters to cover knowl-
edge gaps, I recommend that you include the following guidelines in your study for the
CCNA Cloud CLDFND 210-451 exam each time you start a chapter:

■ Answer the “Do I Know This Already?” quiz questions to assess your expertise in the
chapter topic.
■ Check the results in Appendix A, “Answers to Pre-Assessments and Quizzes.”
■ Based on your results, read the Foundation Topics sections, giving special attention to
the sections corresponding to the questions you have not answered correctly.
■ After the first reading, try to complete the memory tables and define the key terms
from the chapter, and verify the results in the appendices. If you make a mistake in a
table entry or the definition of a key term, review the related section.
Remember: discovering gaps in your preparation for the exam is as important as address-
ing them.

Additionally, you can use Appendix D, “Study Planner,” to control the pace of your study
during the first reading of this certification guide as whole. In this appendix, you can
establish goal dates to read the contents of each chapter and reserve time to test what you
have learned through practice tests generated from the Pearson Cert Practice Test engine.

How This Book Is Organized


In an era when blog posts and tweets provide disconnected pieces of information, this
book is intended to serve as a complete learning experience, in which order and consistency
between chapters do matter.

To that end, Chapters 1 through 15 cover the following topics:

■ Chapter 1, “What Is Cloud Computing?”—Unfortunately, massive hype surrounding
cloud computing in the past several years has resulted in more distraction than
certainty for the majority of IT professionals. With lots of different vendors claiming
that cloud environments can only exist via their products, many fundamental aspects of
cloud computing have been simply glossed over or, even worse, undiscovered.

Peeling away these marketing layers, this chapter focuses on the history of cloud com-
puting, from its humble beginnings to its widespread adoption during this decade. As a
theoretical foundation, it explores NIST’s definition of cloud computing and the essen-
tial common characteristics of cloud computing environments.
■ Chapter 2, “Cloud Shapes: Service Models”— Besides using services from established
cloud providers such as Amazon Web Services (AWS) and Microsoft Azure, IT depart-
ments are becoming true cloud service providers within their own organizations. This
chapter examines the implications of this responsibility, analyzing the well-known
cloud service models (Infrastructure as a Service [IaaS], Platform as a Service [PaaS], and
Software as a Service [SaaS]). To put such concepts into practice, all service models are
explained through illustrative real-world examples.
■ Chapter 3, “Cloud Heights: Deployment Models”—An organization may choose to
build a cloud environment for its own exclusive use or choose to share another cloud
environment with one or many other companies. This chapter describes the main
characteristics of private, community, public, and hybrid clouds while also discussing
the reasons for choosing each of these deployment models. Additionally, it dedicates
special focus to the benefits of the Cisco Intercloud strategy, and presents the main
characteristics of the Cisco Intercloud Fabric solution.
■ Chapter 4, “Behind the Curtain”—Building on the conceptual basis provided in the
previous three chapters, this chapter introduces you to the most important implemen-
tation and operation challenges of a cloud computing environment. The chapter pres-
ents the main software and hardware components of a cloud project, the data center
journey into a cloud-based architecture, and essential requirements such as application
programming interfaces (APIs).
After reading this chapter, you will be fully prepared to clearly understand how each
of the technologies explained in the subsequent chapters fit into cloud computing
deployments.
■ Chapter 5, “Server Virtualization”—The exploration of cloud computing infrastruc-
ture begins in earnest with this chapter, which analyzes server virtualization as a major
enabling technology of cloud computing environments. After quickly addressing the
origins and main features of server virtualization, the chapter explains how it differs
from cloud computing and, most importantly, what must be done to adapt server virtu-
alization environments to the automation required by cloud computing environments.
■ Chapter 6, “Infrastructure Virtualization”—Data exchange is essential to any
application, regardless of whether it belongs to a server virtualization environment.
Nevertheless, connectivity presents particular challenges when virtual machines must
communicate with each other and with the outside world. On the other hand, cloud
networking faces additional constraints because standardization and automation have
become required design factors in such projects. This chapter presents the main prin-
ciples of and new technologies for virtual and cloud networking through practical
examples and clear explanations.
■ Chapter 7, “Virtual Networking Services and Application Containers”—As virtual and
cloud networking have evolved, networking services that used to be deployed only as
physical appliances can now be ported into virtual machines. These virtual networking
services leverage the advantages of server virtualization environments to offer benefits that
were unimaginable with their physical counterparts. Besides exploring these services using
real-world examples, this chapter also addresses the concept of application containers,
which can be used to secure tenants within a cloud computing environment.
■ Chapter 8, “Block Storage Technologies”—Data processing, transmission, and stor-
age technologies have always been intertwined in computer science: any change to one
technology will always produce effects on the other two. Consequently, storage tech-
nologies have evolved to keep pace with the liberal use of virtual servers and virtual
networks in cloud computing.
This chapter explores block storage provisioning concepts and the most widely used
technologies within such context, such as SAN and disk arrays.
■ Chapter 9, “File Storage Technologies”—Files are arguably the most popular method
of data storage due to their simplicity and scale. This chapter explores concepts and
technologies that support file systems for cloud computing, such as NAS and file shar-
ing protocols.
■ Chapter 10, “Network Architectures for the Data Center: Unified Fabric”—In
the late 2000s, Cisco introduced numerous innovations to data center networking
through its Unified Fabric architecture. This chapter focuses on the most impactful of
these modernizations, including device virtualization (VDCs and their relationship to
VLANs and VRF instances), virtual PortChannels, Fabric Extenders, Overlay Transport
Virtualization (OTV), and Layer 2 Multipathing with FabricPath.
■ Chapter 11, “Network Architectures for the Data Center: SDN and ACI”—Cloud
networking requires a robust physical infrastructure with intrinsic support for dynamic
and scalable designs. This chapter explains two cutting-edge architectures for data
center networks: Software-Defined Networking (SDN) and Cisco Application Centric
Infrastructure (ACI).
■ Chapter 12: “Unified Computing”—Although many IT professionals may view servers
as self-sufficient devices within a data center, Cisco Unified Computing System (UCS)
encompasses technologies that closely interact with all architectures presented in the
previous chapters. This chapter introduces the main components of Cisco UCS and
explains why this solution was designed from the ground up to be the best server archi-
tecture for cloud computing environments.
■ Chapter 13, “Cisco Cloud Infrastructure Portfolio”—This chapter briefly describes
the Cisco products that are used to build optimal cloud computing infrastructures. It is
designed to provide a quick reference guide of the ever-evolving family of Cisco prod-
ucts and to materialize the theoretical concepts explained in the previous chapters.
■ Chapter 14: “Integrated Infrastructures”—Cloud computing environments require
levels of speed and elasticity that have challenged how data centers are designed and
expanded. Using the concept of pool of devices (POD), multiple companies have
formed alliances to provide standardized integrated platforms that include server, net-
working, storage, and virtualization software as a predictable cloud module. This chap-
ter explains the advantages of such an approach and explores the main similarities and
differences between FlexPod (Cisco and NetApp), Vblock (VCE), VSPEX (EMC), and
UCSO (Cisco and Red Hat).

■ Chapter 15: “Final Preparation”— Considering you have learned the content
explained in the certification guide, this chapter includes guidelines and tips that are
intended to support your study until you take your exam.

Certification Exam Topics and This Book


Although this certification guide covers all topics from the CCNA Cloud CLDFND 210-
451 Exam, it does not follow the exact order of the exam blueprint published by Cisco.
Instead, the chapter sequence is purposely designed to enhance your learning through a
gradual progression of concepts.

Table I-1 lists each exam topic in the blueprint along with a reference to the book chapter
that covers the topic.

Table I-1 CLDFND Exam 210-451 Topics and Chapter References


CLDFND 210-451 Exam Topic                                  Chapter(s) in Which Topic Is Covered
1.0 Cloud Characteristics and Models 1, 2
1.1 Describe common cloud characteristics 1
1.1.a On-demand self-service 1
1.1.b Elasticity 1
1.1.c Resource pooling 1
1.1.d Metered service 1
1.1.e Ubiquitous network access (smartphone, tablet, mobility) 1
1.1.f Multi-tenancy 1
1.2 Describe Cloud Service Models 2
1.2.a Infrastructure as a Service (IaaS) 2
1.2.b Software as a Service (SaaS) 2
1.2.c Platform as a Service (PaaS) 2
2.0 Cloud Deployment 3
2.1 Describe cloud deployment models 3
2.1.a Public 3
2.1.b Private 3
2.1.c Community 3
2.1.d Hybrid 3
2.2 Describe the Components of the Cisco Intercloud Solution 3
2.2.a Describe the benefits of Cisco Intercloud 3
2.2.b Describe Cisco Intercloud Fabric Services 3
3.0 Basic Knowledge of Cloud Compute 5, 12, 13
3.1 Identify key features of Cisco UCS 12, 13
3.1.a Cisco UCS Manager 12
3.1.b Cisco UCS Central 12
3.1.c B-Series 12, 13
3.1.d C-Series 12, 13
3.1.e Server identity (profiles, templates, pools) 12
3.2 Describe Server Virtualization 5
3.2.a Basic knowledge of different OS and hypervisors 5
4.0 Basic Knowledge of Cloud Networking 6, 7, 10, 11, 13
4.1 Describe network architectures for the data center 10, 11, 13
4.1.a Cisco Unified Fabric 10, 13
4.1.a.1 Describe the Cisco nexus product family 10, 13
4.1.a.2 Describe device virtualization 10
4.1.b SDN 11
4.1.b.1 Separation of control and data 11
4.1.b.2 Programmability 11
4.1.b.3 Basic understanding of Open Daylight 11
4.1.c ACI 11
4.1.c.1 Describe how ACI solves the problem not addressed by SDN 11
4.1.c.2 Describe benefits of leaf/spine architecture 10
4.1.c.3 Describe the role of APIC Controller 11
4.2 Describe Infrastructure Virtualization 6, 7, 13
4.2.a Difference between vSwitch and DVS 6
4.2.b Cisco Nexus 1000V components 6, 13
4.2.b.1 VSM 6, 13
4.2.b.2 VEM 6, 13
4.2.b.3 VSM appliance 6, 13
4.2.c Difference between VLAN and VXLAN 6
4.2.d Virtual networking services 7
4.2.e Define Virtual Application Containers 7
4.2.e.1 Three-tier application container 7
4.2.e.2 Custom container 7
5.0 Basic Knowledge of Cloud Storage 8, 9, 10, 13, 14
5.1 Describe storage provisioning concepts 8
5.1.a Thick 8
5.1.b Thin 8
5.1.c RAID 8
5.1.d Disk pools 8
5.2 Describe the difference between all the storage access technologies 8, 9
5.2.a Difference between SAN and NAS; block and file 9
5.2.b Block technologies 8
5.2.c File technologies 9
5.3 Describe basic SAN storage concepts 8
5.3.a Initiator, target, zoning 8
5.3.b VSAN 8
5.3.c LUN 8
5.4 Describe basic NAS storage concepts 9
5.4.a Shares / mount points 9
5.4.b Permissions 9
5.5 Describe the various Cisco storage network devices 8, 10, 13
5.5.a Cisco MDS family 8, 13
5.5.b Cisco Nexus family 10, 13
5.5.c UCS Invicta (Whiptail) 8, 13
5.6 Describe various integrated infrastructures 14
5.6.a FlexPod (NetApp) 14
5.6.b Vblock (VCE) 14
5.6.c VSPEX (EMC) 14
5.6.d OpenBlock (Red Hat) 14

The CCNA Cloud CLDFND 210-451 exam can have topics that emphasize different
functions or features, and some topics can be rather broad and generalized. The goal
of this book is to provide the most comprehensive coverage to ensure that you are well
prepared for the exam. Although some chapters might not address specific exam topics,
they provide a foundation that is necessary for a clear understanding of important top-
ics. Your short-term goal might be to pass this exam, but your long-term goal should be
to become a qualified cloud professional.

It is also important to understand that this book is a “static” reference, whereas the exam
topics are dynamic. Cisco can and does change the topics covered on certification exams
often.

This exam guide should not be your only reference when preparing for the certifica-
tion exam. You can find a wealth of information available at Cisco.com that covers each
topic in great detail. If you think that you need more detailed information on a specific
topic, read the Cisco documentation that focuses on that topic.

Taking the CCNA CLDFND 210-451 Exam


As with any Cisco certification exam, you should strive to be thoroughly prepared
before taking the exam. There is no way to determine exactly what questions are on the
exam, so the best way to prepare is to have a good working knowledge of all subjects
covered on the exam. Schedule yourself for the exam and be sure to be rested and ready
to focus when taking the exam.

The best place to find out about the latest available Cisco training and certifications is
under the Training & Events section at Cisco.com.

Tracking Your Status


You can track your certification progress by checking
http://www.cisco.com/go/certifications/login. You must create an account the first time
you log in to the site.

Cisco Certifications in the Real World


Cisco is one of the most widely recognized names in the IT industry. Cisco Certified
cloud specialists bring quite a bit of knowledge to the table because of their deep under-
standing of cloud technologies, standards, and designs. This is why the Cisco certifica-
tion carries such high respect in the marketplace. Cisco certifications demonstrate to
potential employers and contract holders a certain professionalism, expertise, and dedi-
cation required to complete a difficult goal. If Cisco certifications were easy to obtain,
everyone would have them.

Exam Registration
The CCNA Cloud CLDFND 210-451 exam is a computer-based exam, with around 55
to 65 multiple-choice, fill-in-the-blank, list-in-order, and simulation-based questions.
You can take the exam at any Pearson VUE (http://www.pearsonvue.com) testing center.

According to Cisco, the exam should last about 90 minutes. Be aware that when you
register for the exam, you might be instructed to allocate an amount of time to take the
exam that is longer than the testing time indicated by the testing software when you
begin. The additional time is for you to get settled in and to take the tutorial about the
test engine.

Companion Website
Register this book to get access to the Pearson IT Certification test engine and other study
materials plus additional bonus content. Check this site regularly for new and updated
postings written by the author that provide further insight into the more troublesome top-
ics on the exam. Be sure to check the box that you would like to hear from us to receive
updates and exclusive discounts on future editions of this product or related products.
To access this companion website, follow the steps below:

Step 1 Go to www.pearsonITcertification.com/register and log in or create a new account.

Step 2 Enter the ISBN: 9781587147005

Step 3 Answer the challenge question as proof of purchase.

Step 4 Click on the “Access Bonus Content” link in the Registered Products section
of your account page, to be taken to the page where your downloadable
content is available.

Please note that many of our companion content files can be very large, especially image
and video files.

If you are unable to locate the files for this title by following these steps, please
visit www.pearsonITcertification.com/contact and select the “Site Problems/Comments”
option. Our customer service representatives will assist you.

Pearson IT Certification Practice Test Engine and Questions
The companion website includes the Pearson IT Certification Practice Test engine—software
that displays and grades a set of exam-realistic multiple-choice questions. Using the Pearson
IT Certification Practice Test engine, you can either study by going through the questions in
Study Mode, or take a simulated exam that mimics real exam conditions. You can also serve
up questions in a Flash Card Mode, which will display just the question and no answers, chal-
lenging you to state the answer in your own words before checking the actual answers to
verify your work.

The installation process requires two major steps: installing the software and then activat-
ing the exam. The website has a recent copy of the Pearson IT Certification Practice Test
engine. The practice exam (the database of exam questions) is not on this site.

NOTE: The cardboard case in the back of this book includes a piece of paper. The paper
lists the activation code for the practice exam associated with this book. Do not lose the
activation code. On the opposite side of the paper from the activation code is a unique, one-
time-use coupon code for the purchase of the Premium Edition eBook and Practice Test.

Install the Software


The Pearson IT Certification Practice Test is a Windows-only desktop application. You
can run it on a Mac using a Windows virtual machine, but it was built specifically for the
PC platform. The minimum system requirements are as follows:

■ Windows 10, Windows 8.1, or Windows 7


■ Microsoft .NET Framework 4.0 Client
■ Pentium-class 1GHz processor (or equivalent)
■ 512MB RAM
■ 650MB disk space plus 50MB for each downloaded practice exam
■ Access to the Internet to register and download exam databases

The software installation process is routine as compared with other software installation pro-
cesses. If you have already installed the Pearson IT Certification Practice Test software from
another Pearson product, there is no need for you to reinstall the software. Simply launch
the software on your desktop and proceed to activate the practice exam from this book by
using the activation code included in the access code card sleeve in the back of the book.

The following steps outline the installation process:

Step 1 Download the exam practice test engine from the companion site.

Step 2 Respond to Windows prompts as with any typical software installation process.

The installation process will give you the option to activate your exam with the activa-
tion code supplied on the paper in the cardboard sleeve. This process requires that you
establish a Pearson website login. You need this login to activate the exam, so please do
register when prompted. If you already have a Pearson website login, there is no need to
register again. Just use your existing login.

Activate and Download the Practice Exam


Once the exam engine is installed, you should then activate the exam associated with this
book (if you did not do so during the installation process) as follows:

Step 1 Start the Pearson IT Certification Practice Test software from the Windows
Start menu or from your desktop shortcut icon.

Step 2 To activate and download the exam associated with this book, from the My
Products or Tools tab, click the Activate Exam button.

Step 3 At the next screen, enter the activation key from the paper inside the cardboard
sleeve in the back of the book. Once entered, click the Activate button.
Step 4 The activation process will download the practice exam. Click Next, and then
click Finish.

When the activation process completes, the My Products tab should list your new exam.
If you do not see the exam, make sure that you have selected the My Products tab on the
menu. At this point, the software and practice exam are ready to use. Simply select the
exam and click the Open Exam button.

To update a particular exam you have already activated and downloaded, display the
Tools tab and click the Update Products button. Updating your exams will ensure that
you have the latest changes and updates to the exam data.

If you want to check for updates to the Pearson Cert Practice Test exam engine software,
display the Tools tab and click the Update Application button. You can then ensure that
you are running the latest version of the software engine.

Activating Other Exams


The exam software installation process and the registration process have to happen only
once. Then, for each new exam, only a few steps are required. For instance, if you buy
another Pearson IT Certification Cert Guide, extract the activation code from the card-
board sleeve in the back of that book; you do not even need the exam engine at this
point. From there, all you have to do is start the exam engine (if not still up and running)
and perform Steps 2 through 4 from the previous list.

Assessing Exam Readiness


Exam candidates never really know whether they are adequately prepared for the exam until
they have completed about 30% of the questions. At that point, if you are not prepared, it
is too late. The best way to determine your readiness is to work through the “Do I Know
This Already?” quizzes at the beginning of each chapter and review the foundation and key
topics presented in each chapter. It is best to work your way through the entire book unless
you can complete each subject without having to do any research or look up any answers.

Premium Edition eBook and Practice Tests


This book also includes an exclusive offer for 70% off the Premium Edition eBook and
Practice Tests edition of this title. Please see the coupon code included with the card-
board sleeve for information on how to purchase the Premium Edition.

This chapter covers the following topics:

■ Welcome to the Cloud Hype

■ Historical Steps Toward Cloud Computing

■ The Many Definitions of Cloud Computing

■ The Data Center

■ Common Cloud Characteristics

■ Classifying Clouds

This chapter covers the following exam objectives:

■ 1.1 Describe common cloud characteristics


■ 1.1.a On-demand self-service
■ 1.1.b Elasticity
■ 1.1.c Resource pooling
■ 1.1.d Metered service
■ 1.1.e Ubiquitous network access (smartphone, tablet, mobility)
■ 1.1.f Multi-tenancy
CHAPTER 1

What Is Cloud Computing?


Not too long ago (2011), many technology enthusiasts were predicting that cloud
computing would address all information technology challenges. And rather loudly, they
had already declared the cloud as the decade’s panacea.

Although I had been led astray earlier in my career by hyperbolic statements predicting
the revolutionary impact of one technology or another on the future of IT, it was hard not
to be impressed by all the promises associated with cloud computing: agility, simplicity,
efficiency, and control. It just seemed the perfect fit for the exceedingly complex world of
IT, especially in my area of specialization: data centers.

But like other seasoned IT professionals, I now have a healthy level of skepticism and thus
have braced myself for the front of “cloud computing” offerings from literally thousands
of manufacturers, vendors, integrators, and service providers. Many of these companies
have latched onto the cloud movement in hope of rebranding their standard products and
services with the new and hot “cloud” moniker…and many of their customers are buying
into the hype.

Thankfully, within a relatively short time, informed CIOs and IT managers realized that
cloud computing is not a miraculous product, solution, or technology but rather a model
that enables them to exploit computing resources in a new and cost-efficient manner. And
through the efforts of organizations such as the U.S. National Institute of Standards and
Technology (NIST), cloud computing has been appropriately defined as a new access
model for IT, created to solve problems that are ingrained in the manual operations that still
encumber IT departments in myriad organizations around the world.

The CLDFND exam requires knowledge about the common characteristics of cloud
computing as defined by NIST: on-demand self-service, rapid elasticity, resource pooling,
broad network access, and measured service. It also demands understanding about a
subtopic of resource pooling, multi-tenancy, and its importance to cloud implementations.
To help you master these concepts, this chapter contextualizes the perception of cloud
computing during its hype in the late 2000s, presents some of the historical milestones in
the evolution of computing toward cloud computing, and explains each one of the cloud
essential characteristics using real examples and concepts picked from the daily routine of
an IT professional.

“Do I Know This Already?” Quiz


The “Do I Know This Already?” quiz allows you to assess whether you should read this
entire chapter thoroughly or jump to the “Exam Preparation Tasks” section. If you are in
doubt about your answers to these questions or your own assessment of your knowledge
of the topics, read the entire chapter. Table 1-1 lists the major headings in this chapter and
their corresponding “Do I Know This Already?” quiz questions. You can find the answers in
Appendix A, “Answers to Pre-Assessments and Quizzes.”

Table 1-1 “Do I Know This Already?” Section-to-Question Mapping


Foundation Topics Section Questions
Welcome to the Cloud Hype 1
Historical Steps Toward Cloud Computing 2
The Many Definitions of Cloud Computing 3
The Data Center 4
Common Cloud Characteristics 5–10
Classifying Clouds 11

1. The year 2009 saw a huge interest in cloud computing. Which of the following events
was the biggest influence in creating this “cloud hype”?
a. Cisco Unified Computing System launch in 2009
b. VMware vSphere release 4.0 in 2009
c. Amazon Web Services launch in 2006
d. World financial crisis in 2007-2008
e. Microsoft Windows Server 2008

2. Which of the following options does not represent a fundamental milestone toward
cloud computing in the history of computing?
a. Mainframe time-sharing
b. “Computation as a public utility” (John McCarthy, 1961)
c. “Intergalactic computer network” (J.C.R. Licklider, 1963)
d. Virtual local-area networks (Bellcore, 1984)
e. Salesforce.com launch in 1999

3. Which of the following represents NIST’s definition of cloud computing?


a. “Cloud computing refers to the on-demand delivery of IT resources and
applications via the Internet with pay-as-you-go pricing.”
b. “Cloud computing, often referred to as simply ‘the cloud,’ is the delivery
of on-demand computing resources—everything from applications to data
centers—over the Internet on a pay for use basis.”
c. “IT resources and services that are abstracted from the underlying infrastructure
and provided ‘on-demand’ and ‘at scale’ in a multitenant environment.”
d. “Cloud computing refers to the use of networked infrastructure software and
capacity to provide resources to users in an on-demand environment.”
e. “Cloud computing is a model for enabling ubiquitous, convenient, on-demand
network access to a shared pool of configurable computing resources (e.g.,
networks, servers, storage, applications, and services) that can be rapidly
provisioned and released with minimal management effort or service provider
interaction.”

4. Which of the following are data center resources that can be offered through cloud
computing? (Choose all that apply.)
a. Building
b. Server
c. Raised floor
d. Cooling system
e. Data storage
f. Network bandwidth
g. Server cabinets

5. Which of the following tools gives cloud end users access to request resources?
a. Service catalog in web portal
b. Mailer group
c. 1-800 telephone number
d. None; requests are always delegated to the IT department
e. SLA

6. Which of the following options characterizes elasticity according to the NIST
definition of cloud computing?
a. Identical cloud resources are provisioned in different cloud computing
environments.
b. Cloud computing resources can be expanded but never decreased.
c. Cloud capabilities can be scaled rapidly outward and inward according to
demand.
d. Cloud resources are doubled after at least 24 hours.
e. The leasing period of a resource can be extended for free.

7. What option best defines the opposite of the NIST essential characteristic “resource
pooling” for cloud computing?
a. Resource clusters
b. Sharing
c. Resources that can be easily reassigned
d. Grouping of similar resources
e. Silos

8. Which of the following options are direct benefits from the cloud computing
measured service characteristic? (Choose all that apply.)
a. Automatic control
b. Elasticity
c. Resource optimization
d. Security
e. Risk management
f. Transparency between provider and consumer

9. Which of the following options represent devices that can utilize cloud resources?
(Choose all that apply.)
a. Personal computer
b. Mobile phones
c. Tablets
d. Mainframe terminal
e. Offline laptop

10. What is a tenant in the context of cloud computing?


a. An organization
b. A single user account
c. Any application that requires isolation from other tenants
d. A department
e. A community of users

11. Which of the following options represent NIST methods of classifying cloud
implementations? (Choose all that apply.)

a. Providers
b. Deployment models
c. OPEX and CAPEX
d. Service models

Foundation Topics
Welcome to the Cloud Hype
It has been a while since IT was considered just a boring subject restricted to water cooler
conversations. As the 21st century welcomed a new generation unaware of a world without
the Internet or mobile phones, IT naturally became an integral part of the strategy of business
corporations and public sector companies. And with almost all of their transactions
based on electronic data and applications, these organizations realized that the content of the
data center had become much more valuable than all of their combined material assets.

At the time of this writing, IT bears a striking resemblance to the fashion industry, where inno-
vative concepts and paradigm shifts are introduced to huge acclaim and are strongly promoted
as the latest trend (even if they may appear unsuitable for present needs). Some of these
campaigns are so overwhelming that they end up fomenting a period of hype in which many
organizations include the technology du jour into their short-term IT plan (sometimes without
having enough time to understand its true value to the company objectives).

Although the precise origins of the term cloud computing are fittingly nebulous, its hype
certainly peaked around 2011, as Figure 1-1 demonstrates.

Figure 1-1 Peak of the Cloud Hype

Figure 1-1 depicts results from Google Trends, a tool that expresses the interest in particular
keywords over time based on the history of searches conducted via its wildly popular
search engine. As you can see, interest in the term cloud computing arose at the end of
2008, a year whose mere mention gives you a hint as to the root cause of the cloud hype.

Contrary to what many vendors may claim, no technological innovation was able to
raise more interest in cloud computing than the 2007-2008 global financial crisis, which
prompted an immediate period of corporate belt-tightening that throttled investment in IT.

During this period of diminished investment, traditional IT management challenges became
even more difficult for chief information officers (CIOs) around the world. Table 1-2
describes the three main challenges.

Table 1-2 Traditional IT Challenges

Challenge         Description
Low efficiency    Although IT systems are fairly expensive, their overall utilization is
                  relatively low because hardware and software are sized according to
                  business activity peaks.
High costs        While other parts of the organization already use consumption-based
                  models, IT usually requires heavy investment before any system is
                  actually available.
Lack of agility   Due to its extreme complexity, IT remains the least flexible link in the
                  chain when compared to other parts of the organization.

Meeting these challenges under the new budget constraints led CIOs (and their
bosses) to search for cost-efficient alternatives, and the proponents of cloud
computing were eager to guide them, claiming results that could help CIOs overcome
all of their budgetary obstacles. You can easily relate to this situation if you imagine
hearing speeches such as the following (preferably in the “movie trailer guy” voice):

“In a world where information technology is expensive, complex, and rigid,
cloud computing allows end users to immediately provision any IT resources
without any previous investment from you. Almost unbelievably, you will only
pay for the actual use of these resources, which can be easily decommissioned
as soon as the users do not need them.”

Figure 1-2 graphically represents the explosion of cloud services soon being offered to the
IT community to meet their every need.

In technical diagrams, cloud drawings are generally used to hide specific implementation
details from the viewer, specifically turning his attention to the global function of the
discussed system. Cloud computing applies the exact same principle to real IT deployments,
relieving users and IT managers from the complexities related to the provisioning of
computing resources, which includes, for example, servers, file repositories, desktops,
development platforms, business applications, collaboration tools, audio streaming, and just
about any other derivative from data processing, storage, and communication.

Avoiding the usual traps many IT departments get caught in, a cloud computing deployment
does not expose convoluted operational details. Instead, through radical simplification,
cloud computing connects end users directly to their required IT services.

As is true of many other revolutions in the world of computing, cloud computing was not
the result of a sudden burst of creativity. In the next section, you will learn about several
conceptual leaps and technological innovations that paved the road for such transformation.

Figure 1-2 Cloud Computing Proposition

Historical Steps Toward Cloud Computing


Unbeknownst to many of its ardent devotees, some of the concepts that support cloud
computing were developed more than 50 years ago, as Figure 1-3 illustrates.

Figure 1-3 Computing Milestones Toward Cloud Computing

(The figure depicts a timeline with the following milestones: 1957, mainframe time-sharing;
1961, “computation as a public utility” (John McCarthy); 1963, “Intergalactic Computer
Network” (J.C.R. Licklider); 1969, ARPANET; 1973, virtual machine (IBM); early 1980s,
personal computers; mid-1990s, World Wide Web (WWW) and virtual private networks
(VPNs); 1999, Salesforce.com is launched; 2006, Amazon Web Services is launched.)



Figure 1-3 shows a timeline of some of the achievements that coalesced into the cloud
computing model. IBM’s creation of time-sharing for its mainframes in 1957 arguably
initiated the path to cloud because, before this technology, a mainframe end user had
exclusive use of the whole computer for a certain time period to execute his tasks. Another
user could not use the computer resource until the previous user had released it.

Ingeniously, time-sharing offered small slices of the mainframe’s resources to multiple
different users. Repeatedly, the mainframe halted a user job, saved the job state in memory,
and loaded the state of another user to execute it. Because these operations occurred at a
very fast rate, users had the perception of accessing a dedicated resource even though they
were sharing the same system. Such an illusion is central to cloud computing environments.
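
As a side note, the following minimal Python sketch (an illustration written for this
explanation, not actual mainframe code; the user names and work units are invented)
mimics that round-robin behavior: each job runs for a short time slice, its state is saved,
and the next job is loaded.

from collections import deque

def time_share(jobs, quantum=1):
    # jobs: dictionary mapping each user to units of work remaining
    ready = deque(jobs.items())
    while ready:
        user, remaining = ready.popleft()    # load the saved job state
        executed = min(quantum, remaining)   # run for one time slice
        remaining -= executed
        print("%s: ran %d unit(s), %d left" % (user, executed, remaining))
        if remaining > 0:                    # save state and requeue the job
            ready.append((user, remaining))

time_share({"alice": 3, "bob": 2, "carol": 1})

Because a real scheduler cycles through the queue very quickly, each user observes
continuous progress, producing the illusion of a dedicated machine described above.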

The evolution to cloud computing happened not only through technological innovation, but also via visionary contributions from computer scientists such as Professor John McCarthy, the creator of the term “artificial intelligence.” In 1961, he introduced the concept of computation as a public utility, advocating that computing resources be offered much like water, electricity, and telephony.

Another computer science luminary, J. C. R. Licklider (the first director of the Information
Processing Techniques Office at the U.S. Department of Defense’s Advanced Research
Projects Agency [ARPA]) envisioned the Intergalactic Computer Network, a radical
extrapolation of the concept of connected computers. Foreseeing the Internet, Licklider
explored the concept of using remote processing capacity, which is a fundamental aspect
of cloud computing. Not coincidentally, the first computer network was created in 1969
within Licklider’s own organization and given the proper name of ARPANET.

History got one big step closer to cloud computing with another contribution from IBM. In
1972, the company officially released mainframe virtualization along with its new generation
of processors (System/370). Through a concept called virtual machine, a mainframe could
emulate hardware through software, allowing users to deploy their own set of software
(including operating system and developed applications) over a single computing resource.

NOTE Although further details about operating systems and virtualization of computing
resources are out of the scope of the present discussion, Chapter 5, “Server Virtualization,”
will address these topics in detail.

Another huge milestone in the progression toward cloud computing was the widespread adoption of personal computers (PCs) during the 1980s. Computing processing left the confinements of a few organizations and became pervasive in offices and households, leading to exponential growth in the number of computer users and the advent of the consumer market for IT resources.
The intense exchange of knowledge about computers in the 1980s prepared the world for the Internet revolution of the 1990s, enabling users to adapt easily to the concept of the World Wide Web, a key component of cloud computing. However, in its infancy, the “network of networks” hardly represented a secure communication medium. The introduction of virtual private networks (VPNs) led to the development of a set of security-related standards, including Internet Protocol Security (IPsec) and Secure Sockets Layer (SSL), which brought trust to business transactions over the Internet.

With these concepts in position, it was only a matter of time before a corporation combined them all to offer IT services via the Internet. It first happened in 1999, with the launch of Salesforce.com, a company that has specialized in offering customer relationship management (CRM) software as a service. Under this model, Salesforce.com customers do not have to use any special software or infrastructure to manage their respective data. Instead, the company provides this complete infrastructure, the only requirements for its users being a functional web browser and an Internet connection.

In 2003, Amazon (the world's biggest Internet-based retailer) began to pursue a broader approach to offering IT services through the Internet, via an internal project that would originate Amazon Web Services (AWS). Just like other retail companies, Amazon has its biggest sales peaks during the Christmas season. Realizing that huge amounts of unused capacity existed in its data centers during all other periods of the year, Amazon's internal project aimed to rent “pure” computing resources to remote users in an effort to monetize all that unused capacity. The release of the first AWS products occurred in 2006: Elastic Compute Cloud (EC2) and Simple Storage Service (S3). These services offer, respectively, processing and storage through the Internet, with very fast provisioning, scalable capabilities, and monthly payments according to resource usage.

Sensing an untapped opportunity, companies such as Rackspace and Terremark (a subsidiary of Verizon) followed suit and started offering services in a similar way to AWS. And a new market was born under the name of cloud computing.

The Many Definitions of Cloud Computing


As I briefly mentioned in the section “Welcome to the Cloud Hype,” the huge interest in cloud computing also produced adverse effects for prospective users. For example, it spawned a huge number of vendors vying for their business despite having neither the technology nor the expertise to offer anything similar to the services offered by Salesforce.com and Amazon. Consequently, many organizations interested in cloud computing instead found themselves in a fog of confusion and endless discussions about what exactly characterized a cloud computing service.

A quick search of the Internet demonstrates what prompted such bewilderment, as you
behold proposed definitions for cloud computing such as the following:

“No matter which provider you choose, you’ll find that almost every cloud
has these core characteristics: it’s virtual, it’s flexible and scalable, it’s open (or
closed), it can be secure, it can be affordable, it can be secure AND affordable.”

“Cloud computing refers to the use of networked infrastructure software and capacity to provide resources to users in an on-demand environment.”

“Cloud computing refers to the on-demand delivery of IT resources and applications via the Internet with pay-as-you-go pricing.”

Although these definitions may share some similarities, each subtly bends the term toward the solutions its author offers. But with the rapid maturation of cloud computing in recent years, it is fairly easy to point out examples that contradict these definitions, such as cloud computing services that offer direct access to physical hardware or that are accessible from private networks instead of the Internet.

To dispel the confusion about what constitutes cloud computing, several standards
organizations have devoted efforts to formally define and categorize cloud computing
implementations. Officially launched in 2008, the U.S. National Institute of Standards and
Technology (NIST) Cloud Computing Program (NCCP) has generated Special Publications
(SPs) containing definitions, reference architectures, and classification criteria for cloud
environments.

Though NIST created these standards to accelerate the adoption of cloud computing in the
U.S. federal government, they constitute a crowning achievement in the theoretical study
of this subject. For example, NIST Special Publication 800-145 (The NIST Definition of
Cloud Computing) states that

“Cloud computing is a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction.”

One important aspect of this definition is the fact that NIST characterizes cloud computing
as an access model for computing resources rather than a technology. Besides that subtle
but important distinction, SP 800-145 cites five essential characteristics that all cloud
computing scenarios must share:

■ On-demand self-service
■ Rapid elasticity
■ Resource pooling
■ Measured service
■ Broad network access

Before we delve into each of these characteristics, allow me to describe the environment
where computing resources are actually allocated and provisioned for cloud end users.

The Data Center


The infrastructure that makes cloud computing possible is the data center. In summary, a data center is a special facility conceived to house, manage, and support critical computing resources for one or more organizations. A particularly complex entity, a typical data center encompasses special building structures, power backup structures, cooling systems, special-purpose rooms (entrance and telecommunications, for example), equipment cabinets, structured cabling, network devices, storage systems, servers, data security systems, mainframes, application software, physical security devices, monitoring centers, and many other support systems. All these resources and their interaction are (locally or remotely) managed by specialized personnel.

Figure 1-4 depicts the physical view of a data center.


Figure 1-4 Data Center

Table 1-3 lists and describes the data center components depicted in Figure 1-4.

Table 1-3 Data Center Physical Components

Power backup systems: Provide electrical power for the data center in the case of a major failure in the main power source. These systems are generally powered by diesel and special batteries.

Entrance room: Allows physical access for data center operational teams and includes security measures to exclude everyone else.

Telecommunications room: Encases all devices that are responsible for the data center external communication. For high-availability purposes, this room offers access to at least two telecommunication service providers.

Cooling systems: Decrease the temperature of data center equipment such as servers, storage systems, and network switches to improve performance and avoid overheating. Typical cooling systems operate by recirculating air throughout the data center.

Racks: Physically support devices such as servers, storage systems, and network switches.

Raised floor: Creates an elevated structured floor to provide a hidden space for the accommodation of mechanical, electrical, and networking material.

Whereas Figure 1-4 portrays a single data center computer room, numerous data centers contain
several of these rooms spread across different floors or buildings. Besides size, data centers can
also vary in their infrastructure robustness, depending on how critical their supported systems are.

Standard data center locations are designed to support business applications such as business intelligence (BI), CRM, data warehouse (DW), e-commerce, enterprise resource planning (ERP), supply chain management (SCM), and many others. By contrast, data centers dedicated to cloud computing are equipped to support whichever applications an organization is offering to cloud end users. Notwithstanding, these services will surely employ the basic computing resources installed in the facility: processing, storage, and networking.

Common Cloud Characteristics

In the following sections, you will learn about each essential characteristic of cloud computing as described by NIST Special Publications 800-145 and 800-146 (Cloud Computing Synopsis and Recommendations). In addition, I will discuss another aspect of these environments that is extremely important: multi-tenancy. Although NIST does not explicitly designate multi-tenancy as an essential characteristic of cloud computing, the CLDFND exam objectives list it as a common cloud characteristic.

Within the discussion of a particular characteristic, I will refer to some real-world examples for the sake of clarity. In addition, I will use an interesting tool for explaining abstract concepts: exploring direct opposites to highlight the main distinctions between cloud computing and traditional IT practices.

On-Demand Self-Service
Plainly speaking, on-demand means “when required,” while self-service can be understood as a service system where customers select goods for themselves. Together, these terms form one of the central principles of cloud computing: end users autonomously request cloud resources, which are promptly serviced to them.
By contrast, end users of traditional IT systems must request these resources through formal
and numerous sequential channels of communication. Figure 1-5 exhibits an example of
such “catered” IT services.

Figure 1-5 Catered IT Service Example



In the scenario shown in Figure 1-5, an employee of an enterprise wants to use a computing resource combining processing, storage, and networking capabilities. Because the company's internal policy requires the hiring of an external organization (service provider) to fulfill such requests, the end user must follow a predefined procedure to gain access to the resource:

1. The employee requests the resource through a telephone call, e-mail, or an online
form.
2. The request reaches the IT department in the corporation, which technically details
what the end user needs.
3. With these details in hand, the contracts and acquisitions department submits a
formal request for these resources to a service provider. After a lengthy negotiation,
both companies sign an agreement.
4. The service provider sales department requests its own IT department to provision the
requested computing resources.
5. If there are not enough resources to honor the request, the service provider orders more
servers, storage systems, or networking devices. The provider IT team provisions the
resources, and the end user can finally use them according to his original purposes.

This process of interactions and formal agreements not only takes a significant amount of
time, but also introduces the possibility of mistakes at any stage of the human transactions.

According to NIST, a cloud computing deployment cannot function under such conditions.
As SP 800-145 succinctly states, on-demand self-service means end users “can unilaterally
provision computing capabilities…as needed automatically without requiring human
interaction with each service provider.”

Consequently, a cloud end user has a very different experience, as demonstrated in Figure 1-6.

Cloud Provider

Cloud Portal

I need Automatic
resources! Deployment

Cloud
End User

Figure 1-6 Cloud On-Demand Self-Service



In Figure 1-6, the cloud end user uses a web browser to access a portal containing a catalog
of available services for her account. After she submits her request to the portal, the cloud
provider automatically provisions the resources without any manual interactions.
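Behind such a portal, the provisioning request typically travels through an application programming interface (API) that the provider automates end to end. As a rough sketch only (not a procedure from this book), the following Python code uses boto3, the AWS SDK for Python, to request a virtual server with a single call; the image ID and instance size are hypothetical placeholders, and valid credentials are assumed to be configured locally:

import boto3  # AWS SDK for Python

# One API call replaces the entire chain of human interactions:
# the provider provisions the server with no manual intervention.
ec2 = boto3.client('ec2', region_name='us-east-1')
response = ec2.run_instances(
    ImageId='ami-12345678',    # hypothetical machine image
    InstanceType='t2.micro',   # small general-purpose server
    MinCount=1,
    MaxCount=1,
)
print(response['Instances'][0]['InstanceId'])

Whether the request arrives through a web portal or directly through such an API, the defining trait is the same: no human on the provider side participates in the provisioning.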

Rapid Elasticity
Elasticity denotes the quality of an object to change and adapt to a new situation. In
cloud computing, an end-user change request for already provisioned resources commonly
initiates such transformation.

Continuing our comparison of traditional IT and cloud computing, as an example of the rigidity present in traditional IT provisioning, Figure 1-7 depicts what happens in the same scenario presented in Figure 1-5 when the end user needs a change in the already provisioned computing resources.

Figure 1-7 Traditional IT Rigidity Example

Unsurprisingly, the enterprise internal policies require the repetition of many of the
previously described human interactions, including the creation of a new end-user request
for the enterprise IT department and, probably, a new technical specification.

As soon as the service provider is involved in the change request, one of the following
situations is bound to happen:

■ If the change was not specified in the agreement, the companies will probably conduct a
new negotiation to modify it.
■ If the agreement already envisaged the change, the service provider will verify if
enough computing resources are available to fulfill the change; if not, it must buy more
resources.

Because this example represents a simplified version of real-world change negotiations, you can easily deduce that the end user requesting more resources is facing a long and slow process for obtaining them. With such rigidity, many end users have the impulse to request a surplus of resources up front to avoid asking for subsequent changes. Such behavior actually reduces the already lackluster efficiency of traditional IT environments because surplus resources typically remain untapped for a while.

NIST SP 800-145 defines rapid elasticity as follows: “Capabilities can be elastically provisioned and released, in some cases automatically, to scale rapidly outward and inward commensurate with demand. To the consumer, the capabilities available for provisioning often appear to be unlimited and can be appropriated in any quantity at any time.”

As an illustration, Figure 1-8 clarifies how a cloud deployment enables the rapid elasticity
of resources.

Figure 1-8 Cloud Elasticity

As Figure 1-8 explains, cloud end users are empowered to request computing resource
changes, which the cloud provider will automatically execute. As NIST points out, the rapid
elasticity of a cloud deployment creates the perception of infinite resource availability
for its consumers. Requests for additional resources and requests to release resources can
happen at any time and with practically immediate results.
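To make the idea concrete, consider how a consumer might request “MORE resources” programmatically. The sketch below, again using the boto3 SDK and a hypothetical Auto Scaling group name, raises the number of servers backing an application to four; lowering the desired capacity later releases the surplus just as quickly:

import boto3

autoscaling = boto3.client('autoscaling', region_name='us-east-1')

# Elastic growth: the provider adds servers until the group reaches
# the new desired capacity; shrinking works the same way in reverse.
autoscaling.set_desired_capacity(
    AutoScalingGroupName='web-asg',  # hypothetical group name
    DesiredCapacity=4,
)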

Resource Pooling
Pooling generically means the grouping of resources to maximize advantages and minimize
risks for the users of those resources. In IT, resource pooling refers to a set of computing
resources (such as storage, processing, memory, and network bandwidth) that work in
tandem as one big resource shared by many users.

It is easy to imagine a contrasting scenario for this concept, as Figure 1-9 exhibits.

Figure 1-9 Resource Silos Example

In Figure 1-9, a service provider organization provides computing resources for two different consumers (Company 1 and Company 2). Because the provider does not implement resource pooling, it separates these resources into silos (perhaps the direct result of separate acquisitions to fulfill the agreements with each company). Because silos by definition cannot be shared, if Company 1 is not using the totality of its assigned resources, Company 2 cannot access them, worsening resource efficiency as a whole for the service provider.

By contrast, cloud computing deployments greatly benefit from resource pooling, as demonstrated in Figure 1-10.

Figure 1-10 Cloud Resource Pooling

In a cloud computing provider, cloud end users have access to resource pools that group all
computing resources, which are dynamically assigned and reassigned according to consumer
demand. Therefore, if Company 1 decommissions a certain resource, the cloud provider can
return it to the pool and later allocate it to another consumer, such as Company 2.
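The following toy Python model captures the mechanics the figure describes: a provider-side pool hands out generic resource units on demand, and a unit released by one consumer becomes immediately available to another. It is a conceptual sketch, not provider code:

class ResourcePool:
    """Simplified provider-side pool of identical resource units."""
    def __init__(self, size):
        self.free = list(range(size))  # unallocated unit IDs
        self.allocated = {}            # unit ID -> consumer name

    def allocate(self, consumer):
        unit = self.free.pop()         # any free unit will do
        self.allocated[unit] = consumer
        return unit

    def release(self, unit):
        # A decommissioned unit returns to the shared pool
        del self.allocated[unit]
        self.free.append(unit)

pool = ResourcePool(size=100)
unit = pool.allocate('Company 1')
pool.release(unit)              # back to the pool...
pool.allocate('Company 2')      # ...and reassigned to another consumer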

As NIST comments in SP 800-145, with resource pooling, cloud end users generally have
“no control or knowledge over the exact origin of the provisioned resources.”

TIP Depending on the cloud computing implementation, an end user may be able to specify locations at a higher level of abstraction, such as regions and availability zones, as you will learn in Chapter 2, “Cloud Shapes: Service Models.”

Measured Service
Although many IT departments may take exception to this statement, careful measurement
of computing resource usage is not standard practice. The considerable complexity of
IT management, attending to the plethora of menial operations, typically leads to service
metering being relegated to “some time later.” For this reason, many organizations seek
to increase visibility over their true IT demand through service outsourcing to specialized
providers. Nevertheless, considering the characteristics of traditional IT practices pointed
out in the previous sections (catered IT, rigidity, and resource silos), even these providers
tend to size resource capacity according to peaks of utilization, as depicted in Figure 1-11.

Figure 1-11 Resource Utilization Example

The example in Figure 1-11 represents the utilization of a certain computing resource
assigned to an end user. Even if service metering exists for this user, the level of utilization
in T3 defines how much of this resource must be allocated to her. Due to the lack of agility,
this allocation is probably fixed and the consumer billing follows a capital expenditure
(CAPEX) model, where a business expense must first occur to create future benefit. This
leads to the underutilization of the resource shown in T1 and T2.

With metered service being one of the essential characteristics of cloud computing, end users have access to detailed information about their past resource usage. Consequently, the cloud provider can plan the resource capacity of the cloud more effectively by correlating this data across all of its consumers.

Moreover, one of the most attractive aspects of cloud computing is the fact that it normally applies the operational expenditure (OPEX) model to charge end users. With such a billing method, a cloud consumer only pays for resources after they are used.
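A back-of-the-envelope Python calculation illustrates the OPEX mechanics: the invoice is derived after the fact from metered usage, so an idle resource simply stops generating charges. The resource names and hourly rates below are hypothetical:

usage_hours = {'small-server': 720, 'large-server': 96}        # metered usage
price_per_hour = {'small-server': 0.02, 'large-server': 0.38}  # hypothetical rates

# Pay-per-use billing: charges are computed from actual consumption
invoice = sum(hours * price_per_hour[item]
              for item, hours in usage_hours.items())
print(f'Monthly charge: ${invoice:.2f}')  # 720*0.02 + 96*0.38 = $50.88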

In a nutshell, when resource utilization is systematically monitored, controlled, and reported, it guarantees transparency for both the provider and the consumer of the cloud service.

Broad Network Access


When mainframes ruled the earth, users had to remain in close proximity to the computing
resources. With the stated intention of breaking this restriction, the Advanced Research
Projects Agency (ARPA) of the U.S. Department of Defense proposed and delivered the
capability to access remote mainframe computers via its network, ARPANET, thereby
introducing the concept of networking as we know it today.

Standing on the shoulders of the ARPANET giants, cloud computing is fundamentally based on services provisioned through broad network access. Figure 1-12 further explores this characteristic.

Figure 1-12 Cloud Computing Network Access

As shown in Figure 1-12, a cloud computing deployment can employ multiple types of
networks, including an intranet (which belongs to a single organization), an extranet (which
serves a group of associated organizations), and, of course, ARPANET’s most famous
offspring, the Internet.

Cloud services must be available over at least one of these networks and should be accessible through standard mechanisms compatible with the vast majority of client platforms, including smartphones, tablets, laptops, and workstations. With such ubiquitous presence, end users can easily extend their local computing resources through remote cloud services.

Multi-tenancy
Although NIST does not explicitly cite it as an essential characteristic of cloud computing,
multi-tenancy is an important property of such environments. Generally, a tenant is
defined as any application environment that requires some form of isolation from the
“outside world,” which includes all other tenants. Although this flexible notion of tenant
can represent a whole organization, it may also mean a single department or any other
subdivision that requires special segregation within an IT system or application.

Multi-tenancy consequently refers to the capacity of an IT resource to support multiple tenants according to an accepted isolation technique. The concept of multi-tenancy is quite distinct from multi-user and multi-instance.

The vast majority of applications are multi-user because they serve numerous users.
Similarly, a cloud computing deployment is a multi-user system.

However, imagine that such deployment does not have any multi-tenant system or
application within its structure. In such a scenario, the cloud architect may decide to design
services that require new resource instances for each new user. Building such a cloud as a
multi-instance system eventually will add complexity to its operations because upgrades and
fixes will have to be applied to every resource instance. Moreover, each resource will have
to be dedicated to a single application, decreasing efficiency of the cloud implementation.

On the other hand, cloud deployments greatly benefit from using multi-tenancy
components because they can deploy a single instance of software or hardware to several
different tenants. In this fashion, any change, upgrade, or tweak is immediately available
to every tenant. As a drawback, all tenants may share the same fate in the case of a major
system failure (which would not happen in a multi-instance system).
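As a minimal illustration of one common isolation technique, the Python sketch below stores rows for every tenant in a single shared database (a single software instance) and scopes every query with a tenant identifier. The table layout and tenant names are hypothetical:

import sqlite3

db = sqlite3.connect(':memory:')
db.execute('CREATE TABLE orders (tenant_id TEXT, item TEXT)')
db.execute("INSERT INTO orders VALUES ('tenant-a', 'router')")
db.execute("INSERT INTO orders VALUES ('tenant-b', 'switch')")

def orders_for(tenant_id):
    # Every query is scoped by tenant, so tenants never see each
    # other's rows even though one instance holds all of the data.
    return db.execute('SELECT item FROM orders WHERE tenant_id = ?',
                      (tenant_id,)).fetchall()

print(orders_for('tenant-a'))  # [('router',)] only

Because a single instance serves all tenants, one schema upgrade reaches everyone at once, which is exactly the operational benefit (and shared-fate drawback) described above.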

Undoubtedly, the reasonable balance between multi-instance and multi-tenant resources within a cloud project will dictate the efficiency and availability required by future consumers.

Classifying Clouds
The previous sections have discussed characteristics that all cloud computing deployments
share, but these environments can differ wildly from each other. Such variation really
blossomed during the cloud hype, where innovative services and security concerns opened
up new opportunities.

Much like actual clouds, these models required a classification system to simplify their
individual analysis. For tropospheric clouds, which reside in the lowest and thickest part of
Earth’s atmosphere, nephologists have been using the classification system created by Luke
Howard in 1802. In his book Modifications of Clouds, the British chemist and amateur
meteorologist created an interesting taxonomy based on cloud shapes and heights, which
the World Meteorological Organization (WMO) later extended. Figure 1-13 illustrates this
system.

Figure 1-13 Cloud Classification

The system separates clouds into three altitude levels: low clouds (below 6,500 feet or
2,000 meters), mid clouds (between 6,500 and 20,000 feet, or 2,000 and 7,000 meters),
and high clouds (above 20,000 feet or 7,000 meters). In each of these layers, clouds are
classified according to their shapes.

Similarly, NIST also has established a simple classification system for cloud computing
environments. Table 1-5 describes the two basic criteria that define this system.

Table 1-5 NIST Cloud Criteria

Service models: Classify clouds according to the nature of the service they provide to consumers (Infrastructure as a Service, Platform as a Service, or Software as a Service).

Deployment models: Classify cloud computing deployments according to who the cloud infrastructure is provisioned for (public, private, community, or hybrid).

Complete analysis of all three service models will be provided in Chapter 2. Subsequently, Chapter 3, “Cloud Heights: Deployment Models,” will explore the four deployment models defined by NIST.

Around the Corner: Agile, Cloud-Scale Applications, and DevOps
Based on the fact that you’re reading this book, I’m guessing that you probably have
an infrastructure background (networking, server, or storage) and want to expand your
knowledge about cloud computing. Assuming that I am right, you likely will be surprised
to discover that the benefits of cloud computing go way beyond providing a “smart data
center” for traditional applications.
A data center exists to support critical applications, so it is only natural that the way
software is developed affects this IT structure. Since the 1950s, software development has
followed the principles of the waterfall model, which is a sequential design process with
origins in project management procedures from other industries. Figure 1-14 captures the
idea behind this approach.

Figure 1-14 Waterfall Model

In a waterfall, water never goes upstream. Similarly, this software development model
establishes phases for the whole project, where each phase only begins after the complete
result from the previous phase is delivered. Table 1-6 provides an overview of the phases
shown in Figure 1-14.

Table 1-6 Common Waterfall Phases

Analyze: The user requirements are gathered and captured in a product requirement document, resulting in models, schema, and business rules.

Plan: A project management strategy is developed to enumerate all resources required in the project.

Design: The architecture of the software is detailed and the project is broken down into smaller pieces.

Build: The software is developed, proved, and integrated. One important observation: software is usable only after this phase is complete.

Test: The developed system is tested to demonstrate that it conforms to the requirements established in the first phase.

Operation: Consists of the installation, migration, support, and maintenance of the complete system.

During a waterfall project timeline, the software's user requirements and design must remain constant, because a simple scope change can force the project to return to the first phase (analysis).

Business applications such as ERP, CRM, and e-mail still use slight variations of the
waterfall model in their development. All of these systems share some noticeable
characteristics: they are designed to serve a well-known number of users, are based on
a few software components, and will run over a considerably reliable infrastructure.

In the wake of the 2007-2008 global financial crisis, business organizations have gradually changed their perception of IT from a cost center to an enabler of new opportunities. With potential customers armed with always-online smartphones, tablets, and portable computers, an “app” released today may become the source of millions of dollars in the near future. In this landscape of smaller budgets and intense competition, application development cannot afford to spend the months (or even years) common in waterfall-based projects. A short time to market is crucial to gain the attention of this customer base, so software development must respond to a prototype request in a few days or weeks.

To fulfill such an aggressive schedule, a new software development model was required. With origins in the late 1990s, a new model dubbed Agile embodies the consolidation of many ideas focused on simplicity, close collaboration between development and business, continuous delivery of valuable software, constant change of requirements, and self-organizing teams. Figure 1-15 summarizes how these principles change software development.

Figure 1-15 Agile Model

As you can see in Figure 1-15, the Agile model essentially shrinks the waterfall development
cycle into faster rounds, which produce useful software code in fractions of the final
product timeline. And if a company desires to develop an application prototype between
Monday and Thursday, Agile can be the software development approach that can deliver
this kind of result.

Architecturally speaking, these modern apps are very different from traditional applications.
The main reason for this change is quite simple: if an app proves to be popular, its number of
users may grow exponentially in a very short period. Consequently, these apps usually possess
multiple small components that perform very specific functions and that can be rapidly scaled,
in the case of sudden interest. And because cloud computing environments provide perfect
conditions for scaling such apps, online gaming, video on demand, content delivery, instant
messaging, and mobility applications are also referred to as cloud-scale apps.

But the modernization of software development does not stop at the point where code is ready for production. Another movement called DevOps has enhanced application deployment through expanded collaboration between the development staff and operations staff throughout all stages of the development lifecycle. With DevOps, rather than creating software and delivering it to the operations team, the development team works closely with operations to produce a much more efficient and reliable final product.

Further Reading
■ Agile Alliance: https://www.agilealliance.org/agile101/what-is-agile/
■ Kim, Gene, Spafford, George, and Kevin Behr. The Phoenix Project: A Novel About IT,
DevOps, and Helping Your Business Win. IT Revolution Press, 2013.

Exam Preparation Tasks


Review All the Key Topics
Review the most important topics in this chapter, denoted with a Key Topic icon in the
outer margin of the page. Table 1-7 lists a reference of these key topics and the page
number on which each is found.

Table 1-7 Key Topics for Chapter 1

Table 1-2: Traditional IT challenges (page 8)
Table 1-3: Data center physical components (page 13)
Table 1-5: NIST cloud criteria (page 23)

Complete the Tables and Lists from Memory


Print a copy of Appendix B, “Memory Tables” (found on the CD), or at least the section
for this chapter, and complete the tables and lists from memory. Appendix C, “Answers to
Memory Tables,” also on the CD, includes completed tables and lists so that you can check
your work.

Define Key Terms


Define the following key terms from this chapter, and check your answers in the glossary:

time-sharing, computation as a public utility, personal computer (PC), virtual private network (VPN), National Institute of Standards and Technology (NIST), on-demand self-service, rapid elasticity, resource pooling, measured service, broad network access, multi-tenant

This chapter covers the following topics:

■ Service Providers and Information Technology

■ Infrastructure as a Service

■ Platform as a Service

■ Software as a Service

This chapter covers the following exam objectives:

■ 1.2 Describe Cloud Service Models


■ 1.2.a Infrastructure as a Service (IaaS)
■ 1.2.b Software as a Service (SaaS)
■ 1.2.c Platform as a Service (PaaS)
CHAPTER 2

Cloud Shapes: Service Models


After the mild sense of disappointment that followed its initial hype in the late 2000s, cloud computing began to morph into different shapes in a similar way as its atmospheric counterparts. Currently, cloud services seem to be bound only by the creative limitations of providers and their execution capacity, providing IT resources that range from simple processing capacity to fully provisioned applications at the click of a browser button.

To identify important aspects of cloud computing and to serve as a means for broad comparisons of cloud services and deployment strategies, the National Institute of Standards and Technology (NIST) Special Publication 800-145, “The NIST Definition of Cloud Computing,” describes three service models that classify cloud services according to their flexibility and readiness to support consumer needs: Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS).

TIP Due to their initials, these service models are also known as the IPS stack.

The CLDFND exam requires that you understand these service models, so this chapter focuses on providing a detailed explanation of them, including basic concepts, applicability, benefits, and challenges. To illustrate specific aspects of each of these service models, the chapter also introduces some examples from well-known cloud providers. The chapter concludes with an overview of new hybrid models that are on the horizon.

“Do I Know This Already?” Quiz


The “Do I Know This Already?” quiz allows you to assess whether you should read this
entire chapter thoroughly or jump to the “Exam Preparation Tasks” section. If you are in
doubt about your answers to these questions or your own assessment of your knowledge
of the topics, read the entire chapter. Table 2-1 lists the major headings in this chapter and
their corresponding “Do I Know This Already?” quiz questions. You can find the answers in
Appendix A, “Answers to Pre-Assessments and Quizzes.”

Table 2-1 “Do I Know This Already?” Section-to-Question Mapping

Service Providers and Information Technology: Question 1
Infrastructure as a Service: Questions 2–4
Platform as a Service: Questions 5–7
Software as a Service: Questions 8–9

1. Which of the following represent key aspects of a service-level agreement between a data center service provider and a consumer? (Choose all that apply.)
a. Performance
b. Mean time to recover
c. Contract changes
d. Data handling
e. Uptime

2. Which of the following represents the service models described by NIST?
a. XaaS, PaaS, SaaS
b. SaaS, IaaS, PaaS
c. Private, public, hybrid
d. On-premise, off-premise, managed
e. EaaS, XaaS, IaaS

3. Which of the following are true about Infrastructure as a Service? (Choose all that
apply.)
a. Most typical consumers are IT administrators.
b. Virtualization technologies are mandatory for the implementation of IaaS.
c. IaaS basically offers computing hardware for its consumers.
d. Among all service models, IaaS is the least flexible option.

4. Which of the following are correct about cloud regions and availability zones?
(Choose all that apply.)
a. Regions represent data center installations from a cloud provider that can be used
as options for the consumer resource deployment.
b. Availability zones represent data center installations from a cloud provider that
can be used as options for the consumer resource deployment.
c. Regions are independent locations within a single data center facility.
d. Availability zones are independent locations within a single data center facility.

5. Which of the following are offered by the cloud provider in PaaS? (Choose all that
apply.)
a. Application
b. Operating system
c. Computing hardware
d. Virtualization layer
e. Development tools

6. Which of the following represents the typical PaaS consumers?
a. IT administrators
b. Application end users
c. Application developers
d. Cloud brokers
7. Which of the following represents the typical SaaS consumers?
a. IT administrators
b. Application end users
c. Application developers
d. Cloud brokers

8. Which of the following must be provided by the consumer in SaaS?
a. Application
b. Operating system
c. Computing hardware
d. Virtualization layer
e. None of the above

9. Which of the following is correct about SaaS? (Choose all that apply.)
a. Among all cloud service models, SaaS requires the least customization from a consumer standpoint.
b. SaaS provides full control over hardware for a cloud consumer.
c. SaaS has had the slowest adoption among all cloud service models.
d. SaaS providers may use PaaS resources for development and IaaS resources for
production.

Foundation Topics

Service Providers and Information Technology


A service provider (SP) is a company that offers specialized services to organizations. These services may include pretty much anything these corporations need to properly function (from toilet paper supply to business consulting). In the context of information technology, the term service provider applies to outsourced suppliers that can provide a set of technologies to an organization during an agreed (and compensated) period of time.

Since the dawn of computing, corporations have been hiring service providers for several
different reasons, such as to reduce CAPEX, to sharpen business focus, or simply because
they lacked the capacity to internally support an IT system. And although there are service
providers that can provide services covering the entirety of IT systems, most organizations
typically work with a mix of in-house environments and outsourced systems hired from
highly specialized SPs.

Figure 2-1 portrays a scenario with some specialized service providers.


Figure 2-1 Specialized Service Providers Supporting a Single Corporation

The service providers supporting the company represented in Figure 2-1 are described in
Table 2-2.

Table 2-2 Specialized Service Providers

Application service provider (ASP): Offers software services (applications) to customers through a computer network such as the Internet. An ASP normally hosts, owns, operates, and maintains the same software that would be installed locally in the customer premises and customizes the service according to the customer needs.

Computer service provider (CSP): Provides and supports a complete computer system, which includes hardware, software, communication systems, and power backup. This service provider is more common in mainframe-based environments.

Data center service provider (DCSP): Offers all technologies, facility components, and activities related to the operation of a data center. A data center service may include computing, storage, and networking, among other offers. Common options are hosting (the customer leases hardware that the provider has acquired) and colocation (the customer acquires hardware and leases a server cabinet in the provider's data center).

Internet service provider (ISP): Provides services for accessing the Internet, offering options such as Internet transit and domain name registration.

Managed service provider (MSP): Remotely controls components of the IT infrastructure of a customer, which may include desktops, critical applications, networks, or even every IT system. The latter situation is commonly called full IT outsourcing.

Network service provider (NSP): Offers data communication services to its customers through a shared “backbone” network. These services generally include a committed bandwidth for each site and, optionally, Internet access.

Storage service provider (SSP): Provides computer storage capacity and data management services (such as backup) at a customer site or remotely, using its data center facilities. Figure 2-1 depicts a local SSP service.

Telecommunications service provider (TSP): Offers long-distance communication resources for traditional telephony and data leased lines between customer premises or between a customer premises and an NSP (the latter known as a last mile link).

Throughout the many decades of relationships between service providers and their customers, many SPs have bundled services to both simplify service contracts and leverage the synergy between technologies (such as a network and Internet access, for example). And unsurprisingly, the world has witnessed a consolidation trend among IT service providers since the early 2000s.

As academic and consulting studies have discussed extensively, there is not a unique and definitive answer to the question “should my company outsource IT system X?” In fact, the number of factors that must be considered essentially dictates the complexity of such a decision. Notwithstanding, one important aspect that must be taken into account is how critical the IT system in consideration is to the business. Because noncritical systems do not have any impact on the competitiveness of a company, they are usually the ideal candidates for outsourcing, as long as the pricing makes sense for the customer's budget.

To summarize why this discussion has endured for a very long time, I will simply paraphrase
a teacher of mine who joked that, each year, one-third of companies outsource their IT,
33.3% bring their systems back to the company premises (in a process called insourcing),
and the remaining organizations decide not to change their outsourcing policy (for at least a
year).

Service-Level Agreement
A service is formally defined in a service contract signed by both the service provider and its
customer. Additionally, as a way to regulate the expectation about the scope and quality of
the service, both parties typically define another contract called a service-level agreement
(SLA).

Obviously, the parameters defined in an SLA highly differ depending on the type of service
that is being offered and the parties involved in the agreement. But generally speaking, SLAs
usually address the following aspects:

■ Performance: Defines a number of operations that the service provider must guarantee in a time interval, offered capacity, or time that will be spent in the service deployment.
■ Uptime: Measure of the amount of time an IT system must work correctly. It is generally represented as a percentage of availability over the total interval (a quick calculation follows this list).
■ Mean time to recover (MTTR): Average time the service provider will take to recover a failed system.
■ Customer data handling: Defines data management strategies to avoid data loss (e.g., backup policies), how long the customer data is available to the customer after the service agreement is terminated, data confidentiality, and deletion policies.
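Because uptime percentages look deceptively similar, it is worth translating them into allowed downtime before signing. The short Python check below does the arithmetic for a few common targets:

HOURS_PER_YEAR = 365 * 24  # 8,760 hours

for uptime in (99.0, 99.9, 99.99):
    downtime = HOURS_PER_YEAR * (1 - uptime / 100)
    print(f'{uptime}% uptime allows {downtime:.1f} hours of downtime per year')

# 99.0%  -> 87.6 hours
# 99.9%  ->  8.8 hours
# 99.99% ->  0.9 hours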

Service providers may also use the SLA to control unrealistic customer expectations by
including terms regarding maintenance windows, unavoidable accidents (force majeure),
payment policies, and noncompliance fines and penalties.

Cloud Providers
Cloud computing services share many similarities with traditional service provider offerings.
As an illustration, Figure 2-2 exhibits some of the most popular cloud services available at
the time of this writing.
Figure 2-2 Cloud Services Examples

As indicated in Figure 2-2, a cloud provider can possibly offer the following services to its
consumers (end users):

■ Servers: Specialized computers running software that processes client requests and provides appropriate responses to them
■ Storage: Capability to store consumer data for a certain period of time


■ Networking: Connectivity between cloud elements and external resources, domain name registration, and IP addressing, among others
■ Desktops: Computers to be used for traditional end-user applications
■ Middleware: Supplementary software, including libraries, programming language interpreters, database services, user authentication services, account management, and so forth
■ Applications: Software created to achieve objectives of an end user
■ Collaboration tools: Applications that are especially designed to optimize the joint work among different people
■ Publishing: Applications that facilitate the publication of texts such as blogs on the Internet
■ Databases: Organized collection of data that can be queried by other applications
■ Streaming: Media, such as audio and video, that is delivered to end users as a constant flow of data and is generally rendered by a desktop application
■ Web services: Standardized methods of communication between two systems over an IP network

Potentially, this list may encompass all IT services available from service providers. Nevertheless, as you have learned in Chapter 1, “What Is Cloud Computing?”, some common parameters defined in traditional SLAs may collide with the essential characteristics of cloud computing. In that chapter, I juxtaposed opposite characteristics from traditional SP practices, such as catered services, rigidity, silos, and overprovisioning, to further highlight the NIST definitions for cloud services.

Over time, cloud providers started to attract interest from corporations that desired more
dynamic services and less complex hiring procedures. However, all cloud computing
companies still constitute SPs, sharing many concerns and responsibilities with these long-
established providers. And for such reason, a certain service provider mentality is very
welcome in cloud deployments, regardless of whether they are strictly internal or not.

Because Internet access is sometimes all you need to deploy external cloud resources, many companies started to deal with a menace called shadow IT. In these relatively new scenarios, cloud services are hired by employees without approval from the organization, exposing the whole company to uncontrolled risks. As a reaction, an IT department may either act as a cloud broker, intermediating cloud service hiring on behalf of the employees and according to predefined compliance policies, or become a cloud provider itself for its internal customers. In the latter case, a private cloud can offer the same level of service as external cloud providers without their associated risks for the business.

NOTE Cloud deployment models such as private cloud will be fully discussed in
Chapter 3, “Cloud Heights: Deployment Models.”

To categorize the benefits and issues related to cloud services and help IT decision makers that are dealing with such projects, NIST has released Special Publication 800-146, “Cloud Computing Synopsis and Recommendations.” Besides providing valuable information about service-level agreements, the publication also details the IPS stack, which will be further explored in the next sections.

Infrastructure as a Service
As the first service model that was widely advertised as a cloud computing platform in the late 2000s, Infrastructure as a Service (IaaS) consists of cloud services developed for consumers looking for pure processing, storage, networking, or other fundamental computing resources.

When compared to traditional service providers, IaaS-based cloud providers correlate to CSPs, SSPs, and NSPs. To reinforce the comparison, Figure 2-3 represents the distribution of responsibilities between an IaaS provider and its consumers through the use of a simplified computing component stack.


Figure 2-3 Infrastructure as a Service Component Stack

As shown in Figure 2-3, the cloud provider controls the most basic layers of the stack (server, storage, network, and virtualization), empowering IaaS consumers to run any compatible software over them, including operating system, infrastructure software (such as middleware, databases, and authentication services), and custom server applications.

To exhibit the essential characteristics of a cloud computing environment, especially elasticity and resource pooling, cloud providers typically deploy virtualization technologies on top of the cloud infrastructure hardware (server, storage, and network). However, what exactly does “virtualization” mean in such a context? Unfortunately, virtualization is perhaps the only term that is more overloaded than “cloud” in IT. Epitomizing another technology gold rush that happened during the mid-2000s, virtualization can be generically defined as a set of techniques that enables the creation of logical servers, logical storage, and logical networks from their physical counterparts. And specifically in the context of data centers, these logical devices can be simply defined as transparent emulations of computing resources, producing benefits that were unavailable in their original physical form.

Of course, within such a broad umbrella, there are multiple types of virtualization techniques, which are listed and described in Table 2-3.

Table 2-3 Virtualization Types

Pooling technologies: Multiple physical elements are consolidated into a single logical entity that shares characteristics with the original computing resources. In summary, such techniques optimize computing resource management and availability.

Abstraction technologies: Techniques where the logical resources do not maintain the characteristics of their physical counterparts. Instead, via the emulation of other resources, these technologies generally simplify operations through the preservation of existing procedures for the simulated devices.

Partitioning technologies: Characterized through the creation of independent logical partitions that emulate the characteristics of a physical resource. In essence, such techniques enable resource usage efficiency.

NOTE Throughout this certification guide, you will learn in detail about examples of each
type of virtualization technology such as hypervisors (partitioning), explained in Chapter 5,
“Server Virtualization;” virtual switches (abstraction), explored in Chapter 6, “Infrastructure
Virtualization;” and RAID groups (pooling), addressed in Chapter 8, “Block Storage
Technologies.”

Regardless of their type, all virtualization technologies share a very important “side effect”: virtual servers, virtual storage, and virtual networks can be provisioned without physical operations. As a consequence, it is much easier for an IaaS cloud to offer a virtual resource to its consumers than a physical one. Still, there are multiple IaaS cloud providers whose service is based on provisioning physical computing resources to support consumers with specific requirements for their applications (such as high performance or control).

Although their customers could potentially deploy any choice of software over the offered
computing resources (virtual or physical), most IaaS cloud providers deliver prepackaged
software, such as an operating system, to simplify software installation procedures.

The target consumers for IaaS-based cloud providers are systems administrators who prefer to rent computing hardware rather than acquire and manage hardware in their IT projects. For this reason, IaaS cloud providers offer a wide range of plans that include variable charges based on the amount of processing used during a period, data stored for a period, consumed bandwidth, number of assigned public IP addresses, and many other creative choices.

The fact that an IaaS provider offers plain hardware to its consumers facilitates the migration of stored data and legacy applications from a standard data center to the cloud. Furthermore, the simplicity of IaaS potentially allows easier portability among cloud providers when compared to the other service models (which will be explained in later sections).

Such flexibility may also pose some risks and challenges that must be addressed before any
IaaS resource is put into production:
■ Application security: IaaS consumers must be aware that legacy applications migrated to the cloud will take with them all inherent vulnerabilities. Moreover, these applications likely will be exposed to a less secure environment when compared to the native protection of a company-owned data center. For this reason, many cloud providers offer add-on security services that can be combined (with an associated fee) with the consumer-provisioned resources.
■ Noisy and suspect neighbors: Due to their native multitenant infrastructure, SPs of IaaS clouds deploying partitioning virtualization technologies may contractually disavow any liability for harm that a tenant suffers as a result of the operations of other tenants sharing hardware components … or worse, harm that a tenant suffers from data theft or denial-of-service attacks because of intentional tampering from other tenants. To mitigate such risks, many IaaS cloud providers offer dedicated hardware for a single tenant, though at a premium charge.

Directly competing with hardware manufacturers, IaaS cloud providers initially gained the most traction among small businesses and midsized companies. Through the gradual addition of security features, these providers have slowly attracted the attention of enterprise corporations and public sector organizations.

Regions and Availability Zones


Although it is not considered one of the essential characteristics of cloud computing, non-
localization of resources is very commonly associated with these environments. Hence, it
is common to assume that a cloud consumer “does not care” from where its service is being
provisioned: what matters is the service itself.
Notwithstanding, with more responsibilities on their shoulders when compared to consum-
ers of other service models, IaaS cloud tenants may not want to risk loss of application
availability if all of its resources are provisioned in the same failure domain, which can be
understood as the area of a data center facility that can be impacted during a major system
malfunction. Consequently, knowing where a resource is located is an advantage for most
consumers with critical applications.
IaaS cloud providers have supported such a requirement through localization services
known as regions and availability zones. Originally created by Amazon Web Services
(AWS, discussed in the next section), and afterward adopted by other cloud providers
under different names, both concepts are represented in Figure 2-4.
Figure 2-4 depicts a global cloud provider with four regions (US, Latin America, Europe,
and Asia), which correspond to the choices of data center facilities from which a cloud con-
sumer resource can be provisioned. Characteristics such as Internet latency and application
user locations may help the cloud consumer choose a region.

NOTE A cloud provider can also create exclusive regions for specific customers to fulfill
specific security or compliance requirements.

Each region may contain multiple availability zones, which are basically independent failure domains (or subfacilities) within a single region. Consequently, any disruption in an availability zone should not impact the other availability zone or zones. Through this arrangement,
a consumer can access IaaS resources from two availability zones within a region of the
consumer’s preference and use cheaper connectivity with lower latency when compared to
cloud resources installed in two different regions.
Figure 2-4 Regions and Availability Zones
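
To see how regions and availability zones surface in practice, the short Python sketch below lists a region's availability zones through the AWS SDK for Python (boto3). This is only an illustration: it assumes valid AWS credentials, and the region name is an example.

import boto3

# List the availability zones of one region (the region name is an example).
ec2 = boto3.client("ec2", region_name="us-west-1")
for zone in ec2.describe_availability_zones()["AvailabilityZones"]:
    print(zone["ZoneName"], zone["State"])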

IaaS Example: Amazon Web Services


As the early pioneer of cloud computing, AWS offers an impressive number of cloud ser-
vices, as indicated in the AWS Management Console shown in Figure 2-5. Table 2-4 outlines
several of the main AWS IaaS offerings, most of which are pointed out in Figure 2-5.

Figure 2-5 AWS Management Console



Table 2-4 AWS IaaS Offerings


Elastic Compute Cloud (EC2): Cloud service that provides virtual servers that are fully controlled by AWS users and resized according to the required compute demand. Providing many versions of operating systems, such as Linux and Microsoft Windows Server, EC2 enables robustness through regions and availability zones, as well as security groups (rules of traffic containing IP addresses, protocols, and ports), IPsec virtual private network (VPN) connections, and dedicated hardware for instances from a single tenant.

Simple Storage Service (S3): Cloud storage service that provides storage capacity based on objects for development and system administration teams.

Elastic Block Store (EBS): Offers block-based storage volumes that can be remotely accessed by EC2 virtual servers, with a selection of latency and performance. This service is not shown in Figure 2-5.

Elastic File System (EFS): Storage service that allows files to be stored for easy access by EC2 instances. It also enables the control of throughput, input/output operations per second (IOPS), and latency, according to the consumer requirements.

Virtual Private Cloud (VPC): Cloud service that creates a logically isolated network within the AWS cloud for a tenant. Through an AWS VPC, a user can control virtual networking for resources from other services, including the management of IP addresses, subnets, route tables, network gateways, and VPNs.

Direct Connect: Enables a dedicated network connection between the premises of a corporation and AWS to achieve a more reliable network experience when compared to the Internet. This cloud network service is compatible with all other AWS services and is enabled through dedicated connections provided by Amazon-authorized TSPs.

Elastic Load Balancing: Cloud network service that can distribute incoming application traffic among EC2 virtual servers, optimizing reliability for applications that may already be hosted in different regions or availability zones. This service is not shown in Figure 2-5.

Route 53: Deploys a scalable Domain Name System (DNS) service within AWS, which can be instantiated in different regions or availability zones for reliability reasons. In summary, DNS translates domain names such as www.company.com to IP addresses such as 200.201.202.203 (a short example follows this table).
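
As a quick illustration of the DNS translation described in the Route 53 entry, the following Python snippet resolves a domain name to an IP address using only the standard library (the domain is illustrative):

import socket

# Translate a domain name to an IP address, the same service Route 53 provides at scale.
print(socket.gethostbyname("www.company.com"))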

TIP Block storage, file storage, and object storage are distinct storage technologies
that will be properly defined and discussed in Chapter 8 and Chapter 9, “File Storage
Technologies.”

Now, let’s put ourselves into the shoes of an IaaS consumer. Figure 2-6 exhibits 5 of the 22
operating system choices that are available for immediate instantiation on AWS after select-
ing the EC2 link in the AWS Management Console.

Figure 2-6 Image Selection for EC2 Instance


As you can see in Figure 2-6, AWS offers a good variety of Amazon Machine Image (AMI)
files that can be used to boot EC2 instances. For demonstration purposes, I selected a Red
Hat Linux image and configured several other settings to reach the page shown in Figure
2-7, which reviews all of my options for the instance before its proper launch.


Figure 2-7 EC2 Instance Details



Observe that this particular virtual server is installed in the North California region, in a VPC called vpc-49aa4b22, and in an availability zone determined by the subnet-4faa4b24 IP subnet. This EC2 instance forgoes dedicated hardware (the default tenancy means shared hardware) and has a 10-GiB EBS volume (/dev/sda) attached to it.

Additionally, I have inserted a tag called “CCNA Cloud” to help resource selection during
massive operations with EC2 instances. After clicking the Launch button, my instance was
provisioned and accessible in less than a minute.
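
For readers who prefer automation over the console wizard, a similar launch can be requested programmatically. The following is a minimal sketch using the AWS SDK for Python (boto3); it assumes valid AWS credentials, the AMI ID is a hypothetical placeholder, and the subnet, tag, and 10-GiB EBS volume mirror the settings reviewed in Figure 2-7.

import boto3

ec2 = boto3.client("ec2", region_name="us-west-1")  # North California region

# Launch one instance with settings similar to the console walkthrough.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # hypothetical placeholder for a Red Hat AMI
    InstanceType="t2.micro",
    MinCount=1,
    MaxCount=1,
    SubnetId="subnet-4faa4b24",        # the subnet determines the availability zone
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "Name", "Value": "CCNA Cloud"}],   # illustrative tag
    }],
    BlockDeviceMappings=[{
        "DeviceName": "/dev/sda1",     # root device name varies per AMI
        "Ebs": {"VolumeSize": 10, "VolumeType": "gp2"},     # 10-GiB EBS volume
    }],
)
print(response["Instances"][0]["InstanceId"])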

Figure 2-8 displays my EC2 dashboard and the recently created instance.


Figure 2-8 EC2 Dashboard

By selecting the instance in the dashboard, it is also possible to verify all the details about the virtual server, including the image used and external access information (the public IP address is 54.193.67.163 and the name is ec2-54-193-67-163.us-west-1.compute.amazonaws.com).

Besides Amazon Web Services, many other cloud providers offer IaaS, such as Microsoft
Azure, Google, Rackspace, CenturyLink, Virtustream, IBM SoftLayer, and Dimension Data.

TIP One of the advantages of studying cloud computing is the fact that lab resources are
just a click or tap away (and may include a credit card charge). Therefore, I encourage you
to replicate the operations I execute in this chapter. If you were not previously familiar
with these cloud services, I assure you that these simple tasks will greatly contribute to your
learning experience.

Platform as a Service
Paraphrasing NIST SP 800-146, Platform as a Service (PaaS) is a cloud service that offers to
its consumers the capability to deploy their customized applications through cloud-provided
programming languages and tools.

Unlike IaaS, whose cloud providers are focused on the offer of (virtual or physical) hardware, a PaaS cloud service supplies a much more sophisticated environment for its consumers. To draw a fair comparison with IaaS, Figure 2-9 represents the division of responsibilities between provider and consumer in a PaaS component stack.

(Figure: PaaS component stack. The application layer is the consumer's responsibility; the infrastructure software, operating system, virtualization, and server/storage/network layers are the provider's responsibility.)

Figure 2-9 Platform as a Service Component Stack

In Figure 2-9, you can observe that in PaaS, the cloud provider fully renders all hardware,
the virtualization layer, the operating system, and the software infrastructure. PaaS consum-
ers can build applications that interact with this infrastructure, which may contain program-
ming languages, libraries, databases, authentication services, middleware, and other elements
that are required for software development.

The quintessential PaaS consumers are application developers, who traditionally do not
want to manage the underlying infrastructure (network, servers, operating systems, and stor-
age) that is required for their jobs, but still desire control over the deployed applications
and their configuration settings. Other PaaS consumers include

■ Application testers
■ Application publishers
■ Application administrators
■ Application end users

At heart, a PaaS cloud is similar to a traditional computing system, composed of hardware and software, which constitutes a platform that can be used for application development
and execution. Because PaaS represents an additional layer of software over IaaS, it is not
unusual to see IaaS cloud providers extending their portfolio to support PaaS. Through a
template composed of hardware resources and customized software, an IaaS cloud provider
can, for example, build a Java development platform consisting of two server instances with
loaded Java infrastructure software, one shared storage device, and a single network seg-
ment with access to the Internet.

In yet another situation, a PaaS cloud provider can support its consumers through the use of
a third-party IaaS-based cloud for their hardware fulfillment in the background.

Service charging in PaaS can use a wide range of metrics, such as total number of end users
(concurrent or over a period), successful requests serviced, dynamically allocated hardware
(processing, storage, or network), or simply the time the platform is in use.

Application developers traditionally employed integrated development environments (IDE) to carry out their daily tasks. An IDE usually contains a source code editor, automa-
tion tools, debuggers, programming language compilers or interpreters, and version control
systems, among other development tools. However, PaaS offerings leverage cloud character-
istics to compete against IDEs for the interest of application developers. Some advantages
of PaaS over IDEs are

■ Minimal software tool footprint: All a consumer needs is a web browser, rather than an
application installation in a workstation.
■ Resource allocation: A consumer can reserve an amount of computing resources to per-
form tests during the development.
■ Data management: Different tenants, which may be collaborating in the same software development project, can share data and use backup services from the cloud provider.

In addition to enabling developers to create and test applications in a relatively easy and
inexpensive way, the PaaS service model can also help during the deployment phase of an
application. With such intention, PaaS cloud providers typically offer automatic scaling of
hardware resources to enable these customized applications to function without issues dur-
ing peaks of user interest.

Also, according to the Cisco Global Cloud Index: Forecast and Methodology, 2014-2019,
PaaS had a relatively slower adoption when compared to other service models such as IaaS
in 2014. One of the justifications for this trend is the lack of portability between PaaS
clouds, mostly caused by proprietary tools, languages, runtimes, and interfaces. To alleviate
the fear of lock-in among developers, many PaaS cloud providers have adopted open stan-
dards as one of their strategic flagships.

NOTE You can find this report at http://www.cisco.com/go/gci.

NIST SP 800-146 calls attention to the delicate balance between isolation of consumers
and the efficiency a PaaS environment can achieve. To illustrate how this tradeoff can be
addressed within a cloud provider, Figure 2-10 depicts three PaaS isolation designs.

From left to right in Figure 2-10, the first design (shared process) represents the most
efficient approach because multiple consumers access the same platform process and data-
base. In this scenario, the process must control scheduling issues to prevent actions by one
consumer degrading the performance of another. However, a failure in any of the shared
resources can disrupt services for all consumers that are accessing the structure.

In the middle design (dedicated process), the cloud provider runs a separate process and
database for each consumer, which reinforces the separation between PaaS consumers with
the concession of more resources being spent per client.

(Figure: three PaaS isolation designs side by side: shared process, dedicated process, and virtualization, ordered from highest efficiency to highest isolation.)

Figure 2-10 PaaS Isolation Designs

Finally, the third approach (virtualized) depicts separate virtual servers as the isolation point between consumers. Although in this design the cloud provider is diminishing the efficiency of its infrastructure, it is certainly enforcing more isolation than the other designs, because a major failure of any software component (operating system, process, or database) cannot influence the environments of other consumers.

Regardless of the provider isolation design (or designs), consumers should always try to discover whether more hardened approaches are available in case the development environment is subjected to stress tests or put into production.

TIP Linux containers are yet another isolation feature that can be applied to PaaS (you
will find more details about this partitioning technique in Chapter 5). Additionally, cloud
providers can also leverage the concept of application containers to deploy the virtualized
isolation approach depicted in Figure 2-10 (refer to Chapter 7, “Virtual Networking Services
and Application Containers,” for further information about this concept).

Another point of attention for PaaS consumers is the security protection offered with the
cloud service. Because applications may access external resources, the PaaS cloud provider
must deliver tools to mitigate attacks and exploits in typical languages and protocols such as
HTTP, HTML, Java, XML, and Microsoft .NET.

Many PaaS cloud providers have taken steps to address these issues, and as a result, adoption of the PaaS model has increased among web application developers, with enterprise-class application development close behind.

PaaS Example: Microsoft Azure


Microsoft Azure currently is one of the main providers of PaaS cloud services in the world.
Figure 2-11 illustrates the variety of cloud services that are available in its main portal.


Figure 2-11 Microsoft Azure Portal

Besides PaaS, Microsoft Azure also supplies IaaS cloud services, including virtual machines
(with Windows and other operating systems), data services (including SQL databases and
other options of data storage), and virtual networks that allow cloud services to connect to
each other and to a customer premises.

Aligning the expertise from the large community of Microsoft developers with its own
innovation drive, Microsoft Azure offers a wide range of application development environ-
ments.

After selecting Web Apps in the portal shown in Figure 2-11, an extensive list of develop-
ment platforms becomes available, as Figure 2-12 displays.

For purposes of demonstration, at the wizard step shown in Figure 2-12, I chose to deploy an ASP.NET environment, essentially a Microsoft-developed open source web application framework for dynamic websites, web applications, and web services. Figure 2-13 depicts my site settings for this new service.

Having chosen the fitting name of ccnacloud for my application environment and the region where I want this service to be deployed (West US), I have concluded the settings for the service. Please observe that I could also have created a new App Service plan to enable automatic scaling in this ASP.NET environment.

A few seconds after I clicked the check symbol, the newly provisioned service was available in my Azure console, as shown in Figure 2-14.


Figure 2-12 Web Apps for Microsoft Azure

Figure 2-13 Settings for My ASP.NET Starter Page



Figure 2-14 ASP.NET Site Created

Figure 2-15 shows that the ASP.NET starter page is already online and ready for develop-
ment tasks.

Figure 2-15 Provisioned ASP.NET Starter Page

Back to the portal, after selecting the recently created service, Microsoft Azure enables
many customization options such as the addition of a new deployment slot, which is a copy
of the development environment that can be used for quality assurance or production, as
Figure 2-16 demonstrates.

Figure 2-16 ccnacloud ASP.NET Option

Besides ASP.NET, Microsoft Azure offers ready-to-go platforms such as Apache Tomcat,
BlogEngine.NET, HTML5, PHP, WordPress, and many others.

Competing with Microsoft Azure in the PaaS market, there are other eminent cloud provid-
ers such as Salesforce.com, Red Hat OpenShift, SAP, and Google.

Software as a Service
Software as a Service (SaaS) embodies cloud services whose consumers want access to
fully functional applications but do not want to manage or control the underlying hardware
or software infrastructure. According to the Cisco White Paper The Cloud Value Chain
Exposed: Key Takeaways for Network Service Providers, as of 2012, SaaS was already
widely adopted and had already disrupted approximately 25 percent of the enterprise appli-
cation market.

NOTE You can find the white paper at https://www.cisco.com/web/about/ac79/docs/sp/Cloud-Value-Chain-ExposedL.pdf.

SaaS cloud providers are similar in some respects to application service providers (ASPs),
which became popular in the 1990s, in that they offer applications to corporate and individual
users. However, unlike the large majority of ASPs, SaaS providers leverage essential cloud
characteristics to provide robust support, automated scalability, and native multitenancy.

Undoubtedly, SaaS is by far the most varied service model as it reflects the wide spectrum
of applications in IT. Appropriately, there are many ways for providers to charge for the
usage of SaaS cloud services, including by number of users (which is the most typical), total
period of use, successful requests serviced, bandwidth (for video-related applications), and
storage size.

Following the tradition established in the previous two sections, Figure 2-17 represents the dele-
gation of responsibilities between a SaaS cloud provider and a consumer in the component stack.

(Figure: SaaS component stack. The provider is responsible for the entire stack, from server/storage/network and virtualization up through the operating system, infrastructure software, and application.)

Figure 2-17 Software as a Service Component Stack

As Figure 2-17 reinforces, a SaaS cloud provider is completely responsible for the applica-
tion fulfillment (as well as its SLA), which must be robust and free of errors in order to offer
customers a level of performance similar to that of locally deployed software.

Similarly to PaaS, the main benefit of SaaS is that it has minimal requirements from users
(essentially web browsers). Additionally, SaaS offerings allow efficient use of software
licenses within the cloud provider because the number of server machines and desktops is
irrelevant in this service model.

Besides hardware and software infrastructure, a SaaS provider also hides from its users support concerns such as version management and data protection (backup). According to the aforementioned Cisco white paper, SaaS vastly simplifies the customization of enterprise applications for the multitude of mobile platforms and form factors. Using modern presentation technologies such as HTML5, SaaS services have achieved great success with collaboration applications as they can quickly include such devices.

SaaS also shares some of the drawbacks and concerns that affect PaaS, such as the lack of
portability between SaaS clouds and the compromise between isolation and resource effi-
ciency in SaaS deployments.

Although some best practices (such as the ones described in NIST SP 800-146) do not rec-
ommend deploying real-time and critical applications on SaaS clouds, some SaaS providers
are developing methods to overcome the effects of Internet latency, such as wide-area net-
work (WAN) accelerators and direct connections to the customer premises.

NOTE WAN accelerators, as well as other networking services, will be discussed in more
detail in Chapter 7.

SaaS Examples
SaaS cloud services abound. In fact, some of them existed before the term “cloud comput-
ing” was even coined, such as many of the web mail providers that were established in the
late 1990s.

Figures 2-18 and 2-19 show the interfaces of two prominent SaaS clouds, Google Docs and
Cisco WebEx.

Figure 2-18 Google Docs

Figure 2-18 displays the main web page from Google Docs, which provides free office pro-
ductivity tools such as text editors, spreadsheets, and presentation software.

As a cloud service, Google Docs can be accessed from any device or location, which brings
great advantages over traditional desktop applications. Its simplicity has motivated many
small and midsized companies to completely forego any internal infrastructure in favor of
the services offered by Google Docs and similar providers.

Figure 2-19 Cisco WebEx



Cisco WebEx is a very popular web conferencing SaaS application, offering on-demand
collaboration, video conferencing, and many other options. This service has been used to
schedule and conduct millions of meetings (without unnecessary commuting) and remote
training sessions with a great intercommunication experience among participants.

Other SaaS services include applications such as enterprise resource planning (ERP) solu-
tions, customer relationship management (CRM) software, blog tools, and many other
offers.

Curiously, many SaaS clouds use IaaS and PaaS services from other providers in the back-
ground for production and development purposes, respectively.

Around the Corner: Anything as a Service


The unprecedented popularity of cloud computing explains the “as a Service” fever that
has been spreading since the cloud hype began in the late 2000s. New cloud services are
launched each day, a few of which immediately attract the attention of millions of users,
while most others quickly fade into obscurity. The sheer number of offerings has created a
new role called cloud broker, which was briefly discussed in the section “Cloud Providers”
earlier in this chapter. In summary, a cloud broker is a third-party company or professional
that hires cloud computing services on behalf of a corporation. Commonly, this role offers
comparison information about different cloud providers as well as recommendations that
will better support the contractor’s business goals.

Interestingly, cloud brokerage can also be offered as a service, where a consolidated inter-
face offered to the consumer hides background requests to a multitude of cloud providers
and may even include additional services such as resource management and security.

As other services that are built with the combination of multiple cloud services continue to
gain traction in the cloud market, they directly challenge the IPS stack classification. There-
fore, informally, these mixed offerings have created another service model called Anything
as a Service (XaaS).

TIP You may also encounter some publications that refer to these offerings as Everything
as a Service.

Figure 2-20 exemplifies two XaaS cloud services.

(Figure: two XaaS examples. Desktop as a Service combines IaaS compute with SaaS office productivity software; Disaster Recovery as a Service combines IaaS resources with SaaS management software.)

Figure 2-20 XaaS Examples



The first example is called Desktop as a Service (DaaS), where the cloud consumer requests
a remotely accessible personal computer to carry out standard PC functions (such as web
browsing, document editing, and application execution). A DaaS provider can offer the
service through the combination of a computing instance provisioned via an IaaS cloud and
desktop software provisioned by one or more SaaS providers.
Figure 2-20 depicts another XaaS offering called Disaster Recovery as a Service (DRaaS),
which enables companies to hire a backup data center (to store data, run applications, and
receive end-user requests) in case they do not want, or simply cannot afford, the investment
necessary to build their own data center. In these scenarios, a SaaS provider can manage
resources in the customer data center as well as servers and storage deployed in an IaaS
cloud (owned by the same provider or another provider).

Other XaaS offerings include

■ Backup as a Service (BaaS): SaaS-provided backup software that can transparently use
storage from an IaaS cloud.
■ IP Telephony as a Service (IPTaaS): IP telephony control software is coordinated
through a SaaS cloud, while signaling servers are scaled in an IaaS cloud. Additionally,
the provider may offer a SaaS service to support IP telephony application development.
■ VPN as a Service (VPNaaS): Allows users to control bandwidth scaling and the deploy-
ment of features on their VPNs, including monitoring and security services. These modi-
fications can be simultaneously supported by SaaS-based management software and IaaS-
provided virtual servers deployed inside the customer premises.

Further Reading
■ “Want to hear Cisco’s POV on the top 5 questions about the Future of Cloud?” (Cisco
Blog): http://blogs.cisco.com/tag/xaas

Exam Preparation Tasks

Review All the Key Topics


Review the most important topics in this chapter, denoted with a Key Topic icon in the
outer margin of the page. Table 2-5 lists a reference of these key topics and the page num-
ber on which each is found.

Table 2-5 Key Topics for Chapter 2


Key Topic Element Description Page Number
Table 2-2 Specialized service providers 32
List SLA common aspects 34
Figure 2-3 Infrastructure as a Service component stack 36
Table 2-3 Virtualization types 37
Table 2-4 AWS IaaS offerings 40
Figure 2-9 Platform as a Service component stack 43
Figure 2-17 Software as a Service component stack 50

Complete the Tables and Lists from Memory


Print a copy of Appendix B, “Memory Tables” (found on the CD), or at least the section
for this chapter, and complete the tables and lists from memory. Appendix C, “Answers to
Memory Tables,” also on the CD, includes completed tables and lists so that you can check
your work.

Define Key Terms


Define the following key terms from this chapter, and check your answers in the glossary:

service provider, service-level agreement (SLA), Infrastructure as a Service (IaaS), virtualization, region, availability zone, Platform as a Service (PaaS), integrated development environment (IDE), Software as a Service (SaaS), cloud broker, Anything as a Service (XaaS)

This chapter covers the following topics:

■ Public Clouds

■ Risks and Challenges

■ Private Clouds

■ Community Clouds

■ Hybrid Clouds

■ Cisco Intercloud

■ Cisco Intercloud Fabric

This chapter covers the following exam objectives:

■ 2.1 Describe Cloud Deployment Models


■ 2.1.a Public
■ 2.1.b Private
■ 2.1.c Community
■ 2.1.d Hybrid

■ 2.2 Describe the Components of the Cisco Intercloud Solution


■ 2.2.a Describe the benefits of Cisco Intercloud
■ 2.2.b Describe Cisco Intercloud Fabric Services
CHAPTER 3

Cloud Heights: Deployment Models


As an information technology access model, cloud computing is certainly more malleable
than most computer technologies. In addition to having the flexibility to support diverse
service models (IaaS, PaaS, SaaS, XaaS), cloud computing enables designers of IT system
environments to respond to the skepticism and insecurity of prospective consumers by tai-
loring their environments to meet the consumers’ needs.

Like their atmospheric analogs, clouds can be “closer” or “farther” from their users through
different deployment models. In summary, this classification (which is independent of service model categorization) imposes usage restrictions in a cloud computing scenario to
address vulnerabilities caused by resource sharing and infrastructure implementations that
do not satisfy compliance standards.

There are four cloud deployment models: public, private, community, and hybrid. An orga-
nization needs to consider the benefits and drawbacks of each deployment model before
choosing to implement any of them. After all, the diversity of current service offerings
poses additional challenges for customers that desire to avoid provider or technology lock-
in. Addressing such challenges, Cisco and an entire ecosystem of partners have brought to
reality the concept of the Intercloud, through an open and simple foundation technology
called Cisco Intercloud Fabric.

The CLDFND exam requires candidates to have basic knowledge about these four deploy-
ment models, Cisco Intercloud, and Cisco Intercloud Fabric. To familiarize you with each,
this chapter portrays a journey multiple organizations have taken in their cloud adoption
process. Duly, it demonstrates how the hindrances of each deployment model have led to
the development of new models, in a progression that outlines the rich landscape of cloud
service offerings that we contemplate today.

“Do I Know This Already?” Quiz


The “Do I Know This Already?” quiz allows you to assess whether you should read this
entire chapter thoroughly or jump to the “Exam Preparation Tasks” section. If you are in
doubt about your answers to these questions or your own assessment of your knowledge
of the topics, read the entire chapter. Table 3-1 lists the major headings in this chapter and
their corresponding “Do I Know This Already?” quiz questions. You can find the answers in
Appendix A, “Answers to Pre-Assessments and Quizzes.”

Table 3-1 “Do I Know This Already?” Section-to-Question Mapping


Foundation Topics Section Questions
Public Clouds 1
Risks and Challenges 2–3
Private Clouds 4
Community Clouds 5
Hybrid Clouds 6–7
Cisco Intercloud 8
Cisco Intercloud Fabric 9–10

1. Which of the following represents the deployment models described by NIST?


a. Public, private, hybrid
b. SaaS, IaaS, PaaS
c. Private, public, community
d. On-premise, off-premise, managed
e. Public, private, community, hybrid

2. Which option best describes “shadow IT”?


a. Hackers accessing public cloud resources using the identifications of employees
of an organization
b. Employees attacking resources from competitors that are sharing resources from
the same public cloud
c. Employees from an organization deploying resources in a cloud without the
knowledge of the IT department
d. Employees of a cloud provider accessing customer data
e. Denial-of-service attacks to slow performance of applications deployed on public
clouds

3. Which of the following cost risks can be associated with public cloud usage? (Choose
all that apply.)
a. Lack of forecasting modeling
b. Workload sprawl
c. Application performance issues
d. CAPEX model

4. According to NIST, what is the definition of private cloud?


a. A cloud deployment provisioned for exclusive use by a single organization
b. A cloud deployment managed by a single organization
c. A computing deployment located inside a single organization’s data center
d. A cloud computing deployment managed and used by a single organization
e. A cloud computing deployment managed, used by a single organization and also
located at the same organization’s data center
5. Which of the following options contains only regulatory compliance standards?
a. PCI DSS, FISMA, NIST
b. HIPAA, PCI DSS, SOX
c. IEEE, IETF, ANSI
d. ANSI, FedRAMP, Basel
e. SOX, Intercloud, HIPAA

6. What is “cloud bursting”?


a. A cloud deployment exhausts its infrastructure resources.
b. An organization can provision public cloud services to use during periods of
stress of its internal IT resources.
c. Two public cloud providers work in conjunction to load balance requests from a
consumer.
d. A private cloud can transform physical workloads into virtual workloads.

7. Which of the following represent challenges of hybrid cloud implementations?


(Choose all that apply.)
a. Inconsistent cloud architectures
b. Incompatible networking and security policies
c. Lack of encryption standards
d. Requirement for application reconfiguration when an application is migrated
from one cloud to another
e. Few service offerings

8. Which of the following are considered components of the Cisco Intercloud? (Choose
all that apply.)
a. Private clouds
b. Public clouds
c. Cisco Powered Partner Clouds
d. Cisco Intercloud Services

9. Which of the following is correct about Cisco Intercloud Fabric? (Choose all that
apply.)
a. It is agnostic to server virtualization technology.
b. It provides encryption only for traffic that is traversing the Internet.
c. It does not allow migration of workloads toward a private cloud.
d. It has business and provider complementary solutions.

10. Which of the following is not considered a service of Cisco Intercloud Fabric?

a. VM portability
b. Hybrid cloud management and visibility
c. Cloud networking
d. Community cloud
e. Cloud security

Foundation Topics

Public Clouds
In Chapter 2, “Cloud Shapes: Service Models,” you learned about the cloud service models
(Infrastructure as a Service, Platform as a Service, and Software as a Service), which basically
classify cloud providers according to the type of service they offer. Chapter 2 explained
cloud deployments that are intended to offer services to any user connected to the Inter-
net, citing examples such as Amazon Web Services, Microsoft Azure, Google, and Cisco WebEx.

According to NIST Special Publication 800-145, a public cloud is the “cloud infrastructure
provisioned for open use by the general public. It may be owned, managed, and operated
by a business, academic, or government organization, or some combination of them. It
exists on the premises of the cloud provider.”

Figure 3-1 represents a public cloud.


Figure 3-1 Public Cloud

Invoking our atmospheric metaphor, public clouds would correspond to the highest cloud
types (which are cirrocumulus, cirrus, and cirrostratus). Fittingly, they can cast a bigger area
of shadow (and, therefore, cover a higher number of users) when compared to other cloud
types (or deployment models).

A public cloud typically is deployed by a service provider with global reach and an extreme-
ly easy service engagement. This deployment model is so pervasive that some people are
even unaware that other deployment models exist. Of course, there are other possible cloud
computing implementation scenarios (private cloud, community cloud, and hybrid cloud).

For several reasons, as explained in the following sections, a large number of organizations
consider the public cloud deployment model inadequate (or even impossible) for their busi-
ness objectives and thus have adopted one of the other cloud deployment models.

Risks and Challenges


Public clouds perfectly embodied the advantages of cloud computing during its late-2000s
hype. But paradoxically, the broad exposure of public clouds has discouraged some compa-
nies from seizing these benefits due to many risks that are intrinsic to the deployment model.

To better explain such risks (and operational challenges), allow me to propose a role-playing
game for you: in the next three sections, you will be the Chief Information Officer (CIO) of
a fast-growing company that is on the verge of adopting cloud services to increase IT agility.
To safely promote the cloud revolution in the organization, you decided to hire a consult-
ing firm to accurately assess the risks involved with embracing a public cloud. An experi-
enced consultant from the firm is ready to present his assessment to your team through
three distinct categories: security, control, and cost risks.

Security
Probably the most visceral reaction toward public clouds comes from potential consumers
who are worried about the inherent vulnerabilities of such environments. As CIO, you are
all too aware of this preoccupation because many company systems have suffered attacks
during the last year, making security the highest priority in IT in the current fiscal period.
The company CEO has put it in blunt terms: “I do not want to lose more money due to lack
of preparation against these hacker punks!”

Aware of this situation, the consultant presents the slide depicted in Figure 3-2 to your
team to outline the security risks and challenges that your company may encounter if it
decides to use public cloud services.


Figure 3-2 Public Cloud Security Challenges

Table 3-2 summarizes the risks explained by the consultant.

Table 3-2 Public Cloud Security Risks


Data loss: In the case of an outage or major hardware failure in the cloud deployment, corporate data may be completely lost.

Data breaches: Sensitive company data may be accessed within the cloud provider or via Internet attacks.

Malicious insiders: Although they deploy highly automated environments, cloud providers still have to rely on employees, who are subject to human motivations and malfeasances.

Insecure interfaces: Because a public cloud portal must be exposed via the Internet, common attacks on standard protocols and languages may disrupt cloud services and disclose confidential data, including cloud user accounts and passwords.

Account or traffic hijacking: A cloud user account and password can be obtained through traffic analysis or social engineering. Unfortunately, many companies lack security policies and enforcement regarding the sharing of critical information among employees.

Shadow IT: If employees from your company deploy resources in a cloud without the knowledge of the IT department, confidential data may be wrongly stored in a public space and business applications may not receive the appropriate service level.

After describing each risk, the consultant mentions that many public cloud providers have already addressed some or all of these issues (using localization services, automated data backup, and encryption for data at rest and in motion). Nevertheless, he points out that it is your responsibility to question these providers and analyze their security tools before making any decision.

Control
Through the increasingly thicker fog of discomfort in the room, the consultant continues
his presentation by explaining that the adoption of public cloud services may also incur
resource control risks when compared to traditional IT management. These risks are dis-
played in Figure 3-3, another slide from the consultant’s presentation.


Figure 3-3 Public Cloud Control Challenges

According to the consultant, you should be aware of the control challenges listed and
described in Table 3-3.

Table 3-3 Public Cloud Control Challenges


Data location: Due to compliance issues or national security, some kinds of data must be stored only in data center facilities located within allowed countries.

Elasticity control: The ease of provisioning in a cloud may encourage indiscriminate use of public cloud services, where resources can be inefficiently scaled up and, consequently, generate exaggerated costs.

Service admission: An administrative account may issue requests for specific public cloud services that are not authorized by the company IT department.

Performance monitoring: It does not matter if cloud resources are correctly provisioned if business-related applications are not working according to a predefined service-level agreement with a cloud provider.

End-to-end management: With many different lines of business (LoBs) and departments from your company generating requests for public cloud resources, it may be very easy for your IT department to lose track of the overall use of the public clouds of choice.

Again, the consultant mentions that many public cloud providers have developed tools, and
even other services, to address these challenges, including choice of region, elasticity limits,
role-based access control policies, application performance dashboards, as well as integra-
tion with traditional management systems.

Cost
The consultant next explains that the lack of control invariably leads to excessive expenses,
as he summarizes in the slide shown in Figure 3-4. Table 3-4 further describes the risks
depicted in Figure 3-4.


Figure 3-4 Public Cloud Cost Risks

Table 3-4 Public Cloud Cost Risks


Hidden costs: Although most cloud providers are fairly explicit about their charges, many users do not pay attention to clauses that are not directly linked to the desired service, such as the amount of bandwidth used and decommission costs.

Service proliferation: Without proper control of deployed resources, a company may inadvertently allow sprawl of cloud services that are barely used (but properly charged).

Loss of revenue: Poor application performance or outages can cause loss of revenue for organizations deploying critical business applications in public clouds. And worse, such problems may irreparably damage the company image to its customers.

Cost modeling and forecasting: Many organizations keep track of their IT resource requirements and, consequently, can produce accurate forecasts of their needs for the near future. However, some public cloud providers do not have tools that allow the correct calculation of future costs according to this data.

Business focus: CIOs obviously want public cloud services that align well with their company business objectives. Notwithstanding, some CIOs are so eager to adopt these services that they dismiss simple cost-benefit analyses in favor of “fashion IT.” Consequently, although the original motivation to use public cloud resources may be to reduce acquisition costs, they may result in excessive costs for the organization.

Sensing the overwhelming anxiety filling the room after his presentation, the consultant
adds that many cloud providers are fully aware of these risks and have deployed counter-
measures for each one of them, such as credit-based charging, stricter SLAs, cost forecast
tools, and customized services.
Finally, he proposes a serious study of other deployment models that may be considered more adequate for your company's strategic objectives.

NOTE You can find more details about risks and threats associated with cloud computing at https://cloudsecurityalliance.org/group/top-threats/.

Private Clouds
Most of the public cloud risks and challenges discussed in the previous section are fully
addressed via private clouds. According to NIST SP 800-145, this cloud deployment model
is defined as one in which “the cloud infrastructure is provisioned for exclusive use by a
single organization comprising multiple consumers (e.g., business units).” Consequently,
using the atmospheric cloud comparison, private clouds would correspond to low clouds
such as cumulus, stratus, cumulonimbus, and stratocumulus (which cast a smaller shadow
over the earth’s surface).
The primary purpose behind a private cloud is to completely isolate the cloud components
from other organizations, empowering a company to consume cloud services with superior
security, tighter control, and more manageable costs.
Figure 3-5 represents a private cloud providing services to its lone consumer organization.

Figure 3-5 Private Cloud



As Figure 3-5 depicts, private cloud resources simply are not available for public use. In
general, the employees of the served organization receive (or reuse) credentials to request
cloud resources. And, of course, these services are provided according to the essential cloud
characteristics (on-demand self-service, elasticity, resource pooling, broad network access,
and metering) that were extensively discussed in Chapter 1, “What Is Cloud Computing?”

The large majority of organizations that deploy a private cloud designate internal employees
to design, build, and support the company’s private cloud. Therefore, the private cloud is
usually (but not always) implemented on premises, meaning in a location that belongs to
the corporation. Unfortunately, as I have witnessed many times, such projects may eventual-
ly become overwhelming to an already overloaded IT department. And, as you have learned
in Chapter 2, a cloud computing implementation demands a certain level of service provider
competence that many organizations may simply lack.

Hence, alternatively, a private cloud consumer may hire a third-party company to fully
manage the cloud deployment. Moreover, a private cloud does not have to be provisioned
within a facility owned by the organization. In fact, as the NIST definition of private cloud
states, it “may be owned, managed, and operated by the organization, a third party, or some
combination of them, and it may exist on or off premises.” What really differentiates a pri-
vate cloud from other deployment models in NIST’s definition is that the private cloud is
restricted to use by a single corporation.

Interestingly, Amazon Web Services and other public cloud providers can deploy a service
called Virtual Private Cloud (VPC), which emulates a private cloud within a public cloud
environment. Commonly used in IaaS, a VPC isolates resources for a cloud tenant from
other users through a private IP subnet and a network segment.

VPCs may potentially entail the security, control, and cost risks that a private cloud project
is primarily trying to avoid. In summation, the decision to use a VPC (rather than a proper
private cloud) depends on the hardware and software isolation services the public provider
can offer, which will dictate how secure, manageable, and cost-effective this virtual con-
struct actually is.
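
To make the isolation mechanism concrete, here is a minimal boto3 sketch that carves out a VPC with its own private IP space and a subnet inside it. It assumes valid AWS credentials, and the CIDR blocks are illustrative.

import boto3

# Create a logically isolated VPC and a subnet within it (CIDR blocks are illustrative).
ec2 = boto3.client("ec2", region_name="us-west-1")
vpc = ec2.create_vpc(CidrBlock="10.0.0.0/16")["Vpc"]
subnet = ec2.create_subnet(VpcId=vpc["VpcId"], CidrBlock="10.0.1.0/24")["Subnet"]
print(vpc["VpcId"], subnet["SubnetId"])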

But as Thomas Aquinas has presumably said, every choice is a renunciation. Likewise, an
organization must accept a compromise when it opts for the safeness of a private cloud
instead of the flexibility of the public cloud.

In a nutshell, the following public cloud benefits may be lessened (or even eliminated) in
private cloud deployments:

■ Broad network access: To achieve the highest level of network isolation, an organization
usually deploys private connections between internal users and private cloud resources.
Depending on how much such resources scale, this private connection may quickly
become a bottleneck.
■ OPEX model: In most private cloud projects, all resources must be acquired before the
cloud can be used, reinstating the CAPEX model in this deployment model.
■ Elasticity: The CAPEX model inherently defines an upper limit for scalability of private
cloud resources. Consequently, whoever is managing the private cloud must maintain
systematic monitoring of the resource usage in the infrastructure.

Currently, there are many private cloud offerings available in the market, some of the most
popular of which are as follows:

■ Cisco ONE Enterprise Cloud Suite


■ Microsoft Windows Azure Pack
■ VMware vCloud Suite
■ OpenStack (open source)

NOTE Both the Cisco ONE Enterprise Cloud Suite and OpenStack architectures will be
discussed in more detail in Chapter 4, “Behind the Curtain.”

Community Clouds
For organizations that depend on a high degree of collaboration with other organizations, a
private cloud may simply be too restrictive; after all, only one organization can access it. On
the other end of the spectrum, public clouds may not provide an acceptable level of isola-
tion from entities outside of the collaborators’ circle of trust.

To establish a middle ground between private and public clouds, another cloud deployment
model was created. Thus, as defined in NIST SP 800-145, a community cloud corresponds
to one in which “the cloud infrastructure is provisioned for exclusive use by a specific com-
munity of consumers from organizations that have shared concerns (e.g., mission, security
requirements, policy, and compliance considerations).”

Because they are “lower” than public clouds and “higher” than private clouds, commu-
nity clouds can be related to mid-level clouds (altostratus, altocumulus, and nimbostratus)
according to the weather classification system. Figure 3-6 graphically represents the com-
munity cloud deployment model.


Figure 3-6 Community Cloud



Working as a more inclusive private cloud, a community cloud may be owned, managed,
and operated by one or more of the member organizations or by an external party. Further-
more, a community cloud may be hosted within one organization from the community or
on an off-premises site. As with NIST’s definition of a private cloud, the important point
that differentiates a community cloud is who can access the cloud deployment, not how or
where it is deployed.

Regulatory compliance standards are considered one of the most powerful motivations for
building community clouds. Such standards ultimately require the adherence of an organiza-
tion to laws, regulations, guidelines, and specifications that are important for its industry
and whose violations may result in legal consequences or dismissal from a community.

To further illustrate this concept, Table 3-5 lists and describes some common examples of
regulatory compliance standards.

Table 3-5 Examples of Regulatory Compliance Standards


Payment Card Industry Data Security Standard (PCI DSS): Compliance rules that apply specifically to organizations that handle credit cards. In essence, PCI DSS was conceived to protect customer data in an attempt to reduce credit card fraud.

Health Insurance Portability and Accountability Act (HIPAA): Among other topics, HIPAA orders the establishment of national standards for electronic transactions and national identifiers for health care providers, health insurance plans, and employer organizations.

Federal Information Security Management Act (FISMA): United States federal law that acknowledges the importance of information security to the economic and national security interests of the country.

Sarbanes-Oxley Act (SOX): Named after sponsor senators Paul Sarbanes and Michael G. Oxley, this United States federal law establishes a set of additional requirements for public company boards, management, and public accounting firms. In effect, it covers responsibilities of the board of directors of a public organization and defines criminal penalties for out-of-compliance operations.

Basel Accords: Banking supervision recommendations on regulations that were issued by the Basel Committee on Banking Supervision (BCBS).

Federal Risk and Authorization Management Program (FedRAMP): U.S. government-wide program that provides a standardized approach to security assessment, authorization, and continuous monitoring for cloud products and services. Starting in 2012, FedRAMP began to provide guidance to government and corporate organizations with the objective to reduce duplicate efforts, increase efficiencies, and remove security inconsistencies between government agencies.

Most of these standards require periodic auditing reviews from independent parties to
verify the organization’s compliance. Several of them have repercussions pertaining to IT
systems and how data is managed, so cloud environments and their risks are also taken into
account in such reviews.

Because some of these standards simply rule out the use of public cloud services for their
business applications and data, some cloud providers have developed community clouds
that fully comply with specific regulations. Examples include community cloud implemen-
tations such as AWS GovCloud, Capital Markets Community Platform (NYSE), and Health-
care Community Cloud (Carpathia).

Hybrid Clouds
As described in the prior section, community clouds are suitable for a relatively small number of companies from industries represented by common interests and compliance standards. For a while, that left all other organizations interested in cloud computing to contemplate the choice between a private cloud and a public cloud, as outlined in Table 3-6.

Table 3-6 Private and Public Clouds Compared


Public cloud: Advantages include the OPEX model, scale, and high accessibility; disadvantages include shared resources, weaker security, and weaker control.

Private cloud: Advantages include dedicated hardware, stronger security, and customizability; disadvantages include the CAPEX model, lower scalability, and standardization.

But what if an organization did not have to choose? Such inquiry inspired the creation of
yet another cloud deployment model, a more flexible and all-embracing archetype called
hybrid cloud.

Figure 3-7 depicts an example of such a model. In this hybrid cloud implementation, a pri-
vate cloud is securely connected to a public cloud, with both of them simultaneously pro-
viding services to the same organization.


Figure 3-7 Hybrid Cloud Example

Using NIST’s more formal definition, a hybrid cloud infrastructure represents “a composi-
tion of two or more distinct cloud infrastructures (private, community, or public) that
remain unique entities, but are bound together by standardized or proprietary technology
that enables data and application portability (e.g., cloud bursting for load balancing between
clouds).” As a direct consequence, hybrid cloud deployments are not restricted to private-
public bindings, allowing all other possible combinations (private-private, private-community,
community-public, and so forth).

TIP Because they cover both high and low altitudes, cumulonimbus with anvil top clouds
would be the atmospheric analog of hybrid clouds.

Using a hybrid cloud, a consumer can decide to provision application resources in the public cloud during periods of stress in the private cloud or even relocate some internal workloads to the public part of the hybrid cloud. Both situations exemplify a hybrid cloud feature that is commonly referred to as cloud bursting.
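At its core, the bursting decision is a capacity-threshold check. The following minimal Python sketch illustrates the placement logic a hybrid cloud orchestrator might apply; the threshold value and function name are hypothetical illustrations, not taken from any Cisco product:

BURST_THRESHOLD = 0.80  # assumed policy: burst above 80% private cloud utilization

def choose_placement(private_used_ghz: float, private_total_ghz: float) -> str:
    """Return the cloud where the next workload should be provisioned."""
    utilization = private_used_ghz / private_total_ghz
    if utilization >= BURST_THRESHOLD:
        return "public"   # burst: use the public part of the hybrid cloud
    return "private"      # normal operation: keep the workload on premises

# Example: 86 of 100 GHz already in use, so the next request bursts
print(choose_placement(86.0, 100.0))  # prints "public"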

Undoubtedly, hybrid clouds enable organizations to seize the best features of each deploy-
ment model. For example, an employee can use a private cloud to deploy fixed workloads
(and consequently leverage the control, security, and data sovereignty from this structure)
while implementing elastic workloads in the public cloud to benefit from its OPEX charging
model, provisioning speed, and superior scalability.
Due to its considerable flexibility, the hybrid cloud deployment model has gained a lot of
attention from enterprise and public corporations. As reported in the Cisco document Cisco
Intercloud Fabric: Hybrid Cloud with Choice, Consistency, Control and Compliance,
according to the results of Forrester Consulting research commissioned by Cisco in 2012,
76 percent of the 69 IT decision makers surveyed planned to implement hybrid clouds,
using IaaS to complement on-premises servers and storage and burst peak workloads, among
other use cases.

Nevertheless, as seasoned network engineers can attest, a harsher reality underlies any initia-
tive that intends to bring two different structures together. Basically, the main challenges
faced in hybrid cloud deployments originate from disparate technologies and standards that
are used in each of the linked cloud deployments. More specifically, hybrid clouds may fail
to integrate infrastructures with distinct

■ Security architectures
■ Encryption algorithms
■ Networking technologies
■ Application characteristics
■ Visibility methods and tools

Fortunately, a very well-known networking company rose to the challenge of integrating heterogeneous cloud deployments in a concept that parallels the foundation of the Internet.

Cisco Intercloud
Cloud adoption certainly isn’t the only technological dilemma IT departments of the world
confront on a daily basis. Many CIOs have been challenged to quickly support business
objectives through new concepts such as

■ Bring your own device (BYOD): Describes the growing trend among organizations to
allow their employees to securely connect mobile devices (including personal comput-
ers, smartphones, tablets, and so forth) to the organization’s network to access data and
applications.
■ Internet of Things (IoT): Inspired by the realization that more than 99 percent of physi-
cal objects are not connected to the Internet, this technological approach intends to
deploy sensors and software in miscellaneous devices to collect their data and exercise
intelligent control over their functions.
■ Big data: This term loosely refers to data processing solutions that can handle, manage, and analyze the humongous amount of data generated by the widespread explosion of mobility, social media, IoT, and other modern technological trends.

Remarkably, the freedom of choice that hybrid clouds offer to organizations tends to
assuage the stress these new trends can impose on a traditional data center infrastructure.
Fundamentally influenced by this perception, Cisco launched its Intercloud strategy in
2014.

The Cisco Intercloud can be viewed as a genuine “back to the roots” approach to the chal-
lenging world of many clouds we are facing today, as Figure 3-8 demonstrates.


Figure 3-8 The Internet and Cisco Intercloud

During its adolescence in the 1990s, Cisco quickly rose to prominence by becoming the
main network manufacturer of the Internet’s backbone, providing the blocks that built a
single infrastructure capable of connecting multiple isolated and heterogeneous networks.
The left side of Figure 3-8 depicts the overall structure of the Internet.

As exhibited on the right side of Figure 3-8, the objective of the Cisco Intercloud is to
provide the foundation to integrate multiple isolated and heterogeneous clouds to finally
unlock the vast potential of the hybrid cloud.

The Cisco Intercloud strategy is firmly based on a partnership-centric approach to hybrid clouds, which is represented in Figure 3-9.

Figure 3-9 The Cisco Intercloud Strategy

As you can see in Figure 3-9, the Intercloud represents an amalgam of cloud deployments
that includes enterprise private clouds, public clouds (such as Amazon Web Services and
Microsoft Azure), Cisco Powered clouds from Cisco partners, and the company’s own pub-
lic cloud (Cisco Intercloud Services).

The benefits of Cisco Intercloud can be summarized by the principles described in Table 3-7.

Table 3-7 Cisco Intercloud Principles

Choice of consumption models: The "one-stop shop" approach definitely does not satisfy the understandable desire of many companies to choose services from different cloud providers. Cisco Intercloud frees organizations to provision and agnostically manage cloud services from its private cloud technology and chosen public infrastructure. Additionally, Cisco Powered cloud partners can act as cloud brokers for the offerings provided by the Cisco Intercloud Services or even other members of the Intercloud.

Intercloud infrastructure: As IP routers were the foundation of the Internet, the basis of the Cisco Intercloud infrastructure is Cisco Intercloud Fabric (ICF), which essentially promotes integration between distinct cloud providers. ICF will be discussed in more detail in the next section.

Intercloud applications: The Cisco Intercloud enables an easy and secure blending of on-premises applications and public cloud applications through a rich ecosystem of SaaS and PaaS member cloud providers.

Interoperability and open standards: Cisco is committed to maintaining an open approach to the Intercloud, providing flexibility for customer hybrid cloud projects that require a high level of interoperability with traditional data center technologies and public cloud offerings.

Security: When moving critical data to a public cloud, an organization needs to maintain control over its data privacy, location, and compliance with regulations. Having a broad choice of cloud providers can help companies to achieve end-to-end security that spans from their internal network to cloud services they may have hired. Cisco Intercloud accomplishes this objective through agile security policies that remain consistent regardless of the chosen provider.

Ultimately, Cisco Intercloud aims to empower IT departments to keep tight control of pro-
visioned resources in an assorted number of cloud structures from different deployment
models and service models.

Cisco Intercloud Fabric


As mentioned in the previous section, Cisco Intercloud Fabric (ICF) is the basis of the Cisco Intercloud infrastructure; it can be considered the piece that glues the Cisco Intercloud together. ICF consists of a software stack that enables the centralized control of hybrid cloud resources; Figure 3-10 clarifies the interactions Cisco Intercloud Fabric has with other cloud structures.

Figure 3-10 Intercloud Fabric Interactions



As you can observe in Figure 3-10, the Cisco Intercloud Fabric solution is split into two
complementary parts: Intercloud Fabric for Business, which is designed to run on a cor-
poration’s private cloud, and Intercloud Fabric for Providers, which is installed on cloud
providers that are members of the Cisco Intercloud ecosystem. Cisco Intercloud Fabric for
Business also interoperates with public clouds such as Amazon Web Services and Microsoft
Azure through their published application programming interfaces (APIs).

TIP You will learn about APIs in more detail in Chapter 4. At this point you can think of
APIs simply as the sets of functions, variables, and data structures that enable software com-
ponents to communicate with each other.

Because Cisco Intercloud Fabric for Business also interacts with the server virtualization
layer from a traditional data center infrastructure, the solution actually does not require a
proper private cloud deployment within such a domain. To facilitate its adoption by organi-
zations without a private cloud, Intercloud Fabric for Business can interoperate (at the time
of this writing) with multiple server virtualization technologies, including VMware vCenter,
Microsoft Hyper-V, and Linux KVM (with OpenStack).

TIP OpenStack and server virtualization will be discussed in Chapter 4 and Chapter 5,
“Server Virtualization,” respectively.

Intercloud Fabric Architecture


The Cisco Intercloud Fabric is a pure software solution, whose internal components are
shown in Figure 3-11.

Figure 3-11 Cisco Intercloud Fabric Architecture

As the central point of management and control for Intercloud Fabric resources, the Cisco
Intercloud Fabric Director (ICFD) offers a graphical user interface that can be accessed by
end users and IT administrators. As an administrator, the following are the most common
tasks you would perform on Cisco ICFD:

■ Establish management connections to server virtualization control software (commonly called VM [virtual machine] managers) from your organization, to provide the raw product for cloud-bursting operations
■ Configure the secure connection between a public cloud and the enterprise private cloud
■ Add and manage end users
■ Configure policies that govern workload placement between the enterprise and each
public cloud
■ Customize portal branding
■ Monitor hardware capacity and utilization per user
■ Create service catalogs to enable end users to provision and manage workloads in the
cloud
■ Configure virtual server templates and images, as well as end users’ access to them

Another ICF component, the Cisco Intercloud Extender (ICX), provides encryption for all
data headed toward public clouds through a secure tunnel, which essentially encapsulates
encrypted original Ethernet frames into IP packets. To ensure data confidentiality, ICX
employs a protocol called Datagram Transport Layer Security (DTLS), which supports
128- and 256-bit encryption algorithms.
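Conceptually, the ICX operation is "encrypt the original Layer 2 frame, then carry the ciphertext as the payload of an IP packet toward the provider." The short Python sketch below imitates that idea with 256-bit AES-GCM from the third-party cryptography package; it only illustrates the frame-in-IP concept and is not the DTLS implementation actually used by Intercloud Fabric:

import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)   # 256-bit key, as ICX supports
aesgcm = AESGCM(key)

def encapsulate(ethernet_frame: bytes) -> bytes:
    """Encrypt an Ethernet frame and return the (mock) tunnel payload."""
    nonce = os.urandom(12)                   # unique per encapsulated frame
    ciphertext = aesgcm.encrypt(nonce, ethernet_frame, None)
    return nonce + ciphertext                # carried as UDP payload inside IP

frame = bytes.fromhex("001122334455") + b"rest of the original frame"
payload = encapsulate(frame)                 # opaque to any observer in transit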

To permit connectivity between resources distributed across distinct clouds, ICX must
access virtual local-area networks (VLANs) in the private cloud through the connection with
a virtual switch such as Cisco Nexus 1000V.

TIP Multiple virtual switches, including Cisco Nexus 1000V, will be described in further
detail in Chapter 6, “Infrastructure Virtualization.”

Whereas ICX encrypts and encapsulates Ethernet frames exiting the private cloud toward the Internet, the Cisco Intercloud Switch (ICS) performs the reverse operations in the provider cloud, decrypting traffic received from ICX and making the forwarding decisions usually associated with a physical switch (for example, sending a frame to a virtual machine). Furthermore, ICS is also responsible for establishing internal DTLS secure tunnels to any public cloud resource managed by Intercloud Fabric, encrypting frames again before they are sent to such resources. With such an arrangement, the Intercloud Fabric solution never exposes clear-text traffic outside the confines of the private cloud.

As examples of ICF resources deployed in public clouds, the solution offers integrated security and routing services in public cloud providers through the Virtual Security Gateway (VSG) and the Cloud Services Router (CSR), respectively. These services, along with all the components of a secure connection between a private cloud and a public cloud, can be monitored in ICFD, as Figure 3-12 delineates.


Figure 3-12 Secure Connection in Intercloud Fabric Director

In Figure 3-12, an administrative account (admin) verifies that a secure tunnel is working
between an ICX (whose IP address is 198.18.133.100) and an ICS (198.18.4.105) located in a
public cloud called dCloud-Provider and created through a public cloud account creatively
called dCloud-Provider. Both ICX and ICS are marked as primary because ICFD can poten-
tially deploy redundant instances for both components. For the sake of simplicity, these
additional elements were not configured in this setup.

Also, both VSG and CSR are provisioned in the public cloud to offer firewall and routing
services to instances that will be running in this environment.

Intercloud Fabric Services


As previously mentioned, ICFD supports the creation of customized portals that allow end users to individually manage their resources. As an example, Figure 3-13 represents one view from a portal accessed by an end user imaginatively called user.

In the portal shown in Figure 3-13, an end user can manage virtual machines App_VM,
WebServerA, and Web_VM that are running in the private cloud (dCloud-DC) and, respec-
tively, using IP addresses 198.18.5.100, 198.18.6.101, and 198.18.6.100.

Figure 3-14 exhibits the catalog that administrative users have designed for this specific end
user.


Figure 3-13 ICFD User Virtual Machines

Figure 3-14 ICFD User Catalog

In the portal portrayed in Figure 3-14, there is only one service that the end user can invoke
(although other services certainly could have been configured). Unsurprisingly, “App Server in
dCloud Provider” allows the creation of an application server in the public cloud provider.

After double-clicking the service, the end user initiates a service request that can query custom information, according to what was defined by the ICFD administrator. In this scenario, the user can change only the description of the soon-to-be-deployed application server, as you can observe in Figure 3-15.


Figure 3-15 App Server Summary

To fulfill the user's request for an App Server in the public provider, ICFD starts to interact with elements from both sides of the secure connection to carry out the creation of the application server. These procedures are summarized in Figure 3-16.

Figure 3-16 Service Request Status



After ICFD correctly executes all processes described in the service workflow, the application server is up and running, as Figure 3-17 shows.


Figure 3-17 Application Server Provisioned in Provider Cloud

With an application server deployed in the provider cloud, physical and virtual machines in the enterprise can access it. More specifically, because they share the same IP subnet (198.18.5.0/24), App_VM and AppServer-13 are connected to the same VLAN (701, which is not shown in Figure 3-17).

Such Layer 2 extension to public clouds is a valuable differentiator for ICF when compared to other hybrid cloud solutions. By using standard VLANs from the enterprise in public clouds, Intercloud Fabric greatly simplifies the insertion of cloud-bursted instances into the security zones that are internally associated with these VLANs in corporate environments.

Moving forward, the end user decides to migrate a virtual machine from the organization’s
private cloud to the public cloud provider. This operation is intuitively started through the
selection of the virtual machine in ICFD, as you can see in Figure 3-18.


Figure 3-18 Migrating WebServerA to Provider

Figure 3-18 shows that after selecting a VM, a Migrate VM To Cloud button is available to initiate the migration process, which carries out the following steps (sketched in code after the steps):

Step 1. The virtual machine is powered off and the Intercloud Fabric driver is added to
its image.

Step 2. The image is converted to the public cloud format (such as AMI in Amazon
Web Services), and its content is uploaded to a server instance in the public
cloud.

Step 3. The instance is initialized in the public cloud with ICFD managing it.

Figure 3-19 depicts how this development can be monitored in Intercloud Fabric Director.

Figure 3-19 Migration Service Request Status

After some minutes, the final result is displayed (see Figure 3-20): WebServerA is already
running on the public cloud provider (dCloud-Provider).

Figure 3-20 After WebServerA Is Fully Migrated

Figure 3-20 shows that the VM IP address (198.18.6.101) did not change with the migration, which requires that its original VLAN (702, which is not exhibited in Figure 3-20) also be present in the public cloud to maintain connectivity with the resources in the private cloud.

TIP The selection of WebServerA enables the reverse procedure through the Migrate VM
on Premise button. In addition, several other virtual server administrative operations are
available, including Power On, Power Off, Reboot, Terminate, and observation of the VM
migration history.

Figure 3-21 exposes the final topology, after AppServer-13 is created in the public cloud
and WebServerA is migrated from the private cloud.
Figure 3-21 Final Topology

In Figure 3-21, I have highlighted that secure tunnels are also established between the ICF-
managed VMs and the Intercloud Switch. Using the ICF driver installed before the migra-
tion to the public cloud, these per-VM tunnels guarantee that traffic in motion is always
encrypted outside of the private cloud security domain. For this reason, the whole set of public cloud resources provisioned by ICF (ICS, VSG, CSR, and VMs) is collectively (and informally) called the "ICF shell."

Besides data encryption for site-to-site and VM-to-VM communications through DTLS
tunnels, ICF security is also enforced through its Cisco Intercloud Fabric Firewall (VSG).
In summary, VSG is a zone-based firewall that can be deployed to provide policy enforce-
ment for communication between ICF-managed VMs and to protect inter-VM traffic in the
provider cloud. In VSG, traffic filtering policies can be based on network attributes (such
as IP subnets and TCP ports) or VM attributes, including VM name and running operating
system.
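To make that mix of attributes concrete, the snippet below models what such a zone policy could look like as a plain data structure; the field names and values are hypothetical illustrations, not VSG configuration syntax:

# Hypothetical model of a zone-based policy combining network attributes
# (subnets, TCP ports) with VM attributes (names); illustration only.
policy = {
    "zone": "web-to-app",
    "rules": [
        {   # permit the web tier to reach the app tier on TCP port 8080
            "source": {"vm_name_prefix": "Web"},
            "destination": {"subnet": "198.18.5.0/24", "tcp_port": 8080},
            "action": "permit",
        },
        {   # deny any other inter-VM traffic in the provider cloud
            "source": {"subnet": "0.0.0.0/0"},
            "destination": {"subnet": "0.0.0.0/0"},
            "action": "deny",
        },
    ],
}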

Through its Cisco Intercloud Router (CSR), ICF can deploy routing and other advanced
network-based capabilities without requiring traffic to be redirected to the enterprise data
center. This virtual router is based on proven Cisco IOS Software and also runs as a VM in
the provider cloud. The router deployed in the cloud by Intercloud Fabric serves as a virtual
router and edge firewall for the workloads outside of the ICF shell in the provider cloud. In
addition, CSR can interoperate with Cisco routers in the enterprise to deliver an end-to-end
networking architecture.

TIP Both VSG and CSR 1000V will be discussed in more detail in Chapter 7, “Virtual
Networking Services and Application Containers.”

Intercloud Fabric Use Cases


In addition to cloud bursting, the secure and rich feature set of Cisco Intercloud Fabric has inspired a series of very interesting practical uses for this hybrid cloud architecture, such as:

■ Application development and test environments: ICF enables organizations to securely deploy temporary development environments in the public cloud. While test environments can be easily cloned from this scenario, these organizations can also bring back the workloads to their private clouds as soon as they are ready for production.
■ Shadow IT control: The solution offers employees a secure alternative to ad hoc public cloud provisioning, giving resource provisioning control back to the IT department.

Around the Corner: Private Cloud as a Service


Many organizations have struggled to deploy private clouds, only to shelve the project
before its completion or see it go untouched by internal users due to lack of business
alignment or cloud-related expertise. In response, some pioneering cloud providers have
stretched the loose definition of private cloud to encompass another type of cloud service
called Private Cloud as a Service (PCaaS), where a cloud provider

■ Deploys an on-premises private cloud in an organization, using pretested automated operations
■ Operates, manages, and supports the private cloud
■ Charges the organization according to its use (OPEX)

Cisco offers PCaaS through the Cisco Metapod, an OpenStack-based solution that is
remotely installed over standardized hardware and operated 24 hours a day, all year. Besides
the extreme agility in the installation process, Cisco Metapod is a highly available private
cloud with a prebuilt catalog for virtual servers and desktops. Additionally, it can integrate
with most organizations' directory services (using Lightweight Directory Access Protocol or Microsoft Active Directory) and provides full support for OpenStack and Amazon Web Services APIs.

Further Reading
■ Cisco Metapod: http://www.cisco.com/go/metapod

Exam Preparation Tasks

Review All the Key Topics


Review the most important topics in this chapter, denoted with a Key Topic icon in the
outer margin of the page. Table 3-8 lists a reference of these key topics and the page num-
ber on which each is found.

Table 3-8 Key Topics for Chapter 3


Key Topic Element Description Page Number
Table 3-2 Public cloud security risks 62
Table 3-3 Public cloud control challenges 63
Table 3-4 Public cloud cost risks 64
Table 3-5 Examples of regulatory compliance standards 68
Table 3-6 Private and public clouds compared 69
Table 3-7 Cisco Intercloud principles 72
List Intercloud Fabric administrator tasks 75

Complete the Tables and Lists from Memory


Print a copy of Appendix B, “Memory Tables” (found on the CD), or at least the section for
this chapter, and complete the tables and lists from memory. Appendix C, “Memory Tables
Answer Key,” also on the CD, includes completed tables and lists so that you can check
your work.

Define Key Terms


Define the following key terms from this chapter, and check your answers in the glossary:

public cloud, line of business (LoB), private cloud, community cloud, regulatory compliance
standards, hybrid cloud, cloud bursting, Cisco Intercloud, Cisco Intercloud Fabric, Inter-
cloud Fabric Director (ICFD), Intercloud Extender (ICX), Intercloud Switch (ICS), Virtual
Security Gateway (VSG), Cloud Services Router (CSR)

This chapter covers the following topics:

■ Cloud Computing Architecture

■ Cloud Infrastructure: Journey to the Cloud

■ Application Programming Interfaces


CHAPTER 4

Behind the Curtain


Taking a quick glance at our rearview mirror, up to this point in the book we have
approached cloud computing from an external perspective, focusing on the relationship
between cloud consumers and cloud resources. Throughout the previous three chapters, I
have discussed the essential aspects that commonly characterize cloud computing environ-
ments and explored different classifications according to the types of offered services and
restrictions of use.
In this chapter, we shift gears to investigate how exactly a cloud deployment works. Here,
you will learn about the components and concepts a cloud architect should master before
effectively designing such an environment.

Although the CLDFND exam does not explicitly demand knowledge about the topics discussed in this chapter, I have written it strategically to bridge the gap between the lofty expectations of a cloud computing implementation and the realities of its operation in service providers, enterprises, and public organizations.

With such intention, the chapter covers the basic architecture of a cloud deployment and its
main software functions; data center infrastructure evolution toward cloud computing; and
the main methods of communication between all cloud elements. And to further assist you
to successfully cross the chasm between a cloud user and a cloud professional, your reading
experience will be broadened through real-world solutions and scenarios.

“Do I Know This Already?” Quiz


The “Do I Know This Already?” quiz allows you to assess whether you should read this
entire chapter thoroughly or jump to the “Exam Preparation Tasks” section. If you are in
doubt about your answers to these questions or your own assessment of your knowledge
of the topics, read the entire chapter. Table 4-1 lists the major headings in this chapter and
their corresponding “Do I Know This Already?” quiz questions. You can find the answers in
Appendix A, “Answers to Pre-Assessments and Quizzes.”

Table 4-1 “Do I Know This Already?” Section-to-Question Mapping


Foundation Topics Section Questions
Cloud Computing Architecture 1
Cloud Infrastructure: Journey to the Cloud 2
Application Programming Interfaces 3

1. Which of the following are components of the cloud software stack? (Choose all that
apply.)
a. Virtualization software
b. Meter
c. Orchestrator
d. Portal
e. All software running within a data center that is hosting a cloud environment

2. Which of the following options summarizes the importance of standardization for cloud deployments?
a. Any cloud computing deployment requires a single vendor for all infrastructure
components.
b. Virtualization solutions do not require standardization in cloud computing imple-
mentations.
c. Cloud software stack solutions already provide standardization for cloud infra-
structure.
d. Standardization of hardware, software, processes, and offerings facilitate automa-
tion and increase predictability in a cloud computing deployment.
e. None, because cloud computing scenarios must always be ready for customiza-
tion.

3. Which of the following are not characteristics of RESTful application programming interfaces? (Choose all that apply.)
a. Uses HTTP or HTTPS
b. Only supports XML data
c. Designed for web services
d. Follows a request-response model
e. Designed for human reading

Foundation Topics

Cloud Computing Architecture


Let’s begin with a brief review of the first three chapters. Chapter 1, “What Is Cloud Com-
puting?” introduced you to fundamental concepts that frame cloud computing, with strong
emphasis given to the essential characteristics all cloud environments share, as defined by
NIST: on-demand self-service, broad network access, resource pooling, rapid elasticity,
and measured service. In Chapters 2 and 3, “Cloud Shapes: Service Models,” and “Cloud
Heights: Deployment Models,” respectively, you learned about two autonomous classifica-
tions of cloud computing:

■ Service models: Cloud service offerings are classified according to their level of customization flexibility and readiness to support consumer needs: Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS).
■ Deployment models: Cloud computing environments are classified according to the
number of different organizations that can access their services: public, private, commu-
nity, and hybrid clouds.

As a CCNA Cloud candidate, you now have a good grasp of all the high-level concepts that
you are expected to know for the exam. But as a future cloud practitioner, I imagine you
must have asked several times throughout your reading: What lies behind the curtain of
such wizardry? And to tell you the truth, it was kind of grueling for this engineer to avoid
deployment details in order to distill the conceptual essence of cloud computing.

Fortunately, it is time to delve into the details of how these special IT environments actually
operate. To begin with, Figure 4-1 displays a generic architecture that highlights important
components that are present in the large majority of cloud computing deployments.

Figure 4-1 Cloud Computing Architecture



In Figure 4-1, the cloud elements are clearly separated into two groups: cloud software
stack and cloud infrastructure. Such division is further described in Table 4-2.

Table 4-2 Cloud Component Classes

Cloud software stack: Concerns integrated software modules that are developed to exclusively perform cloud-related operations, such as providing a service catalog to cloud consumers, provisioning requested resources, and keeping track of cloud resource usage.

Cloud infrastructure: Applies to all software and hardware that the cloud software stack orchestrates and that can also be used in non-cloud data center environments.

Whereas the components of the cloud software stack translate the consumer requests
into infrastructure management operations, the cloud infrastructure elements embody the
resources that will actually be used by the cloud consumers. As represented in Figure 4-1,
the cloud infrastructure consists of network devices, storage systems, physical servers,
virtual machines, security solutions, networking service appliances, and operating systems,
among other solutions.

Because each of these infrastructure elements will be thoroughly explained in future chap-
ters, the following sections describe the main components of the cloud software stack.

NOTE Across different cloud computing deployments, you will certainly find these ele-
ments implemented in varied arrangements, such as services of monolithic applications or as
integrated software modules from different vendors. The focus of this chapter is to intro-
duce these elements as functions rather than as independent software pieces.

Cloud Portal
Much like an application on a personal computer, the cloud portal directly interacts with
end users. Essentially, the portal publishes cloud service catalogs, wizards for guided user
“shopping,” interactive forms, approval workflows, status updates, usage information, and
billing balances.

Figure 4-2 depicts an example screenshot from the Cisco Prime Service Catalog (PSC) cloud
portal, which is a software module that belongs to the Cisco ONE Enterprise Cloud Suite
integrated cloud software stack. As indicated, the interface is presented to the user as a per-
sonal IT as a Service (ITaaS) storefront.

Figure 4-2 Cloud Portal Example

In Figure 4-2, a cloud end user has access to different catalogs, including non-cloud device
and desktop application-related services. After selecting the Private Cloud IaaS option in the
lower-right corner, the cloud user is presented with the available cloud service offerings, six
of which in this particular example are displayed in Figure 4-3.

Figure 4-3 Private Cloud IaaS Offerings



Regardless of what is actually being offered (for now, you do not have to worry about the
terms in the service names), it is important to notice that this catalog may have been spe-
cially customized for this user, a group he belongs to, or his organization.

TIP The term “secure container” describing five of the services in Figure 4-3 technically
refers to virtual application containers, which can be understood as isolated application
environments that usually contain exclusive virtual elements such as VLANs, VXLANs, vir-
tual machines, and virtual networking services. Chapter 7, “Virtual Networking Services and
Application Containers,” will discuss this concept in much more detail.

Most cloud portals also provide administrative tools to foster customization and facilitate
integration with other cloud software stack modules. To illustrate these operations, Figure
4-4 depicts some of the options available to a PSC administrative account creatively called
admin.

Figure 4-4 Cloud Portal Administration Portal Options


Through such a portal, cloud portal administrators can also monitor the cloud status, con-
trol user orders, and create new service offerings for cloud users. Figure 4-5 illustrates the
latter, through the Prime Service Catalog tool called Stack Designer.

Figure 4-5 Prime Service Catalog Stack Designer

Figure 4-5 depicts the creation of a new service called “CCNA Cloud CLDFND Exam Simu-
lator,” which contains two servers: an app server running the exam simulator, and a db serv-
er containing multiple questions that can be summoned to compose an on-demand exam.

The PSC software stack designer uses templates with a varied number of elements (virtual
machines, for example) to enable an easy assignment of applications to each catalog offer-
ing. After the application is designed, it can be published to the cloud end users as Figure
4-6 shows.

Figure 4-6 Publishing a Service to All Users



The various tabs of the interface shown in Figure 4-6 enable a cloud portal administrator to
configure all parameters related to publishing a service catalog, including the name, asso-
ciated logo, description, and authorized users. In this specific screen capture, the CCNA
Cloud CLDFND Exam Simulator application will be available to all users (“Grant access to
Anyone”).

Figure 4-7 represents the updated Private Cloud IaaS catalog after the portal administrator
successfully publishes the recently designed service.

Figure 4-7 Final Service Catalog for Demo User

Broadly speaking, an effective cloud portal solution should

■ Primarily provide a good experience for end users’ operations


■ Offer useful tools for service customization and administrative tasks
■ Facilitate the integration with other components of the cloud software stack

Cloud Orchestrator
Drawing again on the comparison between the components of a cloud software stack and
those of a PC software stack, a cloud orchestrator correlates to a computer operating sys-
tem. Much like an operating system orchestrates PC hardware resources to fulfill user appli-
cation tasks, a cloud orchestrator coordinates infrastructure resources according to user
requests issued on the portal.

TIP Operating systems will be explained in much more detail in Chapter 5, “Server
Virtualization.”

The orchestrator epitomizes the core of a cloud computing deployment because it must
interoperate with all cloud elements, from both the cloud software stack and the cloud
infrastructure.

As demonstrated in the last section, when a cloud portal administrator is designing a service
to be published to end users, she leverages options that individually represent infrastructure
requests that will be issued to the cloud orchestrator whenever a user solicits this service.
From the orchestrator standpoint, each of these requests refers to the execution of a pre-
defined workflow, which is expressed as a sequence of tasks that is organized to be carried
out in order in a fast and standardized way.

Figure 4-8 portrays a single workflow from the embedded cloud orchestrator in Cisco ONE Enterprise Cloud Suite: the Cisco Unified Computing System Director (or simply, UCS Director).

Figure 4-8 UCS Director Workflow

In Figure 4-8, a workflow called “Fenced Container Setup – dCloud” is detailed as a sequen-
tial set of tasks representing the atomic operations that are executed on a single device.
As is true of any UCS Director workflow, Fenced Container Setup – dCloud begins with a
Start task, which obviously symbolizes the first step of the workflow. In this specific sce-
nario, Start has an arrow pointing to task “2620. AllocateContainerVMResources_1095,”
which must be executed according to parameters transmitted by the cloud portal. If
task 2620 is executed correctly, the cloud orchestrator carries out the next task (2621.
ProvisionContainer-Network_1096), and so forth. Should any error occur, the workflow is
programmed to forward the execution to an error task (generating a warning message to the
cloud orchestrator administrator, for example).
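This task-by-task behavior can be pictured as a tiny sequential engine. The Python sketch below mimics the described flow (run each atomic task in order; on failure, divert to an error task); the task names echo Figure 4-8, but the engine and functions are hypothetical, not UCS Director code:

def run_workflow(tasks, error_task) -> bool:
    """Execute atomic tasks in order; divert to the error task on failure."""
    for task in tasks:
        try:
            task()                    # one atomic operation on a single device
        except Exception as exc:
            error_task(exc)           # e.g., warn the orchestrator administrator
            return False
    return True                       # workflow completed successfully

def allocate_container_vm_resources():
    print("2620. AllocateContainerVMResources")

def provision_container_network():
    print("2621. ProvisionContainerNetwork")

def warn_admin(exc):
    print(f"workflow diverted to error task: {exc}")

run_workflow([allocate_container_vm_resources, provision_container_network], warn_admin)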

Because this workflow was actually invoked when an end user solicited a CCNA Cloud
CLDFND Exam Simulator instantiation, a cloud orchestrator administrator can also monitor
the portal service request execution, as Figure 4-9 demonstrates.

Figure 4-9 Workflow Execution

To correctly execute a workflow, a cloud orchestrator must have information about the
devices that are related to each workflow task. Using management connections for this
endeavor, UCS Director also monitors resource usage and availability, greatly simplifying
capacity planning of a cloud computing environment.

Figure 4-10 depicts a dashboard from UCS Director displaying device information, provi-
sioned resources, and overall capacity for storage (na-edge1), servers (dCloud_UCSM), net-
work (VSM), and server virtualization (dCloud_VC_55).

Figure 4-10 UCS Director Dashboard



Through the monitoring information shown in Figure 4-10, UCS Director can inform the
cloud portal whenever there is resource saturation.

In a nutshell, an efficient orchestration solution must always combine simplicity with flex-
ibility in a balanced way. It must be easy enough to attract adoption, customizable to
execute very sophisticated operations, and allow open source development to leverage code
reuse for both tasks and workflows.

Cloud Meter
A cloud meter is the cloud software stack module that concretizes service measurement in a
cloud computing deployment. As end users request resources in the cloud portal, the cloud
meter does the following:
■ Receives notifications from the cloud orchestrator informing when infrastructure
resources were provisioned for the cloud consumer, their usage details, and the exact
time they were decommissioned
■ Supports the creation of billing plans to correlate cloud resource usage records, time
period, and user identity to actual monetary units
■ Summarizes received information, eliminates errors (such as duplicated data), and gener-
ates on-demand reports per user, group, business unit, line of business, or organization
■ Provides on-demand reports to the cloud portal or through another collaboration tool
(such as email, for example)

Traditionally in cloud scenarios, cloud meters are used for chargeback, an expenditure process in which service consumers pay for cloud usage, offsetting the cost the cloud provider incurred building the environment. Even in the specific case of private clouds, the chargeback model can deliver the following benefits for the consumer organization:

■ It maps resource utilization to individual end users or groups of consumers.


■ It provides resource utilization visibility for the IT department, hugely facilitating capac-
ity planning, forecasting, and budgeting.
■ It strengthens conscious-use campaigns to enforce objectives such as green IT.

Alternatively, if there is no possibility of actual currency exchange between consumers and providers, cloud administrators can deploy a showback model, which only presents a breakdown of resources used to whoever may be interested, for the purposes of relative usage comparison among users and their groups.

Within Cisco ONE Enterprise Cloud Suite, two UCS Director features can fulfill the cloud
meter function: the chargeback module and the CloudSense analytics.

The chargeback module enables detailed visibility into the cost structure of the orchestrated
cloud infrastructure, including the assignment of customized cost models to predefined
groups (such as departments and organizations). The module offers a flexible and reusable
cost model that is based on fixed, one-time, allocation, usage, or combined cost parameters.
Additionally, it can generate various summary and comparison reports (in PDF, CSV, and
XLS formats), and Top 5 reports (highest VM cost, CPU, memory, storage, and network
costs).

Figure 4-11 provides a quick peek into the building of a cost model in UCS Director.

Figure 4-11 Cost Model Example

In Figure 4-11, I have created a cost model for virtual machine usage, where a cloud user
is charged $9.99 (USD) before any actual resource provisioning. In addition, the same user
must pay $0.10 and $0.01 per hourly active and inactive VM, respectively. CPU resources
are also charged, with $1.00 per reserved GHz per hour and $0.50 per used GHz per hour.
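A quick worked example clarifies how such a plan adds up. The Python snippet below applies the rates above to a single VM with 2 GHz reserved and an average of 1 GHz used, active for 600 hours and inactive for 120 hours in a month; the assumption that reserved GHz is billed during inactive hours is mine, not taken from UCS Director:

ONE_TIME      = 9.99   # charged before any resource provisioning (USD)
ACTIVE_HOUR   = 0.10   # per hourly active VM
INACTIVE_HOUR = 0.01   # per hourly inactive VM
RESERVED_GHZ  = 1.00   # per reserved GHz per hour
USED_GHZ      = 0.50   # per used GHz per hour

active_h, inactive_h = 600, 120
reserved_ghz, used_ghz = 2.0, 1.0

total = (ONE_TIME
         + ACTIVE_HOUR * active_h
         + INACTIVE_HOUR * inactive_h
         + RESERVED_GHZ * reserved_ghz * (active_h + inactive_h)  # assumption
         + USED_GHZ * used_ghz * active_h)
print(f"monthly charge: ${total:.2f}")   # -> monthly charge: $1811.19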

The CloudSense analytics feature can provide real-time details about the orchestrated cloud
infrastructure resources and performance. Leveraging the strategic position of UCS Director
as a cloud orchestrator, this tool is especially designed for cloud administrators to improve
capacity planning, forecasting, and reporting of the cloud infrastructure (be it virtual or
physical).

Figure 4-12 lists some of the predefined reports that can be delivered to end users.

Besides providing metering functions natively, UCS Director can integrate with third-party chargeback solutions and connect to payment gateways to allow credit card charging.

Ideally, a good cloud meter solution should be able to aggregate information about cloud
resource usage, allow the creation of flexible chargeback (or showback) plans, and offer
transparency to both cloud end users and administrators through self-explanatory dash-
boards and detailed reports.

Figure 4-12 CloudSense Reports

Cloud Infrastructure: Journey to the Cloud


Although some vendor campaigns may suggest otherwise, a cloud computing deployment
is not completely accomplished through a cloud software stack installation. In fact, when
a company pursues the endeavor of building a cloud (be it for its own use or to provide
services for other organizations), such messaging may reinforce the common oversight of
cloud infrastructure and its related operational process (a frequent cause of doomed cloud
deployment initiatives).

Generally speaking, any IT solution is potentially available to any organization from any industry. Competitive differentiation is consequently achieved through well-designed operational processes that strongly link business objectives and technical expertise.

Similarly, an effective cloud computing implementation depends on the evolution of data center management processes to effectively support the needs of the potential cloud consumers. Cisco and many other IT infrastructure providers believe that a progression of phases can help organizations to safely cross the chasm between a traditional data center and a cloud. Figure 4-13 illustrates such phases.

(Phases, from the traditional data center toward cloud computing: Consolidation, Virtualization, Standardization, Automation, Orchestration)
Figure 4-13 Journey to the Cloud



In this “journey to the cloud,” each phase benefits from the results achieved in the previ-
ous phase, as you will learn in the following sections. Obviously, depending on the internal
characteristics of its data center and the strategic importance of the cloud computing proj-
ect, each organization can decide the pace at which each phase is carried out. For example,
the whole journey can be further accelerated in a cloud deployment that is intentionally
isolated in a few racks in a data center to avoid conflicts with traditional data center pro-
cesses (such as maintenance windows and freezing periods).

Notwithstanding, if the principles ingrained in each of the phases depicted in Figure 4-13
(and described in the following sections) are not fully comprehended, the cloud project may
easily run into challenges such as

■ Cloud services that are not adequate for the cloud consumer objectives
■ Inappropriate time to provision
■ Cloud deployment delays (or even abandonment) due to unforeseen complexity
■ Low adoption by end users

Consolidation
A very popular trend in the early 2000s, consolidation aims to break the silos that tradi-
tionally exist in a data center infrastructure. In essence, this trend was a direct reaction to a
period in data center planning I jokingly call “accidental architecture,” where infrastructure
resources were deployed as a side effect of application demand and without much atten-
tion to preexisting infrastructure in the facility. For this reason, many consolidation projects
were rightfully dubbed rationalization initiatives.

Figure 4-14 portrays a data center consolidation initiative.

The left side of Figure 4-14 depicts three infrastructure silos created with the dedication
of resources for each application deployment (A, B, and C). Although this arrangement
certainly guarantees security and isolation among application environments, it makes it very
difficult for data center administrators to

■ Allow cross-application connectivity for clients, servers, and storage


■ Optimize server and storage utilization, because one application could saturate its infra-
structure resources while others remain underutilized

Figure 4-14 Data Center Consolidation Example

A consolidation strategy such as the one depicted on the right side of Figure 4-14 enables
an IT department to achieve various benefits in each technology area. For example:

■ Networks: Consolidation of networks streamlines connectivity among application users and servers, facilitating future upgrade planning. As a result, the large majority of data centers maintain different network structures per traffic characteristic (such as production, backup, and management) rather than per application.
■ Storage: Fewer storage devices can hold data for multiple applications, permitting better
resource utilization and finer capacity planning.
■ Servers: The hosting of multiple applications on a single server is afflicted with compat-
ibility issues between hardware and software modules. And although highly standardized
environments may achieve consolidation more quickly, they certainly do not represent
the majority of scenarios in a traditional data center.

Aiming at a greater scope of resource optimization, many IT departments extend their consolidation initiatives toward a reduction of data center facilities and the recentralization of servers provisioned outside of a data center site. In both cases, consolidation helps to decrease management complexity and the number of extremely repetitive operational tasks.

Regardless of how or where it is applied, resource consolidation facilitates the interaction between the cloud orchestrator and the infrastructure, establishing an important basis for the resource pooling essential characteristic of cloud environments.

Virtualization
As discussed in Chapter 2, virtualization techniques generally allow the provisioning of logi-
cal resources that offer considerable benefits when compared with their physical counter-
parts.

Interestingly enough, many virtualization techniques are still extensively deployed to sup-
port consolidation processes within data centers. For example:

■ Virtual local-area network (VLAN): Technique that isolates Ethernet traffic within a
shared network structure, providing segmentation for hosts that should not directly com-
municate with each other.
■ Storage volume: Allows storage capacity provisioning for a single application server
inside of a high-capacity storage system.
■ Server virtualization: Permits the provisioning of multiple logical servers inside of a
single physical machine.

Chapter 2 also explained a simple classification system that separates virtualization tech-
niques into three types: partitioning, pooling, and abstraction. From an architectural
standpoint, partitioning virtualization techniques enable even higher asset utilization
through more sophisticated resource control techniques. In addition, these virtual resources
can be combined into virtual data centers (vDCs), which are potential infrastructure tem-
plates for future cloud applications or tenants.

Pooling virtualization techniques also support consolidation projects through a greater reduction of management points and proper construction of resource pools. Among the examples of pooling technologies, I can highlight disk array virtualization (which allowed multiple storage devices to be managed as a single unit) and server clusters (which enabled multiple servers to deploy the same application for performance and high-availability purposes).

Abstraction virtualization technologies also help consolidation as they can change the
nature of a physical device to a form that may be more familiar to a data center infrastruc-
ture team. As an illustration, a virtual switch connecting virtual machines potentially can be
managed using the same operational procedures used on a physical switch.

Invariably, all virtualization technologies provide a strong basis for cloud computing deployments through their added flexibility. Because virtual resources can be quickly provisioned without manual operations, virtualization enormously amplifies the creation of offers in the cloud portal and simplifies the job of the cloud orchestrator.

NOTE You will have the opportunity to explore virtualization techniques and technolo-
gies in much greater depth in subsequent chapters, including, for example, virtual machines
(Chapter 5), virtual switches (Chapter 6, “Infrastructure Virtualization”), virtual network-
ing services (Chapter 7), RAID, volumes (Chapter 8, “Block Storage Technologies”), virtual
device contexts, virtual PortChannels, fabric extenders, Overlay Transport Virtualization,
server I/O consolidation (Chapter 10, “Network Architectures for the Data Center: Unified
Fabric”), and service profiles (Chapter 12, “Unified Computing”).

Standardization
Although many organizations feel comfortable lingering in the virtualization phase, they are
usually aware that more effort is required to complete the journey toward cloud computing.
This awareness typically matures gradually as virtualization technologies naturally lead to
overprovisioning and loss of control over the deployed (virtual) resources.

To properly assess the problem, try to picture the amount of development a cloud software
stack solution would require if an organization tried to replicate the wild variations of “in
the heat of the moment” provisioned infrastructure. In any complex project, excessive varia-
tion counteracts replicability, predictability, scalability, and accountability. Henry Ford, a
leader in 20th-century mass production, was very cognizant of that fact, as he succinctly
stated to the customers of his automobile company: “You can have any color you like, as
long as it is black.”

As a logical conclusion, to achieve replicability, predictability, scalability, and accountability (the essential characteristics of a well-oiled machine, be it a car or a cloud), production resources must be standardized. Customization means complexity. Therefore, offering more custom options in a self-service cloud portal catalog means more workflows in the cloud orchestrators, more metering plans, and more effort spent overseeing the cloud infrastructure.

Besides simplifying the service offerings in a cloud, infrastructure standardization can also
help to reduce development in the cloud software stack. Such an initiative can happen in
multiple dimensions, such as

■ Uniformity of infrastructure vendors, models, and versions concerning storage, server, and network devices, virtualization software, operating systems, and platform software
■ Clear predefinition of resources (physical or virtual) to support user demands
■ Normalization of provisioning methods and configuration procedures

Undoubtedly, even without a proper cloud deployment, these different standardization endeavors will certainly drive a data center toward more effective control over deployed resources and troubleshooting processes.

NOTE Special attention to standardization process in data centers is given in Chapter 14,
“Integrated Infrastructures,” where the concept of pool of devices (POD) is explored in
detail.

Automation
Through standardization initiatives in IT, an organization is preparing its operational teams
to think massively rather than on a one-off basis. Notwithstanding, even in highly standard-
ized environments, the staggering amount of coordinated infrastructure operations can eas-
ily result in human-related mistakes.

Figure 4-15 exemplifies another complex system that is highly susceptible to operational
missteps.

Figure 4-15 Are You Ready to Fly over the Clouds?

To eliminate human error and accelerate execution, sophisticated systems such as an air-
plane or a data center must rely on the automation of operational procedures. In other
words, to automate means to totally excise manual procedures and port standardized proce-
dures into software.

Automation invariably transforms provisioning, migration, and decommissioning processes within a data center. Much as in a modern industrial production line, the operational teams of an automated data center must design tasks that will be carried out by (software) robots and closely monitor their effectiveness, performance, and compliance with service-level agreements (SLAs). In automated environments, maintenance windows are simply ignited through a “great red button” and quickly reversed if a failure is detected.

Unsurprisingly, data center automation requires approaches that are very usual in software
development, where manual tasks are translated into code, which is then tested, debugged,
and, finally, put into production.
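As a toy illustration of porting a manual procedure into testable code, the sketch below turns "create a VLAN if it does not exist" into an idempotent Python function exercised against a fake device; the device class and attribute names are stand-ins made up for this example, not a real network library:

def ensure_vlan(device, vlan_id: int, name: str) -> bool:
    """Idempotent task: apply the desired state, report whether it changed."""
    if vlan_id in device.vlans:        # desired state already present: no-op
        return False
    device.vlans[vlan_id] = name       # apply the standardized change
    return True

class FakeSwitch:                      # test double standing in for a device
    def __init__(self):
        self.vlans = {}

switch = FakeSwitch()
assert ensure_vlan(switch, 701, "App") is True    # first run changes state
assert ensure_vlan(switch, 701, "App") is False   # rerun is safely a no-op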

NOTE In highly automated scenarios, policy-driven infrastructure solutions can drastically reduce development effort because they already provide built-in automation, reporting, and analytics functions. You will learn more about such solutions in Chapter 11, “Network Architectures for the Data Center: SDN and ACI,” and Chapter 12.

Orchestration
In the orchestration phase of the journey to the cloud, the cloud software stack benefits
from all the hard-earned results from the previous phases:

■ Reduction of resource silos (consolidation)


■ Logical provisioning, resource usage optimization, and management centralization (virtu-
alization)
■ Simplicity and predictability (standardization)
■ Faster provisioning and human error mitigation (automation)

At this stage, silos between infrastructure and development are already broken. The cloud
architect focuses his attention on services that should be offered in the portal catalog
according to user requirements. With such defined purpose, the cloud orchestrator is pro-
grammed to execute workflows over the automated infrastructure to fulfill the requested
services.

A cloud metering plan also must be put in action, with an optional chargeback (or show-
back) billing strategy aligned to business objectives. Additionally, as explained in Chapter
3, an organization handling the implementation of a cloud may also supplement its service
catalog through the secure offering of resources from other clouds through the hybrid
cloud deployment model.

Application Programming Interfaces


In the first half of this chapter, you have learned about the most important components in
a cloud computing deployment, with special attention given to the functions of the cloud
software stack and the evolution of data center infrastructure toward the cloud IT access
model.

The efficiency of a cloud implementation depends on how well the cloud software stack
components communicate with each other, the cloud infrastructure devices, and even with
external clouds. Especially in the case of the cloud orchestrator, a varied spectrum of com-
munication methods facilitates integration within the cloud.

In short, cloud software stack and infrastructure components commonly use the following
intercommunication approaches:

■ Command-line interface (CLI): Developed for user interaction with mainframe terminals
in the 1960s. A software component accepts commands via the CLI, processes them, and
produces an appropriate output. Although original CLIs depended on serial data connec-
tions, modern devices use Telnet or Secure Shell (SSH) sessions over an IP network.
■ Software development kit (SDK): Collection of tools, including code, examples, and documentation, that supports the creation of applications for a computer component. With an SDK, a developer can write applications using a programming language such as Java or Python.
■ Application programming interface (API): A set of functions, variables, and data struc-
tures that enables software components to communicate with each other. Essentially, an
API regulates how services from a computer system are exposed to applications in terms
of operations, inputs, outputs, and data format.

Although the origin of APIs is associated with web services (which are basically applica-
tions that can communicate through the World Wide Web), they have become a powerful
alternative for cloud software integration because they decouple software implementation
from its services, freeing developers to use their language of preference in their applications
instead of being limited to the specific language used in an SDK.

More importantly, a well-designed API is a key tool for IT automation in general because it
can hide the complexity of intricate operations through a simple API request.

For such reasons, cloud software stack developers frequently prefer to employ APIs for module intercommunication. Fittingly, infrastructure vendors have also begun to incorporate APIs as a method of configuration, to avoid the difficulties associated with orchestrating these resources through CLIs.

CLI vs API
To illustrate the challenge of using the CLI in this context, imagine that you are developing
a cloud orchestrator task that must obtain the firmware version of a network device. With
this information in hand, another task in a workflow can decide if this device should be
upgraded or not, for example.

Example 4-1 depicts the exact formatting of command output obtained through a CLI ses-
sion from the orchestrator to the device.

Example 4-1 CLI Example


! Cloud orchestrator issues the command
Switch# show version
! And here comes the complete output.
Cisco Nexus Operating System (NX-OS) Software
TAC support: http://www.cisco.com/tac
Copyright (C) 2002-2015, Cisco and/or its affiliates.
All rights reserved.
The copyrights to certain works contained in this software are
owned by other third parties and used and distributed under their own
licenses, such as open source. This software is provided "as is," and unless
otherwise stated, there is no warranty, express or implied, including but not
limited to warranties of merchantability and fitness for a particular purpose.
Certain components of this software are licensed under
the GNU General Public License (GPL) version 2.0 or
GNU General Public License (GPL) version 3.0 or the GNU
Lesser General Public License (LGPL) Version 2.1 or
Lesser General Public License (LGPL) Version 2.0.
A copy of each such license is available at
http://www.opensource.org/licenses/gpl-2.0.php and
http://opensource.org/licenses/gpl-3.0.html and
http://www.opensource.org/licenses/lgpl-2.1.php and
http://www.gnu.org/licenses/old-licenses/library.txt.

Software
BIOS: version 07.17
! In line 23, between columns 17 and 27, lies the actual information the orchestrator needs
NXOS: version 7.0(3)I1(1)
BIOS compile time: 09/10/2014
NXOS image file is: bootflash:///n9000-dk9.7.0.3.I1.1.bin
NXOS compile time: 1/30/2015 16:00:00 [01/31/2015 00:54:25]

Hardware
cisco Nexus9000 C93128TX Chassis
Intel(R) Core(TM) i3-3227U C with 16402548 kB of memory.
Processor Board ID SAL1815Q6HW

Device name: Switch


bootflash: 21693714 kB
Kernel uptime is 0 day(s), 0 hour(s), 16 minute(s), 27 second(s)

Last reset at 392717 usecs after Thu Nov 12 00:13:05 2015

Reason: Reset Requested by CLI command reload


System version: 7.0(3)I1(1)
Service:

plugin
Core Plugin, Ethernet Plugin

Active Packages:

Although you may reasonably disagree, a command-line interface is expressly designed for
humans, which justifies how information is organized in Example 4-1. However, an applica-
tion (such as the cloud orchestrator) would have to obtain the device software version using
programming approaches such as

■ Locating the version string through an exact position (line 23, between columns 17 and
27, in Example 4-1). While this method may offer a striking simplicity, it unfortunately
relies on the improbable assumption that all devices and software versions will always
position their software version in the same location. For example, any change in the
disclaimer text would invalidate your development effort, forcing the development of
specific code for each device and software combination.
■ Parsing the command output through a keyword such as “NX-OS: version” and capturing
the following text. But again, you would have to develop code for all operating systems
that are different from NX-OS. On the other hand, parsing the keyword “version” would
generate seven different occurrences in Example 4-1, requiring additional development
in the application to select the correct string representing the device software version.
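To make the fragility concrete, the following minimal Python sketch (an illustration, not one of the book's examples) applies the keyword approach to a shortened copy of the Example 4-1 output. Note how the match is tied to one operating system's output format:

# Brittle keyword parsing of raw CLI text (illustrative sketch)
cli_output = """Cisco Nexus Operating System (NX-OS) Software
BIOS: version 07.17
NXOS: version 7.0(3)I1(1)
"""

def extract_version(text):
    for line in text.splitlines():
        if line.startswith("NXOS: version"):    # keyword tied to NX-OS output
            return line.split("version", 1)[1].strip()
    return None                                 # fails for any other OS format

print(extract_version(cli_output))              # prints: 7.0(3)I1(1)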

While this simple scenario vividly describes some of the obstacles CLIs generate for soft-
ware developers, Example 4-2 hints at the advantages of using a standard API through the
display of the exact result of an API request issued toward the same device.

Example 4-2 API Output


<?xml version="1.0"?>
<ins_api>
<type>cli_show</type>
<version>1.0</version>
<sid>eoc</sid>
<outputs>
<output>
<body>
<header_str>Cisco Nexus Operating System (NX-OS) Software
TAC support: http://www.cisco.com/tac
Copyright (C) 2002-2015, Cisco and/or its affiliates.
All rights reserved.
The copyrights to certain works contained in this software are
owned by other third parties and used and distributed under their own
licenses, such as open source. This software is provided "as is," and unless
otherwise stated, there is no warranty, express or implied, including but not
limited to warranties of merchantability and fitness for a particular purpose.
Certain components of this software are licensed under
the GNU General Public License (GPL) version 2.0 or
GNU General Public License (GPL) version 3.0 or the GNU
Lesser General Public License (LGPL) Version 2.1 or
Lesser General Public License (LGPL) Version 2.0.
A copy of each such license is available at
http://www.opensource.org/licenses/gpl-2.0.php and
http://opensource.org/licenses/gpl-3.0.html and
http://www.opensource.org/licenses/lgpl-2.1.php and
http://www.gnu.org/licenses/old-licenses/library.txt.
</header_str>
<bios_ver_str>07.17</bios_ver_str>
<kickstart_ver_str>7.0(3)I1(1)</kickstart_ver_str>
<bios_cmpl_time>09/10/2014</bios_cmpl_time>
<kick_file_name>bootflash:///n9000-dk9.7.0.3.I1.1.bin</kick_file_name>
<kick_cmpl_time> 1/30/2015 16:00:00</kick_cmpl_time>
<kick_tmstmp>01/31/2015 00:54:25</kick_tmstmp>
<chassis_id>Nexus9000 C93128TX Chassis</chassis_id>
<cpu_name>Intel(R) Core(TM) i3-3227U C</cpu_name>
<memory>16402548</memory>
<mem_type>kB</mem_type>
<proc_board_id>SAL1815Q6HW</proc_board_id>
<host_name>dcloud-n9k</host_name>
<bootflash_size>21693714</bootflash_size>
<kern_uptm_days>0</kern_uptm_days>
<kern_uptm_hrs>0</kern_uptm_hrs>
<kern_uptm_mins>44</kern_uptm_mins>

<kern_uptm_secs>36</kern_uptm_secs>
<rr_usecs>392717</rr_usecs>
<rr_ctime> Thu Nov 12 00:13:05 2015
</rr_ctime>
<rr_reason>Reset Requested by CLI command reload</rr_reason>
! Here is the information the orchestrator needs
<rr_sys_ver>7.0(3)I1(1)</rr_sys_ver>
<rr_service/>
<manufacturer>Cisco Systems, Inc.</manufacturer>
</body>
<input>show version</input>
<msg>Success</msg>
<code>200</code>
</output>
</outputs>
</ins_api>

Example 4-2 exhibits output from an API (called Cisco NX-API) in a format called Exten-
sible Markup Language (XML). In summary, XML is a flexible text format created by the
World Wide Web Consortium (W3C) to represent data exchanged between two or more
entities on the Internet.

As you can see in Example 4-2, XML uses start tags (such as <rr_sys_ver>) and end tags (such as </rr_sys_ver>) to express information such as 7.0(3)I1(1) to a program. With previous knowledge about the API, a software developer can easily code an API request to accurately obtain the firmware version using both the start tag and end tag as keywords. With such information, an application can easily assign the discovered data (the firmware version) to an alphanumeric variable, greatly facilitating logic operations. Additionally, this program can be extended to all devices that support the same API.
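As a hedged illustration of this approach, the following Python sketch parses a trimmed-down copy of the Example 4-2 reply with the standard library's ElementTree module, retrieving the version through its tags rather than its position in the text:

import xml.etree.ElementTree as ET

# Trimmed-down copy of the Example 4-2 reply, for illustration only
xml_response = """<?xml version="1.0"?>
<ins_api>
  <outputs>
    <output>
      <body>
        <rr_sys_ver>7.0(3)I1(1)</rr_sys_ver>
      </body>
    </output>
  </outputs>
</ins_api>"""

root = ET.fromstring(xml_response)
# The start and end tags act as unambiguous keywords for the parser
version = root.findtext("./outputs/output/body/rr_sys_ver")
print(version)  # prints: 7.0(3)I1(1)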

An alternative data format to XML that is also used in APIs is called JavaScript Object Nota-
tion, or simply JSON, which was originally created to transmit data in the JavaScript pro-
gramming language. As an illustration, Example 4-3 depicts an excerpt of the same API call
in JSON.

Example 4-3 Same Output in JSON


{
"ins_api": {
"type": "cli_show",
"version": "1.0",
"sid": "eoc",
"outputs": {
"output": {
"input": "show version",
"msg": "Success",
"code": "200",
"body": {

"header_str": "Cisco Nexus Operating System (NX-OS) Software\nTAC supp


[output suppressed]
and\nhttp://www.gnu.org/licenses/old-licenses/library.txt.\n",
"bios_ver_str": "07.17",
"kickstart_ver_str": "7.0(3)I1(1)",
"bios_cmpl_time": "09/10/2014",
"kick_file_name": "bootflash:///n9000-dk9.7.0.3.I1.1.bin",
"kick_cmpl_time": " 1/30/2015 16:00:00",
"kick_tmstmp": "01/31/2015 00:54:25",
"chassis_id": "Nexus9000 C93128TX Chassis",
"cpu_name": "Intel(R) Core(TM) i3-3227U C",
"memory": 16402548,
"mem_type": "kB",
"proc_board_id": "SAL1815Q6HW",
"host_name": "dcloud-n9k",
"bootflash_size": 21693714,
"kern_uptm_days": 0,
"kern_uptm_hrs": 0,
"kern_uptm_mins": 21,
"kern_uptm_secs": 51,
"rr_usecs": 392717,
"rr_ctime": " Thu Nov 12 00:13:05 2015\n",
"rr_reason": "Reset Requested by CLI command reload",
! And here is the data you need
"rr_sys_ver": "7.0(3)I1(1)",
"rr_service": "",
"manufacturer": "Cisco Systems, Inc."
}
}
}
}
}

As shown in Example 4-3, JSON is also a fairly human-readable data-interchange format that, contrary to its name, is independent from any programming language (just like XML). In summary, each JSON object

■ Begins with a left brace, {, and ends with a right brace, }
■ Contains an unordered set of name-value pairs separated by commas
■ Defines data through name-value pairs, in which the name is separated from its value by a colon

Alternatively, a JSON array represents an ordered collection of values, also separated by commas. An array begins with a left bracket, [, and ends with a right bracket, ].
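Following the same logic, the short Python sketch below (again using a trimmed-down copy of the Example 4-3 reply, for illustration only) shows how directly JSON maps to native data structures:

import json

# Trimmed-down copy of the Example 4-3 reply, for illustration only
json_response = """{
  "ins_api": {
    "outputs": {
      "output": {
        "body": { "rr_sys_ver": "7.0(3)I1(1)" }
      }
    }
  }
}"""

data = json.loads(json_response)
# Each JSON object becomes a Python dictionary
version = data["ins_api"]["outputs"]["output"]["body"]["rr_sys_ver"]
print(version)  # prints: 7.0(3)I1(1)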

Compared to XML, JSON has less overhead and is arguably simpler and cleaner, which can be useful during code troubleshooting. However, XML is considered more flexible because it allows the representation of other types of data besides text and numbers, including images and graphs. Regardless of this comparison, the large majority of APIs support both formats.

RESTful APIs
In addition to supporting more than one data representation format, the design of an
API incorporates the definition of transport protocols, behavior rules, and signaling data.
Although a wide variety of such architectures exists, the most commonly used in cloud
computing at the time of this writing belong to an architectural style known as Representa-
tional State Transfer (REST).

This architectural style was originally proposed by Roy Thomas Fielding in his 2000 doctoral dissertation, “Architectural Styles and the Design of Network-based Software Architectures.” The main purpose of the framework is to establish a simple, reliable, and scalable software communication approach that can adequately support the booming number of web services available on the Internet.

While other API architectures may share similar objectives, all RESTful APIs must adhere
to the formal constraints described in Table 4-3 (as defined by Fielding).

Table 4-3 RESTful API Constraints

Client-server: Establishes a hierarchy between applications using the API, leveraging the same relationship that exists between a web browser and a web server. Using requests from the client application and responses from its server counterpart, simplicity and scalability are achieved through the exclusive assignment of data storage to the latter.

Stateless: Each request from the client application must contain all information required for a server response, without taking advantage of any data about the client session stored on the server.

Cache: If allowed, the client application can store data from a server response to avoid equivalent future requests.

Layered system: A client application cannot distinguish whether it is communicating directly with the end server or with an intermediary service along the way.

Code on demand: Servers can optionally transfer logic to be executed on client applications.

Uniform interface: Decouples client and server internal architectures, enabling them to evolve individually. In summary, REST establishes a common interchange method between components, separating implementations from the services they provide.

RESTful APIs typically use Hypertext Transfer Protocol (HTTP) as a communication stan-
dard between applications. With Roy Fielding also being one of its main creators, HTTP can
be considered one of the fundamental pillars of the Web, embodying the main method for
the communication between web browsers and servers.

NOTE RESTful APIs can also use HTTPS (HTTP Secure) to enforce security measures,
such as encryption and authentication.

Figure 4-16 illustrates a simple HTTP transaction between two applications, as defined in
Request for Comments (RFC) 2068, from the Internet Engineering Task Force (IETF).

Figure 4-16 HTTP Transaction Example (the client opens a TCP connection through a SYN, SYN/ACK, ACK three-way handshake, sends an HTTP GET request, and receives an HTTP/1.1 200 OK response followed by data)

As Figure 4-16 shows, a client first initiates an HTTP transaction establishing a TCP connec-
tion (using destination port 80, by default) with the server, using a three-way handshake com-
posed of TCP SYN, TCP SYN/ACK, and TCP ACK messages. Within one or more connec-
tions, a client sends a request to the server containing the parameters described in Table 4-4.

Table 4-4 HTTP Request Parameters

Request method: Indicates the action desired by the client. It can assume the following values: OPTIONS, GET, HEAD, POST, PUT, DELETE, TRACE, and CONNECT.

Uniform resource identifier: Identifies the server resource to which the request applies. When applied to a remote resource, it must also include the complete uniform resource locator (URL, such as http://www.portal.com/index.html).

Protocol version: HTTP uses a <major>.<minor> numbering scheme to indicate the format of a message and its capacity for establishing communication. The minor number is incremented when the changes made to the protocol add features, while the major number is incremented when the format of a message within the protocol is changed. RFC 2068 originally standardized HTTP 1.1.

Header: Encompasses request modifiers (such as Accept, Accept-Charset, Accept-Encoding, and Accept-Language), as well as client information that can be used by the server for an appropriate response.

Cookie: See the explanation in Table 4-5.

Body: Data bytes transmitted to the server.

Table 4-4 is not an exhaustive list of HTTP request headers.

After the server correctly receives the client request (as the TCP ACK signals), it issues an
HTTP response with the parameters described in Table 4-5.
Table 4-5 HTTP Response Parameters

Protocol version: As explained in Table 4-4.

Status line: Includes codes that express a successful response or an error. The code classes are 1XX (informational messages), 2XX (success messages, such as 200=OK and 201=Created), 3XX (redirection messages, such as 301=Moved Permanently), 4XX (client error messages, such as 400=Bad Request, 403=Forbidden, and 404=Not Found), and 5XX (server error messages, such as 500=Internal Server Error and 503=Service Unavailable).

Header: Includes server information, data about the transmitted data (“metadata”), and other content.

Cookie: Small data message to be stored in the client. In subsequent HTTP accesses, the client may send the cookie back to the server, which can identify the client’s previous requests. One observation: although it may violate one of the REST constraints, some APIs use cookies for session authentication purposes.

Body: Data bytes transmitted to the client.

Table 4-5 is not an exhaustive list of HTTP response headers.

A RESTful API uses an HTTP request method to represent the action according to its intent. The most commonly used actions are

■ GET: Used to read information from the server application. This action usually expects a code 200 (OK) and appended data that corresponds to the request.
■ POST: Creates objects within the server application. It usually expects a code 201 (Created) and a new resource identifier.
■ PUT: Updates a preexisting object in a server application. It commonly expects a code 200 (OK).
■ DELETE: Deletes a resource and expects code 200 (OK) under normal conditions.
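As a minimal sketch of these verbs in action (the endpoint and resource names below are hypothetical, not from any specific product), a RESTful client written with the popular third-party Python requests library could look like this:

import requests  # third-party library (pip install requests)

BASE = "https://cloud.example.com/api/v1"  # hypothetical endpoint

r = requests.get(BASE + "/servers")                         # read; expects 200
r = requests.post(BASE + "/servers", json={"name": "vm1"})  # create; expects 201
r = requests.put(BASE + "/servers/vm1", json={"vcpus": 4})  # update; expects 200
r = requests.delete(BASE + "/servers/vm1")                  # delete; expects 200
print(r.status_code)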

As an illustration, Figure 4-17 portrays an example of an API request executed through POSTMAN, an application that exposes all information contained in such operations, helping the development, testing, and documentation of an API.

Figure 4-17 API Request on POSTMAN


Note in Figure 4-17 that the API request will be performed through a POST action, signaling that a show version command will be issued to the network device whose management IP address is 198.18.133.100. Additionally, the request will be applied to a URL defined as http://198.18.133.100/ins with a body containing information about the API version (“1.0”), type of input (“cli_show”), input command (“show version”), and output format (“json”). Figure 4-18 displays the response received after clicking Send.

Figure 4-18 API Response on POSTMAN



In Figure 4-18, the network device response has a code 200, signaling that no error has occurred. Included in the “body” name-value pair, you can observe an excerpt of the same output shown in Example 4-3.

The cloud components discussed earlier in the chapter in the section “Cloud Computing Architecture” frequently use RESTful APIs in multiple ways, such as

■ In the communication between the cloud orchestrator and infrastructure components
■ In the integration between cloud software stack components (the portal requests workflow execution on the cloud orchestrator, for example)
■ In requests to the cloud portal from user applications (not end users) that are requesting cloud resources
There are many other API formats, such as Windows PowerShell and Remote Procedure
Call, whose description is out of the scope of this chapter. But regardless of their origin,
all APIs should aim for clarity, simplicity, completeness, and ease of use. Above all, an API
designer must remember that, much like diamonds, APIs are forever (as soon as they are
published and used in development code).

Around the Corner: OpenStack

Originally created by the U.S. National Aeronautics and Space Administration (NASA) and Rackspace in 2010, OpenStack is a community initiative to develop open source software for building scalable public and private clouds. In essence, it is a cloud software stack that can control infrastructure resources in a data center through a web-based dashboard or via the OpenStack API.

An OpenStack implementation consists of the deployment of multiple services, each developed in an individual project. Table 4-6 describes some of the most popular OpenStack services, as they are known at the time of this writing.

Table 4-6 OpenStack Services

Keystone: Provides identity services for all services in an OpenStack installation
Nova: Provisions virtual compute on behalf of cloud end users
Glance: Coordinates images that will be used to boot virtual and physical instances within the OpenStack cloud
Neutron: Controls networking resources to support cloud service requests
Swift: Offers data storage services in the form of objects
Cinder: Provisions block-based storage for compute instances
Heat: Manages the entire orchestration of infrastructure and applications within OpenStack clouds
Trove: Provides scalable and reliable databases for cloud end users
Ironic: Provisions bare-metal (physical) servers
Sahara: Implements data-intensive application processing clusters on top of OpenStack
Zaqar: Offers a multitenant cloud-messaging service for web developers
Barbican: Designed for secure storage, provisioning, and management of secrets such as passwords, encryption keys, and certificates
Designate: Provides Domain Name Service (DNS) for cloud tenants
Manila: Builds shared file systems for compute instances
Magnum: Orchestrates Linux container instantiation within OpenStack clouds
Congress: Offers governance and compliance services
Mistral: Creates and schedules workflows for the automation of operational tasks

All of the services described in Table 4-6 interact through predefined APIs. Additionally,
these software modules can integrate with infrastructure elements, such as servers, network
devices, and storage systems, through plug-ins, which essentially normalize the backend
operations of each service to these devices. Third-party vendors can integrate their products
through drivers that translate the functions of a plug-in to a specific device model.
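As a hedged illustration of this API-driven interaction (the endpoint address and credentials below are placeholders for a hypothetical OpenStack cloud), a client typically starts by requesting an authentication token from Keystone before consuming any other service:

import requests

# Placeholder Keystone endpoint and credentials
KEYSTONE = "http://controller.example.com:5000/v3/auth/tokens"
payload = {
    "auth": {
        "identity": {
            "methods": ["password"],
            "password": {
                "user": {
                    "name": "demo",
                    "domain": {"id": "default"},
                    "password": "secret"
                }
            }
        }
    }
}

response = requests.post(KEYSTONE, json=payload)
print(response.status_code)                      # 201 (Created) on success
token = response.headers.get("X-Subject-Token")  # reused in later API calls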

OpenStack is maturing at an incredible rate, adding new features and even new projects through versions released every six months at each OpenStack summit. The version naming convention follows alphabetical order: Austin, Bexar, Cactus, Diablo, Essex, Folsom, Grizzly, Havana, Icehouse, Juno, Kilo, and Liberty.

Further Reading
■ OpenStack: https://www.openstack.org/
■ OpenStack at Cisco: http://www.cisco.com/go/openstack

Exam Preparation Tasks

Review All the Key Topics


Review the most important topics in this chapter, denoted with a Key Topic icon in the outer margin of the page. Table 4-7 lists these key topics and the page number on which each is found.

Table 4-7 Key Topics for Chapter 4


Key Topic Element Description Page Number
Table 4-2 Cloud component classes 90

Complete the Tables and Lists from Memory


Print a copy of Appendix B, “Memory Tables” (found on the CD), or at least the section
for this chapter, and complete the tables and lists from memory. Appendix C, “Answers to
Memory Tables,” also on the CD, includes completed tables and lists so that you can check
your work.

Define Key Terms


Define the following key terms from this chapter, and check your answers in the glossary:

cloud software stack, cloud portal, cloud orchestrator, workflow, cloud meter, chargeback,
showback, cloud infrastructure, consolidation, virtualization, standardization, automa-
tion, orchestration, application programming interface (API), Extensible Markup Language
(XML), JavaScript Object Notation (JSON), RESTful API, OpenStack

This chapter covers the following topics:


■ Introduction to Servers and Operating Systems

■ Server Virtualization History

■ Server Virtualization Definitions

■ Hypervisor Architectures

■ Server Virtualization Features

■ Cloud Computing and Server Virtualization

This chapter covers the following exam objectives:


■ 3.2 Describe Server Virtualization

■ 3.2.a Basic knowledge of different OS and hypervisors


CHAPTER 5

Server Virtualization
Throughout the history of computing, virtualization technologies have offered solutions to
diverse problems such as hardware inefficiency and lack of support for legacy applications.
More recently, the accelerated adoption of server virtualization on x86 platforms in the
mid-2000s has clearly positioned such a trend as one of the most important components of
modern data center architecture.

And without surprise, cloud computing significantly benefits from the agility and flexibil-
ity virtualization brings to application server provisioning. Therefore, the CLDFND exam
requires a basic knowledge of operating systems and hypervisors. Accordingly, this chapter
covers these concepts as well as important definitions that establish the overall context of
server virtualization.

This chapter also introduces the main components of server hardware and explores how
some of the challenges of server hardware are solved through virtualization. Then, it
addresses the most important concepts related to x86 server virtualization, introduces the
most prominent hypervisor architectures available at the time of this writing, and identifies
virtualization features that are particularly useful in cloud computing environments.

“Do I Know This Already?” Quiz


The “Do I Know This Already?” quiz allows you to assess whether you should read this
entire chapter thoroughly or jump to the “Exam Preparation Tasks” section. If you are in
doubt about your answers to these questions or your own assessment of your knowledge
of the topics, read the entire chapter. Table 5-1 lists the major headings in this chapter and
their corresponding “Do I Know This Already?” quiz questions. You can find the answers in
Appendix A, “Answers to Pre-Assessments and Quizzes.”

Table 5-1 “Do I Know This Already?” Section-to-Question Mapping


Foundation Topics Section Questions
Introduction to Servers and Operating Systems 1–2
Server Virtualization History 3–4
Server Virtualization Definitions 5–6
Hypervisor Architectures 7
Server Virtualization Features 8–9
Cloud Computing and Server Virtualization 10

1. Which of the following is not a server hardware component?


a. Storage controller
b. NIC
c. Operating system
d. CPU
e. RAM

2. Which of the following is not a server operating system?


a. Microsoft Windows Server
b. FreeBSD
c. Cisco IOS
d. Linux

3. Which of the following advantages were achieved through mainframe virtualization in


the early 1970s? (Choose two.)
a. Performance increase
b. Legacy application support
c. Downsizing
d. User isolation

4. Which of the following is not an advantage from server virtualization on x86 servers?
(Choose all that apply.)
a. Hardware efficiency
b. Network provisioning
c. Legacy application support
d. Management cost decrease

5. Which of the following are Type-1 hypervisors? (Choose all that apply.)
a. Linux KVM
b. Xen
c. Microsoft Hyper-V
d. VMware Workstation

6. Which of the following identify a hypervisor and its corresponding VM manager?


(Choose all that apply.)
a. KVM and oVirt
b. vSphere and ESXi
c. Hyper-V and Hyper-V Manager
d. ESXi and vCenter
e. KVM and OpenStack Nova

7. Which of the following is not a virtual machine file?


a. Virtual disk
b. NVRAM
c. Swap memory
d. NFS
e. Log

8. Which of the following statements is false?


a. VM high availability enables the restarting of virtual machines that were running
on hosts that failed.
b. Live migration is a disaster recovery feature that allows the migration of VMs
after a physical server suffers a major hardware failure.
c. Resource load balancing allows automatic host selection when you are creating a
virtual machine.
d. VM fault tolerance reserves double the resources a virtual machine requires.

9. In which of the following features is a disruption in associated virtual machines


expected?
a. Fault tolerance
b. Live migration
c. High availability
d. Resource load balancing

10. Which of the following features enables cloud computing pooling characteristics?
a. Fault tolerance
b. Live migration
c. High availability
d. Resource load balancing

Foundation Topics

Introduction to Servers and Operating Systems


In Chapter 2, “Cloud Shapes: Service Models,” you learned the different ways in which a
cloud computing environment can offer services. Testing your memory further, a specific
service model called Infrastructure as a Service (IaaS) provides server instances to end users,
allowing them to potentially run any software on top of these structures.

This section revisits the formal definition of a server, aiming to provide you with a strong
conceptual background before you delve into their positioning and challenges in cloud
computing environments.

What Is a Server?
Generically speaking, a server is a software component that can accept requests from mul-
tiple clients, providing suitable responses after processing these requests or accessing other
servers. This definition relates to the popular client/server application architecture, which
eventually replaced the centralized architecture based on mainframes during the last two
decades of the 20th century.

Although server software may run on desktop computers, its increasing relevance to busi-
ness applications means that dedicated, specialized hardware is better suited to host these
central application components. In the context of data center infrastructure, these custom-
ized micro-computers running server software are also (confusingly) called servers.

Note As you advance in your study in cloud technologies, you will learn that distinct data
center technology teams may use the exact same term for different subjects. Therefore, I
highly recommend that you always try to discover the context of the discussion beforehand
(for example, software or hardware, in the case of the word “server”). Except when noted,
“server” in this chapter refers to the dedicated hardware that hosts server applications.

Data centers, be they for cloud deployments or not, are basically built for application host-
ing and data storing. Suitably, data center architects may view servers as raw material that
ultimately defines how a data center is designed and maintained. In fact, I know many com-
panies that measure their data center agility according to the answer to the following ques-
tion: How long does it take to provision an application server?

Of course, servers have different characteristics depending on the applications they are
hosting. However, they all share the following basic components:

■ Central processing unit (CPU): Unquestionably the most important component of any
computer, the CPU (aka processor) is responsible for the majority of processing jobs and
calculations. The CPU is probably the server component that has experienced the fastest
evolution.
■ Main memory: The CPU uses these fast, volatile storage areas to directly access data retrieved from files and programs that are being worked on. If you picture yourself as the CPU, the main memory would be your desk, stacking documents that you are working on at this very moment or in a few minutes. Main memory can also be referred to as random-access memory (RAM), mainly because the CPU can access any part of it.
■ Internal storage: Most servers have an internal device that allows the recording and read-
ing of data for multiple purposes such as loading the operating system and storing appli-
cation information. The most usual internal storage devices are hard drives, which can be
managed by special cards called storage controllers.

Note In Chapters 8 and 9 (“Block Storage Technologies” and “File Storage Technologies”),
I will explore the multiple available storage technologies for cloud computing deployments.

■ Network interface controller (NIC): This is a hardware device that controls the communication between a server and the network. The vast majority of NICs deploy Ethernet, a protocol so popular that it is commonplace to find native Ethernet interfaces on the server motherboard (the circuit board that physically contains all the computer components). In both scenarios, some authors consider network adapter to be a more appropriate moniker than network interface controller.
■ Peripherals: These are auxiliary devices that perform particular functions such as data
input, output, or specialized processing. Some examples are the CD/DVD drive, mouse,
keyboard, and printer, among many others.

Figure 5-1 displays these server hardware components.

Figure 5-1 Main Server Hardware Components (CPU, main memory, internal storage, and network interface card)

Of course, there are many other server elements, such as the BIOS (basic input/output system), whose discussion is out of the scope of this chapter. Most of them owe their obscurity to a very special piece of software that will be discussed in the next section.

Note You will find more details about BIOS and other components of x86 servers in
Chapter 12, “Unified Computing”.

Server Operating Systems


An operating system (or simply OS) can be defined as software that controls computer
resources and provides common services for other computer programs that run “on top of
it.” Consequently, the operating system is widely considered the most fundamental piece of
software in a computer system.

Table 5-2 describes some of the most popular operating systems at the time of this writing.

Table 5-2 Operating Systems

Microsoft Windows: The most popular operating system for personal computers (PCs). Microsoft introduced Windows in 1985 as a graphical user interface (GUI) for the Microsoft Disk Operating System (MS-DOS). Today, Windows comprises a family of operating systems developed for myriad computing devices, ranging from smartphones to servers.

Linux: First released in 1991 by software engineer Linus Torvalds, Linux is a Unix-based operating system developed and distributed as open source software for PC and server platforms. Companies such as Red Hat, Inc. also offer Linux (among other open source solutions) as an enterprise-ready product.

FreeBSD: Free Berkeley Software Distribution (FreeBSD) is a Unix-like operating system developed at the University of California, Berkeley. This system is still popular among select server platforms.

Apple Mac OS: Family of operating systems developed by Apple, Inc. for its Macintosh computers. Created in 1984, Mac OS was designed for these specialized workstations and was succeeded by Apple OS X (2001), which for the first time included a server version, and is now simply branded OS X.

Android: Acquired in 2005 and currently developed by Google, Android is an OS based on Linux and, at the time of this writing, is the most popular operating system for mobile devices (smartphones and tablets).

Apple iOS: Apple created this mobile operating system in 2007 for the iPhone but later extended its use to its line of tablets (iPad).

Cisco IOS: Cisco IOS is considered the most popular network operating system in the world. It is supported on multiple Cisco routers and switches, among other network devices.

ChromeOS: Google has developed this operating system to run on “netbooks,” which are lightweight and inexpensive computers uniquely built for web-based applications. Based on Linux, ChromeOS was first released in 2009 and currently supports Android applications as well.

The kernel (meaning “core”) is the central part of an operating system. This program directly manages the computer hardware components, such as memory and CPU. The kernel is also responsible for providing services to applications that need to access any hardware component, including NICs and internal storage.

Because it represents the most fundamental part of an operating system, the kernel is executed in a protected area of the main memory (kernel space) to prevent problems that other processes may cause. Therefore, non-kernel processes are executed in a memory area called user space.

As a visual aid, Figure 5-2 illustrates how an OS kernel relates to applications and the com-
puter hardware.

Figure 5-2 Operating System Kernel (applications run in user space, while the kernel runs in kernel space and mediates access to hardware components such as CPU, RAM, NIC, and disk)

Operating systems can be categorized according to the distribution of their components between kernel space and user space. Hence, operating systems whose entire architecture resides in kernel space are called monolithic (for example, Linux and FreeBSD). By contrast, microkernel operating systems are considered more flexible because they consist of multiple processes that are scattered across both kernel space and user space. Mac OS X and current Windows versions are examples of microkernel operating systems.

A server operating system is obviously focused on providing resources and services to applications running on server hardware. Such is the level of specialization of these OSs that many nonessential services, such as the GUI, may unceremoniously be disabled during normal operations.

Server Virtualization History

Contrary to a common misconception among many IT professionals, virtualization technologies are not exclusive to servers, nor are they cutting-edge 21st-century innovations.

As a generic term, virtualization can be defined as the transparent emulation of an IT resource that provides to its consumers benefits that were unavailable in its physical form. Accordingly, this concept can be applied to many resources, such as network devices, storage arrays, and applications.

When virtualization was originally applied to computer systems, several advantages were achieved on two different platforms, as you will learn in the next sections.

Mainframe Virtualization
Famously described by Moore’s law back in 1965, and still true today, hardware development doubles performance and capacity every 24 months. From time to time, these developments require changes in the software stack, including the operating system and application components. Understandably, these software pieces may have deep links with specific processor architectures and may simply stop working (or lose support) after a hardware upgrade.

To counteract the effect of this trend on its mainframe series, IBM released the Virtual
Machine Control Program (VM-CP) in 1972, with widespread adoption within its customer
base. Figure 5-3 lists the main components of this successful architecture.

Figure 5-3 Mainframe Virtualization Architecture (the Control Program runs directly on the mainframe hardware, and each virtual machine comprises a CMS instance with its user applications)

As displayed in Figure 5-3, the Control Program (CP) controlled the mainframe hardware,
dedicating a share of its hardware resources to each instance of the Conversational Monitor
System (CMS). As a result, each CMS instance conjured an abstraction that gave each main-
frame user the perception of accessing an exclusive operating system for his own purposes.
This architecture coined the term virtual machine (VM), which essentially was composed of
a CMS instance and all the user programs that were running on that instance.

Since its first commercial release, mainframe virtualization has brought several benefits to
these environments, such as:

■ User isolation: Through the creation of virtual machines, VM-CP protected multiple
independent user execution environments from each other.
■ Application legacy support: With VM-CP, each user had access to an environment that
could emulate one particular version of operating system and hardware combination.
Consequently, during a migration to a new hardware version, a virtual machine could be
easily created to host applications from a previous mainframe generation.

This approach is still used today in modern mainframe systems, even enabling these systems
to execute operating systems that were developed for other platforms, such as Linux.

Virtualization on x86
The generic term “x86” refers to the basic architecture from computers that used the Intel
8086 processor and its subsequent generations (80286, 80386, 80486, and so on). Its wide
adoption in PCs and “heavy” workstations paved the way for the popularization of the
architecture in data centers, coalescing with the client/server architecture adoption that
started in the 1980s. With the advent of web-based application technologies in the 1990s,
many developers successfully migrated to using x86 platforms as opposed to IBM main-
frames and Reduced Instruction Set Computing (RISC) computers from Sun Microsystems,
DEC, Hewlett-Packard, and IBM.
During this transition, best practices at the time advised that each application component, such as a web service or database software, should have a dedicated server. This ensured proper performance control and a smaller impact in the case of a hardware failure.
Following this one-application-per-server principle, server hardware acquisition was chained
to the requirements of new application deployments. Hence, for each new software compo-
nent, a corresponding server was bought, installed, and activated.

Soon, this approach posed its share of problems, including low efficiency. For example, the
capacity of email servers was sized based on utilization peaks that occurred when employ-
ees arrived at the office and returned from lunch. As a result, multiple servers remained
practically unused for most of their lifetime.

As more diverse applications were deployed, the sprawling army of poorly utilized servers
pushed data centers to inescapable physical limits, such as power and space. Paradoxically,
companies were expanding their data center footprint, or even building new sites, while
their computing resources were extremely inefficient.

Founded in 1998, VMware applied the principles of mainframe virtualization to x86 plat-
forms, allowing a single micro-computer to easily deploy multiple virtual machines. At first,
VMware provided solutions focused on workstation virtualization, where users could cre-
ate emulated desktops running other versions of operating systems on their machines.

At the beginning of the 2000s, VMware successfully ported this technology to server hard-
ware, introducing the era of server virtualization. Leveraging a tighter integration with virtu-
alization features offered by Intel and AMD processors and the increasing reliability from
x86 servers, virtual machines could now offer performance levels comparable to those of
physical servers.

Figure 5-4 depicts how server virtualization improved data center resource efficiency during
this period.

Figure 5-4 Hardware Consolidation Through Server Virtualization (applications such as database, web, email, file, print, DNS, and LDAP services migrate from dedicated physical servers to virtual machines running on a single virtualized server)

As you can see, server virtualization software enables applications that were running on
dedicated physical servers to be migrated to virtual machines. A virtualization administra-
tor decides how many virtual machines a virtualized server can host based on its hardware
capacity, but undoubtedly reaching a much higher efficiency level with multiple different
workloads.

Besides a better use of hardware resources, server virtualization on x86 platforms also inherited the benefit of supporting applications developed for legacy hardware architectures. A remarkable example happened during the architecture evolution from 32-bit to 64-bit, which also occurred in the early 2000s. In the 32-bit architecture, the CPU refers to a memory location using 32 bits, resulting in a maximum addressable memory of 4 GB (2^32 bytes). To overcome this constraint, both Intel and AMD introduced architecture designs with 64-bit addresses, allowing a much higher memory limit.

During this transition period, many IT departments could not take advantage of the per-
formance boost from new servers simply because too many changes would be required
to adapt applications that were originally developed for 32-bit operating systems. Server
virtualization overcame this challenge, supporting 32-bit virtual machines to run over 64-bit
virtualized servers, with practically no software alteration during this process.

As you will learn in the section “Server Virtualization Features” later in this chapter, server
virtualization evolution unleashed many gains that were simply inconceivable to mainframe
virtualization pioneers. However, to fully grasp the importance of server virtualization for
cloud computing, you must delve even deeper into its main fundamentals.

Server Virtualization Definitions


As server virtualization increased hardware consolidation and offered application legacy
support, it naturally became an integral part of modern data center architectures. And
VMware’s competition increased as well, as other software vendors entered the server virtu-
alization market with interesting solutions and slightly different approaches.

To approach server virtualization as generically and vendor neutrally as possible, in the fol-
lowing sections I will focus on the common concepts that bind all virtualization solutions
and introduce hypervisor offerings from many different vendors.

Hypervisor
A hypervisor can be defined as a software component that can create emulated hardware
(including CPU, memory, storage, networking, and peripherals, among other components)
for the installation of a guest operating system. In the context of server technologies, a
hypervisor is essentially a program that allows the creation of virtual servers.

Table 5-3 lists and describes the most commonly deployed hypervisors at the time of this
writing.

Table 5-3 Commonly Deployed Hypervisors

VMware ESXi: Derived from the VMware ESX software created in 2001, ESXi is the market-leading hypervisor as well as the basis for a suite of virtualization tools called VMware vSphere.

Microsoft Hyper-V: Released alongside Windows Server 2008 and enhanced in version 2012, this hypervisor naturally provides a tighter integration with Windows environments.

Linux KVM: The Kernel-based Virtual Machine (KVM) is an open source hypervisor that was integrated into the Linux kernel in 2007.

Red Hat Enterprise Virtualization (RHEV): An enterprise version of Linux KVM, this solution also benefits from other open source virtualization tools such as oVirt.

Citrix XenServer: A Citrix enterprise version of Xen, a hypervisor originally released in 2003 as a project from the University of Cambridge. Citrix XenServer continues to leverage open source development on Xen.

Oracle VM: Also based on Xen, this hypervisor is specially customized to support Oracle applications.

VMware Workstation, VMware Player, and VMware Fusion: VMware Workstation allows the creation of multiple x86-based VMs on a desktop PC. VMware Player is its free version, which permits the creation of only a single VM. VMware Fusion offers the same functionality for Apple Mac OS X users.

Microsoft Windows Virtual PC: Released in 2006, this virtualization software enables the creation of virtual desktops on a single Windows-based computer.

Oracle VM VirtualBox: This virtualization product, originally released in 2007, allows additional guest operating systems to run over Linux, Mac OS X, and Windows, among other operating systems.

Parallels Desktop for Mac: This was the first virtualization software released for Mac computers. Since 2007, this software has permitted the creation of virtual desktops running Windows, Linux, and Mac OS X.

Hypervisor Types
As Table 5-3 indicates, not all hypervisors are alike. Nonetheless, they can be divided into two basic categories, as shown in Figure 5-5.

Figure 5-5 Hypervisor Types (a Type-1 hypervisor runs directly on the physical server hardware, whereas a Type-2 hypervisor runs on top of a preexisting operating system)

For comparison purposes, Figure 5-5 represents a virtualization host (physical server) as a
stack composed of hardware, an operating system, and a single application. To its right, a Type-
1 hypervisor replaces the operating system as the software component that directly controls
the hardware, and for this reason it is also known as a native or bare-metal hypervisor. Type-1
hypervisors are heavily used for server virtualization and are exemplified by the first six solu-
tions listed in Table 5-3. On the other hand, as shown on the far right, a Type-2 hypervisor
runs over a preexisting operating system. When compared to Type-1 hypervisors, these hyper-
visors are considered easier to use, but the trade-off is that they offer lower performance lev-
els, explaining why they are normally deployed for workstation virtualization. Also known as
hosted hypervisors, this category is represented by the last four solutions listed in Table 5-3.

Note These categories follow a classification system developed by Gerald J. Popek and
Robert P. Goldberg in their 1974 article “Formal Requirements for Virtualizable Third
Generation Architectures.” By the way, as a healthy exercise, which type of hypervisor is
VM-CP (introduced earlier in the section “Mainframe Virtualization”)?

Virtual Machines
In the context of modern server virtualization solutions, a virtual machine is defined as an
emulated computer that runs a guest operating system and applications. Each VM deploys
virtual hardware devices such as the following (see Figure 5-6):
■ Virtual central processing unit (vCPU)
■ Virtual random-access memory (vRAM)
■ Virtual hard drive
■ Virtual storage controller
■ Virtual network interface controller (vNIC)
■ Virtual video accelerator card
■ Virtual peripherals such as a CD, DVD, or floppy disk drive

These components perform the same functions as their physical counterparts.



Figure 5-6 Virtual Hardware in a VMware ESXi VM

Note In addition to the devices in the preceding list, Figure 5-6 displays a proprietary
virtual device called VMCI (Virtual Machine Communication Interface), which can provide
fast communication between virtual machines and the ESXi kernel.

Allow me to give away the “secret sauce” of x86 virtualization: from the hypervisor stand-
point, a VM is composed of a set of files residing on a storage device. Figure 5-7 displays
some actual files that define the same VM hosted on VMware ESXi.

In essence, these files dictate how the hypervisor controls the physical resources and shares
them with the guest operating system in each VM. The following are the main VM file
types in VMware ESXi:

■ Virtual disk (.vmdk extension): This file contains all the data a VM uses as its internal
storage device.
■ Swap memory (.vswp extension): This file is used as a replacement for the virtual memo-
ry whenever the processes running on the VM reach the vRAM predefined limit.
■ Log (.log extension): These files store all the information a VM produces for trouble-
shooting purposes.
■ Configuration (.vmx extension): You can find all the hardware settings for a VM in this
file, including vRAM size, NIC information, and references to all the other files.
■ Nonvolatile RAM (.nvram extension): This file contains information used during the
VM initialization, such as the boot device order and CPU settings.
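As a purely illustrative sketch (the key names are real .vmx settings, but the values below are invented rather than taken from Figure 5-7), a few lines of a configuration file might look like this:

displayName = "WebServer-VM1"
memsize = "4096"
numvcpus = "2"
scsi0:0.fileName = "WebServer-VM1.vmdk"
ethernet0.virtualDev = "vmxnet3"
nvram = "WebServer-VM1.nvram"

Note how the configuration file references the virtual disk (.vmdk) and NVRAM (.nvram) files described in the preceding list.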

While other hypervisors certainly use different file extensions to represent a virtual
machine, most of them share this architectural structure.

Figure 5-7 VMware ESXi Virtual Machine Files

Virtual Machine Manager

A virtual machine manager (VM manager) is a software solution that can create and manage virtual machines on multiple physical servers running hypervisors. In this context, a virtualization cluster can be defined as a group of centrally managed hosts.

To deploy a robust virtualization cluster, the VM manager should be redundant. However, as you will learn in the section “Server Virtualization Features,” if you deploy the VM manager as a virtual machine, it can leverage several availability features from its own virtualization cluster.

Now that you know what hypervisors have in common, it is time to take a look at their main distinctions, tools, and deployment models.

Hypervisor Architectures
As previously mentioned, many competing solutions were developed after VMware created
the server virtualization market for x86 platforms. Broadly speaking, the hypervisor archi-
tectures discussed in the following sections offer distinct benefits for environments that are
directly aligned to their origins: VMware vSphere (workstation virtualization), Microsoft
Hyper-V (Windows operating system), and Linux KVM (open source operating system).
You will learn the details of how each solution deploys virtual machines and discover the
main components behind their architectures.

VMware vSphere
VMware vSphere is the software suite comprising the VMware ESXi hypervisor and its asso-
ciated tools. VMware developed vSphere for the purpose of creating and managing virtual
servers. Continuing the basic structure started with VMware Infrastructure (VI), vSphere has
maintained its position as the leading server virtualization architecture since its launch in 2009.

Figure 5-8 depicts the main components of the VMware vSphere architecture and how they
interact with each other.

Figure 5-8 VMware vSphere Architecture (the virtualization administrator uses the vSphere Client or a web client to reach vCenter, which manages virtual machines across multiple ESXi hosts)

As Figure 5-8 demonstrates, a virtualization administrator uses vSphere Client, which is soft-
ware available for Windows computers, to deploy multiple virtual machines on hosts with
installed ESXi hypervisors. Nevertheless, the use of a VM manager enables multiple benefits
besides the centralized creation of VMs, as you will learn in the upcoming section “Server
Virtualization Features.” Originally built over a Windows-based server, VMware vCenter is
the VM manager for the vSphere suite.

A VMware vSphere administrator can use a web browser to control VMware vCenter and,
consequently, its associated virtualization cluster.

Note In VMware vSphere terminology, “cluster” refers to a set of hosts that share the
same policies and feature settings. Therefore, a single vCenter instance can manage multiple
clusters. However, for the sake of simplicity, this certification guide refers to a virtualization
cluster as a single VM manager managing a set of hosts.

Microsoft Hyper-V
Microsoft, the leading personal computer operating system vendor, released its server virtualization offer in 2008. Hyper-V is a hypervisor enabled as a new Windows Server 2012 role, exactly as you would activate services such as Active Directory or Terminal Services in this operating system. Figure 5-9 shows Hyper-V already installed and activated along with other roles in a Windows 2012 Server.

Your first impression might be that Hyper-V is a Type-2 hypervisor, but Hyper-V is indeed
a Type-1 hypervisor because it has direct access to the server hardware. In fact, after the
Hyper-V role is enabled on a Windows 2012 Server, the whole system requires a reboot
so the original operating system instance can be transformed into a special virtual machine,
formally called a parent partition. For obvious reasons, Hyper-V grants special privileges to
this partition because it also deploys some functions that are shared by the hosted VMs.
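As a brief illustration, the role can also be enabled from a PowerShell session on Windows Server 2012 with a single command (a sketch that assumes local administrative privileges; as noted above, the server reboots afterward):

# Enable the Hyper-V role plus its management tools, then restart the server
Install-WindowsFeature -Name Hyper-V -IncludeManagementTools -Restart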

Figure 5-9 Installed Hyper-V in a Windows 2012 Server

Although a virtualization administrator can use client software called Hyper-V Manager to create and manage VMs in a single Hyper-V host, System Center Virtual Machine Manager (SCVMM) is the architecture’s VM manager, extending server virtualization features to multiple hosts. Using the VMM Administrator Console, a virtualization administrator can control a Hyper-V virtualization cluster as depicted in Figure 5-10.
Figure 5-10 Microsoft Hyper-V Architecture

Microsoft Hyper-V has increased its participation in the server virtualization market with each new version. Arguably, its adoption is strongly supported by the ubiquitous presence of the Windows operating system in server environments and its relative operational simplicity to Windows-trained professionals.

Linux Kernel-based Virtual Machine


Standing on the shoulders of a giant called open source development, server virtualization was brought to Linux systems with the increasing adoption of this operating system in the 2000s. Kernel-based Virtual Machine (KVM) is arguably the most popular solution among several other Linux-based Type-1 hypervisors since its release in 2007.

As its name implies, this full virtualization solution is not executed as a user application but actually as a loadable kernel module called kvm.ko. This component has been integrated into mainline Linux since version 2.6.20.
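If you would like to verify this arrangement on a Linux host, a quick sketch with standard shell commands (assuming a reasonably modern distribution) checks both the hardware support and the loaded modules:

# Count CPU flags indicating hardware-assisted virtualization (Intel VT-x or AMD-V)
egrep -c '(vmx|svm)' /proc/cpuinfo
# List the KVM kernel modules currently loaded (kvm plus kvm_intel or kvm_amd)
lsmod | grep kvm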

Both the open source community and vendors packaging Linux enterprise solutions have developed a great variety of methods to manage KVM virtual machines, including command-line interfaces (Linux shell), web interfaces, and client-based GUIs. Additionally, many VM managers were similarly created to manage KVM-based virtualization clusters. Among these are Red Hat Enterprise Virtualization (RHEV), oVirt, and OpenStack Nova. Figure 5-11 illustrates how all of these components are integrated into a generic KVM architecture.

Figure 5-11 Linux KVM Architecture

KVM has been adopted for various reasons, such as cost (it can be considered free, depending on your version of Linux) and the operational ease of installing it in server environments where Linux expertise is available.

Note OpenStack Nova encompasses many functions that are beyond the scope of a VM manager. Thus, besides the creation and management of virtual machines (called “Nova instances”), Nova is considered the de facto orchestrator of most compute-related operations in an OpenStack cloud, offering a set of APIs (including the Amazon EC2 API) to automate pools of compute resources from hypervisors such as Hyper-V, VM managers (such as VMware vCenter), and high-performance computing (HPC) clusters.
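For instance, an end user (or an orchestrator acting on the user’s behalf) could request a new Nova instance with a one-line CLI call along these lines (a sketch only; flavor, image, and instance names are illustrative assumptions):

# Boot a new Nova instance from an image using a predefined flavor
nova boot --flavor m1.small --image ubuntu-14.04 web-server-01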

Multi-Hypervisor Environments
While the three software solutions explained in the previous sections arguably represent the
most widely adopted server virtualization architectures, they are not alone in this market.
Some other hypervisors, such as Citrix XenServer, were briefly described in Table 5-3.

For obvious reasons, each solution presents its own specific advantages. Yet, some customers do not want to be restricted to a single server virtualization architecture. Interestingly, during the past few years, I have observed an increasing number of customers deploying multi-hypervisor environments for various reasons. For example:
■ To decrease licensing costs


■ To avoid vendor lock-in
■ To leverage personnel’s specialized familiarity with a specific operating system
■ To deploy new projects such as cloud computing or virtual desktops

Server Virtualization Features


As I briefly mentioned in the section “Virtualization on x86,” a server virtualization cluster can bring more benefits to data centers than simply creating and monitoring virtual machines on multiple hosts. With the objective of exploring these enhancements, the following sections discuss this list of features (plus a few other interesting features):

■ Virtual machine high availability


■ Virtual machine live migration
■ Resource load balancing
■ Virtual machine fault tolerance

Undoubtedly, these features highlight the fact that VMs are software constructs that can be
much more easily manipulated when compared to physical servers.

Allow me to suggest a small exercise: during the explanation of each feature, try to picture how it can help cloud computing environments. Later, in the section “Cloud Computing and Server Virtualization,” I will address these synergies so you can gauge your cloud design powers.

Virtual Machine High Availability


Server failures may happen at any time. To protect server systems from such events, application teams usually have to develop or acquire solutions that provide an adequate level of resiliency for these systems. These solutions are generically called cluster software and are usually bound to an operating system (e.g., Windows Server Failover Cluster) or a single application (e.g., Oracle Real Application Clusters).

With the considerable variety of operating systems and their hosted applications, the job of managing cluster software in pure physical server environments can become extremely difficult. Notwithstanding, through virtualization clusters, hypervisors can actually provide high availability to virtual machines regardless of their application or operating system flavors.

Figure 5-12 details how selected virtual machines can be conveniently “resurrected” whenever their host suffers a major hardware or hypervisor failure.

In the situation depicted on the left side of Figure 5-12, a virtualization cluster consists of three hosts (Host1, Host2, and Host3) with two virtual machines each. After Host1 experiences a major failure, both of its virtual machines are restarted on other hosts from the same cluster, as shown on the right. This automatic procedure is only possible because the affected VMs’ files are stored on an external storage system (such as a disk array or network-attached storage) rather than a hard disk on Host1.
Figure 5-12 Virtual Machine High Availability Example

Note Both disk arrays and network-attached storage (NAS) will be properly explained in Chapters 8 and 9. At this stage, you simply need to understand that these systems represent external storage devices that can be accessed by multiple servers, such as hosts in the same virtualization cluster.

VM high availability (HA) is extremely useful for legacy applications that do not have any embedded availability mechanism. As a consequence, this specific enhancement reduces development effort if, of course, the application service-level agreement (SLA) tolerates a complete reboot.

Virtual Machine Live Migration


A much-hyped virtualization innovation in the mid-2000s, live migration enables the transport of a virtual machine between two hosts of a virtualization cluster with minimal disruption to its guest OS and hosted applications.

Figure 5-13 details how this elegant sleight of hand actually occurs.

In Figure 5-13, the virtualization administrator decides that a virtual machine must move from Host1 to Host2 using live migration. Accordingly, the VM manager notifies both hosts about the operation. Immediately after Host2 creates a copy of the soon-to-be-migrated VM, Host1 starts a special data connection to Host2 to synchronize the VM state until Host2 has an exact copy of the VM (including its main memory and, consequently, end-user sessions). After the synchronization is completed, the VM copy is fully activated while its original instance is abruptly discarded.

Figure 5-13 Virtual Machine Live Migration Example

Figures 5-14 and 5-15 detail the live migration procedure in a VMware vSphere environment, where it is called vMotion. For your information, the whole process took approximately 4 seconds.

Figure 5-14 Migrating a Virtual Machine

Figure 5-14 displays a possible method to start the live migration of VM-Nomad, currently running on host10, to host11. Using vSphere Client, I right-clicked the VM name and selected the option Migrate.
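The same migration could also be triggered from the command line with VMware PowerCLI; the following is a minimal sketch, assuming a PowerCLI session already connected to vCenter and the host names used in this example:

# Live-migrate VM-Nomad to host11 without powering it off
Move-VM -VM (Get-VM -Name "VM-Nomad") -Destination (Get-VMHost -Name "host11")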

Figure 5-15 Virtual Machine After the Migration

Figure 5-15 displays the result of the whole process. Notice that I have also started an ICMP Echo test (ping) from my desktop directed to the VM to verify whether the migration causes any loss of connectivity. And although it does impact a single ICMP Echo in the series, the migration is practically imperceptible to a TCP connection due to its native reliability mechanisms.

Also observe that, similarly to virtual machine HA, the live migration process is possible
because the VMs’ files are located in a shared storage system.

Note Recently, some hypervisor vendors have overcome this limitation. Deploying a concept called Shared Nothing Live Migration, the whole set of VM files (including the virtual disk) is copied between hosts before the machine state is synchronized. While this technique eliminates the requirement for a shared external storage system, you should expect the whole migration to be completed in a few minutes, not seconds as with standard live migrations.

One very important point: VM live migration is definitely not a disaster recovery feature, simply because it is a proactive operation rather than a reaction to a failure. On the other hand, several data center architects consider live migration a disaster avoidance technique that allows VMs to be vacated from hosts (or even sites) before a predicted major disruption is experienced.

Resource Load Balancing


The degree of flexibility introduced by live migrations has fostered an impressive toolbox
for automation and capacity planning. One of these tools, resource load balancing, enables
hosts that may be on the verge of a predefined threshold to preemptively send virtual
machines to other hosts, allowing an optimal utilization of the virtualization cluster’s overall
resource capacity.

Figure 5-16 shows an example of a resource load balancing activity in a virtualization cluster.

Figure 5-16 Resource Load Balancing Example

In Figure 5-16, Host1 presents a 95% utilization of a hardware resource (for example, CPU
or memory). Because the VM manager is monitoring every resource on cluster hosts, it can
proactively take an automated action to migrate virtual machines from Host1 to Host2 or
Host3. Such a procedure can be properly planned to achieve a better-balanced environment,
avoiding the scenario in which oversaturated hosts invariably impact VM performance.

Many load-balancing methods are available from server virtualization solutions, also allowing a level of customization for virtualization administrators who want to exploit virtualization to fulfill specific requirements. Additionally, these load-balancing decisions can be as automatic as desired. For example, rather than making the decision to migrate the VMs, the VM manager may only advise the virtualization administrator (through recommendation messages) about specific manual operations that will improve the consumption of available resources.

Virtual Machine Fault Tolerance


The underlying mechanisms of live migration also facilitated the creation of sophisticated virtualization features. One example is virtual machine fault tolerance, which enables applications running on VMs to continue without disruption if a hardware or hypervisor failure hits a host.
Figure 5-17 depicts the internal operations behind VM fault tolerance.
Figure 5-17 Virtual Machine Fault Tolerance Example

The left side of Figure 5-17 represents a situation where the virtualization administrator has decided that a VM deserves fault tolerance protection. Following that order, the VM manager locates a host that can house an exact copy of the protected VM and, contrary to the live migration process, the VM copy continues to be synchronized until a failure happens in Host1. After such an event, the VM copy is fully activated with minimal disruption to the VM, including end users’ sessions.

When compared to VM high availability, VM fault tolerance ends up spending double the CPU and memory resources the protected VM requires in the virtualization cluster. In spite of this drawback, VM fault tolerance can be considered a perfect fit for applications that simply cannot afford to reboot in moments of host malfunction.

Other Interesting Features


With creativity running high during the server virtualization boom, many enhancements and sophisticated mechanisms were developed. The following list explains some of the most interesting features of current virtualization clusters, as well as their benefits to modern data centers:

■ Power management: With this feature, the VM manager calculates the amount of resources that all VMs are using and analyzes if some hosts may be put on standby mode. If so, the VMs are migrated to a subset of the cluster, enabling automatic power savings during periods of low utilization.
■ Maintenance mode: If a host from a virtualization cluster requires any disruptive operation (such as a hypervisor version upgrade or patch installation), the virtualization administrator activates this mode, automatically provoking the migration of VMs to other hosts in the cluster and avoiding VM-related operations on this host.
■ Snapshot: This feature preserves the state and data of a virtual machine at a specific instant so you can revert the VM to that state if required. Under the hood, a snapshot is primarily a copy of the VM files with a reference to a point in time (see the sketch after this list).
■ Cloning: This operation creates a copy of a virtual machine that results in a completely independent entity. To avoid future conflicts, the clone VM is assigned different identifiers, such as MAC addresses.
■ Templates: If you want to create a virtual machine that will be cloned frequently, you can create a master copy of it, which is called a template. Although it can be converted back to a VM, a template provides a more secure way of creating clones because it cannot be changed as easily as a standard VM.
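As a brief illustration of the snapshot, cloning, and template operations just described, consider this hedged PowerCLI sketch, which assumes a session connected to vCenter and a hypothetical VM named Web1 (all names are illustrative):

# Preserve the current state of Web1 before a risky change
New-Snapshot -VM "Web1" -Name "pre-upgrade" -Description "State before application upgrade"
# Revert Web1 to the preserved state if the change goes wrong
Set-VM -VM "Web1" -Snapshot (Get-Snapshot -VM "Web1" -Name "pre-upgrade") -Confirm:$false
# Create an independent clone of Web1 on host10
New-VM -Name "Web1-Clone" -VM "Web1" -VMHost "host10"
# Turn Web1 into a master copy for frequent cloning
New-Template -VM "Web1" -Name "Web1-Template" -Location (Get-Datacenter)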

Undeniably, most of the features discussed in this section can help cloud computing deployments achieve the characteristics described in Chapter 1, “What Is Cloud Computing?” And, as you will learn in the next section, these features have become an important foundation for these environments.

Cloud Computing and Server Virtualization


Server virtualization set such a firm foundation for cloud computing that many professionals still embrace these concepts as being synonymous. This confusion is understandable because virtualization technology has surely decreased the number of physical operations that encumbered server provisioning in data centers.

But while virtual machines greatly facilitate application provisioning, a server virtualization
cluster is definitely not a cloud. For sure, virtual machines are not the only resource end
users expect from a cloud computing environment. And, as I have described in Chapter 1,
a cloud can potentially offer physical resources, such as database servers, or networking
assets, such as virtual private networks (VPNs).

With the objective of further exploring both the similarities and differences between server
virtualization and cloud computing, the next three sections will review some essential cloud
characteristics discussed in Chapter 1.

Self-Service on Demand
Some VM managers offer embedded web portals for end users to easily provision and
manage VMs. Consequently, these hypervisor architectures can express the self-service on
demand characteristic through a very simple IaaS-based private cloud.

However, as I have personally observed in many customer deployments, some self-service portal implementations do not address other essential cloud characteristics, such as provisioning of physical resources and service metering. As a result, it is much more common to find cloud deployments using additional software layers consisting of service catalog portal, infrastructure orchestration, and chargeback (or showback) solutions.

As a refresher from Chapter 4, “Behind the Curtain,” Figure 5-18 illustrates that a server virtualization cluster is just one of several infrastructure components orchestrated by a cloud software stack.
Figure 5-18 also highlights that end-user requests generate API calls (asking to create, read, update, or delete a resource) to the VM manager via a cloud orchestrator. With such an arrangement, metering is made possible through the collection of data from the VM manager. Therefore, a flexible and well-written VM manager API is certainly a requirement for the integration of hypervisor architectures and cloud orchestrators. And to spare customization efforts on this integration, most server virtualization vendors have developed complementary cloud stack solutions.
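Purely as a hypothetical sketch of such a CRUD-style call (the URL, token, and payload below do not correspond to any specific product), an orchestrator could ask a VM manager’s REST API to create a virtual machine like this:

# Hypothetical REST request asking a VM manager to create a VM (illustrative only)
curl -X POST https://vm-manager.example.com/api/v1/vms \
  -H "Authorization: Bearer <token>" \
  -H "Content-Type: application/json" \
  -d '{"name": "web-01", "vcpus": 2, "memoryGB": 4, "network": "VLAN105"}'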
Figure 5-18 Server Virtualization Cluster in Cloud Computing Environment

Resource Pooling
Server virtualization features, such as live migration and resource load balancing, can change
the perception of a server virtualization cluster from being a simple group of hosts to being
a pool of computing resources. Figure 5-19 illustrates this perspective.

Figure 5-19 Server Virtualization Cluster as a Pool of Resources



Figure 5-19 displays a VM manager deploying resource load balancing on three hosts in a virtualization cluster. In this environment, a cloud orchestrator does not have to define on which host a new VM will be instantiated, simply because the VM manager already takes care of such details. Therefore, from the cloud software stack perspective (right of the “virtualization mirror”), the cluster is a big computing device aggregating a pool of resources (CPU, memory, I/O bandwidth, and storage, among others) from the cluster hosts.

On the other hand, if end-user requests have already exhausted one of the cluster resources,
the cloud software stack should ideally detect such saturation and provision more physical
devices (hosts or storage) into the virtualization cluster.

Elasticity
Because a virtual machine is essentially a set of software components, it can be scaled up according to application or end-user requirements. For example, if an application’s performance is being adversely affected by memory constraints from its virtual machine, a VM manager can increase its assigned vRAM without any serious disruption. And even if the resource cannot be changed without a reboot, virtualization features such as cloning and templates can scale out applications, providing more VMs as required.

Even so, to deploy such an elasticity capability, a cloud architect should consider the additional use of performance monitoring solutions that observe applications running on virtual machines. As a general rule, these solutions are not part of a VM manager software package.
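As a small scale-up illustration, growing the vRAM of a hypothetical VM via PowerCLI could look like the following sketch (the VM name is illustrative, and memory hot-add must be enabled on the VM for the change to avoid a reboot):

# Increase the memory allocation of VM Web1 to 8 GB
Set-VM -VM "Web1" -MemoryGB 8 -Confirm:$false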

Around the Corner: Linux Containers and Docker


In this chapter, you have learned how virtual machines revolutionized server and application provisioning in data centers during the first decade of this century. However, in certain situations an application deployment may not require some of the overhead related to a VM, such as

■ Emulation of hardware (for example, CPU, memory, NIC, and hard disk)
■ Execution (and management) of a separate operating system kernel
■ Initialization time, when compared to application provisioning over an already installed
operating system

Linux Containers (LXC) offers a lightweight alternative to hypervisors, providing operating system-level virtualization instead. A single Linux server can run multiple isolated Linux systems, called containers, using the combination of the following kernel security features:

■ Control groups (cgroups): Provide resource isolation to each container, dedicating CPU,
memory, block I/O, and networking accordingly
■ Access control: Denies container access to unauthorized users
■ Namespaces: Isolate the application perspective of the operating system, offering distinct
process trees, network connectivity, and user identifiers for each provisioned container

Because a container reuses libraries and processes from the original Linux installation, it will have a smaller size when compared with a virtual machine. Therefore, if your cloud environment provisions Linux applications, containers are probably the fastest and most economical way to support such offerings. However, I highly recommend that you carefully analyze whether the isolation provided by containers is enough to isolate tenant resources in your cloud environment.

Docker is an open source project that automates application deployments using LXCs. Much like a VM manager for a virtualization cluster, Docker enables flexibility and portability of containers between Linux servers and even cloud environments. Figure 5-20 displays the differences between virtual machines, containers, and Docker.

Figure 5-20 Comparing Virtual Machines, Linux Containers, and Docker

Figure 5-20 also shows that Docker can be managed through either a command-line interface (CLI) or a REST API, and can leverage dockerfiles, which are basically text documents that individually contain all commands required to automatically build a container image.
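A minimal sketch of that workflow, assuming a Linux host with Docker installed (image contents and names are illustrative):

# Dockerfile: describe a small web server image
FROM ubuntu:14.04
RUN apt-get update && apt-get install -y nginx
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]

# Build the image from the dockerfile above and start a container from it
docker build -t web-image .
docker run -d --name web1 -p 80:80 web-image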

Many analysts have pointed out how LXCs and Docker are very appropriate for PaaS-based cloud services, potentially replacing virtual machines as their atomic unit. Generally speaking, containers should be seen as a solution complementary to virtual machines, to be used whenever the advantages of a container (less overhead) surpass its limitations (shared operating system resources).

Further Reading
■ Virtualization Matrix: http://www.virtualizationmatrix.com/matrix.php?category_
search=all&free_based=1
■ Linux Containers: https://linuxcontainers.org/
■ Docker: https://www.docker.com/

Exam Preparation Tasks

Review All Key Topics


Review the most important topics in this chapter, denoted with a Key Topic icon in the outer margin of the page. Table 5-4 lists a reference of these key topics and the page number on which each is found.

Table 5-4 Key Topics for Chapter 5


Key Topic Element Description Page
List Server hardware components 122
Table 5-2 Operating systems 124
List Benefits of mainframe virtualization 126
Table 5-3 Commonly deployed hypervisors 129
List Virtual machine components 130
List Virtual machine files 131
List Benefits of virtualization features to data centers 141

Complete the Tables and Lists from Memory


Print a copy of Appendix B, “Memory Tables” (found on the CD), or at least the section
for this chapter, and complete the tables and lists from memory. Appendix C, “Answers to
Memory Tables,” also on the CD, includes completed tables and lists so that you can check
your work.

Define Key Terms


Define the following key terms from this chapter, and check your answers in the glossary:

server, operating system, kernel, hypervisor, VM manager, ESXi, Hyper-V, KVM, VM high
availability, VM live migration, resource load balancing, VM fault tolerance
This chapter covers the following topics:

■ Virtual Machines and Networking

■ Cisco Nexus 1000V

■ Virtual eXtensible LAN

This chapter covers the following exam objectives:

■ 4.2 Describe Infrastructure Virtualization


■ 4.2.a Difference between vSwitch and DVS
■ 4.2.b Cisco Nexus 1000V components
■ 4.2.b.1 VSM
■ 4.2.b.2 VEM
■ 4.2.b.3 VSM appliance
■ 4.2.c Difference between VLAN and VXLAN
CHAPTER 6

Infrastructure Virtualization
The sole action of creating virtual servers is certainly not enough to provide applications to end users. In fact, these devices are expected to interact with their users, as well as other resources, making the subject of networking mandatory in any application-related conversation. Many topics related to physical networks (such as VLANs, IP addresses, and routing) are naturally part of this discussion. Additionally, the popularization of server virtualization on x86 platforms has led to the development of other solutions to address the problem of virtual machine communication. An amalgamation of both old and new concepts has spawned a new and exciting branch of networking called virtual networking.
With server virtualization performing a fundamental role in cloud computing, virtual machine (VM) traffic management has fostered intense development during the past decade. Unsurprisingly, the CLDFND exam requires knowledge about the main fundamentals of virtual networking in Cisco environments, such as the two main variants of virtual switches (vSwitch and distributed virtual switch), multi-hypervisor virtual switching based on Cisco Nexus 1000V, and Virtual eXtensible LANs (VXLANs).
Fully addressing these points, this chapter investigates the motivations behind virtual networking, compares multiple solutions for virtual machine communication, and describes how Cisco has approached this new data center knowledge area.

“Do I Know This Already?” Quiz


The “Do I Know This Already?” quiz allows you to assess whether you should read this
entire chapter thoroughly or jump to the “Exam Preparation Tasks” section. If you are in
doubt about your answers to these questions or your own assessment of your knowledge
of the topics, read the entire chapter. Table 6-1 lists the major headings in this chapter and
their corresponding “Do I Know This Already?” quiz questions. You can find the answers in
Appendix A, “Answers to Pre-Assessments and Quizzes.”

Table 6-1 “Do I Know This Already?” Section-to-Question Mapping


Foundation Topics Section Questions
Virtual Machines and Networking 1–3
Cisco Nexus 1000V 4–7
Virtual eXtensible LAN 8–10

1. Which of the following is not applicable to virtual switching in general?


a. Responsible for virtual machine Layer 2 connectivity
b. Can be configured per host, without the use of a VM manager
c. Does not allow live migration of VMs
d. Supports LACP
e. Can belong to more than one host

2. Which of the following are differences between the VMware vNetwork Standard
Switch and the VMware vNetwork Distributed Switch? (Choose four.)
a. VMware vCenter requirement
b. Uplink Port Groups
c. Allows VMotion
d. Resides in more than one host
e. Permits Port Groups with distinct load balancing methods
f. Supports LACP

3. Which of the following virtual devices can be considered distributed virtual switches?
(Choose three.)
a. VMware vSS
b. Open vSwitch
c. Microsoft Virtual Network Switch
d. Linux bridge
e. VMware vDS
f. Cisco Nexus 1000V

4. Which of the following are correct about Cisco Nexus 1000V Switch for VMware
vSphere? (Choose four.)
a. The communication between the VSM and VEM requires Layer 3 connectivity.
b. The communication between the VSM and VEM requires Layer 2 connectivity.
c. The VEM runs inside the VMware vSphere kernel.
d. The VEM does not run inside the VMware vSphere kernel.
e. Cisco Nexus 1000V can coexist with other virtual switches in the same host.
f. Cisco Nexus 1000V cannot coexist with other virtual switches in the same host.
g. The communication between active and standby VSMs requires Layer 3 connec-
tivity.
h. The communication between active and standby VSMs requires Layer 2 connec-
tivity.

5. Which of the following are correct associations of Cisco Nexus 1000V interfaces and
VMware vSphere interfaces? (Choose two.)
a. Ethernet and vnic
b. vEthernet and vmk
c. vEthernet and vmnic
d. Ethernet and vmnic
e. Ethernet and vmk
6. Which of the following is not a Cisco Nexus 1000V feature?


a. vTracker
b. PortChannel
c. DHCP server
d. Private VLAN
e. TrustSec

7. Which hypervisors support Cisco Nexus 1000V? (Choose three.)


a. Microsoft Hyper-V for Windows 2008
b. Microsoft Hyper-V for Windows 2012
c. VMware vSphere
d. Xen
e. KVM

8. Which protocols are used in the VXLAN encapsulation header? (Choose two.)
a. TCP
b. GRE
c. UDP
d. IP

9. Which of the following are advantages of VXLANs over VLANs? (Choose three.)
a. Avoids MAC address table overflow in physical switches
b. Offers easier provisioning of broadcast domains for virtual machines
c. Provides more segments
d. Use IP multicast
e. Deploys flood-based learning

10. Which of the following represent improvements of Cisco Nexus 1000V Enhanced
VXLAN over standard VXLAN deployments? (Choose two.)

a. Uses less MAC addresses


b. Does not require IP multicast
c. Eliminates broadcast and unknown unicast frames
d. Eliminates flooding

Foundation Topics

Virtual Machines and Networking


Chapter 5, “Server Virtualization,” explored how mainframe virtualization was adapted for x86 platforms to conceive one of the most important atomic units of a modern data center: the virtual machine. In the same chapter, you also learned why server virtualization is a key component in cloud computing, offering native agility, standardization, mobility, and resilience to applications deployed in such environments. With this background, this section zooms in on a simple (but challenging) problem: how to control VM traffic inside a hypervisor.

An Abstraction for Virtual Machine Traffic Management


Consider the following philosophical question:

If a virtual machine cannot communicate with any other element, does it really
exist?

From a technical standpoint, a virtual server obviously consumes CPU cycles, accesses
memory, and saves data. Nonetheless, if the main purpose of a VM is to provide services to
other systems, it certainly cannot achieve that without data connectivity.

The emulation of network adapters within virtual machines presupposes that actual traffic is
exchanged across this virtual interface, including IP packets encapsulated in Ethernet frames
and all ancillary signaling messages (e.g., Address Resolution Protocol [ARP]). Consequently,
during the intense development of server virtualization in the early 2000s, several solutions
were proposed for the following so-called challenges of VM communication:

■ Challenge 1: Which software element should control the physical network adapter driver?
■ Challenge 2: How should communication occur between VMs and external resources such as physical servers and routers?
■ Challenge 3: How should VMs located in the same host be isolated from one another?

Depending on your level of server virtualization expertise, you likely already know the solution to all of these challenges. Nonetheless, to fully appreciate the subtleties and elegance of modern virtual networking solutions, for the moment put yourself in the shoes of a virtualization vendor in the early 2000s. As a starting point for this discussion, consider Figure 6-1.

In Figure 6-1, the same virtualized host containing two virtual machines is represented twice.
And as described in the preceding list of challenges, these VMs must talk to each other and
to devices located beyond the access switch, in the data center network.

Additionally, Figure 6-1 introduces two ways of representing virtual machines and their
hosts. Whereas the drawing on the left depicts VMs running over the hypervisor (which I
call “server vision”), the drawing on the right positions VMs at the bottom, mimicking usual
network topologies where physical servers are connected below communication devices (a
“network vision”).
Figure 6-1 Virtual Machine Networking Challenges

Although the two visions might seem very similar, I highly suggest you refrain from using the server vision for networking problems. Comparing both methods, the network vision allows a more detailed exploration of networking issues in complex environments. Thus, from now on, this certification guide will employ the latter method whenever virtual networking is being discussed.

When addressing the first challenge, you will probably agree that standard VMs should not
control the physical network adapter driver, not only to avoid becoming a bottleneck for
VM traffic, but also to prevent other VMs from accessing this resource. As a consequence,
most server virtualization vendors have decided that the hypervisor itself should control the
physical network interface controller (NIC), sharing this resource with all hosted VMs.

Regarding the second challenge, try to picture how a hypervisor can use the physical NIC to
forward VM data to the outside world. In this case, is it better to route (Layer 3) or bridge
(Layer 2) the traffic? While routing certainly can offer better isolation to the hosted VMs,
it invariably imposes operational complexities (such as subnet design and routing protocol
implementation) that do not fit into most server virtualization deployments. Therefore, for
the sake of simplicity, most virtual networking solutions are based on Layer 2 forwarding of
Ethernet frames between VMs and the access switch.

Finally, addressing the third challenge, virtualization administrators expect to define which
VMs, even running in the same hypervisor instance, should communicate with each other.
As a direct result, most virtual networking solutions converged on the most traditional
method of traffic isolation: the virtual local-area network (VLAN).

A VLAN is formally defined as a broadcast domain in a single Ethernet switch or shared


among connected switches. Whenever a switch port receives a broadcast Ethernet frame
(destination MAC address is ffff.ffff.ffff), the Layer 2 device must forward this frame to
all other interfaces that are defined in the same VLAN. In other words, if two hosts are
connected to the same VLAN, they can exchange frames. If not, they are isolated until an
external device (such as an IP router) connects them.

Taking all of these considerations into account, and adding the goals of simplicity and flexibility, the virtual switch has emerged as the most common virtual networking solution among all hypervisors. To further delve into this software network device, the following sections explore its main characteristics, evolution, and variants.

The Virtual Switch


In the early 2000s, VMware created the concept of the virtual switch (vSwitch), which is
essentially a software abstraction where the hypervisor deploys a simplified version of a Layer
2 Ethernet switch to control virtual machine traffic. At the time of this writing, this specific
networking element is officially known as VMware vNetwork Standard Switch (vSS).

Note For the sake of simplicity, I will generically refer to a virtual network device that
shares the characteristics presented in this section as a vSwitch, regardless of its hypervisor.

Figure 6-2 illustrates the forwarding principles behind a generic vSwitch.

Figure 6-2 Example of a vSwitch in Action

In a vSwitch, the physical NICs act as uplinks, conducting VM traffic beyond the access
switch. As represented in Figure 6-2, a vSwitch can forward Ethernet frames from a virtual
machine to the physical switch and vice versa. And because each VM emulates at least one
NIC, real Ethernet frames traverse the virtual wire that exists between the virtual adapter
and the virtual switch. After analyzing the destination MAC address in a frame, the vSwitch
decides if it should send the frame to the physical NIC or to a VM whose virtual network
adapter is connected to the same VLAN. In the latter situation, the data exchanged between
two VMs in the same host only requires a memory-based operation.

Using VLAN tagging on its physical NICs, a vSwitch can deploy more than one VLAN on these interfaces. Based on the 12-bit VLAN ID field defined in the IEEE 802.1Q standard, the virtual device can identify to which VLAN an incoming Ethernet frame belongs and also signal to the access switch the VLAN of an outgoing frame.
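On the physical side, this means the access switch port facing the host is usually configured as an IEEE 802.1Q trunk. A minimal NX-OS sketch follows (interface and VLAN numbers are illustrative assumptions):

! Carry VLANs 105 and 106 toward the virtualized host over an 802.1Q trunk
switch(config)# vlan 105-106
switch(config-vlan)# exit
switch(config)# interface ethernet 1/1
switch(config-if)# switchport mode trunk
switch(config-if)# switchport trunk allowed vlan 105-106
switch(config-if)# no shutdown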
Like all successful abstractions, the vSwitch lets existing networking knowledge (such as Layer 2 switches and VLANs) drive the adoption of its virtual version. Nonetheless, there are fundamental differences between a physical Ethernet switch and a vSwitch.

The first difference is the definition of a vSwitch connectivity policy, which essentially defines how the virtual device handles traffic that belongs to a group of VMs. In the case of VMware vSphere, this policy is called a Port Group, and it is depicted in Figure 6-3.

Figure 6-3 VMware vSphere Port Group

In the VMware vSphere architecture, the Port Group VLAN105 defines the following for a VM network connection:

■ A VLAN ID (105)
■ Security policies (rejecting promiscuous mode and accepting MAC address change detection and forged transmits)
■ Traffic shaping (which can potentially define average bandwidth, peak bandwidth, and burst size)
■ Physical NIC teaming, specifying load balancing algorithm, network failover detection method, switch notifications, failback, and network adapter failover order

The target of a connectivity policy defines the key distinction between a Port Group and a
physical switch port configuration. While in the physical world we configure VLANs (and other
network characteristics) on the switch interface, a Port Group is assigned to a VM network
adapter. And although this difference is fairly subtle, I will explain in the next section how it
radically changes the dynamics of network provisioning in cloud computing environments.

As a result, if you want two VMs to directly communicate with each other inside the same
ESXi host, you just need to assign the same Port Group (or Port Groups that share the same
VLAN) to their virtual adapters. As an illustration of this process, Figure 6-4 depicts the
assignment of Port Group VLAN105 to the network adapter of a VM called VM-Web1.

Figure 6-4 VMware vSphere Port Group Assignment

In Figure 6-4, I have used vSphere Client to change the Port Group assigned to a VM network adapter, after right-clicking the virtual machine name and selecting the desired network adapter. Through these steps, a list of available Port Groups for that host appears in the Network Label box.
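Both operations (creating a Port Group and assigning it to a VM adapter) can also be scripted; here is a hedged PowerCLI sketch for a standard vSwitch, where the host, vSwitch, and adapter selection are illustrative assumptions:

# Create Port Group VLAN105 on vSwitch0 of host10
New-VirtualPortGroup -VirtualSwitch (Get-VirtualSwitch -VMHost "host10" -Name "vSwitch0") -Name "VLAN105" -VLanId 105
# Connect VM-Web1's network adapter to the new Port Group
Get-VM -Name "VM-Web1" | Get-NetworkAdapter | Set-NetworkAdapter -NetworkName "VLAN105" -Confirm:$false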
In the case of the VMware vSwitch, a Port Group can only exist in a single host. Hence, as
virtualization clusters were expanded into hundreds of hosts, the repetitive creation of Port
Groups became a significant administrative burden.

The operational strain is especially heightened if the virtualization cluster is deploying live
migration of VMs between any pair of hosts. In this case, all hosts must deploy the same
vSwitch and Port Groups for a successful migration.

Note When a VM live migrates from one host to another, the destination hypervisor
sends a Reverse ARP (RARP) on behalf of the VM to update all physical switches with the
VM’s new location. This gratuitous refresh only affects the physical switches that share the
VLAN that contains the migrated VM. Hence, to avoid loss of connectivity between the
VM and other resources, this VLAN must be enabled on all physical switches that connect
both source and destination hosts.
In the next section, you will be introduced to an evolution of the vSwitch that was created
to overcome these operational concerns.

Distributed Virtual Switch


As an innovation introduced in VMware vSphere version 4.0, VMware released a new virtual networking device, formally known as the vNetwork Distributed Switch (vDS). Notwithstanding, I will generically refer to it as a distributed virtual switch (DVS) in order to define a whole class of similar solutions that was subsequently developed on other hypervisors.

Figure 6-5 depicts some of the differences between a vSwitch and a DVS in the context of
VMware vSphere.

Figure 6-5 Comparing a VMware vSwitch and a VMware DVS

In Figure 6-5, you can observe that each vSwitch is confined to a single hypervisor instance,
whereas the DVS is stretched across both hosts as if they were deploying the same virtual
networking device. The reason for that perception relates to the creation of distributed
Port Groups, which are produced once in VMware vCenter and automatically replicated to
all hosts that are “connected” to the DVS.

Figure 6-5 also references VMware vSphere networking terminology, described in Table 6-2.

Table 6-2 VMware vSphere Interfaces


VMware vSphere Interface    Description
vmnic Short for virtual machine network interface controller, it represents the
physical NICs for an ESXi hypervisor instance and performs the role of an
uplink for a vSwitch or DVS. Exclusively for the VMware DVS, this interface
is associated to an uplink Port Group.
vnic Short for virtual network interface controller, it embodies the emulation of
a network adapter within a virtual machine. It can be associated to a standard
Port Group (vSwitch) or distributed Port Group (DVS).
vmknic Short for virtual machine kernel network interface controller, it is actually a virtual interface created in the ESXi kernel that is used for management purposes, IP storage access, and VM memory exchange during a live migration. It has an IP address and can also be associated to a standard Port Group (vSwitch) or a distributed Port Group (DVS).

Besides its optimized provisioning, the VMware DVS has features that are not found on the
VMware vSwitch, such as private VLANs, port mirroring, Link Layer Discovery Protocol
(LLDP), and Link Aggregation Control Protocol (LACP).

As I have seen many times before, virtual networking novices may run into VM connectivity problems whenever more than one virtual device is deployed in a host. My original recommendation still stands: try to draw your virtual and physical topologies using the “network vision” introduced in Figure 6-1. Through this method, you will be well equipped to troubleshoot problems such as

■ “Orphaned” virtual machines: VMs that are connected to a vSwitch (or DVS) that does not have an assigned vmnic (physical NIC).
■ Disjoint LANs: Happens when virtual machines are attached to virtual switches connected to distinct physical LANs.

Virtual Networking on Other Hypervisors


With vendors other than VMware introducing alternative solutions to the thriving server virtualization market in the first decade of the 21st century, new virtual networking approaches were developed and adopted with varying success. Table 6-3 summarizes some of these virtual networking devices and explains how they intrinsically differ from the VMware solutions presented in the previous two sections.

Table 6-3 Virtual Networking in Other Hypervisors


Virtual Device (Hypervisor): Description

Microsoft Virtual Network Switch (Microsoft Hyper-V): Microsoft developed this virtual networking solution in Hyper-V for Windows 2008. In essence, it is a Layer 2 device residing in a host parent partition. The Virtual Network Switch can deploy three types of networks to Hyper-V virtual machines: external (which allows communication with elements located outside of the host), internal (which enables connectivity between VMs and the parent machine), and private (which permits data exchange only among VMs and nothing else).

Microsoft Extensible Virtual Switch (Microsoft Hyper-V): Also residing in the parent partition, this virtual networking device was introduced in Hyper-V for Windows Server 2012 and, therefore, adds many additional functionalities such as physical NIC teaming and quality of service (QoS). This virtual device also integrates with third-party networking solutions through extensions used for monitoring, filtering, and forwarding functions. Unlike the Hyper-V Virtual Network Switch, Extensible Virtual Switches can be instantiated from a template called a Logical Switch.

“User Mode Networking” (KVM and Xen): This non-kernel virtual networking approach deploys Network Address Translation (NAT) to enable VMs to access devices connected to the physical network, but not the other way around. Such limitations and performance issues have kept this solution out of server virtualization environments.

Linux bridge (KVM and Xen): In comparison to user mode networking, this Linux kernel application allows bidirectional Layer 2 communication among VMs and the external world. It can also offer better performance, but at the cost of a relatively low number of features, which curiously include Spanning Tree Protocol (STP).

Open vSwitch (OVS) (KVM and Xen): OVS is an open source initiative that develops a multilayer distributed virtual switch. Combining kernel and user space software modules, this virtual device incorporates a fair number of features such as LACP, NetFlow, and traffic mirroring.

Obviously, all of these solutions present advantages and drawbacks when compared with each other. However, as the next section explores in detail, they have consistently introduced the same operational challenges to most network teams.

Networking Challenges in Server Virtualization Environments


With server virtualization rapidly advancing into most data centers, the number of virtual switch ports naturally has surpassed the number of physical switch interfaces in these environments. Furthermore, from a pure networking perspective, virtual networking has shifted the perimeter of the data center network from the access switches into the hypervisor. As a result, border-related features, such as traffic classification and filtering, may not happen at the ports that are connected to the (virtual) servers.

Although a great number of networking features were developed in virtual networking solutions, the basic operational processes on these devices remain remarkably different from their physical counterparts. As an illustration of this challenge, try to visualize how the following problems are addressed in your company:

■ When a physical server is not communicating with other hosts on the network, what is
the defined troubleshooting procedure?
■ Which commands does your network team execute? Which management tools are
deployed?

Now consider what happens if a virtual machine runs into the same exact problem:
■ Are new troubleshooting procedures required?
■ Can your network team perform them with their current operational skills and manage-
ment tools?
■ How can the network team know about a change in the virtual network?

As discussed in Chapter 4, “Behind the Curtain,” operational processes can certainly define whether a technology will be successfully adopted or not. And in my humble opinion, it really does not matter how innovative a solution is if it does not fit into the operational procedures of a company’s IT department.

More specifically, the vast majority of virtual networking solutions discussed in the previous
sections present the following difficulties to network teams and their operations:

■ No visibility: When there is a problem happening in the “virtual wire” that links a virtual machine to a virtual networking device, traditional management tools are useless for discovering the root cause. Without proper access to the VM manager, a network administrator can only “ping” this VM and verify whether its MAC address is detected on the physical switches.
■ VMs on wrong VLANs: Operational mistakes during the creation of virtual networking policies may provoke major problems in data center networks. For example, if a VM running a Dynamic Host Configuration Protocol (DHCP) server is incorrectly connected to a VLAN, it may assign incorrect IP addresses to other VMs and consequently stop all communication in that segment.
■ Illicit communication between VMs: A virtualization administrator may not be acutely aware of security policies that were defined and deployed in the physical network. As a consequence, VMs that should not communicate with each other may share a potentially dangerous network backdoor.
■ Distinct control policies: Besides the filtering policies described in the previous item, QoS policies defined in the physical network may not be accordingly mapped to the virtual network. Therefore, noncritical traffic may exhaust resources (such as physical NIC bandwidth) that should be available for critical traffic, resulting in poor application performance.
■ No virtualized DMZ: When companies must deploy demilitarized zones (DMZs) to control incoming and outgoing traffic for select servers, server hardware consolidation may directly collide with such a security directive. I personally have seen many security teams struggle with this concept, most of which ended up deploying a separate virtualization cluster for these DMZ-connected VMs.
■ Increased complexity for multi-hypervisor environments: If a data center is using more than one hypervisor, the network team will probably face additional operational difficulties in finding common features among distinct virtualization platforms.

In 2009, Cisco released an innovative (yet familiar) solution that addresses all of these
operational challenges: a multi-hypervisor distributed virtual switch, further explored in the
next section.
Cisco Nexus 1000V


Before virtual machines became the new atomic unit of modern data centers, Cisco recognized the importance of a concrete and secure way to manage virtual networking. Through project Swordfish, started in 2006, Cisco designed the Cisco Nexus 1000V Switch as a virtual network device running inside VMware ESXi hypervisors, leveraging a fundamental structure from each company: the Cisco NX-OS operating system from data center Nexus switches, and the VMware DVS.

We’ll begin our exploration of Cisco Nexus 1000V with the introduction of its main components, which are displayed in Figure 6-6 and described in Table 6-4.
Figure 6-6 Cisco Nexus 1000V Architecture

Table 6-4 Cisco Nexus 1000V Main Components


Nexus 1000V Component    Description
Virtual Supervisor Module (VSM)    Each VSM assumes the role of a supervisor module for Nexus 1000V, controlling the interface modules and providing synchronization with a VM manager such as VMware vCenter. Deployed as a pair of VMs, a VSM pair works as active-standby supervisors for a Nexus 1000V instance and is represented as modules 1 and 2.
Virtual Ethernet Module (VEM)    Plays the role of an interface module (or line card) for Nexus 1000V and provides connectivity for VMs running within a single virtualized host. The VEMs are displayed as modules 3 and beyond on a Nexus 1000V instance.
Ethernet interface    VEM interface connected to a physical NIC on a host. It follows the format Ethernet X/Y, where X is the VEM module number and Y represents the NIC number following the order of connection.
Virtual Ethernet interface    Also referred to as vEthernet, it represents the Nexus 1000V interface connected to a VM vnic or a host vmknic. It uses the format vEthernet Z, where Z is an increasing number assigned by the VSM when a virtual interface is connected to Nexus 1000V.

Note Virtual Services Appliances (VSAs) such as the Cisco Nexus 1110 Cloud Services Platforms can also host VSMs as virtual service blades (VSBs). Unlike VMs, VSBs are managed through procedures and NX-OS commands that are familiar to network administrators and do not require access to the VM manager. For more information about the Nexus 1110, please refer to Chapter 13, “Cisco Cloud Infrastructure Portfolio.”

From an architectural standpoint, Cisco Nexus 1000V mirrors the internal structure of a chassis switch such as the Cisco Nexus 9500. Functioning equivalently to these physical devices, the switch supervisor (the VSM, in the case of Nexus 1000V) offers administrative access and centrally controls all other switch components, including the Nexus 1000V Ethernet line cards (VEMs). This comparison is further explored in Figure 6-7 and described in the following list:

Figure 6-7 Comparing a Chassis Switch to Cisco Nexus 1000V

■ Each VEM forwards Ethernet frames that originate from or are destined to connected
virtual machines.
■ The active VSM configures how the VEMs behave, defining VLANs, filters, and policies,
among other functionalities. It also monitors these modules and their interfaces (Ethernet
and vEthernet). Moreover, the standby VSM is constantly synchronizing with the active
VSM through a shared VLAN. Should the latter fail for any reason, the standby VSM is
always ready to assume the switch control operations.
■ The majority of chassis switches utilize fabric modules to forward traffic between inter-
face line cards. Interestingly, the physical network plays the role of a fabric module in
Cisco Nexus 1000V.
Chapter 6: Infrastructure Virtualization 163

Note As I will further explore in Chapter 11, “Network Architectures for the Data Center: SDN and ACI,” the VSM implements the Nexus 1000V control plane while the VEM acts as the switch data plane. And like any other Nexus switch, Nexus 1000V uses AIPC (Asynchronous Interprocess Communication) and MTS (Message and Transaction Service) as communication protocols between both planes.

To further expose the innards of Cisco Nexus 1000V, I will detail some of its most com-
mon operational procedures. With this objective in mind, please consider the screen capture
displayed in Figure 6-8, showing Nexus 1000V in VMware vCenter.

[Figure art: VMware vCenter screen capture highlighting a Nexus 1000V instance presented as a distributed virtual switch.]
Figure 6-8 Cisco Nexus 1000V in VMware vCenter

As Figure 6-8 demonstrates, Nexus 1000V is indistinguishable from a distributed virtual switch from a VMware vCenter administrator’s perspective. You can observe which virtual machines (and VMkernel interfaces) are connected to Nexus 1000V as well as the physical NICs (or “uplinks”) that offer external connectivity in each host.

Simultaneously, Nexus 1000V can be managed as another Cisco switch from a networking
administrator’s perspective. In fact, Example 6-1 confirms this statement through a Secure
Shell (SSH) session to the VSM management IP address.

Example 6-1 Cisco Nexus 1000V Command-Line Interface Session


! Administrator connects to the VSM management IP address and logs into it
vsm#
! This command verifies which modules are present in this Nexus 1000V instance
vsm# show module
! This switch has two VSMs and two VEMs
Mod Ports Module-Type Model Status
--- ----- -------------------------------- ------------------ ------------
1 0 Virtual Supervisor Module Nexus1000V active *
2 0 Virtual Supervisor Module Nexus1000V ha-standby
3 332 Virtual Ethernet Module NA ok
4 332 Virtual Ethernet Module NA ok
! Below you can find the software versions for Nexus 1000V and the VMware ESXi in the hosts
Mod Sw Hw
--- ------------------ ------------------------------------------------
1 4.2(1)SV2(2.1a) 0.0
2 4.2(1)SV2(2.1a) 0.0
3 4.2(1)SV2(2.1a) VMware ESXi 5.5.0 Releasebuild-1331820 (3.2)
4 4.2(1)SV2(2.1a) VMware ESXi 5.5.0 Releasebuild-1331820 (3.2)
! And here you can check the management IP addresses from the VSMs and the ESXi hosts
Mod Server-IP Server-UUID Server-Name
--- --------------- ------------------------------------ --------------------
1 198.18.133.40 NA NA
2 198.18.133.40 NA NA
3 198.18.133.31 422025f7-043a-87f5-c403-5b9efdf66764 vesx1.dcloud.cisco.com
4 198.18.133.32 4220955f-2062-8e3e-04b8-0000831108e7 vesx2.dcloud.cisco.com

* this terminal session


vsm#

As both Figure 6-8 and Example 6-1 hint, Nexus 1000V is “ergonomically” designed to intro-
duce minimal changes in the operational procedures from both virtualization and networking
administration teams.

From a provisioning perspective, the most crucial Nexus 1000V element is called a port pro-
file: an NX-OS command-line interface (CLI) command that was originally developed for
physical Cisco Nexus switches. Generally speaking, a port profile is a configuration template
that multiple interfaces can inherit as their own setting.

While port profiles proved to be very useful for chassis switches with hundreds of ports, they are simply mandatory when you are dealing with thousands of vEthernet interfaces in a Nexus 1000V instance. To ensure consistency among all interfaces that inherit a port profile, Nexus 1000V deploys a concept called atomic inheritance, which guarantees that the entire profile is applied to its members. In the case of a configuration error, the port profile and its member interfaces are rolled back to the last known acceptable state.

Example 6-2 illustrates the creation of two types of Nexus 1000V port profiles.

Example 6-2 Creating Port Profiles in Cisco Nexus 1000V


! Entering configuration mode
vsm# configure terminal
Enter configuration commands, one per line. End with CNTL/Z.
! Creating VLANs 11, 12, and 13 in this Nexus 1000V instance
vsm(config)# vlan 11-13
vsm(config-vlan)# exit
! Creating a virtual Ethernet port profile
vsm(config)# port-profile type vethernet VM-PP
! This port profile provides access to a single VLAN (11) in Nexus 1000V
vsm(config-port-prof)# switchport mode access
vsm(config-port-prof)# switchport access vlan 11
! Interfaces that inherit this port profile will immediately start working
vsm(config-port-prof)# no shutdown
! This port profile must generate a distributed Port Group in VMware vCenter
vsm(config-port-prof)# vmware port-group
vsm(config-port-prof)# state enabled
vsm(config-port-prof)# exit
! Creating an Ethernet port profile for trunk interfaces that are tagging VLANs 11, 12, and 13
vsm(config)# port-profile type ethernet UPLINK-PP
vsm(config-port-prof)# switchport mode trunk
vsm(config-port-prof)# switchport trunk allowed vlan 11-13
vsm(config-port-prof)# no shutdown
vsm(config-port-prof)# vmware port-group
vsm(config-port-prof)# state enabled
vsm(config-port-prof)# exit
vsm(config)# exit
vsm#

As explicitly commented in Example 6-2, VM-PP and UPLINK-PP are configuration templates that can be associated, respectively, to vEthernet and Ethernet interfaces. But unlike physical switches, these interfaces do not inherit port profile configurations through the Nexus 1000V CLI. Figure 6-9 depicts how VSM and vCenter are linked to each other from a provisioning perspective.

In Figure 6-9, port profiles VM-PP and UPLINK-PP automatically generated two distributed Port
Groups sharing names with the port profiles. These Port Groups can be associated, respectively, to
virtual machine vnics and host vmnics in any host connected to DVS “vsm” (which is the name of
our Nexus 1000V).

[Figure art: VMware vCenter screen capture showing the distributed Port Groups VM-PP and UPLINK-PP generated by Nexus 1000V.]
Figure 6-9 Cisco Nexus 1000V Port Groups in VMware vCenter

Caution It is very important to notice that port profiles are live templates, meaning that changes made to them are immediately reflected in their spawned Port Groups and inheriting interfaces.

As previously stated, Nexus 1000V enables a nondisruptive operational model for both
administration teams. And through this virtual switch framework, each team can effectively
apply their specialized skills. In more detail:

■ The network team creates network policies (port profiles) for vnics, vmknics, and vmnics, as well as other networking configurations such as QoS policies and access control lists (ACLs) consistent with the physical network.
■ The virtualization team applies these policies to virtual machines using well-known con-
cepts (distributed Port Groups), leveraging familiar management tools (VMware vCenter)
and standard procedures.

Cisco Nexus 1000V Advanced Features


Besides the operational advantages presented in the previous section, Cisco Nexus 1000V
also implements advanced networking features, such as

■ Cisco Discovery Protocol (CDP): For quick topology discovery integrated with the
large majority of Cisco networking devices.
■ Private VLANs: Isolate a virtual machine (or group of VMs) from other VMs connected
to the same VLAN.
■ Switched Port Analyzer (SPAN) and Encapsulated Remote SPAN (ERSPAN): Provide traffic mirroring from virtual Ethernet interfaces to an analysis tool connected to another vEthernet port (SPAN) or located within a Layer 3 network (ERSPAN); a configuration sketch follows this list.
■ Quality of Service (QoS): Provides traffic prioritization in the physical NICs.
■ DHCP Snooping, IP Source Guard, and Dynamic ARP Inspection: Advanced secu-
rity functionalities that enable Nexus 1000V to monitor and enforce correctly assigned
IP addresses, avoiding exploits such as rogue DHCP servers, invalid ARP messages, and
false IP addresses.
■ TrustSec: Enables Nexus 1000V to participate in this Cisco security architecture, which in effect allows device authentication as well as the aggregation of diverse hosts into security groups, vastly simplifying firewall rules and ACLs across multiple network domains (campus and data center, for example).
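As an illustration of one such feature, the following is a minimal sketch of an ERSPAN source session configured on the VSM, mirroring a VM’s vEthernet traffic to a remote analysis tool. The session number, interface number, destination IP address, and ERSPAN ID shown here are hypothetical, and the host is assumed to have an ERSPAN-capable VMkernel interface to source the encapsulated traffic.

! Hypothetical ERSPAN source session on the VSM
vsm(config)# monitor session 1 type erspan-source
! Mirroring both directions of the VM traffic seen on vEthernet 5
vsm(config-erspan-src)# source interface vethernet 5 both
! IP address of the remote traffic analyzer
vsm(config-erspan-src)# destination ip 10.54.54.10
vsm(config-erspan-src)# erspan-id 999
vsm(config-erspan-src)# no shut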

Besides network features derived from physical devices, Nexus 1000V has also fostered innovation in “pure” virtual networking in many different ways. One example is the vTracker feature, which improves visibility over virtual machines’ status and behavior through the Nexus 1000V CLI.

Example 6-3 depicts two simple vTracker applications.

Example 6-3 Cisco Nexus 1000V vTracker Examples


! Enabling the vTracker feature
vsm(config)# feature vtracker
! Now I want to observe the status from a virtual machine called WebServer-A
vsm(config)# show vtracker vm-view info vm WebServer-A
! vTracker shows many characteristics from the VM, including location (VEM 4), guest OS
(Ubuntu), power state, resource usage, and uptime
Module 4:
VM Name: WebServer-A
Guest Os: Ubuntu Linux (64-bit)
Power State: Powered On
VM Uuid: 423641c8-22a2-0c2f-6d5e-9e0cf56c02e0
Virtual CPU Allocated: 1
CPU Usage: 0 %
Memory Allocated: 256 MB
Memory Usage: 2 %
VM FT State: Unknown
Tools Running status: Running
Tools Version status: unmanaged
Data Store: Demo_Datastore
VM Uptime: 3 hours 44 minutes 25 seconds

! This command offers visibility over the last five VM live migrations with VMs
connected to this Nexus 1000V instance
vsm(config)# show vtracker vmotion-view last 5
Note: VM Migration events are shown only for VMs currently managed by Nexus 1000v.
* '-' = Module is offline or no longer attached to Nexus1000v DVS
--------------------------------------------------------------------------------
VM-Name Src Dst Start-Time Completion-Time Mod Mod
--------------------------------------------------------------------------------
Windows7-A 3 4 Tue Apr 28 16:35:33 2015 Tue Apr 28 16:35:49 2015
WebServer-A 4 3 Tue Apr 28 16:35:14 2015 Tue Apr 28 16:35:32 2015
--------------------------------------------------------------------------------
! Virtual machine Windows7-A has migrated from VEM3 to VEM4 in 16 seconds. Virtual
machine WebServer-A has migrated from VEM4 to VEM3 within 18 seconds.

In Example 6-3, vTracker is enabled through the feature command, which is a common
procedure for additional function activation on a modular network operating system such as
NX-OS. Later in the example, two vTracker views are explored:

■ VM view: Where several virtual machine attributes (such as guest OS) and resource utili-
zation levels (such as CPU and memory) are exposed to network administrators aiming to
support VM troubleshooting
■ vMotion view: Where live migrations of VMs connected to Nexus 1000V are detailed
for network administrators

Another innovation from the Cisco Nexus 1000V architecture was created to provide more
automation for common procedures and to increase network-related visibility for the server
virtualization team. In summary, the Cisco Virtual Switch Update Manager (VSUM) offers
the following features through the VMware vSphere Web Client:

■ Cisco Nexus 1000V fully automated installation
■ Automatic creation of basic Cisco Nexus 1000V port profiles
■ Cisco Nexus 1000V automated NX-OS version upgrade

Note Like many other solutions that will be discussed in Chapter 7, “Virtual Networking
Services and Application Containers,” VSUM is installed as a virtual appliance, which is
basically a “ready-to-run” virtual machine customized to perform a dedicated function. The
most popular virtual appliance file format is called Open Virtual Appliance (OVA).

Finally, highlighting its great fit for cloud computing, Nexus 1000V also supports an exten-
sible REST application programming interface (API) for software-based configuration.

Cisco Nexus 1000V: A Multi-Hypervisor Platform


Cisco has successfully extended Nexus 1000V capabilities to other hypervisors, providing consistent and advanced features to virtualization environments based on Microsoft Hyper-V and Linux KVM.

Figure 6-10 delineates the architecture of Nexus 1000V for Microsoft Hyper-V and Nexus
1000V for Linux KVM.

[Figure art: two side-by-side architectures — Nexus 1000V for Microsoft Hyper-V, where the VSM synchronizes with SCVMM and controls VEMs on Windows Server 2012 hosts with Hyper-V, and Nexus 1000V for KVM, where the VSM synchronizes with OpenStack Nova and Neutron and controls VEMs on Linux servers with KVM, all across the data center network.]
Figure 6-10 Cisco Nexus 1000V Multi-Hypervisor Architectures

As with the VMware vSphere version, in both cases the VSM continues to be intrinsically
linked to a VM manager, which are Microsoft System Center Virtual Machine Manager (for
Hyper-V) and OpenStack Nova (for Linux KVM). And similarly, the creation of port pro-
files on Nexus 1000V automatically produces matching connectivity policies in both hyper-
visors (more specifically, port classifications in Hyper-V and network profiles in KVM).

Beyond its extensive feature set, another advantage of adopting Nexus 1000V in multi-hypervisor environments is operational simplicity.

As an illustration, envision an OpenStack-based cloud computing scenario employing three hypervisors: VMware vSphere, Microsoft Hyper-V, and Linux KVM. In such a scenario, a cloud architect must pick the most appropriate physical NIC high-availability method for each adopted virtual networking device, from the extensive list of options currently available on each virtual switch:

■ VMware vSphere vNetwork Standard Switch: Route based on originating virtual port
ID, route based on source MAC hash, use explicit failover order, or route based on IP
hash
■ VMware vSphere vNetwork Distributed Switch: Route based on originating virtual
port ID, route based on source MAC hash, route based on physical NIC load, use explic-
it failover order, or route based on IP hash
■ Microsoft Hyper-V Virtual Extensible Switch: Active-standby, all address hash, or
port mode
■ KVM with Linux bridge: Spanning Tree Protocol (IEEE 802.1Q), active-backup bond-
ing, round-robin, XOR bonding, or LACP
■ KVM with Open vSwitch: Active-backup bonding, source MAC load balancing, TCP
load balancing, LACP

As you can easily conclude, the multitude of choices may become overwhelming for archi-
tects who intend to deploy a standard method of upstream communication for virtual
machines.

Note Most virtual switches, including Nexus 1000V, employ loop-avoidance techniques that eliminate the need for Spanning Tree Protocol within the hypervisor (for example, most virtual devices cannot forward frames between physical NICs). In Chapter 10, “Network Architectures for the Data Center: Unified Fabric,” I will discuss why STP has become an unsuitable solution for modern data center networks.

Properly addressing this challenge, Cisco Nexus 1000V builds a homogeneous virtual network layer distributed over different hypervisors, offering uplink high availability in the exact same way for the whole cloud. Example 6-4 illustrates how Nexus 1000V implements PortChannels, which are host uplinks composed of multiple physical NICs, on multiple hypervisors.

Example 6-4 Building Automatic PortChannels in Cisco Nexus 1000V


! The question mark shows the available options for load balancing traffic on
PortChannels
vsm(config)# port-channel load-balance ethernet ?
! Each VEM will load balance traffic in a PortChannel by hashing the following
addresses, ports, and identifiers to a numerical value that selects one of the
operational links.
dest-ip-port Destination IP address and L4 port
dest-ip-port-vlan Destination IP address, L4 port and VLAN
destination-ip-vlan Destination IP address and VLAN
destination-mac Destination MAC address
destination-port Destination L4 port
source-dest-ip-port Source & Destination IP address and L4 port
source-dest-ip-port-vlan Source & Destination IP address, L4 port and VLAN
source-dest-ip-vlan Source & Destination IP address and VLAN
source-dest-mac Source & Destination MAC address
source-dest-port Source & Destination L4 port
source-ip-port Source IP address and L4 port
source-ip-port-vlan Source IP address, L4 port and VLAN
source-ip-vlan Source IP address and VLAN
! "source-mac" is the default algorithm
source-mac Source MAC address
source-port Source L4 port
source-virtual-port-id Source Virtual Port Id
vlan-only VLAN only

! Changing a port profile created in Example 6-2
vsm(config)# port-profile UPLINK-PP
! Checking the available aggregation modes for the automatic PortChannels
vsm(config-port-prof)# channel-group auto mode ?
active Set channeling mode to ACTIVE
on Set channeling mode to ON
passive Set channeling mode to PASSIVE
! With ACTIVE mode, Nexus 1000V will start aggregation negotiation with the upstream
physical switch. On the other hand, PASSIVE mode will wait for the upstream switch to
start the negotiation. ON mode will already aggregate the uplinks without any
negotiation.
! In this case, I will choose ACTIVE mode
vsm(config-port-prof)# channel-group auto mode active

As you can verify in Example 6-4, Cisco Nexus 1000V provides myriad load balancing
methods for the upstream VM traffic (sent by the virtual machines to the physical network).
Ideally, the chosen method should match the load balancing algorithm for the downstream
traffic (from the access switches).

Example 6-4 also depicts the choice of aggregation negotiation mode on Nexus 1000V, which must be compatible with the configured mode on the physical switches. With Nexus 1000V deploying ACTIVE mode, the upstream switches may deploy ACTIVE or PASSIVE mode.

After the port profile is changed as shown in Example 6-4, whenever the connectivity policy UPLINK-PP is assigned to additional physical NICs in a host, its VEM automatically aggregates up to eight of these uplinks into a single PortChannel.
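For reference, the upstream physical switch must run a compatible aggregation mode on the ports facing the host. The following is a minimal sketch of such a configuration on an NX-OS-based physical switch; the interface range and channel number are hypothetical.

! Hypothetical upstream physical Nexus switch configuration
feature lacp
interface ethernet 1/1-2
  switchport mode trunk
  switchport trunk allowed vlan 11-13
  ! LACP ACTIVE here negotiates with the Nexus 1000V ACTIVE mode
  channel-group 10 mode active
interface port-channel 10
  switchport mode trunk
  switchport trunk allowed vlan 11-13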

Note In Chapter 10, I will explore a feature called Virtual PortChannel (vPC), which
allows the aggregation of multiple uplinks connected to a pair of physical Nexus switches.

Virtual eXtensible LAN


In 2010, Cisco, VMware, and other IT vendors submitted an Internet Engineering Task
Force (IETF) draft proposal defining a cutting-edge technology called Virtual eXtensible
LAN (VXLAN). Primarily, VXLAN technology was created to address significant limita-
tions VLANs bring to dynamic server virtualization environments and cloud computing
projects.
Figure 6-11 serves as an overview for the discussion of why VLANs are considered “villains” (pun intended) in some data centers.

[Figure art: two virtualized hosts with VMs on VLANs 10 and 20 attached to a data center network, annotated with the three challenges discussed next: VLAN provisioning, MAC address table depletion, and VLAN ID starvation.]
Figure 6-11 VLAN Challenges for Server Virtualization

VLAN provisioning in the physical network constitutes the first challenge for VMs using
VLANs. In Figure 6-11, there are two hosts that may or may not belong to the same virtu-
alization cluster. Imagine that each host is deploying VMs connected to two VLANs: 10
and 20. As you may easily infer, these VMs can communicate with their counterparts in the
other host only if the network administration team has already configured both VLANs in
every switch and possible connections between both hosts.

Obviously, this configuration procedure can become unbearably cumbersome and slow as
the number of network devices increases. In addition, the potential live migration of VMs
to any host in the data center severely aggravates the situation. The reason is that network
administrators must enable these new VLANs in all switches and trunks of the data center
network to avoid isolated VMs.

Caution Although I have seen it done in some data centers, I definitely do not recommend the pre-provisioning of all 4094 possible VLANs in every network trunk. While it may appear to save time and effort, this “worst practice” may result in unwanted traffic in many ports of the network as well as STP scalability issues.

VLAN ID starvation is a second challenge that can become a growing concern in cloud computing environments. Because a physical network can deploy only 4094 VLANs (1 to 4094 according to the IEEE 802.1Q standard) to isolate hosts, a cloud will eventually face an absolute limit if it is reserving one or more VLANs per tenant.

While not as noticeable as the previous challenges, MAC address table depletion is already a concern for many network teams all over the world. In a standard Ethernet network, every switch ends up learning the MAC address of all VMs. Therefore, a relatively small data center composed of eight racks containing 40 virtualized servers with 100 VMs each may overload all access switches that support less than 32,000 MAC address entries.

As a possible reaction, an overwhelmed switch may stop MAC address learning and start forwarding unknown unicast traffic received on one interface to all the other ports, in a phenomenon called flooding. With more switches sharing this fate, a data center network may simply become unusable.

VXLAN in Action
Basically, a VXLAN segment is a broadcast domain built through the encapsulation of Eth-
ernet frames into IP packets. Figure 6-12 details how this encapsulation happens.

[Figure art: the original Ethernet frame — destination MAC (6 bytes), source MAC (6 bytes), IEEE 802.1Q header (4 bytes, removed during encapsulation), Type/Length (2 bytes), payload (46–1500 bytes), and FCS (4 bytes) — is carried inside a VXLAN packet composed of a new IP header (20 bytes), a UDP header (8 bytes), a VXLAN header (8 bytes, including the 3-byte VNI), the original frame, and a new FCS (4 bytes).]
Figure 6-12 VXLAN Encapsulation

As Figure 6-12 shows, each VXLAN packet encapsulates a single Ethernet frame in an IP packet containing a User Datagram Protocol (UDP) datagram with an extra header for specific VXLAN fields. The most important field is called VNI (VXLAN Network Identifier), which identifies the segment the encapsulated frame belongs to. Through this 24-bit identifier, virtual machines can be isolated in potentially 16,777,215 VXLAN segments. In practice, VXLAN segment IDs are assigned from the range 4096 to 16,777,215, which reinforces the perception that a VXLAN segment replaces a VLAN segment.

Note In Figure 6-12, T/L means Type/Length, which may represent either the type of
data on the payload or the frame length, depending on its value. FCS means frame check
sequence, which is a 32-bit cyclic redundancy check (CRC) used to detect transmission
errors in Ethernet links.

To support a more comprehensive analysis of this technology, Figure 6-13 illustrates a working VXLAN scenario.

[Figure art: Host 1 and Host 2 connected through an IP-based data center network; vSwitch1 (VTEP1) and vSwitch2 (VTEP2) each attach one VM to VXLAN 5000 and one VM to VXLAN 6000.]
Figure 6-13 VXLAN in Action

In Figure 6-13, a pair of virtualized servers is hosting two virtual machines each, using two
distinct VXLAN segments: 5000 and 6000. By definition, any device that can generate and
process VXLAN encapsulated traffic is called a VXLAN tunnel endpoint (VTEP). In Figure
6-13, both virtual switches are the only depicted VTEPs.

Ultimately, a virtual machine should not be able to discern whether it is connected to a VXLAN- or VLAN-based broadcast domain. However, according to the VXLAN IETF standard (RFC 7348), as soon as a VM is connected to a VXLAN segment, its VTEP must register itself into the network as a member of the multicast group assigned to the VXLAN. This registration procedure may be accomplished through an Internet Group Multicast Protocol (IGMP) Join message. In this case, the connections of the VMs to the vSwitch generate IGMP Join messages to groups 239.5.5.5 (VXLAN 5000) and 239.6.6.6 (VXLAN 6000).

Using this information, the data center network is aware that VTEP1 and VTEP2 are part of
both multicast groups.

Note For correct communication between VMs connected to the same VXLAN segment, the VXLAN ID and multicast group pair should be consistent on all VTEPs. If desired, two or more VXLAN segments can share the same group.

Similar to a standard Ethernet switch, a VTEP maintains a MAC address table. However,
instead of associating a MAC address to an interface, a VTEP additionally associates a VM
MAC address to a remote VTEP IP address. The MAC address learning process is described
next via an example.

To begin the example, assume that the MAC address tables from both vSwitches are empty
and that VM1 sends an ICMP Echo message (ping) to VM2. Because VM1 does not know
VM2’s MAC address, it sends an ARP request that essentially states the following: “Hello,
people in my network segment. Whoever has the following IP address, please inform me of
your MAC address.”

By definition, an ARP request is a broadcast message and therefore must be forwarded to all machines connected to the broadcast domain (VXLAN 5000). Upon receiving the ARP request, vSwitch1 does the following:

■ Learns VM1’s MAC address on Interface1, if the VM manager has not already inserted this MAC address into vSwitch1’s MAC address table (dynamic learning is the case for most virtual switches).
■ Encapsulates the Ethernet frame into a VXLAN packet using VTEP1 as its source IP address and the VXLAN multicast group (239.5.5.5) as the destination IP address.

Figure 6-14 depicts the exact moment after the VXLAN-encapsulated ARP request leaves
Host1 toward the data center network. Using IP multicast forwarding, the data center
network replicates the packet and sends a copy to each member of the 239.5.5.5 group
(except, of course, VTEP1). As a result, the VXLAN packet reaches vSwitch2, immediately
updating its MAC address table as shown in Figure 6-15.

[Figure art: the VXLAN-encapsulated ARP request leaves VTEP1 toward the multicast-enabled data center network; vSwitch1’s MAC address table now holds the entry VXLAN 5000 / VM1 / Interface1, while vSwitch2’s table is still empty.]
Figure 6-14 VM1 Sending an ARP Request in VXLAN 5000

[Figure art: the ARP request reaches vSwitch2 and is delivered to VM2; vSwitch2’s MAC address table now holds the entry VXLAN 5000 / VM1 / VTEP1.]
Figure 6-15 VM2 Receiving the ARP Request

Note At this very moment, any other VTEP deploying VXLAN 5000 also receives the
ARP request and updates its MAC address table with the same entry (VM1-VTEP1 pair) in
VXLAN 5000.

Thus, vSwitch2 decapsulates the VXLAN packet and forwards the ARP request to its only local member of the segment: VM2. Processing the frame, VM2 sends a unicast ARP reply directed to VM1’s MAC address informing VM1 of its MAC address. After receiving this message, vSwitch2 does the following:

■ Updates its MAC address table with VM2’s MAC address in Interface2
■ Encapsulates the ARP reply into a VXLAN packet using VTEP2 as its source IP address
and VTEP1 as the destination IP address

Figure 6-16 displays the instant after the encapsulated frame leaves Host2.

[Figure art: the unicast VXLAN packet carrying the ARP reply leaves VTEP2 toward VTEP1; vSwitch2’s MAC address table now holds VXLAN 5000 / VM1 / VTEP1 and VXLAN 5000 / VM2 / Interface2.]
Figure 6-16 ARP Reply Being Sent to the Data Center Network

The VXLAN packet containing the ARP reply is naturally routed to VTEP1 (vSwitch1). And
as Figure 6-17 illustrates, this virtual device

■ Updates its MAC address table with the information that VM2 can be reached through
VTEP2
■ Decapsulates the VXLAN 5000 packet and sends the ARP reply to VM1

[Figure art: final state — vSwitch1’s MAC address table holds VXLAN 5000 / VM1 / Interface1 and VXLAN 5000 / VM2 / VTEP2, while vSwitch2’s table holds VXLAN 5000 / VM1 / VTEP1 and VXLAN 5000 / VM2 / Interface2.]
Figure 6-17 Topology After ARP Is Sent from VM1 to VM2



From this moment on, both VMs can forward unicast Ethernet frames to each other with
their associated VTEPs only exchanging unicast VXLAN packets.

The encapsulation of a frame into a multicast VXLAN packet, as depicted in Figure 6-14,
also happens if VM1 sends an Ethernet frame destined to an unknown MAC address.
Resulting in VXLAN flooding, this frame will also be sent to all VTEPs deploying VXLAN
5000.

Based on the IETF standard, VTEPs learn MAC addresses through the actual forwarding of multidestination traffic such as broadcast and unknown unicast frames. And if you are a seasoned network professional, you can surely recognize that standard VXLAN employs the same learning mechanism as standard Ethernet switches.

How Does VXLAN Solve VLAN Challenges?


As you can deduce from the example in the previous section, a VXLAN deployment has the following requirements for a data center network (a configuration sketch follows the list):

■ IP unicast routing and forwarding: To allow the exchange of VXLAN packets between VTEPs.
■ IP multicast routing and forwarding: To permit the transmission of broadcast frames and flooding within a VXLAN segment.
■ A maximum transmission unit (MTU) bigger than 1550 bytes: The additional 50 bytes are necessary to avoid fragmentation of VXLAN packets.
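The following is a minimal sketch of how these prerequisites might be satisfied on an NX-OS-based switch in the transport network; the rendezvous point address, interface, IP addressing, and MTU value are all hypothetical.

! Hypothetical VXLAN transport (underlay) configuration on a physical NX-OS switch
feature pim
! Rendezvous point for the multicast groups used by VXLAN segments
ip pim rp-address 10.0.0.100
interface ethernet 1/1
  ! Routed link toward the rest of the data center network
  no switchport
  ip address 10.1.1.1/30
  ip pim sparse-mode
  ! MTU raised well above 1550 bytes to absorb the VXLAN overhead
  mtu 9216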

After these procedures are correctly implemented, virtual machines can be connected to
multiple VXLANs without any additional physical network configuration. In that sense, this
VXLAN characteristic overcomes the VLAN provisioning challenge.

Also, VTEPs can identify more than 16 million different Layer 2 segments using the same
physical network infrastructure between the hosts. Consequently, VXLAN can effectively
address the VLAN ID starvation challenge as well.

Lastly, physical switches do not learn the virtual machine MAC addresses because they are hidden inside VXLAN packets. From a Layer 2 perspective, only the VTEP MAC addresses are actually learned in the data center network, avoiding MAC address table depletion.

Standard VXLAN Deployment in Cisco Nexus 1000V


Cisco Nexus 1000V was one of the first products in the market to offer VXLAN capabilities for virtual machines. And in true Cisco fashion, Nexus 1000V followed the original VXLAN IETF draft to offer the feature almost two years before the actual standard (RFC 7348) was published in 2014.

Since you are already familiar with Nexus 1000V configuration, Example 6-5 demonstrates
how VXLANs are deployed in the virtual device.

Example 6-5 VXLAN Configuration in Cisco Nexus 1000V


! Enabling VXLAN in this Nexus 1000V instance
vsm(config)# feature segmentation
! Creating a VXLAN segment in Nexus 1000V. The segment must have a name (string
"VXLAN5000"), a segment identifier (5000), and an associated multicast group (239.5.5.5)
vsm(config)# bridge-domain VXLAN5000
vsm(config-bd)# segment id 5000
vsm(config-bd)# group 239.5.5.5
! Creating VXLAN segment "VXLAN6000" with a segment identifier 6000, and multicast group
(239.6.6.6)
vsm(config)# bridge-domain VXLAN6000
vsm(config-bd)# segment id 6000
vsm(config-bd)# group 239.6.6.6
! Creating vEthernet port profile to connect virtual machines in VXLAN5000
vsm(config-bd)# port-profile type vethernet VM-5000
vsm(config-pp)# switchport mode access
vsm(config-pp)# switchport access bridge-domain VXLAN5000
vsm(config-pp)# no shutdown
vsm(config-pp)# vmware port-group
vsm(config-pp)# state enabled
! Creating vEthernet port profile to connect virtual machines in VXLAN6000
vsm(config-bd)# port-profile type vethernet VM-6000
vsm(config-pp)# switchport mode access
vsm(config-pp)# switchport access bridge-domain VXLAN6000
vsm(config-pp)# no shutdown
vsm(config-pp)# vmware port-group
vsm(config-pp)# state enabled
vsm(config-pp)#

In Example 6-5, the feature segmentation command enables VXLAN encapsulation in the
Nexus 1000V instance. Afterward, two VXLAN segments are created and referred to as
bridge domains “VXLAN5000” and “VXLAN6000” (these names are only strings). Observe
that the provisioning of a VXLAN segment mirrors the creation of a VLAN in Nexus
1000V, as explained earlier in Example 6-2.

Inside each bridge domain configuration, both VXLAN segments 5000 and 6000 are
assigned to multicast groups (239.5.5.5 and 239.6.6.6, respectively) that will be used in the
transmission of multidestination frames. Finally, two port profiles are created for virtual
machine attachment. From this moment on, whenever a VM virtual adapter is associated to
distributed Port Group VM-5000, it will be automatically connected to VXLAN segment
5000. The same process obviously works for Port Group VM-6000 and VXLAN segment
6000.

Let’s suppose that five virtual machines (A, B, C, D, and E) were connected to the previous
VXLAN segments in Nexus 1000V. Figure 6-18 details this fictional topology.

[Figure art: VEMs West (VTEP IP1) and East (VTEP IP2) connected through an IP network. West attaches VMs A and B (VXLAN 5000) and C (VXLAN 6000) to Veth1–3; East attaches VMs D (VXLAN 5000) and E (VXLAN 6000) to Veth4–5. West’s MAC table holds A, B, and C locally and D and E via IP2; East’s table holds D and E locally and A, B, and C via IP1.]
Figure 6-18 Cisco Nexus 1000V VXLAN Topology

For the sake of simplicity, consider that both VEMs (West and East) belong to the same Nexus 1000V instance. Three virtual machines (A, B, and C) are connected to VEM West, while VEM East handles traffic from VMs D and E. VMs A, B, and D are connected to VXLAN segment 5000, and VMs C and E share VXLAN segment 6000 as their broadcast domain.

In Figure 6-18, you can verify that each VEM deploys its own independent MAC address
table and VTEP, whose IP addresses are respectively IP1 and IP2 (which are assigned to
VMkernel interfaces at each host). After traffic is exchanged within both segments, each
VEM has two types of MAC address table entries:

■ Local: Related to virtual machines that are attached to local vEthernet interfaces.
Usually, these entries are statically assigned using the MAC addresses provided by the
VM manager.
■ Remote: Learned through actual VM traffic originated from remote VEMs. These
dynamic entries have the MAC address from remote virtual machines and their respec-
tive VTEP IP addresses.

With their MAC address tables in this state, the VEMs simply switch traffic between local VMs connected to the same VXLAN segment (as A and B are in VEM West). Each VEM also uses its own VTEP IP address (and the one contained in remote MAC address entries) to generate VXLAN packets to VMs connected to other VEMs deploying at least one of the VXLAN segments.

Should a virtual machine live migrate from one VEM to the other, the gratuitous Reverse
ARP broadcast message (sent by the destination VEM) will update the MAC address table
entries on all VEMs that share the corresponding VXLAN segment.

Note If VEMs West and East belong to two distinct Nexus 1000V instances, their active
VSMs must share the same segment identifiers and multicast groups for VXLAN segments
5000 and 6000. With this condition, VMs A, B, C, D, and E would follow the same previ-
ously described behavior with one exception: at the time of this writing, it is not possible to
live migrate a VM between hosts that are connected to different Nexus 1000V switches.

VXLAN Gateways
In the prior few sections, I have shown how VXLANs can conveniently overcome VLAN
shortcomings that afflict VM-to-VM communication. Intentionally, I have not addressed
the extremely common situation where a VM connected to a VXLAN must communicate
with a VLAN-attached physical server or the Internet.

Now that you have mastered the principles underlying VXLAN communication, you are
ready to meet an important component of this protocol framework: the VXLAN gateway,
a device that can connect VXLAN segments to standard VLANs. Some network devices can
perform the role of a VXLAN gateway by assuming one of the following formats:

■ Virtual: A virtual machine provides the traffic interchange between VXLANs and
VLANs.
■ Physical: A physical network device establishes this communication.

Generally speaking, virtual VXLAN gateways are suggested for environments that may not require outstanding forwarding performance (for example, a single cloud tenant). Even so, being virtual machines, these gateways benefit from the natural scaling and replication advantages of server virtualization. On the other hand, multitenant environments will most certainly leverage the high performance provided by physical VXLAN gateways.

VXLAN gateways are also classified depending on how they handle interchanged traffic
between a VXLAN and a VLAN. Therefore, according to the layers defined in the Open
Systems Interconnection (OSI) model, these devices can be

■ Layer 2: Where the gateway bridges traffic between a VLAN and a VXLAN, forming a single broadcast domain within this VXLAN-VLAN pair (see the sketch after this list).
■ Layer 3: Where the gateway routes traffic between VLANs and VXLANs. Most
commonly, these devices become the default gateway for VXLAN-connected virtual
machines. Some VXLAN Layer 3 gateways can also route traffic between different
VXLAN segments.
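To make the Layer 2 gateway concept more concrete, the following is a minimal sketch of a VLAN-VXLAN mapping on a hardware VTEP operating in multicast-based (flood-and-learn) mode, in the style of a Nexus 9300 configuration; the VLAN ID, VNI, multicast group, and loopback interface are hypothetical.

! Hypothetical Layer 2 VXLAN gateway configuration
feature nv overlay
feature vn-segment-vlan-based
! Bridging VLAN 100 and VXLAN segment 5000 into a single broadcast domain
vlan 100
  vn-segment 5000
! The NVE interface acts as the switch VTEP
interface nve1
  no shutdown
  source-interface loopback0
  member vni 5000 mcast-group 239.5.5.5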

The IP addressing design for VXLAN-based scenarios defines the required type of com-
munication between VXLAN and VLANs. Notwithstanding, I have noticed that Layer 2
VXLAN gateways are more heavily used in data centers going through physical-to-virtual
(P2V) migrations. In these scenarios, the network team is usually concerned with the main-
tenance of connectivity during a migration process, where physical servers are being con-
verted into VMs without any IP address change.

Figure 6-19 displays some examples of different VXLAN gateways from the Cisco Cloud
Infrastructure portfolio, including devices from each combination available at the time of
this writing.

[Figure art: a quadrant chart classifying Cisco VXLAN gateways by format (physical or virtual) and OSI layer — physical: Nexus 9300, 7000, and 5600 switches (Layer 2 and Layer 3) and ASR 9000 and ASR 1000 routers (Layer 3); virtual: Cisco Nexus 1000V VXLAN Gateway (Layer 2) and CSR 1000V and ASAv (Layer 3).]
Figure 6-19 Cisco VXLAN Gateways

The figure quadrants identify the following solutions and how they can be integrated into cloud computing environments as VXLAN gateways:

■ Nexus 9300, 7000, and 5600 switches: Hardware-based switches that can work as
Layer 2 and Layer 3 VXLAN gateways.
■ ASR 9000 and 1000 routers: Physical devices that can route VXLAN packets to
VLANs or another VXLAN.
■ Cloud Service Router (CSR) 1000V and Adaptive Security Virtual Appliance
(ASAv): Virtual machines that can respectively function as router and firewall in server
virtualization environments. Both can perform Layer 3 VXLAN gateway functions and
will be further explained in Chapter 7.
■ Cisco Nexus 1000V VXLAN Gateway: This appliance works as an additional module
in a Nexus 1000V instance. It provides Layer 2 communication between VXLAN-VLAN
pairs configured in the active VSM.

As the variety of solutions indicates, there is not a single “best” VXLAN gateway solution
that fits in all designs: each product can be positioned for a group of use-case scenarios.
Still, if you intend to deploy VXLANs in your cloud computing project, I highly recom-
mend that you orient your design primarily toward the essential characteristics of cloud
computing presented in Chapter 1, “What Is Cloud Computing?”

Around the Corner: Unicast-Based VXLAN


VXLANs offer several advantages for server virtualization and cloud computing, but one of their requirements is considered especially challenging for some physical networks: IP multicast. The most common reasons IP multicast poses a challenge are lack of support on network devices and a lack of specialized personnel.

Perhaps more worrying than an IP multicast deployment is the fact that VTEPs rely on
broadcast frames or flooding to learn MAC addresses. As can occur with VLANs in Ether-
net networks, these multidestination datagrams can transform a set of VTEPs into a single
failure domain, and easily produce undesired effects (for example, loops and MAC address
flapping).

Aware of these obstacles, the networking industry has moved accordingly, and Cisco, along
with other vendors, has developed VXLAN solutions that can avoid both prerequisites, IP
multicast and MAC address learning through the data plane.

Without IP multicast, VTEPs must use other methods for MAC address learning. The meth-
ods that Cisco has developed to counteract these challenges essentially can be classified
into two main categories: controller-based methods and protocol-based methods.
Since 2013, Cisco Nexus 1000V has offered a unicast-only VXLAN implementation called Enhanced VXLAN. In essence, this controller-based method uses the active VSM to distribute MAC-VTEP entries to VEMs deploying VXLAN segments.

Figure 6-20 details the Enhanced VXLAN MAC address learning in Nexus 1000V.

[Figure art: on the left, VM A attaches to vEth10 on VEM 3 and the active VSM learns the A-VTEP3 pairing; on the right, the VSM distributes the [A, VTEP3] entry to VEMs 4 and 5, whose MAC tables now list A as reachable through VTEP 3.]
Figure 6-20 Enhanced VXLAN MAC Learning

As explained earlier in the section “Standard VXLAN Deployment in Cisco Nexus 1000V,”
each VEM has a VTEP and deploys a separate MAC address table. The left side of Figure
6-20 displays the moment VM A is connected to VXLAN 10000 on VEM 3. As a conse-
quence, the active VSM learns that the VM MAC address can be reached through the VEM
VTEP. Acting as a MAC-VTEP entry distributor, the VSM populates the MAC address
table on other VEMs, as shown on the right side of Figure 6-20.

Furthermore, Example 6-6 describes how exactly an Enhanced VXLAN is configured on Nexus 1000V.

Example 6-6 Enhanced VXLAN Deployment


! Creating an Enhanced VXLAN
vsm(config)# bridge-domain VXLAN10000
vsm(config-bd)# segment id 10000
vsm(config-bd)# segment mode unicast-only
! Creating a vEthernet port profile assigned to bridge domain "VXLAN10000"
vsm(config-bd)# port-profile type vethernet VM-10000
vsm(config-pp)# switchport mode access
vsm(config-pp)# switchport access bridge-domain VXLAN10000
vsm(config-pp)# no shutdown
vsm(config-pp)# vmware port-group
vsm(config-pp)# state enabled
vsm(config-pp)#

In Example 6-6, rather than defining a multicast group for the bridge domain, I have used the segment mode unicast-only command to create bridge domain “VXLAN10000.” Afterward, I have assigned the bridge domain to port profile VM-10000, auto-generating a connectivity policy (Port Group in VMware vSphere) that may be associated to virtual network adapters on VMs.

Note The same Nexus 1000V instance can simultaneously deploy standard and Enhanced
VXLAN segments.

As an immediate effect of this method, the flooding of frames with unknown destination MAC addresses is no longer necessary. Because the VSM can distribute MAC addresses from an Enhanced VXLAN among all of its VEMs, there is full awareness of all MAC addresses in that segment—put simply, there are no unknown addresses.

Yet, broadcast messages are still necessary in IP-based communications within a VXLAN
segment (for example, ARP messages). So how are broadcast frames forwarded in Enhanced
VXLAN segments? The answer lies in Figure 6-21.

[Figure art: VM A on VEM 3 (VTEP3) sends a broadcast; because the VSM has populated VEM 3’s MAC table with B via VTEP4 and C via VTEP5, VEM 3 head-end replicates the VXLAN packet, sending one unicast copy to VTEP4 (VEM 4) and another to VTEP5 (VEM 5).]
Figure 6-21 Enhanced VXLAN Head-End Replication



On the left side of Figure 6-21, VM A sends a broadcast frame, which, of course, should be received by all other VMs connected to Enhanced VXLAN 10000. Because the active VSM has already populated all the VEMs’ MAC address tables, VEM 3 is aware of the virtual machines that are connected to the same VXLAN (VM B in VEM 4 and VM C in VEM 5). Thus, VEM 3 performs a head-end replication of the VXLAN packet that contains the broadcast frame, sending one copy to each VTEP registered in its MAC address table.

Note With standard VXLAN, the multicast-enabled IP network takes care of packet replication for multidestination packets.

Controller-based unicast-only VXLANs are obviously limited to the controller domain. In the case of Nexus 1000V, an active VSM can populate only the MAC address tables of its administered VEMs. Under these circumstances, Cisco and other vendors have initiated some endeavors to standardize unicast-only VXLAN communication between independent VTEPs. Among many efforts, one is apparently becoming a de facto standard at the time of this writing: VXLAN Ethernet Virtual Private Network (EVPN).

The secret sauce of this technology is the use of Multiprotocol Border Gateway Protocol
(MP-BGP) to exchange MAC-VTEP entries between multiple devices deploying VXLAN
segments. With this approach, all VTEPs (physical or virtual) signal a new VXLAN-attached
host through MP-BGP updates to other VTEPs.

In fairly large VXLAN networks, with multiple VTEPs, it may become extremely complex
to manage the full-mesh set of MP-BGP connections between all VTEPs on a network.
Instead, a BGP route reflector can be deployed as a central point of advertisement, where
one MAC-VTEP update from a VXLAN segment that is sent to the route reflector will sub-
sequently be forwarded to all other VTEPs deploying the same segment.
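To give a flavor of this approach, the following is a minimal sketch of an MP-BGP EVPN peering on an NX-OS-based VTEP; the autonomous system number and neighbor address are hypothetical, and a complete deployment also requires VNI, NVE interface, and route-reflector configuration.

! Hypothetical MP-BGP EVPN peering on an NX-OS-based VTEP
feature bgp
feature nv overlay
nv overlay evpn
router bgp 65000
  neighbor 10.0.0.2 remote-as 65000
    ! MAC-VTEP entries are exchanged as EVPN routes instead of being
    ! learned through the forwarding of multidestination traffic
    address-family l2vpn evpn
      send-community extended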

Further Reading

■ Enhanced VXLAN on Cisco Nexus 1000V Switch for VMware vSphere Deployment Guide: http://www.cisco.com/c/en/us/products/collateral/switches/nexus-7000-series-switches/guide_c07-728863.html
■ VXLAN DCI Using EVPN: http://tools.ietf.org/id/draft-boutros-l2vpn-vxlan-evpn-04.txt
■ Cisco Border Gateway Protocol Control Plane for Virtual Extensible LAN: http://www.cisco.com/c/en/us/products/collateral/switches/nexus-9000-series-switches/white-paper-c11-733737.html

Exam Preparation Tasks

Review All the Key Topics


Review the most important topics in this chapter, denoted with a Key Topic icon in the
outer margin of the page. Table 6-5 lists a reference of these key topics and the page num-
ber on which each is found.

Table 6-5 Key Topics for Chapter 6

Key Topic Element    Description                                        Page Number
Figure 6-5           Comparing a VMware vSwitch and a VMware DVS        157
Table 6-2            VMware vSphere interfaces                          157
Table 6-3            Virtual networking in other hypervisors            158
Table 6-4            Cisco Nexus 1000V main components                  161
Figure 6-11          VLAN challenges for server virtualization          172
Figure 6-14          VM1 sending an ARP request in VXLAN 5000           175
Figure 6-15          VM2 receiving the ARP request                      175
Figure 6-16          ARP reply being sent to the data center network    176
Figure 6-17          Topology after ARP is sent from VM1 to VM2         176

Complete the Tables and Lists from Memory


Print a copy of Appendix B, “Memory Tables” (found on the CD), or at least the section
for this chapter, and complete the tables and lists from memory. Appendix C, “Answers to
Memory Tables,” also on the CD, includes completed tables and lists so that you can check
your work.

Define Key Terms


Define the following key terms from this chapter, and check your answers in the glossary:

virtual networking, vSwitch, Port Group, distributed virtual switch (DVS), Cisco Nexus
1000V, Virtual Supervisor Module (VSM), Virtual Ethernet Module (VEM), port pro-
file, Virtual eXtensible LAN (VXLAN), Ethernet Virtual Private Network (EVPN),
Multiprotocol Border Gateway Protocol (MP-BGP)

This chapter covers the following topics:

■ Virtual Networking Services

■ Virtual Application Containers

This chapter covers the following exam objectives:

■ 4.2 Describe Infrastructure Virtualization

■ 4.2.d Virtual networking services

■ 4.2.e Define Virtual Application Containers

■ 4.2.e.1 Three-tier application container

■ 4.2.e.2 Custom container


CHAPTER 7

Virtual Networking Services and Application Containers
Although many data center professionals may regard networking solely as “data plumbing,”
the importance of reliable information sharing continues to grow as IT establishes a stronger
alignment with business.

More than ever, modern network devices can offer sophisticated functionalities with much
more value to business than simply forwarding packets. In that sense, networking services
have established themselves as an integral part of data centers since the blooming of Inter-
net commerce in the 1990s.

Inhabiting the gray area between applications and network, networking services can be
defined as a set of repetitive operations normally carried out by application servers (or cli-
ent devices) but actually implemented on specialized network devices. The most common
data center networking services are firewalls, server load balancers, and WAN accelerators.
And as server virtualization has evolved, many of these services have been packaged in vir-
tual machines with comparable performance to some physical devices.

In Chapter 6, “Infrastructure Virtualization,” you learned the fundamental principles of virtual networking and were introduced to multiple solutions. One of the most innovative solutions, Cisco Nexus 1000V, represents the company’s approach to virtual machine traffic control, with advanced features and remarkable consistency with physical network operations. Today, this virtual switch provides a comprehensive framework for virtual networking services developed both in-house and by third-party vendors.

The CLDFND exam requires awareness of the most common virtual networking services,
including compute and edge firewalls, advanced routing, server load balancing, and WAN
acceleration. This chapter first introduces these services, through real examples from the
Cisco Nexus 1000V portfolio, and then turns its attention to virtual application containers,
a concept that introduces standardization and consistency for cloud computing virtual net-
works.

“Do I Know This Already?” Quiz


The “Do I Know This Already?” quiz allows you to assess whether you should read this
entire chapter thoroughly or jump to the “Exam Preparation Tasks” section. If you are in
doubt about your answers to these questions or your own assessment of your knowledge
of the topics, read the entire chapter. Table 7-1 lists the major headings in this chapter and
their corresponding “Do I Know This Already?” quiz questions. You can find the answers in
Appendix A, “Answers to Pre-Assessments and Quizzes.”

Table 7-1 “Do I Know This Already?” Section-to-Question Mapping

Foundation Topics Section          Questions
Virtual Networking Services        1–8
Virtual Application Containers     9–10

1. Which of the following is not a data center networking service?
a. ADC
b. WAN acceleration
c. Network access control
d. Firewall
e. Intrusion prevention system

2. Which of the following are enhancements of vPath over service insertion methods
such as VLAN manipulation, PBR, and WCCP? (Choose all that apply.)
a. Performance
b. Service chains
c. One-arm mode
d. Policy-based forwarding
e. Traffic offload

3. Which of the following are differences between VSG and ASAv? (Choose all that
apply.)
a. VSG policies can be executed inside the hypervisor kernel.
b. ASAv policies can be executed inside the hypervisor kernel.
c. ASAv must analyze every packet from a connection.
d. VSG must analyze every packet from a connection.
e. VSG supports security policies with VM attributes.
4. Which network operating system does CSR 1000V run?
a. NX-OS
b. IOS
c. IOS XR
d. IOS XE
e. ASR-OS

5. Which of the following is not a benefit applications gain from the use of ADCs?
a. Scaling
b. High-availability
c. Content switching
d. Clustering
e. Acceleration

6. Which of the following are required configuration elements when deploying server
load balancing in Citrix NetScaler 1000V? (Choose all that apply.)
a. Stickiness table
b. Virtual IP address
c. Monitor
d. Servers
e. DNS

7. Which of the following is not a WAN acceleration method available on vWAAS?
a. TFC
b. Windows printing AO
c. DRE
d. PLZ
e. TFO

8. Which of the following virtual networking services support vPath? (Choose all that
apply.)
a. VSG
b. CSR 1000V
c. ASAv
d. vWAAS
e. NetScaler 1000V

9. Which of the following solutions are components of Cisco Virtual Application Cloud
Segmentation? (Choose all that apply.)
a. Nexus 1000V
b. PNSC
c. UCS Director
d. CSM
e. VSG
f. CSR 1000V

10. Which of the following are differences between three-tier and custom virtual applica-
tion containers? (Choose all that apply.)

a. Additional security zones
b. Zone-based firewall
c. Number of application tiers
d. Use of VXLAN
e. Number of segments

Foundation Topics

Virtual Networking Services


There are many ways to provide specialized services to applications. For example, one can
install agents on application servers to achieve user session authorization or encryption
according to a defined security policy. However, as the number of servers increases in a
data center site (especially under the influence of server virtualization), such agents may eas-
ily become an operational challenge.

If a specific service is based on open standards and requires exhaustive repetition of the
same operations, a dedicated network device is probably the best way to perform that ser-
vice. These specialized devices are generically called networking services.
Offering predictable performance for predefined operations, networking services are usu-
ally deployed in a centralized position in a data center network while executing their func-
tions transparently to both application servers and clients.

The most popular data center networking services are

■ Firewalls
■ Advanced routers
■ Server load balancers (SLBs) or application delivery controllers (ADCs)
■ Wide-area network (WAN) accelerators

Over the past two decades, networking services have been deployed in different formats,
such as dedicated network appliances, hardware-based device insertion modules, or addi-
tional features on a network operating system. But with the increasing adoption of server
virtualization, many networking services started to be commercialized in a virtual format.
Also labeled virtual networking services, the first virtual appliances were essentially a
repackaging of physical appliances, with virtual machines hosting the exact same software
that ran on physical networking services.

Nonetheless, new approaches were developed as more networking services leveraged the
flexibility of virtual switching. Providing a firm architecture for virtual networking designs,
Cisco Nexus 1000V has aggregated an enviable portfolio of virtual networking services,
which will be discussed at length in the following sections.

But before you delve into these services, you must first be acquainted with some of the
“old-school” techniques that are still used to insert networking services in physical struc-
tures.

Service Insertion in Physical Networks


If a networking service must be deployed for an application, the traffic exchanged between
clients and servers must be guided to the network device implementing such service. As an
illustration, Figure 7-1 depicts some of the most traditional solutions for traffic steering
deployed in data center networks.

[Figure shows three side-by-side topologies: VLAN manipulation (an inline networking service bridging VLAN 100 and VLAN 200 between WAN and server), policy-based routing (a PBR-enabled router steering selected traffic to a one-arm networking service), and WCCP reverse-proxy (a router redirecting client HTTP requests to a data center web cache).]

Figure 7-1 Traffic Steering Techniques in Physical Networks

The topology on the left in Figure 7-1 depicts a method called VLAN manipulation, where
two VLANs are used to drive traffic through a networking service. This arrangement works
for inline appliances, which can bridge Ethernet frames (or route IP packets) between both
VLANs, allowing servicing of all traffic that uses this path. While this technique is success-
fully used for security services such as firewalls and intrusion prevention systems (IPSs), it
may not be ideal for networking services that must only be applied to selected traffic.

Depicted in the middle topology of Figure 7-1, policy-based routing (PBR) enables net-
working services in “one-arm mode,” where the specialized device is not positioned as a
mandatory hop between clients and servers. Although it has the advantage of not overload-
ing the appliance with traffic that should not be serviced, this technique requires manual
configuration in centralized points of the network to allow precise traffic steering. Gener-
ally speaking, PBR is commonly applied to server load-balancer designs.
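
The following IOS-style sketch illustrates the PBR approach; the addresses, interface, and names are hypothetical, and exact syntax varies by platform and software release:

! Identify client web traffic destined to the server farm
access-list 101 permit tcp any 192.168.10.0 0.0.0.255 eq 80
!
! Steer matching packets to the one-arm networking service
route-map STEER-TO-SERVICE permit 10
 match ip address 101
 set ip next-hop 10.0.0.10
!
! Apply the policy to traffic arriving from the WAN
interface GigabitEthernet0/1
 ip policy route-map STEER-TO-SERVICE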

In 1997, Cisco developed the Web Cache Control Protocol (WCCP) to detour client HTTP
requests to web caches, providing bandwidth savings and faster responses on a remote branch.
The topology on the right in Figure 7-1 depicts an alternative WCCP design called reverse-
proxy, where the web cache is not close to the client but rather located in the data center
network. In this case, a strategically positioned network device (router or switch) detects
incoming client HTTP traffic and, through WCCP encapsulation, steers it to the cache with
the objective of offloading servers from having to send the same web objects repeatedly.

In more detail, WCCP in reverse-proxy carries out the following operations:

Step 1. The router (or switch) receives IP packets from a client.

Step 2. If packets belong to TCP port 80 (web traffic), they are encapsulated into
WCCP packets and sent to the web cache. The encapsulation guarantees that
the IP packets reach the web cache in the original format and without any man-
ual configuration on intermediary network devices. If a requested web object is
present in the cache, it sends it to the client without bothering the web servers.

Step 3. If the requested web object is not present in the cache, the device transpar-
ently retrieves the object from the server, sends it to the client, and caches it
for future sessions.

When compared to other interception methods, WCCP offers the simplicity of configur-
ing fewer devices to provide networking services for select applications. Although WCCP
demands support on network devices and web caches, its elegance grants a natural exten-
sion to more services. For example, TCP traffic steering through WCCP is now a usual ser-
vice insertion technique for WAN accelerators.
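
As a minimal illustration, the reverse-proxy design could be enabled on an IOS router with the standard web-cache service; the interface is hypothetical, and syntax varies by release:

! Enable the standard WCCP web-cache service (TCP port 80)
ip wccp web-cache
!
! Redirect incoming client HTTP requests toward the registered cache
interface GigabitEthernet0/0
 ip wccp web-cache redirect in

Note that the web cache itself must also register with the router through WCCP before any redirection takes place.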

Virtual Services Data Path


Leveraging the considerable flexibility brought by server virtualization, networking services
can be inserted with less complexity when compared to the techniques explained in the
prior section. Taking advantage of this flexibility, Cisco Nexus 1000V incorporates a fea-
ture called Virtual Services Data Path (vPath). In a nutshell, vPath avoids convoluted net-
work configurations and deploys networking service insertion through port profiles.

NOTE As you have learned in Chapter 6, Nexus 1000V port profiles are interface configu-
ration templates that can be inherited by distributed virtual switch interfaces that are con-
nected to VMs.

As a visual aid, Figure 7-2 explores the working principles behind vPath.

[Figure shows West VEM and East VEM joined by a VLAN, VXLAN, or Layer 3 network; VMs A and B attach to West VEM and VMs C and D to East VEM through a vPath-enabled port profile, while Frame1 (from VM A) and Frame2 (to VM D) are vPath-encapsulated toward a vPath-enabled virtual networking service.]

Figure 7-2 vPath in Action

In Figure 7-2, a special port profile is created and associated with two virtual machines
connected to distinct Virtual Ethernet Modules (West VEM and East VEM). Denoted as a

small circle, this port profile essentially signals to a VEM that frames to (or from) these vir-
tual machines deserve distinctive traffic handling, rather than plain Layer 2 switching.

Afterward, the VEM encapsulates these frames into vPath packets destined to virtual net-
working services that can be reached through a shared Layer 2 segment (VLAN or VXLAN)
or a remote IP subnet.

Figure 7-2 depicts vPath steering two frames to a virtual networking service: Frame1 (sent
by VM A) and Frame2 (destined to VM D). Depending on the virtual appliance specialized
service, these packets are processed accordingly and sent back to the VEM connected to
the virtual machine to continue their original path.

Similarly to WCCP, vPath employs encapsulation to simplify traffic interception configurations. But as an enhancement, vPath introduces the concept of forwarding policies, where
a virtual machine attribute (such as associated port profile) takes precedence over its pure
networking characteristics (such as VLAN or IP address) to define service insertion. Further-
more, vPath enables virtual networking services to program Nexus 1000V VEMs to better
serve target applications.

Knowing that examples speak much louder than abstractions, I will introduce a very illustra-
tive vPath-enabled security service in the next section.

Cisco Virtual Security Gateway


Released in 2010, Cisco Virtual Security Gateway (VSG) brought the concept of a com-
pute firewall to virtual networks based on Cisco Nexus 1000V. With VSG, traffic between
two virtual machines can be permitted or blocked within the hypervisor kernel or, more
specifically, the Virtual Ethernet Module.

But first, allow me to present the components that comprise the VSG architecture:

■ Cisco Prime Network Services Controller (PNSC): Deployed as a virtual machine, this
software is responsible for the creation of security profiles, configuration of VSG (as
well as other service devices), and the establishment of a multitenant hierarchy defined
by tenants, virtual data centers (vDCs), virtual applications (vApps), and application tiers.
■ Cisco Virtual Security Gateway (VSG): This virtual appliance executes the rules defined
in the PNSC security profiles on traffic from (or to) VMs that belong to a PNSC tenant,
vDC, vApp, or tier.
■ Cisco Virtual Supervisor Module (VSM): In the Nexus 1000V supervisor module, VSG
reachability is configured and PNSC security profiles are inserted into vEthernet port
profiles. And as explained in Chapter 6, these port profiles generate connectivity policies
that can be associated to virtual machines and, thereafter, drive VEM behavior in order
to correctly steer traffic to VSG.
■ VM Manager: Actively associates connectivity policies (Port Groups in the case of
VMware vSphere) to a VM network adapter card.

As you may have already noticed, these components work collaboratively to deploy the
concept of a compute firewall. Further detailing this service, Figure 7-3 examines a VSG
scenario at the moment when a new connection reaches a Nexus 1000V instance.

[Figure shows PNSC holding security profile "WEB-SP," VSM holding port profile "WEB-PP" (which contains "WEB-SP"), and the VM manager exposing the corresponding "WEB-PP" Port Group; VSG and VMs A and B connect to West VEM on Host1, and VMs C and D to East VEM on Host2.]

Figure 7-3 VSG Ready to Process Traffic

The following is the complete provisioning process used to build the scenario depicted in
Figure 7-3:

■ The security administrator creates a VSG security profile called “WEB-SP” in PNSC. This
policy basically blocks all traffic except TCP connections whose destination port is 80 or
443.
■ The network administrator defines how the VEMs can reach VSG (VLAN, VXLAN, or
IP address), creates a port profile called “WEB-PP,” and inserts security profile “WEB-
SP” into this port profile.
■ The configuration of port profile “WEB-PP” automatically creates a connectivity policy
(or Port Group in VMware-speak) in the VM manager. Thus, the virtualization adminis-
trator assigns this policy to VM A, spawning vEthernet 10 in West VEM.

NOTE For the sake of simplicity, and because it is a topic beyond the scope of this book, I will not represent how exactly each virtual networking service deploys high availability in the examples discussed in this chapter. Please refer to each vendor's product documentation for more detailed information.
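
A hedged Nexus 1000V sketch of the network administrator's tasks above follows; the VSG address and names are hypothetical, and the vservice syntax varies across Nexus 1000V and vPath releases:

! Define how VEMs reach the VSG virtual appliance
vservice node VSG-1 type vsg
  ip address 10.1.1.100
  adjacency l3
!
! Port profile inherited by the protected VMs, binding security profile WEB-SP
port-profile type vethernet WEB-PP
  switchport mode access
  switchport access vlan 100
  org root/Tenant-A
  vservice node VSG-1 profile WEB-SP
  no shutdown
  state enabled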

This bucolic scene is disturbed by an HTTP connection sent toward VM A. Figure 7-4
shows what happens when the first packet of the connection reaches West VEM, regardless
of its source (B, C, D, or an external host).

Figure 7-4 captures the instant when West VEM sends the connection’s first packet (encap-
sulated in a vPath frame) after noticing that it was supposed to reach a “protected” interface
(vEthernet 10).

As VSG receives the packet, it enforces security profile WEB-SP rules on it. Because the
packet belongs to a TCP port 80 connection, it conforms to WEB-SP rules. Consequently,
VSG resends the encapsulated packet back to the West VEM, as displayed in Figure 7-5.

[Figure shows the same topology, with West VEM steering the vPath-encapsulated first packet of the connection through vEthernet 10 toward VSG.]

Figure 7-4 First Packet Steered to VSG

[Figure shows VSG returning the vPath-encapsulated packet to West VEM, which then delivers it to VM A through vEthernet 10.]

Figure 7-5 West VEM Sending First Packet to VM A

Besides traffic redirection, vPath also carries out more operations between VSG and West
VEM. In fact, VSG caches the security profile decision into the Nexus 1000V module,
allowing the remaining packets from that specific connection to be freely exchanged
between VM A and the original source.

To recognize which packets belong to this same connection, VEMs with protected VMs
maintain a flow table. Inside these tables, allowed and blocked connections are identified
via destination IP address, source IP address, protocol (TCP, UDP, and ICMP, among oth-
ers), destination port (in the case of TCP or UDP), and source port (in the case of TCP or
UDP).

The VEM also identifies the state of each offloaded flow. Therefore, after a FIN or RST is
seen on a TCP connection, the VEM automatically purges this specific offload entry. For
connectionless protocols and interrupted TCP connections, Nexus 1000V has predefined
timeouts for the offloading traffic entries configured by VSG.

NOTE VSG does not offload to the VEM the handling of some protocols that require
inspection of all packets of a flow, such as File Transfer Protocol (FTP), Trivial FTP (TFTP),
and Remote Shell (RSH).

Because VSG only analyzes the first packet of a standard connection, it provides a scalable
solution to control VM-to-VM (or “east–west,” as I intended to subconsciously suggest to
you) traffic in cloud environments.

You can easily deduce that security rules based on IP addresses may not be the best way to secure such extremely dynamic networks. As a general rule in these scenarios, any security policy should be designed to be as reusable as possible. Hence, a cloud network architect should always strive to eliminate overly specific arguments in any policy or template.

With this objective, VSG supports the creation of security rules beyond IP addresses. As
Figure 7-6 displays, PNSC provides flexible traffic classification methods that can be easily
reused in automated environments.

Figure 7-6 VSG Security Profile Rule in PNSC

Figure 7-6 exhibits security profile APP-SP, which contains a single rule that only allows
VMs characterized as web servers to use TCP port 3349. Rather than defining these VMs
through their IP addresses, the rule uses the VM name prefix as a source attribute, allowing
a much more efficient method to build security rules.
Chapter 7: Virtual Networking Services and Application Containers 197

Besides virtual machine naming, PNSC can define the following VM attributes for VSG
security profiles:

■ Guest operating system
■ Hostname
■ Cluster name
■ Port profile
■ VM DNS name

Besides prefixes, PNSC also allows the use of additional regular expressions and operators
such as contains, equals, and not equals to recognize parts of these VM attributes.

NOTE The joint capability of handling traffic within the hypervisor kernel and using clas-
sification based on server virtualization attributes (even inside the same network segment) is
informally defined as micro-segmentation.

Virtual zones (vZones) are yet another PNSC resource that greatly facilitates security
policy writing. In a nutshell, a vZone defines a logical group of VMs with multiple common
attributes that can be referenced, as a single parameter, on any security profile rule. As an
example, a virtual zone called vZone-DB can aggregate all VMs whose names start with the
prefix “SQL” and whose IP addresses belong to a subnet 192.168.1.0/24.

NOTE More details related to VSG scalability, performance, and licensing can be found in
Chapter 13, “Cisco Cloud Infrastructure Portfolio.”

Cisco Adaptive Security Virtual Appliance


A compute firewall such as Cisco Virtual Security Gateway (VSG) is carefully designed to
enforce security policies within a defined organization unit such as a cloud tenant. Yet,
additional protection measures are required to harden the organization unit from what is
considered to be “the external wild world.” After all, attacks and exploits may come from
the Internet or even from other tenants sharing the same infrastructure.

Also known as ASAv, the Cisco Adaptive Security Virtual Appliance is composed of the
market-leading Cisco ASA firewall software ported into a virtual machine. Although ASAv
may replace some physical Adaptive Security Appliance (ASA) models in server virtualiza-
tion environments, it was originally designed to perform the role of an edge firewall for
cloud tenant resources.

Figure 7-7 illustrates how ASAv can protect VMs from a cloud tenant in a Nexus 1000V
scenario.

[Figure shows two views of the same design: on the left, ASAv on Host1 connects outside VLAN 50 to inside VXLAN 9000, with trunks carrying both segments across Host1 and Host2 to protect VMs A, B, and C from Internet clients; on the right, a simplified broadcast domain view shows ASAv between VLAN 50 and VXLAN 9000.]

Figure 7-7 ASAv Deployment Example

In both topologies in Figure 7-7, ASAv is protecting all virtual machines connected to
VXLAN 9000 (VMs A, B, and C) from outside traffic coming through VLAN 50. Because
edge firewalls must deploy Layers 4 to 7 security rules to avoid more sophisticated attacks,
traffic is always steered to ASAv through VLAN (and VXLAN) manipulation. Routing IP
packets between both inside and outside interfaces, ASAv can work as a default gateway for
the protected VMs.

NOTE At the time of this writing, ASAv does not support vPath. For this reason, it
depends on other traffic steering techniques, such as embodying the VMs default gateway,
to enforce security policies on all traffic from a cloud tenant (or select resources from it).

The topology on the right side of Figure 7-7 hides the representation of Layer 2 switches and virtualization hosts to clarify the objective of an edge firewall such as ASAv. In my opinion, such a "broadcast domain view" can be an excellent alternative to characterize security domains in virtual network topologies.

ASAv can deploy up to ten virtual interfaces and, for that reason, may also filter intra-tenant
traffic and deploy virtual demilitarized zones (DMZs). Besides traffic filtering, ASAv also
provides the following capabilities for cloud tenant resources:

■ Site-to-site virtual private networks (VPNs): Using IPsec, the edge firewall can securely
connect external networks with compatible routers or firewalls.
■ Remote VPNs: This feature allows individual hosts deploying VPN software (such as
Cisco AnyConnect) to remotely connect to ASAv as if they were located in a local
security zone. With this feature, IT administrators can perform maintenance operations
on protected VMs, for example.
■ Network Address Translation (NAT): ASAv can perform translation of private IP addresses to public IP addresses so internal VMs can be externally reached (see the sketch after this list). This capability also allows the reuse of private IP subnets and addresses for multiple tenants.
■ Application inspection: Required for services that embed IP addressing information in
user data or open secondary connections using dynamically assigned ports. Through this
feature, ASAv performs deep packet analysis in protocols such as Domain Name Service
(DNS), FTP, HTTP, Instant Messaging (IM), RSH, Session Initiation Protocol (SIP),
SQL*Net, TFTP, among others.
■ Authentication, Authorization, and Accounting (AAA): Set of services to identify ASAv
administrators and application users, define what type of resources they can access, and
register the procedures they have executed.
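
As a brief sketch of the NAT and traffic filtering capabilities above, consider the following standard ASA-style configuration, which also applies to ASAv (addresses and object names are hypothetical):

! Publish an internal web VM with a static NAT entry
object network WEB-VM
 host 172.16.1.10
 nat (inside,outside) static 198.51.100.10
!
! Permit only HTTPS from the outside to the published VM
access-list OUTSIDE-IN extended permit tcp any object WEB-VM eq 443
access-group OUTSIDE-IN in interface outside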

At the time of this writing, ASAv can be managed through the following tools and methods:

■ Command-line interface (CLI): This configuration method uses commands from the
well-known ASA operating system.
■ Cisco Adaptive Security Device Manager (ASDM): Graphical user interface that allows
the easy creation of security rules and policies on a single ASAv instance. Its administra-
tion can be also offered to tenants’ administrators, if desired.
■ Cisco Security Manager (CSM): Management tool that allows security policy consisten-
cy across multiple ASAv instances through object management, event monitoring, report
and troubleshooting tools, image control, health, and performance monitoring. CSM can
be managed through a GUI or a REST-based XML API.
■ Application programming interface (API): ASAv also provides an individual API based
on RESTful principles. Ideal for cloud computing and other automated environments.

NOTE More details related to ASAv scalability, performance, and licensing can be found
in Chapter 13.

Cisco Cloud Services Router 1000V


The vast majority of virtual switches offer only Layer 2 forwarding (bridging) between virtual machines and the physical network. Nevertheless, rather than limiting Layer 3 forwarding (routing) to physical network devices, cloud tenants can benefit from the insertion of advanced routing functions closer to their virtual machines.

Cisco Cloud Services Router (CSR) 1000V expands Cisco’s initiative of building virtual
appliances based on its most flexible and popular physical devices. More specifically, CSR
1000V runs Cisco IOS XE, a robust and complete variation of the most popular network
operating system in the world.

Deploying CSR 1000V, cloud tenants can leverage advanced Layer 3 features such as
■ IP versions 4 and 6, including migration tools such as NAT64
■ Unicast routing protocols, including Border Gateway Protocol (BGP), Open Shortest Path
First (OSPF), and Enhanced Interior Gateway Routing Protocol (EIGRP), as well as PBR
■ High availability gateway protocols such as Hot Standby Router Protocol (HSRP), Virtual Router Redundancy Protocol (VRRP), and Gateway Load Balancing Protocol (GLBP)
■ IP multicast protocols, including Internet Group Management Protocol (IGMP) and
Protocol Independent Multicast (PIM)
■ Virtual Routing and Forwarding (VRF) for routing and forwarding table segmentation
■ Multiprotocol Label Switching (MPLS) services, including Layer 3 VPNs (L3VPNs),
Ethernet over MPLS (EoMPLS), and Virtual Private LAN Services (VPLS)
■ Zone-based firewall
■ IPsec VPNs, including Dynamic Multipoint VPN (DMVPN), Easy VPN, and FlexVPN
■ Access control lists (ACLs)
■ WCCP for easier integration of myriad virtual networking services such as web caches and WAN accelerators

NOTE CSR 1000V also deploys advanced data center features such as Overlay Transport
Virtualization (OTV), which will be explained in more detail in Chapter 10, “Network
Architectures for the Data Center: Unified Fabric.”

As a visual aid, Figure 7-8 presents a use case for CSR 1000V in an IaaS-based cloud.

[Figure shows a cloud tenant hosting redundant CSR1 and CSR2 instances attached to VLAN 500 (CORPORATE) and VXLAN 5000 (PARTNER); an IPsec tunnel across the Internet links the tenant to the company's MPLS-enabled WAN, which carries the CORPORATE and PARTNER VPNs.]

Figure 7-8 CSR 1000V Deployment Example

In this scenario, a company deploys Layer 3 VPNs for traffic isolation between corporate
users (CORPORATE) and partners (PARTNER). After hiring IaaS services from a cloud pro-
vider, the company wants to enforce these security measures to virtual machines deployed
in its brand new cloud tenant environment.

As Figure 7-8 shows, two CSR 1000V instances are deployed inside the cloud tenant for
redundancy purposes (ideally, each one of these virtual routers should be located in distinct
hosts through anti-affinity rules defined in a virtualization cluster). CSR1 and CSR2 deploy
HSRP to implement an “always active” default gateway for virtual machines. Moreover,
these tenant-controlled routers provide route advertisement throughout the company WAN
with OSPF. Security is further tightened with the use of IPsec tunnels providing encryption
to all data exchanged between remote branches and the cloud tenant.
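
A condensed IOS-style sketch of CSR1's role in this design follows (hypothetical addressing; the IPsec and MPLS portions are omitted for brevity):

! Tenant-facing interface: HSRP provides the VMs' always-active gateway
interface GigabitEthernet1
 ip address 10.10.10.2 255.255.255.0
 standby 1 ip 10.10.10.1
 standby 1 priority 110
 standby 1 preempt
!
! Advertise the tenant subnet throughout the company WAN
router ospf 1
 network 10.10.10.0 0.0.0.255 area 0

CSR2 would carry a mirrored configuration with a lower HSRP priority, taking over the virtual gateway address if CSR1 fails.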

Deploying MPLS in these IOS-based virtual appliances, the cloud tenant has separate VMs assigned to the CORPORATE and PARTNER VPNs, as long as they are connected, respectively, to VLAN 500 or VXLAN 5000. Consequently, desktops from end users connected to the PARTNER VPN can only access virtual machines that are connected to VXLAN 5000 in the cloud tenant.

NOTE At the time of this writing, CSR 1000V does not require vPath because it is usually
positioned to deploy advanced routing features for all traffic exchanged between the cloud
tenant domain and external networks.

The instantiation of an advanced router in a cloud tenant is such a compelling concept that
public cloud providers, such as Amazon Web Services (AWS), are already offering CSR
1000V as an additional service for their customers.

CSR 1000V can be managed through the following methods:

■ CLI: Using Telnet or SSH, a network administrator can use the familiar commands from
Cisco IOS. As a result, service providers and large enterprise networks can easily leverage
CLI-based provisioning tools for CSR 1000V instances running on a cloud environment.
■ Cisco PNSC: This management tool can install and license CSR 1000V instances, as well
as configure features according to their location within a created PNSC tenant hierarchy.
■ API: CSR 1000V additionally provides an API based on RESTful principles. It is ideal for
cloud computing and other automated environments.

NOTE More details related to CSR 1000V scalability, performance, and licensing can be
found in Chapter 13.

Citrix NetScaler 1000V


With the massive popularity of e-business applications in the late 1990s, data center archi-
tects quickly realized that a web application running over a single server raises two immedi-
ate risks:

■ Lackluster performance: Whenever the server hits a saturation point defined by its
hardware-software combination
■ Poor availability: In the case of a major hardware or software failure

To mitigate these risks, Cisco created the concept of the server load balancer (SLB) appli-
ance in 1998. In essence, an SLB is a network device that can receive end-user traffic and
send it to a selected server according to a predefined load balancing policy.

Figure 7-9 displays the main components on a typical SLB deployment, described in the
following list.

[Figure shows application users reaching a VIP on the SLB; monitors probe services A1, A2, and A3 on Servers 1 through 3 and services B3, B4, and B5 on Servers 3 through 5, while a stickiness table maps clients (Client 1, Client 2) to their selected servers (Server 3, Server 4).]

Figure 7-9 SLB Basic Architecture

■ Servers: Basically the IP addresses from servers that will receive the connections dis-
patched from the SLB.
■ Service: The IP address, port, and protocol combination used to route requests to a spe-
cific load-balanced application server. A service is the logical representation of an appli-
cation running on a server.
■ Monitors: Synthetic requests the SLB creates to check whether a service is available on
a server. They can be as simple as an Internet Control Message Protocol (ICMP) Echo
request or as sophisticated as a Hypertext Transfer Protocol (HTTP) GET operation
bundled with a database query.
■ Virtual IP (VIP): An SLB internal IP address that is specifically used to receive end-user
connections. This address is usually registered with DNS servers to be advertised to end
users.
■ Virtual server: Combines a VIP, transport protocol (TCP or UDP), and port to which a
client sends connection requests for a particular load-balanced application. It is associ-
ated with a set of services to which the SLB will dispatch end-user connections.
■ Stickiness table: An optional SLB component that stores client information. The SLB
uses this data to consistently forward end-user subsequent connections to the server that
was selected during the first access, thus maintaining user session states inside the same
server. Examples of stored client information are source IP address, HTTP cookies, and
special strings excised from user data.
■ Load balancing algorithm: It is the configured method of user traffic distribution among
the servers deploying the same application. A wide variety of algorithms are available
today for these devices, including round robin, least connections, and hashing.

Generally speaking, an SLB provides server load balancing through the following process:

Step 1. The SLB receives the client connection request on its VIP address, identifies the virtual server that will take care of the connection, and checks whether the client is already in the stickiness table.

Step 2. If the client information is not already present in the stickiness table, the SLB examines the services associated with the virtual server and determines, through their monitor results, which real servers have the application healthily running at that moment.

Step 3. Using the virtual server's configured load balancing algorithm, the SLB selects the server that will receive the user connection.

Step 4. The SLB saves the client and server information in the stickiness table and coordinates both ends of the connection until it eventually ends.

An interesting analogy for an SLB would be an airport control tower, which must identify
the main characteristics of a landing airplane (user connection) before deciding which run-
way (server) it can use. The control tower usually applies a predefined method (algorithm)
to sequence the arriving planes and must already know (monitor) if a runway is in mainte-
nance or not.

NOTE Please do not confuse the concept of a server load balancer with that of server
cluster software, which essentially allows servers to work in tandem, providing a scale-out
solution for a specific application. Although SLBs may replace specific end-user distribution
functions of clustering software, only the latter can manage user session synchronization
among cluster members and provide shared access to stored data.

With widespread adoption in most data centers in the 2000s, SLBs were aptly renamed
application delivery controllers (ADCs) as they expanded their capabilities with features
such as the following:

■ Content switching: Server selection based on Layers 5 to 7 parameters from HTTP, FTP, RTSP, and DNS, as well as user data.
■ Secure Sockets Layer (SSL) acceleration: Hardware-assisted encryption to offload
secure web servers.

■ TCP connection reuse: Multiplexing of numerous TCP connections from clients to a small number of connections between the ADC and a server. This feature offloads web servers from the management of multiple TCP connections.
■ Object compression: Decreases bandwidth requirements with web objects being com-
pressed in the ADC and, subsequently, delivered to the application clients employing
decompression in their web browsers.
■ Web acceleration: Includes several distinct mechanisms whose objective is improving
application response time for web applications.
■ Application firewall: Embedded analysis tools which prevent security breaches, data
loss, and unauthorized customizations to web applications with sensitive business or cus-
tomer information.

Similar to other networking services, ADCs were eventually introduced as virtual appliances
targeting server virtualization and cloud deployments. In both contexts, this virtual network-
ing service is generically used to increase capacity of applications when virtual machines are
scaled out through cloning or template instantiation.

Citrix NetScaler (NS) 1000V embodies the packaging of Citrix NetScaler ADCs in virtual
appliances. And as Figure 7-10 depicts, NetScaler 1000V provides vPath integration with
virtual networks based on Nexus 1000V.

[Figure shows two moments in time: on the left, NetScaler 1000V in one-arm mode load balances Internet clients to VMs A and C through vPath; on the right, after the scale-out, it load balances the same VIP across VMs A, B, C, and D on Host1 and Host2.]

Figure 7-10 NetScaler 1000V Deployment Example

In the described scenario, a cloud tenant has already deployed two virtual machines (A
and C) to host a web application. In the situation displayed on the left side of the figure,
NetScaler 1000V receives the requests from clients using a VIP address. As a consequence,
the virtual ADC load balances the client connections between both VMs, providing more
user capacity and availability for the application.

Because the company business department is foreseeing a sudden user interest during a
future limited promotion, the cloud administration team decides to deploy two more virtual
machines (B and D) for that period. The right side of the figure showcases this situation,
where NetScaler 1000V is automatically configured to load balance client traffic to all
VMs, doubling the user capacity of the application.
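
A NetScaler CLI sketch of the initial scenario follows; server names, addresses, the VIP, and options are hypothetical and vary by NetScaler release:

add server vm-a 192.168.10.11
add server vm-c 192.168.10.13
add service svc-vm-a vm-a HTTP 80
add service svc-vm-c vm-c HTTP 80
add lb vserver vip-web HTTP 198.51.100.20 80 -lbMethod LEASTCONNECTION -persistenceType SOURCEIP
bind lb vserver vip-web svc-vm-a
bind lb vserver vip-web svc-vm-c

Scaling out to VMs B and D would then require only two more add server, add service, and bind lb vserver iterations, which is precisely the kind of repetitive task a cloud orchestrator can automate.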

If you are wondering about the value vPath is adding to this scenario, first you have to
understand a classic problem related to ADCs connected in one-arm mode: how to guaran-
tee that response traffic from the servers reaches the ADC.

In Figure 7-10, when NetScaler 1000V sends the connection request to the virtual machine,
by default, the client IP address is used as the connection source address. Therefore, the
VM response has the client IP address as the destination. And because Nexus 1000V is a
Layer 2 switch, it would naturally send the VM response to the VM default gateway, blind-
ing the ADC from the server response. Besides, the connection would be instantly terminated as soon as the client did not see the original destination IP address (VIP) as the response's source IP address.

Traditionally, two methods are deployed in physical networks to solve this challenge, with
variable benefits and drawbacks:
■ Source NAT: The ADC replaces the client IP address with an internal address (for
example, the VIP), forcing the return traffic to be directed back to it. As a disadvantage,
the application servers only detect the ADC as the origin for all connections and conse-
quently lose track of the client accesses for monitoring and accountability purposes.
■ Policy-based routing: As previously explained in the section “Service Insertion in
Physical Networks,” a network device may deploy this non-default method to route serv-
er responses back to the ADC, demanding an exclusive IP subnet for the virtual service.
However, this method is not possible in pure Layer 2 scenarios such as the one depicted
in Figure 7-10.

Nexus 1000V and NetScaler 1000V can avoid this conundrum through vPath encapsulation.
When this virtual ADC is assigned to a vPath-enabled port profile, Nexus 1000V can reg-
ister load-balanced connections coming from NetScaler 1000V and automatically steer the
return traffic to the ADC, avoiding the drawbacks from both source NAT and PBR.

NOTE More details related to Citrix NetScaler 1000V scalability, performance, and licens-
ing can be found in Chapter 13.

Cisco Virtual Wide Area Application Services


The two main reasons why applications usually suffer from performance issues in wide-area
networks are latency and bandwidth starvation:

■ Latency can be defined as the time spent to transmit any signal through a communica-
tion channel. In any client/server application transaction, application response time can
be roughly derived from the multiplication of the round-trip time (two times the latency)
and the number of messages exchanged between client and server, with reception con-
firmation. For example, if a server needs to send 1000 messages to a client and requires
reception confirmation for each of them to send the next message, a latency of 1 mil-
lisecond will result in 2 seconds for the whole transaction. On the other hand, with a
latency of 100 milliseconds, the same transaction would need 3 minutes and 20 seconds
(or 200 seconds) to finish.
TCP connections behave differently from the data exchange I just described. In summa-
ry, TCP employs the concept of a transmission window, which represents the unidirec-
tional amount of connection data that can be transmitted after a reception confirmation.
In a TCP connection, the transmission window increases whenever the last transmitted data chunk is successfully acknowledged. Nevertheless, most hosts present a transmission window size limitation of 64 KB, which may compromise TCP connections in high-latency links, regardless of their available bandwidth (see the worked example after this list).
■ Because it defines how Internet access is usually billed, bandwidth starvation is argu-
ably the most intuitive cause attributed to unsatisfactory application performance. For
example, in a low-bandwidth WAN connection, traffic queuing in network devices
increases, causing packet loss and retransmissions on TCP connections.
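
The 64 KB window limitation from the first bullet caps TCP throughput no matter how much bandwidth is available. Ignoring slow start and packet loss, a rough upper bound is:

Maximum TCP throughput ≈ window size / RTT = 65,535 bytes / 0.2 s ≈ 320 KB/s (about 2.6 Mbps)

for the 100-millisecond latency (200-millisecond round-trip) example above, even if the path is a 10-Gbps link.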

To support the consolidation process of servers in centralized data centers, a new network-
ing service called WAN acceleration was conceived. In essence, WAN acceleration aims to
transparently assuage the effect of latency and bandwidth saturation for applications tra-
versing WAN links.

Cisco Wide Area Application Services (WAAS) epitomizes Cisco’s innovative and scalable
WAN acceleration solution. WAAS is a symmetrical acceleration solution, demanding the
deployment of an accelerator device at each end of a WAN link: one close to the client and
another in the vicinity of the application server. With this arrangement, application connec-
tions are intercepted by both WAAS devices, which in turn apply acceleration algorithms to
decrease the application response time and increase the WAN link capacity.

In summary, Cisco WAAS offers the following acceleration algorithms:

■ TCP Flow Optimization (TFO): Each WAAS device acts as a TCP proxy, providing
local acknowledgements to the host that is closest to the device on behalf of the remote
end. With this artifice, WAAS “fools” both client and server through the illusion that
both are connected to the same LAN. Then, the devices deploy an optimized version of
TCP between them to leverage the most from the WAN bandwidth for each connection.
■ Data Redundancy Elimination (DRE): WAAS inspects TCP traffic to identify redundant
data patterns at the byte level and quickly replace them with 6-byte signatures that are
automatically indexed and recognized by both WAAS devices.
■ Persistent Lempel-Ziv (PLZ): Standards-based compression that can be applied (in con-
junction with DRE or not) to further reduce the amount of bandwidth consumed by a
TCP flow.
■ Application optimization (AO): WAAS provides specific optimization algorithms for message-intensive applications based on SSL, HTTP, Microsoft Server Message Block (SMB), Network File System (NFS), Messaging Application Programming Interface (MAPI), Citrix Independent Computer Architecture (ICA), and Windows Printing.

NOTE You will receive a more detailed explanation of both SMB and NFS in Chapter 9,
“File Storage Technologies.”

Full WAAS services can be deployed in two different physical formats: Cisco Wide Area
Virtualization Engine (WAVE) appliances and service modules for Cisco Integrated Services
Routers (ISR). On both formats, traffic interception is performed through PBR, inline pass-
through network adapters, WCCP, ADCs, or a specialized WAAS clustering solution called
Cisco AppNav.

NOTE Select Cisco routers can also deploy an IOS feature called WAAS Express, which
implements a smaller set of WAAS acceleration algorithms.

In 2010, Cisco WAAS was also released as a virtual appliance called Virtual Wide Area
Application Services (vWAAS). Similarly to other virtual networking services, this format
allows the deployment of WAN acceleration for cloud computing environments, which are
intrinsically exposed to WAN latency and client bandwidth restrictions.

Since its first version, vWAAS incorporated vPath into its options of supported traffic inter-
ception methods. Hence, Nexus 1000V can easily steer virtual machine traffic that requires
WAN acceleration to a vWAAS instance.

Figure 7-11 examines a vWAAS deployment using vPath.

[Figure shows end user A behind a branch WAAS appliance (WCCP interception) holding an optimized connection across the WAN to VM A on Host1, steered through vWAAS via a vPath-enabled port profile; end user B reaches VM B on Host2 over a non-optimized connection, and vCM manages the WAAS devices.]

Figure 7-11 vWAAS Deployment Scenario



In Figure 7-11, one end user is accessing an application hosted in VM A while another end
user is requesting services from an application running on VM B. A physical WAAS appli-
ance in branch A receives WCCP-redirected traffic and therefore can negotiate WAN accel-
eration algorithms with another WAAS device installed in the path to the application server.
The depicted vWAAS covers this function, receiving steered traffic when the Nexus 1000V
VEM perceives that VM A is connected to a vPath-enabled interface.

vPath may also be used to offload vWAAS from non-accelerated traffic. In Figure 7-11, the
connection between end-user B and VM B cannot be accelerated because branch B does not
have a WAAS device. Consequently, when vWAAS does not detect a remote device, it can
program the VEM on host 2 to not steer subsequent packets from this connection.
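
As a hedged sketch, the branch A router in Figure 7-11 could intercept traffic with the WCCP service pair 61 and 62 commonly used for WAAS (hypothetical interfaces; syntax varies by IOS release):

ip wccp 61
ip wccp 62
!
! Client-to-server traffic entering from the branch LAN
interface GigabitEthernet0/0
 ip wccp 61 redirect in
!
! Server-to-client traffic entering from the WAN
interface Serial0/0/0
 ip wccp 62 redirect in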

As Figure 7-11 also demonstrates, a specific virtual appliance called vWAAS Central Manager (vCM) is responsible for managing a complete WAAS system. At the time of this writing, a vCM instance can configure and monitor thousands of WAAS devices.

Figure 7-12 displays one of the multiple reporting capabilities from vCM. In this specific
screen capture, vCM provides information related to the whole WAN acceleration deploy-
ment about traffic volume, data reduction, and top 10 most compressed applications.

Figure 7-12 vWAAS Central Manager GUI

Besides a GUI, vCM also provides an API that cloud computing implementations can use for monitoring purposes.

NOTE More details related to vWAAS scalability, performance, and licensing can be
found in Chapter 13.

vPath Service Chains


In previous sections, you have learned how vPath simplifies virtual networking service insertion in Nexus 1000V scenarios. This section explores how this technology can help when multiple
services should handle the traffic of a single VM. With this goal, Nexus 1000V supports
service chains, in which a sequence of vPath-enabled virtual networking services is faith-
fully followed whenever a VEM detects a connection to certain virtual machines.

Figure 7-13 details how Nexus 1000V builds a vPath service chain for three different virtual
networking services.

[Figure shows an application user reaching a VIP through CSR 1000V; the VEM, driven by a port profile with a vPath service chain, steers the traffic through NetScaler 1000V, vWAAS, and VSG before delivering it to VM A.]

Figure 7-13 vPath Service Chain in Action

In the figure, Nexus 1000V is configured to implement a vPath service chain enforcing the
following order:
1. NetScaler 1000V
2. vWAAS
3. VSG

A CSR 1000V instance routes the first packet from a client request to a VIP configured in NetScaler 1000V (continuous arrow). After the virtual ADC load balances the connection to VM A, it uses vPath to encapsulate the result and send it back to the VEM (dashed arrows represent vPath-encapsulated traffic).

Following the service chain order associated with VM A, the Nexus 1000V module forwards the packet to vWAAS to verify whether its WAN acceleration algorithms may be applied to the connection.

Again encapsulated in vPath, the original packet is steered to VSG for security policy checking. When the packet finally reaches VM A in its original form, the VEM is already programmed for the return traffic, where

■ The VEM may already deploy VSG's security rule decision.
■ It may not send more packets from that specific connection to vWAAS, in case the connection cannot be accelerated.
■ The module will surely steer the server response to NetScaler 1000V.

Most importantly, this rather complex traffic management is completely hidden under the
service chain definition, which is inserted in a Nexus 1000V port profile. All the steering
and offload decisions are implicitly executed, and will continue to happen even if any of
the virtual services or the VM live migrates to another host.

Fundamentally, a vPath service chain provides policy-defined service insertion for virtual
machines. If desired, other port profiles may implement distinct service chains: for example,
a second service chain may only include vWAAS and VSG services for select VMs.

vPath service chains make it extremely easy for clouds to employ virtual networking ser-
vices as “add-ons” for each application tier on a cloud tenant.

In the case of a failure on any virtual networking service on the chain, Nexus 1000V detects
the lack of connectivity probe (keepalive) responses from the failed service and no longer
steers traffic to it. However, the configured fail-mode in such service will define how the
entire service chain will behave:

■ Fail-mode close: If the virtual networking service fails or loses connectivity to Nexus
1000V, all port profiles associated with the service will drop every packet and the whole
service chain will stop working.
■ Fail-mode open: If the virtual networking service fails or loses connectivity to Nexus
1000V, all port profiles associated with the service will perform frame forwarding as if
the service is not included in the service chain.
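
A hedged Nexus 1000V sketch of such a chain follows, reusing the order from Figure 7-13; node names and profiles are hypothetical, the nodes are assumed to be already defined (as in the earlier VSG example), and the exact syntax varies by Nexus 1000V release:

! Define the chain and the order in which services process traffic
vservice path WEB-CHAIN
  node NS1KV-1 profile SLB-SP order 10
  node VWAAS-1 profile WAAS-SP order 20
  node VSG-1 profile WEB-SP order 30
!
! Attach the whole chain to the VMs through their port profile
port-profile type vethernet WEB-PP
  vservice path WEB-CHAIN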

Undoubtedly, because of its central position for vPath-enabled services, Nexus 1000V is
the most important tool during troubleshooting processes involving service chains.

Virtual Application Containers


Business applications rely on servers, storage, and networking, which includes segment cre-
ation and additional networking services (routing, security, and load balancing, among oth-
ers). As a consequence, the application provisioning process is severely challenged with the
complexity of installing, configuring, and licensing all of these components.

By definition, some level of application isolation is required in any multitenant domain, for multiple reasons, such as security, compliance, or service-level agreements (SLAs). As a
simple example, you can imagine that in a service provider hosting applications for multiple
customers, a single tenant may want to separate applications for internal employees from
those for external partners.

With both situations in mind, data center architects embraced the concept of a network
container to speed up application provisioning and reinforce network isolation in multiten-
ant environments. In summary, a network container can be defined as a set of networking
services configured in a standardized manner.

During the 2000s, to avoid deploying dedicated network devices (switches, routers, fire-
walls, and ADCs) for each tenant, most data center service providers built network contain-
ers using a virtualization technique called device partitioning to better leverage the usage of
networking resources. The following elements represent common network device partitions
that were heavily used during that period:

■ Virtual Routing and Forwarding (VRF) instance: A routing instance that can coexist with several others in the same routing equipment. It is composed of independent routing and forwarding tables, a set of interfaces, and optional routing protocols to exchange routing information with other peers (see the sketch after this list).
■ Firewall context: An independent virtual firewall, with its own security policy, interfac-
es, and administrators. From an administration perspective, deploying multiple contexts
is similar to having multiple standalone devices. Cisco ASA supports multiple contexts
deploying separate routing tables, firewall features, IPS, and management capabilities.
■ ADC context: An abstraction of an independent load balancer with its own interfaces,
configuration, policies, and administrators. This technology was originally deployed in
the former Cisco ADC solution called Application Control Engine (ACE).
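
The VRF sketch referenced in the first bullet follows, in minimal IOS style (hypothetical names, VLAN, and addressing); each tenant receives its own routing and forwarding tables on the shared physical router:

! Create the tenant routing instance
ip vrf TENANT-A
 rd 65000:1
!
! Bind a tenant-facing subinterface to the VRF
interface GigabitEthernet0/1.100
 encapsulation dot1Q 100
 ip vrf forwarding TENANT-A
 ip address 10.100.0.1 255.255.255.0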

Figure 7-14 illustrates four network container examples composed of network partitions
and offered to tenants of a data center service provider.

[Figure shows four network containers (Bronze, Silver, Gold, and Diamond) built from device partitions: VRF instances on a physical router, firewall contexts on a physical firewall, and ADC contexts on a physical ADC, in different combinations serving each tenant application.]

Figure 7-14 Network Container Examples

In all four container options depicted in Figure 7-14, VRF instances provide Layer 3 servic-
es to external networks, while both firewall and SLB contexts provide their specialized net-
working services to one or more applications from a tenant. Using these virtual partitions,
a service provider can logically provision selected networking services without undertaking
the manual procedures that would be necessary if physical appliances were deployed.

As you previously learned in the section “Service Insertion in Physical Networks,” this
scenario uses VLAN manipulation to insert services between clients and servers from an
application. For this objective, an additional VLAN must be provisioned to connect two
networking services (or one service to the application servers). Thus, the bronze, silver, gold,
and diamond network containers, respectively, consume 2, 2, 3, and 5 VLANs from the
4094 that are available on a single service provider network infrastructure.

NOTE For service providers interested in developing such designs, Cisco offers an
extremely useful standardization tool in the form of the Virtualized Multiservice Data
Center (VMDC) reference architecture. VMDC essentially provides a framework for build-
ing multitenant data centers with focus on the integration of networking, computing, server
virtualization, security, load balancing, and system management.

Being also a multitenant environment, cloud computing recycles many principles and best
practices learned from data center service providers. But with the benefits brought by server
virtualization, cloud providers could evolve network containers into virtual application
containers through the addition of three new ingredients:

■ Virtual machines: The simplicity of VM provisioning allows virtual servers to be added to these templates.
■ VXLAN: Used to scale and facilitate network provisioning for tenants.
■ Virtual networking services: Can replace device partitions, decreasing cloud orchestra-
tion operations with the physical network and expanding their capabilities to scale out
(create more virtual appliances) and scale up (allocate more resources to a virtual appli-
ance).

As discussed in Chapter 4, “Behind the Curtain,” standardization is a mandatory requirement for ease of automation. And for this reason, virtual application containers are considered a
key element of cloud computing architecture.

Using its broad portfolio of virtual networking services, Cisco streamlined the creation of
virtual application containers through a solution called Cisco Virtual Application Cloud
Segmentation (VACS). This software package basically automates installation, licensing, and
configuration of multiple virtual services to enable an easy and efficient setup of virtualized
applications.

Cisco VACS provisions application environments through virtual application container templates, which are used for the instantiation of identical virtual application containers.
The solution architecture has three main software components:

■ Cisco Nexus 1000V: Provides network segmentation through VLAN and VXLANs, and
offers vPath-based network insertion
■ Cisco PNSC: Controls the installation, licensing, and configuration of virtual networking
services
■ Unified Computing System (UCS) Director: Cisco orchestration solution that provides
the management interface to deploy, provision, and monitor the whole VACS solution

As Figure 7-15 demonstrates, VACS relieves the cloud orchestrator of executing repetitive (and strongly correlated) tasks to instantiate a virtual application container.

[Figure shows UCS Director receiving requests from a cloud orchestrator (REST API) and a VACS administrator (GUI), and coordinating PNSC, Nexus 1000V, and a VM manager to build per-container sets of CSR 1000V, VSG, vPath, a dedicated VXLAN (5001, 5002, 5003, ...), and web, application, and database VMs.]

Figure 7-15 VACS Architecture

Figure 7-15 highlights two ways to manage VACS:

■ GUI: Used to install VACS components, license virtual networking services, create virtual
application container templates, and instantiate containers from them, if necessary
■ REST API: Interface that allows a cloud orchestrator to instantiate virtual application
containers based on virtual application container templates, as well as decommission con-
tainers that will not be used anymore

Regardless of its origin, when a request for a new container reaches UCS Director, it follows
preexistent workflows that interact with PNSC (to provision the required virtual networking
services), a VM manager (to spawn virtual machines), and Nexus 1000V (to correctly con-
nect these VMs, virtual appliances, and external networks). Assembling like the Avengers,
these elements form a brand new virtual application container.

VACS provides wizards to create “almost-ready-to-go” virtual application container templates, which are commonly referred to as three-tier templates. Figure 7-16 illustrates this
construct.

[Figure shows the three-tier template: CSR 1000V running EIGRP connects the data center network (drawing on management and external IP address pools) to a shared VLAN or VXLAN segment hosting the web, application, and database tiers, with VSG enforcing policies via vPath and addresses taken from an internal IP subnet pool and a VLAN or VXLAN pool.]

Figure 7-16 Three-tier Container Template in VACS

As you can see, this predefined template enforces VSG security policies in three applica-
tion tiers (web, application, and database) that are connected to a shared segment (VLAN or
VXLAN). It also achieves segregation through CSR 1000V acting as a zone-based firewall
and uses EIGRP as the default routing protocol to advertise the public subnet assigned to
the virtual machines. It can also deploy static routes and use NAT to publish internal ser-
vices to external networks.

A three-tier template can be defined as internal (where external networks can only access
the web tier) or external (where all three application tiers can be externally accessed).

To guarantee uniqueness of addresses and segment IDs, this container template must be configured with pools, described in Table 7-2, before it can instantiate any application container.

Table 7-2 Three-tier Virtual Application Container Template Parameters


Management IP address pool: Used by all manageable elements in a container, such as CSR 1000V and VSG.

External IP address pool: Contains public IP addresses that will be used on the CSR 1000V external interface of each container. It should also include, as a configuration parameter, the next-hop IP address if the virtual router is not using any routing protocol.

Internal IP subnet pool: Provides unique IP subnets for all VMs in a three-tier container. As a new container is provisioned, its assigned subnet will be published through the use of a routing protocol such as EIGRP.

VLAN ID pool: Assigned to VLAN-based virtual application containers.

VXLAN ID pool: Assigned to VXLAN-based virtual application containers.

Finally, the three-tier container template also provides predefined VSG vZones for web,
application, and database virtual machines.

NOTE As an exclusive add-on to the web tier from the three-tier container template, you
can add a redundant pair of SLB virtual networking services based on open source HAProxy
(http://www.haproxy.org/).

Unlike three-tier container templates, VACS custom virtual application container templates address specific requirements from a cloud computing environment. In addition to all parameters defined on a three-tier template, a custom template requires the parameters listed and described in Table 7-3.

Table 7-3 Custom Virtual Application Container Template Additional Parameters


Number of application tiers: Defines different tiers of a container, consequently changing the number of zones, networks, and application types.

Tier network: Establishes additional Layer 2 segments (VLAN or VXLAN) that can be appended to isolate specific application tiers in a container.

Security zone: Allows the customization of ACLs that can be applied exclusively to an application tier.

Application Layer Gateway: Permits inspection of the incoming packets for specific protocols such as HTTP, HTTPS, FTP, DNS, ICMP, SQLNET, MSSQL, and LDAP.

NOTE As an exclusive add-on to any tier in the custom container template, you can add a redundant pair of open source HAProxy SLBs.

After you create these container templates, they become available for container creation
requests. And as I have commented before, these requests will generate isolated virtual
application containers.

Although most VACS deployments use the REST API to accept inbound requests from
cloud portals and orchestrators, a simple example of container creation can be demonstrat-
ed through the UCS Director GUI, as shown in Figure 7-17.

Figure 7-17 Requesting an Application Container

In Figure 7-17, a non-administrative user (“demouser”) is requesting the creation of an application container called App-Cont. Because the container template already exists, the user
request generates a workflow iteration to instantiate the template. Figure 7-18 displays the
workflow in action.

Figure 7-18 VACS Workflow

And after all tasks related to components of the App-Cont are executed in the workflow
shown in Figure 7-18, the application container is finally ready to be used. Figure 7-19
depicts the final outcomes of the request, including VMs and virtual networking services
originally specified in the three-tier container template.

Figure 7-19 Virtual Machines from App-Cont

Around the Corner: Service Insertion Innovations


Throughout this chapter, I have explored many service insertion methods, including VLAN manipulation, policy-based routing (PBR), Web Cache Control Protocol (WCCP), and Virtual Services Data Path (vPath). While the advantages from the first three approaches deserve
extended discussions on physical network designs, their benefits tend to pale when com-
pared to the flexibility and simplicity of vPath.

As previously explained, vPath permits the insertion of multiple virtual networking services
through the use of granular policies that can discriminate virtual machines with flexibility
beyond addresses and subnets. But if you think vPath is the last word on service insertion
technologies, you are quite wrong.

Take for example the Cisco Remote Integrated Services Engine (RISE), which is depicted in
action in Figure 7-20.

RISE is intended to simplify one-arm mode implementations of networking services (such as ADCs), abstracting these appliances as “remote modules” of a physical Nexus data center switch.

This perception is achieved through a tight integration between the devices, which enables
select SLB configurations to be automatically reflected in the switch. As Figure 7-20 exem-
plifies, the configuration of a VIP load balancing user sessions to real servers produces two
consequences:

■ Auto-PBR: The Nexus switch automatically configures policy-based routing to steer server responses to the ADC.
■ Route Health Injection (RHI): As soon as there is at least one active server, the ADC
advertises the VIP address through a dynamic routing protocol, allowing end users to be
preferably routed to the ADC site.


Figure 7-20 Cisco RISE in Action


Both RISE and vPath depend on joint efforts between Cisco and other vendors, such as
Citrix. Cisco is also currently leading the development of Network Services Header (NSH),
which is a service chaining protocol based on the early success of vPath. In a nutshell, NSH
includes the following enhancements:
■ It is an open standard, motivating a broad acceptance from a wide variety of networking
services and vendors.
■ It is designed for both physical and virtual networks.
■ It is transport-independent because it can be inserted between the original packet and
any outer network transport encapsulation such as MPLS, VXLAN, or Generic Routing
Encapsulation (GRE).

Along with other vendors, Cisco has proposed an IETF draft, describing all details around
the header and allowing software and hardware vendors to develop NSH-solutions before
the publication of the final standard.

NOTE Cisco Application Centric Infrastructure (ACI) also implements an innovative service insertion technique called service graphs. Because this feature is a component
of this paradigm-shift architecture, I will save this discussion for Chapter 11, “Network
Architectures for the Data Center: SDN and ACI.”

Further Reading
■ Deliver the Next-Generation Intelligent Data Center with Cisco Nexus 7000 Series
Switches, Citrix NetScaler Application Delivery Controller, and RISE Technology:
http://www.cisco.com/c/dam/en/us/products/collateral/switches/nexus-7000-series-
switches/white-paper-c11-731370.pdf
■ Network Service Header: https://tools.ietf.org/html/draft-ietf-sfc-nsh-01

Exam Preparation Tasks

Review All the Key Topics


Review the most important topics in this chapter, denoted with a Key Topic icon in the
outer margin of the page. Table 7-4 lists a reference of these key topics and the page num-
ber on which each is found.

Table 7-4 Key Topics for Chapter 7


Key Topic Element Description Page Number
Figure 7-1 Traffic steering techniques in physical networks 191
Figure 7-3 VSG ready to process traffic 194
Figure 7-4 First packet steered to VSG 195
Figure 7-5 West VEM sending first packet to VM A 195
List ASAv capabilities for cloud tenant resources 198
List CSR 1000V advanced Layer 3 features 200
List SLB configuration elements 202
List Cisco WAAS acceleration algorithms 206
Table 7-2 Three-tier virtual application container template 214
parameters
Table 7-3 Custom virtual application container template addi- 215
tional parameters

Complete the Tables and Lists from Memory


Print a copy of Appendix B, “Memory Tables” (found on the CD), or at least the section
for this chapter, and complete the tables and lists from memory. Appendix C, “Answers to
Memory Tables,” also on the CD, includes completed tables and lists so that you can check
your work.

Define Key Terms


Define the following key terms from this chapter, and check your answers in the glossary:

networking service, firewall, server load balancer (SLB), WAN acceleration, service inser-
tion, policy-based routing (PBR), Web Cache Control Protocol (WCCP), virtual networking
service, Virtual Data Path (vPath), Virtual Security Gateway (VSG), Adaptive Security Virtual
Appliance (ASAv), Cloud Services Router (CSR) 1000V, NetScaler 1000V, Virtual Wide
Area Application Services (vWAAS), Virtual Application Cloud Segmentation (VACS)

This chapter covers the following topics:

■ What Is Data Storage?

■ Hard Disk Drives

■ RAID Levels

■ Disk Controllers and Disk Arrays

■ Volumes

■ Accessing Blocks

■ Fibre Channel Basics

■ SAN Designs

■ Virtual SANs

■ Internet SCSI

■ Cloud Computing and SANs

This chapter covers the following exam objectives:

■ 5.1 Describe storage provisioning concepts


■ 5.1.a Thick
■ 5.1.b Thin
■ 5.1.c RAID
■ 5.1.d Disk pools
■ 5.2 Describe the difference between all the storage access technologies
■ 5.2.b Block technologies

■ 5.3 Describe basic SAN storage concepts


■ 5.3.a Initiator, target, zoning
■ 5.3.b VSAN
■ 5.3.c LUN

■ 5.5 Describe the various Cisco storage network devices


■ 5.5.a Cisco MDS family
■ 5.5.c UCS Invicta (Whiptail)
CHAPTER 8

Block Storage Technologies


The capability to store data for later use is part of any computer system. However, how
such systems write and read data in a storage device has been handled in many different
ways since the beginning of computer science.

Obviously, cloud computing projects cannot avoid this topic. To actually provide self-
catered services to end users, cloud architects must select a methodology to store applica-
tion data that is both efficient and strikes a good balance between performance and cost.
In some cases, data storage itself may be offered as a service, enabling consumers to use a
cloud as their data repository.

Within such context, the CLDFND exam requires knowledge about the basic principles of
block storage technologies, including provisioning concepts, storage devices, access meth-
ods, and Cisco storage network devices. This chapter starts with the most basic of these
principles by providing a formal definition of what constitutes storage. It then examines
different types of storage devices, hard disk drives (and associated technologies), main
block storage access methods, storage-area networks (SANs), and common SAN topologies.
Finally, the chapter correlates these technologies to the current state of cloud computing,
providing a clear application of these technologies to dynamic cloud environments.

“Do I Know This Already?” Quiz


The “Do I Know This Already?” quiz allows you to assess whether you should read this
entire chapter thoroughly or jump to the “Exam Preparation Tasks” section. If you are in
doubt about your answers to these questions or your own assessment of your knowledge
of the topics, read the entire chapter. Table 8-1 lists the major headings in this chapter and
their corresponding “Do I Know This Already?” quiz questions. You can find the answers in
Appendix A, “Answers to Pre-Assessments and Quizzes.”

Table 8-1 “Do I Know This Already?” Section-to-Question Mapping


Foundation Topics Section Questions
What Is Data Storage? 1
Hard Disk Drives 2
RAID Levels 3
Disk Controllers and Disk Arrays 4
Volumes 5
Accessing Blocks 6
Fibre Channel Basics 7–8
SAN Designs 9
Virtual SANs 10
Internet SCSI 11
Cloud Computing and SANs 12

1. Which of the following is a correct statement?


a. HDDs are considered primary storage.
b. DVD drives are considered secondary storage.
c. Tape libraries are considered tertiary storage.
d. RAM is not considered primary storage.
2. Which of the following represents data addressing in HDDs?
a. Bus/target/LUN
b. World Wide Name
c. Cylinder/head/sector
d. Domain/area/port
3. Which option describes an incorrect RAID level description?
a. RAID 0: Striping
b. RAID 1: Mirroring
c. RAID 6: Striping, double parity
d. RAID 10: Striping, multiple parity
4. Which of the following is not an array component?
a. SATA
b. Controller
c. JBOD
d. Disk enclosure
e. Access ports
5. Which of the following devices can potentially deploy volume thin provisioning?
(Choose all that apply.)
a. JBOD
b. Storage array
c. Server
d. Dedicated appliance
e. HDD
6. Which of the following are block I/O access methods? (Choose all that apply.)
a. ATA
b. SCSI
c. SAN
d. NFS
e. SQL

7. Which of the following options is incorrect?


a. FC-0: Physical components
b. FC-1: Frame transmission and signaling
c. FC-3: Generic services
d. FC-4: ULP mapping
8. Which of the following is correct concerning Fibre Channel addressing?
a. WWNs are used for logical addressing.
b. FCIDs are used for physical addressing.
c. WWNs are used in Fibre Channel frames.
d. FSPF is used to exchange routes based on domain IDs.
e. FCIDs are assigned to N_Ports and F_Ports.
9. Which of the following is an accurate list of common SAN topologies?
a. Collapsed core, spine-leaf, core-aggregation-access
b. Collapsed core, spine-leaf, core-distribution-access
c. Collapsed core, core-edge, core-aggregation-access
d. Collapsed core, core-edge, edge-core-edge
e. Collapsed core, spine-leaf, edge-core-edge
10. Which of the following is not an advantage for the deployment of VSANs?
a. Interoperability
b. Multi-tenancy
c. Zoning replacement
d. Security
e. Isolation
11. Which of the following is incorrect about iSCSI?
a. Standardized in IETF RFC 3720.
b. Employs TCP connections to encapsulate SCSI traffic.
c. Employs TCP connections to encapsulate Fibre Channel frames.
d. Deployed by iSCSI initiators, iSCSI targets, and iSCSI gateways.
e. Initiators and targets are usually identified through IQN.

12. Which is the most popular block I/O access method for cloud offerings of Storage as
a Service?
a. Fibre Channel
b. iSCSI
c. FCIP
d. iFCP
e. NFS

Foundation Topics

What Is Data Storage?


Applications should be able to receive data from and provide data to end users. To support
this capability, data centers must dedicate resources to hold application data effectively
while supporting at least two basic input and output (I/O) operations: writing and reading.

In parallel to the evolution of computing, data storage technologies were developed to achieve these objectives, using multiple approaches. Storage devices are generally separated
into the following three distinct classes, depending on their speed, endurance, and proxim-
ity to a computer central processing unit (CPU):

■ Primary storage: The volatile storage mechanisms in this class can be directly accessed
by the CPU, usually have small capacity, and are relatively faster than other storage
technologies. Primary storage is also known as main memory, which was discussed in
Chapter 5, “Server Virtualization.”
■ Secondary storage: The devices in this class require I/O channels to transport their data
to the computer system processor because they are not directly accessible to the CPU.
Secondary storage devices deploy nonvolatile data, have more storage capacity than pri-
mary storage, provide longer access times, and are also known as auxiliary memory.
■ Tertiary storage: This class represents removable mass storage media whose data access
time is much longer than that of secondary storage. These solutions are the most cost
effective among all storage types, providing massive capacity for long-term retention.

Figure 8-1 exemplifies these classes of storage technologies.

Tape Library
Latency

Hard Disk Drive

Memory Modules

Capacity

Figure 8-1 Storage Technology Classification


Photo Credits: finallast, destina, kubais

Figure 8-1 uses two parameters to characterize each technology: latency, which is the length
of time it takes to retrieve saved data from a resource, and capacity, which represents the
maximum amount of data a device can hold.

Random-access memory (RAM) provides a relatively low latency (from 8 to 30 nanoseconds) and, for that reason, is the most common main memory device. RAM chips are composed of multiple simple structures that contain transistors and capacitor sets that can store a single bit. When compared to other storage technologies, these devices have relatively low capacity (tens of gigabytes at the time of this writing).

Because the energy stored in the RAM capacitors leaks, the stored information must
be constantly refreshed, characterizing the most common type of memory used today:
dynamic RAM (DRAM). Represented in the bottom left of Figure 8-1, the DRAM chips
(little black rectangle) are sustained by a physical structure that is commonly referred to as a
memory module.

In the upper-right side of Figure 8-1, tertiary storage technologies are represented by a tape
library. This rather complex system comprises multiple tape drives, abundant tape car-
tridges, barcode readers to identify these cartridges, and a robotic arm to load tapes to the
drives after an I/O request is issued. Because of its myriad mechanical components, a tape
library introduces a much longer latency (a few minutes) when compared to other storage
technologies. Notwithstanding, a tape library provides an excellent method for long-term
archiving because of its capacity, which is usually measured in exabytes (EB).

Depicted in the middle of Figure 8-1, hard disk drives (HDDs) epitomize the most com-
monly used secondary storage technology today, offering latency of around 10 ms and
capacity of a few terabytes per unit. These devices offer a great cost-benefit ratio when
compared to RAM and tape libraries, while avoiding the volatile characteristics of the former and the extreme latency of the latter.

The next section discusses hard disk drives, exploring the internal characteristics of this sea-
soned technology.

Hard Disk Drives


HDDs are widely popular storage devices whose functionality is based on multiple platters
(disks) spinning around a common axis at a constant speed. Electromagnetic movable heads,
positioned above and below each disk, read (or write) data by moving over the surface
of the platter either toward its center or toward its edge, depending on where the data is
stored.
In these devices, data is stored in concentric sectors, which embody the atomic data units of
a disk and accommodate 512 bytes each. A track is the set of all sectors that a single actua-
tor head can access when maintaining its position while the platters spin. With around 1024
tracks on a single disk, all parallel tracks from the disks form a cylinder. As a result, each
specific sector in an HDD is referenced through a three-part address composed of cylinder/
head/sector information.

Figure 8-2 illustrates how these components are arranged in a hard disk drive structure.


Figure 8-2 Hard Disk Drive

When a read operation requests data from a defined cylinder/head/sector position, the
HDD mechanical actuator positions all the heads at the defined cylinder until the required
sector is accessible to the defined head.

I/O operations for a single sector from an HDD are rare events. Customarily, these requests
refer to multiple sector clusters, which are composed of two, four, or another power of
two contiguous sectors. I/O operations directed to nonconsecutive sectors increase data
access time and worsen application performance.

At this point, it is relatively easy to define a fundamental unit of storage access: a block can
be understood as a sequence of bytes, with a defined length (block size), that embodies
the smallest container for data in any storage device. In the case of an HDD, a block can be
directly mapped to a sector cluster.
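
As a hedged illustration, the following Python sketch converts between a linear block address and the cylinder/head/sector scheme. The geometry constants are illustrative assumptions only; real drives report logical geometries that rarely match their physical layout.

# Hypothetical drive geometry, chosen only for illustration.
HEADS_PER_CYLINDER = 16   # tracks stacked into one cylinder
SECTORS_PER_TRACK = 63    # sectors are conventionally numbered from 1

def lba_to_chs(lba):
    """Map a linear block address to a cylinder/head/sector tuple."""
    cylinder = lba // (HEADS_PER_CYLINDER * SECTORS_PER_TRACK)
    head = (lba // SECTORS_PER_TRACK) % HEADS_PER_CYLINDER
    sector = (lba % SECTORS_PER_TRACK) + 1
    return cylinder, head, sector

def chs_to_lba(cylinder, head, sector):
    """Inverse mapping: locate one 512-byte sector by its three-part address."""
    return (cylinder * HEADS_PER_CYLINDER + head) * SECTORS_PER_TRACK + (sector - 1)

print(lba_to_chs(1048576))        # (1040, 4, 5)
print(chs_to_lba(1040, 4, 5))     # 1048576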

RAID Levels
Although HDDs continue to be widely utilized in modern data centers, their relatively low
mean time between failures (MTBF) is a serious concern for most application caretakers. As
an electromagnetic device, an HDD is consequently subject to malfunctions caused by both
electronic and mechanical stresses, such as physical bumps, electric motor failure, extreme
heat, or sudden power failure while the disk is writing.

Furthermore, the maximum capacity of a single HDD unit may not be enough for some
application requirements. (If you’re like me, you have a highly sophisticated data distribu-
tion method to avoid filling up your personal computer’s HDD, and have data redundancy
in case the device fails for any reason.) Considering that it would be extremely counterpro-
ductive if each application had its own data management algorithm, automatic methods of
distribution were developed to deal with the specific characteristics of HDDs.

Formally defined in the 1988 paper “A Case for Redundant Arrays of Inexpensive Disks
(RAID),” by David A. Patterson, Garth A. Gibson, and Randy Katz, RAID (now commonly
called redundant array of independent disks) can be seen as one of the most common
storage virtualization techniques available today. In summary, RAID aggregates data blocks
from multiple HDDs to achieve high storage availability, increase capacity, and enhance I/O
performance. As a consequence, a RAID group sustains the illusion of a single virtual HDD
with these additional benefits.

Figure 8-3 portrays some of the most popular RAID levels, where each one employs a dif-
ferent block distribution scheme for the involved HDDs. Table 8-2 describes each one of
these levels.


Figure 8-3 Select RAID Levels

Table 8-2 Select RAID Levels


Level Description
RAID 0 In this level, sequential blocks of data are written across multiple drives (called
striping). Figure 8-3 depicts a sequence of ten blocks (0 to 9) being striped
between two HDDs, which is the minimum quantity of devices for RAID 0. The
method does not provide any data redundancy, because a disk failure results in
total data loss. However, when compared to a lonely disk drive with similar capac-
ity, this RAID level improves I/O performance for one reason: it supports simulta-
neous reads or writes on all drives.
RAID 1 Also known as mirroring, RAID 1 makes sure that every write operation at one
device is duplicated to another device. If one of the disks fails, data can be com-
pletely recovered from its mirrored pair. This level adds latency for write opera-
tions, because they must occur twice. RAID 1 can use only half of the overall
capacity of the RAID group.
RAID 5 A popular method, RAID 5 has better balance between capacity and I/O perfor-
mance when compared with other RAID levels. In a nutshell, RAID 5 deploys data
block striping over a group of HDDs (minimum of three) and builds additional
parity blocks that can be used to recover an entire sequence of blocks in the
absence of an entire drive. Unlike the other parity-based methods (such as RAID
3 and 4), RAID 5 evenly distributes the parity blocks among the HDDs, which
enhances I/O performance because a block change only generates an additional
operation in another disk (the one that contains the parity block).

RAID 6 This level addresses one of the main complaints about RAID 5: the long time
period required to rebuild the RAID group in the case of a drive failure (all blocks
from a lost disk must be recalculated and saved on the spare HDD). To address this
issue, RAID 6 creates two parity blocks in different drives for each block stripe,
avoiding immediate RAID reconstruction in the case of a single device failure,
while providing fault tolerance for one more failed disk. For such reasons, RAID 6
is one of the most popular aggregation methods deployed today.
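
To make the parity mechanism concrete, the short Python sketch below computes a RAID 5-style parity block as the XOR of the data blocks in a stripe and then rebuilds a lost block from the survivors. This is a conceptual illustration only; real controllers operate on much larger blocks and rotate the parity position across drives as shown in Figure 8-3.

from functools import reduce

def xor_blocks(blocks):
    """XOR equal-length byte strings; the core of RAID 5 parity."""
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*blocks))

data = [b'\x11\x22', b'\x33\x44', b'\x55\x66']   # one stripe, three drives
parity = xor_blocks(data)                        # stored on a fourth drive

# Simulate losing the second drive: the XOR of the survivors and the
# parity block recovers its contents exactly.
rebuilt = xor_blocks([data[0], data[2], parity])
assert rebuilt == data[1]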

With time, human creativity spawned alternative disk aggregation methods that were not
contemplated in the original RAID level definitions. Figure 8-4 depicts two nested RAID
levels, which essentially combine two RAID schemes to achieve different capacity, availabil-
ity, and performance characteristics.


Figure 8-4 Nested RAID Levels

Table 8-3 describes both RAID 01 and RAID 10.

Table 8-3 Nested RAID Levels


Level Description
RAID 01 Also known as RAID 0/1 or RAID 0+1, this RAID level employs a pair of mir-
rored HDDs and stripes data over them, as shown on the left in Figure 8-4.
RAID 10 Also known as RAID 1/0 or RAID 1+0, this RAID level essentially mirrors
groups of striped disks, as shown on the right in Figure 8-4.

Both methods were conceived to leverage the best characteristics of RAID 0 and RAID 1
(redundancy and performance) without the use of parity calculation.

Disk Controllers and Disk Arrays


The previous section explained how hard disk drives can be aggregated into RAID groups,
but it didn’t mention which mechanism is actually performing data striping and mirroring,
calculating parity, and rebuilding a RAID group after a disk fails.

The mastermind device behind all this work is generically called storage controller. As illus-
trated in Figure 8-5, a storage controller is usually deployed as an additional expansion card
providing an I/O channel between the CPU and internal HDDs within an application server.


Figure 8-5 Storage Controller in a Server

As shown in Figure 8-5, the storage controller manages the internal HDDs of a server (which
are usually inserted in the server’s front panel) and external drives. In summary, the storage
controller can potentially deploy multiple RAID groups in a single server.

There is not much use for data residing on a hard disk drive that is encased in (or solely
connected to) a single application server if that server fails. Moreover, the data storage
demands of an application may well surpass the locally available capacity of a single server.

With such incentives, it was only a matter of time before data at rest could be decoupled
from application servers. And in the context of HDDs, this separation was achieved through
two different devices: JBODs and disk arrays.

JBOD (just a bunch of disks) is basically an unmanaged set of hard drives that can be indi-
vidually accessed by application servers through their internal storage controllers. For this 8
reason, JBODs generally cannot match the level of management, availability, and capacity
required in most data center facilities.

Conversely, a disk array contains multiples disks that are managed as a scalable resource
pool shared among several application environments. The following list describes the main
components of a disk array, which are depicted on the left side of Figure 8-6:

■ Array controllers: Control the array hardware resources (such as HDDs) and coordinate
the server access to data contained in the device. These complex storage controllers are
usually deployed as a pair to provide high availability to all array processes.
■ Access ports: Interfaces that are used to exchange data with application servers.
■ Cache: RAM modules (or other low-latency storage technology such as flash memory)
that are used to reduce access latency for stored data. It is generally located at the con-
troller or on expansion modules (not depicted in Figure 8-6).
■ Disk enclosures: Used for the physical accommodation of HDDs as well as other storage
devices. Usually, a disk enclosure gathers HDDs with similar characteristics.
■ Redundant power sources: Provide electrical power for all internal components of a
disk array.


Figure 8-6 Disk Array Components

The right side of Figure 8-6 visualizes the two types of disk array interconnections: front-
end (used for server access) and back-end (used for the internal disk connection to the array
controller). Both types of connections support a great variety of communication protocols
and physical media, as you will learn in future sections.

With their highly redundant architecture and management features, disk arrays have become
the most familiar example of permanent storage in current data centers. At the time of this
writing, the main disk array vendors are EMC Corporation, Hitachi Data Systems (HDS),
IBM, and NetApp Inc.

Besides providing sheer storage capacity, disk arrays also offer advanced capabilities. One
of them is called dynamic disk pool, which can overcome the following challenges of RAID
technologies within disk arrays:

■ Same drive size: Regardless of chosen RAID level, the striping, mirroring, and parity cal-
culation procedures require that all HDDs inside a RAID group share the same size. This
characteristic can become operationally challenging as HDDs continue to evolve their
internal storage capacity.
■ High rebuild time: Even with RAID 6, when a drive fails, the array controller must read
every single block of the remaining drives before reconstructing the failed RAID group.
This routine operation can consume a considerable chunk of time, depending on the
number of RAID group members.
■ Low number of disks: Because of the previously described rebuild process, disk array
vendors usually do not recommend that the number of RAID group members surpasses
15 to 20 disks (even though disk arrays may enclose hundreds of HDDs).
■ Need of hot spare units: It is a usual procedure to leave unused the HDDs that are ready
to replace failed members of a RAID group. Because these hot spare drives are effectively
used after a rebuilding process, they further reduce the disk array overall storage capacity.

The secret behind dynamic disk pools is pretty simple, as with most smartly conceived
technologies: they still implement principles underlying RAID but in a much more granular
way. Within a pool, each drive is broken into smaller pieces deploying RAID levels, rather
than using the whole HDD for that intent.

Some vendors, such as NetApp, allow array administrators to create a single pool containing
potentially hundreds of HDDs from the same array. In the case of NetApp disk arrays based
on the SANtricity storage operating system, the lowest level of disk pools is called D-Piece,
representing 512 MB of data stored in a disk.

Ten D-Pieces form a D-Stripe, containing 5120 MB of data. Each D-Stripe constitutes a
RAID 6 group of eight data D-Pieces and two parity D-Pieces. The D-Pieces are then ran-
domly distributed over multiple drives in the pool through an intelligent algorithm that
guarantees load balancing and randomness.

Figure 8-7 exemplifies a disk pool implementation.


Figure 8-7 Disk Pool Data Distribution

In Figure 8-7, the squares represent D-Pieces belonging to three different D-Stripes, which
are portrayed in different colors. And as you may notice, each D-Stripe is fairly distributed
over 12 HDDs of different sizes. In the case of a drive failure, the array controller must only
rebuild the D-Stripes that are stored on that device. And statistically, as the number of pool
members increases, fewer D-Stripes are directly affected when a single HDD fails.

Another key characteristic of disk pools is that they do not require hot spare disks, because
rebuilding processes only need spare D-Pieces. Using Figure 8-7 as an example, if HDD1
fails, the array controller can use available space in the pool to rebuild D-Piece D1 from the
white D-Stripe and a parity D-Piece from the gray D-Stripe.
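
Under simplified assumptions, the Python sketch below mimics this behavior: it scatters 1,000 ten-piece D-Stripes across a pool at random (the real SANtricity algorithm additionally guarantees balance) and counts how many stripes the loss of one drive would actually touch, for two pool sizes.

import random

PIECES_PER_STRIPE = 10   # 8 data D-Pieces plus 2 parity D-Pieces (RAID 6)
rng = random.Random(42)  # fixed seed so the example is repeatable

for disks in (12, 96):   # small pool (as in Figure 8-7) versus large pool
    stripes = [rng.sample(range(disks), PIECES_PER_STRIPE) for _ in range(1000)]
    # Only the D-Stripes holding a D-Piece on the failed drive need rebuilding.
    affected = sum(1 for stripe in stripes if 0 in stripe)
    print(f"{disks}-disk pool: {affected} of 1000 stripes touched by one failure")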

Volumes
As you may have already realized, storage technologies employ abstractions on top of
abstractions. In fact, when an array administrator has created a RAID group or a disk pool,
he rarely provisions its entirety to a single server. Instead, volumes are created to perform
this role. And although the term volume may vary tremendously depending on the context
in which you are using it, in this discussion it represents a logical disk offered to a server by
a remote storage system such as a disk array.

Most HDDs have their capacity measured in terabytes, while RAID groups (or disk pools)
can easily reach tens of terabytes. If a single server requires 2 TB of data storage capacity
for its use, provisioning four 1-TB drives for a RAID 6 group dedicates two drives to parity
functions. Additionally, if the application server needs more capacity, the RAID group
resizing process may take a long time due to the block redistribution over the additional disks.

Volumes allow a more efficient and dynamic way to supply storage capacity. As a basis for
this discussion, Figure 8-8 exhibits the creation of three volumes within two aggregation
groups (RAID groups or disk pools).

Figure 8-8 Volumes Defined in Aggregation Groups

In Figure 8-8, three volumes of 6 TB, 3 TB, and 2 TB are assigned, respectively, to Server1,
Server2, and Server3. In this scenario, each server has the perception of a dedicated HDD
and commonly uses a piece of software called a Logical Volume Manager (LVM) to create
local partitions (subvolumes) and perform I/O operations on the volume on behalf of the
server applications. The advantages of this intricate arrangement are

■ The volumes inherit high availability, performance, and aggregate capacity from a RAID
group (or disk pool) that a single physical drive cannot achieve.
■ As a purely logical entity, a volume can be dynamically resized to better fit the needs of
servers that are consuming array resources.

There are two ways a storage device can provision storage capacity. Demonstrating the provisioning method called thick provisioning, Figure 8-9 details a 6-TB volume being offered to Server1.

Figure 8-9 Thick Provisioning



In Figure 8-9, the array spreads the 6-TB volume over members of Aggregation Group 1
(RAID or disk pool). Even if Server1 is only effectively using 1 TB of data, the array con-
trollers dedicate 6 TB of actual data capacity for the volume and leave 5 TB completely
unused. As you may infer, this practice may generate a huge waste of array capacity. For
that reason, another method of storage provisioning, thin provisioning, was created, as
Figure 8-10 shows.


Figure 8-10 Thin Provisioning

In Figure 8-10, a storage virtualizer provides the perception of a 6-TB volume to Server1,
but only stores in the aggregate group what the server is actually using, thereby avoiding
waste of array resources due to unused blocks.
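
The difference between the two methods can be sketched in a few lines of Python; the class below is a hypothetical illustration of allocate-on-write behavior, not an actual array implementation.

class ThinVolume:
    """Advertises a large volume but consumes pool capacity only on writes."""
    BLOCK_SIZE = 512

    def __init__(self, advertised_bytes):
        self.advertised_bytes = advertised_bytes
        self.allocated = {}                  # block number -> stored data

    def write(self, block_number, data):
        self.allocated[block_number] = data  # pool space is claimed only here

    def used_bytes(self):
        return len(self.allocated) * self.BLOCK_SIZE

volume = ThinVolume(6 * 2**40)               # Server1 sees a 6-TB disk
volume.write(0, b'\x00' * 512)
print(volume.advertised_bytes, volume.used_bytes())  # 6 TB seen, 512 B consumed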

NOTE Although a complete explanation of storage virtualization techniques is beyond the scope of this book, I would like to point out that such technologies can be deployed on storage devices, on servers, or even on dedicated network appliances.

From the previous sections, you have learned the basic concepts behind storing data in
HDDs, RAID groups, disk pools, and volumes. Now it is time to delve into the variety of
styles a server may deploy to access data blocks in these storage devices and constructs.

Accessing Blocks
By definition, block storage devices offer computer systems direct access to and complete
control of data blocks. And with the popularization of x86 platforms and HDDs, multiple
methods of accessing data on these storage devices were created. These approaches vary
widely depending on the chosen components of a computer system in a particular scenario
and can be based on direct connections or storage-area network (SAN) technologies.

Nonetheless, it is important that you realize that all of these arrangements share a common
characteristic: they consistently present to servers the abstraction of a single HDD exchang-
ing data blocks through read and write operations. And as a major benefit from block stor-
age technologies, such well-meaning deception drastically increases data portability in data
centers as well as cloud computing deployments.

Advanced Technology Attachment


Simply known as ATA, Advanced Technology Attachment is the official name that the
American National Standards Institute (ANSI) uses for multiple types of connections
between x86 platforms (personal computers or servers) and internal storage devices.

Parallel Advanced Technology Attachment (PATA) was created in 1986 to connect IBM
PC/AT microcomputers to their internal HDDs. Also known as Integrated Drive Electronics
(IDE), this still-popular interconnect can achieve up to 133 MBps with ribbon parallel cables.

As its name infers, an IDE disk drive has integrated storage controllers. From a data access
perspective, each IDE drive is an array of 512-byte blocks reachable through a simple com-
mand interface called basic ATA command set. All ATA interface standards are defined
by the International Committee for Information Technology Standards (INCITS) Technical
Committee T13, which is accredited by ANSI.

Figure 8-11 portrays an ATA internal HDD and the well-known ribbon cable used on such
devices, whose maximum length is 46 centimeters (18 inches).

Figure 8-11 IDE HDD and ATA Cable


Photo Credits: Vladimir Agapov, estionx, dmitrydesigner, charcomphoto

Serial Advanced Technology Attachment (SATA) constitutes an evolution of the ATA architecture for internal HDD connections. First standardized in 2003 (also by the T13 Technical
Committee), SATA can achieve 6 Gbps and remains a popular internal HDD connection for
servers and disk arrays. Figure 8-12 depicts a SATA disk and a SATA cable, which can reach
up to 1 meter (or 3.3 feet) of length.

Figure 8-12 SATA Disk and SATA Cable


Photo credit: charcomphoto; Vladimir Agapov

Both PATA and SATA received multiple enhancements that have resulted in standards such
as ATA-2 (Ultra ATA), ATA-3 (EIDE), external SATA (eSATA), and mSATA (mini-SATA).

TIP All ATA technologies defined in this section provide a direct connection between a
computer and storage device, characterizing a direct-attached storage (DAS) layout.

Small Computer Systems Interface


The set of standards that defines the Small Computer System Interface (SCSI, pronounced
scuzzy) was developed by the INCITS Technical Committee T10. Like ATA, SCSI defines
how data is transferred between computers and peripheral devices. With that intention,
SCSI also describes all operations and formats to provide compatibility between different
vendors.

To discover the origins of most SAN-related terms, let’s take a jump back to 1986, the year
SCSI-1 (or SCSI First Generation) was officially ratified. A typical SCSI implementation at
that time followed the physical topology illustrated in Figure 8-13.


Figure 8-13 SCSI Parallel Topology

Figure 8-13 depicts a SCSI initiator, a computer system that could access HDDs (SCSI
targets), connected to a SCSI bus. The bus was formed through daisy-chained connec-
tions that required two parallel ports on each target and several ribbon cables. The bus also
required a terminator at the end devices to avoid signal reflection and loss of connectivity.

The SCSI parallel bus deployed half-duplex communication between initiator and targets,
where only one device could transmit at a time. Each device had an assigned SCSI identi-
fier (SCSI ID) to be referenced and prioritized on the shared transmission media. These IDs
were usually configured manually, and it was recommended that the initiator always had the
maximum priority (7, in Figure 8-13).

Inside the bus, the initiator communicated with logical units (LUs) defined at each target
device and identified with a logical unit number (LUN). Whereas most storage devices only
supported a single LUN, some HDDs could deploy multiple logical storage devices.

An initiator sent SCSI commands to a logical unit using a bus/target/LUN address. Using
this address, a logical device could be uniquely located even if the initiator deployed more
than one SCSI host bus adapter (HBA) connected to multiple buses.

In SCSI, I/O operations and control commands were sent using command descriptor blocks
(CDBs), whose first byte symbolized the command code and the remaining data, its parame-
ters. Although these commands also included device testing and formatting operations, read
and write operations are the most frequent between the initiator and its targets. In essence,
each command was issued to interact with a selected part of a LUN and to perform an I/O
operation.
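
For instance, the 10-byte READ command (operation code 0x28) carries the starting logical block address in bytes 2 through 5 and the transfer length in bytes 7 and 8. The Python sketch below packs such a CDB; it is a byte-layout illustration, not a functional SCSI stack.

import struct

def read10_cdb(lba, num_blocks):
    """Build a READ(10) CDB: opcode, flags, 32-bit LBA, group number,
    16-bit transfer length, and control byte (big-endian fields)."""
    return struct.pack('>BBIBHB', 0x28, 0, lba, 0, num_blocks, 0)

cdb = read10_cdb(lba=2048, num_blocks=8)   # read 8 blocks starting at LBA 2048
print(cdb.hex())                           # 28000000080000000800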

The SCSI Parallel Interface (SPI) architecture represented in Figure 8-13 evolved in the following years, achieving speeds of 640 MBps and connecting up to 16 devices on a single bus. Yet, its communication was still half-duplex and connections could not surpass 25 meters (82 feet).

To eliminate unnecessary development efforts and to support future SCSI interfaces and
cabling, the T10 committee created the SCSI Architecture Model (SAM). In summary, this
model proposed the separation between physical components and SCSI commands. Figure
8-14 illustrates the structure of this model.


Figure 8-14 SCSI Architecture Model

In the SCSI third-generation standard (SCSI-3), SAM defines SCSI commands (primary and
specific for each peripheral) that are completely independent from the SCSI interconnects
and their respective protocols, allowing an easy portability to other types of interconnects.

For example, Serial Attached SCSI (SAS) was specifically designed to overcome the limitations of the SCSI Parallel Interface. Defined as a point-to-point serial connection, SAS can
attach two devices at speeds of up to 6 Gbps over distances shorter than 10 meters (33 feet).
This standard has achieved relative popularity with internal HDDs and, like SATA, is also
commonly used on the back end of multiple disk array models.
SATA, is also commonly used on the back end of multiple disk array models.

NOTE In the mid-1990s, the Small Form Factor (SFF) committee introduced ATA Packet
Interface (ATAPI), which extended PATA to other devices, such as CD-ROMs, DVD-
ROMs, and tape drives. More importantly, ATAPI allowed an ATA physical connection to
carry SCSI commands and data, serving as an alternative interconnect for this widespread
standard. On the other hand, SAS offers compatibility with SATA disks through the SATA
Tunneling Protocol (STP). Therefore, ATA commands can be carried over SCSI media (with
STP) and vice versa (with ATAPI).

Another popular interconnect option is Fibre Channel (FC), which can be defined as a
series of protocols that provides high-speed data communication between computer sys-
tems. Created in 1988, Fibre Channel has established itself as the main SAN protocol, pro-
viding block I/O access between servers and shared storage devices. Fibre Channel currently
offers speeds of up to 16 Gbps, 10-km reach, and potentially millions of connected hosts in
a single network.

Internet SCSI (iSCSI) is essentially a SAN technology that allows the transport of SCSI
commands and data over a TCP connection. An iSCSI session can be directed to an iSCSI
port on a storage array or a gateway between the iSCSI initiator and storage devices con-
nected through other SAN protocols.

Both SAN technologies will be further explored in the following sections.

Fibre Channel Basics


Since its ratification in 1994 by the INCITS Technical Committee T11, Fibre Channel has
become the most popular protocol for enterprise and service provider storage-area net-
works. Originally conceived to offer different types of data transport services to its con-
nected nodes, Fibre Channel enables communication devices to form a special kind of net-
work called fabric.

With speeds starting at 1 Gbps, Fibre Channel offers transport services to higher-level pro-
tocols such as SCSI, IBM’s Single-Byte Command Code Sets (SBCCS) for mainframe storage
access, and the Internet Protocol (IP).

Like the majority of networking architectures, Fibre Channel does not follow the Open
Systems Interconnection (OSI) model. Instead, it divides its protocols and functions into
different hierarchical layers with predefined responsibilities, as described in Table 8-4 and
exhibited in Figure 8-15.

Table 8-4 Fibre Channel Layers


Level Description
FC-0 Defines all physical components of a Fibre Channel connection, such as media (fiber
or copper), connectors, and transmission parameters.
FC-1 Performs encoding and error control. Some Fibre Channel connections (1, 2, 4, and 8
Gbps) use the 8B/10B transmission encoding, where 10 bits are transmitted to represent
an 8-bit symbol. On the other hand, 16-Gbps Fibre Channel connections use the
64B/66B transmission encoding (a usable-throughput sketch appears at the end of
this section).

Level Description
FC-2 Includes the frame structure and byte sequences.
FC-3 Deploys the set of services common to any Fibre Channel fabric, such as time distri-
bution and security capabilities.
FC-4 Provides the mapping between an upper-layer protocol (ULP), such as SCSI,
SBCCS, or IP, and the other Fibre Channel layers.


Figure 8-15 Fibre Channel Layers

Figure 8-15 represents how the following T11 standards fit into the Fibre Channel five-layer
model:

■ Fibre Channel Physical and Signaling Interface (FC-PH)


■ Fibre Channel Physical Interface (FC-PI)
■ Fibre Channel Framing and Signaling (FC-FS)
■ Fibre Channel Generic Services (FC-GS)
■ Fibre Channel Arbitrated Loop (FC-AL)
■ Fibre Channel Fabric and Switch Control Requirements (FC-SW)
■ Fibre Channel Protocol for SCSI (SCSI-FCP)
■ FC Mapping for Single Byte Command Code Sets (FC-SB)
■ Transmission of IPv4 and ARP Packets over Fibre Channel (IPv4FC)

TIP You can find much more detailed information about Fibre Channel standards at
http://www.t11.org/t11/als.nsf/v2guestac.
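
As a back-of-the-envelope illustration of why the FC-1 encodings matter, the sketch below estimates usable throughput from the nominal line rate; the 1.0625- and 14.025-Gbaud figures are the nominal rates of 1-Gbps and 16-Gbps Fibre Channel, respectively.

def usable_gbps(line_rate_gbaud, encoding):
    """Usable bit rate after subtracting FC-1 line-encoding overhead."""
    efficiency = {'8b10b': 8 / 10, '64b66b': 64 / 66}
    return line_rate_gbaud * efficiency[encoding]

print(usable_gbps(1.0625, '8b10b'))    # ~0.85 Gbps of payload (about 100 MBps)
print(usable_gbps(14.025, '64b66b'))   # ~13.6 Gbps of payload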

Fibre Channel Topologies


Fibre Channel supports three types of topologies:

■ Point-to-point: Connects two Fibre Channel–capable hosts without the use of a com-
munication device, such as a Fibre Channel switch or hub. In this case, Fibre Channel is
actually being used for a DAS connection.

■ Arbitrated loop: Permits up to 127 devices to communicate with each other in a looped
connection. Fibre Channel hubs were designed to improve reliability in these topolo-
gies, but with the higher adoption of switched fabrics, Fibre Channel loop interfaces are
more likely to be found on legacy storage devices such as JBODs or older tape libraries.
■ Switched fabric: Comprises Fibre Channel devices that exchange data through Fibre
Channel switches and theoretically supports up to 16 million devices in a single fabric.

Figure 8-16 illustrates these topologies, where each arrow represents a single fiber connection.


Figure 8-16 Fibre Channel Topologies

Figure 8-16 also introduces the following Fibre Channel port types:

■ Node Port (N_Port): Interface on a Fibre Channel end host in a point-to-point or switched fabric topology.
■ Node Loop Port (NL_Port): Interface that is installed in a Fibre Channel end host to
allow connections through an arbitrated loop topology.
■ Fabric Port (F_Port): Interface on a Fibre Channel switch that is connected to an N_Port.
■ Fabric Loop Port (FL_Port): Fibre Channel switch interface that is connected to a public
loop. A fabric can have multiple FL_Ports connected to public loops, but, per definition,
a private loop does not have a fabric connection.
■ Expansion Port (E_Port): Interface that connects to another E_Port in order to create an
Inter-Switch Link (ISL) between switches.

Fibre Channel Addresses


Fibre Channel uses two types of addresses to identify and locate devices in a switched fab-
ric: World Wide Names (WWNs) and Fibre Channel Identifiers (FCIDs).

In theory, a WWN is a fixed 8-byte identifier that is unique per Fibre Channel entity. Fol-
lowing the format used in Cisco Fibre Channel devices, this writing depicts WWNs as
colon-separated bytes (10:00:00:00:c9:76:fd:31, for example).

A Fibre Channel device can have multiple WWNs, where each address may represent a part
of the device, such as:

■ Port WWN (pWWN): Singles out one interface from a Fibre Channel node (or a Fibre
Channel host bus adapter [HBA] port) and characterizes an N_Port
■ Node WWN (nWWN): Represents the node (or HBA) that contains at least one port
■ Switch WWN (sWWN): Uniquely represents a Fibre Channel switch
■ Fabric WWN (fWWN): Identifies a switch Fibre Channel interface and distinguishes an
F_Port

Figure 8-17 displays how these different WWNs are assigned to distinct components of a
duplicated host connection to a Fibre Channel switch.


Figure 8-17 Fibre Channel World Wide Names

In contrast, FCIDs are administratively assigned addresses that are inserted into Fibre
Channel frame headers and represent the location of a Fibre Channel N_Port in a switched
topology. An FCID consists of 3 bytes, which are detailed in Figure 8-18.


Figure 8-18 Fibre Channel Identifier Format

Each byte has a specific meaning in an FCID, as follows:

■ Domain ID: Identifies the switch where this device is connected


■ Area ID: May represent a subset of devices connected to a switch or all NL_Ports con-
nected to an FL_Port
■ Port ID: Uniquely characterizes a device within an area or domain ID

To maintain consistency with Cisco MDS 9000 and Nexus switch commands, this writing
describes FCIDs as contiguous hexadecimal bytes preceded by the “0x” symbol (0x01ab9e,
for example).
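
The sketch below splits such an address into its three fields; the example value is the one used above and is purely illustrative.

def parse_fcid(fcid):
    """Decompose a 3-byte FCID into its domain, area, and port fields."""
    return {
        'domain_id': (fcid >> 16) & 0xff,  # switch where the N_Port resides
        'area_id': (fcid >> 8) & 0xff,
        'port_id': fcid & 0xff,
    }

print(parse_fcid(0x01ab9e))   # {'domain_id': 1, 'area_id': 171, 'port_id': 158}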

TIP At this point, you may be tempted to make mental associations between WWNs,
FCIDs, MAC addresses, and IP addresses. Although it is a strong temptation, I advise you
avoid the urge for now. Fibre Channel is easier to learn when you clear your mind about
other protocol stacks.

Fibre Channel Flow Control


As part of a network architecture capable of providing lossless transport of data, a Fibre
Channel device must determine whether there are enough resources to process a frame
before receiving it. As a result, a flow control mechanism is needed to avoid frame discard-
ing within a Fibre Channel fabric.

Fibre Channel deploys credit-based strategies to control data exchange between directly
connected ports or between end nodes. In both scenarios, the receiving end is always in
control, while the transmitter only sends a frame if it is sure that the receiver has available
resources (buffers) to handle this frame.

Buffer-to-Buffer Credits (BB_Credits) are used to control frame transmission between directly connected ports (for example, N_Port to F_Port, N_Port to N_Port, or E_Port to
E_Port). In essence, each port is aware of how many buffers are available to receive frames
at the other end of the connection, only transmitting a frame if there is at least a single
available buffer.

BB_Credit counters are usually configured with low two-digit values when both connected
ports belong to the same data center. However, in data center interconnections over longer
distances, a small number of BB_Credits may result in low performance. Such behavior can
occur because the transmitting port sends as many frames as it can and then must wait for
more BB_Credits from the receiving port to transmit again. The higher the latency on the
interconnection, the longer the wait and the less traffic that traverses this link (regardless of
its available bandwidth).
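
The impact can be estimated with simple arithmetic: at most BB_Credits frames can be in flight per round trip, whatever the link bandwidth. The Python sketch below assumes full-size 2,148-byte Fibre Channel frames and a 100-km link (roughly 1 ms of round-trip time); both figures are illustrative assumptions.

def credit_limited_gbps(bb_credits, rtt_seconds, frame_bytes=2148):
    """Throughput ceiling imposed by the credit loop, not by bandwidth."""
    return bb_credits * frame_bytes * 8 / rtt_seconds / 1e9

print(credit_limited_gbps(bb_credits=16, rtt_seconds=0.001))  # ~0.27 Gbps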

Distinctively, the End-to-End Credits (EE_Credits) flow control method regulates the trans-
port of frames between source and destination N_Ports (such as an HBA and a disk array
storage port). With a very similar mechanism to BB_Credits, this method allows a sender
node to receive the delivery confirmation of frames from the receiver node.

Fibre Channel Processes


The Fibre Channel standards formally define a fabric as the “entity that interconnects vari-
ous N_Ports attached to it and is capable of routing frames by using only the D_ID (destina-
tion domain ID) information in an FC-2 frame header.” The problem with this definition is
that it does not really differentiate a fabric from a network.

However, as a recent trend, the networking industry has chosen the term fabric to represent
a set of network devices that can somehow behave like one single device. And being the
first architecture to use the term, Fibre Channel has influenced other networking technolo-
gies (including Ethernet) to also use the term as they embody characteristics from Fibre
Channel such as simplicity, reliability, flexibility, minimum operation overhead, and support
of new applications without disruption.

TIP You will find a deeper discussion about Ethernet fabrics in Chapter 10, “Network
Architectures for the Data Center: Unified Fabric,” and Chapter 11, “Network Architectures
for the Data Center: SDN and ACI.”

Within Fibre Channel, many of these qualities are natively deployed through fabric ser-
vices, which are basic functions that Fibre Channel end nodes can access through certain
well-known FCID addresses, defined in the 0xfffff0 to 0xffffff range. Table 8-5 lists some
of these fabric services and their respective well-known addresses.

Table 8-5 Some Fibre Channel Fabric Services


Fibre Channel Service Well-known Address
Broadcast 0xffffff
Fabric Login 0xfffffe
Fabric Controller 0xfffffd
Name Server 0xfffffc
Management Server 0xfffffa
Reserved 0xfffff4 to 0xfffff0

The Broadcast Alias service can be accessed in a Fibre Channel switch when a frame is sent
to address 0xffffff. Upon reception on this condition, a switch must replicate and transmit
the frame to other active ports (according to a predefined broadcast policy).

The Login Server service is a fundamental fabric service that receives and responds to
Fabric Login (FLOGI) frames sent by N_Ports. This procedure is used to discover
the operating characteristics associated with a fabric and its elements. Additionally, the
Login Server service assigns the FCID for the requesting N_Port.

TIP As you will see in the section “Fibre Channel Logins,” later in this chapter, T11 also
defines other logins besides FLOGI.

The Fabric Controller service is the logical entity responsible for internal fabric operations
such as

■ Fabric initialization
■ Parsing and routing of frames directed to well-known addresses
■ Setup and teardown of ISLs
■ Frame routing
■ Generation of fabric error responses

The Name Server service stores information about active N_Ports, including their WWNs,
FCIDs, and other Fibre Channel operating parameters (such as supported ULP). These values
are commonly used to inform a node about FCID-pWWN associations in the fabric.
Chapter 8: Block Storage Technologies 243

Finally, the Management Server service is an informative service that is used to collect and
report information about link utilization, service errors, and so on.

Fabric Shortest Path First


A Fibre Channel switched fabric must be in an operational state to successfully transport
frames between N_Ports. A key element called a principal switch is instrumental for this
process, because it is responsible for domain ID distribution within the fabric.

The election of a principal switch depends on the priority assigned to each switch (1 to 254,
where the lower value wins), and if a tie happens, it depends on each sWWN (lower wins
again).
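
The election rule can be captured in one line of Python, shown below as a minimal sketch with hypothetical switch names and sWWN values.

def elect_principal(switches):
    """Lowest priority wins; a tie is broken by the lowest sWWN."""
    return min(switches, key=lambda s: (s['priority'], s['swwn']))

candidates = [
    {'name': 'SwitchA', 'priority': 128, 'swwn': '20:00:00:0d:ec:02:02:02'},
    {'name': 'SwitchB', 'priority': 128, 'swwn': '20:00:00:0d:ec:01:01:01'},
]
print(elect_principal(candidates)['name'])   # SwitchB (tie broken by sWWN)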

After the principal switch assigns domain IDs to all other switches in a fabric, routing tables
start to be populated with entries that will be used to correctly route frames based on their
destination domain ID and using the best available path.

Fibre Channel uses the Fabric Shortest Path First (FSPF) protocol to advertise domain IDs
to other switches so that they can build their routing tables. As a link-state path selection pro-
tocol, FSPF keeps track of the state of the active links, associates a cost with each link, and
finally calculates the path with the lowest cost between every two switches in the fabric.

By default, FSPF assigns the values 1000, 500, 250, 125, and 62 to 1-, 2-, 4-, 8-, and
16-Gbps links. But in Cisco Fibre Channel devices, you can also manually assign the cost of
an ISL.

Conceived to clarify these theoretical concepts, Figure 8-19 illustrates how path costs are
used to route Fibre Channel frames between N_Ports.

Figure 8-19 FSPF Costs in a Fabric

In Figure 8-19, a fabric is composed of three switches, named Switch1, Switch2, and
Switch3. During the fabric initialization, Switch1 was elected principal switch and assigned
to itself the domain ID 0x1a. Afterward, it assigned to Switch2 and Switch3 domain IDs
0x2b and 0x3c, respectively.

Using FSPF, the switches exchange routes based on their own domain ID and build rout-
ing tables, shown in Figure 8-19. Also noticeable is the fact that the Fibre Channel initia-
tor (Server1) and targets (Storage2 and Storage3) have already received FCIDs during their
FLOGI process, and that the first byte on these addresses corresponds to the domain ID of
their directly connected switch.

Therefore, to route frames between two Fibre Channel N_Ports, each switch observes the
destination domain ID in each frame, performs a lookup in its routing table, and sends the
frame to the outgoing interface pointed to in the route entry. This process is repeated until
the frame reaches the switch that is directly connected to the destination N_Port.

TIP Figure 8-19 follows the interface naming convention used in Cisco Fibre Channel
switches, where the first number refers to the slot and the second to the order of the
interface in the module.

Notice in Figure 8-19 that there are two paths with an FSPF cost of 250 between Switch1
and Switch3. Thus, the frame sent from Server1 to Storage3 (represented in a very simplified
format) may be forwarded to one of two interfaces on Switch1: fc1/1 or fc2/2.

According to the Fibre Channel standards, it is up to each manufacturer to decide how a
switch forwards frames in the case of equal-cost paths. Cisco Fibre Channel switches can
load-balance traffic on up to 16 equal-cost paths using one of the following behaviors (a
configuration sketch follows the list):

■ Flow-based: Each pair of N_Ports only uses a single path (for example, traffic from
Server1 to Storage3 will only use the path Switch1–Switch3). The path choice is based on
a hash function over the source and destination FCIDs of each frame.

TIP A hash function is an operation that can map digital data of any size to digital data of
fixed size. In the case of FSPF load-balancing, a hash function of the same arguments (des-
tination and source FCIDs) will always result in the same path among all equal-cost shortest
paths.

■ Exchange-based: Exchanges from each pair of N_Ports are load-balanced among all
available paths. The path choice is based on a hash operation over the source and desti-
nation FCIDs of each frame, plus a Fibre Channel header called Originator Exchange
Identifier (OX_ID), which is unique for each SCSI I/O operation (read or write).
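As referenced above, the following is a minimal configuration sketch on a Cisco MDS switch, assuming a hypothetical VSAN 10 and ISL interface fc1/1; both the load-balancing scheme and the manual FSPF cost are applied per VSAN:

! Selecting exchange-based load balancing for VSAN 10
MDS1(config)# vsan database
MDS1(config-vsan-db)# vsan 10 loadbalancing src-dst-ox-id
! Manually assigning an FSPF cost to an ISL for VSAN 10 traffic
MDS1(config)# interface fc1/1
MDS1(config-if)# fspf cost 250 vsan 10

Flow-based load balancing corresponds to the src-dst-id option of the same command.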

Because FSPF does not take into account the available bandwidth or utilization of the paths
for route calculation, SAN administrators must carefully plan FSPF to consider failure sce-
narios and to avoid traffic bottlenecks.

Unlike the topology shown in Figure 8-19, SAN administrators commonly deploy ISL
redundancy between switches. However, these professionals still need to be aware that if
an ISL changes its operational state, a general route recomputation can bring instability and
traffic loss to a fabric.

To reduce these risks, using PortChannels is highly recommended because they can aggre-
gate multiple ISLs into one logical connection (from the FSPF perspective).

Figure 8-20 compares how a fabric with nonaggregated ISLs and a fabric with PortChannels
behave after a link failure.

[Figure: two fabrics experiencing a link failure. With individual ISLs, the failure triggers route recomputation throughout the fabric; with PortChannels, no route recomputation occurs because the logical ISL remains up through its surviving member links.]

Figure 8-20 Comparing FSPF and PortChannels

When PortChannels are not being deployed, a link failure will always cause route recom-
putation because a link state has changed for FSPF. In the fabric depicted on the right in
Figure 8-20, such a failure is confined to the logical ISL (the PortChannel itself), not causing
any FSPF change in the fabric if at least one PortChannel member is operational.

In a nutshell, PortChannels increase the reliability of a Fibre Channel fabric and simplify its
operation, reducing the number of available FSPF paths between switches.
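As an illustration, the following minimal sketch aggregates two ISLs into a single logical SAN PortChannel on a Cisco MDS switch; the interface and PortChannel numbers are hypothetical:

! Creating the logical PortChannel interface
MDS1(config)# interface port-channel 1
MDS1(config-if)# channel mode active
! Bundling two physical ISLs into the PortChannel
MDS1(config)# interface fc1/1-2
MDS1(config-if)# channel-group 1 force
MDS1(config-if)# no shutdown

From the FSPF perspective, the fabric now sees a single logical ISL whose cost reflects the aggregate bandwidth of its member links.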

Fibre Channel Logins


A Fibre Channel N_Port carries out three basic login processes before it can properly
exchange frames with another N_Port in a switched fabric. These login processes are
described in Table 8-6.

Table 8-6 Fibre Channel Logins

Fabric Login (FLOGI): Procedure where an N_Port obtains an FCID and identifies the
operating characteristics associated with the connected switch and its fabric.

Port Login (PLOGI): Operation where two N_Ports (with completed FLOGI) discover their
mutual capabilities and operating parameters.

Process Login (PRLI): Process used to establish a session between two FC-4 level logical
processes (ULP) from the devices that have performed a successful PLOGI.

Figure 8-21 exhibits a FLOGI process between a server HBA and Switch1, and another
between an array storage port and Switch2. Afterward, each N_Port receives an FCID that is
inserted into a FLOGI database located in its directly connected switch. Both switches will
use this table to forward Fibre Channel frames to local ports.

[Figure: a server HBA performs a Fabric Login (FLOGI) with Switch1 while an array storage port performs its own FLOGI with Switch2; the server and the array then exchange a Port Login (PLOGI) and a Process Login (PRLI) directly with each other across the fabric.]

Figure 8-21 Fibre Channel Logins

Figure 8-21 also shows subsequent PLOGI and PRLI processes between the same HBA and
a storage array port. After all these negotiations, both devices are ready to proceed with
their upper-layer protocol communication using Fibre Channel frames.

Zoning
A zone is defined as a subset of N_Ports from a fabric that are aware of each other, but
not of devices outside the zone. Each zone member can be specified by a port on a switch,
WWN, FCID, or human-readable alias (also known as FC-Alias).

Zones are configured in Fibre Channel switched fabrics to increase network security, intro-
duce storage access control, and prevent data loss. By using zones, a SAN administrator can
avoid a scenario where multiple servers can access the same storage resource, ruining the
stored data for all of them.

A fabric can deploy two methods of zoning:

■ Soft zoning: Zone members are made visible to each other through name server queries.
With this method, unauthorized frames are capable of traversing the fabric.
■ Hard zoning: Frame permission and blockage is enforced as a hardware function on the
fabric, which in turn will only forward frames among members of a zone. Cisco Fibre
Channel switches only deploy this method.

TIP In Cisco devices, you can also configure the switch behavior to handle traffic between
unzoned members. To avoid unauthorized storage access, blocking is the recommended
default behavior for N_Ports that do not belong to any zone.
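On MDS switches, this behavior is controlled per VSAN. A minimal sketch, assuming a hypothetical VSAN 10:

! Denying traffic between unzoned members (the recommended default)
MDS1(config)# no zone default-zone permit vsan 10
! Verifying the zoning status, including the default-zone policy
MDS1# show zone status vsan 10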

A zone set consists of a group of one or more zones that can be activated or deactivated
with a single operation. Although a fabric can store multiple zone sets, only one can be

active at a time. The active zone set is present in all switches on a fabric, and only after a
zone set is successfully activated can the N_Ports contained in each member zone perform
PLOGIs and PRLIs between them.

NOTE The Zone Server service is used to manage zones and zone sets. Implicitly, an
active zone set includes all the well-known addresses from Table 8-5 in every zone.

Figure 8-22 illustrates how zones and an active zone set can be represented in a Fibre Chan-
nel fabric.

[Figure: a fabric whose active zone set, Zone Set ABC, groups Zone A, Zone B, and Zone C; each zone contains two or three N_Ports.]

Figure 8-22 Zones and Zone Sets

Although each zone in Figure 8-22 (A, B, and C) contains two or three members, more hosts
could be inserted in them. When performing a name service query (“dear fabric, whom can
I communicate with?”), each device receives the FCID addresses from members in the same
zone and begins subsequent processes, such as PLOGI and PRLI.

Additionally, Figure 8-22 displays the following self-explanatory types of zones:

■ Single-initiator, single-target (Zone A)


■ Multi-initiator, single-target (Zone B)
■ Single-initiator, multi-target (Zone C)

TIP Because not all members within a zone should communicate, single-initiator, single-
target zones are considered best practice in Fibre Channel SANs.

SAN Designs
In real-world SAN designs, it is very typical to deploy two isolated physical fabrics with
servers and storage devices connecting at least one N_Port to each fabric. Undoubtedly,
such best practice increases storage access availability (because there are two independent
paths between each server and a storage device) and bandwidth (if multipath I/O software is
installed on the servers, they may use both fabrics simultaneously to access a single storage
device).

There are, of course, some exceptions to this practice. In many data centers, I have seen
backup SANs with only one fabric being used for the connection between dedicated HBA
ports in each server, tape libraries, and other backup-related devices.

Another key aspect of SAN design is oversubscription, which generically defines the ratio
between the maximum potential consumption of a resource and the actual resource allocat-
ed in a communication system. In the specific case of Fibre Channel fabrics, oversubscrip-
tion is naturally derived from the comparison between the number of HBAs and storage
ports in a single fabric.

Because storage ports are essentially a shared resource among multiple servers (which rarely
use all available bandwidth in their HBAs), the large majority of SAN topologies are expect-
ed to present some level of oversubscription. The only completely nonoversubscribed SAN
topology is DAS, where every initiator HBA port has its own dedicated target port.
In classic SAN designs, an expected oversubscription between initiators and targets must be
obeyed when deciding how many ports will be dedicated for HBAs, storage ports, and ISLs.
Typically, these designs use oversubscriptions from 4:1 (four HBAs for each storage port, if
they share the same speed) to 8:1.

TIP Such expected oversubscription is also known as fan-out.

With these concepts in mind, we will explore three common SAN topologies, which are
depicted in Figure 8-23.

[Figure: three common dual-fabric (SAN A and SAN B) topologies. In the single-layer topology, one switch per fabric sits between the server HBAs and the storage ports. In the core-edge topology, servers connect to edge switches and storage devices connect to core switches. In the edge-core-edge topology, servers and storage devices connect to separate edge layers that are joined through core switches.]

Figure 8-23 Common SAN Topologies



The left side of Figure 8-23 represents the single-layer topology, where only one Fibre
Channel switch is positioned between the server HBAs and the storage ports on disk arrays.
The simplicity of this topology also limits its scalability to the maximum number of ports
of a single Fibre Channel switch. For this reason, positioning Fibre Channel director-class
switches (or simply, directors) in this topology is usually better, because they can offer a
larger number of ports when compared to fabric switches, as well as high availability on
all of their internal components. This topology is also known as collapsed-core as a direct
result of the highly popular process in the 2000s of consolidating multiple fabric switches
into directors.

NOTE You will find more details about Cisco MDS 9000 fabric switches and directors in
Chapter 13, “Cisco Cloud Infrastructure Portfolio.”

The center topology in Figure 8-23 depicts a very traditional topology called core-edge,
where two layers of communication devices are positioned between initiators and targets.
In this case, each layer is dedicated to the connection of a single type of device (servers on
edge and storage devices on core). In core-edge topologies, the number of ISLs between
both layers is defined according to the SAN expected oversubscription and available port
speeds.

Also in core-edge designs, the number of ports on edge devices depends on the physical
connection arrangement to the servers. In top-of-rack connections, where the servers share
the same rack with their access devices, fabric switches with fewer than 48 Fibre Channel
ports are usually deployed. If end-of-row connections are used, directors are commonly
positioned to offer connection access to servers distributed across many racks.
Finally, the topology on the right in Figure 8-23 depicts the edge-core-edge topology,
which is deployed in SANs that require thousands of ports in a single physical structure. In
an edge-core-edge fabric, there are two kinds of edge switches: target edges, which connect
to storage devices, and initiator edges, which connect to servers. All traffic coalesces in the
core switches, and the number of core devices and deployed ISLs controls the SAN
oversubscription.

As a way to transcend some scalability limits on Fibre Channel SANs, such as the number
of domain IDs in a single fabric, it is possible to deploy a virtualization technique called
N_Port Virtualization (NPV). Running in NPV mode, a switch can emulate an N_Port con-
nected to an upstream switch F_Port, which eliminates the need to deploy an ISL and the
insertion of an additional domain ID in the fabric.

Figure 8-24 details the differences between a standard ISL and an NPV connection.

[Figure: on the left, a standard ISL joins the E_Port of a switch in switch mode (domain ID 0x1a) to the E_Port of another switch in switch mode (domain ID 0x2b); an N_Port logged into the second switch receives FCID 0x2b0000. On the right, an NPIV-enabled switch (domain ID 0x1a) connects an F_Port to the NP_Port of a switch running in NPV mode (no domain ID); an N_Port logged into the NPV-mode switch receives FCID 0x1a0101, derived from the upstream switch domain ID.]

Figure 8-24 Comparing ISLs to NPV Connections

Unlike a standard Fibre Channel switch, a switch in NPV mode performs a fabric login
(FLOGI) in an upstream switch through a Node Proxy Port (NP_Port), which receives
an FCID in the upstream switch in response. Afterward, the NPV-mode switch uses this
NP_Port to forward all FLOGIs from connected servers to the upstream switch. Figure 8-24
also highlights that the host connected to the NPV-mode switch received FCID 0x1a0101,
which is derived from the core switch domain ID.

Finally, an NPV connection requires that an upstream F_Port can receive and process more
than one fabric login. To support such behavior, the upstream switch must deploy a capabil-
ity called N_Port ID Virtualization (NPIV).
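In a Cisco MDS fabric, NPIV is enabled globally on the upstream switch, while NPV mode is enabled on the edge switch. A minimal sketch follows; note that enabling NPV mode is disruptive, because the edge switch erases its configuration and reboots:

! On the upstream (core) switch: accepting multiple logins per F_Port
core-switch(config)# feature npiv
! On the edge switch: converting the device to NPV mode
edge-switch(config)# feature npv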

Virtual SANs
Until the late 1990s, storage administrators deployed SAN islands to avoid fabric-wide dis-
ruptive events from one application environment causing problems in other environments.
Their concept was really simple: independent and relatively small fabrics that connect serv-
ers and storage dedicated to a single application.

When storage administrators needed more ports, they typically scaled these SAN islands
through the connection of small fabric switches. With time, the deployment of SAN islands
produced some architectural challenges, such as

■ Low utilization of storage resources that were connected to a single island


■ Large number of management points
■ Considerable number of ports used for ISLs
■ Unpredictable traffic oversubscription on ISLs

In the next decade, a significant number of data centers started SAN consolidation projects
to eliminate these undesirable effects. And as I mentioned in the section “SAN Designs,”
director-class switches were ideal tools for these projects.

To avoid the formation of a single failure domain in a consolidated physical fabric, Cisco
introduced the concept of virtual storage-area networks (VSANs). Supporting the creation
of virtual SAN islands, VSANs can bring several other advantages to SAN administrators, as
you will learn in the following sections.

VSAN Definitions
A VSAN is defined as a set of N_Ports that share the same Fibre Channel fabric processes
in a single physical SAN. As a consequence, all fabric services such as Name Server, Zone
Server, and Login Server present distinct instances per VSAN.
In Cisco devices, VSAN Manager is the network operating system process that maintains
VSAN attributes and port membership. It employs a database that contains information
about each VSAN, such as its unique name, administrative state (suspended or active), oper-
ational state (up, if the VSAN has at least one active interface), load-balance algorithm (for
FSPF equal-cost paths and PortChannels), and Fibre Channel timers.

In a VSAN-capable device, there are two predefined virtual SANs:

■ Default (VSAN 1): This VSAN cannot be erased, only suspended. It contains all the
ports when a switch is first initialized.
■ Isolated (VSAN 4094): This special VSAN receives all the ports from deleted VSANs
and, per definition, does not forward any traffic. It exists to avoid involuntary inclusion
of ports in a VSAN, and it cannot be deleted either.
An example usually is clearer than a theoretical explanation, so to illustrate how VSANs
behave, Figure 8-25 depicts a single MDS 9000 switch (MDS1) deploying two different
VSANs. Observe that among the four interfaces, VSAN 100 contains interfaces fc1/1 and
fc2/1, while VSAN 200 contains fc1/2 and fc1/3.

[Figure: a single switch (MDS1) where Server1 (interface fc1/1) and Storage1 (interface fc2/1) belong to VSAN 100, while Server2 (interface fc1/3) and Storage2 (interface fc1/2) belong to VSAN 200.]

Figure 8-25 Two VSANs in a Single Switch

Example 8-1 explains how both VSANs were created on switch MDS1 as well as the inter-
face assignment configuration.

Example 8-1 VSAN Creation and Interface Assignment


! Entering the configuration mode
MDS1# configure terminal
! Entering the VSAN configuration database
MDS1(config)# vsan database
! Creating VSAN 100
MDS1(config-vsan-db)# vsan 100
! Including interfaces fc1/1 and fc2/1 in VSAN 100, one at a time
MDS1(config-vsan-db)# vsan 100 interface fc1/1
MDS1(config-vsan-db)# vsan 100 interface fc2/1
! Creating VSAN 200
MDS1(config-vsan-db)# vsan 200
! Both interfaces fc1/2 and fc1/3 are included in VSAN 200
MDS1(config-vsan-db)# vsan 200 interface fc1/2-3
! Now, all interfaces are simultaneously enabled
MDS1(config-vsan-db)# interface fc1/1-3,fc2/1
MDS1(config-if)# no shutdown

If all interfaces on MDS1 are configured in automatic mode, it is expected that all four
devices will perform their FLOGIs into their respective VSANs. Example 8-2 proves this theory.

Example 8-2 FLOGI Database in MDS1


! Displaying the flogi database
MDS1(config-if)# show flogi database
--------------------------------------------------------------------------------
INTERFACE VSAN FCID PORT NAME NODE NAME
--------------------------------------------------------------------------------
! Server1 is connected to domain ID 0xd4
fc1/1 100 0xd40000 10:00:00:00:c9:2e:66:00 20:00:00:00:c9:2e:66:00
! Server2 and Storage2 are connected to domain ID 0xed
fc1/2 200 0xed0000 10:00:00:04:cf:92:8b:ad 20:00:00:04:cf:92:8b:ad
fc1/3 200 0xed00dc 22:00:00:0c:50:49:4e:d8 20:00:00:0c:50:49:4e:d8
! Storage1 is connected to domain ID 0xd4
fc2/1 100 0xd400da 50:00:40:21:03:fc:6d:28 20:03:00:04:02:fc:6d:28

In Example 8-2, VSANs 100 and 200 have different domain IDs (0xd4 and 0xed, respec-
tively) that were randomly chosen from the domain ID list (1 to 239, by default) in each
VSAN. Additionally, the distinct per-VSAN Login Servers have accordingly assigned FCIDs
to the connected devices. As a result, MDS1 has created two virtual SAN islands, and
from their own perspective, each initiator-target pair is connected to a completely different
switch.

VSAN Trunking
Just like their physical counterparts, VSANs are usually not created to be restricted to a
single switch. In fact, they can be extended to other switches from core-edge or edge-core-
edge topologies, for example. Although ISLs could potentially be configured to transport
frames from a single VSAN, to avoid waste of ports, a trunk is usually the recommended
connection between VSAN-enabled switches.

By definition, a Trunk Expansion Port (TE_Port) can carry the traffic of several VSANs
over a single Enhanced Inter-Switch Link (EISL). In all frames traversing these special ISLs,
an 8-byte VSAN header is included before the frame header. Although this header struc-
ture is not depicted here, you should know that it includes a 12-bit field that identifies the
VSAN each frame belongs to.

NOTE The VSAN header is standardized in FC-FS-2 section 10.3 (“VFT_Header and
Virtual Fabrics”).

Figure 8-26 illustrates an EISL configured between two switches (MDS-A and MDS-B). It
also depicts the interface configuration used on both devices.

MDS-A(config)# interface fc2/14
MDS-A(config-if)# switchport mode E
MDS-A(config-if)# switchport trunk mode on
MDS-A(config-if)# switchport trunk allowed vsan 10
MDS-A(config-if)# switchport trunk allowed vsan add 20
MDS-A(config-if)# no shutdown

MDS-B(config)# interface fc5/6
MDS-B(config-if)# switchport mode E
MDS-B(config-if)# switchport trunk mode on
MDS-B(config-if)# switchport trunk allowed vsan 10
MDS-B(config-if)# switchport trunk allowed vsan add 20
MDS-B(config-if)# no shutdown

[Figure: Server10 and Array10 (VSAN 10) and Server20 and Array20 (VSAN 20) connect to switches MDS-A and MDS-B; interfaces fc2/14 on MDS-A and fc5/6 on MDS-B form an EISL that carries VSAN-tagged frames for both VSANs, using the configurations above.]

Figure 8-26 VSAN Trunking

In the figure, the switchport command is first used to configure the port type (E_Port), then
to enable VSAN trunking, and then to allow VSANs 10 and 20 to use the trunk on both
switches. Consequently, an EISL is formed, causing frames (also portrayed in Figure 8-26)
from intra-VSAN traffic to be tagged in this special connection.

Zoning and VSANs


Two steps are usually executed to present a SCSI storage volume (LUN) from a disk array to
a server after both of them are logged in to a common Fibre Channel fabric:

Step 1. Both device ports must be zoned together.

Step 2. In the storage array, the LUN must be configured to be solely accessed by
a pWWN from the server port. This process is commonly referred to as LUN
masking.

When deploying VSANs, a SAN administrator performs these activities as if she were man-
aging different Fibre Channel fabrics. Hence, each VSAN has its own zones and zone sets
(active or not).
Still referring to the topology from Figure 8-26, Example 8-3 details a zoning configuration
that permits Server10 (whose HBA has pWWN 10:00:00:00:c9:2e:66:00) to communicate
with Array10 (whose port has pWWN 50:00:40:21:03:fc:6d:28) after VSANs 10 and 20 are
already provisioned.

Example 8-3 Zoning Server10 and Array10 on MDS-A


! Creating a zone in VSAN 10
MDS-A(config)# zone name SERVER10-ARRAY10 vsan 10
! Including Server10 pWWN in the zone
MDS-A(config-zone)# member pwwn 10:00:00:00:c9:2e:66:00
! Including Array10 pWWN in the zone
MDS-A(config-zone)# member pwwn 50:00:40:21:03:fc:6d:28
! Creating a zone set and including zone SERVER10-ARRAY10 in it
MDS-A(config-zone)# zoneset name ZS10 vsan 10
MDS-A(config-zoneset)# member SERVER10-ARRAY10
! Activating the zone set
MDS-A(config-zoneset)# zoneset activate name ZS10 vsan 10
Zoneset activation initiated. check zone status
! At this moment, all switches in VSAN 10 share the same active zone set
MDS-A(config)# show zoneset active vsan 10
zoneset name ZS10 vsan 10
zone name SERVER10-ARRAY10 vsan 10
* fcid 0x710000 [pwwn 10:00:00:00:c9:2e:66:00]
* fcid 0xd40000 [pwwn 50:00:40:21:03:fc:6d:28]

TIP Because the zone service is a distributed fabric process, I could have chosen either of
the switches for this configuration.

In Example 8-3, I have chosen pWWNs to characterize both devices in the zone because
these values will remain the same, wherever they are connected in the fabric. Nevertheless,
other methods can be used to include a device in a zone, such as switch interface and FCID.

The stars on the side of each zone member signal that these nodes are logged in and present
in the VSAN 10 fabric. At the end of Example 8-3, they are zoned together.

Figure 8-27 exhibits the 20-GB disk array volume detected on Server10, which uses Win-
dows 2008 as its operating system. From this moment on, Server10 can store data on this
volume through standard SCSI commands.

[Figure: the Windows Server 2008 Disk Management screen on Server10 shows a newly detected 20-GB disk, the LUN provisioned from Array10.]

Figure 8-27 New Volume Detected in Server10

In Example 8-4, it is possible to verify that the Zone Server on VSAN 20 is completely
unaware of the recent zone set activated in VSAN 10.

Example 8-4 Displaying Active Zone Set in VSAN20


MDS-A(config)# show zoneset active vsan 20
Zoneset not present
MDS-A(config)#

VSAN Use Cases


I usually recommend the creation of an additional VSAN whenever a SAN administrator feels
the sudden urge to deploy another physical Fibre Channel fabric in his environment. Follow-
ing this train of thought, VSANs can be effectively deployed to achieve the following:

■ Consolidation: As previously mentioned, VSANs originally allowed many SAN islands to


be put together within the same physical infrastructure.
■ Isolation: A VSAN basically creates an additional Fibre Channel fabric that does not
interfere with environments that already exist. As a way to avoid investment in additional
SAN switches, VSANs can be easily applied to support development, qualification, and
production environments for the same application.
■ Multi-tenancy: Virtual SANs can easily build, on a shared infrastructure, separate fabric
services and traffic segmentation for servers and storage devices that belong to different
companies. Moreover, the management of all VSAN resources can be assigned to distinct
tenant administrators through the use of Role-based Access Control (RBAC) features.
■ Interoperability: As one of the first Cisco innovations brought to a historically closed
industry, MDS 9000 interoperability features enabled the integration of third-party
switches through the use of additional VSANs. In essence, a VSAN in interoperability
mode emulates all the characteristics of a third-party device without compromising other
environments.

NOTE A feature called Inter-VSAN Routing (IVR) can be used whenever two devices that
belong to different VSANs must communicate with each other. In migration scenarios that
use VSANs in interoperability mode, IVR is heavily used.

Certainly, this list is not an exhaustive exploration of VSAN applicability, as many more use
cases were created to support specific requirements of many other customers. But funda-
mentally, this virtualization technique drastically decreased SAN provisioning time, as well
as the number of ports left unused during the endemic formation of physical SAN
“archipelagos” in the 1990s.

Internet SCSI
A joint initiative between Cisco and IBM, Internet SCSI (or simply iSCSI) was originally
intended to port SCSI peripherals into the flourishing IP networks. Fostering convergence
over a single network infrastructure, iSCSI connections can leverage an existent local-area
network (LAN) (and potentially wide-area networks [WANs]) to establish block-based com-
munications between servers and storage devices.

Standardized in IETF RFC 3720 in 2004, iSCSI employs TCP connections to encapsulate
SCSI traffic (and perform flow control). Consequently, iSCSI has achieved great popularity
as a quick-to-deploy alternative SAN technology.

As depicted in Figure 8-28, the main components of the iSCSI architecture are

■ iSCSI initiator: Server that originates an iSCSI connection to an iSCSI portal identi-
fied by an IP address and a TCP destination port (default is 3260). An initiator usually
employs a software client to establish a session and (optionally) an offload engine to
unburden the CPU of the overhead associated with iSCSI or TCP processing. However,
as computer processor speeds continue to abide by Moore’s law, the great majority of
iSCSI implementations still rely on idle resources from the CPU and forego the optional
offload engine.

TIP Fibre Channel HBAs relieve the server processor from all overhead related to SCSI
and Fibre Channel communication.

■ iSCSI target: Storage device that provides LUN access through iSCSI. Generally speak-
ing, these targets establish an iSCSI portal to which multiple iSCSI initiators may connect.
■ iSCSI gateway: Communication device that allows the communication between iSCSI
initiators and targets connected to different SAN technologies. Select MDS 9000 switches
and directors can act as iSCSI gateways, establishing iSCSI portals toward iSCSI initiators
while behaving as initiators toward Fibre Channel storage devices.

[Figure: one iSCSI initiator establishes an iSCSI connection across an IP network directly to an iSCSI target; another iSCSI initiator establishes an iSCSI connection to an iSCSI gateway, which in turn reaches a Fibre Channel target through a Fibre Channel switch.]

Figure 8-28 iSCSI Devices

NOTE Over the years, I have witnessed many discussions regarding the question of
whether iSCSI requires a separate IP network. Excluding political and governance argu-
ments, I would say that iSCSI traffic can be easily protected in modern data center networks
through familiar network technologies such as VLANs and Quality of Service (QoS). In such
scenarios, I would recommend having an exclusive VLAN connecting dedicated iSCSI inter-
faces for both initiators and targets as a best practice, for two reasons: iSCSI initiators will
not require additional host routes, and iSCSI traffic will be easily identified, by VLAN or IP
subnet, for QoS purposes.

As an alternative to using IP addresses, devices at the end of an iSCSI connection can


identify each other through specially defined naming schemes. The most commonly used
structure is called an iSCSI Qualified Name (IQN), which may have up to 255 bytes and is
formed with the concatenation of the following elements:

■ Literal IQN: The prefix “iqn”


■ Date: The year and month
■ Reverse domain name: Where “company.com” becomes “com.company”
■ Optional colon and specific device name

An example IQN is “iqn.1993-11.com.disk-vendor:diskarrays.sn.45678.” If the iSCSI
connection is using IQN-based authentication, the host at the other end of the connection
must identify this exact string. Other, less common naming conventions for iSCSI are IEEE
Extended Unique Identifier (EUI) and T11 Network Address Authority (NAA).
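To make these naming concepts concrete, the following minimal sketch uses the Linux Open-iSCSI initiator; the portal IP address is hypothetical, and the target IQN reuses the example above:

# Discovering the targets available on an iSCSI portal
iscsiadm -m discovery -t sendtargets -p 192.168.10.50:3260
# Logging in to a discovered target by its IQN
iscsiadm -m node -T iqn.1993-11.com.disk-vendor:diskarrays.sn.45678 \
  -p 192.168.10.50:3260 --login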

Designed as an additional service for large iSCSI deployments, Internet Storage Name Ser-
vice (iSNS) enables automated discovery, management, and configuration of iSCSI devices.
For this reason, iSNS is frequently compared to native Fibre Channel services such as Login
Server and Name Server.

Although iSNS was standardized in 2005, the great majority of iSCSI implementations still
depend on static portal definitions on initiators to establish iSCSI connections. As a conse-
quence, iSCSI devices leverage high-availability mechanisms from both SCSI and IP proto-
cols. For example:

■ Multipathing: Ideally, each initiator and target pair deploys more than one interface to
establish two independent iSCSI connections between these entities. With such an
arrangement, an initiator software client may choose to forward iSCSI traffic to only one
connection or load-balance storage access between them. In both cases, if a session fails for
any reason, the remaining connection maintains LUN access to the iSCSI initiator.
■ Virtual Router Redundancy Protocol (VRRP): This protocol can provide high availability
for iSCSI portals created on multiple target (or gateway) ports. In these scenarios, iSCSI ini-
tiators connect to a virtual IP address that is dynamically assigned to an interface deploying
an iSCSI portal. In the case of a device (or port) failure, this special IP address is quickly
reassigned to another device (or interface) and the iSCSI connection is reinitialized.

iSCSI initiators, targets, and gateways can also leverage security mechanisms from both IP
and SCSI architectures, including access control lists (ACLs) and LUN masking for specific
IP addresses, subnets, or IQNs. RFC 3720 additionally predicted exclusive security mea-
sures for iSCSI, such as initiator identification through Challenge-Handshake Authentication
Protocol (CHAP) and packet protection through Internet Protocol Security (IPSec).
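As an illustration of initiator-side CHAP, the following minimal sketch updates Open-iSCSI session parameters for the example target; the username and secret are hypothetical:

# Requiring CHAP authentication for the target session
iscsiadm -m node -T iqn.1993-11.com.disk-vendor:diskarrays.sn.45678 \
  --op update -n node.session.auth.authmethod -v CHAP
iscsiadm -m node -T iqn.1993-11.com.disk-vendor:diskarrays.sn.45678 \
  --op update -n node.session.auth.username -v initiator-user
iscsiadm -m node -T iqn.1993-11.com.disk-vendor:diskarrays.sn.45678 \
  --op update -n node.session.auth.password -v initiator-secret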

Cloud Computing and SANs


Because clouds are specialized environments built on top of standard data centers, they can
certainly enjoy the same benefits from block-based storage and SANs. As you have learned
in this chapter, these technologies greatly facilitate data portability because they obscure
any distinction between local hard disk drives and remote volumes, while leveraging all the
advantages of modern disk arrays.

Fundamentally, block storage technologies can serve two objectives in cloud computing
scenarios:

■ Infrastructure: Where disk arrays and SAN devices are used to provide remote storage
access to the systems supporting the cloud
■ Block Storage as a Service: Where volumes are offered as cloud tenant resources

This section explores each of these possibilities using concepts and technologies you are
already familiar with.

Block Storage for Cloud Infrastructure


Primarily, a data center architect usually includes SAN deployments in a project to establish
a transparent decoupling between application software and the data it accesses. Such an
approach greatly facilitates data replication and the scaling up of volumes without unneces-
sary hardware changes in the application servers.

Considering the specific requirements of a cloud computing infrastructure, SANs can offer
additional interesting functions such as

■ SAN boot: Rather than using an internal HDD, a server can boot its operating system
using a LUN on a remote storage device, simplifying hardware replacement if a critical
component suffers any major malfunction. SAN boot is also a necessary step for the con-
cept of stateless computing, where all server configurations (states) are not contained in
the computer system. In this style of computing, a server is used exclusively as a process-
ing resource.
■ Cluster storage: Computer clusters are composed of multiple server nodes controlled by
cluster software and performing identical tasks to achieve high availability and, in some
cases, higher performance. Most cluster technologies require that the multiple nodes
access the same data set simultaneously, which can be easily achieved in SAN deployments.
■ Virtual machine datastores: Multiple virtualization hosts that run Type-1 hypervisors
have access to the same volume storing VM files, thereby enabling server virtualization
features such as VM high availability and live migration. This implementation can be con-
sidered a specific, but extremely popular, subtype of cluster storage.

NOTE Most hypervisor architectures can perform VM storage live migration, which
essentially allows the transfer of all files from virtual machines between two VM datastores
without any loss of service.

Block Storage as a Service
Besides consuming block-based storage for its own structural purposes, there is no techni-
cal reason why a cloud computing environment cannot also provide these resources to end
users. Therefore, through a Block Storage as a Service offering, a cloud tenant can request
a volume to store data from applications deployed inside a cloud, or even to store data for
external systems.

For public clouds, providing external access to automatically generated SCSI LUNs may be
especially challenging due to the expected high latency of the Internet. Nevertheless, these
volumes could potentially be provisioned for external servers that are sharing the same
Fibre Channel SAN or Ethernet LAN (for iSCSI) with a private cloud.

In real-world scenarios, the most common Block Storage as a Service offerings today exist
to create and attach volumes to Infrastructure as a Service (IaaS)-generated virtual machines.
Several public cloud providers include this service in their portals, including Amazon Web
Services through its Amazon Elastic Block Storage (EBS) and Microsoft Azure via its Azure
Blob Storage.

OpenStack also offers a highly available component for Block Storage as a Service, named
Cinder. Much like the services cited in the previous paragraph, Cinder is designed to allow
Nova-instantiated virtual machines to access volumes dynamically provisioned on traditional
block storage devices.

As explained, you should not consider Cinder as storage device emulation, simply because
this OpenStack service does not actually receive or send application data directly from or
to tenant VMs. In fact, Cinder provides an abstraction service that uses back-end storage
drivers that enable several vendors' devices to be controlled by an OpenStack cloud. In
summary, OpenStack Cinder (a usage sketch follows the list):

■ Provides a volume to a single VM instance


■ Has an API to receive predefined end-user requests such as create volume and attach
volume working in tandem with OpenStack Nova
■ Selects the most appropriate storage device to deploy a volume
■ Sends requests to the underlying storage systems to create and manage volumes
■ Sends storage information (using iSCSI credentials) to Nova for session establishment
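As referenced above, the following minimal sketch shows how a tenant could consume such a service through the OpenStack CLI; the volume and instance names are hypothetical:

# Creating a 10-GB Cinder volume
openstack volume create --size 10 tenant-volume-01
# Attaching the volume to a running Nova instance
openstack server add volume web-server-01 tenant-volume-01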

As a general comment, iSCSI is usually considered a better fit for Block Storage as a Service
scenarios due to its flexibility of using software initiators in the virtual machines (as well as
the amazing ubiquity of IP).

Around the Corner: Solid-State Drives


It is not exactly news that hard disk drives are not the only available technology for second-
ary storage functions, especially if you observe the huge gap of data access latency between
main memory (30 to 60 nanoseconds) and HDDs (3,000,000 to 12,000,000 nanoseconds).
And although that difference may be imperceptible to humans (the blink of an eye is on
average 100,000,000 to 150,000,000 nanoseconds), it is certainly not for computers, where
a single user request for a stored object may generate millions of individual I/O operations.

Dodging the electromechanical-related latencies of HDDs, solid-state drives (SSDs) were


created several decades ago to better bridge the gap between primary and secondary stor-
age, offering an access latency of less than 100 microseconds. SSDs share their origins with
the widely popular USB flash drives, both being composed of the same integrated circuit
assemblies that can persistently store data through power outages. For that reason, SSDs are
also referred as flash drives by some vendors.

Perhaps to appeal to long-term HDD users, some vendors chose to market their SSD tech-
nology as “solid-state disk.” Regardless of the motivation, this technical misstep is not total-
ly without merit, because most SSDs share compatible interfaces with HDDs. Employing
SATA and SAS connections, SSDs can easily replace other technologies in personal comput-
ers, servers, and disk arrays.

Besides providing lower latency, SSDs are also more resistant to physical shock, produce less
noise, and consume less power when compared to HDDs. And as the price difference with
HDDs decreases, SSDs are increasingly becoming popular in application environments that
require superior I/O performance. And once more, Cisco innovates modern data center architec-
tures by positioning SSDs as part of the Unified Computing System (UCS) with UCS Invicta.

NOTE Cisco Unified Computing System will be explained in detail in Chapter 12,
“Unified Computing.”

Fundamentally, UCS Invicta consists of an SSD-only storage system that is especially built
to bring faster I/O operations through unique features. For this reason, it is primarily
designed to accelerate applications such as databases, email, virtual desktops, high-
performance computing (HPC), and video transcoding.

UCS Invicta is available in two physical form factors:

■ Appliance: A single hardware piece that can be easily added to a UCS domain com-
posed of blade or rack servers. It provides block I/O access to SCSI LUNs through Fibre
Channel or iSCSI.
■ Scaling System: Composed of router nodes (which are responsible for connectivity and
management, including replication, striping, and RAID configurations) and nodes deploy-
ing individual flash-memory management, including RAID and data protection.

UCS Invicta OS is the storage operating system that controls both formats, providing the
advanced features described in Table 8-7.

Table 8-7 UCS Invicta Benefits

Lower write-amplification factors: Write amplification occurs when flash drives pad data
to fill an empty block. Besides consuming excess space, write amplification decreases the
longevity of SSD-based devices through excessive and unnecessary write operations.
Because UCS Invicta writes to the solid-state device in fixed-length I/O operations, write
operations are performed in complete blocks only, increasing eight to ten times the
longevity of the same flash drives when compared to other solutions.

High-speed data deduplication: The optional elimination of redundant stored data in UCS
Invicta is performed inline during write operations. If any redundancy is discovered, a
reference pointer is stored rather than the whole data, saving space and achieving a higher
lifetime for appliances and nodes.

Enhanced error detection and correction: Optionally enabled, an appliance or node may
use the same hashing algorithm from the deduplication process to detect differences
between the received data and its stored counterpart. If an error is detected, the original
data can be recovered from a hash table residing in the system DRAM.

Data integrity in the event of a power loss: Each UCS Invicta appliance or node contains a
1-GB protection buffer for inbound write operations that have not yet been successfully
stored in the SSDs. This buffer is capable of writing the data to flash memory in the event
of a power loss. The write data is recovered and verified after power is restored, after
which it is finally committed to the SSDs.

NOTE Cisco has announced the end-of-sale for UCS Invicta solutions in 2015. However, I
have maintained its content in this certification guide to address the CLDFND exam blueprint.
You can find more details related to the product (such as capacity and scalability) in Chapter 13.

Further Reading
■ http://www.cisco.com/c/en/us/support/servers-unified-computing/ucs-invicta-series/tsd-
products-support-series-home.html

Exam Preparation Tasks

Review All the Key Topics


Review the most important topics in this chapter, denoted with a Key Topic icon in the
outer margin of the page. Table 8-8 lists a reference of these key topics and the page num-
ber on which each is found.

Table 8-8 Key Topics for Chapter 8


Key Topic Element Description Page Number
List Types of data storage 224
Table 8-2 Select RAID levels 227
Table 8-3 Nested RAID levels 228
List Disk array components 229
List Differences between disk pools and RAID 230
Figure 8-9 Thick provisioning 232
Figure 8-10 Thin provisioning 233
Table 8-4 Fibre Channel layers 237
Figure 8-17 Fibre Channel World Wide Names 240
Figure 8-18 Fibre Channel Identifier format 240
Table 8-5 Some Fibre Channel fabric services 242
Table 8-6 Fibre Channel logins 245
List VSAN use cases 256
List Cloud use cases for SANs 259
Table 8-7 UCS Invicta benefits 261

Complete the Tables and Lists from Memory


Print a copy of Appendix B, “Memory Tables” (found on the CD), or at least the section
for this chapter, and complete the tables and lists from memory. Appendix C, “Answers to
Memory Tables,” also on the CD, includes completed tables and lists so that you can check
your work.

Define Key Terms


Define the following key terms from this chapter, and check your answers in the glossary:

main memory, secondary memory, hard disk drive (HDD), redundant array of independent
disks (RAID), disk controller, disk array, volume, block, Advanced Technology Attachment
(ATA), Small Computer Systems Interface (SCSI), storage-area network (SAN), Fibre Chan-
nel, World Wide Name (WWN), Fibre Channel Identifier (FCID), Fabric Shortest Path First
(FSPF), Fabric Login (FLOGI), zone, zone set, virtual storage-area network (VSAN), VSAN
trunking, Internet Small Computer Systems Interface (iSCSI), iSCSI Qualified Name (IQN),
SAN boot, solid-state drive (SSD)


This chapter covers the following topics:

■ What Is a File?

■ Building a File System

■ Accessing Remote Files

■ Cloud Computing and File Storage

This chapter covers the following exam subjects:

■ 5.2 Describe the difference between all the storage access technologies
■ 5.2.a Difference between SAN and NAS; block and file
■ 5.2.c File technologies

■ 5.4 Describe basic NAS storage concepts


■ 5.4.a Shares / Mount Points
■ 5.4.b Permissions
CHAPTER 9

File Storage Technologies


As discussed in Chapter 8, “Block Storage Technologies,” after a block-based storage device
or volume is provisioned to a server, the applications running on the computer system will
dictate, from a byte level, exactly how data is written and read. However, the described pro-
cedure is not the only way to store data for later use.

As a computer user, you are already familiar with the concept of using files to store your per-
sonal data. This familiarity will help you understand why file systems are also a popular choice
for saving information in modern data centers, including those that host cloud computing.

According to a 2012 report, International Data Corporation (IDC) estimated that file-based
storage systems accounted for 65% of the overall disk capacity shipped that year. And with
the advances of cloud computing, such technologies have grown in importance in the infra-
structure that supports these environments.

The CLDFND exam requires knowledge of the basic principles behind file storage technol-
ogies, clearly differentiating them from the block-based technologies explained in Chapter
8. With this objective in mind, this chapter presents the formal definition of a file, compar-
ing this data-at-rest structure to other methods available today. The chapter then introduces
the most common file locations and the main options to build a file system, including
naming rules, format, and security. It also addresses the two methods of remote file access,
explaining how files are usually shared among computers. Finally, the chapter correlates file
storage technologies to cloud computing environments, focusing on how their flexibility
may help these implementations.

“Do I Know This Already?” Quiz


The “Do I Know This Already?” quiz allows you to assess whether you should read this
entire chapter thoroughly or jump to the “Exam Preparation Tasks” section. If you are in
doubt about your answers to these questions or your own assessment of your knowledge
of the topics, read the entire chapter. Table 9-1 lists the major headings in this chapter and
their corresponding “Do I Know This Already?” quiz questions. You can find the answers in
Appendix A, “Answers to Pre-Assessments and Quizzes.”

Table 9-1 “Do I Know This Already?” Section-to-Question Mapping


Foundation Topics Section Questions
What Is a File? 1–2
Building a File System 3–5
Accessing Remote Files 6–8
Cloud Computing and File Storage 9–10

1. Which of the following is not a usual part of file metadata?


a. Time and date of creation
b. Author name
c. Category
d. Last modified

2. Which of the following represents a true difference between block storage and file
storage technologies?
a. File storage technologies offer higher performance.
b. Block storage technologies provide more capacity.
c. Block storage technologies exclusively use IP networks.
d. File storage devices can control content.

3. Which of the following lists permissions from Linux files?


a. Read, write, and execute
b. Open, edit, and execute
c. Read and write
d. Open, modify, and full control

4. Which of the following lists the available options for volume formatting in Windows
platforms?
a. FAT12, FAT16, FAT32
b. NTFS
c. NTFS, FAT12, FAT32
d. NTFS, FAT16, FAT32
e. NTFS, FAT12, FAT16, FAT32

5. Which Linux command allows a file to be fully controlled by any system user?
a. chmod 755
b. chmod 777
c. permission rwxrwxrwx
d. permission 777
e. chmod rw-rw-rw-

6. Which of the following is not a difference between SAN and NAS?


a. NAS typically uses NFS or SMB.
b. SAN typically uses Fibre Channel protocol.
c. NAS does not support RAID.
d. NAS supports disk aggregation.
e. Both technologies create volumes.

7. Which of the following is correct about the MOUNT protocol?


a. It is optional for NFS.
b. It is stateless in NFSv2 and NFSv3.
c. It allows a client to attach a remote directory tree to a local file system.
d. It controls client authentication but not authorization.

8. Which of the following is not part of SMB architecture?


a. NetBIOS
b. SMB dialect
c. Share
d. Active Directory
e. POSIX

9. Which is the most commonly used access protocol for cloud file-hosting services?
a. HTTP
b. SMB
c. FTP
d. NFS
e. CIFS

10. Which file access protocol can be used to store VM files from VMware ESXi?

a. SMB
b. SFTP
c. NFS
d. FTP
e. TFTP

Foundation Topics

What Is a File?
As you learned in Chapter 8, a block is simply a sequence of bytes with a defined length (block
size), forming the smallest data container in a block-based storage device. Storage devices such
as hard disk drives, disk arrays, and flash drives are the most prominent block storage devices.
Yet, because most computer users typically interact only with files, such as graphics, presenta-
tions, and documents, they are not aware of the incessant block exchange between the com-
puter processor and the storage device. As a unit of storage, a file masks this complexity.

By definition, a file is a set of contiguous data that is persistently saved on a storage


device. Besides enclosing proper user data, files also contain metadata, which is simply data
describing user data. Common examples of file metadata include the filename, author name,
access permissions, time and date of creation, and date of last revision. Figure 9-1 portrays
a simple file structure for a file named MyWinterVacations. It contains data consisting of
the repetition of a phrase that Jack Nicholson fans may find downright eerie. The file meta-
data defines the author name (Jack Torrance), date of creation (12/01/1980), last revision
(12/26/1980), and permission (users from the Overlook Group can read the file).

Name: MyWinterVacations
Author: Jack Torrance
Created: 12/01/1980
Revised: 12/26/1980
Permission: Read for Overlook Group
(Metadata)

All work and no play makes Jack a dull boy
All work and no play makes Jack a dull boy
All work and no play makes Jack a dull boy
(Data: the same phrase is repeated throughout the file)

Figure 9-1 A Simple File Structure

You also learned in Chapter 8 that storage technologies are ultimately stacked in multiple
abstraction layers. And as files’ data and metadata are stored on raw volumes (such as an
HDD or SCSI LUNs on a disk array), storage devices require an additional layer of software
to fully define where a file begins and ends. Consequently, a raw volume must be formatted
to stock multiple files, which will then be controlled by many users. A corresponding file
management application on users’ computers leaves them completely unaware of all block-
related operations in the volume. Most operating systems (such as Linux and Windows)
have long been equipped with file management applications, which perhaps explains how
files became the customary unit of storage.

File Locations
A user can access files located in different places, which are generically represented in
Figure 9-2.

[Figure: four generic file location options: local storage within the computer itself; file sharing between computers; a network-attached storage (NAS) device; and a NAS head positioned in front of a disk array.]

Figure 9-2 File Location Options

First and most frequently, a user accesses local files when these data constructs are main-
tained in a direct-attached storage (DAS) device. But rather curiously, when a server is using
an external SCSI volume on a disk array, it is actually saving files locally. After all, from a
file management perspective, the files are being processed in this server.

Already inadequate in data centers, the practice of storing files in local storage is becoming
as unfashionable as shoulder pads. In any scenario, local files represent a single point of fail-
ure, requiring individual backup procedures and manual operations to avoid data loss in the
case of hardware failure.

File sharing allows users to access files located in other computers, through the use of
a network infrastructure and a file access protocol. Although ad hoc file sharing may be
attractive for environments with a low number of computers, it may easily become a man- 9
agement nightmare for companies with thousands of systems. In these organizations, sea-
soned IT administrators commonly preferred to store user files centrally in specialized com-
puters called file servers. But as these servers had their capacity and robustness challenged
with user file proliferation, they were gradually replaced with network-attached storage
(NAS) devices. In essence, a NAS device is a specialized storage device with a large capacity
that can serve files to a heterogeneous group of servers and personal computers.

A NAS may contain hundreds of hard disk drives and deploy aggregation technologies such
as RAID and dynamic disk pools to increase access performance, data capacity, and file
availability. But whereas disk arrays are connected to a storage-area network (SAN), a NAS
formats volumes to handle files and exclusively communicate with clients through IP-based
file sharing protocols such as Network File System (NFS) and Server Message Block (SMB).

From an architectural perspective, a NAS works as single point of arbitration to remote cli-
ent systems, providing locks (to prevent other users from tampering with the files) and per-
missions (to prevent unauthorized access to files). These devices are commercialized through
various vendors, such as NetApp Inc. and EMC Corporation.
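As a simple illustration of how a client consumes files from such a device, the following minimal sketch mounts an NFS export on a Linux host; the NAS name and paths are hypothetical:

# Mounting an NFS share exported by a NAS device
mount -t nfs nas01:/exports/projects /mnt/projects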
