Zsiga Z Cisco Certified Design Expert CCDE 400 007 Official Cert Guide 2023
Zig Zsiga
Cisco Press
Cisco Certified Design Expert CCDE 400-007 Official Cert Guide
Zig Zsiga
Copyright © 2023 Cisco Systems, Inc.
Published by:
Cisco Press
All rights reserved. This publication is protected by copyright, and
permission must be obtained from the publisher prior to any prohibited
reproduction, storage in a retrieval system, or transmission in any form or
by any means, electronic, mechanical, photocopying, recording, or likewise.
For information regarding permissions, request forms, and the appropriate
contacts within the Pearson Education Global Rights & Permissions
Department, please visit www.pearson.com/permissions.
No patent liability is assumed with respect to the use of the information
contained herein. Although every precaution has been taken in the
preparation of this book, the publisher and author assume no responsibility
for errors or omissions. Nor is any liability assumed for damages resulting
from the use of the information contained herein.
Library of Congress Control Number: 2023902257
ISBN-13: 978-0-13-760104-2
ISBN-10: 0-13-760104-2
Trademark Acknowledgments
All terms mentioned in this book that are known to be trademarks or service
marks have been appropriately capitalized. Cisco Press or Cisco Systems,
Inc., cannot attest to the accuracy of this information. Use of a term in this
book should not be regarded as affecting the validity of any trademark or
service mark.
Special Sales
For information about buying this title in bulk quantities, or for special
sales opportunities (which may include electronic versions; custom cover
designs; and content particular to your business, training goals, marketing
focus, or branding interests), please contact our corporate sales department
at [email protected] or (800) 382-3419.
For government sales inquiries, please contact
[email protected].
For questions about sales outside the U.S., please contact
[email protected].
Feedback Information
At Cisco Press, our goal is to create in-depth technical books of the highest
quality and value. Each book is crafted with care and precision, undergoing
rigorous development that involves the unique expertise of members from
the professional technical community.
Readers’ feedback is a natural continuation of this process. If you have any
comments regarding how we could improve the quality of this book, or
otherwise alter it to better suit your needs, you can contact us through email
at [email protected]. Please make sure to include the book title and
ISBN in your message.
We greatly appreciate your assistance.
Composition: codeMantra
Cisco and the Cisco logo are trademarks or registered trademarks of Cisco
and/or its affiliates in the U.S. and other countries. To view a list of Cisco
trademarks, go to this URL: www.cisco.com/go/trademarks. Third-party
trademarks mentioned are the property of their respective owners. The use
of the word partner does not imply a partnership relationship between Cisco
and any other company. (1110R)
Pearson’s Commitment to Diversity, Equity, and
Inclusion
Pearson is dedicated to creating bias-free content that reflects the diversity
of all learners. We embrace the many dimensions of diversity, including but
not limited to race, ethnicity, gender, socioeconomic status, ability, age,
sexual orientation, and religious or political beliefs.
Education is a powerful force for equity and change in our world. It has the
potential to deliver opportunities that improve lives and enable economic
mobility. As we work with authors to create content for every product and
service, we acknowledge our responsibility to demonstrate inclusivity and
incorporate diverse scholarship so that everyone can achieve their potential
through learning. As the world’s leading learning company, we have a duty
to help drive change and live up to our purpose to help more people create a
better life for themselves and to create a better world.
Our ambition is to purposefully contribute to a world where
While we work hard to present unbiased content, we want to hear from you
about any concerns or needs with this Pearson product so that we can
investigate and address them.
Please contact us with concerns about any potential bias at
https://www.pearson.com/report-bias.html.
Figure Credits
Figures 4-1 through 4-3, 6-1 through 6-19, 8-1 through 8-62, 9-1 through
9-34, and 10-1 through 10-6: Al-shawi, M., CCDE Study Guide, 1st ed.,
© 2016. Reprinted by permission of Pearson Education, Inc.
Figures 11-1 through 11-5: Henry, J., CCNP Wireless Design, © 2021.
Reprinted by permission of Pearson Education, Inc.
Figures 13-1 through 13-18, 14-1 through 14-20, 15-1 through 15-20, 16-1
through 16-10, 17-1 through 17-8, 17-9A and 17-9B, and 17-10 through
17-30: Al-shawi, M., CCDE Study Guide, 1st ed., © 2016. Reprinted by
permission of Pearson Education, Inc.
About the Author
Zig Zsiga, CCDE 2016::32, CCIE #44883, has been in the networking
industry for 20 years. He is currently a principal architect supporting the
Cisco CX US public sector business and customers. Zig holds an active
CCDE and two CCIE certifications, one in Routing and Switching and the
second in Service Provider. He also holds a bachelor of science in computer
science from Park University. He is a father, a husband, a United States
Marine, a gamer, a nerd, a geek, and a big soccer fan. Zig loves all
technology and can usually be found in the lab learning and teaching others.
This is his second published book, and he is also the host of the Zigbits
Network Design Podcast (ZNDP), where he interviews leading industry
experts about network design. All of Zig’s content is located at
https://zigbits.tech. Zig lives in Upstate New York, USA, with his wife,
Julie, and their son, Gunnar.
About the Technical Reviewers
Martin J. Duggan, CCDE #2016::6 and CCIE #7942, is a principal
network architect designing network solutions for global financial accounts
at Systal Technology Solutions. Martin gained his CCIE Routing and
Switching certification in 2001 and has been passionate about Cisco
qualifications and mentoring ever since. He wrote the CCIE Routing and
Switching Practice Labs series and CCDE v3 Practice Labs titles for Cisco
Press and provides content for multiple Cisco exam tracks. Martin resides
in the UK and enjoys gliding, cycling, snowboarding, and karate when not
designing networks. Follow Martin on Twitter @Martinccie7942.
Nicholas (Nick) Russo, CCDE #20160041 and CCIE #42518, is an
internationally recognized expert in IP/MPLS networking and design. To
grow his skillset, Nick has been focused on advancing network DevOps via
automation for his clients. Recently, Nick has been sharing his knowledge
through online video training and speaking at industry conferences. Nick
also holds a bachelor of science in computer science from the Rochester
Institute of Technology (RIT). Nick lives in Maryland, USA, with his wife,
Carla, and daughters, Olivia and Josephine.
Dedications
You, the reader, will never truly know the journey this book endured to
reach completion. I would like to share with you the CliffsNotes version, so
please bear with me. I had truly wanted to complete this book months
ago…even a year ago. I wanted this book in your hands as it is now, to be
used as a resource for making better network design decisions and for
passing the CCDE exam. But good intentions are just that…intentions. As
with anything, there is a path a book must take. This book most definitely
took the path that was not paved. It had to persevere through a global
pandemic, numerous family emergencies, including my own, and a number
of other global events that we don’t have the time to dive into here.
This book made it through it all. I promise you it is not perfect; I can only
imagine what I missed or got wrong in this process, but it is here, and it is a
resource for you, the network designer and CCDE candidate. I hope as you
read through these chapters you feel the passion I have for this industry,
network design, and the CCDE. This book, with all of the time and energy
put into it, is for you.
I dedicate this book to you, the reader, the CCDE candidate, and the
network design expert. May it help you on your network design journey!
Acknowledgments
I had this great idea to write a book, and while I had limited experience in
the area of writing, I figured it was going to be a piece of cake. Nothing
could have been further from the truth. Writing this book was much harder than I thought it would
be. I didn’t know what I didn’t know and, frankly, this book would not have
been possible without the help of many people.
Thank you, Marwan Al-shawi, for graciously letting me leverage the truly
outstanding content in your CCDE Study Guide. Your content is truly
remarkable, and it didn’t make sense to re-create it when it still applies
today. Thank you!
Thank you to my two technical editors, Martin Duggan and Nick Russo.
You both kept me honest and humble throughout this experience. I could
not have asked for a better technical team. I can only imagine the thoughts
that went through your heads when reading some of the chapter
drafts…“What is Zig doing now?” I thank you both for being great network
designers, and better friends than I could ever ask for.
Thank you, Dave Lucas, for your contributions in Chapter 8. I sincerely
appreciate you.
Thank you to my Cisco leadership team: Mike Solomita, Maurice DuPas,
Jim Lien, and Fred Mooney. You all gave me the space, time, and
overwhelming encouragement to make this project happen. Thank you for
always supporting me and my career aspirations.
Thank you, Elaine Lopez and Mark Holm, as without the two of you…well,
I can’t even imagine where we would all be without the CCDE. You both
have been great mentors, colleagues, and friends. Thank you for always
being available for my random and crazy questions. Mark, we have some
work to do, my friend!
Thank you, Nancy Davis and Ellie Bru. This entire process would have
been truly unbearable without the two of you. You have been with me from
the start. Thank you for being so understanding about all the life events that
happened during the writing of this book. Thank you for always being
willing to help and guide me throughout this journey. From helping me with
writer’s block to creating figures to author reviews, you two have been truly
amazing. You are truly the A team!
Thank you to my wife, Julie. Julie, you have always supported me with
everything I strive to do in my life. You gave me critical advice throughout
this journey when I truly needed it. You kept me honest and let me know
when I was just being dumb, which happened a few times. You were my
sounding board on all things as they happened in real time. You are my
rock, my constant, and my muse. I am truly lucky to be able to journey
through our life side by side. Thank you from the bottom of my heart and
always remember I love you the Everest! Kilo!
To my son, Gunnar. This book, and the journey I took to write it, is a perfect
example of how you can set a goal, attack that goal, and achieve it. Will you
know the steps to take to achieve every goal in your life? Most definitely
not. I didn’t know half of the steps to complete this book, but here it is. Will
there be roadblocks, pitfalls, and hurdles in the way? Of course. What
matters is what you do when you encounter one. It’s what you do when you
find something standing in your way of achieving that goal. Will you have
to sacrifice to make it happen? Most likely. That sacrifice might be time,
energy, or sleep, but you will most likely have to endure it to achieve your
goal. Once again, take this as an example that you can literally do anything
you set your mind to in life; just set your mind to it and make it happen.
Reader Services
Register your copy at www.ciscopress.com/title/9780137601042 for
convenient access to downloads, updates, and corrections as they become
available. To start the registration process, go to
www.ciscopress.com/register and log in or create an account. Enter the
product ISBN, 9780137601042, and click Submit. When the process is
complete, you will find any available bonus content under Registered
Products.
*Be sure to check the box that you would like to hear from us to receive
exclusive discounts on future editions of this product.
Contents at a Glance
Introduction
Glossary
Index
Online Elements:
Appendix C Memory Tables
Appendix D Memory Tables Answers
Appendix E Study Planner
Icons Used in This Book
Command Syntax Conventions
The conventions used to present command syntax in this book are the same
conventions used in the IOS Command Reference. The Command
Reference describes these conventions as follows:
The network design topics covered in this book aim to prepare you to be
able to
Whether you are preparing for the CCDE certification or just want to be a
better network designer, you will benefit from the range of topics covered
and the business success approach used to analyze, compare, and explain
these topics to make proper design decisions.
Print book: Look in the cardboard sleeve in the back of the book for a
piece of paper with your book’s unique PTP code.
Premium Edition: If you purchase the Premium Edition eBook and
Practice Test directly from the Cisco Press website, the code will be
populated on your account page after purchase. Just log in at
www.ciscopress.com, click Account to see details of your account,
and click the Digital Purchases tab.
Amazon Kindle: For those who purchase a Kindle edition from
Amazon, the access code will be supplied directly from Amazon.
Other Bookseller E-books: Note that if you purchase an e-book
version from any other source, the practice test is not included because
other vendors to date have not chosen to vend the required unique
access code.
Note
Do not lose the activation code, because it is the only means by
which you can access the QA content associated with the book.
Once you have the access code, to find instructions about both the PTP web
app and the desktop app, follow these steps:
Step 1 Open this book’s companion website, as described in the previous
section.
Step 2 Click the Practice Exams button.
Step 3 Follow the instructions listed there both for installing the desktop
app and for using the web app.
Note that if you want to use the web app only at this point, just navigate to
www.pearsontestprep.com, establish a free login if you do not already have
one, and register this book’s practice tests using the registration code you
just found. The process should take only a couple of minutes.
Note
Amazon eBook (Kindle) customers: It is easy to miss Amazon’s
email that lists your PTP access code. Soon after you purchase the
Kindle eBook, Amazon should send an email. However, the email
uses very generic text, and makes no specific mention of PTP or
practice exams. To find your code, read every email from Amazon
after you purchase the book. Also do the usual checks for ensuring
your email arrives, like checking your spam folder.
Note
Other eBook customers: As of the time of publication, only the
publisher and Amazon supply PTP access codes when you purchase
their eBook editions of this book.
Note
This book covers only the “CCDE v3 Unified Exam Topics”
blueprint and the “CCDE v3 Core Technology List,” as they
encompass all the knowledge areas required for the CCDE Written
Exam.
Table I-1 lists each section in the “CCDE v3 Unified Exam Topics”
blueprint along with a reference to the book chapter that covers the
corresponding topic.
Table I-2 lists each section in the “CCDE v3 Core Technology List” along
with a reference to the book chapter that covers the corresponding topic.
These are the same topics you should be proficient in when designing
networks and making proper network design decisions in the real world.
Note
The two topic lists covered in Table I-1 and Table I-2 below are
current as of the book’s writing, but may be subject to updates, so
always check the blueprint at cisco.com.
4.2.f Security 4
5.1.a Segmentation 4, 10
5.1.c Visibility 4
1.1 Ethernet 6
1.2 CWDM/DWDM 6
1.5 Wireless 11
CCDE v3 Core Technology List | Chapter(s) in Which Topic Is Covered
2.2.c Multipath 7
2.5.b Redundancy 13
2.5.c Virtualization 13
2.5.d Segmentation 13
3.2.d Scalability 8
3.3.a Protocols 8, 9
3.3.b Timers 8, 9
3.3.c Topologies 8, 9
3.4.a Recursion 8
3.4.b Micro-loops 8
3.6.b Redundancy 8
3.8.b NAT 10
3.8.c Subnetting 8
3.9.b MSDP/anycast 13
3.9.c PIM 13
4.1.c LDP 9
4.3 SD-WAN 9
4.3.e Segmentation 9
4.3.f Policy 9
5.0 Security 4, 10
5.4.b AAA for network access with 802.1X and MAB 10, 11
6.0 Wireless 11
6.2.e AP groups 11
6.2.f AP modes 11
7.0 Automation 12
Each version of the exam can have topics that emphasize different functions
or features, and some topics can be rather broad and generalized. The goal
of this book is to provide the most comprehensive coverage to ensure that
you are well prepared for the exam. Although some chapters might not
address specific exam topics, they provide a foundation that is necessary for
a clear understanding of important topics. Your short-term goal might be to
pass this exam, but your long-term goal should be to become a qualified
network designer that can make proper network design decisions that help
to make businesses successful.
It is also important to understand that this book is a “static” reference,
whereas the exam topics are dynamic. Cisco can and does change the topics
covered on certification exams often.
This exam guide should not be your only reference when preparing for the
certification exam. You can find a wealth of information available at
Cisco.com that covers each topic in great detail. If you think that you need
more detailed information on a specific topic, read the Cisco documentation
that focuses on that topic.
Note that as technologies and architectures continue to develop, Cisco
reserves the right to change the exam topics without notice. Although you
can refer to the list of exam topics in Tables I-1 and I-2, always check
Cisco.com to verify the actual list of topics to ensure that you are prepared
before taking the exam. You can view the current exam topics on any
current Cisco certification exam by visiting the Cisco.com website,
choosing Menu, then Training & Events, and then selecting from the
Certifications list. Note also that, if needed, Cisco Press might post
additional preparatory content on the web page associated with this book at
http://www.ciscopress.com/title/9780137601042. It’s a good idea to check
the website a couple of weeks before taking your exam to be sure that you
have up-to-date content.
Part I: What is Network Design?
Chapter 1
Network Design
This chapter covers the following topics:
Caution
The goal of self-assessment is to gauge your mastery of the topics in
this chapter. If you do not know the answer to a question or are only
partially sure of the answer, you should mark that question as wrong
for purposes of the self-assessment. Giving yourself credit for an
answer you correctly guess skews your self-assessment results and
might provide you with a false sense of security.
Foundation Topics
Designing large-scale networks to meet today’s dynamic business and IT
needs is a complex assignment. This is especially true when the network
was designed for technologies and requirements relevant years ago and the
business decides to adopt new IT technologies and architectures to facilitate
the achievement of its goals, but the business’s existing network was not
designed to address these new technologies’ requirements. Therefore, to
achieve the desired goal of a given design, the network designer must adopt
an approach that tackles the design in a structured manner.
There are two common approaches to analyze and design networks:
Mindset
Requirements
Design use cases
The business
Constraints
“Why”
Mindset
Above all else, your mindset is the most important factor for obtaining the
CCDE certification. Knowing the technology is critical, but it is the
relatively easy portion of this journey. We can put in the effort and time to
learn what we don’t know from a technology perspective. Many candidates
who attempt the CCDE, and network design in general, do not have a proper design
mindset. Most of us are not taught a proper design mindset until later in our
careers. In this section we are going to highlight the different elements of a
proper design mindset that can make you successful in all of your design
situations, be it the CCDE or any network design situation. Mindset is one
of six network design fundamentals.
An implementation mindset will not work for network design; you need to
have a network design mindset to be successful both in network design
situations and on the CCDE exam. This section starts to cover the items that
need to be incorporated in a network designer’s mindset.
Functional Requirements
Functional requirements compose the foundation of any system design
because they define system and technology functions. Specifically,
functional requirements identify what these technologies or systems will
deliver to the business from a technological point of view. For example, a
Multiprotocol Label Switching (MPLS)-enabled service provider might
explicitly specify a functional requirement in a statement like this: “The
provider edge routers must send VoIP traffic over 10G fiber link while data
traffic is to be sent over the OC-48 link.” It is implied that this service
provider network needs to have provider edge (PE) routers that support a
mechanism capable of sending different types of traffic over different paths,
such as MPLS Traffic Engineering (MPLS-TE). Therefore, the functional
requirements are sometimes referred to as behavioral requirements because
they address what a system does.
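As a sketch, a functional requirement like the one above might translate into an MPLS-TE tunnel pinned to the 10G fiber path, with class-based tunnel selection steering VoIP (EXP 5) traffic onto it. All interface names, addresses, and tunnel numbers below are assumptions for illustration only:

```
! Hypothetical PE sketch: TE tunnel over the 10G fiber link carrying EXP 5
! (VoIP) traffic. Names and addresses are assumed, not taken from the text.
mpls traffic-eng tunnels
!
interface TenGigabitEthernet0/1
 description 10G fiber link toward the core
 mpls traffic-eng tunnels
!
ip explicit-path name VIA-10G-FIBER enable
 next-address 192.0.2.2
!
interface Tunnel10
 description VoIP path over 10G fiber
 ip unnumbered Loopback0
 tunnel mode mpls traffic-eng
 tunnel destination 10.255.0.2
 tunnel mpls traffic-eng path-option 1 explicit name VIA-10G-FIBER
 tunnel mpls traffic-eng exp 5   ! member tunnel carries only EXP 5 traffic
```

A complete design would also need RSVP and IGP TE extensions enabled, a second tunnel (or the IGP path) for data traffic over the OC-48 link, and a CBTS master tunnel; the point here is simply that the stated requirement implies a traffic-steering mechanism such as MPLS-TE.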
Note
A design that does not address the business’s functional
requirements is considered a poor design; however, in real-world
design, not all the functional requirements are provided to the
designer directly. Sometimes they can be decided on indirectly, based
on other factors. Most of the time, it is the responsibility of the
network designer to find and document the functional requirements,
in which case the network designer would also need to have proper
sign-off on them before initiating the network design.
Technical Requirements
The technical requirements of a network can be understood as the
technical aspects that a network infrastructure must provide in terms of
security, availability, and integration. These requirements are often called
nonfunctional requirements. Technical requirements vary, and they must be
used to justify a technology selection. In addition, technical requirements
are considered the most dynamic type of requirement compared to others,
such as business requirements, because they change frequently as
technologies evolve. Technical requirements include the following:
Note
Technical requirements help network designers specify the
required technical specifications (features and protocols) and the
software versions that support those specifications, and they
sometimes influence the hardware platform selection as well.
Application Requirements
Application requirements are the driving factors that dictate and in most
cases constrain a network design. If an application requires
Layer 2 connectivity, it limits the design to Layer 2 approaches, such as
spanning VLANs between data centers. From a business point of view, user
experience is one of the primary, if not the highest, priority that any IT and
network design must satisfy. The term end users can be understood
differently according to the type of business. The following are the most
common categories of end users:
Design Scope
It is important in any design project that network designers carefully
analyze and evaluate the scope of the design before starting to gather
information and plan network design. Therefore, it is critical to determine
whether the design task is for a greenfield network or for a current
production network. It is also vital to determine whether the design spans a
single network module or multiple modules. In other words, the
predetermination of design scope can influence the type of information
required to be gathered, in addition to the time to produce the design. Table
1-2 shows an example of how identifying the design scope can help
network designers determine the areas and functions a certain design must
emphasize and address. As a result, the scope of the information to be
obtained will be more focused on those areas.
Optimize enterprise edge availability: Add a redundant link for remote
access, which might require redesign of the WAN module and remote site
designs and configuration such as overlay tunnels
Note
Identifying the design scope in the CCDE exam is very important.
For example, the candidate might have a large network to deal with,
whereas the actual design focus is only on adding and integrating a
new data center. Therefore, the candidate needs to focus on that part
only. However, the design still needs to consider the network as a
whole, a “holistic approach,” when you add, remove, or change
anything across the network.
Note
Identifying what is out of scope is equally important. This can
protect against scope creep, which can hinder the success of a
project altogether. Furthermore, what’s not in scope can also limit
what is available to the network designer in their design decisions.
Greenfield
A greenfield network design use case is one of the best situations for
network designers to encounter. It’s a clean slate or a clean canvas for you
to paint your picture on, but in this case, you are designing, architecting,
and building an environment from scratch. Make sure what you are
suggesting is actually needed by the business!
Brownfield
Most of your network design situations will include a brownfield network
design use case in some form. A brownfield use case is when there is
already an environment with production traffic running through it. It is
recommended you spend some time up front to discover the current state
and properly assess it. This isn’t just technical discovery with protocols and
diagrams though. You need to discover the business and the associated lines
of effort as well. Your goal here is to understand what the business is trying
to accomplish before you start making any design decisions.
Once you do make design decisions, make sure you prepare for the
migration to the new design. Limit the potential failures, if any, and make
sure to rely on the network design fundamentals covered earlier in this
chapter and network design principles and network design techniques
covered later in this chapter.
Replace Technology
The replace technology use case covers a wide range of options to replace
an existing technology to meet certain requirements. It might be a WAN
technology, routing protocol, security mechanism, underlying network core
technology, or some other technology. The implications of the new
technology for the current design, such as enhanced scalability or potential
conflict with existing application requirements, also require network
designers to tailor the design so that these technologies work
together rather than in isolation to reach objectives such as delivering
business applications and services.
Note
Make sure that when you are replacing a technology or adding new
technology, you are doing it for the correct reasons. Deploying SD-
WAN to replace your WAN architecture without a valid reason is not
the way to go. Make sure there are direct business justifications for
you to do what you are doing. When in this design use case, ensure
you have a properly tested migration plan. If you were migrating
from OSPF to EIGRP, you should have a plan listing each step along
the way. Also, each step should have a validation task to ensure the
migration is going as expected, and a proper backout plan in case
something goes wrong.
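As a minimal sketch of such a staged plan (addresses and process/AS numbers are assumptions), one common approach is to run EIGRP alongside OSPF with a temporarily raised administrative distance, validate, and only then cut over:

```
! Step 1: enable EIGRP alongside OSPF, but keep OSPF preferred by raising
! EIGRP's administrative distance above OSPF's default of 110.
router eigrp 100
 network 10.0.0.0 0.255.255.255
 distance eigrp 115 115
!
! Step 2 (validation task): confirm adjacencies and learned topology with
!   show ip eigrp neighbors
!   show ip eigrp topology
!
! Step 3 (cutover): restore EIGRP's default distances (90 internal) so it
! becomes preferred, then remove OSPF once routing is verified.
router eigrp 100
 no distance eigrp 115 115
!
! Backout plan: reapply "distance eigrp 115 115" to fall back to OSPF.
```

Each step maps to an entry in the migration plan, with its own validation command and a backout action, which is exactly the structure the note above calls for.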
Merge or Divest
The merge or divest use case covers the implications and challenges
(technical and nontechnical) associated with merging or separating different
networks. This can be one of the most challenging use cases for network
designers because, most of the time, merging two different networks means
integrating two different design philosophies, in which multiple conflicting
design concepts can appear. Therefore, at a certain stage, network designers
have to bring these two different networks together to work as one cohesive
system, taking into consideration the various design constraints that might
exist, such as goals for the merged network, security policy compliance,
timeframe, cost, the merger constraints, the decision of which services to
keep and which ones to divest (and how), how to keep services up and
running after the divestiture, what the success criteria are for the merged
network, and who is the decision maker. The following are some examples
of each of these use cases and what you as a network designer could see:
Scaling a Network
The scaling use case covers different types of scalability aspects at different
levels, such as physical topology, along with Layer 2 and Layer 3
scalability design considerations. In addition, the challenges associated with
the design optimization of an existing network design to offer a higher level
of scalability are important issues in this domain. For example, there might
be some constraints or specific business requirements that might limit the
available options for the network designer when trying to optimize the
current design. Considerations with regard to this use case include the
following: Is the growth planned or organic? Are there issues caused by the
growth? Should a network designer stop and redesign the network to
account for growth? What is the most scalable design model?
For example, you could be brought in to help solve a problem with a
technology or architecture. It could be as simple as a single flat area 0
OSPF design that no longer scales to the business requirements. In this
case, you could leverage multiple areas, multiple area types, and LSA
filtering techniques if needed.
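A hedged sketch of that kind of restructuring on an ABR might look like the following, with all addressing and area numbers assumed for illustration:

```
! Hypothetical ABR sketch: break the flat area 0 into multiple areas,
! summarize branch prefixes, and cut LSA flooding with a stub area type.
router ospf 1
 network 10.0.0.0 0.0.255.255 area 0
 network 10.10.0.0 0.0.255.255 area 10
 area 10 stub no-summary              ! totally stubby: branch sees only a default
 area 10 range 10.10.0.0 255.255.0.0 ! summarize branch prefixes into area 0
```

Which area types, summaries, and filters are appropriate depends on the actual requirements; the point is that scaling problems in a flat design are usually solved structurally, not by tuning timers.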
Design Failure
Nine times out of ten, design failure is the design use case you will be
brought in to fix. There is a problem, and you have to resolve it. An analogy
is working at a hospital as an emergency room doctor, tasked with
identifying the problems people have and resolving them as quickly as
possible. This is the exact same situation for you as a network designer
when you are dealing with a design failure use case. A simple technical
example of this is not aligning the critical roles of Spanning Tree Protocol
(STP) and First Hop Redundancy Protocol (FHRP). If your STP root bridge
and your FHRP default gateways are not aligned to the correct devices, then
you would have a suboptimal routing issue that would eventually lead to a
design failure.
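A hedged sketch of the fix on the distribution switch that should hold both roles (the VLAN, addresses, and priorities are assumptions for illustration):

```
! Align the STP root bridge and the HSRP active gateway for VLAN 10 on the
! same distribution switch so the Layer 2 and Layer 3 paths match.
spanning-tree vlan 10 root primary
!
interface Vlan10
 ip address 10.1.10.2 255.255.255.0
 standby 10 ip 10.1.10.1        ! virtual gateway address for the VLAN
 standby 10 priority 110        ! higher than the peer's default of 100
 standby 10 preempt             ! reclaim the active role after recovery
```

The peer distribution switch would carry the matching secondary configuration (for example, spanning-tree root secondary and the default HSRP priority) so that both roles fail over together.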
The Business
Why do we make network design decisions?
Network engineers, network designers, and network architects tend to
design networks without the correct purposes in mind. In most of these
situations, there wasn’t any reason for the design decisions made. When
asked why they did what they did, they might state “just because,” “it’s how
we have always done it,” or “it’s in the script, and we just copy it…”
This is not a path for success, not for you or your designs. The point of all
of this is to answer the questions “Why do we design?” and “Why do we
make design decisions?” The answers are actually pretty simple and
straightforward but are not always clear.
As network designers, we do what we do for the sake of the businesses,
companies, and organizations we support. Specifically, we make network
design decisions for businesses so that those businesses can make money.
This is not always the golden rule, but it is the situation more often than not.
I remember having a conversation with a CIO at a company I worked at
about 10 years ago, and at that time he was telling me I needed to care more
about the business…I remember saying “The business doesn’t matter; the
technology is the only thing that matters!” I was totally and 100 percent
naive.
Of course, there are cases where a business does not operate to make a
profit. These not-for-profit organizations are more concerned with covering
their expenses and reducing their day-to-day costs than making large profits
on what they provide. These organizations have different goals, different
outcomes that they are trying to achieve. In the public sector market, many
organizations are focused not on making money but on providing a specific
service or addressing a specific goal. These include, for example, local,
state, and federal government agencies, public safety services (police, fire,
ambulance), and environmental protection organizations.
We will discuss the business as a network design fundamental in more
depth in Chapter 2, “Designing for Business Success.”
Note
It has become more common to document all design decisions, and
why they were made, in a network design binder of sorts. This allows
all team members involved, past, present, and future, to be able to
understand why a feature, design option, or functionality was
implemented. Remembering why we made a design decision 6
months ago is much easier than remembering 2 years down the road
why we added that routing adjustment at 2 a.m. during a maintenance
window. Document everything to the best of your ability.
Constraints
When we are designing an architecture for a customer, we start out clean
with no requirements, no rules to follow. As we interview the customer, we
learn about the different requirements and, more importantly, the specific
constraints that box us in. Constraints are one of six network design
fundamentals and fall into three categories: Business, Application, and
Technology constraints.
These constraints are hard rules and limitations network designers have to
follow. Think of these rules as the scientific laws we learn in grade school,
such as Newton’s three laws of motion. These “laws of design” have the
potential to change with each business and environment you encounter. No
network design situation is exactly the same, as each one has its own
business, application, and technology constraints.
Constraints are everywhere and come in a multitude of forms. There will
always be constraints. It’s up to you as the network designer to know which
constraints are applicable in each situation. Don’t assume constraints.
Properly qualify each constraint with evidence.
The following are the most common constraints that network designers
must consider:
Note
In some situations, if the proposed solution and technologies will
save the business a significant amount of money, you can justify the
cost of hiring new staff.
Security
Scalability
Availability
Cost
Manageability
Before we jump into the five network design principles, there is a new
concept that we need to cover that doesn’t exactly fit into any specific
network design functional area, but it is imperative that all network
designers know and understand it. This is called unstated requirements.
Unstated Requirements
It has become increasingly common for customers not to come out and
articulate their specific requirements. They assume requirements, and you
as the network designer have to figure out what requirements are important
to the network design. Your job is not done after you have identified the
requirements, though; you also have to determine the level of each
requirement. For example, does this network require no single points of
failure or does this network require no dual points of failure?
You need to keep this concept of unstated requirements at the top of your
mind, because every network design principle has become an unstated
requirement.
Pervasive Security
Historically, security hasn’t been identified as a network design principle.
It’s been added here as a network design principle because of the impact it
has on the overall business. Here is a list of questions to help set the stage on
why security is a network design principle:
The following are three security models that you should know as a network
designer. The industry has been shifting between these models over the last
20-plus years.
Shifting of Availability
Normally when we talk about network design principles, we talk about
resiliency, reliability, fault tolerance…the list is never-ending. Availability
includes all of these topics: redundancy, resiliency, reliability, and much
more; this is why availability is one of the five network design principles.
Here is where unstated requirements start to come into play.
The need for availability is just assumed today. What level of availability is
needed is the true question. Once that is identified, then a network designer
must assess the complexity and cost, both monetary and non-monetary, of
that level of availability.
As you increase the level of availability, the complexity and associated cost
increase with it. A simple example, previously mentioned, is removing
single points of failure versus removing dual points of failure. The latter
option increases both the cost and complexity exponentially.
If you encounter a situation in which a customer or stakeholder says they
need a highly available network that never allows an application outage,
you should be ready to provide them with a couple of estimates of the cost
associated with making an environment with that level of availability. Make
them realize what it truly is going to cost to have such a highly available
environment. Show them the value and the cost.
A question arises when availability is in the forefront of the network design
discussion: What level of redundancy, resiliency, and reliability is too
much? The answer is, when the increased complexity, cost, and the
associated return for availability are not worth it.
As an example, is it proper to design a solution that has eight Layer 2 or
Layer 3 links between devices? It is normal to suggest redundant links for
most devices in an architecture, with two links being the simplest option. In
some designs and architectures, four links is actually preferred, and there
are valid reasons for this. With that said, five or more links tends to add
more complexity to the environment, with a higher level of cost, and has a
diminishing return on the benefits of availability.
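The diminishing return can be quantified with a simple parallel-availability calculation. If each link is independently available with probability a, then with n links in parallel (assuming a = 0.99 purely for illustration):

```latex
A(n) = 1 - (1 - a)^{n}
\qquad
A(2) = 1 - (0.01)^{2} = 99.99\%, \qquad
A(4) = 1 - (0.01)^{4} = 99.999999\%
```

Going from four links to five moves the theoretical availability only in the tenth decimal place, while the cost and operational complexity keep rising, which is exactly the diminishing return described above.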
Network designers should apply this concept to all aspects of network
design decisions: routing adjacencies, Layer 3 links, Layer 2 links, devices,
pod architecture, aggregation, and core layers, and so on.
The large shift with availability is that the focus of network design is no
longer network availability but rather application and service availability.
Why does a network exist in the first place? The network is a service,
getting data from point A to point B at the right time so that it can be
properly leveraged. The network facilitates all of this, but it is seamless to
the “resources.” This is the shift in your design mindset that has to happen.
As an analogy, the network is the plumbing to the running water in our
house, and data is the water. Without the network, data cannot arrive.
This is a larger concept now than just data traversing the network. End
users don’t understand what the data is, or the bytes and bits on the wire,
and they don’t need to. Perspective matters here. The end user only cares
whether their application works when they go to use it.
For those of you who can remember the days when everyone had a landline
at home, did you ever pick up a POTS (plain old telephone service) phone
and not have a dial tone? I don’t recall ever picking up a POTS phone and it
not working. This is analogous to where the network sits now. If we pick up
a VoIP phone and it doesn’t work, what’s the impact? If a user tries to
access email but it doesn’t load, what’s the impact? If the cloud provider
that hosts your company’s Software as a Service (SaaS) application has an
outage and your customers can no longer access your SaaS application,
what’s the impact?
As network designers, we have to identify the required level of availability
for applications and services (again, in most cases requirements are unstated
by the customer). We have to strive to maximize this identified level of
availability while keeping the constraints in mind. We have to partner with
the application owners to truly understand the requirements and
interdependencies each application and service has. Not all customers know
their applications and what they are supposed to be doing. From a network
design perspective, each application will leverage different portions of the
network. You will need to properly identify what the application is
dependent on and make appropriate design decisions to ensure the required
level of availability for that application is achieved.
Limiting Complexity…Manageability
Probably one of the hardest tasks you will have as a network designer is to
manage the complexity level of the design you are proposing. You have to
keep it super simple (KISS). This is why manageability is one of the five
network design principles. When comparing different design options that
provide the same capability, choosing the simpler option is the way to go. A
great question to ask yourself at this stage is, “Can the network design I’m
proposing be managed by the team at hand?” For example, suppose you
have a network design that meets the customer’s needs. It is highly
available, secure, redundant, and cost effective. However, your design has
multiple CCIE-level design elements, but the customer doesn’t have any
CCIE skilled professionals on staff. How can your customer manage this
design? How can your customer troubleshoot this design when there is a
problem? This is an issue to consider. You as a network designer need to
assess the team that will be owning this design and managing it day to day.
They need to understand what is being done within the environment and
why. The why here is actually more important than the how or what.
As a network designer, you cannot design a solution that is unmanageable
by the staff who will operate the network. However, there are situations
where no other choice exists but to leverage a more complex design. In this
situation, if the local team does not have the skill sets to manage the design,
you have to raise the issue with the business. This is where you have to
assume the role of trusted advisor and tactfully explain to the business that
they need higher-level skilled professionals to manage and maintain the
network environment that the business requires. When you do this, you
need to show the business why they need a complex solution in their
terminology, not in technical terms. You need to show them the impact and
the why.
Obfuscating the complexity of a solution still yields a complex solution.
Leveraging other technology to hide the complexity of an environment does
not make it a simple solution. It might make it a manageable solution, but
it’s still a complex solution. Oftentimes, the obfuscated solution becomes
more complex. You have to understand not only the original complex
environment, but also the technology being leveraged that is hiding that
complex environment. Leveraging a GRE tunnel to form a routing
adjacency over a complex OSPF multi-area design is a perfect example.
Here you are hiding the “underlay” complex network by forming an
“overlay” tunnel that you can then create a routing adjacency on top of. The
original network is still complex, and just because the GRE tunnel makes it
seem less complex does not mean it is.
Failure Isolation
Taking a closer look at the campus architecture depicted in Figure 1-3, there
is a large failure domain that spans the data center, the west campus, and the
east campus. A failure domain is an area in which an outage can propagate.
Figure 1-4 shows an example of a failure situation.
Hierarchy of Design
Hierarchy of design is the idea of creating dedicated levels for different
purposes within the architecture you are building. The traditional
hierarchy model consists of the access, distribution (also called
aggregation), and core layers.
Let’s use our higher education architecture again, but this time we are going
to focus on the west campus. As shown in Figure 1-19, we have eight west
campus locations connecting to the core pod. This is an extremely flat
architecture that lacks hierarchy. By breaking this up into distribution and
access layers, implementing a hierarchy of design, we can create a robust
and scalable architecture.
Making Assumptions
Here you are, all ready to help your customer, which is your company in
this scenario, resolve a business problem. You step in and start leading the
design discovery phase of the engagement or initiative. As you facilitate
conversations with the different stakeholders you are working with, you
start to think that complexity is bad. Maybe this was your thought from the
beginning of the meeting—that this environment, this customer, your
company, doesn’t want a complex solution.
Here is the trap of making an assumption and not properly validating that
assumption.
In any design situation, you never want to assume anything. Don’t assume
complexity is bad. Sure, you can have assumptions, but don’t make any
design decisions based on an assumption until you validate that the
assumption you have is correct.
Validate your assumptions with questions to your customer, the business
stakeholders, and your team. Leverage and lead workshops to drive
customer dialog, but let them talk. Don’t feel like you need to talk the entire
time. Listen attentively, process what is being discussed, and take notes so
you understand the customer and the situation better.
If you are assuming the customer doesn’t want a complex solution, ask
questions about how the customer is managing their network today. Could
they manage a complex solution in this same way? Map everything back to
the business and the requirements. If you do this properly, you might
discover that the customer’s needs require that you design a complex
solution. This is out of your hands, and it is perfectly acceptable. Just don’t
assume complexity is bad. Don’t assume that the customer has a limited
budget. Don’t assume the customer isn’t open to new technologies or
solutions that they’ve never used before. To reiterate, if you do assume,
make sure you validate your assumptions before making any design
decisions.
What about receiving the full IPv4 Internet table on your Internet edge
routers? In the past, some network designers would stick to this as the
way to do things. They would assume and suggest this as a design
because it provided so much value and flexibility to the environment.
This was overdesigning the solution. There was no business need or
requirement to do this.
How about running an MPLS network at a service provider? Do you
design MPLS-TE tunnels with sub-second failover, node protection,
and link protection, or would the IGP timers be enough to meet the
requirements at hand? The former solution, which would be
considered gold plating, adds a ton of complexity and additional work
to set up the solution, let alone the amount of time and resources that
would now be needed to manage this solution on a daily basis.
How about studying for your CCIE exam? Have you put a technology
or solution into a production network just because you wanted to see how
it worked? This has been done numerous times, even though there was no
requirement for that solution in that environment at that time.
You have to catch yourself. The easiest way to mitigate this pitfall of
overdesigning and gold plating is to hyperfocus on the requirements.
Everything you do should have a direct business requirement that it maps
to. This won’t be a one-to-one mapping; it will be a one-to-many mapping,
the business requirement mapped to many design decisions.
Best Practices
Have you ever made a design decision and your justification was “Because
it’s best practice”? Did you fully understand the implications of what you
were doing, or did you fill in the “best practice” variable with your own
personal biased design, solution, or architecture?
A lot of us in the network design field make “best practice” calls without
fully understanding the design implications of those decisions. Let’s start
with a simple example. Why do we enable an OSPF interface as a point-to-
point interface? Is there a business requirement for it? Probably a better
question to ask and answer is whether there is a business requirement that
we are breaking because of this “best practice” decision.
Now, how about a more complex and realistic example. What about
implementing sub-second failover for an IGP versus less than 5 seconds
failover for an IGP? Are we implementing sub-second failover because it’s
“best practice” or are we correlating this choice to a business need?
As network designers, we cannot fall into the trap of “best practices.”
Instead, we take into consideration the best practices and we modify them,
tweak them for each of our design decisions based on the business
requirements. Just because something is best practice doesn’t mean it’s
going to work that way in your design. Sometimes you have to build and
design a network that is not preferred from a best practice standpoint, and
you do this because of the requirements.
A perfect example of this would be spanning Layer 2 by leveraging STP
between two data centers. This is something most network designers
wouldn’t want to do. Spanning Layer 2 like this creates a large failure
domain between these two data centers. Sometimes we as network
designers have no choice, especially when there are application
requirements that force this design option to solve the application
constraints. Nowadays there are better options to span Layer 2 between
multiple sites than to leverage STP, but the implication is still the same.
Preconceived Notions
Preconceived notions are pretty similar to assumptions, but they are
shaped by outside information and your past experiences.
As network designers, we don’t want to bring in our own preconceived
notions or opinions. Just because a network designer likes EIGRP does not
mean it’s the correct IGP for every design situation. Just because MPLS
L3VPN circuits are more expensive than MPLS L2VPN circuits in your
experience does not mean you can make decisions based on that
information in a design situation.
The design should always be tied back to the customer’s business
requirements!
Summary
In this chapter, we focused on the network design elements that all network
designers should know. We discussed the network design fundamentals,
which are the foundation of all designs (like the foundation of a house).
Then we added to this foundation with network design principles (like the
framing of the house), which showed the give and take from a design
perspective. The more availability a network architecture requires, the more
it will cost. Once again, this cost can be both monetary and nonmonetary.
We next covered the network design techniques (like a roof to the house),
referencing a real-world use case to solidify how these techniques can be
leveraged in every network design situation. Finally, we highlighted the
mistakes that network designers tend to make. We talked about the pitfalls
of assumptions, overdesigning, strict adherence to best practices, and
preconceived notions.
If there are not any relevant business requirements for a specific design
situation and you are not violating another business requirement, then best
practice is probably the way to go, but you need to understand the full
picture before making these decisions. In the end, it really boils down to
doing what is right for the specific situation that you are presented with. A
lot of people are looking for a one-size-fits-all solution or answer, but
there isn’t one, especially for a network designer. People who take the
“easy” way out end up doing a disservice to the networks they touch and
the customers they serve.
All of the items covered in this chapter are important and, yes, it’s a
different way of thinking altogether, which will not be easy. It will take
some time and effort to instill these elements into your thought process. It’s
worth it to incorporate each element discussed in this chapter to ensure your
continued success as you design a network and tackle the CCDE
certification.
Reference
Al-shawi, Marwan, CCDE Study Guide (Cisco Press, 2015)
Chapter 2
Caution
The goal of self-assessment is to gauge your mastery of the topics in
this chapter. If you do not know the answer to a question or are only
partially sure of the answer, you should mark that question as wrong
for purposes of the self-assessment. Giving yourself credit for an
answer you correctly guess skews your self-assessment results and
might provide you with a false sense of security.
Foundation Topics
Business Success
This section covers the primary aspects that pertain to the business needs
and directions that (individually or collectively) can influence network
design decisions either directly or indirectly. The best place to start
understanding the business needs and requirements is to look at the big
picture of a company or business and understand its associated business
priorities, business drivers, and business outcomes. This enables network
designers to steer the design to ensure business success. However, there can
be various business goals and requirements based on the business type and
many other variables. As outlined in Figure 2-1, with a top-down design
approach, it is almost always the requirements, constraints, and drivers at
higher layers, such as business and application requirements, that drive and
set the requirements and directions for the lower layers. Therefore, network
designers aiming to achieve a business-driven design must consider this
when planning and producing a new network design or when evaluating and
optimizing an existing one. The following sections discuss some of the
business elements at the higher layers and how each can influence network
design decisions at the lower layers. Remember, our goal as network
designers is to ensure business success.
Figure 2-1 Business Success Top-Down Approach
Business Priorities
Each business has a set of business priorities that are typically based on
strategies adopted for the achievement of goals. These business priorities
can influence the planning and design of IT network infrastructure.
Therefore, network designers must be aware of these business priorities to
align them with the design priorities, which ensures the success of the
network they are designing by delivering business value. For example,
suppose that a company’s highest priority is to provide a more collaborative
and interactive business communications solution, followed by the
provision of mobile access for the end users. In this example, providing a
collaborative and interactive communications solution must be satisfied
first before providing or extending the solution over any mobility solution
for the end users. Keep in mind, it is important to align the design with the
business priorities, which are key to achieving business success and
transforming IT into a business enabler.
An example business priority would be security. There are a number of
other terms that can be used for this priority, such as Zero Trust
Architecture, cybersecurity modernization, and risk management
framework. No matter what the business priority is called, the intent is the
same, to secure the network and maintain data integrity. If a business’s data
is compromised, that business is out of business.
Business Drivers
Now that we know what the business priorities are, we need to start to
identify the different constraints and challenges that apply to the business.
What does this business have to do and why? This is what we call a
business driver. Business drivers are what organizations must follow. A
business driver is usually the reason a business must achieve a specific
outcome. It is the “why” the business is doing something.
An example of a business driver would be a constraint on the business to
follow a specific compliance standard like HIPAA or PCI DSS. This aligns
perfectly with the business priority of security mentioned in the previous
section. If the business does not adhere to this constraint, depending on the
compliance standard in question, the business can be fined or even shut
down. For this example, the business driver could be worded as “Required
to follow HIPAA compliance standards.”
Business Outcomes
A business outcome equates to the end result, such as saving money,
diversifying the business, increasing revenue, or filling a specific need.
Essentially, a business outcome is an underlying goal a business is trying to
achieve. A business outcome will specifically map to a business driver.
Returning to our previous example of security as a business priority, the
business driver could be phrased as “Required to follow HIPAA compliance
standards” while the business outcome could be “Properly maintain HIPAA
compliance to stay in business.”
At a minimum, there will be one outcome per driver, but there can be
multiple outcomes mapping to the same driver. This is perfectly fine. If, and
when, an organization achieves its business outcomes, then its business
drivers are met, which ensures the organization’s business success.
Business Capabilities
Before we go down the “solutionizing” path, which most of us engineers
tend to do too early, we need to know what business capabilities are and
how they apply to network design. Business capabilities are not solutions.
Business capabilities are what you get from a solution. Most solutions
provide multiple capabilities. Some solutions provide parts of multiple
capabilities; when combined with other solutions, the business can get a
number of capabilities that will make them successful.
Session-based security is a great example of a capability that a business
might need to have to meet compliance standards. A vendor-agnostic
solution that provides this capability would be a network access control
(NAC) solution. There are many different vendor-specific NAC solutions;
the point here is that all of them, no matter what vendor solution we
highlight, provide the capability of session-based security. Moving forward
along this thought process, Cisco Identity Services Engine (ISE) is an
example of a vendor-specific solution that provides the capability of
session-based security. Cisco ISE actually provides multiple capabilities in
addition to session-based security.
Business capabilities map directly to business outcomes. As a network
designer, you will find that multiple capabilities often map to the same
outcome, and that multiple outcomes often map to the same capability. This
is expected and perfectly fine. Table 2-2 shows the relationship between
business priorities, drivers, outcomes, and capabilities.
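These relationships can be sketched as a simple data structure. The entries below are drawn from the running HIPAA/security example in this section plus illustrative assumptions; the actual contents of Table 2-2 are not reproduced here:

```python
# Hypothetical mapping of one business priority down through its driver,
# outcomes, and capabilities. "Data integrity" is an assumed capability
# added for illustration alongside the section's session-based security
# example.
business_map = {
    "priority": "Security",
    "driver": "Required to follow HIPAA compliance standards",
    "outcomes": ["Properly maintain HIPAA compliance to stay in business"],
    "capabilities": ["Session-based security", "Data integrity"],
}

# The mapping is many-to-many: multiple capabilities can serve one
# outcome, and one capability can serve several outcomes.
for outcome in business_map["outcomes"]:
    for capability in business_map["capabilities"]:
        print(f"{capability} -> {outcome}")
```

Laying the chain out this way makes it easy to spot a design decision that maps to no driver at all, which is exactly the overdesigning trap discussed in Chapter 1.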
Business Continuity
IT as a “New” Utility
Over the last decade, IT, and networking specifically, has become more akin
to a utility provider (such as an electricity or water provider). Identifying
the network as a utility is important today because most businesses assume
that the “Internet” is just going to work, that a business application will
always be accessible for its end users. Everywhere we go—hotels, airports,
restaurants—some form of wireless connection is available. We’ve come to
expect it, just like we expect lights to come on when we flip a switch and
water to pour out of the faucet when we move the handle.
This assumption that the network will always be available and will
properly provide whatever the business needs or requires at any specific
time is the concept of unstated requirements introduced in Chapter 1. As a
network
designer, you will have to identify how far to take this expectation of the
network being a utility and always being available. Table 2-3 shows how
we can compare the associated business risk versus reward versus cost of
the different availability options.
Table 2-3 Business Risk vs. Reward vs. Cost Analysis
Design Decision: Single points of failure
Associated Cost: Low; no additional cost for redundancy.
Design Complexity: Low.
Business Risk: Very high; outages are more likely to occur that would
directly bring the business offline, which would make the business lose
revenue and market reputation.
Business Reward: Low; initial cost savings.

Design Decision: No single points of failure
Associated Cost: Medium; the solution cost increases to allow for
redundant components to mitigate any single points of failure.
Design Complexity: High.
Business Risk: Low; the solution has been designed to allow for single
failures to occur that would still allow the business to function and
make money.
Business Reward: High; the initial cost is higher but the reward is
substantially better because the business can function, and continue to
make money, while a single failure occurs. With this design comes a
level of complexity that needs to be properly managed.

Design Decision: No dual points of failure
Associated Cost: High; a much higher initial cost is needed to create
this design with no dual points of failure.
Design Complexity: Very high.
Business Risk: Very low; the solution has been designed to allow for
dual failures to occur that would still allow the business to function
and make money.
Business Reward: Very high; the initial cost is substantially higher
than limiting single points of failure, but now the solution is more
robust and can withstand a higher degree of failures and still allow
the business to function. One of the drawbacks here besides the high
cost is the very high complexity level. A business running solutions
with no dual points of failure requires a highly skilled and technical
team to manage and maintain it.
We can see that no single points of failure would be the best design option
to allow for a redundant solution, increasing overall business availability
while also limiting the monetary cost to the business. As network designers,
we should be able to present the information captured in Table 2-3 to the
different business leaders within a company to allow them to make properly
informed business decisions. The business leaders might assume the risk of
the lower-cost option, or they may spend a ton of money to mitigate the risk
to the business, thus increasing the business reward. Business risk versus
reward and cost analysis are extremely important concepts to understand for
network designers.
Planning
An enduring adage you’ve likely heard is “if you do not have a plan, you
are planning to fail.” This adage is accurate and applicable to network
design. Many network designers focus on implementation after obtaining
the requirements and putting them in design format. They sometimes rely
on the best practices of network design rather than focusing on planning
“what are the possible ways of getting from point A to point B?” This
planning process can help the designer devise multiple approaches or paths
(design options). At this point, the designer can ask the key question: Why?
Asking why is vital to making a business-driven decision for the solution or
design that optimally aligns with the business’s short- or long-term strategy
or objective. In fact, the best practices of network design are always
recommended and should be followed whenever applicable and possible.
However, reliance on best practices is more typical when designing a
network from scratch (greenfield), which is rare with large enterprises and
service provider networks. Interestingly, IT network architecture and
building architecture are quite similar in the way they are approached and
planned by designers and consultants.
For example, several years ago a Software as a Service (SaaS) company
built a new headquarters location in a large city in the United States, which
was architected and engineered based on the business priorities, drivers,
outcomes, and requirements at that time. Recently, this SaaS company has
acquired a number of other companies and is in the process of merging
them all together. The stakeholders have requested the network designers to
review the design and make suggestions for modification to address the
increased number of people accessing the headquarters location, because
this increase was not properly projected and planned for during the original
design five years ago.
Typically, the architects and engineers will then evaluate the situation,
identify current issues, and understand the stakeholders’ goals. In other
words, they gather the business priorities, drivers, and outcomes and
identify the issues. Next, they work on optimizing the existing building
(which may entail adding more parking space, expanding some areas, and
so forth) rather than destroying the current building and rebuilding it from
scratch. However, this time they need to have proper planning to provide a
design that fits current and future means.
Similarly, with IT network infrastructure design, there are always new
technologies or broken designs that were not planned well to scale or adapt
to business and technology changes. Therefore, network designers must
analyze business issues, requirements, and the current design to plan and
develop a solution that could optimize the overall existing architecture. This
optimization might involve the redesign of some parts of the network (for
example, WAN), or it might involve adding a new data center to optimize
BC plans. To select the right design option and technologies, network
designers need to have a planning approach to connect the dots at this stage
and make a design decision based on the information gathering and analysis
stage. Ultimately, the planning approach leads to the linkage of design
options and technologies to the gathered requirements and goals to ensure
that the design will bring value and become a business enabler rather than a
cost center to the business. Typical tools network designers use at this stage
to facilitate and simplify the selection process are the decision tree and the
decision matrix.
Decision Tree
A decision tree is a helpful tool that a network designer can use to compare
multiple design options, or perhaps protocols, based on specific criteria. For
example, a designer might need to decide which routing protocol to use
based on a certain topology or scenario, as illustrated in Figure 2-4.
Figure 2-4 Decision Tree
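As a minimal sketch of how such a tree is walked (the branch criteria and outcomes below are illustrative assumptions, not a reproduction of Figure 2-4):

```python
# Hypothetical IGP-selection decision tree. The criteria and the
# protocol outcomes are invented for illustration only.
def choose_igp(multivendor: bool, large_scale: bool) -> str:
    if multivendor:
        # Standards-based protocols suit mixed-vendor environments
        return "IS-IS" if large_scale else "OSPF"
    # Single-vendor environment in this sketch
    return "EIGRP"

print(choose_igp(multivendor=True, large_scale=False))
```

Each question in the tree eliminates options until a single leaf, the design decision, remains.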
Decision Matrix
Decision matrices serve the same purpose as decision trees; however, with
the decision matrix, network designers can add more dimensions to the
decision-making process. Table 2-4 presents two dimensions a network
designer can use to select the most suitable design option. In these two
dimensions, both business requirements and priorities can be taken into
account to reach the final decision, which is based on a multidimensional
approach.
When using the decision matrix as a tool in the preceding example, design
option 2 is more suitable based on the business requirements and priorities.
The decision matrix is not solely reliant on the business requirements to
drive the design decision; however, priorities from the business point of
view were considered as an additional dimension in the decision-making
process, which makes it more relevant and focused.
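To show the mechanics only (the criteria, weights, and scores below are invented for illustration; the actual contents of Table 2-4 are not reproduced here), a weighted decision matrix can be sketched as:

```python
# Weighted decision-matrix sketch: weight each criterion by business
# priority, sum the weighted scores per option, and pick the highest
# total. All numbers here are hypothetical.
criteria_weights = {"availability": 0.4, "cost": 0.3, "manageability": 0.3}

options = {
    "design option 1": {"availability": 3, "cost": 5, "manageability": 2},
    "design option 2": {"availability": 5, "cost": 3, "manageability": 4},
}

def weighted_score(scores: dict) -> float:
    return sum(criteria_weights[c] * s for c, s in scores.items())

best = max(options, key=lambda name: weighted_score(options[name]))
print(best)  # with these hypothetical weights, design option 2 wins
```

Changing the weights is how the business priorities enter the picture: the same raw scores can produce a different winner once, say, cost is weighted more heavily than availability.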
Planning Approaches
To develop a successful network design, a proper planning approach is
required to build a coherent strategy for the overall design. Network
designers can follow two common planning approaches to develop
business-driven network designs and facilitate design decisions:
Strategic Balance
Within any organization, there are typically multiple business units and
departments, all with their own stakeholders. Each has its own strategy,
some of which are financially driven, whereas others are more innovation-
driven. For example, an IT department is more of an in-house service
provider concerned with ensuring service delivery is possible and optimal,
whereas the procurement department is cost-driven and always prefers
cheaper options. The marketing department, in contrast, is almost always
innovation-driven and wants the latest technology. Consequently, a good
understanding of the overall business strategy and goals can lead to a
compromise between the different aims of the different departments. In
other words, the idea is that each business unit or entity within an
organization must have its requirements met at a certain level so that all can
collectively serve the overall business priorities, drivers, outcomes, and
strategies.
As an example of achieving strategic balance, let’s consider a case study of
a retail business wanting to expand its geographic presence by adding more
retail shops across the globe with low CAPEX. Based on this goal, the main
point is to increase the number of remote sites with a minimal cost
(expansion and cost):
IT point of view:
The point of sales (PoS) application being used does not support
offline selling or local data saving. Therefore, it requires
connectivity to the data center to operate.
The required traffic volume from each remote site is small, but it
is real-time application traffic requiring guaranteed treatment.
Many sites are to be added within a short period of time.
Optimum solution: IT suggested that the most scalable and
reliable option is to use an MPLS VPN as a WAN.
Marketing point of view: If any site cannot process purchased items
due to a network outage, this will impact the business’s reputation in
the market.
Optimum solution: High-speed, redundant links should be used.
Financial point of view: Cost savings.
Optimum solution: One cheap link, such as an Internet link, to
meet basic connectivity requirements.
Based on the preceding list, it is obvious that the consideration for a WAN
redundancy is required for the new remote sites; however, cost is a
constraint that must be considered as well.
When applying the strategic balance (alignment) concept, each department
strategy can be incorporated to collectively achieve the overall optimum
business goal by using the suboptimal approach from each department’s
perspective.
In this particular example, the retail business can use two Internet links
combined with a VPN overlay solution to achieve the business goal through
a cost-effective solution that offers link redundancy to increase the
availability level of the remote sites, meeting application bandwidth
requirements while at the same time maintaining the brand reputation in the
market at the desired level.
Project Management
As network designers, we need to understand how projects are managed.
This doesn’t mean we need to be project managers, nor do we actually
have to manage the projects in question. We need to understand the process
each project will go through and the associated advantages and
disadvantages of the methodology being used. This section covers the most
common project management methodologies and their associated
advantages and disadvantages.
Waterfall Methodology
The waterfall project management framework follows a sequential, linear
process and historically has been the most popular version of project
management for software engineering and IT projects. The waterfall project
management framework is sometimes planned using a Gantt chart that
shows the start dates, end dates, assigned resources, dependencies, and
overall status for each project task. Figure 2-5 shows an example of a
waterfall project plan, and Figure 2-6 shows an example of the
corresponding Gantt chart for this same waterfall project plan.
Figure 2-5 Waterfall Project Plan Example
Figure 2-6 Waterfall Project Plan Gantt Chart Example
Once one of the stages is complete, the project team moves on to the next
step. The team can’t go back to a previous stage without starting the whole
process from the beginning. And, before the team can move to the next
stage, requirements may need to be reviewed and approved by the customer.
This is why the waterfall project plan is a linear process.
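A Gantt chart like the one in Figure 2-6 can be derived mechanically from task durations and dependencies. The sketch below is illustrative only; the phase names, durations, and start date are invented, not taken from the figures. It computes each task's start and end date by starting a task the day after its latest dependency finishes:

```python
from datetime import date, timedelta

# Hypothetical waterfall phases: (name, duration_days, dependencies)
tasks = [
    ("Requirements",   10, []),
    ("Design",         15, ["Requirements"]),
    ("Implementation", 20, ["Design"]),
    ("Verification",   10, ["Implementation"]),
    ("Maintenance",     5, ["Verification"]),
]

project_start = date(2023, 1, 2)
schedule = {}  # name -> (start_date, end_date)

for name, duration, deps in tasks:
    # A task starts the day after the latest end date of its dependencies;
    # with no dependencies, it starts on the project start date.
    start = max((schedule[d][1] + timedelta(days=1) for d in deps),
                default=project_start)
    schedule[name] = (start, start + timedelta(days=duration - 1))

for name, (start, end) in schedule.items():
    print(f"{name:15s} {start} -> {end}")
```

Because each phase waits on the one before it, the end date of the final phase is the sum of every duration, which is exactly why slipping any single phase pushes the whole waterfall schedule out.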
Disadvantages of Waterfall
The biggest limitation of the waterfall project management model is its
aversion to change throughout the project. Because waterfall is linear, you
can’t bounce between phases, even if unexpected changes occur. Once
you’re done with a phase, the project team moves on to the next phase and
cannot go back to previous phases.
Change averse: Once the project team completes a phase, they can’t
go back. If they reach the testing and verification phase and realize
that a specific business capability is missing, it is very difficult and
expensive to go back and fix it.
Solution delivery is late: The project has to complete multiple phases
before the solution implementation can begin. As a result, business
leaders won’t see a working solution until late in the entire process.
Requirements gathering is difficult: One of the first phases in a
waterfall project is to complete requirements gathering with the
business stakeholders. The problem with this is that it can be
extremely difficult to properly identify what the stakeholders truly
need and want this early in the process. In most cases, business leaders
learn and identify requirements throughout the process as the project
moves forward.
Agile Methodologies
Agile methodologies are based on an incremental, iterative approach.
Instead of in-depth planning at the beginning of the project like that of the
waterfall model, the agile methodologies are open to changing requirements
over time and encourage constant feedback from the different business
leaders and stakeholders. Cross-functional teams work on iterations of a
product over a period of time, and this work is organized into a backlog that
is prioritized based on business or customer value. The goal of each
iteration is to produce a working product.
Agile methodologies were built for software development, so why are we
focusing on them here? With the industry shift to automation and
orchestration within networking, there have been a number of development-
focused adoptions within the networking industry, such as infrastructure as
code, network as code, leveraging APIs to complete networking tasks at
scale, and reworking manual workflows into a continuous
integration/continuous delivery (CI/CD) pipeline process. Because of this
market shift, network designers need to understand how an Agile
methodology works to make proper design decisions when businesses are
leveraging the different Agile methodologies. As we cover Agile
throughout this section, you will see references to software development
because that’s what it was built for, but it can easily be leveraged from a
network design perspective as well.
Advantages of Agile
An Agile methodology is focused on flexibility, continuous improvement,
and speed. Unlike the waterfall methodology, change is fully embraced and
welcomed in an Agile methodology.
Disadvantages of Agile
An Agile methodology has some disadvantages and trade-offs. With all of
the changes being added throughout the project, the project end date can be
hard to predict as timelines are pushed because of these changes to the
project. With an Agile methodology, documentation is not prioritized and
can be forgotten altogether.
Scrum Methodology
Scrum is a subset of Agile and one of the most popular process frameworks
for implementing Agile. Scrum was built to be an iterative model used to
manage projects. With Scrum, there are roles, responsibilities, and meetings
that never change. For example, Scrum leverages four ceremonies that
provide a process structure to each sprint: sprint planning, daily stand-up,
sprint demo, and sprint retrospective.
Advantages of Scrum
Scrum is a prescriptive framework with specific roles, responsibilities, and
meetings.
Disadvantages of Scrum
While Scrum offers some concrete benefits, it also has some downsides.
Scrum requires a high level of experience and commitment from the team,
and projects can be at risk of scope creep.
Kanban Methodology
Kanban is a Japanese term meaning “visual sign” or “card.” The Kanban
methodology is a visual framework used to implement Agile that shows
what to produce, when to produce it, and how much to produce. It
encourages small, incremental changes to your current system and does not
require a certain setup or procedure. Kanban can be easily overlaid on top
of the other project management methodologies discussed so far.
When looking at Kanban versus Agile, it’s important to remember that
Kanban is one flavor of Agile. It’s one of many frameworks used to
implement Agile software development.
A Kanban board is a tool to implement the Kanban method for projects. A
Kanban board is made up of different “swim lanes” or columns. The
simplest boards have three columns: To Do, Work In Progress (WIP), and
Done.
I personally leverage a physical Kanban board. My columns include To Do,
Working, and Completed. Within the Working column, I have two nested
columns, In Progress and Pending. Oftentimes, when working on a task or
project, you will get to a point where no more work can be done until
someone else completes a task or something else happens. This is when I
leverage the Pending column to place these cards until the corresponding
work is completed so that I can jump right back into that card and complete
my associated work on it.
I actually wrote this entire book leveraging Kanban. I broke down each
chapter and section in this book into eight different Kanban cards, for a total
of 160 cards. Figure 2-7 shows the cards for Chapter 2.
Figure 2-7 Kanban Board Example
Kanban cards represent the work, and each card is placed on the board in
the swim lane that represents the status of that work. These cards
communicate status at a glance. You could also use different color cards to
represent different details. For example, in my Kanban board setup, green
cards are for health, pink cards are for home-related items, orange cards are
for Cisco-related work, and blue cards are for Zigbits-related work.
Advantages of Kanban
Kanban’s visual nature offers a unique advantage when implementing
Agile. The Kanban board is easy to learn and understand, improves the
flow of work, and minimizes cycle time.
Disadvantages of Kanban
Many of the disadvantages associated with Kanban come with misuse or
mishandling of the Kanban board. An outdated or overcomplicated board
can lead to confusion, inaccuracies, or miscommunication.
Kanban Rules
Every Kanban project should follow these rules:
Visualize: Visually seeing the work on the board allows the team to
fully understand the entire project and how it will move forward, or
not move forward. By seeing it all, the team can find problems more
quickly in the process and resolve them before they become larger issues.
Limit work: Work in progress limits (WIP limits) determine the
minimum and maximum amount of work for each column on the
board or for each workflow. Originally, I put my WIP limit at 10…
this was way too much WIP. Today I leverage a WIP limit of 2. Keep
in mind, a WIP limit of 10 might be perfectly fine for a large team
working on a project together. By putting a limit on WIP, you can
increase speed and flexibility, and reduce the need for prioritizing
tasks.
Manage the card flow: The movement of work (cards) throughout
the Kanban board should be monitored and improved upon. Ideally,
you want a fast, smooth flow, which shows that the team is creating
value quickly.
Reserve the right to improve: As the team leverages the Kanban
method, the team should be able to identify and understand any issues
that come up. The group should reserve the right to improve the
process at any time if doing so will improve the flow of work, the overall
cycle time, or the quality of the work being delivered.
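The rules above can be sketched as a minimal board model. This is a hypothetical illustration (the column names and cards are invented, and real Kanban tools expose far richer APIs); the point is that moving a card into a column fails when that column's WIP limit would be exceeded:

```python
class KanbanBoard:
    def __init__(self, wip_limits):
        # wip_limits: column name -> max cards allowed (None = unlimited)
        self.columns = {name: [] for name in wip_limits}
        self.wip_limits = wip_limits

    def add(self, card, column="To Do"):
        self.move(card, column)

    def move(self, card, target):
        # Enforce the WIP limit before accepting the card.
        limit = self.wip_limits[target]
        if limit is not None and len(self.columns[target]) >= limit:
            raise ValueError(f"WIP limit of {limit} reached in '{target}'")
        # Remove the card from whichever column currently holds it.
        for cards in self.columns.values():
            if card in cards:
                cards.remove(card)
        self.columns[target].append(card)

# A personal board with a WIP limit of 2 on active work.
board = KanbanBoard({"To Do": None, "Working": 2, "Done": None})
board.add("Write Chapter 2 summary")
board.add("Draw Figure 2-7")
board.add("Review Kanban rules")
board.move("Write Chapter 2 summary", "Working")
board.move("Draw Figure 2-7", "Working")
# board.move("Review Kanban rules", "Working")  # would raise: WIP limit reached
```

Raising an error on the third move is the software equivalent of a full swim lane on a physical board: the team must finish or unblock something before pulling in new work.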
Summary
In this chapter, we focused on the business elements, terminology, and
project management styles that all network designers need to know. We
discussed business priorities, drivers, outcomes, and capabilities, and how
each relates to the other to drive the requirements for the design. We
highlighted how risk versus reward, ROI, CAPEX, OPEX, and business
continuity all impact the design decisions we have to make as network
designers. We labeled IT, including networking, as the new utility that
everyone expects to be online and available all the time, just like running
water and electricity. Understanding the business is critical to ensuring the
design decisions will directly increase business success. Finally, we covered
the most common project management methodologies of today. We talked
about waterfall, Agile, Scrum, and Kanban, the advantages and
disadvantages of each, and how they compare to each other based on
various criteria.
Network designers must be the bridge between technology and the business.
Be the bridge my friends!
References
Al-shawi, Marwan, CCDE Study Guide (Cisco Press, 2015)
White, Russ and Donohue, Denise, The Art of Network Architecture
(Cisco Press, 2014)
Beck, Kent, et al., “Manifesto for Agile Software Development”
(Agile Alliance, 2001)
Caution
The goal of self-assessment is to gauge your mastery of the topics in
this chapter. If you do not know the answer to a question or are only
partially sure of the answer, you should mark that question as wrong
for purposes of the self-assessment. Giving yourself credit for an
answer you correctly guess skews your self-assessment results and
might provide you with a false sense of security.
Foundation Topics
Business Applications
The network is the information highway for the business applications of
today, and for the business to be successful, these applications must be able
to properly communicate as required between users, devices, data,
databases, and other application components.
Application Models
Network designers need to understand how an application is built to
properly design the network for that application. The following are the
different application models being leveraged today and their associated
design elements that you need to know as a network designer:
Web tier: End user and application layer access only; no database layer
access. The web tier needs to be globally accessible for the end users and is
normally located in a DMZ. Key design questions: How are end users
accessing the web tier globally? How are the web tier–specific networks/IP
addresses being routed? What’s the web tier’s high-availability architecture
(active/active, active/standby, anycast, etc.)?
Application tier: Web and database access only; no end user should ever
access this tier directly. This tier is internally accessed only, so no external
addresses or routing are needed. Load balancing should be implemented,
but how depends on the other tiers’ communication method with this tier
(SNAT, NAT, sticky, etc.). This tier is normally located internally behind
multiple security layers. Key design questions: How does the web tier
communicate with the application tier? How does the database tier
communicate with the application tier?
Service Models
We highlighted the different application models for how an application can
be created earlier. This section takes that discussion a step further by
covering the different service models that can be leveraged for the
application. These service models determine where the application is
located and what elements of the application are owned and managed by the
business. The following are the most common service models:
Note
There are other service models that are not covered in this section,
such as Database as a Service (DBaaS), Compliance as a Service
(CaaS), and Security as a Service (SECaaS). What is covered in this
section are the most common service models at the time of writing.
Each service model is compared by its characteristics (such as being hosted
within the business’s server environment), its advantages (such as being
cost-effective and easy to run without extensive IT knowledge), and when
to use it.
The Cloud
When a business starts planning to leverage cloud in any form, there are
three use cases that network designers should consider throughout the
design process:
Note
Different remote sites and different applications may use different
gateway sites and paths, depending on the application and measured
application performance. Remote sites that use gateway sites for
Internet access are referred to as client sites.
Figure 3-2 shows how cloud access can be achieved through a gateway in a
data center or a cloud access point (CAP). A branch office tunnels SaaS
traffic to a gateway location and then uses the Internet at the gateway
location to access the SaaS application.
Figure 3-2 Cloud Access Through a Gateway
Hybrid Approach
It is possible to have a combination of DIA and client/gateway sites. When
defining both DIA and gateway sites, SaaS applications can use either the
DIA exits of the remote site or the gateway sites for any given application,
depending on which path provides the best performance. DIA sites are,
technically, a special case of a client site, but the Internet exits are local
instead of remote.
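The per-application path choice described above can be sketched as a simple selection based on measured performance. The exit names and latency values below are invented for illustration; real SD-WAN implementations use richer metrics (loss, jitter, policy) than a single latency number:

```python
# Measured latency in ms toward a given SaaS application over each
# available exit (hypothetical values).
measured_latency_ms = {
    "dia_local":   48,  # remote site's own direct Internet access exit
    "gateway_dc1": 35,  # tunnel to data center 1, then Internet
    "gateway_cap": 60,  # tunnel to a cloud access point
}

def best_exit(latencies):
    """Pick the exit with the lowest measured latency for this application."""
    return min(latencies, key=latencies.get)

print(best_exit(measured_latency_ms))  # -> gateway_dc1
```

Running this selection per application is what lets one branch send a given SaaS application via a gateway site while sending another application straight out its local DIA exit.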
Cloud Types
When selecting a cloud solution, there are a number of different types to
choose from, each with its own associated benefits and limitations:
Private cloud: A private cloud consists of cloud computing resources
used by one business. This cloud environment can be located within
the business’s data center footprint, or it can be hosted by a cloud
service provider (CSP). In a private cloud, the resources, applications,
services, data, and infrastructure are always maintained on a private
network and all devices are dedicated to the business.
Public cloud: A public cloud is the most common type of cloud
computing. The cloud computing resources are owned and operated
by a CSP. All infrastructure components are owned and maintained by
the CSP. In a public cloud environment, a business shares the same
hardware, storage, virtualization, and network devices with other
businesses.
Hybrid cloud: A hybrid cloud is the use of both private and public
clouds together to allow for a business to receive the benefits of both
cloud environments while limiting their negative impacts on the
business.
Multi-cloud: Multi-cloud is the use of two or more CSPs, with the
ability to move workloads between the different cloud computing
environments in real time as needed by the business.
Table 3-4 compares the different cloud types in relation to each other based
on various characteristics.
Cloud-Agnostic Architecture
A cloud-agnostic architecture is one with no proprietary, vendor-specific
features or functionality. It focuses on leveraging the same cloud
capabilities across the different cloud providers, regardless of vendor.
When looking at cloud service providers and migrating applications to the
cloud, there are three primary focus points that should be leveraged within
a cloud-agnostic architecture:
Decoupling
There are two perspectives to think about for decoupling. First, all
applications should be designed to be inherently decoupled from the
underlying cloud platform they are on. This can be accomplished by
leveraging service-oriented architecture (SOA), which is discussed in detail
a bit later in this chapter. Second, all cloud components should be
decoupled from the applications that leverage them.
Containerization
All applications should follow a containerized architecture. This is critical
for cloud applications as well as on-premises data center applications.
Ensuring all applications are developed with containerization in mind
allows for real cloud adoption and portability. Container technology helps
decouple applications from the cloud-specific environment, which provides
an abstraction layer away from any of the CSP dependencies. The goal is to
ensure that it is relatively easy to migrate applications between different
cloud vendors if the mission requires it. Cloud containerized architectures
are a topic covered in detail in an upcoming section.
Service-Oriented Architecture
To ensure a successful cloud-agnostic architecture, incorporating the
service-oriented architecture (SOA) software design style is critical. SOA
is a style of software design where services are provided to other
application components by application components themselves, through
network communication protocols. The underlying principles are vendor
and technology agnostic.
technology agnostic. In SOA, services communicate with other services in
two ways. The first way is to simply pass data between the different
services. The second way is to coordinate an activity between two or more
services. There are many benefits to SOA:
Data Management
Data is the most critical resource that all other resources will be leveraging.
We have to manage all data effectively, accurately, and securely so that
these additional resources can properly leverage that data with ensured
integrity, availability, and confidentiality. Data management in essence lays
the foundation for data analytics. Without good data management, there will
be no data analytics. Data management can be broken down into 11 pillars:
Summary
What is the purpose of the network? To ensure business success! This
chapter went into great detail on how a network designer can accomplish
this.
This chapter covered how businesses rely heavily on the network and the
corresponding services and applications riding on it. This chapter also
covered application and service models, showing how the location and
architecture of the application or service directly affect the required network
design elements. In addition, this chapter highlighted the multitude of cloud
options and the associated advantages of each option. This chapter
highlighted the preference for agnostic cloud services over proprietary
cloud services, to ensure a business doesn’t lock itself into a specific cloud
service provider, and how adopting a service-oriented architecture can be
beneficial to the business. Last but not least, this chapter gave a quick
overview of the importance of data and data management by highlighting
the 11 data management pillars. Ensuring the confidentiality, integrity, and
availability of a business’s data is paramount to the business’s success. If a
business’s data is compromised, it can no longer make valid decisions on
that data, which handicaps the business until the data is fixed.
Reference
Al-shawi, Marwan, CCDE Study Guide (Cisco Press, 2015)
Security Is Pervasive
This chapter covers the following topics:
This chapter discusses the primary design principles and considerations that
network designers must evaluate when examining a network design from a
network security point of view.
As discussed earlier in this book, to achieve a successful business-driven
design, network architects and designers must always consider a top-down
design approach to build the foundation or the roadmap of the design. With
regard to security, it is pervasive and overlaid on top of every other
component in an environment. Network designers should consider the
following questions when creating a new network design or when
evaluating an existing one, to help them draw the high-level picture of the
design direction with regard to the security aspects:
This chapter covers the following “CCDE v3.0 Unified Exam Topics”
section:
Regulatory Compliance
Foundation Topics
Today’s converged networks carry much business-critical information
across the network (whether it is voice, video, or data), which makes
securing and protecting that information extremely important, for two
primary reasons. The first reason is for information security and privacy
purposes. The second reason is to maintain business continuity at the
desired level, such as protecting against distributed denial of service
(DDoS) attacks, regardless of whether this protection is within the internal
network or between the different sites over an external network. Therefore,
the design of network infrastructure and network security must not be
performed in isolation. In other words, the holistic design approach
discussed earlier in this book is vital when it comes to network security
design considerations. No network should be designed independently from
its security requirements. A successful network design must facilitate the
application and integration of the security requirements by following the
top-down approach, starting from the business goals and needs, to
compliance with the organization’s security policy standards, to a detailed
design and integration of the various network technologies and components.
To secure any system or network, there must be predefined goals to achieve
and specifications to comply with to ensure that the outcomes are
measurable and always meet the organization’s standards. Therefore, to
achieve this, organizations almost always develop security standards,
policies, and specifications that all collectively aim to achieve the desired
goal with regard to information security. This is what is commonly known
as a security policy. A security policy is a formal statement of the rules by
which people who are given access to an organization’s technology and
information assets must abide. It should also specify the mechanisms
through which these requirements can be met and audited for compliance
with the policy. As a result, a good understanding of the organization’s
security policy and its standards is a crucial prerequisite before starting any
network design or optimizing an existing design. This understanding
ensures that any suggested solution will comply with the security policy
standards and expectations of the business with regard to information
security.
For instance, you may suggest redesigning an existing 1G dark fiber
(owned by the organization) to a virtual leased line (VLL) solution (L2VPN
based) that offers the same quality at a lower cost. However, the security
policy may dictate that any traffic traversing any network that is not owned
by the organization must be encrypted. By taking this point into
consideration, the network architect or designer here can add IPsec or
MACsec to the proposed solution to provide an encrypted VLL, to ensure
the suggested design supports and complies with the organization’s security
policy standards.
The integration of network infrastructure and network security (including
security components such as firewalls and configurations such as
infrastructure access control lists [iACLs]) can be seen as a double-
edged sword. On the one hand, security will offer privacy, control, and
stability to the network (for example, protect against DDoS attacks and
unauthorized access). On the other hand, if both the network infrastructure
and the security components are designed in isolation (siloed approach),
then when they integrate together at some point, there will be a mix of the
following issues:
Complex integration
Traffic drop
Reduced performance
Redesign or major design changes
Design failure
For instance, in Figure 4-1, the network infrastructure of the Internet edge
was designed to provide Internet access for the enterprise. Based on this,
the network designer considered Enhanced Interior Gateway Routing
Protocol (EIGRP) as the internal dynamic routing protocol and external
Border Gateway Protocol (eBGP) to handle the edge routing and provide
end-to-end connectivity.
Note
In Figure 4-3, the core block was not placed in its own domain to
illustrate one of the recommended and proven design considerations
that offloads any additional processing from the network core, such
as security policies. However, some design requirements require the
core to be treated as a separate security domain.
Figure 4-3 Security Domains and Zones
Security devices are almost always placed at the chokepoints (domains or
zone boundary points). However, the types of security devices and roles can
vary based on the domain or zone to be protected and its location in the
network. For example, in Figure 4-3, a firewall is placed at the network
management zone boundary to fulfill packet filtering and inspection
requirements for OAM traffic flows in both directions, while at the public
demilitarized zone (DMZ), there are multiple specialized security nodes
such as a web application firewall (WAF), data loss prevention (DLP), and
an IPS.
Nevertheless, it is critical that network architects and designers consider the
type of the targeted network and its high-level architecture. Each of these
networks (irrespective of its detailed design) has different traffic flow
characteristics. For instance, enterprise networks always define chokepoint
boundaries such as the Internet edge or connections to extranet networks
where traffic is always controlled by strict and confined rules.
Figure 4-4 Zero Trust Architecture Concept of Trust Score and Access
Authorization
Figure 4-5 Zero Trust Architecture Concept of Trust Score vs. Risk
Score
The following factors and concepts that are shown in Figure 4-4 and Figure
4-5 all equate to the authorization for access for that specific user and
device:
Static factors: These are items that we know and can preemptively
base access and authorization on. The most common of these factors
are credentials but could also include the level of confidence, device
trust, network, physical location, biometrics, and device orientation.
Dynamic factors: These are sources of data that can be analyzed at
the time of access to change what level of access and authorization
(i.e., the trust score of the transaction in question) is being provided.
The most common is threat intelligence, but dynamic factors can also
include geo-velocity (the difference between your current location and
where you last logged in), GPS coordinates, and real-time data analytics
around the transaction.
Entitled to access: Users have various roles and, based on those roles,
are entitled to specific access to complete their job. A financial user
would need a different level of access than a human resource user.
These two users should not have the same access or authorization.
They may have overlapping access to resources that they both need to
complete their job functions. Further examples of roles include
contractors, affiliates, and foreign nationals. This is not limited to
people but also applies to devices. For example, printers would
have a different entitlement level than Internet of Things devices
would.
Trust score: This is a combination of factors, both static and dynamic,
and is used to continually provide identity assurance. A trust score
determines the level of access as required by the level of risk value of
the asset being accessed.
Risk score: Resources such as assets, applications, and networks have
levels of risk scores, which are thresholds that must be exceeded for
access to be permitted. In general, the security plan categorization
determines an asset’s level of risk.
Authorization for access: For a resource, in this case a user or
device, to be authorized for access to another resource, in this case an
asset, application, or system, the trust score and the entitlement level
are combined to determine the authorization for access. Even if a
trust score is high enough to access a resource, a user or device that
doesn’t have the correct entitlement will not have the appropriate
authorization for access.
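The combination described in these bullets can be sketched in a few lines. The factor names, scores, risk thresholds, and resource names below are invented for illustration; real Zero Trust engines weight factors far more elaborately. The key property shown is that access requires both a trust score above the resource's risk score and the correct entitlement:

```python
def trust_score(static_factors, dynamic_factors):
    """Combine static and dynamic factor scores (each 0-100) into one trust score."""
    factors = {**static_factors, **dynamic_factors}
    return sum(factors.values()) / len(factors)

def authorized(user_entitlements, resource, risk_score, score):
    """Access requires BOTH sufficient trust and the right entitlement."""
    return score >= risk_score and resource in user_entitlements

score = trust_score(
    static_factors={"credentials": 90, "device_trust": 80},
    dynamic_factors={"threat_intel": 70, "geo_velocity": 100},
)  # -> 85.0

# A finance user entitled to the payroll app, but not the HR database:
# the same trust score authorizes one resource and not the other.
assert authorized({"payroll"}, "payroll", risk_score=80, score=score)
assert not authorized({"payroll"}, "hr_database", risk_score=80, score=score)
```

Because dynamic factors are re-evaluated at access time, the same user's trust score can drop mid-session (for example, on a threat-intelligence hit), revoking access that was granted moments earlier.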
Figure 4-6 shows the Zero Trust Architecture components, and Figure 4-7
highlights an example of a theoretical Zero Trust Architecture.
Figure 4-6 Zero Trust Architecture Components
This is not a fully inclusive section covering every aspect of Zero Trust,
Zero Trust Architecture, and Zero Trust Networking. Giving the topic
proper justice would require an entire book. The goal here is for you as a
network designer to identify the different high-level capabilities that are
included in a Zero Trust Architecture and how they impact the different
design decisions and the overall state of the network.
Each CIA triad security element can be described by its characteristic and
the mechanisms used to achieve it.
Regulatory Compliance
A network designer should be able to take any compliance standard given
to them and design a solution that fits into each of the specific constraints
the standard governs. The goal of this section is to provide a quick
overview of the most common compliance standards today, which will
allow you to select design decisions that meet these constraints. This
section does not provide an all-inclusive list of compliance regulations, nor
does it cover every single aspect of each of the compliance regulations
mentioned. Instead, it highlights two U.S. compliance standards, HIPAA
and PCI DSS, and one EU compliance standard, GDPR, because they
impact many organizations and are representative of the types of
compliance standards network designers should be aware of. The section
wraps up with a brief discussion of data sovereignty.
HIPAA
The U.S. Health Insurance Portability and Accountability Act
(HIPAA) was enacted in 1996 and is enforced by the Office for Civil Rights.
HIPAA has a few rules that are important to know if you are designing a
network for an organization that handles health records or information in
any way:
PCI DSS
Payment Card Industry Data Security Standard (PCI DSS) compliance
is mandated by credit card companies to help ensure the security of credit
card transactions. This standard refers to the technical and operational
standards that businesses must follow to secure and protect credit card data
provided by cardholders and transmitted through card processing
transactions. PCI DSS is developed and managed by the PCI Security
Standards Council. Within PCI DSS there are 12 requirements that network
designers should know:
As you can see, the rules of PCI DSS are more technical and security-
focused than the rules of HIPAA, which are more generically stated.
PCI DSS is a compliance regulation that you will have to take into
consideration when credit cards and the associated transactions are part of a
business’s functions. Always consider PCI DSS when you are dealing with
a business that has a point of sale system or an online ordering system; in
essence, PCI DSS applies to all entities that store, process, or transmit
cardholder data—fast food restaurants, retail stores, online storefronts…the
list is endless.
GDPR
The General Data Protection Regulation (GDPR) is a European Union
(EU) regulation for data protection that sets guidelines for the collection
and processing of personal information from individuals. It applies to the
processing of personal data of people in the EU by businesses that operate
in the EU. It’s important to note that GDPR applies not only to firms based
in the EU, but any organization providing a product or service to residents
of the EU. The regulation pertains to the full data life cycle, including the
gathering, storage, usage, and retention of data. GDPR applies to any
organization with either of the following:
A presence in an EU country
No presence in the EU, but that processes the personal data of EU residents
Data Sovereignty
Data sovereignty is the requirement that information is subject to the laws
and regulations of the location where it was collected and processed. It is a
state-specific requirement: information collected and processed in a country
must remain within the boundaries of that country and must be safeguarded
according to the laws of that country. This most often comes into play
today when businesses send data into other countries where their servers or
data centers physically reside, and it is also common when migrating to
cloud providers. In these circumstances, the data in question cannot leave
the country and must be stored properly, following the rules and
regulations that country has set forth.
Summary
Security is truly pervasive today. It is present in every part of the network, in
every product architecture, and horizontally interlocked across an end-to-end
enterprise. From a network design perspective, security is one of those areas
that has a number of constraints we have to comply with when making our
design decisions. This can be as simple as determining whether we have to
use encryption (IPsec or MACsec) over transport circuits or as complicated
as developing and implementing a full Zero Trust Architecture in the
enterprise network.
Zero Trust Architecture is a critical shift in how network security is both
thought of and implemented in today’s networks. In this chapter, we
covered the Zero Trust Architecture pillars, capabilities, and concepts in a
vendor-agnostic fashion, to arm you as a network designer in this critical
security transformation that is happening in the industry today. Trust score,
entitlement level, and authorization of access are some of the key concepts
that dictate what a user or device can access within a Zero Trust
Architecture. Gone are the days when every user and device in an
enterprise network received full, unrestricted east–west access. Zero Trust
builds on the CIA triad of confidentiality, integrity, and availability, which
are considered the fundamentals of information security.
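To make the access-decision concepts above concrete, here is a hedged Python sketch of how a trust score and entitlement level might feed an authorization decision in a Zero Trust model. The threshold, score scale, and level values are hypothetical and not drawn from any specific vendor implementation.

```python
# Illustrative sketch only: a simplified Zero Trust access decision.
# The threshold, trust-score scale, and entitlement levels below are
# hypothetical, not from any specific vendor implementation.

def authorize(trust_score: int, entitlement_level: int,
              resource_sensitivity: int) -> bool:
    """Grant access only if the session's trust score meets a minimum
    and the identity's entitlement level covers the resource."""
    MIN_TRUST = 70  # hypothetical threshold on a 0-100 scale
    if trust_score < MIN_TRUST:
        return False  # low-trust posture: deny regardless of entitlement
    return entitlement_level >= resource_sensitivity

# A compliant, trusted device with sufficient entitlement is authorized;
# a low-trust device is denied east-west access even with entitlement.
print(authorize(85, 3, 2))  # True
print(authorize(40, 3, 2))  # False
```

The point of the sketch is the ordering: trust is evaluated before entitlement, so a device with a degraded posture never reaches the entitlement check at all.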
The final topic covered in this chapter was regulatory compliance standards.
This section provided an overview of HIPAA, PCI DSS, and GDPR. Again,
these were quick overviews to give you a general idea of how a network
designer should understand a given standard and apply that understanding
as they make design decisions. The goal for all network designers is to meet
the standards in question, to allow the business and its networks to stay
online. These were just three examples of a growing landscape of security
standards that impact the overall design and architecture of modern
enterprise networks.
Reference
Al-shawi, Marwan, CCDE Study Guide (Cisco Press, 2015)
How do you know what you are choosing from a network design
perspective is going to work? Are you making the right network design
decisions? How do you define the phrase “architecture”? Who defines your
architecture? If you asked another member of your organization, would they
define architecture the same way you do?
This is where reference architectures, models, and frameworks come in to
save the day. These items help guide your decisions by leveraging the
different aspects that an organization has to help determine the proper way
forward for that business.
This chapter highlights these reference architectures, models, and
frameworks so you as a network designer understand how to properly map
the decisions and the overarching architecture you are building to the
specific business elements. These topics will not be covered end to end, as
each individual framework and reference architecture warrants its own book
in itself. This chapter will cover examples of these frameworks to show
how they can dictate a design or architecture for a business.
This chapter covers the following “CCDE v3.0 Core Technology List”
section:
Caution
The goal of self-assessment is to gauge your mastery of the topics in
this chapter. If you do not know the answer to a question or are only
partially sure of the answer, you should mark that question as wrong
for purposes of the self-assessment. Giving yourself credit for an
answer you correctly guess skews your self-assessment results and
might provide you with a false sense of security.
Foundation Topics
Business Architecture
Business architecture (BA) enables everyone, from strategic planning
teams to implementation teams, to get “on the same page,” enabling them to
address challenges and meet business objectives. The people, processes,
and technology (tools) that align with business priorities enable business
outcomes. Furthermore, a business solution is actually a set of interacting
business capabilities that delivers one or more specific business outcomes.
A business outcome is a specific measurable result of an activity, process,
or event within the business, traditionally following the SMART principle:
specific, measurable, attainable, realistic, and time-bound.
Here are a few examples of real business outcomes that business leaders
have achieved in production environments today:
Figure 5-1 shows how it all fits together, specifically how a technology
solution, in our case a network design decision we made, fits in with a
business capability of automating business processes, which transforms the
business to provide the business solution of customer care, which then
delivers the business outcome of an improved customer lifetime value by 10
percent in 24 months.
Aligning Stakeholders
Who are the stakeholders we should align with? There are four groups of
stakeholders we should look to align in this process: the IT steering
committee, architects, finance and purchasing, and enterprise-wide process
owners. The following list provides some examples of what roles, titles, and
groups might fit into each of these four categories:
Keep in mind that these are example roles and titles, and not a pre-scripted
or all-inclusive list. As a network designer, it is ultimately up to you to
identify what stakeholders you need to include in this process.
Hardware inventory
Software inventory
Contract management
Network discovery
Administrative data
User data
Event management data (IT Operations Management [ITOM])
Protection of assets.
Reputation management.
Cost optimization.
Summary
Typically, no single reference architecture, framework, or model will apply
to the situation you are in as a network designer. In most cases, you may
have to merge or adopt aspects of multiple frameworks to help guide you
and the business to success. As you continue this journey, here are three
guidelines to leverage when dealing with frameworks:
Encourage architecture reviews regularly
Avoid big-bang implementations (doing everything at once)
Use your operating model as a North Star, following a path toward a
strategic goal
This chapter started with a number of questions to really set the stage for
the discussion of architectures, frameworks, and models. How do you know
what you are choosing is going to work? Are you making the right
decisions? How do you define the phrase “architecture”? Who defines your
architecture?
To answer these questions, this chapter first covered business architecture,
defining what business priorities, drivers, outcomes, and capabilities are
and, more importantly, how they interact with one another. In this same
context, we described the different levels of alignment, or scope, that
business architecture includes and how you can identify which scope you
are in. Our next point of interest was how to know whether the network
design we are proposing will be successful, which is where the value of key
performance indicators was brought forth.
Then we covered enterprise architecture, where we determine business
requirements by identifying stakeholders and the operating model being
leveraged. Finally, we identified the current state by defining the data and
creating a blueprint.
The end of this chapter provided a list of the most common reference
architectures, frameworks, and models at the time of writing. This list is a
summary and is not meant to be inclusive of all the different components
within each of the frameworks listed, nor of all frameworks leveraged
today.
Reference
Al-shawi, Marwan, CCDE Study Guide (Cisco Press, 2015)
Transport Technologies
This chapter covers the following topics:
Caution
The goal of self-assessment is to gauge your mastery of the topics in
this chapter. If you do not know the answer to a question or are only
partially sure of the answer, you should mark that question as wrong
for purposes of the self-assessment. Giving yourself credit for an
answer you correctly guess skews your self-assessment results and
might provide you with a false sense of security.
Foundation Topics
Figure 6-1 illustrates the different Layer 2 media access methods and
topologies that can be transported over an MPLS-enabled backbone.
Metro Ethernet Forum (MEF), however, defines the two categories just
discussed as two main types of Layer 2 Ethernet services:
Table 6-2 summarizes the relationship between the transport model and the
Metro Ethernet service definitions.
In addition to E-Line and E-LAN services, services are available for Layer
2 that are mainly to facilitate carrying legacy WAN transport over MPLS
networks, such as the following:
Note
Cisco’s implementation of VPWS is known as Any Transport over
MPLS (AToM) and delivers what is known as Ethernet over MPLS
(EoMPLS). L2TPv3, however, can be used as an analogous service to
AToM over any IP transport. Keep in mind here that Cisco’s AToM
includes more services (PPP, HDLC, FR, and ATM) as its “any
transport” than EoMPLS. EoMPLS and Metro Ethernet services
cannot provide this same level of flexibility from a connectivity and
mixed circuits perspective.
ATM AAL5 protocol data units (PDUs) over PW
ATM cell relay over PW
Ethernet Protocol based
VLAN based
Limited scalability: the more PEs and VSIs there are, the more
network hardware resources are consumed per PE, along with
increased operational complexity:
Many PWs because of the nature of this model, where a full
mesh of directed LDP sessions is required (N × (N – 1) / 2 PWs
required)
Potential signaling and packet replication overhead, when the
number of PWs increases across multiple PEs to cover multiple
remote sites (CEs) per customer VSI using LDP as the control
plane protocol
Large amount of multicast replication, which may result in
inefficient network bandwidth utilization (unless some
mechanism is used to mitigate it, such as Internet Group
Management Protocol [IGMP] snooping with VPLS)
CPU overhead for replication
Limitations on MAC address distribution across the network
Supports only a limited number of customers/VLANs (maximum 4096)
VLAN- and port-level support only (no QinQ)
Supports multihomed CEs only in an active-standby manner
Suitable for simple and small deployments, such as a self-deployed
enterprise VPLS solution
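The pseudowire scaling limitation in the list above can be checked with a quick sketch. This is a minimal illustration of the stated N × (N − 1) / 2 full-mesh formula; the PE counts are arbitrary examples.

```python
# Sketch of the flat-VPLS scaling math from the list above: a full mesh
# of LDP-signaled pseudowires among N PEs needs N x (N - 1) / 2 PWs.

def full_mesh_pws(n_pes: int) -> int:
    """Number of pseudowires required for a full mesh of n_pes PE nodes."""
    return n_pes * (n_pes - 1) // 2

for n in (4, 10, 50):
    print(n, "PEs ->", full_mesh_pws(n), "PWs")
# The PW count grows quadratically: 4 PEs need 6 PWs, 10 need 45,
# and 50 need 1225, which illustrates why flat VPLS does not scale.
```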
Business Drivers
One of the main drivers toward EVPN is the increased demand on
distributed virtualized and cloud-based data centers, which commonly
require scalable and reliable stretched Layer 2 connectivity among them.
Recently, Data Center Interconnect (DCI) has become a leading
application for Ethernet multipoint L2VPNs. Virtual machine (VM)
mobility, storage clustering, and other data center services require nodes
and servers in the same Layer 2 network to be extended across data centers
over the WAN. Consequently, these trends and customer needs add new
requirements for L2VPN operators to meet, such as the following:
EVPN Instance
An EVI represents an L2VPN instance on a PE node. Similar to the VRF in
MPLS L3VPN, import and export Route Targets (RTs) are allocated to
each EVI. In addition, a bridge domain (BD) is associated with each EVI.
Mapping traffic to the bridge domain, however, is dependent on the
multiplexing behavior of the user to network interface (UNI). Typically, any
given EVI can include one or more BDs based on the PE’s service interface
deployment type, as summarized in Figure 6-14.
Figure 6-14 EVPN EVI Models
For instance, you can use the VLAN bundling model in environments that
require multiple VLANs to be carried transparently across the EVPN cloud
between two or more sites. In contrast, the VLAN-aware bundling is more
feasible for multitenant data center environments where multiple VLANs
have to be carried over the DCI over a single EVI with multiple BDs
(VLAN to BD 1:1 mapping), because the overlapping of tenant MAC
addresses across different VLANs is supported.
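The VLAN-to-BD mapping difference between the two bundling models can be sketched as a simple lookup. This is a hypothetical data-structure illustration of the behavior described above, not an implementation of any platform's EVI logic; the BD naming is invented.

```python
# Hypothetical sketch of the EVI service-interface models described above:
# VLAN-aware bundling keeps a 1:1 VLAN-to-BD mapping inside one EVI (so
# tenant MAC addresses can overlap across VLANs), while VLAN bundling
# maps all VLANs into a single shared bridge domain.

def bridge_domain(evi_model: str, vlan: int) -> str:
    if evi_model == "vlan-aware-bundling":
        return f"BD-{vlan}"       # one BD per VLAN (1:1 mapping)
    elif evi_model == "vlan-bundling":
        return "BD-shared"        # all VLANs carried in one shared BD
    raise ValueError(f"unknown EVI model: {evi_model}")

print(bridge_domain("vlan-aware-bundling", 10))  # BD-10
print(bridge_domain("vlan-bundling", 10))        # BD-shared
```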
Ethernet Segment
Ethernet segment (ES) refers to a site that is connected to one or more PEs.
(An ES can be either a single CE or an entire network.) Typically, each
Ethernet segment is assigned a single unique identifier, commonly
referred to as an Ethernet segment identifier (ESI). This identifier
eliminates the need for any STP type of protocol for loop prevention,
which would normally add limitations to the design, especially for
the multihomed CE scenarios.
Based on this identifier, EVPN can provide access redundancy that offers
georedundancy and multihoming, where a site (CE or entire network with
multiple CEs) can be attached to one or multiple PEs that connect to the
same provider core using multiple combinations of the CE-PE connectivity.
Figure 6-15 illustrates the various EVPN-supported access connectivity
models:
Note
In EVPN, the PE advertises in BGP a split-horizon label (ESI MPLS
label) associated with each multihomed ES to prevent flooded traffic
from echoing back to a multihomed Ethernet segment.
Note
RFC 6391 describes a mechanism that introduces a flow label that
allows P routers to distribute flows within a PW.
Note
The placement of the functionality of a VXLAN controller with BGP
(EVPN) as a control plane (where all the hosts’ MAC addresses are
hosted and updated) can vary from vendor to vendor and from
solution to solution. For example, it can be placed at the spine nodes
of the data center architecture, as well as at the virtualized (software-
based) controller. It can also be used as a BGP route reflector to
exchange Virtual Tunnel End Point (VTEP) list information between
multiple VSMs.
Note
Technically, you can use all the VXLAN models discussed earlier
(host and network based) at the same time. However, from a design
point of view, this can add design complexity and increase
operational complexity. Therefore, it is always recommended to keep
it simple and start with one model that can achieve the desired goal.
Summary
This chapter covered the various transport technologies design models,
protocols, and approaches, along with the characteristics of each. All these
design options and protocols are technically valid and proven solutions and
still in use by many operators today. However, as a network designer, you
must evaluate the scenario that you are dealing with, ideally using the top-
down design approach, where business goals and requirements are at the
top, followed by the application requirements that should collectively drive
the functional and technical requirements.
For instance, if an enterprise needs a basic self-deployed Layer 2 DCI
solution between three distributed data centers with a future plan to add a
fourth data center within two years, a flat VPLS solution can be cost-
effective and simple to deploy and manage, as well as scalable enough
for this particular scenario. By contrast, if a service provider is already
running flat VPLS and experiencing high operational complexity and
scalability challenges and the SP is interested in a solution that supports its
future expansion plans with regard to the number of L2VPN customers
while minimizing the current operational complexity, then H-VPLS with
BGP signaling, H-VPLS with PBB, EVPN, or EVPN PBB are possible
solutions here. More detail gathering would be required to narrow down the
selection. For example, is this operator offering MPLS L3VPN? Does it
plan to offer multihoming to the L2VPN customers with active-active
forwarding? If the answer to any of these questions is yes, EVPN (with or
without PBB) can be an optimal solution. Because if the SP is offering
L3VPN, that means the same control plane (BGP) can be used for both
MPLS VPN services (simplifies operational complexity and offers a more
scalable solution).
Adding PBB to this solution will optimize its scalability to a large extent,
especially if this operator provides L2VPN connectivity to cloud-based data
centers where a large number of virtual machine MAC addresses is
expected to be carried over the L2VPN cloud. If multihoming with active-
active forwarding is required, EVPN here will be a business enabler, along
with optimized scalability and simplified operation as compared to the
existing flat VPLS. However, there might be some design constraints here;
for example, if the current network nodes do not support EVPN and the
business is not allocating any budget to perform any hardware or software
upgrade; or if this provider has an existing interprovider L2VPN link with a
global Carrier Ethernet, to extend its L2VPN connectivity for some large
enterprise customers with international sites, and this global Carrier
Ethernet does not support EVPN.
In both situations, the network designer is forced to look into other suitable
design alternatives such as H-VPLS. Again, the design requirements and
constraints must drive the decision for which solution is the suitable or
optimal one by looking at the bigger picture and not focusing only on the
technical characteristics of the design option or protocol.
Reference
Al-shawi, Marwan, CCDE Study Guide (Cisco Press, 2015)
Layer 2 Technologies
This chapter covers the following topics:
Caution
The goal of self-assessment is to gauge your mastery of the topics in
this chapter. If you do not know the answer to a question or are only
partially sure of the answer, you should mark that question as wrong
for purposes of the self-assessment. Giving yourself credit for an
answer you correctly guess skews your self-assessment results and
might provide you with a false sense of security.
Foundation Topics
There are more Layer 2 technologies than this core list. We will cover a
number of them in subsequent chapters.
In addition, there are some features and enhancements to STP that can
optimize the operation and design of STP behavior in a classical Layer 2
environment. The following are the primary STP features:
Loop Guard: Prevents an alternate or root port from becoming a
designated port in the absence of bridge protocol data units (BPDUs)
Root Guard: Prevents external or downstream switches from
becoming the root
BPDU Guard: Disables a PortFast-enabled port if a BPDU is
received
BPDU Filter: Prevents sending or receiving BPDUs on PortFast-
enabled ports
Figure 7-1 briefly highlights the most appropriate place where these
features should be applied in a Layer 2 STP-based environment.
Figure 7-1 STP Features Locations
Note
Cisco has developed enhanced versions of STP. It has incorporated a
number of the preceding features into them using different versions of
STP that provide faster convergence and increased scalability, such as
Per-VLAN Spanning Tree Plus (PVST+) and Rapid PVST+.
Link Aggregation
The concept of link aggregation refers to the industry standard IEEE
802.3ad, in which multiple physical links can be grouped together to form a
single logical link. This concept offers a cost-effective solution by
increasing cumulative bandwidth without requiring any hardware upgrades.
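A key property of this logical link is how traffic is shared across the members. The sketch below illustrates the general flow-hashing idea, not any vendor's actual hash algorithm: frames are distributed by hashing flow fields, so each flow stays on one member link (preserving frame ordering) while different flows spread across the bundle.

```python
# Illustrative sketch (not any vendor's actual hash): a LAG bundle
# typically picks a member link by hashing flow identifiers, so every
# frame of a given flow uses the same link while distinct flows are
# spread across the bundle.

def member_link(src_mac: str, dst_mac: str, n_links: int) -> int:
    # hypothetical hash on the MAC pair; real hardware may also
    # include IP addresses and L4 ports in the hash input
    return hash((src_mac, dst_mac)) % n_links

# Every frame of the same flow maps to the same member link,
# which is what preserves per-flow frame ordering.
link = member_link("aa:aa", "bb:bb", 4)
assert all(member_link("aa:aa", "bb:bb", 4) == link for _ in range(100))
print("flow pinned to member link", link)
```

One design consequence of per-flow hashing is that a single large flow can never exceed the bandwidth of one member link, even though the bundle's aggregate bandwidth is higher.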
The IEEE 802.3ad Link Aggregation Control Protocol (LACP) offers
several other benefits, including the following:
Note
Cisco has created a “switch-clustering” solution called Virtual
Switching System (VSS), which solves the Spanning Tree Protocol
looping problem by converting the distribution switching pair into a
single logical switch. From a design perspective, VSS removes the
need for both STP and FHRP in the Layer 2 design.
Table 7-2 summarizes and compares the main capabilities and functions of
these different FHRP protocols.
Note
In the design illustrated in Figure 7-3, when GLBP is used as the
FHRP, it is going to be less deterministic compared to HSRP or
VRRP because the distribution of Address Resolution Protocol
(ARP) responses is going to be random.
If a network designer doesn’t properly align the STP root bridge with the
active FHRP instance for the corresponding VLAN, there will be a
suboptimal impact to traffic traversing that VLAN. An example of this is
shown in Figure 7-4.
Figure 7-4 FHRP and STP Design Alignment
While the design in Figure 7-4 functionally works, it is not optimal and in
most cases would be called a poor design. This is because FHRP and STP
have not been properly aligned from a design perspective, causing traffic
sourced from the client device on VLAN 22 to traverse an extra hop
through DSW1 to then get to DSW2.
STP-Based Models
In classical Layer 2 STP-based LAN networks, the connectivity from the
access layer switches to the distribution layer switches can be designed in
various ways and combined with Layer 2 control protocols and features
(discussed earlier) to achieve certain design functional requirements. In
general, no single best design fits every requirement, because each design
is proposed to resolve a certain issue or requirement. However, by
understanding the strengths and weaknesses of each topology and design
model (illustrated in Figure 7-5 and compared in Table 7-3), network
designers can then select the
most suitable design model that meets the requirements from different
aspects, such as network convergence time, reliability, and flexibility. This
section highlights the most common classical Layer 2 design models of
LAN environments with STP, which can be applied to enterprise Layer 2
LAN designs. Figure 7-5 highlights the different STP-based LAN
connectivity topologies.
Figure 7-5 Primary and Common Layer 2 (STP-Based) LAN
Connectivity Topologies
Table 7-3 contains a design comparison summary for the STP-based LAN
connectivity topologies highlighted in Figure 7-5.
Note
All the Layer 2 design models in Figure 7-5 share common
limitations: the reliance on STP to avoid loss of connectivity caused
by Layer 2 loops and the dependency on Layer 3 FHRP timers, such
as VRRP, to converge. These dependencies naturally lead to an
increased convergence time when a node or link fails. Therefore, as a
rule of thumb, tuning and aligning STP and FHRP timers is a
recommended practice to overcome these limitations to some extent.
Figure 7-7 Cisco VSS Cluster Design with Dual Active Detection
Stacking switches is another form of clustering, similar to the virtual switch
architecture. Stacking has a number of the same attributes and design
elements but is achieved by joining multiple physical switches into a single
logical switch. From a Cisco product implementation perspective (Cisco-
proprietary StackWise), switches are interconnected by StackWise
interconnect cables, and a master switch is selected. The switch stack is
managed as a single object and uses a single IP management address and a
single configuration file. This reduces management overhead. Furthermore,
the switch stack can create an EtherChannel connection, and uplinks can
form Multichassis EtherChannel (MEC) with an upstream distribution
architecture.
Daisy-Chained Access Switches
Although this design model might be a viable option to overcome some
limitations, network designers commonly use it as an interim solution. This
design can introduce undesirable network behaviors. For instance, the
design shown in Figure 7-8 can introduce the following issues during a link
or node failure:
Summary
In this chapter, we focused on the Layer 2 technologies that all network
designers need to know. This included STP, VLANs, trunking, link
aggregation, Multichassis link aggregation (mLAG), and FHRP. We then
discussed and compared the most common LAN design models, when to
use them, and why to use them. As a network designer, you need to have
various technologies in place that properly protect the Layer 2 domain and
facilitate having redundant paths to ensure continuous connectivity to the
end user and their respective application, in the event of a failure scenario.
Understanding the characteristics of these core Layer 2 technologies and
their respective behaviors is critical to successfully designing a reliable and
highly available Layer 2 network.
Reference
Al-shawi, Marwan, CCDE Study Guide (Cisco Press, 2015)
Layer 3 Technologies
This chapter covers the following topics:
Caution
The goal of self-assessment is to gauge your mastery of the topics in
this chapter. If you do not know the answer to a question or are only
partially sure of the answer, you should mark that question as wrong
for purposes of the self-assessment. Giving yourself credit for an
answer you correctly guess skews your self-assessment results and
might provide you with a false sense of security.
Foundation Topics
Routing protocols
Routing table
Forwarding decision (switches packets)
Figure 8-2 Router’s Forwarding Decision
Link-State Routing
Link-state routing protocols use Dijkstra’s shortest path algorithm to
calculate the best path. Open Shortest Path First (OSPF) and
Intermediate System-to-Intermediate System (IS-IS) protocols are link-
state routing protocols that have a common conceptual characteristic in the
way they build, interact, and handle L3 routing to some extent. A link-state
advertisement (LSA) is a message that is used to communicate network
information such as router links, interfaces, link states, and costs within a
link-state routing protocol. Figure 8-3 illustrates the process of building and
updating a link-state database (LSDB).
Figure 8-3 Process of Building an LSDB
It is important to remember that although OSPF and IS-IS as link-state
routing protocols are highly similar in the way they build the LSDB and
operate, they are not identical! This section discusses the implications of
applying link-state routing protocols (OSPF and IS-IS) on different network
topologies, along with different design considerations and
recommendations.
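As a concrete illustration of the shortest-path computation both protocols share, here is a minimal Python sketch of Dijkstra's algorithm run against a small LSDB-style topology. The router names and interface costs are invented for illustration only.

```python
# Minimal sketch of the Dijkstra shortest-path computation that
# link-state protocols such as OSPF and IS-IS run against the LSDB.
import heapq

def dijkstra(graph, source):
    """Return the lowest-cost distance from source to every reachable node."""
    dist = {source: 0}
    pq = [(0, source)]
    while pq:
        d, node = heapq.heappop(pq)
        if d > dist.get(node, float("inf")):
            continue  # stale priority-queue entry
        for neighbor, cost in graph[node]:
            nd = d + cost
            if nd < dist.get(neighbor, float("inf")):
                dist[neighbor] = nd
                heapq.heappush(pq, (nd, neighbor))
    return dist

# Hypothetical four-router topology with OSPF-style interface costs
lsdb = {
    "R1": [("R2", 10), ("R3", 1)],
    "R2": [("R1", 10), ("R4", 1)],
    "R3": [("R1", 1), ("R4", 100)],
    "R4": [("R2", 1), ("R3", 100)],
}
print(dijkstra(lsdb, "R1"))  # R4 is reached via R2 at cost 11, not via R3 at 101
```

Every router holding the same LSDB runs this computation independently and arrives at the same loop-free shortest-path tree, which is the fundamental property that distinguishes link-state from distance-vector routing.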
Note
The link between the two hub nodes (for example, ABRs) will
introduce the potential of a single point of failure to the design.
Therefore, link redundancy (availability) between the ABRs may
need to be considered.
Note
OSPF is a more widely deployed and proven link-state routing
protocol in enterprise networks compared to IS-IS, especially with
regard to hub-and-spoke topologies. IS-IS has limitations when it
works on nonbroadcast multiple access (NBMA) multipoint
networks.
Note
Later in this chapter, more details are provided about flooding domain
and route summarization design considerations for link-state routing
protocols, which can reduce the level of control plane complexity and
optimize link-state information flooding and performance.
Note
Other mechanisms help to optimize and reduce link-state LSA/LSP
flooding by reducing the transmission of subsequent LSAs/LSPs,
such as OSPF flooding reduction (described in RFC 4136). This is done
by eliminating the periodic refresh of unchanged LSAs, which can be
useful in fully meshed topologies.
Each of the OSPF areas allows certain types of LSAs to be flooded, which
can be used to optimize and control route propagation across the OSPF
routed domain. However, if OSPF areas are not properly designed and
aligned with other requirements, such as application requirements, this can
lead to serious issues such as traffic black-holing and suboptimal routing.
Subsequent
sections in this book discuss these points in more detail.
Figure 8-9 shows a conceptual high-level view of the route propagation,
along with the different OSPF LSAs, in an OSPF multi-area design with
different area types.
The typical design question is, “Where can these areas be used and why?”
The basic standard answer is, “It depends on the requirements and
topology.”
For instance, if no requirement specifies which path a route must take to
reach external networks such as an extranet or the Internet, you can use the
“totally NSSA” area type to simplify the design. For example, the scenario
in Figure 8-10 is one of the most common design models that use OSPF
NSSA. In this design model, the border area that interconnects the campus
or data center network with the WAN or Internet edge devices can be
deployed as totally NSSA. This deployment assumes that no requirement
dictates which path should be used. Furthermore, in the case of NSSA and
multiple ABRs, OSPF selects one ABR to perform the translation from
LSA type 7 to LSA type 5 and floods it into area 0 (normally the router
with the highest router ID, as described in RFC 1587 [obsoleted by RFC
3101]). This behavior can affect the design if the optimal path is required.
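The single-translator election just described can be sketched as follows. This models only the RFC 1587-style behavior of choosing the ABR with the highest router ID; the router IDs are hypothetical.

```python
# Sketch of the NSSA type 7-to-type 5 translator election described
# above: with multiple NSSA ABRs, one ABR (normally the one with the
# highest router ID) performs the translation. Router IDs are
# hypothetical dotted-quad values.

def elect_translator(abr_router_ids):
    """Pick the ABR with the numerically highest router ID."""
    # router IDs must be compared octet by octet as numbers,
    # not as strings ("10.0.0.12" > "10.0.0.2" numerically)
    return max(abr_router_ids,
               key=lambda rid: tuple(int(o) for o in rid.split(".")))

print(elect_translator(["10.0.0.1", "10.0.0.12", "10.0.0.2"]))  # 10.0.0.12
```

Because the election is driven purely by router ID rather than by topology or link cost, the elected translator may not sit on the preferred forwarding path, which is exactly why this behavior matters when optimal routing is a requirement.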
Note
RFC 3101 introduced the ability to have multiple ABRs perform the
translation from LSA type 7 to type 5. However, the extra
unnecessary number of LSA type 7 to type 5 translators may
significantly increase the size of the OSPF LSDB. This can affect the
overall OSPF performance and convergence time in large-scale
networks with a large number of prefixes.
Similarly, in the scenario depicted on the left in Figure 8-11, a data center in
London hosts two networks (10.1.1.0/24 and 10.2.1.0/24). Both
WAN/MAN links to this data center have the same bandwidth and cost.
Based on this setup, the traffic coming from the Sydney branch toward
network 10.2.1.0/24 can take any path. If this is not compromising any
requirement (in other words, suboptimal routing is not an issue), the OSPF
area 10 can be deployed as a “totally stubby area” to enhance the
performance and stability of remote site routers.
In contrast, the scenario on the right side of Figure 8-11 has a slightly
different setup. The data centers are located in different geographic
locations with a data center interconnect (DCI) link. In a scenario like this,
the optimal path to reach the destination network can be critical, and using a
totally stubby area can break the optimal path requirement. To overcome
this limitation, there are two simple alternatives to use: either “normal
OSPF area” or the “stubby area” for area 10. This ensures that the most
specific route (LSA type 3) is propagated to the Sydney branch router to
select the direct optimal path rather than crossing the international DCI.
Figure 8-11 OSPF Totally Stubby Area Versus Stubby Area Design
In summary, the goal of these types of different OSPF areas is to add more
optimization to the OSPF multi-area design by reducing the size of the
routing table and lowering the overall control plane complexity by reducing
the size of the fault domains (link-state flooding domains). This size
reduction can help to reduce the overhead of the routers’ resources, such as
CPU and memory. Furthermore, the reduction of the flooding domains’ size
will help accelerate the overall network recovery time in the event of a link
or node failure. However, in some scenarios where an optimal path is
important, take care when choosing between these various area types.
Note
In the scenarios illustrated in Figure 8-10 and Figure 8-11,
asymmetrical routing is a possibility, which may be an issue if there
are any stateful or stateless network devices in the path such as a
firewall. However, this section focuses only on the concept of area
design. Later in this book, you will learn how to manage
asymmetrical routing at the network edge.
It is obvious that OSPF and IS-IS as link-state routing protocols are similar
and can achieve (to a large extent) the same result for enterprises in terms of
design, performance, and limitations. However, OSPF is more commonly
used by enterprises as the interior gateway protocol (IGP), for the following
reasons:
OSPF can offer a more structured and organized routing design for
modular enterprise networks.
OSPF is more flexible over a hub-and-spoke topology with multipoint
interfaces at the hub.
OSPF naturally runs over IP, which makes it a suitable option to be
used over IP tunneling protocols such as Generic Routing
Encapsulation (GRE), Multipoint GRE (mGRE), Cisco Dynamic
Multipoint Virtual Private Network (DMVPN), and Next Hop
Resolution Protocol (NHRP), whereas with IS-IS, this is not a
supported design.
In terms of staff knowledge and experience, OSPF is more widely
deployed on enterprise-grade networks. Therefore, compared to IS-IS,
more people have OSPF knowledge and expertise.
However, if there is no technical barrier, both OSPF and IS-IS are valid
options to consider.
EIGRP Routing
Enhanced Interior Gateway Routing Protocol (EIGRP) is an enhanced distance-vector routing protocol, relying on the Diffusing Update Algorithm (DUAL) to calculate the shortest path to a network. A traditional distance-vector protocol periodically advertises its entire routing table to its directly connected neighbors. EIGRP, as a Cisco innovation, became highly valued for its ease of deployment, flexibility, and fast convergence. For these reasons, many large enterprises consider EIGRP the preferred IGP. EIGRP maintains the advantages of distance-vector protocols while avoiding their traditional disadvantages. For instance, EIGRP does not retransmit the entire contents of the routing table after an update event; instead, it transmits only the “delta” of the routing information since the last topology update. EIGRP is
deployed in many enterprises as the routing protocol for the following
reasons:
This means that the remote site connected to router D will be completely
isolated, without taking any advantage of the backdoor link. To overcome
this issue, EIGRP offers a useful feature called stub leaking, where both
routers D and C in this scenario can advertise routes to each other
selectively, even if they are configured as a stub. Route filtering might need
to be incorporated in scenarios like this when an EIGRP leak map is
introduced into the design to avoid any potential suboptimal routing that
might happen as a consequence of route leaking.
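A minimal IOS-style sketch of the stub leaking idea (the AS number, names, and prefixes are illustrative, not taken from the scenario figure): the router remains a stub but selectively advertises the backdoor prefix via a leak map:

```
! Hypothetical configuration on router D; all names and numbers are examples.
ip access-list standard BACKDOOR-PREFIX
 permit 10.10.10.0 0.0.0.255
!
route-map LEAK-MAP permit 10
 match ip address BACKDOOR-PREFIX
!
router eigrp 100
 network 10.0.0.0
 ! Still a stub, but routes matched by LEAK-MAP are advertised to neighbors
 eigrp stub connected summary leak-map LEAK-MAP
```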
Note
As discussed earlier, a link-state routing protocol can lead to transit forwarding loops in ring and mesh topologies after a network component failure event. Therefore, both EIGRP and link-state routing protocols have limitations on these topologies, with different manifestations (a fast, large number of EIGRP queries versus link-state transit loops).
Note
A link-state routing protocol can offer built-in information hiding
capabilities (route suppression) by using different types of flooding
domains, such as L1/L2 in IS-IS and stubby types of areas in OSPF.
The subsequent sections examine where and why to break a routed network
into multiple logical domains. You will also learn summarization techniques
and some of the associated implications that you need to consider.
Note
Although route filtering can be considered as an option for hiding
reachability information, it is often somewhat complicated with link-
state protocols.
Figure 8-17 Link-State Flooding Domain Boundaries
The following sections cover the various design considerations for IGP
flooding domains, starting with a review of the structure of link-state and
EIGRP domains.
Note
The solution presented in this scenario is based on the assumption
that traffic flowing over multiple international links is acceptable
from the perspective of business and application requirements.
You can use a GRE tunnel as an alternative method to the OSPF virtual link
to fix issues like the one just described; however, there are some differences
between using a GRE tunnel versus an OSPF virtual link, as summarized in
Table 8-3.
Figure 8-23 OSPF Virtual Link
GRE tunnel: May add tunnel overhead, because all traffic is tunneled and encapsulated by the tunnel endpoints. An OSPF stub area can be used as a transit area for the tunnel.
OSPF virtual link: The routing updates are tunneled, but the data traffic is sent natively without tunnel overhead. The transit area cannot be an OSPF stub area.
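To make the comparison concrete, here is an IOS-style sketch of both options (router IDs, addresses, and area numbers are hypothetical):

```
! Option 1: OSPF virtual link across transit area 1 (area 1 cannot be a stub).
! 10.2.2.2 is the router ID of the ABR at the far end of area 1.
router ospf 1
 area 1 virtual-link 10.2.2.2
!
! Option 2: GRE tunnel placed in area 0. Area 1 may remain a stub, but all
! data traffic crossing the tunnel is encapsulated by the tunnel endpoints.
interface Tunnel0
 ip address 172.16.0.1 255.255.255.252
 ip ospf 1 area 0
 tunnel source Loopback0
 tunnel destination 192.168.100.2
```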
Note
The amount of available bandwidth with regard to the control plane
traffic such as link-state LSAs/LSPs is sometimes a limiting factor.
For instance, the most common quality of service (QoS) standard
models followed by many organizations allocate one of the following
percentages of the interface’s available bandwidth for control
(routing) traffic: 4-class model, 7 percent; 8-class model, 5 percent;
and 12-class model, 2 percent. This is more of a concern when the
interconnection is a low-speed link such as a legacy WAN link (time-
division multiplexing [TDM] based, Frame Relay, or ATM) with
limited bandwidth. Therefore, other alternatives are sometimes
considered with these types of interfaces, such as a passive interface
or static routing.
For instance, many service providers run thousands of routers within one IS-IS level. Although this may introduce other design limitations with regard to modern architectures, in practice it has proven to be a workable design.
In addition, today’s router capabilities, in terms of hardware resources, are
much stronger and faster than routers that were used five to seven years
ago. This can have a major influence on the design, as well, because these
routers can handle a high number of routes and volume of processing
without any noticeable performance degradation.
In addition, the number of areas per border router is also one of the primary
considerations in designing link-state routing protocols, in particular OSPF.
Traditionally, the main constraint with the limited number of areas per ABR
is the hardware resources. With the next generation of routers, which offer
significant hardware improvements, ABRs can hold a greater number of
areas. However, network designers must understand that each additional area added per ABR correlates with potentially lower performance (because the router stores a separate LSDB per area).
In other words, the hardware capabilities of the ABR are the primary
deterministic factor of the number of areas that can be allocated per ABR,
considering the number of prefixes per area as well. Traditionally, the rule
of thumb is to consider two to three areas (including the backbone area) per
ABR. This is a foundation and can be expanded if the design requires more
areas per ABR, with the assumption that the hardware resources of the ABR
can handle this increase.
In addition to these facts and variables, network designers should consider
the nature of the network and the concept of fault isolation and design
modularity for large networks that can be designed with multiple functional
fault domains (modules). For example, large-scale routed networks are
commonly divided based on the geographic location of global networks or
based on an administrative domain structure if they are managed by
different entities.
From the perspective of logical separation, you should place each one of the
large parts of the network into its own logical domain. The logical topology
can be broken using OSPF areas, IS-IS levels, or EIGRP route
summarization. The question you might be asking is, “Why has the domain
boundary been placed at routers G and H rather than router D?”
Technically, both are valid places to break the network into multiple logical
domains. However, if we place the domain boundary at router D, both the
primary data center network and regional data center will be under the same
logical fault domain. This means the network may be less scalable and
associated with lower control plane stability because routers E and F will
have a full view of the topology of the regional data center network
connected to routers G and H. In addition, routers G and H most probably
will face the same limitations as routers E and F. As a result, if there is any
link flap or routing change in the regional data center network connected to
router G or H, it will be propagated across to routers E and F (unnecessary
extra load and processing).
Figure 8-27 Potential Routing Domain Boundaries
Note
Although both options are valid solutions, on the CCDE exam the
correct choice will be based on the information and requirements
provided. For instance, if one of the requirements is to achieve a
more stable and modular design, a separate OSPF area for the
regional data center will be the more feasible option in this case.
Route Summarization
The other major factor when deciding where to divide the logical topology
of a routed network is where summarization or reachability information
hiding can take place. The important point here is that the physical layout of
the topology must be considered. In other words, you cannot decide where
to place the reachability information hiding boundary (summarization)
without referring to what the physical architecture looks like and where the
points are that can enhance the overall routing design if summarization is
enabled. Subsequent sections in this chapter cover route summarization
design considerations in more detail.
Figure 8-31 IS-IS Levels and Optimal Routing
Suboptimal Routing
Although hiding reachability information with route summarization can
help to reduce control plane complexity, it can lead to suboptimal routing in
some scenarios. This suboptimal routing, in turn, may lead traffic to use a
lower-bandwidth link or an expensive link, over which the enterprise might
not want to send every type of traffic. For example, if we use the same
scenario discussed earlier in the OSPF areas, we then apply summarization
to the data center edge routers of London and Milan and assume that the
link between Sydney and Milan is a high-cost link that has a typically lower
routing metric, as depicted in Figure 8-37.
Note
The example in Figure 8-37 is “routing protocol” neutral; it can apply
to all routing protocols in general.
As illustrated in Figure 8-37, the link between the Sydney branch and the
Milan data center is 10 Mbps, and the link to London is 5 Mbps. In
addition, the data center interconnect between Milan and London data
centers is only 2 Mbps. In this particular scenario, summarization of the
Sydney branch from both data centers will typically hide the more specific
route. Therefore, the Sydney branch will send traffic destined to any of the
data centers over the high-bandwidth link (with a lower routing metric); in
this case, the Sydney–Milan path will be preferred (almost always, higher
bandwidth = lower path metric). This behavior will cause suboptimal
routing for traffic destined to the London data center network. This
suboptimal routing, in turn, can lead to an undesirable experience, because
rather than having 5 Mbps between the Sydney branch and the London data
center, their maximum bandwidth will be limited to the data center
interconnect link capacity, which is 2 Mbps in this scenario. This is in
addition to the extra cost and delay that will be from the traffic having to
traverse multiple international links.
Figure 8-37 Summary Route and Suboptimal Routing
Even so, this design limitation can be resolved via different techniques
based on the use of the routing protocol, as summarized in Table 8-4.
Note
With IS-IS, the L1-L2 router (analogous to an ABR) may send the default route toward the L1 domain, and route leaking at the London L1-L2 router will leak/send the more specific local prefix for optimal routing.
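A hedged IOS-style sketch of this L2-into-L1 route leaking on the London L1-L2 router (the NET, ACL number, and prefix are illustrative):

```
! Hypothetical L1-L2 router configuration; values are examples only.
router isis
 net 49.0001.0000.0000.0001.00
 metric-style wide
 ! Leak specific L2 prefixes into the L1 domain so L1 routers can pick
 ! the optimal exit instead of following the nearest default route.
 redistribute isis ip level-2 into level-1 distribute-list 100
!
access-list 100 permit ip 10.20.0.0 0.0.255.255 any
```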
OSPF
If multiple routes cover the same network with different types of routes,
such as inter-area (LSA type 3) or external (LSA type 5), OSPF considers
the following list “in order” to select the preferred path (from highest
preference to the lowest):
1. Intra-area routes
2. Inter-area routes
3. External type 1 routes
4. External type 2 routes
Let’s take a scenario where there are multiple routes covering the same
network with the same route type as well; for instance, both are inter-area
routes (LSA type 3). In this case, the OSPF metric (cost) that is driven by
the links’ bandwidth is used as a tiebreaker to select the preferred path.
Typically, the route with the lowest cost is chosen as the preferred path.
If multiple paths cover the same network with the same route type and cost,
OSPF will typically select all the available paths to be installed in the
routing table. Here, OSPF performs what is known as equal-cost multipath
(ECMP) routing across multiple paths.
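The number of equal-cost paths OSPF actually installs is platform dependent and tunable; a minimal sketch (the process ID is hypothetical):

```
! Allow up to four equal-cost OSPF paths in the routing table (the exact
! default and maximum vary by platform and software release).
router ospf 1
 maximum-paths 4
```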
An OSPF router that injects external LSAs into the OSPF database is called
an autonomous system boundary router (ASBR). For external routes
with multiple ASBRs, OSPF relies on LSA type 4 to describe the path’s
cost to each ASBR that advertises the external routes. For instance, in the
case of multiple ASBRs advertising the same external OSPF E2 prefixes
carrying the same redistributed metric value, the ASBR with the lowest
reported forwarding metric (cost) will win as the preferred exit point.
IS-IS
Typically, with IS-IS, if multiple routes cover the same network (same exact
subnet) with different route types, IS-IS follows the sequence here “in
order” to select the preferred path:
1. Level 1
2. Level 2
3. Level 2 external with internal metric type
4. Level 1 external with external metric type
5. Level 2 external with external metric type
Like OSPF, if there are multiple paths to a network with the same exact
subnet, route type, and cost, IS-IS selects all the available paths to be
installed in the routing table (ECMP).
EIGRP
EIGRP has a set of variables that can individually or collectively influence which path EIGRP selects. For stability and simplicity, bandwidth and delay are commonly used for this purpose. Nonetheless, it is simpler and safer to alter delay for EIGRP path selection; as discussed earlier in this chapter, tuning bandwidth for EIGRP traffic engineering purposes has implications that require careful planning.
Like other IGPs, EIGRP supports the concept of ECMP; in addition, it does
support “unequal cost load balancing,” as well, with proportional load
sharing.
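As a sketch of both techniques (the interface, AS number, and values are hypothetical), delay can be raised on the less-preferred link, and variance enables unequal-cost load sharing:

```
! Make this link less preferred by increasing its delay.
interface GigabitEthernet0/1
 delay 2000            ! in tens of microseconds (i.e., 20,000 microseconds)
!
router eigrp 100
 ! Install feasible-successor paths whose metric is within 2x the best
 ! metric, with traffic shared proportionally to the metrics.
 variance 2
```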
Note
In Table 8-5, link-state ABR refers to either OSPF ABR, ASBR, or
IS-IS L1-L2 router.
Note
As you’ll notice, full mesh offers the poorest scalability in the preceding table, regardless of the IGP. This is because the nature of a full-mesh topology is not very scalable. (The larger the mesh becomes, the more complicated the control plane will be.)
Note
None of the preceding points can be considered as an absolute use
case for route redistribution because the use of route redistribution
has no fixed rule or standard design. Therefore, network designers
need to rely on experience when evaluating whether route
redistribution needs to be used to meet the desired goal or whether
other routing design mechanisms can be used instead, such as static
routes.
Routing loop
Suboptimal routing
Slower network convergence time
Metric transformation
Administrative distance
Metric Transformation
Typically, each routing protocol has its own characteristics and algorithm to calculate network paths and determine the best path to use, based on certain variables known as metrics. Because of the different metrics (measures)
used by each protocol, the exchange of routing information between
different routing protocols will lead to metric conversion so that the
receiving routing protocol can understand this route, as well as be able to
propagate this route throughout its routed domain. Therefore, specifying the
metric at the redistribution point is important, so that the injected route can
be understood and considered.
For instance, a common simple example is a redistribution from RIP into
OSPF. RIP relies on hop counts to determine the best path, whereas OSPF
considers link cost that is driven by the link bandwidth. Therefore,
redistributing RIP into OSPF with a metric of 5 (five RIP hops) has no
meaning to OSPF. Hence, OSPF assigns a default metric value to the
redistributed external route. Furthermore, metric transformation can lead to
routing loops if not planned and designed correctly when there are multiple
redistribution points. For example, Figure 8-41 illustrates a scenario of
mutual redistribution between RIP and OSPF over two border routers.
Router A receives the RIP route from the RIP domain with a metric of 5,
which means five hops. Router B will redistribute this route into the OSPF
domain with the default redistribution metrics or any manually assigned
metric. The issue in this scenario is that when the same route is
redistributed back into the RIP domain with a lower metric (for example,
2), router A will see the same route with a better metric from the second
border router. As a result, a routing loop will be formed based on this
design (because of metric transformation).
Figure 8-41 Multipoint Routing Redistribution
Hypothetically, this metric issue can be fixed by redistributing the same
route back into the RIP domain with a higher metric value (for example, 7).
However, this will not guarantee the prevention of routing loops because
there is another influencing factor in this scenario, which is the
administrative distance (discussed next in more detail). Therefore, by using
route filtering or a combination of route filtering and tagging to prevent the
route from being reinjected into the same domain, network designers can
avoid route looping issues in this type of scenario.
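A hedged IOS-style sketch of the tag-and-filter approach for the RIP/OSPF scenario (tag values, process IDs, and metrics are illustrative):

```
! Applied on both border routers; all values are examples.
route-map RIP-TO-OSPF deny 10
 match tag 100                  ! block routes that originated in OSPF
route-map RIP-TO-OSPF permit 20
 set tag 200                    ! mark routes injected from RIP
!
route-map OSPF-TO-RIP deny 10
 match tag 200                  ! block routes that originated in RIP
route-map OSPF-TO-RIP permit 20
 set tag 100                    ! mark routes injected from OSPF
!
router ospf 1
 redistribute rip subnets metric 50 route-map RIP-TO-OSPF
router rip
 version 2                      ! RIPv2 is required to carry route tags
 redistribute ospf 1 metric 7 route-map OSPF-TO-RIP
```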
Administrative Distance
Some routing protocols assign a different administrative distance (AD) value to the redistributed route by default (typically higher than that of the locally learned route) to give the internal route preference over the external (redistributed) route. However, this value can be changed, which enables network designers and engineers to alter the default behavior with regard to route and path selection.
From the route redistribution design point of view, AD can be a concern
that requires special design considerations, especially when there are
multiple points of redistribution with mutual route redistribution.
To resolve this issue, either route filtering or route tagging jointly with route
filtering can be used to avoid reinjecting the redistributed (external) route
back into the same originating routing domain. You can tune AD values to
control the preferred route. However, this solution does not always provide
the optimal path when there are multiple redistribution border routers
performing mutual redistribution. If for any reason AD tuning is used, the
network designer must be careful when considering this option, to ensure
that routing protocols prefer internally learned prefixes over external ones
(to avoid unexpected loops or suboptimal routing behavior).
Route filtering and route tagging combined with route filtering are common
and powerful routing policy mechanisms that you can use in many routing
scenarios to control route propagation and advertisement and to prevent
routing loops in situations where multiple redistribution boundary points exist, with mutual route redistribution between routing domains. However,
these mechanisms have some differences that network designers must be
aware of, as summarized in Table 8-6.
Note
On some platforms, route tagging requires the IS-IS “wide metric” feature to be enabled in order for route tagging to work properly; migrating the IS-IS routed domain from narrow metrics to wide metrics must be considered in this case.
Note
If asymmetrical routing negatively impacts communication between the EIGRP and IS-IS domains in the previous scenario, it can be avoided by tuning EIGRP metrics, such as delay, when the IS-IS routes are redistributed into EIGRP. This controls path selection from the EIGRP domain’s point of view and aligns it with the path selected by IS-IS (aligning both ingress and egress traffic flows).
BGP Routing
Border Gateway Protocol (BGP) is an Internet Engineering Task Force
(IETF) protocol and the most scalable of all routing protocols. As such,
BGP is considered the routing protocol of the global Internet, as well as for
service provider–grade networks. In addition, BGP is the desirable routing
protocol of today’s large-scale enterprise networks because of its flexible
and powerful attributes and capabilities. Unlike IGPs, BGP is used mainly
to exchange network layer reachability information (NLRI) between routing
domains. (The routing domain in BGP terms is referred to as an
autonomous system [AS]; typically, it is a logical entity with its own
routing and policies and is usually under the same administrative control.)
Therefore, BGP is almost always the preferred inter-AS routing protocol. A
typical example is the global Internet, which is formed by numerous
interconnected BGP autonomous systems.
There are two primary forms of BGP peering:
Interdomain Routing
eBGP is mainly used to determine paths and route traffic between different autonomous systems; this function is known as interdomain routing. Unlike an IGP (where routing is usually performed based on
protocol metrics to determine the desired path within an AS), eBGP relies
more on policies to route or interconnect two or more autonomous systems.
The powerful policies of eBGP allow it to ignore several attributes of
routing information that typically an IGP takes into consideration.
Therefore, an eBGP can offer simpler and more flexible solutions to
interconnect various autonomous systems based on predefined routing
policies.
Table 8-7 summarizes common AS terminology with regard to the
interdomain routing concept and as illustrated in Figure 8-46.
Stub multihomed AS: An AS that has connections to more than one AS, and typically should not offer a transit path.
As a path-vector routing protocol, BGP has the most flexible and reliable
attributes to match the various requirements of interdomain routing and
control. Accordingly, BGP is considered the de facto routing protocol for
the global Internet and large-scale networks, which require complex and
interdomain routing control capabilities and policies.
The following list highlights the typical BGP route selection (from the
highest to the lowest preference):
Note
For more information about BGP path selection, refer to the
document “BGP Best Path Selection Algorithm,” at
https://www.cisco.com/c/en/us/support/docs/ip/border-gateway-
protocol-bgp/13753-25.html.
Design model 1: This design model (see Figure 8-47) has the
following characteristics:
iBGP is used across the core only.
Regional networks use IGP only.
Border routers between each regional network and the core run
IGP and iBGP.
IGP in the core is mainly used to provide next-hop (NHP)
reachability for iBGP speakers.
Figure 8-47 BGP Core Design Module 1
Design model 2: This design model (see Figure 8-48) has the
following characteristics:
BGP is used across the core and regional networks.
Each regional network has its own BGP AS number (ASN).
Reachability information is exchanged between each regional
network and the core over eBGP (no direct BGP session between
regional networks).
IGP in the core as well as at the regional networks is mainly used
to provide NHP reachability for iBGP speakers in each domain.
Figure 8-48 BGP Core Design Module 2
Design model 3: This design model (see Figure 8-49) has the
following characteristics:
MP-BGP is used across the core (MPLS L3VPN design model).
MPLS is enabled across the core.
Regional networks can run static routing, an IGP, or BGP.
IGP in the core is mainly used to provide NHP reachability for
MP-BGP speakers.
Figure 8-49 BGP Core Design Module 3
Design model 4: This design model (see Figure 8-50) has the
following characteristics:
BGP is used across the regional networks.
In this design model, each regional network has its own BGP
ASN.
Reachability information is exchanged between the regional
networks directly over direct eBGP sessions.
IGP can be used at the regional networks to provide local
reachability within each region and may be required to provide
NHP reachability for BGP speakers in each domain (BGP AS).
Figure 8-50 BGP Core Design Module 4
These designs are all valid and proven design models; however, each has its
own strengths and weaknesses in certain areas, as summarized in Table 8-9.
During the planning phase of network design or design optimization,
network designers or architects must select the most suitable design model
as driven by other design requirements, such as business and application
requirements (which ideally must align with the current business needs and
provide support for business directions such as business expansion).
Note
IGP or control plane complexity referred to in Table 8-9 is in
comparison to the end-to-end IGP-based design model, specifically
across the core.
It is obvious that AIGP can be a powerful feature to optimize the BGP path
selection process across a transit AS. However, network designers must be
careful when enabling this feature because when AIGP is enabled, any
alteration to the IGP routing can lead to a direct impact on BGP routing
(optimal path versus routing stability).
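On platforms that support it, AIGP is typically enabled per neighbor; a hedged sketch (addresses and AS numbers are hypothetical, and the exact command should be verified for your platform):

```
! Hypothetical iBGP neighbor with the AIGP attribute enabled.
router bgp 100
 neighbor 10.0.0.2 remote-as 100
 address-family ipv4
  neighbor 10.0.0.2 activate
  neighbor 10.0.0.2 aigp       ! send/accept the AIGP attribute on this session
```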
Note
For simplicity, this scenario assumes that both campus cores (RR)
advertise the next-hop IPs of the Internet edge routers to all the
campus blocks.
Note
One of the common limitations of the route reflection concept in
large BGP environments is the possibility of suboptimal routing. This
point is covered in more detail later in this book.
Update Grouping
Update grouping helps to optimize BGP processing overhead by providing
a mechanism that groups BGP peers that have the same outbound policy in
one update group, and updates are then generated once per group. By
integrating this function with BGP route reflection, each RR update
message can be generated once per update group and then replicated for all
the RR clients that are part of the relevant group, as depicted in Figure 8-58.
BGP Confederation
The other option to solve iBGP scalability limitations in large-scale
networks is through the use of confederations. The concept of a BGP
confederation is based on splitting a large iBGP domain into multiple
(smaller) BGP domains (also known as sub-autonomous systems). The
BGP communication between these sub-autonomous systems is formed
over eBGP sessions (a special type of eBGP session referred to as an intra-
confederation eBGP session). Consequently, the BGP network can scale
and support a larger number of BGP peers because there is no need to
maintain a full mesh among the sub-autonomous systems; however, within
each sub-AS iBGP, full mesh is required, as illustrated in Figure 8-59.
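A minimal sketch of a confederation member router (all AS numbers and addresses are hypothetical):

```
! Router in sub-AS 65001 of a confederation whose external ASN is 100.
router bgp 65001
 bgp confederation identifier 100
 bgp confederation peers 65002 65003
 neighbor 10.1.1.2 remote-as 65001     ! normal iBGP within the sub-AS
 neighbor 10.2.2.2 remote-as 65002     ! intra-confederation eBGP session
 neighbor 192.0.2.1 remote-as 200      ! true eBGP peer; it sees only AS 100
```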
Note
The intra-confederation eBGP session has a mixture of both iBGP
and eBGP characteristics. For example, NEXT_HOP, MED, and
LOCAL_PREFERENCE attributes are kept between sub-autonomous
systems. However, the AS_PATH is changed with updates across the
sub-autonomous systems.
Figure 8-59 BGP Confederation
Note
A confederation appears as a single AS to external BGP autonomous systems. Because the sub-AS topology is invisible to external peering BGP autonomous systems, the sub-AS numbers are also removed from eBGP updates sent to any external eBGP peer.
Note
To avoid BGP route oscillation, which is associated with RRs or
confederations in some scenarios, network designers must consider
deploying higher IGP metrics between sub-autonomous systems or
RR clusters than those within the sub-AS or cluster.
Figure 8-60 BGP Confederation and RR
Note
Although BGP route reflection combined with confederation can
maximize the overall BGP flexibility and scalability, it may add
complexity to the design if the combination of both is not required.
For instance, when merging two networks with a large number of
iBGP peers in each domain, confederation with RR might be a
feasible joint approach to optimize and migrate these two networks if
it does not compromise any other requirements. However, with a
large network with a large number of iBGP peers in one AS that
cannot afford major outages and configuration changes within the
network, it is more desirable to optimize using RR only rather than
combined with confederation.
Integration with MPLS-TE: Simple | Simple within the same sub-AS, complex between sub-autonomous systems | Simple within the same sub-AS, complex between sub-autonomous systems
Although these questions are not the only ones, they cover the most
important functional requirements that can be delivered by a routing
protocol. Furthermore, there are some factors that you need to consider
when selecting an IGP:
Size of the network (for example, the number of L3 hops and expected
future growth)
Security requirements and the supported authentication type
IT staff knowledge and experience
The protocol’s flexibility in a modular network, such as support for flexible route summarization techniques
Summary
For network designers and architects to provide a valid and feasible
network design (including both Layer 2 and Layer 3), they must understand
the characteristics of the nominated or used control protocols and how each
behaves over the targeted physical network topology. This understanding
will enable them to align the chosen protocol behavior with the business,
functional, and application requirements, to achieve a successful business-
driven network design. Also, considering any Layer 2 or Layer 3 design
optimization technique, such as route summarization, may introduce new
design concerns (during normal or failure scenarios), such as suboptimal
routing. Therefore, the impact of any design optimization must be taken
into consideration and analyzed, to ensure the selected optimization
technique will not introduce new issues or complexities to the network that
could impact its primary business functions. Ideally, the requirements of the
business-critical applications and business priorities should drive design
decisions.
Reference
Al-shawi, Marwan, CCDE Study Guide (Cisco Press, 2015)
Network Virtualization
This chapter covers the following topics:
MPLS: This section covers critical MPLS topics and network design
elements for MPLS.
Software-Defined Networks: This section covers SD-WAN and SD-
LAN in a vendor-agnostic perspective to provide the corresponding
network design elements associated with the inherent capabilities
these solutions provide.
MPLS 1–8
Software-Defined Networks 9, 10
Caution
The goal of self-assessment is to gauge your mastery of the topics in
this chapter. If you do not know the answer to a question or are only
partially sure of the answer, you should mark that question as wrong
for purposes of the self-assessment. Giving yourself credit for an
answer you correctly guess skews your self-assessment results and
might provide you with a false sense of security.
MPLS
Over the years, Multiprotocol Label Switching (MPLS) has become more popular within the large enterprise space. The benefits and overall design goals are the same as in service provider networks and are as follows.
In addition, the MPLS peer model has proven its flexibility and reliability in
fulfilling these goals for many large enterprise customers by offering
A single infrastructure that can serve all VPN customers (as shown in Figure 9-1).
The optimization of OPEX. For instance, adding a new customer or a
new site for an existing customer will require simple changes to the
relevant edge nodes (provider edge [PE] nodes) only as the core
control plane intelligence is pushed to the provider cloud, as shown in
Figure 9-1.
The opening of new revenue-generation sources to the business by
offering differentiated services for its customers, such as prioritization
and expedited forwarding for voice.
The optimization of time to market to introduce new services to the
organization’s L3VPN customers, such as IPv6 and multicast support.
A high degree of flexibility by offering various media access methods
for the organization’s customers, such as legacy equipment, Ethernet
over copper or fiber, and Long Term Evolution (LTE) or 5G.
In the typical MPLS architecture, the provider edge nodes (PEs) carry
customer routing information to inject customer routes from the directly
connected customer edge nodes (CEs), each to the relevant Multiprotocol
Border Gateway Protocol (MP-BGP) VPNv4/v6, along with the relevant
VPN and transport MPLS labels (label edge router [LER]). This achieves
the optimal routing of traffic that pertains to each customer within each
VPN routing domain. However, provider routers (Ps) at the core of the
network are mainly responsible for switching MPLS labeled packets.
Therefore, they are also known as label switching routers (LSRs). Figure
9-2 illustrates the primary components of an MPLS architecture:
Route Distinguisher
For an MPLS L3VPN to support having multiple customer VPNs with
overlapping addresses and to maintain the control plane separation, the PE
router must be capable of using processes that enable overlapping address
spaces of multiple customers’ VPNs. In addition, the PE router must also
learn these routes from directly connected customer networks and propagate
this information using the shared backbone. This is accomplished by using
a route distinguisher (RD) per VPN or per VRF instance. As a result, the
MPLS core can seamlessly transport customers’ routes (overlapped and
nonoverlapped) over one common infrastructure and control plane protocol
to take advantage of the RD prepended per MP-BGP VPNv4/v6 prefix.
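The effect of the RD on overlapping customer address spaces can be illustrated with a minimal, vendor-neutral Python sketch (the RD values and prefixes below are hypothetical, chosen only for illustration):

```python
# Sketch: how a route distinguisher (RD) turns overlapping customer
# prefixes into unique VPNv4 routes in the shared MP-BGP table.
# RD values and prefixes are illustrative only.

def to_vpnv4(rd: str, prefix: str) -> str:
    """Prepend the RD to an IPv4 prefix to form a VPNv4 route."""
    return f"{rd}:{prefix}"

# Two customers advertise the same 10.1.0.0/24 from their sites.
cust_a = to_vpnv4("65000:1", "10.1.0.0/24")
cust_b = to_vpnv4("65000:2", "10.1.0.0/24")

# The RD keeps the two routes distinct in the common control plane.
assert cust_a != cust_b
print(cust_a)  # 65000:1:10.1.0.0/24
print(cust_b)  # 65000:2:10.1.0.0/24
```

Although the sketch concatenates strings, the idea matches the text: the same IPv4 prefix from two customers becomes two distinct VPNv4 entries once each carries its own RD.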
Normally, the RD value can be allocated using different approaches. Each
approach has its strengths and weaknesses, as covered in Table 9-2.
*Load balancing or load sharing for multihomed sites using unique RD per
VPN per PE is covered in more detail later in this chapter.
**In large-scale networks with a large number of PEs and VPNs, the unique
RD per VPN allocation model should be used. The unique RD per VPN per
PE allocation model should be used only for multihomed sites where the
customer needs to load balance/share traffic toward these sites.
***BGP site of origin (SoO) can be used as an alternative to serve the same
purpose without the need of a unique RD per interface/VRF.
Note
Based on the RD allocation models covered in Table 9-2, a single
VPN may include multiple RDs across different VRFs. However, the
attributes of the VPN (per customer) do not change, and it is still
considered a single intra-VPN, because route propagation is technically
controlled by the import/export of the RT values.
Route Targets
Route targets (RTs) are an additional identifier and considered part of the
primary control plane elements of a typical MPLS L3VPN architecture
because they facilitate the identification of which VRF instance can install
which VPN routes. In fact, RTs represent the policies that govern the
connectivity between customer sites. This is achieved via controlling the
import and export RTs. Technically, in an MPLS VPN environment, the
export RT identifies a VPN's membership with regard to the existing
VRFs on other PEs, whereas the import RT is associated with each PE's
local VRF. The import RT recognizes and maps the VPN routes (received from
remote PEs or leaked on the local PE from other VRF instances) to be
imported into the relevant VRF instance of any given customer. In other
words, RTs can offer network designers a powerful capability to control
what MP-BGP VPN route is to be installed in any given VRF/customer
routing instance. In addition, they provide flexibility to create various
logical L3VPN (WAN) topologies for the enterprise customer, such as any
to any, hub and spoke, and partially meshed, to meet different connectivity
requirements.
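The way import/export RTs shape a logical topology such as hub-and-spoke can be sketched with a short, vendor-neutral Python example (the RT values and VRF roles are hypothetical, invented for illustration):

```python
# Sketch: import/export route targets (RTs) controlling which VPN routes
# a VRF installs. A route is installed when any RT attached to it (by the
# exporting VRF) matches one of the receiving VRF's import RTs.
# RT values are illustrative only.

def installable(route_rts: set, vrf_import_rts: set) -> bool:
    """True if the route carries at least one RT the VRF imports."""
    return bool(route_rts & vrf_import_rts)

# Hub-and-spoke policy: the hub exports 65000:100 and imports 65000:200;
# each spoke exports 65000:200 and imports only 65000:100.
hub_vrf   = {"import": {"65000:200"}, "export": {"65000:100"}}
spoke_vrf = {"import": {"65000:100"}, "export": {"65000:200"}}

# Spoke routes are installed at the hub...
assert installable(spoke_vrf["export"], hub_vrf["import"])
# ...hub routes are installed at the spokes...
assert installable(hub_vrf["export"], spoke_vrf["import"])
# ...but spoke routes are NOT installed at other spokes,
# so spoke-to-spoke traffic must traverse the hub.
assert not installable(spoke_vrf["export"], spoke_vrf["import"])
```

The asymmetric import/export pairing is exactly what lets the same mechanism produce any-to-any (symmetric RTs) or hub-and-spoke (asymmetric RTs) topologies.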
Per prefix: In this model, a VPN label is assigned for each VPN
prefix. Although this model can generate a large number of labels, it is
required in scenarios where the VPN packets sent between the PE and
CE are label switched, such as in Carrier Supporting Carrier (CsC)
designs.
Per VRF: In this model, a single label is allocated to all local VPN
routes of any given PE in a given VRF. This model offers an efficient
label space and BGP advertisements. In addition, some vendor
platforms support the same per-VRF label for both IPv4 and IPv6
prefixes.
Per CE: The PE router allocates one label for every immediate next
hop; in most cases, this would be a CE router. This label is directly
mapped to the next hop, so there is no VRF route lookup performed
during data forwarding. However, the number of labels allocated is
one for each CE rather than one for each VRF. Because BGP knows
all the next hops, it assigns a label for each next hop (not for each PE-
CE interface). When the outgoing interface is a multiaccess interface
and the media access control (MAC) address of the neighbor is not
known, Address Resolution Protocol (ARP) is triggered during packet
forwarding.
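The label-space trade-off among the three allocation models can be made concrete with a rough Python sketch (the PE's prefix, VRF, and CE counts below are invented for illustration):

```python
# Sketch: relative label consumption of the three VPN label allocation
# models on a single PE. Counts are illustrative only.

def labels_needed(model: str, num_prefixes: int, num_vrfs: int, num_ce: int) -> int:
    if model == "per-prefix":
        return num_prefixes   # one VPN label per VPN prefix
    if model == "per-vrf":
        return num_vrfs       # one aggregate label per VRF
    if model == "per-ce":
        return num_ce         # one label per immediate next hop (CE)
    raise ValueError(f"unknown model: {model}")

# A hypothetical PE with 10 VRFs, 25 CE next hops, and 5,000 VPN prefixes:
print(labels_needed("per-prefix", 5000, 10, 25))  # 5000
print(labels_needed("per-vrf",    5000, 10, 25))  # 10
print(labels_needed("per-ce",     5000, 10, 25))  # 25
```

The numbers show why per-VRF allocation is the most label-efficient choice, while per-prefix remains necessary where PE-CE packets are themselves label switched (as in CsC).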
Network designers must be careful if they plan to change the default label
allocation behavior, because any inconsistency or simple error can lead to a
broken forwarding plane that can easily bring down the entire network or a
portion of the network. In a service provider network, a PE that goes down
this way may result in several customer sites (usually single-homed ones)
being out of service, which can impact the business significantly, especially
if there is a strict service-level agreement (SLA) with its customers.
Figure 9-6 shows a summary of end-to-end forwarding and control planes
of an MPLS L3VPN architecture.
Note
The BGP multipathing feature must be enabled within the relevant
BGP VRF address family at the remote PE routers (for example, PE-
1 in the preceding example). Similarly, enabling BGP multipathing is
required in a single CE dual-attached use case to enable the load
balancing/sharing from the CE end as well when BGP is used
between the CE and PE.
Full Mesh
The full-mesh topology shown in Figure 9-8 is the simplest and most
common topology that represents the typical MPLS L3VPN layout. Simply,
the any-to-any communications model between different customer sites that
normally belong to the same customer (under a single VPN or multiple
VPNs) must carry the same RT values of the import and export among them
(among the relevant PEs).
This design model can logically be shown as one large router with all other
locations connected directly to it, as shown in Figure 9-9, where the big
router in the middle is the MPLS L3VPN cloud and all other sites are
directly attached to it in a star topology.
Figure 9-8 MPLS L3VPN Full-Mesh Topology
If BGP is used as the PE-CE routing protocol across the hub-and-
spoke topology over L3VPN and each site uses the same BGP
autonomous system number (ASN), BGP AS override should be used
by the PE connected to the hub-and-spoke sites. This avoids blocking
communication among the sites as a result of BGP loop-prevention
behavior. Although BGP allows the allowas-in feature to be used for the
same purpose from the CE side, it must be planned carefully to avoid
any unexpected BGP AS_PATH looping.
If more than one spoke is connected to the same PE, a VRF per spoke is
required to avoid traffic bypassing the hub site.
If the hub site has two edge CE routers connected to the MPLS
L3VPN cloud, each CE must (ideally) be assigned the role of handling
routing/traffic in one direction; one hub CE is connected to the
receiving link, and the other hub CE is connected to the sending link.
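The AS override behavior described in the first consideration above can be sketched in a short, vendor-neutral Python snippet (the ASN values are hypothetical, chosen only to illustrate the mechanism):

```python
# Sketch: why BGP AS override is needed when every customer site uses
# the same ASN. Standard eBGP loop prevention makes a CE reject any
# route whose AS_PATH already contains its own ASN; AS override rewrites
# the customer ASN with the provider ASN before advertisement.
# ASN values are illustrative only.

PROVIDER_ASN = 64512
CUSTOMER_ASN = 65001

def as_override(as_path: list) -> list:
    """Replace the customer ASN with the provider ASN in the AS_PATH."""
    return [PROVIDER_ASN if asn == CUSTOMER_ASN else asn for asn in as_path]

def ce_accepts(as_path: list, local_asn: int) -> bool:
    """eBGP loop prevention: reject if our own ASN appears in the path."""
    return local_asn not in as_path

# A route from a spoke site (AS 65001) arriving via the provider (AS 64512):
path = [64512, 65001]
assert not ce_accepts(path, CUSTOMER_ASN)           # dropped without AS override
assert ce_accepts(as_override(path), CUSTOMER_ASN)  # accepted with AS override
```

Note that the rewrite also hides the originating site's ASN from the AS_PATH, which is one reason careful planning is needed to avoid unexpected loops.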
The following are the most common scenarios used with this design model:
Note
Depending on the network environment, the number of prefixes to be
exported and imported can be limited in a controlled manner. For
example, in the case of the managed services, only the loopback IP of
each CE is exported from the customer VPN to the
NOC/management VPN for monitoring and remote-access purposes.
At the same time, controlling the number of exported and imported
prefixes prevents the leaking of extra prefixes, which can lead to
other issues such as exposing customer internal routes and
unnecessary extra overhead on the PE nodes.
Note
The numbering of the following design options continues from the two
design options previously discussed, to facilitate referencing each model
in Table 9-3.
* If the full Internet routing table is installed per VRF, there can be
scalability limitations at the PE level.
**There might be limitations at the ASBR level (VRF routes or NATing
entries/sessions).
The comparison in Table 9-3 might make it seem like one option is better
than others for certain design requirements, but usually the network
environment and the situation drive the design choices (for instance, if an
Internet service provider [ISP] wants to start providing MPLS VPN to its
customers but cannot afford any service interruption to its existing Internet
customers). In this situation, keeping the Internet routing at the BGP global
routing table can be a viable solution. Similarly, if an MPLS L3VPN
provider wants to offer the Internet as a value-added service for its
customers, adding the Internet as a new VPN (extranet VPN connectivity
model) may be less interruptive. Therefore, always consider the other
factors such as business priorities, design constraints, and the targeted
environment, in addition to the technical aspects, before making any design
decision.
Note
This section covers the primary and most commonly used PE-CE
routing protocols (static, OSPF, EIGRP, and BGP).
Note
If you are unfamiliar with the OSPF terms used in this section, such
as OSPF DN bit, OSPF domain identifier, and OSPF sham link, it is
recommended that you refer to IETF RFC 4577 to build foundational
knowledge about these terms before reading this section.
Note
In some scenarios, such as a multi-VRF CE and a hub-and-spoke
topology over MPLS L3VPN, covered earlier in this chapter (see
Figure 9-12), when OSPF is used as the PE-CE routing protocol with
the hub-and-spoke over MPLS L3VPN topology, there will usually
be multiple PE nodes communicating with a central/hub PE router
that connects to the hub CE router over dual interfaces/subinterfaces
(each in a separate VRF). Technically in this scenario, when two
remote sites (spokes) need to communicate, the OSPF link-state
advertisements (LSAs) from each remote site will reach the
central/hub PE, then the hub CE router, where the traffic flow loops
and comes back into a different VRF (typical hub-and-spoke model).
The issue here is that when these LSAs are type 3, 5, or 7, the LSAs
will not be considered by the central/hub PE because they have the
DN bit set. Therefore, the “DN bit ignore” feature is required in this
scenario to “disable DN bit checking” at the hub PE node, in order to
meet traffic flow and routing information distribution requirements
(in which the route/LSA must be considered when it loops and comes
back into a different VRF on the hub/central PE in order to reach
other spokes). This OSPF feature is also known as capability VRF-
lite. Ideally, before considering this feature, a careful analysis is
required to avoid introducing any potential routing information loop.
However, these attributes (DN bit or route tag) will be stripped from the
prefixes if the route is redistributed into another routing domain (such as
EIGRP) and then redistributed back into OSPF and then to MPLS L3VPN
from another PE. As discussed earlier in this book, route redistribution can
cause metric transformation. This is a good example of how multiple
redistributions may lead to a routing loop, as shown in Figure 9-24.
Figure 9-24 PE-CE Connectivity Model OSPF with Multiple
Redistributions
Consequently, the network designer must be careful when there is a
possibility of multiple redistribution points across multiple routing domains,
because this scenario can break down the communication between OSPF
islands across the MPLS L3VPN backbone.
In contrast, in Scenario 2 in Figure 9-25, all the sites are deployed in OSPF
area 0. Although at a high level this design might look simpler, the most
significant issue here is that all the routes between the data center and HQ
site will be seen as OSPF intra-area routes. In other words, no matter what
the WAN link cost metric is, the backdoor will always be the preferred path
between the data center and the HQ (because the route from the MPLS
L3VPN will be seen as either an inter-area route or external route). This
might not always be a desirable design. To resolve this issue, based on the
OSPF area design, network designers must make sure that the route coming
from the MPLS L3VPN backbone is received as an intra-area route as well.
To achieve this, the service provider must coordinate and set up the OSPF
sham link between the relevant PEs (in this scenario, PE-2 and PE-3) to
create a logical intra-area link between the ingress and egress PEs (area 0 in
this example).
Note
For the enterprise (CE side) to avoid the reliance on the SP side to set
up a sham link, the OSPF areas design can be migrated to use a
unique OSPF area per site (for the sites connected with a backdoor
link), if this option is available.
Therefore, the protocol selection has to be aligned with all the different
design requirements (business, functional, and application) to achieve a
successful design.
Note
Intermediate System-to-Intermediate System (IS-IS) acts similarly to
OSPF when there is a backdoor link, as the prefixes redistributed
across the MPLS cloud (from MP-BGP into IS-IS) will be seen as
external routes by the receiving CE, while the same route over the
backdoor link will be received as an internal route. This usually leads
to always preferring the backdoor path. To overcome this issue, BGP
supports carrying some of the critical IS-IS information as part of
BGP extended communities, which can be converted back into an IS-
IS link-state packet (LSP). For example, if the original route was
received as level 1, it will be reconverted into an IS-IS level 1 route at
the other end (PE). (For more details, refer to this IETF draft: draft-
sheng-isis-bgp-mpls-vpn.)
Does not contain an SoO value: The route is accepted into the EIGRP
topology table, and the SoO value from the interface that is used to
reach the next-hop CE router is appended to the route before it is
redistributed into BGP.
In Figure 9-27, SoO helps to mitigate EIGRP looping (a race condition)
by preventing the route from being reinjected into the network based on the
attached SoO value to the route and the deployed SoO value on the
interface. For instance, traffic sourced from the HQ LAN (CE-4) passing
through PE-3 will have an SoO value of 1:4 assigned to it. Then, any
interface in the scenario shown in Figure 9-27 that has an SoO value of 1:4
will not pass this route information through.
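The SoO filtering check described above reduces to a simple comparison, sketched here in vendor-neutral Python (the SoO values mirror the 1:4 example from the text; the function name is hypothetical):

```python
# Sketch: EIGRP site-of-origin (SoO) loop mitigation. An advertisement is
# blocked on an interface whose configured SoO value matches the SoO value
# attached to the route; otherwise it is permitted.

def soo_permits(route_soo, interface_soo) -> bool:
    """Block the advertisement only when both SoO values are set and equal."""
    if route_soo is None or interface_soo is None:
        return True
    return route_soo != interface_soo

# HQ LAN route tagged with SoO 1:4 at PE-3:
assert not soo_permits("1:4", "1:4")  # blocked back toward its origin site
assert soo_permits("1:4", "1:3")      # allowed toward a different site
assert soo_permits(None, "1:4")       # untagged routes pass unhindered
```

This simplicity is also the source of the redundancy trade-off the text goes on to describe: the check has no awareness of link failures, so it keeps blocking even when the filtered path is the only one left.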
Although in this type of scenario SoO can help to mitigate route looping
and racing issues to a certain extent, it might sometimes be necessary to
introduce other limitations, such as reduced redundancy. For example, if the
SoO values are applied on the backdoor link (as covered in Figure 9-27)
and the PE-3-CE-4 link goes down, any traffic with SoO value of 1:3 or 1:4
destined for the HQ (behind CE-4) will be isolated (because of the SoO
filtering at the backdoor link), even though the backdoor link is available.
Therefore, as a network designer, you must understand the design goals and
priorities and what the impact is of applying SoO on the different
interfaces/paths (for example, redundancy + suboptimal routing versus
stability + optimal routing). In other words, if the time required for the
EIGRP to stabilize following a failure event is acceptable, a simple SoO
design should be sufficient, like the one shown in Figure 9-28, in which
SoO stops the information feedback looping faster than relying on the hop
count for EIGRP to stabilize after CE-2 failure.
However, Scenario 2 shown in Figure 9-26 is designed with the same
EIGRP ASN on all sites, which is typical; in this case, all the routes
learned over the MPLS L3VPN and the backdoor links will be internal
EIGRP routes.
One of the common design concerns with this setup is when the backdoor
link is intended to be used only as a backup path (because with this design
there is a possibility that some remote sites will use the backdoor link to
reach either the DC LAN or the HQ LAN). For instance, in Figure 9-29, the
HQ LAN prefix is advertised in EIGRP to the MPLS VPN PE-3 and to CE-
2 and CE-3 over the backdoor link. Likewise, CE-2 and CE-3 advertise this
route to PE-1 and PE-2, respectively. Therefore, PE-1 in this case has two
BGP paths available for the HQ LAN: the iBGP path via PE-2 and PE-3,
and the locally redistributed BGP route from EIGRP advertisement via CE-
2 EIGRP.
Note
This cost community may transform BGP to act in a way it is not
designed to (like IGP), which may lead to undesirable behaviors in
some scenarios.
Consider, however, what happens when a new remote site is added (for
example, in a new country where the current service provider does not have
a presence and requires inter-AS communication to extend the MPLS-
L3VPN reachability). In this scenario, the BGP cost community will not be
a valid solution because it does not support propagation over external BGP
(eBGP) sessions, as shown in Figure 9-32.
Consequently, using EIGRP as a PE-CE routing protocol may add
simplicity to the enterprises that already use EIGRP as the enterprise
routing protocol. However, when there are multihomed sites to the MPLS
provider or sites with backdoor links, the design may prove too
complicated, and overall flexibility and stability may be reduced. EIGRP
Over the Top (OTP), however, can offer a more flexible PE-CE design and
is independent of the service provider routing control.
Same ASN per site: With this model, the MPLS provider
allocates the same ASN to all the customer sites. One of the main
advantages of this model is a reduced chance of BGP ASN collisions.
Single ASN per site: With this model, the MPLS provider allocates
each of the customer sites a separate BGP ASN. This model offers
network designers and operators the ability to identify the source of
prefixes (from which site) in a simple way (based on BGP ASN in the
AS-PATH attribute of each prefix). However, it may introduce
scalability limitations with regard to the available ASNs.
Software-Defined Networks
With the advent of software-defined solutions, we now have more
capabilities that can be leveraged from a network designer's perspective.
This section examines why a network designer might leverage a
software-defined solution in an overarching network design to meet
multiple business requirements. Inherently, a software-defined solution is
more complex, and there are more pieces to the solution that network
designers, architects, and engineers need to understand. This complexity,
though, is offset by the additional capabilities a software-defined
solution provides, assuming it is designed, deployed, and functioning
properly. The following sections highlight SD-WAN and SD-LAN from a
vendor-agnostic perspective (leveraging vendor-specific examples as
needed to provide context) and the corresponding design decisions and
options around each solution.
SD-WAN
From a vendor-agnostic perspective, software-defined wide-area
networking (SD-WAN) is composed of separate orchestration, management,
control, and data planes. The orchestration plane assists in the automatic
onboarding of the edge (spoke) routers into the SD-WAN overlay. The
management plane is responsible for central configuration and monitoring.
The control plane builds and maintains the network topology and makes
decisions regarding where traffic flows. The data plane is responsible for
forwarding packets based on decisions from the control plane. Figure 9-35
shows the different SD-WAN planes and how they interact with one
another.
SD-WAN Components
The primary components of an SD-WAN solution are the network manager, the
controller, the orchestrator, and the edge router. The following list provides
an overview of each of these components and their functions:
Note
Depending on the specific vendor implementation of SD-WAN, these
components and capabilities can be integrated together into the same
system or they can be integrated into dedicated individual systems.
The key takeaway is that these capabilities are what make up an SD-
WAN solution, and as a network designer you will have to know
when to leverage a solution like SD-WAN to solve the underlying
business requirements.
Virtual Networks
In the SD-WAN overlay, virtual networks (VNs) provide segmentation,
much like Virtual Routing and Forwarding instances (VRFs). Each VN is
isolated from other VNs and each has its own forwarding table. An
interface or subinterface is explicitly configured under a single VN and
cannot be part of more than one VN. Labels are used in the management
protocol route attributes and in the packet encapsulation, which identifies
the VN a packet belongs to. The VN number is a 4-byte integer with a value
from 0 to 65530.
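The per-VN isolation described above can be sketched as a small, vendor-agnostic Python model (the class, interface names, and next-hop labels are hypothetical, not from any vendor API):

```python
# Sketch: per-VN isolation in an SD-WAN edge router. Each VN keeps its
# own forwarding table, and an interface or subinterface belongs to
# exactly one VN. All names and values are illustrative only.

class EdgeRouter:
    def __init__(self):
        self.vn_tables = {}  # VN number -> {prefix: next hop}
        self.if_vn = {}      # interface name -> VN number

    def assign_interface(self, ifname: str, vn: int):
        """Bind an interface to a single VN; rebinding is an error."""
        if ifname in self.if_vn:
            raise ValueError(f"{ifname} already bound to VN {self.if_vn[ifname]}")
        self.if_vn[ifname] = vn
        self.vn_tables.setdefault(vn, {})

    def lookup(self, ifname: str, prefix: str):
        """Look up a destination only in the VN of the ingress interface."""
        return self.vn_tables[self.if_vn[ifname]].get(prefix)

r = EdgeRouter()
r.assign_interface("ge0/0.10", 10)
r.assign_interface("ge0/0.20", 20)
r.vn_tables[10]["10.0.0.0/24"] = "hub-tloc"

assert r.lookup("ge0/0.10", "10.0.0.0/24") == "hub-tloc"
assert r.lookup("ge0/0.20", "10.0.0.0/24") is None  # isolated from VN 10
```

In a real implementation the VN label carried in the packet encapsulation plays the role of the table selector shown here, so isolation holds across the overlay, not just on the local box.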
TLOC Extension
A very common network setup in a site with two edge routers is for each
edge router to be connected to just one transport. There are links between
the edge routers, which allow each edge router to access the opposite
transport through a TLOC extension interface on the neighboring edge
router. TLOC extensions can be separate physical interfaces or
subinterfaces.
SD-WAN Policies
Policies are an important part of the SD-WAN architecture and are used to
influence the flow of data traffic among the edge routers in the overlay
network. Policies apply either to control plane or data plane traffic and are
configured centrally on the controllers or locally on the edge device routers.
Centralized control policies operate on the routing and TLOC information
and allow for customizing routing decisions and determining routing paths
through the overlay network. These policies can be used in configuring
traffic engineering, path affinity, service insertion, and different types of
VPN topologies (full-mesh, hub-and-spoke, regional mesh, etc.). Another
centralized control policy is application-aware routing, which selects the
optimal path based on real-time path performance characteristics for
different traffic types. Localized control policies enable routing policy at a
local site, specifically through OSPF or BGP.
Data policies influence the flow of data traffic through the network based
on fields in the IP packet headers and VPN membership. Centralized data
policies can be used in configuring application firewalls, service chaining,
traffic engineering, and QoS. Localized data policies allow data traffic to be
handled at a specific site, such as ACLs, QoS, mirroring, and policing.
Some centralized data policy may affect handling on the edge device itself,
as in the case of application route policies or a QoS classification policy. In
these cases, the configuration is still downloaded directly to the controllers,
but any policy information that needs to be conveyed to the edge routers is
communicated through the secure connection already established.
SD-LAN
Software-defined local area network (SD-LAN) is an evolution of
existing campus LAN designs that introduces programmable overlays,
enabling easy-to-deploy network virtualization across the LAN, capable of
supporting multiple enclaves. In addition to network virtualization, SD-
LAN allows for software-defined segmentation and policy enforcement
based on user identity, device, method of connectivity, and group
membership. These are newer technologies and protocols that eliminate
many of the issues and problems previously described. These capabilities
also provide a significant reduction in operational expenses and an
increased ability to drive business assurance and outcomes quickly with
minimal risk, at the cost of increased complexity and staff expertise. Figure
9-37 highlights the different layers within SD-LAN today.
SD-LAN Terminology
The following terms are used in SD-LAN and in other software-defined
solutions today:
Underlay network
Overlay network
SD-LAN data plane
SD-LAN control plane
Underlay Network
The underlay network is defined by the physical switches and routers that
are part of the LAN. All network elements of the underlay must establish
Internet Protocol (IP) connectivity via the use of a routing protocol.
Theoretically, any topology and routing protocol can be used, but the
implementation of a well-designed Layer 3 foundation to the LAN edge is
highly recommended to ensure performance, scalability, and high
availability of the network. In the SD-LAN, end-user subnets are not part of
the underlay network but instead are part of the overlay network. The
underlay is typically a Layer 3 fabric without any Layer 2. All Layer 2
requirements can be achieved in the overlay network.
Overlay Network
An overlay network runs over the underlay in order to create a virtual
network. Virtual networks isolate both data plane traffic and control plane
behavior among the physical networks of the underlay. Virtualization is
achieved inside SD-LAN by encapsulating user traffic over IP tunnels that
are sourced and terminated at the boundaries of SD-LAN. Network
virtualization extending outside of the SD-LAN is preserved using
traditional virtualization technologies such as virtual routing and
forwarding (VRF)-Lite, MPLS VPN, or SD-WAN. Overlay networks can
run across all or a subset of the underlay network devices. Multiple overlay
networks can run across the same underlay network to support multitenancy
through virtualization.
SD-LAN Components
SD-LAN is composed of several node types. This section describes the
functionality of each node and how the nodes map to the physical campus
topology.
Note
To properly show the different capabilities and components of an SD-
LAN solution, we will cover specific aspects of the Cisco SD-A
solution. The goal of this comparison is not to teach you all about
Cisco SD-A but rather about the capabilities that all SD-LAN
solutions should have and how you as a network designer can
properly leverage an SD-LAN solution to meet the needs and
requirements of a business.
Note
Cisco SD-A is a unique set of technologies, automation, and central
control that doesn’t necessarily fit perfectly into the SD-LAN
category. In other words, SD-LAN represents a subset of Cisco SD-
A, but it is a good representation for most of the capabilities.
Summary
This chapter covered the different design options and considerations of
forwarding and control plane mechanisms of MPLS, MP-BGP, and
software-defined networking. These services have become primary business
enablers for enterprises by meeting customer connectivity requirements,
whether it is a Layer 3 or Layer 2 type of connectivity. In addition,
automation within the software-defined solutions discussed in this chapter
brings new capabilities to businesses that they haven’t seen before. By
leveraging these automation and orchestration capabilities, businesses can
redirect their staff members to focus on business initiatives rather than the
day-to-day operations and maintenance of the infrastructure. These design
models can directly or indirectly impact business effectiveness and overall
efficiency.
In addition to the design models and considerations, this chapter covered
the different design approaches that offer a scalable design to support
enterprise businesses in this modern software-defined world with a very
large number of nodes and prefixes. Last but not least, the design decision
of selecting a certain design approach or protocol has to be based on the
holistic approach, to avoid designing in isolation of other parts of the
network, regardless of whether it is for a physical, virtual, underlay, or
overlay entity.
References
Al-shawi, Marwan, CCDE Study Guide (Cisco Press, 2015)
Security
This chapter covers the following topics:
5.0 Security
Caution
The goal of self-assessment is to gauge your mastery of the topics in
this chapter. If you do not know the answer to a question or are only
partially sure of the answer, you should mark that question as wrong
for purposes of the self-assessment. Giving yourself credit for an
answer you correctly guess skews your self-assessment results and
might provide you with a false sense of security.
Foundation Topics
Designing a modern network that is secure is one of the most complicated
tasks that network designers confront. Mobility and communication over
the Internet have become primary (de facto) methods of communication and
are essential requirements for many businesses (in most cases, at the time
of this writing, they are unstated requirements; businesses simply assume
they will have them). Therefore, to address these trends and
requirements, sophisticated security countermeasures are required.
Typically, a good security design follows a structured approach that divides
the network into domains and applies security in layers, with each layer
focusing on certain types of security requirements and challenges. This is
also known as defense in depth, where multiple layers of protection are
strategically located across the network and where a failure of one layer to
detect an attack or malicious traffic flow will not leave the network at risk.
Instead, the multiple security layers work in a back-to-back manner to
detect the attack or malicious flow in the network. Figure 10-1 summarizes
the common security aspects that can be applied in a layered approach.
Figure 10-1 Elements of Layered Network and Information Security
Infrastructure Security
This section covers infrastructure security and network firewall
considerations, in brief, focusing on the integration and impact with regard
to network design.
Control plane: As discussed earlier in this book, the control plane is like
the brain of the network node; it usually controls and handles all path
selection functions. Therefore, any control plane–related issues, such as a
flapping session with a BGP peer that advertises an extremely large
number of prefixes, will impact not only the network stability but also
the device itself (because of high CPU spikes in this case). Common
mechanisms to protect this plane include iACLs, routing protection,
and control plane policing (CoPP).
Management plane: As the name implies, this plane relates to the management
traffic of the device, such as device access, configuration,
troubleshooting, and monitoring. Therefore, its criticality is equal to the
other two planes. Any unauthorized access can lead to a device- and
network-wide crisis, such as injecting a black-holing route into the
control plane or flooding the network with malicious traffic, which will
ultimately impact all the hosts and users transiting through the network
in general and this network device in particular. Common mechanisms to
protect this plane include CPU and memory thresholding, AAA
(authentication, authorization, and accounting), and CoPP.
Note
Although CoPP uses the Modular QoS CLI (MQC), the QoS
considerations earlier provide network-wide treatment on a per-hop
basis for Layer 3 control plane traffic flows, whereas CoPP is focused
only on a device level. As covered earlier, marking down the DSCP
value of packets that are out of profile can impact how these packets
will be treated by other nodes across the network.
From what is shown in Table 10-4, it’s pretty straightforward that WPA3 is
the best option, from both a security standpoint and a performance
standpoint. With that said, at the time of writing not all wireless devices
support WPA3, in which case WPA2 is the preferred option.
Note
Hypervisors have taken this virtualized firewall appliance a step
further; instead of leveraging a dedicated virtual firewall, the
hypervisor has this functionality built-in, allowing these security
capabilities to be instantiated at any location, at any layer, and at any
virtual machine within the hypervisor’s management control.
Detecting and mitigating common types of attacks is by far one of the most
important security goals of all security devices. Based on the multiple
sections and security layers discussed briefly in this section, Figure 10-6
provides a summarized list of some common network attacks and risks,
along with the possible countermeasures and features that can be used to
protect the network infrastructure and mitigate the impact of these attacks at
different layers.
Figure 10-6 Common Network Attacks and Mitigation Mechanisms
Authentication
The first step in the NAC process is authentication, and every device and
user gets authenticated. How each of these resources is authenticated
depends on what protocols they support and the overall security architecture
design being deployed.
There are three parts to 802.1X authentication: the supplicant (the client seeking access), the authenticator (the network device the client connects through), and the authentication server (typically a RADIUS server).
(A comparison table of EAP methods appears here; the surviving fragment indicates that only EAP-TLS requires a client-side certificate, while EAP-FAST uses a Protected Access Credential [PAC] instead.)
Authorization
Once a device or user has properly completed authentication, they start the
authorization process. This step is about what the device or user should
have access to. What do they need to complete their job or role? These can
be referred to as use cases. A good use case example is network printers,
and here are some questions that should be answered:
In most cases, the answer to these questions is no, they don’t. For most
network printer use cases, we can allow the print server to access them, and
that’s all that is needed. This process should be followed for all use cases;
just remember to only allow what the use case needs to work, not what the
use case wants to have access to.
There are a number of different authorization policies that can be applied to
a use case, and the following are some of the most common:
Visibility
In regard to network access control and identity management, when we are
talking about visibility, we are not talking about real-time visibility, real-
time traffic flows, or real-time analytics. When a NAC solution is properly
designed and deployed, the business starts to see where devices, users, and
resources live, when they are connecting to the network, from which
locations they are connecting to the network, how they are authenticating,
and what is being denied. For example, without a NAC solution deployed, a
corporate business might inadvertently allow a gaming system to connect to
its network. This just might be something that occurs regularly and has the
potential to cause a business impact if that gaming system is allowed to use
all available bandwidth. With a NAC solution, this access can be denied
from the start. On the flip side, if this wasn’t a corporate business but was a
higher education campus environment, the college might want to allow the
gaming system to connect to the network while also limiting the available
bandwidth the gaming system can consume.
Another great example of this visibility is third-party network switches and
hubs. In a corporate environment, these devices can run wild,
uncontrollable at times, unless you deploy port security, which can also be a
large management overhead. Instead, if properly leveraging a NAC
solution, these third-party switches and hubs can be denied access to the
network right from the start.
The visibility provided by a NAC deployment is really the first step toward
real-time visibility.
Summary
Security, and a network design's focus on security, is imperative.
We’ve covered security in two dedicated chapters in this book now, the
other being Chapter 4, where we covered security design. Security is
interwoven with and overlaid on every network design, solution, and
architecture. Every security-focused network design decision has
trade-offs to account for: take security too far and the network becomes
unusable by the business users; don't take it far enough and
vulnerabilities, attack surfaces, and architecture points are left wide
open. There has to be a happy middle ground
where the network and the data on top of it are being properly secured to
maintain its confidentiality, integrity, and availability while not limiting the
ability of the business and its users to complete their approved functions.
The topics covered in this chapter focused primarily on infrastructure- and
device-level security. If a device hasn’t been properly hardened (locked
down), it is much easier to compromise. The simple task of removing, or
disabling, all services, protocols, and functions that are not being utilized
can save the entire network from a catastrophic outage. In addition,
leveraging different security-focused protocols to help secure not only the
device but also the Layer 2 and Layer 3 control planes limits the overall
attack surface of the infrastructure. Having purpose-built security devices
that provide specific security capabilities such as firewall, IDS, and NAC
helps secure not only what is being connected to the LAN but also what is
being allowed into the perimeter of the network. Understanding these
components and their capabilities is a requirement for all network
designers. Knowing the role a NAC solution plays, namely that of an
infrastructure service like DNS, DHCP, and NTP, allows a network
designer to ensure both that the NAC solution is properly designed and
that the network it supports is properly designed around it.
Reference
Al-shawi, Marwan, CCDE Study Guide (Cisco Press, 2015)
Wireless
This chapter covers the following topics:
Caution
The goal of self-assessment is to gauge your mastery of the topics in
this chapter. If you do not know the answer to a question or are only
partially sure of the answer, you should mark that question as wrong
for purposes of the self-assessment. Giving yourself credit for an
answer you correctly guess skews your self-assessment results and
might provide you with a false sense of security.
1. Which of the following wireless client specifications is helpful in
designing the size of an AP cell?
a. The antenna gain
b. The receiver sensitivity
c. The number of spatial streams
d. Client’s software operating system
2. A high-density area in a wireless design is determined by which
one of the following statements?
a. More clients are using the 5-GHz band than the 2.4-GHz
band.
b. A small number of clients in an area are using high-
bandwidth applications.
c. A higher number of clients are associated with each AP in an
area.
d. A higher amount of RF coverage is needed in an area.
3. Suppose a customer wants users on their wireless network to
authenticate with a username and password before being allowed
wireless network access. Which of the following items could be
leveraged to add this security requirement in your wireless
network design to meet the customer’s needs?
a. RADIUS
b. TACACS
c. SNMP
d. AES servers
4. In a data-only wireless deployment (non-real-time traffic), which
one of the following statements is true?
a. Strict jitter requirements must be met.
b. Strict latency requirements must be met.
c. Strict packet loss requirements must be met.
d. No specific requirements must be met.
5. Suppose a customer wants to use a real-time application that
requires jitter to be less than 30 milliseconds. Which one of the
following wireless network deployment models should be
leveraged for this wireless network design requirement?
a. Data deployment model
b. High-density deployment model
c. Voice deployment model
d. There is not enough information given to determine a
deployment model.
6. Suppose you are working on a wireless network design that is
required to support a high density of clients within a large lecture
space at a college. You begin by adjusting the AP’s transmit
power level down to its lowest setting, but you find that the AP’s
cell size is still too large for your design. What is the next step to
take for you to reduce the AP’s cell size?
a. Use an external omnidirectional antenna.
b. Use an external patch antenna.
c. Install a second AP next to the first one and use the same
channel on each.
d. Enable the lowest data rate to reduce the cell size.
7. Which of the following statements are valid design goals for
wireless network design that will support voice over the wireless
network for calls? (Choose all that apply.)
a. Make 12 Mbps the lowest mandatory data rate.
b. Design for call capacity per AP.
c. Use every possible 5-GHz nonoverlapping channel.
d. Consider avoiding 5-GHz DFS channels.
8. When you meet with a customer to gather information about an
upcoming wireless project, which one of the following items
would be the most helpful as you prepare to design the wireless
solution?
a. The scope of the project
b. A list of the buildings and locations that need wireless
service
c. Floor plans of buildings that need wireless service
d. Diagrams of the physical network infrastructure
9. Regarding a wireless client in relation to the AP it’s connecting
to, which of the following statements is incorrect?
a. As the client moves away from the AP, the AP’s signal
strength decreases.
b. As the client moves away from the AP, the usable data rate
decreases.
c. As the client moves toward the AP, the SNR increases.
d. As the client moves toward the AP, the usable data rate
decreases.
Foundation Topics
(Table 11-2, partially recoverable here, lists per-device radio capabilities: 2.4-GHz channel support (channels 1 to 13 for most devices, with one device limited to 11 channels), 5-GHz channel support (ranges such as 36–48, 52–64, 100–140, and 149–165, with DFS channels not recommended for one device), supported MCS values, and 802.11n/802.11ac support.)
Taking a look at the 802.11 support among these devices, you can easily see
that all of these critical devices support both the 2.4-GHz and 5-GHz bands.
Furthermore, you can see that two of the four devices support
802.11a/b/g/n, while the other two add support for 802.11ac. This is
important information to ensure the access points (APs) that you select in
the design provide the required data rates at the corresponding 802.11 level.
Next you look at the different channels and data rates each device supports.
The key here is that you want to leverage the highest data rate that all
devices support, and in most cases disable the lower ones so they are not
leveraged. If you leave the lower data rates available, it will limit the
overall performance of the access point as devices connect to it on those
rates as well as the higher rates. Just keep in mind that a lot of legacy
devices only support the lower data rates, and if a design disables them,
those devices will not be able to connect to the wireless network.
In some wireless designs it may be prudent to create different service set
identifiers (SSIDs), with different APs advertising them for these legacy
devices that only operate at the lower data rates. An SSID is used as a
wireless network name and can be made up of case-sensitive letters,
numbers, and special characters. When designing wireless networks, we
give each wireless network a name, which is the SSID. This allows end
users to distinguish one wireless network from another.
Continuing with this example scenario, the channels offered may be
different as some countries only allow specific channels, and thus those
devices will only operate in that specific country. For example, if a device
only supports 2.4-GHz channels 1 through 11 (the wireless VoIP devices in
this scenario), it will only be able to operate in the United States. The Table
11-2 column for high-definition tablets refers to a feature called Dynamic
Frequency Selection (DFS). DFS enables an AP to dynamically scan for
Radio Frequency (RF) channels and avoid those used by other radio
devices in the area. RF is a wireless electromagnetic signal used as a form
of data communication.
For best performance in a wireless environment, wireless devices should be
able to distinguish received signals as legitimate information they should be
listening to and ignore any background signals on the spectrum. Signal-to-
noise ratio (SNR) ensures the best wireless functionality. SNR is the
difference between the received wireless signal and the noise floor. The
noise floor is erroneous background transmissions that are emitted from
either other devices that are too far away for the signal to be intelligible, or
by devices that are inadvertently creating interference on the same
frequency. For example, if a client device’s radio receives a signal at -75
dBm, and the noise floor is -90 dBm, then the effective SNR is 15 dB.
This wireless connection would then operate with 15 dB of usable signal margin.
Generally, an SNR value of 20 dB or more is recommended for data
networks, whereas an SNR value of 25 dB or more is recommended for a
wireless network supporting real-time application traffic.
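The SNR arithmetic and the rule-of-thumb design targets from the preceding paragraph can be expressed as a short sketch (the 20 dB and 25 dB thresholds are the recommendations stated above):

```python
# SNR (dB) is the received signal level minus the noise floor, both in dBm.
def snr_db(signal_dbm, noise_floor_dbm):
    return signal_dbm - noise_floor_dbm

def meets_design_target(signal_dbm, noise_floor_dbm, real_time=False):
    # Rules of thumb from the text: >= 20 dB for data networks,
    # >= 25 dB when supporting real-time application traffic.
    target = 25 if real_time else 20
    return snr_db(signal_dbm, noise_floor_dbm) >= target

print(snr_db(-75, -90))               # 15 (dB), as in the example above
print(meets_design_target(-75, -90))  # False: below the 20 dB data target
```

The worked example in the text (-75 dBm signal, -90 dBm noise floor) therefore falls short of both the data and the real-time SNR recommendations.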
Shifting capabilities to RF, the transmit power level of one device might be
very different from that of another device, as shown in Table 11-2. This
difference is directly related to the size of the battery in the device—the
smaller the device, the smaller the battery, and the less power the device can
leverage to transmit a signal. The location of the device’s antenna can also
have an impact on the RF performance. Some devices have embedded
antennas, while other devices have external antennas, which allows them to
be extended as needed. Lastly, the placement of the device in relation to
access points, with obstacles obstructing the path, can degrade and limit the
connections.
Wireless Security Capabilities
In the majority of situations, customers will require the most secure
network. Wired networks have some level of inherent security because the
data transmissions are contained within the physical wire. In contrast,
wireless networks transport data over the air, potentially allowing other
devices to eavesdrop or manipulate the traffic.
When conducting a discovery session to identify wireless network design
requirements, a network designer should ask the following questions:
The 802.11 standard defines two methods of authentication: open and WEP.
All devices should support both methods. WEP has been deprecated due to
its inherent security weakness. Open authentication is used in conjunction
with 802.1X (network access control [NAC]) security and multiple EAP
options to offer a wide range of authentication methods, which was
discussed in Chapter 10, “Security.” Table 11-3 shows the different security
specifications for the four device types we have been reviewing.
Reviewing the device security capabilities in Table 11-3, each device can
support a comprehensive list of wireless security features and protocols.
What you should notice is that these features and protocols are not the same
across all devices. Network designers must be aware that not all devices
support all security features, protocols, or options, and identify the proper
security mechanisms that all devices support. Securing access to a wireless
network involves applying one of the authentication methods in Table 11-3
to screen the users and devices that try to join.
During the wireless design process, you should list the wireless networks
that the customer wants, along with the security mechanisms that should be
leveraged to ensure the wireless network is secured. For example, a wireless
network meant for the standard corporate laptops might require WPA2-
Enterprise with AES, 802.1X with EAP-TLS, and digital certificates.
Another wireless network offered to the high-def tablets might require
PEAP-MSCHAPv2. Keep in mind that, with most wireless products, a single
SSID cannot run multiple security mechanisms simultaneously (for
example, WPA2-Personal and WPA2-Enterprise cannot be offered on the
same SSID). This limitation adds a constraint to any wireless network
design and can multiply the number of SSIDs as the security mechanisms
required in a design increase.
Note
Some vendor solutions have additional features that allow wireless
traffic to be locally switched at the access point. For example, Cisco
has the FlexConnect option within the Wireless LAN Controller
(WLC) configuration that allows for this exact forwarding behavior.
Note
The following CAPWAP discussion highlights the Cisco-specific
implementation as an example case study.
Another area where the logical path requires careful consideration is the
path between the controller and the key infrastructure services, such as the
AAA and DHCP servers. Additional infrastructure services, including
AAA/NAC, DHCP, DNS, SDN controller, and many more, may be placed
at locations throughout the network that have firewalls protecting them.
Understanding the logical path between these services will often require
opening of firewall rules for the service to interface with the controller.
As with CAPWAP, the wireless controller’s management interface is used
to communicate with AAA/NAC servers, as well as a host of other services,
including directory servers, other controllers, and more.
For DHCP, a controller proxies communication to the DHCP server on
behalf of clients using the controller’s IP address in the VLAN associated to
the WLAN of those clients.
Table 11-4 summarizes the ports that must be open to allow the controller to
communicate with key services. For example, RADIUS authentication and
authorization use UDP port 1812 (some older versions use UDP port 1645).
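A service-to-port map like this can be turned directly into firewall rules. In the sketch below, the RADIUS authentication ports come from the text; the RADIUS accounting and CAPWAP entries are common defaults added as assumptions and should be verified against the vendor's documentation before opening firewall rules.

```python
# Hedged sketch of a controller service-to-port map in the spirit of
# Table 11-4. Only the RADIUS authentication ports are from the text;
# the other entries are widely used defaults (assumptions).
CONTROLLER_SERVICES = {
    "radius-auth": ("udp", 1812),        # older deployments may use 1645
    "radius-accounting": ("udp", 1813),  # assumption: older versions use 1646
    "capwap-control": ("udp", 5246),     # assumption: standard CAPWAP port
    "capwap-data": ("udp", 5247),        # assumption: standard CAPWAP port
}

def firewall_rules(controller_ip, services=CONTROLLER_SERVICES):
    """Render simple permit statements, one per controller service."""
    return [
        f"permit {proto} any host {controller_ip} eq {port}"
        for name, (proto, port) in services.items()
    ]

for rule in firewall_rules("10.0.0.10"):
    print(rule)
```

Keeping the map in one place means the same source of truth drives both documentation and the rules pushed to the firewalls protecting the infrastructure services.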
This all starts with Layer 1 of the OSI model, RF design. In a wireless
world, this is all about the antenna: determining which type of antenna
should be leveraged and where it should be placed. For antenna selection,
we need to consider the density of clients that we want that AP to serve and
how far away the AP can be placed and still enable clients to connect to it.
For the antenna placement, we want to ensure clear line of sight to clients;
that is, ensure there are no obstacles in the way. Identifying strategic
locations can be the best tool for determining where to place antennas.
Think of locations where users congregate and consider offering them direct
frequencies that provide better throughput and signal strength.
If a wireless-enabled device can see the wireless network, it should be able
to join it within the coverage area. Taking a look at Figure 11-3, the left side
shows this example for two devices within the AP cell size as defined in the
figure.
Summary
This chapter covered various wireless network design topics, including
IEEE 802.11 standards and protocols and enterprise wireless network use
cases. To avoid wireless design defects, network designers need to always
incorporate the wireless network design topics covered in this chapter in an
integrated holistic approach rather than designing in isolation. Moreover,
considering the top-down design approach is a fundamental requirement to
achieving a successful business-driven design. Although this chapter
focused on the wireless side, there are design implications within the
physical infrastructure that also need to be kept in mind. For example, as
you add more APs to solve wireless requirements (high density of users),
each of these APs still requires physical network connectivity and power.
These are all factors that must go into your wireless network design
decision process.
Reference
Al-shawi, Marwan, CCDE Study Guide (Cisco Press, 2015)
Automation
This chapter covers the following topics:
I’m sure some of you are wondering why automation is a topic in network
design and on the CCDE certification exam. This is a network design
certification, after all. For network engineers, it can be hard to grasp the
impacts of automation on the network and, more importantly, on the
business. Networks and the corresponding network designs have never been
more complex. Leveraging automation can have a large impact on the
network and the business but it highly depends on the underlying network
design. Assuming a network has been automated, how does that affect the
design of the network?
How long would it take to build out 12 data centers worldwide manually?
The architecture and overall network design could be very similar for each
data center, but if we are manually configuring each device, at each
location, this daunting task could very well take multiple years. Let’s just
say it takes 12 months to complete. That’s one data center each month.
Now think about building these 12 data centers with automation. Every
component, configuration, feature, capability, and functionality is templated
with corresponding variables. Let’s assume it takes a month to fully
template all of these elements out, figure out the automation workflows and
orchestration process, and we start building the data centers in month 2. By
taking advantage of automation, the business can build multiple data centers
at the same time, with the same resources. What took 12 months manually
can be completed in 3 months using automation, in some cases even faster.
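The arithmetic behind that comparison can be made explicit. The sketch below uses the illustrative numbers from the scenario (one month per site manually, one month of up-front templating) plus an assumed degree of parallelism, since how many builds can run concurrently depends on the team and tooling:

```python
# Worked arithmetic for the 12-data-center scenario (illustrative only).
def manual_months(sites, months_per_site=1):
    # Manual builds happen one site at a time.
    return sites * months_per_site

def automated_months(sites, templating_months=1, parallel_builds=6,
                     months_per_site=1):
    # Assumption: after templating, several builds run concurrently.
    build_waves = -(-sites // parallel_builds)  # ceiling division
    return templating_months + build_waves * months_per_site

print(manual_months(12))     # 12 months: one data center per month
print(automated_months(12))  # 3 months: 1 to template + 2 waves of 6 builds
```

Changing the assumed parallelism shows why "in some cases even faster" holds: with enough concurrent builds, total time collapses toward the templating time plus a single build wave.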
With a network built, automation can further be leveraged to complete
operations and maintenance (O&M) tasks, troubleshoot network issues and
resolve them, instantiate business intent end to end—and many more
capabilities are being identified every day.
In addition to the reduction in build time, automation limits user errors,
reduces total cost of ownership, reduces network outages, and increases
service agility. This is the business impact of automation, and why business
leaders will require it within the design of their networking infrastructure.
This chapter focuses on the underlying impact of automation on a business
and how a network designer can properly structure a network design for
automation. This chapter does not cover how to build or write automation.
Thus, this chapter will not cover programming languages, API calls,
orchestration tools, data models, and so forth, but it will cover the specific
capabilities of automation that network designers need to know to properly
design a network.
This chapter covers the following “CCDE v3.0 Core Technology List”
sections:
Zero-Touch Provisioning 1, 2
Infrastructure as Code 3
CI/CD Pipelines 4, 5
Caution
The goal of self-assessment is to gauge your mastery of the topics in
this chapter. If you do not know the answer to a question or are only
partially sure of the answer, you should mark that question as wrong
for purposes of the self-assessment. Giving yourself credit for an
answer you correctly guess skews your self-assessment results and
might provide you with a false sense of security.
Foundation Topics
Zero-Touch Provisioning
The overwhelming increase in demand for new services has stressed
overall network designs and architectures. Manually adding new services
to an oversaturated and oversubscribed network, while still fulfilling
numerous tasks such as network management, quality of experience, and
optimization, is not feasible today. The answer to this problem comes in
many forms, one of which is the automation of the most common daily
tasks and responsibilities related to operations and maintenance of the network.
Zero-touch provisioning (ZTP) is based on software-defined network
(SDN) solutions and network functions virtualization (NFV) concepts. The
intent and outcome of a ZTP capability is to have any new network device
fully configured automatically, in a plug-and-play situation. The benefits of
ZTP are the following:
Note
There must be a minimum amount of network infrastructure created
to allow ZTP to communicate with the network devices it will
configure. In most cases this means setting up a management network
to allow all network devices to be automatically provisioned with
ZTP via their out-of-band management interfaces.
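The ZTP sequence can be sketched conceptually as follows. All of the names here are hypothetical stand-ins: a real deployment uses DHCP options and a vendor-specific bootstrap agent running over the out-of-band management network described in the note above.

```python
# Conceptual sketch of the ZTP sequence for a new, unconfigured device.
# FakeDHCP and FakeProvisioner are hypothetical stand-ins for the real
# management-network services.
def ztp_boot(serial, dhcp_server, provisioning_server):
    steps = []
    # 1. The device boots with no configuration and requests an address.
    mgmt_ip = dhcp_server.lease(serial)
    steps.append(f"{serial}: leased management IP {mgmt_ip}")
    # 2. DHCP options point the device at the provisioning server,
    #    which returns a device-specific configuration.
    config = provisioning_server.config_for(serial)
    steps.append(f"{serial}: downloaded {len(config)} bytes of config")
    # 3. The device applies the configuration: plug-and-play complete.
    steps.append(f"{serial}: applied configuration, device in service")
    return steps

class FakeDHCP:
    def lease(self, serial):
        return "10.255.0.10"

class FakeProvisioner:
    def config_for(self, serial):
        return f"hostname device-{serial}\n"

for line in ztp_boot("SN1234", FakeDHCP(), FakeProvisioner()):
    print(line)
```

The key design point survives the simplification: every step keys off an identifier the device already has (its serial number), so no human touches the device between rack-and-stack and in-service.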
Infrastructure as Code
Infrastructure as Code (IaC) is a new approach to infrastructure
automation that focuses on consistent, repeatable steps for provisioning,
configuring, and managing infrastructure. Over the years, infrastructure has
been provisioned manually. Deploying new capabilities, applications, and
network devices used to take weeks, months, and even years to complete,
which created a demand for a new process of provisioning and configuring
devices that is more effective and more efficient. IaC fulfills that demand.
Infrastructure as Code is all about the representation of infrastructure
through machine-readable files that can be reproduced for an unlimited
amount of time. Figure 12-2 shows how leveraging IaC can streamline the
building of multiple environments, one each for development, staging, and
production.
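A minimal sketch of that idea: one machine-readable template is rendered once per environment, so development, staging, and production stay consistent by construction. The template text and variable values are illustrative assumptions, not a real device configuration.

```python
# Minimal IaC-style sketch: one template, reproduced per environment,
# as in the dev/staging/prod flow of Figure 12-2. Values are illustrative.
from string import Template

DEVICE_TEMPLATE = Template(
    "hostname $env-core1\n"
    "vlan $vlan\n"
    " name $env-users\n"
)

ENVIRONMENTS = {
    "dev": {"env": "dev", "vlan": "110"},
    "staging": {"env": "staging", "vlan": "120"},
    "prod": {"env": "prod", "vlan": "130"},
}

def render_all(template=DEVICE_TEMPLATE, environments=ENVIRONMENTS):
    # The same source of truth yields a consistent config per environment.
    return {name: template.substitute(params)
            for name, params in environments.items()}

configs = render_all()
print(configs["prod"])
```

Because the environments differ only in their variable files, reproducing (or rebuilding) any environment is a re-render, not a manual reconfiguration, which is the repeatability property the paragraph above describes.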
Businesses and their networks are far more complex than in the past.
Businesses are requiring consistent uptime as they are relying on the
network, specifically connectivity to the Internet, for most resources.
Businesses need flexibility and elasticity to support their business needs.
Infrastructure as Code can help support dynamic businesses while
enhancing data and cybersecurity.
CI/CD Pipelines
When making manual network changes via the CLI, it is suggested to make
small, incremental changes versus large, wholesale changes. For example,
copying into the CLI a hundred lines of configuration changes and then
having to troubleshoot what part of the copy broke the network is much
more difficult than leveraging automation tools to complete the same work.
Figuring out what part of the change isn’t working is always troublesome,
and often could create service downtime, an outage, or a few hours of
frustration (at a minimum).
Continuous integration/continuous delivery (CI/CD) is a common software
development practice used by developers to merge code changes into a
central repository multiple times a day, sometimes multiple times an hour,
and automate the entire software release process. With continuous
integration (CI), each time the code has been changed, the build and test
process is automatically executed for the given application, providing
instant feedback to the different developers on what’s working and, more
importantly, what’s not working in their code. With continuous delivery
(CD), the relevant resources are automatically provisioned and deployed,
which can sometimes consist of multiple disparate stages for more complex
projects. The important aspect of a CI/CD pipeline is that all of these
processes are fully automated, documented, and visible to the entire project
team.
The four steps in a CI/CD pipeline are source, build, test, and deploy.
Figure 12-3 shows the CI/CD pipeline process workflow.
Modularity (supports zero-touch provisioning, Infrastructure as Code, and CI/CD pipelines): Each site, pod, and building block of the network design must be "templateable" and repeatable. This might include the specific subnets used, which ports on each device are used for what connections, the naming of the devices, and many more templateable items in the network design. No one-off designs or arbitrary topologies should be leveraged; no snowflakes (special designs or implementations against the norm).

Hierarchy of design (supports Infrastructure as Code and CI/CD pipelines): Identify the smallest number of standard models and have the discipline to stick to them. For example, have small office, medium office, and large office models for all remote sites. Each remote site model would then have metrics and thresholds that would elevate the network design to the next level within the site models. Maybe a small office is for 10 or fewer users, a medium office is for 11 to 25 users, and a large office is for 26-plus users. Sometimes these thresholds are business-focused metrics (OPEX/CAPEX), while other times they are technology focused (e.g., number of ports, number of switches, number of line cards).
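The site-model thresholds described above translate directly into a classification rule. The user-count boundaries below are the illustrative ones from the text; a real design might key off business or technology metrics instead.

```python
# Sketch of the standard-site-model selection described in the text.
# Thresholds (10 / 25 users) are the illustrative examples given above.
def site_model(user_count):
    if user_count <= 0:
        raise ValueError("user_count must be positive")
    if user_count <= 10:
        return "small office"
    if user_count <= 25:
        return "medium office"
    return "large office"

print(site_model(8))    # small office
print(site_model(18))   # medium office
print(site_model(40))   # large office
```

Encoding the model boundaries once, rather than deciding site by site, is what gives the design the discipline and repeatability the table calls for.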
Reduced CAPEX/OPEX.
Summary
This chapter focused on the underlying impact of automation on a business
and how a network designer can properly structure a network design for
automation. This chapter specifically covered the automation capabilities of
zero-touch provisioning, Infrastructure as Code, CI/CD pipelines, and the
corresponding network design elements around them. After highlighting
these capabilities individually, they were collectively compared to each
other from a network design perspective and then from a business
perspective.
Reference
Al-shawi, Marwan, CCDE Study Guide (Cisco Press, 2015)
Multicast Design
This chapter covers the following topics:
Note
Practically and ideally, the decision to enable multicast should be
driven by business application requirements. However, as a network
architect or designer, you can sometimes suggest that the business
migrate certain applications from the unicast version to multicast
version if the option exists and the transition is going to optimize the
network and application performance. For instance, Moving Picture
Experts Group (MPEG) high-bandwidth video applications usually
consume a large amount of network bandwidth for each stream.
Therefore, enabling IP multicast will enable you to send a single
stream to multiple receivers simultaneously. This can be seen from
the business point of view as a more cost-effective and bandwidth-
efficient solution, especially if the increased bandwidth utilization
translates to increased cost (such as WAN bandwidth).
This chapter covers the following “CCDE v3.0 Core Technology List”
sections:
Multicast Switching 1, 2
Caution
The goal of self-assessment is to gauge your mastery of the topics in
this chapter. If you do not know the answer to a question or are only
partially sure of the answer, you should mark that question as wrong
for purposes of the self-assessment. Giving yourself credit for an
answer you correctly guess skews your self-assessment results and
might provide you with a false sense of security.
Foundation Topics
Multicast Switching
More and more Layer 2 networks are requiring multicast switching in their
design because the applications being leveraged by the business require
multicast support. This section covers the different multicast switching
protocols—Internet Group Management Protocol (IGMP) and Multicast
Listener Discovery (MLD)—and the corresponding attributes network
designers need to know to properly design a Layer 2 network for
multicast.
Note
Network designers must consider maintaining the existing Layer 3
unicast communication between the WAN router and other routers in
this network after placing the WAN router in a different VLAN, such
as adding Layer 3 VLAN interfaces or adding additional
interfaces/subinterfaces from other routers within the same VLAN.
Note
IGMP snooping may maintain forwarding tables based on either
Ethernet MAC addresses or IP addresses. Because of the MAC-
overlapping issues covered earlier with regard to mapping an IP
multicast group address to Ethernet addresses, the forwarding based
on IP address is desirable if a switch supports both types of
forwarding mechanisms.
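The MAC-overlapping issue the note refers to comes from the standard IPv4-multicast-to-Ethernet mapping: only the low-order 23 bits of the group address are copied into the 01:00:5e MAC prefix, so 32 different IP groups share each Ethernet address. A short sketch of the mapping:

```python
# Standard IPv4 multicast group-to-Ethernet MAC mapping: the 01:00:5e
# prefix plus the low 23 bits of the group address. Because the top
# 5 variable bits of the address are discarded, 32 IP groups map to
# the same MAC address.
def multicast_mac(group_ip):
    o = [int(x) for x in group_ip.split(".")]
    return "01:00:5e:{:02x}:{:02x}:{:02x}".format(o[1] & 0x7F, o[2], o[3])

print(multicast_mac("239.1.1.1"))    # 01:00:5e:01:01:01
print(multicast_mac("224.129.1.1"))  # 01:00:5e:01:01:01 -- same MAC!
```

This collision is exactly why the note prefers IP-based IGMP snooping tables: MAC-based forwarding cannot tell the two groups above apart.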
Multicast Routing
This section starts by discussing the key considerations for achieving
successful multicast forwarding, and then covers the most common protocols used to
route multicast traffic within a single multicast domain. It also discusses
multicast routing between different multicast domains.
(Table 13-2, not fully reproduced here, maps PIM variants to multicast service models such as many-to-one.)
Note
Although some of the PIM protocols covered in Table 13-2
technically support multicast interdomain routing, it is not common
for them to be used to provide multicast interdomain routing without
other protocols such as Multicast Source Discovery Protocol
(MSDP).
Although having multiple flavors of PIM might be seen as an added
complexity to the multicast network design, it can be seen as added
flexibility for an environment with different types of multicast applications.
For instance, PIM Source-Specific Multicast (PIM-SSM) can be deployed
for certain enterprise communication applications, PIM-BIDIR for financial
applications, and PIM-SM for other general IP multicast communications.
RP discovery, however, is one of the primary design aspects that must be
considered during the planning and design phase of any IP multicast design
task. Table 13-3 summarizes the common mechanisms used to locate or
discover the intended multicast RP within a multicast domain.
Note
The RP is required to initiate new multicast sessions with sources and
receivers. During these sessions, the RP and the first-hop router
(FHR) may experience some periodic increased overhead from
processing. However, this overhead will vary based on the multicast
protocol in use. For instance, the RP with PIM-SM Version 2 requires
less processing than in PIM-SM Version 1, because the sources only
periodically register with the RP to create state. In contrast, the
location of the RP is more critical in network designs that rely on a
shared tree, where the RP must be in the forwarding path, such as
PIM-BIDIR. (The following section covers this point in more detail.)
RP Placement
Normally, the placement of a multicast RP is influenced primarily by the
following factors:
As a rule of thumb, when the source tree along with the shortest-path tree
(SPT) are considered, RP placement is not a big concern even though it is
commonly recommended to be placed closer to the multicast sources,
because in this case the RP is not necessarily required to be in the data
forwarding path. However, in some multicast applications, such as many-to-
many types of multicast applications, the receivers might operate as senders
at the same time using different multicast groups for receiving and sending.
Technically, even when SPT is enabled (where the last-hop router [LHR]
cuts over to the SPT source tree), a source tree is always created between
the source and the RP, in which an (S,G) state is created on all the nodes
between each source and its RP before the switchover to SPT takes place,
as shown in Figure 13-5. This may lead the nodes in the path (between the
many receivers/senders and the RP) to hold a large number of (S,G) states.
(In trading environments, this number can reach a few thousand feedback
streams sourced from the receivers that operate as multicast senders as
well.)
Figure 13-5 Many-to-Many Multicast Applications Using PIM-SM RP
Therefore, to reduce the number of (S,G) states on these nodes, you can add
different RPs close to the receivers that require sending feeds to the
feedback groups, as shown in Figure 13-6. Also, MSDP can be introduced
in this design among the RPs to ensure that all RPs for any given group will
be aware of other active sources. This design option with MSDP is limited
to IPv4 only. This design is suitable in a multicast environment that requires
the receivers to be able to send feedback/streams using a separate group
from the actual data source group. Alternatively, PIM-SM can be migrated
to PIM-BIDIR in this environment.
Figure 13-6 Optimized: Many-to-Many Multicast Applications Using
PIM-SM RPs
However, when the shared tree is used as the forwarding multicast tree,
such as with PIM-BIDIR where the RP will be in the data path, network
designers must carefully consider the placement of the RP within the
network because all the traffic will flow through the RP. For example, if
multiple multicast streams are sourced from different senders distributed
at different locations across the network, identifying which stream should
be given priority is key to placing the RP in the most optimal path
between the sources and receivers based on the following:
For example, in the scenario shown in Figure 13-7, there are two hub nodes
each connected to a different data center with multicast applications
streaming over a shared tree. Although hub 1 is connected directly to the
business-critical multicast application that requires high bandwidth,
multicast streams will reach each of the remote site’s receivers via hub 2.
This is because hub 2 is defined as the RP for the multicast shared tree
(such as PIM-BIDIR), where the RP must be in the data forwarding path,
which can result in a congested data center interconnect (DCI) link and
degraded application quality. Therefore, to optimize the path for the
business-critical application with high-bandwidth requirements, the RP
function must be moved to hub 1 to serve as an RP either for the entire
multicast domain or at least for the multicast groups used by the
applications located in DC-1. This can provide a more optimal path for
multicast streams sourced from DC-1 toward the spokes/receivers.
Figure 13-7 RP Placement Consideration: Shared-Tree Multicast
Note
In both scenarios, the assumption is that the RPF check is considered
based on the utilized path for multicast traffic.
Interdomain Multicast
The multicast protocols discussed earlier focused mostly on handling
multicast in one multicast domain. The term multicast domain can be
defined as an interior gateway protocol (IGP) domain, a BGP autonomous
system (AS), or the administrative domain of a given organization. For
example, one organization might have
multiple multicast domains, with each managed by a different department;
for instance, one domain belongs to marketing, and another domain belongs
to engineering. Other common scenarios of multiple multicast domains are
between service providers and after a merger or acquisition between
companies. Therefore, it is important sometimes to maintain the isolation
between the different multicast domains by not sharing streams and RP
feeds and at the same time offering the ability to share certain multicast
feeds and RP information as required (in a controlled manner). This section
covers the most common protocols that help to achieve this type of
multicast connectivity.
Multicast BGP
As discussed earlier, a successful RPF check is a fundamental requirement
to establish multicast forwarding trees and pass multicast content
successfully from sources to receivers. However, in certain situations,
unicast might be required to use one link and multicast to use another, for
some reason, such as bandwidth constraints on certain links. This situation
is common in interdomain multicast scenarios. Multiprotocol Border Gateway
Protocol (MP-BGP, whose multicast extension is sometimes referred to as
Multicast BGP, or MBGP) is based on RFC 2283, "Multiprotocol Extensions
for BGP-4." MP-BGP offers the ability to
carry two sets of routes or network layer reachability information (NLRI)
(sub-AFI), one set for unicast routing and one set (NLRI) for multicast
routing. BGP multicast routes are used by the multicast routing protocols to
build data distribution trees and influence the RPF check. Consequently,
service providers and enterprise networks can control which path multicast
can use and which path unicast can use using one control plane protocol
(BGP) with the same path selection attributes and rules (such as AS-PATH,
LOCAL_PREFERENCE, and so on).
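As a rough illustration of this behavior, the following Python sketch (table contents, interface names, and the `rpf_interface` helper are all hypothetical) performs a longest-prefix lookup that prefers routes learned through the multicast NLRI over the unicast table when resolving the RPF interface toward a source:

```python
import ipaddress

def rpf_interface(source, mrib, urib):
    """Resolve the RPF interface for a multicast source.

    mrib: routes learned via the multicast NLRI, {prefix: interface}
    urib: unicast routes, {prefix: interface}
    The multicast table wins whenever it holds any matching prefix.
    """
    src = ipaddress.ip_address(source)
    for rib in (mrib, urib):  # check the multicast RIB first
        best = None
        for prefix, iface in rib.items():
            net = ipaddress.ip_network(prefix)
            if src in net and (best is None or net.prefixlen > best[0].prefixlen):
                best = (net, iface)  # keep the longest match
        if best:
            return best[1]
    return None  # no route: RPF check fails
```

In this sketch, a source covered by a multicast-NLRI prefix is RPF-resolved over that path even when the unicast table points elsewhere, which is exactly the incongruent unicast/multicast topology MP-BGP enables.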
The MSDP RPF check is applied when:

The sending MSDP peer is also an interior MP-BGP peer.

The sending MSDP peer is also an exterior MP-BGP peer.

The MSDP RPF check is skipped when:

The sending MSDP peer is the only MSDP peer (for example, if only a
single MSDP peer or a default MSDP peer is configured).

The sending MSDP peer is a member of a mesh group.
* This table covers the Cisco-specific MSDP RPF check rules. There is a
standardized list of rules in RFC 3618.
Figure 13-11 Common RPF Check Failure with MSDP and BGP
Therefore, it is important that the address used for both MP-BGP and
MSDP peer addresses is the same.
In the scenario shown in Figure 13-12, AS 500 is providing transit
connectivity service to a content provider (AS 300) that offers IPTV
streaming. End users need to connect to the streaming server IP 10.1.1.1
over AS 500. AS 500 has two inter-AS links and wants to offer this
transit service to AS 300 using the following traffic engineering
requirements:
Unicast traffic between AS 500 and AS 300 must use the link with 5
Gbps as the primary path and fail over to the 10-Gbps link in case of a
failure.
Multicast traffic must use the 10-Gbps link between AS 500 and AS
300. However, in case of a link or node failure, multicast traffic must
not fail over to the other link (to avoid impacting the quality of other
unicast traffic flowing over the 5-Gbps inter-AS link). Multicast group
addresses that are in the range of 232/5 must not be shared between
the two domains (AS 300 and AS 500).
Currently, the IPTV system needs to use only the range of
225.1.1.0/24. Therefore, only sources in AS 300 with this range must
be accepted by AS 500.
To ensure multicast traffic flows over the 10-Gbps link only and
without facing RPF check failure, MP-BGP will be used to advertise
the multicast source IPs (e.g., 10.1.1.1) and to filter out these IPs
from being advertised/received over the 5-Gbps link.
MSDP peering must be established between multicast RPs of AS 300
and AS 500 to exchange SA messages about the active source within
the local domain in each AS.
PIM RP filtering and MSDP filtering are required to ensure that the RPs
will only send/accept sources within the multicast group IP range
(225.1.1.0/24).
Figure 13-12 Interdomain Multicast Design Scenario
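The group-filtering requirements of this scenario can be sketched as a simple policy check (the group ranges come from the scenario; the `accept_sa` helper itself is purely illustrative):

```python
import ipaddress

# Scenario policy: only 225.1.1.0/24 sources are exchanged between the
# domains, and groups in the 232/5 range are never shared.
ALLOWED = ipaddress.ip_network("225.1.1.0/24")
BLOCKED = ipaddress.ip_network("232.0.0.0/5")

def accept_sa(group):
    """Decide whether an SA advertisement for this group passes the filter."""
    g = ipaddress.ip_address(group)
    return g not in BLOCKED and g in ALLOWED
```

In a real deployment this logic would be expressed as MSDP SA filters and PIM RP access lists on the border routers rather than in code, but the decision the routers make per group is the same.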
Embedded-RP
Although PIM SSM offers the ability for IPv6 multicast to communicate
over different multicast domains, PIM SSM still does not offer an efficient
solution for some multicast deployments where many-to-few and many-to-
many types of applications exist, such as videoconferencing and multiuser
games applications. Also, in some scenarios, the multicast sources between
domains may need to be discovered. Furthermore, MSDP cannot be used to
facilitate interdomain multicast communication as with IPv4, because it has
deliberately not been specified for IPv6. Therefore, the most common and
proven solution (at the time of this writing) to facilitate interdomain IPv6
communication is IPv6 Embedded-RP (described in RFC 3956), in
which the address of the RP is encoded in the IPv6 multicast group address.
RFC 3956 specifies a PIM-SM group-to-RP mapping that uses this encoding,
leveraging and extending the unicast-prefix-based addressing of RFC 3306. The IPv6
Embedded-RP technique offers network designers a simple solution to
facilitate interdomain and intradomain communication for IPv6 Any-Source
Multicast (ASM) applications without MSDP.
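To make the encoding concrete, the sketch below extracts the RP address from an Embedded-RP group address using the RFC 3956 field layout (4-bit RP interface ID, 8-bit prefix length, 64-bit network prefix); the example addresses are hypothetical documentation values:

```python
import ipaddress

def embedded_rp(group):
    """Extract the RP address encoded in an Embedded-RP IPv6 group address.

    RFC 3956 layout (after the FF7S:0... leading bits):
    4-bit RIID (RP interface ID), 8-bit prefix length, 64-bit prefix,
    32-bit group ID. The RP is <prefix>::<RIID>.
    """
    g = int(ipaddress.IPv6Address(group))
    riid = (g >> 104) & 0xF                   # RP interface ID nibble
    plen = (g >> 96) & 0xFF                   # embedded prefix length (<= 64)
    prefix64 = (g >> 32) & ((1 << 64) - 1)    # 64-bit network prefix field
    mask = ((1 << plen) - 1) << (64 - plen) if plen else 0
    rp = ((prefix64 & mask) << 64) | riid     # prefix bits + RIID as host part
    return ipaddress.IPv6Address(rp)
```

For example, a router receiving a join for group `ff7e:140:2001:db8::1234` can derive the RP `2001:db8::1` directly from the group address, with no RP-discovery protocol involved.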
However, network designers must consider that following an RP failure
event, multicast state will be lost from the RP after the failover process
because there is no means of synchronizing states between the RPs (unless
it is synchronized via an out-of-band method, which is not common). In
addition, with MSDP, network operators have more flexibility to filter
based on multicast sources and groups between the domains. In contrast,
with Embedded-RP, there is less flexibility with regard to protocol filtering
capabilities, and if there is no other filtering mechanism, such as
infrastructure access lists, to limit and control the use of the RP within the
environment, a rogue RP can be introduced to host multicast groups, which
may lead to a serious service outage or information security risk.
Anycast-RP
The concept of Anycast-RP is based on using two or more RPs configured
with the same IP address on their loopback interfaces. Typically, the
Anycast-RP loopback address is configured as a host IP address (32-bit
mask). From the downstream router’s point of view, the Anycast IP will be
reachable via the unicast IGP routing. Because it is the same IP, IGP
normally will select the topologically closest RP (Anycast IP) for each
source and receiver. MSDP peering and information exchange is also
required between the Anycast-RPs in this design, because it is common for
some sources to register with one RP and receivers to join a different RP, as
shown in Figure 13-13.
Figure 13-13 Anycast-RP with MSDP
In the event of any Anycast-RP failure, IGP will converge, and one of the
other Anycast-RPs will become the active RP, and sources and receivers
will fail over automatically to the new active RP, thereby maintaining
connectivity.
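The selection and failover behavior can be illustrated with a tiny sketch (router names and IGP metrics are hypothetical): every router sees the same anycast address, so each simply uses the instance with the lowest IGP metric, and a failed instance drops out of the selection when the IGP reconverges:

```python
def active_rp(rp_metrics, failed=frozenset()):
    """Pick the closest reachable Anycast-RP instance.

    rp_metrics: {rp_name: igp_metric} toward the same anycast address.
    failed: set of instances currently withdrawn from the IGP.
    """
    alive = {rp: m for rp, m in rp_metrics.items() if rp not in failed}
    return min(alive, key=alive.get) if alive else None
```

A router with metrics `{"RP-A": 10, "RP-B": 20}` registers or joins via RP-A; if RP-A fails, the same call with RP-A marked failed returns RP-B, mirroring the automatic failover described above.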
Note
IPv6 does not support MSDP. Therefore, each RP has to define other
RPs in the network as PIM RP set to maintain a full list of sources
and receivers in the network. Alternatively, Anycast-RP using PIM,
described in RFC 4610, can be used instead where the Anycast-RP
functionality can be retained without using MSDP.
1. The closest RP (anycast loopback IP) for each source and receiver will
be selected by the underlying unicast routing protocol (IGP).
2. When RP-B receives the PIM Register message from multicast source
S-B via R-1 (DR), it will decapsulate it and then forward it across the
shared tree toward the interested (joined) receivers.
Phantom RP
With PIM-BIDIR, all the packets technically flow over the shared tree.
Therefore, redundancy considerations of the RP become a critical
requirement. The concept of a phantom RP is specifically used for PIM-
BIDIR, and the phantom RP does not necessarily need to be a physical
RP/router. An IP subnet that is routable in the network can serve the
purpose as well, where the shared tree can be rooted as shown in Figure 13-
15.
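A common way to realize a phantom RP is to advertise the phantom subnet from two routers with different prefix lengths, so that longest-prefix match selects the primary root and the backup takes over when the more specific route is withdrawn. A minimal sketch of that selection (prefixes, router names, and the phantom address are hypothetical):

```python
import ipaddress

def phantom_rp_root(advertisements, phantom_rp="10.0.0.2"):
    """Pick where the shared tree is rooted for a phantom RP address.

    advertisements: {prefix: router} routes covering the phantom address;
    longest-prefix match decides, just as the unicast RIB would.
    """
    rp = ipaddress.ip_address(phantom_rp)
    matches = [(ipaddress.ip_network(p), r) for p, r in advertisements.items()
               if rp in ipaddress.ip_network(p)]
    return max(matches, key=lambda m: m[0].prefixlen)[1] if matches else None
```

With the primary advertising a /29 and the backup a /28 of the same range, traffic roots at the primary; when the /29 disappears, the /28 becomes the best match and the tree re-roots without any RP address change.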
Live-Live Streaming
The term live-live refers to the concept of using two live simultaneous
multicast streams through the network using either a path separation
technique or a dedicated infrastructure and RPs per stream. For instance, as
shown in Figure 13-16 and Figure 13-17, the first stream (A) is sent to one
set of multicast groups, and the second copy of the stream (B) is sent using
a second set of multicast groups. Each of these groups will usually be
delivered using separate infrastructure equipment to the end user with
complete physical path separation, as shown in Figure 13-16. This design
approach offers the ultimate level of resiliency, catering for any failure in
a server or a network component along the path.
However, using a single infrastructure, such as an MPLS-enabled core
with MPLS Traffic Engineering (MPLS-TE) to steer the streams over
different paths across the core, as shown in Figure 13-17,
offers resiliency against any failure on the server side; however, it may not offer
full network resiliency (because both streams will use the same core
infrastructure).
That said, if the MPLS provider caters for different failure scenarios
(optical, node, link, and so on, along with switchover time that is fast
enough to be performed without being noticed by the applications, such as
using MPLS-TE FRR, and also avoids any shared risk link group [SRLG]
along the path), it can offer a reliable and cost-effective solution.
Figure 13-16 Live-Live Stream over Separate Core Networks
One of the primary drivers to adopt such an expensive design approach
(live-live) is the strict requirement of some businesses (commonly in the
financial services industry because each millisecond is worth money) to
aggressively minimize the loss of packets in the multicast data streams by
adopting a reliable and low-latency multicast solution that does not
introduce retransmissions.
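On the receiver side, the two feeds are typically arbitrated by sequence number: the first copy of each packet to arrive is delivered, so a loss on feed A is transparently filled by the copy on feed B. A minimal sketch of that arbitration (the packet representation is hypothetical):

```python
def arbitrate(packets):
    """Merge a live-live pair of feeds on the receiver.

    packets: (seq, payload) tuples from both the A and B feeds, in
    arrival order. Each sequence number is delivered exactly once,
    from whichever feed carried it first.
    """
    delivered = {}
    for seq, payload in packets:
        delivered.setdefault(seq, payload)  # ignore the duplicate copy
    return [delivered[s] for s in sorted(delivered)]
```

This is why live-live avoids retransmissions entirely: as long as at least one feed carries a given packet, the receiver never has to ask for it again.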
Summary
In this chapter, we focused on multicast design, covering multicast
switching, routing, and overall design considerations. Based on the
multicast design options, considerations, and constraints covered in this
chapter, network designers should always answer the following questions
before considering any design recommendation or strategy:
References
Al-shawi, Marwan, CCDE Study Guide (Cisco Press, 2015)
This chapter covers multiple networking and IP service design concepts and
considerations that are considered as additional core topics for the purpose
of the CCDE v3 written and practical exam at the time of this writing. The
different topics discussed in this chapter might be presented as an
application or service to be used to achieve a business need. For example, a
business-critical application might require quality of service (QoS) to be
enabled across the network to work properly.
Note, as well, that this chapter focuses on the design drivers, considerations,
and approaches that network designers can consider based on the different
design requirements, without covering any deep technical details.
Note
IPv6-specific design considerations are covered in this chapter, but
there are no specific CCDE blueprint line items for IPv6. This is
because it is expected that IPv4 and IPv6 are inherently included
throughout every CCDE blueprint domain and topic. The additional
IPv6 section is included in this chapter because it contains critical
topics that all network designers and CCDE candidates should know.
This chapter covers the following “CCDE v3.0 Core Technology List”
sections:
Note
For the Core Technology List item 4.8, only FCAPS is covered in this
chapter. ITIL and TOGAF are covered in Chapter 5, while DevOps
(automation) was covered in Chapter 12.
Caution
The goal of self-assessment is to gauge your mastery of the topics in
this chapter. If you do not know the answer to a question or are only
partially sure of the answer, you should mark that question as wrong
for purposes of the self-assessment. Giving yourself credit for an
answer you correctly guess skews your self-assessment results and
might provide you with a false sense of security.
Foundation Topics
Note
Although IPv6 supports built-in IPsec, it is a myth and misconception
that IPv6 is inherently more secure than IPv4. This assertion
stems from the originally mandated use of IPsec in host-to-host
communications, as specified in RFC 2401. Consequently, if IPsec is
implemented, it will provide confidentiality and integrity between
two hosts, but it still will not address link operation
vulnerabilities and attacks, or most denial-of-service (DoS)
attacks.
With regard to IPv6 addressing, IPv6 has three types of unicast addresses:
Link local: This address is nonroutable and can exist only on a single
Layer 2 domain (fe80::/64). As described in RFC 4291, even when
one or more routable IPv6 addresses are assigned to a certain
interface, an IPv6 link-local address is still required to be enabled on
this interface.
Unique local address (ULA): This address is routable within the
administrative domain of a given network (fc00::/7). This address, in
concept, is similar to the IPv4 private address range (RFC 1918).
Global: This address is routable across the Internet (2000::/3). This
address is similar in concept to the IPv4 public ranges.
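The three scopes can be checked programmatically with Python's standard ipaddress module; the classification helper below is illustrative (note that the link-local scope prefix is fe80::/10, of which fe80::/64 is the range used in practice):

```python
import ipaddress

def ipv6_scope(addr):
    """Classify an IPv6 unicast address into the three scopes above."""
    a = ipaddress.IPv6Address(addr)
    if a in ipaddress.ip_network("fe80::/10"):
        return "link-local"    # nonroutable, single Layer 2 domain
    if a in ipaddress.ip_network("fc00::/7"):
        return "unique-local"  # routable within the administrative domain
    if a in ipaddress.ip_network("2000::/3"):
        return "global"        # routable across the Internet
    return "other"
```

For example, `fe80::1` classifies as link-local, `fd12::1` as unique-local, and `2001:db8::1` as global.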
Network designers can mix and match the different IPv6 addressing types
when designing an enterprise network. However, there are some issues
associated with each design model, as summarized in Table 14-3.
Table 14-3 IPv6 Address Types
IPv6 Addressing Model | Scope | Design Simplicity | Manageability | Scalability
Discovery Phase
At this phase, network architects or designers usually focus on
understanding and identifying the business goals and drivers toward the
enablement of IPv6, in addition to other influencing factors such as project
timeframe, government compliance, and the geographic distribution of the
sites with regard to IP addressing availability. Information about other
influencing factors from the network point of view at this stage also needs
to be identified and gathered at a high level, such as whether the existing
network infrastructures (LAN, WAN, security nodes, services, and
applications) support IPv6 and whether the business is willing to invest and
upgrade the nodes that do not support IPv6. Therefore, it is critical that the
right and relevant information be gathered at this phase so that it can be
analyzed and considered during the planning phase.
In other words, the planning phase takes the output information from the
discovery phase and analyzes it and uses it as a foundation to drive the
selection of the appropriate migration/integration approach. Consider, for
example, that an enterprise needs to migrate its network to be IPv6 enabled
(end to end), but at present the core network components do not support
IPv6 and the business is not allocating any budget to upgrade these
components. In addition, access to some new IPv6-enabled applications
hosted at the data center is an urgent requirement. This information should
be collected during the discovery phase, and the network designer at the
assessment and planning phase is expected to select the right approach to
meet the requirements for this enterprise, taking into consideration the
relevant constraints. In this case, the network designer may suggest either
an IPv6 over IPv4 tunneling mechanism (either sourced from the
workstation or using access/distribution switches), Domain Name System
(DNS)-based translation, or 64NATing to facilitate accessing the new IPv6-
based applications over the existing IPv4 core infrastructure.
The following subsections cover the approaches possible during the
planning phase with regard to IPv6 based on the targeted environment
(enterprise versus service provider). The selected approach should ideally
be complemented with one or more of the technical mechanisms listed later
in Table 14-6 at the design phase. Therefore, each phase (ideally) must take
the outcomes of the previous phase as a foundational input to achieve a
cohesive end-to-end business-driven design that avoids any “design in
isolation” throughout all the phases.
Partially migrated blocks may require tunneling as an interim solution.

Approach: Migrate the enterprise network fully or partially to be pure
IPv6-only or dual stack, by quickly migrating certain (limited) enterprise
modules first, such as data centers.

Considerations: Migrating certain modules of the enterprise network first
means that a DNS translation or tunneling mechanism such as ISATAP is
required to maintain the communications between IPv6 and IPv4 islands
within the network.

Design concerns: This approach is suitable when the core device does not
support IPv6 and requires either hardware or software upgrades. It
increases design and control plane complexity and increases operational
complexity.
Note
Some of the transition approaches to IPv6 for Service Provider
Networks include technical options that are out of scope for the
CCDE v3 exam at the time of this writing. These options are still
listed in Table 14-5 and Table 14-6 to provide the specifics for the
transition, but this chapter does not go into the detail of explaining
them. This includes the following topics: 6PE, 6VPE, and 6rd.
Note
One of the primary considerations when migrating or integrating with
IPv6 is to ensure that IPv6 is secured in the same manner that IPv4 is
secured. Otherwise, the entire network will be vulnerable to network
attackers and breaches. For instance, since the release of Microsoft
Windows Server 2008, IPv6 has been native to the Windows OS,
which supports transition technologies at the server/client level such
as ISATAP. In this case, if one of these servers is compromised and
the network security rules do not consider IPv6, malicious traffic can
ride an IPv6 tunnel or packet without being blocked or contained by
the security devices in the path.
Detailed Design
After selecting the suitable approach for the migration or integration
between IPv4 and IPv6 networks, ideally based on the gathered and
analyzed information during the planning phase, network designers at this
stage can put together the details of the design, such as selecting the
suitable integration mechanism, deployment details such as tunnels
termination, IP addressing, routing design, network security details, and
network virtualization considerations, if any are required. Typically, the
outcome of the design phase will be used by the implementation engineers
during the deployment phase to implement the designed solution.
Therefore, if there is any point that is not doable or practically cannot be
implemented, it will be reported back to this phase (to the network
designer) to be revised and changed accordingly. There are various
mechanisms and approaches with regard to integrating IPv6 and IPv4. For
simplicity, these mechanisms can be classified as follows:
Dual stack
Tunneling based
Translation based
MPLS environment solutions
Table 14-6 lists the various technical mechanisms that can be used to
integrate and support the coexistence of both IPv4 and IPv6, taking into
consideration some of the primary design aspects that influence the solution
selection based on the design requirements.
Note
Information in Table 14-6 is not a best practice or mandatory with
regard to IPv6 integration and migration options. However, it is based
on the most commonly considered technology solutions for certain
scenarios. And network designers must always assess the different
influencing factors before suggesting any approach or mechanism.
Increased control plane complexity. May introduce scalability weaknesses
when both IP versions are running together (depends on available hardware
resources such as memory).

Mechanism | Scenario | Targeted Environment | Design Concern

Increases operational complexity.

Stateful architecture on L2TP network server (LNS).
Retain the ability for its internal users to access some legacy
applications that do not support IPv6 and to access the IPv4 Internet
Provide the ability for external users to access the ABC Corp. new
IPv6 web-based services over the IPv4 Internet
Therefore, ABC Corp. has decided to add another Internet link dedicated to
accessing IPv6 Internet services. Moreover, the security team of ABC Corp.
requires that some (predefined) IPv4 Internet websites ("web-based
services") accessed by the internal IPv6-enabled users appear as if they
are reachable over the IPv6 Internet. For example, when an IPv6-enabled
user accesses a website across the IPv4 Internet using a typical site, such
as www.example.com, the website's domain name resolved by DNS should
appear to the user as an IPv6 source address instead of an IPv4 source
address.
One of the ABC Corp. primary requirements is that the go-live of the IPv6
network project be within six weeks. Therefore, the company has hired a
network consultant to provide a strategic approach that can help it to
achieve this goal within this limited timeframe.
The assumption is that network nodes across the network, end-user devices,
applications, and hosts within the data center support IPv6.
Design Approach
To meet the primary requirements of ABC Corp., taking into account the
design considerations covered earlier, ABC Corp. can consider the
following phased approach.
Phase 1
Provide fast IPv6 enablement across the network (see Figure 14-3):
Enable IPv6 on all network nodes (dual stack), starting from the DC
followed by other nodes such as WAN routers.
Enable IPv6 routing on the network nodes (DC, WAN routers hub and
spokes, and Internet edge).
Enable stateful NAT64 at the IPv4 Internet edge gateway toward the
IPv4 Internet to provide Internet access for the internal IPv6 devices.
Introduce DNS64 functionality to satisfy the requirement of making
Internet service IPv4 source addresses appear as if they are sourced
from an IPv6 address (by synthesizing DNS A records into AAAA
records).
Enable static NAT64 at the DC edge nodes, where the IPv4 to IPv6
static mapping can enable internal IPv6-enabled users to access
legacy IPv4-only applications.
Enable static NAT64 at the IPv4 Internet gateway, where the IPv6 to
IPv4 static mapping can enable external users to access ABC Corp.
IPv6 web-based services over the IPv4 Internet.
Interconnect the IPv6 network islands (spokes/remote sites) with the
HQ/hub site using an IP overlay tunneling mechanism (preferably
IPv6 over mGRE IP tunneling dynamic multipoint VPN [DMVPN])
over the IPv4 MPLS VPN WAN.
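The DNS64 step above can be sketched in a few lines: the resolver embeds the IPv4 address from the A record into a synthesized AAAA record using a /96 NAT64 prefix. The sketch below assumes the well-known prefix 64:ff9b::/96 (RFC 6052); a deployment may use a network-specific prefix instead:

```python
import ipaddress

def dns64_synthesize(a_record, nat64_prefix="64:ff9b::"):
    """Synthesize a AAAA answer from an A answer (RFC 6052 /96 embedding).

    The IPv4 address occupies the low 32 bits of the synthesized address,
    so stateful NAT64 can recover it when translating the packets.
    """
    v4 = int(ipaddress.IPv4Address(a_record))
    return str(ipaddress.IPv6Address(int(ipaddress.IPv6Address(nat64_prefix)) | v4))
```

An IPv6-only client resolving an IPv4-only site thus receives, for example, `64:ff9b::c000:201` in place of `192.0.2.1`, and its traffic toward that address is steered through the NAT64 gateway.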
Phase 2
Design optimization (see Figure 14-3):
Note
For a sample WAN migration example, see Chapter 17, “Enterprise
WAN Architecture Design.”
Quality of Service Design Considerations
In today’s converged networks, there is an extremely high reliance on IT
services and applications. In particular, there is increased demand on real-
time multimedia applications and collaboration services (also known as
media applications) over one unified IP network that carries various types
of traffic streams such as voice, video, and data applications. For instance,
voice streams can be found across the network in different flavors, such as
standard IP telephony, high-definition audio, and Internet Voice over IP
(VoIP). In addition, video communications also have various types, where
each has different network requirements, such as video-on-demand, low-
definition interactive video (such as webcams), high-definition interactive
video (such as telepresence), IP video surveillance, digital signage, and
entertainment-oriented video applications. Data applications, meanwhile,
can be almost unlimited in number.
Therefore, to deliver the desired quality of experience to the different end
users, network designers and architects need to consider a mechanism that
can offer the ability to prioritize traffic selectively (usually real-time and
mission-critical applications) by providing dedicated bandwidth, controlled
jitter and latency (based on the application requirements), and improved
loss characteristics, and at the same time ensuring that providing priority for
any traffic flow will not make other flows fail. This mechanism is referred
to as quality of service (QoS). The following sections discuss the design
approaches and considerations of QoS using a business-driven approach.
Strategic Goal | Approach | Design Considerations

Identify the scope. | Understand the scope of the QoS design, such as
campus, WAN, VPN, or service provider edge, or end to end across different
blocks. | Is the application used within the campus, across the WAN, or
over VPN? Is there any network in the path that is not directly controlled,
such as a WAN? What is the application's sensitivity to packet loss,
jitter, and delay?
QoS Architecture
In general, there are two fundamental QoS architecture models:
Integrated services (IntServ): This model, specified in RFC 1633,
offers an end-to-end QoS architecture based on application transport
requirements (usually per flow) by explicitly controlling network
resources and reserving the required amount of bandwidth (end to end
along the path per network node) for each traffic flow. Resource
reservation protocols, such as RSVP, and admission control
mechanisms form the foundation of this process.
Differentiated services (DiffServ): This model, specified in RFC
2475, offers a QoS architecture based on classifying traffic into
multiple subclasses where packet flows are assigned different
markings to receive different forwarding treatment (per-hop behavior
[PHB]) per network node along the network path within each
differentiated services domain (DS domain).
Note
The aforementioned QoS architectural models are applicable for both
IPv4 and IPv6, because both IP versions include the same 8-bit field
in their headers, which can be used, for example, for DiffServ (IPv4:
Type of Service [ToS]; IPv6: Traffic Class). Therefore, the concepts
and methodologies discussed in this section are intended for both
IPv4 and IPv6 unless otherwise specified, such as if an application
supports and uses the 20-bit Flow Label field of the IPv6
header (RFC 8200 and RFC 6437). However, the larger IPv6 packet header
needs to be considered when calculating the aggregate bandwidth of
traffic flows.
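As a concrete illustration of DiffServ marking from the application side, an endpoint can set the DSCP bits in the ToS/Traffic Class byte of its outgoing packets via a socket option. The sketch below marks a UDP socket with EF (DSCP 46), a value commonly used for voice; it assumes a platform that honors IP_TOS (Linux does):

```python
import socket

EF_DSCP = 46              # Expedited Forwarding per-hop behavior
TOS_BYTE = EF_DSCP << 2   # DSCP occupies the top 6 bits of the ToS byte

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS_BYTE)
# Outgoing IPv4 datagrams on this socket now carry DSCP EF (ToS byte 0xB8)
```

Whether the network honors this marking depends on the trust model discussed below: an untrusted edge port will typically re-mark or police such traffic rather than accept the endpoint's own DSCP value.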
Note
Per-hop behavior (PHB) defines a forwarding behavior with regard to
the treatment characteristics for the traffic class, such as its dropping
probabilities and queueing priorities, to offer different levels of
forwarding for IP packets (described in RFC 2474). DS domains
described in this section refer to any domain with QoS policies and
differentiated treatments, regardless of whether it is IP Precedence based
(RFC 791), Assured Forwarding based (RFC 2597), or a mixture of both.
Note
Service provisioning policy refers to the attributes of traffic
conditioners (QoS policies) deployed at each DS–domain boundary
and mapped across the DS domain.
Trusted model: This model can be used with endpoints that can mark
their traffic. At the same time, these endpoints have to be approved
and trusted from a security point of view, such as IP phones, voice
gateways, wireless access points, videoconferencing, and video
surveillance endpoints. In addition, these trusted endpoints ideally
should not be mobile (that is, they should be fixed endpoints) so that
trust can be applied at the switch port level in a more controlled manner.
Untrusted model: This model usually considers using manual traffic
classification and marking. The most common candidates of this
model are PCs and servers because these endpoints are subject to
attack and infection by worms and viruses that can flood the network
with a high volume of malicious traffic. Even worse, this traffic might
be marked with a CoS or DSCP value that has priority across the
network, which usually leads to a true denial-of-service (DoS)
situation. However, PCs and servers normally run business-critical
applications that need to be given certain service differentiation across
the network, such as a PC running a software-based phone or a server
running business-critical applications such as SAP or CRM. With this
model, network designers can selectively classify each of the desired
application’s traffic flows to be treated differently across the network
and mark it with the desired CoS/DSCP value along with a policy that
either limits each application class to a predefined maximum
bandwidth or marks down the out-of-profile traffic to a CoS/DSCP
value that has lower priority across the network. Furthermore, as a
simple rule of thumb, any endpoint that is not under control by the
enterprise should be considered untrusted, and the classification and
marking of traffic flows can be controlled selectively and manually.
Conditional trust model: This model offers the ability to extend the
trust boundary of the network edge to the device or endpoint it is
connected to. This is based on an intelligent detection of a trusted
endpoint, usually an IP phone. (In Cisco solutions, this is achieved by
using Cisco Discovery Protocol [CDP].) However, the IP phone in this
scenario may have a PC connected to the back of the phone. Therefore,
by extending the trust boundary to the IP phone, the phone can send
its own traffic in a trusted manner while re-marking the PC's traffic to
a DSCP value of 0. This model offers a simple and easy method to roll out
large IP telephony deployments with minimal operational and
configuration complexity. However, if the PCs have some applications
that need to be marked with a certain DSCP value, such as a softphone
or a business video application, in this case, manual traffic
classification and marking are required at the edge port of the access
switch to identify this traffic and mark it with the appropriate
CoS/DSCP value and ideally associate it with a policer.
Note
DSCP marking is more commonly used than IP Precedence (IPP)
because of its higher flexibility and scalability to support a wider
range of classes. However, in some scenarios, a mix of both may be
required to be maintained, such as migration or integration between
different domains (such as in merger and acquisition scenarios and
WAN MPLS VPN service providers that offer their CoS classes based
on IPP) where seamless QoS interoperability has to be maintained. In
this case, class selector PHB is normally used to provide backward
compatibility with ToS-based IP Precedence (RFC 4594, 2474).
Logically, after traffic flows are classified and marked (whether manually at
the edge of the DS domain or automatically under one of the trust
boundary models discussed earlier), they have to be grouped into DS
classes. Usually, application flows that share similar or the same traffic
characteristics and network requirements such as delay, jitter, and packet
loss can be placed under the same DS class to achieve a structured traffic
grouping that helps network operators to assign the desired treatment at
different locations across the DS domain (per class), such as assigning
different queuing models per class to control traffic flows during periods of
traffic congestion.
Supports low-latency queuing (LLQ).
There are other queuing techniques, but they are not covered in this section
because they are less commonly used or offer basic queuing capabilities.
However, this does not mean that they are not used or cannot be considered
as an option. These other techniques include weighted round-robin (WRR)
and custom queuing. First-in, first-out (FIFO) queuing is the default
queuing when no other queuing is used. Although FIFO is considered
suitable for large links that have a low delay with very minimal congestion,
it has no priority or classes of traffic.
Although WFQ offers a simplified, automated, and fair flow distribution, it
can impact some applications. For instance, a telepresence endpoint may
require end-to-end 5 Mbps for the video Real-time Transport Protocol (RTP)
media stream over a 10-Mbps WAN link, and there may be multiple flows
passing over the same link, let’s say ten flows in total. Typically, with WFQ
fairness, telepresence video streams will probably get one-tenth of the total
available bandwidth of the 10 Mbps, which will lead to degraded video
quality. With CBWFQ, though, network designers can place the flows of
the telepresence RTP media streams in their own class with a minimum
bandwidth guarantee of 5 Mbps during congestion periods. In addition,
interactive video traffic can be assigned to the LLQ to be prioritized and
serviced first during an interface congestion situation, as shown in Figure
14-7.
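The fairness arithmetic in this example can be sketched as follows; the link speed, flow count, and bandwidth guarantee are the example's own figures:

```python
link_bps = 10_000_000           # the 10-Mbps WAN link in the example
flows = 10                      # ten concurrent flows sharing the link
wfq_share = link_bps // flows   # WFQ: roughly one equal share per flow
cbwfq_guarantee = 5_000_000     # CBWFQ: explicit minimum for telepresence
# wfq_share is 1 Mbps: far below the 5 Mbps the video stream needs.
```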
Note
The different Cisco software versions, such as IOS, include a built-in
policer (implicit policer) with the LLQ, which limits the available
bandwidth of the LLQ (such as real-time traffic flows) to match the
bandwidth allocated to the strict-priority queue, thus preventing
bandwidth starvation of the non-real-time flows serviced by the
CBWFQ scheduler. However, this behavior (implicit LLQ policing)
is applicable only during periods of interface congestion (full Tx-
Ring). A similar concept is applicable to the multi-LLQ model, where
a separate implicit policer is enabled per LLQ.
Hierarchical QoS
The most common scenario at the enterprise edge is that links are
provisioned with sub-line rate. For instance, the WAN service provider may
provide the physical connectivity over a 1-Gbps Ethernet copper link,
whereas the actual provisioned bandwidth can be 10 Mbps, 50 Mbps, or any
sub-line rate. The problem with this setup, even if QoS policies are applied
on this interface, such as CBWFQ, is that it will not provide any value or
optimization because QoS policies normally kick in when the interface
experiences congestion. In other words, because the physical link line rate
in this scenario is higher than the actual provisioned bandwidth, there will
be no congestion detected even if the actual provisioned sub-line rate is
experiencing congestion, which means that QoS has no impact here.
Therefore, with hierarchical QoS (HQoS), the shaper at the parent policy
(as shown in Figure 14-8) can simulate what is known as backpressure to
inform the router that congestion has occurred, at which point the child
QoS policies can take effect.
Admission Control
Admission control is a common and essential mechanism used to keep
traffic flows in compliance with the DS domain traffic conditioning
standards, such as an SLA between a service provider and its customers that
specifies the maximum allowed traffic rate per class and in total per link,
where excess packets will be discarded to keep traffic flows within the
agreed traffic profile (SLA). There are two primary ways that admission
control can be performed: traffic policing and traffic shaping. With traffic
policing, when traffic streams reach the predefined maximum contracted
rate, excess traffic is either dropped or re-marked (marked down), as shown
in Figure 14-9.
Figure 14-9 Traffic Policing
Traffic shaping, in contrast, keeps excess packets in a queue, buffered and
delayed, and then schedules the excess for later transmission over
increments of time. As a result, traffic shaping will smooth packet output
rate and prevent unnecessary drops, as shown in Figure 14-10. However,
the buffering of excess packets may introduce delay to traffic flows,
especially with deep queues. Therefore, with real-time traffic it is
sometimes preferred to police and drop excess packets rather than delay
and later transmit them, to avoid a degraded quality of experience.
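As a rough behavioral sketch (not how any platform implements admission control), the difference between policing and shaping can be modeled with a single-rate token bucket. Packets are (arrival-time, size) pairs; the rate and burst values below are arbitrary assumptions for illustration:

```python
from collections import deque

class TokenBucket:
    """Single-rate token bucket shared by both sketches (rate in bps)."""
    def __init__(self, rate_bps, burst_bytes):
        self.rate = rate_bps / 8.0      # token fill rate in bytes/second
        self.burst = burst_bytes        # bucket depth
        self.tokens = burst_bytes
        self.last = 0.0

    def conforms(self, size, now):
        # Refill tokens for the elapsed time, capped at the bucket depth.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= size:
            self.tokens -= size
            return True
        return False

def police(packets, bucket):
    """Policer: out-of-profile packets are simply dropped (or re-marked)."""
    return [p for p in packets if bucket.conforms(p[1], p[0])]

def shape(packets, bucket):
    """Shaper: out-of-profile packets wait in a queue and leave later.

    Assumes every packet fits in the bucket (size <= burst).
    """
    queue, out, now = deque(packets), [], 0.0
    while queue:
        arrival, size = queue[0]
        now = max(now, arrival)
        if bucket.conforms(size, now):
            out.append((now, size))     # departure time may exceed arrival
            queue.popleft()
        else:
            now += 0.001                # wait for tokens to accumulate
    return out
```

Fed the same burst, the policer discards the excess while the shaper delivers everything at the cost of delay, which is exactly the trade-off described above for real-time traffic.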
Note
While the IETF DiffServ RFCs provide a consistent set of PHBs for
applications marked to specific DSCP values, they do not specify
which application should be marked with which DSCP value.
Therefore, considerable industry disparity exists in application-to-
DSCP associations, which led Cisco to put forward a standards-based
application marking recommendation in its strategic architectural
QoS Baseline document (in 2002). Eleven different application
classes were examined and extensively profiled and then matched to
their optimal RFC-defined PHBs. More than four years after Cisco
put forward its QoS Baseline document, RFC 4594 was formally
accepted as an informational RFC (in August 2006). RFC 4594 puts
forward 12 application classes and matches these to RFC-defined
PHBs, as summarized in Figure 14-11.
Figure 14-11 Twelve-Class QoS Baseline Model Based on Cisco and
RFC 4594 Baselines
Note
The most significant difference between the Cisco QoS
Baseline and RFC 4594 is the RFC 4594 recommendation to mark
call signaling to CS3 rather than AF31 (as in the original QoS Baseline
of 2002). It is important to remember that RFC 4594 is an informational
RFC; in other words, it is only an industry best practice and not a
standard.
The 12-class QoS model is a comprehensive and flexible model, which can
be standardized and considered across the enterprise network. However, this
model is not always viable or achievable for the following reasons:
Not all enterprises or SPs are ready or need to deploy such a wide QoS
design model.
This 12-class QoS design can introduce a level of end-to-end QoS
design and operational complexity, because most WAN providers offer
either 4- or 6-class QoS models across the WAN.
Note
The biggest concern with regard to the operational complexity is that
it is prone to issues caused by human errors; this point is covered
later in the “Network Management” section.
Ideally, what drives the number of classes (QoS model) is how many
applications used across the network need special consideration and the
level of service differentiation required for delivering the desired level of
quality of experience. As shown in Figure 14-12, if the WAN provider is
offering a 4-class QoS model, it is not an easy task to map from the 12-class
model to the 4 classes (outbound), and from the 4-class model to the 12-
class model (inbound) at each WAN edge, especially if there is a large
number of sites. In contrast, if the enterprise is using a 4-, 6-, or 8-class QoS
model, the operation and design complexities will be minimized with regard
to QoS policies and configurations.
Figure 14-12 Mapping Between QoS Models with Different Classes
However, both 4- and 6-class models include provisioning for only a single
class for real-time traffic (usually voice), and if video is added to the
network, either a higher QoS class model (such as an 8-class model) must
be considered or both voice and video traffic have to be
provisioned under a single class, which may not be a desirable option for
large deployments with a large number of IP telephony and video endpoints
across multiple sites. Therefore, considering the top-down approach to
identifying the business and user expectations, needs, and priorities in terms
of the services and applications used, in addition to their level of criticality
to the business, will help, to a large extent, drive the strategy of QoS design
in the right direction (business-driven).
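One plausible 12-class to 4-class mapping of the kind shown in Figure 14-12 can be sketched as a simple lookup table. The enterprise class names follow RFC 4594; the four provider class names are assumptions for illustration, since each WAN provider names its own classes:

```python
# Hypothetical mapping applied outbound at the WAN edge (CE toward PE).
twelve_to_four = {
    "voice":                    "REALTIME",
    "broadcast-video":          "REALTIME",
    "realtime-interactive":     "REALTIME",
    "multimedia-conferencing":  "CRITICAL-DATA",
    "multimedia-streaming":     "CRITICAL-DATA",
    "network-control":          "CRITICAL-DATA",
    "signaling":                "CRITICAL-DATA",
    "ops-admin-mgmt":           "CRITICAL-DATA",
    "transactional-data":       "CRITICAL-DATA",
    "bulk-data":                "BEST-EFFORT",
    "best-effort":              "BEST-EFFORT",
    "scavenger":                "SCAVENGER",
}
```

The inbound direction is harder: the 4-to-12 expansion is ambiguous (several enterprise classes collapse into one provider class), which is why the text calls the mapping at each WAN edge a nontrivial task.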
Note
Network orchestration and automation tools may help eliminate the
operational complexity discussed earlier. However, this is something
that depends on the configuration and change management and on the
platforms and architecture used. For instance, the level of automation
in software-defined networks (SDNs) is typically high, with
simplified manageability compared to other models.
Enterprise Campus
Today’s campus networks are almost all provisioned with Gigabit/10
Gigabit of bandwidth across the LAN, in which QoS might be seen as an
unnecessary service to be considered because the need for queuing is
minimal or almost not required as compared to the WAN and Internet edge,
where queuing is a primary function to enhance the service quality.
Although this statement is valid to some extent, the need for QoS is not
limited to queuing functions.
The unified marking and accurate traffic classification (as close to the
source as possible) that QoS can offer, when enabled across the campus
LAN, also enable policing to provide network operators the flexibility
and control to manage traffic usage based on different fields of traffic
flows, such as ToS values. It can also be used as
a protective mechanism in situations like DoS attacks, to mitigate their
impact. Therefore, it is recommended that QoS be enabled across the
campus LAN to maintain a seamless DS domain design in which
classification and marking policies establish and impose trust boundaries,
while policers help to protect against undesired flows at the access edge and
across the LAN.
Enterprise Edge
The enterprise edge (WAN, extranet, or Internet) is the most common place
that traffic flow aggregation occurs (a larger number of traffic flows
usually arrives from the LAN side and needs to exit the enterprise edge,
which is usually provisioned with lower capacity). For
instance, the LAN side might be provisioned with Gigabit/10 Gigabit,
whereas the WAN has only 10 Mbps of actual available bandwidth.
Therefore, QoS is always one of the primary functions considered at the
enterprise edge to achieve more efficient bandwidth optimization,
especially for enterprises that are converging their voice, video, and data
traffic. Logically, the enterprise edge represents the DS domain edge where
mapping and profiling of traffic flow to align with the adjacent DS domain
is required to maintain a seamless QoS design. For instance, Figure 14-15
shows an example of a 12-class to 4-class QoS model mapping at the
enterprise WAN edge router (CE) toward the SP edge (PE) to achieve end-
to-end consistent QoS policies.
Figure 14-15 QoS Mapping: Enterprise WAN Edge
Note
The allocated bandwidth percentage in Figure 14-15 is only for the
purpose of this example and is based on best-practice
recommendations. However, these values technically must be
allocated based on the SLA between the service provider and
enterprise customer.
Note
It is common that service providers offer their CoS based on IP
Precedence marking only. As a result, the enterprise marking based
on DSCP and deployed with the 8- or 12-class model may encounter
inconsistent end-to-end markings. For instance, if a video RTP stream is
sent out from one site with its packets marked with a DSCP value of 34
(AF41; in binary, 100010), it will convert
to IPP 4 (in binary, 100). In turn, it will come back as DSCP 32
(binary, 100000) at the other remote site. Therefore, a re-marking is
required in this case at the other side (receiving) in the ingress
direction to maintain a unified QoS marking end to end. Also, the
used MPLS DiffServ tunneling mode and its impact on the original
DSCP marking across the service provider network are covered later
in this chapter.
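The marking round trip in this note is just bit arithmetic on the 6-bit DSCP field, as the following sketch shows:

```python
def dscp_to_ipp(dscp):
    """IPP is simply the top 3 bits of the 6-bit DSCP field."""
    return dscp >> 3

def ipp_to_dscp(ipp):
    """Restoring from IPP alone yields a class-selector value; the low
    3 bits (e.g., the AF drop precedence) are lost."""
    return ipp << 3

af41 = 34                        # binary 100010
ipp = dscp_to_ipp(af41)          # 4  (binary 100)
restored = dscp_to_ipp(af41) << 3   # 32 = CS4 (binary 100000), not AF41
```

This is why ingress re-marking at the receiving site is required to restore AF41: the information discarded by the IPP-only provider cannot be recovered from the packet itself.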
Note
At the hub site, network operators still need to define a policy to
shape the traffic of the interface to match the actual provisioned sub-
line rate of bandwidth. Spokes, however, should follow the typical
HQoS deployment per site, where the bandwidth of the Internet/WAN
link must be shaped to the maximum upload provisioned capacity.
This means direct spoke-to-spoke traffic streams will be controlled by
the HQoS policies defined at the spokes level.
One key issue network designers must be aware of when considering per-
tunnel QoS for a DMVPN solution is that GRE, IPsec, and the L2 overhead
must be considered when calculating the required bandwidth for QoS
shaping and queuing, because these headers will be included as part of the
packet size. This is because queuing and shaping, technically, are executed
at the outbound physical interface of the DMVPN mGRE tunnel. GETVPN,
in contrast, preserves the entire original IP packet header (source and
destination IP addresses, TCP and UDP port numbers, ToS byte, and DF
bit) simply because QoS is applied at each GETVPN group member
(because no tunnels are used). This makes GETVPN design and
deployment simpler, and the same standard WAN QoS design can be
applied, with the exception that packet size will be increased and therefore
must be considered as part of QoS bandwidth calculations.
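As a hedged sketch, the per-packet overhead can be folded into the shaper math as follows. The byte counts are typical approximations only (they vary with the cipher, GRE options, and L2 encapsulation in use), not exact values for any platform:

```python
# Approximate per-packet overhead in bytes (assumptions: IPv4 GRE without
# a key, ESP with AES-CBC/SHA-1, Ethernet framing; real values vary).
GRE_BYTES = 24
IPSEC_BYTES = 52
L2_BYTES = 18

def shaped_rate_bps(pps, payload_bytes):
    """Rate the per-tunnel shaper must budget for, overhead included."""
    wire_bytes = payload_bytes + GRE_BYTES + IPSEC_BYTES + L2_BYTES
    return pps * wire_bytes * 8
```

For small packets such as voice, the overhead can approach half the wire size, which is why ignoring it when sizing DMVPN per-tunnel shapers leads to significant under-provisioning.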
Network Management
As discussed earlier in this book, today’s modern networks carry multiple
business-critical applications over one unified infrastructure such as voice,
video, and data. In addition, in today’s competitive telecommunications
market, service providers always aim to satisfy their customers by meeting
strict SLA requirements. As discussed earlier, various technologies,
protocols, architectures, and constraints all collectively construct an
operational network. However, designing and deploying a network using a
business-driven approach will not by itself guarantee the quality of the
solution (that is, whether the network is really operating as expected).
Moreover, traffic requirements in terms of pattern and volume can change
over time as a natural result of business organic growth, merger and
acquisition, and the introduction of new applications. This means that the
network may end up handling traffic flows that it was not designed for. The
question here is this: How can IT leaders and network operators know about
what is going on? If the network is facing performance issues that need to
be taken care of, how can the change and alteration be performed in a
tracked and structured manner? In fact, configurations and changes are
more of a concern compared to other aspects because any error can lead to
either downtime or degraded performance, and generally, most network
downtimes are caused by human error.
Consequently, there must be a set of procedures and mechanisms capable of
measuring and providing real-time and historical information about every
single activity across the network. This information helps the IT team
act in a more proactive manner, instead of relying only on a reactive
approach, so that issues or abnormal behaviors can be identified and
fixed effectively and in a timely manner, keeping the mean time to
repair (MTTR) as short as technically possible. Furthermore, the actions
taken by IT should be performed in a controlled and structured manner so
that they can be tracked and recorded, and combined with some automation
of configuration and changes to help reduce the percentage of human errors.
For the IT team to achieve this, they need a network management solution
that controls the operation, administration, maintenance, and provisioning.
There are several industry standards and frameworks in the area of network
management. This section discusses the ITU-T standard (FCAPS).
After all or most of these questions have been answered, network designers
should have a good understanding about the high-level targeted network
management solution and should be able to start specifying its detailed
design, which should answer at least the following questions:
Taking these questions into consideration, network designers can drive the
solution selection and can specify which features and protocols are required
and where they should be enabled. For example, an MPLS service provider
decided to offer VoIP service to its customers to make voice calls to the
public switched telephone network (PSTN) numbers across the service
provider IP backbone, in addition to Internet Session Initiation Protocol
(SIP) VoIP applications using Cisco Unified Border Element (CUBE) as a
SIP gateway, as shown in Figure 14-18.
YANG
Yet Another Next Generation (YANG) is an IETF standard (RFC 6020)
data modeling language used to describe the data for network configuration
protocols such as NETCONF and RESTCONF. YANG is extensible
through augmentation, allowing new content to be added as needed to the
YANG language. YANG has a hierarchical configuration structure within
data models, which makes it very easy to read and reuse as needed. Figure
14-21 provides an example of a YANG data model. YANG is a full, formal
contract language with rich syntax and semantics to build applications on.
NETCONF
Network Configuration Protocol (NETCONF) is a network management
protocol defined by the IETF in RFC 6241. NETCONF provides rich
functionality for managing configuration and state data. The protocol
operations are defined as remote procedure calls (RPCs) for requests and
replies in XML-based representation. NETCONF supports running,
candidate, and startup configuration datastores. The NETCONF capabilities
are exchanged during session initiation. Transaction support is also a key
NETCONF feature. NETCONF is a client/server protocol and is
connection-oriented over TCP. All NETCONF messages are transported over
SSH (providing encryption) and encoded in XML. A NETCONF manager is a client, and a
NETCONF device is a server. The initial contents of the <hello> message
define the NETCONF capabilities that each side supports. The YANG data
model defines capabilities for the supported devices. In addition, other
standards bodies and proprietary specifications define capabilities. Figure
14-22 highlights the different NETCONF operations and datastore
capabilities.
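A minimal sketch of the XML a NETCONF client might emit, using only Python's standard library (the SSH transport and message framing are omitted, and the capability URI is just one example):

```python
import xml.etree.ElementTree as ET

NS = "urn:ietf:params:xml:ns:netconf:base:1.0"

def build_hello():
    """Client <hello> advertising one base capability at session start."""
    hello = ET.Element(f"{{{NS}}}hello")
    caps = ET.SubElement(hello, f"{{{NS}}}capabilities")
    cap = ET.SubElement(caps, f"{{{NS}}}capability")
    cap.text = "urn:ietf:params:netconf:base:1.1"
    return ET.tostring(hello, encoding="unicode")

def build_get_config(message_id="101", datastore="running"):
    """An RPC <get-config> against one of the datastores mentioned above."""
    rpc = ET.Element(f"{{{NS}}}rpc", attrib={"message-id": message_id})
    get = ET.SubElement(rpc, f"{{{NS}}}get-config")
    source = ET.SubElement(get, f"{{{NS}}}source")
    ET.SubElement(source, f"{{{NS}}}{datastore}")
    return ET.tostring(rpc, encoding="unicode")
```

The server's reply arrives as a matching <rpc-reply> carrying the requested configuration, correlated by the message-id attribute.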
RESTCONF
RESTCONF, which is defined in RFC 8040, is an HTTP-based protocol
that provides a programmatic interface for accessing YANG modeled data.
RESTCONF uses HTTP operations to provide create, retrieve, update, and
delete (CRUD) operations on a NETCONF datastore containing YANG
data. RESTCONF is tightly coupled to the YANG data model definitions. It
supports HTTP-based tools and programming libraries. RESTCONF can be
encoded in either XML or JSON.
When comparing RESTCONF with NETCONF, RESTCONF has:
No notion of transaction
No notion of lock
No notion of candidate config and commit
No notion of two-phase commit
No <copy-config>
XML or JSON, while NETCONF is only XML
Secure transport: SSH for NETCONF versus HTTPS for RESTCONF
The operations that are listed in Table 14-10 are not an all-inclusive list of
operations for both NETCONF and RESTCONF. Figure 14-23 takes the
YANG example from Figure 14-21 and adds the corresponding
RESTCONF HTTP operations on the right.
Figure 14-23 Example YANG Data Model with RESTCONF HTTP
Operations
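The CRUD-to-HTTP mapping that RESTCONF defines (RFC 8040) can be sketched as a lookup plus a URL builder; the device hostname and the ietf-interfaces paths below are assumptions for illustration, not real devices:

```python
BASE = "https://device.example.net/restconf/data"

crud_to_http = {
    "create":   "POST",
    "retrieve": "GET",
    "update":   "PATCH",    # PUT instead replaces the whole target resource
    "delete":   "DELETE",
}

def resource_url(module, path, key=None):
    """URL for a YANG-modeled resource, e.g., a container or a list entry."""
    url = f"{BASE}/{module}:{path}"
    if key is not None:
        url += f"={key}"    # list-entry keys follow the list name after '='
    return url
```

For example, retrieving a single interface would pair crud_to_http["retrieve"] with resource_url("ietf-interfaces", "interfaces/interface", "GigabitEthernet1").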
Closed-Loop Automation
Closed-loop automation (CLA) is a continuous process that monitors,
measures, and assesses real-time network traffic and then automatically acts
to optimize end-user quality of experience.
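A minimal sketch of one iteration of such a loop, with placeholder measurement and remediation callbacks (the loss metric and threshold are illustrative assumptions, not a real controller API):

```python
def closed_loop_step(measure, act, loss_threshold_pct=1.0):
    """One iteration: monitor/measure, assess against policy, then act."""
    loss = measure()                   # monitor: e.g., probe packet loss (%)
    if loss > loss_threshold_pct:      # assess: is the path out of policy?
        act(loss)                      # act: remediate automatically
        return "remediated"
    return "in-policy"
```

In practice the loop runs continuously, and the remediation step might reroute traffic or adjust QoS policy before users notice degradation.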
Full-Stack Observability
Full-stack observability (FSO) is defined by metrics, events, logs, and
traces. Modern applications span multiple environments. Today, a typical
mobile application comprises hundreds of services communicating with
each other over a zero-trust multi-cloud landscape, all of which have to
work flawlessly. The level of complexity of these applications is
tremendously higher than in decades past. We can no longer manage or
optimize these applications with traditional approaches because there is
too much data with too little context and correlation. Traditional
monitoring only gives visibility at the domain level,
whether it be the network, infrastructure level, cloud, or database. The
combined full-picture view is becoming more critical for the best user
experience. This is where FSO comes to the forefront. Organizations
require complete visibility and insights to properly take relevant action at
the right time. To achieve this, there has to be a capability to measure the
inner state of these applications based on the data generated by them, such
as logs, metrics, and traces, which is also known as observability.
Summary
This chapter covered various advanced IP topics and services that are part
of any network design. To avoid design defects, network designers need to
always incorporate these services in an integrated holistic approach rather
than designing in isolation. Moreover, considering the top-down design
approach is a fundamental requirement to achieving a successful business-
driven design (for example, ensuring that the design complies with the
organization’s security policy standards). This chapter also emphasized the
importance of considering business priorities and design constraints:
network design ideally must adopt the "first things first" approach,
taking into consideration existing limitations such as staff knowledge,
budget, or supported features and technologies.
References
Al-shawi, Marwan, CCDE Study Guide (Cisco Press, 2015)
This chapter covers the following “CCDE v3.0 Core Technology List”
sections and provides design recommendations from an enterprise campus
architecture standpoint:
Caution
The goal of self-assessment is to gauge your mastery of the topics in
this chapter. If you do not know the answer to a question or are only
partially sure of the answer, you should mark that question as wrong
for purposes of the self-assessment. Giving yourself credit for an
answer you correctly guess skews your self-assessment results and
might provide you with a false sense of security.
Foundation Topics
Three-Tier Model
A three-tier model, illustrated in Figure 15-1, is typically used in large
enterprise campus networks, which are constructed of multiple functional
distribution layer blocks.
Two-Tier Model
A two-tier model, illustrated in Figure 15-2, is more suitable for small to
medium-size campus networks (ideally not more than three functional
distribution blocks to be interconnected), where the core and distribution
functions can be combined into one layer, also known as collapsed core-
distribution architecture.
Note
The term functional distribution block refers to any block in the
campus network that has its own distribution layer such as a user
access block, WAN block, or data center block.
Campus Modularity
By applying the hierarchical design model across the multiple functional
blocks of the enterprise campus network, a more scalable and modular
campus architecture (commonly referred to as building blocks) can be
achieved. This modular enterprise campus architecture offers a high level of
design flexibility that makes it more responsive to evolving business needs.
As highlighted earlier in this book, modular design makes the network more
scalable and manageable by promoting fault domain isolation and more
deterministic traffic patterns. As a result, network changes and upgrades can
be performed in a controlled and staged manner, allowing greater stability
and flexibility in the maintenance and operation of the campus network.
Figure 15-3 depicts a typical campus network along with the different
functional modules as part of the modular enterprise architecture design.
Figure 15-3 Typical Modular Enterprise Campus Architecture
Note
Within each functional block of the modular enterprise architecture,
to achieve the optimal structured design, you should apply the same
hierarchical network design principle.
Note
Following the principle “build today with tomorrow in mind” can
lead CCDE candidates into a gold-plating situation. For the CCDE
exam, candidates should solve the problem at hand, and no more,
unless the scenario states otherwise.
That being said, sometimes (when possible) you need to gain support
from the business first to drive the design in the right direction. To gain
the support of the organization's IT leaders, you need to highlight and explain
to them the extra cost and challenges of operating a network that either was
not designed optimally with regard to their projected business expansion
plans or was designed for yesterday’s requirements and is incapable of
handling today's requirements. Consequently, this may help influence the
business decision, as the additional cost of considering the three-tier
architecture will be justified to the business in this case (long-term
operating expenditure [OPEX] versus short-term CAPEX). In other words,
sometimes businesses focus only on the reduction of CAPEX without
considering that OPEX can probably cost them more in the long run if the
solution was not architected and designed properly to meet their current and
future requirements.
Note
The routed access design model does not support spanning Layer 2
VLANs across multiple access switches, and this might not be a good
choice for some networks. Although expanding Layer 2 over routed
infrastructure is achievable using other different overlay technologies,
this might add complexity to the design, or the required features may
not be supported with the existing platforms for the access or
distribution layer switches.
The left side of Figure 15-6 represents the physical connectivity, and the
right side shows the logical view of this architecture, which is based on the
switch clustering design model across the entire modular campus network.
Figure 15-6 Switch Clustering Concept
Table 15-2 compares the different access-distribution connectivity design
models from different design angles.
Note
All the design models discussed in this section are valid design
options. However, the design choice must be driven by the
requirements and design constraints, such as cost, which can
influence which option you can select. For example, an access switch
with Layer 3 capabilities is more expensive than a switch with Layer
2 capabilities only. This factor will be a valid tiebreaker if cost is a
concern from the perspective of business requirements.
Scalability: EIGRP offers high scalability with proper query domain
containment via EIGRP stubs and summarization; OSPF offers high
scalability with proper OSPF area design and area type selection.
MPLS-TE support: no for EIGRP; yes for OSPF.
Campus Network Virtualization Design
Considerations
Virtualization in IT generally refers to the concept of having two or more
instances of a system component or function such as operating system,
network services, control plane, or applications. Typically, these instances
are represented in a logical virtualized manner instead of being physical.
Virtualization can generally be classified into two primary models:
Note
It is important that network designers understand the drivers toward
adopting network virtualization from a business point of view, along
with the strengths and weaknesses of each design model. This ensures
that when a network virtualization concept is considered in a given
area within the network or across the entire network, it will deliver
the promised value (and not be used only because it is easy to
implement or it is an innovative approach). As discussed earlier in
this book, a design that does not address the business’s functional
requirements is considered a poor design; consider the design
principle “no gold plating” discussed in Chapter 1, “Network
Design.”
Note
One of the main concerns about network virtualization is the concept
of fate sharing (aka shared failure state), because any failure in the
physical network can lead to a failure of multiple virtual networks
running over the same physical infrastructure. Therefore, when the
network virtualization concept is used, ideally a reliable and highly
available network design should be considered as well. Besides the
constraints about virtual network availability, there is always a
concern about network virtualization (multitenant environment)
where multiple virtual networks (VNs) operate over a single physical
network infrastructure and each VN probably has different traffic
requirements (different applications and utilization patterns).
Therefore, there is a higher potential of having traffic congestion and
degraded application quality and user experience if there is no
efficient planning with regard to the available bandwidth, number of
VNs, traffic volume per VN, applications in use, and the
characteristics of the applications. In other words, if there is no
adequate bandwidth available and the quality of service (QoS)
policies to optimize and control traffic behaviors, one VN may
overutilize the available bandwidth of the underlying physical
network infrastructure. This will usually lead to traffic congestion
because other VNs are using the same underlying physical network
infrastructure, resulting in fate sharing.
Device virtualization
Path isolation
Services virtualization
Moreover, you can use the techniques of the different models individually
to serve certain requirements or combine them to achieve one cohesive end-
to-end network virtualization solution. Therefore, network designers must
have a good understanding of the different techniques and approaches,
along with their attributes, to select the most suitable virtualization
technologies and design approaches for delivering value to the business.
Device Virtualization
Also known as device partitioning, device virtualization represents the
ability to virtualize the data plane, control plane, or both, in a certain
network node, such as a switch or a router. Using device-level virtualization
by itself will help to achieve separation at Layer 2, Layer 3, or both, on a
local device level. The following are the primary techniques used to achieve
device-level network virtualization:
Path Isolation
Path isolation refers to the concept of maintaining end-to-end logical path
transport separation across the network. The end-to-end path separation can
be achieved using the following main design approaches:
Service Virtualization
One of the main goals of virtualization is to separate services access into
different logical groups, such as user groups or departments. However, in
some scenarios, there may be a mix of these services in terms of service
access, in which some of these services must only be accessed by a certain
group and others are to be shared among different groups, such as a file
server in the data center or Internet access, as shown in Figure 15-14.
Figure 15-14 End-to-End Path and Services Virtualization
Therefore, in scenarios like this where service access must be separated per
virtual network or group, the concept of network virtualization must be
extended to the services access edge, such as a server with multiple VMs or
an Internet edge router with single or multiple Internet links.
The virtualization of a network can be extended to other network service
appliances, such as firewalls. For instance, you can have a separate virtual
firewall per virtual network, to facilitate access control between the virtual
user network and the virtualized services and workload, as shown in Figure
15-15. The virtualization of network services can be considered a “one-
to-many” virtualization at the network device level.
Figure 15-15 Firewall Virtual Instances
Furthermore, in multitenant network environments, multiple security
contexts offer a flexible and cost-effective solution for enterprises (and for
service providers). This approach enables network operators to partition a
single pair of redundant firewalls or a single firewall cluster into multiple
virtual firewall instances per business unit or tenant. Each tenant can then
deploy and manage its own security policies and service access, which are
virtually separated. This approach also allows controlled inter-tenant
communication. For example, in a typical multitenant enterprise campus
network environment with MPLS VPN (L3VPN) enabled at the core, traffic
between different tenants (VPNs) is normally routed via a firewalling
service for security and control (who can access what), as illustrated in
Figure 15-16.
Figure 15-17 zooms in on the firewall services contexts to show a more
detailed view (logical/virtualized view) of the traffic flow between the
different tenants/VPNs (A and B), where each tenant has its own virtual
firewall service instance located at the services block (or at the data center)
of the enterprise campus network.
Note
The concept of NFV is commonly adopted by service provider
networks nowadays. Nonetheless, this concept is applicable and
usable in enterprise networks and enterprise data center networks that
want to gain its benefits and flexibility.
Note
In large-scale networks with a very high volume of traffic (typically
carrier-grade), hardware resource utilization and limits must be
considered.
Summary
The enterprise campus is one of the vital parts of the modular enterprise
network. It is the medium that connects the end users and the different types
of endpoints such as printers, video endpoints, and wireless access points to
the enterprise network. Therefore, having the right structure and design
layout that meets current and future requirements is critical, including the
physical infrastructure layout, Layer 2, and Layer 3 designs. To achieve a
scalable and flexible campus design, you should ideally base it on
hierarchical and modular design principles that optimize the overall design
architecture in terms of fault isolation, simplicity, and network convergence
time. It should also offer a desirable level of flexibility to integrate other
networks and new services and grow in size.
However, the concept of network virtualization helps enterprises utilize
the same underlying physical infrastructure while maintaining access,
path, and services isolation to meet certain business goals or
functional security requirements. As a result, enterprises can lower CAPEX
and OPEX and reduce the time and effort required to provision a new
service or a new logical network. However, the network designer must
consider the different network virtualization design options, along with the
strengths and weaknesses of each, to deploy the suitable network
virtualization technique that meets current and future needs. These needs
must take into account the different variables and constraints, such as staff
knowledge and the hardware platform-supported features and capabilities.
References
Al-shawi, Marwan, CCDE Study Guide (Cisco Press, 2015)
Caution
The goal of self-assessment is to gauge your mastery of the topics in
this chapter. If you do not know the answer to a question or are only
partially sure of the answer, you should mark that question as wrong
for purposes of the self-assessment. Giving yourself credit for an
answer you correctly guess skews your self-assessment results and
might provide you with a false sense of security.
Foundation Topics
Note
These functions can vary from design to design to some extent.
Moreover, not all of them are necessarily required. For example, NAT
can be applied at only one layer instead of at multiple layers.
Path requirements
Is the business goal high availability only?
Is the business goal to optimize the ROI of the existing external
links?
Should available bandwidth be increased?
Should there be path and traffic isolation?
Traffic flows characteristics
Is the business goal to host services within the enterprise and to
be accessible from the Internet?
Is the business goal to host some of its services in the cloud or to
access external services over the Internet?
Or both (hybrid)?
Note
BGP is the most flexible protocol for handling routing policies and
the only protocol with capabilities powerful enough to reliably handle
multiple peerings with multiple autonomous systems (interdomain
routing). Therefore, this section considers only BGP as the protocol of
choice for Internet multihoming design; however, some designs may use
an interior gateway protocol (IGP) or static routing with multihoming.
Typically, these designs eliminate all the flexibility that you can
gain from BGP multihoming scenarios.
Note
BGP community values can provide flexible traffic engineering control
within the enterprise and across the ISP. Internet providers can match
community values against predefined per-community routing policies. For
example, you can influence the LOCAL_PREFERENCE value of your
advertised route within the ISP cloud by attaching a predefined BGP
community value to the route. Refer to RFC 1998.
BGP community values can technically be seen as a “route tag,” which
can be contained within one AS or be propagated across multiple
autonomous systems to be used as a “matching value” to influence BGP
path selection. For instance, one of the common scenarios with global ISPs
is that each ISP can share the standard BGP community values used with its
customers to distinguish IP prefixes based on its geographic location (for
example, by region or continent). This offers enterprises the flexibility to
match the relevant community value that represents a certain geographic
location and associate it with a BGP policy such as AS-PATH prepending to
achieve a certain goal. For example, an enterprise may want all traffic going
to IP prefixes within Europe (outbound) to use Internet link 1, while all
other traffic should use the second link. As illustrated in Figure 16-3, BGP
community values can simplify achieving this goal to a large extent in a
more dynamic manner.
Figure 16-3 BGP Community Value Usage Example
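The geo-community policy just described can be modeled as a toy best-path selection sketch in Python. This is not a BGP implementation; the community value 64500:100 and the prefixes are purely hypothetical placeholders. The idea it illustrates: the enterprise raises LOCAL_PREFERENCE over link 1 for routes the ISP tagged with its Europe community, so only Europe-bound traffic egresses that link.

```python
# Toy model of community-based egress selection (hypothetical values).
EUROPE = "64500:100"  # hypothetical ISP geo community for European prefixes

def local_pref(link, communities):
    """Enterprise inbound policy: prefer link1 for European prefixes,
    link2 for everything else (highest LOCAL_PREFERENCE wins)."""
    if EUROPE in communities:
        return 200 if link == "link1" else 100
    return 200 if link == "link2" else 100

routes = [
    # (prefix, link it was learned on, communities attached by the ISP)
    ("193.0.0.0/16", "link1", {EUROPE}),
    ("193.0.0.0/16", "link2", {EUROPE}),
    ("203.0.113.0/24", "link1", set()),
    ("203.0.113.0/24", "link2", set()),
]

best = {}
for prefix, link, comms in routes:
    pref = local_pref(link, comms)
    if prefix not in best or pref > best[prefix][1]:
        best[prefix] = (link, pref)

print(best["193.0.0.0/16"][0])    # link1: European prefix egresses link 1
print(best["203.0.113.0/24"][0])  # link2: all other traffic uses link 2
```

In a real deployment this logic lives in route maps on the Internet edge routers, but the selection principle (match the ISP's published community, then set LOCAL_PREFERENCE per link) is the same.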
Scenario 1: Active-Standby
This design scenario (any of the connectivity models depicted in Figure 16-
4) is typically based on using one active link for both inbound and outbound
traffic, with a second link used as a backup. This design scenario is the
simplest design option and is commonly used in situations where the
backup link is a low-speed link and is only required to survive during any
outage of the primary link.
Figure 16-4 Internet Multihoming Active-Standby Connectivity Models
Figure 16-5 shows an active-standby scenario where ISP A must be used as
the primary and active path for both ingress and egress traffic flows.
Ingress: Longest match over the preferred path, by dividing the prefix
into two halves (for instance, advertise /16 as 2 × /17 over the
preferred ingress path toward ISP A in the scenario in Figure 16-5).
Ingress: The typical mechanism to use here is to divide the PI address
block into two halves. For example, an IPv4 /16 subnet can be divided
into two /17 summary subnets; similarly, an IPv6 /48 subnet can be
divided into two /49 summary subnets. Then advertise each half over a
different link, along with the aggregate (IPv4 /16 or IPv6 /48 in this
example) over both links to be used in case of link failure. For
unequal load sharing, you can use the same concept with more, smaller
subnets advertised over the path with higher capacity.
Egress: For the outbound traffic direction, you need to receive the
full Internet routing table from one of the ISPs, along with the
default route from both. Accept with filtering only every other /4 for
IPv4 (for example, 0/4, 32/4); IPv6 can use the same concept. From the
other link, increase the LOCAL_PREFERENCE for the default route.
In this case, the more specific route (permitted in the filtering) will be
used over one link. Every other route that was filtered out will go over
the second link using the default route. For unequal load sharing, more
subnets can be accepted/allowed from the link with higher capacity.
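The prefix-splitting arithmetic described above can be sketched with Python's ipaddress module. The /16 and /48 blocks here are placeholders standing in for the hypothetical PI allocations in the text (2001:db8::/48 is the IPv6 documentation prefix):

```python
import ipaddress

# Hypothetical PI allocations matching the /16 and /48 examples in the text
ipv4_pi = ipaddress.ip_network("10.10.0.0/16")   # placeholder prefix
ipv6_pi = ipaddress.ip_network("2001:db8::/48")  # IPv6 documentation prefix

# Split each aggregate into two halves; advertise one half per ISP link,
# plus the aggregate over both links as a failure fallback.
ipv4_halves = list(ipv4_pi.subnets(prefixlen_diff=1))  # two /17s
ipv6_halves = list(ipv6_pi.subnets(prefixlen_diff=1))  # two /49s

print(ipv4_halves)  # [IPv4Network('10.10.0.0/17'), IPv4Network('10.10.128.0/17')]
print(ipv6_halves)  # [IPv6Network('2001:db8::/49'), IPv6Network('2001:db8:0:8000::/49')]

# Unequal load sharing: carve more, smaller subnets and advertise the
# larger share over the higher-capacity link.
quarters = list(ipv4_pi.subnets(prefixlen_diff=2))  # four /18s
over_link_a = quarters[:3]  # three /18s toward the bigger pipe
over_link_b = quarters[3:]  # one /18 toward the smaller pipe
```

The same split works at any boundary, which is why the technique scales from simple active-standby to weighted load sharing.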
Note
ISPs usually deploy route filtering policies with their customers and
with other peering ISPs, where only certain subnet lengths are
accepted, to avoid propagating a large number of small networks such
as v4/24 or v6/64. However, the subnets presented in this section are
hypothetical to simplify the explanation of the discussed points.
Note
In both cases, you need to make sure that there is a link between the
two Internet edge routers (physical or tunnel) with internal BGP
(iBGP) peering over this link, to avoid traffic black-holing in some
failure scenarios.
Asymmetrical Routing with Multihoming (Issue and Solution)
The scenario depicted in Figure 16-8 demonstrates a typical design scenario
with a potential for asymmetrical routing. This scenario is applicable to two
sites or data centers with a direct (backdoor) link, along with a layer of
firewalling behind the Internet edge routers. In addition, these firewalls are
site-specific, where no state information is exchanged between the firewalls
of each site. In addition, both sites are advertising the same address
range (PI or PA) toward the Internet. Therefore, a possibility exists
that return traffic (of outbound traffic) that originated from Site-1
and went to the Internet using the local Internet link will come back
over the Site-2 Internet
link. The major issue here is that the firewall of Site-2 has no “state
information” about this session. Therefore, the firewall will simply block
this traffic. This can be a serious issue if the design did not consider how
the network design will handle situations like this, especially during failure
scenarios.
To optimize this design to overcome this undesirable behavior, network
designers need to consider the following:
Control plane peering between edge routers in each site: The first
important point here is to make sure that both Internet edge routers are
connected directly and use iBGP peering between them. This link can
be physical or over a tunnel interface, such as a GRE tunnel. A
network designer can optimize this design and mitigate its impact by
adding this link along with associated BGP policies (make Site-1
internal prefixes more preferred over the Site-1 Internet link), which
will help to avoid the blocking by the Site-2 firewall.
Organized IP addressing advertisement: To make the preceding
point work smoothly, a network designer needs to make sure that each
site is advertising its own route prefixes as more specific (longest
match principle), in addition to the aggregate of the entire PI subnet,
as discussed earlier. For example, /16 might be divided into two /17s
per site. (You can use the same concept with IPv6.)
NAT consideration: The other point to be noted here is that if these
prefixes are translated by the edge firewalls, one of the common and
proven ways to deal with this type of scenario is by forming a direct
routing peering between the distribution/core layer nodes and the
Internet edge routers. Consequently, the edge firewalls can perform
NAT for traffic passing through them (as long as the two earlier
considerations mentioned in this list are in place).
Note
It’s worth mentioning that asymmetric routing will completely break
traffic flows with stateful firewalls in between, but even in networks
without firewalls, asymmetric routing can be burdensome because it
consumes valuable bandwidth from unrelated sites (transit sites).
These can be even harder to troubleshoot because traffic flows
continue to work despite the lurking problem.
For instance, in Figure 16-9 the Internet edge distribution is peering with
the Internet edge router using multihop eBGP with private AS and
advertising the PI prefixes over BGP (using static routes with a BGP
network statement to advertise the routes). The firewall is using a default
route toward the Internet edge router, along with NAT.
By incorporating the design recommendations to optimize the preceding
scenario, a network designer can make the design more agile in its response
to failures. For example, in the topology in Figure 16-8, if the link between
Site-1 and the Internet were to go down for some reason, traffic destined for
Site-1 prefixes (part of the first half of the /17) would typically go to the
Site-2 Internet link (because we advertise the full /16 from both sites). Then
traffic would traverse the inter-site link to reach Site-1. As a result, this
design will eliminate the firewall blocking issue, even after an Internet link
failure event, as illustrated in Figure 16-10.
Table 16-5 compares the three different Internet multihoming models
from a design decision perspective.
Figure 16-9 Optimized Multihoming Design with Firewalls
Summary
Today’s enterprise businesses, in particular multinational and global
organizations, primarily rely on technology services and applications to
achieve their business goals. Therefore, the enterprise Internet edge
architecture is one of the most vital and critical modules within the modern
modular enterprise architecture. It represents the gateway of the enterprise
network to the Internet, and today the Internet is an unstated
requirement. As a network designer, you must keep this in mind.
Customers, businesses, and end users expect the Internet to always
work, and network designers are responsible for making sure it does.
Therefore, network
designers must consider designs that can provide a common resource access
experience to the remote sites and users without compromising any of the
enterprise security requirements, such as end-to-end path separation
between certain user groups. In addition, optimizing Internet edge design
with business-driven multihoming designs can play a vital role in enhancing
the overall Internet edge performance and design flexibility and can
maximize the total ROI of the available links.
References
Al-shawi, Marwan, CCDE Study Guide (Cisco Press, 2015)
Caution
The goal of self-assessment is to gauge your mastery of the topics in
this chapter. If you do not know the answer to a question or are only
partially sure of the answer, you should mark that question as wrong
for purposes of the self-assessment. Giving yourself credit for an
answer you correctly guess skews your self-assessment results and
might provide you with a false sense of security.
Foundation Topics
Interconnect the different enterprise locations and remote sites that are
geographically dispersed
Meet enterprise security policy requirements by protecting enterprise
traffic over the WAN (secure transport), to offer the desired end-to-
end level of protection and privacy across the enterprise network
Cost-effective and reliable WAN by providing flexible and reliable
transport that meets the primary business objectives and critical
enterprise application requirements and that supports the convergence
of voice, data, and video to satisfy the minimum requirements of
today’s converged enterprise networks
Support business evolution, change, and growth by offering the
desired level of agility and scalability to meet the current and
projected growth of remote sites with flexible bandwidth rates
Note
The preceding factors are not necessarily the standard or minimum
requirements for every enterprise network. Even so, these factors are
the generic and common concerns that most IT enterprises have.
These concerns can vary from business to business. For instance,
many businesses have no concern about having unsecured IP
communications over their private WAN.
Note
For the purpose of the CCDE exam, the scenario and requirements
always determine the right choice. There might be situations where
both WAN options seem to be equally valid, but typically one of them
should be more suitable than the other because of a constraint or a
requirement given to you on the exam. However, in some cases, there
might be more than one right answer or optimal and suboptimal
answers. Therefore, you need to have the right justification to support
your design decision (using the logic of why, as discussed in Chapter
1, “Network Design”).
Note
Although E-Tree is another type of ME connectivity model, it is a
variation from E-LAN to provide a hub-and-spoke connectivity
model.
Note
This section discusses these technologies from the enterprise point of
view, as an L2 or L3 WAN solution. The service provider point of
view is not covered in this book because it is beyond the scope of the
CCDE v3 exam.
Note
L2VPN SPs can preserve the access media using the legacy access
media type (such as ATM and Frame Relay) if this is required by the
business or if there is a lack of ME coverage by the SP in certain
remote areas. In addition, this type of connectivity (mixed) is a
common scenario during the migration phases from legacy to modern
L2 WAN services.
Note
There are other variations of ME services, such as Ethernet private
line (EPL), that support Ethernet over xWDM (dense wavelength-
division multiplexing [DWDM], coarse wavelength-division
multiplexing [CWDM]), SONET, or dedicated Ethernet interconnects
over fiber. However, this type of service is more expensive than the
other ones that are offered by MPLS SPs, such as VPLS or EVPL
(Ethernet Virtual Circuit [EVC]) as an L2VPN ME service.
Advantages:
Access flexibility: With L3 WAN, the enterprise can be provisioned with
any type of access media depending on the access availability of that
location (for example, Ethernet, WiMAX, and VPN over the Internet or
4G/5G).
Routing simplicity: With L3VPN, enterprises will typically offload the
core routing to the SP. From the enterprise point of view, only one
routing peer/session per link needs to be maintained.
Limitations:
QoS across L3 VPNs can be difficult because it may require QoS
re-marking to comply with the different service carrier policies.
Note
The decision of when to use the Internet as a WAN transport and how
to use it in terms of level of redundancy and whether to use it as a
primary versus backup path depends on the different design
requirements, design constraints, and business priorities (see Table
17-5).
Furthermore, the Gartner Inc. report “Hybrid Will Be the New Normal for
Next Generation Enterprise WAN” analyzes and demonstrates the
importance of the integration of the Internet and MPLS WAN to deliver a
cohesive hybrid WAN model that meets the trends and requirements of
today’s businesses and applications, such as cloud-based services. As the
report’s Summary states, “Network planners must establish a unified WAN
with strong integration between these two networks to avoid application
performance problems.”
Note
The connectivity to the Internet can be either directly via the
enterprise WAN module or through the Internet module (edge), as
highlighted in Chapter 16, “Enterprise Internet Edge Architecture
Design.” This decision is usually driven by the enterprise security
policy, to determine where the actual tunnel termination must happen.
For instance, there might be a dedicated DMZ for VPN tunnel
termination at the enterprise Internet edge that has a backdoor link to
the WAN distribution block to route the decapsulated DMVPN
traffic.
Table 17-4 highlights the primary advantages and limitations of the Internet
as a WAN transport model.
Bandwidth: Very flexible (can vary between 1 Mbps and 100 Gbps);
flexible (less than L2 MPLS-based WAN [ME]); flexible with limitations,
depending on the site location and connectivity provisioning type (DSL
versus 4G versus 5G).
Supports both scale out and scale up (you can use more than two
aggregation layer nodes), versus supports scale up only (limited scale
out, as a maximum of two aggregation layer nodes per mLAG can be used).
The more links added, the larger the routing database, versus the more
links added, the larger the number of Address Resolution Protocol (ARP)
entries.
Site type (for example, small branch versus data center versus regional
office)
Level of criticality (How much can the downtime cost? How critical is
this site to the business if it goes offline for x amount of time?)
Traffic load (for example, the load on the HQ data center is more than
that of the regional data center)
Cost (Is cost-saving a top priority?)
Table 17-7 summarizes the various types of WAN edge connectivity design
options depicted in Figure 17-6, along with the different considerations
from a network design perspective.
Figure 17-6 WAN Edge Connectivity Options
Large enterprises with large geographic distribution can mix between the
connectivity options (single versus dual WAN) by using single and dual
providers, based on the criticality of the site and business needs. For
instance, regional hub sites and data centers can be dual-homed to two
providers. In addition, this mixed connectivity design approach, where
some remote sites are single-homed to a single provider while others are
multihomed to dual providers (typically larger sites such as data centers or
regional HQs), can offer a transit path during a link failure, as depicted in
the scenario in Figure 17-7. Ideally the transit site should be located within
the same geographic area or country (in the case of global organizations) to
mitigate any latency or cost-related issues, when applicable, by reducing the
number of international paths that traffic has to traverse. In addition, the
second provider in Figure 17-7 can be an Internet-based transport such as
DMVPN over the Internet.
Note
The edge node is usually either a CE node (for MPLS L3 or Layer 2
WAN) or a VPN spoke node. In some cases, a single WAN edge
router can perform the role of both a CE router and VPN spoke
router.
mLAG with FHRP
mLAG with IGP
Dual WAN, single router
Dual WAN, dual routers
Note
The WAN connectivity options in Table 17-9 apply for both private
enterprise WAN and overlaid WAN over the Internet transport.
Note
Routing over GRE tunnels with large routing tables may require
adjustments (normally lowering) to the maximum transmission unit
(MTU) value of the tunnel interface.
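As a rough sketch of the overhead arithmetic behind this note: a basic GRE-over-IPv4 encapsulation adds 24 bytes (a 20-byte outer IPv4 header plus a 4-byte GRE header; more with GRE key/sequence options or IPsec), which is why a 1500-byte physical MTU commonly yields a 1476-byte tunnel MTU.

```python
# Back-of-the-envelope tunnel MTU arithmetic for basic GRE over IPv4.
PHYSICAL_MTU = 1500  # typical Ethernet MTU on the underlay
OUTER_IPV4 = 20      # outer (delivery) IPv4 header
GRE_BASE = 4         # GRE header without key/sequence options

tunnel_mtu = PHYSICAL_MTU - OUTER_IPV4 - GRE_BASE
print(tunnel_mtu)  # 1476

# Largest TCP MSS that fits inside the tunnel
# (20-byte inner IPv4 header + 20-byte TCP header):
tcp_mss = tunnel_mtu - 20 - 20
print(tcp_mss)  # 1436
```

IPsec or GRE options shrink these numbers further, so the exact values to configure depend on the encapsulation stack actually in use.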
Note
As stated, you can use these design models as foundation reference
architecture and scale them based on the requirements. For instance,
the design option 1 model can easily be migrated to design option 2
when the number of remote sites increases and requires a higher level
of redundancy. Similarly, the number of edge access nodes
(WAN/Internet edge routers) can be scaled out depending on the
design requirements. For instance, an enterprise may consider design
option 1 with an additional redundant edge router to a second MPLS
WAN, while the Internet edge router is to be used only as a third level
of redundancy with a tunneling mechanism.
Note
The number of remote sites in the following categorization is a rough
estimation only (based on the current Cisco Validated Design [CVD]
at the time of this writing). Typically, this number varies based on
several variables discussed earlier in this book, such as hardware
limitations and routing design in terms of number of routes.
Note
The primary scalability limiting factor of any VPN solution is the
supported number of sessions by the hardware platform that is used.
Note
GETVPN is an encryption mechanism that enables you to preserve IP
header information that supports a true “any-to-any” encrypted IP
connectivity model. Therefore, it is commonly used over private
transport networks such as a private WAN instead of the other IP
tunneling mechanisms. Having said that, GETVPN is not always the
ideal or optimal overlay and encryption solution over the private
WAN. For example, if the existing WAN platforms of an organization
do not support GETVPN (and the business has no intention or plan to
upgrade any network hardware/software), then you need to deal with
the design constraints and consider other options here, such as IPsec
with GRE or mGRE.
WAN Virtualization
Introducing virtualization and path isolation over the WAN transport is
commonly driven by the adoption of the network virtualization concept by
the enterprise within the campus LAN, branches, or data center network.
Therefore, to maintain end-to-end path isolation, network virtualization
must be extended over the WAN transport in a manner that does not
compromise path-isolation requirements. From a WAN design point of
view, two primary WAN connectivity models drive the overall WAN
virtualization design choices:
Note
For this design option and subsequent ones, it is hard to generalize
and provide a specific recommended number of remote sites or
VRFs, because the decision has to be made based on these two
variables when measuring the scalability of the design option. For
example, evaluating this design option for a network that requires
path isolation between three sites, where each site has ten different
virtual networks to transport, is different from when there are three
sites with two virtual networks in each. In both cases, the number of
sites is small; however, the number of VRFs (virtual networks)
becomes the tiebreaker.
Note
Operational complexity always increases when the network size
increases and the WAN virtualization techniques used have limited
scalability support, and vice versa.
Note
After the migration to the MPLS VPN, there will be two points of
route redistribution with a backdoor link (over the L2VPN link
between the hub sites). This scenario may introduce some route
looping or suboptimal routing. Therefore, it is advised that over this
L2VPN link each hub site (ASBR) advertise only the summary routes and
not the specific routes, to ensure that the MPLS WAN path is always
preferred (more specific) without potential routing
information looping. Alternatively, route filtering, as discussed in
Chapter 8, “Layer 3 Technologies” (in the “Route Redistribution
Design Considerations” section), must be considered.
Traffic between the LAN networks of each hub site will use the
VLL (L2VPN) path as the primary path. Because the HQ LAN
networks, along with the VLL link, are all part of the same area
(area 0), the summarization at the hub (Autonomous System
Border Router [ASBR]) routers will not be applicable here (same
area). By reducing OSPF cost over the VLL link, you can ensure
that traffic between the HQ LANs will always use this path as the
primary path.
Step 2. (Illustrated in Figure 17-30):
a. Connect one of the spoke routers intended to be migrated to
the MPLS VPN.
b. Establish an eBGP session with MPLS VPN SP and advertise
the local subnet (LAN) using the BGP network statement
(ideally without route redistribution).
c. Once the traffic starts flowing via the MPLS VPN (eBGP has
a lower administrative distance of 20), disconnect the Frame Relay link.
d. At this stage, traffic between migrated spokes and
nonmigrated spokes will flow via the transit hub site.
Figure 17-30 WAN Migration Step 2
Step 3. Migrate the remaining spokes using the same phased approach.
Note
With this approach, connectivity will be maintained between the
migrated and nonmigrated sites without introducing any service
interruption until the migration of the remaining remote sites is
completed.
Summary
Today’s enterprise businesses are geographically dispersed, which makes
them rely on the technology services of the WAN module to interconnect
their distributed locations. This is why the WAN module is one of the
most vital and critical modules within modern network architectures.
Therefore, network
designers must consider designs that can provide a common resource access
experience to the remote sites and users, whether over the WAN or the
Internet, without compromising any of the enterprise security requirements,
such as end-to-end path separation between certain user groups. Last but
not least, overlay integration at the WAN edge in today’s networks can offer
enterprises flexible and cost-effective WAN and remote-access connectivity,
even considering the additional layer of control plane and design
complexity that may be introduced into the overall enterprise architecture.
References
Al-shawi, Marwan, CCDE Study Guide (Cisco Press, 2015)
Gartner Research, “Hybrid Will Be the New Normal for Next
Generation Enterprise WAN” (Gartner, Inc., 2014; Reference
Document ID: G00266397, https://www.gartner.com)
List: Common factors that drive the decision for which WAN edge
connectivity option to use
Final Preparation
Part 1 of this book covered the different network design aspects that all
network designers and CCDE candidates should know. Understanding
network design fundamentals, principles, techniques, and pitfalls is highly
critical to a network designer’s success. All of these topics create a proper
network design mindset. Part 2 of this book covered the technologies,
protocols, and design considerations for them. Part 3 of this book focused
on enterprise network architectures, merging the concepts presented in Part
1 and Part 2 within a specific network design use case.
Understanding all these topics is required to be prepared to pass the 400-
007 CCDE Written Exam and eventually the CCDE Practical Exam to
achieve the certification. While the previous 17 chapters supply the detailed
information that you need to know for the exam, most people need more
preparation than simply reading those chapters. Therefore, this chapter
provides a set of tools and a study plan to help you complete your
preparation for the exams.
This short chapter has three main sections. The first section helps you get
ready to take the exam, and the second section lists the exam preparation
tools useful at this point in the study process. The third section provides a
suggested study plan you can use now that you have completed all the
earlier chapters in this book.
Getting Ready
Here are some important tips to keep in mind to ensure that you are ready
for this rewarding exam:
Build and use a study tracker: Consider using the exam objectives
shown in this chapter to build a study tracker for yourself. Such a
tracker can help ensure that you have not missed anything and that
you are confident for your exam. As a matter of fact, this book offers a
sample study planner as a website supplement.
Think about your time budget for questions on the exam: When
you do the math, you will see that, on average, you have one minute
per question. While this does not sound like a lot of time, keep in
mind that many of the questions will be very straightforward, and you
will take 15 to 30 seconds on those. This leaves you extra time for
other questions on the exam.
Watch the clock: Check in on the time remaining periodically as you
are taking the exam. You might even find that you can slow down
pretty dramatically if you have built up a nice block of extra time.
Get some earplugs: The testing center might provide earplugs, but
get some just in case and bring them along. There might be other test
takers in the center with you, and you do not want to be distracted by
their moans and groans. I personally have no issue blocking out the
sounds around me, so I never worry about this, but I know it is an
issue for some.
Plan your travel time: Give yourself extra time to find the center and
get checked in. Be sure to arrive early. As you test more at a particular
center, you can certainly start cutting it closer time-wise.
Online testing: If participating in online testing, ensure that you have
performed and passed the online system check. For more information,
check here: https://www.cisco.com/c/en/us/training-events/training-certifications/online-exam-proctoring.html#~requirements
Get rest: Most students report that getting plenty of rest the night
before the exam boosts their success. All-night cram sessions are not
typically successful.
Bring in valuables, but get ready to lock them up: The testing
center will take your phone, your smartwatch, your wallet, and other
such items and will provide a secure place for them.
Take notes: You will be given note-taking implements and should not
be afraid to use them. I always jot down any questions I struggle with
on the exam. I then memorize them at the end of the test by reading
my notes over and over again. I always make sure I have a pen and
paper in the car, and I write down the issues in the parking lot just
after the exam. When I get home—with a pass or fail—I research
those items!
Step 11. Click Next and then click the Finish button to download the exam
data to your application.
Step 12. You can now start using the practice exams by selecting the
product and clicking the Open Exam button to open the exam
settings screen.
Note that the offline and online versions sync together, so saved exams and
grade results recorded on one version will be available to you in the other
version as well.
The practice exam software allows you to take exams in one of three modes:
Study mode
Practice Exam mode
Flash Card mode
Study mode allows you to fully customize an exam and review answers as
you are taking the exam. This is typically the mode you use first to assess
your knowledge and identify information gaps. Practice Exam mode locks
certain customization options in order to present a realistic exam
experience. Use this mode when you are preparing to test your exam
readiness. Flash Card mode strips out the answers and presents you with
only the question stem. This mode is great for late-stage preparation, when
you really want to challenge yourself to provide answers without the benefit
of seeing multiple-choice options. This mode does not provide the detailed
score reports that the other two modes provide, so it is not the best mode for
helping you identify knowledge gaps.
In addition to these three modes, you will be able to select the source of
your questions. You can choose to take exams that cover all of the chapters,
or you can narrow your selection to just a single chapter or the chapters that
make up specific parts in the book. All chapters are selected by default. If
you want to narrow your focus to individual chapters, simply deselect all
the chapters and then select only those on which you wish to focus in the
Objectives area.
You can also select the exam banks on which to focus. Each exam bank
comes complete with a full exam of questions that cover topics in every
chapter. The two exams printed in the book are available to you, as are two
additional exams of unique questions. You can have the test engine serve up
exams from all four banks or just from one individual bank by selecting the
desired banks in the exam bank area.
There are several other customizations you can make to your exam from the
exam settings screen, such as the time allowed for taking the exam, the
number of questions served up, whether to randomize questions and
answers, whether to show the number of correct answers for multiple-
answer questions, and whether to serve up only specific types of questions.
You can also create custom test banks by selecting only questions that you
have marked or questions on which you have added notes.
Premium Edition
In addition to the free practice exam provided on the website, you can
purchase additional exams with expanded functionality directly from
Pearson IT Certification. The Premium Edition of this title contains an
additional two full practice exams and an eBook (in both PDF and ePub
format). In addition, the Premium Edition title has remediation for each
question to the specific part of the eBook that relates to that question.
Because you have purchased the print version of this title, you can purchase
the Premium Edition at a deep discount. There is a coupon code in the book
sleeve that contains a one-time-use code and instructions for where you can
purchase the Premium Edition.
To view the Premium Edition product page, go to https://www.informit.com/title/9780137601042.
Summary
The tools and suggestions provided in this chapter have been designed with
one goal in mind: to help you develop the skills required to pass the 400-
007 CCDE Written Exam. This book has been developed from the
beginning to not only tell you the facts but also help you learn how to apply
the facts. No matter what your experience level leading up to taking the
exam, it is my hope that the broad range of preparation tools, and even the
structure of the book, will help you pass the exam with ease. I hope you do
well on the exam.
Appendix A
Chapter 2
1. d. Explanation: Each business has a set of priorities that are
typically based on strategies adopted for the achievement of
goals. These business priorities can influence the planning and
design of IT network infrastructure. Therefore, network designers
must be aware of these business priorities to align them with the
design priorities, which ensures the success of the network they
are designing by delivering business value.
2. c. Explanation: What does this business have to do and why? This
is what we call a business driver. Business drivers are what
organizations must follow. A business driver is usually the reason
a business must achieve a specific outcome. It is the “why” the
business is doing something. Required to maintain a specific
compliance standard is a perfect example.
3. b. Explanation: A business outcome equates to the end result,
such as saving money, diversifying the business, increasing
revenue, or filling a specific need. Essentially, a business outcome
is an underlying goal a business is trying to achieve. A business
outcome will specifically map to a business driver.
4. a. Explanation: Business capabilities are not solutions. Business
capabilities are what you get from a solution. A network access
control solution provides a business the session- and transaction-
based security capability, for example. Most solutions provide
multiple capabilities. Some solutions provide parts of multiple
capabilities, and when they are combined with other solutions, the
business can get a number of capabilities that will make it
successful.
5. c. Explanation: A decision matrix serves the same purpose as a
decision tree, but with the matrix, network designers can add
more dimensions to the decision-making process.
6. c. Explanation: Capital expenses (CAPEX), or expenditures, are
the major purchases a business makes that are to be used over a
long period of time. Examples of CAPEX include fixed assets,
such as buildings and equipment.
7. a. Explanation: Operating expenses (OPEX), or expenditures, are
the day-to-day costs incurred by a business to keep the business
operational. Examples of OPEX include rent, utilities, payroll,
and marketing.
8. b. Explanation: Return on investment (ROI) is the concept of
identifying what the perceived potential benefit is going to be for
the business if the business makes an investment in a new
technology, solution, and/or capability. For example, completing
that application upgrade is going to cost the business $5,000 in
time and labor. The investment here is $5,000 but it doesn’t have
to be monetary. At some point, the business wants to recoup its
investment, i.e., what is the business getting out of this
investment? Included with this upgrade are new features and
functionality that the business can charge a premium for. These
premium services are how the business is going to recoup its
initial $5,000 investment.
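As a rough worked illustration of the example above (the $5,000 cost comes from the text; the $500-per-month premium revenue is an assumed figure for this sketch), the payback period and ROI can be computed as:

```latex
% $5,000 upgrade cost (from the example); $500/month premium revenue (assumed)
\text{Payback period} = \frac{\$5{,}000}{\$500/\text{month}} = 10\ \text{months}
\qquad
\text{ROI after 2 years} = \frac{24 \times \$500 - \$5{,}000}{\$5{,}000} = 140\%
```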
9. d. Explanation: The waterfall project management framework is linear, structured, and averse to change, and it requires documentation at the end of each phase.
10. c. Explanation: The agile project management model is based on
an incremental, iterative approach. Instead of in-depth planning at
the beginning of the project, like that of the waterfall model, agile
methodologies are open to changing requirements over time and
encourage constant feedback from the different business leaders
and stakeholders. Cross-functional teams work on iterations of a
product over a period of time, and this work is organized into a
backlog that is prioritized based on business or customer value.
The goal of each iteration is to produce a working product. Agile
welcomes change and, because of that, the end state is oftentimes
not known at the beginning of the project. Throughout an agile project, feedback is welcomed and requested from the business leaders and stakeholders on a recurring basis. Breaking down the
project into an iterative process allows the project team to focus
on higher-quality work that is collaborative and also promotes
faster delivery.
Chapter 3
1. b. Explanation: The single-server model is the simplest
application model, and it is equivalent to running the application
on a personal computer. All of the components required for the application to run reside on a single server.
2. c. Explanation: The 2-tier application model is like a client/server
architecture, where communication takes place between client
and server. In this model, the presentation layer or user interface
layer runs on the client side while the dataset layer gets executed
and stored on the server side.
3. a. Explanation: The 3-tier application model has three tiers or
layers called presentation (aka web), intermediate (aka
application), and database.
4. d. Explanation: On-premises is the service model where a
business owns and manages the infrastructure. A business will
procure all of the infrastructure required to run the service and
then fully manage, maintain, and operate it. In some situations,
the management is outsourced but the infrastructure is procured
and owned by the business.
5. a. Explanation: Platform as a Service (PaaS) is a service model
where a vendor provides hardware and software tools, and
customers use these tools to develop applications. PaaS users tend
to be application developers.
6. b. Explanation: Infrastructure as a Service (IaaS) is a pay-as-you-
go service model for storage, networking, and virtualization—all
of a business’s infrastructure needs. IaaS gives users cloud-based
alternatives to on-premises infrastructure, so businesses can avoid
investing in expensive onsite resources.
7. c. Explanation: Software as a Service (SaaS) is a service model
where a vendor makes its software available to users, usually for
a monthly or annual subscription service fee.
8. c. Explanation: Multi-cloud is the use of two or more cloud
service providers (CSPs), with the ability to move workloads
between the different cloud computing environments in real time
as needed by the business.
9. b. Explanation: A private cloud consists of cloud computing
resources used by one business. This cloud environment can be
located within the business’s data center footprint, or it can be
hosted by a cloud service provider. In a private cloud, the
resources, applications, services, data, and infrastructure are
always maintained on a private network and all devices are
dedicated to the business.
10. a. Explanation: A hybrid cloud is the use of both private and
public clouds together to allow for a business to receive the
benefits of both cloud environments while limiting their negative
impacts on the business.
11. b. Explanation: Data governance is the planning of all aspects of
data management. This includes availability, usability,
consistency, integrity, and security of all data within the
organization.
Chapter 4
1. a, d, and f. Explanation: Static factors are items that we know
and therefore can preemptively base access and authorization on.
The most common of these factors are credentials but could also
include the level of confidence, device trust, network, physical
location, biometrics, and device orientation. Threat intelligence,
real-time data analytics, and GPS coordinates are all dynamic
factors, sources of data that can be analyzed at the time of access
to change what level of access and authorization (i.e., the trust
score of the transaction in question) is being provided.
2. b, c, and e. Explanation: Threat intelligence, real-time data
analytics, and GPS coordinates are all dynamic factors, sources of
data that can be analyzed at the time of access to change what
level of access and authorization (i.e., the trust score of the
transaction in question) is being provided. Static factors are items
that we know and can preemptively base access and authorization
on. The most common of these are credentials but could also
include the level of confidence, device trust, network, physical
location, biometrics, and device orientation.
3. b and d. Explanation: A trust score is created by a combination of
factors, both static and dynamic, and is used to continually
provide identity assurance. Assets, applications, networks, and so
forth—what we generally call resources—have levels of risk
scores, which are thresholds that must be exceeded for access to
be permitted. In general, the security plan categorization
determines an asset's level of risk. Users have various roles and, based
on those roles, are entitled to specific access to complete their job.
A financial user would need a different level of access than a
human resource user. These two users should not have the same
access or authorization. They may have overlapping access to
resources that they both need to complete their job functions.
4. a and d. Explanation: For a resource such as a user or device to
access another resource, such as an asset, application, or system,
the requesting resource’s authorization for access is determined
by combining its entitlement level and trust score. Entitled to
access (aka entitlement) means users have various roles and,
based on those roles, are entitled to specific access to complete
their job. A financial user would need a different level of access
than a human resource user. These two users should not have the
same access or authorization. They may have overlapping access
to resources that they both need to complete their job functions. A
trust score is created by a combination of factors, both static and
dynamic, and is used to continually provide identity assurance. A
trust score determines the level of access as required by the level
of risk value of the asset being accessed. Although static factors
contribute to the trust score, they do not directly comprise the
authorization for access. Static factors are items that we know and
can preemptively base access and authorization on. The most
common of these are credentials but could also include the level
of confidence, device trust, network, physical location,
biometrics, and device orientation. Although dynamic factors
contribute to the trust score, they do not directly comprise the
authorization for access. Dynamic factors are sources of data that
can be analyzed at the time of access to change what level of
access and authorization (i.e., the trust score of the transaction in
question) is being provided. The most common is threat
intelligence, but can also be geovelocity, GPS coordinates, and
real-time data analytics around the transaction.
5. d. Explanation: A policy engine is the location where policy is
implemented, rules are matched, and associated access
(authorization) is pushed to the policy enforcement points. This is
also known as a policy administration point (PAP) and a policy
decision point (PDP). A trust engine dynamically evaluates
overall trust by continuously analyzing the state of devices, users,
workloads, and applications (resources). It utilizes a trust score
that is built from static and dynamic factors. This is also known as
a policy information point (PIP). An endpoint device is any
device an end user can leverage to access the enterprise network.
Endpoint devices include business-owned assets and personally
owned devices that are approved to access the enterprise network.
Inventory is a single point of truth for all resources. This is an
end-to-end inventory throughout the entire architecture/enterprise.
This is also known as a policy information point.
6. a. Explanation: A trust engine dynamically evaluates overall trust
by continuously analyzing the state of devices, users, workloads,
and applications (resources). It utilizes a trust score that is built
from static and dynamic factors. This is also known as a policy
information point (PIP). An endpoint device is any device an end
user can leverage to access the enterprise network. Endpoint
devices include business-owned assets and personally owned
devices that are approved to access the enterprise network.
Inventory is a single point of truth for all resources. This is an
end-to-end inventory throughout the entire architecture/enterprise.
This is also known as a policy information point. A policy engine
is the location where policy is implemented, rules are matched,
and associated access (authorization) is pushed to the policy
enforcement points. This is also known as a policy administration
point (PAP) and a policy decision point (PDP).
7. b. Explanation: The CIA triad includes confidentiality, integrity,
and availability. It does not include compliance. Confidentiality
protects against unauthorized access to information to maintain
the desired level of secrecy of the transmitted information across
the internal network or public Internet. Integrity maintains
accurate information end to end by ensuring that no alteration is
performed by any unauthorized entity. Availability ensures that
access to services and systems is always available and
information is accessible by authorized users when required.
Compliance is a distractor answer.
8. c. Explanation: Integrity maintains accurate information end to
end by ensuring that no alteration is performed by any
unauthorized entity. Compliance is a distractor answer.
Availability ensures that access to services and systems is always
available and information is accessible by authorized users when
required. Confidentiality protects against unauthorized access to
information to maintain the desired level of secrecy of the
transmitted information across the internal network or public
Internet.
9. b. Explanation: The Payment Card Industry Data Security
Standard (PCI DSS) is a compliance standard focused on ensuring
the security of credit card transactions. This standard specifies the
technical and operational standards that businesses must follow to
secure and protect credit card data provided by cardholders and
transmitted through card processing transactions. The Health
Insurance Portability and Accountability Act (HIPAA) is a
compliance standard focused on protecting health and patient
information. Policy enforcement points are the locations where
trust and policy are enforced. A policy engine is the location
where policy is implemented, rules are matched, and associated
access (authorization) is pushed to the policy enforcement points.
This is also known as a policy administration point (PAP) and a
policy decision point (PDP).
10. a. Explanation: The Health Insurance Portability and
Accountability Act (HIPAA) is a compliance standard focused on
protecting health and patient information. The Payment Card
Industry Data Security Standard (PCI DSS) is a compliance
standard focused on ensuring the security of credit card
transactions. This standard refers to the technical and operational
standards that businesses must follow to secure and protect credit
card data provided by cardholders and transmitted through card
processing transactions. Policy enforcement points are the
locations where trust and policy are enforced. A policy engine is
the location where policy is implemented, rules are matched, and
associated access (authorization) is pushed to the policy
enforcement points. This is also known as a policy administration
point (PAP) and policy decision point (PDP).
Chapter 5
1. a. Explanation: Business architecture enables everyone, from
strategic planning teams to implementation teams, to get “on the
same page” or to be synchronized, enabling them to address
challenges and meet business objectives.
2. b. Explanation: Enterprise architecture is a process of organizing
logic for business processes and IT infrastructure reflecting the
integration and standardization requirements of the company’s
operating model.
3. c. Explanation: A business solution is a set of interacting business
capabilities that delivers specific, or multiple, business outcomes.
4. d. Explanation: A business outcome is a specific measurable
result of an activity, process, or event within the business.
5. a. Explanation: Technology Specific is a domain-specific
architecture. Within this scope, the business requires help with
finding and purchasing the right product or group of products in
an architecture focus area. This might be data center, security, or
enterprise networking focused but doesn’t cross between the
different architecture focus areas.
6. b. Explanation: Technology Architecture is a multi-domain
architecture (MDA), also referred to as cross-architecture. In this
scope, the business needs help understanding the benefits of
multi-domain technology architecture and how to show the value
it provides to the business. With this scope, two or more
architecture focus areas are incorporated.
7. c. Explanation: Business Solutions is a partial business
architecture scope for a business that requires expertise to help
solve its business problems and determine how to measure the
business impact of its technology investments (CAPEX, OPEX,
ROI, TCO, etc.).
8. d. Explanation: Business Transformation is a business-led
architecture scope for a business that requires help with
transforming its business capabilities to facilitate innovation to
accelerate the company’s digitization.
9. c. Explanation: The Open Group Architecture Framework (TOGAF) is an
enterprise architecture methodology that incorporates a high-level
framework for enterprises that focuses on designing, planning,
implementing, and governing enterprise information technology
architectures. TOGAF helps businesses organize their processes
through an approach that reduces errors, decreases timelines,
maintains budget requirements, and aligns technology with the
business to produce business-impacting results.
10. a. Explanation: The Information Technology Infrastructure Library (ITIL) is
a set of best practice processes for delivering IT services to your
organization’s customers. ITIL focuses on ITSM and ITAM, and
includes processes, procedures, tasks, and checklists that can be
applied by any organization. The three focus areas of ITIL are
Change Management, Incident Management, and Problem
Management.
Chapter 6
1. a. Explanation: The correct port-based Metro Ethernet transport
mode for an E-Line service is Ethernet private line (EPL).
2. d. Explanation: The correct VLAN-based Metro Ethernet
transport mode for an E-LAN service is Ethernet virtual private
LAN (EVPLAN).
3. b. Explanation: The correct VLAN-based Metro Ethernet
transport mode for an E-Line service is Ethernet virtual private
line (EVPL).
4. c. Explanation: The correct port-based Metro Ethernet transport
mode for an E-LAN service is Ethernet private LAN (EPLAN).
5. c. Explanation: The correct transport mode over pseudowire (PW)
for a Frame Relay access connection is port-based per DLCI.
6. a. Explanation: The correct transport mode over PW for an ATM
access connection is AAL5 protocol data units over PW or cell relay over PW.
7. b. Explanation: The correct transport mode over PW for an
Ethernet access connection is Protocol-based per VLAN.
8. d and e. Explanation: The Layer 2 transport options that are the
best options for a very large enterprise Data Center Interconnect
(DCI) solution are Provider Backbone Bridging with Ethernet
VPN (PBB-EVPN) and Provider Backbone Bridging with Virtual
Private LAN Service (PBB-VPLS). EVPN can support up to a
large enterprise scenario, H-VPLS can support up to a medium
enterprise scenario, and VPLS can only support up to a small
scenario.
9. b and d. Explanation: The Layer 2 transport options that are best
for MAC mobility are EVPN and PBB-EVPN because of the
sequence number attribute being leveraged.
Chapter 7
1. b. Explanation: PortFast is a feature that bypasses the listening
and learning phases to transition directly to the forwarding state.
STP edge port is another name for this but is less well-known.
2. d. Explanation: BPDU Filter is a feature that suppresses BPDUs
on ports.
3. a. Explanation: Root Guard is a feature that prevents external
switches—switches that are not part of your network or under
your control—from becoming the root of the Spanning Tree
Protocol tree.
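The three features described in the answers above can be sketched in Cisco IOS-style configuration; interface names are illustrative, and exact syntax varies by platform and software release:

```
! Access port facing a host: skip listening/learning (PortFast / STP edge port)
interface GigabitEthernet1/0/1
 switchport mode access
 spanning-tree portfast
! Suppress BPDUs on this port (BPDU Filter)
 spanning-tree bpdufilter enable
!
! Uplink where an external switch must never become root (Root Guard)
interface GigabitEthernet1/0/24
 spanning-tree guard root
```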
4. c. Explanation: Multiple Spanning Tree (MST) is a protocol that
is used to group multiple VLANs into a single STP instance. This
also reduces the total number of spanning-tree instances that
match the physical topology of the network, reducing the CPU
load.
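A minimal IOS-style sketch of grouping VLANs into MST instances (the region name, revision number, and VLAN ranges are illustrative):

```
! Two MST instances replace per-VLAN spanning-tree instances
spanning-tree mode mst
spanning-tree mst configuration
 name CAMPUS
 revision 1
 instance 1 vlan 10-19
 instance 2 vlan 20-29
```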
5. c. Explanation: Virtual Router Redundancy Protocol (VRRP) is a
standards-based first-hop routing protocol that provides
redundancy with a virtual router elected as the master.
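A minimal IOS-style VRRP sketch (addresses and the group number are illustrative); the router with the higher priority is elected master for the virtual gateway address:

```
interface Vlan10
 ip address 10.1.10.2 255.255.255.0
 ! Virtual gateway address shared by the VRRP group
 vrrp 10 ip 10.1.10.1
 ! Higher priority than the default of 100 makes this router master
 vrrp 10 priority 110
 vrrp 10 preempt
```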
6. d. Explanation: Link Aggregation Control Protocol (LACP) is a
protocol defined in IEEE 802.3ad that provides a method to
control the bundling of several physical ports to form a single
logical channel.
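A minimal IOS-style sketch of an LACP bundle (interface numbers are illustrative); `mode active` causes the member ports to initiate LACP negotiation:

```
! Bundle two physical ports into one logical channel via LACP (802.3ad)
interface range GigabitEthernet1/0/1 - 2
 channel-group 1 mode active
!
interface Port-channel1
 switchport mode trunk
```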
7. b. Explanation: Virtual Switching System (VSS) is a Cisco
technology that allows certain Cisco switches to bond together as
a single virtual switch.
8. d. Explanation: The primary design concern for a Looped
Triangle Topology is that STP limits the ability to utilize all the
available uplinks within a VLAN or STP/MST instance.
9. a. Explanation: The primary design concern for a Loop-Free
Inverted U Topology is that it introduces a single point of failure
to the design if one distribution switch or uplink fails.
10. b. Explanation: The primary design concern for a Looped Square
Topology is that a significant amount of access layer traffic might
cross the interswitch link to reach the active FHRP.
11. c. Explanation: The primary design concern for Loop-Free U
Topology is the inability to extend the same VLANs over more
than a pair of access switches.
Chapter 8
1. c. Explanation: Link-state advertisements (LSAs) are used by
OSPF routers to exchange routing and topology information.
When neighbors decide to exchange routes, they send a list of all
LSAs in their respective topology database. Each router then
checks its topology database and sends a Link State Request
message requesting all LSAs that were not found in its topology
table. Other routers respond with the Link State Update that
contains all LSAs requested by the neighbor.
2. a. Explanation: Type 3 LSAs are generated by area border routers
(ABRs) to advertise networks from one area to the rest of the
areas in an autonomous system.
3. b. Explanation: Point-to-multipoint indicates a topology where
one interface can connect to multiple destinations. Each
connection between a source and destination is treated as a point-
to-point link. An example would be a Point-to-Multipoint Cisco
Dynamic Multipoint VPN (DMVPN) topology. OSPF will not
elect DRs and BDRs and all OSPF traffic is multicast to
224.0.0.5.
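A hedged IOS-style sketch of setting the point-to-multipoint network type on a DMVPN-style mGRE hub tunnel (addresses and the tunnel number are illustrative):

```
interface Tunnel0
 ip address 172.16.0.1 255.255.255.0
 ! No DR/BDR election; OSPF packets are multicast to 224.0.0.5
 ip ospf network point-to-multipoint
 tunnel mode gre multipoint
```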
4. a, d, and e. Explanation: Neighborship is discovered and
maintained using hello packets. These packets are sent using
multicast. Update messages are used to send routing information
to neighbors. These packets are sent to either one neighbor via
unicast or to multiple neighbors via multicast. They are sent using
Reliable Transport Protocol. EIGRP uses query packets when a
router loses a path to a network. The router sends a query packet
to its neighbors, asking if they have information on that network.
These packets are sent via multicast and using Reliable Transport
Protocol.
5. b. Explanation: An EIGRP stub router will inform neighbors via
the hello packet that it’s a stub; by doing so, neighbors will not
send queries to the router. EIGRP stubs are typically used at
spoke locations, as stubs cannot be used as transit routers.
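A minimal IOS-style sketch of a spoke router configured as an EIGRP stub (the AS number and network statement are illustrative):

```
router eigrp 100
 network 10.2.0.0 0.0.255.255
 ! Advertise stub status in hellos; neighbors will not send queries,
 ! and this router will not be used for transit
 eigrp stub connected summary
```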
6. b, d, and e. Explanation: Update messages are used to send
routing information to neighbors. These packets are sent to either
one neighbor via unicast or to multiple neighbors via multicast.
They are sent using Reliable Transport Protocol. EIGRP uses
query packets when a router loses a path to a network. The router
sends a query packet to its neighbors, asking if they have
information on that network. These packets are sent via multicast
and using Reliable Transport Protocol. Reply packets are used by
routers that received the query packet to respond to the query.
These are sent unicast to the router that sent the query and are
sent using Reliable Transport Protocol.
7. a. Explanation: IS-IS uses something like a designated router in
OSPF, but in Intermediate System-to-Intermediate System (IS-IS)
it’s referred to as a Designated Intermediate System. A DIS is
elected and is a pseudo node of the process. If you were to not
have a DIS on a multiaccess environment, then all the LSPs
would be flooded to other routers.
8. b. Explanation: An Intermediate System-to-Intermediate System
(IS-IS) level 2 router has the link-state information for the intra-
area as well as inter-area routing. The L2 router sends only L2
hellos. The IS-IS Level 2 area is often compared to the OSPF backbone, area 0.
9. c. Explanation: The overload bit in Intermediate System-to-
Intermediate System (IS-IS) is used to improve convergence and
prevent black-holing of traffic in the environment. When the
overload bit is set, traffic is gracefully redirected around the
device on which the bit is set, making it a non-transit router.
By leveraging the overload bit, traffic is not sent to routers on
which other processes (e.g., BGP) have not yet converged and
that would therefore drop the traffic for lack of complete
routing information.
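As a minimal IOS-style sketch (the NET value is invented for this example), the overload bit can be set at startup until BGP converges:

```
router isis
 net 49.0001.0000.0000.0001.00
 ! Set the overload bit at boot and clear it once BGP signals
 ! convergence (or a timer expires), keeping the router out of
 ! transit paths until it has full routing information.
 set-overload-bit on-startup wait-for-bgp
```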
10. d. Explanation: To manipulate traffic inside your own AS, local
preference can be used. Local preference is carried inside an AS
(iBGP) so you can manipulate traffic at one node and the attribute
is carried inside your AS.
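For example, on IOS-style platforms (the AS numbers, neighbor address, and route-map name below are illustrative assumptions), local preference can be raised on routes learned from a preferred provider:

```
route-map SET-LP permit 10
 ! Routes learned from this neighbor are preferred for outbound
 ! traffic everywhere inside the local AS (the default is 100).
 set local-preference 200
!
router bgp 65010
 neighbor 192.0.2.1 remote-as 65001
 neighbor 192.0.2.1 route-map SET-LP in
```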
11. a and c. Explanation: AS path prepending is a very common way to
influence traffic into your AS. If you want one router to be preferred
over another, then on the less-preferred router, prepend additional
copies of your AS number to the path to make the route “look not as
good.” Another option, although less common, is the use of MED.
MED is typically only honored when you connect to the same
neighboring AS over multiple links, not when you connect to
different ASs.
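A hedged IOS-style sketch of prepending (AS numbers and the neighbor address are invented for illustration):

```
route-map PREPEND-OUT permit 10
 ! Make routes advertised over this less-preferred link look
 ! worse to the rest of the Internet by lengthening the AS path.
 set as-path prepend 65010 65010 65010
!
router bgp 65010
 neighbor 198.51.100.1 remote-as 65002
 neighbor 198.51.100.1 route-map PREPEND-OUT out
```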
Chapter 9
1. b. Explanation: The unique route distinguisher (RD) per VPN
allocation model is simple to design and manage, and requires
lower hardware resource consumption compared to the other
models.
2. c. Explanation: The unique RD per VPN per provider edge (PE)
allocation model provides both active/active and active/standby
connectivity design options for a multihomed remote site.
3. b. Explanation: EIGRP site of origin (SoO) is specifically used to
help avoid or mitigate the impact of routing loops and race
conditions in complex topologies leveraging EIGRP as a PE-CE
routing protocol that contain both MPLS VPN and backdoor
links.
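As a rough sketch of how SoO is applied on a PE (IOS-style syntax; the route-map name, VRF name, and SoO value are assumptions, and exact commands vary by platform and release):

```
route-map SOO-SITE1 permit 10
 set extcommunity soo 65000:100
!
interface GigabitEthernet0/1
 ip vrf forwarding CUSTOMER-A
 ! Tag EIGRP routes learned on this CE-facing interface; routes
 ! arriving with the same SoO value are filtered, breaking loops
 ! through backdoor links.
 ip vrf sitemap SOO-SITE1
```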
4. c. Explanation: The OSPF DN bit is set when routes are
redistributed from the BGP into OSPF. This bit is then checked
when OSPF redistributes routes into BGP at another device. If the
OSPF DN bit is set, those routes will not be redistributed back
into the super backbone. BGP and EIGRP SoO are unrelated here.
The OSPF domain ID identifies the OSPF routing domain (by
default it is derived from the process ID) and is used by the PE
devices to determine whether redistributed routes belong to the
same OSPF domain.
5. d. Explanation: The route distinguisher is prepended per MP-BGP
VPNv4/v6 prefix to seamlessly transport customer routes
(overlapping and nonoverlapping) over one common
infrastructure.
6. a. Explanation: The route target is specifically used to import and
export routes from and to a VRF.
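The RD and RT relationship can be sketched in an IOS-style VRF definition (the VRF name and community values are illustrative):

```
vrf definition CUSTOMER-A
 ! The RD is prepended to each prefix so that overlapping
 ! customer routes remain unique inside MP-BGP.
 rd 65000:10
 address-family ipv4
  ! Route targets control which VRFs import and export
  ! these routes.
  route-target export 65000:10
  route-target import 65000:10
```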
7. b. Explanation: The hub-and-spoke MPLS L3VPN topology
sends all remote site traffic (spokes) to a specific hub location, in
this case the data center, for a specific reason. In this case, the
firewall in the data center must inspect the traffic before allowing
spoke traffic to traverse to another spoke.
8. d. Explanation: The extranet and shared services MPLS L3VPN
topology allows for resources to be properly shared between
different VRFs and VPNs within the MPLS environment. In this
case, the Internet service is the resource being shared. This is
accomplished by setting the proper route target imports and
exports.
9. a. Explanation: The underlay network is defined specifically by
the physical switches and routers in the LAN.
10. d. Explanation: In the SD-WAN overlay, virtual networks (VNs)
provide segmentation just like VRFs.
Chapter 10
1. c. Explanation: The data plane is responsible for the fast
forwarding of traffic passing through a network device.
2. b. Explanation: The control plane is like the brain of the network
node and usually controls and handles path selection functions.
3. a. Explanation: The management plane is the plane that is focused
on the management traffic of the device, such as device access,
configuration troubleshooting, and monitoring.
4. c. Explanation: Transit IP traffic is traffic for which a network
device makes a typical routing and forwarding decision regarding
whether to send the traffic over its interfaces to directly attached
nodes.
5. d. Explanation: Exception IP traffic is any IP traffic carrying a
nonstandard “exception” attribute, such as a transit IP packet with
an expired TTL.
6. b. Explanation: Receive IP traffic is traffic destined to the
network node itself, such as toward a router’s IP address, and
requires CPU processing.
7. a. Explanation: Non-IP traffic is typically related to non-IP
packets and almost always is not forwarded, such as MPLS, IS-IS
(CLNP), and Layer 2 keepalives.
8. d. Explanation: Application targeted attacks are mitigated by web
proxy/filtering, e-mail proxy/filtering, and a web application
firewall (WAF).
9. a and b. Explanation: Direct network access is mitigated by a
Layer 2 iACL, a Layer 3 iACL, and a firewall. Layer 2 attacks are
mitigated by iACL, CoPP, and system and topological
redundancy.
10. c. Explanation: A network DoS attack is mitigated by Layer 3 and
Layer 2 network and device security considerations, remotely
triggered black hole (RTBH) filtering, and anomaly-based IDS/IPS.
11. b. Explanation: EAP Transport Layer Security (EAP-TLS)
authentication provides a certificate-based and mutual
authentication of the client and the network. It relies on client-
side certificates (identity certificates) and server-side certificates
to perform authentication and can be used to dynamically
generate user-based and session-based keys to secure future
communications. One limitation of EAP-TLS is that certificates
must be managed on both the client and server side. However, it
is the most secure EAP type because of the mutual authentication
of the client and network.
12. a. Explanation: EAP Message Digest Challenge (EAP-MD5)
authentication provides a base level of EAP support and typically
is not recommended for implementation because it may allow the
user’s password to be derived. However, it is the easiest of the
EAP types to deploy because there is no requirement for
certificate management on either the client or server sides.
Chapter 11
1. b. Explanation: The receiver sensitivity is the most helpful
because it defines the minimum usable signal strength a client can
receive from an access point (AP). The AP cell size is determined
by the distance a client can be located from the AP before the
AP’s signal falls below the receiver sensitivity.
2. c. Explanation: High density in a wireless design is determined by
the number of clients per AP in an area. If the user population is
high in a small area, all of the users might end up joining a single
AP. The goal of a good wireless design would be to add additional
APs and distribute the clients across them, maintaining an
adequate level of performance for each AP. For a high-density
design, the coverage area and cell size per AP is reduced to allow
for higher performance for the clients leveraging each AP.
3. a. Explanation: The customer wants user authentication, so
you could leverage RADIUS, AAA, or NAC (Cisco ISE) servers
to meet that need.
4. d. Explanation: A data-only wireless deployment without any
additional real-time applications being leveraged is usually used
when clients use normal applications that have no specific
performance requirements; thus, there is no need to account for
jitter, latency, or packet loss.
5. c. Explanation: A voice deployment model is indicated because of
the strict jitter requirement given. Jitter implies network
performance that is necessary for real-time applications such as
voice and video.
6. b. Explanation: If the AP is already at its lowest transmit power
level setting, your next strategy should be to connect an external
directional antenna to the AP. The patch antenna will focus the
AP’s RF energy into a smaller area and will help reduce the AP’s
cell size.
7. a, b, and d. Explanation: Wireless network designs focused on
voice should use a minimum data rate of 12 Mbps. It is important
to consider the number of simultaneous calls that each AP can
support, based on the minimum data rate. As a general guideline,
you should leverage the many 5-GHz channels, but carefully
validate each DFS channel and use it only if radar signals have
not been detected on it.
8. c. Explanation: Of the options available, the floor plans will be
most helpful, as you can load them directly into any wireless
planning tool to help identify where to place APs within the
wireless network design.
9. d. Explanation: The closer a client is located in relation to an AP,
the stronger the AP’s signal will be. With a stronger received
signal, and constant or increasing SNR, the client will likely try to
use a faster data rate.
Chapter 12
1. a and c. Explanation: Zero-touch provisioning reduces the
amount of time needed to deploy new infrastructure and
eliminates the need to troubleshoot network outages caused by
human error.
2. a, c, and d. Explanation: With ZTP we can have all of our cabling
validated to ensure it’s correct, ensure we have a saved
configuration in a repository of our ZTP-enabled devices, and
ensure all ZTP devices are running a specific code version.
3. a and c. Explanation: Infrastructure as code reduces the amount
of time needed to deploy new infrastructure and eliminates the
need to troubleshoot network outages caused by human error.
4. a, c, and d. Explanation: A CI/CD pipeline reduces the amount of
time needed to deploy new infrastructure, reduces the need to
troubleshoot network outages caused by human error,
and shortens time to market for new services.
5. a. Explanation: The proper steps in a CI/CD pipeline are source,
build, test, and deploy.
Chapter 13
1. c. Explanation: Internet Group Management Protocol (IGMP)
snooping is a Layer 2 multicast protocol running on IPv4
networks that listens on multicast protocol packets between a
Layer 3 multicast device and user hosts to maintain outbound
interfaces of multicast packets. Multicasts may be filtered from
the links that do not need them, conserving bandwidth on those
links.
2. a. Explanation: Multicast Listener Discovery (MLD) snooping is
a Layer 2 multicast protocol running on IPv6 networks that listens
on multicast protocol packets between a Layer 3 multicast device
and user hosts to maintain outbound interfaces of multicast
packets. MLD snooping manages and controls multicast packet
forwarding at the data link layer. Think of MLD snooping as
IGMP snooping but for IPv6.
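On IOS-style switches, both features are typically enabled with a single global command each (a minimal sketch; defaults and per-VLAN options vary by platform):

```
! IPv4: constrain Layer 2 multicast flooding to interested ports
ip igmp snooping
! IPv6 equivalent
ipv6 mld snooping
```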
3. b. Explanation: Reverse path forwarding (RPF) is the mechanism
used by Layer 3 nodes in the network to optimally forward
multicast datagrams without loops.
4. a. Explanation: Protocol-Independent Multicast (PIM)
Bidirectional (BIDIR) builds bidirectional shared trees connecting
multicast sources and receivers. It never builds a shortest-path
tree, so it scales well because it does not need a source-specific
state. PIM-BIDIR is the best multicast routing protocol for many-
to-many traffic pattern requirements.
5. c. Explanation: PIM Source-Specific Multicast (PIM-SSM) builds
trees that are rooted in just one source. SSM eliminates the
requirement for rendezvous points (RPs) and shared trees of
sparse mode and only builds a shortest-path tree (SPT).
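A minimal IOS-style SSM sketch (interface name is illustrative); note that receivers need IGMPv3 to join a specific (S,G):

```
! Reserve the default SSM range 232.0.0.0/8 (no RP required)
ip pim ssm default
!
interface GigabitEthernet0/1
 ip pim sparse-mode
 ! IGMPv3 lets receivers signal source-specific (S,G) joins
 ip igmp version 3
```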
6. d. Explanation: PIM Bootstrap Router (PIM-BSR) is like Cisco’s
Auto-RP in that it is a protocol that is used to automatically find
the rendezvous point (RP) in a multicast network. BSR is a
standard and included in PIMv2, unlike Auto-RP, which is a
Cisco-proprietary protocol. BSR sends messages on a hop-by-hop
basis and does so by sending its packets to multicast address
224.0.0.13.
7. a. Explanation: With PIM-BIDIR, all traffic will follow the path
through the RP, wherever in the network that RP is located.
Because of this, a network designer will need to put the RP
between the sources and receivers of the critical application.
8. a, c, and d. Explanation: There are four factors that influence the
placement of a multicast RP: the multicast protocol that is used,
the multicast tree model, the application multicast requirements,
and a targeted network segment between the sources and
receivers (LAN versus WAN).
9. c. Explanation: Anycast-RP is based on using two or more RPs
configured with the same IP address on their loopback addresses.
Typically, the Anycast-RP loopback address is configured as a
host IP address (32-bit mask). From the downstream router’s
point of view, the Anycast-RP will be reachable via the unicast
IGP routing. Multicast Source Discovery Protocol (MSDP)
peering and information sharing is also required between the
Anycast-RPs in this design because it is common for some
sources to register with one RP and receivers to join a different
RP.
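As a hedged sketch of one Anycast-RP node (IOS-style syntax; all addresses are invented for illustration):

```
! On each RP: the same host address on a loopback
interface Loopback1
 ip address 10.255.255.1 255.255.255.255
!
ip pim rp-address 10.255.255.1
! MSDP peering between the RPs (using their unique Loopback0
! addresses) shares active-source information between them.
ip msdp peer 10.0.0.2 connect-source Loopback0
ip msdp originator-id Loopback0
```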
10. d. Explanation: Phantom RP is a redundancy consideration for the
RP in a PIM-BIDIR deployment. To create a phantom RP, two
routers in a network segment will need to be configured with the
same IP address but different subnet masks. Then the interior
gateway protocol can control the preferred path for the root
(phantom RP) of a multicast shared tree based on the longest
match (longest subnet mask) where multicast traffic can flow
through. The other router with the shorter mask can be used in the
same manner if the primary router fails. This means the failover
to the secondary shared tree path toward the phantom RP will rely
on the unicast IGP convergence.
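The phantom RP idea can be sketched as follows (IOS-style syntax; addresses and masks are invented, and the RP address 10.0.0.2 is deliberately not assigned to any device):

```
! Router A (preferred path): the longer mask wins longest match
interface Loopback0
 ip address 10.0.0.1 255.255.255.252
!
! Router B (backup): same address, shorter mask
interface Loopback0
 ip address 10.0.0.1 255.255.255.248
!
! Both routers point at an address inside the subnet that exists
! on neither router -- the "phantom" RP.
ip pim rp-address 10.0.0.2 bidir
```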
Chapter 14
1. d. Explanation: Migrate the core to be in dual-stack mode first,
and then other enterprise modules can be gradually migrated to
IPv6-only or dual stack, depending on the goals and requirements
of the business. Migrating to IPv6 this way ensures there is no
service interruption.
2. b. Explanation: To provide IPv6 access either inbound or
outbound at the enterprise Internet edge, a translation mechanism
is required that is either based on a load balancer, pure DNS, or
classical NAT64.
3. a. Explanation: Dual stack is when a device is running both IPv4
and IPv6 protocol stacks. When all of the devices in the network
are running like this, it is called end-to-end dual stack.
4. c. Explanation: Generic Routing Encapsulation (GRE) is a
protocol for encapsulating packets of one protocol inside the
packets of another protocol. GRE is one way
to set up a direct point-to-point connection across a network. In
this specific case, IPv6 would run through a GRE tunnel that
traverses the IPv4 network.
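A minimal IOS-style sketch of IPv6 over a GRE/IPv4 tunnel (interface names and addresses are illustrative):

```
interface Tunnel0
 ! GRE over IPv4 is the default tunnel mode on IOS
 tunnel source GigabitEthernet0/0
 tunnel destination 203.0.113.2
 ! IPv6 runs inside the tunnel across the IPv4-only core
 ipv6 address 2001:db8:1::1/64
```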
5. b. Explanation: The WFQ algorithm offers a dynamic distribution
among all traffic flows based on weights, such as those derived
from DSCP values.
6. d. Explanation: First-in, first-out (FIFO) queuing is the default
queuing when no other queuing is used. Although FIFO is
considered suitable for large links that have a low delay with very
minimal congestion, it has no priority or classes of traffic.
7. c. Explanation: Priority queuing typically uses four to six queues
with different priority levels, and the higher-priority queues are
always serviced first.
8. a. Explanation: LLQ supports real-time queuing and a minimum
bandwidth guarantee.
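A hedged IOS-style MQC sketch of an LLQ policy (class and policy names, and the 10 percent figure, are illustrative):

```
class-map match-any VOICE
 match dscp ef
!
policy-map WAN-EDGE
 class VOICE
  ! Strict-priority queue for real-time traffic, policed to
  ! 10 percent of the link to protect other classes
  priority percent 10
 class class-default
  ! Remaining traffic receives fair queuing
  fair-queue
```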
9. b. Explanation: Network Configuration Protocol (NETCONF) is
a network management protocol defined by the IETF in RFC
6241. All NETCONF messages are transported over SSH and
encoded in XML.
10. c. Explanation: RESTCONF, which is defined in RFC 8040, is an
HTTP-based protocol that provides a programmatic interface for
accessing YANG-modeled data. RESTCONF can be encoded in
either XML or JavaScript Object Notation (JSON). In most
design situations, it is best to leverage NETCONF for routers
and switches and RESTCONF for controller northbound
communication.
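For example, the NETCONF &lt;get-config&gt; operation defined in RFC 6241, retrieving the running configuration, is an XML message such as:

```xml
<rpc message-id="101"
     xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
  <get-config>
    <source>
      <running/>
    </source>
  </get-config>
</rpc>
```

The server replies with an &lt;rpc-reply&gt; carrying the requested configuration data, encoded according to the device's YANG models.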
Chapter 15
1. a and c. Explanation: The routed access design model is easier to
troubleshoot and has a faster convergence time than a classical
Spanning Tree Protocol (STP)-based access design by eliminating
the reliance on STP and First Hop Redundancy Protocol (FHRP),
and relying on equal-cost multipath (ECMP) for traffic load
sharing.
2. b. Explanation: One of the technical network design limitations of
a routed access design model is the inability to span Layer 2
natively.
3. d. Explanation: A business network design limitation of the
routed access design model is the higher monetary cost for the
routing capabilities on the switch infrastructure for the
corresponding licensed features required for routing protocols.
4. b. Explanation: Switch clustering is the best access-distribution
connectivity model for flexibility.
5. c. Explanation: The routed access connectivity model supports
both scale up and scale out.
6. a. Explanation: Enhanced Interior Gateway Routing Protocol
(EIGRP) has the highest architecture flexibility without
limitations to the number of tiers while including proper route
summarization. Open Shortest Path First (OSPF) and
Intermediate System-to-Intermediate System (IS-IS) have
inherent limitations as the number of tiers increases that make
them both not as flexible. The most flexible option is not listed,
which is Border Gateway Protocol (BGP).
7. b. Explanation: Of the routing protocols present, OSPF is the only
IGP that supports MPLS Traffic Engineering (MPLS-TE).
8. d. Explanation: The most scalable virtualization option available
is MPLS with Multiprotocol Border Gateway Protocol (MP-
BGP). With that said, this also comes with a high operational
complexity and requires staff to have advanced routing
experience.
9. a. Explanation: The design option that requires the least level of
routing expertise is VLANs + 802.1Q + VRFs.
Chapter 16
1. a. Explanation: LOCAL_PREFERENCE is a Border Gateway
Protocol (BGP) attribute that is applied as routes come into the
network to influence traffic outbound and is carried throughout
the local autonomous system (AS).
2. b. Explanation: AS-PATH prepend is a BGP attribute manipulation
that is applied as routes leave the network (outbound via provider
one) to influence traffic into the network (inbound via provider
two), creating a deterministic path back into the network.
3. a. Explanation: Equal and unequal load sharing provides
flexibility with limited monetary spending.
4. b. Explanation: Active/standby is used in a situation where a
backup link is needed but only as a requirement to survive an
outage on the primary (active) link.
5. c. Explanation: Equal and unequal load sharing with two edge
routers is used when there are multiple data centers or multiple
sites with a large campus environment hanging off of the inside of
the Internet edge architecture with the need to leverage the
internal campus routing information to determine the best Internet
exit point.
Chapter 17
1. a. Explanation: An MPLS L2VPN WAN allows a customer to roll
out new transport technologies and services, like IPv6 or
multicast, rapidly without having to wait on the provider to make
any changes.
2. b. Explanation: An MPLS L3VPN WAN controls the number of
routing neighborships deterministically, one or two BGP
neighbors, while an MPLS L2VPN WAN can have hundreds of
IGP neighbors across the same link. This second situation can be
mitigated with proper configurations if needed.
3. c. Explanation: Of the options provided, leveraging the Internet as
a WAN is the cheapest solution.
4. b. Explanation: The best WAN transport model for a very large
number of remote sites is the MPLS L3VPN WAN option. The
MPLS L2VPN WAN introduces some routing and adjacency
issues and limitations with a large number of sites. The Internet as
WAN option is limited to the VPN hardware supporting the
number of concurrent VPN sessions required.
5. b. Explanation: Dynamic multipoint VPN (DMVPN) is the most
scalable option that also provides spoke registration and spoke-to-
spoke communication dynamically.
6. c and d. Explanation: IPsec and Generic Routing Encapsulation
(GRE) both provide point-to-point network topologies, while
remote access VPN and DMVPN provide hub-spoke network
topologies.
7. b. Explanation: Dynamic multipoint VPN (DMVPN) is the only
option that inherently allows spoke sites to pass traffic directly
between them, without going through a hub or data center location.
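A hedged IOS-style sketch of a DMVPN hub's mGRE tunnel (addressing and the NHRP network ID are invented for illustration):

```
! Hub multipoint GRE tunnel
interface Tunnel0
 ip address 10.0.0.1 255.255.255.0
 ! NHRP lets spokes register with the hub and resolve each
 ! other's addresses dynamically, enabling direct
 ! spoke-to-spoke tunnels.
 ip nhrp network-id 1
 ip nhrp map multicast dynamic
 tunnel source GigabitEthernet0/0
 tunnel mode gre multipoint
```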
8. b and d. Explanation: From the options provided, the best options
are identifying where the default route is being generated (and
how it will be propagated in the new WAN architecture) and
knowing the logical and physical site architectures (like OSPF
areas and area types, and if there are backdoor links).
Appendix B
Note
The downloaded document has a version number. Compare the
version of the print Appendix B (Version 1.0) with the latest online
version of this appendix, and then do the following:
Same version: Ignore the PDF that you downloaded from the
companion website.
Website has a later version: Ignore this Appendix B in your book
and read only the latest version that you downloaded from the
companion website.
Technical Content
The current Version 1.0 of this appendix does not contain additional
technical coverage.
Glossary
Numerics
2-tier model An application model that is like the client/server architecture,
where communication takes place between client and server. In this model,
the presentation layer or user interface layer runs on the client side while
the dataset layer gets executed and stored on the server side.
3-tier model An application model that has three tiers (aka layers):
presentation (web), intermediate (application), and database.
A
add technology A network design use case that includes adding a new
technology, functionality, or capability to an architecture.
administrative distance (AD) A rating of the trustworthiness of a routing
information source. A lower number is preferred.
Anycast-RP Based on using two or more rendezvous points (RPs)
configured with the same IP address on their loopback addresses. Typically,
the Anycast-RP loopback address is configured as a host IP address (32-bit
mask). From the downstream router’s point of view, the Anycast-RP will be
reachable via the unicast IGP routing. MSDP peering and information
sharing is also required between the Anycast-RPs in this design, because it
is common for some sources to register with one RP and receivers to join a
different RP.
AP cell The RF coverage area of a wireless access point; also called the
basic service area (BSA).
application requirements The items an application needs to properly
function. For example, VoIP has specific requirements of latency, loss, and
delay within the network to function properly.
application tier The tier (aka layer) where all of the application’s functions
and logic occur. This layer processes tasks, functions, and commands;
makes decisions and evaluations; and performs calculations. It also is how
data is moved between the Web and database layers. This is often referred
to as the Presentation or Logic layer of the application.
area border router (ABR) An OSPF router that is connected to more than
one area.
authentication server A RADIUS server that contains an authentication
database.
authenticator A network device that the client is connecting to. In a wired
deployment, this could be a network switch, and in a wireless deployment,
this could be the local access point the client is connecting to or the wireless
controller that manages the access points.
authorization for access For a resource, in this case a user or device, to be
authorized for access to another resource, in this case an asset, application,
or system, the trust score and the entitlement level are combined to
determine the authorization for access. Just because a trust score is high
enough to access a resource, if the user or device doesn’t have the correct
entitlement, they will not have the appropriate authorization for access.
autonomous system boundary router (ASBR) An OSPF router that
injects external LSAs into the OSPF database.
Auto-RP A Cisco-proprietary protocol that automatically communicates
rendezvous points (RPs) within a PIM network. Candidate RPs send their
announcements to the RP mapping agents using multicast address
224.0.1.39. The 224.0.1.40 address is used as the destination for
Auto-RP discovery messages, through which the RP mapping agent
announces the selected RPs to the other routers. The RP mapping
agents select the RP for a group based on the highest IP address of
all candidate RPs.
availability Ensures that access to services and systems is always available
and information is accessible by authorized users when required.
B
Border Gateway Protocol (BGP) An interdomain routing protocol that
allows BGP speakers residing in different autonomous systems to exchange
routing information.
bottom-up approach A design approach that focuses on selecting network
technologies and design models first. This can impose a high potential for
design failures, because the network will not meet the requirements of the
business or its applications.
BPDU Filter A feature that suppresses BPDUs on ports.
BPDU Guard A feature that disables a PortFast-enabled port if a BPDU is
received.
brownfield A network design use case that already has an environment
with production traffic running through it and now requires a network
designer to modify the network architecture.
business architecture (BA) Enables everyone, from strategic planning
teams to implementation teams, to get “on the same page”, enabling them to
address challenges and meet business objectives.
Business, Operations, Systems, and Technology (BOST) Framework that
provides the structure for enterprise models, their elements, and
relationships. Each of the four BOST elements has its own views. In this
framework, the requirements flow downward through the four views. The
capabilities flow upward in response to these requirements, creating a
mapping between the requirements and capabilities. The success of this
framework is based on the ability of a business to align its capabilities with
the constantly changing requirements in all four views.
business architecture scope Includes four levels of alignment, Technology
Specific, Technology Architecture, Business Solutions, and Business
Transformation.
business capability A function provided by a solution that directly meets a
business outcome. Business capabilities are not solutions. Business
capabilities are what you get from a solution. A network access control
solution provides a business the session- and transaction-based security
capability, for example. Most solutions provide multiple capabilities. Some
solutions provide parts of multiple capabilities; when they are combined
with other solutions, the business can get a number of capabilities that will
make them successful.
business driver What a business has to do and why. Business drivers are
what organizations must follow. A business driver is usually the reason a
business must achieve a specific outcome. It is the “why” the business is
doing something.
business outcome An underlying goal a business is trying to achieve. A
business outcome equates to the end result: save money, diversify the
business, make more money, or fill a specific need. A business outcome
will specifically map to a business driver.
business priority The top buckets the business is focused on that all other
decisions must align with to ensure success. Each business has a set of
priorities that are typically based on strategies adopted for the achievement
of goals. These business priorities can influence the planning and design of
IT network infrastructure. Therefore, network designers must be aware of
these business priorities to align them with the design priorities. This
ensures the success of the network they are designing by delivering
business value.
business solution A set of interacting business capabilities that delivers
specific, or multiple, business outcomes.
Business Solutions A partial business architecture scope for a business that
requires expertise to help solve its business problems and determine how to
measure the business impact of its technology investments (CAPEX,
OPEX, ROI, TCO, etc.).
Business Transformation A business-led architecture scope for a business
that requires help with transforming its business capabilities to facilitate
innovation to accelerate the company’s digitization.
C
capital expenses (CAPEX) The major purchases a business makes that are
to be used over a long period of time. Examples of CAPEX include fixed
assets, such as buildings and equipment.
carrier-neutral facility (CNF) A data center that is not owned by a
network provider but rather is entirely independent of service providers.
class-based weighted fair queuing (CBWFQ) Provides class-based
queuing (user-defined classes) with a minimum bandwidth guarantee. It
supports flow-based WFQ for undefined classes, such as class-default. It
supports low-latency queuing (LLQ).
cloud access point (CAP) A predefined location to get access to a cloud
service provider (CSP) or multiple CSPs. Sometimes CAPs are located in a
carrier-neutral facility (CNF).
confidentiality Protects against unauthorized access to information to
maintain the desired level of secrecy of the transmitted information across
the internal network or public Internet.
continuous delivery (CD) The practice of automatically preparing code
changes for release into a production environment. With CD, all code
changes are validated in a testing environment before being deployed into a
production environment. When CD is properly implemented, the team will
always have a deployment-ready build that has passed through the
validation test process.
continuous integration (CI) The practice of automating the integration of
code changes from multiple team members into a single project.
Control and Provisioning of Wireless Access Points (CAPWAP) A logical
network connection between access points and a wireless LAN controller.
CAPWAP is used to manage the behavior of the APs as well as tunnel
encapsulated 802.11 traffic back to the controller. CAPWAP sessions are
established between the AP’s logical IP address (gained through DHCP)
and the controller’s management interface.
control plane Like the brain of the network node, it usually controls and
handles all Layer 3 functions.
controller Software-based component that is responsible for the centralized
control plane of the SD-WAN fabric network. It establishes a secure
connection to each edge router and distributes routes and policy information
to it. It also orchestrates the secure data plane connectivity between the
different edge routers by distributing crypto key information, allowing for a
very scalable, IKE-less architecture.
D
data plane Responsible for the fast forwarding of traffic passing
through a network device.
database tier The tier (aka layer) where information is stored and retrieved
from a database. The information is then passed back to the intermediate
(application) layer and then eventually back to the end user.
decision matrix Serves the same purpose as a decision tree, but with the
matrix, network designers can add more dimensions to the decision-making
process.
decision tree A helpful tool that a network designer can use to compare
multiple design options, or perhaps protocols, based on specific criteria. A
decision tree is a one-dimensional tool.
Department of Defense Architecture Framework (DODAF) Defines a
common approach for presenting, describing, and comparing DoD
enterprise architectures across organizational, joint, or multinational
boundaries. DODAF leverages common terminology, assumptions, and
principles to allow for better integration between DoD elements. This
framework is suited to large systems with complex integration and
interoperability challenges. One element of this framework that is unique is
its use of views. Each view offers an overview of a specific area or function
and provides details for specific stakeholders within the different domains.
design failure A network design use case that has a problem that needs to
be resolved. A simple technical example of this is not aligning the critical
roles of Spanning Tree Protocol (STP) and First Hop Redundancy Protocol
(FHRP). If your STP root bridge and your FHRP default gateways are not
aligned to correct devices, this would lead to a design failure situation.
distance-vector routing protocol A routing protocol that advertises the
entire routing table to its neighbors.
divestment A network design use case that requires a network designer to
split an architecture into two or more separate architectures that can each
function as their own entity.
dynamic factors Sources of data that can be analyzed at the time of access
to change what level of access and authorization (i.e., the trust score of the
transaction in question) is being provided. The most common dynamic
factor is threat intelligence, but can also be geovelocity, GPS coordinates,
and real-time data analytics around the transaction.
Dynamic Frequency Selection (DFS) A mechanism that enables an AP to
dynamically scan for RF channels and avoid those used by radar stations.
E
EAP Flexible Authentication via Secure Tunneling (EAP-FAST) Instead
of using certificates to achieve mutual authentication, authenticates by
means of a PAC (Protected Access Credential), which can be managed
dynamically by the authentication server.
EAP Message-Digest Challenge (EAP-MD5) Authentication that provides
a base level of EAP support and is typically not recommended for
implementation because it may allow the user’s password to be derived.
EAP Transport Layer Security (EAP-TLS) Authentication that provides
a certificate-based and mutual authentication of the client and the network.
It relies on client-side certificates (identity certificates) and server-side
certificates to perform authentication and can be used to dynamically
generate user-based and session-based keys to secure future
communications. One limitation of EAP-TLS is that certificates must be
managed on both the client side and server side.
EAP Tunneled Transport Layer Security (EAP-TTLS) An extension of
EAP-TLS that provides for certificate-based, mutual authentication of the
client and network through an encrypted tunnel. Unlike EAP-TLS, EAP-
TTLS requires only server-side certificates, which makes it easier to deploy
than EAP-TLS but less secure.
edge control Network virtualization design element that represents the
network access point. Typically, it is a host or end-user access (wired,
wireless, or virtual private network [VPN]) to the network where the
identification (authentication) for physical to logical network mapping can
occur. For example, a contracting employee might be assigned to VLAN X,
whereas internal staff are assigned to VLAN Y.
edge router A device that sits at a physical site or in the cloud and provides
secure data plane connectivity among the sites over one or more WAN
transports. Edge routers are responsible for traffic forwarding, security,
encryption, QoS, routing protocols such as BGP and OSPF, and more.
Embedded-RP (IPv6 Embedded-RP) Described in RFC 3956. Facilitates
interdomain IPv6 multicast communication, in which the address of the
rendezvous point (RP) is encoded in the IPv6 multicast group address, and
specifies a PIM-SM group-to-RP mapping to use the encoding, leveraging
and extending unicast-prefix-based addressing. The IPv6 Embedded-RP
technique offers network designers a simple solution to facilitate
interdomain and intradomain communication for IPv6 Any-Source
Multicast (ASM) applications without MSDP.
endpoint device Any device an end user can leverage to access the
enterprise network. Endpoint devices include business-owned assets and
personally owned devices that are approved to access the enterprise
network, potentially in a limited way like the common bring your own
device (BYOD) deployment. In some vendor-specific implementations of
Zero Trust, an endpoint device is also called a policy enforcement point
(PEP).
Enhanced Interior Gateway Routing Protocol (EIGRP) Cisco’s
proprietary enhanced distance-vector routing protocol.
enterprise architecture (EA) A process of organizing logic for business
processes and IT infrastructure reflecting the integration and
standardization requirements of the company’s operating model.
entitled to access The resources a user is allowed to access based on the
role they are in. Users have various roles and, based on those roles, are
entitled to specific access to complete their job. A financial user would need
a different level of access than a human resource user. These two users
should not have the same access or authorization. They may have
overlapping access to resources that they both need to complete their job
functions.
exception IP traffic Any IP traffic carrying a nonstandard “exception”
attribute, such as a transit IP packet with an expired TTL.
Extensible Authentication Protocol (EAP) Used to pass authentication
information between the supplicant and the authentication server. There are
a number of different types of EAP authentication options, with each
handling the authentication differently.
F
failure isolation A technique that creates boundaries within the network
design to help contain problems from propagating.
Federal Enterprise Architecture Framework (FEAF) The industry
standard framework for government enterprise architectures. Within this
framework, the focus is on guiding the integration of strategic, business,
and technology management architecture processes. One of the primary
benefits of this framework is that it focuses on a common approach to
technology acquisition within all U.S. federal agencies.
feedback loop Continuous information sharing to allow for dynamic
changes to policy based on constant analysis of new information via
AI/ML, Big Data, and data lakes.
First Hop Redundancy Protocol (FHRP) A protocol that provides first-
hop gateway redundancy. Options include HSRP, VRRP, and GLBP.
first-in, first-out (FIFO) The default queuing when no other queuing is
used. Although FIFO is considered suitable for large links that have a low
delay with very minimal congestion, it has no priority or classes of traffic.
functional requirements Identify what the different technologies or
systems will deliver to the business from a technological point of view.
Specifically, functional requirements are the foundation of any system
design because they define system and technology functions.
G
Gateway Load Balancing Protocol (GLBP) A Cisco-proprietary protocol
that attempts to overcome the limitations of existing redundant router
protocols by adding basic load-balancing functionality. In addition to being
able to set priorities on different gateway routers, GLBP allows a weighting
parameter to be set.
General Data Protection Regulation (GDPR) A European Union (EU)
regulation for data protection that sets guidelines for the collection and
processing of personal information from individuals. It applies to the
processing of personal data of people in the EU by businesses that operate
in the EU. It’s important to note that GDPR applies not only to firms based
in the EU, but any organization providing a product or service to residents
of the EU.
gold plating Adding design elements that are excessive and do not meet
any underlying requirement.
greenfield A network design use case that is a clean slate, or clean
canvas, on which a network designer can design an end-to-end architecture
from scratch.
gRPC Network Management Interface (gNMI) Developed by Google,
provides the mechanism to install, manipulate, and delete the configuration
of network devices and also to view operational data. The content provided
through gNMI can be modeled using YANG. gRPC is a remote procedure
call framework developed by Google for low-latency, scalable distributed
systems in which mobile clients communicate with a cloud server.
H
Health Insurance Portability and Accountability Act (HIPAA) A
compliance standard focused on protecting health and patient information.
Network designers must ensure that the network design for businesses that
have to follow HIPAA leverages design options that meet the associated
security controls specified by HIPAA.
hierarchy The process of creating layers within the architecture for a
specific purpose. The most common layers are core, distribution,
aggregation, and access.
Hot Standby Router Protocol (HSRP) A Cisco-proprietary first-hop
routing protocol that provides redundancy by creating a virtual router out of
two or more routers.
hybrid cloud The use of both private and public clouds together to allow
for a business to receive the benefits of both cloud environments while
limiting their negative impacts on the business.
I
Information Technology Infrastructure Library (ITIL) A set of best
practice processes for delivering IT services to your organization’s
customers. ITIL focuses on ITSM and ITAM, and includes processes,
procedures, tasks, and checklists that can be applied by any organization.
The three focus areas of ITIL are change management, incident
management, and problem management.
Infrastructure as a Service (IaaS) A pay-as-you-go service model for
storage, networking, and virtualization—all of a business’s infrastructure
needs. IaaS gives users cloud-based alternatives to on-premises
infrastructure, so businesses can avoid investing in expensive onsite
resources.
Infrastructure as Code (IaC) The concept of managing and provisioning
infrastructure through code instead of through manual processes. With IaC,
configuration files are created that contain the network infrastructure
specifications, which can then be edited and distributed depending on the
need. It also ensures that the environments being provisioned are the same
every time.
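The repeatability claim can be sketched in a few lines; the hostnames, interface names, and addresses below are illustrative, and a real IaC tool would read the spec from a version-controlled YAML or JSON file:

```python
# A minimal Infrastructure-as-Code sketch: the desired state lives in a
# spec (a dict here; YAML/JSON under version control in practice), and one
# rendering function produces identical device configuration every time.
spec = {
    "hostname": "branch-rtr-01",  # illustrative values only
    "interfaces": [
        {"name": "GigabitEthernet0/0", "ip": "10.0.0.1", "mask": "255.255.255.0"},
        {"name": "GigabitEthernet0/1", "ip": "10.0.1.1", "mask": "255.255.255.0"},
    ],
}

def render_config(spec):
    """Deterministically render a CLI-style configuration from the spec."""
    lines = [f"hostname {spec['hostname']}"]
    for intf in spec["interfaces"]:
        lines += [f"interface {intf['name']}",
                  f" ip address {intf['ip']} {intf['mask']}"]
    return "\n".join(lines)

# Rendering twice yields byte-identical output -- the consistency IaC promises.
assert render_config(spec) == render_config(spec)
print(render_config(spec))
```

Editing the spec and re-rendering is the "edited and distributed depending on the need" step: the code, not a human at a console, is the source of the configuration.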
infrastructure devices Networking devices within the enterprise, such as
switches, routers, and firewalls. These devices are also called policy
enforcement points (PEPs).
integrity Maintains accurate information end to end by ensuring that no
alteration is performed by any unauthorized entity.
Intermediate System-to-Intermediate System (IS-IS) An interior
gateway routing protocol with link-state characteristics that uses Dijkstra’s
shortest path algorithm to calculate paths to destinations.
inventory Single source of truth for all resources. This is an end-to-end
inventory throughout the entire architecture/enterprise. This is also known
as a policy information point (PIP).
L
Lightweight Extensible Authentication Protocol (LEAP) An EAP
authentication type that encrypts data transmissions using dynamically
generated keys and supports mutual authentication. This is a Cisco
proprietary protocol that Cisco licenses to other manufacturers and vendors
to use.
Link Aggregation Control Protocol (LACP) A protocol defined in IEEE
802.3ad that provides a method to control the bundling of several physical
ports to form a single logical channel.
link-state advertisement (LSA) A message that is used to communicate
network information such as router links, interfaces, link states, and costs.
link-state routing protocol A routing protocol that uses Dijkstra’s shortest
path algorithm to calculate the best path.
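The SPF computation that a link-state protocol runs over its database can be sketched as follows; the router names and link costs are invented for illustration:

```python
import heapq

def dijkstra(graph, source):
    """Return the lowest total path cost from source to every reachable node,
    using Dijkstra's shortest path algorithm with a priority queue."""
    dist = {source: 0}
    pq = [(0, source)]
    while pq:
        cost, node = heapq.heappop(pq)
        if cost > dist.get(node, float("inf")):
            continue  # stale queue entry; a cheaper path was already found
        for neighbor, link_cost in graph[node].items():
            new_cost = cost + link_cost
            if new_cost < dist.get(neighbor, float("inf")):
                dist[neighbor] = new_cost
                heapq.heappush(pq, (new_cost, neighbor))
    return dist

# Illustrative topology: R4 is cheaper via R2 (10 + 1) than via R3 (5 + 20).
graph = {
    "R1": {"R2": 10, "R3": 5},
    "R2": {"R1": 10, "R4": 1},
    "R3": {"R1": 5, "R4": 20},
    "R4": {"R2": 1, "R3": 20},
}
print(dijkstra(graph, "R1"))
```

Every router with the same link-state database computes the same tree, which is why loop-free forwarding falls out of the algorithm rather than out of per-hop negotiation.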
M
MAC Authentication Bypass (MAB) The authenticator sends the MAC
address of the client device to the authentication server to check whether
that MAC address is permitted. MAB is a very insecure option, but it can
be used for devices that do not support 802.1X.
management plane Relates to the management traffic of the device, such
as device access, configuration troubleshooting, and monitoring.
many-to-one virtualization model Model in which multiple physical
resources appear as a single logical unit. The classical example of many-to-
one virtualization is the switch clustering concept. Other examples include
firewall clustering, and FHRP with a single virtual IP (VIP) that front ends
a pair of physical upstream network nodes (switches or routers).
merger A network design use case that requires a network designer to
combine two independent architectures into one holistic end-to-end
architecture.
modularity A concept that allows for purpose-built building blocks to be
leveraged.
Multicast Source Discovery Protocol (MSDP) Described in RFC 3618,
used to interconnect multiple PIM-SM domains. MSDP reduces the
complexity of interconnecting multiple PIM-SM domains by allowing the
PIM-SM domains to use an interdomain source tree. With MSDP, the
rendezvous points (RPs) exchange source information with RPs in other
domains. Each PIM-SM domain uses its own RP and does not depend on
the RPs in other domains. When an RP in a PIM-SM domain first learns of
a new sender, it constructs a Source-Active (SA) message and sends it to its
MSDP peers. All RPs that intend to originate or receive SA messages must
establish MSDP peering with other RPs, either directly or via an
intermediate MSDP peer.
multi-cloud The use of two or more cloud service providers (CSPs), with
the ability to move workloads between the different cloud computing
environments in real time as needed by the business.
Multiple Spanning Tree (MST) A protocol that is used to reduce the total
number of spanning-tree instances that match the physical topology of the
network, reducing the CPU load.
multitier STP based Network design model that represents the classical or
traditional way of connecting access to the distribution layer in the campus
network. In this model, the access layer switches usually operate in Layer 2
mode only, and the distribution layer switches operate in Layer 2 and Layer
3 modes. The primary limitation of this design model is the reliance on
Spanning Tree Protocol (STP) and First Hop Redundancy Protocol (FHRP).
N
Network Configuration Protocol (NETCONF) A network management
protocol, defined in RFC 6241, that provides rich functionality for
managing configuration and state data. The protocol operations are defined
as remote procedure calls (RPCs) for requests and replies in XML-based
representation. NETCONF supports running, candidate, and startup
configuration datastores. The NETCONF capabilities are exchanged during
session initiation. Transaction support is also a key NETCONF feature.
NETCONF is a client/server protocol and is connection-oriented over TCP.
All NETCONF messages are encrypted with SSH and encoded with XML.
A NETCONF manager is a client, and a NETCONF device is a server. The
initial contents of the <hello> message define the NETCONF capabilities
that each side supports. The YANG data model defines capabilities for the
supported devices.
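The session-initiation capability exchange can be sketched by constructing a <hello> message; this is a minimal sketch, and the capability URIs shown are common examples rather than a complete list:

```python
import xml.etree.ElementTree as ET

# NETCONF base namespace per RFC 6241; each side lists its capabilities
# inside the <hello> message when the session opens.
NS = "urn:ietf:params:xml:ns:netconf:base:1.0"

def build_hello(capabilities):
    """Build an XML-encoded <hello> message listing the given capabilities."""
    hello = ET.Element(f"{{{NS}}}hello")
    caps = ET.SubElement(hello, f"{{{NS}}}capabilities")
    for uri in capabilities:
        cap = ET.SubElement(caps, f"{{{NS}}}capability")
        cap.text = uri
    return ET.tostring(hello, encoding="unicode")

msg = build_hello([
    "urn:ietf:params:netconf:base:1.1",
    "urn:ietf:params:netconf:capability:candidate:1.0",  # candidate datastore
])
print(msg)
```

In a real session this XML would be sent over the SSH transport, and the intersection of both sides' advertised capabilities governs what operations and datastores (running, candidate, startup) the session may use.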
network manager Centralized network management system that provides a
GUI to easily monitor, configure, and maintain all SD-WAN devices and
links in the underlay and overlay network.
non-IP traffic Typically relates to non-IP packets, such as MPLS, IS-IS
(CLNP), and Layer 2 keepalives, and is almost always not forwarded.
O
one-to-many virtualization model Model in which a single physical
resource can appear as many logical units, such as virtualizing an x86
server, where the software (hypervisor) hosts multiple virtual machines
(VMs) to run on the same physical server. The concept of network function
virtualization (NFV) can also be considered as a one-to-many system
virtualization model.
on-premises The service model where a business owns and manages the
infrastructure. A business will procure all of the infrastructure required to
run the service and then fully manage, maintain, and operate it. In some
situations, the management is outsourced but the infrastructure is procured
and owned by the business.
Open Shortest Path First (OSPF) A link-state routing protocol that uses
Dijkstra’s shortest path algorithm to calculate paths to destinations.
operating expenses (OPEX) The day-to-day costs incurred by a business
to keep the business operational. Examples of OPEX include rent, utilities,
payroll, and marketing.
orchestrator Software-based component that performs the initial
authentication of edge devices and orchestrates controller and edge device
connectivity. It also has an important role in enabling the communication of
devices that sit behind NAT.
overlay network Runs over the underlay network to create a virtual
network. Virtual networks isolate both data plane traffic and control plane
behavior among the physical networks of the underlay. Virtualization is
achieved inside SD-LAN by encapsulating user traffic over IP tunnels that
are sourced and terminated at the boundaries of SD-LAN. Network
virtualization extending outside of the SD-LAN is preserved using
traditional virtualization technologies such as virtual routing and
forwarding (VRF)-Lite, MPLS VPN, or SD-WAN. Overlay networks can
run across all or a subset of the underlay network devices. Multiple overlay
networks can run across the same underlay network to support multitenancy
through virtualization.
P
Payment Card Industry Data Security Standard (PCI DSS) A
compliance standard focused on ensuring the security of credit card
transactions. This standard refers to the technical and operational standards
that businesses must follow to secure and protect credit card data provided
by cardholders and transmitted through card processing transactions.
Network designers have to ensure that the network design for businesses
that have to follow PCI DSS leverages design options that meet the
associated security controls specified by PCI DSS.
perimeter security (aka Turtle Shell) Legacy security model that
leverages a security device at the edge or perimeter that is the gatekeeper
into the network. This security device has a set of security capabilities
that limit what traffic can enter the network and what can leave it. Inside
the network, behind the security device, there are no other security devices.
In this model, there is full east–west (lateral movement) traffic between
users and resources.
phantom RP A redundancy consideration for the rendezvous point (RP) in
a PIM-BIDIR deployment. To create a phantom RP, two routers in a
network segment will need to be configured with the same IP address but
different subnet masks. Then IGP can control the preferred path for the root
(phantom RP) of a multicast shared tree based on the longest match (longest
subnet mask) where multicast traffic can flow through. The other router
with the shorter mask can be used in the same manner if the primary router
fails. This means the failover to the secondary shared tree path toward the
phantom RP will rely on the unicast IGP convergence.
PIM Bidirectional (BIDIR) Defined in RFC 5015, a variant of PIM-SM
that builds bidirectional shared trees connecting multicast sources and
receivers. It never builds a shortest-path tree, so it scales well because it
does not need a source-specific state. PIM-BIDIR eliminates the need for a
first-hop route to encapsulate data packets being sent to the RP. PIM-BIDIR
dispenses with both encapsulation and source state by allowing packets to
be natively forwarded from a source to the RP using the shared tree state.
PIM Source-Specific Multicast (PIM-SSM) A variant of PIM-SM that
builds trees that are rooted in just one source. SSM, defined in RFC 3569,
eliminates the requirement for rendezvous points (RPs) and shared trees of
sparse mode and only builds a shortest-path tree (SPT). SSM trees are built
directly based on the receipt of group membership reports that request a
given source. SSM is suitable for when well-known sources exist within the
local PIM domain and for broadcast applications.
Platform as a Service (PaaS) A service model where a vendor provides
hardware and software tools, and customers use these tools to develop
applications. PaaS users tend to be application developers.
policy enforcement point (PEP) Location where trust and policy are
enforced.
policy engine Implements policy, matches rules, and pushes associated
access to the policy enforcement points. This is also known as a policy
administration point (PAP) and a policy decision point (PDP).
Priority Queuing (PQ) Typically supports four queues with different
priority levels, and the higher-priority queues are always serviced first.
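The always-service-highest-first behavior can be sketched with a heap; the packet names and priority levels are illustrative:

```python
import heapq

# Priority Queuing sketch: packets are tagged with a priority level and the
# scheduler always services the highest-priority (lowest number) queue first.
PRIORITY = {"high": 0, "medium": 1, "normal": 2, "low": 3}  # four PQ levels

queue, seq = [], 0

def enqueue(packet, level):
    """Tag the packet with its priority; seq keeps FIFO order within a level."""
    global seq
    heapq.heappush(queue, (PRIORITY[level], seq, packet))
    seq += 1

enqueue("bulk-transfer", "low")
enqueue("voice-frame", "high")
enqueue("web-request", "normal")

serviced = [heapq.heappop(queue)[2] for _ in range(len(queue))]
print(serviced)  # voice serviced first, bulk last
```

The sketch also shows PQ's well-known drawback: as long as higher-priority packets keep arriving, lower-priority queues can be starved indefinitely.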
private cloud Consists of cloud computing resources used by one business.
This cloud environment can be located within the business’s data center
footprint, or it can be hosted by a cloud service provider (CSP). In a private
cloud, the resources, applications, services, data, and infrastructure are
always maintained on a private network and all devices are dedicated to the
business.
Protected Extensible Authentication Protocol (PEAP) Provides a method
to securely transport authentication data, including legacy password-based
protocols. PEAP accomplishes this by using tunneling between PEAP
clients and an authentication server.
public cloud The most common type of cloud computing, the cloud
computing resources are owned and operated by a cloud service provider
(CSP). All infrastructure components are owned and maintained by the
CSP. In a public cloud environment, a business shares the same hardware,
storage, virtualization, and network devices with other businesses.
R
radio frequency (RF) A wireless electromagnetic signal used as a form of
data communication.
receive IP traffic Traffic destined to the network node itself, such as
traffic toward a router's IP address, which requires CPU processing.
recovery point objective (RPO) The amount of data that can be lost during
an outage at peak business demand before harm occurs to the business. The
amount of data that can be lost from an RPO perspective is given a specific
time value, which is measured against the last backup that took place.
recovery time objective (RTO) The time an application, system, network,
or resource can be offline without causing significant business damage as
well as the time it takes to restore the service in question. RTO is focused
on the time to recover a failing system or network outage.
redundancy The concept of having multiple resources performing the same
function/role so that if one of them fails, the other takes over with limited to
no impact on the production traffic.
reliability How much of the network data gets from source to destination
locations in the right amount of time to be leveraged correctly.
replace technology A network design use case that includes replacing a
technology, function, or capability in an architecture with another one. For
example, replacing OSPF with EIGRP as a routing protocol.
resilience (aka resiliency) The ability of the network to automatically fail
over when an outage occurs.
RESTCONF Defined in RFC 8040, an HTTP-based protocol that provides
a programmatic interface for accessing YANG modeled data. RESTCONF
uses HTTP operations to provide create, retrieve, update, and delete
(CRUD) operations on a NETCONF datastore containing YANG data.
RESTCONF is tightly coupled to the YANG data model definitions. It
supports HTTP-based tools and programming libraries. RESTCONF can be
encoded in either XML or JSON.
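How a RESTCONF call is assembled can be sketched without any network I/O; the device address and YANG paths below are hypothetical, and the media types follow RFC 8040:

```python
import json
from urllib.parse import quote

# RESTCONF sketch: CRUD operations map onto HTTP methods (POST, GET, PUT/
# PATCH, DELETE) against /restconf/data resource paths built from YANG names.
BASE = "https://198.51.100.10/restconf/data"  # hypothetical device

def restconf_request(method, yang_path, body=None):
    """Return the pieces of a RESTCONF call: method, URL, headers, payload."""
    headers = {"Accept": "application/yang-data+json"}
    if body is not None:
        headers["Content-Type"] = "application/yang-data+json"
    return {
        "method": method,
        "url": f"{BASE}/{quote(yang_path, safe='/:=')}",
        "headers": headers,
        "payload": json.dumps(body) if body is not None else None,
    }

# Update (the U in CRUD) the description of a loopback interface:
req = restconf_request(
    "PUT",
    "ietf-interfaces:interfaces/interface=Loopback0",
    {"ietf-interfaces:interface": {"name": "Loopback0",
                                   "description": "router-id"}},
)
print(req["method"], req["url"])
```

The tight coupling to YANG is visible in the URL itself: the path segments (`ietf-interfaces:interfaces/interface=...`) come directly from the data model, not from an arbitrary REST design.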
return on investment (ROI) The concept of identifying what the perceived
potential benefit is going to be for the business if the business does an
action (i.e., is the investment going to be profitable for the business?).
reverse path forwarding (RPF) The mechanism used by Layer 3 nodes in
the network to optimally forward multicast datagrams. The RPF check
accepts a multicast packet only if it arrives on the interface that the
router would use to send unicast traffic back to the packet's source;
packets that fail the check are dropped.
S
scaling A network design use case that requires a network designer to
address design limitations with a current production design and suggest
modifications to increase its scalability. This can be as simple as a single flat
area 0 OSPF design that doesn’t scale to the business requirements, in
which case a network designer can leverage multiple areas, multiple area
types, and LSA filtering techniques to increase the scalability of the
network design.
service set identifier (SSID) A unique ID that is used as a wireless network
name and can be made up of case-sensitive letters, numbers, and special
characters. When designing wireless networks, we give each wireless
network a name, which is the SSID. This allows end users to distinguish
one wireless network from another.
services virtualization Network virtualization design element that
represents the extension of the network virtualization concept to the
services edge, which can be shared services among different logically
isolated groups, such as an Internet link or a file server located in the data
center that must be accessed by only one logical group (business unit).
session- and transaction-based security A security model in which users
and devices are locked down. Resources such as printers, applications, and
security cameras are restricted to only the access they need. In this
model, east–west traffic is secured dynamically.
shared failure state (aka fate sharing) A state in which a device is
performing multiple critical functions and if it were to incur an outage from
one of those functions, it would affect the other critical functions.
shared tree The multicast tree roots somewhere between the network’s
source and receivers. The root is called the rendezvous point (RP). The tree
is created from the RP throughout the network with no loops. Sources will
first send their multicast traffic to the RP, which then forwards data to the
members of the group in the shared tree.
shortest-path tree (SPT) Also called source trees, the multicast tree roots
from the source of the multicast group and then expands throughout the
network to the destination hosts. These paths are created without having to
go through a rendezvous point (RP).
signal-to-noise ratio (SNR) The difference between a received signal’s
strength and the noise floor.
single-server model The simplest application model, equivalent to running
the application on a personal computer. All of the required components for
an application to run are on a single server.
Software as a Service (SaaS) A service model where a vendor makes its
software available to users, usually for a monthly or annual subscription
service fee.
Spanning Tree Protocol (STP) A protocol that prevents loops from being
formed when switches are interconnected via multiple paths.
static factors Items that we know and can preemptively base access and
authorization on. The most common static factors are credentials but could
also include the level of confidence, device trust, network, physical
location, biometrics, and device orientation.
strategic planning approach Typically targets planning to long-term
business outcomes and strategies.
supplicant A software client running on the end device that passes
authentication information to the authentication server.
switch clustering Network design model that provides the simplest and
most flexible design compared to the other design models. By introducing
the switch clustering concept across the different functional modules of the
enterprise campus architecture, network designers can simplify and enhance
the design to a large degree. This offers a higher level of node and path
resiliency, along with significantly optimized network convergence time.
T
tactical planning approach Typically targets planning to overcome an
issue or to achieve a short-term goal.
technical requirements The technical aspects that a network infrastructure
must provide in terms of security, availability, and integration. These
requirements are often called nonfunctional requirements.
technology architecture A multi-domain architecture (MDA), also referred
to as cross-architecture. In this scope, the business needs help
understanding the benefits of multi-domain technology architecture and
how to show the value it provides to the business. With this scope, two or
more architecture focus areas are incorporated.
technology specific A domain-specific architecture. Within this scope, the
business is requiring help with finding and purchasing the right product or
group of products in an architecture focus area. This might be data center,
security, or enterprise networking focused, but it doesn’t cross between the
different architecture focus areas.
The Open Group Architecture Framework (TOGAF) An enterprise
architecture methodology that incorporates a high-level framework for
enterprises that focuses on designing, planning, implementing, and
governing enterprise information technology architectures. TOGAF helps
businesses organize their processes through an approach that reduces errors,
decreases timelines, maintains budget requirements, and aligns technology
with the business to produce business-impacting results.
three-tier model Network design model that is typically used in large
enterprise campus networks, which are constructed of multiple functional
distribution layer blocks. This model has dedicated core, distribution, and
access layers.
top-down approach A design approach that simplifies the design process
by splitting the design tasks to make it more focused on the design scope
and performed in a more controlled manner, which can ultimately help
network designers to view network design solutions from a business
perspective.
transit IP traffic Traffic for which a network device makes a typical
routing and forwarding decision regarding whether to send the traffic over
its interfaces.
transport virtualization Network virtualization design element that
represents the transport path that will carry different virtualized networks
over one common physical infrastructure, such as an overlay technology
like a generic routing encapsulation (GRE) tunnel. The terms path isolation
and path separation are commonly used to refer to transport virtualization.
trust engine Dynamically evaluates overall trust by continuously analyzing
the state of devices, users, workloads, and applications (resources). Utilizes
a trust score that is built from static and dynamic factors. This is also known
as a policy information point (PIP).
trust score A combination of static and dynamic factors that is used to
continually provide identity assurance. A trust score determines the level
of access as required by the level of risk value of the asset being accessed.
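One way a score like this might combine factor groups can be sketched as follows; the weights, factor names, and threshold are entirely illustrative assumptions, not a vendor formula:

```python
# Hypothetical trust-score sketch: static factors (known up front) and
# dynamic factors (evaluated at access time) combine into one score, which
# is then compared against the risk threshold of the asset being accessed.
STATIC_WEIGHT, DYNAMIC_WEIGHT = 0.6, 0.4   # illustrative weights

def trust_score(static_factors, dynamic_factors):
    """Average each factor group (each factor scored 0.0-1.0), then
    combine the two averages by weight into a single score."""
    s = sum(static_factors.values()) / len(static_factors)
    d = sum(dynamic_factors.values()) / len(dynamic_factors)
    return STATIC_WEIGHT * s + DYNAMIC_WEIGHT * d

static = {"credentials_valid": 1.0, "device_trust": 0.8}     # known up front
dynamic = {"threat_intel": 0.9, "geovelocity_ok": 1.0}       # at access time

score = trust_score(static, dynamic)
HIGH_RISK_THRESHOLD = 0.85   # a sensitive asset demands a higher score
print(score, score >= HIGH_RISK_THRESHOLD)
```

Because the dynamic factors are re-evaluated on each transaction, the same user can be granted access one minute and denied it the next, which is the continual-assurance property the definition describes.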
two-tier model Network design model that is more suitable (than a three-
tier model) for small to medium-size campus networks (ideally not more
than three functional distribution blocks to be interconnected), where the core
and distribution functions can be combined into one layer, also known as
collapsed core-distribution architecture. This model has two layers, a
collapsed core-distribution layer and an access layer.
U
underlay network Defined by the physical switches and routers that are
part of the LAN. All network elements of the underlay must establish IP
connectivity via the use of a routing protocol. Theoretically, any topology
and routing protocol can be used, but the implementation of a well-designed
Layer 3 foundation to the LAN edge is highly recommended to ensure
performance, scalability, and high availability of the network. In the SD-
LAN, end-user subnets are not part of the underlay network but instead are
part of the overlay network.
unstated requirements The concept that customers do not articulate their
specific requirements explicitly. Customers assume requirements, which
leave the network designer to figure out what requirements are important to
the design.
V
virtual local-area network (VLAN) A broadcast domain that is isolated
within Layer 2 and defined logically. Ports in a LAN switch are assigned to
different VLAN numbers.
virtual network (VN) Provides segmentation, much like Virtual Routing
and Forwarding (VRF) instances. Each VN is isolated from other VNs and
each has its own forwarding table. An interface or subinterface is explicitly
configured under a single VN and cannot be part of more than one VN.
Labels are used in the management protocol route attributes and in the
packet encapsulation, which identifies the VN a packet belongs to. The VN
number is a 4-byte integer with a value from 0 to 65530.
Virtual Router Redundancy Protocol (VRRP) A standards-based first-
hop routing protocol that provides redundancy with a virtual router elected
as the master.
Virtual Routing and Forwarding (VRF) One of the primary mechanisms
used in today’s modern networks to maintain routing isolation on a Layer 3
device level. In MPLS architecture, each PE holds a separate routing and
forwarding instance per VRF per customer. Typically, each customer’s VPN
is associated with at least one VRF. Maintaining multiple VRFs on the same
PE is similar to maintaining multiple dedicated routers for customers
connecting to the provider network.
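As a sketch, a minimal Cisco IOS-style VRF-lite configuration that places an interface into a dedicated routing instance (the VRF name, RD, and addressing are hypothetical):

```
vrf definition CUST-A
 rd 65000:10
 address-family ipv4
 exit-address-family
!
interface GigabitEthernet0/2
 vrf forwarding CUST-A
 ip address 192.168.1.1 255.255.255.0
```

Routes learned on GigabitEthernet0/2 populate only the CUST-A forwarding table, keeping that customer's routing isolated from the global table and from other VRFs.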
VPN label A label assigned to VPN traffic at the egress PE (LER) and
used by the remote ingress PEs (LERs). The egress PE generates and
assigns a VPN label to every VPN route and advertises the label to the
ingress PE routers in MP-BGP updates. The label is meaningful only to
the egress PE node, which demultiplexes incoming traffic to the correct
VPN customer egress interface/CE based on the VPN label carried in each
packet.
W
web tier The front end of the application that all end users access. This is
how an end user sees and interacts with the application. This is often called
the web or GUI tier (aka layer) of the application. The main function of this
tier is to translate tasks and results to something the end user can
understand.
weighted fair queuing (WFQ) Algorithm that offers a dynamic fair
distribution among all traffic flows based on weight.
Y
Yet Another Next Generation (YANG) An IETF standard (RFC 6020)
data modeling language used to describe the data for network configuration
protocols such as NETCONF and RESTCONF.
Z
Zachman Framework Provides a means to classify a business’s
architecture in a structured manner. This is a proactive business tool that is
used to model a business’s functions, elements, and processes to help the
business manage change throughout the organization.
Zero Trust Architecture A security model that adds real-time capture and
analytics tools to the mix to allow for real-time AI/ML decision making.
Every device, user, application, server, service, and resource (even data
itself) is assigned a trust score. This trust score changes based on what the
analytics engine sees happening.
zero-touch provisioning (ZTP) Capability used to automatically configure
a new device once it is plugged into the network. When configured, the new
device will pull a DHCP address from a preconfigured DHCP server that
tells the new device where to pull its initial configuration and OS image. If
the image downloaded is not the same as the image that is running on the
new device, the new device will complete an upgrade to the downloaded
image. Once the upgrade is completed, the new device will apply the
downloaded configuration, which is commonly called the day zero
configuration. At this point, the device is ready for future day one
configuration.
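For example, the DHCP side of a ZTP workflow can be sketched with a Cisco IOS-style DHCP pool that points new devices at a file server and a bootstrap configuration file (the addresses, option values, and filename are hypothetical):

```
ip dhcp pool ZTP-POOL
 network 10.1.1.0 255.255.255.0
 default-router 10.1.1.1
 option 150 ip 10.1.1.10
 option 67 ascii "day0-config.txt"
```

A new device that boots on this subnet pulls an address from the pool, then retrieves its day zero configuration (and, if needed, an OS image) from the server advertised in the DHCP options.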
Index
Numbers
2-tier application model, 69
3-tier application model, 69–70
database layer, 69, 70
intermediate (application) layer, 69, 70
presentation (web) layer, 69, 70
11 pillars of data management, 80–81
12-class QoS baseline model, 381
12 principles of Agile methodologies, 58
802.11 wireless standard, 303–305
802.1D, 147
802.1S MST (Multiple Spanning Tree), 147
802.1W, 147
A
abstraction, cloud-agnostic architecture, 77
access
access-distribution design model, 408
aggregation layer, WAN, 458–460
authorization for access, Zero Trust Architectures, 93
CAP, 75
CAPWAP, 306–309
daisy-chained access switches, 155–156
DCA, 74
DIA, 74
entitled to access, Zero Trust Architectures, 93
Layer 2 media access, 120–121
Carrier Ethernet Architecture, 122–123
L2VPN Ethernet-based models, 121–122
Metro Ethernet services, 123–124
NAC, 294
Pearson Cert Practice Test Engine
offline access, 492
online access, 491–492
remote-access security, 288–291
routed access design model, 408–409
access-distribution design model, 408
active-standby design scenario, BGP over multihomed Internet
edge, 435–437
AD, multiple redistribution boundary points, 201
adding applications/technology, 11
admission control, QoS design, 379–380
aggregation layer, WAN, access, 458–460
Agile methodologies, project management, 57–59, 63–64
agnostic vs. proprietary cloud services, 77–78
AIGP (Accumulated IGP cost for BGP), 214–215
AIOps (Artificial Intelligence for IT operations), 399
aligning stakeholders, EA, 109–110
Anycast-RP, 344–346
application (intermediate) layer, 3-tier application model, 69, 70
application binders (run books), 80
application models, 69
2-tier application model, 69
3-tier application model, 69–70
single-server application model, 69
applications
adding, 11
cloud computing strategies, 80
constraints, 14, 70–72
hardcoded items, 71
high availability, 71
Layer 2 extensions, 71
multicasting, 70–71
multicasting requirements, 344
requirements, 8–9, 70–72
service models, 72–73
AQM (Active Queue Management), 379
architectures
BA, 105–106
alignment, 106
guiding principles, 106–107
KPI, 108
scope, 106
Carrier Ethernet Architecture, 122–123
DiffServ architectures, 371–378
EA, 108–109
aligning stakeholders, 109–110
CMDB, 110–111
identifying current state, 110–111
identifying operating models, 110
methodology, 109–111
frameworks
BOST framework, 114, 115
DODAF, 114, 115
FEAF, 112–113, 115
ITIL, 114, 115
RMF, 112–113, 115
TOGAF, 111, 115
Zachman Framework, 112, 115
IntServ architectures, 371
Zero Trust Architectures, 17, 91–92, 94
authorization for access, 93
dynamic factors, 92
endpoint devices, 93
entitled to access, 93
feedback loops, 94
infrastructure devices, 94
inventories, 93
migrating, 94–95
PEP, 94
policy engines, 93
risk scores, 93
static factors, 92
trust engines, 93
trust scores, 93
assumptions, pitfalls of network design, 36–37
assurance, business, 18
asymmetrical routing, BGP over multihomed Internet edge
design, 440–441
AToM (Any Transport over MPLS), 124
authentication, 294–296
authorization, 296–298
authorization for access, Zero Trust Architectures, 93
automation
AIOps, 399
CI/CD pipelines, 321–323, 324
CLA, 399
IaC, 319–321, 324
ZTP, 318–319, 324
Auto-RP, 335
availability
network design, 18–19
redundancy, 18
reliability, 18
resiliency, 18
Availability (CIA Triad), 96
B
BA (Business Architectures), 105–106
alignment, 106
guiding principles, 106–107
KPI, 108
scope, 106
balance (strategic), business success, 54–56
bandwidth
allocation, QoS, 383
EIGRP routing, 174–175
BC (Business Continuity), 47
best practices, network design, 38
BGP (Border Gateway Protocol)
AIGP, 214–215
attributes, 208–209
confederation, 222–225
endpoint discovery, MPLS over multipoint GRE, 479–481
as enterprise core routing protocol, 209–215
multicast BGP, 340–341
over multihomed Internet edge design, 433–434
active-standby design scenario, 435–437
asymmetrical routing, 440–441
load sharing, 437–440
policy control attributes, 434–435
path selection, 208–209
route reflection, 215–221
routing, 206
scalability design options, 215–225
shortest path over the enterprise core, 214–215
suboptimal routing, 195
update grouping, 221–222
BOST framework, 114, 115
BPDU Filter, 147
BPDU Guard, 147
branch (remote site) WAN design, 462–467
brownfield network design, 10–11
budgeting time for exam questions, 490
Business Architectures. See BA
business as network design fundamental, 13
business assurance, 18
business capabilities, 46
business constraints, 14
Business Continuity (BC), 47
business drivers, 45, 46, 355–356
“business innovation” enabler, IT as, 49
business outcomes, 46
business partners, 8
business priorities, 45, 46
business risk vs. reward vs. cost analysis, 50–51
business success, designing for
BC, 47
business capabilities, 46
business drivers, 45, 46
business outcomes, 46
business priorities, 45, 46
business risk vs. reward vs. cost analysis, 50–51
CAPEX, 51
elasticity, strategic support for business trends, 48–49
IT
as “business innovation” enabler, 49
as a “new utility,” 50–51
nature of the business, 52
OPEX, 51
planning, 52–53
decision matrices (matrix), 53–54
decision trees, 53
strategic planning approach, 54
tactical planning approach, 54
RPO, 47
RTO, 47
strategic balance, 54–56
top-down approach, 44–45
business trends, elasticity as strategic support, 48–49
C
campus networks, 402
access-distribution design model, 408
core blocks, 407–408
hierarchical design models, 404–406
Layer 3 routing, 411–413
modularity, 406–411
multitier STP-based design model, 408
routed access design model, 408–409
switch clustering (virtual switches), 409–410
three-tier network design model, 405
two-tier network design model, 405–406
virtualization, 413–414
drivers, 414–415
edge control, 415
services virtualization, 415
transport virtualization, 415
CAP (Cloud Access Point), 75
capabilities, business, 46
CAPEX (Capital Expenses), 51
CAPWAP (Control and Provisioning of Wireless Access Points), 306–309
Carrier Ethernet Architecture, 122–123
CBWFQ (Class-Based WFQ), 377–378
chapter-ending review tools, 494
chokepoints, 89–91
CIA Triad, 95–96
CI/CD pipelines, 321–323, 324
Cisco Certified Design Expert (CCDE 400-007) Exam, updates,
514–515
CLA (Closed Loop Automation), 399
client sites, 75
clock watching, exam preparation, 490
cloud computing
agnostic vs. proprietary cloud services, 77–78
application strategies, 80
cloud-agnostic architecture, 76–77
connectivity models, 74
CAP, 75
DCA, 74
hybrid, 75
considerations, 73
containerization, 77, 78–79
container deployment architecture, 79
traditional deployment architecture, 78
virtualization deployment architecture, 79
decoupling, 77
hybrid clouds, 76
multi-clouds, 76
private clouds, 76
public clouds, 76
SOA, 78
CMDB (Configuration Management Database), 110–111
CNF (Carrier-Neutral Facility), 74
complexity, network design, 19–20
Confidentiality (CIA Triad), 96
congestion
avoidance (AQM), 379
management, 376–378
connectivity
DMVPN connectivity model, WAN, 466–467
multicast islands, 339–340
WAN edge connectivity design, 460–461
constraints
application constraints, 14, 70–72
business constraints, 14
cost constraints, 14, 20–21
infrastructure constraints, 15
location constraints, 14
network design, 13–15
staff expertise constraints, 15
technology, 14
time constraints, 14
containerization, 77, 78–79
container deployment architecture, 79
traditional deployment architecture, 78
virtualization deployment architecture, 79
continuity (BC), business, 47
control plane
security, Layer 3, 287–288
VPLS, 126–127
core blocks, campus networks, 407–408
cost analysis, vs. business risk vs. reward, 50–51
cost constraints, 14, 20–21
crafting network design requirements, 9–10
CSC design model, 475–476
current state (EA), identifying, 110–111
customer-controlled WAN, 473–474
customers, 8
customizing, practice tests, 492–493
D
dACL (downloadable ACL), 297
daisy-chained access switches, 155–156
Data Link Layer. See Layer 2
data management, 11 pillars of data management, 80–81
data sovereignty, 98
database layer, 3-tier application model, 69, 70
DCA (Direct Cloud Access), 74
decision matrices (matrix), 53–54
decision trees, 53
decoupling, 77
deploying cloud containerization
container deployment architecture, 79
traditional deployment architecture, 78
virtualization deployment architecture, 79
designing for business success
BC, 47
business capabilities, 46
business drivers, 45, 46
business outcomes, 46
business priorities, 45, 46
business risk vs. reward vs. cost analysis, 50–51
CAPEX, 51
elasticity, strategic support for business trends, 48–49
IT
as “business innovation” enabler, 49
as a “new utility,” 50–51
nature of the business, 52
OPEX, 51
planning, 52–53
decision matrices (matrix), 53–54
decision trees, 53
strategic planning approach, 54
tactical planning approach, 54
RPO, 47
RTO, 47
strategic balance, 54–56
top-down approach, 44–45
designing networks, 4
best practices, 38
business as network design fundamental, 13
campus networks, 402
access-distribution design model, 408
core blocks, 407–408
hierarchical design models, 404–406
Layer 3 routing, 411–413
modularity, 406–411
multitier STP-based design model, 408
routed access design model, 408–409
switch clustering (virtual switches), 409–410
three-tier models, 405
two-tier models, 405–406
virtualization, 413–425
constraints, 13–15
crafting requirements, 9–10
design use cases
add technology or application, 11
brownfield network design, 10–11
design failures, 12
divest, 11–12
greenfield network design, 10
merge, 11–12
scaling networks, 12
failures, 12
fundamentals, 5–6
identifying requirements with “Why?,” 15–16
mindset, 6–7
application requirements, 8–9
functional requirements, 7
technical requirements, 7
pitfalls, 36
assumptions, 36–37
overdesigning, 37–38
preconceived notions, 38–39
principles, 16
availability, 18–19
complexity, 19–20
cost constraints, 20–21
scalability, 20
security, 16–18
scope, 9–10
techniques, 21
failure isolation, 22–26
hierarchy of design, 33–36
modularity, 30–33
shared failure state, 27–30
top-down approach, 4
unstated requirements, 16
destination traffic engineering, 196
devices
hardening, 281–285
partitioning, 417–420
DIA (Direct Internet Access), 74
DiffServ architectures, 371–378
discovery phase, IPv4 migration/integration to IPv6, 358
divest use cases, 11–12
DMVPN (Dynamic Multipoint VPN)
MPLS over dynamic multipoint GRE (DMVPN), 479–480
per VRF, 477–478
WAN, 466–467
DMZ (Demilitarized Zones), 91
DODAF (Department of Defense Architecture Framework), 114
domain security, 89–91
drivers
business drivers, 45, 46
campus network virtualization design, 414–415
Internet edge design, 432–433
dual WAN providers, 462–463
dynamic factors, Zero Trust Architectures, 92
E
EA (Enterprise Architecture), 108–109
CMDB, 110–111
methodology, 109
aligning stakeholders, 109–110
identifying current state, 110–111
identifying operating models, 110
EAP (Extensible Authentication Protocol), 295–296
EAP-FAST, 295, 296
EAP-MD5, 295, 296
EAP-TLS, 295, 296
EAP-TTLS, 295, 296
earplugs, exam preparation, 490
ECMP (Equal-Cost Multipath), 459–460
edge connectivity design, WAN, 460–461
edge control, campus network virtualization design, 415
EIGRP routing, 170–171
bandwidth, 174–175
EIGRP flooding domain structure, 184
full-mesh topologies, 174
hub-and-spoke topologies, 171–172
OTP, 480–482
propagation considerations, 174–175
ring topologies, 173–174
stub route leaking, 172–173
suboptimal routing, 195
traffic engineering, 197–198
zero successor routes, 175
E-LAN service, 123, 451
elasticity, strategic support for business trends, 48–49
E-Line service, 123, 450
embedded-RP, 343
end users, categories of, 8
endpoint devices, Zero Trust Architectures, 93
Enterprise Architecture. See EA
Enterprise Campus QoS design, 385
enterprise core routing protocol, BGP as, 209–215
Enterprise Edge QoS design, 385–386
enterprise Internet edge design, 429–430
multihoming, 431
active-standby design scenario, 435–437
asymmetrical routing, 440–441
BGP over multihomed Internet edge design, 433–
443
concept, 432–433
drivers, 432–433
load sharing, 437–440
optimized design, 441–443
overview, 430–432
Enterprise Layer 3 routing, IP routing/forwarding, 161–162
enterprise networks
virtualization, 416–420
wireless network design, 309–313
enterprise WAN, 448–449
aggregation layer access, 458–460
CSC design model, 475–476
customer-controlled WAN, 473–474
design hierarchy, 458–459
DMVPN connectivity model, 466–467
dual WAN providers, 462–463
ECMP, 459–460
edge connectivity design, 460–461
EIGRP OTP, 480–482
enterprise-controlled WAN, 473–474
mLAG, 459–460
module design, 467–471
MPLS over dynamic multipoint GRE (DMVPN), 479–480
MPLS over multipoint GRE, 479–481
MPLS over point-to-point GRE tunnels, 478–479
MPLS-based WAN
Layer 2 technologies, 450–453, 457–458
Layer 3 technologies, 453–454, 457–458
virtualization, 475–476
overlay networks, 471–473
over-the-top WAN virtualization, 475–482
point-to-point GRE tunnel per VRF, 476–477
remote site (branch) WAN design, 462–467
single WAN providers, 462–463
SP-controlled WAN, 473–474
transports
Internet as WAN transport, 455–458
modern transports, 450
MPLS-based WAN, 450–454
overview, 449–450
virtualization, 471–483
WAN migration to MPLS VPN, 484–488
entitled to access, Zero Trust Architectures, 93
ES (Ethernet Segments), 137
Ethernet
Carrier Ethernet Architecture, 122–123
E-LAN service, 123
E-Line service, 123
ES, 137
E-Tree service, 123
EVPN design model, 134–139, 141
L2VPN Ethernet-based models, 121–122
MAC addresses, multicast IP address mapping, 329–332
Metro Ethernet services, 123–124
E-Tree service, 123, 451
EVC (Ethernet Virtual Connections), Metro Ethernet services,
124
EVPL. See E-Line
EVPN design model, 134–139, 141
exam preparation, 490
chapter-ending review tools, 494
earplugs, 490
locking up valuables, 491
online testing, 491
Pearson Cert Practice Test Engine, 491
accessing offline, 492
accessing online, 491–492
customizing practice tests, 492–493
Premium Edition, 494
updating practice tests, 493
rest, 491
study trackers, 490
suggested plan for final review/study, 494
taking notes, 491
time budgets for exam questions, 490
travel time, 491
watch the clock, 490
expenses
CAPEX, 51
OPEX, 51
Extranet topologies, MPLS L3VPN, 243–245
F
Fabric Border Node, 274
Fabric Control-Plane Node, 272–273
Fabric Edge Node, 273
Fabric Intermediate Node, 274
Fabric policy, Identity Repository, 274–275
Fabric SDN Controller, 275
failures
design failures, 12
domains, 22
isolating, 22–26
shared failure state, 27–30, 415
fate sharing, 27–30, 415
FCAPS (Fault, Configuration, Accounting, Performance, and Security),
391
FEAF (Federal Enterprise Architecture Framework), 112–113,
115
feedback loops, Zero Trust Architectures, 94
FHRP (First Hop Redundancy Protocol), 150–152
FIB (Forwarding Information Base), 161–162
FIFO (First In, First Out), 377–378
filtering routes, 202–205
final preparation, 490
chapter-ending review tools, 494
earplugs, 490
locking up valuables, 491
online testing, 491
Pearson Cert Practice Test Engine, 491
accessing offline, 492
accessing online, 491–492
customizing practice tests, 492–493
Premium Edition, 494
updating practice tests, 493
rest, 491
study trackers, 490
suggested plan for final review/study, 494
taking notes, 491
time budgets for exam questions, 490
travel time, 491
watch the clock, 490
flat VPLS design model, 129–131
flexibility of network design. See scalability
frameworks, architecture
BOST framework, 114, 115
DODAF, 114, 115
FEAF, 112–113, 115
ITIL, 114, 115
RMF, 112–113, 115
TOGAF, 111, 115
Zachman Framework, 112, 115
FSO (Full Stack Observability), 399
full-mesh topologies
EIGRP routing, 174
link-state over, 165–167
MPLS L3VPN, 240–241
fully automated network management, 399
functional requirements, design mindset, 7
fundamentals of network design, 5–6
business as network design fundamental, 13
constraints, 13–15
crafting requirements, 9–10
design use cases
add technology or application, 11
brownfield network design, 10–11
design failures, 12
divest, 11–12
greenfield network design, 10
merge, 11–12
scaling networks, 12
identifying requirements with “Why?,” 15–16
mindset, 6–7
application requirements, 8–9
functional requirements, 7
technical requirements, 7
G
gateways, CAP, 75
GDPR (General Data Protection Regulation), 98
GLBP (Gateway Load Balancing Protocol), 150
gold plating, 37–38
GRE (Generic Routing Encapsulation)
DMVPN per VRF, 477–478
MPLS over
dynamic multipoint GRE (DMVPN), 479–480
multipoint GRE, 479–481
point-to-point GRE tunnels, 478–479
point-to-point GRE tunnel per VRF, 476–477
greenfield network design, 10
gRPC network management interface, 399
H
hardcoded items, 71
hardening devices, 281–285
hiding topology/reachability information, 175–177
hierarchical design models, campus networks, 404–406
hierarchical QoS, 378
hierarchy of design, 33–36, 458–459
high availability, 71
high density wireless network design, 309–312
higher education campus architecture case study
business success, designing for, elasticity, strategic support
for business trends, 48–49
network design, 20–21
high-level design, network management, 391–395
HIPAA (Health Insurance Portability and Accountability Act), 97
hop-by-hop path virtualization, 417–418
HSRP (Hot Standby Router Protocol), 150
HSRP-Aware PIM, 348–349
hub-and-spoke topologies
EIGRP routing, 171–172
link-state over, 163–166
MPLS L3VPN, 240–241
OSPF interfaces, 165–166
H-VPLS (hierarchical VPLS) design model, 131–133
hybrid clouds, 75, 76
I
IaaS (Infrastructure as a Service), 72, 73
IaC (Infrastructure as Code), 319–321, 324
identifying
EA
current state, 110–111
operating models, 110
requirements with “Why?,” 15–16
identity management, 294
Identity Repository, Fabric policy, 274–275
IEEE 802.11 wireless standard, 303–305
IGMP snooping, 331–332
IGP flooding domains, 178
EIGRP routing, 184
link-state routing, 178–183
routing domain logical separation, 184
summary of characteristics, 198
traffic engineering, 196–198
infrastructures
constraints, 15
devices, Zero Trust Architectures, 94
security
device hardening, 281–285
Layer 2 security, 285–287
Layer 3 control plane security, 287–288
overlay networks, 288–291
remote-access security, 288–291
Integrity (CIA Triad), 96
interconnecting multicast islands, 339–340
interdomain multicasting, 340
interdomain routing, 206–207
intermediate (application) layer, 3-tier application model, 69, 70
internal users, 8
Internet
DIA, 74
as WAN transport, 455–458
Internet edge design, 429–430
multihoming, 431
BGP over multihomed Internet edge design, 433–
443
concept, 432–433
drivers, 432–433
overview, 430–432
interoperability, cloud-agnostic architecture, 77
intrusion prevention, 292–294
IntServ architectures, 371
inventories, Zero Trust Architectures, 93
IP (Internet Protocol)
routing/forwarding, 161–162
tunneling QoS design, 386–390
IPv6 (Internet Protocol version 6)
address types, 356–357
business drivers, 355–356
IPv4 migration/integration, 357–358
deployment, 366
detailed design, 363–365
discovery phase, 358
monitoring, 366
optimization, 366
solution assessment/planning, 358–363
transitioning scenario, 366–369
technical drivers, 355–356
IS-IS (Intermediate System to Intermediate System)
multilevel IS-IS, 164–165
optimal routing, 189–190
OSPF vs., 170
suboptimal routing, 195
traffic engineering, 197
isolating failures, 22–26
IT (Information Technology)
as “business innovation” enabler, 49
as a “new utility,” 50–51
ITIL (Information Technology Infrastructure Library), 114, 115
J-K
Kanban methodology, project management, 60–62, 63–64
KPI (Key Performance Indicators), BA, 108
L
L2VPN Ethernet-based models, 121–122
L3VPN forwarding plane, 238–239
Extranet topology, 243–245
full mesh topologies, 240–241
hub-and-spoke topologies, 241–244
multihoming, 239–240
multilevel hub-and-spoke topologies, 243–244
PE-CE L3VPN routing design, 249–266
shared services, 243–245
LACP (Link Aggregation Control Protocol), 148–149
LAN (Local Area Networks)
controller services ports, wireless networks, 309
E-LAN service, 123, 451
Layer 2 LAN design models
daisy-chained access switches, 155–156
STP-based models, 152–154
switch clustering (virtual switches), 154–155
SD-LAN, 269–275
VLAN, 148, 417
VLAN-based transport model, VPWS, 125
VPLS
building blocks, 127
control plane, 126–127
design considerations, 126–127
design models (overview), 129
EVPN design model, 134–139, 141
flat VPLS design model, 129–131
functional components, 128
H-VPLS design model, 131–133
VSI, 128
VXLAN design model, 140–141
Layer 2 (Data Link Layer)
core technologies, 147
extensions, 71
FHRP, 150–152
LAN design models, 152
daisy-chained access switches, 155–156
STP-based models, 152–154
switch clustering (virtual switches), 154–155
link aggregation, 148–150
media access, 120–121
Carrier Ethernet Architecture, 122–123
L2VPN Ethernet-based models, 121–122
Metro Ethernet services, 123–124
MPLS-based WAN, 450–453, 457–458
security, 285–287
STP, 147
802.1D, 147
802.1S MST, 147
802.1W, 147
BPDU Filter, 147
BPDU Guard, 147
FHRP, 150–152
Layer 2 LAN design models, 152–154
Loop Guard, 147
Root Guard, 147
trunking, 148
VLAN, 148
Layer 3 (Network layer)
BGP
AIGP, 214–215
attributes, 208–209
BGP shortest path over the enterprise core, 214–
215
confederation, 222–225
enterprise core routing design models, 210–214
as enterprise core routing protocol, 209–215
path selection, 208–209
route reflection, 215–221
routing, 206
scalability design options, 215–225
update grouping, 221–222
campus network design, 411–413
control plane security, 287–288
EIGRP routing, 170–171, 174
bandwidth, 174–175
EIGRP flooding domain structure, 184
hub-and-spoke topologies, 171–172
propagation considerations, 174–175
ring topologies, 173–174
stub route leaking, 172–173
suboptimal routing, 195
zero successor routes, 175
Enterprise Layer 3 routing, IP routing/forwarding, 161–162
hiding topology/reachability information, 175–177
IGP flooding domains, 178
EIGRP routing, 184
link-state routing, 178–183
routing domain logical separation, 184
summary of characteristics, 198
traffic engineering, 196–198
interdomain routing, 206–207
link-state routing, 162–163
link-state flooding domain structure, 178–183
link-state over full-mesh topologies, 165–167
link-state over hub-and-spoke topologies, 163–166
OSPF, 163–164, 167–170
MPLS-based WAN, 453–454, 457–458
policy compliance, 190
route filtering, 225–227
route redistribution, 198–199
multiple redistribution boundary points, 200–201
route filtering, 202–205
single redistribution boundary points, 199–200
route summarization, 191–194
security control, 190
suboptimal routing, 194–196
traffic patterns, 187–190
underlying physical topologies, 184–186
LEAP (Lightweight EAP), 296
link aggregation, 148–150, 410
link-state routing, 162–163
link-state flooding domain structure, 178–183
link-state over
full-mesh topologies, 165–167
hub-and-spoke topologies, 163–166
optimal routing, 196
live-live streaming, 347–348
load sharing, BGP over multihomed Internet edge design, 437–
440
location constraints, 14
locking up valuables, exam preparation, 491
Loop Guard, 147
Looped Square topologies, 153
Looped Triangle topologies, 153
Loop-Free Inverted U topologies, 153
Loop-Free U topologies, 154
LSDB (Link-State Database), 162–163
M
MAB (MAC Authentication Bypass), 296
managing
AQM, 379
complexity of network design, 19–20
congestion, 376–378
data, 11 pillars of data management, 80–81
identity, 294
networks, 390–391
AIOps, 399
CLA, 399
FCAPS, 391
FSO, 399
fully automated network management, 399
gRPC network management interface, 399
high-level design, 391–395
model-driven management, 396–399
multitier design, 395–396
NETCONF protocol, 397, 398
RESTCONF protocol, 397–398
YANG, 396–397, 398
projects, 55
Agile methodologies, 57–59, 63–64
Kanban methodology, 60–62, 63–64
Scrum methodology, 59–60, 63–64
waterfall methodology, 55–57, 62–64
many-to-one virtualization model, 413
merge use cases, 11–12
metric transformation, multiple redistribution boundary points,
200–201
Metro Ethernet services, 123–124
migrating Zero Trust Architectures, 94–95
mindset, network design, 6–7
application requirements, 8–9
functional requirements, 7
technical requirements, 7
mLAG (multichassis Link Aggregation), 410, 459–460
model-driven network management, 396–399
modularity
campus networks, 406–411
network design, 30–33
MP-BGP VPN Internet routing, 247–249
MPLS (Multiprotocol Label Switching), 232–233
architectural components, 233–235
control plane components, 234–237
L3VPN forwarding plane, 238–239
Extranet topology, 243–245
full mesh topologies, 240–241
hub-and-spoke topologies, 241–244
multihoming, 239–240
multilevel hub-and-spoke topologies, 243–244
PE-CE L3VPN routing design, 249–266
shared services, 243–245
MP-BGP VPN Internet routing, 247–249
multihoming, 239–240
Non-MP-BGP VPN Internet routing, 246–247
over dynamic multipoint GRE (DMVPN), 479–480
over multipoint GRE, 479–481
over point-to-point GRE tunnels, 478–479
peer model, 233
route distinguishers, 235–237
RT, 237
VPN
Internet access design, 246–249
path virtualization, 418–419
VRF, 235
WAN
Layer 2 technologies, 450–453, 457–458
Layer 3 technologies, 453–454, 457–458
migration to MPLS VPN, 484–488
virtualization, 475–476
MSDP (Multicast Source Discovery Protocol), 341–343
MST (Multiple Spanning Tree), 147
multicasting, 70–71
Anycast-RP, 344–346
application requirements, 344
BGP, 340–341
HSRP-Aware PIM, 348–349
IGMP snooping, 331–332
interconnecting multicast islands, 339–340
interdomain multicasting, 340
IP address mapping into Ethernet MAC address, 329–332
live-live streaming, 347–348
MSDP, 341–343
phantom RP, 346–347
resiliency, 344
routing, 332–335
RP
Auto-RP, 335
embedded-RP, 343
placement of, 335–338
static RP, 335
RPF, 332–333, 341–342
switching, 329–332
multi-clouds, 76
multihoming
Internet edge design, 431
active-standby design scenario, 435–437
asymmetrical routing, 440–441
BGP over multihomed Internet edge design, 433–
443
concept, 432–433
drivers, 432–433
load sharing, 437–440
optimized design, 441–443
MPLS L3VPN, 239–240
multihop path virtualization, 417–418
multilevel hub-and-spoke topologies, 243–244
multilevel IS-IS, 164–165
multiple redistribution boundary points, 200
AD, 201
metric transformation, 200–201
multitier network management design, 395–396
multitier STP-based design model, 408
N
NAC (Network Access Control), 294
nature of the business, designing for business success, 52
NETCONF protocol, 397, 398
network design, 4
best practices, 38
business as network design fundamental, 13
campus networks, 402
access-distribution design model, 408
core blocks, 407–408
hierarchical design models, 404–406
Layer 3 routing, 411–413
modularity, 406–411
multitier STP-based design model, 408
routed access design model, 408–409
switch clustering (virtual switches), 409–410
three-tier models, 405
two-tier models, 405–406
virtualization, 413–425
constraints, 13–15
crafting requirements, 9–10
enterprise networks, virtualization, 416–425
fundamentals, 5–6
higher education campus architecture case study, 21
failure isolation, 22–26
hierarchy of design, 33–36
modularity, 30–33
shared failure state, 27–30
identifying requirements with “Why?,” 15–16
Internet edge design, 429–430
multihoming, 431–443
overview, 430–432
mindset, 6–7
application requirements, 8–9
functional requirements, 7
technical requirements, 7
pitfalls, 36
assumptions, 36–37
overdesigning, 37–38
preconceived notions, 38–39
principles, 16
availability, 18–19
complexity, 19–20
cost constraints, 20–21
scalability, 20
security, 16–18
techniques, 21
failure isolation, 22–26
hierarchy of design, 33–36
modularity, 30–33
shared failure state, 27–30
top-down approach, 4
unstated requirements, 16
use cases
add technology or application, 11
brownfield network design, 10–11
design failures, 12
divest, 11–12
greenfield network design, 10
merge, 11–12
scaling networks, 12
network management, 390–391
AIOps, 399
CLA, 399
FCAPS, 391
FSO, 399
fully automated network management, 399
gRPC network management interface, 399
high-level design, 391–395
model-driven management, 396–399
multitier design, 395–396
NETCONF protocol, 397, 398
RESTCONF protocol, 397–398
YANG, 396–397, 398
network services
IPv6
address types, 356–357
business drivers, 355–356
IPv4 migration/integration, 357–369
technical drivers, 355–356
QoS design, 369
admission control, 379–380
bandwidth allocation, 383
CBWFQ, 377–378
congestion avoidance (AQM), 379
congestion management, 376–378
DiffServ architectures, 371–378
Enterprise Campus QoS design, 385
Enterprise Edge QoS design, 385–386
FIFO, 377–378
frameworks, 384–385
hierarchical QoS, 378
high-level design, 370–371
IntServ architectures, 371
IP tunneling QoS design, 386–390
PQ, 377
strategies, 380–385
traffic policing/shaping, 379–380
traffic profiling, 376–378
twelve-class QoS baseline model, 381
network virtualization
MPLS, 232–233
architectural components, 233–235
control plane components, 234–237
L3VPN forwarding plane, 238–245
MP-BGP VPN Internet routing, 247–249
multihoming, 239–240
Non-MP-BGP VPN Internet routing, 246–247
PE-CE L3VPN routing design, 249–266
peer model, 233
route distinguishers, 235–237
RT, 237
VPN Internet access design, 246–249
VRF, 235
software-defined networks, 266
SD-LAN, 269–275
SD-WAN, 266–269
networks
purpose of, 66
scaling, 12
security, integration, 88–91
“new utility,” IT as a, 50–51
NFV (Network Functions Virtualization), 424–425
nonfunctional requirements. See technical requirements
Non-MP-BGP VPN Internet routing, 246–247
notes (exam preparation), taking, 491
notions (preconceived), network design, 38–39
O
one-to-many virtualization model, 414
online testing, exam preparation, 491
on-premises service model, 72
operating models, identifying, EA, 110
OPEX (Operating Expenses), 51
OSPF (Open Shortest Path First)
area types, 167
hub-and-spoke topologies, 165–166
IS-IS vs., 170
optimal routing, 188–190
route propagation, 167
suboptimal routing, 188–189, 195
totally NSSA, 168–169
totally stubby areas, 169–170
traffic engineering, 197
outcomes, business, 46
overdesigning, pitfalls of network design, 37–38
overlay networks, 271
security, 288–291
WAN, 471–473
override, VLAN, 297
over-the-top WAN virtualization, 475–482
P
PaaS (Platform as a Service), 72, 73
partitioning devices, 417–420
path isolation, 417
path virtualization, 417–420
Patient Safety Rule (HIPAA), The, 97
PBB (Provider Backbone Bridging)
EVPN design model, 135–136
H-VPLS design model, 132–133
PCI DSS (Payment Card Industry Data Security Standard), 97–98
PEAP (Protected EAP), 295, 296
Pearson Cert Practice Test Engine, 491
access
offline access, 492
online access, 491–492
practice tests
customizing, 492–493
updates, 493
Premium Edition, 494
PE-CE L3VPN routing design, 249–266
PEP (Policy Enforcement Points), Zero Trust Architectures, 94
perimeter security (turtle shell), 17, 292–294
phantom RP, 346–347
PIM-BIDIR, 334
PIM-DM, 334
PIM-SM, 334
PIM-SSM, 334
PIP (Policy Information Points). See trust engines
pitfalls of network design, 36
assumptions, 36–37
overdesigning, 37–38
planning
business success, 52–53
decision matrices (matrix), 53–54
decision trees, 53
strategic planning approach, 54
tactical planning approach, 54
decision matrices (matrix), 53–54
decision trees, 53
IPv4 migration/integration to IPv6, 358–363
strategic planning approach, 54
suggested plan for final review/study, 494
tactical planning approach, 54
travel time, exam preparation, 491
point-to-point GRE tunnel per VRF, 476–477
policies
compliance, Layer 3 technologies, 190
security, 87
policy engines, Zero Trust Architectures, 93
portability, cloud-agnostic architecture, 76–77
port-based VPWS model, 126
PQ (Priority Queuing), 377
practice tests, Pearson Cert Practice Test Engine, 491
accessing
offline, 492
online, 491–492
customizing, 492–493
Premium Edition, 494
updating, 493
preconceived notions, network design, 38–39
preparing for exam, 490
chapter-ending review tools, 494
earplugs, 490
locking up valuables, 491
online testing, 491
Pearson Cert Practice Test Engine, 491
accessing offline, 492
accessing online, 491–492
customizing practice tests, 492–493
Premium Edition, 494
updating practice tests, 493
rest, 491
study trackers, 490
suggested plan for final review/study, 494
taking notes, 491
time budgets for exam questions, 490
travel time, 491
watch the clock, 490
presentation (web) layer, 3-tier application model, 69, 70
principles of network design, 16
availability, 18–19
complexity, 19–20
cost constraints, 20–21
scalability, 20
security, 16–18
priorities, business, 45, 46
Privacy Rule (HIPAA), The, 97
private clouds, 76
project management, 55
Agile methodologies, 57–59, 63–64
Kanban methodology, 60–62, 63–64
Scrum methodology, 59–60, 63–64
waterfall methodology, 55–57, 62–64
proprietary vs. agnostic cloud services, 77–78
public clouds, 76
purpose of networks, 66
Q
QoS design, network services, 369
admission control, 379–380
bandwidth allocation, 383
CBWFQ, 377–378
congestion avoidance (AQM), 379
congestion management, 376–378
DiffServ architectures, 371–378
Enterprise Campus QoS design, 385
Enterprise Edge QoS design, 385–386
FIFO, 377–378
frameworks, 384–385
hierarchical QoS, 378
high-level design, 370–371
IntServ architectures, 371
IP tunneling QoS design, 386–390
PQ, 377
strategies, 380–385
traffic policing/shaping, 379–380
traffic profiling, 376–378
twelve-class QoS baseline model, 381
queues
AQM, 379
WFQ, 377
R
reachability information, hiding, 175–177
redundancy, network design, 18
reference architectures
BA, 105–106
alignment, 106
guiding principles, 106–107
KPI, 108
scope, 106
EA, 108–109
aligning stakeholders, 109–110
CMDB, 110–111
identifying current state, 110–111
identifying operating models, 110
methodology, 109–111
frameworks
BOST framework, 114, 115
DODAF, 114, 115
FEAF, 112–113, 115
ITIL, 114
ITIL, 115
RMF, 112–113, 115
TOGAF, 111, 115
Zachman Framework, 112, 115
regulatory compliance, 96
data sovereignty, 98
GDPR, 98
HIPAA, 97
PCI DSS, 97–98
reliability, network design, 18
remote-access security, 288–291
remote site (branch) WAN design, 462–467
replacing technology, 11
requirements
applications, 70–72
crafting network design requirements, 9–10
design mindset
application requirements, 8–9
functional requirements, 7
technical requirements, 7
identifying with “Why?,” 15–16
nonfunctional requirements. See technical requirements
unstated requirements, network design, 16
resiliency
multicasting, 344
network design, 18
rest, exam preparation, 491
RESTCONF protocol, 397–398
reward vs. business risk vs. cost analysis, 50–51
RIB (Routing Information Base), 161–162
ring topologies, EIGRP routing, 173–174
risk scores, Zero Trust Architectures, 93
RMF (Risk Management Framework), 112–113, 115
Root Guard, 147
routed access design model, 408–409
routing
asymmetrical routing, BGP over multihomed Internet edge
design, 440–441
distinguishers, 235–237
domain logical separation, 184
MP-BGP VPN Internet routing, 247–249
multicast routing, 332–335
Non-MP-BGP VPN Internet routing, 246–247
redistribution, 198–199
multiple redistribution boundary points, 200–201
route filtering, 202–205
single redistribution boundary points, 199–200
summarization, 191–194
VRF, 235
RP (Rendezvous Point)
Anycast-RP, 344–346
Auto-RP, 335
embedded-RP, 343
placement of, 335–338
static RP, 335
RPF (Reverse Path Forwarding), 332–333, 341–342
RPO (Recovery Point Objective), 47
RT (Route Targets), 237
RTO (Recovery Time Objective), 47
run books (application binders), 80
S
SaaS (Software as a Service), 72, 73
scalability
BGP, 215–225
networks, 12, 20
RPF, 334
switch clustering (virtual switches), 410
scope of network design, 9–10
Scrum methodology, project management, 59–60, 63–64
SD-LAN (Software-Defined LAN), 269–275
SD-WAN (Software-Defined WAN), 266–269
security, 87
authentication, 294–296
authorization, 296–298
business assurance, 18
chokepoints, 89–91
CIA Triad, 95–96
device hardening, 281–285
DMZ, 91
domains, 89–91
hardening devices, 281–285
identity management, 294
intrusion prevention, 292–294
Layer 2 security, 285–287
Layer 3 technologies, 190, 287–288
NAC, 294
networks
design, 16–18
integration, 88–91
overlay networks, 288–291
wireless networks, 305–306
perimeter security (turtle shell), 17, 292–294
policies, 87
regulatory compliance, 96
data sovereignty, 98
GDPR, 98
HIPAA, 97
PCI DSS, 97–98
remote-access security, 288–291
session-based security, 17
topology/reachability information, hiding, 175–177
transaction-based security, 17
turtle shell (perimeter) security, 17
visibility, 298
wireless networks, 305–306
wireless security, 291–292
Zero Trust Architectures, 17, 91–92, 94
authorization for access, 93
dynamic factors, 92
endpoint devices, 93
entitled to access, 93
feedback loops, 94
infrastructure devices, 94
inventories, 93
migrating, 94–95
PEP, 94
policy engines, 93
risk scores, 93
static factors, 92
trust engines, 93
trust scores, 93
Security Rule (HIPAA), The, 97
service models, 72
IaaS, 72, 73
PaaS, 72, 73
on-premises service model, 72
SaaS, 72, 73
service virtualization, 415, 420–425
session-based security, 17
shared failure state, 27–30, 415
shared services, MPLS L3VPN, 243–245
shared trees, 337–338
single redistribution boundary points, 199–200
single WAN providers, 462–463
single-server application model, 69
SOA (Service-Oriented Architecture), 78
software-defined networks, 266
SD-LAN, 269–275
SD-WAN, 266–269
solution assessment/planning, IPv4 migration/integration to
IPv6, 358–363
sovereignty, data, 98
SP-controlled WAN, 473–474
split-brain situations, 155
SPT (Shortest-Path Tree), 336
staff expertise constraints, 15
stakeholders (EA), aligning, 109–110
static factors, Zero Trust Architectures, 92
static RP, 335
STP (Spanning Tree Protocol), 147
802.1D, 147
802.1S MST, 147
802.1W, 147
BPDU Filter, 147
BPDU Guard, 147
FHRP, 150–152
Layer 2 LAN design models, 152–154
Loop Guard, 147
multitier STP-based design model, 408
Root Guard, 147
strategic balance, business success, 54–56
strategic planning approach, 54
streaming, live-live, 347–348
stub route leaking, 172–173
study trackers, 490
suboptimal routing, 188–189, 194–196
success (business), designing for
BC, 47
business capabilities, 46
business drivers, 45, 46
business outcomes, 46
business priorities, 45, 46
business risk vs. reward vs. cost analysis, 50–51
CAPEX, 51
elasticity, strategic support for business trends, 48–49
IT
as “business innovation” enabler, 49
as a “new utility,” 50–51
nature of the business, 52
OPEX, 51
planning, 52–53
decision matrices (matrix), 53–54
decision trees, 53
strategic planning approach, 54
tactical planning approach, 54
RPO, 47
RTO, 47
strategic balance, 54–56
top-down approach, 44–45
suggested plan for final review/study, 494
summary black holes, 193–194
switch clustering (virtual switches), 154–155, 409–410
T
tactical planning approach, 54
technical drivers, IPv6, 355–356
technical requirements, design mindset, 7
techniques of network design, 21
failure isolation, 22–26
hierarchy of design, 33–36
modularity, 30–33
shared failure state, 27–30
technology
adding, 11
constraints, 14
replacing, 11
testing online, exam preparation, 491
three-tier network design model, campus networks, 405
time budgets for exam questions, 490
time constraints, 14
TOGAF (The Open Group Architecture Framework), 111, 115
top-down approach
business success, designing for, 44–45
network design, 4
topologies
Extranet topologies, 243–245
full-mesh topologies
EIGRP routing, 174
link-state over, 165–167
MPLS L3VPN, 240–241
hiding information, 175–177
hub-and-spoke topologies
EIGRP routing, 171–172
link-state over, 163–166
MPLS L3VPN, 241–244
OSPF interfaces, 165–166
Looped Square topologies, 153
Looped Triangle topologies, 153
Loop-Free Inverted U topologies, 153
Loop-Free U topologies, 154
ring topologies, EIGRP routing, 173–174
underlying physical topologies, Layer 3 technologies, 184–186
traditional deployment architecture, cloud containerization, 78
traffic engineering
destination traffic engineering, 196
EIGRP routing, 197–198
IS-IS, 197
OSPF, 197
traffic flows, wireless networks, 306–309
traffic patterns, Layer 3 technologies, 187–190
traffic policing/shaping, QoS design, 379–380
traffic profiling, 376–378
transaction-based security, 17
transport technologies
Layer 2 media access, 120–121
Carrier Ethernet Architecture, 122–123
L2VPN Ethernet-based models, 121–122
Metro Ethernet services, 123–124
VPLS
building blocks, 127
control plane, 126–127
design considerations, 126–127
design models (overview), 129
EVPN design model, 134–139, 141
flat VPLS design model, 129–131
functional components, 128
H-VPLS design model, 131–133
VSI, 128
VXLAN design model, 140–141
VPWS
AToM, 124
design considerations, 124–125
port-based model, 126
VLAN-based transport model, 125
transport virtualization, campus networks, 415
travel time, exam preparation, 491
trends (business), elasticity as strategic support, 48–49
trunking, 148
trust engines, Zero Trust Architectures, 93
trust scores, Zero Trust Architectures, 93
tunneling, path virtualization, 418
turtle shell (perimeter) security, 17, 292–294
twelve-class QoS baseline model, 381
two-tier network design model, campus networks, 405–406
U
underlay networks, 271
underlying physical topologies, Layer 3 technologies, 184–186
UNI, Metro Ethernet services, 124
unstated requirements, network design, 16
updates
Cisco Certified Design Expert (CCDE 400-007) Exam,
514–515
practice tests, 493
use cases, design
add technology or application, 11
brownfield network design, 10–11
design failures, 12
divest, 11–12
greenfield network design, 10
merge, 11–12
scaling networks, 12
users (end), categories of, 8
V
valuables (exam preparation), locking up, 491
VASI (VRF-Service Aware Infrastructure), 423–424
video, wireless network design, 312–313
virtual switches (switch clustering), 154–155, 409–410
virtualization
campus networks, 413–414
drivers, 414–415
edge control, 415
services virtualization, 415
transport virtualization, 415
deployment architecture, cloud containerization, 79
enterprise networks, 416–420
many-to-one virtualization model, 413
MPLS, 232–233
architectural components, 233–235
control plane components, 234–237
L3VPN forwarding plane, 238–245
MP-BGP VPN Internet routing, 247–249
multihoming, 239–240
Non-MP-BGP VPN Internet routing, 246–247
PE-CE L3VPN routing design, 249–266
peer model, 233
route distinguishers, 235–237
RT, 237
VPN Internet access design, 246–249
VRF, 235
NFV, 424–425
one-to-many virtualization model, 414
path isolation, 417
path virtualization, 417–420
service virtualization, 415, 420–425
software-defined networks, 266
SD-LAN, 269–275
SD-WAN, 266–269
transport virtualization, campus networks, 415
VLAN, 417
VRF, 417
WAN, 471–483
visibility, security, 298
VLAN (Virtual Local Area Networks), 148, 297, 417
VLAN-based transport model, VPWS, 125
voice
domains, 297
wireless network design, 312–313
VPLS (Virtual Private LAN Service)
building blocks, 127
control plane, 128–129
design considerations, 126–127
design models (overview), 129
EVPN design model, 134–139, 141
flat VPLS design model, 129–131
functional components, 128
H-VPLS design model, 131–133
VSI, 128
VXLAN design model, 140–141
VPN (Virtual Private Networks)
DMVPN
MPLS over dynamic multipoint GRE (DMVPN),
479–480
per VRF, 477–478
WAN, 466–467
EVPN design model, 134–139, 141
MPLS VPN
path virtualization, 418–419
WAN migration to MPLS VPN, 484–488
overlay VPN security, 288–291
VPWS (Virtual Private Wire Service)
AToM, 124
design considerations, 124–125
port-based model, 126
VLAN-based transport model, 125
VRF (Virtual Routing and Forwarding), 235, 417
DMVPN per VRF, 477–478
point-to-point GRE tunnel per VRF, 476–477
VRF-Aware NAT, 422–423
VRRP (Virtual Router Redundancy Protocol), 150
VSI (Virtual Switching Instances), 128
VXLAN design model, 140–141
W
WAN (Wide Area Networks), 448–449
aggregation layer access, 458–460
CSC design model, 475–476
customer-controlled WAN, 473–474
design hierarchy, 458–459
DMVPN connectivity model, 466–467
dual WAN providers, 462–463
ECMP, 459–460
edge connectivity design, 460–461
EIGRP OTP, 480–482
enterprise-controlled WAN, 473–474
mLAG, 459–460
module design, 467–471
MPLS over dynamic multipoint GRE (DMVPN), 479–480
MPLS over multipoint GRE, 479–481
MPLS over point-to-point GRE tunnels, 478–479
MPLS-based WAN
Layer 2 technologies, 450–453, 457–458
Layer 3 technologies, 453–454, 457–458
virtualization, 475–476
overlay networks, 471–473
over-the-top WAN virtualization, 475–482
point-to-point GRE tunnel per VRF, 476–477
remote site (branch) WAN design, 462–467
SD-WAN, 266–269
single WAN providers, 462–463
SP-controlled WAN, 473–474
transports
Internet as WAN transport, 455–458
modern transports, 450
MPLS-based WAN, 450–454
overview, 449–450
virtualization, 471–483
WAN migration to MPLS VPN, 484–488
watch the clock, exam preparation, 490
waterfall methodology, project management, 55–57, 62–64
web (presentation) layer, 3-tier application model, 69, 70
WFQ (Weighted Fair Queuing), 377
“Why?” identifying requirements with, 15–16
wireless networks
CAPWAP, 306–309
device capabilities, 303–305
enterprise wireless network design, 309–313
high density wireless network design, 309–312
IEEE 802.11 wireless standard, 303–305
LAN controller services ports, 309
security, 305–306
traffic flows, 306–309
video design, 312–313
voice design, 312–313
wireless security, 291–292
X-Y
YANG (Yet Another Next Generation), 396–397, 398
Z
Zachman Framework, 112, 115
zero successor routes, 175
Zero Trust Architectures, 17, 91–92, 94
authorization for access, 93
dynamic factors, 92
endpoint devices, 93
entitled to access, 93
feedback loops, 94
infrastructure devices, 94
inventories, 93
migrating, 94–95
PEP, 94
policy engines, 93
risk scores, 93
static factors, 92
trust engines, 93
trust scores, 93
ZTP (Zero-Touch Provisioning), 318–319, 324
Appendix C
Memory Tables
Chapter 2
Table 2-3 Business Risk vs. Reward vs. Cost Analysis
Design Decision | Associated Cost | Design Complexity | Business Risk | Business Reward

Design Decision: ____
Associated Cost: Low
Business Risk: Very high; outages are more likely to occur that would directly bring the business offline, which would make the business lose revenue and market reputation.

Design Decision: No single points of failure
Associated Cost: Medium; the solution cost increases to allow for redundant components to mitigate any single points of failure.
Business Reward: High; the initial cost is higher but the reward is substantially better because the business can function, and continue to make money, while a single failure occurs. With this design comes a level of complexity that needs to be properly managed.

Design Decision: No dual points of failure
Associated Cost: Very high
Business Reward: Very high; the initial cost is substantially higher than limiting single points of failure, but now the solution is more robust and can withstand a higher degree of failures and still allow the business to function. One of the drawbacks here, besides the high cost, is the very high complexity level. A business running solutions with no dual points of failure requires a highly skilled and technical team to manage and maintain it.
Chapter 3
Table 3-2 3 Tier Application Model Network Design Elements
Tier | Traffic Pattern | Network Design Elements | Questions to Ask

(Cloud service models: Service Model | Characteristics | Advantages | When to Use)
____ : Characteristics: Available locally. Hosted within the business's server environment.
____ : Advantages: No need to install and run software on any computer. Everything is available to the end user over the Internet. Access to software can be from any device, at any time, with Internet connectivity.
____ : Characteristics: Accessible by multiple users. Easy to run without extensive IT knowledge. When to Use: Primarily used by developers to create software or applications.
IaaS: Characteristics: Highly flexible. Highly scalable. Accessible by multiple users. Cost-effective. When to Use: When a business requires complete control over its infrastructure and wants to operate on a pay-as-you-go basis.
Chapter 4
Table 4-2 Confidentiality, Integrity, and Availability
CIA Triad Security Element | Characteristic | Mechanisms to Achieve
Chapter 5
Table 5-2 compares the most common advantages of the previously
mentioned architecture frameworks.
Minimize risk.
Protection of assets.
Reputation management.
Cost optimization.
Chapter 6
Table 6-2 Metro Ethernet Transport Models
Service Type | Port Based | VLAN Based

(VPLS design model comparison: VPLS | H-VPLS | PBB-VPLS | EVPN | PBB-EVPN)
____ : No | No | No | Yes | Yes
Operational complexity: High* | High*
____ : No | No | No | Yes | Yes
Loop prevention with multihomed CEs: No | No | No | Yes | Yes
*If BGP is used as the control plane for VPLS, operational complexity will
be reduced.
**What determines small, medium, and large DCI solutions is the number of
interconnected sites per customer, scale of the VMs/MACs, and the number
of customers; therefore, the suggestion here can be considered as generic
and not absolute.
Chapter 7
Table 7-2 Business Priorities, Drivers, Outcomes, and Capabilities
Relationship Mapping
HSRP | VRRP | GLBP
Authentication
Chapter 8
Table 8-2 Summary of OSPF Area Types
Area Type | Advertised Route
____ : Internal area routes + default route (both type 3 and 5 LSAs are suppressed)

Link State | EIGRP
Summarization
Filtering
Scalability
Manageability
Flexibility

____ : An AS that has connections to more than one AS, and typically should not offer a transit path
Chapter 9
Table 9-2 MPLS L3VPN RD Allocation Models
RD Model | Strength | Weakness | Suitable Scenario
*Load balancing or load sharing for multihomed sites using unique RD per
VPN per PE is covered in more detail later in this chapter.
**In large-scale networks with a large number of PEs and VPNs, unique
RD per VPN RD allocation should be used. The unique RD per VPN per
PE RD allocation model can be used only for multihomed sites if the
customer needs to load balance/share traffic toward these sites.
***BGP site of origin (SoO) can be used as an alternative to serve the same
purpose without the need of a unique RD per interface/VRF.
Chapter 10
Table 10-3 Overlay VPN Solutions Comparison
IPsec | GRE | DMVPN | GETVPN | Remote Access (Client Based)

(Wireless security protocols comparison)
Encryption: RC4
Key size: 128-bit (Personal), 192-bit (Enterprise)
Cipher method: Stream
Key management: None
Client authentication: PSK and 802.1X (EAP variant) | PSK and 802.1X (EAP variant) | SAE and 802.1X (EAP variant)

(802.1X EAP types comparison)
Client-side certificate required
Server-side certificate required
WEP key management
Rogue AP detection
Deployment difficulty: Easy | Moderate
Chapter 14
Table 14-2 Summary of IPv4 Versus IPv6
IPv4 | IPv6
Address scope
IP allocation
QoS
Multicast
Security

(IPv6 transition mechanisms: Mechanism | Scenario | Targeted Environment | Design Concern)
____ : Complex | Complex | Increases operational complexity.
____ : Interconnect IPv6 over IPv4 in a hub-and-spoke topology. | Interconnects IPv6-enabled remote sites in hub-and-spoke topology over IPv4 WAN. | Multicast traffic has to go via the hub. Adds control plane complexity.
____ : Interconnects private IPv6 islands across public IPv4 clouds. | Simple, stateless, automatic IPv6-in-IPv4 encap/decap that offers fast IPv6 enablement. | Increases operational complexity.
____ : To offer IPv6 access for residential service providers with limited investment. | Digital subscriber line (DSL)/residential gateway. | Dual-stack IPv4/IPv6 service on residential gateway LAN side. Increases operational complexity. Stateful architecture on L2TP network server (LNS).
Physical layer
Layer 2
Layer 2.5
Layer 3
Layer 4
Higher layers up to 7
Operations
Secure Transport
Chapter 15
Table 15-2 Comparing Access-Distribution Connectivity Models
Multitier STP-Based | Routed Access | Switch Clustering
Layer 3 gateway services: Distribution layer (may or may not require FHRP*)
Access-to-distribution convergence time: Fast
Architecture flexibility
Scalability
MPLS-TE support
Operational complexity: Moderate to high
Architecture
Chapter 16
Table 16-2 Common BGP Attributes for Internet Multihoming
Attribute | Usage | Description
LOCAL_PREFERENCE (LP): Egress
AS-PATH prepend: Ingress
BGP weight: Egress

(Multihoming design comparison criteria)
Design flexibility
Scalability
Monetary cost
Operational efficiency
Operational complexity
Chapter 17
Table 17-5 Comparison of WAN Transport Models
MPLS L2VPN WAN | MPLS L3VPN WAN | Internet as WAN
Bandwidth: Flexible (less than L2 MPLS-based WAN [ME])
Cost: Moderate
Supported number of endpoints
Supported WAN connectivity options
Encryption style
Appendix D
Memory Tables Answer Key
Chapter 3
Table 3-2 3 Tier Application Model Network Design Elements
Tier | Traffic Pattern | Network Design Elements | Questions to Ask

Tier: Web tier
Traffic Pattern: End user and application layer access only.
Network Design Elements: No database layer access. The web tier needs to be globally accessible for the end users. Normally located in a DMZ.
Questions to Ask: How are end users accessing the web tier globally? How are the web tier–specific networks/IP addresses being routed? What's the web tier's high-availability architecture? (Active/active, active/standby, anycast, etc.)

Tier: Application tier
Traffic Pattern: Web and database access only. No end user should ever access this tier directly.
Network Design Elements: This tier is internally accessed only, so no external addresses or routing are needed. Load balancing should be implemented, but how depends on the other tier's communication method with this tier (SNAT, NAT, Sticky, etc.). Normally located internally behind multiple security layers.
Questions to Ask: How does the web tier communicate with the application tier? How does the database tier communicate with the application tier?
(Cloud service models: Service Model | Characteristics | Advantages | When to Use)
Hosted within the business's server environment.
Easy to run without extensive IT knowledge.
Cost-effective.
Chapter 4
Table 4-2 Confidentiality, Integrity, and Availability
CIA Triad Security Element | Characteristic | Mechanisms to Achieve
Chapter 5
Table 5-2 compares the most common advantages of the previously
mentioned architecture frameworks.
Protection of assets.
Reputation management.
Cost optimization.
Chapter 6
Table 6-2 Metro Ethernet Transport Models
Service Type | Port Based | VLAN Based
*If BGP is used as the control plane for VPLS, operational complexity will
be reduced.
**What determines small, medium, and large DCI solutions is the number of
interconnected sites per customer, scale of the VMs/MACs, and the number
of customers; therefore, the suggestion here can be considered as generic
and not absolute.
Chapter 7
Table 7-2 Business Priorities, Drivers, Outcomes, and Capabilities
Relationship Mapping
HSRP VRRP GLBP
Chapter 8
Table 8-2 Summary of OSPF Area Types
Area Type Advertised Route
OSPF stub area can be used as a transit area for the tunnel. | The transit area cannot be an OSPF stub area.
Stub multihomed AS: An AS that has connections to more than one AS, and typically should not offer a transit path
Chapter 9
Table 9-2 MPLS L3VPN RD Allocation Models
RD Model | Strength | Weakness | Suitable Scenario
*Load balancing or load sharing for multihomed sites using unique RD per
VPN per PE is covered in more detail later in this chapter.
**In large-scale networks with a large number of PEs and VPNs, unique
RD per VPN RD allocation should be used. The unique RD per VPN per
PE RD allocation model can be used only for multihomed sites if the
customer needs to load balance/share traffic toward these sites.
***BGP site of origin (SoO) can be used as an alternative to serve the same
purpose without the need of a unique RD per interface/VRF.
Does not contain an SoO value. | The route is accepted into the EIGRP topology table, and the SoO value from the interface that is used to reach the next-hop CE router is appended to the route before it is redistributed into BGP.
Chapter 10
Table 10-3 Overlay VPN Solutions Comparison
IPsec | GRE | DMVPN | GETVPN | Remote Access (Client Based)

(Wireless security protocols comparison)
Key size: 192-bit (Enterprise)
Client authentication: WEP-Open and WEP-Shared | PSK and 802.1X (EAP variant) | PSK and 802.1X (EAP variant) | SAE and 802.1X (EAP variant)
Table 10-5 802.1X EAP Types Comparison
EAP-MD5 | EAP-TLS | EAP-TTLS | PEAP | EAP-FAST | LEAP
Client-side certificate required: No | Yes | No | No | No (PAC) | No
Chapter 14
Table 14-2 Summary of IPv4 Versus IPv6
IPv4 IPv6
Address scope: 32 bit | 128 bit, multiple scopes
Partially migrated
blocks may require
tunneling as an interim
solution.
Migrate the enterprise network fully or partially to be pure IPv6-only or dual stack | Quickly migrating certain enterprise modules first, such as data centers | Limited | Migrating certain modules of the enterprise network first. A DNS translation or tunneling mechanism such as ISATAP is required to maintain the communications between IPv6 and IPv4 islands within the network. Increases operational complexity. | This approach is suitable when the core device does not support IPv6 and requires either hardware or software upgrades. Increases design and control plane complexity.
Increased control
plane complexity.
May introduce
scalability weaknesses
when both IP versions
are running together
(depends on available
hardware resources
such as memory).
Increases operational
complexity.
Stateful architecture on L2TP network server (LNS).
Mechanism | Scenario | Targeted Environment | Design Concern
Application sensitivity to
packet loss, jitter, and delay.
Secure Transport: SSH | HTTPS
Chapter 15
Table 15-2 Comparing Access-Distribution Connectivity Models
Multitier STP-Based | Routed Access | Switch Clustering
Scalability: High with proper query domain containment via EIGRP stubs and summarization | High with proper OSPF area design and area type selection
MPLS-TE support: No | Yes
Chapter 16
Table 16-2 Common BGP Attributes for Internet Multihoming
Attribute Usage Description
Ingress: Longest match over the preferred path, by dividing the prefix into two halves (for instance, advertise /16 as 2 × /17 over the preferred ingress path toward ISP A in the scenario in Figure 16-5)
Ingress: The typical mechanism to use here is to divide the PI address space into two halves. For example, an IPv4 /16 subnet can be divided into two /17 summary subnets; similarly, an IPv6 /48 subnet can be divided into two /49 summary subnets. Then advertise each half over a different link, along with the aggregate (IPv4 /16 or IPv6 /48 in this example) over both links to be used in case of link failure. For unequal load sharing, you can use the same concept with smaller, more specific subnets advertised over the path with higher capacity.
Egress: For the outbound traffic direction, you need to receive the full Internet routing table from one of the ISPs, along with the default route from both. Accept with filtering only every other /4 for IPv4 (for example, 0/4, 32/4); IPv6 can use the same concept, either selectively or in the same way. From the other link, increase the LOCAL_PREFERENCE for the default route. In this case, the more specific routes (permitted in the filtering) will be used over one link, and every other route that was filtered out will go over the second link using the default route. For unequal load sharing, more subnets can be accepted/allowed from the link with higher capacity.
Monetary cost
Operational efficiency: Least efficient | Efficient | Most efficient
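The prefix-halving technique described for Table 16-2 can be sketched with Python's ipaddress module. This is only an illustrative check of the arithmetic; the prefixes below are documentation-style ranges, not the addresses from the book's Figure 16-5.

```python
import ipaddress

def split_for_ingress_te(prefix: str):
    """Split an aggregate into its two halves for inbound traffic engineering.

    Each half is advertised only over its preferred ingress link, while the
    aggregate itself is still advertised over both links as a failover path.
    """
    aggregate = ipaddress.ip_network(prefix)
    # prefixlen_diff=1 yields exactly two equal halves (e.g., /16 -> two /17s)
    half_a, half_b = aggregate.subnets(prefixlen_diff=1)
    return aggregate, half_a, half_b

# IPv4: a /16 divides into two /17 summary subnets
agg, a, b = split_for_ingress_te("198.18.0.0/16")
print(agg, a, b)  # 198.18.0.0/16 198.18.0.0/17 198.18.128.0/17

# IPv6: a /48 divides into two /49 summary subnets the same way
agg6, a6, b6 = split_for_ingress_te("2001:db8::/48")
print(a6, b6)  # 2001:db8::/49 2001:db8:0:8000::/49
```

For the unequal load-sharing variant mentioned in the table, the same call with prefixlen_diff=2 produces four quarters, more of which can be advertised over the higher-capacity link.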
Chapter 17
Table 17-5 Comparison of WAN Transport Models
MPLS L2VPN WAN | MPLS L3VPN WAN | Internet as WAN
Bandwidth: Very flexible (can vary between 1 Mbps and 100 Gbps) | Flexible (less than L2 MPLS-based WAN [ME]) | Flexible with limitations, depending on the site location and connectivity provisioning type (DSL versus 4G versus 5G)

(Remote-site WAN connectivity options)
mLAG with FHRP
mLAG with IGP
Dual WAN, single router
Dual WAN, dual routers
www.ciscopress.com/videostore
Coupon Code:
www.ciscopress.com/title/9780137600878
Coupon Code:
If you wish to use the Windows desktop offline version of the application, simply register your book
at www.ciscopress.com/register, select the Registered Products tab on your account page, click the
Access Bonus Content link, and download and install the software from the companion website.
This access code can be used to register your exam in both the online and offline versions.
Activation Code:
Where are the companion content files?
See the card insert in the back of the book for your Pearson
Test Prep activation code and special offers.
Cisco Certified Design Expert
CCDE 400-007 Official Cert
Guide
Companion Website
Access interactive study tools on this book’s companion website,
including practice test software, review exercises, Key Term
flash card application, a study planner, and more!
1. Go to www.ciscopress.com/register.
2. Enter the print book ISBN: 9780137601042.
3. Answer the security question to validate your purchase.
4. Go to your account page.
5. Click on the Registered Products tab.
6. Under the book listing, click on the Access Bonus Content
link.
If you have any issues accessing the companion website, you can
contact our support team by going to pearsonitp.echelp.org.