SPRINGER BRIEFS IN ELECTRICAL AND COMPUTER ENGINEERING

Linjiun Tsai
Wanjiun Liao

Virtualized Cloud Data Center Networks: Issues in Resource Management
SpringerBriefs in Electrical and Computer
Engineering
More information about this series at http://www.springer.com/series/10059
Linjiun Tsai · Wanjiun Liao

Virtualized Cloud Data Center Networks: Issues in Resource Management

Linjiun Tsai
National Taiwan University
Taipei, Taiwan

Wanjiun Liao
National Taiwan University
Taipei, Taiwan

ISSN 2191-8112 ISSN 2191-8120 (electronic)


SpringerBriefs in Electrical and Computer Engineering
ISBN 978-3-319-32630-6 ISBN 978-3-319-32632-0 (eBook)
DOI 10.1007/978-3-319-32632-0

Library of Congress Control Number: 2016936418

© The Author(s) 2016


This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part
of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations,
recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission
or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar
methodology now known or hereafter developed.
The use of general descriptive names, registered names, trademarks, service marks, etc. in this
publication does not imply, even in the absence of a specific statement, that such names are exempt from
the relevant protective laws and regulations and therefore free for general use.
The publisher, the authors and the editors are safe to assume that the advice and information in this
book are believed to be true and accurate at the date of publication. Neither the publisher nor the
authors or the editors give a warranty, express or implied, with respect to the material contained herein or
for any errors or omissions that may have been made.

Printed on acid-free paper

This Springer imprint is published by Springer Nature


The registered company is Springer International Publishing AG Switzerland
Preface

This book introduces several important topics in the management of resources in
virtualized cloud data centers. They include consistently provisioning predictable
network quality for large-scale cloud services, optimizing resource efficiency while
reallocating highly dynamic service demands to VMs, and partitioning hierarchical
data center networks into mutually exclusive and collectively exhaustive
subnetworks.
To explore these topics, this book further discusses important issues, including
(1) reducing hosting cost and reallocation overheads for cloud services, (2) provi-
sioning each service with a network topology that is non-blocking for accommo-
dating arbitrary traffic patterns and isolating each service from other ones while
maximizing resource utilization, and (3) finding paths that are link-disjoint and fully
available for migrating multiple VMs simultaneously and rapidly.
Solutions which efficiently and effectively allocate VMs to physical servers in
data center networks are proposed. Extensive experimental results are included to
show that the performance of these solutions is impressive and consistent for cloud
data centers of various scales and with various demands.

Contents

1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.1 Cloud Computing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.2 Server Virtualization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.3 Server Consolidation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.4 Scheduling of Virtual Machine Reallocation . . . . . . . . . . . . . . . . 3
1.5 Intra-Service Communications . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.6 Topology-Aware Allocation . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.7 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
2 Allocation of Virtual Machines . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
2.1 Problem Formulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
2.2 Adaptive Fit Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
2.3 Time Complexity of Adaptive Fit . . . . . . . . . . . . . . . . . . . . . . . 13
Reference. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
3 Transformation of Data Center Networks . . . . . . . . . . . . . . . . . . . . 15
3.1 Labeling Network Links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
3.2 Grouping Network Links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
3.3 Formatting Star Networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
3.4 Matrix Representation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
3.5 Building Variants of Fat-Tree Networks . . . . . . . . . . . . . . . . . . . 23
3.6 Fault-Tolerant Resource Allocation . . . . . . . . . . . . . . . . . . . . . . 23
3.7 Fundamental Properties of Reallocation . . . . . . . . . . . . . . . . . . . 25
3.8 Traffic Redirection and Server Migration . . . . . . . . . . . . . . . . . . 27
Reference. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
4 Allocation of Servers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
4.1 Problem Formulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
4.2 Multi-Step Reallocation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
4.3 Generality of the Reallocation Mechanisms. . . . . . . . . . . . . . . . . 34
4.4 On-Line Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34


4.5 Listing All Reallocation (LAR) . . . . . . . . . . . . . . . . . . . . . . . . . 35


4.6 Single-Pod Reallocation (SPR) . . . . . . . . . . . . . . . . . . . . . . . . . 36
4.7 Multi-Pod Reallocation (MPR) . . . . . . . . . . . . . . . . . . . . . . . . . 36
4.8 StarCube Allocation Procedure (SCAP) . . . . . . . . . . . . . . . . . . . 37
4.9 Properties of the Algorithm. . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
5 Performance Evaluation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
5.1 Settings for Evaluating Server Consolidation. . . . . . . . . . . . . . . . 41
5.2 Cost of Server Consolidation . . . . . . . . . . . . . . . . . . . . . . . . . . 42
5.3 Effectiveness of Server Consolidation. . . . . . . . . . . . . . . . . . . . . 43
5.4 Saved Cost of Server Consolidation . . . . . . . . . . . . . . . . . . . . . . 43
5.5 Settings for Evaluating StarCube . . . . . . . . . . . . . . . . . . . . . . . . 44
5.6 Resource Efficiency of StarCube . . . . . . . . . . . . . . . . . . . . . . . . 45
5.7 Impact of the Size of Partitions . . . . . . . . . . . . . . . . . . . . . . . . . 47
5.8 Cost of Reallocating Partitions . . . . . . . . . . . . . . . . . . . . . . . . . 48
6 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51

Appendix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
Chapter 1
Introduction

1.1 Cloud Computing

Cloud computing lends itself to the processing of large data volumes and time-
varying computational demands. Cloud data centers involve substantial computa-
tional resources, feature inherently flexible deployment, and deliver significant
economic benefit—provided the resources are well utilized while the quality of
service is sufficient to attract as many tenants as possible.
Given that cloud data centers naturally bring economies of scale, they have
received extensive research attention in both academia and industry. In large-scale public
data centers, there may exist hundreds of thousands of servers, stacked in racks and
connected by high-bandwidth hierarchical networks to jointly form a shared
resource pool for accommodating multiple cloud tenants from all around the world.
The servers are provisioned and released on-demand via a self-service interface at
any time, and tenants are normally given the ability to specify the amount of CPU,
memory, and storage they require. Commercial data centers usually also offer
service-level agreements (SLAs) as a formal contract between a tenant and the
operator. The typical SLA includes penalty clauses that spell out monetary com-
pensations for failure to meet agreed critical performance objectives such as
downtime and network connectivity.

1.2 Server Virtualization

Virtualization [1] is widely adopted in modern cloud data centers for its agile
dynamic server provisioning, application isolation, and efficient and flexible
resource management. Through virtualization, multiple instances of applications
can be hosted by virtual machines (VMs) and thus separated from the underlying
hardware resources. Multiple VMs can be hosted on a single physical server at one

time, as long as their aggregate resource demand does not exceed the server
capacity. VMs can be easily migrated [2] from one server to another via network
connections. However, without proper scheduling and routing, the migration traffic
and workload traffic generated by other services would compete for network
bandwidth. The resultant lower transfer rate invariably prolongs the total migration
time. Migration may also cause a period of downtime to the migrating VMs,
thereby disrupting a number of associated applications or services that need con-
tinuous operation or response to requests. Depending on the type of applications
and services, unexpected downtime may lead to severe errors or huge revenue
losses. For data centers claiming high availability, how to effectively reduce
migration overhead when reallocating resources is therefore one key concern, in
addition to pursuing high resource utilization.

1.3 Server Consolidation

The resource demands of cloud services are highly dynamic and change over time.
Hosting such fluctuating demands, the servers are very likely to be underutilized,
but still incur significant operational cost unless the hardware is perfectly energy
proportional. To reduce costs from inefficient data center operations and the cost of
hosting VMs for tenants, server consolidation techniques have been developed to
pack VMs into as few physical servers as possible, as shown in Fig. 1.1. The
techniques usually also generate the reallocation schedules for the VMs in response
to the changes in their resource demands. Such techniques can be used to con-
solidate all the servers in a data center or just the servers allocated to a single
service.

Fig. 1.1 An example of server consolidation: VMs spread across servers 1 through n are packed onto a few active servers, leaving the remaining servers switched off

Server consolidation is traditionally modeled as a bin-packing problem
(BPP) [3], which aims to minimize the total number of bins to be used. Here,
servers (with limited capacity) are modeled as bins and VMs (with resource
demand) as items.
Previous studies show that BPP is NP-complete [4] and many good heuristics
have been proposed in the literature, such as First-Fit Decreasing (FFD) [5] and
First Fit (FF) [6], which guarantee that the number of bins used is no more
than 1.22 N + 0.67 and 1.7 N + 0.7, respectively, where N is the number of bins in
an optimal solution. However, these existing solutions to BPP may not be directly applicable
to server consolidation in cloud data centers. To develop solutions feasible for
clouds, it is required to take into account the following factors: (1) the resource
demand of VMs is dynamic over time, (2) migrating VMs among physical servers
will incur considerable overhead, and (3) the network topology connecting the VMs
must meet certain requirements.
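To make the bin-packing analogy concrete, the following is a minimal First-Fit Decreasing sketch (an illustration, not code from the book; server capacity is normalized to one, as in Chap. 2, and a small tolerance guards against floating-point round-off):

```python
def first_fit_decreasing(demands, capacity=1.0, eps=1e-9):
    """Pack VM resource demands (items) onto servers (bins).

    Demands are considered in decreasing order; each one goes to the
    first server with enough residual capacity, and a new server is
    opened only when none fits.
    """
    residual = []   # residual capacity of each open server
    placement = {}  # VM index -> server index
    for i in sorted(range(len(demands)), key=demands.__getitem__, reverse=True):
        for s, free in enumerate(residual):
            if demands[i] <= free + eps:
                residual[s] -= demands[i]
                placement[i] = s
                break
        else:
            residual.append(capacity - demands[i])
            placement[i] = len(residual) - 1
    return placement, len(residual)

# Six VMs whose demands sum to 2.1 fit on three unit-capacity servers here.
placement, servers = first_fit_decreasing([0.6, 0.5, 0.4, 0.3, 0.2, 0.1])
print(servers)  # 3
```

Note that this static heuristic ignores exactly the three factors listed above: demands that change over time, migration overhead, and topology requirements.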

1.4 Scheduling of Virtual Machine Reallocation

When making resource reallocation plans that may trigger VM migration, it is
necessary to take into account network bandwidth sufficiency between the migra-
tion source and destination to ensure the migration can be completed in time. Care
must also be taken in selecting migration paths so as to avoid potential mutual
interference among multiple migrating VMs. Trade-off is inevitable and how well
scheduling mechanisms strike balances between the migration overhead and quality
of resource reallocation will significantly impact the predictability of migration
time, the performance of data center networks, the quality of cloud services and
therefore the revenue of cloud data centers.
The problem may be further exacerbated in cloud data centers that host numerous
services with highly dynamic demands, where reallocation may be triggered more
frequently because the fragmentation of the resource pool becomes more severe. It is
also more difficult to find feasible migration paths on saturated cloud data center
networks. To reallocate VMs that continuously communicate with other VMs, it is
also necessary to keep the same perceived network quality after communication
routing paths are changed. This is especially challenging in cloud data centers with
communication-intensive applications.

1.5 Intra-Service Communications

Because of the nature of cloud computing, multiple uncooperative cloud services
may coexist in a multi-tenant cloud data center and share the underlying network.
The aggregate traffic from the services is highly likely to be dynamic [7], congested
[8] or even malicious [9] in an unpredictable manner. Without effective
mechanisms for service allocation and traffic routing, the intra-service communi-
cation of every service sharing the network may suffer serious delay and even be
disrupted. Deploying all the VMs for a service into one single rack to reduce the
impact on the shared network is not always a practical or economical solution. This
is because such a solution may cause the resources of data centers to be
underutilized and fragmented, particularly when the demand of services is highly
dynamic and does not fit the capacity of the rack.
For delay-sensitive and communication-intensive applications, such as mobile
cloud streaming [10, 11], cloud gaming [12, 13], MapReduce applications [14],
scientific computing [15] and Spark applications [16], the problem may become
more acute due to their much greater impact on the shared network and much
stricter requirements in the quality of intra-service transmissions. Such types of
applications usually require all-to-all communications to intensively exchange or
shuffle data among distributed nodes. Therefore, network quality becomes the
primary bottleneck of their performance. In some cases, the problem remains quite
challenging even if the substrate network structure provides high capacity and rich
connectivity, or the switches are not oversubscribed. First, all-to-all traffic patterns
impose strict topology requirements on allocation. Complete graphs, star graphs or
some graphs of high connectivity are required for serving such traffic, which may
be between any two servers. In a data center network where the network resource is
highly fragmented or partially saturated, such topologies are obviously extremely
difficult to allocate, even with significant reallocation cost and time. Second,
dynamically reallocating such services without affecting their performance is also
extremely challenging. It is required to find reallocation schedules that not only
satisfy general migration requirements, such as sufficient residual network band-
width, but also keep their network topologies logically unchanged.
To host delay-sensitive and communication-intensive applications with network
performance guarantees, the network topology and quality (e.g., bandwidth, latency
and connectivity) should be consistently guaranteed, thus continuously supporting
arbitrary intra-service communication patterns among the distributed compute
nodes and providing good predictability of service performance. One of the best
approaches is to allocate every service a non-blocking network. Such a network
must be isolated from any other service, be available during the entire service
lifetime even when some of the compute nodes are reallocated, and support
all-to-all communications. This way, it can give each service the illusion of being
operated on the data center exclusively.

1.6 Topology-Aware Allocation

For profit-seeking cloud data centers, the question of how to efficiently provision
non-blocking topologies for services is a crucial one. It also principally affects the
resource utilization of data centers. Different services may request various virtual
topologies to connect their VMs, but it is not necessary for data centers to allocate
the physical topologies for them in exactly the same form. In fact, keeping such
consistency could lead to certain difficulties in optimizing the resources of entire
data center networks, especially when such services request physical topologies of
high connectivity degrees or even cliques.

Fig. 1.2 Different resource efficiencies of non-blocking networks: Allocation 1 and Allocation 2 connect the same servers through switches but use different numbers of physical links
For example, consider the deployment of a service which requests a four-vertex
clique to serve arbitrary traffic patterns among four VMs on a network with eight
switches and eight servers. Suppose that the link capacity is identical to the
bandwidth requirement of the VM, so there are at least two feasible methods of
allocation, as shown in Fig. 1.2. Allocation 1 uses a star topology, which is clearly
non-blocking for any possible intra-service communication patterns, and occupies
the minimum number of physical links. Allocation 2, however, shows an inefficient
allocation as two more physical links are used to satisfy the same intra-service
communication requirements.
Apart from allocating more resources, the star network in Allocation 1 provides
better flexibility in reallocation than other complex structures. This is because
Allocation 1 involves only one link when reallocating any VM while ensuring
topology consistency. Such a property makes it easier for resources to be reallo-
cated in a saturated or fragmented data center network, and thus further affects how
well the resource utilization of data center networks could be optimized, particularly
when the demands dynamically change over time. However, the question then
arises as to how to efficiently allocate every service as a star network. In other
words, how to efficiently divide the hierarchical data center networks into a large
number of star networks for services and dynamically reallocate those star networks
while maintaining high resource utilization? To answer this question, the topology
of underlying networks needs to be considered. In this book, we will introduce a
solution to tackling this problem.

1.7 Summary

So far, the major issues, challenges and requirements for managing the resources of
virtualized cloud data centers have been addressed. The solutions to these problems
will be explored in the following chapters. The approach is to divide the problems
into two parts. The first one is to allocate VMs for every service into one or multiple
virtual servers, and the second one is to allocate virtual servers for all services to
physical servers and to determine network links to connect them. Both sub-
problems are dynamic allocation problems. This is because the mappings from
VMs to virtual servers, the number of required virtual servers, the mapping from
virtual servers to physical servers, and the allocation of network links may all
change over time. For practical considerations, these mechanisms are designed to
be scalable and feasible for cloud data centers of various scales so as to accom-
modate services of different sizes and dynamic characteristics.
The mechanism for allocating and reallocating VMs on servers is called
Adaptive Fit [17], which is designed to pack VMs into as few servers as possible.
The challenge is not just to simply minimize the number of servers. As the demand
of every VM may change over time, it is best to minimize the reallocation overhead
by selecting and keeping some VMs on their last hosting server according to an
estimated saturation degree.
The mechanism for allocating and reallocating physical servers is based on a
framework called StarCube [18], which ensures every service is allocated with an
isolated non-blocking star network and provides some fundamental properties that
allow topology-preserving reallocation. Then, a polynomial-time algorithm will be
introduced which performs on-line, on-demand and cost-bounded server allocation
and reallocation based on those promising properties of StarCube.

References

1. P. Barham et al., Xen and the art of virtualization. ACM SIGOPS Operating Syst. Rev. 37(5),
164–177 (2003)
2. C. Clark et al., in Proceedings of the 2nd Conference on Symposium on Networked Systems
Design & Implementation, Live migration of virtual machines, vol. 2 (2005)
3. V.V. Vazirani, Approximation Algorithms, Springer Science & Business Media (2002)
4. M.R. Garey, D.S. Johnson, Computers and intractability: a guide to the theory of
NP-completeness (WH Freeman & Co., San Francisco, 1979)
5. G. Dósa, The tight bound of first fit decreasing bin-packing algorithm is FFD(I) = (11/9)OPT
(I) + 6/9, Combinatorics, Algorithms, Probabilistic and Experimental Methodologies,
Springer Berlin Heidelberg (2007)
6. B. Xia, Z. Tan, Tighter bounds of the first fit algorithm for the bin-packing problem. Discrete
Appl. Math. 158(15), 1668–1675 (2010)
7. Q. He et al., in Proceedings of the 19th ACM International Symposium on High Performance
Distributed Computing, Case study for running HPC applications in public clouds, (2010)
8. S. Kandula et al., in Proceedings of the 9th ACM SIGCOMM Conference on Internet
Measurement Conference, The nature of data center traffic: measurements & analysis (2009)
9. T. Ristenpart et al., in Proceedings of the 16th ACM Conference on Computer and
Communications Security, Hey, you, get off of my cloud: exploring information leakage in
third-party compute clouds (2009)
10. C.F. Lai et al., A network and device aware QoS approach for cloud-based mobile streaming.
IEEE Trans. on Multimedia 15(4), 747–757 (2013)
11. X. Wang et al., Cloud-assisted adaptive video streaming and social-aware video prefetching
for mobile users. IEEE Wirel. Commun. 20(3), 72–79 (2013)
12. R. Shea et al., Cloud gaming: architecture and performance. IEEE Network Mag. 27(4), 16–21
(2013)
13. S.K. Barker, P. Shenoy, in Proceedings of the first annual ACM Multimedia Systems,
Empirical evaluation of latency-sensitive application performance in the cloud (2010)
14. J. Ekanayake et al., in IEEE Fourth International Conference on eScience, MapReduce for
data intensive scientific analyses (2008)
15. A. Iosup et al., Performance analysis of cloud computing services for many-tasks scientific
computing, IEEE Trans. on Parallel and Distrib. Syst. 22(6), 931–945 (2011)
16. M. Zaharia et al., in Proceedings of the 2nd USENIX conference on Hot topics in cloud
computing, Spark: cluster computing with working sets (2010)
17. L. Tsai, W. Liao, in IEEE 1st International Conference on Cloud Networking, Cost-aware
workload consolidation in green cloud datacenter (2012)
18. L. Tsai, W. Liao, StarCube: an on-demand and cost-effective framework for cloud data center
networks with performance guarantee, IEEE Trans. on Cloud Comput. doi:10.1109/TCC.2015.2464818
Chapter 2
Allocation of Virtual Machines

In this chapter, we introduce a solution to the problem of cost-effective VM allo-
cation and reallocation. Unlike traditional solutions, which typically reallocate VMs
based on a greedy algorithm such as First Fit (each VM is allocated to the first
server in which it will fit), Best Fit (each VM is allocated to the active server with
the least residual capacity), or Worse Fit (each VM is allocated to the active server
with the most residual capacity), the proposed solution strikes a balance between
the effectiveness of packing VMs into few servers and the overhead incurred by
VM reallocation.

2.1 Problem Formulation

We consider the case where a system (e.g., a cloud service or a cloud data center) is
allocated with a number of servers denoted by H and a number of VMs denoted by
V. We assume the number of servers is always sufficient to host the total resource
requirement of all VMs in the system. Thus, we focus on the consolidation effec-
tiveness and the migration cost incurred by the server consolidation problem.
Further, we assume that VM migration is performed at discrete times. We define
the period of time to perform server consolidation as an epoch. Let T = {t1, t2, …, tk}
denote the set of epochs to perform server consolidation. The placement sequence
for VMs in V in each epoch t is then denoted by F = { ft | ∀t ∈ T}, where ft is the
VM placement at epoch t and defined as a mapping ft : V → H, which specifies that
each VM i, i ∈ V, is allocated to server ft(i). Note that "ft(i) = 0" denotes that VM i is
not allocated. To model the dynamic nature of the resource requirement and the
migration cost for each VM over time, we let Rt = {rt(i) | ∀i ∈ V} and Ct = {ct(i) | ∀
i ∈ V} denote the sets of the resource requirement and migration cost, respectively,
for all VMs in epoch t.


The capacity of a server is normalized (and simplified) to one, which may
correspond to the total resource in terms of CPU, memory, etc. in the server. The
resource requirement of each VM varies from 0 to 1. When a VM demands zero
resource, it indicates that the VM is temporarily out of the system. Since each server
has limited resources, the aggregate resource requirement of VMs on a server must
be less than or equal to one. Each server may host multiple VMs with different
resource requirements, and each application or service may be distributed to mul-
tiple VMs hosted by different servers. A server with zero resource requirements
from VMs will not be used to save the hosting cost. We refer to a server that has
been allocated VMs as an active server.
To jointly consider the consolidation effectiveness and the migration overhead
for server consolidation, we define the total cost for VM placement F as the total
hosting cost of all active servers plus the total migration cost incurred by VMs. The
hosting cost of an active server is simply denoted by a constant E and the total
hosting cost for VM placement sequence F is linearly proportional to the number of
active servers. To account for revenue loss, we model the downtime caused by
migrating a VM as the migration cost for the VM. The downtime could be esti-
mated as in [1] and the revenue loss depends on the contracted service level. Since
the downtime is mainly affected by the memory dirty rate (i.e., the rate at which
memory pages in the VM are modified) of VM and the network bandwidth [1], the
migration cost is considered independent of the resource requirement for each VM.
Let H′t be a subset of H which is active in epoch t and |H′t| be the number of
servers in H′t. Let C′t be the migration cost to consolidate H′t from epoch t to t + 1.
H′t and C′t are defined as follows, respectively:

H′t = { ft(i) | ft(i) ≠ 0, ∀i ∈ V }, ∀t ∈ T

C′t = Σ_{i ∈ V, ft(i) ≠ ft+1(i)} ct(i), ∀t ∈ T \ {tk}

The total cost of F can be expressed as follows:


Cost(F) = E · Σ_{t ∈ T} |H′t| + Σ_{t ∈ T \ {tk}} C′t

We study the Total-Cost-Aware Consolidation (TCC) problem. Given {H, V, R,
C, T, E}, a VM placement sequence F is feasible only if the resource constraints for
all epochs in T are satisfied. The TCC problem is stated as follows: among all
possible feasible VM placements, to find a feasible placement sequence F whose
total cost is minimized.
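To make the cost model concrete, the following sketch (hypothetical data and helper names, not from the book) computes Cost(F) for a tiny placement sequence, assuming a constant migration cost for every migrated VM:

```python
# Sketch (hypothetical data, not from the book): computing Cost(F) for a
# small placement sequence. E and MIG_COST are illustrative assumptions;
# placements[t] maps each VM to its server in epoch t (0 = out of system).

E = 10.0        # hosting cost of one active server for one epoch
MIG_COST = 1.0  # assumed constant migration cost per moved VM

def total_cost(placements):
    """Cost(F) = E * sum over t of |H't|  +  sum over t of C't."""
    hosting = migration = 0.0
    for t, epoch in enumerate(placements):
        active = {srv for srv in epoch.values() if srv != 0}
        hosting += E * len(active)          # E * |H't|
        if t + 1 < len(placements):         # C't for t in T \ {tk}
            nxt = placements[t + 1]
            migration += MIG_COST * sum(
                1 for vm, srv in epoch.items()
                if srv != 0 and nxt.get(vm, 0) != 0 and nxt[vm] != srv)
    return hosting + migration

# Two epochs: VM 2 migrates from server 1 to server 2.
F = [{1: 1, 2: 1}, {1: 1, 2: 2}]
print(total_cost(F))  # 10*1 + 10*2 + 1 = 31.0
```

The sketch charges migration only for VMs that stay in the system across consecutive epochs, which matches the intent of the C′t definition.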

2.2 Adaptive Fit Algorithm

The TCC problem is NP-hard, because it is at least as hard as the server
consolidation problem. In this section, we present a polynomial-time solution to the
problem. The design objective is to generate VM placement sequences F in
polynomial time while minimizing Cost(F).
Recall that the migration cost results from changing the hosting servers of VMs
during the VM migration process. To reduce the total migration cost for all VMs,
we attempt to minimize the number of migrations without degrading the
effectiveness of consolidation. To achieve this, we try to allocate each VM i in epoch t to
the same server that hosted it in epoch t − 1, i.e., ft(i) = ft−1(i). If ft−1(i) does not
have enough capacity in epoch t to satisfy the resource requirement of VM i, or is
currently not active, we then start the remaining procedure based on a "saturation
degree" estimation. The rationale behind this is described as follows.
Instead of using a greedy method as in existing works, which typically allocate
each migrating VM to an active server with available capacity based on First Fit,
Best Fit, or Worse Fit, we define a total-cost metric called the saturation degree to
strike a balance between two conflicting factors: consolidation effectiveness and
migration overhead. For each iteration of the allocation process in epoch t, the
saturation degree Xt is defined as follows:

    Xt = ( Σi∈V rt(i) ) / ( |H′t| + 1 )

Since the server capacity is normalized to one in this book, the denominator
indicates the total capacity summed over all active servers plus one idle server in
epoch t.
During the allocation process, Xt decreases as |H′t| increases, by definition. We
define the saturation threshold u ∈ [0, 1] and say that Xt is low when Xt ≤ u. If Xt is
low, the migrating VMs should be allocated to the set of active servers unless no
active server has sufficient capacity to host them. On the other hand, if
Xt is large (i.e., Xt > u), the mechanism tends to "lower" the total migration cost as
follows. One of the idle servers will be turned on to host a VM which cannot be
allocated on its “last hosting server” (i.e., ft−1(i) for VM i), even though some of the
active servers still have sufficient residual resource to host the VM. It is expected
that the active servers with residual resource in epoch t are likely to be used for
hosting other VMs which were hosted by them in epoch t − 1. As such, the total
migration cost is minimized.
The process of allocating all VMs in epoch t is then described as follows. In
addition, the details of the mechanism are shown in the Appendix.

1. Sort all VMs in V in decreasing order of rt(i).
2. Select the VM i with the highest resource requirement among all VMs not yet
   allocated, i.e.,

       i ← arg maxj { rt(j) | ft(j) = 0, j ∈ V }
3. Allocate VM i:
(i) If VM i’s hosting server at epoch t − 1, i.e., ft−1(i), is currently active and
has sufficient capacity to host VM i with the requirement rt(i), VM i is
allocated to it, i.e., ft(i) ← ft−1(i);
(ii) If VM i's last hosting server ft−1(i) is idle, and either no active server has
     sufficient residual resource to host VM i or Xt exceeds the saturation
     threshold u, then VM i is allocated to its last hosting server, namely,
     ft(i) ← ft−1(i);
(iii) If Cases (i) and (ii) do not hold, and either Xt exceeds the saturation
     threshold u or no active server has sufficient residual resource to host
     VM i, then VM i is allocated to an idle server;
(iv) If Cases (i), (ii) and (iii) do not hold, VM i is allocated to an active server
     based on the Worse-Fit policy.
4. Update the residual capacity of ft(i) and repeat the procedure for allocating the
next VM until all VMs are allocated.
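As an illustration only, the four steps above can be sketched as follows. This is a simplified rendering, not the book's Appendix implementation (which maintains the server sets as binary search trees); the t − 1 hosting assumed in the usage data follows the example of Table 2.2.

```python
# Illustrative sketch of one epoch of Adaptive Fit (not the book's
# Appendix code). servers: list of server ids; prev: VM -> server in
# epoch t-1; req: VM -> r_t(i). Server capacity is normalized to 1.
# Case (iii) assumes at least one idle server remains.

def adaptive_fit(servers, prev, req, u=1.0):
    residual = {}    # active server -> free capacity
    placement = {}   # VM -> server in epoch t
    total_demand = sum(req.values())
    # Steps 1-2: consider VMs in decreasing order of resource requirement.
    for vm in sorted(req, key=req.get, reverse=True):
        r, last = req[vm], prev[vm]
        x = total_demand / (len(residual) + 1)        # saturation degree X_t
        fits = [s for s, free in residual.items() if free >= r]
        if last in residual and residual[last] >= r:
            target = last                             # case (i): last host active
        elif last not in residual and (not fits or x > u):
            target = last                             # case (ii): reopen last host
        elif x > u or not fits:                       # case (iii): open idle server
            target = min(s for s in servers if s not in residual)
        else:                                         # case (iv): Worse Fit
            target = max(fits, key=residual.get)
        residual.setdefault(target, 1.0)
        residual[target] -= r                         # step 4: update capacity
        placement[vm] = target
    return placement

# Data of Table 2.1, with last hosts as in the t-1 row of Table 2.2.
prev = {1: 1, 9: 1, 6: 2, 8: 2, 5: 3, 10: 3, 2: 4, 3: 4, 4: 4, 7: 4}
req = {4: 0.49, 8: 0.48, 3: 0.47, 2: 0.43, 5: 0.35,
       10: 0.34, 1: 0.15, 7: 0.15, 6: 0.13, 9: 0.07}
placement = adaptive_fit([1, 2, 3, 4], prev, req, u=1.0)
print(len(set(placement.values())))  # 4 servers, matching the example
```

With u = 1, the sketch reproduces the four-server allocation of the worked example; choosing the lowest-numbered idle server in case (iii) is an assumption, since the text does not fix which idle server is opened.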
We now illustrate the operation of Adaptive Fit with an example where the
system is allocating ten VMs, of which the resource requirements are shown in
Table 2.1.
The saturation threshold u is set to one. The step-by-step allocation for epoch t is
shown in Table 2.2. The row for epoch t − 1 indicates the last hosting servers (i.e.,
ft−1(i)) of the VMs. The rows for epoch t depict the allocation iterations, in
allocation order from top to bottom. For each VM, the entry marked with a policy
indicator (underlined in the original) denotes the actually allocated server, while the
other entries denote candidate servers with sufficient capacity. The indicators L, X,
N, A denote that the allocation decision is based on the following policies,
respectively: (L) use the last hosting server first; (X) create a new server at high
saturation; (N) create a new server because there is insufficient capacity in the
active servers; (A) use an active server by Worse-Fit allocation. Note that the total
resource requirement of all VMs is 3.06 and the optimal number of servers in this
example is 4. Adaptive Fit thus achieves a performance quite close to the optimum.

Table 2.1 Resource requirements for VMs

    vi   rt(i)    vi   rt(i)
    4    0.49     10   0.34
    8    0.48     1    0.15
    3    0.47     7    0.15
    2    0.43     6    0.13
    5    0.35     9    0.07

Table 2.2 An example of allocating VMs by Adaptive Fit

    Epoch  Server 1  Server 2  Server 3  Server 4
    t−1    v1, v9    v6, v8    v5, v10   v2, v3, v4, v7
    t                                    v4 (L)
                     v8 (L)              v8
                     v3                  v3 (L)
           v2 (X)    v2
           v5 (A)    v5
                     v10 (A)
           v1 (L)    v1
                     v7 (A)
                               v6 (N)
           v9 (L)              v9

2.3 Time Complexity of Adaptive Fit

Theorem 2.1 (Time complexity of Adaptive Fit) Adaptive Fit is a polynomial-time


algorithm with average-case complexity O(n log n), where n is the number of VMs
in the system.
Proof We examine the time complexity of each part of Adaptive Fit. Let m denote the
number of active servers in the system. The initial phase requires O(m log m) and
O(n log n) time to initialize A, A′ and V′, which are implemented as binary search trees.
The operations on A and A′ can be done in O(log m). The saturation degree estimation
takes O(1) for calculating the denominator based on the counter for the number of
servers used while the numerator is static and calculated once per epoch. The rest of
the lines in the “for” loop are O(1). Therefore, the main allocation “for” loop can be
done in O(n log m). All together, the Adaptive Fit can be done in O(n log n + n log m),
which is equivalent to O(n log n). □

Reference

1. S. Akoush et al., Predicting the performance of virtual machine migration, in Proc. IEEE
   MASCOTS, pp. 37–46 (2010)
Chapter 3
Transformation of Data Center Networks

In this chapter, we introduce the StarCube framework. Its core concept is the
dynamic and cost-effective partitioning of a hierarchical data center network into
several star networks and the provisioning of each service with a star network that is
consistently independent from other services.
The principal properties guaranteed by our framework include the following:
1. Non-blocking topology. Regardless of traffic pattern, the network topology
provisioned to each service is non-blocking after and even during reallocation.
The data rates of intra-service flows and outbound flows (i.e., those going out of
the data centers) are only bounded by the network interface rates.
2. Multi-tenant isolation. The topology is isolated for each service, with
   bandwidth exclusively allocated. The migration process and the workload traffic are
   also isolated among the services.
3. Predictable traffic cost. The per-hop distance of intra-service communications
required by each service is satisfied after and even during reallocation.
4. Efficient resource usage. The number of links allocated to each service to form
a non-blocking topology is the minimum.

3.1 Labeling Network Links

© The Author(s) 2016
L. Tsai and W. Liao, Virtualized Cloud Data Center Networks: Issues in Resource
Management, SpringerBriefs in Electrical and Computer Engineering,
DOI 10.1007/978-3-319-32632-0_3

Fig. 3.1 A k-ary fat-tree, where k = 8

The StarCube framework is based on the fat-tree structure [1], which is probably the
most discussed data center network structure and supports extremely high network
capacity with extensive path diversity between racks. As shown in Fig. 3.1, a k-ary
fat-tree network is built from k-port switches and consists of k pods interconnected
by (k/2)² core switches. For each pod, there are two layers of k/2 switches, called the
edge layer and the aggregation layer, which jointly form a complete bipartite
network with (k/2)² links. Each edge switch is connected to k/2 servers through the
downlinks, and each aggregation switch is also connected to k/2 core switches
through the uplinks. The core switches are separated into (k/2) groups, where the ith
group is connected to the ith aggregation switch in each pod. There are (k/2)² servers
in each pod. All the links and network interfaces on the servers and switches have the
same bandwidth capacity. We assume that every switch supports non-blocking
multiplexing, by which the traffic on downlinks and uplinks can be freely
multiplexed and the traffic at different ports does not interfere with one another.
For ease of explanation, but without loss of generality, we explicitly label all
servers and switches, and then label all network links according to their connections
as follows:
1. At the top layer, the link which connects aggregation switch i in pod k and core
switch j in group i is denoted by Linkt(i, j, k).
2. At the middle layer, the link which connects aggregation switch i in pod k and
edge switch j in pod k is denoted by Linkm(i, j, k).
3. At the bottom layer, the link which connects server i in pod k and edge switch
j in pod k is denoted by Linkb(i, j, k).
For example, in Fig. 3.2, the solid lines indicate Linkt(2, 1, 4), Linkm(2, 1, 4) and
Linkb(2, 1, 4). This labeling rule also determines the routing paths. Thanks to the
symmetry of the fat-tree structure, the same number of servers and aggregation
switches are connected to each edge switch, and the same number of edge switches
and core switches are connected to each aggregation switch. Thus, one can easily
verify that each server can be exclusively and exactly paired with one routing path
for connecting to the core layer, because each downlink can be bijectively paired
with exactly one uplink.

Fig. 3.2 An example of labeling links
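The labeling rule can be made concrete with a small sketch (illustrative only, not code from the book): the triple of indices (i, j, k) fixes the three links that pair a server with its unique routing path to the core layer.

```python
# Sketch (not from the book): enumerate the label triple of the routing
# path paired with server i under edge switch j in pod k, following the
# labeling rule of Sect. 3.1.

def resource_unit_links(i, j, k):
    """The three links pairing server i (under edge j, pod k) with its
    unique path to the core layer."""
    return {
        "bottom": ("Linkb", i, j, k),  # server i <-> edge switch j, pod k
        "middle": ("Linkm", i, j, k),  # aggr. switch i <-> edge switch j, pod k
        "top":    ("Linkt", i, j, k),  # aggr. switch i, pod k <-> core j, group i
    }

def all_resource_units(kary):
    """All (i, j, k) triples of a kary-ary fat-tree: kary/2 servers per
    edge switch, kary/2 edge switches per pod, kary pods."""
    half = kary // 2
    return [(i, j, k)
            for k in range(1, kary + 1)
            for j in range(1, half + 1)
            for i in range(1, half + 1)]

print(len(all_resource_units(8)))  # 8 pods x (8/2)^2 paths per pod = 128
```

The count 128 matches the (k/2)² servers per pod of an 8-ary fat-tree, one exclusive path per server.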

3.2 Grouping Network Links

Once the allocation of all Linkm has been determined, the allocation of the
remaining servers, links and switches can be obtained accordingly. In our
framework, each allocated server is paired with such a routing path for connecting
the server to a core switch. Such a server-path pair is called a resource unit in this
book for ease of explanation, and serves as the basic element of allocation in our
framework. Since resources (e.g., network links and CPU processing power)
must be isolated among tenants so as to guarantee their performance, each resource
unit is exclusively allocated to at most one cloud service.
Below, we describe some fundamental properties of the resource unit. In
brief, any two resource units are either resource-disjoint or connected by
exactly one switch, regardless of whether they belong to the same pod. The set of
resource units in different pods using the same indices i, j is called MirrorUnits
(i, j) for convenience; such units must be connected by exactly one core switch.
Definition 3.1 (Resource unit) For a k-ary fat-tree, a set of resources U = (S, L) is
called a resource unit, where S and L denote the set of servers and links,
respectively, if (1) there exist three integers i, j, k such that L = {Linkt(i, j, k),
Linkm(i, j, k), Linkb(i, j, k)}; and (2) for every server s in the fat-tree, s ∈ S if and
only if there exists a link l ∈ L such that s is connected with l.
Definition 3.2 (Intersection of resource units) For any number of resource units
U1, …, Un, where Ui = (Si, Li) for all i, the intersection is defined as
∩i=1…n Ui = (∩i=1…n Si, ∩i=1…n Li).
Lemma 3.1 (Intersection of two resource units) For any two different resource
units U1 = (S1, L1) and U2 = (S2, L2), exactly one of the following conditions holds:
(1) U1 = U2; (2) L1 ∩ L2 = S1 ∩ S2 = ∅.
Proof Let U1 = (S1, L1) and U2 = (S2, L2) be any two different resource units.
Suppose L1 ∩ L2 ≠ ∅ or S1 ∩ S2 ≠ ∅. By the definitions of the resource unit and the
fat-tree, there exists at least one link in L1 ∩ L2, thus implying L1 = L2 and S1 = S2.
This leads to U1 = U2, which contradicts the assumption. The proof is done. □
Definition 3.3 (Single-connected resource units) Consider any different resource
units U1, …, Un, where Ui = (Si, Li) for all i. They are called single-connected if
there exists exactly one switch x, called the single point, that connects every Ui
(i.e., for every Li, there exists a link l ∈ Li such that l is directly connected to x).

Lemma 3.2 (Single-connected resource units) For any two different resource units
U1 and U2, exactly one of the following conditions holds true: (1) U1 and U2 are
single-connected; (2) U1 and U2 do not directly connect to any common switch.
Proof Consider any two different resource units U1 and U2. Suppose U1 and U2
directly connect to two or more common switches. By definition, each resource unit
has only one edge switch, one aggregation switch and one core switch. Hence all of
the switches connecting U1 and U2 must be at different layers. By the definition of
the fat-tree structure, there exists only one path connecting any two switches at
different layers. Thus there exists at least one shared link between U1 and U2. It
hence implies U1 = U2 by Lemma 3.1, which is contrary to the statement. The proof
is done. □
Definition 3.4 The set MirrorUnits(i, j) is defined as the union of all resource units
of which the link set consists of a Linkm(i, j, k), where k is an arbitrary integer.
Lemma 3.3 (Mirror units on the same core) For any two resource units U1 and U2,
the following are equivalent: (1) {U1, U2} ⊆ MirrorUnits(i, j) for some i, j;
(2) U1 and U2 are single-connected and the single point is a core switch.
Proof We give a chain of equivalences, where for any two resource units U1 and U2
the following statements are equivalent: there exist two integers i and j such that
{U1, U2} ⊆ MirrorUnits(i, j); there exist two links Linkm(i, j, ka) and Linkm(i, j, kb)
in their link sets, respectively; there exist two links Linkt(i, j, ka) and Linkt(i, j, kb) in
their link sets, respectively; the core switch j in group i connects both U1 and U2
and, by Lemma 3.2, is their unique single point. □
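Lemmas 3.2 and 3.3 can be checked mechanically with a small sketch (illustrative, not from the book), modeling each resource unit by its index triple (i, j, k) and listing the three switches it touches:

```python
# Sketch (not from the book): model a resource unit by its triple
# (i, j, k) and compute the single point of Lemmas 3.2 and 3.3.

def switches(unit):
    """Edge, aggregation and core switch touched by unit (i, j, k)."""
    i, j, k = unit
    return {("edge", j, k), ("aggr", i, k), ("core", i, j)}

def single_point(u1, u2):
    """Return the unique common switch of two different resource units,
    or None if they share no switch (Lemma 3.2)."""
    common = switches(u1) & switches(u2)
    assert u1 == u2 or len(common) <= 1  # Lemma 3.2: at most one switch
    return next(iter(common)) if common else None

# Mirror units (same i, j, different pods) meet at a core switch (Lemma 3.3).
print(single_point((2, 1, 4), (2, 1, 7)))  # ('core', 2, 1)
```

Two units in the same pod sharing an edge switch yield an edge single point instead, and fully disjoint units yield None.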

3.3 Formatting Star Networks

To allow intra-service communications for cloud services, we develop some


non-blocking allocation structures, based on the combination of resource units, for
allocating the services that request n resource units, where n is a non-zero integer
less than or equal to the number of downlinks of an edge switch. To provide
non-blocking communications and predictable traffic cost (e.g., per-hop distance
between servers), each of the non-blocking allocation structures is designed to
logically form a star network. Thus, for each service s requesting n resource units,
where n > 1, the routing path allocated to the service s always passes exactly one
switch (i.e., the single point), and this switch actually acts as the hub for
intra-service communications and also as the central node of the star network. Such
a non-blocking star structure is named n-star for convenience in this book.
Definition 3.5 A union of n different resource units is called n-star if they are
single-connected. It is denoted by A = (S, L), where S and L denote the set of servers
and links, respectively. The cardinality of A is defined as |A| = |S|.

Lemma 3.4 (Non-blocking topology) For any n-star A = (S, L), A is a non-blocking
topology connecting any two servers in S.
Proof By the definition of n-star, any n-star must be made of single-connected
resource units, and by Definition 3.3, it is a star network topology. Since we assume
that all the links and network interfaces on the servers or switches are of the same
bandwidth capacity and each switch supports non-blocking multiplexing, it follows
that the topology for those servers is non-blocking. □
Lemma 3.5 (Equal hop-distance) For any n-star A = (S, L), the hop-distance
between any two servers in S is equal.
Proof For any n-star, by definition, the servers are single-connected by an edge
switch, aggregation switch or core switch, and by the definition of resource unit, the
path between each server and the single point must be the shortest path. By the
definition of the fat-tree structure, the hop-distance between any two servers in S is
equal. □
According to the position of its single point, which may be an edge, aggregation
or core switch, an n-star can further be classified into four types, named type-E,
type-A, type-C and type-S for convenience in this book:
Definition 3.6 For any n-star A, A is called type-E if |A| > 1, and the single point of
A is an edge switch.
Definition 3.7 For any n-star A, A is called type-A if |A| > 1, and the single point of
A is an aggregation switch.
Definition 3.8 For any n-star A, A is called type-C if |A| > 1, and the single point of
A is a core switch.
Definition 3.9 For any n-star A, A is called type-S if |A| = 1.
Figure 3.3 shows some examples of n-star, where three independent cloud
services (from left to right) are allocated as the type-E, type-A and type-C n-stars,
respectively. By definitions, the resource is provisioned in different ways:

Fig. 3.3 Examples of three n-stars (left to right: type-E, type-A, type-C)



1. Type-E: consists of n servers, one edge switch, n aggregation switches, n core


switches and the routing paths for the n servers. Only one rack is occupied.
2. Type-A: consists of n servers, n edge switches, one aggregation switch, n core
switches and the routing paths for the n servers. Exactly n racks are occupied.
3. Type-C: consists of n servers, n edge switches, n aggregation switches, one core
switch and the routing paths for the n servers. Exactly n racks and n pods are
occupied.
4. Type-S: consists of one server, one edge switch, one aggregation switch, one
core switch and the routing path for the server. Only one rack is occupied. This
type can be dynamically treated as type-E, type-A or type-C, and the single
point can be defined accordingly.
These types of n-star partition a fat-tree network in different ways. They not only
jointly achieve resource efficiency but also provide different quality of service
(QoS), such as latency of intra-service communications and fault tolerance for
single-rack failure. For example, a cloud service that is extremely sensitive to
intra-service communication latency can request a type-E n-star so that its servers
can be allocated a single rack with the shortest per-hop distance among the servers;
an outage-sensitive or low-prioritized service could be allocated a type-A or type-C
n-star so as to spread the risk among multiple racks or pods. The pricing of resource
provisioning may depend not only on the number of requested resource units but
also on the type of topology. Depending on different management policies of cloud
data centers, the requested type of allocation could also be determined by cloud
providers according to the remaining resources.

3.4 Matrix Representation

Using the properties of a resource unit, the fat-tree can be denoted as a matrix. For a
pod of the fat-tree, the edge layer, aggregation layer and all the links between them
jointly form a bipartite graph, and the allocation of links can hence be equivalently
denoted by a two-dimensional matrix. Therefore, for a data center with multiple
pods, the entire fat-tree can be denoted by a three-dimensional matrix. By
Lemma 3.1, all the resource units are independent. Thus an element of the fat-tree
matrix equivalently represents a resource unit in the fat-tree, and they are used
interchangeably in this book. Let the matrix element m(i, j, k) = 1 if and only if the
resource unit which consists of Linkm(i, j, k) is allocated, and m(i, j, k) = 0
otherwise. We also let ms(i, j, k) denote the allocation of a resource unit to service s.
Below, we derive several properties for the framework which are the foundation
for developing the topology-preserving reallocation mechanisms. In brief, each
n-star in a fat-tree network can be gracefully represented as a one-dimensional
vector in a matrix, as shown in Fig. 3.4, where the "aggregation axis" (i.e., the
columns), the "edge axis" (i.e., the rows) and the "pod axis" indicate the three
possible directions of a vector. The intersection of any two n-stars is either an
n-star or null, and the union of any two n-stars remains an n-star if they are
single-connected. The difference of any two n-stars remains an n-star if one is
included in the other.

Fig. 3.4 An example of the matrix representation (a type-E allocation is connected
by one edge switch, a type-A allocation by one aggregation switch, and a type-C
allocation by one core switch)
Lemma 3.6 (n-star as vector) For any set of resource units A, A is n-star if and
only if A forms a one-dimensional vector in a matrix.
Proof We exhaust all possible n-star types of A and give a bidirectional proof for
each case. Note that a type-S n-star trivially forms a one-dimensional vector, i.e., a
single element, in a matrix.
Case 1: For any type-E n-star A, by definition, all the resource units of A are
connected to exactly one edge switch in a certain pod. By the definition of matrix
representation, A forms a one-dimensional vector along the aggregation axis.
Case 2: For any type-A n-star A, by definition, all the resource units of A are
connected to exactly one aggregation switch in a certain pod. By the definition of
matrix representation, A forms a one-dimensional vector along the edge axis.
Case 3: For any type-C n-star A, by definition, all the resource units of A are
connected to exactly one core switch. By Lemma 3.3 and the definition of matrix
representation, A forms a one-dimensional vector along the pod axis. □
Figure 3.4 shows several examples of resource allocation using the matrix
representation. For a type-E service which requests four resource units, {m(1, 3, 1),
m(4, 3, 1), m(5, 3, 1), m(7, 3, 1)} is one of the feasible allocations, where the service
is allocated aggregation switches 1, 4, 5, 7 and edge switch 3 in pod 1. For a
type-A service which requests four resource units, {m(3, 2, 1), m(3, 4, 1), m(3, 5, 1),
m(3, 7, 1)} is one of the feasible allocations, where the service is allocated
aggregation switch 3, edge switches 2, 4, 5, 7 in pod 1. For a type-C service which
requests four resource units, {m(1, 6, 2), m(1, 6, 3), m(1, 6, 5), m(1, 6, 8)} is one of
the feasible allocations, where the service is allocated aggregation switch 1, edge
switch 6 in pods 2, 3, 5, and 8.
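The three feasible allocations above can be written out directly (an illustrative sketch, not the book's code), recording the matrix elements m(i, j, k) in a sparse map so that exclusiveness is easy to enforce:

```python
# Sketch (not from the book): the three feasible allocations above,
# recorded in a sparse 3-D allocation matrix keyed by
# (aggregation i, edge j, pod k); m[u] = s plays the role of ms(i,j,k) = 1.

m = {}  # (i, j, k) -> service id; a missing key means the unit is free

def allocate(service, units):
    """Exclusively mark the resource units of one n-star as allocated."""
    conflict = [u for u in units if u in m]
    assert not conflict, f"units already allocated: {conflict}"
    for u in units:
        m[u] = service

allocate("E", [(1, 3, 1), (4, 3, 1), (5, 3, 1), (7, 3, 1)])  # edge switch 3, pod 1
allocate("A", [(3, 2, 1), (3, 4, 1), (3, 5, 1), (3, 7, 1)])  # aggr switch 3, pod 1
allocate("C", [(1, 6, 2), (1, 6, 3), (1, 6, 5), (1, 6, 8)])  # core 6 of group 1

print(len(m))  # 12 exclusively allocated resource units
```

Each allocation occupies one axis-aligned vector; the assertion enforces the exclusiveness of resource units across services.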
Within a matrix, we further give some essential operations, such as intersection,
union and difference, for manipulating n-star while ensuring the structure and
properties defined above.

Definition 3.10 The intersection of two n-stars A1 and A2, denoted by A1 ∩ A2, is
defined as {U | U ∈ A1 and U ∈ A2}.
Lemma 3.7 (Intersection of n-stars) For any two n-stars A1 and A2, let Ax = (Sx, Lx)
be their intersection, exactly one of the following is true: (1) they share at least one
common resource unit and Ax is an n-star; (2) Sx = Lx = ∅. If Case 2 holds, we say
A1 and A2 are independent.
Proof From Lemma 3.6, every n-star forms a one-dimensional vector in the matrix,
and only the following cases represent the intersection of any two n-stars A1 and A2
in a matrix:
Case 1: Ax forms a single element or a one-dimensional vector in the matrix. By
Lemma 3.6, both imply that the intersection is an n-star and also indicate the
resource units shared by A1 and A2.
Case 2: Ax is the null set. In this case, there is no common resource unit shared by A1
and A2. Therefore, for any two resource units U1 ∈ A1 and U2 ∈ A2, U1 ≠ U2, and by
Lemma 3.1, U1 ∩ U2 is a null set. There are no shared links or servers between A1
and A2, leading to Sx = Lx = ∅. □

Definition 3.11 The union of any two n-stars A1 and A2, denoted by A1 ∪ A2, is
defined as {U | U ∈ A1 or U ∈ A2}.
Lemma 3.8 (Union of n-stars) For any two n-stars A1 and A2, all of the following
are equivalent: (1) A1 ∪ A2 is an n-star; (2) A1 ∪ A2 forms a one-dimensional vector
in the matrix; and (3) A1 ∪ A2 is single-connected.
Proof For any two n-stars A1 and A2, the equivalence between (1) and (2) has been
proved by Lemma 3.6, and the equivalence between (1) and (3) is given by
the definition of n-star. □

Definition 3.12 The difference of any two n-stars A1 and A2, denoted by A1\A2, is
defined as the union of {U | U ∈ A1 and U ∉ A2}.
Lemma 3.9 (Difference of n-stars) For any two n-stars A1 and A2, if A2 ⊆ A1, then
A1\A2 is an n-star.
Proof By Lemma 3.1, different resource units are resource-independent (i.e., link-
disjoint and server-disjoint), and hence removing some resource units from any
n-star will not influence the remaining resource units.
For any two n-stars A1 and A2, the definition of A1\A2 is equivalent to removing
the resource units of A2 from A1. It is hence equivalent to a removal of some
elements from the one-dimensional vector representing A1 in the matrix. Since the
remaining resource units still form a one-dimensional vector, A1\A2 is an n-star
according to Lemma 3.6. □
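The operations of Definitions 3.10-3.12 map directly onto set operations when an n-star is modeled as a set of resource-unit triples (a sketch under that modeling assumption, not the book's notation):

```python
# Sketch (not from the book): n-stars as sets of (i, j, k) triples;
# intersection, union and difference per Definitions 3.10-3.12.

def is_n_star(units):
    """Lemma 3.6: a non-empty unit set is an n-star iff it lies on one
    axis, i.e., at most one of the three indices varies."""
    return bool(units) and sum(
        len({u[d] for u in units}) > 1 for d in range(3)) <= 1

A1 = {(3, 2, 1), (3, 4, 1), (3, 5, 1)}  # type-A vector along the edge axis
A2 = {(3, 4, 1), (3, 5, 1), (3, 7, 1)}  # another type-A on the same aggr switch

inter = A1 & A2        # Lemma 3.7: an n-star (or empty -> independent)
union = A1 | A2        # Lemma 3.8: an n-star, since single-connected
diff = A1 - inter      # Lemma 3.9: an n-star, since inter is a subset of A1

print(is_n_star(inter), is_n_star(union), is_n_star(diff))  # True True True
```

Two n-stars on different axes that intersect in a single unit, or that are not single-connected, would fail the union test, matching the precondition of Lemma 3.8.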

Fig. 3.5 An example of reducing a fat-tree while keeping the symmetry property

3.5 Building Variants of Fat-Tree Networks

The canonical fat-tree structure is considered to have certain limitations in its
architecture. These limitations can be mitigated by StarCube. As an instance of
StarCube is equivalent to a fat-tree network, it can be treated as a mechanism to
model the trimming and expanding of fat-tree networks. As such, we can easily
construct numerous variants of fat-tree networks for scaling purposes while keeping
their promising symmetry properties. Therefore, the resources of such variants can be
allocated and reallocated as in a canonical StarCube. An example is illustrated in
Fig. 3.5, where a reduced fat-tree network is constructed by excluding the green
links, the first group of core switches, the first aggregation switch in every pod, the
first server in every rack, and the first pod from a canonical 8-ary fat-tree. In this
example, a StarCube of 4 × 4 × 8 is reduced to 3 × 4 × 7. Following the
construction rules of StarCube, it is possible to operate smaller or incomplete fat-tree
networks and expand them later, and vice versa. Such flexibility is beneficial in
reducing the cost of operating data centers.

3.6 Fault-Tolerant Resource Allocation

This framework supports fault tolerance in an easy, intuitive and resource-efficient
way. Operators may reserve extra resources for services, and then quickly recover
those services from server or link failures while keeping the topologies
logically unchanged. Only a small percentage of the resources in the data center
needs to be kept in reserve.
Thanks to the symmetry properties of the topologies allocated by StarCube,
where for each service the allocated servers are aligned to a certain axis, the
complexity of reserving backup resources can be significantly reduced. For any
service, all that is needed is to estimate the required number of backup resource
units and request a larger star network accordingly, after which any failed resource
Another Random Scribd Document
with Unrelated Content
'This is the end,' she thought. 'This is the end.
I'll have to sew again for Mr Jones,
Do hems when I can hardly see to mend,
And have the old ache in my marrow-bones.
And when his wife's in child-bed, when she groans,
She'll send for me until the pains have ceased,
And give me leavings at the christening feast.

And sit aslant to eye me as I eat,


"You're only wanted here, ma'am, for to-day,
Just for the christ'ning party, for the treat,
Don't ever think I mean to let you stay;
Two's company, three's none, that's what I say."
Life can be bitter to the very bone
When one is poor, and woman, and alone.

'Jimmy,' she said, still doubting, 'Come, my dear,


Let's have our "Binger," 'fore we go to bed,'
And then 'The parson's dog,' she cackled clear,
'Lep over stile,' she sang, nodding her head.
'His name was little Binger.' 'Jim,' she said,
'Binger, now, chorus' ... Jimmy kicked the hob,
The sacrament of song died in a sob.

Jimmy went out into the night to think


Under the moon so steady in the blue.
The woman's beauty ran in him like drink,
The fear that men had loved her burnt him through;
The fear that even then another knew
All the deep mystery which women make
To hide the inner nothing made him shake.

'Anna, I love you, and I always shall.'


He looked towards Plaister's End beyond Cot Hills.
A white star glimmered in the long canal,
A droning from the music came in thrills.
Love is a flame to burn out human wills,
Love is a flame to set the will on fire,
Love is a flame to cheat men into mire.

One of the three, we make Love what we choose,


But Jimmy did not know, he only thought
That Anna was too beautiful to lose,
That she was all the world and he was naught,
That it was sweet, though bitter, to be caught.
'Anna, I love you.' Underneath the moon,
'I shall go mad unless I see you soon.'

The fair's lights threw aloft a misty glow.


The organ whangs, the giddy horses reel,
The rifles cease, the folk begin to go,
The hands unclamp the swing-boats from the wheel,
There is a smell of trodden orange peel;
The organ drones and dies, the horses stop,
And then the tent collapses from the top.
The fair is over, let the people troop,
The drunkards stagger homewards down the gutters,
The showmen heave in an excited group,
The poles tilt slowly down, the canvas flutters,
The mauls knock out the pins, the last flare sputters.
'Lower away.' 'Go easy.' 'Lower, lower.'
'You've dang near knock my skull in. Loose it slower.'

'Back in the horses.' 'Are the swing-boats loaded?'


'All right to start.' 'Bill, where's the cushion gone?
The red one for the Queen?' 'I think I stowed it.'
'You think, you think. Lord, where's that cushion, John?'
'It's in that bloody box you're sitting on,
What more d'you want?' A concertina plays
Far off as wandering lovers go their ways.

Up the dim Bye Street to the market-place


The dead bones of the fair are borne in carts,
Horses and swing-boats at a funeral pace
After triumphant hours quickening hearts;
A policeman eyes each waggon as it starts,
The drowsy showmen stumble half asleep,
One of them catcalls, having drunken deep.

So out, over the pass, into the plain,


And the dawn finds them filling empty cans
In some sweet-smelling dusty country lane,
Where a brook chatters over rusty pans.
The iron chimneys of the caravans
Smoke as they go. And now the fair has gone
To find a new pitch somewhere further on.

But as the fair moved out two lovers came,


Ernie and Bessie loitering out together;
Bessie with wild eyes, hungry as a flame,
Ern like a stallion tugging at a tether.
It was calm moonlight, and October weather,
So still, so lovely, as they topped the ridge.
They brushed by Jimmy standing on the bridge.

And, as they passed, they gravely eyed each other,
And the blood burned in each heart beating there;
And out into the Bye Street tottered mother,
Without her shawl, in the October air.
'Jimmy,' she cried, 'Jimmy.' And Bessie's hair
Drooped on the instant over Ernie's face,
And the two lovers clung in an embrace.

'O, Ern.' 'My own, my Bessie.' As they kissed
Jimmy was envious of the thing unknown.
So this was Love, the something he had missed,
Woman and man athirst, aflame, alone.
Envy went knocking at his marrow-bone,
And Anna's face swam up so dim, so fair,
Shining and sweet, with poppies in her hair.

III

After the fair, the gang began again.
Tipping the trollies down the banks of earth.
The truck of stone clanks on the endless chain,
A clever pony guides it to its berth.
'Let go.' It tips, the navvies shout for mirth
To see the pony step aside, so wise,
But Jimmy sighed, thinking of Anna's eyes.

And when he stopped his shovelling he looked
Over the junipers towards Plaister way,
The beauty of his darling had him hooked,
He had no heart for wrastling with the clay.
'O Lord Almighty, I must get away;
O Lord, I must. I must just see my flower,
Why, I could run there in the dinner hour.'

The whistle on the pilot engine blew,
The men knocked off, and Jimmy slipped aside
Over the fence, over the bridge, and through,
And then ahead along the water-side,
Under the red-brick rail-bridge, arching wide,
Over the hedge, across the fields, and on;
The foreman asked: 'Where's Jimmy Gurney gone?'

It is a mile and more to Plaister's End,
But Jimmy ran the short way by the stream,
And there was Anna's cottage at the bend,
With blue smoke on the chimney, faint as steam.
'God, she's at home,' and up his heart a gleam
Leapt like a rocket on November nights,
And shattered slowly in a burst of lights.

Anna was singing at her kitchen fire,
She was surprised, and not well pleased to see
A sweating navvy, red with heat and mire,
Come to her door, whoever he might be.
But when she saw that it was Jimmy, she
Smiled at his eyes upon her, full of pain,
And thought, 'But, still, he mustn't come again.

People will talk; boys are such crazy things;
But he's a dear boy though he is so green.'
So, hurriedly, she slipped her apron strings,
And dabbed her hair, and wiped her fingers clean,
And came to greet him languid as a queen,
Looking as sweet, as fair, as pure, as sad,
As when she drove her loving husband mad.

'Poor boy,' she said, 'Poor boy, how hot you are.'
She laid a cool hand to his sweating face.
'How kind to come. Have you been running far?
I'm just going out; come up the road a pace.
O dear, these hens; they're all about the place.'
So Jimmy shooed the hens at her command,
And got outside the gate as she had planned.

'Anna, my dear, I love you; love you, true;
I had to come--I don't know--I can't rest--
I lay awake all night, thinking of you.
Many must love you, but I love you best.'
'Many have loved me, yes, dear,' she confessed,
She smiled upon him with a tender pride,
'But my love ended when my husband died.

Still, we'll be friends, dear friends, dear, tender friends;
Love with its fever's at an end for me.
Be by me gently now the fever ends,
Life is a lovelier thing than lovers see,
I'd like to trust a man, Jimmy,' said she,
'May I trust you?' 'Oh, Anna dear, my dear----
'Don't come so close,' she said, 'with people near.

Dear, don't be vexed; it's very sweet to find
One who will understand; but life is life,
And those who do not know are so unkind.
But you'll be by me, Jimmy, in the strife,
I love you though I cannot be your wife;
And now be off, before the whistle goes,
Or else you'll lose your quarter, goodness knows.'
'When can I see you, Anna? Tell me, dear.
To-night? To-morrow? Shall I come to-night?'
'Jimmy, my friend, I cannot have you here;
But when I come to town perhaps we might.
Dear, you must go; no kissing; you can write,
And I'll arrange a meeting when I learn
What friends are doing' (meaning Shepherd Ern).

'Good-bye, my own.' 'Dear Jim, you understand.
If we were only free, dear, free to meet,
Dear, I would take you by your big, strong hand
And kiss your dear boy eyes so blue and sweet;
But my dead husband lies under the sheet,
Dead in my heart, dear, lovely, lonely one,
So, Jim, my dear, my loving days are done.

But though my heart is buried in his grave
Something might be--friendship and utter trust--
And you, my dear starved little Jim shall have
Flowers of friendship from my dead heart's dust;
Life would be sweet if men would never lust.
Why do you, Jimmy? Tell me sometime, dear,
Why men are always what we women fear.

Not now. Good-bye; we understand, we two,
And life, O Jim, how glorious life is;
This sunshine in my heart is due to you;
I was so sad, and life has given this.
I think "I wish I had something of his,"
Do give me something, will you be so kind?
Something to keep you always in my mind.

'I will,' he said. 'Now go, or you'll be late.'
He broke from her and ran, and never dreamt
That as she stood to watch him from the gate
Her heart was half amusement, half contempt,
Comparing Jim the squab, red and unkempt,
In sweaty corduroys, with Shepherd Ern.
She blew him kisses till he passed the turn.

The whistle blew before he reached the line;
The foreman asked him what the hell he meant,
Whether a duke had asked him out to dine,
Or if he thought the bag would pay his rent?
And Jim was fined before the foreman went.
But still his spirit glowed from Anna's words,
Cooed in the voice so like a singing bird's.

'O Anna, darling, you shall have a present;
I'd give you golden gems if I were rich,
And everything that's sweet and all that's pleasant.'
He dropped his pick as though he had a stitch,
And stared tow'rds Plaister's End, past Bushe's Pitch.
'O beauty, what I have to give I'll give,
All mine is yours, beloved, while I live.'

All through the afternoon his pick was slacking,
His eyes were always turning west and south,
The foreman was inclined to send him packing,
But put it down to after fair-day drouth;
He looked at Jimmy with an ugly mouth,
And Jimmy slacked, and muttered in a moan,
'My love, my beautiful, my very own.'

So she had loved. Another man had had her;
She had been his with passion in the night;
An agony of envy made him sadder,
Yet stabbed a pang of bitter-sweet delight--
O he would keep his image of her white.
The foreman cursed, stepped up, and asked him flat
What kind of gum-tree he was gaping at.

It was Jim's custom, when the pay day came,
To take his weekly five and twenty shilling
Back in the little packet to his dame;
Not taking out a farthing for a filling,
Nor twopence for a pot, for he was willing
That she should have it all to save or spend.
But love makes many lovely customs end.

Next pay day came and Jimmy took the money,
But not to mother, for he meant to buy
A thirteen-shilling locket for his honey,
Whatever bellies hungered and went dry,
A silver heart-shape with a ruby eye.
He bought the thing and paid the shopman's price,
And hurried off to make the sacrifice.

'Is it for me? You dear, dear generous boy.
How sweet of you. I'll wear it in my dress.
When you're beside me life is such a joy,
You bring the sun to solitariness.'
She brushed his jacket with a light caress,
His arms went round her fast, she yielded meek;
He had the happiness to kiss her cheek.

'My dear, my dear.' 'My very dear, my Jim,
How very kind my Jimmy is to me;
I ache to think that some are harsh to him;
Not like my Jimmy, beautiful and free.
My darling boy, how lovely it would be
If all would trust as we two trust each other.'
And Jimmy's heart grew hard against his mother.

She, poor old soul, was waiting in the gloom
For Jimmy's pay, that she could do the shopping.
The clock ticked out a solemn tale of doom;
Clogs on the bricks outside went clippa-clopping,
The owls were coming out and dew was dropping.
The bacon burnt, and Jimmy not yet home.
The clock was ticking dooms out like a gnome.

'What can have kept him that he doesn't come?
O God, they'd tell me if he'd come to hurt.'
The unknown, unseen evil struck her numb,
She saw his body bloody in the dirt,
She saw the life blood pumping through the shirt,
She saw him tipsy in the navvies' booth,
She saw all forms of evil but the truth.

At last she hurried up the line to ask
If Jim were hurt or why he wasn't back.
She found the watchman wearing through his task;
Over the fire basket in his shack;
Behind, the new embankment rose up black.
'Gurney?' he said. 'He'd got to see a friend.'
'Where?' 'I dunno. I think out Plaister's End.'

Thanking the man, she tottered down the hill,
The long-feared fang had bitten to the bone.
The brook beside her talked as water will
That it was lonely singing all alone,
The night was lonely with the water's tone,
And she was lonely to the very marrow.
Love puts such bitter poison on Fate's arrow.

She went the long way to them by the mills,
She told herself that she must find her son.
The night was ominous of many ills;
The soughing larch-clump almost made her run,
Her boots hurt (she had got a stone in one)
And bitter beaks were tearing at her liver
That her boy's heart was turned from her forever.

She kept the lane, past Spindle's, past the Callows',
Her lips still muttering prayers against the worst,
And there were people coming from the sallows,
Along the wild duck patch by Beggar's Hurst.
Being in moonlight mother saw them first,
She saw them moving in the moonlight dim,
A woman with a sweet voice saying 'Jim.'

Trembling she grovelled down into the ditch,
They wandered past her pressing side to side.
'O Anna, my belov'd, if I were rich.'
It was her son, and Anna's voice replied,
'Dear boy, dear beauty boy, my love and pride.'
And he: 'It's but a silver thing, but I
Will earn you better lockets by and bye.'

'Dear boy, you mustn't.' 'But I mean to do.'
'What was that funny sort of noise I heard?'
'Where?' 'In the hedge; a sort of sob or coo.
Listen. It's gone.' 'It may have been a bird.'
Jim tossed a stone but mother never stirred.
She hugged the hedgerow, choking down her pain,
While the hot tears were blinding in her brain.

The two passed on, the withered woman rose,
For many minutes she could only shake,
Staring ahead with trembling little 'Oh's,'
The noise a very frightened child might make.
'O God, dear God, don't let the woman take
My little son, God, not my little Jim.
O God, I'll have to starve if I lose him.'

So back she trembled, nodding with her head,
Laughing and trembling in the bursts of tears,
Her ditch-filled boots both squelching in the tread,
Her shopping-bonnet sagging to her ears,
Her heart too dumb with brokenness for fears.
The nightmare whickering with the laugh of death
Could not have added terror to her breath.

She reached the house, and: 'I'm all right,' said she,
'I'll just take off my things; but I'm all right,
'I'd be all right with just a cup of tea,
If I could only get this grate to light,
The paper's damp and Jimmy's late to-night;
"Belov'd, if I was rich," was what he said,
O Jim, I wish that God would kill me dead.'

While she was blinking at the unlit grate,
Scratching the moistened match-heads off the wood,
She heard Jim coming, so she reached his plate,
And forked the over-frizzled scraps of food.
'You're late,' she said, 'and this yer isn't good,
Whatever makes you come in late like this?'
'I've been to Plaister's End, that's how it is.'

'You've been to Plaister's End?'
'Yes.'
'I've been staying
For money for the shopping ever so.
Down here we can't get victuals without paying,
There's no trust down the Bye Street, as you know,
And now it's dark and it's too late to go.
You've been to Plaister's End. What took you there?'
'The lady who was with us at the fair.'

'The lady, eh? The lady?'
'Yes, the lady.'
'You've been to see her?'
'Yes.'
'What happened then?'
'I saw her.'
'Yes. And what filth did she trade ye?
Or d'you expect your locket back agen?
I know the rotten ways of whores with men.
What did it cost ye?'
'What did what cost?'
'It.
Your devil's penny for the devil's bit.'
'I don't know what you mean.'
'Jimmy, my own.
Don't lie to mother, boy, for mother knows.
I know you and that lady to the bone,
And she's a whore, that thing you call a rose,
A whore who takes whatever male thing goes;
A harlot with the devil's skill to tell
The special key of each man's door to hell.'

'She's not. She's nothing of the kind, I tell'ee.'
'You can't tell women like a woman can;
A beggar tells a lie to fill his belly,
A strumpet tells a lie to win a man,
Women were liars since the world began;
And she's a liar, branded in the eyes,
A rotten liar, who inspires lies.'

'I say she's not.'
'No, don't'ee Jim, my dearie,
You've seen her often in the last few days,
She's given a love as makes you come in weary
To lie to me before going out to laze.
She's tempted you into the devil's ways,
She's robbing you, full fist, of what you earn,
In God's name, what's she giving in return?'

'Her faith, my dear, and that's enough for me.'
'Her faith. Her faith. O Jimmy, listen, dear;
Love doesn't ask for faith, my son, not he;
He asks for life throughout the live-long year,
And life's a test for any plough to ere
Life tests a plough in meadows made of stones,
Love takes a toll of spirit, mind and bones.

I know a woman's portion when she loves,
It's hers to give, my darling, not to take;
It isn't lockets, dear, nor pairs of gloves,
It isn't marriage bells nor wedding cake,
It's up and cook, although the belly ache;
And bear the child, and up and work again,
And count a sick man's grumble worth the pain.

Will she do this, and fifty times as much?'
'No. I don't ask her.'
'No. I warrant, no.
She's one to get a young fool in her clutch,
And you're a fool to let her trap you so.
She love you? She? O Jimmy, let her go;
I was so happy, dear, before she came,
And now I'm going to the grave in shame.

I bore you, Jimmy, in this very room.
For fifteen years I got you all you had,
You were my little son, made in my womb,
Left all to me, for God had took your dad,
You were a good son, doing all I bade,
Until this strumpet came from God knows where,
And now you lie, and I am in despair.

Jimmy, I won't say more. I know you think
That I don't know, being just a withered old,
With chaps all fallen in and eyes that blink,
And hands that tremble so they cannot hold.
A bag of bones to put in churchyard mould,
A red-eyed hag beside your evening star.'
And Jimmy gulped, and thought 'By God, you are.'

'Well, if I am, my dear, I don't pretend.
I got my eyes red, Jimmy, making you.
My dear, before our love time's at an end
Think just a minute what it is you do.
If this were right, my dear, you'd tell me true;
You don't, and so it's wrong; you lie; and she
Lies too, or else you wouldn't lie to me.

Women and men have only got one way
And that way's marriage; other ways are lust.
If you must marry this one, then you may,
If not you'll drop her.'
'No.' 'I say you must.
Or bring my hairs with sorrow to the dust.
Marry your whore, you'll pay, and there an end.
My God, you shall not have a whore for friend.

By God, you shall not, not while I'm alive.
Never, so help me God, shall that thing be.
If she's a woman fit to touch she'll wive,
If not she's whore, and she shall deal with me.
And may God's blessed mercy help us see
And may He make my Jimmy count the cost,
My little boy who's lost, as I am lost.'

People in love cannot be won by kindness,
And opposition makes them feel like martyrs.
When folk are crazy with a drunken blindness,
It's best to flog them with each other's garters,
And have the flogging done by Shropshire carters,
Born under Ercall where the white stones lie;
Ercall that smells of honey in July.

Jimmy said nothing in reply, but thought
That mother was an old, hard jealous thing.
'I'll love my girl through good and ill report,
I shall be true whatever grief it bring.'
And in his heart he heard the death-bell ring
For mother's death, and thought what it would be
To bury her in churchyard and be free.

He saw the narrow grave under the wall,
Home without mother nagging at his dear,
And Anna there with him at evenfall,
Bidding him dry his eyes and be of cheer.
'The death that took poor mother brings me near,
Nearer than we have ever been before,
Near as the dead one came, but dearer, more.'

'Good-night, my son,' said mother. 'Night,' he said.
He dabbed her brow wi's lips and blew the light,
She lay quite silent crying on the bed,
Stirring no limb, but crying through the night.
He slept, convinced that he was Anna's knight.
And when he went to work he left behind
Money for mother crying herself blind.

After that night he came to Anna's call,
He was a fly in Anna's subtle weavings,
Mother had no more share in him at all;
All that the mother had was Anna's leavings.
There were more lies, more lockets, more deceivings,
Taunts from the proud old woman, lies from him,
And Anna's coo of 'Cruel. Leave her, Jim.'

Also the foreman spoke: 'You make me sick,
You come-day-go-day-God-send-plenty-beer.
You put less mizzle on your bit of Dick,
Or get your time, I'll have no slackers here,
I've had my eye on you too long, my dear.'
And Jimmy pondered while the man attacked,
'I'd see her all day long if I were sacked.'

And trembling mother thought, 'I'll go to see'r.
She'd give me back my boy if she were told
Just what he is to me, my pretty dear:
She wouldn't leave me starving in the cold,
Like what I am.' But she was weak and old.
She thought, 'But if I ask her, I'm afraid
He'd hate me ever after,' so she stayed.

IV

Bessie, the gipsy, got with child by Ern,
She joined her tribe again at Shepherd's Meen,
In that old quarry overgrown with fern,
Where goats are tethered on the patch of green.
There she reflected on the fool she'd been,
And plaited kipes and waited for the bastard,
And thought that love was glorious while it lasted.

And Ern the moody man went moody home,
To that most gentle girl from Ercall Hill,
And bade her take a heed now he had come,
Or else, by cripes, he'd put her through the mill.
He didn't want her love, he'd had his fill,
Thank you, of her, the bread and butter sack.