Distributed Systems Principles and Paradigms: Chapter 05: Naming

This chapter discusses different approaches for naming entities in distributed systems: flat naming, where an entity's unstructured name is resolved to an address (by broadcasting the name, following forwarding pointers, or using home-based approaches); distributed hash tables, which map names to nodes in a logical ring and use finger tables to route lookup requests efficiently; and hierarchical location services, which organize directory nodes in a search tree following the network hierarchy to improve lookup efficiency. It also covers structured naming and name-space implementation (e.g., DNS) and attribute-based naming (directory services such as LDAP).


Distributed Systems

Principles and Paradigms


Maarten van Steen
VU Amsterdam, Dept. Computer Science
Room R4.20, [email protected]
Chapter 05: Naming
Version: November 12, 2009
Contents
Chapter
01: Introduction
02: Architectures
03: Processes
04: Communication
05: Naming
06: Synchronization
07: Consistency & Replication
08: Fault Tolerance
09: Security
10: Distributed Object-Based Systems
11: Distributed File Systems
12: Distributed Web-Based Systems
13: Distributed Coordination-Based Systems
2 / 38
Naming 5.1 Naming Entities
Naming Entities
Names, identifiers, and addresses
Name resolution
Name space implementation
3 / 38
Naming 5.1 Naming Entities
Naming
Essence
Names are used to denote entities in a distributed system. To operate
on an entity, we need to access it at an access point. Access points
are entities that are named by means of an address.
Note
A location-independent name for an entity E is independent of the
addresses of the access points offered by E.
4 / 38
Naming 5.1 Naming Entities
Identifiers
Pure name
A name that has no meaning at all; it is just a random string. Pure
names can be used for comparison only.
Identifier
A name having the following properties:
P1: Each identifier refers to at most one entity
P2: Each entity is referred to by at most one identifier
P3: An identifier always refers to the same entity (prohibits reusing
an identifier)
Observation
An identifier need not necessarily be a pure name, i.e., it may have
content.
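As a toy illustration (not from the slides), random 128-bit UUIDs behave like pure names: they carry no content, so comparing them for equality is the only meaningful operation.

```python
# Pure names modeled as random UUIDs (illustrative only): a client
# cannot extract any information from them, it can only compare them.
import uuid

id_a = uuid.uuid4()   # fresh identifier for one entity
id_b = uuid.uuid4()   # fresh identifier for another entity

# Comparison is the only meaningful operation on a pure name:
same = (id_a == id_b)
# Properties P1-P3 hold as long as identifiers are never reused and
# each entity is assigned exactly one of them.
```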
5 / 38
Naming 5.2 Flat Naming
Flat naming
Problem
Given an essentially unstructured name (e.g., an identier), how can
we locate its associated access point?
Simple solutions (broadcasting)
Home-based approaches
Distributed Hash Tables (structured P2P)
Hierarchical location service
6 / 38
Naming 5.2 Flat Naming
Simple solutions
Broadcasting
Broadcast the ID, requesting the entity to return its current address.
Can never scale beyond local-area networks
Requires all processes to listen to incoming location requests
Forwarding pointers
When an entity moves, it leaves behind a pointer to its next location
Dereferencing can be made entirely transparent to clients by simply
following the chain of pointers
Update a client's reference when the present location is found
Geographical scalability problems (for which separate chain-reduction
mechanisms are needed):
Long chains are not fault tolerant
Increased network latency at dereferencing
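The chain-following behavior above can be sketched as follows (all location and entity names are illustrative):

```python
# Forwarding pointers (illustrative): every former location keeps a
# pointer to the next one; clients follow the chain, then short-cut it.
locations = {
    "loc-1": ("forward", "loc-2"),
    "loc-2": ("forward", "loc-3"),
    "loc-3": ("object", "entity-E"),      # present location of the entity
}

def dereference(start):
    """Follow the chain from start; return (present location, #hops)."""
    loc, hops = start, 0
    while locations[loc][0] == "forward":
        loc = locations[loc][1]
        hops += 1
    return loc, hops

def adjust(ref):
    """Chain reduction: update the client's reference to the present
    location once it has been found."""
    ref["loc"], _ = dereference(ref["loc"])
    return ref["loc"]
```

Subsequent dereferences through the adjusted reference then take zero hops, which is exactly the chain-reduction step the slide refers to.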
7 / 38
Naming 5.2 Flat Naming
Home-based approaches
Single-tiered scheme
Let a home keep track of where the entity is:
The entity's home address is registered at a naming service
The home registers the foreign address of the entity
A client contacts the home first, and then continues with the foreign
location
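A minimal sketch of the single-tiered scheme, with all names and addresses illustrative: the register kept at the entity's fixed home always holds its current foreign address.

```python
# Single-tiered home-based location (names/addresses illustrative):
# the fixed home of an entity always knows its current foreign address.
home_of = {"entity-E": "home-nl"}                   # from the naming service
register = {"home-nl": {"entity-E": "addr-us-7"}}   # kept at each home

def locate(entity):
    """A client contacts the home first; the home yields the current
    foreign location, with which the client then continues."""
    home = home_of[entity]
    return register[home][entity]

def move(entity, new_address):
    """A moving entity updates the register at its (fixed) home."""
    register[home_of[entity]][entity] = new_address
```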
8 / 38
Naming 5.2 Flat Naming
Home-based approaches
Figure: the principle of Mobile IP. 1. The client sends a packet to the
host at its home location; 2. the home returns the address of the
host's current location; 3. meanwhile, the home tunnels the packet to
the current location; 4. the client sends successive packets directly
to the current location.
9 / 38
Naming 5.2 Flat Naming
Home-based approaches
Two-tiered scheme
Keep track of visiting entities:
Check the local visitor register first
Fall back to the home location if the local lookup fails
Problems with home-based approaches
The home address has to be supported for the entity's lifetime
The home address is fixed → an unnecessary burden when the entity
permanently moves
Poor geographical scalability (the entity may be right next to the client)
Question
How can we solve the permanent move problem?
10 / 38
Naming 5.2 Flat Naming
Distributed Hash Tables (DHT)
Chord
Consider the organization of many nodes into a logical ring
Each node is assigned a random m-bit identier.
Every entity is assigned a unique m-bit key.
The entity with key k falls under the jurisdiction of the node with the
smallest id >= k (called its successor).
Nonsolution
Let node id keep track of succ(id) and start linear search along the
ring.
11 / 38
Naming 5.2 Flat Naming
DHTs: Finger tables
Principle
Each node p maintains a finger table FT_p[] with at most m entries:

    FT_p[i] = succ(p + 2^(i-1))

Note: FT_p[i] points to the first node succeeding p by at least 2^(i-1).
To look up a key k, node p forwards the request to the node with index j
satisfying

    q = FT_p[j] <= k < FT_p[j+1]

If p < k < FT_p[1], the request is also forwarded to FT_p[1]
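A runnable sketch of finger tables and key lookup, using the node IDs from the book's example ring (1, 4, 9, 11, 14, 18, 20, 21, 28, with m = 5); the forwarding step uses Chord's standard "closest preceding finger" rule, a variant of the index rule above:

```python
# Chord finger tables and key lookup over the example ring (m = 5).
M = 5
RING = 2 ** M
NODES = sorted([1, 4, 9, 11, 14, 18, 20, 21, 28])   # actual node IDs

def succ(k):
    """succ(k): the first actual node with id >= k (with wrap-around)."""
    k %= RING
    return next((n for n in NODES if n >= k), NODES[0])

def finger(p, i):
    """FT_p[i] = succ(p + 2^(i-1))."""
    return succ(p + 2 ** (i - 1))

def between(x, a, b):
    """True iff x lies in the open interval (a, b) on the ring."""
    return (a < x < b) if a < b else (x > a or x < b)

def lookup(p, k):
    """Route a lookup for key k starting at node p; return the path of
    nodes visited, ending at k's successor."""
    path = [p]
    while True:
        s = succ((p + 1) % RING)            # p's immediate successor
        if k == s or between(k, p, s):      # k in (p, succ(p)]: done
            path.append(s)
            return path
        nxt = s                             # default: next node on the ring
        for i in range(M, 0, -1):           # closest preceding finger of k
            f = finger(p, i)
            if between(f, p, k):
                nxt = f
                break
        path.append(nxt)
        p = nxt
```

Resolving key 26 from node 1 hops via nodes 18, 20, and 21 before reaching succ(26) = 28, matching the example lookups shown in the figure.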
12 / 38
Naming 5.2 Flat Naming
DHTs: Finger tables
Figure: resolving key k = 26 from node 1, and key k = 12 from node 28,
in a Chord ring with m = 5 (identifier space 0..31) and actual nodes
1, 4, 9, 11, 14, 18, 20, 21, 28. Each actual node p stores a finger
table with entries succ(p + 2^(i-1)) for i = 1, ..., 5; for example,
node 1 stores (4, 4, 9, 9, 18) and node 4 stores (9, 9, 9, 14, 20).
13 / 38
Naming 5.2 Flat Naming
Exploiting network proximity
Problem
The logical organization of nodes in the overlay may lead to erratic message
transfers in the underlying Internet: node k and node succ(k +1) may be
very far apart.
Topology-aware node assignment: When assigning an ID to a node, make
sure that nodes close in the ID space are also close in the network. Can
be very difficult.
Proximity routing: Maintain more than one possible successor, and forward to
the closest. Example: in Chord, FT_p[i] points to the first node in
INT = [p + 2^(i-1), p + 2^i - 1]; node p can also store pointers to other
nodes in INT.
Proximity neighbor selection: When there is a choice of selecting who your
neighbor will be (not in Chord), pick the closest one.
14 / 38
Naming 5.2 Flat Naming
Hierarchical Location Services (HLS)
Basic idea
Build a large-scale search tree for which the underlying network is
divided into hierarchical domains. Each domain is represented by a
separate directory node.
Figure: hierarchical organization of a location service into domains,
each having an associated directory node: a leaf domain, contained in a
subdomain S, has directory node dir(S); S is in turn a subdomain of the
top-level domain T, whose root directory node is dir(T).
15 / 38
Naming 5.2 Flat Naming
HLS: Tree organization
Invariants
Address of entity E is stored in a leaf or intermediate node
Intermediate nodes contain a pointer to a child iff the subtree rooted at
the child stores an address of the entity
The root knows about all entities
Figure: storing information on entity E, which has addresses in leaf
domains D1 and D2. The location record for E at node M has one field
per subdomain: the field for domain dom(N) contains a pointer to child
node N, while other fields hold no data; in the leaf domains, the
location record for E has only one field, containing an address.
16 / 38
Naming 5.2 Flat Naming
HLS: Lookup operation
Basic principles
Start the lookup at the local leaf node
Node knows about E → follow the downward pointer, else go up
An upward lookup always stops at the root
Figure: looking up a location in a hierarchical location service. The
lookup request for entity E starts at the leaf node of the domain D
containing the client; a node that has no record for E forwards the
request to its parent; node M, which knows about E, forwards the
request to the appropriate child.
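The upward/downward traversal can be sketched as follows (class and field names are illustrative, not from the book); intermediate nodes record, per entity, which children's subtrees know an address, and leaves store the addresses themselves:

```python
# Sketch of a hierarchical location service (names illustrative).
class DirNode:
    def __init__(self, name, parent=None):
        self.name, self.parent = name, parent
        self.children_with = {}   # entity -> set of child DirNodes
        self.addresses = {}       # entity -> address (at leaf nodes)

def insert(leaf, entity, address):
    """Store the address at the leaf, then walk upward creating child
    pointers until a node that already knows the entity (or the root)."""
    leaf.addresses[entity] = address
    node, child = leaf.parent, leaf
    while node is not None:
        known = entity in node.children_with
        node.children_with.setdefault(entity, set()).add(child)
        if known:                 # request is no longer forwarded
            break
        node, child = node.parent, node

def lookup(start, entity):
    """Go up until some node knows the entity, then follow downward
    pointers to an address; None if not even the root knows it."""
    node = start
    while node is not None and (entity not in node.children_with
                                and entity not in node.addresses):
        node = node.parent
    if node is None:
        return None
    while entity not in node.addresses:
        node = next(iter(node.children_with[entity]))
    return node.addresses[entity]

# Example: a root with two leaf domains; E's address is inserted in D1.
root = DirNode("dir(T)")
d1 = DirNode("dir(D1)", root)
d2 = DirNode("dir(D2)", root)
insert(d1, "E", "addr-1")
```

A lookup started in D2 climbs to the root (which knows about E) and then descends into D1, exactly the pattern shown in the figure.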
17 / 38
Naming 5.2 Flat Naming
HLS: Insert operation
Figure: inserting an address for entity E. (a) The insert request
starts at leaf node M in domain D; a node that has no record for E
forwards the request to its parent, until a node that knows about E is
reached, where the request is no longer forwarded. (b) On the way back
down, each intermediate node creates a record and stores a pointer to
its child, and the leaf node creates a record and stores the address.
18 / 38
Naming 5.3 Structured Naming
Name space
Essence
A graph in which a leaf node represents a (named) entity. A directory node is
an entity that refers to other nodes.
Figure: a naming graph with a single root node, n0. Directory nodes
(such as n0 and n1) refer to other nodes; leaf nodes represent named
entities. The data stored in n1 is its directory table, mapping the
edge labels elke, max, and steen to nodes n2, n3, and n4. The path name
"/home/steen/mbox" leads to the mbox leaf, while node n5 can be reached
via two path names, "/keys" and "/home/steen/keys".
Note
A directory node contains a (directory) table of (edge label, node identifier)
pairs.
19 / 38
Naming 5.3 Structured Naming
Name space
Observation
We can easily store all kinds of attributes in a node, describing aspects
of the entity the node represents:
Type of the entity
An identifier for that entity
Address of the entity's location
Nicknames
...
Note
Directory nodes can also have attributes, besides just storing a
directory table with (edge label, node identifier) pairs.
20 / 38
Naming 5.3 Structured Naming
Name resolution
Problem
To resolve a name we need a directory node. How do we actually find that
(initial) node?
Closure mechanism
The mechanism to select the implicit context from which to start name
resolution:
www.cs.vu.nl: start at a DNS name server
/home/steen/mbox: start at the local NFS file server (possible recursive
search)
0031204447784: dial a phone number
130.37.24.8: route to the VU's Web server
Question
Why are closure mechanisms always implicit?
21 / 38
Naming 5.3 Structured Naming
Name linking
Hard link
What we have described so far as a path name: a name that is resolved
by following a specific path in a naming graph from one node to
another.
22 / 38
Naming 5.3 Structured Naming
Name linking
Soft link
Allow a node O to contain the name of another node:
First resolve O's name (leading to O)
Read the content of O, yielding a name
Name resolution continues with that name
Observations
The name resolution process determines that we read the content
of a node, in particular, the name stored in the other node that we
need to go to.
One way or the other, we know where and how to start name
resolution given that name
23 / 38
Naming 5.3 Structured Naming
Name linking
Figure: the naming graph with a symbolic link. As before, n1's
directory table maps elke, max, and steen to n2, n3, and n4, and n0 has
an edge keys to leaf node n5 ("/keys"). Node n4 now also has a leaf
node n6, reachable as "/home/steen/keys", whose stored data is the name
"/keys": resolving "/home/steen/keys" therefore continues with "/keys",
which leads to n5.
Observation
Node n5 has only one name
24 / 38
Naming 5.3 Structured Naming
Name-space implementation
Basic issue
Distribute the name resolution process as well as name space management
across multiple machines, by distributing nodes of the naming graph.
Distinguish three levels
Global level: Consists of the high-level directory nodes. Main aspect is
that these directory nodes have to be jointly managed by different
administrations.
Administrational level: Contains mid-level directory nodes that can be
grouped in such a way that each group can be assigned to a separate
administration.
Managerial level: Consists of low-level directory nodes within a single
administration. Main issue is effectively mapping directory nodes to local
name servers.
25 / 38
Naming 5.3 Structured Naming
Name-space implementation
Figure: an example partitioning of the DNS name space, including
Internet-accessible files, into three layers (the division shown is
approximate): a global layer with top-level domains such as com, edu,
gov, mil, org, net, jp, us, nl and organization nodes such as sun,
yale, acm, ieee, ac, co, oce, vu, keio, nec; an administrational layer
with nodes such as eng, cs, csl, ai, jack, jill; and a managerial layer
with nodes such as linda, robot, pc24, ftp, www, pub, globe, index.txt.
A dashed box indicates a zone.
26 / 38
Naming 5.3 Structured Naming
Name-space implementation
Item                     Global      Administrational  Managerial
1: Geographical scale    Worldwide   Organization      Department
2: Number of nodes       Few         Many              Vast numbers
3: Responsiveness        Seconds     Milliseconds      Immediate
4: Update propagation    Lazy        Immediate         Immediate
5: Number of replicas    Many        None or few       None
6: Client-side caching?  Yes         Yes               Sometimes
27 / 38
Naming 5.3 Structured Naming
Iterative name resolution
1. resolve(dir,[name1,...,nameK]) is sent to Server0, responsible for dir
2. Server0 resolves resolve(dir,name1) → dir1, returning the identification
(address) of Server1, which stores dir1
3. The client sends resolve(dir1,[name2,...,nameK]) to Server1, etc.
Figure: the principle of iterative name resolution of <nl,vu,cs,ftp>.
1. The client's name resolver sends <nl,vu,cs,ftp> to the root name
server; 2. it returns #<nl> and <vu,cs,ftp>; 3. <vu,cs,ftp> goes to the
name server of the nl node; 4. it returns #<vu> and <cs,ftp>;
5. <cs,ftp> goes to the name server of the vu node; 6. it returns #<cs>
and <ftp>; 7. <ftp> goes to the name server of the cs node (the cs and
ftp nodes are managed by the same server); 8. it returns #<ftp>. The
resolver then hands #<nl,vu,cs,ftp> to the client.
28 / 38
Naming 5.3 Structured Naming
Recursive name resolution
1. resolve(dir,[name1,...,nameK]) is sent to Server0, responsible for dir
2. Server0 resolves resolve(dir,name1) → dir1, and sends
resolve(dir1,[name2,...,nameK]) to Server1, which stores dir1
3. Server0 waits for the result from Server1, and returns it to the client
Figure: the principle of recursive name resolution of <nl,vu,cs,ftp>.
1. The client's name resolver sends <nl,vu,cs,ftp> to the root name
server; 2. the root forwards <vu,cs,ftp> to the nl name server;
3. which forwards <cs,ftp> to the vu name server; 4. which forwards
<ftp> to the cs name server; 5.-7. the results #<ftp>, #<cs,ftp>, and
#<vu,cs,ftp> propagate back up; 8. the root returns #<nl,vu,cs,ftp>
to the client.
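Both resolution styles can be sketched over a toy server map (server names are illustrative; "#<...>" denotes a resolved address):

```python
# Iterative vs. recursive resolution of <nl,vu,cs,ftp> (illustrative).
SERVERS = {
    "root": {"nl": "nl-server"},
    "nl-server": {"vu": "vu-server"},
    "vu-server": {"cs": "cs-server"},
    "cs-server": {"ftp": "#<nl,vu,cs,ftp>"},
}

def iterative(path):
    """The client itself contacts one name server after another."""
    server, contacted = "root", []
    for label in path:
        contacted.append(server)
        server = SERVERS[server][label]
    return server, contacted   # resolved address, servers the client contacted

def recursive(server, path):
    """Each server forwards the remainder of the path on its own;
    the result travels back along the chain of servers."""
    nxt = SERVERS[server][path[0]]
    if len(path) == 1:
        return nxt
    return recursive(nxt, path[1:])
```

In the iterative variant the client talks to every server itself; in the recursive variant it sends a single request to the root, which is what makes server-side caching of intermediate results possible.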
29 / 38
Naming 5.3 Structured Naming
Caching in recursive name resolution
Server      Should            Looks up  Passes to     Receives        Returns
for node    resolve                     child         and caches      to requester
cs          <ftp>             #<ftp>    --            --              #<ftp>
vu          <cs,ftp>          #<cs>     <ftp>         #<ftp>          #<cs>
                                                                      #<cs,ftp>
nl          <vu,cs,ftp>       #<vu>     <cs,ftp>      #<cs>           #<vu>
                                                      #<cs,ftp>       #<vu,cs>
                                                                      #<vu,cs,ftp>
root        <nl,vu,cs,ftp>    #<nl>     <vu,cs,ftp>   #<vu>           #<nl>
                                                      #<vu,cs>        #<nl,vu>
                                                      #<vu,cs,ftp>    #<nl,vu,cs>
                                                                      #<nl,vu,cs,ftp>
30 / 38
Naming 5.3 Structured Naming
Scalability issues
Size scalability
We need to ensure that servers can handle a large number of requests per
time unit → high-level servers are in big trouble.
Solution
Assume (at least at global and administrational level) that content of nodes
hardly ever changes. We can then apply extensive replication by mapping
nodes to multiple servers, and start name resolution at the nearest server.
Observation
An important attribute of many nodes is the address where the represented
entity can be contacted. Replicating nodes makes large-scale traditional
name servers unsuitable for locating mobile entities.
31 / 38
Naming 5.3 Structured Naming
Scalability issues
Geographical scalability
We need to ensure that the name resolution process scales across large
geographical distances.
Figure: comparing recursive and iterative name resolution with respect
to long-distance communication. The client is separated from the name
servers for the nl, vu, and cs nodes by a long-distance link: iterative
resolution crosses that link three times (I1, I2, I3), whereas
recursive resolution crosses it only once (R1), after which messages R2
and R3 travel between the name servers themselves.
Problem
By mapping nodes to servers that can be located anywhere, we introduce an
implicit location dependency.
32 / 38
Naming 5.3 Structured Naming
Example: Decentralized DNS
Basic idea
Take a full DNS name, hash it into a key k, and use a DHT-based system to
allow for key lookups. Main drawback: you can't ask for all nodes in a
subdomain (but very few people were doing this anyway).
Information in a node
Record type  Refers to  Description
SOA          Zone       Holds info on the represented zone
A            Host       IP address of the host this node represents
MX           Domain     Mail server to handle mail for this node
SRV          Domain     Server handling a specific service
NS           Zone       Name server for the represented zone
CNAME        Node       Symbolic link
PTR          Host       Canonical name of a host
HINFO        Host       Info on this host
TXT          Any kind   Any info considered useful
33 / 38
Naming 5.3 Structured Naming
DNS on Pastry
Pastry
DHT-based system that works with prefixes of keys. Consider a system in
which keys come from a 4-digit number space. A node with ID 3210 keeps
track of the following nodes n_k, where k is a prefix of ID(n_k):

node:          n0  n1  n2  n30  n31  n33  n320  n322  n323
prefix of ID:  0   1   2   30   31   33   320   322   323

Note
Node 3210 is responsible for handling keys with prefix 321. If it receives a
request for key 3012, it will forward the request to node n30. For DNS: a node
responsible for key k stores the DNS records of names with hash value k.
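The prefix-forwarding step can be sketched as follows, using the example routing entries of node 3210; the routing table is simplified to a prefix-to-node map, and the sketch assumes an entry exists for the needed prefix (as it does in the example):

```python
# Prefix routing at node 3210 over a 4-digit key space (illustrative).
ROUTES = {p: "n" + p for p in
          ["0", "1", "2", "30", "31", "33", "320", "322", "323"]}

def shared_prefix(a, b):
    """Longest common leading digit string of a and b."""
    n = 0
    while n < min(len(a), len(b)) and a[n] == b[n]:
        n += 1
    return a[:n]

def forward(node_id, key):
    """Handle keys sharing this node's 3-digit prefix locally; otherwise
    forward to the known node that extends the shared prefix by the
    key's next digit."""
    if key.startswith(node_id[:3]):
        return "local"
    sp = shared_prefix(node_id, key)
    return ROUTES[sp + key[len(sp)]]
```

For key 3012 the shared prefix with 3210 is "3", and the key's next digit is "0", so the request goes to n30, matching the slide's example.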
34 / 38
Naming 5.3 Structured Naming
Replication of records
Definition
Replicated at level i → the record is replicated to all nodes with i
matching prefixes. Note: the number of hops for looking up a record
replicated at level i is generally i.
Observation
Let x_i denote the fraction of most popular DNS names of which the
records should be replicated at level i; then

    x_i = [ d^i (log N - C) / (1 + d + ... + d^(log N - 1)) ]^(1/(1-α))

with N the total number of nodes, d = b^((1-α)/α), and α < 1, assuming
that popularity follows a Zipf distribution:
The frequency of the n-th ranked item is proportional to 1/n^α
35 / 38
Naming 5.3 Structured Naming
Replication of records
Meaning
If you want to reach an average of C = 1 hops when looking up a DNS
record, then with b = 4, α = 0.9, N = 10,000 nodes, and 1,000,000 records,
it follows that:
the 61 most popular records should be replicated at level 0
the next 284 most popular records at level 1
the next 1323 most popular records at level 2
the next 6177 most popular records at level 3
the next 28826 most popular records at level 4
the next 134505 most popular records at level 5
the rest should not be replicated
36 / 38
Naming 5.4 Attribute-Based Naming
Attribute-based naming
Observation
In many cases, it is much more convenient to name, and look up,
entities by means of their attributes → traditional directory services
(aka yellow pages).
Problem
Lookup operations can be extremely expensive, as they require matching
the requested attribute values against the actual attribute values →
inspecting all entities (in principle).
Solution
Implement the basic directory service as a database, and combine it with
a traditional structured naming system.
37 / 38
Naming 5.4 Attribute-Based Naming
Example: LDAP
Figure: part of a directory information tree with the path C = NL,
O = Vrije Universiteit, OU = Comp. Sc., CN = Main server, below which
are two directory entries N with Host_Name = star and Host_Name = zephyr:

Attribute           Value                Value
Country             NL                   NL
Locality            Amsterdam            Amsterdam
Organization        Vrije Universiteit   Vrije Universiteit
OrganizationalUnit  Comp. Sc.            Comp. Sc.
CommonName          Main server          Main server
Host_Name           star                 zephyr
Host_Address        192.31.231.42        137.37.20.10

Example search operation:

    answer = search("&(C = NL) (O = Vrije Universiteit) (OU = *) (CN = Main server)")
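The search above can be sketched as attribute matching over the two example entries; this is a simplified stand-in for an LDAP filter (not the LDAP API), where "*" matches any value:

```python
# Attribute-based search over the two example directory entries.
ENTRIES = [
    {"C": "NL", "O": "Vrije Universiteit", "OU": "Comp. Sc.",
     "CN": "Main server", "Host_Name": "star",
     "Host_Address": "192.31.231.42"},
    {"C": "NL", "O": "Vrije Universiteit", "OU": "Comp. Sc.",
     "CN": "Main server", "Host_Name": "zephyr",
     "Host_Address": "137.37.20.10"},
]

def search(filt):
    """Return all entries whose attributes match the filter dict;
    the value '*' matches anything."""
    return [e for e in ENTRIES
            if all(v == "*" or e.get(a) == v for a, v in filt.items())]

# The slide's query: &(C=NL)(O=Vrije Universiteit)(OU=*)(CN=Main server)
hits = search({"C": "NL", "O": "Vrije Universiteit",
               "OU": "*", "CN": "Main server"})
```

The query matches both entries, illustrating why attribute-based lookups may, in principle, have to inspect all entities.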
38 / 38
