3 Design
Design Problem
In the general setting: make decisions about the placement of data and programs across the sites of a computer network, as well as possibly designing the network itself.
In a distributed DBMS, the placement of applications entails placement of the distributed DBMS software and placement of the applications that run on the database.
Dimensions of the Problem
  level of sharing: no sharing, data sharing, data + program sharing
  behavior of access patterns: static, dynamic
  level of knowledge on access pattern behavior: no information, partial information, complete information
Distribution Design
Top-down
  mostly in designing systems from scratch
  mostly in homogeneous systems
Bottom-up
  when the databases already exist at a number of sites
Top-Down Design
Requirements Analysis Objectives User Input Conceptual Design GCS View Integration View Design
ESs
User Input
Fragmentation
What is a reasonable unit of distribution?
relation
  views are subsets of relations ⇒ locality
  extra communication
fragments of relations (sub-relations)
  concurrent execution of a number of transactions that access different portions of a relation
  views that cannot be defined on a single fragment will require extra processing
  semantic data control (especially integrity enforcement) more difficult
PROJ1: projects with budgets less than $200,000
PROJ2: projects with budgets greater than or equal to $200,000

PROJ
  PNO  PNAME              BUDGET  LOC
  P1   Instrumentation    150000  Montreal
  P2   Database Develop.  135000  New York
  P3   CAD/CAM            250000  New York
  P4   Maintenance        310000  Paris
  P5   CAD/CAM            500000  Boston

PROJ1
  PNO  PNAME              BUDGET  LOC
  P1   Instrumentation    150000  Montreal
  P2   Database Develop.  135000  New York

PROJ2
  PNO  PNAME              BUDGET  LOC
  P3   CAD/CAM            250000  New York
  P4   Maintenance        310000  Paris
  P5   CAD/CAM            500000  Boston
PROJ1: information about project budgets
PROJ2: information about project names and locations

PROJ
  PNO  PNAME              BUDGET  LOC
  P1   Instrumentation    150000  Montreal
  P2   Database Develop.  135000  New York
  P3   CAD/CAM            250000  New York
  P4   Maintenance        310000  Paris
  P5   CAD/CAM            500000  Boston

PROJ1
  PNO  BUDGET
  P1   150000
  P2   135000
  P3   250000
  P4   310000
  P5   500000

PROJ2
  PNO  PNAME              LOC
  P1   Instrumentation    Montreal
  P2   Database Develop.  New York
  P3   CAD/CAM            New York
  P4   Maintenance        Paris
  P5   CAD/CAM            Boston
Degree of Fragmentation
A finite number of alternatives, ranging between two extremes:
  tuples or attributes (finest)
  relations (coarsest, i.e., no fragmentation)
The task is to find a suitable level of partitioning within this range.
Correctness of Fragmentation
Completeness
  Decomposition of relation R into fragments R1, R2, ..., Rn is complete if and only if each data item in R can also be found in some Ri.
Reconstruction
  If relation R is decomposed into fragments R1, R2, ..., Rn, then there should exist some relational operator ∇ such that R = ∇Ri, for all Ri in FR.
Disjointness
  If relation R is decomposed into fragments R1, R2, ..., Rn, and data item di appears in fragment Rj, then di should not appear in any other fragment Rk, k ≠ j (key attributes excepted in vertical fragmentation).
Allocation Alternatives
Non-replicated
  partitioned: each fragment resides at only one site
Replicated
  fully replicated: each fragment at each site
  partially replicated: each fragment at some of the sites
Rule of thumb:
If (read-only queries / update queries) ≥ 1, replication is advantageous; otherwise replication may cause problems.
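As an illustration only, the ratio test can be phrased as a tiny helper; the function name and the exact threshold below are assumptions, not part of the original slides.

def replication_advantageous(read_only_queries, update_queries):
    """Rule-of-thumb check: replication tends to pay off when reads dominate updates.
    Assumes the simple ratio test (reads / updates >= 1); a real design would also
    weigh storage and update-propagation costs."""
    if update_queries == 0:        # no updates at all: replicating is clearly safe
        return True
    return read_only_queries / update_queries >= 1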
Comparison of Replication Alternatives

                       Full replication       Partial replication   Partitioning
QUERY PROCESSING       Easy                   Same difficulty       Same difficulty
DIRECTORY MANAGEMENT   Easy or non-existent   Same difficulty       Same difficulty
CONCURRENCY CONTROL    Moderate               Difficult             Easy
RELIABILITY            Very high              High                  Low
REALITY                Possible application   Realistic             Possible application
Information Requirements
Four categories:
  Database information
  Application information
  Communication network information
  Computer system information
Fragmentation
  Horizontal fragmentation: primary (PHF) and derived (DHF)
  Vertical fragmentation (VF)
  Hybrid fragmentation
Database Information
  relationships (links) among relations: each link has an owner and a member relation, e.g., link L1 with owner SKILL(TITLE, SAL) and member EMP
  cardinality of each relation: card(R)
Application Information
simple predicates: Given R[A1, A2, ..., An], a simple predicate pj is
  pj : Ai θ Value
where θ ∈ {=, <, ≤, >, ≥, ≠}, Value ∈ Di and Di is the domain of Ai.
For relation R we define Pr = {p1, p2, ..., pm}.
Example:
  PNAME = "Maintenance"
  BUDGET ≤ 200000
minterm predicates: Given R and Pr = {p1, p2, ..., pm}, define M = {m1, m2, ..., mz} as
  M = { mi | mi = ∧ (pj ∈ Pr) pj* }, 1 ≤ j ≤ m, 1 ≤ i ≤ z
where pj* = pj or pj* = ¬pj.
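To make the minterm construction concrete, here is a small Python sketch (the helper name and the string representation of predicates are illustrative choices, not from the slides) that enumerates every conjunction of the simple predicates and their negations:

from itertools import product

def minterm_predicates(simple_predicates):
    """Enumerate all minterm predicates over Pr: every conjunction in which each
    simple predicate pj appears either positively (pj) or negated (NOT pj)."""
    minterms = []
    for signs in product([True, False], repeat=len(simple_predicates)):
        conjuncts = [p if positive else "NOT (" + p + ")"
                     for p, positive in zip(simple_predicates, signs)]
        minterms.append(" AND ".join(conjuncts))
    return minterms

# Example with the PROJ predicates used later in these slides:
Pr = ['LOC = "Montreal"', 'LOC = "New York"', 'LOC = "Paris"',
      "BUDGET <= 200000", "BUDGET > 200000"]
M = minterm_predicates(Pr)   # 2^5 = 32 candidate minterms, before the
                             # contradictory ones are eliminated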
Application Information
minterm selectivities: sel(mi)
  the number of tuples of the relation that would be accessed by a user query specified according to a given minterm predicate mi
access frequencies: acc(qi)
  the frequency with which a user application qi accesses data; access frequency for a minterm predicate can also be defined
Primary Horizontal Fragmentation
Definition:
  Rj = σ Fj (R), 1 ≤ j ≤ w
where Fj is a selection formula, which is (preferably) a minterm predicate.
Therefore,
  A horizontal fragment Ri of relation R consists of all the tuples of R which satisfy a minterm predicate mi.
  Given a set of minterm predicates M, there are as many horizontal fragments of relation R as there are minterm predicates.
  The set of horizontal fragments is also referred to as the set of minterm fragments.
PHF Algorithm
Given: a relation R and the set of simple predicates Pr
Output: the set of fragments of R = {R1, R2, ..., Rw} which obey the fragmentation rules
Preliminaries:
  Pr should be complete
  Pr should be minimal
Completeness of Simple Predicates
A set of simple predicates Pr is said to be complete if and only if accesses to the tuples of the minterm fragments defined on Pr are based only on Pr; i.e., any two tuples of the same minterm fragment have the same probability of being accessed by any application.
Example:
  Assume PROJ[PNO, PNAME, BUDGET, LOC] has two applications defined on it:
  (1) Find the budgets of projects at each location.
  (2) Find projects with budgets less than $200,000.
According to (1), Pr = {LOC = "Montreal", LOC = "New York", LOC = "Paris"}, which is not complete with respect to (2). Adding the budget predicates gives Pr = {LOC = "Montreal", LOC = "New York", LOC = "Paris", BUDGET ≤ 200000, BUDGET > 200000}, which is complete.
Minimality of Simple Predicates
If a predicate influences how fragmentation is performed (i.e., causes a fragment f to be further fragmented into, say, fi and fj), then there should be at least one application that accesses fi and fj differently. In other words, the simple predicate should be relevant in determining a fragmentation. If all the predicates of a set Pr are relevant, then Pr is minimal.
Relevance of a predicate pi (where mi and mj are minterms that differ only in pi, and fi, fj are their fragments):
  acc(mi) / card(fi) ≠ acc(mj) / card(fj)
COM_MIN Algorithm
Given: a relation R and a set of simple predicates Pr Output: a complete and minimal set of simple predicates Pr' for Pr
Rule 1: a relation or fragment is partitioned into at least two parts which are accessed differently by at least one application.
COM_MIN Algorithm
Initialization:
  find a pi ∈ Pr such that pi partitions R according to Rule 1
  set Pr' = {pi}; Pr = Pr − {pi}; F = {fi}
Iteratively add predicates to Pr' until it is complete:
  find a pj ∈ Pr such that pj partitions some fragment fk (defined according to a minterm predicate over Pr') according to Rule 1
  set Pr' = Pr' ∪ {pj}; Pr = Pr − {pj}; F = F ∪ {fj}
  if ∃pk ∈ Pr' which is non-relevant then
    Pr' = Pr' − {pk}
    F = F − {fk}
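A rough Python rendering of COM_MIN under these rules is sketched below; partitions() and is_relevant() are placeholders standing in for the Rule 1 test and the acc/card relevance test, which need application knowledge that the algorithm takes as given.

def com_min(R, Pr, partitions, is_relevant):
    """Sketch of COM_MIN: build a complete and minimal predicate set Pr'.
    partitions(p, fragment) -> True when p splits the fragment into parts that
    some application accesses differently (Rule 1); is_relevant(p, Pr_prime)
    applies the relevance test. Fragments are represented only symbolically."""
    Pr = list(Pr)
    p0 = next(p for p in Pr if partitions(p, R))   # initialization step
    Pr_prime = [p0]
    F = [("fragment", p0)]                         # symbolic minterm fragments
    Pr.remove(p0)
    changed = True
    while changed:                                 # iterate until Pr' is complete
        changed = False
        for pj in list(Pr):
            if any(partitions(pj, fk) for fk in F):
                Pr_prime.append(pj)
                F.append(("fragment", pj))
                Pr.remove(pj)
                changed = True
        for pk in list(Pr_prime):                  # drop non-relevant predicates
            if not is_relevant(pk, Pr_prime):
                Pr_prime.remove(pk)
                F = [f for f in F if f != ("fragment", pk)]
    return Pr_prime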
PHORIZONTAL Algorithm
Makes use of COM_MIN to perform fragmentation.
Input: a relation R and a set of simple predicates Pr
Output: a set of minterm predicates M according to which relation R is to be fragmented
Steps:
  Pr' = COM_MIN(R, Pr)
  determine the set M of minterm predicates over Pr'
  determine the set I of implications among pi ∈ Pr'
  eliminate the contradictory minterms from M
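Continuing the sketches above, PHORIZONTAL itself is then a thin wrapper; contradicts() is an assumed helper that applies the implication set I (for example, BUDGET ≤ 200000 and BUDGET > 200000 cannot both hold).

def phorizontal(R, Pr, partitions, is_relevant, contradicts):
    """Sketch of PHORIZONTAL: compute the minterm predicates that define the
    primary horizontal fragments of R."""
    Pr_prime = com_min(R, Pr, partitions, is_relevant)
    M = minterm_predicates(Pr_prime)               # all 2^|Pr'| candidates
    M = [m for m in M if not contradicts(m)]       # eliminate contradictory minterms
    return M                                       # each minterm defines one fragment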
PHF Example
Fragmentation of relation SKILL
  Application: query employee salary records, which are kept at two sites ⇒ application run at two sites
  Simple predicates
    p1 : SAL ≤ 30000
    p2 : SAL > 30000
  Pr = {p1, p2}, which is complete and minimal; Pr' = Pr
  Minterm predicates
    m1 : (SAL ≤ 30000)
    m2 : ¬(SAL ≤ 30000) = (SAL > 30000)
PHF Example
[Resulting fragments: SKILL1 contains the tuples with SAL ≤ 30000 (e.g., Programmer, 24000) and SKILL2 contains the tuples with SAL > 30000.]
PHF Example
Fragmentation of relation PROJ
  Applications:
    (1) Find the name and budget of projects given their project number; issued at three sites.
    (2) Access project information according to budget; one site accesses ≤ 200000, the other accesses > 200000.
  Simple predicates
    p1 : LOC = "Montreal"
    p2 : LOC = "New York"
    p3 : LOC = "Paris"
    p4 : BUDGET ≤ 200000
    p5 : BUDGET > 200000
  Pr = Pr' = {p1, p2, p3, p4, p5}
PHF Example
Minterm fragments left after elimination of the contradictory minterms:
  m1 : (LOC = "Montreal") ∧ (BUDGET ≤ 200000)
  m2 : (LOC = "Montreal") ∧ (BUDGET > 200000)
  m3 : (LOC = "New York") ∧ (BUDGET ≤ 200000)
  m4 : (LOC = "New York") ∧ (BUDGET > 200000)
  m5 : (LOC = "Paris") ∧ (BUDGET ≤ 200000)
  m6 : (LOC = "Paris") ∧ (BUDGET > 200000)
PHF Example
PROJ1
  PNO  PNAME            BUDGET  LOC
  P1   Instrumentation  150000  Montreal

PROJ2
  PNO  PNAME              BUDGET  LOC
  P2   Database Develop.  135000  New York

PROJ4
  PNO  PNAME    BUDGET  LOC
  P3   CAD/CAM  250000  New York

PROJ6
  PNO  PNAME        BUDGET  LOC
  P4   Maintenance  310000  Paris
PHF Correctness
Completeness
  Since Pr' is complete and minimal, the selection predicates are complete.
Reconstruction
  If relation R is fragmented into FR = {R1, R2, ..., Rr}, then
    R = ∪ Ri, for all Ri ∈ FR
Disjointness
  Minterm predicates that form the basis of fragmentation are mutually exclusive.
Derived Horizontal Fragmentation
Defined on a member relation of a link according to a selection operation specified on its owner.
  Each link is an equijoin.
  Equijoin can be implemented by means of semijoins.
[Link structure of the example database: L1 with owner SKILL(TITLE, SAL) and member EMP(ENO, ENAME, TITLE); L2 with owner EMP and member ASG(ENO, PNO, RESP, DUR); L3 with owner PROJ(PNO, PNAME, BUDGET, LOC) and member ASG.]
DHF Definition
Given a link L where owner(L) = S and member(L) = R, the derived horizontal fragments of R are defined as
  Ri = R ⋉ Si, 1 ≤ i ≤ w
where w is the maximum number of fragments that will be defined on R and
  Si = σ Fi (S)
where Fi is the formula according to which the primary horizontal fragment Si is defined.
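A small in-memory sketch of this semijoin-based definition; representing relations as lists of dicts and the helper names are illustrative assumptions, and the sample salary values are taken as illustrative data matching the example on the next slides.

def semijoin(member, owner_fragment, join_attr):
    """R ⋉ Si: keep the member tuples whose join-attribute value appears in the
    owner fragment. Relations are modeled as lists of dicts."""
    keys = {t[join_attr] for t in owner_fragment}
    return [t for t in member if t[join_attr] in keys]

def derived_fragments(member, owner_fragments, join_attr):
    """Derived horizontal fragmentation: one member fragment per owner fragment."""
    return [semijoin(member, Si, join_attr) for Si in owner_fragments]

# Example mirroring the DHF example below, with TITLE as the join attribute:
SKILL1 = [{"TITLE": "Mech. Eng.", "SAL": 27000}, {"TITLE": "Programmer", "SAL": 24000}]
SKILL2 = [{"TITLE": "Elect. Eng.", "SAL": 40000}, {"TITLE": "Syst. Anal.", "SAL": 34000}]
EMP = [{"ENO": "E1", "ENAME": "J. Doe", "TITLE": "Elect. Eng."},
       {"ENO": "E3", "ENAME": "A. Lee", "TITLE": "Mech. Eng."}]
EMP1, EMP2 = derived_fragments(EMP, [SKILL1, SKILL2], "TITLE")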
DHF Example
Given link L1 where owner(L1) = SKILL and member(L1) = EMP
  EMP1 = EMP ⋉ SKILL1
  EMP2 = EMP ⋉ SKILL2
where
  SKILL1 = σ SAL ≤ 30000 (SKILL)
  SKILL2 = σ SAL > 30000 (SKILL)

EMP1
  ENO  ENAME      TITLE
  E3   A. Lee     Mech. Eng.
  E4   J. Miller  Programmer
  E7   R. Davis   Mech. Eng.

EMP2
  ENO  ENAME     TITLE
  E1   J. Doe    Elect. Eng.
  E2   M. Smith  Syst. Anal.
  E5   B. Casey  Syst. Anal.
  E6   L. Chu    Elect. Eng.
  E8   J. Jones  Syst. Anal.
DHF Correctness
Completeness
  Referential integrity: let R be the member relation of a link whose owner is relation S, which is fragmented as FS = {S1, S2, ..., Sn}. Furthermore, let A be the join attribute between R and S. Then, for each tuple t of R, there should be a tuple t' of S such that t[A] = t'[A].
Reconstruction
  Same as primary horizontal fragmentation.
Disjointness
  Simple join graphs between the owner and the member fragments.
Vertical Fragmentation
More difficult than horizontal, because more alternatives exist.
Two approaches:
  grouping
    attributes to fragments
  splitting
    relation to fragments
Vertical Fragmentation
Overlapping fragments
  grouping
Non-overlapping fragments
  splitting
VF Information Requirements
Application Information
  Attribute affinities
    a measure that indicates how closely related the attributes are
    this is obtained from more primitive usage data
  Attribute usage values
    Given a set of queries Q = {q1, q2, ..., qq} that will run on the relation R[A1, A2, ..., An],
      use(qi, Aj) = 1 if attribute Aj is referenced by query qi, and 0 otherwise
    use(qi, •) can be defined accordingly as the usage vector of query qi.
VF Definition of use(qi,Aj)
Consider the following 4 queries for relation PROJ
  q1: SELECT BUDGET
      FROM PROJ
      WHERE PNO = Value
  q2: SELECT PNAME, BUDGET
      FROM PROJ
  q3: SELECT PNAME
      FROM PROJ
      WHERE LOC = Value
  q4: SELECT SUM(BUDGET)
      FROM PROJ
      WHERE LOC = Value

Let A1 = PNO, A2 = PNAME, A3 = BUDGET and A4 = LOC. The attribute usage matrix is:

        A1  A2  A3  A4
  q1     1   0   1   0
  q2     0   1   1   0
  q3     0   1   0   1
  q4     0   0   1   1
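The usage matrix is mechanical to derive from the attribute sets each query references; the sketch below (the data representation is a choice made here, not something given in the slides) reproduces it.

ATTRS = ["PNO", "PNAME", "BUDGET", "LOC"]     # A1..A4 of PROJ
QUERY_ATTRS = {                               # attributes referenced by q1..q4 above
    "q1": {"PNO", "BUDGET"},
    "q2": {"PNAME", "BUDGET"},
    "q3": {"PNAME", "LOC"},
    "q4": {"BUDGET", "LOC"},
}

def usage_matrix(query_attrs, attrs):
    """use[qi][j] = 1 if query qi references attribute attrs[j], else 0."""
    return {q: [1 if a in used else 0 for a in attrs]
            for q, used in query_attrs.items()}

use = usage_matrix(QUERY_ATTRS, ATTRS)
# use["q1"] == [1, 0, 1, 0], matching the first row of the matrix above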
VF Affinity Measure aff(Ai, Aj)
The attribute affinity measure between two attributes Ai and Aj of relation R with respect to the set of queries Q is

  aff(Ai, Aj) = Σ (over all queries that access both Ai and Aj) query access

where

  query access = Σ (over all sites) access frequency of a query × access execution
The attribute affinity matrix AA for the example is:

       A1  A2  A3  A4
  A1   45   0  45   0
  A2    0  80   5  75
  A3   45   5  53   3
  A4    0  75   3  78
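The AA entries follow from the usage matrix once per-site access frequencies are known. The frequencies below (e.g., acc(q1) = 15, 20, 10 at three sites) do not appear in the surviving slides; they are assumed example values, chosen so that the computation reproduces the matrix shown above, and the sketch reuses the use matrix from the previous sketch.

ACC = {                          # assumed access frequencies of q1..q4 at sites S1..S3
    "q1": [15, 20, 10],
    "q2": [5, 0, 0],
    "q3": [25, 25, 25],
    "q4": [3, 0, 0],
}

def affinity_matrix(use, acc, n_attrs):
    """aff(Ai, Aj): sum, over the queries that use both Ai and Aj, of the query's
    total access frequency over all sites (one access per execution assumed)."""
    AA = [[0] * n_attrs for _ in range(n_attrs)]
    for q, row in use.items():
        total = sum(acc[q])
        for i in range(n_attrs):
            for j in range(n_attrs):
                if row[i] and row[j]:
                    AA[i][j] += total
    return AA

AA = affinity_matrix(use, ACC, 4)
# AA[0] == [45, 0, 45, 0]; AA[2] == [45, 5, 53, 3]  -- as in the matrix above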
VF Clustering Algorithm
Take the attribute affinity matrix AA and reorganize the attribute order to form clusters where the attributes in each cluster demonstrate high affinity to one another.
The Bond Energy Algorithm (BEA) has been used for clustering of entities. BEA finds an ordering of entities (in our case attributes) such that the global affinity measure

  AM = Σi Σj aff(Ai, Aj) [aff(Ai, Aj−1) + aff(Ai, Aj+1)]

is maximized.
where

  bond(Ax, Ay) = Σ (z = 1 to n) aff(Az, Ax) aff(Az, Ay)
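bond and the contribution measure cont used in the example that follows can be computed directly from AA; the helper names below are chosen here for illustration. During BEA placement, bonds involving the boundary columns A0/A(n+1), or attributes not yet placed in CA, are taken as 0.

def bond(AA, x, y):
    """bond(Ax, Ay) = sum over z of aff(Az, Ax) * aff(Az, Ay)."""
    return sum(AA[z][x] * AA[z][y] for z in range(len(AA)))

def cont(AA, i, k, j):
    """Contribution of placing attribute Ak between Ai and Aj; indices outside
    0..n-1 stand for the boundary columns, whose bond with anything is 0."""
    def b(a, c):
        if not (0 <= a < len(AA) and 0 <= c < len(AA)):
            return 0
        return bond(AA, a, c)
    return 2 * b(i, k) + 2 * b(k, j) - 2 * b(i, j)

# With the AA matrix above (indices 0..3 for A1..A4):
# bond(AA, 0, 2) == 4410, bond(AA, 2, 1) == 890, bond(AA, 0, 1) == 225
# cont(AA, -1, 2, 0) == 8820      (ordering A0-A3-A1)
# cont(AA, 0, 2, 1) == 10150      (ordering A1-A3-A2)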
BEA Example
Consider the following AA matrix and the corresponding CA matrix, where A1 and A2 have been placed. Place A3:

  AA:      A1  A2  A3  A4
      A1   45   0  45   0
      A2    0  80   5  75
      A3   45   5  53   3
      A4    0  75   3  78

  CA:      A1  A2
      A1   45   0
      A2    0  80
      A3   45   5
      A4    0  75

Ordering (0-3-1):
  cont(A0, A3, A1) = 2 bond(A0, A3) + 2 bond(A3, A1) − 2 bond(A0, A1)
                   = 2*0 + 2*4410 − 2*0
                   = 8820
Ordering (1-3-2):
  cont(A1, A3, A2) = 2 bond(A1, A3) + 2 bond(A3, A2) − 2 bond(A1, A2)
                   = 2*4410 + 2*890 − 2*225
                   = 10150
Ordering (2-3-4):
  cont(A2, A3, A4) = 1780
BEA Example
Therefore, the CA matrix has the form

       A1  A3  A2
  A1   45  45   0
  A2    0   5  80
  A3   45  53   5
  A4    0   3  75
BEA Example
When A4 is placed, the final form of the CA matrix (after row reorganization) is

       A1  A3  A2  A4
  A1   45  45   0   0
  A3   45  53   5   3
  A2    0   5  80  75
  A4    0   3  75  78
VF Algorithm
How can you divide a set of clustered attributes {A1, A2, ..., An} into two (or more) sets {A1, A2, ..., Ai} and {Ai+1, ..., An} such that there are no (or a minimal number of) applications that access both (or more than one) of the sets?

[Figure: the clustered affinity matrix CA split along its diagonal into a top-left block TA (attributes A1 ... Ai) and a bottom-right block BA (attributes Ai+1 ... Am).]
VF Algorithm
Define
  TQ = set of applications that access only TA
  BQ = set of applications that access only BA
  OQ = set of applications that access both TA and BA
and
  CTQ = total number of accesses to attributes by applications that access only TA
  CBQ = total number of accesses to attributes by applications that access only BA
  COQ = total number of accesses to attributes by applications that access both TA and BA
Then find the point along the diagonal that maximizes

  z = CTQ × CBQ − COQ²
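A sketch of this split search over the clustered attribute order; modeling applications as (attribute set, access count) pairs and the access counts used in the example call are assumptions (the counts reuse the per-site frequencies assumed earlier, summed over sites).

def best_split(ordered_attrs, applications):
    """Find the split point of the clustered order that maximizes
    z = CTQ * CBQ - COQ**2. applications: list of (attributes_used, accesses)."""
    best_z, best_point = None, None
    for i in range(1, len(ordered_attrs)):            # split after position i-1
        TA, BA = set(ordered_attrs[:i]), set(ordered_attrs[i:])
        CTQ = sum(acc for attrs, acc in applications if attrs <= TA)
        CBQ = sum(acc for attrs, acc in applications if attrs <= BA)
        COQ = sum(acc for attrs, acc in applications if attrs & TA and attrs & BA)
        z = CTQ * CBQ - COQ ** 2
        if best_z is None or z > best_z:
            best_z, best_point = z, i
    return best_point, best_z

# Clustered order A1-A3-A2-A4 (PNO, BUDGET, PNAME, LOC) with the four queries above:
apps = [({"PNO", "BUDGET"}, 45), ({"PNAME", "BUDGET"}, 5),
        ({"PNAME", "LOC"}, 75), ({"BUDGET", "LOC"}, 3)]
split, z = best_split(["PNO", "BUDGET", "PNAME", "LOC"], apps)
# split == 2: {PNO, BUDGET} vs {PNAME, LOC}, i.e., the PROJ1/PROJ2 vertical
# fragments shown earlier (the key PNO is then repeated in both fragments)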
VF Algorithm
Two problems:
  Cluster forming in the middle of the CA matrix
    shift a row up and a column left and apply the algorithm to find the "best" partitioning point
    do this for all possible shifts
    cost: O(m²)
  More than two clusters
    m-way partitioning
    try 1, 2, ..., m−1 split points along the diagonal and try each of them as a partitioning point
    cost: O(2^m)
VF Correctness
A relation R, defined over attribute set A and key K, generates the vertical partitioning FR = {R1, R2, ..., Rr}.
Completeness
  The following should be true for A:
    A = ∪ ARi (the union of the attribute sets of the fragments)
Reconstruction
  Reconstruction can be achieved by
    R = ⋈K Ri, for all Ri ∈ FR
Disjointness
  TIDs are not considered to be overlapping since they are maintained by the system.
Hybrid Fragmentation
[Figure: relation R is first horizontally fragmented (HF) into R1 and R2; R1 is then vertically fragmented (VF) into R11 and R12, and R2 is vertically fragmented (VF) into R21, R22 and R23.]
Fragment Allocation
Problem Statement: Given
  F = {F1, F2, ..., Fn}   fragments
  S = {S1, S2, ..., Sm}   network sites
  Q = {q1, q2, ..., qq}   applications
find the "optimal" distribution of F to S.
Optimality
  Minimal cost
    communication + storage + processing (read and update)
    cost in terms of time (usually)
  Performance constraints
    response time and/or throughput
    per-site constraints (storage and processing)
Information Requirements
Database information
  selectivity of fragments
  size of a fragment
Application information
  access types and numbers
  access localities
Allocation
File Allocation (FAP) vs. Database Allocation (DAP):
  Fragments are not individual files
    remote file access model not applicable
    relationship between allocation and query processing
  Cost of integrity enforcement should be considered
  Cost of concurrency control should be considered
Information Requirements
Application information:
  number of read accesses of a query to a fragment
  number of update accesses of a query to a fragment
  a matrix indicating which queries update which fragments
  a similar matrix for retrievals
  originating site of each query
Allocation Model
General Form
  min(Total Cost)
  subject to
    response time constraint
    storage constraint
    processing constraint
Decision variable
  xij = 1 if fragment Fi is stored at site Sj, and 0 otherwise
Allocation Model
Total Cost
  Σ (over all queries) query processing cost
  + Σ (over all sites) Σ (over all fragments) cost of storing a fragment at a site
Allocation Model
Access cost
  Σ (over all sites) Σ (over all fragments) (no. of update accesses + no. of read accesses) × xij × local processing cost at a site
Allocation Model
Cost of updates
  Σ (over all sites) Σ (over all fragments) update message cost
  + Σ (over all sites) Σ (over all fragments) acknowledgment cost
Retrieval cost
  Σ (over all fragments) min over all sites (cost of retrieval command + cost of sending back the result)
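As a toy illustration of how such a model is evaluated for one candidate allocation, the sketch below folds read, update and storage components into a single number; the cost components and parameter names are simplified assumptions, not the exact formulation of the slides.

def total_cost(x, reads, updates, proc_cost, store_cost, size):
    """Evaluate one 0-1 allocation x[i][j] (fragment Fi stored at site Sj).
    reads[q][i] / updates[q][i]: accesses of query q to fragment Fi;
    proc_cost[j], store_cost[j]: unit processing / storage cost at Sj;
    size[i]: size of Fi. Updates hit every copy, reads go to the cheapest copy."""
    n_frag, n_site = len(size), len(proc_cost)
    qp = 0.0
    for q in range(len(reads)):
        for i in range(n_frag):
            qp += updates[q][i] * sum(x[i][j] * proc_cost[j] for j in range(n_site))
            copies = [proc_cost[j] for j in range(n_site) if x[i][j]]
            if reads[q][i] and copies:
                qp += reads[q][i] * min(copies)
    storage = sum(x[i][j] * store_cost[j] * size[i]
                  for i in range(n_frag) for j in range(n_site))
    return qp + storage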
Allocation Model
Constraints
  Response time
    execution time of a query ≤ maximum allowable response time for that query
Allocation Model
Solution Methods
  FAP is NP-complete
  DAP is also NP-complete
Heuristics based on
  single commodity warehouse location (for FAP)
  knapsack problem
  branch-and-bound techniques
  network flow
Allocation Model
Attempts to reduce the solution space
  assume all candidate partitionings are known; select the "best" partitioning
  ignore replication at first
  sliding window on fragments