3 Design

Distributed DBMS
© 1998 M. Tamer Özsu & Patrick Valduriez

Outline

- Introduction
- Background
- Distributed DBMS Architecture
- Distributed Database Design
  - Fragmentation
  - Data Location
- Semantic Data Control
- Distributed Query Processing
- Distributed Transaction Management
- Parallel Database Systems
- Distributed Object DBMS
- Database Interoperability
- Current Issues

Design Problem

- In the general setting:
  Making decisions about the placement of data and programs across the sites of a computer
  network, as well as possibly designing the network itself.
- In a distributed DBMS, the placement of applications entails
  - placement of the distributed DBMS software; and
  - placement of the applications that run on the database.

Dimensions of the Problem

[Figure: three dimensions of the design problem]
- Level of sharing: data; data + program
- Access pattern behavior: static; dynamic
- Level of knowledge about access patterns: partial information; complete information

Distribution Design

- Top-down
  - mostly in designing systems from scratch
  - mostly in homogeneous systems
- Bottom-up
  - when the databases already exist at a number of sites

Top-Down Design

[Figure: top-down design process. Requirements analysis produces system objectives; view design
(with user input) produces external schemas (ESs) and conceptual design produces the global
conceptual schema (GCS), reconciled through view integration. Access information feeds
distribution design, which produces local conceptual schemas (LCSs); physical design then
produces local internal schemas (LISs), with user input throughout.]

Distribution Design Issues

- Why fragment at all?
- How to fragment?
- How much to fragment?
- How to test correctness?
- How to allocate?
- Information requirements?

Fragmentation

- Can't we just distribute relations?
- What is a reasonable unit of distribution?
  - relation
    - views are subsets of relations => locality
    - extra communication
  - fragments of relations (sub-relations)
    - concurrent execution of a number of transactions that access different portions of a relation
    - views that cannot be defined on a single fragment will require extra processing
    - semantic data control (especially integrity enforcement) is more difficult

Fragmentation Alternatives: Horizontal

PROJ1: projects with budgets less than $200,000
PROJ2: projects with budgets greater than or equal to $200,000

PROJ
  PNO  PNAME              BUDGET   LOC
  P1   Instrumentation    150000   Montreal
  P2   Database Develop.  135000   New York
  P3   CAD/CAM            250000   New York
  P4   Maintenance        310000   Paris
  P5   CAD/CAM            500000   Boston

PROJ1
  PNO  PNAME              BUDGET   LOC
  P1   Instrumentation    150000   Montreal
  P2   Database Develop.  135000   New York

PROJ2
  PNO  PNAME              BUDGET   LOC
  P3   CAD/CAM            250000   New York
  P4   Maintenance        310000   Paris
  P5   CAD/CAM            500000   Boston

Fragmentation Alternatives: Vertical

PROJ1: information about project budgets
PROJ2: information about project names and locations

PROJ
  PNO  PNAME              BUDGET   LOC
  P1   Instrumentation    150000   Montreal
  P2   Database Develop.  135000   New York
  P3   CAD/CAM            250000   New York
  P4   Maintenance        310000   Paris
  P5   CAD/CAM            500000   Boston

PROJ1
  PNO  BUDGET
  P1   150000
  P2   135000
  P3   250000
  P4   310000
  P5   500000

PROJ2
  PNO  PNAME              LOC
  P1   Instrumentation    Montreal
  P2   Database Develop.  New York
  P3   CAD/CAM            New York
  P4   Maintenance        Paris
  P5   CAD/CAM            Boston

Degree of Fragmentation

- There is a finite number of alternatives, on a spectrum ranging from individual tuples or
  attributes at one extreme to whole relations at the other.
- The problem is finding the suitable level of partitioning within this range.

Correctness of Fragmentation

- Completeness
  Decomposition of relation R into fragments R1, R2, ..., Rn is complete if and only if each
  data item in R can also be found in some Ri.
- Reconstruction
  If relation R is decomposed into fragments R1, R2, ..., Rn, then there should exist some
  relational operator ∇ such that
    R = ∇ Ri, 1 ≤ i ≤ n
- Disjointness
  If relation R is decomposed into fragments R1, R2, ..., Rn, and data item di is in Rj, then
  di should not be in any other fragment Rk (k ≠ j).

Allocation Alternatives

- Non-replicated
  - partitioned: each fragment resides at only one site
- Replicated
  - fully replicated: each fragment at each site
  - partially replicated: each fragment at some of the sites
- Rule of thumb:
  If (read-only queries) / (update queries) ≥ 1, replication is advantageous;
  otherwise replication may cause problems.

Comparison of Replication Alternatives

                          Full replication       Partial replication    Partitioning
  QUERY PROCESSING        Easy                   Same difficulty        Same difficulty
  DIRECTORY MANAGEMENT    Easy or non-existent   Same difficulty        Same difficulty
  CONCURRENCY CONTROL     Moderate               Difficult              Easy
  RELIABILITY             Very high              High                   Low
  REALITY                 Possible application   Realistic              Possible application

Information Requirements

- Four categories:
  - Database information
  - Application information
  - Communication network information
  - Computer system information

Fragmentation

- Horizontal Fragmentation (HF)
  - Primary Horizontal Fragmentation (PHF)
  - Derived Horizontal Fragmentation (DHF)
- Vertical Fragmentation (VF)
- Hybrid Fragmentation (HF)

PHF - Information Requirements

- Database information
  - relationships (links) among relations, e.g.:
      L1: SKILL(TITLE, SAL) -> EMP(ENO, ENAME, TITLE)
      L2: EMP(ENO, ENAME, TITLE) -> ASG(ENO, PNO, RESP, DUR)
      L3: PROJ(PNO, PNAME, BUDGET, LOC) -> ASG(ENO, PNO, RESP, DUR)
    (owner -> member for each link)
  - cardinality of each relation: card(R)

PHF - Information Requirements

- Application information
  - simple predicates: Given R[A1, A2, ..., An], a simple predicate pj is
      pj : Ai θ Value
    where θ ∈ {=, <, ≤, >, ≥, ≠}, Value ∈ Di, and Di is the domain of Ai.
    For relation R we define Pr = {p1, p2, ..., pm}.
    Example:
      PNAME = "Maintenance"
      BUDGET ≤ 200000
  - minterm predicates: Given R and Pr = {p1, p2, ..., pm}, define M = {m1, m2, ..., mz} as
      M = { mi | mi = ⋀ pj*, pj ∈ Pr }, 1 ≤ j ≤ m, 1 ≤ i ≤ z
    where pj* = pj or pj* = ¬(pj).

PHF - Information Requirements

- Example:
    m1: PNAME = "Maintenance" ∧ (BUDGET ≤ 200000)
    m2: NOT(PNAME = "Maintenance") ∧ (BUDGET ≤ 200000)
    m3: PNAME = "Maintenance" ∧ NOT(BUDGET ≤ 200000)
    m4: NOT(PNAME = "Maintenance") ∧ NOT(BUDGET ≤ 200000)

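The enumeration of M from Pr is mechanical; the short Python sketch below illustrates it. The
predicate strings and the itertools-based generation are illustrative assumptions, not part of
the original material.

    from itertools import product

    # Simple predicates Pr for PROJ (from the example above).
    Pr = ['PNAME = "Maintenance"', 'BUDGET <= 200000']

    # Each minterm takes every simple predicate either positively or negated.
    minterms = [
        " AND ".join(p if keep else f"NOT({p})" for p, keep in zip(Pr, choice))
        for choice in product([True, False], repeat=len(Pr))
    ]

    for i, m in enumerate(minterms, 1):
        print(f"m{i}: {m}")
    # Produces the four minterms m1..m4 listed above (2^|Pr| in general).
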
PHF - Information Requirements

- Application information
  - minterm selectivities: sel(mi)
    The number of tuples of the relation that would be accessed by a user query specified
    according to a given minterm predicate mi.
  - access frequencies: acc(qi)
    The frequency with which a user application qi accesses data. Access frequency for a
    minterm predicate can also be defined.

Primary Horizontal Fragmentation

- Definition:
    Rj = σFj(R), 1 ≤ j ≤ w
  where Fj is a selection formula, which is (preferably) a minterm predicate.
- Therefore,
  - A horizontal fragment Ri of relation R consists of all the tuples of R that satisfy a
    minterm predicate mi.
  - Given a set of minterm predicates M, there are as many horizontal fragments of relation R
    as there are minterm predicates.
  - This set of horizontal fragments is also referred to as the set of minterm fragments.

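As an illustration of the definition Rj = σFj(R), the sketch below materializes minterm
fragments as plain tuple filters. The dictionary-based tuples and the helper name select are
assumptions made for the example.

    # A few PROJ tuples from the running example.
    PROJ = [
        {"PNO": "P1", "PNAME": "Instrumentation",   "BUDGET": 150000, "LOC": "Montreal"},
        {"PNO": "P2", "PNAME": "Database Develop.", "BUDGET": 135000, "LOC": "New York"},
        {"PNO": "P3", "PNAME": "CAD/CAM",           "BUDGET": 250000, "LOC": "New York"},
    ]

    def select(relation, minterm):
        """Horizontal fragment: all tuples of the relation satisfying the minterm."""
        return [t for t in relation if minterm(t)]

    m1 = lambda t: t["PNAME"] == "Maintenance" and t["BUDGET"] <= 200000
    m4 = lambda t: t["PNAME"] != "Maintenance" and not (t["BUDGET"] <= 200000)

    PROJ_m1 = select(PROJ, m1)   # empty for these tuples
    PROJ_m4 = select(PROJ, m4)   # [P3]
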
PHF - Algorithm

Given: a relation R and the set of simple predicates Pr
Output: the set of fragments of R, {R1, R2, ..., Rw}, which obey the fragmentation rules
Preliminaries:
  - Pr should be complete
  - Pr should be minimal

Completeness of Simple Predicates

- A set of simple predicates Pr is said to be complete if and only if any two tuples belonging
  to the same minterm fragment defined on Pr have the same probability of being accessed by
  any application.
- Example:
  Assume PROJ[PNO, PNAME, BUDGET, LOC] has two applications defined on it:
  (1) Find the budgets of projects at each location.
  (2) Find projects with budgets less than $200,000.

Completeness of Simple Predicates

- According to application (1),
    Pr = {LOC=Montreal, LOC=New York, LOC=Paris}
  which is not complete with respect to application (2).
- Modify it to
    Pr = {LOC=Montreal, LOC=New York, LOC=Paris, BUDGET≤200000, BUDGET>200000}
  which is complete.

Minimality of Simple Predicates

- If a predicate influences how fragmentation is performed (i.e., causes a fragment f to be
  further fragmented into, say, fi and fj), then there should be at least one application that
  accesses fi and fj differently.
- In other words, the simple predicate should be relevant in determining a fragmentation.
- If all the predicates of a set Pr are relevant, then Pr is minimal.
- Relevance condition:
    acc(mi) / card(fi) ≠ acc(mj) / card(fj)

Minimality of Simple Predicates

- Example:
    Pr = {LOC=Montreal, LOC=New York, LOC=Paris, BUDGET≤200000, BUDGET>200000}
  is minimal (in addition to being complete).
- However, if we add
    PNAME = "Instrumentation"
  then Pr is no longer minimal.

COM_MIN Algorithm

Given: a relation R and a set of simple predicates Pr
Output: a complete and minimal set of simple predicates Pr' for Pr

Rule 1: a relation or fragment is partitioned into at least two parts which are accessed
differently by at least one application.

COM_MIN Algorithm

Initialization:
  - find a pi ∈ Pr such that pi partitions R according to Rule 1
  - set Pr' = {pi}; Pr = Pr − {pi}; F = {fi}

Iteratively add predicates to Pr' until it is complete:
  - find a pj ∈ Pr such that pj partitions some fragment fk, defined by a minterm predicate
    over Pr', according to Rule 1
  - set Pr' = Pr' ∪ {pj}; Pr = Pr − {pj}; F = F ∪ {fj}
  - if ∃ pk ∈ Pr' which is non-relevant, then
      Pr' = Pr' − {pk}; F = F − {fk}

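A rough Python sketch of the COM_MIN loop above. Rule 1 depends on application access
information that the slides do not formalize, so it is passed in as a caller-supplied function
splits_differently; that interface and the fragment bookkeeping are assumptions, not the
authors' code.

    from itertools import product

    def minterm_fragments(R, preds):
        """Non-empty minterm fragments of R induced by predicate callables `preds`."""
        frags = []
        for signs in product([True, False], repeat=len(preds)):
            f = [t for t in R if all(p(t) == s for p, s in zip(preds, signs))]
            if f:
                frags.append(f)
        return frags

    def com_min(R, Pr, splits_differently):
        """Sketch of COM_MIN.

        splits_differently(p, fragment) stands in for Rule 1: True if predicate p
        partitions `fragment` into parts accessed differently by some application.
        """
        Pr = list(Pr)
        # Initialization: pick a predicate that partitions R according to Rule 1.
        p0 = next(p for p in Pr if splits_differently(p, R))
        Pr_prime = [p0]
        Pr.remove(p0)

        added = True
        while added:                      # add predicates until Pr' is complete
            added = False
            for p in list(Pr):
                if any(splits_differently(p, f)
                       for f in minterm_fragments(R, Pr_prime)):
                    Pr_prime.append(p)
                    Pr.remove(p)
                    added = True
            # prune predicates that have become non-relevant
            Pr_prime = [q for q in Pr_prime
                        if any(splits_differently(q, f)
                               for f in minterm_fragments(R, [x for x in Pr_prime if x is not q]))]
        return Pr_prime
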
PHORIZONTAL Algorithm

Makes use of COM_MIN to perform fragmentation.

Input: a relation R and a set of simple predicates Pr
Output: a set of minterm predicates M according to which relation R is to be fragmented

  1. Pr' = COM_MIN(R, Pr)
  2. determine the set M of minterm predicates
  3. determine the set I of implications among pi ∈ Pr'
  4. eliminate the contradictory minterms from M

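A sketch of the last two steps of PHORIZONTAL: enumerate the minterms over Pr' and discard
those that contradict known implications. Only one kind of implication is modeled here (pairs
of predicates that cannot both hold, such as two different LOC values); that encoding is an
assumption made for illustration.

    from itertools import product

    def phorizontal(Pr_prime, mutually_exclusive):
        """Enumerate minterms over Pr' and drop contradictory ones.

        mutually_exclusive: pairs (p, q) that cannot both be true
        (e.g. "LOC=Montreal" and "LOC=New York").
        """
        result = []
        for signs in product([True, False], repeat=len(Pr_prime)):
            truth = dict(zip(Pr_prime, signs))
            if any(truth[p] and truth[q] for p, q in mutually_exclusive):
                continue                  # contradictory minterm, eliminate it
            result.append(truth)
        return result

    # e.g. declaring the three LOC predicates pairwise exclusive eliminates every
    # minterm that asserts two locations at once.
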
PHF - Example

- Two candidate relations: PAY and PROJ.
- Fragmentation of relation PAY
  - Application: check the salary info and determine raises.
  - Employee records are kept at two sites, so the application runs at two sites.
  - Simple predicates:
      p1 : SAL ≤ 30000
      p2 : SAL > 30000
      Pr = {p1, p2}, which is complete and minimal; Pr' = Pr
  - Minterm predicates:
      m1 : SAL ≤ 30000
      m2 : NOT(SAL ≤ 30000) = SAL > 30000

PHF - Example

PAY1
  TITLE        SAL
  Mech. Eng.   27000
  Programmer   24000

PAY2
  TITLE        SAL
  Elect. Eng.  40000
  Syst. Anal.  34000

PHF - Example

- Fragmentation of relation PROJ
  - Applications:
    (1) Find the name and budget of projects given their project number; issued at three sites.
    (2) Access project information according to budget; one site accesses projects with
        BUDGET ≤ 200000, the other accesses those with BUDGET > 200000.
  - Simple predicates:
    For application (1):
      p1 : LOC = Montreal
      p2 : LOC = New York
      p3 : LOC = Paris
    For application (2):
      p4 : BUDGET ≤ 200000
      p5 : BUDGET > 200000
  - Pr = Pr' = {p1, p2, p3, p4, p5}

PHF - Example

- Fragmentation of relation PROJ (continued)
  - Minterm fragments left after elimination:
      m1 : (LOC = Montreal) ∧ (BUDGET ≤ 200000)
      m2 : (LOC = Montreal) ∧ (BUDGET > 200000)
      m3 : (LOC = New York) ∧ (BUDGET ≤ 200000)
      m4 : (LOC = New York) ∧ (BUDGET > 200000)
      m5 : (LOC = Paris) ∧ (BUDGET ≤ 200000)
      m6 : (LOC = Paris) ∧ (BUDGET > 200000)

PHF - Example

PROJ1
  PNO  PNAME            BUDGET   LOC
  P1   Instrumentation  150000   Montreal

PROJ2
  PNO  PNAME              BUDGET   LOC
  P2   Database Develop.  135000   New York

PROJ4
  PNO  PNAME    BUDGET   LOC
  P3   CAD/CAM  250000   New York

PROJ6
  PNO  PNAME        BUDGET   LOC
  P4   Maintenance  310000   Paris

PHF - Correctness

- Completeness
  Since Pr' is complete and minimal, the selection predicates are complete.
- Reconstruction
  If relation R is fragmented into FR = {R1, R2, ..., Rr}, then
    R = ∪ Ri, for all Ri ∈ FR
- Disjointness
  Minterm predicates that form the basis of fragmentation should be mutually exclusive.

Derived Horizontal Fragmentation

- Defined on a member relation of a link according to a selection operation specified on
  its owner.
  - Each link is an equijoin.
  - An equijoin can be implemented by means of semijoins.

  Links (owner -> member):
    L1: SKILL(TITLE, SAL) -> EMP(ENO, ENAME, TITLE)
    L2: EMP(ENO, ENAME, TITLE) -> ASG(ENO, PNO, RESP, DUR)
    L3: PROJ(PNO, PNAME, BUDGET, LOC) -> ASG(ENO, PNO, RESP, DUR)

DHF - Definition

Given a link L where owner(L) = S and member(L) = R, the derived horizontal fragments of R
are defined as
    Ri = R ⋉ Si, 1 ≤ i ≤ w
where w is the maximum number of fragments that will be defined on R, and
    Si = σFi(S)
where Fi is the formula according to which the primary horizontal fragment Si is defined.

DHF - Example

Given link L1 where owner(L1) = SKILL and member(L1) = EMP:
    EMP1 = EMP ⋉ SKILL1
    EMP2 = EMP ⋉ SKILL2
where
    SKILL1 = σSAL≤30000(SKILL)
    SKILL2 = σSAL>30000(SKILL)

EMP1
  ENO  ENAME      TITLE
  E3   A. Lee     Mech. Eng.
  E4   J. Miller  Programmer
  E7   R. Davis   Mech. Eng.

EMP2
  ENO  ENAME     TITLE
  E1   J. Doe    Elect. Eng.
  E2   M. Smith  Syst. Anal.
  E5   B. Casey  Syst. Anal.
  E6   L. Chu    Elect. Eng.
  E8   J. Jones  Syst. Anal.

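A sketch of the derived fragmentation EMPi = EMP ⋉ SKILLi: keep the EMP tuples whose
join-attribute value appears in the corresponding owner fragment. The dictionary-based tuples
(a subset of the tables above) and the helper semijoin are illustrative assumptions.

    SKILL = [
        {"TITLE": "Mech. Eng.",  "SAL": 27000},
        {"TITLE": "Programmer",  "SAL": 24000},
        {"TITLE": "Elect. Eng.", "SAL": 40000},
        {"TITLE": "Syst. Anal.", "SAL": 34000},
    ]
    EMP = [
        {"ENO": "E1", "ENAME": "J. Doe",    "TITLE": "Elect. Eng."},
        {"ENO": "E3", "ENAME": "A. Lee",    "TITLE": "Mech. Eng."},
        {"ENO": "E4", "ENAME": "J. Miller", "TITLE": "Programmer"},
    ]

    def semijoin(member, owner_fragment, attr):
        """member ⋉ owner_fragment on `attr`: member tuples with a matching owner tuple."""
        keys = {t[attr] for t in owner_fragment}
        return [t for t in member if t[attr] in keys]

    SKILL1 = [t for t in SKILL if t["SAL"] <= 30000]   # sigma_{SAL<=30000}(SKILL)
    SKILL2 = [t for t in SKILL if t["SAL"] > 30000]
    EMP1 = semijoin(EMP, SKILL1, "TITLE")              # E3, E4
    EMP2 = semijoin(EMP, SKILL2, "TITLE")              # E1
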
DHF - Correctness

- Completeness
  Referential integrity: let R be the member relation of a link whose owner is relation S,
  which is fragmented as FS = {S1, S2, ..., Sn}. Furthermore, let A be the join attribute
  between R and S. Then, for each tuple t of R, there should be a tuple t' of S such that
  t[A] = t'[A].
- Reconstruction
  Same as primary horizontal fragmentation.
- Disjointness
  Simple join graphs between the owner and the member fragments.

Vertical Fragmentation

- Has been studied within the centralized context
  - design methodology
  - physical clustering
- More difficult than horizontal fragmentation, because more alternatives exist.
- Two approaches:
  - grouping: attributes to fragments
  - splitting: relation to fragments

Vertical Fragmentation

- Overlapping fragments
  - grouping
- Non-overlapping fragments
  - splitting
- We do not consider the replicated key attributes to be overlapping.
- Advantage:
  Easier to enforce functional dependencies (for integrity checking, etc.)

VF - Information Requirements

- Application information
  - Attribute affinities
    - a measure that indicates how closely related the attributes are
    - obtained from more primitive usage data
  - Attribute usage values
    - Given a set of queries Q = {q1, q2, ..., qq} that will run on relation R[A1, A2, ..., An],

        use(qi, Aj) = 1 if attribute Aj is referenced by query qi
                      0 otherwise

    - use(qi, •) can be defined accordingly

VF - Definition of use(qi, Aj)

Consider the following 4 queries for relation PROJ:

  q1: SELECT BUDGET FROM PROJ WHERE PNO = Value
  q2: SELECT PNAME, BUDGET FROM PROJ
  q3: SELECT PNAME FROM PROJ WHERE LOC = Value
  q4: SELECT SUM(BUDGET) FROM PROJ WHERE LOC = Value

Let A1 = PNO, A2 = PNAME, A3 = BUDGET, A4 = LOC. Then

        A1   A2   A3   A4
  q1     1    0    1    0
  q2     0    1    1    0
  q3     0    1    0    1
  q4     0    0    1    1

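The usage matrix can be derived mechanically from the attributes each query references; a
small sketch follows. The representation of queries as attribute sets is an assumption made
for illustration.

    attributes = ["PNO", "PNAME", "BUDGET", "LOC"]           # A1..A4
    # Attributes referenced by q1..q4 (SELECT and WHERE clauses above).
    query_attrs = {
        "q1": {"PNO", "BUDGET"},
        "q2": {"PNAME", "BUDGET"},
        "q3": {"PNAME", "LOC"},
        "q4": {"BUDGET", "LOC"},
    }

    use = {(q, a): int(a in attrs)
           for q, attrs in query_attrs.items()
           for a in attributes}

    # e.g. use[("q1", "BUDGET")] == 1 and use[("q1", "LOC")] == 0,
    # reproducing the 4x4 matrix on this slide.
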
VF - Affinity Measure aff(Ai, Aj)

The attribute affinity measure between two attributes Ai and Aj of a relation
R[A1, A2, ..., An], with respect to the set of applications Q = {q1, q2, ..., qq}, is defined
as follows:

  aff(Ai, Aj) = Σ (query access), over all queries that access both Ai and Aj

  query access = Σ over all sites (access frequency of the query * accesses per execution)

VF - Calculation of aff(Ai, Aj)

Assume each query in the previous example accesses the attributes once during each execution,
and assume the following access frequencies:

        S1   S2   S3
  q1    15   20   10
  q2     5    0    0
  q3    25   25   25
  q4     3    0    0

Then
  aff(A1, A3) = 15*1 + 20*1 + 10*1 = 45

and the attribute affinity matrix AA is

        A1   A2   A3   A4
  A1    45    0   45    0
  A2     0   80    5   75
  A3    45    5   53    3
  A4     0   75    3   78

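A sketch of the aff computation under the slide's assumption of one access per execution; it
reproduces aff(A1, A3) = 45 and the AA matrix above. The data structures are illustrative
assumptions.

    attributes = ["A1", "A2", "A3", "A4"]
    use = {                      # usage matrix from the previous slide
        "q1": {"A1", "A3"},
        "q2": {"A2", "A3"},
        "q3": {"A2", "A4"},
        "q4": {"A3", "A4"},
    }
    acc = {                      # access frequencies per site S1, S2, S3
        "q1": [15, 20, 10], "q2": [5, 0, 0], "q3": [25, 25, 25], "q4": [3, 0, 0],
    }

    def aff(Ai, Aj):
        """Sum, over queries that use both Ai and Aj, of their total access frequency."""
        return sum(sum(acc[q]) for q, attrs in use.items()
                   if Ai in attrs and Aj in attrs)

    assert aff("A1", "A3") == 45                       # 15 + 20 + 10
    AA = [[aff(a, b) for b in attributes] for a in attributes]
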
VF - Clustering Algorithm

- Take the attribute affinity matrix AA and reorganize the attribute order to form clusters
  where the attributes in each cluster demonstrate high affinity to one another.
- The Bond Energy Algorithm (BEA) has been used for clustering of entities. BEA finds an
  ordering of entities (in our case attributes) such that the global affinity measure

    AM = Σi Σj (affinity of Ai and Aj with their neighbors)

  is maximized.

Bond Energy Algorithm

Input: the AA matrix
Output: the clustered affinity matrix CA, which is a perturbation of AA

  - Initialization: place and fix one of the columns of AA in CA.
  - Iteration: place the remaining n-i columns in the remaining i+1 positions in the CA matrix.
    For each column, choose the placement that makes the largest contribution to the global
    affinity measure.
  - Row order: order the rows according to the column ordering.

Bond Energy Algorithm

- Best placement? Define the contribution of a placement:

    cont(Ai, Ak, Aj) = 2*bond(Ai, Ak) + 2*bond(Ak, Aj) - 2*bond(Ai, Aj)

  where

    bond(Ax, Ay) = Σ (z = 1 to n) aff(Az, Ax) * aff(Az, Ay)

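A sketch of bond and cont. Positions outside the current ordering (the fictitious A0, or a
column not yet placed) are treated as all-zero columns, which is why cont(A2, A3, A4) reduces
to 2*bond(A2, A3) = 1780 while A4 is still unplaced. The matrix encoding is an assumption.

    AA = {                        # attribute affinity matrix from the previous slides
        "A1": {"A1": 45, "A2": 0,  "A3": 45, "A4": 0},
        "A2": {"A1": 0,  "A2": 80, "A3": 5,  "A4": 75},
        "A3": {"A1": 45, "A2": 5,  "A3": 53, "A4": 3},
        "A4": {"A1": 0,  "A2": 75, "A3": 3,  "A4": 78},
    }

    def bond(Ax, Ay):
        """bond(Ax, Ay) = sum over z of aff(Az, Ax) * aff(Az, Ay); 0 for empty slots."""
        if Ax is None or Ay is None:          # A0 or a not-yet-placed column
            return 0
        return sum(AA[Az][Ax] * AA[Az][Ay] for Az in AA)

    def cont(Ai, Ak, Aj):
        """Contribution of placing Ak between Ai and Aj."""
        return 2 * bond(Ai, Ak) + 2 * bond(Ak, Aj) - 2 * bond(Ai, Aj)

    assert bond("A1", "A3") == 4410
    assert cont(None, "A3", "A1") == 8820      # ordering (0-3-1)
    assert cont("A1", "A3", "A2") == 10150     # ordering (1-3-2)
    assert cont("A2", "A3", None) == 1780      # ordering (2-3-4), A4 not placed
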
BEA Example

Consider the following AA matrix and the corresponding CA matrix where A1 and A2 have been
placed. Place A3:

  AA:       A1   A2   A3   A4
      A1    45    0   45    0
      A2     0   80    5   75
      A3    45    5   53    3
      A4     0   75    3   78

  CA (A1 and A2 placed):
            A1   A2
      A1    45    0
      A2     0   80
      A3    45    5
      A4     0   75

Ordering (0-3-1):
  cont(A0, A3, A1) = 2*bond(A0, A3) + 2*bond(A3, A1) - 2*bond(A0, A1)
                   = 2*0 + 2*4410 - 2*0 = 8820
Ordering (1-3-2):
  cont(A1, A3, A2) = 2*bond(A1, A3) + 2*bond(A3, A2) - 2*bond(A1, A2)
                   = 2*4410 + 2*890 - 2*225 = 10150
Ordering (2-3-4):
  cont(A2, A3, A4) = 1780

BEA Example

Therefore, the CA matrix has the form

        A1   A3   A2
  A1    45   45    0
  A2     0    5   80
  A3    45   53    5
  A4     0    3   75

BEA Example

When A4 is placed, the final form of the CA matrix (after row reorganization) is

        A1   A3   A2   A4
  A1    45   45    0    0
  A3    45   53    5    3
  A2     0    5   80   75
  A4     0    3   75   78

VF - Algorithm

How can we divide a set of clustered attributes {A1, A2, ..., An} into two (or more) sets
{A1, ..., Ai} (TA, the top part of the clustered matrix) and {Ai+1, ..., An} (BA, the bottom
part) such that there are no (or a minimal number of) applications that access both (or more
than one) of the sets?

VF - Algorithm

Define
  TQ = set of applications that access only TA
  BQ = set of applications that access only BA
  OQ = set of applications that access both TA and BA
and
  CTQ = total number of accesses to attributes by applications that access only TA
  CBQ = total number of accesses to attributes by applications that access only BA
  COQ = total number of accesses to attributes by applications that access both TA and BA

Then find the point along the diagonal that maximizes

  z = CTQ * CBQ - COQ^2

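A sketch of the split-point search along the diagonal of CA: for each candidate split, compute
CTQ, CBQ and COQ from the usage data and maximize CTQ*CBQ - COQ^2. The inputs repeat the
earlier PROJ example; with them the best split is TA = {A1, A3} and BA = {A2, A4}, which (up to
the replicated key PNO) matches the vertical fragments shown earlier.

    order = ["A1", "A3", "A2", "A4"]                 # clustered attribute order (CA)
    use = {"q1": {"A1", "A3"}, "q2": {"A2", "A3"},
           "q3": {"A2", "A4"}, "q4": {"A3", "A4"}}
    acc = {"q1": 45, "q2": 5, "q3": 75, "q4": 3}     # total accesses per query

    def best_split(order, use, acc):
        """Try every split point and return the one maximizing CTQ*CBQ - COQ^2."""
        best = None
        for i in range(1, len(order)):
            TA, BA = set(order[:i]), set(order[i:])
            CTQ = sum(acc[q] for q, a in use.items() if a <= TA)
            CBQ = sum(acc[q] for q, a in use.items() if a <= BA)
            COQ = sum(acc[q] for q, a in use.items() if a & TA and a & BA)
            z = CTQ * CBQ - COQ * COQ
            if best is None or z > best[0]:
                best = (z, TA, BA)
        return best

    print(best_split(order, use, acc))   # (3311, {'A1', 'A3'}, {'A2', 'A4'})
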
VF - Algorithm

Two problems:

- Cluster forming in the middle of the CA matrix
  - Shift a row up and a column left and apply the algorithm to find the best partitioning point.
  - Do this for all possible shifts.
  - Cost: O(m^2)
- More than two clusters
  - m-way partitioning
  - try 1, 2, ..., m-1 split points along the diagonal and try to find the best point for each
    of these
  - Cost: O(2^m)

VF - Correctness

A relation R, defined over attribute set A and key K, generates the vertical partitioning
FR = {R1, R2, ..., Rr}.

- Completeness
  The following should be true for A:
    A = ∪ ARi
  (the union of the attribute sets of the fragments is A).
- Reconstruction
  Reconstruction can be achieved by
    R = ⋈K Ri, for all Ri ∈ FR
- Disjointness
  - TIDs are not considered to be overlapping, since they are maintained by the system.
  - Duplicated keys are not considered to be overlapping.

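A sketch of reconstruction by natural join on the key K. The fragments mirror the PROJ1/PROJ2
example from the vertical fragmentation slide; the helper name is an assumption.

    PROJ1 = [{"PNO": "P1", "BUDGET": 150000}, {"PNO": "P2", "BUDGET": 135000}]
    PROJ2 = [{"PNO": "P1", "PNAME": "Instrumentation",   "LOC": "Montreal"},
             {"PNO": "P2", "PNAME": "Database Develop.", "LOC": "New York"}]

    def join_on_key(frag_a, frag_b, key):
        """Natural join of two vertical fragments on the shared key attribute."""
        index = {t[key]: t for t in frag_b}
        return [{**t, **index[t[key]]} for t in frag_a if t[key] in index]

    PROJ = join_on_key(PROJ1, PROJ2, "PNO")   # recovers the original PROJ tuples
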
Hybrid Fragmentation

[Figure: a relation R is first horizontally fragmented (HF) into R1 and R2; R1 is then
vertically fragmented (VF) into R11 and R12, and R2 is vertically fragmented into R21, R22
and R23.]

Fragment Allocation

- Problem statement
  Given
    F = {F1, F2, ..., Fn}    fragments
    S = {S1, S2, ..., Sm}    network sites
    Q = {q1, q2, ..., qq}    applications
  find the "optimal" distribution of F to S.
- Optimality
  - Minimal cost
    - communication + storage + processing (read & update)
    - cost in terms of time (usually)
  - Performance
    - response time and/or throughput
  - Constraints
    - per-site constraints (storage & processing)

Information Requirements

- Database information
  - selectivity of fragments
  - size of a fragment
- Application information
  - access types and numbers
  - access localities
- Communication network information
  - bandwidth
  - latency
  - communication overhead
- Computer system information
  - unit cost of storing data at a site
  - unit cost of processing at a site

Allocation

File Allocation Problem (FAP) vs Database Allocation Problem (DAP):

- Fragments are not individual files
  - relationships have to be maintained
- Access to databases is more complicated
  - remote file access model not applicable
  - relationship between allocation and query processing
- Cost of integrity enforcement should be considered
- Cost of concurrency control should be considered

Allocation - Information Requirements

- Database information
  - selectivity of fragments
  - size of a fragment
- Application information
  - number of read accesses of a query to a fragment
  - number of update accesses of a query to a fragment
  - a matrix indicating which queries update which fragments
  - a similar matrix for retrievals
  - originating site of each query
- Site information
  - unit cost of storing data at a site
  - unit cost of processing at a site
- Network information
  - communication cost per frame between two sites
  - frame size

Allocation Model

General form:
  min(Total Cost)
  subject to
    response time constraint
    storage constraint
    processing constraint

Decision variable:
  xij = 1 if fragment Fi is stored at site Sj
        0 otherwise

Allocation Model

- Total cost =
    Σ over all queries (query processing cost)
  + Σ over all sites, over all fragments (cost of storing a fragment at a site)
- Storage cost (of fragment Fj at site Sk) =
    (unit storage cost at Sk) * (size of Fj) * xjk
- Query processing cost (for one query) =
    processing component + transmission component

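A much-simplified sketch of evaluating this objective for a given 0-1 assignment xij: it
includes only the storage cost and a local-access processing term, and every cost figure is a
made-up placeholder, not data from the slides.

    # x[f][s] = 1 if fragment f is stored at site s (decision variable).
    fragments = ["F1", "F2"]
    sites = ["S1", "S2"]
    size = {"F1": 100, "F2": 300}                     # fragment sizes (placeholder units)
    unit_storage = {"S1": 1.0, "S2": 0.8}             # unit storage cost per site
    local_proc = {"S1": 2.0, "S2": 1.5}               # local processing cost per access
    accesses = {("q1", "F1"): 10, ("q1", "F2"): 0,    # read + update accesses of each
                ("q2", "F1"): 2,  ("q2", "F2"): 7}    # query to each fragment

    def total_cost(x):
        """Storage cost plus a simplified local-access processing cost."""
        storage = sum(unit_storage[s] * size[f] * x[f][s]
                      for f in fragments for s in sites)
        processing = sum(accesses[(q, f)] * x[f][s] * local_proc[s]
                         for (q, f) in accesses for s in sites)
        return storage + processing

    x = {"F1": {"S1": 1, "S2": 0}, "F2": {"S1": 0, "S2": 1}}
    print(total_cost(x))   # cost of this particular (non-replicated) assignment
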
Allocation Model

- Query processing cost
  - Processing component =
      access cost + integrity enforcement cost + concurrency control cost
  - Access cost =
      Σ over all sites, over all fragments
        (no. of update accesses + no. of read accesses) * xij * local processing cost at a site
  - Integrity enforcement and concurrency control costs
    - can be calculated similarly

Allocation Model

- Query processing cost
  - Transmission component =
      cost of processing updates + cost of processing retrievals
  - Cost of updates =
      Σ over all sites, over all fragments (update message cost)
    + Σ over all sites, over all fragments (acknowledgment cost)
  - Retrieval cost =
      Σ over all fragments
        min over all sites (cost of retrieval command + cost of sending back the result)

Allocation Model

- Constraints
  - Response time:
      execution time of a query ≤ maximum allowable response time for that query
  - Storage constraint (for a site):
      Σ over all fragments (storage requirement of a fragment at that site)
        ≤ storage capacity of that site
  - Processing constraint (for a site):
      Σ over all queries (processing load of a query at that site)
        ≤ processing capacity of that site

Allocation Model

- Solution methods
  - FAP is NP-complete
  - DAP is also NP-complete
- Heuristics based on
  - single-commodity warehouse location (for FAP)
  - knapsack problem
  - branch-and-bound techniques
  - network flow

Allocation Model

- Attempts to reduce the solution space
  - assume all candidate partitionings are known; select the "best" partitioning
  - ignore replication at first
  - sliding window on fragments
