
Principles of Distributed Database Systems
TS. Phan Thị Hà

© 2020, M.T. Özsu & P. Valduriez, TS. Phan Thị Hà
Outline
◼ Introduction
◼ Distributed and parallel database design
◼ Distributed data control
◼ Distributed Query Processing
◼ Distributed Transaction Processing
◼ Data Replication
◼ Database Integration – Multidatabase Systems
◼ Parallel Database Systems
◼ Peer-to-Peer Data Management
◼ Big Data Processing
◼ NoSQL, NewSQL and Polystores
◼ Web Data Management
Outline
◼ Distributed Query Processing
❑ Query Decomposition and Localization
❑ Join Ordering
❑ Distributed Query Optimization
❑ Adaptive Query Processing

Query Processing in a DDBMS

High-level user query → Query Processor → Low-level data manipulation commands for D-DBMS

Query Processing Components

◼ Query language
❑ SQL: “intergalactic dataspeak”

◼ Query execution
❑ The steps that one goes through in executing high-level
(declarative) user queries.

◼ Query optimization
❑ How do we determine the “best” execution plan?

◼ We assume a homogeneous D-DBMS

Selecting Alternatives

SELECT ENAME
FROM EMP NATURAL JOIN ASG
WHERE RESP = "Manager"

Strategy 1
ΠENAME(σRESP="Manager" ∧ EMP.ENO=ASG.ENO(EMP × ASG))
Strategy 2
ΠENAME(EMP ⋈ENO σRESP="Manager"(ASG))

Strategy 2 avoids Cartesian product, so may be “better”


What is the Problem?

Site 1: ASG1 = σENO≤"E3"(ASG)
Site 2: ASG2 = σENO>"E3"(ASG)
Site 3: EMP1 = σENO≤"E3"(EMP)
Site 4: EMP2 = σENO>"E3"(EMP)
Site 5: Result

Cost of Alternatives
◼ Assume
❑ size(EMP) = 400, size(ASG) = 1000
❑ tuple access cost = 1 unit; tuple transfer cost = 10 units
◼ Strategy 1
❑ produce ASG': (10+10) × tuple access cost = 20
❑ transfer ASG' to the sites of EMP: (10+10) × tuple transfer cost = 200
❑ produce EMP': (10+10) × tuple access cost × 2 = 40
❑ transfer EMP' to result site: (10+10) × tuple transfer cost = 200
Total Cost = 460
◼ Strategy 2
❑ transfer EMP to site 5: 400 × tuple transfer cost = 4,000
❑ transfer ASG to site 5: 1,000 × tuple transfer cost = 10,000
❑ produce ASG': 1,000 × tuple access cost = 1,000
❑ join EMP and ASG': 400 × 20 × tuple access cost = 8,000
Total Cost = 23,000
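As a sanity check, the short Python sketch below re-derives the two totals from the unit costs given above; the 10-tuple fragment sizes are the ones assumed in the example.

# Sketch: re-checking the cost arithmetic above; unit costs as assumed
# (tuple access = 1, tuple transfer = 10), and each of the four fragments
# is assumed to contribute 10 qualifying tuples, as in the example.
tuple_access, tuple_transfer = 1, 10

strategy1 = ((10 + 10) * tuple_access        # produce ASG' at sites 1 and 2
             + (10 + 10) * tuple_transfer    # send ASG' to the EMP sites
             + (10 + 10) * tuple_access * 2  # produce EMP' (select + join)
             + (10 + 10) * tuple_transfer)   # send EMP' to the result site

strategy2 = (400 * tuple_transfer            # ship EMP to site 5
             + 1000 * tuple_transfer         # ship ASG to site 5
             + 1000 * tuple_access           # produce ASG' (selection)
             + 400 * 20 * tuple_access)      # join EMP with ASG'

print(strategy1, strategy2)                  # 460 23000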

Query Optimization Objectives
◼ Minimize a cost function
❑ I/O cost + CPU cost + communication cost
❑ These might have different weights in different distributed
environments
◼ Wide area networks
❑ Communication cost may dominate or vary much
◼ Bandwidth
◼ Speed
◼ Protocol overhead
◼ Local area networks
❑ Communication cost not that dominant, so the total cost function
should be considered
◼ Can also maximize throughput
Complexity of Relational Operations

◼ Assume
❑ Relations of cardinality n
❑ Sequential scan

Operation                                          Complexity
Select, Project (without duplicate elimination)    O(n)
Project (with duplicate elimination), Group        O(n × log n)
Join, Semi-join, Division, Set Operators           O(n × log n)
Cartesian Product                                  O(n²)
Types Of Optimizers

◼ Exhaustive search
❑ Cost-based
❑ Optimal
❑ Combinatorial complexity in the number of relations
◼ Heuristics
❑ Not optimal
❑ Regroup common sub-expressions
❑ Perform selection, projection first
❑ Replace a join by a series of semijoins
❑ Reorder operations to reduce intermediate relation size
❑ Optimize individual operations

Optimization Granularity

◼ Single query at a time


❑ Cannot use common intermediate results

◼ Multiple queries at a time


❑ Efficient if many similar queries

❑ Decision space is much larger

Optimization Timing

◼ Static
❑ Compilation ➔ optimize prior to the execution
❑ Difficult to estimate the size of the intermediate results → error
propagation
❑ Can amortize over many executions
◼ Dynamic
❑ Run time optimization
❑ Exact information on the intermediate relation sizes
❑ Have to reoptimize for multiple executions
◼ Hybrid
❑ Compile using a static algorithm
❑ If the error in estimated sizes > threshold, reoptimize at run time

Statistics

◼ Relation
❑ Cardinality
❑ Size of a tuple
❑ Fraction of tuples participating in a join with another relation
◼ Attribute
❑ Cardinality of domain
❑ Actual number of distinct values
◼ Simplifying assumptions
❑ Independence between different attribute values
❑ Uniform distribution of attribute values within their domain
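A minimal Python sketch of how these two simplifying assumptions are used to estimate selectivities; the cardinalities and attribute values in it are illustrative, not taken from the example database.

# Sketch: selectivity estimation under the two simplifying assumptions above.

def sel_equality(distinct_values):
    """Uniformity: sigma(A = value) keeps 1/card(distinct A) of the tuples."""
    return 1.0 / distinct_values

def sel_conjunction(*selectivities):
    """Independence: selectivities of ANDed predicates simply multiply."""
    result = 1.0
    for s in selectivities:
        result *= s
    return result

# e.g. estimated card of sigma(TITLE = t AND CITY = c)(EMP) for card(EMP) = 400,
# 5 distinct titles and 10 distinct cities:
print(400 * sel_conjunction(sel_equality(5), sel_equality(10)))  # about 8 tuples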

Optimization Decision Sites

◼ Centralized
❑ Single site determines the “best” schedule
❑ Simple
❑ Need knowledge about the entire distributed database
◼ Distributed
❑ Cooperation among sites to determine the schedule
❑ Need only local information
❑ Cost of cooperation
◼ Hybrid
❑ One site determines the global schedule
❑ Each site optimizes the local subqueries

Network Topology

◼ Wide area networks (WAN) – point-to-point


❑ Characteristics
◼ Relatively low bandwidth (compared to local CPU/IO)
◼ High protocol overhead
❑ Communication cost may dominate; ignore all other cost factors
❑ Global schedule to minimize communication cost
❑ Local schedules according to centralized query optimization
◼ Local area networks (LAN)
❑ Communication cost not that dominant
❑ Total cost function should be considered
❑ Broadcasting can be exploited (joins)
❑ Special algorithms exist for star networks

Distributed Query Processing
Methodology

(Figure: query decomposition → data localization → global query optimization → local optimization and execution)
Outline
◼ Distributed Query Processing
❑ Query Decomposition and Localization

Step 1 – Query Decomposition
Same as centralized query processing
Input : Calculus query on global relations
◼ Normalization
❑ Manipulate query quantifiers and qualification
◼ Analysis
❑ Detect and reject “incorrect” queries
◼ Simplification
❑ Eliminate redundant predicates
◼ Restructuring
❑ Calculus query ➔ algebraic query
❑ Use transformation rules

Step 2 – Data Localization

Input: Algebraic query on distributed relations


◼ Determine which fragments are involved
◼ Localization program
❑ Substitute for each global relation its materialization program
❑ Optimize

Example
◼ Assume
❑ EMP is fragmented as follows:
◼ EMP1 = σENO≤"E3"(EMP)
◼ EMP2 = σ"E3"<ENO≤"E6"(EMP)
◼ EMP3 = σENO>"E6"(EMP)
❑ ASG is fragmented as follows:
◼ ASG1 = σENO≤"E3"(ASG)
◼ ASG2 = σENO>"E3"(ASG)
◼ In any query
❑ Replace EMP by (EMP1 ∪ EMP2 ∪ EMP3)
❑ Replace ASG by (ASG1 ∪ ASG2)
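A minimal Python sketch of this localization step, with toy fragment contents (the tuples are illustrative); the point is only that each global relation is replaced by the union of its fragments, which the reduction rules on the next slides then try to simplify.

# Sketch: localization of the example, relations represented as lists of dicts.
EMP1 = [{"ENO": "E1", "ENAME": "J. Doe"}]
EMP2 = [{"ENO": "E4", "ENAME": "J. Miller"}]
EMP3 = [{"ENO": "E7", "ENAME": "R. Davis"}]
ASG1 = [{"ENO": "E1", "RESP": "Manager"}]
ASG2 = [{"ENO": "E4", "RESP": "Programmer"}]

def localize_emp():
    """Materialization program: EMP = EMP1 ∪ EMP2 ∪ EMP3."""
    return EMP1 + EMP2 + EMP3

def localize_asg():
    """Materialization program: ASG = ASG1 ∪ ASG2."""
    return ASG1 + ASG2

print(len(localize_emp()), len(localize_asg()))  # 3 2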

Reduction for PHF
◼ Reduction with selection
❑ Relation R and FR = {R1, R2, …, Rw} where Rj = σpj(R)
σpi(Rj) = ∅ if ∀x in R: ¬(pi(x) ∧ pj(x))

SELECT *
FROM EMP
WHERE ENO="E5"

Reduction for PHF

◼ Reduction with join


❑ Possible if fragmentation is done on join attribute

❑ Distribute join over union

(R1 ∪ R2) ⋈ S ⇔ (R1 ⋈ S) ∪ (R2 ⋈ S)

❑ Given Ri = σpi(R) and Rj = σpj(R)

Ri ⋈ Rj = ∅ if ∀x in Ri, ∀y in Rj: ¬(pi(x) ∧ pj(y))

Reduction for PHF

◼ Assume EMP is fragmented as before, and
❑ ASG1 = σENO≤"E3"(ASG)
❑ ASG2 = σENO>"E3"(ASG)
◼ Consider the query
SELECT *
FROM EMP NATURAL JOIN ASG
◼ Distribute the join over the unions and apply the reduction rule; only three joins remain:
(EMP1 ⋈ENO ASG1) ∪ (EMP2 ⋈ENO ASG2) ∪ (EMP3 ⋈ENO ASG2)
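A minimal Python sketch of this join reduction: after distributing the join over the unions, a pair (EMPi, ASGj) is kept only if their fragmentation predicates on ENO can hold simultaneously. Representing each predicate as a half-open (low, high] interval is an encoding assumed for the sketch.

EMP_FRAGS = {"EMP1": (None, "E3"), "EMP2": ("E3", "E6"), "EMP3": ("E6", None)}
ASG_FRAGS = {"ASG1": (None, "E3"), "ASG2": ("E3", None)}

def overlap(a, b):
    """True iff the two (low, high] intervals can share at least one ENO value."""
    (alo, ahi), (blo, bhi) = a, b
    lo = max(x for x in (alo, blo) if x is not None) if (alo or blo) else None
    hi = min(x for x in (ahi, bhi) if x is not None) if (ahi or bhi) else None
    return lo is None or hi is None or lo < hi

pairs = [(e, a) for e, ei in EMP_FRAGS.items()
                for a, ai in ASG_FRAGS.items() if overlap(ei, ai)]
print(pairs)  # [('EMP1', 'ASG1'), ('EMP2', 'ASG2'), ('EMP3', 'ASG2')]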
Reduction for VF
◼ Find useless (not empty) intermediate relations
Relation R defined over attributes A = {A1, ..., An}, vertically
fragmented as Ri = ΠA'(R) where A' ⊆ A:
ΠD,K(Ri) is useless if the set of projection attributes D is not in A'
Example: EMP1 = ΠENO,ENAME(EMP); EMP2 = ΠENO,TITLE(EMP)
SELECT ENAME
FROM EMP
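A minimal Python sketch of this rule for the query above: a vertical fragment is useful only if it carries some projected attribute besides the key ENO.

FRAG_ATTRS = {"EMP1": {"ENO", "ENAME"}, "EMP2": {"ENO", "TITLE"}}
projection = {"ENAME"}                       # SELECT ENAME FROM EMP

useful = [f for f, attrs in FRAG_ATTRS.items() if projection & (attrs - {"ENO"})]
print(useful)  # ['EMP1'] -- EMP2 is a useless intermediate relation and is dropped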

Reduction for DHF
◼ Rule :
❑ Distribute joins over unions
❑ Apply the join reduction for horizontal fragmentation
◼ Example
ASG1: ASG ⋉ENO EMP1
ASG2: ASG ⋉ENO EMP2
EMP1: σTITLE="Programmer"(EMP)
EMP2: σTITLE≠"Programmer"(EMP)
◼ Query
SELECT *
FROM EMP NATURAL JOIN ASG
WHERE EMP.TITLE = "Mech. Eng."

Reduction for DHF
Generic query

Selections first

Reduction for DHF
Joins over unions

Elimination of the empty intermediate relations (left sub-tree)

Reduction for Hybrid Fragmentation

◼ Combine the rules already specified:


❑ Remove empty relations generated by contradicting selections
on horizontal fragments;

❑ Remove useless relations generated by projections on vertical


fragments;

❑ Distribute joins over unions in order to isolate and remove


useless joins.

Reduction for HF

Example
Consider the following hybrid
fragmentation:
EMP1 = σENO≤"E4"(ΠENO,ENAME(EMP))
EMP2 = σENO>"E4"(ΠENO,ENAME(EMP))
EMP3 = ΠENO,TITLE(EMP)

and the query


SELECT ENAME
FROM EMP
WHERE ENO="E5"

Outline
◼ Distributed Query Processing

❑ Distributed Query Optimization


Step 3 – Global Query Optimization

Input: Fragment query


◼ Find the best (not necessarily optimal) global schedule
❑ Minimize a cost function
❑ Distributed join processing
◼ Bushy vs. linear trees
◼ Which relation to ship where?
◼ Ship-whole vs ship-as-needed
❑ Decide on the use of semijoins
◼ Semijoin saves on communication at the expense of more local
processing
❑ Join methods
◼ Nested loop, merge join or hash join

Query Optimization Process

Input Query
    ↓
Search Space Generation  ←  Transformation Rules
    ↓
Equivalent QEPs
    ↓
Search Strategy  ←  Cost Model
    ↓
Best QEP

Components
◼ Search space
❑ The set of equivalent algebra expressions (query trees)

◼ Cost model
❑ I/O cost + CPU cost + communication cost
❑ These might have different weights in different distributed
environments (LAN vs WAN)
❑ Can also maximize throughput

◼ Search algorithm
❑ How do we move inside the solution space?
❑ Exhaustive search, heuristic algorithms (iterative improvement,
simulated annealing, genetic,…)

Join Trees
◼ Characterize the search space
for optimization
◼ For N relations, there are O(N!)
equivalent join trees that can be
obtained by applying
commutativity and associativity
rules
SELECT ENAME,RESP
FROM EMP
NATURAL JOIN ASG
NATURAL JOIN PROJ

Join Trees

◼ Two major shapes


◼ Linear versus bushy trees

Linear Join Tree Bushy Join Tree

Search Strategy

◼ How to “move” in the search space


◼ Deterministic
❑ Start from base relations and build plans by adding one relation
at each step
❑ Dynamic programming: breadth-first
❑ Greedy: depth-first (see the sketch below)
◼ Randomized
❑ Search for optimality around a particular starting point
❑ Trade optimization time for execution time
❑ Better when > 10 relations
❑ Simulated annealing
❑ Iterative improvement
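A minimal Python sketch of the greedy (depth-first) strategy: start from the smallest relation and repeatedly add the relation that yields the smallest estimated intermediate result. The cardinalities and join selectivities are illustrative, not taken from the example database.

cards = {"EMP": 400, "ASG": 1000, "PROJ": 150}
# assumed selectivity factor of each join edge; missing edge -> Cartesian product
sel = {frozenset({"EMP", "ASG"}): 1 / 400, frozenset({"ASG", "PROJ"}): 1 / 150}

def est_join(card_left, left_rels, right):
    s = max((sel.get(frozenset({l, right}), 0.0) for l in left_rels), default=0.0)
    s = s if s > 0 else 1.0            # no join predicate: fall back to Cartesian product
    return card_left * cards[right] * s

order = [min(cards, key=cards.get)]    # greedy seed: smallest base relation
current = cards[order[0]]
while len(order) < len(cards):
    best = min((r for r in cards if r not in order),
               key=lambda r: est_join(current, order, r))
    current = est_join(current, order, best)
    order.append(best)

print(order, current)   # e.g. ['PROJ', 'ASG', 'EMP'] with estimated size 1000.0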

Search Strategies

◼ Deterministic: build left-deep trees bottom-up, one relation at a time
(R1 ⋈ R2)  →  ((R1 ⋈ R2) ⋈ R3)  →  (((R1 ⋈ R2) ⋈ R3) ⋈ R4)

◼ Randomized: apply random moves to a start plan, e.g. exchange two relations
((R1 ⋈ R2) ⋈ R3)  →  ((R1 ⋈ R3) ⋈ R2)

Outline
◼ Distributed Query Processing

❑ Join Ordering

Join Ordering

◼ Multiple relations are more difficult because there are too many
alternatives.
❑ Compute the cost of all alternatives and select the best one.
◼ Necessary to compute the size of intermediate relations, which is
difficult.
❑ Use heuristics

Join Ordering – Example

Consider
PROJ ⋈PNO ASG ⋈ENO EMP

Join Ordering – Example
Execution alternatives

1. EMP → Site 2
   Site 2 computes EMP' = EMP ⋈ ASG
   EMP' → Site 3
   Site 3 computes EMP' ⋈ PROJ

2. ASG → Site 1
   Site 1 computes EMP' = EMP ⋈ ASG
   EMP' → Site 3
   Site 3 computes EMP' ⋈ PROJ

3. ASG → Site 3
   Site 3 computes ASG' = ASG ⋈ PROJ
   ASG' → Site 1
   Site 1 computes ASG' ⋈ EMP

4. PROJ → Site 2
   Site 2 computes PROJ' = PROJ ⋈ ASG
   PROJ' → Site 1
   Site 1 computes PROJ' ⋈ EMP

5. EMP → Site 2
   PROJ → Site 2
   Site 2 computes EMP ⋈ PROJ ⋈ ASG

Semijoin-based Ordering

◼ Consider the join of two relations:


❑ R[A] (located at site 1)
❑ S[A](located at site 2)

◼ Alternatives:
1. Do the join R ⋈AS
2. Perform one of the semijoin equivalents

R ⋈A S ⇔ (R ⋉A S) ⋈A S
       ⇔ R ⋈A (S ⋉A R)
       ⇔ (R ⋉A S) ⋈A (S ⋉A R)

Semijoin-based Ordering
◼ Perform the join
❑ Send R to Site 2
❑ Site 2 computes R ⋈A S
◼ Consider the semijoin (R ⋉A S) ⋈A S
❑ S' = ΠA(S)
❑ S' → Site 1
❑ Site 1 computes R' = R ⋉A S'
❑ R' → Site 2
❑ Site 2 computes R' ⋈A S
Semijoin is better if
size(ΠA(S)) + size(R ⋉A S) < size(R)
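A minimal Python sketch of this test; the byte counts below are illustrative.

def semijoin_wins(size_proj_S_on_A, size_R_semijoin_S, size_R):
    """Semijoin program is estimated cheaper than shipping R whole when
    size(Pi_A(S)) + size(R ⋉_A S) < size(R), sizes in transferred bytes."""
    return size_proj_S_on_A + size_R_semijoin_S < size_R

# e.g. R is large but only a small fraction of it joins with S:
print(semijoin_wins(size_proj_S_on_A=4_000,
                    size_R_semijoin_S=20_000,
                    size_R=100_000))          # True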

Full Reducer

◼ A semijoin program that fully reduces each relation, i.e., eliminates
all tuples that do not participate in the result
◼ How to find the full reducer?
❑ Enumerate all possible semijoin programs and select the one
that gives the best size reduction
◼ Problem
❑ For cyclic queries, no full reducers can be found
❑ For tree queries, full reducers exist but the number of candidate
semijoin programs is exponential in the number of relations
◼ For chained queries, where relations can be ordered so that each
relation joins only with the next relation, polynomial algorithms exist

Full Reducer – Example

Consider
ET (ENO, ENAME, TITLE, CITY)
AT (ENO, PNO, RESP, DUR, CITY)
PT (PNO, PNAME, BUDGET, CITY)

And the cyclic query


SELECT ENAME, PNAME
FROM ET NATURAL JOIN AT
NATURAL JOIN PT
(the shared CITY attribute closes the cycle ET–AT–PT in the join graph)

Full Reducer – example

◼ Solution: transform the cyclic


query into a tree
❑ Remove one arc of the cyclic
graph
❑ Add appropriate predicates to
other arcs such that the
removed predicate is
preserved by transitivity

Join versus Semijoin-based Ordering

◼ Semijoin-based induces more operators, but possibly on


smaller operands

Distributed Cost Model

◼ Cost functions
❑ Total Time (or Total Cost)
◼ Reduce each cost (in terms of time) component individually
◼ Do as little of each cost component as possible
◼ Optimizes resource utilization and increases system throughput
❑ Response Time
◼ Do as many things as possible in parallel
◼ May increase total time because of increased total activity

Total Time

Total time = CPU cost + I/O cost + com. cost

The summation of all cost factors

CPU cost = unit instruction cost × no. of instructions

I/O cost = unit disk I/O cost × no. of disk I/Os

com. cost = unit msg initiation cost × no. of messages
          + unit transmission cost × no. of bytes

Response Time

Response time = CPU time + I/O time + com. time

Must consider parallel execution

CPU time = unit instruction time * no. of seq instructions

I/O time = unit I/O time * no. of seq I/Os

com. time = unit msg initiation time * no. of seq msgs


+ unit transmission time * no. of seq bytes

Example

◼ Consider communication cost only; site 1 sends x data units and
site 2 sends y data units to site 3 in parallel
❑ Total time = 2 × msg initiation time + unit transmission time × (x + y)
❑ Response time = max {time to send x from site 1 to site 3,
time to send y from site 2 to site 3}
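A minimal Python sketch of the two formulas for this example; the unit costs and the values of x and y are assumed for illustration.

msg_init, unit_tx = 5, 1        # time per message initiation, time per data unit
x, y = 100, 60                  # data units sent from site 1 and site 2

total_time = 2 * msg_init + unit_tx * (x + y)                        # all work in sequence
response_time = max(msg_init + unit_tx * x, msg_init + unit_tx * y)  # transfers in parallel

print(total_time, response_time)  # 170 105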

Database Statistics

◼ Primary cost factor: size of intermediate relations


❑ Need to estimate their sizes
◼ Make them precise ⇒ more costly to maintain
◼ Simplifying assumption: uniform distribution of attribute
values in a relation

Statistics
◼ For each relation R[A1, A2, …, An] fragmented as R1, …, Rr
❑ length of each attribute: length(Ai)
❑ the number of distinct values for each attribute in each fragment:
card(ΠAi(Rj))
❑ maximum and minimum values in the domain of each attribute:
min(Ai), max(Ai)
❑ the cardinalities of each domain: card(dom[Ai])
◼ The cardinalities of each fragment: card(Rj)
◼ Selectivity factor of each operator on relations
❑ See centralized query optimization statistics
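A minimal Python sketch of how these statistics feed the estimation of intermediate relation sizes (size = cardinality × tuple length, with selectivity handled as on the earlier statistics slide); the attribute lengths and cardinalities are illustrative.

attr_lengths = {"ENO": 4, "ENAME": 20, "TITLE": 12}      # length(Ai)
card_EMP = 400                                           # card(EMP)

def rel_size(card, attrs):
    """size(R) = card(R) * sum of attribute lengths."""
    return card * sum(attr_lengths[a] for a in attrs)

# size of sigma(TITLE = t)(EMP) with 5 distinct TITLE values (uniformity assumption):
card_sel = card_EMP * (1 / 5)
print(rel_size(card_EMP, attr_lengths), rel_size(card_sel, attr_lengths))  # 14400 2880.0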

Distributed Query Optimization

◼ Dynamic approach
❑ Distributed INGRES
❑ No static cost estimation, only runtime cost information
◼ Static approach
❑ System R*
❑ Static cost model
◼ Hybrid approach
❑ 2-step

Dynamic Approach
1. Execute all monorelation queries (e.g., selection,
projection)
2. Reduce the multirelation query to produce irreducible
subqueries q1 → q2 → … → qn such that there is only
one relation in common between qi and qi+1
3. Choose qi involving the smallest fragments to execute
(call MRQ')
4. Find the best execution strategy for MRQ'
1. Determine processing site
2. Determine fragments to move
5. Repeat 3 and 4

Static Approach

◼ Cost function includes local processing as well as


transmission
◼ Considers only joins
◼ “Exhaustive” search
◼ Compilation

Static Approach – Performing Joins

◼ Ship whole
❑ Larger data transfer
❑ Smaller number of messages
❑ Better if relations are small
◼ Fetch as needed
❑ Number of messages = O(cardinality of external relation)
❑ Data transfer per message is minimal
❑ Better if relations are large and the selectivity is good
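A minimal Python sketch contrasting the two tactics in terms of communication cost only; all parameter values are assumptions made for the illustration.

msg0, tx = 10.0, 0.01                     # per-message overhead, cost per byte
card_inner, tuple_size = 10_000, 100      # inner relation shipped or probed
card_outer, match_frac = 500, 0.02        # outer tuples, fraction of inner matching each

ship_whole = msg0 + tx * card_inner * tuple_size
fetch_as_needed = card_outer * (2 * msg0                                  # request + reply
                                + tx * match_frac * card_inner * tuple_size)
print(ship_whole, fetch_as_needed)  # 10010.0 110000.0 -> ship-whole wins here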

Static Approach –
Vertical Partitioning & Joins
1. Move outer relation tuples to the site of the inner relation
(a) Retrieve outer tuples
(b) Send them to the inner relation site
(c) Join them as they arrive

Total Cost = cost(retrieving qualified outer tuples)


+ no. of outer tuples fetched * cost(retrieving
qualified inner tuples)
+ msg. cost * (no. outer tuples fetched * avg.
outer tuple size)/msg. size

Static Approach –
Vertical Partitioning & Joins
2. Move inner relation to the site of outer relation
Cannot join as they arrive; they need to be stored
Total cost = cost (retrieving qualified outer tuples)
+ no. of outer tuples fetched * cost(retrieving matching inner
tuples from temporary storage)
+ cost(retrieving qualified inner tuples)
+ cost(storing all qualified inner tuples in temporary storage)
+ msg. cost * no. of inner tuples fetched * avg. inner tuple
size/msg. size

Static Approach –
Vertical Partitioning & Joins
3. Move both inner and outer relations to another site
Total cost = cost(retrieving qualified outer tuples)
+ cost(retrieving qualified inner tuples)
+ cost(storing inner tuples in storage)
+ msg. cost × (no. of outer tuples fetched * avg. outer tuple
size)/msg. size
+ msg. cost * (no. of inner tuples fetched * avg. inner tuple
size)/msg. size
+ no. of outer tuples fetched * cost(retrieving inner tuples from
temporary storage)

Static Approach –
Vertical Partitioning & Joins
4. Fetch inner tuples as needed
(a) Retrieve qualified tuples at outer relation site
(b) Send request containing join column value(s) for outer tuples to
inner relation site
(c) Retrieve matching inner tuples at inner relation site
(d) Send the matching inner tuples to outer relation site
(e) Join as they arrive
Total Cost = cost(retrieving qualified outer tuples)
+ msg. cost * (no. of outer tuples fetched)
+ no. of outer tuples fetched * no. of inner tuples fetched * avg.
inner tuple size * (msg. cost / msg. size)
+ no. of outer tuples fetched * cost(retrieving matching inner tuples
for one outer value)

2-Step Optimization
1. At compile time, generate a static plan with operation
ordering and access methods only
2. At startup time, carry out site and copy selection and
allocate operations to sites
Static plan → Runtime plan

2-Step – Problem Definition

◼ Given
❑ A set of sites S = {s1, s2, …,sn} with the load of each site
❑ A query Q ={q1, q2, q3, q4} such that each subquery qi is the
maximum processing unit that accesses one relation and
communicates with its neighboring queries
❑ For each qi in Q, a feasible allocation set of sites Sq={s1, s2,
…,sk} where each site stores a copy of the relation in qi
◼ The objective is to find an optimal allocation of Q to S
such that
❑ The load unbalance of S is minimized
❑ The total communication cost is minimized

2-Step Algorithm

◼ For each q in Q compute load (Sq)


◼ While Q not empty do
1. Select subquery a with least allocation flexibility
2. Select best site b for a (with least load and best benefit)
3. Remove a from Q and recompute loads if needed
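A minimal Python sketch of this loop; the subqueries, candidate sites and loads mirror the shape of the example on the next slide, but the numbers are illustrative and "best benefit" is simplified here to "least current load".

candidates = {            # feasible allocation set Sq for each subquery
    "q1": {"s1", "s3", "s4"},
    "q2": {"s2", "s4"},
    "q3": {"s1", "s2"},
    "q4": {"s1"},
}
load = {"s1": 1, "s2": 2, "s3": 0, "s4": 1}

plan = {}
remaining = dict(candidates)
while remaining:
    # 1. subquery with the least allocation flexibility (fewest candidate sites)
    q = min(remaining, key=lambda name: len(remaining[name]))
    # 2. among its candidate sites, pick the one with the least current load
    site = min(remaining[q], key=lambda s: load[s])
    plan[q] = site
    # 3. remove the subquery and update loads
    load[site] += 1
    del remaining[q]

print(plan, load)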

2-Step Algorithm Example

◼ Let Q = {q1, q2, q3, q4} where q1 is


associated with R1, q2 is associated
with R2 joined with the result of q1,
etc.
◼ Iteration 1: select q4, allocate to s1,
set load(s1)=2
◼ Iteration 2: select q2, allocate to s2,
set load(s2)=3
◼ Iteration 3: select q3, allocate to s1,
set load(s1) =3
◼ Iteration 4: select q1, allocate to s3 or
s4

Note: if in iteration 2, q2 were allocated to s4, this would have produced
a better plan. So hybrid optimization can still miss optimal plans.
Outline
◼ Distributed Query Processing

❑ Adaptive Query Processing

Adaptive Query Processing -
Motivations
◼ Assumptions underlying query optimization
❑ The optimizer has sufficient knowledge about runtime
◼ Cost information
❑ Runtime conditions remain stable during query execution
◼ Appropriate for systems with few data sources in a
controlled environment
◼ Inappropriate for changing environments with large
numbers of data sources and unpredictable runtime
conditions

Example: QEP with Blocked Operator

◼ Assume ASG, EMP,


PROJ and PAY each at a
different site
◼ If ASG site is down, the
entire pipeline is blocked
◼ However, with some
reorganization, the join of
EMP and PAY could be
done while waiting for
ASG

Adaptive Query Processing – Definition
◼ Query processing is adaptive if it receives information
from the execution environment and determines its
behavior accordingly
❑ Feed-back loop between optimizer and runtime environment
❑ Communication of runtime information between DDBMS
components
◼ Additional components
❑ Monitoring, assessment, reaction
❑ Embedded in control operators of QEP
◼ Tradeoff between reactiveness and overhead of
adaptation

Adaptive Components

◼ Monitoring parameters (collected by sensors in QEP)


❑ Memory size
❑ Data arrival rates
❑ Actual statistics
❑ Operator execution cost
❑ Network throughput
◼ Adaptive reactions
❑ Change schedule
❑ Replace an operator by an equivalent one
❑ Modify the behavior of an operator
❑ Data repartitioning

Eddy Approach

◼ Query compilation: produces a tuple ⟨D, P, C, Eddy⟩


❑ D: set of data sources (e.g. relations)
❑ P: set of predicates
❑ C: ordering constraints to be followed at runtime
❑ Eddy: n-ary operator between D and P
◼ Query execution: operator ordering on a tuple basis
using Eddy
❑ On-the-fly tuple routing to operators based on cost and
selectivity
❑ Change of join ordering during execution
◼ Requires symmetric join algorithms such as Ripple joins
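A loose Python sketch of Eddy-style routing, restricted to selection predicates (joins and the ordering constraints C are omitted): each tuple is routed first to the operator that currently looks most selective, and the selectivity estimates are updated as tuples flow. The predicates and data are illustrative.

import random

predicates = {
    "p1": lambda t: t["a"] > 0.7,
    "p2": lambda t: t["b"] > 0.2,
}
passed = {name: 1 for name in predicates}   # running pass counts (smoothed)
seen = {name: 2 for name in predicates}

def route(tuple_):
    # order operators by observed pass rate: most selective (lowest rate) first
    for name in sorted(predicates, key=lambda n: passed[n] / seen[n]):
        ok = predicates[name](tuple_)
        seen[name] += 1
        passed[name] += ok
        if not ok:
            return False        # tuple eliminated early by a selective predicate
    return True

random.seed(0)
stream = ({"a": random.random(), "b": random.random()} for _ in range(1000))
out = [t for t in stream if route(t)]
print(len(out), {n: round(passed[n] / seen[n], 2) for n in predicates})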

QEP with Eddy
◼ D= {R, S, T}
◼ P = {σp(R), R ⋈1 S, S ⋈2 T}
◼ C = {S < T} where < imposes S tuples to probe T tuples using an index on the join
attribute
❑ Access to T is wrapped by ⋈2

