
Oracle Technology Sharing!!!

Sunday, November 17, 2013

DATAPUMP CONCEPTS & SCENARIO


Important Data Pump view: DBA_DATAPUMP_JOBS
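For example, to list running or stopped Data Pump jobs (a minimal sketch):

SQL> select owner_name, job_name, operation, job_mode, state
     from dba_datapump_jobs;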
Common parameters (traditional export exp vs expdp):

FILESIZE
FLASHBACK_SCN
FLASHBACK_TIME
FULL
HELP
PARFILE
QUERY
TABLES
TABLESPACES
TRANSPORT_TABLESPACES (exp value is Y/N; expdp value is the name of the tablespace)

Common parameters (traditional import imp vs impdp):

FULL
HELP
PARFILE
QUERY
SKIP_UNUSABLE_INDEXES
TABLES
TABLESPACES

Comparing (Traditional Export vs Expdp) parameters

Export            Expdp
FEEDBACK          STATUS
FILE              DUMPFILE
LOG               LOGFILE
OWNER             SCHEMAS
TTS_FULL_CHECK    TRANSPORT_FULL_CHECK

Comparing (Traditional Import vs Impdp) parameters

Import            Impdp
DATAFILE          TRANSPORT_DATAFILES
DESTROY           REUSE_DATAFILES
FEEDBACK          STATUS
FILE              DUMPFILE
FROMUSER          SCHEMAS, REMAP_SCHEMA
IGNORE            TABLE_EXISTS_ACTION (SKIP, APPEND, TRUNCATE, REPLACE)
LOG               LOGFILE
INDEXFILE, SHOW   SQLFILE
TOUSER            REMAP_SCHEMA

Parameters in expdp Utility

ATTACH Attaches the client session to an existing Data Pump job.


ATTACH [=[schema_name.]job_name]
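A minimal sketch of attaching to an existing job, assuming a default-generated job name such as SYS_EXPORT_SCHEMA_01; once attached, interactive commands such as STATUS, STOP_JOB, and START_JOB are available at the Export> prompt:

expdp system/password attach=SYS_EXPORT_SCHEMA_01
Export> status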

COMPRESSION Specifies which data to compress before writing to the dump file set.
COMPRESSION={ALL | DATA_ONLY | METADATA_ONLY | NONE}

CONTENT Specifies what to export (ALL, DATA_ONLY, METADATA_ONLY)


CONTENT={ALL | DATA_ONLY | METADATA_ONLY}
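For example, a metadata-only export of a schema (the schema, directory, and file names here are illustrative):

expdp system/password schemas=scott directory=dump_dir dumpfile=scott_meta.dmp content=metadata_only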

DATA_OPTIONS Default: there is no default. If this parameter is not used, then the special data-handling options it provides simply do not take effect.

DIRECTORY Default: DATA_PUMP_DIR. Location to write the dump file and log file.

DUMPFILE Default: expdat.dmp. Specifies the names of the dump files (for example, DUMPFILE=exp.dmp).

ENCRYPTION The default value depends upon the combination of encryption-related parameters that are used
ENCRYPTION = {ALL | DATA_ONLY | ENCRYPTED_COLUMNS_ONLY | METADATA_ONLY | NONE}

ENCRYPTION_ALGORITHM Specifies which cryptographic algorithm should be used to perform the encryption
ENCRYPTION_ALGORITHM = { AES128 | AES192 | AES256 }

ENCRYPTION_MODE The default mode depends on which other encryption-related parameters are used. Specifies the type
of security to use when encryption and decryption are performed.
ENCRYPTION_MODE = { DUAL | PASSWORD | TRANSPARENT }

ENCRYPTION_PASSWORD Default: There is no default; the value is user-provided. Specifies a password for encrypting
encrypted column data, metadata, or table data in the export dumpfile. This prevents unauthorized access to an encrypted
dump file set.

ESTIMATE Shows how much disk space each table in the export job will consume, estimated either from block counts or from statistics.
ESTIMATE={BLOCKS | STATISTICS}

ESTIMATE_ONLY Estimates the space an export would consume, but does not actually perform the export.


ESTIMATE_ONLY={y | n}
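A quick sketch (the schema name is illustrative); note that DUMPFILE cannot be combined with ESTIMATE_ONLY=y:

expdp system/password schemas=scott estimate_only=y estimate=blocks directory=dump_dir logfile=estimate.log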

EXCLUDE List of objects to be excluded

FILESIZE Default: 0 (equivalent to the maximum size of 16 terabytes). Specifies the maximum size of each dump file. If the size is reached for any member of the dump file set, then that file is closed and an attempt is made to create a new file, if the file specification contains a substitution variable or if additional dump files have been added to the job.
FILESIZE=integer[B | KB | MB | GB | TB]
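For example, to cap each piece at 2GB while letting the %U substitution variable generate the file names (all names here are illustrative):

expdp system/password schemas=scott directory=dump_dir dumpfile=scott_%U.dmp filesize=2G logfile=scott_exp.log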

INCLUDE List of objects to be included

JOB_NAME Name of the export job

KEEP_MASTER Specify Y to keep (not drop) the master table after the export completes

NETWORK_LINK Specifies a database link used to export from a remote database

NOLOGFILE Specify Y if you do not want to create a log file


NOLOGFILE={y | n}

PARALLEL Specifies the maximum number of worker threads for the export job. It also helps split the dump file into a multi-file dump set (together with the %U substitution variable in DUMPFILE).

QUERY Default: There is no default. Allows you to specify a query clause that is used to filter the data that gets exported.
QUERY = [schema.][table_name:] query_clause
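A parfile sketch that exports only a filtered subset of one table (the table and column names are assumptions):

directory=dump_dir
dumpfile=emp_filtered.dmp
tables=hr.employees
query=hr.employees:"where department_id > 10"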

VERSION Objects that are incompatible with the specified version will not be exported.

SAMPLE Allows you to specify a percentage of data to be sampled and unloaded from the source database. The sample_percent indicates the probability that a block of rows will be selected as part of the sample.
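For example, to unload roughly 30% of a table's blocks (the names are illustrative):

expdp system/password tables=hr.employees sample=30 directory=dump_dir dumpfile=emp_sample.dmp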

Parameters in impdp Utility

CONTENT={ALL | DATA_ONLY | METADATA_ONLY}

EXCLUDE List of objects to be excluded

ENCRYPTION_PASSWORD It is required on an import operation if an encryption password was specified on the export
operation.

FLASHBACK_SCN Performs an import operation that is consistent with the SCN specified from the source database. Valid only when the NETWORK_LINK parameter is used.

FLASHBACK_TIME Similar to FLASHBACK_SCN, but Oracle finds the SCN closest to the time specified.
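A parfile sketch of a network import made consistent as of a given time (the link name and timestamp are assumptions):

userid=system/password@target
network_link=source_link
schemas=scott
directory=dump_dir
logfile=net_imp.log
flashback_time="to_timestamp('2013-11-17 10:00:00','yyyy-mm-dd hh24:mi:ss')"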

INCLUDE List of objects to be included

NETWORK_LINK Performs the import directly from a source database using the database link name specified in the parameter. No dump file is created on the server when this parameter is used. To get a consistent export from the source database, we can use the FLASHBACK_SCN or FLASHBACK_TIME parameters; these two parameters are valid only together with NETWORK_LINK.

REMAP_DATAFILE Changes name of the source DB data file to a different name in the target.

REMAP_SCHEMA Loads objects to a different target schema name.

REMAP_TABLESPACE Changes name of the source tablespace to a different name in the target.

TRANSFORM We can specify that the storage clause should not be generated in the DDL for import. This is useful if the
storage characteristics of the source and target database are different. The valid values are SEGMENT_ATTRIBUTES,
STORAGE. STORAGE removes the storage clause from the CREATE statement DDL, whereas SEGMENT_ATTRIBUTES removes
physical attributes, tablespace, logging, and storage attributes.

TRANSFORM = name:boolean_value[:object_type], where boolean_value is Y or N.


For instance, TRANSFORM=storage:N:table
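As a sketch, the following import strips all segment attributes (tablespace, storage, logging) so that the objects pick up the target defaults (the file names are illustrative):

impdp system/password directory=dump_dir dumpfile=exp.dmp transform=segment_attributes:n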

Data Pump's different export levels on the command line:


o Full Export Level
o Schema Level
o Table Level
o Tablespace Level
o Transportable Tablespace Level

EXAMPLE 1:

Scenario: Export the cust_schema schema from the source database and import it into the target database. The data must be imported into a different tablespace, existing tables must be replaced, and a few objects, including the table named "sample", are excluded during the import.

Expdp parameters: DUMPFILE, LOGFILE, DIRECTORY, SCHEMAS
Impdp parameters: DUMPFILE, LOGFILE, DIRECTORY, REMAP_TABLESPACE, TABLE_EXISTS_ACTION, EXCLUDE

REMAP_TABLESPACE: we are importing the data into a different tablespace.
TABLE_EXISTS_ACTION=REPLACE: the table is already present in the target location, so it is replaced with the new data.
EXCLUDE: excludes a few objects, including the table named "sample".

Expdp parfile information:

userid=system/password@alias_name
dumpfile=schemaexpdb.dmp
logfile=schemaexpdb.log
directory=dump_dir
schemas=cust_schema

Impdp parfile information:

userid=system/password@alias_name
dumpfile=schemaexpdb.dmp
logfile=schemaimpdb.log
directory=dumplocation
table_exists_action=replace
remap_tablespace=res:users
exclude=sequence,view,package,cluster,table:"in('sample')"
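Both utilities are then run with the PARFILE parameter (the parfile names here are assumptions):

expdp parfile=schema_exp.par
impdp parfile=schema_imp.par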

EXAMPLE 2:

Scenario: Export the sample table from cust_schema in the instance called source, and import it into the sample table of cust_schema in the target instance.

Expdp parameters: USERID, DUMPFILE, LOGFILE, DIRECTORY, TABLES
Impdp parameters: USERID, DUMPFILE, LOGFILE, DIRECTORY, TABLE_EXISTS_ACTION

TABLE_EXISTS_ACTION=REPLACE: the table is already present in the target location, so it is replaced with the new data.

Expdp parfile information:

userid=system/password@alias_name
dumpfile=table_expdb.dmp
logfile=table_expdb.log
directory=dump_location
tables=cust_schema.sample

Impdp parfile information:

userid=system/password@alias_name
dumpfile=table_expdb.dmp
logfile=tabimpdb.log
directory=dump_location
table_exists_action=REPLACE

EXAMPLE 3:

Scenario: Export the partitions partition1 and partition2 of the sample table from cust_schema and append their contents to the sample table in the target database.

Expdp parameters: USERID, DUMPFILE, LOGFILE, DIRECTORY, TABLES
Impdp parameters: USERID, DUMPFILE, LOGFILE, DIRECTORY, TABLE_EXISTS_ACTION

TABLE_EXISTS_ACTION=APPEND: appends the exported rows to the sample table in the target database. (NOTE: partition information is not required if the target has the same table name with the same partitions.)

NOTE: Before overwriting the partitions in the target database, we need to manually delete the partition rows and then import the dump file set. For example, if we want to overwrite the exported data in the target database, we first delete the rows of the sample table with deptno in (1,2):

sql> delete sample where deptno=1;
sql> delete sample where deptno=2;

Expdp parfile information:

userid=system/password@alias_name
dumpfile=partition_tab_expdb.dmp
logfile=partition_tab_expdb.log
directory=dump_location
tables=cust_schema.sample:partition1, cust_schema.sample:partition2

Impdp parfile information:

userid=system/password@alias_name
dumpfile=partition_expdb.dmp
logfile=tab_impdb.log
directory=dump_location
table_exists_action=append

EXAMPLE 4:

Scenario: Export only the tables of cust_schema at the source instance and import them into the target database.

Expdp parameters: USERID, DUMPFILE, LOGFILE, DIRECTORY, SCHEMAS, INCLUDE
Impdp parameters: USERID, DUMPFILE, LOGFILE, DIRECTORY, TABLE_EXISTS_ACTION

TABLE_EXISTS_ACTION=REPLACE: the tables are already present in the target location, so they are replaced with the new data.

Expdp parfile information:

userid=system/password@source
dumpfile=schema_tab_expdb.dmp
logfile=schema_tab_expdb.log
directory=dump_location
include=table
schemas=cust_schema

Impdp parfile information:

userid=system/password@target
dumpfile=schema_tab_expdb.dmp
logfile=schema_tab_impdb.log
directory=dump_location
table_exists_action=replace

EXAMPLE 5:

Scenario: Export only the rows of the sample and samdept tables belonging to deptno values 10 and 20, and only the table contents (data, not metadata). Then append only the rows for deptno 20 into the target database.

Expdp parameters: USERID, DUMPFILE, LOGFILE, DIRECTORY, SCHEMAS, INCLUDE, QUERY, CONTENT
Impdp parameters: USERID, DUMPFILE, LOGFILE, DIRECTORY, SCHEMAS, QUERY, TABLE_EXISTS_ACTION

TABLE_EXISTS_ACTION=APPEND: appends the exported rows to the sample table in the target database.
QUERY: the dump file contains the rows for deptno 10 and 20, but we need to import only the rows for deptno 20. (Note: always check the expdp logfile to see what content was exported.)

Expdp parfile information:

userid=system/password@source
dumpfile=data_filter_expdb.dmp
logfile=data_filter_expdb.log
directory=dumplocation
schemas=cust_schema
query="where deptno in(10,20)"
content=data_only
include=table:"in('SAMPLE','SAMDEPT')"

Impdp parfile information:

userid=system/password@target
dumpfile=data_filter_expdb.dmp
logfile=data_filter_impdb.log
directory=dumplocation
schemas=cust_schema
query="where deptno = 20"
table_exists_action=APPEND

EXAMPLE 6:

Scenario: Export the cust_schema schema from the source database, splitting the dump file set into pieces of 70MB each, and import the dump files into the target database. By using "%U" in the DUMPFILE value, expdp automatically creates the sequence of dump files schema_exp_split_01.dmp, schema_exp_split_02.dmp, and so on, until the total size of the dump is reached.

Expdp parameters: USERID, DUMPFILE, LOGFILE, DIRECTORY, SCHEMAS, INCLUDE, FILESIZE
Impdp parameters: USERID, DUMPFILE, LOGFILE, DIRECTORY, TABLE_EXISTS_ACTION, REMAP_TABLESPACE

When using impdp, the multiple files are detected automatically by passing the value "%U" to DUMPFILE.
TABLE_EXISTS_ACTION=REPLACE: the tables already present in the target are replaced with the new data.
REMAP_TABLESPACE=tbs1:tbs01: imports the data into the new tablespace.

Expdp parfile information:

userid=system/password@alias_name
dumpfile=schema_exp_split_%U.dmp
logfile=schema_exp_split.log
directory=dump_location
schemas=cust_schema
include=table
filesize=70M

Impdp parfile information:

userid=system/password@alias_name
dumpfile=schema_exp_split_%U.dmp
logfile=schema_imp_split.log
directory=dump_location
table_exists_action=replace
remap_tablespace=tbs1:tbs01

EXAMPLE 7:

Scenario: Export the cust_schema schema from the source database, splitting the dump file set into six parts by running the export with six parallel worker threads, and import the dump files into the target database. By using "%U" in the DUMPFILE value, expdp automatically creates the sequence of dump files schema_exp_split_01.dmp, schema_exp_split_02.dmp, up to schema_exp_split_06.dmp.

Expdp parameters: USERID, DUMPFILE, LOGFILE, DIRECTORY, SCHEMAS, INCLUDE, PARALLEL
Impdp parameters: USERID, DUMPFILE, LOGFILE, DIRECTORY, TABLE_EXISTS_ACTION, REMAP_TABLESPACE, EXCLUDE

When using impdp, the multiple files are detected automatically by passing the value "%U" to DUMPFILE.
TABLE_EXISTS_ACTION=REPLACE: the tables already present in the target are replaced with the new data.
REMAP_TABLESPACE=tbs1:tbs01: imports the data into the new tablespace.

Expdp parfile information:

userid=system/password@source
dumpfile=schema_exp_split_%U.dmp
logfile=schema_exp_split.log
directory=dump_location
schemas=cust_schema
include=table
parallel=6

Impdp parfile information:

userid=system/password@target
dumpfile=schema_exp_split_%U.dmp
logfile=schema_imp_split.log
directory=dump_location
table_exists_action=replace
remap_tablespace=tbs1:tbs01
exclude=grant

EXAMPLE 8:

Scenario: Export the cust_schema schema from the source database, splitting the dump file set into pieces of 100MB each written to different directory locations, and import the dump files into the target database. By using "%U" in the DUMPFILE value, expdp automatically creates the sequence of dump files schemaexp_01.dmp, schemaexp_02.dmp, and so on, in each location.

Expdp parameters: USERID, DUMPFILE, LOGFILE, DIRECTORY, SCHEMAS, INCLUDE, FILESIZE
Impdp parameters: USERID, DUMPFILE, LOGFILE, DIRECTORY, TABLE_EXISTS_ACTION

When using impdp, the multiple files are detected automatically by passing the value "%U" to DUMPFILE.
TABLE_EXISTS_ACTION=REPLACE: the tables already present in the target are replaced with the new data.

Expdp parfile information:

userid=system/password@alias_name
logfile=schemaexp_split.log
dumpfile=dumplocation1:schemaexp_%U.dmp, dumplocation2:schemaexp_%U.dmp, dumplocation3:schemaexp_%U.dmp
directory=dump_location
schemas=user_schema
include=table
filesize=100M

Impdp parfile information:

userid=system/password@alias_name
logfile=schemaimp_split.log
dumpfile=dump1:schemaexp_%U.dmp,dump2:schemaexp_%U.dmp,dump3:schemaexp_%U.dmp
directory=dumplocation
table_exists_action=replace
EXAMPLE 9 (a):
Creating the database link (in the destination SYSTEM schema)
Before exporting data over a network link, it is required to create a database link in the DESTINATION database first. The link can be created either in the destination user's schema or in the SYSTEM schema. If the link is created in the SYSTEM schema, the user will be created automatically with the same grants during the import.

Creating the link as SYSTEM schema:

create database link source_link connect to system identified by manager using 'source';
(Note: The database connection string for the database "source" is defined in the local TNSNAMES.ORA.)
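The link can be verified with a simple remote query before running the export:

SQL> select sysdate from dual@source_link;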

Creating the link in the destination user's schema:

To create the database link in the destination user's schema, you need to create the user first. Once done, create the link with:

create database link source_link connect to test identified by user using 'source';

Warning raised when mixing privileged and non-privileged users


When mixing privileged users (e.g. DBA users) with non-privileged users, Data Pump will complain:

ORA-31631: privileges are required


ORA-39149: cannot link privileged user to non-privileged user
You need to watch out for database links created in the SYS or SYSTEM schema pointing to a non-DBA user in the remote database, or vice versa.

For example, the following link created in the SYSTEM schema won't work, because SYSTEM is a DBA user and test is not:

create database link source_link connect to test identified by test using 'source';

EXAMPLE 9 (b):

Scenario: We are currently working in the source database, but the export is taken from the destination database over a database link while the dump file set is created in the source location. Using NETWORK_LINK, this kind of export is possible. However, the users in the source and destination must have identical privileges; if the user privileges fail to match, it throws an error. Using a TNS alias we can achieve this job.

Expdp parameters: USERID, DUMPFILE, LOGFILE, DIRECTORY, SCHEMAS, INCLUDE, NETWORK_LINK
Impdp parameters: USERID, DUMPFILE, LOGFILE, DIRECTORY, TABLE_EXISTS_ACTION

TABLE_EXISTS_ACTION=REPLACE: the tables already present in the target are replaced with the new data.

If the user privileges fail to match, we get this error:

ORA-31631: privileges are required


ORA-39149: cannot link privileged user to non-privileged user

Expdp parfile information:

userid=user/user@source
logfile=network_exp.log
directory=dump_location
dumpfile=network_exp.dmp
schemas=user
include=table
network_link=target

Impdp parfile information:

userid=system/password@source
logfile=network_imp.log
directory=dump_location
dumpfile=network_exp.dmp
table_exists_action=replace
EXAMPLE 10:

Scenario: We are currently working in the target database. Without creating an export dump file set at all, we pull the data directly from the source database over a database link; using NETWORK_LINK, this method is possible.

Impdp parameters: USERID, LOGFILE, DIRECTORY, NETWORK_LINK, TABLE_EXISTS_ACTION

TABLE_EXISTS_ACTION=REPLACE: the tables already present in the target are replaced with the new data.

(NOTE: From the target server we contact the source server, extract the data, and import it into the target database. This method is very useful when we have no space to create a dump file set; then we can implement this operation over the network.)

Import parfile information:

userid=user/user@target
network_link=source
logfile=network_imp.log
directory=dumplocation
table_exists_action=replace

EXAMPLE 11:

Scenario: We are currently working in the target database, and the export itself is performed from the target database, pulling the data from the source over a database link without creating the dump file in the source location. Imagine we have no space on the source database but the target has enough space; to utilize the target space without consuming source space, we can use this method.

Expdp parameters: USERID, DUMPFILE, LOGFILE, DIRECTORY, SCHEMAS, INCLUDE, NETWORK_LINK
Impdp parameters: USERID, DUMPFILE, LOGFILE, DIRECTORY, TABLE_EXISTS_ACTION, REMAP_SCHEMA

TABLE_EXISTS_ACTION=REPLACE: the tables already present in the target are replaced with the new data.

Expdp parfile information (the network_link line is implied by the scenario and parameter list above):

userid=user/user@target
logfile=network_exp.log
dumpfile=network_exp.dmp
directory=dump_location
schemas=cust_schema
include=table
network_link=source

Impdp parfile information:

userid=system/password@target
logfile=network_imp.log
dumpfile=network_exp.dmp
directory=dump_location
table_exists_action=replace
remap_schema=user:user01

EXAMPLE 12:

Scenario: The SAMPLE parameter allows you to export subsets of data by specifying the percentage of data to be sampled and exported. The sample percent indicates the probability that a block of rows will be selected as part of the sample. If a table name is not specified, the 40% sampling value applies to the entire export job. Here 40% of the user.sample table is exported and imported into the target, replacing the existing table.

Expdp parameters: USERID, DUMPFILE, LOGFILE, DIRECTORY, TABLES, SAMPLE
Impdp parameters: USERID, DUMPFILE, LOGFILE, DIRECTORY, TABLE_EXISTS_ACTION

TABLE_EXISTS_ACTION=REPLACE: the table already present in the target is replaced with the new data.

Expdp parfile information:

userid=system/password@source
dumpfile=sample_expdb.dmp
logfile=sample_expdb.log
directory=dump_location
tables=user.sample
sample=40

Impdp parfile information:

userid=system/password@target
dumpfile=sample_expdb.dmp
logfile=sample_impdb.log
directory=dump_location
table_exists_action=replace

Harunur Rashid at Sunday, November 17, 2013
