Datapump Concepts & Scenario
The following original exp parameters have the same name and behavior in expdp:
FILESIZE
FLASHBACK_SCN
FLASHBACK_TIME
FULL
HELP
PARFILE
QUERY
TABLES
TABLESPACES
TRANSPORT_TABLESPACES (exp value is Y/N; expdp value is the name of the tablespace)
The following original imp parameters have the same name and behavior in impdp:
FULL
HELP
PARFILE
QUERY
SKIP_UNUSABLE_INDEXES
TABLES
TABLESPACES
Renamed export parameters (exp parameter -> expdp parameter):
FEEDBACK -> STATUS
FILE -> DUMPFILE
LOG -> LOGFILE
OWNER -> SCHEMAS
TTS_FULL_CHECK -> TRANSPORT_FULL_CHECK
Renamed import parameters (imp parameter -> impdp parameter):
DATAFILE -> TRANSPORT_DATAFILES
DESTROY -> REUSE_DATAFILES
FEEDBACK -> STATUS
FILE -> DUMPFILE
FROMUSER -> SCHEMAS, REMAP_SCHEMA
IGNORE -> TABLE_EXISTS_ACTION (SKIP, APPEND, TRUNCATE, REPLACE)
LOG -> LOGFILE
INDEXFILE, SHOW -> SQLFILE
TOUSER -> REMAP_SCHEMA
Parameters in expdp Utility
COMPRESSION Specifies which data to compress before writing to the dump file set.
COMPRESSION={ALL | DATA_ONLY | METADATA_ONLY | NONE}
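For instance, a sketch of a fully compressed schema export (schema and file names are assumptions; COMPRESSION=ALL and DATA_ONLY generally require the Advanced Compression option):
expdp system/password schemas=cust_schema directory=dump_location dumpfile=comp_exp.dmp logfile=comp_exp.log compression=all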
DATA_OPTIONS Default: There is no default. If this parameter is not used, then the special data handling options it provides
simply do not take effect.
DIRECTORY Default: DATA_PUMP_DIR. Specifies the location to which the dump file and log file are written.
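The directory object must exist in the database and be readable/writable by the exporting user. A typical setup, run as a DBA (the path and user name are assumptions):
SQL> create or replace directory dump_location as '/u01/app/oracle/dump';
SQL> grant read, write on directory dump_location to cust_schema;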
ENCRYPTION The default value depends upon the combination of encryption-related parameters that are used.
ENCRYPTION = {ALL | DATA_ONLY | ENCRYPTED_COLUMNS_ONLY | METADATA_ONLY | NONE}
ENCRYPTION_ALGORITHM Specifies which cryptographic algorithm should be used to perform the encryption.
ENCRYPTION_ALGORITHM = { AES128 | AES192 | AES256 }
ENCRYPTION_MODE The default mode depends on which other encryption-related parameters are used. Specifies the type
of security to use when encryption and decryption are performed.
ENCRYPTION_MODE = { DUAL | PASSWORD | TRANSPARENT }
ENCRYPTION_PASSWORD Default: There is no default; the value is user-provided. Specifies a password for encrypting
encrypted column data, metadata, or table data in the export dumpfile. This prevents unauthorized access to an encrypted
dump file set.
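A sketch combining these parameters for a password-encrypted export (all names are assumptions; TRANSPARENT and DUAL modes additionally require an open Oracle wallet):
expdp system/password schemas=cust_schema directory=dump_location dumpfile=enc_exp.dmp encryption=all encryption_algorithm=AES256 encryption_mode=password encryption_password=MySecret1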
ESTIMATE Specifies the method used to estimate how much disk space each table in the export job will consume.
ESTIMATE={BLOCKS | STATISTICS}
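To see the space estimate without writing a dump file at all, ESTIMATE can be combined with the ESTIMATE_ONLY parameter (names are assumptions; no DUMPFILE is allowed in this mode):
expdp system/password schemas=cust_schema estimate=blocks estimate_only=y directory=dump_location logfile=est.log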
FILESIZE Default: 0 (equivalent to the maximum size of 16 terabytes). Specifies the maximum size of each dump file. If the size
is reached for any member of the dump file set, then that file is closed and an attempt is made to create a new file, if the file
specification contains a substitution variable or if additional dump files have been added to the job.
FILESIZE=integer[B | KB | MB | GB | TB]
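For example, to limit each piece of the dump file set to 2 GB (the %U substitution variable generates file names such as exp_01.dmp, exp_02.dmp, and so on; other names are assumptions):
expdp system/password schemas=cust_schema directory=dump_location dumpfile=exp_%U.dmp filesize=2GB logfile=exp.log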
PARALLEL Specifies the maximum number of worker threads for the export job. Together with the %U substitution variable in DUMPFILE, it also allows the dump to be split into a multi-file dump file set.
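As a rule of thumb, provide at least as many dump files as parallel workers so each thread can write to its own file. A sketch (names are assumptions):
expdp system/password schemas=cust_schema directory=dump_location dumpfile=exp_%U.dmp parallel=4 logfile=exp.log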
QUERY Default: There is no default. Allows you to specify a query clause that is used to filter the data that gets exported.
QUERY = [schema.][table_name:] query_clause
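For instance, in a parfile (table and column names are assumptions; inside a parfile the double quotes can be written plainly, as in Example 5 below, whereas on the command line they usually need escaping):
tables=cust_schema.sample
query=cust_schema.sample:"where deptno=10"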
VERSION Objects that are incompatible with the specified version will not be exported.
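For example, to produce a dump file that an older impdp release can read (connection details are assumptions):
expdp system/password schemas=cust_schema directory=dump_location dumpfile=v112.dmp version=11.2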
SAMPLE Allows you to specify a percentage of data to be sampled and unloaded from the source database. The sample_percent indicates the probability that a block of rows will be selected as part of the sample.
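For example, to sample roughly 30% of the blocks of one table (see also Example 12 below; names are assumptions):
expdp system/password tables=cust_schema.sample directory=dump_location dumpfile=sample30.dmp sample=30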
Parameters in impdp Utility
ENCRYPTION_PASSWORD Required on an import operation if an encryption password was specified on the export operation.
FLASHBACK_SCN Performs import operation that is consistent with the SCN specified from the source database. Valid only
when NETWORK_LINK parameter is used.
FLASHBACK_TIME Similar to FLASHBACK_SCN, but Oracle finds the SCN closest to the time specified.
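A sketch of a consistent network-mode import as of the current SCN (link, schema and directory names are assumptions; substitute the SCN returned by the query):
SQL> select current_scn from v$database;
impdp system/password network_link=source_link schemas=cust_schema flashback_scn=1234567 directory=dump_location logfile=fb_imp.log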
NETWORK_LINK Performs the import directly from a source database, using the database link name specified in the parameter. No dump file is created on the server when this parameter is used. To get a consistent export from the source database, we can use the FLASHBACK_SCN or FLASHBACK_TIME parameters; these two parameters are only valid together with NETWORK_LINK.
REMAP_DATAFILE Changes the name of the source database data file to a different name in the target.
REMAP_TABLESPACE Changes the name of the source tablespace to a different name in the target.
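A sketch of both remaps as parfile lines (all file, tablespace and dump names are assumptions; note the double quotes around the REMAP_DATAFILE value and single quotes around each file name):
remap_datafile="'/u01/oradata/users01.dbf':'/u02/oradata/users01.dbf'"
remap_tablespace=users:users02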
TRANSFORM We can specify that the storage clause should not be generated in the DDL for import. This is useful if the
storage characteristics of the source and target database are different. The valid values are SEGMENT_ATTRIBUTES,
STORAGE. STORAGE removes the storage clause from the CREATE statement DDL, whereas SEGMENT_ATTRIBUTES removes
physical attributes, tablespace, logging, and storage attributes.
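For instance, to drop all segment attributes so imported objects inherit the target defaults (names are assumptions):
impdp system/password directory=dump_location dumpfile=exp.dmp transform=segment_attributes:n
Use transform=storage:n instead to strip only the STORAGE clause.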
EXAMPLE 1: Export with DUMPFILE, LOGFILE and DIRECTORY set, using EXCLUDE to skip a few objects; on import, again exclude a few objects and load the data into the table named "sample" in the USERS02 tablespace.
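A minimal parfile sketch of this scenario (schema, object and tablespace names are assumptions):
Source DB (expdp parfile):
userid=system/password@alias_name
dumpfile=sample_exp.dmp
logfile=sample_exp.log
directory=dump_location
schemas=cust_schema
exclude=sequence
exclude=view
Target DB (impdp parfile):
userid=system/password@alias_name
dumpfile=sample_exp.dmp
logfile=sample_imp.log
directory=dump_location
tables=cust_schema.sample
exclude=grant
remap_tablespace=users:users02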
EXAMPLE 2: A basic export/import; both the expdp and impdp parfiles set USERID, DUMPFILE, LOGFILE and DIRECTORY.
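A minimal sketch of the two parfiles (all values are assumptions; with no mode parameter, expdp defaults to exporting the connecting user's schema):
Source DB (expdp parfile):
userid=system/password@source
dumpfile=full_exp.dmp
logfile=full_exp.log
directory=dump_location
Target DB (impdp parfile):
userid=system/password@target
dumpfile=full_exp.dmp
logfile=full_imp.log
directory=dump_location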
EXAMPLE 3: Export two partitions of a table, then append them into the existing table on the target.
Source DB (expdp parfile):
userid=system/password@alias_name
dumpfile=partition_tab_expdb.dmp
logfile=partition_tab_expdb.log
directory=dump_location
tables=cust_schema.sample:partition1, cust_schema.sample:partition2
Target DB (impdp parfile):
userid=system/password@alias_name
dumpfile=partition_tab_expdb.dmp
logfile=tab_impdb.log
directory=dump_location
table_exists_action=append
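The paired parfiles in these examples are run the same way on each side (the parfile names exp.par and imp.par are assumptions):
expdp parfile=exp.par
impdp parfile=imp.par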
EXAMPLE 4: Export all tables of a schema and replace the existing tables on the target.
Source DB (expdp parfile):
userid=system/password@source
dumpfile=schema_tab_expdb.dmp
logfile=schema_tab_expdb.log
directory=dump_location
include=table
schemas=cust_schema
Target DB (impdp parfile):
userid=system/password@target
dumpfile=schema_tab_expdb.dmp
logfile=schema_tab_impdb.log
directory=dump_location
table_exists_action=replace
EXAMPLE 5: Export only the rows matching a QUERY filter from selected tables (data only), then append a further-filtered subset on import.
Source DB (expdp parfile):
userid=system/password@source
dumpfile=data_filter_expdb.dmp
logfile=data_filter_expdb.log
directory=dumplocation
schemas=cust_schema
query="where deptno in(10,20)"
content=data_only
include=table:"in('SAMPLE','SAMDEPT')"
Target DB (impdp parfile):
userid=system/password@target
dumpfile=data_filter_expdb.dmp
logfile=data_filter_impdb.log
directory=dumplocation
schemas=cust_schema
query="where deptno = 20"
table_exists_action=append
EXAMPLE 6: Split the export into multiple dump files with FILESIZE. With impdp, the multiple files are detected automatically by passing the "%U" substitution variable in DUMPFILE.
Source DB (expdp parfile):
userid=system/password@alias_name
dumpfile=schema_exp_split_%U.dmp
logfile=schema_exp_split.log
directory=dump_location
schemas=cust_schema
include=table
filesize=70M
Target DB (impdp parfile):
userid=system/password@alias_name
dumpfile=schema_exp_split_%U.dmp
logfile=schema_imp_split.log
directory=dump_location
table_exists_action=replace
remap_tablespace=tbs1:tbs01
EXAMPLE 7: Split the export with PARALLEL instead of FILESIZE; again, impdp picks up the multiple files automatically via the "%U" substitution variable in DUMPFILE.
Source DB (expdp parfile):
userid=system/password@source
dumpfile=schema_exp_split_%U.dmp
logfile=schema_exp_split.log
directory=dump_location
schemas=cust_schema
include=table
parallel=6
Target DB (impdp parfile):
userid=system/password@target
dumpfile=schema_exp_split_%U.dmp
logfile=schema_imp_split.log
directory=dump_location
table_exists_action=replace
remap_tablespace=tbs1:tbs01
exclude=grant
EXAMPLE 8: Creating the database link used for network-mode export/import.
create database link source_link connect to system identified by manager using 'source';
(Note: The database connection string for the database "source" is defined in the local TNSNAMES.ORA.)
Creating the database link in the destination user schema: you need to create the user first; once that is done, create the link with:
create database link source_link connect to test identified by user using 'source';
For example, the following link created in the SYSTEM schema won't work, because SYSTEM is a DBA user and test is not:
create database link source_link connect to test identified by test using 'source';
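A quick check that the link resolves before running the Data Pump job (assumes the link name above):
SQL> select sysdate from dual@source_link;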
EXAMPLE 9 (b): Take the export through a database link (NETWORK_LINK), then import the resulting dump file.
Source DB (expdp parfile):
userid=user/user@source
logfile=network_exp.log
directory=dump_location
dumpfile=network_exp.dmp
schemas=user
include=table
network_link=target
Target DB (impdp parfile):
userid=system/password@source
logfile=network_imp.log
directory=dump_location
dumpfile=network_exp.dmp
table_exists_action=replace
EXAMPLE 10:
(NOTE: From the target server, contact the source server over the network, extract the data, and import it directly into the target database. This method is very useful when there is no space to create a dump file set; the whole operation is then carried out over the network.)
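A minimal sketch of such a network-mode import (link, schema and directory names are assumptions; no dump file is written, and the directory is needed only for the log file):
impdp system/password@target network_link=source_link schemas=cust_schema table_exists_action=replace directory=dump_location logfile=network_imp.log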
EXAMPLE 11:
userid=user/user@target userid=system/password@target
logfile=netwrok_exp.log logfile=network_imp.log
dumpfile=network_exp.dmp dumpfile=network_exp.dmp
directory=dump_location directory=dump_location
schemas=cust_schema table_exists_action=replace
include=table remap_schema=user:user01
EXAMPLE 12: Export only a 40% block sample of the table's data with the SAMPLE parameter, then replace the table on the target.
Source DB (expdp parfile):
userid=system/password@source
dumpfile=sample_expdb.dmp
logfile=sample_expdb.log
directory=dump_location
tables=user.sample
sample=40
Target DB (impdp parfile):
userid=system/password@target
dumpfile=sample_expdb.dmp
logfile=sample_impdb.log
directory=dump_location
table_exists_action=replace