Master Note For Streams Recommended Configuration
In this Document
Purpose
Scope and Application
Master Note for Streams Recommended Configuration
Configuration
1.0 Software Version
2.0 Database Parameters
Database Version 9iR2
Database Version 10gR2
3.0 Database Storage
3.1. Tablespace for Streams Administrator queues
3.2. Separate queues for capture and apply
4.0 Privileges
5.0 Source Site Configuration
5.1. Streams and Flash Recovery Area (FRA)
5.2. Archive Logging must be enabled
5.3. Supplemental logging
5.4. Implement a Heartbeat Table
5.5. Flow Control
5.6. Perform periodic maintenance
Database Version 9iR2 and 10gR1
Database Version 10gR2 and above
5.7. Capture Process Configuration
5.8. Propagation Configuration
5.9. Additional Configuration for RAC Environments for a Source Database
6.0 Target Site Configuration
6.1. Privileges
6.2. Instantiation
6.3. Conflict Resolution
6.4. Apply Process Configuration
6.5. Additional Configuration for RAC Environments for an Apply Database
OPERATION
Global Name
Apply Error Management
Backup Considerations
Batch Processing
Source Queue Growth
Streams Cleanup/Removal
Automatic Optimizer Statistics Collection
MONITORING
Dynamic Streams views
Static Streams Views
Streams Views
Capture Views:
Propagation & Queue Views
Apply Views:
Monitoring Utility STRMMON
Alert Log
Streams Healthcheck Scripts
References
Applies to:
Oracle Server - Enterprise Edition - Version: 9.2.0.8 to 11.2.0.1 - Release: 9.2 to 11.2
Information in this document applies to any platform.
Purpose
Oracle Streams enables the sharing of data and events in a data stream either within a database or from
one database to another. This Note describes best practices for Oracle Streams configurations for both
downstream capture and upstream (local) capture in version 9.2 and above.
The information in this note targets replication administrators implementing Streams replication in Oracle 9.2 and higher, and contains key recommendations for a successful Streams implementation.
Configuration
To ensure a successful Streams implementation, use the following recommendations when setting up a
Streams environment:
Software Version
Database Settings: Parameters, Storage, and Privileges
Source Site Configuration
Target Site Configuration
1.0 Software Version

Oracle recommends running Streams with the latest available patchset, along with the recommended patches from Document 437838.1, Streams Specific Patches. Assess whether any recommended patch conflicts with patches already installed on your system.
Streams is supported in both DbControl and GridControl; GridControl should be used to manage multiple databases in a Streams environment.
2.0 Database Parameters
For best results in a Streams environment, set the following initialization parameters, as necessary, at each participating instance.
Database Version 9iR2

_JOB_QUEUE_INTERVAL = 1
Scan rate interval (in seconds) of the job queue; the default is 5. Setting it to 1 improves the scan rate for propagation jobs to every second rather than every 5 seconds.

GLOBAL_NAMES = TRUE
Specifies whether a database link is required to have the same name as the database to which it connects; the default is FALSE. Set it to TRUE at each database participating in your Streams environment to eliminate errors resulting from incorrect database connections. This parameter setting is honored by database links.

PARALLEL_MAX_SERVERS >= 2
Default: derived from the values of CPU_COUNT and PARALLEL_ADAPTIVE_MULTI_USER. Specifies the maximum number of parallel execution processes and parallel recovery processes for an instance. As demand increases, Oracle increases the number of processes from the number created at instance startup up to this value.

PROCESSES
Specifies the maximum number of operating system user processes that can simultaneously connect to the database. Make sure the value of this parameter allows for all background processes, such as locks, job queue processes, and parallel execution processes. In Streams, capture processes and apply processes use background processes and parallel execution processes, and propagation jobs use job queue processes.

SESSIONS
Specifies the maximum number of sessions that can be created in the system. Because every login requires a session, this parameter effectively determines the maximum number of concurrent users in the system. If you plan to run one or more capture or apply processes in a database, you may need to increase this parameter; each background process in a database requires a session.

SGA_MAX_SIZE
Specifies the maximum size of the SGA for the lifetime of the instance. If you plan to run multiple capture processes on a single database, you may need to increase this parameter. Note: _SGA_SIZE should only be increased if a logminer error is returned indicating a need for more memory. Any memory allocated to logminer is used solely by logminer; it is not returned to the shared_pool after it has been allocated by capture until the capture process is restarted.

TIMED_STATISTICS
Specifies whether statistics related to time are collected. To collect elapsed time statistics in the data dictionary views related to Streams, set this parameter to TRUE. The views that include elapsed time statistics are V$STREAMS_CAPTURE, V$STREAMS_APPLY_COORDINATOR, V$STREAMS_APPLY_READER, and V$STREAMS_APPLY_SERVER.
Database Version 10gR2

_JOB_QUEUE_INTERVAL = 1
Scan rate interval (in seconds) of the job queue; the default is 5. Setting it to 1 improves the scan rate for propagation jobs to every second rather than every 5 seconds.

COMPATIBLE = 10.2.0.0
Specifies the release with which the Oracle server must maintain compatibility. Oracle servers with different compatibility levels can interoperate. To use the Streams features introduced in Oracle Database 10g Release 2, this parameter must be set to 10.2.0.0 or higher.

JOB_QUEUE_PROCESSES > 4 (greater than the number of propagations defined)
Specifies the number of Jn job queue processes for each instance (J000 ... J999). Job queue processes handle requests created by DBMS_JOB. This parameter controls the maximum number of jobs that can run concurrently within the instance and should be set to a value greater than the number of propagations configured for the database. Be sure to increase it if there are any automated jobs configured for the database.

LOG_ARCHIVE_DEST_STATE_n
Specifies the availability state of the corresponding destination; the parameter suffix (1 through 10) specifies one of the ten corresponding LOG_ARCHIVE_DEST_n destination parameters. Enable archive logging to the specified destination for both local and downstream capture. To use downstream capture and copy the redo log files to the downstream database using log transport services, make sure the destination that corresponds to the LOG_ARCHIVE_DEST_n destination for the downstream database is set to ENABLE.

PARALLEL_MAX_SERVERS
Default: derived from the values of CPU_COUNT, PARALLEL_ADAPTIVE_MULTI_USER, and PARALLEL_AUTOMATIC_TUNING. Specifies the maximum number of parallel execution processes and parallel recovery processes for an instance. As demand increases, Oracle increases the number of processes from the number created at instance startup up to this value.

SGA_MAX_SIZE
Specifies the maximum size of the SGA for the lifetime of a database instance. To run multiple Streams processes on a single database, you may need to increase this parameter.

SGA_TARGET = 0
Specifies the total size of all System Global Area (SGA) components. If this parameter is set to a nonzero value, the size of the Streams pool is managed by Automatic Shared Memory Management. For best results, size the shared_pool and streams_pool explicitly.

STREAMS_POOL_SIZE (tune)
Specifies (in bytes) the size of the Streams pool. The Streams pool contains buffered queue messages and is also used for internal communications during parallel capture and apply. Refer to V$STREAMS_POOL_ADVICE to determine the correct size and avoid excessive spills. This parameter is modifiable; however, if it is reduced to zero while an instance is running, Streams processes and jobs will not run. The size of the Streams pool is affected by each of the following factors:

- Capture process parallelism. Increase the Streams pool size by 10 MB for each capture process. In addition, if the capture parameter PARALLELISM is set greater than 1, increase the Streams pool size by 10 MB * parallelism. For example, if parallelism is set to 3 for a capture process, increase the Streams pool by 30 MB.

- Apply process parallelism. Increase the Streams pool size by 1 MB for each apply process. In addition, if the apply parameter PARALLELISM is set greater than 1, increase the Streams pool size by 1 MB * parallelism. For example, if parallelism is set to 5 for an apply process, increase the Streams pool by 5 MB.

- Buffered queue volume. Logical Change Records (LCRs) are stored in the buffered queue. Increase the Streams pool to handle the volume of replicated data managed at both the source and target databases.
3.0 Database Storage

3.1. Tablespace for Streams Administrator queues

Create a separate tablespace for the Streams administrator schema (STRMADMIN) at each participating Streams database. This tablespace will be used for any objects created in the Streams administrator schema, including any spillover of messages from the in-memory queue.
For example:
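A minimal sketch of such a tablespace; the tablespace name, datafile path, and sizes below are illustrative, not prescribed by this note:

```sql
-- Dedicated tablespace for the Streams administrator (names/sizes are illustrative)
CREATE TABLESPACE streams_tbs
  DATAFILE '/u01/oradata/db1/streams_tbs01.dbf' SIZE 100M
  AUTOEXTEND ON MAXSIZE UNLIMITED;

-- Make it the default tablespace of the Streams administrator schema
ALTER USER strmadmin DEFAULT TABLESPACE streams_tbs;
```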
3.2. Separate queues for capture and apply

Configure separate queues for changes that are captured locally and for receiving captured changes from each remote site. This is especially important when configuring bi-directional replication between multiple databases. For example, consider the situation where database db1.net replicates its changes to database db2.net, and db2.net replicates to db1.net. Each database maintains two queues: one for capturing the changes made locally, and another for receiving changes from the other database. Similarly, for three databases (db1.net, db2.net, db3.net) replicating their local changes directly to each other, there will be three queues at each database. For example, at db1.net: queue1 for the capture process, and queue2 and queue3 for receiving changes from each of the other databases. The two apply processes on db1.net (apply_from_db2, apply_from_db3) apply the changes, each associated with a specific queue (queue2 or queue3).
Queue names should not exceed 24 characters in length. Queue table names should not exceed 24
characters in length. To pre-create a queue for Streams, use the SET_UP_QUEUE procedure in the
DBMS_STREAMS_ADM package. If you use the MAINTAIN_TABLES, MAINTAIN_SCHEMAS, or
MAINTAIN_GLOBAL procedures to configure Streams and do not identify specific queue names,
individual queues will be created automatically.
Example: To configure a site (SITEA) that is capturing changes for distribution to another site, as well as
receiving changes from that other site (SITEB), configure each queue at SITEA with a separate
queue_table as follows:
exec dbms_streams_adm.set_up_queue(queue_table_name => 'QT_CAP_SITEA', queue_name => 'CAP_SITEA');

exec dbms_streams_adm.set_up_queue(queue_table_name => 'QT_APP_FROM_SITEB', queue_name => 'APP_FROM_SITEB');
If desired, the above set_up_queue procedure calls can include a storage_clause parameter to configure
separate tablespace and storage specifications for each queue table. Typically, Logical Change Records
(LCRs) are queued to an in-memory buffer and processed from memory. However, they can be spilled to
disk if they remain in memory too long, due to an unavailable destination or to memory pressure
(Streams_Pool memory is too low). The storage clause parameter can be used to preallocate space for
the queue table or specify an alternative tablespace for the queue table without changing the default
tablespace for the Streams Administrator.
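As a sketch, a queue table can be placed in its own tablespace via the storage_clause parameter; the queue, queue table, and tablespace names here are hypothetical:

```sql
BEGIN
  dbms_streams_adm.set_up_queue(
    queue_table_name => 'QT_CAP_SITEA',              -- hypothetical names
    queue_name       => 'CAP_SITEA',
    storage_clause   => 'TABLESPACE streams_q_tbs'); -- alternative tablespace
END;
/
```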
4.0 Privileges
The Streams administrator (strmadmin) must be granted the following privileges on each participating Streams database:
BEGIN
DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
privilege => DBMS_RULE_ADM.CREATE_RULE_SET_OBJ,
grantee => 'strmadmin',
grant_option => FALSE);
END;
/
BEGIN
DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
privilege => DBMS_RULE_ADM.CREATE_RULE_OBJ,
grantee => 'strmadmin',
grant_option => FALSE);
END;
/
In order to create capture and apply processes, the Streams Administrator must have DBA privilege. This
privilege must be explicitly granted to the Streams Administrator.
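For example, assuming the Streams administrator schema is named strmadmin as elsewhere in this note:

```sql
GRANT DBA TO strmadmin;
```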
In addition, other required privileges must be granted to the Streams Administrator schema (strmadmin)
on each participating Streams database with the GRANT_ADMIN_PRIVILEGE procedure:
In Oracle 10g and above, all the above (except DBA) can be granted using the procedure:
DBMS_STREAMS_AUTH.GRANT_ADMIN_PRIVILEGE('STRMADMIN');
5.0 Source Site Configuration

The following recommendations apply to source databases, i.e., databases in which Streams capture is configured.
5.1. Streams and Flash Recovery Area (FRA)
In Oracle 10g and above, configure a separate log archive destination independent of the Flash Recovery
Area for the Streams capture process for the database. Archive logs in the FRA can be removed
automatically on space pressure, even if the Streams capture process still requires them. Do not allow the
archive logs for Streams capture to reside solely in the FRA.
5.2. Archive Logging must be enabled

Verify that each source database is running in ARCHIVELOG mode. For downstream capture sites (i.e., databases in which the Streams capture is configured for another database), the database at which the source redo logs are created must have archive logging enabled.
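One way to verify, and if necessary enable, ARCHIVELOG mode is sketched below; enabling it requires a clean shutdown and restart in MOUNT state:

```sql
-- Check the current mode
SELECT log_mode FROM v$database;

-- If it reports NOARCHIVELOG, enable archiving (requires downtime):
SHUTDOWN IMMEDIATE
STARTUP MOUNT
ALTER DATABASE ARCHIVELOG;
ALTER DATABASE OPEN;
```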
5.3. Supplemental logging

Confirm that supplemental logging is enabled at each source site. In 9iR2, Streams apply requires unconditional logging of unique index and foreign key columns, even if those columns are not modified. This is due to Bug 4198593 ("Apply incorrectly requires unconditional logging of Unique and FK constraints"), fixed in 9.2.0.8.
If you set the parallelism apply process parameter to a value greater than 1, then you must specify a
conditional supplemental log group at the source database for all of the unique and foreign key columns
in the tables for which an apply process applies changes. Supplemental logging may be required for other
columns in these tables as well, depending on your configuration.
Any columns specified in rule-based transformations or used within DML Handlers at target site must be
unconditionally logged at the source site.
Supplemental logging can be specified at the source either at the database level or for the individual
replicated table.
In 10gR2, supplemental logging is automatically configured for tables on which primary, unique, or foreign
keys are defined when the database object is prepared for Streams capture. The procedures for
maintaining streams and adding rules in the DBMS_STREAMS_ADM package automatically prepare
objects for a local Streams capture. For downstream capture sites (ie, databases in which the Streams
capture is configured for another database), the database at which the source redo logs are created must
have supplemental logging for the database objects of interest to the downstream capture process.
All target site indexed columns, including the primary key, unique index, and foreign key columns of a
replicated table or database must be logged at the source site. Primary Key logging must be
unconditionally logged, unique index and foreign keys can be conditionally logged. This supplemental
logging is enabled automatically when the source table is prepared for capture with
DBMS_CAPTURE_ADM.PREPARE_TABLE_INSTANTIATION.
Any columns specified in rule-based transformations or used within DML Handlers at target site must be
unconditionally logged at the source site. Supplemental logging for these columns must be configured
explicitly by the database administrator, using the ALTER TABLE ... ADD SUPPLEMENTAL LOG ... syntax.
To verify that supplemental logging has been specified at the source, either at the database level or for the individual replicated tables:

Database-level logging:
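For example, database-level supplemental logging can be checked in V$DATABASE, and table-level log groups in DBA_LOG_GROUPS; the column list below assumes the 10g data dictionary, and the owner is illustrative:

```sql
-- Database-level supplemental logging
SELECT supplemental_log_data_min,
       supplemental_log_data_pk,
       supplemental_log_data_ui,
       supplemental_log_data_fk
  FROM v$database;

-- Table-level supplemental log groups (owner is illustrative)
SELECT log_group_name, table_name, always
  FROM dba_log_groups
 WHERE owner = 'MYACCT';
```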
5.4. Implement a Heartbeat Table

To ensure that the applied_scn column of the DBA_CAPTURE view is updated periodically, implement a heartbeat table. A heartbeat table is especially useful for databases with a low activity rate. The Streams capture process requests a checkpoint after every 10 MB of generated redo. During the checkpoint, the Streams metadata is maintained if there are active transactions. Implementing a heartbeat table ensures that open transactions occur regularly within the source database, providing additional opportunities for the metadata to be updated frequently. Additionally, the heartbeat table gives the database administrator quick feedback on the health of the Streams replication.
To implement a heartbeat table: Create a table at the source site that includes a date or timestamp
column and the global name of the database. Add a rule to capture changes to this table and propagate
the changes to each target destination. Make sure that the target destination will apply changes to this
table as well. Set up an automated job to update this table at the source site periodically, for example
every minute.
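The steps above can be sketched as follows; the table name, columns, and job interval are illustrative, and the scheduler call assumes 10g (on 9iR2, DBMS_JOB would be used instead). A table rule for the heartbeat table must also be added to the capture, propagation, and apply rule sets, as described above.

```sql
-- Heartbeat table at the source site (names are illustrative)
CREATE TABLE strmadmin.heartbeat (
  db_name     VARCHAR2(128),
  update_time TIMESTAMP
);

INSERT INTO strmadmin.heartbeat
  SELECT global_name, SYSTIMESTAMP FROM global_name;
COMMIT;

-- Automated job to update the table every minute (10g DBMS_SCHEDULER)
BEGIN
  dbms_scheduler.create_job(
    job_name        => 'HEARTBEAT_JOB',
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'UPDATE strmadmin.heartbeat SET update_time = SYSTIMESTAMP; COMMIT;',
    repeat_interval => 'FREQ=MINUTELY;INTERVAL=1',
    enabled         => TRUE);
END;
/
```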
5.5. Flow Control

In Oracle 10g and above, flow control is handled automatically by the database, so there is no need to implement it manually. See also: Script to Prevent Excessive Spill of Message From the Streams Buffer Queue To Disk (Doc ID 259609.1).
5.7. Capture Process Configuration

A. Configuring Capture

Capture requires a rule set with rules. The ADD_GLOBAL_RULES procedure cannot be used to capture DML changes for the entire database; it can, however, be used to capture all DDL changes for the database.
A single Streams capture can process rules for multiple tables or schemas. For best performance, rules
should be simple. Rules that include NOT or LIKE clauses are not simple and will impact the
performance of Streams.
Minimize the number of rules added into the process rule set. A good rule of thumb is to keep the number
of rules in the rule set to less than 100. If more objects need to be included in the ruleset, consider
constructing rules using the IN clause. For example, a rule for the 6 TB_M21* tables in the MYACCT
schema would look like the following:
(:dml.get_object_owner() = 'MYACCT' and :dml.is_null_tag() = 'Y' and
:dml.get_object_name() IN ('TB_M21_1','TB_M21_2','TB_M21_3',
'TB_M21_40','TB_M21_10','TB_M211B010'))
If the maintain_* procedures are not suitable for your environment, please use the ADD_*_RULES
procedures (ADD_TABLE_RULES, ADD_SCHEMA_RULES for DML and DDL, ADD_SUBSET_RULES
for DML only, and ADD_GLOBAL_RULES for DDL only). These procedures minimize the number of
steps required to configure Streams processes. It is also possible to create rules for non-existent objects,
so be sure to check the spelling of each object specified in a rule carefully.
The Streams capture process requires a rule set with rules. The ADD_GLOBAL_RULES procedure can be used to capture DML changes for the entire database, as long as a negative rule set is created for the capture process that includes rules for objects with unsupported datatypes. ADD_GLOBAL_RULES can also be used to capture all DDL changes for the database.
A single Streams capture can process changes for multiple tables or schemas. For best performance,
rules for these multiple tables or schemas should be simple. Rules that include LIKE clauses are not
simple and will impact the performance of Streams.
To eliminate changes for particular tables or objects, specify the include_tagged_lcr clause along with the
table or object name in the negative rule set for the Streams process. Setting this clause will eliminate
ALL changes, tagged or not, for the table or object.
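As a sketch, a table can be excluded from an existing capture process by adding it to the negative rule set with include_tagged_lcr set to TRUE; the process, queue, and table names below are hypothetical:

```sql
BEGIN
  dbms_streams_adm.add_table_rules(
    table_name         => 'myacct.tb_skip',          -- hypothetical table
    streams_type       => 'capture',
    streams_name       => 'capture_ex',              -- hypothetical process
    queue_name         => 'strmadmin.streams_queue',
    include_dml        => TRUE,
    include_ddl        => TRUE,
    include_tagged_lcr => TRUE,   -- eliminate ALL changes, tagged or not
    inclusion_rule     => FALSE); -- add the rules to the negative rule set
END;
/
```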
B. Capture Parameters

_SGA_SIZE (Default: 10)
Amount of memory (in MB) available from the Streams pool for logminer processing; by default, 10 MB of streams_pool memory is allocated to logminer. This parameter should not be increased unless the logminer error ORA-1341 is encountered, which is most likely in environments where large LOBs are processed. Streams pool memory allocated to logminer is unavailable for other use.
Capture parameters can be set using the SET_PARAMETER procedure from the
DBMS_CAPTURE_ADM package. For example, to set the checkpoint frequency of the streams capture
process named CAPTURE_EX, use the following syntax while logged in as the Streams Administrator to
request a logminer checkpoint after processing every Gigabyte (1000Mb) of redo:
dbms_capture_adm.set_parameter('capture_ex','_checkpoint_frequency','1000');
5.8. Propagation Configuration

A. Configuring Propagation

If the maintain_* procedures are not suitable for your environment (Oracle 9iR2 and 10gR1), use the ADD_*_PROPAGATION_RULES procedures (ADD_TABLE_PROPAGATION_RULES, ADD_SCHEMA_PROPAGATION_RULES, and ADD_GLOBAL_PROPAGATION_RULES for both DML and DDL; ADD_SUBSET_PROPAGATION_RULES for DML only). These procedures minimize the number of steps required to configure Streams processes. It is also possible to create rules for non-existent objects, so be sure to check the spelling of each object specified in a rule carefully.
The rules in the rule set for propagation can differ from the rules specified for the capture process. For
example, to configure that all captured changes be propagated to a target site, a single
ADD_GLOBAL_PROPAGATION_RULES procedure can be specified for the propagation even though
multiple ADD_TABLE_RULES might have been configured for the capture process.
B. Propagation mode
For new propagation processes configured in 10.2 and above, set the queue_to_queue propagation parameter to TRUE. If the database is RAC-enabled, an additional service, typically named in the format sys$schema.queue_name.global_name, is created when the Streams subscribers are initially created. A Streams subscriber is a defined propagation between two Streams queues, or an apply process with the apply_captured parameter set to TRUE. This service automatically follows the ownership of the queue when queue ownership switches (i.e., instance startup, shutdown, etc.). The service name can be found in the NETWORK_NAME column of the DBA_SERVICES view.
If the maintain_* (TABLE,SCHEMA,GLOBAL) procedures are used to configure Streams,
queue_to_queue is automatically set to TRUE, if possible. The database link for this queue_to_queue
propagation must use a TNS servicename (or connect name) that specifies the GLOBAL_NAME in the
CONNECT_DATA clause of the descriptor. See section 6 on Additional Considerations for RAC below.
Propagation processes configured prior to 10.2 continue to use the dblink mode of propagation. In this situation, if the database link no longer connects to the owning instance of the queue, propagation will not succeed. You can continue to use the 10.1 best practices for this propagation, or recreate the propagation during a maintenance window. Make sure the queue is empty, with no unapplied spilled messages, before you drop the propagation; then recreate the propagation with the queue_to_queue parameter set to TRUE.
Queues created prior to 10.2 on RAC instances should be dropped and recreated in order to take
advantage of the automatic service generation and queue_to_queue propagation. Be sure to perform this
activity when the queue is empty and no new LCRs are being enqueued into the queue.
C. Propagation Parameters

LATENCY = 5 (Default: 60)
Maximum wait, in seconds, in the propagation window for a message to be propagated after it is enqueued. Caution: if latency is not specified for this call, any existing value is overwritten with the default value (60). For example, if the latency is 60 seconds and there are no messages to be propagated during the propagation window, messages from that queue for the destination will not be propagated for at least 60 more seconds; it will be at least 60 seconds before the queue is checked again for messages for the specified destination. If the latency is 600, the queue will not be checked for 10 minutes. If the latency is 0, a job queue process waits for messages to be enqueued for the destination and propagates them as soon as they are enqueued.
Propagation parameters can be set using the ALTER_PROPAGATION_SCHEDULE procedure from the
DBMS_AQADM package. For example, to set the latency parameter of the streams propagation from the
STREAMS_QUEUE owned by STRMADMIN to the target database whose global_name is DEST_DB for
the queue Q1, use the following syntax while logged in as the Streams Administrator:
dbms_aqadm.alter_propagation_schedule('strmadmin.streams_queue', 'DEST_DB', destination_queue => 'Q1', latency => 5);
D. Network Connectivity
When using Streams propagation across a Wide Area Network (WAN), increase the session data unit
(SDU) to improve the propagation performance. The maximum value for SDU is 32K (32767). The SDU
value for network transmission is negotiated between the sender and receiver sides of the connection: the
minimum SDU value of the two endpoints is used for any individual connection. In order to take
advantage of an increased SDU for Streams propagation, the receiving side sqlnet.ora file must include
the default_sdu_size parameter. The receiving side listener.ora must indicate the SDU change for the
SID. The sending side tnsnames.ora connect string must also include the SDU modification for the
particular service.
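A sketch of the SDU settings described above; the host name, SID, ORACLE_HOME path, and service names are illustrative:

```
# sqlnet.ora (receiving side)
DEFAULT_SDU_SIZE=32767

# listener.ora (receiving side) -- SDU specified for the SID
SID_LIST_LISTENER=
  (SID_LIST=
    (SID_DESC=(SDU=32767)(SID_NAME=db)(ORACLE_HOME=/u01/app/oracle/product/10.2.0)))

# tnsnames.ora (sending side) -- SDU specified in the connect descriptor
DEST_DB=
  (DESCRIPTION=(SDU=32767)
    (ADDRESS=(PROTOCOL=tcp)(HOST=target-host)(PORT=1521))
    (CONNECT_DATA=(SERVICE_NAME=dest_db.mycompany.com)))
```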
Tuning the TCP/IP networking parameters can significantly improve performance across the WAN. Here are some example tuning parameters for Linux; set them in the /etc/sysctl.conf file and apply them by running sysctl -p. When using RAC, be sure to configure this on each node.
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
# increase Linux autotuning TCP buffer limits
# (min, default, and max number of bytes to use)
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
For further information, please review the Oracle Net Services Administrator's Guide.
5.9. Additional Configuration for RAC Environments for a Source Database

The archive log threads from all instances must be available to any instance running a capture process. This is true for both local and downstream capture.
Queue Ownership
When Streams is configured in a RAC environment, each queue table has an "owning" instance. All queues within an individual queue table are owned by the same instance, and the Streams components (capture/propagation/apply) all use that owning instance to perform their work. Ownership of the queue can be configured to remain on a specific instance, as long as that instance is available, by setting the PRIMARY_INSTANCE and/or SECONDARY_INSTANCE parameters of DBMS_AQADM.ALTER_QUEUE_TABLE. If the primary_instance is set to a specific instance (i.e., not 0), queue ownership returns to the specified instance whenever that instance is up.
Capture will automatically follow the ownership of the queue. If the ownership changes while capture is
running, capture will stop on the current instance and restart at the new owner instance.
For queues created with Oracle Database 10g Release 2, a service is created with the service name schema.queue and the network name SYS$schema.queue.global_name for that queue. If the global_name of the database does not match the db_name.db_domain of the database, be sure to include the global_name as a service name in the init.ora.

For propagations created with Oracle Database 10g Release 2 with the queue_to_queue parameter set to TRUE, the propagation job delivers only to the specific queue identified. Also, the source database link for the target database connect descriptor must specify the correct service (the global name of the target database) to connect to the target database. For example, the tnsnames.ora entry for the target database should include the CONNECT_DATA clause in the connect descriptor, specifying (CONNECT_DATA=(SERVICE_NAME='global_name of target database')). Do NOT include a specific INSTANCE in the CONNECT_DATA clause.
For example, consider the tnsnames.ora file for a database with the global name db.mycompany.com.
Assume that the alias name for the first instance is db1 and that the alias for the second instance is db2.
The tnsnames.ora file for this database might include the following entries:
db.mycompany.com=
(description=
(load_balance=on)
(address=(protocol=tcp)(host=node1-vip)(port=1521))
(address=(protocol=tcp)(host=node2-vip)(port=1521))
(connect_data=
(service_name=db.mycompany.com)))
db1.mycompany.com=
(description=
(address=(protocol=tcp)(host=node1-vip)(port=1521))
(connect_data=
(service_name=db.mycompany.com)
(instance_name=db1)))
db2.mycompany.com=
(description=
(address=(protocol=tcp)(host=node2-vip)(port=1521))
(connect_data=
(service_name=db.mycompany.com)
(instance_name=db2)))
Use the first tnsnames.ora alias above (db.mycompany.com, the entry without an instance_name) in the target database link USING clause.
DBA_SERVICES lists all services for the database; GV$ACTIVE_SERVICES identifies all active services for the database. In non-RAC configurations, the service name will typically be the global_name. However, it is possible to manually create alternative services and use them in the TNS connect_data specification. For RAC configurations, the service will appear in these views as SYS$schema.queue.global_name.
Propagation Restart

To stop and restart a propagation, use the following procedures:

exec DBMS_PROPAGATION_ADM.STOP_PROPAGATION('name_of_propagation');

or, to stop it immediately:

exec DBMS_PROPAGATION_ADM.STOP_PROPAGATION('name_of_propagation', force => TRUE);

exec DBMS_PROPAGATION_ADM.START_PROPAGATION('name_of_propagation');
6.0 Target Site Configuration

6.1. Privileges

Grant explicit privileges to the apply user (APPLY_USER) on the user tables.

Examples:
Privileges for table-level DDL: CREATE (ANY) TABLE, CREATE (ANY) INDEX, CREATE (ANY) PROCEDURE
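For example, DML privileges on a replicated table might be granted as follows; the schema, table, and apply user names are hypothetical:

```sql
-- Explicit DML privileges for the apply user on a replicated table
GRANT INSERT, UPDATE, DELETE ON myacct.tb_m21_1 TO strmadmin;
```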
6.2. Instantiation

Set instantiation SCNs manually if not using export/import. If manually configuring the instantiation SCN for each table within a schema, use the RECURSIVE => TRUE option of the DBMS_APPLY_ADM.SET_SCHEMA_INSTANTIATION_SCN procedure.

For DDL, set the instantiation SCN at the next higher level (i.e., SCHEMA or GLOBAL level).
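A sketch of setting the schema instantiation SCN manually: obtain the current SCN at the source and record it at the target over a database link. The schema, source database name, and database link name below are hypothetical:

```sql
DECLARE
  iscn NUMBER;
BEGIN
  -- Current SCN at the source database
  iscn := dbms_flashback.get_system_change_number();

  -- Record it at the target database (target_db is a hypothetical dblink)
  dbms_apply_adm.set_schema_instantiation_scn@target_db(
    source_schema_name   => 'MYACCT',
    source_database_name => 'DB1.NET',
    instantiation_scn    => iscn,
    recursive            => TRUE);  -- also sets the SCN for each table in the schema
END;
/
```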
6.3. Conflict Resolution

To simplify conflict resolution on tables with LOB columns, create an error handler to handle errors for the table. When registering the handler using the DBMS_APPLY_ADM.SET_DML_HANDLER procedure, be sure to specify the ASSEMBLE_LOBS parameter as TRUE.
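A sketch of registering such an error handler; the handler procedure and table names are hypothetical, and the assemble_lobs parameter assumes 10g:

```sql
BEGIN
  dbms_apply_adm.set_dml_handler(
    object_name    => 'myacct.tb_with_lobs',          -- hypothetical table
    object_type    => 'TABLE',
    operation_name => 'UPDATE',
    error_handler  => TRUE,                           -- invoked only on apply errors
    user_procedure => 'strmadmin.lob_error_handler',  -- hypothetical handler
    assemble_lobs  => TRUE);                          -- assemble LOB chunks into one LCR
END;
/
```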
6.4. Apply Process Configuration

A. Configuring Apply

If the maintain_* procedures are not suitable for your environment, use the ADD_*_RULES procedures (ADD_TABLE_RULES, ADD_SCHEMA_RULES, and ADD_GLOBAL_RULES for DML and DDL; ADD_SUBSET_RULES for DML only).

Apply can be configured with or without a rule set. ADD_GLOBAL_RULES can be used to apply all changes in the queue for the database. If no rule set is specified for the apply process, all changes in the queue are processed by the apply process.
A single Streams apply process can process rules for multiple tables or schemas located in a single queue, received from a single source database. For best performance, rules should be simple. Rules that include LIKE clauses are not simple and will impact the performance of Streams.
To eliminate changes for particular tables or objects, specify the include_tagged_lcr clause along with the
table or object name in the negative rule set for the Streams process. Setting this clause will eliminate all
changes, tagged or not, for the table or object.
B. Parameters

DISABLE_ON_ERROR = N (Default: Y)
If Y, the apply process is disabled on the first unresolved error, even if the error is not fatal.
Apply parameters can be set using the SET_PARAMETER procedure from the DBMS_APPLY_ADM
package. For example, to set the DISABLE_ON_ERROR parameter of the streams apply process named
APPLY_EX, use the following syntax while logged in as the Streams Administrator:
exec dbms_apply_adm.set_parameter('apply_ex','disable_on_error','n');
In some cases, performance can be improved by setting the following hidden parameter. This parameter
should be set when the major workload is UPDATEs and the updates are performed on just a few
columns of a many-column table.
_DYNAMIC_STMTS=Y (default: N). If Y, then for UPDATE statements the apply process will
optimize the generation of SQL statements based on required columns.
_HASH_TABLE_SIZE=1000000 (default: 80*parallelism). Sets the size of the hash table used to
calculate transaction dependencies to 1 million entries.
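These hidden parameters are set with the same SET_PARAMETER procedure shown above; a sketch, assuming the apply process is named APPLY_EX (hidden parameters should only be changed on the advice of Oracle Support):

```sql
-- Sketch only: set hidden apply parameters only when the workload justifies it.
exec dbms_apply_adm.set_parameter('apply_ex','_dynamic_stmts','y');
exec dbms_apply_adm.set_parameter('apply_ex','_hash_table_size','1000000');
```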
When Streams is configured in a RAC environment, each queue table has an "owning" instance. All
queues within an individual queue table are owned by the same instance. The Streams components
(capture/propagation/apply) all use that same owning instance to perform their work. This means that:
- the database link specified in the propagation must connect to the owning instance of the target queue
- the apply process is run at the owning instance of the target queue
Ownership of the queue can be configured to remain on a specific instance, as long as that instance is
available, by setting the PRIMARY_INSTANCE and SECONDARY_INSTANCE parameters of
DBMS_AQADM.ALTER_QUEUE_TABLE. If PRIMARY_INSTANCE is set to a specific instance (i.e., not 0),
queue ownership will return to the specified instance whenever that instance is up.
Apply will automatically follow the ownership of the queue. If the ownership changes while apply is
running, apply will stop on the current instance and restart at the new owner instance.
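A minimal sketch of pinning queue ownership (the queue table name and instance numbers are assumptions):

```sql
BEGIN
  DBMS_AQADM.ALTER_QUEUE_TABLE(
    queue_table        => 'STRMADMIN.STREAMS_QUEUE_TABLE',  -- hypothetical queue table
    primary_instance   => 1,   -- preferred owning instance
    secondary_instance => 2);  -- failover owner while instance 1 is down
END;
/
```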
See the OPERATION section on Global_name below. The following are some additional considerations
when running in a RAC environment. If the GLOBAL_NAME of the database is changed, ensure that the
queue is empty before changing the name and that the apply process is dropped and recreated with the
apply_captured parameter = TRUE. In addition, if the GLOBAL_NAME does not match the
db_name.db_domain of the database, include the GLOBAL_NAME in the list of services for the database
in the database parameter initialization file.
OPERATION
A Streams process will automatically restart after a database startup, assuming that the process was in a
running state before the database shut down. No special startup or shutdown procedures are required in
the normal case.
Global Name
Streams uses the GLOBAL_NAME of the database to identify changes from or to a particular database.
Do not modify the GLOBAL_NAME of a Streams database after capture has been created. Changes
captured by the Streams capture process automatically include the current global name of the source
database. This means that if the global name is modified after a capture process has been configured, the
capture process will need to be dropped and recreated following the GLOBAL_NAME modification. In
addition, the system-generated rules for capture, propagation, and apply typically specify the global name
of the source database. These rules will need to be modified or recreated to adjust the
source_database_name. Finally, if the GLOBAL_NAME does not match the db_name.db_domain of the
database, include the GLOBAL_NAME in the list of services for the database in the database parameter
initialization file.
If the global name must be modified on the database, do it at a time when NO user changes are possible
on the database and the Streams queues are empty with no outstanding changes to be applied, so that
the Streams configuration can be recreated. Keep in mind that all subscribers (propagations to target
databases and the target apply processes) must also be recreated if the source database GLOBAL_NAME is
changed. Follow the directions in the Streams Replication Administrator's Guide for changing the DBID or
GLOBAL_NAME of a source database.
It is also strongly recommended that the database init.ora parameter global_names be set to TRUE to
guarantee that database link names match the global name of the target database.
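To check and enforce this, something like the following can be used (a sketch; whether SCOPE=BOTH applies depends on an spfile being in use):

```sql
-- Verify the database global name
SELECT * FROM GLOBAL_NAME;

-- Require database link names to match the remote database's global name
ALTER SYSTEM SET global_names = TRUE SCOPE = BOTH;
```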
Backup Considerations
1. Ensure that any manual backup procedures that include any of the following statements include
a non-null Streams tag:
The tag should be chosen such that these DDL commands will be ignored by the capture rule set.
To set a streams tag, use the DBMS_STREAMS.SET_TAG procedure. A non-null tag should be specified
to avoid capturing these commands.
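A hedged sketch of wrapping backup-related DDL in a non-null tag (the tag value shown is an assumption and must be one that your capture rule set ignores):

```sql
-- Set a non-null tag so subsequent changes in this session are skipped by capture
exec DBMS_STREAMS.SET_TAG(tag => HEXTORAW('11'));

-- ... run the manual backup-related DDL here ...

-- Clear the tag to resume normal capture of this session's changes
exec DBMS_STREAMS.SET_TAG(tag => NULL);
```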
3. Ensure that all archive logs (from all threads) are available. Database recovery depends on the
availability of these logs, and a missing log will result in incomplete recovery.
4. Ensure that the APPLY process parameter, COMMIT_SERIALIZATION, is set to the default value,
FULL.
5. Implement a "heartbeat" table. To ensure that the APPLIED_SCN of the DBA_CAPTURE view is
updated periodically, implement a "heartbeat" table. Implementing a heartbeat table ensures that the
metadata is updated frequently. Additionally, the heartbeat table provides quick feedback on the health
of Streams replication. Refer to the Source Site Configuration section: Implement a Heartbeat Table for
more details.
6. In situations that result in incomplete recovery (point-in-time recovery) at the source site, follow
the instructions in Chapter 9 of the Streams Replication Administrator's Guide.
7. In situations that result in incomplete recovery at the destination site, follow the instructions in
Chapter 9 of the Streams Replication Administrator's Guide.
Batch Processing
For best performance, the commit point for batch processing should be kept low. It is preferable that
excessively large batch processing be run independently at each site. If this technique is utilized, be sure
to implement DBMS_STREAMS.SET_TAG to skip the capture of batch processing session. Setting this
tag is valid only in the connected session issuing the set_tag command and will not impact the capture of
changes from any other database sessions.
DDL Replication
When replicating DDL, keep in mind the effect the DDL statement will have on the replicated sites. In
particular, do not allow system generated naming for constraints or indexes, as modifications to these will
most likely fail at the replicated site. Also, storage clauses may cause some issues if the target sites are
not identical.
If you decide NOT to replicate DDL in your Streams environment, any table structure change must be
performed manually.
Refer to Document 313478.1 Performing Manual DDL in a Streams Environment
Propagation
At times, the propagation job may become "broken" or fail to start after an error has been encountered or
after a database restart. The typical solution is to disable the propagation and then re-enable it.
exec dbms_propagation_adm.stop_propagation('propagation_name');
exec dbms_propagation_adm.start_propagation('propagation_name');
If the above does not fix the problem, perform a stop of propagation with the force parameter and then
start propagation again.
exec dbms_propagation_adm.stop_propagation('propagation_name',force=>true);
exec dbms_propagation_adm.start_propagation('propagation_name');
An additional side effect of stopping the propagation with the force parameter is that the statistics for the
propagation are cleared.
The above is documented in the Streams Replication Administrator's Guide: Restart Broken Propagations
Source Queue Growth
Automatic flow control minimizes the impact of source queue growth. Queued messages (LCRs) for
unavailable target sites will spill to disk storage while messages for available sites are processed
normally.
Propagation is implemented using the DBMS_JOB subsystem. If a job is unable to execute 16
successive times, the job will be marked as "broken" and become disabled. Be sure to
periodically check that the job is running successfully to minimize source queue growth due to
this problem.
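One way to check the propagation job from the DBMS_JOB catalog (a sketch; releases that schedule propagation through DBMS_SCHEDULER expose it in different views):

```sql
-- List job-queue jobs, flagging broken ones first; propagation jobs typically
-- show an AQ propagation call in the WHAT column.
SELECT job, what, failures, broken
FROM   dba_jobs
ORDER  BY broken DESC, failures DESC;
```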
Streams Cleanup/Removal
The entire Streams configuration can be removed by dropping the Streams administrator schema with
DROP USER ... CASCADE.
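A sketch of both cleanup options (the administrator name strmadmin is the one used elsewhere in this note):

```sql
-- Option 1: remove the Streams configuration but keep the administrator schema
EXECUTE DBMS_STREAMS_ADM.REMOVE_STREAMS_CONFIGURATION;

-- Option 2: drop the Streams administrator and everything it owns
DROP USER strmadmin CASCADE;
```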
Automatic Optimizer Statistics Collection
Oracle has the ability to restore old statistics on tables, including data dictionary tables, using the
DBMS_STATS.RESTORE_* APIs. This feature can be used for short-term resolution, but the preferred
long-term solution is to lock the optimizer statistics of volatile tables.
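A sketch of the two approaches (the owner, table name, and timestamp are assumptions):

```sql
-- Preferred: lock the optimizer statistics of a volatile table
exec DBMS_STATS.LOCK_TABLE_STATS('STRMADMIN', 'STREAMS_QUEUE_TABLE');

-- Short-term alternative: restore statistics as of an earlier point in time
exec DBMS_STATS.RESTORE_TABLE_STATS('STRMADMIN', 'STREAMS_QUEUE_TABLE', SYSTIMESTAMP - INTERVAL '1' DAY);
```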
MONITORING
All Streams processing is done at the "owning instance" of the queue. To determine the owning instance,
use the query below:
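The query itself is not reproduced in this copy of the note; a typical query against the AQ catalog (a sketch, assuming the Streams administrator is STRMADMIN) is:

```sql
-- Owning instance of each Streams queue table
SELECT queue_table, owner_instance, primary_instance, secondary_instance
FROM   dba_queue_tables
WHERE  owner = 'STRMADMIN';
```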
To display the monitoring view information, either query the monitoring views from the owning instance or
use the GV$ views for dynamic streams views.
Streams View Name Streams View Name from any RAC instance
V$STREAMS_CAPTURE GV$STREAMS_CAPTURE
V$STREAMS_APPLY_COORDINATOR GV$STREAMS_APPLY_COORDINATOR
V$STREAMS_APPLY_READER GV$STREAMS_APPLY_READER
V$STREAMS_APPLY_SERVER GV$STREAMS_APPLY_SERVER
V$STREAMS_POOL_ADVICE GV$STREAMS_POOL_ADVICE
V$STREAMS_TRANSACTION GV$STREAMS_TRANSACTION
V$BUFFERED_PUBLISHERS GV$BUFFERED_PUBLISHERS
V$BUFFERED_QUEUES GV$BUFFERED_QUEUES
V$BUFFERED_SUBSCRIBERS GV$BUFFERED_SUBSCRIBERS
V$PROPAGATION_RECEIVER GV$PROPAGATION_RECEIVER
V$RULE GV$RULE
V$RULE_SET GV$RULE_SET
V$RULE_SET_AGGREGATE_STATS GV$RULE_SET_AGGREGATE_STATS
Streams Views
DBA_REGISTERED_ARCHIVED_LOG
DBA_RECOVERABLE_SCRIPT
DBA_RECOVERABLE_SCRIPT_BLOCKS
DBA_RECOVERABLE_SCRIPT_ERRORS
DBA_RECOVERABLE_SCRIPT_PARAMS
DBA_STREAMS_ADD_COLUMN
DBA_STREAMS_ADMINISTRATOR
DBA_STREAMS_DELETE_COLUMN
DBA_STREAMS_GLOBAL_RULES
DBA_STREAMS_MESSAGE_CONSUMERS
DBA_STREAMS_MESSAGE_RULES
DBA_STREAMS_NEWLY_SUPPORTED
DBA_STREAMS_RENAME_COLUMN
DBA_STREAMS_RENAME_SCHEMA
DBA_STREAMS_RENAME_TABLE
DBA_STREAMS_RULES
DBA_STREAMS_SCHEMA_RULES
DBA_STREAMS_TABLE_RULES
DBA_STREAMS_TRANSFORM_FUNCTION
DBA_STREAMS_TRANSFORMATIONS
DBA_STREAMS_UNSUPPORTED
DBA_RULE_SET_RULES
DBA_RULE_SETS
DBA_RULES
DBA_HIST_BUFFERED_QUEUES
DBA_HIST_BUFFERED_SUBSCRIBERS
DBA_HIST_RULE_SET
DBA_HIST_STREAMS_APPLY_SUM
DBA_HIST_STREAMS_CAPTURE
DBA_HIST_STREAMS_POOL_ADVICE
Capture Views:
DBA_CAPTURE
DBA_CAPTURE_EXTRA_ATTRIBUTES
DBA_CAPTURE_PARAMETERS
DBA_CAPTURE_PREPARED_DATABASE
DBA_CAPTURE_PREPARED_SCHEMAS
DBA_CAPTURE_PREPARED_TABLES
Apply Views:
DBA_APPLY
DBA_APPLY_CONFLICT_COLUMNS
DBA_APPLY_DML_HANDLERS
DBA_APPLY_ENQUEUE
DBA_APPLY_ERROR
DBA_APPLY_EXECUTE
DBA_APPLY_INSTANTIATED_GLOBAL
DBA_APPLY_INSTANTIATED_OBJECTS
DBA_APPLY_INSTANTIATED_SCHEMAS
DBA_APPLY_KEY_COLUMNS
DBA_APPLY_OBJECT_DEPENDENCIES
DBA_APPLY_PARAMETERS
DBA_APPLY_PROGRESS
DBA_APPLY_SPILL_TXN
DBA_APPLY_TABLE_COLUMNS
DBA_APPLY_VALUE_DEPENDENCIES
Alert Log
Streams capture and apply processes report long-running and large transactions in the alert log.
Long-running transactions are open transactions with no activity (i.e., no new change records, rollback,
or commit) for an extended period (20 minutes). Large transactions are open transactions with a large
number of change records. The alert log will report that a long-running or large transaction has
been seen every 20 minutes. Not all such transactions will be reported - only one per 10-minute period.
When the commit or rollback is received, this fact is reported in the alert log as well.
Streams Healthcheck Scripts
The Streams health check script is a collection of queries to determine the configuration of the streams
environment. This script should be run at each participating database in a streams configuration. In
addition to configuration information, analysis of the rules specified for streams is included to enable
quicker diagnosis of problems. A guide to interpreting the output is provided. The healthcheck script is an
invaluable tool for diagnosing customer issues. The Streams Healthcheck script is available from
Document 273674.1 Streams Configuration Report and Health Check Script
To browse through the complete list of streams published articles refer to Knowledge > Browse. Then
select Oracle Technology -> Database -> Information Integration -> Streams.
To learn about Oracle University offerings related to Oracle Streams, refer to Document 762188.1 Oracle
University Offerings Related to Oracle Streams.
References
NOTE:265201.1 - Master Note for Troubleshooting Streams Apply Errors ORA-1403, ORA-26787 or
ORA-26786
NOTE:335516.1 - Master Note for Streams Performance Recommendations
NOTE:789445.1 - Master Note for Streams Setup Scripts
NOTE:1264598.1 - Master Note for Streams Downstream Capture - 10g and 11g [Video]
NOTE:313279.1 - Master Note for Troubleshooting Streams capture 'WAITING For REDO' or
INITIALIZING
NOTE:779801.1 - Streams Conflict Resolution
NOTE:290605.1 - Oracle Streams STRMMON Monitoring Utility
NOTE:730036.1 - Overview for Troubleshooting Streams Performance Issues
NOTE:437838.1 - Streams Specific Patches
NOTE:273674.1 - Streams Configuration Report and Health Check Script
NOTE:259609.1 - Script to Prevent Excessive Spill of Message From the Streams Buffer Queue To Disk
NOTE:365648.1 - Explain TXN_LCR_SPILL_THRESHOLD in Oracle10GR2 Streams
NOTE:782541.1 - Streams Replication Supplemental Logging Requirements
NOTE:313478.1 - Performing Manual DDL in a Streams Environment
NOTE:461278.1 - Example of a Streams Heartbeat Table
Applies to:
Oracle Server - Enterprise Edition - Version: 10.1.0.2 to 11.1.0.6 - Release: 10.1 to 11.1
Information in this document applies to any platform.
Purpose
The following script is intended to be used by the DBA to create an administrator user for Streams.
Software Requirements/Prerequisites
Please run this script logged in as a user who has SYSDBA privileges.
To run this script, set your environment so that the values below match yours, or replace them in the
script with values appropriate to your environment:
Caution
This script is provided for educational purposes only and not supported by Oracle Support Services. It
has been tested internally, however, and works as documented. We do not guarantee that it will work for
you, so be sure to test it in your environment before relying on it.
Proofread this script before using it! Due to the differences in the way text editors, e-mail packages and
operating systems handle text formatting (spaces, tabs and carriage returns), this script may not be in an
executable state when you first receive it. Check over the script to ensure that errors of this type are
corrected.
Script
execute DBMS_STREAMS_AUTH.GRANT_ADMIN_PRIVILEGE('STRMADMIN');
How to setup Streams Schema level replication using MAINTAIN_SCHEMAS procedure [ID
878638.1]
In this Document
Goal
Solution
References
Applies to:
Goal
This article provides the steps needed to set up a schema-level Streams environment, using the
procedures already provided in the DBMS_STREAMS_ADM package.
This procedure configures a Streams environment that replicates changes to specified schemas between
two databases. It can either configure the environment directly, or generate a script
that can be edited and used to configure the environment later.
Note that this procedure must be run at the capture database. The capture database is the database that
captures changes made to the source database.
This procedure is overloaded: the schema_names parameter can be provided either as a VARCHAR2 or
as a DBMS_UTILITY.UNCL_ARRAY. These parameters enable you to enter the list of
schemas in different ways and are mutually exclusive.
For more information about the maintain_* procedures, please review the following article :
Solution
DBMS_STREAMS_ADM.MAINTAIN_SCHEMAS(
schema_names IN VARCHAR2,
source_directory_object IN VARCHAR2,
destination_directory_object IN VARCHAR2,
source_database IN VARCHAR2,
destination_database IN VARCHAR2,
perform_actions IN BOOLEAN DEFAULT TRUE,
script_name IN VARCHAR2 DEFAULT NULL,
script_directory_object IN VARCHAR2 DEFAULT NULL,
dump_file_name IN VARCHAR2 DEFAULT NULL,
capture_name IN VARCHAR2 DEFAULT NULL,
capture_queue_table IN VARCHAR2 DEFAULT NULL,
capture_queue_name IN VARCHAR2 DEFAULT NULL,
capture_queue_user IN VARCHAR2 DEFAULT NULL,
propagation_name IN VARCHAR2 DEFAULT NULL,
apply_name IN VARCHAR2 DEFAULT NULL,
apply_queue_table IN VARCHAR2 DEFAULT NULL,
apply_queue_name IN VARCHAR2 DEFAULT NULL,
apply_queue_user IN VARCHAR2 DEFAULT NULL,
log_file IN VARCHAR2 DEFAULT NULL,
bi_directional IN BOOLEAN DEFAULT FALSE,
include_ddl IN BOOLEAN DEFAULT FALSE,
instantiation IN INTEGER DEFAULT
DBMS_STREAMS_ADM.INSTANTIATION_SCHEMA);
++Prerequisites:
1. Set all the required database parameters related to streams as in : Note 418755.1
2. Create the Streams administrator user account and grant all the mandatory permissions as in
Note 786528.1. When using the DBMS_STREAMS_ADM.MAINTAIN_* procedures, DBA privileges
are required for the Streams Administrator account.
3. Create the directory objects needed for the Data Pump export/import. If you also need to
store the generated script for later use, a directory object is required for that as well; it can be the
same one or a different one.
++An Example on how to Configure Schema level Replication using MAINTAIN_SCHEMAS procedure:
conn /as sysdba
set echo on termout on
define source=ORCL102A
define SourceGlobal_name=ORCL102A.EG.ORACLE.COM
define target=ORCL102B
define TargetGlobal_name=ORCL102B.EG.ORACLE.COM
prompt /* Create streams admin at the source db :&&source */
conn sys/oracle@&&source as sysdba
/
CREATE USER strmadmin IDENTIFIED BY strmadmin
DEFAULT TABLESPACE streams_tbs QUOTA UNLIMITED ON streams_tbs
/
GRANT CONNECT, RESOURCE, AQ_ADMINISTRATOR_ROLE,DBA to STRMADMIN
/
BEGIN
DBMS_STREAMS_AUTH.GRANT_ADMIN_PRIVILEGE(
grantee => 'strmadmin',
grant_privileges => true);
END;
/
prompt /* Create streams admin at the target db:&&target */
conn sys/oracle@&&target as sysdba
CREATE USER strmadmin IDENTIFIED BY strmadmin
DEFAULT TABLESPACE streams_tbs QUOTA UNLIMITED ON streams_tbs
/
GRANT CONNECT, RESOURCE, AQ_ADMINISTRATOR_ROLE,DBA to STRMADMIN
/
BEGIN
DBMS_STREAMS_AUTH.GRANT_ADMIN_PRIVILEGE(
grantee => 'strmadmin',
grant_privileges => true);
END;
/
2. Create the directory objects needed for the Data Pump export and import as a part of the setup
operation. In this example we will use it only to store the output script. This script can be saved or edited
and used again:
CREATE DIRECTORY db_files_directory AS '/home/oracle/db_files';
3. Create the database links between the two Streams administrators. With global_names=TRUE,
each link must be named with the remote database's global name:
conn strmadmin/strmadmin@&&source
create database link &&TargetGlobal_name
connect to strmadmin identified by strmadmin
using '&&target'
/
conn strmadmin/strmadmin@&&target
create database link &&SourceGlobal_name
connect to strmadmin identified by strmadmin
using '&&source'
/
4. The example in this section uses this procedure to configure a one way DDL/DML Streams replication
environment that maintains the SCOTT schema. The source database is orcl102A, and the destination
database is orcl102B.
conn strmadmin/strmadmin@&&source
define schema_name=scott
begin
dbms_streams_adm.maintain_schemas(
schema_names=> '&&schema_name',
source_directory_object=> null,
destination_directory_object=> null,
source_database=> '&&SourceGlobal_name',
destination_database => '&&TargetGlobal_name',
perform_actions => true,
script_name =>'Schema_maintain_streams.sql',
script_directory_object=>'db_files_directory',
bi_directional=> false,
include_ddl => true ,
instantiation=>dbms_streams_adm.instantiation_schema_network);
end;
/
Note:
This procedure automatically excludes database objects that are not supported by Streams in the
schemas from the replication environment by adding rules to the negative rule set of each capture and
apply process. Query the DBA_STREAMS_UNSUPPORTED data dictionary view to determine which
database objects are not supported by Streams. If unsupported database objects are not excluded, then
capture errors will result.
If the bi_directional parameter is set to TRUE, then do not allow data manipulation language (DML) or
data definition language (DDL) changes to the shared database objects at the destination database while
the MAINTAIN_SCHEMAS procedure, or the script generated by the procedure, is running. This
restriction does not apply to the source database.
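Consistent with the note above, the unsupported objects can be listed before configuration; a sketch (the schema name is an assumption):

```sql
-- Objects in the schema that Streams cannot capture, and why
SELECT owner, table_name, reason
FROM   dba_streams_unsupported
WHERE  owner = 'SCOTT';
```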
References
In this Document
Goal
Solution
References
Applies to:
Oracle Server - Enterprise Edition - Version: 10.1.0.2 to 11.2.0.2 - Release: 10.1 to 11.2
Information in this document applies to any platform.
Goal
This article provides a very fast method to set up a Streams environment, using the set of procedures
already provided in the DBMS_STREAMS_ADM package. These procedures provide many levels of
Streams setup, such as table level, schema level, etc.
Solution
Most of the procedure parameters have default values, except the first five:
Parameter Description
table_names The tables to be configured for replication and maintained by Streams after
configuration.
source_directory_object The directory object for the directory on the computer system running the source
database into which the generated Data Pump export dump file is placed. This file
remains in this directory after the procedure completes. Can be NULL if network
instantiation will be used.
destination_directory_object The directory object for the directory on the computer system running the destination
database into which the generated Data Pump export dump file is transferred. Can be
NULL if network instantiation will be used.
source_database The global name of the source database. If the value given for the source_database
parameter does not match the global name of the database the procedure is run
on, then the procedure will configure capture as downstream
(and hence a third database is needed to complete the configuration).
Check that you are using the global names of the databases you are trying to configure in
the source_database and destination_database parameters.
destination_database The global name of the destination database.
In addition to the above parameters, there is another important set of parameters that needs extra
attention if used:
capture_queue_table IN VARCHAR2 DEFAULT NULL,
capture_queue_name IN VARCHAR2 DEFAULT NULL,
apply_queue_table IN VARCHAR2 DEFAULT NULL,
apply_queue_name IN VARCHAR2 DEFAULT NULL,
When using dbms_streams_adm.maintain_*, the queue and queue table names cannot be more than 24
characters each.
This has always been the restriction on queue names.
For the queue name supplied to dbms_streams_adm.maintain_*, Oracle adds an extra 6 characters:
AQ$_ before the assigned name and _E after it.
Since the maximum length for a queue name is 30 characters, you are left with only 24 characters.
Question: Can the directory object point to an ASM diskgroup?
Answer:
No, it cannot point to an ASM diskgroup. The only way to make this work is to have the maintain_*
procedure produce the script and then edit the script for ASM diskgroup access. For more information
about the other parameters, please check the following document:
Oracle Database PL/SQL Packages and Types Reference
10g Release 2 (10.2) Part Number B14258-02
http://www.oracle.com/pls/db102/to_toc?pathname=appdev.102%2Fb14258%2Ftoc.htm&remark=
portal+%28Information+Integration%29
The DBMS_STREAMS_ADM procedures should be executed on the capture database. If the bidirectional
option has been chosen, then no DML should be run on the shared objects at the target database (this
does not apply to the source).
Streams has a function (DBMS_STREAMS.COMPATIBLE_<version>) that can be used to check the
compatibility of tables that can be added to any positive rule (table, schema, or global level).
DBMS_STREAMS.COMPATIBLE_11_1
DBMS_STREAMS.COMPATIBLE_10_2
DBMS_STREAMS.COMPATIBLE_10_1
DBMS_STREAMS.COMPATIBLE_9_2
When using MAINTAIN_* procedures to create the Streams environment, this type of rule is generated
automatically.
1. Set all the required database parameters related to streams as in : Note 418755.1
2. Create the Streams administrator user account and grant all the mandatory permissions as in
Note 786528.1. When using the DBMS_STREAMS_ADM.MAINTAIN_* procedures, DBA privileges
are required for the Streams Administrator account.
3. Create the directory objects needed for the Data Pump export and import.
connect <DBA user>/<password>@STRM1.NET as SYSDBA
create user STRMADMIN identified by STRMADMIN;
ALTER USER STRMADMIN DEFAULT TABLESPACE USERS
TEMPORARY TABLESPACE TEMP
QUOTA UNLIMITED ON USERS;
GRANT CONNECT, RESOURCE, AQ_ADMINISTRATOR_ROLE,DBA to STRMADMIN;
execute DBMS_STREAMS_AUTH.GRANT_ADMIN_PRIVILEGE('STRMADMIN');
ORC2:
connect <DBA user>/<password>@ORC2 as SYSDBA
create user STRMADMIN identified by STRMADMIN;
ALTER USER STRMADMIN DEFAULT TABLESPACE USERS
TEMPORARY TABLESPACE TEMP
QUOTA UNLIMITED ON USERS;
GRANT CONNECT, RESOURCE, AQ_ADMINISTRATOR_ROLE,DBA to STRMADMIN;
execute DBMS_STREAMS_AUTH.GRANT_ADMIN_PRIVILEGE('STRMADMIN');
2. Create the directory objects needed for the Data Pump export and import as a part of the setup
operation. In this example we will use it only to store the output script. This script can be saved or edited
and used again:
CREATE DIRECTORY db_files_directory AS '/usr/db_files';
On the target database, ORC2:
SQL> create database link orc1 connect to strmadmin identified by
strmadmin using 'ORC1';
CONNECT strmadmin/strmadmin
DECLARE
tables DBMS_UTILITY.UNCL_ARRAY;
BEGIN
tables(1) := 'scott.dept';
tables(2) := 'scott.emp';
tables(3) := 'scott.bonus';
tables(4) := 'scott.salgrade';
DBMS_STREAMS_ADM.MAINTAIN_TABLES(
table_names => tables,
source_directory_object => NULL,
destination_directory_object => NULL,
source_database => 'orc1',
destination_database => 'orc2',
perform_actions => true,
script_name => 'configure_rep.sql',
script_directory_object => 'db_files_directory',
bi_directional => false,
include_ddl => false,
instantiation => DBMS_STREAMS_ADM.INSTANTIATION_TABLE_NETWORK);
END;
/
The above execution assumes the following:
- The script will be saved to db_files_directory.
- The script will be executed immediately.
- The network will be automatically used to instantiate the tables (this is achieved by the parameter
instantiation => DBMS_STREAMS_ADM.INSTANTIATION_TABLE_NETWORK).
On source:
On Target:
On source :
1 row created.
SQL> commit;
Commit complete.
On the target :
The above shows that the new record inserted has been transferred successfully to the
target database.
How to set up replication using DBMS_STREAMS_ADM from one source to two separate destinations?
The best method is to generate the script only, and run it after the required modifications for each
destination. Please note that providing a source database name to the script that differs from the
GLOBAL_NAME of the local database will instruct the script to create a downstream capture setup.
Use the MAINTAIN_* procedures with default settings as much as possible. For a downstream capture
configuration, where the capture and apply are colocated at the downstream database, be sure to specify
the same queue name for both the capture_queue_name and the apply_queue_name. This will eliminate
the extraneous propagation in downstream capture.
If the procedure execution completed successfully (see the above output example), then the replication
is up and ready; otherwise, the following views can be used to troubleshoot:
DBA_RECOVERABLE_SCRIPT
Details about recoverable operations; shows the currently running script, which block is being executed,
and the total number of blocks.
DBA_RECOVERABLE_SCRIPT_PARAMS
Details about the recoverable operation parameters used to run the script.
DBA_RECOVERABLE_SCRIPT_BLOCKS
Details about the recoverable script blocks; shows more details about each block and which tasks
exactly are achieved by running each block.
DBA_RECOVERABLE_SCRIPT_ERRORS
Details showing errors during script execution; check it to find more details about any error.
After checking the above views, detecting and fixing the error, you can simply continue running the script
by using the following (You can also use the same procedure to rollback the script):
DBMS_STREAMS_ADM.RECOVER_OPERATION(script_id IN RAW,
operation_mode IN VARCHAR2 DEFAULT 'FORWARD');
script_id
The operation id of the procedure invocation that is being rolled forward, rolled back, or purged. Query the
SCRIPT_ID column of the DBA_RECOVERABLE_SCRIPT data dictionary view to determine the
operation id.
operation_mode:
If FORWARD, then the procedure rolls forward the operation. Specify FORWARD to try to complete the
operation.
If ROLLBACK, then the procedure rolls back all of the actions performed in the operation. If the rollback is
successful, then the procedure purges all of the metadata about the operation.
If PURGE, then the procedure purges all of the metadata about the operation without rolling the operation
back.
How to clear the dbms_streams_adm.maintain_schemas recovery views after a failure?
To perform the cleanup, you will have to remove the metadata directly from the source database using:
If you have any doubts, please consult Oracle Support before doing such step.
For more information, please review the complete details in Oracle documentation :
10.2.0.5 Grid Control offers an excellent set of manageability & monitoring features for Streams, and it
should be used as the admin interface for Streams.
You may refer to the following Note in this context:
Note 784021.1 Managing Streams from Oracle Enterprise Manager 10g Release 5 Grid Control
Also you may find the same information in the following document:
http://www.oracle.com/technology/products/dataint/pdf/gc10_2_0_5_streams_ext_with_notes.pdf
Setup Streams Replication Between Different Source and Target Schemas with Different Table
Structures [ID 784899.1]
In this Document
Purpose
Software Requirements/Prerequisites
Configuring the Sample Code
Running the Sample Code
Caution
Sample Code
Sample Code Output
Applies to:
Oracle Server - Enterprise Edition - Version: 10.2.0.1 to 11.1.0.6 - Release: 10.2 to 11.1
Information in this document applies to any platform.
Oracle Server Enterprise Edition - Version: 10.2.0.1 to 11.1.0.6
Purpose
Oracle Streams enables the sharing of data and events in a data stream, either within a database or from
one database to another. This article is intended to provide the steps for a DBA to setup streams
replication from one schema to another schema within the same database.
The provided script demonstrates setting up Streams Replication when the schemas have different table
structures (different number of columns).
The sample code can be used by Oracle Support analysts and DBAs who need to set up Streams
replication within the same database in Oracle 10.2 or higher.
Software Requirements/Prerequisites
The scripts provided can be used on any database version between Oracle Enterprise Edition 10.2.0.1
and 11.1.0.7.
It is assumed that the database runs in ARCHIVELOG mode. If this is not the case, then you need to
enable the ARCHIVELOG mode for the database before you run the scripts.
Caution
This sample code is provided for educational purposes only and not supported by Oracle Support
Services. It has been tested internally, however, and works as documented. We do not guarantee that it
will work for you, so be sure to test it in your environment before relying on it.
Proofread this sample code before using it! Due to the differences in the way text editors, e-mail
packages and operating systems handle text formatting (spaces, tabs and carriage returns), this sample
code may not be in an executable state when you first receive it. Check over the sample code to ensure
that errors of this type are corrected.
Sample Code
--------------------------- setup_streams_single_src.sql--------------------------
SPOOL streams_single_src.log
SET SQLPROMPT ''
SET ECHO ON
/*
** Warning **
*************
The following script will remove any existing Streams configurations in your database.
The script will drop the STRMADMIN user if existing and will create a new STRMADMIN user.
This also drops any existing users SHIP and OE.
You should not proceed with the script execution if there is an existing Streams setup
in the database; instead you may modify the script for your environment and re-execute.*/
SET ECHO OFF
PROMPT Press ENTER to Continue or Press CTRL+C and type EXIT to abort
PAUSE
SET ECHO ON
/* 1. Remove the streams configuration from the database: */
CONNECT / as SYSDBA
EXECUTE DBMS_STREAMS_ADM.REMOVE_STREAMS_CONFIGURATION;
DROP USER STRMADMIN CASCADE;
DROP USER SHIP CASCADE;
DROP USER OE CASCADE;
/* 2. Setup STRMADMIN User: */
CONNECT / as sysdba
CREATE USER strmadmin IDENTIFIED BY strmadmin;
GRANT dba,connect,resource,aq_administrator_role TO strmadmin;
EXECUTE DBMS_STREAMS_AUTH.GRANT_ADMIN_PRIVILEGE('STRMADMIN');
ALTER SYSTEM SET aq_tm_processes=1;
/* 3. Setup Queue: */
CONNECT strmadmin/strmadmin
BEGIN
DBMS_STREAMS_ADM.SET_UP_QUEUE(
queue_table => 'streams_queue_table',
queue_name => 'streams_queue',
queue_user => 'strmadmin');
END;
/
/* 4. Create Source Table SHIP.ORDERS: */
CONNECT / as sysdba
CREATE USER ship IDENTIFIED BY ship;
GRANT connect ,resource TO ship;
CONNECT ship/ship
CREATE TABLE SHIP.ORDERS
(
order_id number(8) PRIMARY KEY,
order_item varchar2(30),
ship_no number(8)
)
/
SPOOL streams_cleanup.log
SET SQLPROMPT ''
SET ECHO ON
/*
** Warning **
*************
The following script will remove any existing Streams configuration in your database.
The script will drop the STRMADMIN user if it exists. It also drops the users SHIP and OE
if they exist. You should not proceed with the script execution if there is an existing
Streams setup in the database; instead, you may modify the script for your environment
and re-execute.
*/
SET ECHO OFF
PROMPT Press ENTER to Continue or Press CTRL+C and type EXIT to abort
PAUSE
CONNECT / AS SYSDBA
SET ECHO ON
EXECUTE DBMS_STREAMS_ADM.REMOVE_STREAMS_CONFIGURATION;
DROP USER STRMADMIN CASCADE;
DROP USER SHIP CASCADE;
DROP USER OE CASCADE;
SPOOL OFF
---------------------------- streams_cleanup.sql-----------------------------------
SQL> CONNECT ship/ship
Connected.
SQL> INSERT INTO ship.orders VALUES(23450,'Printers',98456);
1 row created.
SQL> INSERT INTO ship.orders VALUES(23451,'Scanners',98457);
1 row created.
SQL> COMMIT;
Commit complete.
SQL> SELECT * FROM ship.orders;
ORDER_ID ORDER_ITEM SHIP_NO
---------- ------------ ----------
23450 Printers 98456
23451 Scanners 98457
SQL> CONNECT oe/oe
Connected.
SQL> SET LINESIZE 200
SQL> SELECT * FROM oe.orders;
ORDER_ID ORDER_ITEM SHIP_NO SHIP_DATE SHIP_ZONE SHIP_REAC
-------- ------------ ---------- --------- ---------- ---------
23450 Printers 98456 24-FEB-09 NORTH 06-MAR-09
23451 Scanners 98457 24-FEB-09 NORTH 06-MAR-09
SQL>
In this Document
Goal
Solution
Applies to:
Oracle Server - Enterprise Edition - Version: 9.2.0.1 to 11.1.0.7 - Release: 9.2 to 11.1
Information in this document applies to any platform.
Goal
Solution
1. Say you have two schemas named "hr" in two different databases, src and dest.
2. Create the strmadmin user (a Streams administrator to manage the Streams setup) in both databases.
-- Grant Streams administrator privileges to strmadmin:
BEGIN
DBMS_STREAMS_AUTH.GRANT_ADMIN_PRIVILEGE(
grantee => 'strmadmin',
grant_privileges => true);
END;
/
-- Check that the Streams administrator has been created.
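One way to confirm the grant (10.2 and above) is to query the DBA_STREAMS_ADMINISTRATOR view:

```sql
-- Lists users granted Streams administrator privileges.
SELECT username, local_privileges, access_from_remote
  FROM dba_streams_administrator;
```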
For 9i: Note 297273.1 9i Streams Recommended Configuration
For 10g: Note 418755.1 10.2.0.x.x Streams Recommendations
For 11g: http://download.oracle.com/docs/cd/B28359_01/server.111/b28321/strms_mprep.htm#i1010370
5. Create database links under the user strmadmin in both src and dest databases.
At SRC :
create database link dest connect to strmadmin identified by strmadmin using 'dest';
At Dest :
create database link src connect to strmadmin identified by strmadmin using 'src';
6. Set up 2 queues for Capture and apply in SRC Database as shown below :
conn strmadmin/strmadmin@src
begin
dbms_streams_adm.set_up_queue(
queue_table => 'apply_srctab',
queue_name => 'apply_src',
queue_user => 'strmadmin');
end;
/
begin
dbms_streams_adm.set_up_queue(
queue_table => 'capture_srctab',
queue_name => 'capture_src',
queue_user => 'strmadmin');
end;
/
7. Set up 2 queues for Capture and apply in DEST Database as shown below :
conn strmadmin/strmadmin@dest
begin
dbms_streams_adm.set_up_queue(
queue_table => 'apply_desttab',
queue_name => 'apply_dest',
queue_user => 'strmadmin');
end;
/
begin
dbms_streams_adm.set_up_queue(
queue_table => 'capture_desttab',
queue_name => 'capture_dest',
queue_user => 'strmadmin');
end;
/
conn strmadmin/strmadmin@src
begin
dbms_streams_adm.add_schema_rules (
schema_name => 'hr',
streams_type => 'capture',
streams_name => 'captures_src',
queue_name => 'capture_src',
include_dml => true,
include_ddl => true,
inclusion_rule => true);
end;
/
conn strmadmin/strmadmin@src
begin
dbms_streams_adm.add_schema_rules (
schema_name => 'hr',
streams_type => 'apply',
streams_name => 'applys_src',
queue_name => 'apply_src',
include_dml => true,
include_ddl => true,
source_database => 'dest');
end;
/
10. If needed, set up conflict handlers for objects in hr@SRC. Refer to the Streams
documentation:
http://download-east.oracle.com/docs/cd/B19306_01/server.102/b14228/conflict.htm
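As a sketch only (the table and column names are assumptions based on the HR sample schema), an update conflict handler could be registered like this:

```sql
-- Resolve update conflicts on hr.employees by overwriting with the incoming LCR values.
DECLARE
  cols DBMS_UTILITY.NAME_ARRAY;
BEGIN
  cols(1) := 'salary';
  cols(2) := 'commission_pct';
  DBMS_APPLY_ADM.SET_UPDATE_CONFLICT_HANDLER(
    object_name       => 'hr.employees',
    method_name       => 'OVERWRITE',
    resolution_column => 'salary',
    column_list       => cols);
END;
/
```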
11. Configure propagation process on SRC Database:
conn strmadmin/strmadmin@src
begin
dbms_streams_adm.add_schema_propagation_rules (
schema_name => 'hr',
streams_name => 'prop_src_to_dest',
source_queue_name => 'capture_src',
destination_queue_name => 'apply_dest@dest',
include_dml => true,
include_ddl => true,
source_database => 'src');
end;
/
conn strmadmin/strmadmin@dest
begin
dbms_streams_adm.add_schema_rules (
schema_name => 'hr',
streams_type => 'capture',
streams_name => 'captures_dest',
queue_name => 'capture_dest',
include_dml => true,
include_ddl => true);
end;
/
13. Set the schema instantiation SCN on SRC using the SCN of the DEST database:
connect strmadmin/strmadmin@dest
declare
v_scn number;
begin
v_scn := dbms_flashback.get_system_change_number();
dbms_apply_adm.set_schema_instantiation_scn@src(
source_schema_name => 'hr',
source_database_name => 'dest',
instantiation_scn => v_scn,
recursive => true);
end;
/
connect strmadmin/strmadmin@dest
begin
dbms_streams_adm.add_schema_rules (
schema_name => 'hr',
streams_type => 'apply',
streams_name => 'applys_dest',
queue_name => 'apply_dest',
include_dml => true,
include_ddl => true,
source_database => 'src');
end;
/
connect strmadmin/strmadmin@dest
begin
dbms_streams_adm.add_schema_propagation_rules (
schema_name => 'hr',
streams_name => 'prop_dest_to_src',
source_queue_name => 'capture_dest',
destination_queue_name => 'apply_src@src',
include_dml => true,
include_ddl => true,
source_database => 'dest');
end;
/
If the objects do not already exist in the DEST database, instantiation can be done using export/import.
conn strmadmin/strmadmin@src
declare
v_scn number;
begin
v_scn := dbms_flashback.get_system_change_number();
dbms_apply_adm.set_schema_instantiation_scn@dest(
source_schema_name => 'hr',
source_database_name => 'src',
instantiation_scn => v_scn,
recursive => true);
end;
/
Ensure that supplemental logging is present for objects present in both SRC and DEST databases.
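For example (the table name is an assumption), table-level supplemental logging can be added and then verified as follows:

```sql
-- Log primary key and unique columns for hr.employees; run on both sites.
ALTER TABLE hr.employees ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY, UNIQUE) COLUMNS;

-- Verify the log groups created.
SELECT owner, table_name, log_group_name, log_group_type
  FROM dba_log_groups
 WHERE owner = 'HR';
```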
17. If needed, configure conflict resolution in hr@DEST. Refer to the Streams
documentation:
http://download-east.oracle.com/docs/cd/B19306_01/server.102/b14228/conflict.htm
18. Start the capture and apply processes on DEST:
Start Apply :
begin
dbms_apply_adm.set_parameter (
apply_name => 'applys_dest',
parameter => 'disable_on_error',
value => 'N');
end;
/
exec dbms_apply_adm.start_apply (apply_name=> 'applys_dest');
begin
dbms_apply_adm.set_parameter (
apply_name => 'applys_src',
parameter => 'disable_on_error',
value => 'N');
end;
/
20. Testing of the bidirectional Streams setup can be done with DML and DDL statements between the
hr@SRC and hr@DEST schemas.
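As a minimal sketch (the HR sample schema's REGIONS table is assumed), such a test might look like:

```sql
conn hr/hr@src
INSERT INTO hr.regions VALUES (99, 'Streams Test Region');
COMMIT;
-- Allow time for capture, propagation and apply.
EXEC DBMS_LOCK.SLEEP(30)
conn hr/hr@dest
SELECT * FROM hr.regions WHERE region_id = 99;
```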
References:
10g:-
http://download.oracle.com/docs/cd/B19306_01/server.102/b14228/repmultdemo.htm#STREP004
11g:-
http://download.oracle.com/docs/cd/B28359_01/server.111/b28322/config_flex.htm#insertedID2
9i:-
http://download.oracle.com/docs/cd/B10501_01/server.920/a96571/repmultdemo.htm#54726
Note 335516.1 Streams Performance Recommendations
Note 437838.1 Streams Specific Patches
Note 273674.1 Streams Configuration Report and Health Check Script
Note 290605.1 Oracle Streams STRMMON Monitoring Utility
Note 238455.1 Streams Supported and Unsupported Datatypes
In this Document
Purpose
Software Requirements/Prerequisites
Configuring the Sample Code
Running the Sample Code
Caution
Sample Code
Sample Code Output
References
Applies to:
Purpose
Starting from release 9.2, Oracle has introduced a more flexible and efficient way of implementing
replication using Streams. Oracle Streams enables the sharing of data and events in a data stream, either
within a database or from one database to another.
The steps below are intended to assist Replication DBAs in setting up and configuring Streams
Replication. The sample code outlines the steps to set up one-way streams replication at Schema level.
Software Requirements/Prerequisites
As a prerequisite, ensure the streams parameters are configured in the source and target instances as
detailed in the relevant notes for your release:
It is highly recommended to run Oracle Streams with the latest available patchset for your OS/release
combination. Also, take a look at Note 437838.1 Streams Specific Patches
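As a hedged example (the values shown are assumptions, not recommendations for your system), the key Streams-related parameters can be inspected and set as follows:

```sql
-- Inspect the current Streams-related parameters (10g and above).
SELECT name, value
  FROM v$parameter
 WHERE name IN ('streams_pool_size', 'aq_tm_processes', 'global_names');

-- Example only: size the Streams pool explicitly.
ALTER SYSTEM SET streams_pool_size = 200M SCOPE = BOTH;
```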
To run this script, either set your environment so the values below are the same as yours or replace them
in the script with values appropriate to your environment:
HR = Source schema to be replicated - This schema is already installed on the source site
The Streams Administrator (STRMADMIN) has been created as per Note 786528.1 How to create
STRMADMIN user and grant privileges.
Caution
This sample code is provided for educational purposes only and not supported by Oracle Support
Services. It has been tested internally, however, and works as documented. We do not guarantee that it
will work for you, so be sure to test it in your environment before relying on it.
Proofread this sample code before using it! Due to the differences in the way text editors, e-mail
packages and operating systems handle text formatting (spaces, tabs and carriage returns), this sample
code may not be in an executable state when you first receive it. Check over the sample code to ensure
that errors of this type are corrected.
Sample Code
Note:
If you are viewing this document online, then you can copy the text from the "BEGINNING OF SCRIPT"
line after this note to the next "END OF SCRIPT" line into a text editor and then edit the text to create a
script for your environment. Run the script with SQL*Plus on a computer that can connect to all of the
databases in the environment.
SET ECHO ON
SPOOL stream_oneway.out
/* STEP 1.- Create the streams queue and the database links that will be used for propagation. */
connect STRMADMIN/[email protected]
BEGIN
DBMS_STREAMS_ADM.SET_UP_QUEUE(
queue_table => 'STREAMS_QUEUE_TABLE',
queue_name => 'STREAMS_QUEUE',
queue_user => 'STRMADMIN');
END;
/
conn sys/&[email protected] as sysdba
create public database link STRM2.NET using 'strm2.net';
conn strmadmin/[email protected]
create database link STRM2.NET connect to strmadmin identified by strmadmin;
/* STEP 2.- Connect as the Streams Administrator in the target site strm2.net and create the streams
queue */
connect STRMADMIN/[email protected]
BEGIN
DBMS_STREAMS_ADM.SET_UP_QUEUE(
queue_table => 'STREAMS_QUEUE_TABLE',
queue_name => 'STREAMS_QUEUE',
queue_user => 'STRMADMIN');
END;
/
/* STEP 3.- Add apply rules for the Schema at the destination database */
BEGIN
DBMS_STREAMS_ADM.ADD_SCHEMA_RULES(
schema_name => 'HR',
streams_type => 'APPLY',
streams_name => 'STREAM_APPLY',
queue_name => 'STRMADMIN.STREAMS_QUEUE',
include_dml => true,
include_ddl => true,
source_database => 'STRM1.NET');
END;
/
/* STEP 4.- Add capture rules for the schema HR at the source database */
CONN STRMADMIN/[email protected]
BEGIN
DBMS_STREAMS_ADM.ADD_SCHEMA_RULES(
schema_name => 'HR',
streams_type => 'CAPTURE',
streams_name => 'STREAM_CAPTURE',
queue_name => 'STRMADMIN.STREAMS_QUEUE',
include_dml => true,
include_ddl => true,
source_database => 'STRM1.NET');
END;
/
/* STEP 5.- Add propagation rules for the schema HR at the source database. This step will also create a
propagation job to the destination database */
BEGIN
DBMS_STREAMS_ADM.ADD_SCHEMA_PROPAGATION_RULES(
schema_name => 'HR',
streams_name => 'STREAM_PROPAGATE',
source_queue_name => 'STRMADMIN.STREAMS_QUEUE',
destination_queue_name => '[email protected]',
include_dml => true,
include_ddl => true,
source_database => 'STRM1.NET');
END;
/
/* STEP 6.- Export, import and instantiation of tables from Source to Destination Database; if the objects
are not present in the destination database, perform an export of the objects from the source database
and import them into the destination database. */
connect STRMADMIN/[email protected]
set serveroutput on
DECLARE
iscn NUMBER; -- Variable to hold instantiation SCN value
BEGIN
iscn := DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER();
DBMS_OUTPUT.PUT_LINE ('Instantiation SCN is: ' || iscn);
END;
/
Instantiate the objects at the destination database with this SCN value. The
SET_SCHEMA_INSTANTIATION_SCN procedure controls which LCRs for the schema are to be applied by the
apply process. If the commit SCN of an LCR from the source database is less than or equal to this
instantiation SCN, then the apply process discards the LCR. Otherwise, the apply process applies the LCR.
connect STRMADMIN/[email protected]
BEGIN
DBMS_APPLY_ADM.SET_SCHEMA_INSTANTIATION_SCN(
SOURCE_SCHEMA_NAME => 'HR',
SOURCE_DATABASE_NAME => 'STRM1.NET',
RECURSIVE => TRUE,
INSTANTIATION_SCN => &iscn );
END;
/
/* Because RECURSIVE => TRUE is used above, you need a dblink on the destination database
to the source database with the same name as the source database. */
conn strmadmin/[email protected]
BEGIN
DBMS_APPLY_ADM.ALTER_APPLY(
apply_name => 'STREAM_APPLY',
apply_user => 'HR');
END;
/
/* STEP 8.- Set stop_on_error to false so apply does not abort for every error; then, start the Apply
process on the destination */
conn strmadmin/[email protected]
BEGIN
DBMS_APPLY_ADM.SET_PARAMETER(
apply_name => 'STREAM_APPLY',
parameter => 'disable_on_error',
value => 'n');
END;
/
DECLARE
v_started number;
BEGIN
SELECT decode(status, 'ENABLED', 1, 0) INTO v_started
FROM DBA_APPLY WHERE APPLY_NAME = 'STREAM_APPLY';
if (v_started = 0) then
DBMS_APPLY_ADM.START_APPLY(apply_name => 'STREAM_APPLY');
end if;
END;
/
/* STEP 9.- Set up capture to retain 7 days worth of logminer checkpoint information, then start the
Capture process on the source */
conn strmadmin/[email protected]
BEGIN
DBMS_CAPTURE_ADM.ALTER_CAPTURE(
capture_name => 'STREAM_CAPTURE',
checkpoint_retention_time => 7);
END;
/
begin
DBMS_CAPTURE_ADM.START_CAPTURE(capture_name => 'STREAM_CAPTURE');
end;
/
/* Perform changes in tables belonging to HR on the source site and check that these are applied on the
destination */
conn HR/[email protected]
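```sql
/* The note does not show the test DML itself; the statements below are an assumed
   example consistent with the checks that follow (an insert into HR.DEPARTMENTS
   and a new column on HR.EMPLOYEES): */
INSERT INTO hr.departments VALUES (999, 'Streams Test', NULL, NULL);
ALTER TABLE hr.employees ADD (test_col VARCHAR2(10));
COMMIT;
```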
/* Confirm the insert has been done on HR.DEPARTMENTS at the destination and that HR.EMPLOYEES
now has a new column */
conn HR/[email protected]
desc HR.EMPLOYEES;
References
In this Document
Goal
Solution
Applies to:
Oracle Server - Enterprise Edition - Version: 9.2.0.1 to 11.1.0.7 - Release: 9.2 to 11.1
Information in this document applies to any platform.
Goal
Solution
Various scenarios for adding to an existing Streams environment are discussed in the
documentation. Which sections apply depends on your current setup and exactly what
you are doing.
The following demonstration requires two databases, hora10r24 and hora10r242 in this example, with
unidirectional replication. The source database (hora10r24) needs to be running in archive log mode.
set echo on
set serveroutput on
spool setup.out
exec dbms_propagation_adm.stop_propagation('STREAMS_PROPAGATION')
exec dbms_streams_adm.remove_streams_configuration;
BEGIN
DBMS_STREAMS_AUTH.GRANT_ADMIN_PRIVILEGE(
grantee => 'strmadmin',
grant_privileges => true);
END;
/
connect test/test@hora10r24
exec dbms_streams_adm.remove_streams_configuration;
BEGIN
DBMS_STREAMS_AUTH.GRANT_ADMIN_PRIVILEGE(
grantee => 'strmadmin',
grant_privileges => true);
END;
/
connect test/test@hora10r242
connect STRMADMIN/streams@hora10r24;
CREATE DATABASE LINK hora10r242 connect to strmadmin identified by streams using 'hora10r242';
BEGIN
DBMS_STREAMS_ADM.SET_UP_QUEUE(
queue_table => 'streams_capture_qt',
queue_name => 'streams_capture_q',
queue_user => 'strmadmin');
END;
/
BEGIN
DBMS_STREAMS_ADM.ADD_TABLE_PROPAGATION_RULES(
table_name => 'test.testa',
streams_name => 'STREAMS_PROPAGATION',
source_queue_name => 'STRMADMIN.STREAMS_CAPTURE_Q',
destination_queue_name => '[email protected]',
include_dml => true,
include_ddl => true,
source_database => 'hora10r24.uk.oracle.com',
inclusion_rule => true,
queue_to_queue => true);
END;
/
BEGIN
DBMS_STREAMS_ADM.ADD_TABLE_RULES(
table_name => 'test.testa',
streams_type => 'capture',
streams_name => 'STREAMS_CAPTURE',
queue_name => 'STRMADMIN.STREAMS_CAPTURE_Q',
include_dml => true,
include_ddl => true,
source_database => 'hora10r24.uk.oracle.com',
include_tagged_lcr => false,
inclusion_rule => true);
END;
/
connect STRMADMIN/streams@hora10r242;
CREATE DATABASE LINK hora10r24 connect to STRMADMIN identified by streams using 'hora10r24';
BEGIN
DBMS_STREAMS_ADM.SET_UP_QUEUE(
queue_table => 'STREAMS_APPLY_QT',
queue_name => 'STREAMS_APPLY_Q',
queue_user => 'STRMADMIN');
END;
/
BEGIN
DBMS_STREAMS_ADM.ADD_TABLE_RULES(
table_name => 'test.testa',
streams_type => 'apply',
streams_name => 'STREAMS_APPLY',
queue_name => 'STRMADMIN.STREAMS_APPLY_Q',
include_dml => true,
include_ddl => false,
source_database => 'hora10r24.uk.oracle.com',
include_tagged_lcr => false,
inclusion_rule => true);
END;
/
connect STRMADMIN/streams@hora10r24;
DECLARE
iscn NUMBER; -- Variable to hold instantiation SCN value
BEGIN
iscn := DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER();
DBMS_APPLY_ADM.SET_TABLE_INSTANTIATION_SCN@hora10r242.uk.oracle.com(
source_object_name => 'test.testa',
source_database_name => 'hora10r24.uk.oracle.com',
instantiation_scn => iscn);
END;
/
connect STRMADMIN/streams@hora10r242;
begin
dbms_apply_adm.start_apply('STREAMS_APPLY');
end;
/
connect STRMADMIN/streams@hora10r24;
begin
dbms_capture_adm.start_capture('STREAMS_CAPTURE');
end;
/
connect test/test@hora10r24
exec dbms_lock.sleep(60)
connect test/test@hora10r242
spool off
set echo on
set serveroutput on
spool add.out
connect test/test@hora10r24
begin
dbms_apply_adm.stop_apply('STREAMS_APPLY');
end;
/
BEGIN
DBMS_STREAMS_ADM.ADD_TABLE_RULES(
table_name => 'test.testb',
streams_type => 'apply',
streams_name => 'STREAMS_APPLY',
queue_name => 'STRMADMIN.STREAMS_APPLY_Q',
include_dml => true,
include_ddl => false,
source_database => 'hora10r24.uk.oracle.com',
include_tagged_lcr => false,
inclusion_rule => true);
END;
/
connect STRMADMIN/streams@hora10r24;
BEGIN
DBMS_STREAMS_ADM.ADD_TABLE_PROPAGATION_RULES(
table_name => 'test.testb',
streams_name => 'STREAMS_PROPAGATION',
source_queue_name => 'STRMADMIN.STREAMS_CAPTURE_Q',
destination_queue_name => '[email protected]',
include_dml => true,
include_ddl => true,
source_database => 'hora10r24.uk.oracle.com',
inclusion_rule => true,
queue_to_queue => true);
END;
/
BEGIN
DBMS_STREAMS_ADM.ADD_TABLE_RULES(
table_name => 'test.testb',
streams_type => 'capture',
streams_name => 'STREAMS_CAPTURE',
queue_name => 'STRMADMIN.STREAMS_CAPTURE_Q',
include_dml => true,
include_ddl => true,
source_database => 'hora10r24.uk.oracle.com',
include_tagged_lcr => false,
inclusion_rule => true);
END;
/
connect test/test@hora10r242
connect STRMADMIN/streams@hora10r24;
DECLARE
iscn NUMBER; -- Variable to hold instantiation SCN value
BEGIN
iscn := DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER();
DBMS_APPLY_ADM.SET_TABLE_INSTANTIATION_SCN@hora10r242.uk.oracle.com(
source_object_name => 'test.testb',
source_database_name => 'hora10r24.uk.oracle.com',
instantiation_scn => iscn);
END;
/
connect STRMADMIN/streams@hora10r242;
begin
dbms_apply_adm.start_apply('STREAMS_APPLY');
end;
/
connect test/test@hora10r24
exec dbms_lock.sleep(30)
connect test/test@hora10r242
spool off
In this Document
Purpose
Software Requirements/Prerequisites
Configuring the Script
Running the Script
Caution
Script
Script Output
References
Applies to:
Oracle Server - Enterprise Edition - Version: 10.2.0.1 to 11.1.0.6 - Release: 10.2 to 11.1
Information in this document applies to any platform.
Purpose
The following is a complete code sample that configures unidirectional streams replication at table level.
Software Requirements/Prerequisites
1. Ensure the Streams parameters are configured in the source and target instances as detailed in
Note 298877.1 10G Streams Recommended Configuration.
2. Create the Streams administrator user STRMADMIN as per Note 786528.1 'How to create
STRMADMIN user and grant privileges'.
3. For additional supplemental logging requirements, please check Note 782541 'Streams Replication
Supplemental Logging Requirements' and create the necessary supplemental logging on the source.
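For instance, minimal database-wide supplemental logging can be enabled and checked as follows (whether table-level or database-level logging is appropriate depends on your setup):

```sql
-- Enable primary key and unique column supplemental logging database-wide.
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY, UNIQUE) COLUMNS;

-- Verify.
SELECT supplemental_log_data_min, supplemental_log_data_pk, supplemental_log_data_ui
  FROM v$database;
```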
To run this script, either set your environment so the values below are the same as yours or replace them
in the script with values appropriate to your environment.
Please cut and paste the script into a file, make the necessary changes, and run it from SQL*Plus.
Caution
This script is provided for educational purposes only and not supported by Oracle Support Services. It
has been tested internally, however, and works as documented. We do not guarantee that it will work for
you, so be sure to test it in your environment before relying on it.
Proofread this script before using it! Due to the differences in the way text editors, e-mail packages and
operating systems handle text formatting (spaces, tabs and carriage returns), this script may not be in an
executable state when you first receive it. Check over the script to ensure that errors of this type are
corrected.
Script
/* Step 1 - Connected as the Streams Administrator, create the streams queue and the database link that
will be used for propagation at STRM1.NET. */
conn strmadmin/[email protected]
BEGIN
DBMS_STREAMS_ADM.SET_UP_QUEUE(
queue_name => 'STREAMS_QUEUE',
queue_table =>'STREAMS_QUEUE_TABLE',
queue_user => 'STRMADMIN');
END;
/
conn sys/[email protected] as sysdba
create public database link STRM2.NET using 'strm2.net';
conn strmadmin/[email protected]
/* Step 2 - Connect as the Streams Administrator in the target site STRM2.NET and create the streams
queue */
conn strmadmin/[email protected]
BEGIN
DBMS_STREAMS_ADM.SET_UP_QUEUE(
queue_name => 'STREAMS_QUEUE',
queue_table =>'STREAMS_QUEUE_TABLE',
queue_user => 'STRMADMIN');
END;
/
/* Step 3 - Connected to STRM1.NET, create CAPTURE and PROPAGATION rules for HR.EMPLOYEES */
conn strmadmin/[email protected]
BEGIN
DBMS_STREAMS_ADM.ADD_TABLE_PROPAGATION_RULES(
table_name => 'HR.EMPLOYEES',
streams_name => 'STRMADMIN_PROP',
source_queue_name => 'STRMADMIN.STREAMS_QUEUE',
destination_queue_name => '[email protected]',
include_dml => true,
include_ddl => true,
source_database => 'STRM1.NET');
END;
/
BEGIN
DBMS_STREAMS_ADM.ADD_TABLE_RULES(
table_name => 'HR.EMPLOYEES',
streams_type => 'CAPTURE',
streams_name => 'STRMADMIN_CAPTURE',
queue_name => 'STRMADMIN.STREAMS_QUEUE',
include_dml => true,
include_ddl => true,
source_database => 'STRM1.NET');
END;
/
conn STRMADMIN/[email protected]
BEGIN
DBMS_STREAMS_ADM.ADD_TABLE_RULES(
table_name => 'HR.EMPLOYEES',
streams_type => 'APPLY',
streams_name => 'STRMADMIN_APPLY',
queue_name => 'STRMADMIN.STREAMS_QUEUE',
include_dml => true,
include_ddl => true,
source_database => 'STRM1.NET');
END;
/
BEGIN
DBMS_APPLY_ADM.ALTER_APPLY(
apply_name => 'STRMADMIN_APPLY',
apply_user => 'HR');
END;
/
BEGIN
DBMS_APPLY_ADM.SET_PARAMETER(
apply_name => 'STRMADMIN_APPLY',
parameter => 'disable_on_error',
value => 'n');
END;
/
/*Step 7 - Take an export of the table at STRM1.NET */
exp USERID=SYSTEM/[email protected] TABLES=EMPLOYEES FILE=hr.dmp
LOG=hr_exp.log OBJECT_CONSISTENT=Y STATISTICS = NONE
/*Step 8 - Transfer the export dump file to STRM2.NET and import */
imp USERID=SYSTEM/<password>@strm2.net CONSTRAINTS=Y FULL=Y FILE=hr.dmp
IGNORE=Y COMMIT=Y LOG=hr_imp.log STREAMS_INSTANTIATION=Y
/*Step 9 - Start Apply and capture */
conn strmadmin/[email protected]
BEGIN
DBMS_APPLY_ADM.START_APPLY(
apply_name => 'STRMADMIN_APPLY');
END;
/
conn strmadmin/[email protected]
BEGIN
DBMS_CAPTURE_ADM.START_CAPTURE(
capture_name => 'STRMADMIN_CAPTURE');
END;
/
For a bidirectional Streams setup, please run steps 1 through 9 after interchanging DB1
and DB2. Caution should be exercised while setting the instantiation SCN this time, as
one may not want to export and import the data. The export option ROWS=N can be used for
the instantiation of objects from DB2 --> DB1.
Script Output
/* Perform changes HR.EMPLOYEES and confirm that these are applied to tables on the destination */
conn hr/[email protected]
insert into hr.Employees values
(99999,'TEST','TEST','TEST@oracle','1234567',sysdate,'ST_MAN',null,null,null,null);
commit;
conn hr/[email protected]
select * From employees where employee_id=99999;
In this Document
Purpose
Software Requirements/Prerequisites
Configuring the Script
Running the Script
Caution
Script
Script Output
Applies to:
Oracle Server - Enterprise Edition - Version: 10.1.0.2 to 11.1.0.6 - Release: 10.1 to 11.1
Information in this document applies to any platform.
Purpose
The following script is intended to be used by the DBA to create an administrator user for Streams.
Software Requirements/Prerequisites
Please run this script logged in as a user who has SYSDBA privileges.
To run this script, set your environment so the values below are the same as yours or
replace them in the script with values appropriate to your environment:
Caution
This script is provided for educational purposes only and not supported by Oracle Support Services. It
has been tested internally, however, and works as documented. We do not guarantee that it will work for
you, so be sure to test it in your environment before relying on it.
Proofread this script before using it! Due to the differences in the way text editors, e-mail packages and
operating systems handle text formatting (spaces, tabs and carriage returns), this script may not be in an
executable state when you first receive it. Check over the script to ensure that errors of this type are
corrected.
Script
execute DBMS_STREAMS_AUTH.GRANT_ADMIN_PRIVILEGE('STRMADMIN');
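The grant above assumes the STRMADMIN user already exists; a minimal creation sketch consistent with the earlier samples in this note (the password and tablespace are assumptions) is:

```sql
-- Create the Streams administrator, then grant the required privileges.
CREATE USER strmadmin IDENTIFIED BY strmadmin
  DEFAULT TABLESPACE users QUOTA UNLIMITED ON users;
GRANT connect, resource, dba TO strmadmin;
EXECUTE DBMS_STREAMS_AUTH.GRANT_ADMIN_PRIVILEGE('STRMADMIN');
```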