
The DataOps Blog

Where Change Is Welcome

Change Data Capture from Oracle with StreamSets Data Collector

By Pat Patterson | Posted in Engineering | June 12, 2018
Editor’s Note: StreamSets no longer relies on the continuous mining feature in Oracle. Here is an update on Oracle 19c Bulk Ingest and CDC.
Today’s guest post is by Franck Pachot, an Oracle Consultant at dbi services in Switzerland. Franck has over 20 years of experience in Oracle, covering every aspect of the database from architecture and data modeling to tuning and operation. Franck recently documented his experiences testing StreamSets Data Collector’s Oracle CDC origin, and kindly allowed us to repost his blog entry here.

With the trend of CQRS architectures, where transactions are streamed to a bunch of heterogeneous, eventually consistent, polyglot-persistence microservices, logical replication and change data capture (CDC) become important components, even at the architecture design phase. This is good for existing product vendors such as Oracle GoldenGate (which must be licensed even to use only the CDC part in the Oracle Database, as Streams is going to be desupported) or Dbvisit replicate to Kafka, but also for open source projects. There are some ideas for running on Debezium and VOODOO, but they are not yet released.

Today I tested the StreamSets Oracle CDC Origin. StreamSets Data Collector is an open source project, started by former Cloudera and Informatica employees, to define streaming data pipelines from data sources. It is easy and simple to use, and offers a wide choice of destinations. The Oracle CDC origin is based on LogMiner, which means that it is easy to set up but may have some limitations (mainly datatypes, DDL replication and performance).

Install
The installation guide is online. I chose the easiest way for testing, as they provide a Docker container:

# docker run --restart on-failure -p 18630:18630 -d --name streamsets-dc streamsets/datacollector
Unable to find image 'streamsets/datacollector:latest' locally
latest: Pulling from streamsets/datacollector
605ce1bd3f31: Pull complete
529a36eb4b88: Pull complete
09efac34ac22: Pull complete
4d037ef9b54a: Pull complete
c166580a58b2: Pull complete
1c9f78fe3d6c: Pull complete
f5e0c86a8697: Pull complete
a336aef44a65: Pull complete
e8d1e07d3eed: Pull complete
Digest: sha256:0428704019a97f6197dfb492af3c955a0441d9b3eb34dcc72bda6bbcfc7ad932
Status: Downloaded newer image for streamsets/datacollector:latest
ef707344c8bd393f8e9d838dfdb475ec9d5f6f587a2a253ebaaa43845b1b516d
And that’s all. I am ready to connect over HTTP on port 18630.

The default user/password is admin/admin.

The GUI looks simple and efficient. There’s a home page where you define the ‘pipelines’ and monitor them running. In the pipelines, we define sources and destinations. Some connectors are already installed, others can be installed automatically. For Oracle, as usual, you need to download the JDBC driver yourself because Oracle doesn’t allow it to be embedded for legal reasons. I’ll do something simple here just to check the mining from Oracle.

In ‘Package Manager’ (the little gift icon on the top) go to JDBC and check ‘install’ for the
streamsets-datacollector-jdbc-lib library.

Then in ‘External Libraries’, install (with the ‘upload’ icon at the top) the Oracle JDBC driver (ojdbc8.jar).
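If you are running the Docker image, uploads through the GUI live inside the container. One way to keep external libraries across container recreation, sketched from the pattern in the Data Collector Docker documentation (the host path and the STREAMSETS_LIBRARIES_EXTRA_DIR layout are assumptions to adapt):

# put the driver into the JDBC stage library's lib directory on a mounted volume
$ mkdir -p /opt/sdc-extras/streamsets-datacollector-jdbc-lib/lib
$ cp ojdbc8.jar /opt/sdc-extras/streamsets-datacollector-jdbc-lib/lib/
# run the container with the volume mounted and the extras directory configured
$ docker run --restart on-failure -p 18630:18630 -d --name streamsets-dc \
    -v /opt/sdc-extras:/opt/sdc-extras \
    -e STREAMSETS_LIBRARIES_EXTRA_DIR=/opt/sdc-extras \
    streamsets/datacollector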

I’ve also installed the MySQL one for future tests:

File Name                          Library ID
ojdbc8.jar                         streamsets-datacollector-jdbc-lib
mysql-connector-java-8.0.11.jar    streamsets-datacollector-jdbc-lib

Oracle CDC pipeline


I’ll use the Oracle Change Data Capture here, based on Oracle LogMiner. The GUI is very easy: just select ‘Oracle CDC’ as the source in a new pipeline. Click on it and configure it. I’ve set the minimum here.
In the JDBC tab I’ve set only the JDBC Connection String to:
jdbc:oracle:thin:scott/tiger@//192.168.56.188:1521/pdb1, which is my PDB (I’m on Oracle 18c here and multitenant is fully supported by StreamSets). In the Credentials tab I’ve set sys as sysdba as the username and its password. The configuration can also be displayed as JSON and here is the corresponding entry:

"configuration": [
{
"name": "hikariConf.connectionString",
"value": "jdbc:oracle:thin:scott/tiger@//192.168.56.188:1521/pdb1"
},
{
"name": "hikariConf.useCredentials",
"value": true
},
{
"name": "hikariConf.username",
"value": "sys as sysdba"
},
{
"name": "hikariConf.password",
"value": "oracle"
},
I provided SYSDBA credentials and only the PDB service, but it seems that StreamSets figured out automatically how to connect to the CDB (as LogMiner can be started only from CDB$ROOT). The advantage of using LogMiner here is that you need only a JDBC connection to the source – but of course, it will use CPU and memory resources from the source database host in this case.

Then I defined the replication in the Oracle CDC tab: Schema to ‘SCOTT’ and Table Name Pattern to ‘%’. Initial Change as ‘From Latest Change’, as I just want to see the changes and not actually replicate for this first test. But of course, we can define an SCN here, which is what must be used to ensure consistency between the initial load and the replication. Dictionary Source to ‘Online Catalog’ – this is what will be used by LogMiner to map the object and column IDs to table names and column names. But be careful, as table structure changes may not be managed correctly with this option.
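As an aside, if you do want a consistent initial load rather than ‘From Latest Change’, the usual pattern is to note the current SCN, export as of that SCN, and then start the CDC pipeline from it. A minimal sketch (current_scn and the Data Pump FLASHBACK_SCN parameter are standard; the placeholder is yours to fill in):

SQL> -- note the SCN before taking the initial snapshot
SQL> select current_scn from v$database;

$ expdp scott/tiger tables=SCOTT.EMP,SCOTT.DEPT flashback_scn=<scn from above>

The same SCN then goes into the Oracle CDC origin’s Initial Change configuration.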

{
"name": "oracleCDCConfigBean.baseConfigBean.schemaTableConfigs",
"value": [
{
"schema": "SCOTT",
"table": "%"
}
]
},
{
"name": "oracleCDCConfigBean.baseConfigBean.changeTypes",
"value": [
"INSERT",
"UPDATE",
"DELETE",
Show All ▼
"SELECT_FOR_UPDATE"
]
I’ve left the defaults. I can’t yet think of a reason to capture the ‘select for update’, but it is there.

Named Pipe destination


I know that the destination part is easy. I just want to see the captured changes here, so I took the easiest destination: Named Pipe, where I configured only the Named Pipe (/tmp/scott) and Data Format (JSON).

{
"instanceName": "NamedPipe_01",
"library": "streamsets-datacollector-basic-lib",
"stageName":
"com_streamsets_pipeline_stage_destination_fifo_FifoDTarget",
"stageVersion": "1",
"configuration": [
{
"name": "namedPipe",
"value": "/tmp/scott"
},
{
"name": "dataFormat",
"value": "JSON"
},
...
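Note that this destination writes to an existing FIFO, so the pipe should be created and a reader attached before the pipeline starts. A minimal sketch on the Data Collector host (with Docker, this needs to happen inside the container; the path matches the configuration above):

$ mkfifo /tmp/scott    # create the named pipe the destination will write to
$ cat /tmp/scott       # attach a reader; captured changes arrive here as JSON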

Supplemental logging
The Oracle redo log stream is by default focused only on recovery (replay of transactions in the same database) and contains only the minimal physical information required for it. In order to get enough information to replay transactions in a different database, we need supplemental logging for the database, and for the objects involved:

SQL> alter database add supplemental log data;

Database altered.

SQL> exec for i in (select owner,table_name from dba_tables where owner='SCOTT' and table_name like '%') loop execute immediate 'alter table "'||i.owner||'"."'||i.table_name||'" add supplemental log data (primary key) columns'; end loop;

PL/SQL procedure successfully completed.
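To verify what has been enabled, you can query the data dictionary. A quick check (v$database and dba_log_groups are standard views):

SQL> select supplemental_log_data_min, supplemental_log_data_pk from v$database;
SQL> select table_name, log_group_type from dba_log_groups where owner='SCOTT';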

Run
And that’s all. Just run the pipeline and look at the logs:

The StreamSets Oracle CDC origin pulls continuously from LogMiner to get the changes. Here are the queries that it uses for that:

BEGIN DBMS_LOGMNR.START_LOGMNR(
STARTTIME => :1 ,
ENDTIME => :2 ,
OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG
+ DBMS_LOGMNR.CONTINUOUS_MINE
+ DBMS_LOGMNR.NO_SQL_DELIMITER);
END;

This starts to mine between two timestamps. I suppose that it will read the SCNs to get finer grain and avoid overlapping information.
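For reference, LogMiner can also be bounded by SCNs rather than timestamps. A minimal sketch of an SCN-bounded session with the same options (STARTSCN and ENDSCN are standard DBMS_LOGMNR parameters; the binds are placeholders):

BEGIN DBMS_LOGMNR.START_LOGMNR(
    STARTSCN => :first_scn ,   -- first SCN of interest
    ENDSCN => :last_scn ,      -- last SCN of interest
    OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG
             + DBMS_LOGMNR.CONTINUOUS_MINE
             + DBMS_LOGMNR.NO_SQL_DELIMITER);
END;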

And here is the main one:

SELECT SCN, USERNAME, OPERATION_CODE, TIMESTAMP, SQL_REDO, TABLE_NAME,
COMMIT_SCN, SEQUENCE#, CSF, XIDUSN, XIDSLT, XIDSQN, RS_ID, SSN, SEG_OWNER
FROM V$LOGMNR_CONTENTS
WHERE ((( (SEG_OWNER='SCOTT' AND TABLE_NAME IN ('BONUS','DEPT','EMP','SALGRADE')) )
AND (OPERATION_CODE IN (1,3,2,25)))
OR (OPERATION_CODE = 7 OR OPERATION_CODE = 36))

This reads the redo records. The operation codes 7 and 36 are for commits and rollbacks. The operation codes 1, 3, 2, 25 are those that we want to capture (insert, update, delete, select for update) and were defined in the configuration. Here the pattern ‘%’ for the SCOTT schema has been expanded to the table names. As far as I know, there’s no DDL mining here to automatically capture new tables.

Capture
Then I ran this simple insert (I’ve added a primary key on this table as it is not there from
utlsampl.sql):

SQL> insert into scott.dept values(50,'IT','Cloud');

And I committed (as it seems that StreamSets buffers the changes until the end of the transaction):

SQL> commit;

and here I got the message from the pipe:

$ cat /tmp/scott

{"LOC":"Cloud","DEPTNO":50,"DNAME":"IT"}

The graphical interface shows how the pipeline is running.

I’ve tested some bulk loads (direct-path inserts) and they seem to be managed correctly. Actually, this Oracle CDC is based on LogMiner, so it is fully supported (no mining of the proprietary redo stream format) and its limitations are clearly documented.

Monitoring
Remember that the main work is done by LogMiner, so don’t forget to look at the alert.log on the source database. With big transactions, you may need a large PGA (but you can also choose to buffer to disk). If you have the Oracle Tuning Pack, you can also monitor the main query which retrieves the redo information from LogMiner.

You will see a different SQL_ID each time because the filter predicates use literals instead of bind variables (which is not a problem here).
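Even without the Tuning Pack, you can spot the mining query in the shared pool. A quick sketch against the standard v$sql view (the LIKE filter is just one way to find it, and the second predicate excludes this query itself):

SQL> select sql_id, executions, round(elapsed_time/1e6,1) seconds, buffer_gets
     from v$sql
     where sql_text like '%V$LOGMNR_CONTENTS%' and sql_text not like '%v$sql%';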

Conclusion
This product is very easy to test, so you can do a proof of concept within a few hours and test it for your context: supported datatypes, operations and performance. By easy to test, I mean: very good documentation, a very friendly and responsive graphical interface, and very clear error messages.

Thanks, Franck! You can try StreamSets Data Collector’s Oracle CDC integration today, on the cloud
platform of your choice.
