IBM Ascential ETL Overview: DataStage and QualityStage
More than ever, businesses today need to understand their operations, customers, suppliers, partners, employees, and stockholders. They need to know what is happening with the business, analyze their operations, react to market conditions, and make the right decisions to drive revenue growth, increase profits, and improve productivity and efficiency.
CIOs are responding to their organizations’ strategic needs by developing IT initiatives that align corporate data with business objectives. These initiatives include: business intelligence, master data management, business transformation, infrastructure rationalization, and risk and compliance.
The IBM WebSphere Information Integration platform enables businesses to perform five key integration functions:
Connect to any data or content, wherever it resides
Understand and analyze information, including relationships and lineage
Cleanse information to ensure its quality and consistency
Transform information to provide enrichment and tailoring for its specific purposes
Federate information to make it accessible to people, processes and applications
Data Analysis: Define, annotate, and report on fields of business data. Software: Profile stage in QualityStage
Data Quality: Standardize source data fields; match records across or within data sources and remove duplicate data; survive records from the best information across sources. Software: QualityStage
Data Transformation & Movement: Move data and transform it to meet the requirements of its target systems. Software: DataStage
Integrate data and content: Provide views as if from a single source while maintaining source integrity. Software: N/A (not used at NCEN)
This presentation will deal with the ETL tools QualityStage and DataStage.
QualityStage  QualityStage is used to cleanse and enrich data to meet business needs and data quality management standards. Data preparation (often referred to as data cleansing) is critical to the success of an integration project. QualityStage provides a set of integrated modules for accomplishing data reengineering tasks such as investigating, standardizing, designing and running matches, and determining which data records survive. Together, these tasks constitute data cleansing.
QualityStage Main QS stages used in the BRM project: Investigate   – gives you complete visibility into the actual condition of data   (not used in the BRM project because the users really know their data) Standardize   –  allows you to reformat data from multiple systems to ensure that each data type has the correct and consistent content and format   Match   – helps to ensure data integrity by linking records from one or more data sources that correspond to the same real-world entity.  Matching can be used to identify duplicate entities resulting from data entry variations or account-oriented business practices   Survive   –   helps to ensure that the best available data survives and is correctly prepared for the target destination
QualityStage  Investigate → Standardize → Match → Survive  Word Investigation parses freeform fields into individual tokens, which are analyzed to create patterns. In addition, Word Investigation provides frequency counts on the tokens.
QualityStage  Investigate → Standardize → Match → Survive  For example, to create the patterns in address data, Word Investigation uses a set of rules for classifying personal names, business names, and addresses. Word Investigation provides prebuilt rule sets for investigating patterns on names and postal addresses for a number of different countries. For the United States, the address rule sets would include:
USPREP (parses name, address and area if data not previously formatted)
USNAME (for individual and organization names)
USADDR (for street and mailing addresses)
USAREA (for city, state, ZIP code and so on)
QualityStage  Investigate → Standardize → Match → Survive  Example: the text field “123 St. Virginia St.” would be analyzed in the following way:
Field parsing breaks the address into the individual tokens “123”, “St.”, “Virginia” and “St.”
Lexical analysis determines the business significance of each piece: 123 = Number; St. = Street type; Virginia = Alpha; St. = Street type
Context analysis identifies the variations in data structures and content: 123 = House number; St. Virginia = Street address; St. = Street type
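To make the parse, classify, and pattern steps concrete, here is a minimal Python sketch of the same idea. It is not the QualityStage rule-set engine: the token classes, the street-type list, and the pattern codes are illustrative assumptions.

```python
import re
from collections import Counter

# Illustrative sketch of Word Investigation's parse -> classify -> pattern steps.
# The token classes, street-type list, and pattern codes below are assumptions
# for demonstration, not the actual USADDR rule set.
STREET_TYPES = {"ST", "ST.", "AVE", "AVE.", "BLVD", "RD", "RD."}

def tokenize(text):
    """Field parsing: break the freeform field into individual tokens."""
    return text.split()

def classify(token):
    """Lexical analysis: assign a business-significant class to each token."""
    if re.fullmatch(r"\d+", token):
        return "N"                      # number
    if token.upper() in STREET_TYPES:
        return "T"                      # street type
    return "A"                          # generic alpha word

tokens = tokenize("123 St. Virginia St.")
pattern = "".join(classify(t) for t in tokens)
counts = Counter(tokens)                # frequency counts on the tokens

print(tokens)    # ['123', 'St.', 'Virginia', 'St.']
print(pattern)   # 'NTAT' -- context rules would then read this as house number,
                 #          street name ('St. Virginia'), street type
print(counts)    # Counter({'St.': 2, '123': 1, 'Virginia': 1})
```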
QualityStage  Investigate → Standardize → Match → Survive  The Standardize stage allows you to reformat data from multiple systems to ensure that each data type has the correct and consistent content and format.
QualityStage  Investigate → Standardize → Match → Survive  Standardization is used to invoke specific standardization Rule Sets and standardize one or more fields using that Rule Set. For example, a Rule Set can be used so that “Boulevard” will always be “Blvd”, not “Boulevard”, “Blv.”, “Boulev”, or some other variation. The list below shows some of the more commonly used Rule Sets:
The USNAME rule set is used to standardize First Name, Middle Name, Last Name
The USADDR rule set is used to standardize Address data
The USAREA rule set is used to standardize City, State, Zip Code
The VTAXID rule set is used to validate Social Security Number
The VEMAIL rule set is used to validate Email Address
The VPHONE rule set is used to validate Work Phone Number
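The Boulevard example maps naturally onto a lookup of canonical forms. Here is a minimal sketch, assuming a hand-written abbreviation table rather than the packaged USADDR rule set:

```python
# Minimal standardization sketch: apply a rule set so that every variant of a
# street-type word collapses to one canonical form. The mapping below is a
# hand-written assumption, not the packaged USADDR rule set.
STREET_TYPE_RULES = {
    "BOULEVARD": "BLVD", "BOULEV": "BLVD", "BLV": "BLVD", "BLVD": "BLVD",
    "STREET": "ST", "STR": "ST", "ST": "ST",
    "AVENUE": "AVE", "AV": "AVE", "AVE": "AVE",
}

def standardize_address(address: str) -> str:
    """Reformat an address field so content and format are consistent."""
    out = []
    for word in address.upper().replace(".", "").split():
        out.append(STREET_TYPE_RULES.get(word, word))
    return " ".join(out)

print(standardize_address("123 Ocean Boulev."))    # 123 OCEAN BLVD
print(standardize_address("123 Ocean Boulevard"))  # 123 OCEAN BLVD
```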
QualityStage  Investigate → Standardize → Match → Survive  Data matching is used to find records in a single data source or independent data sources that refer to the same entity (such as a person, organization, location, product, or material) regardless of the availability of a predetermined key.
QualityStage  Investigate → Standardize → Match → Survive  The QualityStage Matching stage basically consists of two steps: Blocking and Matching.
QualityStage  Investigate → Standardize → Match → Survive  Operations in the Matching module:
1. Unduplication (group records into sets having similar attributes)
2. Processing Files
Match settings: Match Fields, Suspect Match Values by Match Pass, Vartypes, Cutoff Weights. Unduplication assigns each record one of the following codes:
XA = master record (during the first pass, this was the first record found to match with another record)
DA = duplicates
CP = clerical procedure (records with a weighting within a set cutoff range)
RA = residuals (those records that remain isolated)
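For illustration, the sketch below blocks records on a coarse key (ZIP), scores each candidate against the first record in the block, and buckets the result with cutoff weights. The blocking key, scoring function, and cutoffs are assumptions for demonstration, not QualityStage's probabilistic match weights.

```python
from difflib import SequenceMatcher
from itertools import groupby
from operator import itemgetter

# Unduplication sketch: block on ZIP, score candidates against the block's
# first record, classify with cutoff weights. All thresholds are assumed.
records = [
    {"id": 1, "name": "JOHN SMITH",  "zip": "22901"},
    {"id": 2, "name": "JON SMITH",   "zip": "22901"},
    {"id": 3, "name": "JOHN SMYTHE", "zip": "22901"},
    {"id": 4, "name": "MARY JONES",  "zip": "10001"},
]

MATCH_CUTOFF, CLERICAL_CUTOFF = 0.90, 0.70   # assumed cutoff weights

def weight(a, b):
    """Crude agreement weight based on string similarity of the name fields."""
    return SequenceMatcher(None, a["name"], b["name"]).ratio()

for zip_code, block in groupby(sorted(records, key=itemgetter("zip")),
                               key=itemgetter("zip")):
    block = list(block)
    master, rest = block[0], block[1:]
    if not rest:
        print(zip_code, master["id"], "RA")   # isolated record: residual
        continue
    print(zip_code, master["id"], "XA")       # first record found in the set
    for rec in rest:
        w = weight(master, rec)
        code = ("DA" if w >= MATCH_CUTOFF          # duplicate
                else "CP" if w >= CLERICAL_CUTOFF  # clerical review
                else "RA")                         # residual
        print(zip_code, rec["id"], code, round(w, 2))
```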
QualityStage  Investigate → Standardize → Match → Survive  Survivorship is used to create a ‘best record’ from all available information about an entity (such as a person, location, material, etc.). Survivorship and formatting ensure that the best available data survives and is correctly prepared for the target destination. Using the rules setup screen, it implements business and mapping rules, creating the necessary output structures for the target application and identifying fields that do not conform to load standards.
QualityStage  Investigate → Standardize → Match → Survive  The Survive stage does the following:
Populates missing values in one record with values from corresponding records on the same entity that have been identified as a group in the matching stage
Enriches existing data with external data
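A minimal sketch of the best-record idea, assuming one simple survive rule (the longest non-blank value wins for every field); real Survive rules are configured per target field:

```python
# Survivorship sketch: build a 'best record' from a matched group. The survive
# rule here (longest non-blank value wins, per field) is an assumed example.
group = [
    {"name": "J SMITH",    "phone": "",             "email": "js@example.com"},
    {"name": "JOHN SMITH", "phone": "434-555-0100", "email": ""},
    {"name": "JOHN SMITH", "phone": "",             "email": ""},
]

def survive(records):
    best = {}
    for field in records[0]:
        candidates = [r[field] for r in records if r[field]]
        best[field] = max(candidates, key=len) if candidates else ""
    return best

print(survive(group))
# {'name': 'JOHN SMITH', 'phone': '434-555-0100', 'email': 'js@example.com'}
```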
DataStage = data transformation
DataStage In its simplest form, DataStage performs data transformation and movement from source systems to target systems in batch and in real time.   The data sources may include indexed files, sequential files, relational databases, archives, external data sources, enterprise applications and message queues.
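As a rough analogue of that batch pattern in plain Python (the file name, table, and column names are hypothetical; a real DataStage job expresses the same source, transform, and target flow as linked stages):

```python
import csv
import sqlite3

# Minimal batch-move sketch: sequential-file source -> relational target.
# "contacts.txt" and the contacts table are hypothetical examples.
conn = sqlite3.connect("target.db")
conn.execute(
    "CREATE TABLE IF NOT EXISTS contacts (first_name TEXT, last_name TEXT, zip TEXT)"
)

with open("contacts.txt", newline="") as src:
    rows = [(r["first_name"].strip().upper(),   # simple in-flight transforms
             r["last_name"].strip().upper(),
             r["zip"].strip())
            for r in csv.DictReader(src)]

conn.executemany("INSERT INTO contacts VALUES (?, ?, ?)", rows)
conn.commit()
conn.close()
```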
DataStage  The DataStage client components are: DataStage Administrator, DataStage Manager, DataStage Designer, and DataStage Director.
DataStage  Administrator → Manager → Designer → Director  Use DataStage Administrator to:
Specify general server defaults
Add and delete projects
Set project properties
Access the DataStage Repository by command interface
DataStage  Administrator → Manager → Designer → Director
DataStage  Administrator → Manager → Designer → Director  DataStage Manager is the primary interface to the DataStage repository. In addition to table and file layouts, it displays the routines, transforms, and jobs that are defined in the project. It also allows you to move or copy ETL jobs from one project to another.
DataStage  Administrator → Manager → Designer → Director  Use DataStage Designer to:
Specify how the data is extracted
Specify data transformations
Decode (denormalize) data going into the data mart using reference lookups
Aggregate data
Split data into multiple outputs on the basis of defined constraints
DataStage  Administrator → Manager → Designer → Director  Use DataStage Director to run, schedule, and monitor your DataStage jobs. You can also gather statistics as the job runs, and examine job logs for debugging purposes.
DataStage: Getting Started
Set up a project – Before you can create any DataStage jobs, you must set up your project by entering information about your data.
Create a job – When a DataStage project is installed, it is empty and you must create the jobs you need in DataStage Designer.
Define Table Definitions
Develop the job – Jobs are designed and developed using the Designer. Each data source, the data warehouse, and each processing step is represented by a stage in the job design. The stages are linked together to show the flow of data.
DataStage  Designer   Developing a job
DataStage  Designer   Developing a job
DataStage  Designer   Input Stage
DataStage  Designer   Transformer Stage The Transformer stage performs any data conversion required before the data is output to another stage in the job design. After you are done, compile and run the job.
DataStage  Designer
DataStage  Designer
DataStage  Designer
DataStage  Designer
DataStage  An example: preventing the header row from inserting into MDM_Contact and MDM_Broker. T10 takes .txt files from the Pre-event folder and transforms them into rows. Straight_moves moves the rows into the stg_file_contact table, the stg_file_broker table, or the reject file. If a row says “lead source”, it goes to the reject file (constraint). If it does not say “lead source”, the entire row is evaluated to determine whether it goes to the contact or broker table (derivation).
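A rough Python rendering of that routing logic is below. Only the header-row constraint (“lead source” goes to the reject file) comes from the slide; the input file name, row layout, and the test used to separate contact rows from broker rows are assumptions.

```python
import csv

# Sketch of the Transformer routing described above. "pre_event_file.txt" and
# the first-column BROKER test are hypothetical; the 'lead source' reject
# constraint mirrors the slide.
contact_rows, broker_rows, rejects = [], [], []

with open("pre_event_file.txt", newline="") as f:
    for row in csv.reader(f):
        line = ",".join(row).lower()
        if "lead source" in line:                           # constraint: header row -> reject
            rejects.append(row)
        elif row and row[0].strip().upper() == "BROKER":    # derivation: assumed test
            broker_rows.append(row)                         # -> stg_file_broker
        else:
            contact_rows.append(row)                        # -> stg_file_contact

print(len(contact_rows), "contact rows,", len(broker_rows), "broker rows,",
      len(rejects), "rejected")
```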
Questions?
Thank you for attending
Editor's Notes

  • #4: Master data management – Reliably create and maintain consistent, complete, contextual and accurate business information about entities such as customers and products across multiple systems Business intelligence – Take the guesswork out of important decisions by gathering, storing, analyzing, and providing access to diverse enterprise information. Business transformation – Isolate users and applications from the underlying information completely to enable On Demand Business. Infrastructure rationalization – Quickly and accurately streamline corporate information by repurposing and reconciling data whenever it is required Risk and compliance - Deliver a dependable information management foundation to any quality control, corporate reporting visibility and data audit infrastructure.
  • #22: DS Administrator is used for administration tasks such as setting up users, logging, creating and moving projects, and setting up purging criteria.
  • #23: Permissions – Assign user categories to operating system user groups or enable operators to view all the details of an event in a job log file. Tracing – Enable or disable tracing on the server. Schedule – Set up a user name and password to use for running scheduled DataStage jobs. Mainframe – Set mainframe job properties and the default platform type. Tunables – Configure cache settings for Hashed File stages. Parallel – Set parallel job properties and defaults for date/time and number formats. Sequence – Set compilation defaults for job sequences. Remote – If you have specified that parallel jobs in the project are to be deployed on a USS system, this page allows you to specify deployment mode and USS machine details.
  • #25: DataStage Designer – used to create DataStage applications (known as jobs). Each job specifies the data sources, the transformations required, and the destination of the data. Jobs are compiled to create executables that are scheduled by the Director and run on the server.
  • #26: DataStage Director – used to validate, schedule, run, and monitor DataStage job sequences.
  • #36: Constraint - Prevents data from getting into the processing piece of the ETL job (reject) Derivation - Logic at the field level (example: is it “open”? (“click through”))