Jiawei Han, Micheline Kamber and Jian Pei
Data Mining: Concepts and Techniques, 3rd ed.
The Morgan Kaufmann Series in Data Management Systems
Morgan Kaufmann Publishers, July 2011. ISBN 978-0123814791
Course "Machine Learning and Data Mining" for the degree of Computer Engineering at the Politecnico di Milano. In in this lecture we overview the mining of data streams
The document provides an overview of data mining concepts and techniques. It introduces data mining, describing it as the process of discovering interesting patterns or knowledge from large amounts of data. It discusses why data mining is necessary due to the explosive growth of data and how it relates to other fields like machine learning, statistics, and database technology. Additionally, it covers different types of data that can be mined, functionalities of data mining like classification and prediction, and classifications of data mining systems.
This document presents an overview of web mining techniques. It discusses how web mining uses data mining algorithms to extract useful information from the web. The document classifies web mining into three categories: web structure mining, web content mining, and web usage mining. It provides examples and explanations of techniques for each category such as document classification, clustering, association rule mining, and sequential pattern mining. The document also discusses opportunities and challenges of web mining as well as sources of web usage data like server logs.
This document discusses spatial data mining and its applications. Spatial data mining involves extracting knowledge and relationships from large spatial databases. It can be used for applications like GIS, remote sensing, medical imaging, and more. Some challenges include the complexity of spatial data types and large data volumes. The document also covers topics like spatial data warehouses, dimensions and measures in spatial analysis, spatial association rule mining, and applications in fields such as earth science, crime mapping, and commerce.
ensemble methods use multiple learning algorithms to obtain better predictive performance than could be obtained from any of the constituent learning algorithms alone.
An ensemble is itself a supervised learning algorithm, because it can be trained and then used to make predictions. The trained ensemble, therefore, represents a single hypothesis. This hypothesis, however, is not necessarily contained within the hypothesis space of the models from which it is built.
Data is often incomplete, noisy, and inconsistent which can negatively impact mining results. Effective data cleaning is needed to fill in missing values, identify and remove outliers, and resolve inconsistencies. Other important tasks include data integration, transformation, reduction, and discretization to prepare the data for mining and obtain reduced representation that produces similar analytical results. Proper data preparation is essential for high quality knowledge discovery.
Introduction to MapReduce - Hadoop Streaming | Big Data Hadoop Spark Tutorial...CloudxLab
Big Data with Hadoop & Spark Training: http://bit.ly/2sh5b3E
This CloudxLab Hadoop Streaming tutorial helps you to understand Hadoop Streaming in detail. Below are the topics covered in this tutorial:
1) Hadoop Streaming and Why Do We Need it?
2) Writing Streaming Jobs
3) Testing Streaming jobs and Hands-on on CloudxLab
The Information Technology have led us into an era where the production, sharing and use of information are now part of everyday life and of which we are often unaware actors almost: it is now almost inevitable not leave a digital trail of many of the actions we do every day; for example, by digital content such as photos, videos, blog posts and everything that revolves around the social networks (Facebook and Twitter in particular). Added to this is that with the "internet of things", we see an increase in devices such as watches, bracelets, thermostats and many other items that are able to connect to the network and therefore generate large data streams. This explosion of data justifies the birth, in the world of the term Big Data: it indicates the data produced in large quantities, with remarkable speed and in different formats, which requires processing technologies and resources that go far beyond the conventional systems management and storage of data. It is immediately clear that, 1) models of data storage based on the relational model, and 2) processing systems based on stored procedures and computations on grids are not applicable in these contexts. As regards the point 1, the RDBMS, widely used for a great variety of applications, have some problems when the amount of data grows beyond certain limits. The scalability and cost of implementation are only a part of the disadvantages: very often, in fact, when there is opposite to the management of big data, also the variability, or the lack of a fixed structure, represents a significant problem. This has given a boost to the development of the NoSQL database. The website NoSQL Databases defines NoSQL databases such as "Next Generation Databases mostly addressing some of the points: being non-relational, distributed, open source and horizontally scalable." These databases are: distributed, open source, scalable horizontally, without a predetermined pattern (key-value, column-oriented, document-based and graph-based), easily replicable, devoid of the ACID and can handle large amounts of data. These databases are integrated or integrated with processing tools based on the MapReduce paradigm proposed by Google in 2009. MapReduce with the open source Hadoop framework represent the new model for distributed processing of large amounts of data that goes to supplant techniques based on stored procedures and computational grids (step 2). The relational model taught courses in basic database design, has many limitations compared to the demands posed by new applications based on Big Data and NoSQL databases that use to store data and MapReduce to process large amounts of data.
Course Website http://pbdmng.datatoknowledge.it/
Contact me for other informations and to download the slides
"Boiler Feed Pump (BFP): Working, Applications, Advantages, and Limitations E...Infopitaara
A Boiler Feed Pump (BFP) is a critical component in thermal power plants. It supplies high-pressure water (feedwater) to the boiler, ensuring continuous steam generation.
⚙️ How a Boiler Feed Pump Works
Water Collection:
Feedwater is collected from the deaerator or feedwater tank.
Pressurization:
The pump increases water pressure using multiple impellers/stages in centrifugal types.
Discharge to Boiler:
Pressurized water is then supplied to the boiler drum or economizer section, depending on design.
🌀 Types of Boiler Feed Pumps
Centrifugal Pumps (most common):
Multistage for higher pressure.
Used in large thermal power stations.
Positive Displacement Pumps (less common):
For smaller or specific applications.
Precise flow control but less efficient for large volumes.
🛠️ Key Operations and Controls
Recirculation Line: Protects the pump from overheating at low flow.
Throttle Valve: Regulates flow based on boiler demand.
Control System: Often automated via DCS/PLC for variable load conditions.
Sealing & Cooling Systems: Prevent leakage and maintain pump health.
⚠️ Common BFP Issues
Cavitation due to low NPSH (Net Positive Suction Head).
Seal or bearing failure.
Overheating from improper flow or recirculation.
Sorting Order and Stability in Sorting.
Concept of Internal and External Sorting.
Bubble Sort,
Insertion Sort,
Selection Sort,
Quick Sort and
Merge Sort,
Radix Sort, and
Shell Sort,
External Sorting, Time complexity analysis of Sorting Algorithms.
The role of the lexical analyzer
Specification of tokens
Finite state machines
From a regular expressions to an NFA
Convert NFA to DFA
Transforming grammars and regular expressions
Transforming automata to grammars
Language for specifying lexical analyzers
We introduce the Gaussian process (GP) modeling module developed within the UQLab software framework. The novel design of the GP-module aims at providing seamless integration of GP modeling into any uncertainty quantification workflow, as well as a standalone surrogate modeling tool. We first briefly present the key mathematical tools on the basis of GP modeling (a.k.a. Kriging), as well as the associated theoretical and computational framework. We then provide an extensive overview of the available features of the software and demonstrate its flexibility and user-friendliness. Finally, we showcase the usage and the performance of the software on several applications borrowed from different fields of engineering. These include a basic surrogate of a well-known analytical benchmark function; a hierarchical Kriging example applied to wind turbine aero-servo-elastic simulations and a more complex geotechnical example that requires a non-stationary, user-defined correlation function. The GP-module, like the rest of the scientific code that is shipped with UQLab, is open source (BSD license).
This paper proposes a shoulder inverse kinematics (IK) technique. Shoulder complex is comprised of the sternum, clavicle, ribs, scapula, humerus, and four joints.
Raish Khanji GTU 8th sem Internship Report.pdfRaishKhanji
This report details the practical experiences gained during an internship at Indo German Tool
Room, Ahmedabad. The internship provided hands-on training in various manufacturing technologies, encompassing both conventional and advanced techniques. Significant emphasis was placed on machining processes, including operation and fundamental
understanding of lathe and milling machines. Furthermore, the internship incorporated
modern welding technology, notably through the application of an Augmented Reality (AR)
simulator, offering a safe and effective environment for skill development. Exposure to
industrial automation was achieved through practical exercises in Programmable Logic Controllers (PLCs) using Siemens TIA software and direct operation of industrial robots
utilizing teach pendants. The principles and practical aspects of Computer Numerical Control
(CNC) technology were also explored. Complementing these manufacturing processes, the
internship included extensive application of SolidWorks software for design and modeling tasks. This comprehensive practical training has provided a foundational understanding of
key aspects of modern manufacturing and design, enhancing the technical proficiency and readiness for future engineering endeavors.
International Journal of Distributed and Parallel systems (IJDPS)samueljackson3773
The growth of Internet and other web technologies requires the development of new
algorithms and architectures for parallel and distributed computing. International journal of
Distributed and parallel systems is a bimonthly open access peer-reviewed journal aims to
publish high quality scientific papers arising from original research and development from
the international community in the areas of parallel and distributed systems. IJDPS serves
as a platform for engineers and researchers to present new ideas and system technology,
with an interactive and friendly, but strongly professional atmosphere.
2. 2
Chapter 4: Data Warehousing and On-line
Analytical Processing
Data Warehouse: Basic Concepts
Data Warehouse Modeling: Data Cube and OLAP
Data Warehouse Design and Usage
Data Warehouse Implementation
Data Generalization by Attribute-Oriented
Induction
Summary
3. 3
What is a Data Warehouse?
Defined in many different ways, but not rigorously.
A decision support database that is maintained separately from
the organization’s operational database
Support information processing by providing a solid platform of
consolidated, historical data for analysis.
“A data warehouse is a subject-oriented, integrated, time-variant,
and nonvolatile collection of data in support of management’s
decision-making process.”—W. H. Inmon
Data warehousing:
The process of constructing and using data warehouses
4. 4
Data Warehouse—Subject-Oriented
Organized around major subjects, such as customer,
product, sales
Focusing on the modeling and analysis of data for
decision makers, not on daily operations or transaction
processing
Provide a simple and concise view around particular
subject issues by excluding data that are not useful in
the decision support process
5. 5
Data Warehouse—Integrated
Constructed by integrating multiple, heterogeneous data
sources
relational databases, flat files, on-line transaction
records
Data cleaning and data integration techniques are
applied.
Ensure consistency in naming conventions, encoding
structures, attribute measures, etc. among different
data sources
E.g., Hotel price: currency, tax, breakfast covered, etc.
When data is moved to the warehouse, it is
converted.
6. 6
Data Warehouse—Time Variant
The time horizon for the data warehouse is significantly
longer than that of operational systems
Operational database: current value data
Data warehouse data: provide information from a
historical perspective (e.g., past 5-10 years)
Every key structure in the data warehouse
Contains an element of time, explicitly or implicitly
But the key of operational data may or may not
contain “time element”
7. 7
Data Warehouse—Nonvolatile
A physically separate store of data transformed from the
operational environment
Operational update of data does not occur in the data
warehouse environment
Does not require transaction processing, recovery,
and concurrency control mechanisms
Requires only two operations in data accessing:
initial loading of data and access of data
8. 8
OLTP vs. OLAP
users: clerk, IT professional (OLTP) vs. knowledge worker (OLAP)
function: day-to-day operations vs. decision support
DB design: application-oriented vs. subject-oriented
data: current, up-to-date, detailed, flat relational, isolated vs.
historical, summarized, multidimensional, integrated, consolidated
usage: repetitive vs. ad-hoc
access: read/write with index/hash on primary key vs. lots of scans
unit of work: short, simple transaction vs. complex query
# records accessed: tens vs. millions
# users: thousands vs. hundreds
DB size: 100 MB to GB vs. 100 GB to TB
metric: transaction throughput vs. query throughput and response time
9. 9
Why a Separate Data Warehouse?
High performance for both systems
DBMS— tuned for OLTP: access methods, indexing, concurrency
control, recovery
Warehouse—tuned for OLAP: complex OLAP queries,
multidimensional view, consolidation
Different functions and different data:
missing data: Decision support requires historical data which
operational DBs do not typically maintain
data consolidation: DS requires consolidation (aggregation,
summarization) of data from heterogeneous sources
data quality: different sources typically use inconsistent data
representations, codes and formats which have to be reconciled
Note: There are more and more systems which perform OLAP
analysis directly on relational databases
10. 10
Data Warehouse: A Multi-Tiered Architecture
Data flows from the operational DBs and other sources, through a
monitor & integrator, into an extract/transform/load/refresh stage;
the data storage tier holds the data warehouse and the data marts,
described by a metadata repository; an OLAP server (the OLAP engine)
serves this data to front-end tools for analysis, queries and
reports, and data mining.
11. 11
Three Data Warehouse Models
Enterprise warehouse
collects all of the information about subjects spanning
the entire organization
Data Mart
a subset of corporate-wide data that is of value to a
specific group of users; its scope is confined to
specific, selected groups, such as marketing data mart
Independent vs. dependent (directly from warehouse) data mart
Virtual warehouse
A set of views over operational databases
Only some of the possible summary views may be
materialized
12. 12
Extraction, Transformation, and Loading (ETL)
Data extraction
get data from multiple, heterogeneous, and external
sources
Data cleaning
detect errors in the data and rectify them when possible
Data transformation
convert data from legacy or host format to warehouse
format
Load
sort, summarize, consolidate, compute views, check
integrity, and build indices and partitions
Refresh
propagate the updates from the data sources to the
warehouse
13. 13
Metadata Repository
Metadata is the data defining warehouse objects. It stores:
Description of the structure of the data warehouse
schema, view, dimensions, hierarchies, derived data definitions, data
mart locations and contents
Operational metadata
data lineage (history of migrated data and transformation path),
currency of data (active, archived, or purged), monitoring
information (warehouse usage statistics, error reports, audit trails)
The algorithms used for summarization
The mapping from operational environment to the data warehouse
Data related to system performance
warehouse schema, view and derived data definitions
Business data
business terms and definitions, ownership of data, charging policies
14. 14
Chapter 4: Data Warehousing and On-line
Analytical Processing
Data Warehouse: Basic Concepts
Data Warehouse Modeling: Data Cube and OLAP
Data Warehouse Design and Usage
Data Warehouse Implementation
Data Generalization by Attribute-Oriented
Induction
Summary
15. 15
From Tables and Spreadsheets to
Data Cubes
A data warehouse is based on a multidimensional data model
which views data in the form of a data cube
A data cube, such as sales, allows data to be modeled and viewed in
multiple dimensions
Dimension tables, such as item (item_name, brand, type), or
time(day, week, month, quarter, year)
Fact table contains measures (such as dollars_sold) and keys
to each of the related dimension tables
In data warehousing literature, an n-D base cube is called a base
cuboid. The topmost 0-D cuboid, which holds the highest level of
summarization, is called the apex cuboid. The lattice of cuboids
forms a data cube.
16. 16
Cube: A Lattice of Cuboids
For the four dimensions time, item, location, and supplier:
0-D (apex) cuboid: all
1-D cuboids: time; item; location; supplier
2-D cuboids: (time, item); (time, location); (time, supplier);
(item, location); (item, supplier); (location, supplier)
3-D cuboids: (time, item, location); (time, item, supplier);
(time, location, supplier); (item, location, supplier)
4-D (base) cuboid: (time, item, location, supplier)
17. 17
Conceptual Modeling of Data Warehouses
Modeling data warehouses: dimensions & measures
Star schema: A fact table in the middle connected to a
set of dimension tables
Snowflake schema: A refinement of the star schema in
which part of the dimensional hierarchy is normalized into
a set of smaller dimension tables, forming a shape
similar to a snowflake
Fact constellations: Multiple fact tables share
dimension tables, viewed as a collection of stars,
therefore called galaxy schema or fact constellation
18. 18
Example of Star Schema
A central Sales fact table is connected to four dimension tables:
time dimension: time_key, day, day_of_the_week, month, quarter, year
item dimension: item_key, item_name, brand, type, supplier_type
branch dimension: branch_key, branch_name, branch_type
location dimension: location_key, street, city, state_or_province,
country
Sales fact table: time_key, item_key, branch_key, location_key, plus
the measures units_sold, dollars_sold, avg_sales
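A minimal SQL sketch of this star schema (column types and constraints
are assumptions for illustration; the branch and location dimensions
are analogous and omitted; column names such as day, month, and type
may need quoting in some dialects):

CREATE TABLE time_dim (
  time_key INT PRIMARY KEY,
  day INT,
  day_of_the_week VARCHAR(9),
  month INT,
  quarter INT,
  year INT
);

CREATE TABLE item_dim (
  item_key INT PRIMARY KEY,
  item_name VARCHAR(50),
  brand VARCHAR(30),
  type VARCHAR(30),
  supplier_type VARCHAR(30)
);

-- The fact table holds one row per sale: a foreign key into each
-- dimension plus the numeric measures.
CREATE TABLE sales_fact (
  time_key INT REFERENCES time_dim (time_key),
  item_key INT REFERENCES item_dim (item_key),
  branch_key INT,
  location_key INT,
  units_sold INT,
  dollars_sold DECIMAL(12, 2),
  avg_sales DECIMAL(12, 2)
);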
19. 19
Example of Snowflake Schema
The Sales fact table is unchanged (time_key, item_key, branch_key,
location_key; measures units_sold, dollars_sold, avg_sales), but two
dimension hierarchies are normalized into smaller tables:
time dimension: time_key, day, day_of_the_week, month, quarter, year
item dimension: item_key, item_name, brand, type, supplier_key,
referencing a supplier table (supplier_key, supplier_type)
branch dimension: branch_key, branch_name, branch_type
location dimension: location_key, street, city_key, referencing a
city table (city_key, city, state_or_province, country)
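The normalization step that distinguishes the snowflake from the star
can be sketched in SQL (names and types are again assumptions): the
supplier attributes move out of the item dimension into their own table.

CREATE TABLE supplier_dim (
  supplier_key INT PRIMARY KEY,
  supplier_type VARCHAR(30)
);

-- The item dimension now stores a foreign key instead of the
-- supplier_type value itself: less redundancy, one extra join at
-- query time.
CREATE TABLE item_dim (
  item_key INT PRIMARY KEY,
  item_name VARCHAR(50),
  brand VARCHAR(30),
  type VARCHAR(30),
  supplier_key INT REFERENCES supplier_dim (supplier_key)
);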
20. 20
Example of Fact Constellation
Two fact tables share the dimension tables, forming a galaxy of stars:
Sales fact table: time_key, item_key, branch_key, location_key;
measures units_sold, dollars_sold, avg_sales
Shipping fact table: time_key, item_key, shipper_key, from_location,
to_location; measures dollars_cost, units_shipped
shipper dimension: shipper_key, shipper_name, location_key,
shipper_type
time, item, and branch dimensions as in the star schema; the shared
location dimension: location_key, street, city, province_or_state,
country
21. 21
A Concept Hierarchy:
Dimension (location)
Levels, from most specific to most general: office < city < country <
region < all. For example, all splits into the regions Europe and
North_America; Europe into Germany and Spain, and North_America into
Canada and Mexico; Germany into the city Frankfurt, and Canada into
Vancouver and Toronto; Vancouver in turn into the offices M. Wind and
L. Chan.
22. 22
Data Cube Measures: Three Categories
Distributive: if the result derived by applying the function
to n aggregate values is the same as that derived by
applying the function on all the data without partitioning
E.g., count(), sum(), min(), max()
Algebraic: if it can be computed by an algebraic function
with M arguments (where M is a bounded integer), each of
which is obtained by applying a distributive aggregate
function
E.g., avg(), min_N(), standard_deviation()
Holistic: if there is no constant bound on the storage size
needed to describe a subaggregate.
E.g., median(), mode(), rank()
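As a concrete illustration of the algebraic category, avg() can be
rebuilt from the two distributive measures sum() and count(). A minimal
SQL sketch against the example star schema (table and column names are
assumptions): materialize the partials per city, then derive country
averages without rescanning the base data.

-- Per-city partial aggregates; SUM and COUNT are distributive, so
-- these partials can be merged across any partitioning of the data.
CREATE VIEW city_sales AS
SELECT l.country, l.city,
       SUM(f.dollars_sold) AS sum_sales,
       COUNT(*) AS row_cnt
FROM sales_fact f
JOIN location_dim l ON f.location_key = l.location_key
GROUP BY l.country, l.city;

-- Country-level average derived from the partials (algebraic, M = 2).
-- Averaging the per-city averages instead would weight every city
-- equally and generally give a wrong result.
SELECT country, SUM(sum_sales) / SUM(row_cnt) AS avg_sales
FROM city_sales
GROUP BY country;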
23. 23
View of Warehouses and Hierarchies
Specification of hierarchies
Schema hierarchy
day < {month <
quarter; week} < year
Set_grouping hierarchy
{1..10} < inexpensive
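A set-grouping hierarchy like {1..10} < inexpensive can be expressed
directly in SQL with a CASE expression; a hypothetical sketch (the
price column and the range boundaries beyond 1..10 are assumptions):

SELECT item_name,
       CASE
         WHEN price BETWEEN 1 AND 10 THEN 'inexpensive'
         WHEN price BETWEEN 11 AND 100 THEN 'moderately_priced'
         ELSE 'expensive'
       END AS price_range
FROM item_dim;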
24. 24
Multidimensional Data
Sales volume as a function of product, month, and region
Dimensions: Product, Location, Time
Hierarchical summarization paths (most specific first):
Product < Category < Industry
Office < City < Country < Region
Day < {Month < Quarter; Week} < Year
25. 25
A Sample Data Cube
A 3-D sales cube with dimensions date (1Qtr through 4Qtr), product
(TV, VCR, PC), and country (U.S.A., Canada, Mexico), bordered by sum
cells along each dimension; the cell aggregating all four quarters for
TV in the U.S.A., for example, holds the total annual sales of TVs in
the U.S.A.
26. 26
Cuboids Corresponding to the Cube
0-D (apex) cuboid: all
1-D cuboids: product; date; country
2-D cuboids: (product, date); (product, country); (date, country)
3-D (base) cuboid: (product, date, country)
27. 27
Typical OLAP Operations
Roll up (drill-up): summarize data
by climbing up hierarchy or by dimension reduction
Drill down (roll down): reverse of roll-up
from higher level summary to lower level summary or
detailed data, or introducing new dimensions
Slice and dice: project and select
Pivot (rotate):
reorient the cube, visualization, 3D to series of 2D planes
Other operations
drill across: involving (across) more than one fact table
drill through: through the bottom level of the cube to its
back-end relational tables (using SQL)
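On a relational (ROLAP) store, the core operations map naturally onto
SQL. A minimal sketch against the example star schema (table, column,
and key names are assumptions):

-- Roll-up: climb the location hierarchy from city to country.
SELECT l.country, SUM(f.dollars_sold) AS total_sales
FROM sales_fact f
JOIN location_dim l ON f.location_key = l.location_key
GROUP BY l.country;

-- Drill-down: move from quarterly to monthly summaries on time.
SELECT t.year, t.quarter, t.month, SUM(f.dollars_sold) AS total_sales
FROM sales_fact f
JOIN time_dim t ON f.time_key = t.time_key
GROUP BY t.year, t.quarter, t.month;

-- Slice: fix one dimension to a single value; dicing adds further
-- selections on other dimensions.
SELECT l.city, SUM(f.dollars_sold) AS total_sales
FROM sales_fact f
JOIN time_dim t ON f.time_key = t.time_key
JOIN location_dim l ON f.location_key = l.location_key
WHERE t.year = 2010
GROUP BY l.city;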
29. 29
A Star-Net Query Model
Radial lines emanate from a central point, one per dimension; each
concentric circle on a line is called a footprint and marks one
abstraction level of that dimension:
Shipping Method: AIR-EXPRESS, TRUCK
Customer Orders (Customer): ORDER, CONTRACTS
Product: PRODUCT ITEM, PRODUCT GROUP, PRODUCT LINE
Organization: SALES PERSON, DISTRICT, DIVISION
Location: CITY, COUNTRY, REGION
Time: DAILY, QTRLY, ANNUALLY
Promotion
30. 30
Browsing a Data Cube
Visualization
OLAP capabilities
Interactive manipulation
31. 31
Chapter 4: Data Warehousing and On-line
Analytical Processing
Data Warehouse: Basic Concepts
Data Warehouse Modeling: Data Cube and OLAP
Data Warehouse Design and Usage
Data Warehouse Implementation
Data Generalization by Attribute-Oriented
Induction
Summary
Design of Data Warehouse: A Business
Analysis Framework
Four views regarding the design of a data warehouse
Top-down view
allows selection of the relevant information necessary for the
data warehouse
Data source view
exposes the information being captured, stored, and
managed by operational systems
Data warehouse view
consists of fact tables and dimension tables
Business query view
sees the perspectives of data in the warehouse from the
viewpoint of the end user
Data Warehouse Design Process
Top-down, bottom-up approaches or a combination of both
Top-down: Starts with overall design and planning (mature)
Bottom-up: Starts with experiments and prototypes (rapid)
From a software engineering point of view
Waterfall: structured and systematic analysis at each step before
proceeding to the next
Spiral: rapid generation of increasingly functional systems, with
short turnaround time
Typical data warehouse design process
Choose a business process to model, e.g., orders, invoices, etc.
Choose the grain (atomic level of data) of the business process
Choose the dimensions that will apply to each fact table record
Choose the measure that will populate each fact table record
Data Warehouse Development:
A Recommended Approach
[Diagram] Define a high-level corporate data model; through successive
model refinement, develop data marts and distributed data marts, and
integrate them into a multi-tier data warehouse built around an
enterprise data warehouse.
Data Warehouse Usage
Three kinds of data warehouse applications
Information processing
supports querying, basic statistical analysis, and reporting
using crosstabs, tables, charts and graphs
Analytical processing
multidimensional analysis of data warehouse data
supports basic OLAP operations, slice-dice, drilling, pivoting
Data mining
knowledge discovery of hidden patterns
supports associations, constructing analytical models,
performing classification and prediction, and presenting the
mining results using visualization tools
From On-Line Analytical Processing (OLAP)
to On-Line Analytical Mining (OLAM)
Why online analytical mining?
High quality of data in data warehouses
DW contains integrated, consistent, cleaned data
Available information processing infrastructure surrounding
data warehouses
ODBC, OLEDB, Web accessing, service facilities,
reporting and OLAP tools
OLAP-based exploratory data analysis
Mining with drilling, dicing, pivoting, etc.
On-line selection of data mining functions
Integration and swapping of multiple mining
functions, algorithms, and tasks
Chapter 4: Data Warehousing and On-line
Analytical Processing
Data Warehouse: Basic Concepts
Data Warehouse Modeling: Data Cube and OLAP
Data Warehouse Design and Usage
Data Warehouse Implementation
Data Generalization by Attribute-Oriented
Induction
Summary
Efficient Data Cube Computation
Data cube can be viewed as a lattice of cuboids
The bottom-most cuboid is the base cuboid
The top-most cuboid (apex) contains only one cell
How many cuboids are there in an n-dimensional cube where
dimension i has L_i levels? (see the formula below)
Materialization of data cube
Materialize every (cuboid) (full materialization),
none (no materialization), or some (partial
materialization)
Selection of which cuboids to materialize
Based on size, sharing, access frequency, etc.
T = ∏_{i=1}^{n} (L_i + 1)
where L_i is the number of levels of dimension i; the extra 1 counts the
virtual top level all. E.g., for 10 dimensions with 4 levels each,
T = 5^10 = 9,765,625 cuboids.
The “Compute Cube” Operator
Cube definition and computation in DMQL
define cube sales [item, city, year]: sum (sales_in_dollars)
compute cube sales
Transform it into a SQL-like language (with a new operator cube
by, introduced by Gray et al.’96)
SELECT item, city, year, SUM (amount)
FROM SALES
CUBE BY item, city, year
Need to compute the following group-bys:
(item, city, year),
(item, city), (item, year), (city, year),
(item), (city), (year),
()
[Diagram] The lattice of these eight cuboids, from the apex () down to the
base (city, item, year). (A computation sketch follows below.)
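Standard SQL (SQL:1999 and later) spells the operator GROUP BY CUBE(item,
city, year). Where no cube operator is available, the 2^n group-bys can be
computed directly; a small Python/sqlite3 sketch with made-up rows:

import sqlite3
from itertools import combinations

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE sales (item, city, year, amount)")
con.executemany("INSERT INTO sales VALUES (?, ?, ?, ?)", [
    ("TV", "Vancouver", 2004, 100), ("TV", "Toronto", 2004, 80),
    ("PC", "Vancouver", 2005, 120), ("PC", "Toronto", 2004, 90),
])

dims = ("item", "city", "year")
# One GROUP BY per cuboid: 2^3 = 8 group-bys, from (item, city, year) to ().
for k in range(len(dims), -1, -1):
    for cuboid in combinations(dims, k):
        cols = ", ".join(cuboid)
        sql = (f"SELECT {cols}, SUM(amount) FROM sales GROUP BY {cols}"
               if cuboid else "SELECT SUM(amount) FROM sales")
        print(cuboid, con.execute(sql).fetchall())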
Indexing OLAP Data: Bitmap Index
Index on a particular column
Each value in the column has a bit vector: bit-ops are fast
The length of the bit vector: # of records in the base table
The i-th bit is set if the i-th row of the base table has the value for
the indexed column
Not suitable for high-cardinality domains
A recent bit compression technique, Word-Aligned Hybrid (WAH),
makes it work for high-cardinality domains as well [Wu, et al. TODS'06]
Base table:
Cust  Region   Type
C1    Asia     Retail
C2    Europe   Dealer
C3    Asia     Dealer
C4    America  Retail
C5    Europe   Dealer

Index on Region:
RecID  Asia  Europe  America
1      1     0       0
2      0     1       0
3      1     0       0
4      0     0       1
5      0     1       0

Index on Type:
RecID  Retail  Dealer
1      1       0
2      0       1
3      0       1
4      1       0
5      0       1
(A Python sketch follows below.)
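A minimal uncompressed bitmap-index sketch over the five-row base table above,
added for illustration: one Python integer per distinct value serves as the
bit vector, and a conjunctive query becomes a single bitwise AND.

from collections import defaultdict

rows = [("C1", "Asia", "Retail"), ("C2", "Europe", "Dealer"),
        ("C3", "Asia", "Dealer"), ("C4", "America", "Retail"),
        ("C5", "Europe", "Dealer")]

def build_bitmap(values):
    # Bit i of index[v] is set iff row i carries value v.
    index = defaultdict(int)
    for i, v in enumerate(values):
        index[v] |= 1 << i
    return index

region = build_bitmap(r[1] for r in rows)
rtype = build_bitmap(r[2] for r in rows)

# "Region = Europe AND Type = Dealer" is one bit-op.
hits = region["Europe"] & rtype["Dealer"]
print([rows[i][0] for i in range(len(rows)) if hits >> i & 1])  # ['C2', 'C5']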
Indexing OLAP Data: Join Indices
Join index: JI(R-id, S-id) where R(R-id, ...) ⋈ S(S-id, ...)
Traditional indices map values to a list of record ids
A join index materializes the relational join in the JI file and
speeds up relational joins
In data warehouses, a join index relates the values of the
dimensions of a star schema to rows in the fact table.
E.g. fact table: Sales and two dimensions city
and product
A join index on city maintains for each
distinct city a list of R-IDs of the tuples
recording the Sales in the city
Join indices can span multiple dimensions
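An illustrative Python sketch of a join index on city (toy data; a real
system would store the R-ID lists persistently beside the fact table):

from collections import defaultdict

location = {1: "Vancouver", 2: "Toronto"}        # location_key -> city
sales = [(0, 1, 100), (1, 2, 80), (2, 1, 120)]   # (rid, location_key, dollars)

# For each distinct city, the list of R-IDs of Sales tuples for that city.
join_index = defaultdict(list)
for rid, loc_key, _ in sales:
    join_index[location[loc_key]].append(rid)

# A query on city = 'Vancouver' fetches fact rows directly, with no join scan.
print([sales[rid] for rid in join_index["Vancouver"]])  # rows 0 and 2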
Efficient Processing OLAP Queries
Determine which operations should be performed on the available cuboids
Transform drill, roll, etc. into corresponding SQL and/or OLAP operations,
e.g., dice = selection + projection
Determine which materialized cuboid(s) should be selected for OLAP op.
Let the query to be processed be on {brand, province_or_state} with the
condition “year = 2004”, and there are 4 materialized cuboids available:
1) {year, item_name, city}
2) {year, brand, country}
3) {year, brand, province_or_state}
4) {item_name, province_or_state} where year = 2004
Which should be selected to process the query? Cuboid 2 cannot be used,
since province_or_state cannot be derived from country; cuboids 1, 3, and 4
can all answer it, and cuboid 3 is usually the cheapest choice.
Explore indexing structures and compressed vs. dense array structures in MOLAP
OLAP Server Architectures
Relational OLAP (ROLAP)
Use relational or extended-relational DBMS to store and manage
warehouse data, plus OLAP middleware
Include optimization of DBMS backend, implementation of
aggregation navigation logic, and additional tools and services
Greater scalability
Multidimensional OLAP (MOLAP)
Sparse array-based multidimensional storage engine
Fast indexing to pre-computed summarized data
Hybrid OLAP (HOLAP) (e.g., Microsoft SQLServer)
Flexibility, e.g., low level: relational, high-level: array
Specialized SQL servers (e.g., Red Brick)
Specialized support for SQL queries over star/snowflake schemas
Chapter 4: Data Warehousing and On-line
Analytical Processing
Data Warehouse: Basic Concepts
Data Warehouse Modeling: Data Cube and OLAP
Data Warehouse Design and Usage
Data Warehouse Implementation
Data Generalization by Attribute-Oriented
Induction
Summary
Attribute-Oriented Induction
Proposed in 1989 (KDD '89 workshop)
Not confined to categorical data nor particular measures
How is it done?
Collect the task-relevant data (initial relation) using a
relational database query
Perform generalization by attribute removal or
attribute generalization
Apply aggregation by merging identical, generalized
tuples and accumulating their respective counts
Interaction with users for knowledge presentation
Attribute-Oriented Induction: An Example
Example: Describe general characteristics of graduate
students in the University database
Step 1. Fetch the relevant set of data using an SQL
statement, e.g.,
Select * (i.e., name, gender, major, birth_place,
birth_date, residence, phone#, gpa)
from student
where student_status in {“Msc”, “MBA”, “PhD” }
Step 2. Perform attribute-oriented induction
Step 3. Present results in generalized relation, cross-tab,
or rule forms
Class Characterization: An Example

Initial relation:
Name            Gender  Major    Birth-Place            Birth_date  Residence                 Phone #   GPA
Jim Woodman     M       CS       Vancouver, BC, Canada  8-12-76     3511 Main St., Richmond   687-4598  3.67
Scott Lachance  M       CS       Montreal, Que, Canada  28-7-75     345 1st Ave., Richmond    253-9106  3.70
Laura Lee       F       Physics  Seattle, WA, USA       25-8-70     125 Austin Ave., Burnaby  420-5232  3.83
...             ...     ...      ...                    ...         ...                       ...       ...

Generalization plan: Name removed; Gender retained; Major generalized to
{Sci., Eng., Bus.}; Birth-Place to Country; Birth_date to Age range;
Residence to City; Phone # removed; GPA to {Excl., VG, ...}

Prime generalized relation:
Gender  Major    Birth_region  Age_range  Residence  GPA        Count
M       Science  Canada        20-25      Richmond   Very-good  16
F       Science  Foreign       25-30      Burnaby    Excellent  22
...     ...      ...           ...        ...        ...        ...

Crosstab (Gender x Birth_Region):
Gender  Canada  Foreign  Total
M       16      14       30
F       10      22       32
Total   26      36       62
Basic Principles of Attribute-Oriented Induction
Data focusing: task-relevant data, including dimensions,
and the result is the initial relation
Attribute-removal: remove attribute A if there is a large set
of distinct values for A but (1) there is no generalization
operator on A, or (2) A’s higher level concepts are
expressed in terms of other attributes
Attribute-generalization: If there is a large set of distinct
values for A, and there exists a set of generalization
operators on A, then select an operator and generalize A
Attribute-threshold control: typically 2-8, specified or default
Generalized relation threshold control: control the final
relation/rule size
Attribute-Oriented Induction: Basic Algorithm
InitialRel: Query processing of task-relevant data, deriving
the initial relation.
PreGen: Based on the analysis of the number of distinct
values in each attribute, determine generalization plan for
each attribute: removal? or how high to generalize?
PrimeGen: Based on the PreGen plan, perform
generalization to the right level to derive a “prime
generalized relation”, accumulating the counts.
Presentation: User interaction: (1) adjust levels by drilling,
(2) pivoting, (3) mapping into rules, cross tabs,
visualization presentations.
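A compact Python sketch of the PreGen/PrimeGen steps on the student example;
the concept-hierarchy tables, age buckets, and data rows are illustrative
assumptions, not taken from the slides:

from collections import Counter

city_to_country = {"Vancouver": "Canada", "Montreal": "Canada",
                   "Seattle": "USA"}
major_to_field = {"CS": "Science", "Physics": "Science", "MBA": "Business"}

def birth_region(country):
    return "Canada" if country == "Canada" else "Foreign"

# Initial relation: (name, gender, major, birth_city, age)
students = [("Jim Woodman", "M", "CS", "Vancouver", 23),
            ("Scott Lachance", "M", "CS", "Montreal", 24),
            ("Laura Lee", "F", "Physics", "Seattle", 28)]

prime = Counter()
for name, gender, major, birth_city, age in students:
    # Attribute removal: drop name (many distinct values, no hierarchy).
    # Attribute generalization: climb hierarchies; bucket ages.
    generalized = (gender,
                   major_to_field[major],
                   birth_region(city_to_country[birth_city]),
                   "20-25" if age <= 25 else "25-30")
    prime[generalized] += 1  # merge identical tuples, accumulate counts

for tup, count in prime.items():
    print(tup, count)
# ('M', 'Science', 'Canada', '20-25') 2
# ('F', 'Science', 'Foreign', '25-30') 1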
Presentation of Generalized Results
Generalized relation:
Relations where some or all attributes are generalized, with counts
or other aggregation values accumulated.
Cross tabulation:
Mapping results into cross tabulation form (similar to contingency
tables).
Visualization techniques:
Pie charts, bar charts, curves, cubes, and other visual forms.
Quantitative characteristic rules:
Mapping generalized result into characteristic rules with quantitative
information associated with it, e.g.,
∀x, grad(x) ∧ male(x) ⇒
birth_region(x) = "Canada" [t: 53%] ∨ birth_region(x) = "Foreign" [t: 47%]
The t-weights follow from the crosstab above: of the 30 male graduate
students, 16/30 ≈ 53% were born in Canada and 14/30 ≈ 47% abroad.
(A check in code follows below.)
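A few lines of Python verifying the t-weights from the crosstab counts:

# t-weight of a disjunct = count of the generalized tuple / total count
# for the class (here, male graduate students).
counts = {"Canada": 16, "Foreign": 14}   # male grads by birth_region
total = sum(counts.values())             # 30
for region, n in counts.items():
    print(f'birth_region = "{region}": t = {100 * n / total:.0f}%')
# Canada: t = 53%, Foreign: t = 47%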
Mining Class Comparisons
Comparison: Comparing two or more classes
Method:
Partition the set of relevant data into the target class and the
contrasting class(es)
Generalize both classes to the same high level concepts
Compare tuples with the same high level descriptions
Present for every tuple its description and two measures
support - distribution within single class
comparison - distribution between classes
Highlight the tuples with strong discriminant features
Relevance Analysis:
Find attributes (features) which best distinguish different classes
Concept Description vs. Cube-Based OLAP
Similarity:
Data generalization
Presentation of data summarization at multiple levels of
abstraction
Interactive drilling, pivoting, slicing and dicing
Differences:
OLAP has systematic preprocessing, is query independent,
and can drill down to rather low levels
AOI has automated allocation of the desired level, and may
perform dimension relevance analysis/ranking when
there are many relevant dimensions
AOI can work on data that are not in relational form
Chapter 4: Data Warehousing and On-line
Analytical Processing
Data Warehouse: Basic Concepts
Data Warehouse Modeling: Data Cube and OLAP
Data Warehouse Design and Usage
Data Warehouse Implementation
Data Generalization by Attribute-Oriented
Induction
Summary
Summary
Data warehousing: A multi-dimensional model of a data warehouse
A data cube consists of dimensions & measures
Star schema, snowflake schema, fact constellations
OLAP operations: drilling, rolling, slicing, dicing and pivoting
Data Warehouse Architecture, Design, and Usage
Multi-tiered architecture
Business analysis design framework
Information processing, analytical processing, data mining, OLAM (Online
Analytical Mining)
Implementation: Efficient computation of data cubes
Partial vs. full vs. no materialization
Indexing OLAP data: Bitmap index and join index
OLAP query processing
OLAP servers: ROLAP, MOLAP, HOLAP
Data generalization: Attribute-oriented induction
References (I)
S. Agarwal, R. Agrawal, P. M. Deshpande, A. Gupta, J. F. Naughton, R. Ramakrishnan, and S.
Sarawagi. On the computation of multidimensional aggregates. VLDB’96
D. Agrawal, A. E. Abbadi, A. Singh, and T. Yurek. Efficient view maintenance in data
warehouses. SIGMOD’97
R. Agrawal, A. Gupta, and S. Sarawagi. Modeling multidimensional databases. ICDE’97
S. Chaudhuri and U. Dayal. An overview of data warehousing and OLAP technology. ACM
SIGMOD Record, 26:65-74, 1997
E. F. Codd, S. B. Codd, and C. T. Salley. Beyond decision support. Computer World, 27, July
1993.
J. Gray, et al. Data cube: A relational aggregation operator generalizing group-by, cross-tab
and sub-totals. Data Mining and Knowledge Discovery, 1:29-54, 1997.
A. Gupta and I. S. Mumick. Materialized Views: Techniques, Implementations, and
Applications. MIT Press, 1999.
J. Han. Towards on-line analytical mining in large databases. ACM SIGMOD Record, 27:97-107,
1998.
V. Harinarayan, A. Rajaraman, and J. D. Ullman. Implementing data cubes efficiently.
SIGMOD’96
J. Hellerstein, P. Haas, and H. Wang. Online aggregation. SIGMOD'97
References (II)
C. Imhoff, N. Galemmo, and J. G. Geiger. Mastering Data Warehouse Design: Relational and
Dimensional Techniques. John Wiley, 2003
W. H. Inmon. Building the Data Warehouse. John Wiley, 1996
R. Kimball and M. Ross. The Data Warehouse Toolkit: The Complete Guide to Dimensional
Modeling. 2ed. John Wiley, 2002
P. O’Neil and G. Graefe. Multi-table joins through bitmapped join indices. SIGMOD Record, 24:8–
11, Sept. 1995.
P. O'Neil and D. Quass. Improved query performance with variant indexes. SIGMOD'97
Microsoft. OLEDB for OLAP programmer's reference version 1.0. In
http://www.microsoft.com/data/oledb/olap, 1998
S. Sarawagi and M. Stonebraker. Efficient organization of large multidimensional arrays. ICDE'94
A. Shoshani. OLAP and statistical databases: Similarities and differences. PODS’00.
D. Srivastava, S. Dar, H. V. Jagadish, and A. V. Levy. Answering queries with aggregation using
views. VLDB'96
P. Valduriez. Join indices. ACM Trans. Database Systems, 12:218-246, 1987.
J. Widom. Research problems in data warehousing. CIKM’95
K. Wu, E. Otoo, and A. Shoshani, Optimal Bitmap Indices with Efficient Compression, ACM Trans.
on Database Systems (TODS), 31(1): 1-38, 2006
Compression of Bitmap Indices
Bitmap indexes must be compressed to reduce I/O costs
and minimize CPU usage, since the majority of the bits are 0s
Two compression schemes:
Byte-aligned Bitmap Code (BBC)
Word-Aligned Hybrid (WAH) code
Time and space required to operate on a compressed
bitmap are proportional to the total size of the compressed bitmap
Optimal on attributes of low cardinality as well as those of
high cardinality
WAH outperforms BBC by about a factor of two
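A much-simplified, WAH-flavored run-length sketch, added for illustration
(the real WAH encoding of Wu et al. packs fills and literals into machine
words; this keeps only the word-aligned grouping idea):

W = 31  # payload bits per 32-bit word in WAH

def wah_compress(bits):
    # bits: a string of '0'/'1'. Split into 31-bit groups; runs of all-0 or
    # all-1 groups collapse into one (fill, bit, run_length) entry, and any
    # mixed group becomes a literal entry.
    words, i = [], 0
    while i < len(bits):
        group = bits[i:i + W].ljust(W, "0")
        if group in ("0" * W, "1" * W):
            run = 1
            while bits[i + run * W: i + (run + 1) * W] == group:
                run += 1
            words.append(("fill", group[0], run))
            i += run * W
        else:
            words.append(("literal", group))
            i += W
    return words

# A mostly-zero bit vector compresses to one literal word plus one fill word.
bitmap = "0001" + "0" * 89
print(wah_compress(bitmap))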