Data Mining
Introduction
Data mining is one of the most useful techniques for helping entrepreneurs, researchers, and
individuals extract valuable information from huge sets of data. Data mining is also
called Knowledge Discovery in Databases (KDD). The knowledge discovery process
includes data cleaning, data integration, data selection, data transformation, data mining,
pattern evaluation, and knowledge presentation.
This note covers all the main topics of data mining, such as applications, data mining vs.
machine learning, data mining tools, social media data mining, data mining techniques,
clustering in data mining, challenges in data mining, etc.
Data mining is the act of automatically searching large stores of information for trends
and patterns that go beyond simple analysis procedures. Data mining utilizes complex
mathematical algorithms to segment the data and evaluate the probability of future
events. Data mining is also called Knowledge Discovery of Data (KDD).
Data mining is a process used by organizations to extract specific data from huge databases
to solve business problems. It primarily turns raw data into useful information.
Data mining is similar to data science: it is carried out by a person, in a specific situation,
on a particular data set, with an objective. This process includes various types of services such
as text mining, web mining, audio and video mining, pictorial data mining, and social media
mining. It is done through software that may be simple or highly specialized. By outsourcing data
mining, all the work can be done faster and with low operating costs. Specialized firms can also
use new technologies to collect data that is impossible to locate manually. There are tons
of information available on various platforms, but very little of it is accessible as knowledge.
The biggest challenge is to analyze the data to extract important information that can be used
to solve a problem or support company development.
There are many powerful tools and techniques available to mine data and derive better
insight from it.
Relational Database
A relational database is a collection of multiple data sets formally organized into tables,
records, and columns, from which data can be accessed in various ways without having to
reorganize the database tables. Tables convey and share information, which facilitates data
searchability, reporting, and organization.
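The flexibility described above can be sketched with Python's built-in sqlite3 module. The "customers" table and its rows below are invented for illustration; the point is that the same table answers different queries without being reorganized.

```python
import sqlite3

# Build a small in-memory relational database (hypothetical "customers" table).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT, city TEXT)")
conn.executemany(
    "INSERT INTO customers (name, city) VALUES (?, ?)",
    [("Alice", "Delhi"), ("Bob", "Mumbai"), ("Carol", "Delhi")],
)

# The same data can be accessed in various ways without reorganizing the table.
by_city = conn.execute(
    "SELECT city, COUNT(*) FROM customers GROUP BY city ORDER BY city"
).fetchall()
print(by_city)  # [('Delhi', 2), ('Mumbai', 1)]
conn.close()
```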
Data warehouses
A data warehouse is a technology that collects data from various sources within the
organization to provide meaningful business insights. The huge amount of data comes from
multiple places, such as Marketing and Finance. The extracted data is utilized for analytical
purposes and helps in decision-making for a business organization. A data warehouse is
designed for the analysis of data rather than for transaction processing.
Data Repositories
A data repository generally refers to a destination for data storage. However, many IT
professionals use the term more specifically to refer to a particular kind of setup within an IT
structure, for example, a group of databases where an organization has kept various kinds of
information.
Object-Relational Database
A combination of an object-oriented database model and a relational database model is
called an object-relational model. It supports classes, objects, inheritance, etc.
One of the primary objectives of the object-relational data model is to close the gap
between the relational database and the object-oriented modeling practices frequently
used in many programming languages, for example, C++, Java, C#, and so on.
Transactional Database
A transactional database refers to a database management system (DBMS) that can undo a
database transaction if it is not completed appropriately. Even though this was a unique
capability long ago, today most relational database systems support transactional
database activities.
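The "undo" capability can be sketched with sqlite3's rollback. The accounts and the simulated failure below are invented for illustration: a transfer that fails midway is rolled back, so the balances stay unchanged.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT, balance INTEGER)")
conn.execute("INSERT INTO accounts VALUES ('A', 100), ('B', 50)")
conn.commit()

# A transfer that fails midway is undone, leaving both balances unchanged.
try:
    conn.execute("UPDATE accounts SET balance = balance - 70 WHERE name = 'A'")
    raise RuntimeError("simulated failure before crediting account B")
    conn.execute("UPDATE accounts SET balance = balance + 70 WHERE name = 'B'")
except RuntimeError:
    conn.rollback()  # undo the partial transaction

balances = dict(conn.execute("SELECT name, balance FROM accounts"))
print(balances)  # {'A': 100, 'B': 50}
```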
The following are some of the major challenges faced in data mining:
Data Distribution
Real-world data is usually stored on various platforms in a distributed computing
environment. It might be in databases, individual systems, or even on the internet.
Practically, it is quite a tough task to move all the data to a centralized data repository,
mainly due to organizational and technical concerns. For example, various regional offices
may have their own servers to store their data, and it is not feasible to store all the data
from all the offices on a central server. Therefore, data mining requires the development
of tools and algorithms that allow the mining of distributed data.
Complex Data
Real-world data is heterogeneous: it could be multimedia data (including audio, video, and
images), complex data, spatial data, time series, and so on. Managing these various types of
data and extracting useful information from them is a tough task. Most of the time, new
technologies, tools, and methodologies have to be developed to obtain specific
information.
Performance
The data mining system's performance relies primarily on the efficiency of the algorithms
and techniques used. If the designed algorithms and techniques are not up to the mark, the
efficiency of the data mining process will be adversely affected.
Data Visualization
In data mining, data visualization is a very important process because it is the primary
method of showing the output to the user in a presentable way. The extracted data should
convey the exact meaning of what it intends to express. But many times, representing the
information to the end-user in a precise and easy way is difficult. Because both the input
data and the output information can be complicated, very efficient and successful data
visualization processes need to be implemented.
There are many more challenges in data mining in addition to the above-mentioned
problems. More problems are revealed as the actual data mining process begins, and the
success of data mining relies on overcoming all these difficulties.
Data Mining Techniques
Data mining includes the utilization of refined data analysis tools to find previously
unknown, valid patterns and relationships in huge data sets. These tools can incorporate
statistical models, machine learning techniques, and mathematical algorithms, such as
neural networks or decision trees. Thus, data mining incorporates analysis and prediction.
Drawing on various methods and technologies from the intersection of machine learning,
database management, and statistics, professionals in data mining have devoted their
careers to better understanding how to process and draw conclusions from huge
amounts of data. But what are the methods they use to make it happen?
In recent data mining projects, various major data mining techniques have been developed
and used, including association, classification, clustering, prediction, sequential patterns,
and regression.
1. Classification
This technique is used to obtain important and relevant information about data and
metadata. It helps to classify data into different classes. For example, an e-mail program
may classify an incoming message as legitimate or as spam.
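As a minimal sketch of classification, the snippet below implements a 1-nearest-neighbour classifier: each new point is assigned the class of its closest labelled example. The feature values and the class labels ("low-spender", "high-spender") are made up for illustration.

```python
def classify(point, training_data):
    """Return the label of the training example closest to `point`."""
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    nearest = min(training_data, key=lambda ex: distance(point, ex[0]))
    return nearest[1]

# Hypothetical labelled examples: (features, class label).
training_data = [
    ((1.0, 1.0), "low-spender"),
    ((1.5, 2.0), "low-spender"),
    ((8.0, 8.0), "high-spender"),
    ((9.0, 7.5), "high-spender"),
]

print(classify((2.0, 1.5), training_data))   # low-spender
print(classify((7.5, 8.5), training_data))   # high-spender
```

Real classifiers (decision trees, neural networks) replace the distance rule with a learned model, but the input/output shape is the same: features in, class label out.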
2. Clustering
Clustering is a division of information into groups of connected objects. Describing the data
by a few clusters loses certain fine details but achieves simplification: the data is modeled
by its clusters. From a historical point of view, data modeling with clusters is rooted in
statistics, mathematics, and numerical analysis. From a machine learning point of view,
clusters correspond to hidden patterns, the search for clusters is unsupervised learning,
and the resulting framework represents a data concept. From a practical point of view,
clustering plays an extraordinary role in data mining applications, for example, scientific
data exploration, text mining, information retrieval, spatial database applications, CRM,
web analysis, computational biology, medical diagnostics, and much more.
In other words, cluster analysis is a data mining technique for identifying similar data.
This technique helps to recognize the differences and similarities between the data.
Clustering is very similar to classification, but it involves grouping chunks of data
together based on their similarities.
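A minimal k-means clustering sketch on one-dimensional data illustrates the idea; the point values and starting centroids are invented. Points are alternately assigned to the nearest centroid and the centroids recomputed until the assignments stabilise.

```python
def kmeans_1d(points, centroids, iterations=10):
    """Cluster 1-D points around the given initial centroids (simple k-means)."""
    for _ in range(iterations):
        # Assign each point to its nearest centroid.
        clusters = {c: [] for c in centroids}
        for p in points:
            nearest = min(centroids, key=lambda c: abs(p - c))
            clusters[nearest].append(p)
        # Recompute each centroid as the mean of its members.
        centroids = [sum(m) / len(m) for m in clusters.values() if m]
    return sorted(centroids)

points = [1.0, 1.2, 0.8, 10.0, 10.5, 9.5]
print(kmeans_1d(points, centroids=[0.0, 5.0]))  # [1.0, 10.0]
```

The two returned centroids summarise the two natural groups in the data, which is exactly the "few clusters instead of fine details" trade-off described above.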
3. Regression
Regression analysis is the data mining process used to identify and analyze the
relationship between variables in the presence of other factors. It is used to estimate the
value or probability of a specific variable. Regression is primarily a form of planning and
modeling. For example, we might use it to project certain costs, depending on other factors
such as availability, consumer demand, and competition. Primarily, it gives the exact
relationship between two or more variables in the given data set.
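The cost-projection example above can be sketched with simple linear regression using ordinary least squares. The demand and cost figures are invented for illustration.

```python
def fit_line(xs, ys):
    """Return slope and intercept of the least-squares line y = a*x + b."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return slope, intercept

demand = [10, 20, 30, 40]      # hypothetical consumer demand
cost = [25, 45, 65, 85]        # hypothetical observed cost
a, b = fit_line(demand, cost)
print(a, b)                    # 2.0 5.0

predicted = a * 50 + b         # project the cost at demand = 50
print(predicted)               # 105.0
```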
4. Association Rules
This data mining technique helps to discover a link between two or more items. It finds
hidden patterns in the data set.
Association rules are if-then statements that help to show the probability of
interactions between data items within large data sets in different types of databases.
Association rule mining has several applications and is commonly used to help discover
sales correlations in transactional data or in medical data sets.
The way the algorithm works is that you have various data, for example, a list of grocery
items that you have been buying for the last six months. It calculates the percentage of
items being purchased together.
o Lift:
This measure compares the confidence of the rule with how often item B is
purchased on its own.
(Confidence) / (Support of Item B)
o Support:
This measure tells how often items A and B are purchased together, compared
to the overall dataset.
(Transactions with Item A and Item B) / (Entire dataset)
o Confidence:
This measure tells how often item B is purchased when item A is purchased
as well.
(Transactions with Item A and Item B) / (Transactions with Item A)
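The three measures can be sketched for a single rule, bread -> milk, over a small list of transactions; the grocery baskets are invented for illustration.

```python
# Hypothetical grocery transactions (each is a set of items bought together).
transactions = [
    {"bread", "milk"},
    {"bread", "butter"},
    {"bread", "milk", "butter"},
    {"milk"},
]

def support(itemset):
    """Fraction of transactions containing every item in `itemset`."""
    return sum(itemset <= t for t in transactions) / len(transactions)

# Rule: bread -> milk
supp = support({"bread", "milk"})          # P(A and B)      = 0.5
conf = supp / support({"bread"})           # P(B | A)        = 0.666...
lift = conf / support({"milk"})            # conf vs. P(B)   = 0.888...
print(supp, conf, lift)
```

A lift below 1 (as here) means buying bread actually makes buying milk slightly less likely than its base rate; a lift above 1 would indicate a positive association.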
5. Outlier Detection
This type of data mining technique relates to the observation of data items in the data set
that do not match an expected pattern or expected behavior. This technique may be used
in various domains such as intrusion detection, fraud detection, etc. It is also known as
outlier analysis or outlier mining. An outlier is a data point that diverges too much from
the rest of the dataset, and the majority of real-world datasets contain outliers. Outlier
detection plays a significant role in the data mining field and is valuable in numerous
areas such as network intrusion identification, credit or debit card fraud detection, and
detecting outlying values in wireless sensor network data.
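One simple outlier-detection sketch uses z-scores: any value more than a chosen number of standard deviations from the mean is flagged. The sensor readings below are invented for illustration.

```python
def find_outliers(values, threshold=2.0):
    """Return the values lying more than `threshold` std deviations from the mean."""
    n = len(values)
    mean = sum(values) / n
    std = (sum((v - mean) ** 2 for v in values) / n) ** 0.5
    return [v for v in values if abs(v - mean) / std > threshold]

readings = [10, 12, 11, 10, 13, 11, 12, 95]  # hypothetical sensor readings
print(find_outliers(readings))  # [95]
```

Real systems often use more robust statistics (median and interquartile range) because, as here, the outlier itself inflates the mean and standard deviation.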
6. Sequential Patterns
Sequential pattern mining is a data mining technique specialized for evaluating sequential
data to discover sequential patterns. It comprises finding interesting subsequences in a
set of sequences, where the interestingness of a subsequence can be measured in terms of
different criteria such as length, occurrence frequency, etc.
In other words, this data mining technique helps to discover or recognize similar
patterns in transaction data over time.
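The support of a candidate sequential pattern can be sketched as follows: count how many customer purchase sequences contain the pattern in order (not necessarily consecutively). The purchase sequences are invented for illustration.

```python
def contains_subsequence(sequence, pattern):
    """True if `pattern` occurs in `sequence` in order (gaps allowed)."""
    it = iter(sequence)
    # `item in it` advances the iterator, so order is enforced.
    return all(item in it for item in pattern)

# Hypothetical customer purchase histories, in time order.
sequences = [
    ["laptop", "mouse", "keyboard"],
    ["laptop", "bag", "mouse"],
    ["phone", "charger"],
]

pattern = ["laptop", "mouse"]
pattern_support = sum(contains_subsequence(s, pattern) for s in sequences)
print(pattern_support)  # 2
```

Full algorithms such as GSP or PrefixSpan generate and count many candidate patterns like this one, keeping those whose support exceeds a threshold.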
7. Prediction
Prediction uses a combination of other data mining techniques such as trend analysis,
clustering, classification, etc. It analyzes past events or instances in the right sequence to
predict a future event.
Data mining is described as a process of finding hidden, precious patterns by evaluating the
huge quantity of information stored in data warehouses, using multiple techniques such as
Artificial Intelligence (AI), machine learning, and statistics.
1. Business understanding
This phase focuses on understanding the project goals and requirements from a business
point of view, then converting this information into a data mining problem definition and a
preliminary plan designed to accomplish the target.
Tasks:
o Determine business objectives
o Assess situation
o Determine data mining goals
o Produce a project plan
Determine business objectives
o It reveals the significant factors that, at the start, can impact the outcome of the project.
Assess situation
o It requires a more detailed analysis of facts about all the resources, constraints,
assumptions, and others that ought to be considered.
Determine data mining goals
o A business goal states the target in business terminology, for example, increase
catalog sales to existing customers.
o A data mining goal describes the project objective in technical terms, for example, predict
how many items a customer will buy, given their demographic details (age, salary, and city)
and the price of the item over the past three years.
Produce a project plan
o It states the intended plan to accomplish the business and data mining goals.
o The project plan should define the expected set of steps to be performed during the rest
of the project, including the initial selection of techniques and tools.
2. Data Understanding
Data understanding starts with initial data collection and proceeds with activities to get
familiar with the data, identify data quality issues, discover first insights into the data, and
detect interesting subsets to form hypotheses about hidden information.
Tasks:
o Collect initial data
o Describe data
o Explore data
o Verify data quality
Describe data
o It examines the "gross" or "surface" characteristics of the information obtained.
o It reports on the outcomes.
Explore data
o It addresses data mining questions that can be resolved by querying, visualizing,
and reporting, including:
o distribution of important attributes and results of simple aggregations;
o relationships between small numbers of attributes;
o characteristics of important sub-populations and simple statistical analyses.
o It may refine the data mining objectives.
o It may contribute to or refine the data description and quality reports.
o It may feed into the transformation and other necessary data preparation steps.
3. Data Preparation
o Data preparation usually takes more than 90 percent of the project time.
o It covers all activities needed to build the final data set from the initial raw data.
o Data preparation tasks are likely to be performed several times and not in any
prescribed order.
Tasks
o Select data
o Clean data
o Construct data
o Integrate data
o Format data
Select data
o It decides which data will be used for analysis, based on relevance to the data mining
goals, data quality, and technical constraints such as data volume or data types.
Clean data
o It may involve the selection of clean subsets of data, the insertion of suitable defaults,
or more ambitious techniques, such as estimating missing data by modeling.
Construct data
o It comprises constructive data preparation operations, such as generating derived
attributes, creating entirely new records, or transforming values of existing
attributes.
Integrate data
o Data integration refers to the methods whereby data is combined from multiple tables
or records to create new records or values.
Format data
o Formatting data refers primarily to syntactic modifications made to the data that do
not change its meaning but may be required by the modeling tool.
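A tiny example of the cleaning step above, "inserting suitable defaults": missing values (None) in a numeric column are replaced with the column mean. The customer records are invented for illustration.

```python
# Hypothetical customer records with a missing "age" value.
records = [
    {"customer": "A", "age": 34},
    {"customer": "B", "age": None},   # missing value
    {"customer": "C", "age": 28},
    {"customer": "D", "age": 40},
]

# Compute the default from the observed values only.
known = [r["age"] for r in records if r["age"] is not None]
default = sum(known) / len(known)     # mean of the observed ages

# Fill in the missing entries.
for r in records:
    if r["age"] is None:
        r["age"] = default

print([r["age"] for r in records])  # [34, 34.0, 28, 40]
```

More ambitious approaches (mentioned above) would estimate the missing value from a model fitted to the other attributes rather than using a single global default.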
4. Modeling
In this phase, various modeling techniques are selected and applied, and their parameters
are calibrated to optimal values. Some techniques have specific requirements on the form
of the data, so stepping back to the data preparation phase may be necessary.
Tasks
o Select modeling technique
o Generate test design
o Build model
o Assess model
Select modeling technique
o It selects the actual modeling technique that is to be used, for example, a decision
tree or a neural network.
o If multiple techniques are applied, this task is performed separately for each
technique.
Build model
o To create one or more models, we need to run the modeling tool on the prepared
data set.
Assess model
o It interprets the models according to domain expertise, the data mining success
criteria, and the desired test design.
o It assesses the success of the application of the modeling technique in more
technical terms.
o It contacts business analysts and domain specialists later to discuss the outcomes
of data mining in the business context.
5. Evaluation
o It evaluates the model thoroughly and reviews the steps executed to build it, to
ensure that the business objectives are properly achieved.
o A main objective of the evaluation is to determine whether there is some significant
business issue that has not been considered adequately.
o At the end of this phase, a decision on the use of the data mining results should be
reached.
Tasks
o Evaluate results
o Review process
Evaluate results
o It assesses the degree to which the model meets the organization's business
objectives.
o Where time and budget limitations permit, it tests the model on test applications in
the actual deployment environment and also assesses any other data mining results
produced.
Review process
o The review process performs a more thorough evaluation of the data mining
engagement to determine whether there is any significant factor or task that has
somehow been overlooked.
6. Deployment
Deployment determines how the outcomes of the data mining project need to be utilized
in the business.
Tasks
o Plan deployment
o Plan monitoring and maintenance
o Produce final report
o Review project
Plan deployment:
o To deploy the data mining results into the business, this task takes the evaluation
results and derives a strategy for deployment.
o It also documents the procedure for later deployment.
Review project
o The project review evaluates what went right and what went wrong, what was done
well, and what needs to be improved.
Data Mining Architecture
Introduction
Data mining is a significant method where previously unknown and potentially useful
information is extracted from the vast amount of data. The data mining process involves
several components, and these components constitute a data mining system architecture.
The significant components of data mining systems are a data source, data mining engine,
data warehouse server, the pattern evaluation module, graphical user interface, and
knowledge base.
Data Source
The actual sources of data are databases, data warehouses, the World Wide Web (WWW),
text files, and other documents. You need a huge amount of historical data for data mining
to be successful. Organizations typically store data in databases or data warehouses. Data
warehouses may comprise one or more databases, text files, spreadsheets, or other
repositories of data. Sometimes, even plain text files or spreadsheets may contain
information. Another primary source of data is the World Wide Web, or the internet.
Different Processes
Before passing the data to the database or data warehouse server, the data must be
cleaned, integrated, and selected. Because the information comes from various sources and
in different formats, it cannot be used directly for the data mining procedure: the data may
not be complete or accurate. So, the data first needs to be cleaned and unified. More
information than needed will be collected from the various data sources, and only the data
of interest has to be selected and passed to the server. These procedures are not as easy as
they sound; several methods may be performed on the data as part of selection,
integration, and cleaning.
In other words, we can say the data mining engine is the root of our data mining
architecture. It comprises the instruments and software used to obtain insights and
knowledge from the data collected from various sources and stored within the data
warehouse.
Knowledge Base
The knowledge base is helpful throughout the data mining process. It may be used to
guide the search or to evaluate the interestingness of the resulting patterns. The knowledge
base may even contain user views and data from user experiences that can be helpful in the
data mining process. The data mining engine may receive inputs from the knowledge base
to make the results more accurate and reliable. The pattern evaluation module regularly
interacts with the knowledge base to get inputs and also to update it.
The main objective of the KDD process is to extract information from data in the context of
large databases. It does this by using Data Mining algorithms to identify what is deemed
knowledge.
The availability and abundance of data today make knowledge discovery and data mining a
matter of impressive significance and need. Given the recent development of the field, it
isn't surprising that a wide variety of techniques is presently accessible to specialists and
experts.
The process begins with determining the KDD objectives and ends with the
implementation of the discovered knowledge. At that point, the loop is closed, and Active
Data Mining starts. Subsequently, changes would need to be made in the application
domain, for example, offering different features to cell phone users in order to reduce
churn. This closes the loop: the impacts are then measured on the new data repositories,
and the KDD process begins again. Following is a concise description of the nine-step KDD
process, beginning with a managerial step:
1. Building up an understanding of the application domain
This is the initial preliminary step. It sets the scene for understanding what should be
done with the various decisions such as transformation, algorithms, and representation.
The individuals in charge of a KDD venture need to understand and characterize the
objectives of the end-user and the environment in which the knowledge discovery process
will take place (including relevant prior knowledge).
4. Data Transformation
In this stage, appropriate data for data mining is prepared and developed. Techniques here
incorporate dimension reduction (for example, feature selection and extraction, and
record sampling) as well as attribute transformation (for example, discretization of
numerical attributes and functional transformations). This step can be essential for the
success of the entire KDD project, and it is typically very project-specific. For example, in
medical assessments, the quotient of attributes may often be the most significant factor,
and not each one by itself. In business, we may need to consider effects beyond our control
as well as efforts and transient issues, for example, studying the effect of accumulated
advertising. However, if we do not use the right transformation at the start, we may obtain
a surprising result that hints at the transformation required in the next iteration. Thus, the
KDD process feeds back on itself and prompts an understanding of the transformation
required.
At last, the implementation of the data mining algorithm is reached. In this stage, we may
need to apply the algorithm several times until a satisfying outcome is obtained, for
example, by tuning the algorithm's control parameters, such as the minimum number of
instances in a single leaf of a decision tree.
8. Evaluation
In this step, we assess and interpret the mined patterns and rules with respect to their
reliability against the objectives characterized in the first step. Here we consider the
preprocessing steps with respect to their impact on the data mining algorithm's results,
for example, adding a feature in step 4 and repeating from there. This step focuses on the
comprehensibility and utility of the induced model. In this step, the discovered knowledge
is also recorded for further use.
The last step is the use of, and overall feedback on, the discovery results acquired by data
mining. Now we are prepared to incorporate the knowledge into another system for
further action. The knowledge becomes effective in the sense that we may make changes
to the system and measure the impacts. The accomplishment of this step determines the
effectiveness of the whole KDD process. There are numerous challenges in this step, such
as losing the "laboratory conditions" under which we have worked. For example, the
knowledge was discovered from a certain static snapshot of the data (usually a sample),
but now the data becomes dynamic. Data structures may change (certain quantities may
become unavailable), and the data domain might be modified, for example, an attribute
may take a value that was not anticipated previously.
Data Mining vs Machine Learning
Data mining relates to extracting information from a large quantity of data. Data mining is a
technique for discovering different kinds of patterns that are inherent in the data set and
that are precise, new, and useful. Data mining works as a subset of business analytics and
is similar to experimental studies. Data mining's origins are databases and statistics.
Data mining and machine learning are areas that have influenced each other; although they
have many things in common, they have different ends.
Data mining is performed on certain data sets by humans to find interesting patterns
between the items in the data set. Data mining uses techniques created by machine
learning for predicting results, while machine learning is the capability of a computer
to learn from a provided data set.
Machine learning algorithms take information that represents the relationships between
items in data sets and create models in order to predict future results. These models are
nothing more than actions that will be taken by the machine to achieve a result.
Data mining is the method of extracting data or previously unknown data patterns from
huge sets of data. Hence, as the phrase suggests, we 'mine for specific data' from the large
data set. Data mining, also called the knowledge discovery process, is a field of science
used to determine the properties of data sets. Gregory Piatetsky-Shapiro coined the term
"Knowledge Discovery in Databases" (KDD) in 1989. The term "data mining" appeared
in the database community in the 1990s. Huge sets of data collected from data warehouses,
or complex data sets such as time series, spatial data, etc., are processed in order to extract
interesting correlations and patterns between the data items. The output of a data mining
algorithm is often used as input for machine learning algorithms.
What is Machine learning?
Machine learning is related to the development and design of a machine that can learn by
itself from a specified set of data to obtain a desirable result without being explicitly
coded. Hence, machine learning implies 'a machine which learns on its own'. Arthur
Samuel, an American pioneer in the fields of computer gaming and artificial intelligence,
coined the term machine learning in 1959. He said that it "gives computers the ability to
learn without being explicitly programmed."
Machine learning is a technique that creates complex algorithms for large data processing
and provides outcomes to its users. It utilizes complex programs that can learn through
experience and make predictions.
The algorithms improve themselves through the frequent input of training data. The aim of
machine learning is to understand data and to build models from it that can be understood
and used by humans.
Machine learning algorithms are broadly categorized as:
1. Unsupervised Learning
2. Supervised Learning
2. Data mining utilizes more data to obtain helpful information, and that specific data will
help to predict some future results. For example, a marketing company may use last year's
data to predict this year's sales. Machine learning, by contrast, does not depend as much on
stored data; it uses algorithms. For example, many transportation companies such as OLA
and UBER use machine learning techniques to calculate the ETA (Estimated Time of
Arrival) for rides.
3. Data mining is not capable of self-learning; it follows predefined guidelines and will
provide the answer to a specific problem. Machine learning algorithms, however, are self-
adapting: they can alter their rules according to the situation, find the solution to a
specific problem, and resolve it in their own way.
4. The main and most important difference between data mining and machine learning is
that data mining cannot work without the involvement of humans, whereas in machine
learning human effort is only involved when the algorithm is defined; after that, the
algorithm concludes everything on its own. Once implemented, a machine learning model
can be used indefinitely, which is not possible in the case of data mining.
6. Data mining utilizes the database, the data warehouse server, the data mining engine,
and pattern evaluation techniques to obtain useful information, whereas machine learning
utilizes neural networks, predictive models, and automated algorithms to make decisions.
Data Mining Vs Machine Learning
History
o Data Mining: In 1930, it was known as knowledge discovery in databases (KDD).
o Machine Learning: The first program, i.e., Samuel's checker-playing program, was
established in 1950.
Responsibility
o Data Mining: Data mining is used to obtain the rules from the existing data.
o Machine Learning: Machine learning teaches the computer how to learn and
comprehend the rules.
Abstraction
o Data Mining: Data mining abstracts from the data warehouse.
o Machine Learning: Machine learning reads from the machine.