Expert System Architecture
An expert system is a computer program designed to solve complex problems and to
provide decision-making ability like a human expert. It does this by extracting
knowledge from its knowledge base, applying reasoning and inference rules according to
the user's queries.
Characteristics of an expert system:
o High Performance: The expert system provides high performance for solving
any type of complex problem of a specific domain with high efficiency and
accuracy.
o Understandable: It responds in a way that can be easily understandable by the
user. It can take input in human language and provides the output in the same
way.
o Reliable: It is highly reliable, generating efficient and accurate output.
o Highly responsive: ES provides the result for any complex query within a very
short period of time.
Components of an Expert System
o User Interface
o Inference Engine
o Knowledge Base
1. User Interface
With the help of a user interface, the expert system interacts with the user, takes
queries as input in a readable format, and passes them to the inference engine. After
getting the response from the inference engine, it displays the output to the user. In
other words, it is an interface that helps a non-expert user communicate with
the expert system to find a solution.
2. Inference Engine
The inference engine applies inference rules to the facts in the knowledge base to
derive new conclusions. It mainly uses two reasoning strategies:
o Forward Chaining: It starts from the known facts and rules, and applies the inference
rules to add their conclusions to the known facts.
o Backward Chaining: It is a backward reasoning method that starts from the goal and
works backward through the rules to find the facts that support it.
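Forward chaining can be sketched in a few lines of Python. This is a minimal illustration, not a production inference engine; the rule format (a set of condition facts paired with one conclusion fact) and the example facts are assumptions made for the sketch.

```python
# Minimal forward-chaining sketch. Each rule pairs a set of condition
# facts with a conclusion fact; both are illustrative assumptions.
rules = [
    ({"has_fur", "gives_milk"}, "mammal"),
    ({"mammal", "eats_meat"}, "carnivore"),
]

def forward_chain(facts, rules):
    """Repeatedly fire rules whose conditions hold until no new facts appear."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)  # add the rule's conclusion to the known facts
                changed = True
    return facts

print(forward_chain({"has_fur", "gives_milk", "eats_meat"}, rules))
```

Starting from the three given facts, the first rule adds "mammal", which in turn lets the second rule add "carnivore".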
3. Knowledge Base
o The knowledge base is a type of storage that stores knowledge acquired from the
different experts of the particular domain. It is considered a large store of knowledge.
The larger and more complete the knowledge base, the more precise the expert system will be.
o It is similar to a database that contains information and rules of a particular domain or
subject.
o One can also view the knowledge base as a collection of objects and their attributes.
For example, a lion is an object whose attributes include that it is a mammal and that
it is not a domestic animal.
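The object/attribute view above can be sketched as a simple mapping; the object names and attribute keys here are illustrative assumptions, not a fixed schema.

```python
# Sketch of the object/attribute view of a knowledge base.
# Objects map to dictionaries of their attributes (names are illustrative).
knowledge_base = {
    "lion": {"is_mammal": True, "is_domestic": False},
    "cow":  {"is_mammal": True, "is_domestic": True},
}

# Querying an attribute of an object:
print(knowledge_base["lion"]["is_mammal"])   # the lion is a mammal
print(knowledge_base["lion"]["is_domestic"]) # but not a domestic animal
```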
A rule-based system is made up of the following elements:
A set of facts
These facts are assertions, or anything relevant to the beginning state of
the system.
Set of Rules
This set contains all the actions that should be performed within the scope of a
problem and defines how to act on the assertion set. In the set of rules, each rule
is represented in an IF-THEN form.
The database
In a rule-based approach the database has a set of facts that are used to
compare against the IF (condition) part of the rules that are held in the
knowledge base.
• Match - The production rules are matched against the contents of the
working memory to obtain a conflict set: all instances of productions
whose conditions are satisfied.
• Conflict-Resolution - After matching, one of the production instances
in the conflict set is selected for execution, which determines how the
process progresses.
• Act - The production instance selected in the previous stage is executed,
changing the contents of the working memory.
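The match / conflict-resolution / act cycle can be sketched as follows. The rules, the initial working memory, and the conflict-resolution strategy (simply picking the first matching rule) are assumptions chosen to keep the example small.

```python
# Sketch of the match / conflict-resolution / act cycle.
# Each rule is (name, condition facts, fact to add); all values are illustrative.
rules = [
    ("R1", {"a"}, "b"),
    ("R2", {"a", "b"}, "c"),
]

working_memory = {"a"}
fired = set()  # rule names that have already been executed

while True:
    # Match: collect all rules whose conditions hold and that have not yet fired.
    conflict_set = [r for r in rules
                    if r[1] <= working_memory and r[0] not in fired]
    if not conflict_set:
        break
    # Conflict resolution: select one instance (here, simply the first).
    name, _conds, action = conflict_set[0]
    # Act: execute the selected instance, updating the working memory.
    working_memory.add(action)
    fired.add(name)

print(working_memory)
```

R1 fires first and adds "b" to working memory, which then lets R2 fire and add "c"; the cycle stops once the conflict set is empty.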
Explanation facilities
The explanation facilities make it possible for the user to ask the expert system
how a specific conclusion was reached and why a specific fact is required. The
expert system needs to be able to explain its reasoning and justify its advice,
analysis, or conclusion.
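One common way to support "how" questions is to record, for each derived fact, the rule that produced it and the facts it relied on. The sketch below assumes illustrative rule names and facts; it is not a full explanation subsystem.

```python
# Sketch of a "how" explanation facility: record which rule derived each fact.
# Rule names and facts are illustrative assumptions.
rules = {"R1": ({"fever", "cough"}, "flu_suspected")}

facts = {"fever", "cough"}
derivation = {}  # derived fact -> (rule name, supporting facts)

for name, (conds, concl) in rules.items():
    if conds <= facts:
        facts.add(concl)
        derivation[concl] = (name, conds)

def explain_how(fact):
    """Answer 'how was this conclusion reached?' from the recorded derivations."""
    if fact not in derivation:
        return f"{fact} was given as an initial fact."
    name, conds = derivation[fact]
    return f"{fact} was concluded by rule {name} from {sorted(conds)}."

print(explain_how("flu_suspected"))
print(explain_how("fever"))
```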
User interface
In a rule based approach the user interface is the means through which the user
seeking a solution to a problem communicates with the expert system. The
communication should be as meaningful and friendly as possible and the user
interface should be as intuitive as possible.
These five elements are critical for any rule-based system. They are the core
components of the rule-based system. But the system might have some
additional components as well. A couple of these components could be the
external interface and the working memory.
External interface
The external interface enables an expert system to work with external data files
and programs that are written in conventional programming languages like C,
Pascal, FORTRAN and Basic.
Working memory
The working memory stores temporary information and data.
ADVANTAGES
1. A rule-based system is generally cost-efficient and accurate in terms of its
results.
2. The outputs generated by the system are dependent on rules so the
output responses are stable and not random.
3. Although coverage of different circumstances is limited, whatever scenarios
the rule-based system does cover are handled with high accuracy. The error
rate stays low because of the predefined rules.
4. It's feasible to reduce the amount of risk in terms of system accuracy.
5. Optimizing the speed of the system is easier because you know all of its
parts, so providing instant output is not a big issue.
DISADVANTAGES
1. A rule-based system is built upon a lot of data, deep knowledge of the
domain, and a lot of manual work.
2. Writing and generating rules for a complex system is quite challenging
and time-consuming.
3. A rule-based system has little self-learning capacity, since it only
generates results according to its fixed rules.
4. Complex pattern identification is a challenging task in the Rule Based
method as it takes a lot of time and analysis.
NON-PRODUCTION SYSTEMS IN AI
2. **Version Control**: Version control systems like Git are essential for managing
changes to source code, models, and data. They support collaboration, change tracking, and
reverting to previous versions when needed.
3. **Data Storage**: Non-production systems often require large-scale data storage for
training and testing datasets. This may involve databases like PostgreSQL, MongoDB, or
cloud-based storage solutions like Amazon S3 or Google Cloud Storage.
7. **Model Evaluation**: After training, models are evaluated using validation datasets
to assess their performance. Metrics such as accuracy, precision, recall, or F1 score are
calculated to measure model effectiveness.
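The classification metrics named above (accuracy, precision, recall, F1) can be computed by hand on toy labels. The labels below are made up for illustration; in practice a library such as scikit-learn provides these metrics.

```python
# Hand-rolled classification metrics on toy labels (values are illustrative).
y_true = [1, 0, 1, 1, 0, 1]
y_pred = [1, 0, 0, 1, 0, 1]

# Count true positives, false positives, and false negatives.
tp = sum(t == p == 1 for t, p in zip(y_true, y_pred))
fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))

accuracy  = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
precision = tp / (tp + fp)                    # of predicted positives, how many were right
recall    = tp / (tp + fn)                    # of actual positives, how many were found
f1 = 2 * precision * recall / (precision + recall)  # harmonic mean of the two

print(accuracy, precision, recall, f1)
```

Here the model misses one positive instance, so precision is perfect but recall drops to 0.75.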
11. **Security and Compliance**: Non-production systems must adhere to security best
practices and regulatory requirements, especially when dealing with sensitive data. Measures
such as encryption, access controls, and compliance audits are implemented to protect data
privacy and integrity.
12. **Scaling and Performance Optimization**: As models and datasets grow in size,
non-production systems must be designed to scale efficiently. Techniques such as distributed
computing, parallel processing, and model optimization are employed to improve
performance and reduce training time.
KNOWLEDGE ACQUISITION AND VALIDATION
Knowledge acquisition and validation are crucial steps in AI development to ensure that the
models learn from reliable data and produce accurate results. Here's a breakdown of
knowledge acquisition and validation in AI:
1. **Data Collection**: The first step in knowledge acquisition is gathering relevant data.
This may involve collecting data from various sources such as databases, APIs, sensors, or
web scraping. The quality, quantity, and diversity of data play a significant role in the
performance of AI models.
2. **Data Preprocessing**: Raw data often contains noise, missing values, outliers, and
inconsistencies. Data preprocessing techniques like cleaning, normalization, feature
engineering, and dimensionality reduction are applied to prepare the data for training.
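One of the normalization steps mentioned above can be sketched as min-max scaling; the input values are illustrative, and this simple version assumes the values are not all identical.

```python
# Minimal sketch of min-max normalization, a common preprocessing step.
def min_max_normalize(values):
    """Scale values linearly into [0, 1] (assumes max > min)."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

print(min_max_normalize([10, 20, 30, 40]))  # smallest maps to 0.0, largest to 1.0
```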
3. **Labeling and Annotation**: Supervised learning algorithms require labeled data for
training. Labeling involves assigning predefined categories or classes to data instances.
Annotation tools and crowdsourcing platforms are used to label large datasets efficiently.
5. **Training Data Splitting**: The collected data is divided into training, validation, and
test sets. The training set is used to train the model, the validation set is used to tune
hyperparameters and evaluate performance during training, and the test set is used to assess
the final model's generalization ability.
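The three-way split described above can be sketched in plain Python. The 70/15/15 ratios and the fixed seed are assumptions for the sketch; real pipelines often use library helpers (e.g. scikit-learn's `train_test_split`) and stratification.

```python
import random

# Sketch of a train/validation/test split (ratios and seed are assumptions).
def split_dataset(data, train=0.7, val=0.15, seed=0):
    data = data[:]                      # copy so the caller's list is untouched
    random.Random(seed).shuffle(data)   # shuffle reproducibly with a fixed seed
    n_train = int(len(data) * train)
    n_val = int(len(data) * val)
    return (data[:n_train],
            data[n_train:n_train + n_val],
            data[n_train + n_val:])

train_set, val_set, test_set = split_dataset(list(range(100)))
print(len(train_set), len(val_set), len(test_set))  # 70 15 15
```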
6. **Model Training**: AI models are trained using algorithms that learn patterns and
relationships from the training data. The choice of algorithm depends on the nature of the
problem (e.g., classification, regression, clustering) and the characteristics of the data.
7. **Validation Metrics**: Various metrics are used to evaluate the performance of AI
models. For classification tasks, metrics like accuracy, precision, recall, F1 score, ROC
curve, and AUC are commonly used. For regression tasks, metrics like mean squared error
(MSE) and R-squared are used.
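The two regression metrics just named, MSE and R-squared, can be computed directly from their definitions; the true and predicted values below are made up for illustration.

```python
# Sketch of mean squared error and R-squared for a regression task.
def mse(y_true, y_pred):
    """Average of the squared prediction errors."""
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

def r_squared(y_true, y_pred):
    """1 minus the ratio of residual to total sum of squares."""
    mean_t = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean_t) ** 2 for t in y_true)
    return 1 - ss_res / ss_tot

y_true = [3.0, 5.0, 7.0, 9.0]  # illustrative targets
y_pred = [2.5, 5.0, 7.5, 9.0]  # illustrative predictions

print(mse(y_true, y_pred), r_squared(y_true, y_pred))
```

An R-squared close to 1 means the predictions explain almost all of the variance in the targets.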
10. **Model Evaluation and Validation**: After training and tuning, the model is
evaluated on the test set to assess its generalization performance. If the model performs well
on unseen data, it indicates that it has successfully acquired knowledge from the training data
and can make accurate predictions in real-world scenarios.
11. **Bias and Fairness Evaluation**: It's essential to evaluate AI models for bias and
fairness to ensure that they make predictions without discrimination against certain
demographic groups or protected attributes. Techniques like demographic parity, equal
opportunity, and disparate impact analysis are used for bias detection and mitigation.
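The demographic parity check mentioned above compares positive-prediction rates across groups. The sketch below uses made-up group predictions; real evaluations would use actual model outputs and group labels, often via a fairness library.

```python
# Sketch of the demographic parity difference: the gap in positive-prediction
# rates between two groups (predictions and groups are illustrative).
def positive_rate(preds):
    """Fraction of instances predicted positive (1)."""
    return sum(preds) / len(preds)

group_a_preds = [1, 1, 0, 1]  # model predictions for group A
group_b_preds = [1, 0, 0, 0]  # model predictions for group B

parity_gap = abs(positive_rate(group_a_preds) - positive_rate(group_b_preds))
print(parity_gap)  # a gap closer to 0 indicates better demographic parity
```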