
EXPERT SYSTEM ARCHITECTURE

An expert system is a computer program that is designed to solve complex problems and to provide decision-making ability like a human expert. It does this by extracting knowledge from its knowledge base and applying reasoning and inference rules to answer user queries.

Characteristics of Expert System

o High Performance: The expert system solves complex problems of a specific domain with high efficiency and accuracy.
o Understandable: It responds in a way that is easily understood by the user. It can take input in human language and provides the output in the same way.
o Reliable: It is highly reliable and generates efficient, accurate output.
o Highly responsive: An ES provides the result for any complex query within a very short period of time.

Components of Expert System


An expert system mainly consists of three components:

o User Interface
o Inference Engine
o Knowledge Base
1. User Interface

With the help of a user interface, the expert system interacts with the user, takes queries as input in a readable format, and passes them to the inference engine. After getting the response from the inference engine, it displays the output to the user. In other words, it is the interface that helps a non-expert user communicate with the expert system to find a solution.

2. Inference Engine (Rules of Engine)


o The inference engine is known as the brain of the expert system, as it is the main processing unit of the system. It applies inference rules to the knowledge base to derive a conclusion or deduce new information. It helps in deriving an error-free solution to the queries asked by the user.
o With the help of the inference engine, the system extracts knowledge from the knowledge base.
o There are two types of inference engines:
o Deterministic inference engine: The conclusions drawn by this type of inference engine are assumed to be true. It is based on facts and rules.
o Probabilistic inference engine: This type of inference engine allows uncertainty in its conclusions and is based on probability.
The inference engine uses the following modes to derive solutions (a small sketch of forward chaining follows this list):

o Forward Chaining: It starts from the known facts and rules, and applies the inference rules to add their conclusions to the known facts.
o Backward Chaining: It is a backward reasoning method that starts from the goal and works backward through the rules to find the facts that support it.
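As a rough illustration of forward chaining only, the sketch below repeatedly applies hypothetical IF-THEN rules to a set of known facts until no new conclusion can be added; the facts and rules are invented for this example.

```python
# Minimal forward-chaining sketch: rules are hypothetical (conditions -> conclusion) pairs.
facts = {"has_fever", "has_rash"}

# Each rule fires when all of its conditions are already known facts.
rules = [
    ({"has_fever", "has_rash"}, "suspect_measles"),
    ({"suspect_measles"}, "recommend_doctor_visit"),
]

changed = True
while changed:                      # keep applying rules until no new fact is added
    changed = False
    for conditions, conclusion in rules:
        if conditions <= facts and conclusion not in facts:
            facts.add(conclusion)   # add the rule's conclusion to the known facts
            changed = True

print(facts)  # now also contains 'suspect_measles' and 'recommend_doctor_visit'
```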

3. Knowledge Base
o The knowledge base is a type of storage that stores knowledge acquired from different experts of a particular domain. The larger and more accurate the knowledge base, the more precise the expert system will be.
o It is similar to a database that contains information and rules of a particular domain or subject.
o One can also view the knowledge base as a collection of objects and their attributes. For example, a lion is an object and its attributes are that it is a mammal, it is not a domestic animal, and so on, as in the small sketch below.
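For illustration only, the lion example above could be represented in Python as a dictionary of objects and their attributes (the attribute names here are made up):

```python
# Illustrative only: a tiny knowledge base viewed as objects and their attributes.
knowledge_base = {
    "lion": {"is_mammal": True, "is_domestic": False, "diet": "carnivore"},
    "cow":  {"is_mammal": True, "is_domestic": True,  "diet": "herbivore"},
}

print(knowledge_base["lion"]["is_domestic"])  # False
```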

Advantages of Expert System


o These systems are highly reproducible.
o They can be used in risky places where human presence is not safe.
o Error possibilities are low if the KB contains correct knowledge.
o The performance of these systems remains steady, as it is not affected by emotions, tension, or fatigue.
o They respond to any particular query at very high speed.

Limitations of Expert System


o The response of the expert system may be wrong if the knowledge base contains wrong information.
o Like a human being, it cannot produce creative output for new scenarios.
o Its maintenance and development costs are very high.
o Knowledge acquisition for designing the system is difficult.
o For each domain, a specific ES is required, which is one of its big limitations.
o It cannot learn by itself and hence requires manual updates.
RULE-BASED SYSTEM ARCHITECTURE

What is a rule-based system in AI?


A rule-based system in AI is a system that applies human-made rules to store, sort and manipulate data. In doing so, it mimics human intelligence.
Rule-based systems in AI require a set of facts or a source of data, and a set of rules for manipulating that data. These rules are sometimes referred to as ‘if statements’, as they tend to follow the form ‘IF X happens THEN do Y’.
The steps can be simplified to:

• First comes the data or new business event


• Then comes the analysis: the part where the system conditionally
processes the data against its rules
• Then comes any subsequent automated follow-up actions

Some of the important elements of a rule-based system in AI include:

A set of facts
These facts are assertions or anything that is relevant to the beginning state of
the system.

Set of Rules
This set contains all the actions that should be performed within the scope of a problem and defines how to act on the assertion set. Rules in this set are represented in an IF-THEN form, as sketched below.
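As a minimal sketch, a set of facts and IF-THEN rules might be encoded like this; the condition and action names are invented for the example, and a real system would attach richer condition logic.

```python
from dataclasses import dataclass

# Hypothetical encoding of an 'IF X happens THEN do Y' rule as data.
@dataclass
class Rule:
    condition: str   # the IF part: a fact that must be present
    action: str      # the THEN part: the response to trigger

facts = {"payment_overdue"}                        # the assertion set (starting facts)
rules = [Rule("payment_overdue", "send_reminder_email"),
         Rule("payment_received", "close_ticket")]

actions = [r.action for r in rules if r.condition in facts]
print(actions)  # ['send_reminder_email']
```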

Termination Criteria or Interpreter


This determines whether a solution exists or not and figures out when the
process should be terminated.
What are the characteristics of rule-based systems?
Some of the features of rule-based systems are:

• They are made up of the combined knowledge of human experts in the problem domain.
• They represent knowledge in a very declarative manner.
• They make it possible to use various knowledge representations
paradigms.
• They support the implementation of non-deterministic search and
control strategies.
• They help in describing fragmentary, ill-structured, heuristic,
judgemental knowledge.
• They are robust and have the ability to operate using uncertain or
incomplete knowledge.
• They can help with rule based decision making.

What are the main components of a rules-based system?
A typical rule-based system has seven basic components:
The knowledge base
It holds the domain knowledge that is necessary for problem solving. In a rules-
based system, the knowledge gets represented as a set of rules. Every rule
specifies a relation, recommendation, directive, strategy or heuristic and has
the IF (condition) THEN (action) structure. As soon as the condition part of the
rule is satisfied, the rule gets triggered and the action part gets executed.

The database
In a rule-based approach, the database contains a set of facts that are matched against the IF (condition) part of the rules held in the knowledge base.

The inference engine


The inference engine is used to perform the reasoning through which the expert
system comes to a solution. The job of the inference engine is to link the rules
that are defined in the knowledge base with the facts that are stored in the
database. The inference engine is also known as the semantic reasoner. It infers
information or performs required actions on the basis of input and the rule base
that's present in the knowledge base. The semantic reasoner involves a match-
resolve-act cycle that works like this:

• Match - A section of the production rule system gets matched against the contents of the working memory to obtain a conflict set, containing all the instances of productions whose conditions are satisfied.
• Conflict-Resolution - After matching, one production instance from the conflict set is selected for execution, which determines how the process progresses.
• Act - The production instance selected in the previous stage is executed, changing the contents of the working memory (a rough sketch of this cycle follows the list).
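The following is a highly simplified sketch rather than a real production-rule engine: the working memory, the productions, and the conflict-resolution strategy (prefer the most specific rule) are all assumptions made for this example.

```python
# Hypothetical match-resolve-act loop over a working memory of facts.
working_memory = {"temperature_high", "pressure_normal"}

# Production rules: (name, condition set, fact added when the rule fires).
productions = [
    ("cool_down",  {"temperature_high"},                    "fan_on"),
    ("all_normal", {"temperature_high", "pressure_normal"}, "log_status"),
]

while True:
    # Match: collect the conflict set of all productions whose conditions hold.
    conflict_set = [p for p in productions
                    if p[1] <= working_memory and p[2] not in working_memory]
    if not conflict_set:
        break
    # Conflict resolution: here, simply prefer the most specific rule (largest condition set).
    name, conditions, new_fact = max(conflict_set, key=lambda p: len(p[1]))
    # Act: execute the selected production, changing the working memory.
    working_memory.add(new_fact)
    print("fired", name)

print(working_memory)
```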

Explanation facilities
The explanation facilities make it possible for the user to ask the expert system
how a specific conclusion was reached and why a specific fact is required. The
expert system needs to be able to explain its reasoning and justify its advice,
analysis, or conclusion.
User interface
In a rule-based approach, the user interface is the means through which the user seeking a solution to a problem communicates with the expert system. The communication should be as meaningful and friendly as possible, and the user interface should be as intuitive as possible.
These five elements are the core components critical for any rule-based system, but the system might have some additional components as well, such as the external interface and the working memory.

External interface
The external interface enables an expert system to work with external data files
and programs that are written in conventional programming languages like C,
Pascal, FORTRAN and Basic.

Working memory
The working memory stores temporary information and data.

ADVANTAGES
1. A rule-based system is generally cost-efficient and accurate in terms of its
results.
2. The outputs generated by the system depend on its rules, so the responses are stable and not random.
3. Although the coverage of different circumstances is limited, the scenarios that are covered by the rule-based system are handled with high accuracy. The error rate goes down because of the predefined rules.
4. It's feasible to reduce the amount of risk in terms of system accuracy.
5. Optimizing the speed of the system is easier because you know all the parts, so providing near-instant outputs is not a big issue.
DISADVANTAGES
1. A rule-based system is built upon a lot of data, deep knowledge of the
domain, and a lot of manual work.
2. Writing and generating rules for a complex system is quite challenging
and time-consuming.
3. A rule-based system has little self-learning capacity, as it only generates results according to its rules.
4. Complex pattern identification is a challenging task in the rule-based method, as it takes a lot of time and analysis.

NON-PRODUCTION SYSTEM ARCHITECTURE


Non-production system architecture in AI typically refers to the setup used for development,
testing, experimentation, and research purposes rather than deployment in a live environment
serving real users or customers. Here's a simplified overview of a typical non-production
system architecture in AI:

1. **Development Environment**: Developers work in an environment where they write, debug, and test code. This environment often includes IDEs (Integrated Development Environments) such as PyCharm, Jupyter Notebooks, or VSCode.

2. **Version Control**: Version control systems like Git are essential for managing
changes to source code, models, and data. This ensures collaboration, tracking changes, and
reverting to previous versions if needed.

3. **Data Storage**: Non-production systems often require large-scale data storage for
training and testing datasets. This may involve databases like PostgreSQL, MongoDB, or
cloud-based storage solutions like Amazon S3 or Google Cloud Storage.

4. **Model Development**: AI models are developed using frameworks like TensorFlow, PyTorch, or scikit-learn. Developers experiment with different algorithms, architectures, and hyperparameters to improve model performance.
5. **Model Training**: Training AI models requires significant computational resources,
often provided by GPUs or TPUs (Tensor Processing Units). Cloud platforms like AWS,
Google Cloud Platform, or Microsoft Azure offer scalable infrastructure for model training.

6. **Experiment Tracking**: Experiment tracking tools like MLflow or TensorBoard are used to log parameters, metrics, and artifacts during model training. This helps in comparing different experiments and reproducing results.
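As a hedged example of what such logging can look like with MLflow's Python API, one might write something like the sketch below; the run name, parameter values, and metric values are invented for illustration.

```python
import mlflow

# Illustrative run: the parameters and metric values are made up for this example.
with mlflow.start_run(run_name="baseline"):
    mlflow.log_param("learning_rate", 0.01)
    mlflow.log_param("batch_size", 32)
    for epoch in range(3):
        # In a real project this value would come from the training loop.
        mlflow.log_metric("val_accuracy", 0.80 + 0.02 * epoch, step=epoch)
```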

7. **Model Evaluation**: After training, models are evaluated using validation datasets
to assess their performance. Metrics such as accuracy, precision, recall, or F1 score are
calculated to measure model effectiveness.

8. **Deployment Testing**: Before deploying models to production, they undergo testing in a staging environment. This ensures that models behave as expected and perform well under realistic conditions.

9. **Documentation and Reporting**: Comprehensive documentation is crucial for non-production systems, including code comments, README files, and technical reports detailing model architecture, training procedures, and evaluation results.

10. **Continuous Integration/Continuous Deployment (CI/CD)**: CI/CD pipelines automate the process of building, testing, and deploying models. This ensures consistency, reliability, and efficiency in the development workflow.

11. **Security and Compliance**: Non-production systems must adhere to security best
practices and regulatory requirements, especially when dealing with sensitive data. Measures
such as encryption, access controls, and compliance audits are implemented to protect data
privacy and integrity.

12. **Scaling and Performance Optimization**: As models and datasets grow in size,
non-production systems must be designed to scale efficiently. Techniques such as distributed
computing, parallel processing, and model optimization are employed to improve
performance and reduce training time.
KNOWLEDGE ACQUISITION AND VALIDATION

Knowledge acquisition and validation are crucial steps in AI development to ensure that the
models learn from reliable data and produce accurate results. Here's a breakdown of
knowledge acquisition and validation in AI:

1. **Data Collection**: The first step in knowledge acquisition is gathering relevant data.
This may involve collecting data from various sources such as databases, APIs, sensors, or
web scraping. The quality, quantity, and diversity of data play a significant role in the
performance of AI models.

2. **Data Preprocessing**: Raw data often contains noise, missing values, outliers, and
inconsistencies. Data preprocessing techniques like cleaning, normalization, feature
engineering, and dimensionality reduction are applied to prepare the data for training.
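A small preprocessing sketch using pandas and scikit-learn is shown below; the file name and column names are hypothetical and only illustrate typical cleaning and normalization steps.

```python
import pandas as pd
from sklearn.preprocessing import StandardScaler

# 'sensor_readings.csv' and the column names are hypothetical.
df = pd.read_csv("sensor_readings.csv")

df = df.drop_duplicates()                                                  # remove exact duplicate rows
df["temperature"] = df["temperature"].fillna(df["temperature"].median())  # impute missing values

# Normalize numeric features to zero mean and unit variance.
scaler = StandardScaler()
df[["temperature", "humidity"]] = scaler.fit_transform(df[["temperature", "humidity"]])
```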

3. **Labeling and Annotation**: Supervised learning algorithms require labeled data for
training. Labeling involves assigning predefined categories or classes to data instances.
Annotation tools and crowdsourcing platforms are used to label large datasets efficiently.

4. **Feature Selection and Engineering**: Feature selection involves identifying the most relevant features or attributes that contribute to the predictive power of the model. Feature engineering involves creating new features or transforming existing ones to improve model performance.
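One possible way to do simple feature selection with scikit-learn, using the built-in Iris dataset purely for illustration:

```python
from sklearn.datasets import load_iris
from sklearn.feature_selection import SelectKBest, f_classif

X, y = load_iris(return_X_y=True)

# Keep the two features with the strongest ANOVA F-score against the target.
selector = SelectKBest(score_func=f_classif, k=2)
X_selected = selector.fit_transform(X, y)
print(X_selected.shape)  # (150, 2)
```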

5. **Training Data Splitting**: The collected data is divided into training, validation, and
test sets. The training set is used to train the model, the validation set is used to tune
hyperparameters and evaluate performance during training, and the test set is used to assess
the final model's generalization ability.
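A common way to obtain such a split with scikit-learn is sketched below; the 60/20/20 proportions and the Iris dataset are just one reasonable choice for illustration.

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)

# First carve out a held-out test set, then split the rest into train/validation.
X_temp, X_test, y_temp, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
X_train, X_val, y_train, y_val = train_test_split(X_temp, y_temp, test_size=0.25, random_state=42)

print(len(X_train), len(X_val), len(X_test))  # roughly 60% / 20% / 20%
```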

6. **Model Training**: AI models are trained using algorithms that learn patterns and
relationships from the training data. The choice of algorithm depends on the nature of the
problem (e.g., classification, regression, clustering) and the characteristics of the data.
7. **Validation Metrics**: Various metrics are used to evaluate the performance of AI
models. For classification tasks, metrics like accuracy, precision, recall, F1 score, ROC
curve, and AUC are commonly used. For regression tasks, metrics like mean squared error
(MSE) and R-squared are used.
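For instance, the classification metrics above can be computed with scikit-learn on a small set of hypothetical labels and predictions:

```python
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

# Hypothetical ground-truth labels and model predictions for a binary task.
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]

print("accuracy :", accuracy_score(y_true, y_pred))
print("precision:", precision_score(y_true, y_pred))
print("recall   :", recall_score(y_true, y_pred))
print("f1       :", f1_score(y_true, y_pred))
```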

8. **Cross-Validation**: Cross-validation techniques like k-fold cross-validation are used to assess model performance more reliably, especially when the size of the dataset is limited. It involves splitting the data into multiple subsets and training/testing the model on different combinations of these subsets.
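A brief k-fold example with scikit-learn, again using the Iris dataset and a logistic regression model chosen only for illustration:

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = load_iris(return_X_y=True)

# 5-fold cross-validation: train on 4 folds, validate on the remaining fold, repeat.
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5)
print(scores.mean(), scores.std())
```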

9. **Hyperparameter Tuning**: Hyperparameters are parameters that are not learned during training but affect the learning process. Techniques like grid search, random search, and Bayesian optimization are used to find the optimal hyperparameters that maximize model performance.
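A minimal grid-search sketch with scikit-learn; the SVM model and the parameter grid are arbitrary choices made for this example.

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)

# Exhaustively try every combination in the grid, scoring each with 5-fold CV.
param_grid = {"C": [0.1, 1, 10], "kernel": ["linear", "rbf"]}
search = GridSearchCV(SVC(), param_grid, cv=5)
search.fit(X, y)

print(search.best_params_, search.best_score_)
```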

10. **Model Evaluation and Validation**: After training and tuning, the model is
evaluated on the test set to assess its generalization performance. If the model performs well
on unseen data, it indicates that it has successfully acquired knowledge from the training data
and can make accurate predictions in real-world scenarios.

11. **Bias and Fairness Evaluation**: It's essential to evaluate AI models for bias and
fairness to ensure that they make predictions without discrimination against certain
demographic groups or protected attributes. Techniques like demographic parity, equal
opportunity, and disparate impact analysis are used for bias detection and mitigation.

12. **Continuous Monitoring and Updating**: AI models should be continuously monitored in production to detect performance degradation, concept drift, or data drift. Periodic retraining and updating of models are necessary to maintain their accuracy and relevance over time.
