Abstract—An exam is an evaluation tool to measure the teaching
and learning outcomes of educators and their learners
respectively. Nowadays, an automated exam question set
generator is essential to reduce the educator's time spent on
preparing exam question sets and to increase the quality of
those sets. This paper proposes an Automated Exam
Question Set Generator (AEQSG) using a Utility Based Agent
(UBA) and a Learning Agent (LA). Furthermore, AEQSG applies
Bloom's Taxonomy (BT) scaling to automate the distribution of
BT difficulty levels, and a Genetic Algorithm (GA) to
optimize the generation of exam question sets and produce
high-quality sets that follow the educational institution's
guidelines. The purpose of the utility based agent in AEQSG is to
give the user an option to choose actions based on a
preference (utility) for each generation state. Meanwhile, the
purpose of the learning agent in AEQSG is to learn from
past exam results (past generation experiences).
Index Terms—Automated exam question generator, bloom’s
taxonomy scaling, genetic algorithm, learning agent, utility
based agent.
I. INTRODUCTION
Exams are implemented in educational institutions to
monitor the progress of educators' teaching outcomes and
learners' learning outcomes. The manual preparation of an
exam question set demands great commitment from educators
and is time-consuming. To reduce this burden, we have
proposed an Automated Exam Question Set Generator
(AEQSG) using a Utility Based Agent (UBA), a Learning Agent
(LA), Bloom's Taxonomy (BT) Scaling and a Genetic
Algorithm (GA).
UBA chooses actions based on a preference (utility) for
each state that maximizes the expected utility. Meanwhile,
LA starts to act with basic knowledge (the question mark) and
is then able to act and adapt automatically (using the average
past question result mark) through learning. In our proposed
system, UBA and LA apply BT scaling to achieve
a desirable result. BT scaling automates the distribution of
Bloom's Taxonomy difficulty levels and automates the learning
experience using Bell Curve Analysis based on the average
past exam question result mark, to obtain an optimal exam
question set. GA optimizes the generation of the exam question
set based on the chosen preference (utility) in AEQSG.

Manuscript received September 25, 2019; revised December 22, 2019.
Tengku Nurulhuda Tengku Abd Rahim, Ma. Stella Tabora Domingo, and
Mohamed Farid Noor Batcha are with MIMOS Berhad, Technology Park
Malaysia, Kuala Lumpur 57000, Malaysia (e-mail: huda.rahim@mimos.my,
stella.domingo@mimos.my, farid.batcha@mimos.my).
Zalilah Abd Aziz is with the Faculty of Computer and Mathematical
Sciences, Universiti Teknologi MARA, 40450 Shah Alam,
Selangor, Malaysia (e-mail: zalilah@tmsk.uitm.edu.my).
Bloom's Taxonomy (BT) and GA will not be detailed here,
since this paper extends the Automated Exam Question
Generator Using Genetic Algorithm paper [1].
II. INTELLIGENT AGENT
A. Utility Based Agent (UBA)
A utility based agent is similar to a goal based agent
in that it acts based on goals, but it also finds the optimum way to
achieve them. It is beneficial when there are multiple
possible alternatives and the agent has to choose the best
action before performing it. The utility function maps each
state to a real number to evaluate the efficiency of each action
in achieving the goals [2].
UBA, as in Fig. 1, uses a world model together with a utility
function that measures its preferences among states and then
chooses the action that leads to the best expected utility.
Expected utility is computed by averaging over all possible
outcome states, weighted by the probability of each outcome [3].
Fig. 1. A model-based, utility-based agent [3].
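The expected-utility computation described above can be sketched as follows; the actions, outcome probabilities and utility values are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch of utility-based action selection: the expected utility
# of an action is the probability-weighted average of the utilities of
# its possible outcome states, and the agent picks the action that
# maximizes it. Action names and numbers below are hypothetical.

def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs for one action."""
    return sum(p * u for p, u in outcomes)

def choose_action(actions):
    """actions: dict mapping action name -> list of (probability, utility)."""
    return max(actions, key=lambda a: expected_utility(actions[a]))

actions = {
    "reuse_question": [(0.7, 0.4), (0.3, 0.9)],   # EU = 0.55
    "generate_new":   [(0.5, 0.8), (0.5, 0.6)],   # EU = 0.70
}
best = choose_action(actions)  # "generate_new"
```

In AEQSG terms, each "outcome state" would be a candidate generation state and the utility would encode the user's question set preference.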
B. Learning Agent (LA)
A general intelligent agent known as a learning agent is the
preferred method for creating state-of-the-art systems in
Artificial Intelligence, where any type of agent can be
generalized into a learning agent to generate better actions [4].
A learning agent has the ability to learn from its past
experiences: even though it starts with basic knowledge, it
then adapts automatically through learning [2].
There are four main conceptual components in a learning
agent, as shown in Fig. 2. The learning element is responsible
for making improvements based on learning from the
environment, and it takes feedback from the critic.
Automated Exam Question Set Generator Using Utility
Based Agent and Learning Agent
Tengku Nurulhuda Tengku Abd Rahim, Ma. Stella Tabora Domingo, Mohamed Farid Noor Batcha, and
Zalilah Abd Aziz
International Journal of Machine Learning and Computing, Vol. 10, No. 1, January 2020
164
doi: 10.18178/ijmlc.2020.10.1.914
Meanwhile, the critic describes how well the agent is doing
with respect to a fixed performance standard. The
performance element is responsible for external action
selection. The problem generator is responsible for suggesting
actions that will lead to new and informative experiences.
Therefore, a learning agent is able to learn, analyze its
performance and look for new ways to optimize it [2].
Fig. 2. A general learning agent [3].
III. DEEP REINFORCEMENT LEARNING
Deep Reinforcement Learning (DRL) is the combination
of Reinforcement Learning (RL) and Deep Learning (DL);
that is, RL with function approximation by deep neural
networks [5]. RL and DL are both systems that learn
autonomously. The difference between them is that RL
learns dynamically by adjusting actions using continuous
feedback in order to optimize the reward, while DL learns
from a training set and then applies that learning to a new data
set [6].
Meanwhile, the difference between RL and DRL is that RL
learns dynamically with a trial and error method to
maximize the outcome, while DRL additionally learns from
existing knowledge and applies it to a new data set [7]. Unlike
supervised and unsupervised learning algorithms, RL is
about training an agent to interact with an environment and
maximize its reward [8].
The RL model, as shown in Fig. 3, consists of three basic
concepts: state, action, and reward. The state describes the
current situation of the agent. The action describes the action
taken by the agent at each state. The reward describes the
feedback from the environment [8].
Fig. 3. Reinforcement learning model [8].
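As a minimal illustration of the state/action/reward loop, the sketch below runs tabular Q-learning on a toy chain environment; the environment, rewards and hyperparameters are assumptions for demonstration only, not the paper's system.

```python
import random

# Toy tabular Q-learning on a 4-state chain: action 1 moves right,
# action 0 stays put; reaching the last state yields reward 1. The
# agent adjusts its action values from continuous reward feedback.
random.seed(0)
n_states, actions = 4, [0, 1]
Q = {(s, a): 0.0 for s in range(n_states) for a in actions}
alpha, gamma, epsilon = 0.5, 0.9, 0.1   # learning rate, discount, exploration

def step(state, action):
    next_state = min(state + action, n_states - 1)
    reward = 1.0 if next_state == n_states - 1 else 0.0
    return next_state, reward

for _ in range(500):                    # episodes
    s = 0
    while s != n_states - 1:
        # epsilon-greedy action selection
        if random.random() < epsilon:
            a = random.choice(actions)
        else:
            a = max(actions, key=lambda act: Q[(s, act)])
        s2, r = step(s, a)              # environment feedback (reward)
        best_next = max(Q[(s2, b)] for b in actions)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s2

# after training, the greedy policy moves right in every non-terminal state
greedy = [max(actions, key=lambda act: Q[(s, act)]) for s in range(n_states - 1)]
```

A DRL variant would replace the Q table with a deep neural network approximating the same state-action values.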
IV. RELATED WORK
Automated Test Paper Generation [9] is a web-based
application system prototype implementing a utility-based
agent to cater for exam difficulty levels, and a shuffling
algorithm to design question sets randomly without repetition
or duplication. On the basis of utility value, two sets of
question items are selected: the first set consists of items with
the highest utility value, whereas the second set consists of
items with the second-highest utility value of the test paper.
The accumulative difficulty level of the test paper is calculated
based on the question items' utility values.
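The utility-based selection idea in [9], picking the highest- and second-highest-utility sets, might be sketched like this (the set identifiers and utility values are hypothetical):

```python
# Illustrative sketch of selecting the two question item sets with the
# highest and second-highest utility values, as described for [9].

def top_two_by_utility(question_sets):
    """question_sets: list of (set_id, utility) pairs."""
    ranked = sorted(question_sets, key=lambda x: x[1], reverse=True)
    return ranked[0], ranked[1]

sets = [("A", 3.2), ("B", 4.7), ("C", 4.1)]
first, second = top_two_by_utility(sets)   # ("B", 4.7), ("C", 4.1)
```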
An intelligent agent is introduced to Automatic Question
Paper Setting (AQPS) [10] to automate the generation of
question papers using a random number generation technique
and a backtracking algorithm, known as the Automatic Question
Paper Generator Agent (AQPA). The basic components of AQPS
are a Hash Table (HT) database, a Repeated Question Checker
Module (RQCM) and a Question Paper Generation Module
(QPGM). The HT database is used to store questions with
their keys in optimum space, in order to find a required
question very quickly. RQCM ensures that a newly retrieved
question does not match any question in the selected
question database. QPGM generates the question paper based
on a selected pre-defined question format and a selected question
database.
The proposed framework for Automatic Question
Generation (AQG) [11] is a multi-agent system which consists of
a Document Processing Agent (DPA), an Information
Classification Agent (ICA) and a Question Generation Agent
(QGA). DPA first processes the input text using the TreeTagger
tool to produce a one-word-per-line form together with
each word's tag and lemma; the processed TreeTagger output is
ranked based on the frequency of each word's occurrence; then
the prefix and suffix of each word are removed with a stemming
process to get the root word. The Bloom's category is found in
ICA based on keywords selected after the stemming process
and the word count from DPA, by matching each keyword with an
action verb in the repository. The output from ICA is used in
QGA to generate questions based on templates that suit the
selected keywords according to the Bloom's levels.
The two main components in the Utility Based Test Paper
(UBTP) agent [12] are the Knowledgebase Developer (KD) and the
Test Paper Generator (TPG). KD is a component that stores
question items, each with an integer utility value in the
range 0 to 5, in file form. TPG is a component that
generates a test paper from question items (filtered by
utility value) selected from the knowledgebase based on the overall
difficulty level (from 1 percent to 100 percent) entered by the
examiner. Three sets of question items are selected on the
basis of utility value in the TPG algorithm: the first
set has the highest calculated utility value, the second set has one
less than the highest, and the third set has a utility value of 0.
Most of the papers above implement only a Utility Based Agent
(UBA) in their systems, but none of them use a
Learning Agent (LA) or Deep Reinforcement Learning
(DRL). In order to improve the quality of the Exam Question Set
Generator, we propose a multi-agent approach combining two
agents, UBA and LA. We also suggest applying DRL for LA.
V. PROPOSED SYSTEM
The overall architecture of the proposed system, i.e. the
Automated Exam Question Set Generator (AEQSG) using a
Utility Based Agent (UBA) and a Learning Agent (LA), is
shown in Fig. 4.
Fig. 4. Automated exam question set generator overall architecture.
The question collection workflow process, as shown in Fig. 5,
is done beforehand to provide the proposed system with
a collection of questions, either new or from past exams.
Users can either key in questions or upload files containing new
and previous questions into the system. The system then
automatically validates each uploaded file and each
question keyed in or uploaded. After that, all validated
questions are stored in the question bank.
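A minimal sketch of this collect-and-validate loop follows; the validation rule (non-empty text ending in a question mark) is a stand-in assumption, since the system's actual validation criteria are not specified here.

```python
# Illustrative sketch of the question collection workflow: validate each
# keyed-in or uploaded question and keep only valid ones for the
# question bank. The is_valid rule is a hypothetical placeholder.

def is_valid(question):
    text = question.get("text", "").strip()
    return len(text) > 0 and text.endswith("?")

def collect(questions):
    question_bank = []
    for q in questions:              # "for each question (q)"
        if is_valid(q):              # "is q valid?"
            question_bank.append(q)  # add q to the question list
    return question_bank

incoming = [{"text": "Define recursion?"}, {"text": ""}]
bank = collect(incoming)             # only the first question is kept
```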
Fig. 5. Question collection workflow process.
Any uncategorised question, whether new or from a past exam,
that does not have a BT category or chapter category assigned
when it is keyed in or uploaded will be automatically
categorised by the proposed system into BT categories (as shown in
Table I) and chapters during the question categorisation workflow
process (as shown in Fig. 6).
Therefore, the person in charge of keying in or uploading
questions to the proposed system can be anyone other than a
Subject Matter Expert (SME), which at the same time
helps to reduce educators' commitments and saves a
significant amount of time in preparing exam question sets. In
addition, the question categories are better aligned with the
course syllabus guidelines, since not only a BT category but
also a chapter category is assigned to each question saved into
the question bank.
TABLE I: BLOOM'S TAXONOMY CLASSIFICATION [1]
ID   Category        Level
C1   Knowledge       Easy
C2   Comprehension   Easy
C3   Application     Medium
C4   Analysis        Medium
C5   Synthesis       Hard
C6   Evaluation      Hard
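The Table I mapping lends itself to a simple lookup, for example:

```python
# The Table I categories as a lookup table, with a helper that derives
# a question's difficulty level from its assigned BT category.

BT_LEVEL = {
    "C1": ("Knowledge", "Easy"),
    "C2": ("Comprehension", "Easy"),
    "C3": ("Application", "Medium"),
    "C4": ("Analysis", "Medium"),
    "C5": ("Synthesis", "Hard"),
    "C6": ("Evaluation", "Hard"),
}

def difficulty(bt_category):
    return BT_LEVEL[bt_category][1]

difficulty("C4")  # "Medium"
```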
The question categorisation workflow process starts
with pre-processing of the uncategorised questions using Natural
Language Processing (NLP) pre-processing, which includes
lower-case standardization, removal of non-alphabet
characters, tokenization, Part of Speech (POS) tagging, noun
extraction, verb extraction and verb stemming, before the
Bloom's Taxonomy (BT) classification and chapter
classification workflow processes begin in parallel.
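A simplified stand-in for part of this pipeline is sketched below; POS tagging, noun/verb extraction and WordNet are omitted, and the naive suffix-stripping stemmer is an assumption, not the system's actual stemmer.

```python
import re

# Simplified NLP pre-processing: lower-case standardization, removal of
# non-alphabet characters, whitespace tokenization, and a crude stemmer
# standing in for a real one.

def preprocess(question):
    text = question.lower()                  # lower-case standardization
    text = re.sub(r"[^a-z\s]", " ", text)    # remove non-alphabet characters
    return text.split()                      # tokenization

def stem(word):
    # naive suffix stripping as a placeholder for a real stemming algorithm
    for suffix in ("ing", "ed", "es", "s"):
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[: -len(suffix)]
    return word

tokens = preprocess("Explain the stemming process, in 2 steps!")
stems = [stem(t) for t in tokens]
```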
Fig. 6. Question categorisation workflow process.
Fig. 7. Bloom’s taxonomy classification workflow process.
In the BT classification workflow process (as shown in Fig. 7),
each verb in the verb list extracted during NLP pre-processing
is identified with a BT category based on Bloom's Taxonomy (BT)
verbs. Then the BT category with the highest occurrence is
selected after the occurrences of each BT category are counted.
If more than one category shares the highest occurrence, the most
relevant BT category is selected based on rules. The selected BT
category is assigned to the question, and the question in the
question bank is updated with it.
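The count-and-select logic of Fig. 7 can be sketched as follows; the verb-to-category table is a small illustrative subset, and the tie-break (lowest category ID) is an assumption standing in for the paper's rules.

```python
from collections import Counter

# Sketch of the BT classification step: map each extracted verb to a BT
# category, count occurrences, and pick the most frequent category.
# The verb table below is illustrative, not the full BT verb repository.

BT_VERBS = {
    "define": "C1", "list": "C1",
    "explain": "C2", "apply": "C3",
    "compare": "C4", "design": "C5", "justify": "C6",
}

def classify_bt(verbs):
    counts = Counter(BT_VERBS[v] for v in verbs if v in BT_VERBS)
    if not counts:
        return None                       # no BT verb found
    best = max(counts.values())
    # ties are resolved by rules in the real system; here: lowest ID
    return min(c for c, n in counts.items() if n == best)

classify_bt(["define", "list", "explain"])  # "C1" (two C1 verbs vs one C2)
```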
Fig. 8. Chapter classification workflow process.
Meanwhile, in the chapter classification workflow process (as
shown in Fig. 8), the noun list extracted during NLP
pre-processing is converted into an n-gram list. Each n-gram
in the list is then identified with a chapter category based on the
chapters/nouns reference table. Once the occurrences of each
chapter category have been counted, the chapter category with
the highest occurrence is selected. If more than one category
shares the highest occurrence, the most relevant chapter category
is selected based on rules. Finally, the question in the question
bank is assigned and updated with the selected chapter category.
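The n-gram matching step might look as follows; the chapters/nouns reference table is hypothetical, and the real tie-break rules are not reproduced here.

```python
from collections import Counter

# Sketch of chapter classification: build n-grams from the extracted
# nouns and match them against a chapters/nouns reference table.

CHAPTER_NOUNS = {"binary tree": "Ch3", "tree": "Ch3", "stack": "Ch2"}

def ngrams(nouns, n_max=2):
    grams = []
    for n in range(1, n_max + 1):
        grams += [" ".join(nouns[i:i + n]) for i in range(len(nouns) - n + 1)]
    return grams

def classify_chapter(nouns):
    counts = Counter(CHAPTER_NOUNS[g] for g in ngrams(nouns)
                     if g in CHAPTER_NOUNS)
    return counts.most_common(1)[0][0] if counts else None

classify_chapter(["binary", "tree", "stack"])  # "Ch3"
```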
Fig. 9. Proposed model for automated exam question set generator.
Our proposed system, AEQSG, will be implemented using
both a Utility Based Agent (UBA) and a Learning Agent (LA). It
also implements Bloom's Taxonomy (BT) Scaling and a
Genetic Algorithm (GA) to optimize the process and enhance
the quality of the new exam question set. The proposed model
for AEQSG using UBA and LA is shown in Fig. 9.
UBA and LA are chosen as the model for the proposed
system to achieve our AEQSG goals while optimizing exam
question set generation using GA. Moreover, LA is chosen to
increase the quality of the exam question set using Bell Curve
Analysis based on the average past exam question result mark
in BT Scaling. Furthermore, BT scaling also provides the
mechanism to automate the distribution of Bloom's
Taxonomy difficulty.
AEQSG will generate the new exam question set from the
question bank based on the user's question preference. The user
is given the option to select the excluded year range, academic
year, semester, education level, exam duration, exam
difficulty, subject, chapter selection, total questions and
question margin as the question set preference (utility function)
before the new exam question set is generated. The user's
question preference workflow process, as shown in Fig. 10,
is executed before the GA workflow process begins.
Fig. 10. User’s question preference workflow process.
The GA workflow process, as shown in Fig. 11, starts by
initializing question sets based on the user's question set
preferences. After the question sets have been initialized, the
fitness value of each question set is evaluated and a question
set is selected using Roulette Wheel selection. Then, single
crossover and mutation are performed on the selected
question set. After crossover and mutation are done, the
fitness values of the new question sets are evaluated. Later,
the question set with the best fitness is selected from the
current question sets and passed to Bloom's Taxonomy (BT)
Scaling.
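Roulette Wheel selection can be sketched as below. Because the paper treats the lowest fitness value as the best, the slice sizes here are proportional to inverse fitness; that inversion is an assumption, since the exact weighting is not given.

```python
import random

# Roulette Wheel selection sketch for a minimization problem: each
# question set gets a wheel slice proportional to 1/fitness, so fitter
# (lower-fitness) sets are selected more often.

def roulette_select(population, fitnesses, rng=random):
    weights = [1.0 / f for f in fitnesses]   # lower fitness -> bigger slice
    total = sum(weights)
    r = rng.uniform(0, total)                # spin the wheel
    acc = 0.0
    for individual, w in zip(population, weights):
        acc += w
        if acc >= r:
            return individual
    return population[-1]                    # guard against rounding

random.seed(1)
picks = [roulette_select(["QS1", "QS2"], [10.0, 90.0]) for _ in range(1000)]
# QS1 (fitness 10) should be selected far more often than QS2
```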
UBA chooses actions based on a preference (utility) for
each state that maximizes the expected utility. UBA uses
GA to optimize the generation of the exam question set based on
the chosen preference (utility) in AEQSG and to generate a
high-quality exam question set that follows the educational
institution's guidelines.
The fitness formula, as shown in Fig. 12, is used to calculate
the fitness value in GA (which resides in UBA) to get the best
fitness before the fitness value is passed to LA to proceed with
the BT scaling process. W represents the exam question set
quality weightage percentage, as shown in Table II. The lowest
fitness value is the best fitness.
Fig. 11. Genetic algorithm workflow process.
Fig. 12. Fitness formula.
LA starts to act with basic knowledge (the question mark)
when there is no average result mark (as shown in Fig. 13),
and it is then able to act and adapt automatically (using the
average past question result mark) through learning. BT scaling
automates the distribution of Bloom's Taxonomy difficulty
levels (as shown in Fig. 14) and automates the learning
experience using Bell Curve Analysis based on the average past
exam question result mark (as shown in Fig. 15) to get an
optimal exam question set.
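The Bell Curve Analysis step is not specified in detail; one plausible reading, sketched below under assumed thresholds, is to check whether the average past scores of the selected questions match an expected mean and spread.

```python
import statistics

# Hedged sketch of a bell-curve check: does the distribution of average
# past scores roughly match an expected mean and standard deviation?
# The expected values and tolerance are assumptions for illustration.

def matches_expected_bell_curve(scores, expected_mean=50.0,
                                expected_stdev=15.0, tolerance=10.0):
    mean = statistics.mean(scores)
    stdev = statistics.pstdev(scores)
    return (abs(mean - expected_mean) <= tolerance and
            abs(stdev - expected_stdev) <= tolerance)

balanced = [30, 40, 50, 50, 60, 70]     # spread around the middle
too_easy = [85, 90, 90, 95, 95, 100]    # everyone scores high
matches_expected_bell_curve(balanced)   # True
matches_expected_bell_curve(too_easy)   # False
```

When the check fails, the workflow would mutate the offending questions (swapping in others of the same BT category) and re-test, as Fig. 15 describes.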
TABLE II: QUALITY OF EXAM WEIGHTAGE [1]
No. of Level   No. of Category   Example                  Weightage   Quality
3              6                 C1, C2, C3, C4, C5, C6   100         Good
3              5                 C1, C2, C3, C4, C6       90          Good
3              4                 C1, C2, C4, C5           80          Good
3              3                 C2, C4, C5               70          Good
2              4                 C3, C4, C5, C6           60          Average
2              3                 C1, C3, C4               50          Average
2              2                 C4, C5                   40          Average
1              2                 C3, C4                   30          Bad
1              1                 C5                       20          Bad
Fig. 13. BT scaling – average score workflow process.
Fig. 14. BT scaling – Bloom's Taxonomy distribution workflow process.
By adopting Deep Reinforcement Learning (DRL) in LA,
we benefit from the advantages of both Reinforcement Learning
(RL) and Deep Learning (DL). RL trains an agent to
interact with an environment and maximize its reward, while
DL complements RL by providing function approximation
through deep neural networks. RL learns dynamically with a
trial and error method to maximize the outcome, and DRL
likewise adjusts actions using continuous feedback to optimize
the reward while also applying existing knowledge to new
data sets.
Since we are still in the midst of exploring DRL for this
proposed system, we will not discuss DRL in further
detail. The specific algorithm, model or deep neural
network that will be used in future for the proposed system
is not described in this paper.
Fig. 15. BT scaling – Bell Curve Analysis workflow process.
VI. CONCLUSION AND FUTURE WORK
This paper presents a novel and feasible approach for an
Automated Exam Question Set Generator using a Utility Based
Agent (UBA) and a Learning Agent (LA), implementing a
Genetic Algorithm (GA) and Bloom's Taxonomy (BT)
scaling, which includes automated BT difficulty distribution
in the exam question set and Bell Curve Analysis.
In future, further research can be done on LA to explore
Deep Reinforcement Learning (DRL) in combination with
other types of algorithms, models or deep neural networks to
improve the current proposed system.
CONFLICT OF INTEREST
The authors declare no conflict of interest.
AUTHOR CONTRIBUTIONS
All work in this paper was carried out jointly by the authors.
REFERENCES
[1] T. N. Tengku Abd Rahim, Z. Abd Aziz, R. H. Ab Rauf and N.
Shamsudin, “Automated Exam Question Generator using Genetic
Algorithm,” in Proc. 2017 IEEE Conference on e-Learning,
e-Management and e-Services (IC3e), Shah Alam, 2017.
[2] Types of AI Agents. Java T Point. [Online]. Available:
https://www.javatpoint.com/types-of-ai-agents
[3] S. Russell and P. Norvig, Artificial Intelligence, A Modern Approach,
Third Edition, New Jersey: Pearson Education, 2010.
[4] P. Gupta, Rational Agents for Artificial Intelligence, Hackernoon,
September 2017.
[5] Funstematics.ai. 2019. [Online]. Available:
https://www.funstematics.ai/ai-case-studies
[6] M. Rouse, “deep learning,” SearchEnterpriseAI, TechTarget, October
2019.
[7] T. Williams. (September 2019). Reinforcement learning Vs. deep
reinforcement learning: What’s the difference?” Techopedia Inc.
[Online]. Available:
https://www.techopedia.com/reinforcement-learning-vs-deep-reinforc
ement-learning-whats-the-difference/2/34039
[8] Jay, “Machines that learn with reinforcement learning,” Adverai, July
2018.
[9] S. A El_Rahman and A. H. Zolait, “Automated test paper generation
using utility based agent and shuffling algorithm,” International
Journal of Web-Based Learning and Teaching Technologies, pp. 69-83,
2019.
[10] K. H. Pinjani, R. Y. Raut, and P. S. Yedey, “Developing an intelligent
agent for Automatic Question Paper setting,” in Proc. National
Conference on Advanced Technologies in Computing and
Networking-ATCON-2015, Amravati, 2015.
[11] S. Pandey and K. C. Rajeswari, “Automatic question generation using
software agents for technical institutions,” International Journal of
Advanced Computer Research, vol. 3, no. 13, pp. 307-311, 2013.
[12] M. J. Arshad, M. Naz, Y. Saleem, A. Farooq and K. H. Asif, “Modeling
an agent for paper generation system using utility based approach,”
Journal of Faculty of Engineering & Technology, pp. 109-124, 2012.
Copyright © 2020 by the authors. This is an open access article distributed
under the Creative Commons Attribution License which permits unrestricted
use, distribution, and reproduction in any medium, provided the original
work is properly cited (CC BY 4.0).
Tengku Nurulhuda Tengku Abd Rahim received
the B.I.T. degree in computer system technology from
University of Malaysia Sarawak (UNIMAS), Kota
Samarahan, Sarawak, Malaysia in 2000 and the
M.C.S. degree in computer science from MARA
University of Technology (UiTM), Shah Alam,
Selangor, Malaysia in 2016. She is currently a senior
engineer at Artificial Intelligence Department,
MIMOS Berhad, Kuala Lumpur, Malaysia.
Ma. Stella Tabora Domingo graduated with a BSCoE
in the field of computer engineering from the University
of Baguio, Philippines in 2005.
She currently works as a staff engineer at the
Artificial Intelligence Department, MIMOS Berhad,
Kuala Lumpur, Malaysia.
Mohamed Farid Noor Batcha graduated with a BSEE
in the field of communications from the University of
Kentucky, United States in 2001 and obtained his
master of engineering (electrical) from Universiti
Teknologi Malaysia in 2010.
He is currently a staff engineer at the Artificial
Intelligence Department, MIMOS Berhad, Kuala
Lumpur, Malaysia.
Zalilah Abd Aziz received the BSc (Hons) degree in
computer science from ITM/UKM, Shah Alam,
Selangor, Malaysia in 1996, the MSc in computer
science (software engineering) from University Putra
Malaysia (UPM) in 2003, Serdang, Selangor,
Malaysia and PhD in Computer science (artificial
intelligence) from University of Nottingham,
Nottingham, United Kingdom in 2013.
She is currently a senior lecturer at the Faculty of
Computer and Mathematical Sciences, MARA
University of Technology (UiTM), Shah Alam, Selangor, Malaysia since
1997. Her primary research interests include software quality,
software engineering, artificial intelligence, combinatorial
optimization problems, metaheuristics and programming.
Journal Papers
 
A real time aggressive human behaviour detection system in cage environment a...
Journal Papers
 
A numerical analysis of various p h level for fiber optic ph sensor based on ...
Journal Papers
 
A novel character segmentation reconstruction approach for license plate reco...
Journal Papers
 
A hybrid model based on constraint oselm, adaptive weighted src and knn for l...
Journal Papers
 
Wafer scale fabrication of nitrogen-doped reduced graphene oxide with enhance...
Journal Papers
 
Ultrasonic atomization of graphene derivatives for heat spreader thin film de...
Journal Papers
 
Preliminary study of poly (tetrahydrofurturyl acrylate) thin film as a potent...
Journal Papers
 
New weight function for adapting handover margin level over contiguous carrie...
Journal Papers
 
Implementation of embedded real time monitoring temperature and humidity system
Journal Papers
 
High voltage graphene nanowall trench mos barrier schottky diode characteriza...
Journal Papers
 
High precision location tracking technology in ir4.0
Journal Papers
 
Modeling of dirac voltage for highly p doped graphene field effect transistor...
Journal Papers
 
Implementation of vehicle ventilation system using node mcu esp8266 for remot...
Journal Papers
 
Energy level tuning of cd se colloidal quantum dots in ternary 0d 2d-2d cdse ...
Journal Papers
 
A review on graphene based light emitting functional devices
Journal Papers
 
Uncertainty on virtualization and internet of things (iot) the connectivity...
Journal Papers
 
Ad

Recently uploaded (20)

PDF
CIFDAQ Market Wrap for the week of 4th July 2025
CIFDAQ
 
PDF
Automating Feature Enrichment and Station Creation in Natural Gas Utility Net...
Safe Software
 
PPTX
From Sci-Fi to Reality: Exploring AI Evolution
Svetlana Meissner
 
PDF
Transcript: Book industry state of the nation 2025 - Tech Forum 2025
BookNet Canada
 
PDF
Kit-Works Team Study_20250627_한달만에만든사내서비스키링(양다윗).pdf
Wonjun Hwang
 
PPTX
Future Tech Innovations 2025 – A TechLists Insight
TechLists
 
PDF
How do you fast track Agentic automation use cases discovery?
DianaGray10
 
PPTX
Mastering ODC + Okta Configuration - Chennai OSUG
HathiMaryA
 
PPT
Ericsson LTE presentation SEMINAR 2010.ppt
npat3
 
PPTX
Q2 FY26 Tableau User Group Leader Quarterly Call
lward7
 
PPTX
Agentforce World Tour Toronto '25 - Supercharge MuleSoft Development with Mod...
Alexandra N. Martinez
 
PDF
Agentic AI lifecycle for Enterprise Hyper-Automation
Debmalya Biswas
 
PDF
Newgen 2022-Forrester Newgen TEI_13 05 2022-The-Total-Economic-Impact-Newgen-...
darshakparmar
 
PPTX
Agentforce World Tour Toronto '25 - MCP with MuleSoft
Alexandra N. Martinez
 
PDF
NLJUG Speaker academy 2025 - first session
Bert Jan Schrijver
 
PDF
SIZING YOUR AIR CONDITIONER---A PRACTICAL GUIDE.pdf
Muhammad Rizwan Akram
 
PPTX
New ThousandEyes Product Innovations: Cisco Live June 2025
ThousandEyes
 
PDF
The 2025 InfraRed Report - Redpoint Ventures
Razin Mustafiz
 
PDF
AI Agents in the Cloud: The Rise of Agentic Cloud Architecture
Lilly Gracia
 
PPTX
MuleSoft MCP Support (Model Context Protocol) and Use Case Demo
shyamraj55
 
CIFDAQ Market Wrap for the week of 4th July 2025
CIFDAQ
 
Automating Feature Enrichment and Station Creation in Natural Gas Utility Net...
Safe Software
 
From Sci-Fi to Reality: Exploring AI Evolution
Svetlana Meissner
 
Transcript: Book industry state of the nation 2025 - Tech Forum 2025
BookNet Canada
 
Kit-Works Team Study_20250627_한달만에만든사내서비스키링(양다윗).pdf
Wonjun Hwang
 
Future Tech Innovations 2025 – A TechLists Insight
TechLists
 
How do you fast track Agentic automation use cases discovery?
DianaGray10
 
Mastering ODC + Okta Configuration - Chennai OSUG
HathiMaryA
 
Ericsson LTE presentation SEMINAR 2010.ppt
npat3
 
Q2 FY26 Tableau User Group Leader Quarterly Call
lward7
 
Agentforce World Tour Toronto '25 - Supercharge MuleSoft Development with Mod...
Alexandra N. Martinez
 
Agentic AI lifecycle for Enterprise Hyper-Automation
Debmalya Biswas
 
Newgen 2022-Forrester Newgen TEI_13 05 2022-The-Total-Economic-Impact-Newgen-...
darshakparmar
 
Agentforce World Tour Toronto '25 - MCP with MuleSoft
Alexandra N. Martinez
 
NLJUG Speaker academy 2025 - first session
Bert Jan Schrijver
 
SIZING YOUR AIR CONDITIONER---A PRACTICAL GUIDE.pdf
Muhammad Rizwan Akram
 
New ThousandEyes Product Innovations: Cisco Live June 2025
ThousandEyes
 
The 2025 InfraRed Report - Redpoint Ventures
Razin Mustafiz
 
AI Agents in the Cloud: The Rise of Agentic Cloud Architecture
Lilly Gracia
 
MuleSoft MCP Support (Model Context Protocol) and Use Case Demo
shyamraj55
 

Automated exam question set generator using utility based agent and learning agent

In our proposed system, UBA and LA will apply BT scaling to achieve a desirable result. BT scaling automates the distribution of Bloom's Taxonomy difficulty levels and automates the learning experience using Bell Curve Analysis based on the average past exam question result mark, in order to obtain an optimal exam question set. GA optimizes the generation of the exam question set based on the chosen preference (utility) in AEQSG. Bloom's Taxonomy (BT) and GA are not detailed here, since this paper extends the Automated Exam Question Generator using Genetic Algorithm paper [1].

Manuscript received September 25, 2019; revised December 22, 2019. Tengku Nurulhuda Tengku Abd Rahim, Ma. Stella Tabora Domingo and Mohamed Farid Noor Batcha are with MIMOS Berhad, Technology Park Malaysia, Kuala Lumpur 57000, Malaysia (e-mail: [email protected], [email protected], [email protected]). Zalilah Abd Aziz is with the Faculty of Computer and Mathematical Sciences, Universiti Teknologi MARA, 40450 Shah Alam, Selangor, Malaysia (e-mail: [email protected]).

Automated Exam Question Set Generator Using Utility Based Agent and Learning Agent
Tengku Nurulhuda Tengku Abd Rahim, Ma. Stella Tabora Domingo, Mohamed Farid Noor Batcha, and Zalilah Abd Aziz
International Journal of Machine Learning and Computing, Vol. 10, No. 1, January 2020. doi: 10.18178/ijmlc.2020.10.1.914

II. INTELLIGENT AGENT

A. Utility Based Agent (UBA)

A utility based agent is similar to a goal based agent in that it acts to achieve goals, but it also finds the optimal way to achieve them. This is beneficial when there are multiple possible alternatives and the agent has to choose the best action before performing it. The utility function maps each state to a real number to evaluate the efficiency of each action in achieving the goals [2]. A UBA, as in Fig. 1, uses a world model together with a utility function that measures its preferences among states, and then chooses the action that leads to the best expected utility. The expected utility is computed by averaging over all possible outcome states, weighted by the probability of each outcome [3].

Fig. 1. A model-based, utility-based agent [3].

B. Learning Agent (LA)

A general intelligent agent known as a learning agent is the preferred method for creating state-of-the-art systems in Artificial Intelligence; any type of agent can be generalized into a learning agent to generate better actions [4]. A learning agent has the ability to learn from its past experiences: although it starts with only basic knowledge, it then adapts automatically through learning [2]. There are four main conceptual components in a learning agent, as shown in Fig. 2. The learning element is responsible for improvements based on learning from the environment, and it takes feedback from the critic.
Meanwhile, the critic describes how well the agent is doing with respect to a fixed performance standard. The performance element is responsible for external action selection. The problem generator is responsible for suggesting actions that will lead to new and informative experiences. A learning agent is therefore able to learn, analyze its performance and look for new ways to optimize that performance [2].

Fig. 2. A general learning agent [3].

III. DEEP REINFORCEMENT LEARNING

Deep Reinforcement Learning (DRL) is the combination of Reinforcement Learning (RL) and Deep Learning (DL), in which RL is augmented with function approximation by deep neural networks [5]. RL and DL are both systems that learn autonomously. The difference between RL and DL is that RL learns dynamically by adjusting actions using continuous feedback in order to optimize the reward, while DL learns from a training set and then applies that learning to a new data set [6]. Meanwhile, the difference between RL and DRL is that RL learns dynamically with a trial and error method to maximize the outcome, while DRL learns from existing knowledge and applies it to a new data set [7]. Unlike supervised and unsupervised learning algorithms, RL is about training an agent to interact with an environment and maximize its reward [8]. The RL model, as shown in Fig. 3, consists of three basic concepts: state, action, and reward. The state describes the current situation of the agent, the action describes the action taken by the agent at each state, and the reward describes the feedback from the environment [8].

Fig. 3. Reinforcement learning model [8].

IV. RELATED WORK

Automated Test Paper Generation [9] is a web-based application system prototype implementing a utility-based agent to cater for the exam difficulty level, and a shuffling algorithm to compose a question set randomly without repetition or duplication. On the basis of utility value, two sets of question items are selected.
The first question item set consists of the items with the highest utility value, whereas the second set consists of those with the second highest utility value for the test paper. The accumulative difficulty level of the test paper is calculated from the question items' utility values.

An intelligent agent known as the Automatic Question Paper Generator Agent (AQPA) is introduced in Automatic Question Paper Setting (AQPS) [10] to automate the generation of question papers using a random number generation technique and a backtracking algorithm. The basic components of AQPS are a Hash Table (HT) database, a Repeated Question Checker Module (RQCM) and a Question Paper Generation Module (QPGM). The HT database stores questions with their keys using optimal space, so that a required question can be found very quickly. The RQCM ensures that a newly retrieved question does not match any question in the selected question database. The QPGM generates the question paper based on the selected pre-defined question format and the selected question database.

The proposed framework for Automatic Question Generation (AQG) [11] is a multi-agent system consisting of a Document Processing Agent (DPA), an Information Classification Agent (ICA) and a Question Generation Agent (QGA). The DPA first processes the input text using the TreeTagger tool to produce a one-word-per-line form together with each word's tag and lemma; the processed TreeTagger output is ranked by the frequency of each word's occurrence; then the prefix and suffix of each word are removed by a stemming process to obtain the root word. The ICA determines the Bloom's category based on keywords selected after the stemming process and the word count from the DPA, by matching keywords with action verbs in the repository. The output from the ICA is used by the QGA to generate questions from templates that suit the selected keywords according to the Bloom's levels.

The two main components of the Utility Based Test Paper (UBTP) agent [12] are the Knowledgebase Developer (KD) and the Test Paper Generator (TPG).
The KD is a component that stores question items in file form with integer utility values in the range 0 to 5. The TPG is a component that generates a test paper from question items (filtered by utility value) selected from the knowledgebase according to the overall difficulty level (from 1 percent to 100 percent) entered by the examiner. Three sets of question items are selected on the basis of utility value in the TPG algorithm: the first set has the highest calculated utility value, the second set has one less than the highest calculated utility value, and the third set has a utility value of 0.

Most of the papers above implement only a Utility Based Agent (UBA) in their systems; none of them use a Learning Agent (LA) or Deep Reinforcement Learning (DRL). In order to improve the quality of the exam question set generator, we propose a multi-agent system combining two agents, UBA and LA. We also suggest applying DRL for the LA.

V. PROPOSED SYSTEM

The overall architecture of the proposed system, i.e. the Automated Exam Question Set Generator (AEQSG) using Utility Based Agent (UBA) and Learning Agent (LA), is shown in Fig. 4.
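As background for the UBA at the core of the proposed system, the expected-utility choice described in Section II-A can be sketched as follows. This is a minimal illustrative sketch: the actions, outcome states, probabilities and utility values below are hypothetical and are not taken from the paper.

```python
# Minimal sketch of the UBA's action choice (Section II-A): the expected
# utility of an action is the probability-weighted average of the utilities
# of its possible outcome states, and the agent picks the action that
# maximizes it. The world model below is hypothetical, purely illustrative.

def expected_utility(action, transition, utility):
    """Sum of P(outcome | action) * U(outcome) over all outcome states."""
    return sum(p * utility[s] for s, p in transition[action].items())

def choose_action(actions, transition, utility):
    """Pick the action with the highest expected utility."""
    return max(actions, key=lambda a: expected_utility(a, transition, utility))

# Hypothetical world model: outcome-state probabilities for each action.
transition = {
    "regenerate": {"better_set": 0.6, "same_set": 0.4},
    "keep":       {"same_set": 1.0},
}
utility = {"better_set": 1.0, "same_set": 0.5}

best = choose_action(["regenerate", "keep"], transition, utility)
print(best)  # "regenerate" (expected utilities: about 0.8 vs 0.5)
```

Each probability distribution must sum to 1; the agent simply compares the resulting weighted averages, which is exactly the computation attributed to the UBA in [3].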
Fig. 4. Automated exam question set generator overall architecture.

The question collection workflow process, as shown in Fig. 5, is carried out beforehand to provide the proposed system with a collection of questions, either new questions or previous questions from past exams. The user can either key in new and previous questions or upload them as files. The system then automatically validates each uploaded file and each question keyed in or uploaded. After that, all validated questions are stored in the question bank.

Fig. 5. Question collection workflow process.

Any uncategorised question, whether a new question or a previous question from a past exam, that has no BT category or chapter category assigned when it is keyed in or uploaded is automatically categorised by the proposed system into BT categories (as shown in Table I) and chapters during the question categorisation workflow process (as shown in Fig. 6). Therefore, the person in charge of keying in or uploading questions can be anyone other than a Subject Matter Expert (SME), which helps to reduce educators' commitments and saves a significant amount of time in preparing the exam question set. In addition, the question categories are better aligned with the course syllabus guideline, since each question saved into the question bank is categorised not only by BT category but also by chapter.
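The collection step in Fig. 5 (validate each incoming question, keep only valid ones, save them to the question bank) can be sketched as below. The validity rule is a placeholder assumption; the paper does not specify its validation criteria.

```python
# Rough sketch of the question collection workflow in Fig. 5: each keyed-in
# or uploaded question is validated, and only valid questions are saved to
# the question bank. The validity check below is a placeholder assumption.

def is_valid(question):
    # Placeholder rule: a question must have non-empty text and a positive mark.
    return bool(question.get("text")) and question.get("mark", 0) > 0

def collect_questions(incoming, question_bank):
    """Validate each incoming question and store the valid ones in the bank."""
    valid = [q for q in incoming if is_valid(q)]
    question_bank.extend(valid)
    return valid

bank = []
incoming = [
    {"text": "Define an intelligent agent.", "mark": 5},
    {"text": "", "mark": 5},                            # rejected: empty text
    {"text": "Explain utility functions.", "mark": 0},  # rejected: no mark
]
collect_questions(incoming, bank)
print(len(bank))  # 1
```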
TABLE I: BLOOM'S TAXONOMY CLASSIFICATION [1]

  ID   Category        Level
  C1   Knowledge       Easy
  C2   Comprehension   Easy
  C3   Application     Medium
  C4   Analysis        Medium
  C5   Synthesis       Hard
  C6   Evaluation      Hard

The question categorisation workflow process starts with pre-processing of the uncategorised questions using Natural Language Processing (NLP) pre-processing, which includes lower case standardization, removal of non-alphabet characters, tokenization, Part of Speech (POS) tagging, noun extraction, verb extraction and stemming of verbs, before the Bloom's Taxonomy (BT) classification and chapter classification workflow processes begin in parallel.

Fig. 6. Question categorisation workflow process.

Fig. 7. Bloom's taxonomy classification workflow process.

In the BT classification workflow process (as shown in Fig. 7), each verb in the verb list extracted by the NLP pre-processing is identified with its respective BT category based on the Bloom's Taxonomy (BT) verbs. After the occurrences of each BT category have been counted, the BT category with the highest occurrence is selected. If more than one category shares the highest occurrence, the most relevant BT category is selected based on rules. The selected BT category is assigned to the question, and the question in the question bank is updated accordingly.

Fig. 8. Chapter classification workflow process.

Meanwhile, in the chapter classification workflow process (as shown in Fig. 8), the noun list extracted by the NLP pre-processing is converted into an n-gram list. Each n-gram in the list is then identified with its respective chapter category based on the Chapters/Nouns reference table. After the occurrences of each chapter category have been counted, the chapter category with the highest occurrence is selected. If more than one category shares the highest occurrence, the most relevant chapter category is selected based on rules. The question in the question bank is then assigned and updated with the selected chapter category.

Fig. 9. Proposed model for automated exam question set generator.

Our proposed system, AEQSG, is implemented using both a Utility Based Agent (UBA) and a Learning Agent (LA).
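The verb-counting step of the BT classification in Fig. 7 can be sketched as follows. The verb table is a small illustrative excerpt assumed for this sketch, not the system's actual Bloom's Taxonomy verb list, and ties are simply broken by first occurrence rather than by the paper's rules.

```python
# Sketch of the BT classification step in Fig. 7: each extracted verb is
# mapped to a BT category via a verb table, category occurrences are
# counted, and the most frequent category is assigned to the question.
# The verb table is a hypothetical excerpt, not the system's full table.
from collections import Counter

BT_VERBS = {  # illustrative excerpt of a Bloom's Taxonomy verb table
    "define": "C1", "list": "C1",
    "explain": "C2", "describe": "C2",
    "apply": "C3", "analyse": "C4",
    "design": "C5", "evaluate": "C6",
}

def classify_bt(verbs):
    """Return the BT category with the highest occurrence among the verbs."""
    counts = Counter(BT_VERBS[v] for v in verbs if v in BT_VERBS)
    if not counts:
        return None
    # Ties would be resolved by the paper's rules; here we take the first max.
    return counts.most_common(1)[0][0]

print(classify_bt(["define", "list", "explain"]))  # "C1" (two C1 verbs vs one C2)
```

The chapter classification in Fig. 8 follows the same count-and-select pattern, with n-grams matched against a Chapters/Nouns reference table instead of verbs against a BT verb table.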
The system also implements Bloom's Taxonomy (BT) Scaling and a Genetic Algorithm (GA) to optimize the process and enhance the quality of the new exam question set. The proposed model for AEQSG using UBA and LA is shown in Fig. 9. The UBA and LA are chosen as the model for the proposed system to achieve our AEQSG goals while optimizing exam question set generation using the GA. Moreover, the LA is chosen to increase the quality of the exam question set using Bell Curve Analysis based on the average past exam question result mark in BT Scaling. Furthermore, BT scaling also provides the mechanism to automate the distribution of Bloom's Taxonomy difficulty.

AEQSG generates the new exam question set from the question bank based on the user's question preference. Before the new exam question set is generated, the user is given the option to select the excluded year range, academic year, semester, education level, exam duration, exam difficulty, subject, chapter selection, total questions and question margin as the question set preference (utility function). The user's question preference workflow process, as shown in Fig. 10, is executed before the GA workflow process begins.

Fig. 10. User's question preference workflow process.

The GA workflow process, as shown in Fig. 11, starts by initializing question sets based on the user's question set preference.
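The GA operators named for Fig. 11 (fitness evaluation, Roulette Wheel selection, single crossover, mutation) can be sketched roughly as below. This is an illustrative sketch under assumptions: the fitness function is a stand-in (lower is better, as in the paper, but not the paper's actual formula), the data is invented, and a real generator would also deduplicate questions after crossover.

```python
# Hedged sketch of the GA loop in Fig. 11: evaluate a fitness value per
# question set, select sets by Roulette Wheel selection, then apply single
# crossover and mutation. The fitness below is a stand-in, not the paper's
# fitness formula; lower fitness is better, as stated in the paper.
import random

def roulette_select(question_sets, fitness):
    """Roulette Wheel selection; since lower fitness is better here,
    the selection weight is the inverse of the fitness value."""
    weights = [1.0 / (1.0 + fitness(qs)) for qs in question_sets]
    return random.choices(question_sets, weights=weights, k=1)[0]

def single_crossover(a, b):
    """Single-point crossover between two question sets."""
    point = random.randrange(1, len(a))
    return a[:point] + b[point:]

def mutate(qs, question_bank, rate=0.1):
    """Replace questions with random ones from the bank at the given rate."""
    return [random.choice(question_bank) if random.random() < rate else q
            for q in qs]

# Stand-in fitness: distance of the set's total mark from a target of 100.
fitness = lambda qs: abs(100 - sum(q["mark"] for q in qs))

bank = [{"id": i, "mark": m} for i, m in enumerate([10, 20, 25, 30, 15, 40])]
population = [random.sample(bank, 3) for _ in range(4)]
parent_a = roulette_select(population, fitness)
parent_b = roulette_select(population, fitness)
child = mutate(single_crossover(parent_a, parent_b), bank)
print(len(child))  # 3: the child is still a 3-question set
```

In the proposed system this loop repeats for a fixed number of iterations, and the fittest question set is then handed to BT scaling, as described next.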
After the question sets have been initialized, the fitness value of each question set is evaluated and a question set is selected using Roulette Wheel selection. Single crossover and mutation are then performed on the selected question set, and the fitness values of the resulting new question sets are evaluated. Finally, the question set with the best fitness is selected from the current question sets and passed to Bloom's Taxonomy (BT) Scaling.

The UBA chooses actions based on a preference (utility) for each state that maximizes the expected utility. The UBA uses the GA to optimize the generation of the exam question set according to the chosen preference (utility) in AEQSG, and to generate a high quality exam question set that follows the educational institution's guidelines. The fitness formula, as shown in Fig. 12, is used to calculate the fitness value in the GA (which resides in the UBA) to obtain the best fitness before the fitness value is passed to the LA to proceed with the BT scaling process. W represents the exam question set quality weightage percentage, as shown in Table II. The lowest fitness value is the best fitness.

Fig. 11. Genetic algorithm workflow process.

Fig. 12. BT scaling – average score workflow process.

The LA starts acting with basic knowledge (the question mark) where no average result mark exists (as shown in Fig. 13); it is then able to act and adapt automatically (using the average past question result mark) through learning. BT scaling automates the distribution of Bloom's Taxonomy difficulty levels (as shown in Fig. 14) and automates the learning experience using Bell Curve Analysis based on the average past exam question result mark (as shown in Fig. 15) to obtain an optimal exam question set.

TABLE II: QUALITY OF EXAM WEIGHTAGE [1]

  No. of Level   No. of Category   Example                  Weightage   Quality
  3              6                 C1, C2, C3, C4, C5, C6   100         Good
  3              5                 C1, C2, C3, C4, C6       90          Good
  3              4                 C1, C2, C4, C5           80          Good
  3              3                 C2, C4, C5               70          Good
  3              4                 C3, C4, C5, C6           60          Good
  2              3                 C1, C3, C4               50          Average
  2              2                 C4, C5                   40          Average
  1              2                 C3, C4                   30          Bad
  1              1                 C5                       20          Bad

Fig. 13. BT scaling – average score workflow process.

Fig. 14. BT scaling – average score workflow process.

By adopting Deep Reinforcement Learning (DRL) in the LA, we benefit from the advantages of both Reinforcement Learning (RL) and Deep Learning (DL): RL trains an agent to interact with an environment and maximize its reward, while DL complements RL by providing it with function approximation through deep neural networks. RL learns dynamically with a trial and error method to maximize the outcome, and DRL additionally learns dynamically by adjusting actions using continuous feedback in order to optimize the reward and apply what it has learned to a new data set. Since we are still in the midst of exploring DRL for this proposed system, we do not discuss DRL in further detail; the specific algorithm, model or deep neural network to be used for the proposed system in future is not described in this paper.

Fig. 15. BT scaling – average score workflow process.

VI. CONCLUSION AND FUTURE WORK

This paper presents a novel and feasible approach for an Automated Exam Question Set Generator using a Utility Based Agent (UBA) and a Learning Agent (LA), implemented with a Genetic Algorithm (GA) and Bloom's Taxonomy (BT) scaling, which includes automated Bloom's Taxonomy (BT) difficulty distribution in the exam question set and Bell Curve Analysis. In future, further research can be done on the LA to explore Deep Reinforcement Learning (DRL) in combination with other algorithms, models or deep neural networks to improve the current proposed system.

CONFLICT OF INTEREST

The authors declare no conflict of interest.

AUTHOR CONTRIBUTIONS

All work in this paper was done among the authors.

REFERENCES

[1] T. N. Tengku Abd Rahim, Z. Abd Aziz, R. H. Ab Rauf and N. Shamsudin, "Automated exam question generator using genetic algorithm," in Proc. 2017 IEEE Conference on e-Learning, e-Management and e-Services (IC3e), Shah Alam, 2017.
[2] Types of AI Agents. Java T Point. [Online]. Available: https://www.javatpoint.com/types-of-ai-agents
[3] S. Russell and P. Norvig, Artificial Intelligence: A Modern Approach, Third Edition, New Jersey: Pearson Education, 2010.
[4] P. Gupta, Rational Agents for Artificial Intelligence, Hackernoon, September 2017.
[5] Funstematics.ai. 2019. [Online]. Available: https://www.funstematics.ai/ai-case-studies
[6] M. Rouse, "Deep learning," SearchEnterpriseAI, TechTarget, October 2019.
[7] T. Williams. (September 2019). "Reinforcement learning vs. deep reinforcement learning: What's the difference?" Techopedia Inc. [Online]. Available: https://www.techopedia.com/reinforcement-learning-vs-deep-reinforcement-learning-whats-the-difference/2/34039
[8] Jay, "Machines that learn with reinforcement learning," Adverai, July 2018.
[9] S. A. El_Rahman and A. H. Zolait, "Automated test paper generation using utility based agent and shuffling algorithm," International Journal of Web-Based Learning and Teaching Technologies, pp. 69-83, 2019.
[10] K. H. Pinjani, R. Y. Raut, and P. S. Yedey, "Developing an intelligent agent for automatic question paper setting," in Proc. National Conference on Advanced Technologies in Computing and Networking (ATCON-2015), Amravati, 2015.
[11] S. Pandey and K. C. Rajeswari, "Automatic question generation using software agents for technical institutions," International Journal of Advanced Computer Research, vol. 3, no. 13, pp. 307-311, 2013.
[12] M. J. Arshad, M. Naz, Y. Saleem, A. Farooq and K. H. Asif, "Modeling an agent for paper generation system using utility based approach," Journal of Faculty of Engineering & Technology, pp. 109-124, 2012.

Copyright © 2020 by the authors. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited (CC BY 4.0).

Tengku Nurulhuda Tengku Abd Rahim received the B.I.T. degree in computer system technology from University of Malaysia Sarawak (UNIMAS), Kota Samarahan, Sarawak, Malaysia in 2000 and the M.C.S. degree in computer science from MARA University of Technology (UiTM), Shah Alam, Selangor, Malaysia in 2016. She is currently a senior engineer at the Artificial Intelligence Department, MIMOS Berhad, Kuala Lumpur, Malaysia.

Ma. Stella Tabora Domingo graduated with a BSCoE in the field of computer engineering from University of Baguio, Philippines in 2005. She currently works as a staff engineer at the Artificial Intelligence Department, MIMOS Berhad, Kuala Lumpur, Malaysia.

Mohamed Farid Noor Batcha graduated with a BSEE in the field of communications from University of Kentucky, United States in 2001 and obtained his master of engineering (electrical) from University Technology Malaysia in 2010. He is currently a staff engineer at the Artificial Intelligence Department, MIMOS Berhad, Kuala Lumpur, Malaysia.

Zalilah Abd Aziz received the BSc (Hons) degree in computer science from ITM/UKM, Shah Alam, Selangor, Malaysia in 1996, the MSc in computer science (software engineering) from University Putra Malaysia (UPM), Serdang, Selangor, Malaysia in 2003, and the PhD in computer science (artificial intelligence) from University of Nottingham, Nottingham, United Kingdom in 2013. She has been with the Faculty of Computer and Mathematical Sciences, MARA University of Technology (UiTM), Shah Alam, Selangor, Malaysia since 1997, where she is currently a senior lecturer. Her primary research interests include software quality, software engineering, artificial intelligence, combinatorial optimization problems, metaheuristics and programming.