
Using concept maps in adaptive knowledge assessment

Alla Anohina, Vita Graudina, Janis Grundspenkis Riga Technical University, Kalku street 1, Riga, Latvia, LV 1658 {alla.anohina; vita.graudina; janis.grundspenkis}@cs.rtu.lv

Abstract: The paper presents a novel approach to adaptive knowledge assessment using concept maps. Adaptive knowledge assessment adapts assessment tasks to the ability and knowledge level of a particular learner and makes the corresponding system more powerful and valuable. The already developed prototype of an intelligent knowledge assessment system based on the multiagent paradigm and concept maps is described, but the paper mainly focuses on support for adaptive assessment as a further enhancement of the system.

Keywords: computer-assisted assessment, concept maps, intelligent assessment system, adaptive assessment

1 Introduction
Computer-assisted assessment of a learner's knowledge is not a novel term in computer science and education. The most widespread systems are those based on various tests in which correct answers are pre-defined. The main drawback of such systems is the level of intellectual behavior that can be assessed: as a rule it is not above the fourth level of the well-known Bloom's taxonomy [4]. Only a few computer-assisted assessment systems that assess higher levels of intellectual abilities and skills have been developed. They are based on strongly subject-dependent tasks such as essays or free-text responses. These systems use methods of natural language processing and are therefore extremely complicated. The paper offers a reasonable compromise: replacing tests with concept maps, which make it possible to assess higher order skills and at the same time do not require natural language processing. Thus, the idea of adaptive knowledge assessment may be implemented in rather simple systems.

The remainder of this paper is organized as follows. Section 2 briefly discusses computer-assisted assessment. Section 3 gives an overview of different assessment tasks based on concept maps. The developed multiagent concept map based intelligent assessment system is described in Section 4. Section 5 introduces computer adaptive assessment. The possibilities of using concept maps in adaptive knowledge assessment with the developed system are discussed in Section 6. Finally, conclusions are presented and some directions for future work are outlined.

2 Computer-assisted assessments
According to [9], the term computer-assisted assessment refers to the use of computers in assessment, encompassing the delivery, marking and analysis of assignments or examinations, as well as the collation and analysis of data gathered from optical mark readers. The most widespread computer-assisted assessment systems are those based on objective tests [5, 24] that offer a learner a set of questions for which the answers are pre-defined [9]. The most widely used question types are multiple choice questions, multiple response questions, graphical hotspot questions, fill-in-blanks questions, text/numerical input questions, etc. Computer-assisted assessment is typically included in virtual learning environments, e.g. Blackboard (http://www.blackboard.com) or WebCT (http://www.webct.com), or it can be implemented in the form of a specialized assessment system. In the latter case there is a set of available tools from both institutional and commercial developers [24]. CASTLE (http://www.le.ac.uk/castle), TRIADS (http://www.derby.ac.uk/assess/newdemo/mainmenu.html), and TAL (http://www.tal.bris.ac.uk) are examples of software developed within the framework of institutional projects. Commercial tools include Hot Potatoes (http://hotpot.uvic.ca), Respondus (http://www.respondus.com), WebQuiz XP (http://eng.smartlite.it/en2/products/webquiz/index.asp), Questionmark Perception (http://www.questionmark.com/us/home.htm), and others. The analysis of these products allows the following functional capabilities of computer-assisted assessment systems to be identified: templates for the creation of questions, full functionality related to the management of questions (creation, removal, editing, etc.), planning of knowledge assessment activities, definition of feedback, reporting on the performance of both learners and questions, creation and support of question banks, extensive possibilities for question randomization (different questions at each attempt of a test, different questions for each learner, etc.), various question delivery modes (without returns to already answered questions, re-answering of questions and moving through them, etc.), multimedia integration into questions, and others.

Computer-assisted assessment provides a number of advantages [10, 14, 16, 18, 24]: greater flexibility regarding the place and time of assessment, the potential for providing assessments to large numbers of learners efficiently, instant feedback to learners, extensive feedback to teachers, reduced errors in comparison with human marking, decreased time needed for supervising and marking assessments, and the potential for frequent assessments. Besides these advantages, computer-assisted assessment systems also have drawbacks [10, 14, 18, 24]: some types of questions cannot be marked automatically, since computer-assisted assessment is suited to questions that require a limited response; unsupervised computer-assisted assessment sessions present a risk of plagiarism and illegal use of other materials; and some learners may have poor information technology skills.

However, the main drawback of such systems is the level of intellectual behavior that can be assessed. According to [5, 16], it is not above the fourth level of the well-known Bloom's taxonomy [4], which includes three levels of lower order skills (Knowledge, Comprehension, and Application) and three levels of higher order skills (Analysis, Synthesis, and Evaluation). In [9] this assertion is called erroneous, but it is pointed out that designing test questions to assess higher order skills can be time consuming and requires skill and creativity. Tasks such as essays or free-text responses, which allow a learner to offer original answers and his/her own judgments and which assess higher order skills, demand a more complex structure and functional mechanisms from a system. Such systems are based on artificial intelligence, for example, e-rater [6], c-rater [15], Auto-marking [23], Atenea [20], and others. Unfortunately, essays and free-text responses are strongly subject and language dependent, and, as a consequence, the corresponding assessment systems are narrowly focused. From the authors' viewpoint, the concept mapping approach offers a reasonable balance between the requirement to assess higher levels of knowledge and the complexity of an assessment system.


3 Concept mapping for knowledge assessment


Concept mapping can be used to externalize and make explicit the conceptual knowledge that a student holds in a knowledge domain [7]. Concept maps are graphs which include concepts as nodes and relations between them as arcs. Sometimes so-called linking phrases are used. Usually concept maps are represented as a hierarchy with the most general concepts at the top of the map and the more specific concepts placed at the lower levels [17]. Concept maps can have different topologies, too [26].

Assessment based on concept maps can be characterized in terms of: 1) a task that invites a learner to provide evidence bearing on his/her knowledge structure in a domain, 2) a format for the learner's response, and 3) a scoring system to evaluate the learner's concept map [21]. Depending on the issued task and the required response format, it is possible to provide learners with various levels of task difficulty, as well as to assess different knowledge levels. One way to deal with different degrees of difficulty is to issue tasks with different degrees of directedness [22]. Directedness is connected with the information provided to learners. Tasks vary from high-directed to low-directed. High-directed concept map tasks provide learners with concepts, connecting lines, linking phrases, and a map structure. In contrast, in a low-directed concept map task learners are free to decide which concepts, and how many of them, should be included and how they will be related in their maps. In other words, tasks can be divided into a subset of fill-in tasks, where learners are provided with a blank structure of a map and lists of concepts and linking phrases, and a subset of construct-a-map tasks, where learners are free to make their own choices.

Fill-in tasks can differ, too (Table 1). First, they vary in what is provided to learners (a concept list and/or a linking phrases list), whether they need to define something by themselves, and whether they need to use linking phrases at all. Second, they vary in how the pre-defined concept map structure is provided: whether it already contains some filled-in concepts and/or linking phrases, or is empty. Construct-a-map tasks can have the same variety as fill-in tasks and, in addition, constraints on the number of concepts to be used in the concept map and on its structure (whether it should be strictly hierarchical or may contain cycles). The degree of difficulty can also be varied through the number of concepts. Assessments based on construct-a-map tasks more accurately evaluate differences in learners' knowledge structures and elicit more higher-order cognitive processes [26].

Table 1. Fill-in tasks: variants A-F, which differ in what is provided to the learner (a concepts list and/or a linking phrases list) and in what the learner has to define himself/herself (the concepts and/or the linking phrases).
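To make the notion of task directedness more concrete, the following sketch models a fill-in task variant as data: which lists are given to the learner and which elements he/she must supply. This is only an illustration; the class and field names are our own assumptions and not part of the described system.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class FillInTask:
    """One fill-in task variant: what the learner is given and what he/she must supply."""
    concepts_provided: bool         # a list of concepts is given to the learner
    linking_phrases_provided: bool  # a list of linking phrases is given to the learner
    define_concepts: bool           # the learner must come up with the concepts
    define_linking_phrases: bool    # the learner must come up with the linking phrases

    def directedness(self) -> int:
        """Rough directedness score: the more that is provided and the less the
        learner must define, the more directed (and easier) the task is."""
        return (int(self.concepts_provided) + int(self.linking_phrases_provided)
                - int(self.define_concepts) - int(self.define_linking_phrases))

# The two ends of the spectrum described in the text (illustrative values only):
high_directed = FillInTask(True, True, False, False)  # everything is provided
low_directed = FillInTask(False, False, True, True)   # the learner supplies everything
assert high_directed.directedness() > low_directed.directedness()
```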


4 The intelligent knowledge assessment system


The first prototype of an intelligent assessment system based on concept maps and the multiagent paradigm has been developed and tested [2]. Its main purpose is to allow a teacher to put into practice the notion of process-oriented learning, in which learners' knowledge is assessed continuously during a learning course. At the moment the system supports only one task: filling in a concept map structure. Two types of links are used. Important conceptual links show that the relationships between the corresponding concepts are considered important knowledge in a given learning course. Less important conceptual links specify desirable knowledge. Linking phrases and link directions are not used in the developed prototype.

The system consists of three modules. The administrator module allows managing data about learners and groups of learners, teachers, and learning courses. The teacher's module supports a teacher in the development of concept maps and in examining learners' final scores. The learner's module includes tools for filling in the concept maps provided by a teacher and for viewing feedback after solution submission. The modules interact by sharing a database which stores data about teachers and their learning courses, learners and groups of learners, teacher-created and learner-completed concept maps, learners' final scores, and the system's users (Fig. 1). The system's functionality and its client/server architecture are described in detail in [2]. Use case diagrams for the construction of a new concept map by a teacher, the examination of learners' results, and the filling in of a concept map by a learner are given in [3].



Fig. 1. The architecture of the system

The system supports the following scenario. A teacher divides a learning course into several stages. Using the developed system, the teacher prepares a concept map for each stage in the following way. Concepts taught to learners at the first stage are included in the first concept map of the learning course. At the second stage learners acquire new concepts, which the teacher adds to the concept map of the first stage without changing the relationships among the already existing concepts. Thus, the concept map of each stage is an extension of the concept map of the previous stage. The concept map of the last stage displays all concepts in the learning course and the relationships between them.

During knowledge assessment learners get the structure of the concept map which corresponds to the current learning stage. At the first stage it is an empty structure with very few initial concepts defined by the teacher. In the subsequent stages new concepts are included in addition to those which the learner has already inserted correctly during the previous stages. After finishing the concept map, the learner confirms his/her solution and the system compares the concept maps of the learner and the teacher on the basis of the five patterns described below. The final score and the learner's concept map are stored in the database. The learner receives feedback about the correctness of his/her solution. At any time the teacher has an opportunity to examine a concept map completed by the learner and his/her score. Figure 2 displays the described scenario.
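As an illustration of this staged structure, the following sketch shows how the concept map of a later stage can be represented as an extension of the previous stage's map. The types and names are hypothetical and do not come from the system's actual code.

```python
from dataclasses import dataclass, field

@dataclass
class ConceptMap:
    """A concept map: concepts as nodes and undirected links between concept pairs,
    each marked as "important" or "less important", as in the prototype."""
    concepts: set[str] = field(default_factory=set)
    links: dict[frozenset[str], str] = field(default_factory=dict)

    def extend(self, new_concepts: set[str],
               new_links: dict[frozenset[str], str]) -> "ConceptMap":
        """Build the next stage's map: keep all existing concepts and links
        unchanged and add the newly taught material."""
        return ConceptMap(self.concepts | new_concepts, {**self.links, **new_links})

# Toy example: a stage 1 map and its stage 2 extension
stage1 = ConceptMap({"A", "B"}, {frozenset({"A", "B"}): "important"})
stage2 = stage1.extend({"C"}, {frozenset({"B", "C"}): "less important"})
assert stage1.links.items() <= stage2.links.items()  # earlier relationships are preserved
```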


Fig. 2. The scenario of the system's operation

The system is a multiagent system which consists of an intelligent agent for assessing the learner's knowledge level and a group of human agents, i.e., the learners who communicate with this agent. The intelligent assessment agent is the core of the system and its unique feature. It forms the basis of the learner's module and includes the communication, knowledge evaluation, interaction registering, and expert agents described in [2]. The intelligent assessment agent is responsible for the comparison of the teacher's and learners' concept maps using an algorithm sensitive to the arrangement of concepts. The main assumption is that the learner's understanding of the relationships between concepts is of primary value, while the type of a link and the place of concepts within the structure of a concept map are secondary. Five patterns of learner solutions can be recognized by the agent (Fig. 3):

Pattern 1. The learner has related concepts exactly as they are connected in the teacher's standard map. In this case the learner receives 5 points for every important link and 2 points for every less important link. Fig. 3b shows that the learner has related concepts A and E, which fully matches the same relationship in the teacher-created concept map (Fig. 3a).

Pattern 2. The learner has defined a relationship which does not exist in the teacher's concept map. In this case he/she does not receive any points. Fig. 3c shows that the learner has related concepts A and H, but such a relationship does not exist in the teacher-created map (Fig. 3a).

Pattern 3. The relationship defined by the learner exists in the standard map and the type of the link is correct, but at least one of the concepts is placed in an incorrect place. The learner receives 80% of the maximum score for that link. Fig. 3d shows that the learner has defined a relationship between concepts B and D which also exists in the teacher's map (Fig. 3a); both concepts are placed in incorrect places, although the type of the link is correct.

Pattern 4. The relationship defined by the learner exists in the standard map, the type of the link is wrong, and at least one of the concepts is placed in an incorrect place. The learner receives 50% of the maximum score for the correct link. This pattern is displayed in Fig. 3e: comparing the learner-defined relationship between A and F with the teacher's one (Fig. 3a), it is easy to see that concept F is placed in an incorrect place and that the link is marked as less important instead of important.

Pattern 5. A concept is placed in a wrong place, but its place is not important. The learner receives the maximum score for the corresponding link. Fig. 3f shows that the learner has exchanged the places of concepts M and L in comparison with the teacher-created concept map (Fig. 3a).
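The pattern-based evaluation can be summarized as a small scoring function. The sketch below is our own illustrative code, not the prototype's implementation: it takes the recognized pattern and the importance of the corresponding link in the teacher's map and returns the points awarded. Recognizing the pattern itself requires comparing concept placement and link types, which is not modeled here.

```python
# Maximum score for a fully correct link, as described in Pattern 1
MAX_SCORE = {"important": 5, "less important": 2}

def score_link(pattern: int, importance: str | None) -> float:
    """Points for one learner-drawn link, given the recognized pattern (1-5) and
    the importance of the matching link in the teacher's map (None for pattern 2,
    where no such link exists)."""
    if pattern == 2:                # the relationship is absent from the teacher's map
        return 0.0
    maximum = MAX_SCORE[importance]
    if pattern in (1, 5):           # exact match, or only an unimportant misplacement
        return float(maximum)
    if pattern == 3:                # correct relationship and link type, wrong placement
        return 0.8 * maximum
    if pattern == 4:                # correct relationship, wrong link type and placement
        return 0.5 * maximum
    raise ValueError(f"unknown pattern: {pattern}")

# Example: a correctly placed important link vs. a misplaced one with the wrong link type
assert score_link(1, "important") == 5.0
assert score_link(4, "important") == 2.5
```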


Fig. 3. Patterns for the evaluation of the learner's solution by the intelligent assessment agent: a) a teacher-created concept map; b)-f) patterns within a learner-created concept map

The developed system has been tested in four learning courses with seventy-four students involved (a testing example is described in [2]). The students evaluated the chosen approach to knowledge assessment positively, as well as the functionality and user interface of the system (the questions and the students' answers are given in [1]). They also expressed a desire to use such an assessment technique in the courses that follow.

The idea of computerized concept mapping is not new at all. A number of commercial and non-commercial graphical software packages and tools already exist, for example, AXON Idea Processor (web.singnet.com.sg/~axon2000/), Inspiration (www.inspiration.com), and IHMC CmapTools (cmap.ihmc.us), which provide such functions as concept map construction, navigation, and sharing, and can be used as useful learning tools. These products, however, do not assess the created concept maps. This task is addressed by tools such as COMPASS [12] and the system described in [8]. The developed system has two distinctive features in comparison with them. First, both known systems consider assessment as a discrete event, while the system described in this paper supports process-oriented learning and allows the teacher to extend the initially created concept map for each new stage of assessment. The second unique feature is the algorithm that compares the teacher's and learner's concept maps and is sensitive to the arrangement and coherence of concepts. A third feature, which is under development at the moment, is concept map based adaptive knowledge assessment, described below in this paper.

5 Computer adaptive assessment


Computer adaptive assessment adapts tests to the knowledge level of each learner [19] in the following way [25]. A learner receives an assessment item of average difficulty. If he/she does not answer this item or gives an incorrect answer, a less difficult item is presented; otherwise, he/she gets a more difficult item. This process continues until the predetermined test termination criteria have been met. Therefore, learners with a low knowledge level do not have to respond to very difficult items, while learners at a high achievement level are not required to answer overly simple items. Adaptive assessment provides more accurate conclusions about the actual knowledge level of each learner [19].
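A minimal sketch of this adaptive loop is given below. It is illustrative only: the item pool, the difficulty scale, and the simple item-budget termination criterion are our assumptions.

```python
def adaptive_assessment(items_by_difficulty, ask, max_items=10):
    """Present items whose difficulty follows the learner's answers: step down
    after a wrong (or missing) answer, step up after a correct one.
    `items_by_difficulty` maps a difficulty level (int) to a list of items;
    `ask(item)` presents the item and returns True if it was answered correctly."""
    levels = sorted(items_by_difficulty)
    level = levels[len(levels) // 2]      # start with an item of average difficulty
    answers = []
    for _ in range(max_items):            # simple termination criterion: an item budget
        if not items_by_difficulty[level]:
            break                         # no items left at this difficulty level
        item = items_by_difficulty[level].pop()
        correct = ask(item)
        answers.append((item, level, correct))
        index = levels.index(level)
        # move one step up or down the difficulty scale, staying within its bounds
        index = min(index + 1, len(levels) - 1) if correct else max(index - 1, 0)
        level = levels[index]
    return answers
```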


This kind of assessment is supported by some of the software products mentioned in Section 2, e.g., Questionmark Perception and TRIADS. There is also some research in this area. The PASS module [11] is based on adaptive testing and adaptive questioning techniques and can be integrated into an Adaptive Educational Hypermedia System in order to provide personalized assessment, selecting the appropriate question for a learner according to the question's parameters, the assessment parameters, and the learner's current knowledge level. E-TESTER [13] automatically creates questions based on the e-learning content provided to a learner.

6 Concept map based adaptive knowledge assessment


The functionality of the developed system can be substantially improved by adapting the offered tasks to the knowledge level of a learner. At the moment the same concept map structure and initial concepts are presented to all learners irrespective of the achievement level of a particular learner. The idea of computer adaptive assessment suggests at least two possibilities.

The first possibility is to change the system's behavior while a learner fills in the structure of a concept map. The system should monitor each learner's performance during the task. If it determines that the learner has some difficulties, it should intervene by filling some empty places with correct concepts. This process continues until only a few empty places remain or the learner completes the task. Despite the simplicity of this approach, there are several unsolved issues (an illustrative difficulty-detection heuristic is sketched after the list):

- What methods will allow the system to determine that the learner has met difficulties?
- Should the system take into account how long the learner does nothing before inserting the next concept?
- Should the system track repeated movements of the same concept?
- What principle should the system use to put correct concepts into empty places?
- What number of empty places can serve as a criterion for terminating the system's interventions?
- What should the system do with concepts which the learner has inserted into the structure of the concept map incorrectly?
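These questions remain open. Purely as an illustration of the kind of heuristic the first two questions refer to, the following sketch flags a learner as being in difficulty when he/she has been idle for too long or keeps moving the same concept around; the thresholds and names are assumptions, not the system's method.

```python
import time

class DifficultyMonitor:
    """Illustrative heuristic: the learner is considered to be in difficulty if
    he/she has been idle for too long or has moved the same concept too many times."""

    def __init__(self, idle_limit_s: float = 120.0, move_limit: int = 3):
        self.idle_limit_s = idle_limit_s    # assumed idle-time threshold in seconds
        self.move_limit = move_limit        # assumed limit on movements of one concept
        self.last_action = time.monotonic()
        self.moves_per_concept: dict[str, int] = {}

    def record_move(self, concept: str) -> None:
        """Called whenever the learner places or moves a concept."""
        self.last_action = time.monotonic()
        self.moves_per_concept[concept] = self.moves_per_concept.get(concept, 0) + 1

    def learner_in_difficulty(self) -> bool:
        idle_too_long = time.monotonic() - self.last_action > self.idle_limit_s
        too_many_moves = any(n > self.move_limit
                             for n in self.moves_per_concept.values())
        return idle_too_long or too_many_moves
```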

The second approach assumes enriching the system with tasks of various types. In this case adaptability can be implemented using the following scenario. At the first assessment stage all learners receive the task of the lowest directedness. If the system determines that a learner has some difficulties, it increases the degree of directedness. This process continues until the learner completes the task or the highest degree of directedness is reached. At the next stage the learner receives a task of the directedness degree which he/she achieved in the previous assessment. The degree of directedness can be lowered for the next stage if the learner successfully completed the task in the previous assessment without the system having had to raise it. Of course, it is necessary to store the achieved degree of task directedness in the learner's profile.

There are still uninvestigated aspects. First of all, it is necessary to find methods which will allow the system to determine that the learner has met some difficulties. The next problem is the large set of concept map based tasks, ranging from fill-in tasks to construct-a-map tasks (Section 3). All of them cannot be implemented due to the huge amount of work needed, so it is necessary to select a subset of tasks. Last, but not least, is the development of the user interface, which should offer the appropriate tools when the directedness degree of the task changes.
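This second scenario can be summarized as a simple update rule for the directedness degree stored in the learner's profile. The sketch below is illustrative only; the numeric scale and the exact update rule are assumptions based on the description above.

```python
MIN_DIRECTEDNESS, MAX_DIRECTEDNESS = 0, 5  # assumed scale: 0 = low-directed, 5 = high-directed

def raise_directedness_during_task(current_degree: int) -> int:
    """Within a stage: if the learner struggles, switch to a more directed (easier) task."""
    return min(MAX_DIRECTEDNESS, current_degree + 1)

def next_stage_directedness(profile_degree: int, completed: bool,
                            was_raised_during_task: bool) -> int:
    """Directedness degree for the next assessment stage: if the learner completed
    the previous task without the system having to raise the directedness, the next
    task can be made less directed (more demanding); otherwise the learner keeps
    the degree he/she ended up with."""
    if completed and not was_raised_during_task:
        return max(MIN_DIRECTEDNESS, profile_degree - 1)
    return profile_degree
```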

7 Conclusions and future work


The paper focuses on computer adaptive assessment, whose main advantage is the opportunity to present learners with tasks appropriate to their knowledge level. Concept maps can be successfully used as the core of adaptive assessment systems. The authors believe that the use of concept maps makes it possible to issue tasks with different levels of difficulty and to assess the fourth and fifth levels of knowledge according to Bloom's taxonomy. At the moment the analysis of the described possibilities for adaptive assessment using concept maps is at an early stage.

8 Acknowledgment
This work has been partly supported by the European Social Fund within the National Program "Support for the carrying out doctoral study programs and post-doctoral researches" project "Support for the development of doctoral studies at Riga Technical University". The main results are outcomes of the research project "Concept maps and ontology based intelligent system for student knowledge self-assessment and process oriented knowledge control".


References
1. Anohina A, Graudina V, Grundspenkis J (2006) Intelligent system for learners' knowledge self-assessment and process oriented knowledge control based on concept maps and ontologies (accepted for the Annual Proceedings of Vidzeme University College)
2. Anohina A, Grundspenkis J (2006) Prototype of multiagent knowledge assessment system for support of process oriented learning (accepted for the Proc. of the 7th Int. Baltic Conf. on DB&IS)
3. Anohina A, Stale G, Pozdnyakov D (2006) Intelligent system for student knowledge assessment (accepted for the Proceedings of Riga Technical Univ.)
4. Bloom BS (1956) Taxonomy of educational objectives. Handbook I: The cognitive domain. David McKay Co Inc., New York
5. Bull J. Introduction to computer-assisted assessment. Available at: http://asp2.wlv.ac.uk/celt/download.asp?fileid=44&detailsid=200008
6. Burstein J, Leacock C, Swartz R (2001) Automated evaluation of essays and short answers. In: Proc. of the 5th Int. Computer Assisted Assessment Conf., pp 41-45
7. Cañas A (2003) A summary of literature pertaining to the use of concept mapping techniques and technologies for education and performance support
8. Chang KE, Sung YT, Chen SF (2001) Learning through computer-based concept mapping with scaffolding aid. J. Computer Assisted Learning, vol. 17: 21-33
9. Computer-assisted Assessment (CAA) Centre - http://www.caacentre.ac.uk
10. E-assessment. Available at: http://en.wikipedia.org/wiki/E-assessment
11. Gouli E, Papanikolaou KA, Grigoriadou M (2002) Personalizing assessment in adaptive educational hypermedia systems. In: Proc. of the 2nd Int. Conf. on Adaptive Hypermedia and Adaptive Web-Based Systems. Springer-Verlag, London, UK, pp 153-163
12. Gouli E, Gogoulou A, Papanikolaou K, Grigoriadou M (2004) COMPASS: an adaptive Web-based concept map assessment tool. In: Proc. of the 1st Int. Conf. on Concept Mapping


13. Guetl C, Dreher H, Williams R (2005) E-TESTER: A computer-based tool for auto-generated question and answer assessment. In: Proc. of the E-Learn 2005 World Conf. on E-Learning in Corporate, Government, Healthcare, and Higher Education, vol. 2005, no. 1, pp 2929-2936
14. Lambert G (2004) What is computer aided assessment and how can I use it in my teaching? (Briefing paper) Canterbury Christ Church University College
15. Leacock C, Chodorow M (2003) C-rater: scoring of short-answer questions. Computers and the Humanities, 37(4): 389-405
16. Mogey N, Watt H (1996) Chapter 10: The use of computers in the assessment of student learning. In: Stoner G (ed) Implementing Learning Technology. Learning Technology Dissemination Initiative, pp 50-57
17. Novak JD, Cañas AJ (2006) The theory underlying concept maps and how to construct them. Technical Report IHMC CmapTools 2006-1
18. Oliver A (2000) Computer aided assessment - the pros and cons. Available at: http://www.herts.ac.uk/ltdu/learning/caa_procon.htm
19. Papanastasiou E (2003) Computer-adaptive testing in science education. In: Proc. of the 6th Int. Conf. on Computer Based Learning in Science, pp 965-971
20. Pérez D, Alfonseca E, Rodríguez P (2004) Application of the BLEU method for evaluating free-text answers in an e-learning environment. In: Proc. of the 4th Int. Language Resources and Evaluation Conf.
21. Ruiz-Primo MA, Shavelson RJ (1996) Problems and issues in the use of concept maps in science assessment. J. Res. Sci. Teaching 33(6): 569-600
22. Ruiz-Primo MA (2004) Examining concept maps as an assessment tool. In: Proc. of the 1st Int. Conf. on Concept Mapping
23. Sukkarieh JZ, Pulman SG, Raikes N (2003) Auto-marking: using computational linguistics to score short, free text responses. In: Proc. of the 29th Conf. of the Int. Association for Educational Assessment
24. Using computer assisted assessment to support student learning. The Social Policy and Social Work subject centre with the Higher Education Academy (SWAP). http://www.swap.ac.uk/elearning/develop6.asp


25. What is CAA? (2005) Castle Rock Research Corp. - http://www.castlerockresearch.com/caa/Default.aspx
26. Yin Y, Vanides J, Ruiz-Primo MA, Ayala CC, Shavelson RJ (2005) Comparison of two concept-mapping techniques: implications for scoring, interpretation, and use. J. Res. Sci. Teaching, vol. 42, no. 2: 166-184
