Assessing the Effects of ICT in Education
European Commission, Joint Research Centre

Legal notice: The opinions expressed and arguments employed herein do not necessarily reflect the official views of the OECD or of the governments of its member countries, or those of the JRC or the European Commission. Neither the European Commission nor any person acting on behalf of the Commission is responsible for the use which might be made of this publication.

A great deal of additional information on the European Union is available on the Internet. It can be accessed through the Europa server (http://europa.eu/). This document has been produced with the financial assistance of the JRC. Cataloguing data can be found at the end of this publication.

Luxembourg: Publications Office of the European Union, 2009
ISBN 978-92-79-13112-7
doi:10.2788/27419
© European Union/OECD, 2009. Reproduction is authorised provided the source is acknowledged. Printed in France.
Contents
Introduction
Friedrich Scheuermann and Francesc Pedró
What do we know about the effective uses of information and communication technologies in education in developing countries?
Michael Trucano

ICT to improve quality in education: a conceptual framework and indicators in the use of information communication technology for education (ICT4E)
Marcelo Cabrol and Eugenio Severin

A conceptual framework for benchmarking the use and assessing the impact of digital learning resources in school education
Beat Bilbao-Osorio and Francesc Pedró

The impact of ICT in education policies on teacher practices and student outcomes in Hong Kong
Nancy Law, Yeung Lee and H.K. Yuen
Introduction
Although education systems have been investing heavily in technology since the early 1980s, international indicators on technology uptake and use in education are still missing. For more than 25 years education systems have designed and implemented policies in this domain without such indicators, so the question is: why start discussing them now? Is the information available not good enough?
Why now?
The existing international indicators still mirror the first policy priorities of the early 1980s: securing student access to computers and the Internet in schools. Indicators such as the ratio of students per computer or the percentage of schools with broadband access, although still a concern in some countries, no longer provide the most relevant information for today's policy questions in the field: how is technology used in schools? Is this use truly supporting the emergence of the learning environment that a knowledge-based society requires? Certainly, knowledge economies and societies would greatly benefit from a broader set of internationally comparable indicators. These could monitor progress in ICT uptake and unveil important information about use, ranging from frequency to purpose. If compiled within an internationally comparable framework, they would become an important tool for benchmarking policies and practices across countries and over time.

Our increasingly technology-rich world raises new concerns for education while also expecting schools to become the vanguard of knowledge societies. Firstly, technology can provide the necessary tools for improving the teaching and learning process, opening new opportunities and avenues. In particular, it could enhance the customisation of the educational process, adapting it to the particular needs of each student. Secondly, education has the role of preparing students for adult life, and therefore it must equip them with the skills needed to join a society where technology-related competencies are becoming increasingly indispensable. The development of these competencies, which belong to the set of so-called 21st century competencies, is increasingly becoming an integral part of the goals of compulsory education. Finally, in a knowledge economy driven by technology, people who do not master these competencies may suffer from a new form of digital divide that may affect their capacity to integrate fully into the knowledge economy and society.
For these reasons, most countries have recently made significant investments to enhance the role of technology in education, after some years of less activity immediately following the bursting of the Internet bubble. Many would say that the incorporation of technology in education has lost its status as policy priority number one, although for a number of political reasons investments have not stopped. In many respects, the principle of "build it and they will come" seems to have taken root, and education systems keep investing in technology in the belief that, sooner or later, schools and teachers will adopt it and benefit from it. The question that arises, then, is whether these new investments are paying off: is this investment in technology within education systems managing to fulfil expectations?
competences. In 2007 the Council identified a framework of 16 core indicators for monitoring progress in the field of education; ICT skills are a core indicator in this framework. Technology is hence expected to play an increasing role in education in the coming years. Last but not least, there is the pending issue of whether today's teaching and learning experience in schools matches what could be expected from a knowledge society. The question is not which technology leads to increased productivity in education, but which new technology-supported methodologies improve student performance over traditional ones, if any at all, and which other factors intervene. Previous calls have already been made to investigate the explicit relationships among technology, instructional strategy, psychological processes and contextual factors. The almost infinite array of methodological possibilities makes this kind of investigation extremely difficult, but not impossible, provided that sufficient effort is devoted to the accumulation and dissemination of the resulting knowledge base. Such a task might appear overwhelming, particularly as the technological frontier is constantly changing. However, it is worth the effort. Policymakers and researchers cannot be in a position to monitor what is truly going on in schools unless critical indicators about the intensity, purpose and context of technology use in education are available.
Pedró discuss the conceptual approach proposed by the OECD for examining the impact of digital learning resources and benchmarking their use in school education. A series of reflective case studies are presented in Chapter 4. One important aspect of ICT impact assessment is to be clear about what is to be assessed at the individual level and to think about appropriate ways of measuring it. Technology use and critical thinking and problem-solving approaches (new literacies) are discussed by Edys S. Quellmalz in the context of assessment design and implementation. She looks at current approaches in assessment and underlines the need to reach consensus about what is to be measured. ICT implementation policies in education in Hong Kong are then analysed by Nancy Law, Yeung Lee and H. K. Yuen in terms of their impact on teaching and learning processes. They also present an interesting research design and concepts of information literacy assessment. Willem Pelgrum then reports on monitoring scenarios and sets of indicators on the use and impact of ICT in primary and secondary education. His work is based on the results of a study carried out in the European Union, which can be seen as a further step towards implementing mechanisms for regular monitoring of ICT implementation at a European level. A theoretical framework of the various factors affecting ICT use in education is presented by Heeok Heo and Myunghee Kang. This framework was embedded in a nationwide investigation in Korea. The findings clearly indicate that a better understanding of the real impact can only be achieved if more consideration is given to the use of ICT in informal learning. In addition, results from a comparative analysis of ICT in primary education across European countries are then described by Roger Blamire.
The approach was based on an analytical framework allowing an examination of impact at three different levels: on learning and learners, on teachers and teaching, and on primary school development plans and strategies. Altogether these cases help us to better understand the need for comprehensive studies of the complex interactions between various types of ICT implementation and their effects, including other relevant factors which have not yet been addressed by existing studies. The aim of this book is to provide a basis for the design of frameworks and the identification of indicators and existing data sources, as well as of gaps in areas where further research should be initiated. The contributions clearly demonstrate the need to develop consensus around widely accepted approaches, indicators and methodologies. In this context, more harmonisation of existing survey approaches would be desirable. This collection of articles therefore follows the intention of both organisations, the OECD and the European Commission, to foster international cooperation with other relevant international organisations and to serve as a starting point for common reflection on ways to assess how ICT is used in education. Without such an assessment, it is virtually impossible to make progress towards understanding better how actual pedagogies are transformed and which policies, at both national and local levels, are making a difference. Only a truly international comparative effort can provide the necessary evidence. And even if the contributions in this book show a vast diversity of
perspectives, at least they point in the right direction. Even more important than getting hard evidence is making significant progress in understanding the worth of technology in education and in how to measure that progress. This book has to be seen as a serious attempt to touch base and, as such, should be taken as the beginning of a journey. The sooner we start walking, the better.

Friedrich Scheuermann
Francesc Pedró
CHAPTER
CONTEXT AND GENERAL REFLECTIONS
In search of the sustainable knowledge base: multi-channel and multi-method?
Addressing the complexity of impact: a multilevel approach towards ICT in education
Abstract
This article discusses the need to develop an open, flexible and international knowledge base for ICT in education, in which the joint development of benchmarks can play a key role in addressing complexity, multi-stakeholder interests and international comparisons. The need for a multi-channel and multi-method approach is elaborated. The article is written from the point of view of a policymaker.
Introduction
ICT (information and communication technologies) in education lives at the crossroads between evidence-based policymaking, learning and the fast-changing world of technology. Key stakeholders (politicians, parents, teachers, school leaders) demand evidence of the impact of ICT derived from research, monitoring and evaluation. The challenge for policymakers is to develop, in collaboration with the research community and the educational community, a sustainable knowledge base for ICT in education, in which key indicators and other sources of information are identified, enabling better insight into the use and effects of ICT for learning. I have chosen to discuss the issue of developing benchmarks for ICT in education because benchmarks are embedded in the evolving knowledge base in this field. This article is structured in four parts. In the first part, I describe the policy backdrop against which the issue of developing a sustainable knowledge base should be discussed. The second part focuses on what we have learnt from R & D with regard to the effects of ICT in education. In the third part, I describe the concept of the multi-channel and multi-method knowledge base, before finishing with some remarks on a systemic approach to benchmarks and other critical components of a knowledge base for ICT in education. This article is written from the point of view of a policymaker.
and their underlying rationales share many common features. Kozma (2008) has identified four important reasons for investing in ICT in education:

- to support economic growth, mainly by developing human capital and increasing the productivity of the workforce;
- to promote social development by sharing knowledge, fostering cultural creativity, increasing democratic participation, improving access to government services and enhancing social cohesion;
- to advance education reform, i.e. major curriculum revisions, shifts in pedagogy or assessment changes;
- to support educational management and accountability, with an emphasis on computer-based testing and the use of digital data and management systems.

These reasons relate the issue of ICT in education to its function in a broader, societal context. The role of ICT in education must also be linked to educational needs. In many countries, the role of ICT is linked to issues of educational attainment and the importance of ICT for advancing robust learning strategies on the part of students. A second area is ICT as a tool to support personalisation strategies in teaching and learning. ICT can also be used to increase visualisation and variation in many subjects. As a greater proportion of our homes are linked to the Internet, the role of ICT in home/school access is now being exploited. Many children start to use ICT at an early age, and the home and the family are, in many cases, the arena for the initial acquisition of digital skills. Thus, education has a role to play in furthering these skills, based on pedagogical principles. Our educational systems should bear in mind that ICT should be an integral part of learning, in order to provide learners from families with a low socio-economic status with the digital skills necessary for learning, work and life, and thereby avoid digital divides. ICT is not integrated in education for its own sake. Proper integration of ICT into key policy priorities in different countries can be a productive approach to securing ICT as a mainstream part of education. In Norway, ICT is not subject to a separate strategy; rather, it is embedded in the national curriculum and linked to overall political priorities stated by the government: quality of learning, higher completion rates, and students' well-being and mastery.
The OECD, through its work on the PISA studies, has been able to demonstrate interesting correlations between home access to and use of ICT on the one hand and PISA scores on the other. The relation between ICT use at school and PISA scores is far more complex. So far, these correlations have not been explained. The study E-learning Nordic (Ramboll Management, 2006), which looks at the perceived impact of ICT, shows that all stakeholders (students, parents, teachers, principals) believe that ICT can have a positive impact on teaching and learning. The studies and reports mentioned above are part of a plethora of studies. The European Schoolnet shows in its meta-study of impact studies (EUN, 2006) that there are a number of studies, including some related to patterns of use across the technological spectrum. Impact studies cover a wide spectrum, from the search for causal relationships between ICT and educational attainment on the one hand to studies looking at the perceived impact of ICT on the other. The focus of some studies has been on causality and on quantitative issues regarding ICT use. It is time to review critically whether we have been asking the right research questions. In its first report on ICT and PISA scores (OECD, 2004), the OECD states:
It is the quality of ICT usage, rather than necessarily the quantity, that will determine the contribution that these technologies make to students' outcomes.
Instead of looking for causality, we need to ask how we can improve and optimise the use of ICT in teaching and learning, and in doing so we also need to listen to the voices of the learners and the practitioners.
developed a superb infrastructure, access to ICT is still an issue in many European countries. This is particularly true if one looks at access issues on a global scale. The same goes for the need to monitor evolving patterns of use. We need to be able to assess the speed of uptake of different technologies for learning as well as the degree of variation across the spectrum of learning technologies. A particular challenge with regard to monitoring patterns of use is the high degree of technological and cultural diversity to be expected in many countries around the globe. Gender issues are also visible. PISA data show that although the gap between genders is closing, there are still interesting differences to be found with regard to patterns of use. A fairly new dimension of gender issues is that it might be just as important to study differences within a gender as between genders. Digital learning resources (DLRs) are characterised by complexity: a crossroads between pedagogy, technology, IPR and the marketplace. This is an area which, in my opinion, has been under-assessed, and we need a stronger focus both on the benchmarking of digital learning resources and on a research agenda for DLRs and learning. For PISA 2003 and PISA 2006, follow-up analyses based on ICT data have been undertaken. In future, the ICT analysis of PISA should be replicated and improved, and the ICT familiarity questionnaire should be updated in order to keep up with the evolving use of ICT for learning. Few countries have developed good methodologies for assessing digital skills among students. Such methodologies should be developed both within and across subjects.
Figure 1: Pupil use of digital content, computer games, mobile phones and office programs seventh grade, ninth grade and VK1, where daily and weekly have been merged (in percentages).
Some countries monitor both access to and use of ICT. The Norwegian ITU Monitor (Arnseth et al., 2007) is a biennial survey that assesses the status of ICT in Norwegian schools; the figure above shows an example of patterns of use among Norwegian students. The list of topics shows that there are many phenomena in ICT and learning that should be monitored and assessed through a variety of channels, but is this enough? In the next section I will elaborate on the need for a multi-method approach in order to ensure a sustainable and systemically coherent knowledge base.
development is to be able to capture the complexity of the learning process. In my view, we need to further explore the potential of ethnographic research and so-called test-bed studies. A downside to these approaches, however, is that they are demanding in terms of both time and money. In the last couple of years, we have seen projects in several countries aiming to capture the voices of the learners. One example is the digital generation project, funded by the MacArthur Foundation programme for digital media and learning. The project conveys how children develop engagement, self-directed learning, creativity and empowerment through the use of digital media. Our educational systems need to develop our ability to listen to and reflect on the voices of the learners in order to understand how digital media influence the lives and learning of our children. This topic will be addressed in the second half of the OECD New Millennium Learners project. Digital media play a much bigger role in the lives of our students today than before. A Norwegian report from 2008 (Arnseth et al., 2008) shows that more than nine out of 10 adolescents aged 16 to 19 use social media, and three out of four use social media on a daily basis. This raises the question of whether ICT use in schools alone should form the basis of our understanding of digital media and learning. We may have to broaden the scope and include out-of-school use of digital media, given their extensive use at home. This would also acknowledge the fact that the home is the first arena in which youngsters acquire digital skills: an informal, but nevertheless important, arena.
A system of benchmarks
As I wrote in the introduction to this paper, benchmarking is an integral part of the knowledge base that national authorities and the research and educational communities must develop. Developing a system of benchmarks is an exercise that requires careful planning and solid reflection on the selection and usability of benchmarks. As a point of departure for discussion, it is possible to distinguish between different types of benchmarks for ICT in education. I have divided them into first, second and third order benchmarks. First order benchmarks are typically related to access to ICT; examples are the pupil:PC ratio and broadband access. Second order benchmarks try to capture in what ways and to what extent ICT is used in teaching and learning. These benchmarks can cover a wide range of use patterns and learning technologies, and they should capture both teachers' and students' use of ICT for learning. Third order benchmarks should cover the impact of ICT on teaching and learning; such benchmarks should be related to learning outcomes and learning strategies. The development of benchmarks should pay attention to the need for research and development in order to meet demands for validity and methodological rigour. Many countries have elaborated benchmarks of the first and second order, but it has proved difficult to develop solid third order benchmarks. Further research efforts should therefore be directed at the development of such benchmarks.
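The three orders of benchmarks described above can be illustrated as a minimal data model. The sketch below is purely hypothetical: the indicator names, the coding choices and the figures are invented for illustration and are not taken from any actual monitoring framework.

```python
from dataclasses import dataclass

@dataclass
class Indicator:
    name: str
    order: int   # 1 = access, 2 = use, 3 = impact
    value: float

# Hypothetical school-level figures, for illustration only.
indicators = [
    Indicator("pupils_per_pc", order=1, value=420 / 120),       # first order: 420 pupils, 120 PCs
    Indicator("broadband_access", order=1, value=1.0),          # first order: yes/no coded as 1/0
    Indicator("weekly_ict_lesson_share", order=2, value=0.35),  # second order: share of lessons using ICT
]

# Group indicator names by benchmark order for reporting.
by_order: dict[int, list[str]] = {}
for ind in indicators:
    by_order.setdefault(ind.order, []).append(ind.name)

print(by_order[1])  # first order (access) indicators
```

Third order (impact) indicators are deliberately absent from the sketch, mirroring the point above that solid third order benchmarks have proved the hardest to construct.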
Another important consideration regarding the benchmarking of ICT in education relates to the search for precision and validity. Given the complexity of education, the underlying research-based concepts and models will inevitably reach a high level of sophistication. Herein lies a danger: the models can be too ambitious in their striving for perfection. It is important to realise that the concepts and models behind benchmarks must find an equilibrium between simplicity and complexity because, at the end of the day, they should meet the needs of policymakers and practitioners.
suited for international comparisons. However, so far little work has been done to develop an agreed international framework for benchmarking ICT in education. It should be in everyone's interest to develop such a framework; this could be done in a joint OECD-EU collaboration. One important consideration is to agree on common topics for benchmarking, and it is in my opinion vital to make sure that a sufficient spectrum of issues is addressed. Digital learning resources are a good case for benchmarking development: DLRs have a high degree of complexity, they are important for the quality of learning, and there is too little evidence on their impact. Developing a framework for benchmarking is a challenge that cannot be solved by one party alone. It is vital that such a framework be developed in a triangular collaboration between researchers, policymakers and practitioners. The notion of methodological validity is important in research and benchmarking. When it comes to the benchmarking of ICT, and the question of who has the power to define what we benchmark, it is in my opinion interesting to combine methodological validity with the notion of political validity. By political validity I mean, in the context of benchmark development, that the choice of benchmarks should not only be directed by methodological perspectives but should also pay attention to the needs of key stakeholders in education. As such, developing benchmarks should take place at the crossroads between policy, practice and research. Methodological validity ensures that we can trust the information we get from benchmarks; political validity ensures that stakeholders in politics and society get the information they need.
References
Arnseth, H. C., Hatlevik, O., Kløvstad, V., Kristiansen, T. and Ottestad, G. (2007). ITU Monitor 2007. Oslo: ITU.
Arnseth, H. C., Bucher, T., Enli, G., Hontvedt, M., Kløvstad, V., Maas, A. and Storsul, T. (2008). Nye nettfenomener: Staten og delekulturen. Oslo: ITU and University of Oslo.
European Schoolnet (EUN) (2006). The ICT impact report: a review of studies of ICT impact on schools in Europe. Brussels: European Schoolnet.
Harrison, C. et al. (2002). ImpaCT2: the impact of information and communication technologies on pupil learning and attainment. Coventry: Becta.
Kozma, R. B. (2008). Comparative analysis of policies for ICT in education, in: J. Voogt and G. Knezek (eds), International handbook on information technology in primary and secondary education. New York: Springer.
OECD (2004). Are students ready for a technology-rich world? What PISA studies tell us. Paris: OECD.
OECD (2008). New millennium learners: a project in progress. Paris: OECD.
Ramboll Management (2006). E-learning Nordic 2006: impact of ICT on education. Copenhagen: Ramboll Management.
The George Lucas Educational Foundation, The digital generation project. http://www.edutopia.org/digital-generation
The MacArthur Foundation, Digital media and learning. http://www.macfound.org/dml/
Abstract
Within research on ICT and school development there is an increasing understanding of the complexity involved in such processes. However, the focus on indicators and the impact of ICT in education from a policy perspective has been oriented towards a narrower understanding of impact and outcomes, especially at the individual level. This article argues for a multilevel approach towards ICT in education in order to fully understand the impact of such technologies on the education system. In the first part, some theoretical reflections on change and on impact research are presented; in the second part, some examples are described, mostly from a Norwegian setting; and in the last part, some key indicators of impact at different levels are discussed.
Introduction
The most important point I have learned from studying the impact of ICT (information and communication technologies) on Norwegian education during the last 10 years is the complexity and multilevel nature of such innovations. The challenge is not so much to develop indicators for ICT in education as such: at present there are several frameworks of indicators available, covering the implementation of ICT in educational settings, digital literacy, leadership and so forth. The challenge is rather to study different levels and domains at the same time, and to bring different sets of indicators together into one strategy in order to assess the broad scope of ICT's impact on education.
In recent years, there has been a tendency to argue that complexity is an issue in itself in studying knowledge practices (Law and Mol, 2002) or in studies of ICT, development and schools (Engeström, Engeström and Suntio, 2002; Thomson, 2007). In order to fully understand or assess the effects of ICT in education, we need to know more about how ICT operates at different levels, and about what we are really measuring at which level. It is crucial that we synthesise the research from a holistic perspective in order to lay a foundation for further development in this area (Sutherland, Robertson and John, 2009). In this article, the argument is built around the need to look at the bigger picture in order to create sustainable developments throughout our education systems, and to understand ICT as a catalyst for change
on different levels. This creates challenges for the development of indicators of the impact of ICT in education since several sets of indicators need to be developed and different methods must be used. The objective would be to build a model that looks at how different levels and dimensions work together to create conditions for change and the integration of ICT in educational practice.
and people work together and relate to each other, as a globalising process (Castells, 1996). Education is also thought of in a more distributed way through the use of these technologies for educational purposes, as in computer-supported collaborative learning (CSCL). The challenge, and the complexity, rests on how these levels and perspectives relate to each other. This is a challenge for educational research in general, but especially when trying to understand the mechanisms involved in the educational use of ICT. In the research literature there is now a greater awareness of multilevel analysis (Van Dijk, 2009) and of more holistic approaches to learning and school development (Hakkarainen, Palonen, Paavola and Lehtinen, 2004; Arnseth and Ludvigsen, 2006; Sutherland, Robertson and John, 2009). As David Olson has pointed out in his book Psychological theory and educational reform (2003):

The problem, I believe, is that the theories that gave us insight into children's understanding, motivation, learning and thinking have never come to terms with schooling as an institutional practice with its duties and responsibilities for basic skills, disciplinary knowledge, grades, standards, and credentials … What is required, then, is an advance in our understanding of schools as bureaucratic institutions that corresponds to the advances in our understanding of the development of the mind. (D. Olson, 2003:xxi)
Understanding change
A major challenge for developments within technology and education today is to grasp the complexity of such developments. In general, there has been a tendency to simplify research approaches and our understanding of how digital technologies might have an impact on schools and educational outcomes (Cuban, 1986, 2001; Erstad, 2004), and evidence of the impact of ICT on educational practice has mainly been drawn from small-scale case studies (Condie and Munro, 2007). Both policymakers and researchers have created expectations about the impact of information and communication technologies on student learning which have not gained strong support in the research literature (ibid.). Much research has been oriented towards the new possibilities and limitations created by the implementation of digital technologies in educational settings (De Corte, Verschaffel, Entwistle and van Merrienboer, 2003). Other research and development initiatives have been directed more towards the institutional framework of school development and the use of ICT (Krumsvik, 2009). In recent years, there has also been a growing interest in networks, both online and offline (Veugelers and O'Hair, 2005). The argument goes that digital technologies have created a new situation for how organisations
Olson argues that the challenge is to combine different levels in our understanding and analysis of key characteristics of how schools function as learning organisations, and also the conditions for changes of activity at different levels.
In his classic book The new meaning of educational change (1991), Michael Fullan presents a broad framework covering the different levels and actors involved in educational reform and school development. In his later book Change forces (1993), he addresses the real complexity of dynamic and continuous change, showing the challenges this implies both for people's mindsets and for the mechanisms defining educational practices. This has made the research community understand that change is not an event that occurs in such a way that a before and after can be recognised and measured; rather, Fullan defined change as a process. In recent years, this has been taken up by other researchers trying to develop models to study, and also to create interventions in, educational practices in order to work towards school development. This represents a movement away from traditional models of change based on organisational theory, such as those of Senge or Nonaka and Takeuchi, towards models trying to grasp the complexity of change processes through the activities involved. The most important perspective for studying change processes in schools in recent years has been activity theory, or more specifically cultural-historical activity theory (CHAT) (Engeström, 1987). This perspective grew out of the intellectual work done by the Russian psychologist Vygotsky in the 1920s and 1930s, and later by Leontjev. Its focus is on activity as the unit of analysis and on mediation between actors and certain cultural tools. Yrjö Engeström has expanded this model beyond the person and the tools by introducing a larger framework of factors that are part of developmental processes at different levels, such as rules and
norms, division of labour and communities of practice. The relation between these factors is defined as an activity system, and within an organisation and between organisations there might be several activity systems that relate to each other in different ways. The complexities of knowledge creation and knowledge building have been an issue within research communities dealing with computer-supported collaborative learning (CSCL), studying how collaborative and distributed ways of working using different technological applications stimulate knowledge building among learners. This can be seen in the developmental work done by Marlene Scardamalia and Carl Bereiter in Canada (Scardamalia and Bereiter, 2006). Knowledge building, and the technological platform that has been developed for it (Knowledge Forum), aim for collective cognitive responsibility among learners. Collective responsibility refers to a condition in which responsibility for the success of the group is distributed across all members rather than being concentrated in the leader. Collective cognitive responsibility refers to taking responsibility for knowing what needs to be known on the cognitive level, in addition to the more tangible practical aspects. Networking is a broad conceptualisation based on global perspectives on social development, but one which also relates specifically to the role of education in moving towards knowledge societies and the role of networking in such processes. As an example, in the Unesco report Towards knowledge societies (2005), the concept of learning is closely tied to innovation and networking. Credé and Mansell (1998) have also shown how this thinking on knowledge societies and networking is fundamentally based on identifying new ICT opportunities.
In her literature review on whole school change, Thomson emphasises that: "The ways in which we think about the school also impact on what counts as change. There are two important aspects to thinking about change in schools: (1) understanding the school as an organisation, and (2) understanding that change will be multilayered" (2007:15). In her presentation of a framework for change, she focuses on two important themes: the timing of, and time for, whole school change, and a supportive framework. The impact of ICT has become a key factor in many studies seeking to understand how new technologies might be both a catalyst and a driving force for change processes in themselves, and also an element that supports change within organisational settings. All this points towards a stronger emphasis on multilevel approaches that study change and the impact of ICT on different levels within the same analysis.
The upscaling of activities has brought about a need to develop indicators that capture the more systemic developments of ICT in education, and how these transcend to the micro level of teaching and learning by teachers and students: not how we change single schools in the way they work with ICT, but rather how all schools and the school system as such experience changes by implementing and using ICT. One example is the national curriculum in Norway, from 2006, which defines the ability to use digital tools and digital competence as a basic skill throughout the curriculum. In this way, the Ministry of Education and Research has placed a strong emphasis on ICT as part of learning activities in schools. ICT should be an integrated part of learning activities among all students, at all levels of primary and secondary education and in all subjects. This also challenges how schools are organised. The focus on ICT and digital competence in the new national curriculum builds on former plans and documents. At the same time, it points towards future competencies, also termed 21st century skills (www.21stcenturyskills.org). The important implication for the discussions in this article is the commitment this implies for teachers and students to use ICT much more broadly in the learning activities in schools. In this way, a stronger push mechanism is created for school leaders and teachers to work towards capacity building on school development and the use of ICT in order to fulfil the challenges of the new curriculum. Important national objectives related to the new national curriculum can be summarised as follows:
- a focus on how ICT can contribute to increased quality in teaching and learning;
- an increased use of new ICT-based means for cooperation and the interchange of knowledge and experience at all levels of the educational system;
- broad access to learning materials and the development of new and varied forms of learning, in order to stimulate activity, independence and cooperation;
- an increased focus on students' critical reflection with respect to the use of ICT in teaching and learning and in society in general;
- an increased focus on how to avoid creating digital divides.

Such curriculum developments also point to the need for multilevel analysis in the ways we study the impact of ICT on education. They also bring an orientation towards the competencies students need today and in the future, which our school system needs to take into consideration. Competencies are here understood on different levels: not only as an individual ability, but also on the collective level and the school level. In a Norwegian context, we have had different projects and strategies for developing indicators on these different levels, also trying to define what is called the digitally competent school, or digital maturity. Before I move on to some reflections on a multilevel approach to indicator development on ICT in education, I want to give two examples from my own research where such a multilevel approach has become apparent.
Two examples
Example 1: PILOT (Project Innovation in Learning, Organisation and Technology)
PILOT was the largest and most extensive project in Norway related to the pedagogical use of ICT in schools during the years 2000–04. The project was initiated by the Ministry of Education and Research, and a national agency (ITU) was responsible for coordinating the research work and the research communities involved in the project. The project was part of an upscaling of activities on a national level using new digital technologies, from a few innovative teachers and schools towards whole school communities, including many schools. Some 120 primary and secondary schools in nine regions of Norway took part in this four-year research and development project, based on interventions concerning the educational use of ICT and the development of a framework within whole school settings. The aim of the project was to get the participating schools to develop the pedagogical and organisational opportunities afforded by the use of ICT, and to develop and spread new knowledge on this subject. The research design was structured with a quantitative part (pre-post) and a qualitative part (during). In the initial phase, infrastructure and technological challenges were in focus. In the second part of the project, however, the focus was much more on various pedagogical approaches to education. This was due in part to the fact that the use of technology had become more common in everyday life at many of the schools, and in part to the fact that technology could not be used as a helpful aid until the proper
conditions had been established. In other words, the schools spent time restructuring the school day so that they could benefit from the educational opportunities that ICT represented. A number of the regions reported a positive impact on the pupils' learning achievement with respect to academic performance, motivation for learning and changes in subject content through the use of digital learning resources. Results from this project showed that schools handled the challenges of change and the introduction of ICT as a new object in very different ways. Four typologies of schools were identified according to two dimensions: one ranging from unsystematic to systematic in the way school communities worked towards school development, and another ranging from a development-oriented school culture to one dominated by resistance to change (Erstad, 2004).
Technological problems dominated the project during the first year, but were then resolved for most schools.
School administrators
PILOT, as a project involving the whole school community, was challenging for school administrators. The majority of principals reported that their school had initiated changes in activities in the school organisation due to the integration of ICT, such as no longer using paper for sending out messages and instead putting them on the local network.
Teachers

Teachers believed that ICT had a positive effect on pupils' performance and that it created more flexibility and differentiation, and this tendency was amplified during the course of PILOT. After the introduction of ICT, teachers experienced a positive change in their work day that intensified during the PILOT period. There is often a small group of enthusiastic teachers running the activities: activists are important.
Sustainability
The school leaders reported that they would continue the restructuring efforts and the ICT work after PILOT had finished. Learning communities help create a basis for, and support, change processes. In the majority of schools, the PILOT activities gained a stronger local foundation.
to the advancement of knowledge and experiences are important. Of course, the challenges for the optimal functioning of such networks are huge, and it might be difficult to find the right balance between strong leadership for development and stimulating initiatives among participants where leadership is more invisible. Networks are by definition decentralised, which makes leadership and the division of responsibility and labour a challenge. The main focus is on the role technology plays in supporting and building networks for learning. An important aim of the programme has been the diffusion of innovations to a large number of schools, through small funds and incentives. In the different reports over the last four years, teachers and school leaders report that the economic funds have not been the most important incentive for participating. Rather, it is the possibility of working with others in building capacities that makes both each school and the collective efforts in each network stronger.
Starting up
The first year of the programme was dominated by a great deal of insecurity, unclear definitions of responsibility on different levels (locally, regionally and nationally) and technologies that did not work optimally between schools. After the first year, the participating schools became more experienced, and the division of labour and responsibility was made clearer, which created a platform for defining a new phase of more strategic development. The intention of the programme was to build up capacities for learning and networking that could be further developed after the programme ended, implying a model for expansive learning and knowledge building. By
using a strategy of reflection on action, networks have been able to learn from the challenges and tensions of the first phase, for example in the way networks have become more focused in their work, concentrating on certain aspects of technology use and educational perspectives instead of trying to be too broad in their approach.
developments of knowledge building, focusing on how to build experiences and knowledge together over time. The working method chosen in most networks was a combination of meetings where participants met face to face and online collaborative efforts. The physical meetings turned out to be very important for the networks, because participants got time to discuss and reflect together and to bring up tensions and problems in the developmental process at the schools, as part of the expansive learning processes. The teachers and school leaders reported that these meetings had an important function in making the networks evolve as communities of learning.
special planning. In some mininetworks, online collaboration has worked better because they have a more focused approach and a clearer understanding of why they use online resources for networking. Schools that already had experience with using ICT reported that they felt they gave more than they got in return. This is due to the way the networks were organised, whereby schools with more experience in using ICT were to work with schools with less experience in this area, but which might have experience in other areas that they could bring to the collaboration. The commitment of school leaders and school owners is needed to ensure sustainability over time.
Dimensions of indicators
So how might these examples and the discussion above help in developing a multilevel approach to indicators of the impact of ICT on education? Most importantly, what is described above shows the necessity of understanding ICT and its impact on education on different levels. The synergy of different levels is the basis for change and development in both projects, where ICT is both a catalyst for change and a new cultural tool for enhancing student learning. This implies a higher degree of complexity in developing indicators. However, the results from studies like the ones mentioned above show that schools that define ICT as important on different levels of the organisation, and have a strategy for how the whole school should orient itself towards the use of ICT, are more successful in using ICT for educational purposes than other schools.
Below, I present some key components that are important as sets of indicators to measure the impact of ICT on education. Again, I will mainly build on projects and developments in Norway. Perspectives on digital literacies/competencies are seen here as something that frames these sets of indicators, something aggregated that relates to all indicators in one way or another.
In her book Literacy for sustainable development in the age of information (1999), Naz Rassool presents an overview of different debates on literacy in recent decades. Her point is that research perspectives on technology and literacy need to reconceptualise power structures within the information society, with an emphasis on communicative competence in relation to democratic citizenship. Digital technologies create new possibilities for how people relate to each other, how knowledge is defined in negotiation between actors and how our conception of the learning environments in which actors make meaning changes. Empowerment is related to the active use of different tools, which must be based upon the prerequisite that actors have the competence and a critical perspective on how to use them for learning. Literacy, seen in this way, implies processes of inclusion and exclusion: some have the skills and know-how to use them for personal development, but others do not. Schooling is meant to counteract such cultural processes of exclusion. One often-cited report on conceptualising digital/ICT literacy is Digital transformation: a framework for ICT literacy (ETS, 2002), written by a team of experts for the Educational Testing Service in the USA. In this report, they identified some key concepts of what they called ICT literacy. One interpretation of these key concepts can be the following (my elaboration based on ETS; see Figure 1). It consists of more general competencies (communicate, create, access, information handling, critical/analytical) that are not connected to specific subjects in school or to specific technologies. They can be taught, and relate not only to what is learned in school settings but also to situations outside the school.
- Be able to open software, sort out and save information on the computer, and other simple skills in using the computer and software.
- Be able to download different types of information from the Internet.
- Know about and be able to get access to information.
- Be able to orient oneself in digital networks; learning strategies for using the Internet.
- Be able to organise information according to a certain classification scheme or genre.
- Be able to compare and put together different types of information related to multimodal texts.
- Evaluate: Be able to check and evaluate whether one has got the information one sought from searching the Internet. Be able to judge the quality, relevance, objectivity and usefulness of the information one has found; critical evaluation of sources.
- Communicate: Be able to communicate information and express oneself through different mediational means.
- Cooperate: Be able to take part in net-based learning interactions, and to take advantage of digital technology to cooperate and take part in networks.
- Create: Be able to produce and create different forms of information as multimodal texts, make web pages and so forth. Be able to develop something new by using specific tools and software; remixing different existing texts into something new.

Figure 1. Key concepts of ICT literacy (my elaboration based on key concepts in the ETS report)
Other frameworks have used digital competence as an overall term. One example is the working group on key competences of the European Commission's Education and training 2010 programme. This programme identifies digital competence as one of the eight domains of key competencies, defining it as "the confident and critical use of information society technologies for work, leisure and communication. These competencies are related to logical and critical thinking, to high-level information management skills and to well-developed communication skills. At the most basic level, ICT skills comprise the use of multimedia technology to retrieve, assess, store, produce, present and exchange information, and to communicate and participate in networks via the Internet" (European Commission, 2004, p. 14). Digital competence in this framework encompasses knowledge, skills and attitudes related to such technologies. As shown in this section, there are different frameworks to relate to in our understanding of digital literacy/competence, which relate to different levels and issues. However, the key challenge is to go deeper into the implications of
A more fruitful approach would be to study impact on different levels and look at the co-variation between levels. This will give a broader and richer understanding of impact that is also closer to the experiences of schools. One way of defining indicators on different levels is to describe them on macro, meso and micro levels. Two of the levels of indicators mentioned in Figure 2 are on the macro level (national, local). The meso levels are the institutional and learning environments. The micro levels focus on teacher and student practices and outcomes (collective and individual). Below is an attempt to bring together these different levels and different contexts.
National level
Impact on a national level deals with key factors of importance for how ICT is implemented in the school system in different countries. This is above all related to the ways countries define ICT as being of importance in educational development. The point is to go beyond policy slogans about the importance of ICT in itself, and beyond technological determinism, and to focus more on the concrete steps taken by policymakers in different countries. The methods used for such indications of impact could be analysis of policy documents and monitoring through national surveys of developments within the education system. Some key indicators on this level are as follows. Curriculum development: In many countries, ICT is mentioned in curriculum documents, but in what way and to what extent differs. In most countries, curricula are important in the way they frame the education system and the practices taking place within it. For example, in my own country (Norway), digital literacy has been written into the national
curriculum as of 2006. From a former situation where ICT was mentioned as a tool that might be integrated into the classroom, the new curriculum states that ICT has to be used in all subjects and at all levels of compulsory schooling. There has thus been a marked impact on the curriculum. Infrastructure/access: In most countries during the last decade, there has been a prime focus on making computers and Internet connections available to educational institutions. This has partly been a national responsibility of ministries and other national agencies, and is expressed in different national documents and action plans. Some countries have also adopted instruments to monitor progress in this area, which specify the ratio of computers and Internet access per student and teacher. In recent years, a critique has surfaced that the focus on the implementation of technology in the education system reflects too much technological determinism. Standardisation: Many countries have started work on the standardisation of technological solutions. The ISO standard has been implemented in several European countries for the coordination of technological developments and to make use more accessible across different technologies and platforms. This has become an important part of technological strategies on national levels, as an indication of developments within ICT and education systems. Digital learning resources: National initiatives to stimulate the production of digital learning resources have been important, yet problematic, in many countries. As such, they are an important indicator of progress on a national level, because they matter for how teachers and students use ICT in education. Publishing companies have invested
in technological developments to develop different learning resources beyond the book. Yet the investments have not always made a profit, and such companies are often reluctant to make the necessary investments. This has also raised issues about public and private collaboration to develop such resources on a systems level for education. Use: Some countries have instruments to follow the actual use of ICT on different levels within the education system. This is to get a national overview of the implications of investments, and implies a set of indicators to be developed at a national level to map how ICT is used on different levels and in different subjects in order to compare and see developments.
Local level

Strategies: Important on a local level is the extent to which local authorities develop strategies, expressed in different kinds of documents, to give a direction for the implementation and use of ICT in education. How well such documents and local policies are developed and used varies a lot: some are too vague and contain unrealistic intentions and visions; others have clear objectives and implementation plans.

Infrastructure/access: Even though there are national policies concerning the implementation of infrastructure, the extent to which this is followed up on a local level varies. It is therefore necessary to develop indicators that track the implementation of infrastructure on a local level.

Support: Another important aspect concerning impact on a local level is support structures, both for the implementation of technology and for guidelines for use. Local authorities in many countries have been important in developing such support structures, which are especially important to secure the use of ICT among teachers.

Institutional level

Leadership: On the institutional level, the leadership at the school is important in creating the setting for ICT use. This of course relates to the implementation strategies developed by national and local authorities, but also to how the leadership gives direction to certain developments. It also concerns how the school and its leadership make the strategies for school development with the use of ICT explicit. How well the school leadership manages to develop strategies that have real implications on a practical level often varies. Another indicator concerning leadership could be how schools use ICT as an administrative tool.

School culture: Each school differs from the others due to differences in leadership, the teacher community, the local community of the school, the student population and so forth. School culture relates to the daily life of each school, and it influences the way ICT is implemented and used in the school. As shown in the PILOT project above, some schools see ICT as a catalyst for change while others are much more sceptical towards ICT.

Collaboration: This could be an indication of the ways teachers collaborate and share experiences in order to build up competencies in using ICT. Collaboration could also be between schools, between school leaders in a community, or between students nationally and internationally. The point is that this is often an indication of how schools use ICT as a tool for collaboration.

Reorganisation: An indication of impact on the institutional level also relates to the extent to which schools start to reorganise their practices due to the implementation of new technologies; for example, the introduction of laptops makes it difficult to uphold a traditional classroom setting.

Learning environment

resources are used within the learning environment.

Assessment: To what extent assessment procedures are changed; how teachers and students use summative and/or formative ways of assessment.
Collective level
Collaborative work: This is an indication of how the use of ICT might stimulate more collaborative work among students, and of how project work becomes more prevalent in schools.

Sharing content: To what extent students and teachers upload content produced in schools to the Web and share it with others, or the extent to which they reuse content that they find on the Web as part of their own learning activities.
Individual level
Outcomes: Different indications of the outcomes of ICT use on the individual level, both in a summative and a formative way related to learning.

Knowledge building, problem solving: The ways in which ICT stimulates knowledge building and problem solving among students, assessed by performance assessment.

ICT competencies: The differences in ICT competencies among students; the digital divide.

These are just some examples of indicators that might be thought of on different levels. Some indicators overlap across levels; others are unique to specific levels. When we have this more holistic view of indicators on different levels, we might see better how they are important in different ways on different levels. Some of these levels and indicators are directed towards the preconditions for the use of ICT, some
towards the framing of such use and some towards the actual use and the outcomes of such use. Indicators on national and local levels are primarily preconditions for use, in the way they create the platform and the basics for use by providing the technology. The framing relates to the institutional level, teacher education and the learning environment, which create the conditions for how ICT will be used in educational settings, while the collective and individual aspects relate more directly to the use of ICT itself and to the outcomes of such use.
Implications
In specifying indicators of ICT in education, the argument in this article has been to draw different levels together in order to get a fuller and wider understanding of the role of ICT in our education system. As stated, this is not an easy task, but the risk of reducing the complexity of the impact of ICT on our education system is that we only see a part of the picture and do not see how things are interconnected. Such a multilevel approach has implications for policy, practice and research. Policy: Policymakers need to take into consideration how the system levels interconnect with the practice levels in their understanding of impact. My impression is that policies within this area have moved beyond simple technological determinism, the belief that technology itself will create change, towards an awareness of the complexity involved in drawing up policies for ICT in education. Still, the understanding of impact is often drawn towards simple outcomes on the individual level. A multilevel approach might give a more realistic understanding of how impact is interrelated on different levels, thereby
avoiding reducing ICT in education to a question of whether students learn better now than before. Change and outcomes are about the system of education: how students learn is connected to teachers' competencies in this area, to the assessment system, to the available digital learning resources and so forth. Policymakers can develop strategies for systems of indicators and the collection of such data that will provide them with the necessary tools for creating the capacity for further development within this area. Practice: In order to stimulate the use of ICT in educational practice, we need a better understanding of the interrelationship between different levels, and of how each of them might strengthen or hinder changes within educational practices. It is the impact on the practical level that is of importance, but that level is dependent on developments on other levels, like school leadership, digital learning resources, curriculum development and so forth. Teachers and students need a framework that stimulates change and development. Perspectives on digital/ICT literacies, for example, have real implications on a practical level in the way this term applies to certain learning objectives using ICT. In addition, it relates directly to several other levels. Research: There is a need for more research that manages to grasp the complexity of the matters mentioned above. One example given in this article is the activity theory developed by Yrjö Engeström, but we need more development in this area to be able to develop analytic concepts and research tools that can help us research such a multilevel approach to the impact of ICT on education better than we are able to at present.
References
Arnseth, H. C. and Ludvigsen, S. (2006). Approaching institutional contexts: systemic versus dialogic research in CSCL, International Journal of Computer-Supported Collaborative Learning, Vol. 1, No 2. Balanskat, A., Blamire, R. and Kefala, S. (2006). The ICT impact report: a review of studies of ICT impact on schools in Europe. Brussels: European Schoolnet, European Commission. Castells, M. (1996). The rise of the network society, the information age: economy, society and culture, Vol. I. Cambridge, MA: Blackwell. Condie, R. and Munro, B. (2007). The impact of ICT in schools: a landscape review. London: Becta Research. Credé, A. and Mansell, R. (1998). Knowledge societies in a nutshell: information technology for sustainable development. Report for the UN Commission on Science and Technology for Development and the International Development Research Centre. Retrieved on 30.07.2009 from www.idrc.ca/openbooks/858-9/. Cuban, L. (1986). Teachers and machines: the use of classroom technology since 1920. New York: Teachers College Press. Cuban, L. (2001). Oversold and underused: computers in the classroom. Cambridge, MA: Harvard University Press. De Corte, E., Verschaffel, L., Entwistle, N. and van Merriënboer, J. (eds) (2003). Powerful learning environments: unravelling basic components and dimensions. Amsterdam: Pergamon. Eliassen, E., Jøsendal, J. S. and Erstad, O. (2008). Ledelse av lærende nettverk (Leadership of networks for learning). ITU. Oslo: University of Oslo. Engeström, Y. (1987). Learning by expanding: an activity-theoretical approach to developmental research. Helsinki. Retrieved on 30.07.2009 from http://lchc.ucsd.edu/MCA/Paper/Engestrom/expanding/toc.htm. Engeström, Y., Engeström, R. and Suntio, A. (2002). Can a school community learn to master its own future? An activity-theoretical study of expansive learning among middle school teachers, in: G. Wells and G. Claxton (eds) Learning for life in the 21st century. Oxford: Blackwell.
Erstad, O. (2004). PILOTer for skoleutvikling (PILOTs for school development. Final and summary report of the PILOT project. 19992003). Report No 28. Oslo: ITU. Erstad, O. (2005). Digital kompetanse i skolen (Digital literacy in the school). Oslo: Universitetsforlaget (University Press). ETS (2002). Digital transformation: a framework for ICT literacy. Princeton, NJ: Educational Testing Service. European Commission. (2004). Key competences for lifelong learning: a European reference framework. Directorate-General for Education and Culture. Retrieved on 30.07.2009 from http://europa.eu.int/comm/education/policies/2010/doc/basicframe.pdf
37
Fullan, M. G. (1991). The new meaning of educational change. London: Cassell. Fullan, M.G. (1993). Change forces: probing the depths of educational reform. London: Falmer Press. Goodlad, J. I. and Anderson, R. H. (1984). A place called school. Blacklick, OH: McGraw-Hill. Gray, J., Hopkins, D., Reynolds, D., Wilcox, B., Farell, S. and Jesson, D. (1999). Improving schools: performance and potential. Buckingham: Open University Press. Hakkarainen, K., Palonen, T., Paavola, S. and Lehtinen, E. (2004). Communities of networked expertise. Amsterdam: Elsevier. Kozma, R. B. (ed.) (2003). Technology, innovation and educational change: a global perspective. Eugene, OR: ISTE Publ. Krumsvik, R. (ed.) (2009). Learning in the network society and the digitized school. New York: Nova Science Publishers. Law, J. and Mol, A. (2002). Complexities: social studies of knowledge practices. Durham, NC: Duke University Press. Olson, D. (2003). Psychological theory and educational reform. Cambridge: Cambridge University Press. Pahl, K. and Rowsell, J. (2005). Literacy and education: understanding the new literacy studies in the classroom. Thousand Oaks, CA: Sage. Rassool, N. (1999). Literacy for sustainable development in the age of information. Clevedon: Multilingual Matters Ltd. Reinking, D., McKenna, M. C., Labbo, L. D. and Kieffer, R. D. (eds) (1998). Handbook of literacy and technology: transformations in a post-typographic world. Mahwah, NJ: Lawrence Erlbaum. Scardamalia, M. and Bereiter, C. (2006). Knowledge building: theory, pedagogy, and technology, in: R. K. Sawyer (ed.) The Cambridge handbook of the learning sciences. Cambridge: Cambridge University Press. Skogerb, M., Ottestad, G. and Axelsen, H. K. (2007). Lrende nettverk fra informasjonsutveksling til kunnskapsdannelse? (Learning networks from information sharing to knowledge building?) Report. Oslo: ITU. Sutherland, R., Robertson, S. and John, P. (2009). Improving classroom learning with ICT. London: Routledge. Thomson, P. 
(2007). Whole school change: a review of the literature. London: Creative Partnerships. Van Dijk, J. (2009). Outline of a multilevel approach of the network society, paper presented at the annual meeting of the International Communication Association, Sheraton New York, New York City, NY. Retrieved 30.07.2009 at http://www.allacademic.com/meta/p12672_index.html Veugelers, W. and Ohair, M. J. (eds) (2005). Network learning for educational change. Buckingham: Open University Press.
38
CHAPTER II
STATE OF THE ART

Monitoring in education: an overview
What do we know about the effective uses of information and communication technologies in education in developing countries?
Abstract
This article provides a description of educational monitoring, which constituted the background for a study on monitoring ICT in primary and secondary education in the EU (1) (see Chapter IV: Indicators on ICT in primary and secondary education). First, the function of monitoring for policymaking is described, showing that educational monitors can serve different functions, and the concepts of policy goals, indicators, instruments and data are introduced. A distinction is made between international, national and school monitoring. Next, the main steps involved in designing and conducting international comparative educational monitors are described, sketching a number of dilemmas for which solutions need to be sought. The article closes with a review of methodological issues in international comparative monitoring.
Functions of monitoring
Monitoring can be defined very broadly as the act of periodically or continuously observing something. The act of observation will be called 'assessment' in what follows, and hence regular assessment equals monitoring. An educational monitor is thus an assessment of education and of how it develops over time. This definition is fairly neutral and could, in situations where explicit targets are set, be translated into assessment of education in order to determine whether standards are met. Educational monitoring can focus on many different characteristics of education, such as inputs, processes and learning outcomes, and many different methods can be used for collecting observations. Qualitative and quantitative methods can be distinguished. In this study, the main focus is on quantitative methods that allow for comparisons between countries and, hence, imply statistical generalisations to the educational system at large. A distinction can be made between national and international monitors. National educational monitors are meant to draw conclusions about changes that take place in an educational system over time, which implies that observations are collected in such a way that they are comparable over time. International comparative educational monitors offer possibilities to interpret the state of the art and/or changes over time in one country with reference to changes in other countries, provided that the measures used are internationally
(1) This study was financed (at a cost of EUR 122 200) by the European Commission, contract EACEA-2007-3278. Opinions presented in this chapter do not reflect or engage the Community. European Commission
comparable between countries and over time. A fairly recent development is school monitoring, whereby schools keep track of their own development (sometimes in comparison with other schools) for evidence-based school policymaking. A full multilevel monitor would be a system in which international, national and school monitoring are integrated. Monitoring in general can be conceived as regular assessments that are part of a cyclic policy process consisting of a number of steps, as shown in Figure 1. Monitoring implies a regular repetition of step 2 (Figure 2). Figure 2 presents a very general model that can be applied in many different settings, for instance at the international (worldwide, regional), national, school and even individual level. Given the purpose of our study, we will focus mainly on the international level and describe below in more detail each of the steps distinguished in Figure 1, in particular in terms of what is required in each step, which concepts are relevant and which questions and dilemmas will be faced.
1. Policy goals
Whereas national monitors are focused on policy goals that are relevant for one country's stakeholders, the group of stakeholders is larger for international monitors and the participating countries
need to decide first on which common goals a monitor should be focused. An example of a common goal might be 'to connect all schools to the Internet'. A dilemma in establishing common goals is that some goals may be highly relevant in some countries (e.g. those which are just starting to connect to the Internet), but not or not yet relevant in other countries (e.g. those which have already realised this goal). We will call this 'goal disparities'. It can also happen that certain common goals have a short lifetime, so that they were highly relevant in a certain period but no longer so later on (for example because the goals have been reached). In relation to ICT particularly, where rapid technological developments are taking place, this is an issue of special concern (in this respect the notion of the 'life expectancy' of indicators becomes relevant). Once common goals have been established, indicators for monitoring progress towards these goals need to be defined. If goal statements are very concrete, as in the example above, this may be relatively easy to do: for instance, the percentage of schools that have a connection to the Internet. However, when goal statements are fairly global, as is often the case in international consensus-building processes (e.g. 'provide all students with access to the Internet'), a number of different indicator definitions may be needed (e.g. number of Internet-connected computers per 100 students, connection speed, etc.). A serious problem in defining indicators concerns their comprehensiveness, that is, the extent to which they adequately cover the domain implied by the goal statements. Monitors can have quite serious (unintended) conservative impacts on educational policymaking if comprehensiveness is low. This can occur if, for instance, they do not cover relatively new competencies, but rather focus on traditional competencies of students. For example, suppose that the use of ICT leads to a slight decrease in mathematics skills (for which an indicator is available), because as a result of students' autonomous working less content can be covered. If, at the same time, a large increase in communication and studying skills (for which no indicators are defined) occurs, this positive effect would remain unnoticed and
there would be a chance that ICT use in mathematics would be discouraged. In this respect, the notion of 'holistic monitoring' is relevant. International comparative assessments may have a big impact on education: recently, a consortium of Cisco, Intel and Microsoft concluded that, in order to reform education, the currently prevailing international comparative assessments would have to be changed. For practical reasons, the number of indicators that can be addressed in an assessment is limited (see point 2). Establishing priorities is therefore an essential aspect of step 1. An important distinction in Figure 1 is between primary and secondary indicators (sometimes also called key indicators and background or explanatory indicators, respectively). Primary indicators are those featured as the main focus of an assessment; in PISA or IEA-TIMSS-PIRLS, for instance, the primary indicators concern the test results in mathematics, science and/or reading, which are usually the first to be featured when statistical reports from these international monitors are released. Secondary indicators are used to throw further light on the test results, for instance by examining differences in outcomes between sub-populations within countries (e.g. boys and girls) or for analysing how differences between countries can be explained.
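The distinction between a primary indicator statistic and a secondary breakdown can be sketched in a few lines of Python. The records, scores and field names below are invented for illustration only; real monitors compute such statistics from weighted national samples with far richer background data.

```python
# Illustrative only: toy records standing in for a national student sample.
# Each record holds a primary outcome (a maths score) and a background
# variable (gender) used for a secondary, explanatory breakdown.
students = [
    {"maths": 520, "gender": "girl"},
    {"maths": 480, "gender": "boy"},
    {"maths": 510, "gender": "girl"},
    {"maths": 490, "gender": "boy"},
]

def mean(values):
    return sum(values) / len(values)

# Primary indicator statistic: the national mean maths score.
national_mean = mean([s["maths"] for s in students])

# Secondary indicator statistic: the same outcome broken down by a
# sub-population characteristic, used when explaining differences.
by_gender = {
    g: mean([s["maths"] for s in students if s["gender"] == g])
    for g in ("girl", "boy")
}

print(national_mean)                          # 500.0
print(by_gender["girl"] - by_gender["boy"])   # 30.0
```

In a released report, only the first number would normally headline the tables; the breakdown belongs to the explanatory analyses.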
2. Assessment

An international comparative assessment consists of collecting data in representative national samples on the basis of instruments (usually questionnaires and tests) that contain operationalisations of the intended indicators (from step 1). There are several issues and constraints that need to be considered when designing an international comparative assessment. Firstly, as the instruments are administered to educational actors in schools (school leaders, teachers, students and sometimes parents), a serious constraint is the amount of time that can be asked from each respondent to answer the tests/questionnaires. Increasing the amount of time will lead to lower response rates, which in turn affects the quality of the national statistical estimates based on the collected data. As the number of questions that can be included in questionnaires is limited, this in turn has implications for the number of intended indicators that can be included. Initial priority decisions can be made on the basis of a priori response-time estimates. Further, during the process of operationalisation and piloting (when response-time estimates can be collected), it may appear that the number of intended indicators needs to be reduced further. Another important issue concerning the operationalisation of intended indicators is cost: developing completely new indicators is a time-consuming process, because empirical evidence needs to be collected regarding the comparability, statistical quality and interpretability of the new measures. After the data are collected, indicator statistics can be calculated. For example, where an indicator definition might be 'use of ICT', one of the indicator statistics might be 'percentage of students using ICT daily at school'. If the same intended indicator was included in earlier assessments, another indicator statistic might be 'increase in daily use of ICT at school between 2000 and 2009'.
4. Diagnosis

Once the primary indicator statistics have raised concerns about the existence of weaknesses in the education system, the need may arise to try to find out what the potential causes are, which could lead to interventions aimed at realising improvements. Existing international comparative monitors usually include quite a number of secondary indicators intended to explain the differences between countries and between schools and students within countries. A common experience among researchers involved in this search for causes is that the set of secondary indicators is too limited to answer the concrete 'why' questions posed after the data have been collected, so that the analyses often do not result in concrete suggestions for policy interventions that could lead to improvement. A more fundamental problem is that the collected data do not allow for cause-effect analyses; at best they can strengthen or weaken particular beliefs about causes and effects. Therefore some countries occasionally conduct additional research in order to find starting points for improvement. In the past, one country (the Netherlands), scoring low on international reading tests, conducted in-depth analyses of the reading methods used in schools and concluded that these were no longer up to date. A change of reading methods took place and the country's international ranking later improved considerably, which strengthened the belief that the reading methods were among the potential causes of the low performance. This is an example of qualitative follow-up of an international assessment.
5. Interventions
Throughout the world, many examples are available of policy actions that were undertaken as a result of the outcomes of international comparative
assessments. It seems safe to infer, on the basis of the continuing increase in the number of countries participating in international comparative educational monitors (from 20 in the IEA studies of the 1980s to over 60 in the current IEA studies), that policymakers are becoming more aware of the potential benefits of such monitors for evidence-based policymaking. It should be noted that interventions do not necessarily need to be top-down: if schools in a country could see how they perform on the primary indicators (by means of school monitoring) and make inferences about the existence of potential weaknesses and their likely causes, interventions might be designed and generated at school level. This approach is advocated in some EU countries. The policy cycle sketched above may help to illustrate several functions that international comparative monitors can have: description (mirror), accountability, benchmarking, enlightenment, understanding and cross-national research. Some of these functions (such as benchmarking, monitoring, understanding and cross-national research) can be explicitly addressed by the research design, while others are more or less collateral (mirror, enlightenment). Monitors can support evidence-based policymaking, by which decisions are based on facts rather than rhetoric; in this sense, monitors are also conceived as navigation tools. However, one should also be aware of potential resistance to participation in international comparative monitors, as these may be perceived as leading to undesirable influences on educational policymaking.
This may particularly be the case where ICT indicators are concerned. The main steps underlying the design and execution of a monitor can be summarised as follows.
1. Establishing common objectives
2. Defining indicators
3. Operationalising indicators (= instruments)
4. Drawing samples of respondents
5. Collecting data
6. Presenting descriptive results
7. Generating questions for diagnosis
8. Analysing data
9. Making recommendations for interventions
10. Making recommendations for revised/new indicators.
The IEA has existed for over 50 years. As a non-governmental organisation, it conducts large-scale quantitative assessments in, amongst other things, mathematics, science, reading, civic education and ICT. The core studies (in mathematics, science and reading) take place roughly every four years. In 2011, a combined assessment of mathematics, science and reading will take place. The OECD PISA assessment was conducted for the first time in 2000 and is run every three years. The core performance domains are mathematics, science and reading. The latest assessment took place in 2009 and is expected to be reported by the end of 2010; the next assessment is scheduled for 2012. Since 2000, the majority of EU countries have participated in the OECD assessments (PISA) and/or those of the IEA (TIMSS and PIRLS, covering mathematics/science and reading respectively) at the primary and/or secondary education level.
Intentions may be formally legislated in syllabi, examination standards or, in the words of the IEA, 'intended curricula'. These constitute the basis for guiding many educational processes, such as the content of textbooks, teaching and learning activities in schools, the content of (in-service or pre-service) teacher training, etc. An analysis of these intentions is usually the basis for designing the international comparative assessments currently run by international organisations such as the OECD (PISA) and the IEA (TIMSS, PIRLS). These analyses may be based on extensive curriculum analyses (IEA) or on expert opinions about the important life skills that students need to acquire in schools (OECD). The outcomes of such analyses constitute the basis for developing the content specifications for the instruments used to measure educational outcomes (in the cognitive domain, such as mathematics, science and reading, but also in the affective domain, e.g. learning motivation); on the other hand, these content specifications can also be used for measuring the opportunities that schools offer students to learn these contents. Educational monitoring focused only on these three core concepts would allow educational actors to make a limited number of inferences, such as:
- for national monitors: whether intentions, opportunities to learn (OTL) and outcomes are changing over time, whether discrepancies exist between intentions and OTL, and whether inequities exist between sub-populations of students and how these are changing over time;
- for international monitors: the same as for national monitors, but with enhanced possibilities to interpret the national observations with reference to what is happening in other countries.
Although such inferences are important as a first step towards understanding educational progress, they offer insufficient starting points for undertaking policy actions for remediation. Therefore it is necessary for educational monitors also to address concepts that are (politically) malleable and which relate to areas believed to influence OTL and outcomes. For these concepts (referred to earlier as secondary indicators), an almost endless variety of candidates could be generated, such as:
- competencies of teachers;
- number of hours in the timetable scheduled for certain OTL areas;
- availability and quality of learning materials;
- instructional methods applied;
- school organisation and quality of leadership;
- class climate;
- examination standards.
For the study that formed the basis of this chapter, the question was how ICT fits into the picture sketched above. ICT can be conceived as a transversal issue as well as a subject area. When ICT is a subject area (like mathematics and science in existing international comparative monitors), the previous concepts could be translated into, for example, core concepts such as:
- the intentions (formally legislated or informally adhered to) with regard to ICT literacy;
- the OTLs for learning about and learning with ICT,
- the ICT-related competencies of students;
and instrumental concepts such as:
- the competencies of teachers with regard to ICT itself (technical ICT literacy) and to the use of ICT in teaching (pedagogical ICT literacy);
- the number of hours scheduled for learning about ICT;
- the availability and quality of ICT learning materials.
When ICT is conceived as a transversal issue, all of the concepts mentioned above could be considered instrumental.
Measurement
In order to make statements about the concepts (and derived indicators) underlying an assessment, measures are needed that can be used for statistical generalisations. With regard to measurement, a main distinction can be made between 'what' and 'who' is measured. 'What' refers to the constructs that are materialised in instruments. In ICEMs (international comparative educational monitors) targeting students, the following types of instruments are typically distinguished.
- Context instruments: for collecting information about conditions external to schools (e.g. funding, regulations, curriculum). For instance, to what extent do curricula prescribe the use of ICT?
- School instruments: containing questions about school characteristics (e.g. organisation, management, school policies). For instance, how many computers are available in the school, or what is the school leaders' vision of desirable pedagogical approaches using ICT?
- Teacher instruments: containing questions about instructional practices. For instance, to what extent do teachers use ICT for testing students?
- Student instruments: tests for measuring achievement and student questionnaires about activities and background. For instance, how often did you use a computer for learning mathematics?
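Because designs of this kind relate student results to teacher and school data, the instrument types form a nested structure. The following Python sketch is a hypothetical illustration of that linkage; the class and field names are invented for illustration and do not correspond to any actual instrument.

```python
from dataclasses import dataclass, field

# Hypothetical nested records mirroring the instrument hierarchy:
# school data at the top, teacher data below it, student data at the
# bottom. Linking is by containment, as in intact-class designs where
# student results can be related to the practices of their teacher.

@dataclass
class Student:
    student_id: str
    weekly_ict_hours: float        # from the student questionnaire

@dataclass
class Teacher:
    teacher_id: str
    uses_ict_for_testing: bool     # from the teacher instrument
    students: list = field(default_factory=list)

@dataclass
class School:
    school_id: str
    computers: int                 # from the school instrument
    teachers: list = field(default_factory=list)

school = School("S1", computers=40, teachers=[
    Teacher("T1", uses_ict_for_testing=True,
            students=[Student("P1", 3.0), Student("P2", 1.5)]),
])

# Linked analysis: mean student ICT use among teachers who test with ICT.
hours = [s.weekly_ict_hours
         for t in school.teachers if t.uses_ict_for_testing
         for s in t.students]
print(sum(hours) / len(hours))  # 2.25
```

The point of the sketch is simply that questions answered at one level (here, the teacher) can condition statistics computed at another (the students), which is what intact-class sampling makes straightforward.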
Development of instruments

The construction of international instruments is usually a very time-consuming activity in which many steps are specified, all intended to improve the quality of the assessments. Some of these activities are described briefly below.
- Involvement of international experts. At the start of international assessments, committees are formed of experts with a good reputation in the areas tested. These committees review items in order to guarantee that they represent the content area. National experts are involved in judging proposed items in terms of their fit with national curricula.
- Pilot testing. Pilot testing is conducted on roughly double the number of items actually needed for the main data collection, in order to determine which items constitute the best test. Psychometric analyses are conducted to check whether the items fit the intended scales.
- Translation verification. All items are originally in the English language. They are translated into national languages, which can be many: in South Africa, IEA-PIRLS (reading literacy) requires translation into 12 languages. It is crucially important that the translation matches the international version as closely as possible, and the quality of the translations is checked by involving professional translators.
- Layout verification. As even the layout of tests and questionnaires may influence the responses of the testees, the national layout of tests and questionnaires is checked at the international coordination centres to determine whether any deviations can be discovered.
The question as to who is measured relates to the issue of populations and samples, which is discussed in the next section.
international population definition. Most IEA studies are focused on student populations at three levels of the education system: primary education, lower secondary education and upper secondary education. Definitions used in TIMSS 2003 (for what in most countries constitutes the primary and lower secondary level) were, for example, 'all students enrolled in the upper of the two adjacent grades that contained the largest proportion of 13-year-old students at the time of testing' and 'all students enrolled in the upper of the two adjacent grades that contained the largest proportion of 9-year-olds'. These correspond to the eighth and fourth grades in practically every country (Mullis et al., 2004). The main reason for choosing a grade-based definition is that in IEA assessments teacher and student data are linked, and hence the IEA targets data collection in intact classes. It should be noted that linkage is also possible when targeting individual students, but it is more complicated. For many teachers, the current reference point for their instructional activities is still the intact class; this is typical of the traditional organisation of teaching and learning in schools. If the current reform trends in education (which call for more individual learning trajectories and multidisciplinary team teaching) are implemented on a large scale, the target-class approach may need to be changed. Grade-based definitions do not necessarily result in comparable populations across countries. The national definitions may still result in large variability with regard to characteristics that affect the interpretation of the assessment outcomes: for instance, the number of years that students have been in school may differ, while in some countries grade repetition occurs frequently, resulting in large age variation between countries. The OECD's approach to defining student populations differs from the IEA's: it is age-based. In PISA 2003, the definition was 'all students who are aged between 15 years 3 months and 16 years 2 months at the time of the assessment, regardless of the grade or type of institution in which they are enrolled and of whether they are in full-time or part-time education' (OECD, 2004). This definition has several disadvantages when it comes to the comparability of populations: the number of years in school may differ between countries, and students are at different grade levels. A practical problem is the collection of data from teachers about their instructional practices that can be linked to the students; to some extent, this can be overcome by asking students to provide information about their teachers. An example may illustrate the problem of differences between countries in terms of population characteristics. The results of PISA 2003 showed that the scores of Danish students were moderate (roughly 36 score points below the top for mathematics) compared with other countries, which caused some consternation among stakeholders in Denmark, particularly because the Danish education system is believed to be one of the best in the world. Certainly the expenditure on education is quite high and among the highest in the world. However, when comparing the characteristics of the student populations in PISA 2003, it appears that Danish students had, compared with students from other countries, spent one year less in school, because of a
later entry age into compulsory education. From TIMSS 1995, Pelgrum and Plomp (2002) estimated that one year of schooling can produce score-point differences of between 13 and 44. Hence, one may wonder whether the position of Denmark in the international tables might be attributable to the deviating educational career of Danish students. Another complication of using age-based samples is that the logistics of data collection are much more complex (resulting in a heavier burden on national research coordinators (NRCs) and schools, and causing higher costs and a greater risk of non-response). The population definitions also have implications for the definitions of the teacher and school populations, that is, for the populations to which the results can be generalised. This is reflected in the way the results are presented, for example:
- IEA: percentage of students by their schools' reports of teachers' involvement in professional development opportunities in mathematics and science (TIMSS 2003, Exhibit 6.6);
- OECD: percentage of students in schools where the principals report that mathematics teachers were monitored in the preceding year through the following methods (Figure 5.17, PISA 2003).

Sampling

Once the population definitions for each country are settled, samples can be drawn. These samples need to be of high quality in order to warrant good estimates for the whole population. Therefore, international assessments apply sampling standards, which cover a number of aspects. Accuracy: the population parameters should have an accuracy for
means of m ± 0.1s (where m is a mean estimate and s is its estimated standard deviation) and, for percentages, of p ± 5 % (where p is a percentage estimate). Participation rates: criteria are defined for the participation rates that must be reached for a sample to be considered acceptable. These standards have implications, amongst others, for the reporting of the outcomes. Flags are applied to samples that are considered not too far below the standards: the results of such countries are flagged and shown 'below the line', which means that the sample quality is considered insufficient. It also happens that the results of some countries are excluded from the international reports altogether, as occurred with the Netherlands in PISA 2000. An example of the flagging rules from TIMSS 2003 is provided by Martin et al. (2004); see http://timss.bc.edu.
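Under simple random sampling, accuracy standards of this form translate directly into minimum sample sizes. The calculation below is a simplified sketch at a 95 % confidence level; it ignores the design effects of the cluster samples actually used in these studies, which push required sample sizes considerably higher.

```python
import math

Z = 1.96  # normal quantile for a 95 % confidence interval

# Means: the half-width of the confidence interval must not exceed 0.1 s,
# i.e. Z * s / sqrt(n) <= 0.1 * s, which gives n >= (Z / 0.1) ** 2.
n_mean = math.ceil((Z / 0.1) ** 2)

# Percentages: Z * sqrt(p * (100 - p) / n) <= 5 percentage points; the
# worst case is p = 50 %, which gives n >= (Z / 5) ** 2 * 50 * 50.
p = 50.0
n_pct = math.ceil((Z / 5.0) ** 2 * p * (100.0 - p))

print(n_mean, n_pct)  # 385 385
```

Both standards happen to imply roughly the same minimum of about 385 students per population under simple random sampling, which is one reason such accuracy criteria are workable in practice.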
Answers to questions or test items may be unreadable or conflicting (e.g. more than one answer given). Materials may not be returned correctly, for example because of wrong addresses, failing mail services, mishandling at the data collection institute or sloppiness at schools (sometimes materials were completed but returned a year after data collection). In order to minimise data loss as much as possible, rigorous procedures are nowadays implemented in most ICEMs, all documented in manuals and software programs, as shown for instance in the TIMSS 2003 technical report (see http://timss.bc.edu/ for more details). In particular, when achievement tests are used, it is of crucial importance that the test administration takes place in a very controlled manner in order to avoid the test scores being biased downwards or upwards. This requires, for instance, the following.
- Cheating should be avoided.
- Students need to be motivated to answer the test. This is particularly important because students will often perceive the test as low-stakes, as it has no consequences for their grades in school.
- The use of tools such as calculators or other aids should be standardised. This is not always possible, because in some countries certain aids are always allowed while in others they are not, which may have serious consequences for the interpretation of differences between countries.
Nowadays many countries have to spend substantial budgets in order to guarantee the proper return of the instruments. This can mean that whole teams are busy for a considerable amount of time with:
- checking returned questionnaires and tests for completeness and readability;
- contacting schools to obtain missing materials or to clarify unreadable answers;
- reminding schools by (e-)mail or phone to return the materials;
- informing schools about the disastrous effects when, on second thoughts (after an initial agreement to participate), they are inclined not to participate: sometimes the data for a whole country are excluded from the international reports.
When planning a period for data collection, it is important to avoid overlap with other time-consuming and competing activities in school, such as the weeks before the school holidays, when everyone is busy with end-of-term activities, or, in some countries, the periods in which the final examinations take place. Data collection is one of the biggest budget items for national teams, because it is time consuming and requires quite high expenditure on materials (printing, mailing). Hence, one would expect that considerable budget reductions might be possible if the data were collected electronically, via online data collection (ODC). For a long time ODC was not feasible, because respondents (schools, teachers and/or students) did not have access to ICT or the Internet, or were not competent enough to use these facilities. The IEA SITES 2006 study was the first ICEM to apply ODC on a large scale. A feasibility test of ODC, conducted in two groups of respondents, randomly
Monitoring in education
More research is needed to investigate and try out these possibilities. This requires staging and cooperation at the international level, involving different partners that are still working independently (such as national and international assessment organisations).
SITES 2006 was a study that only used instruments at school and teacher level. The sample sizes for these categories of respondents are usually relatively small in ICEMs and, hence, the efficiency gain is much smaller than when ODC can also be used for students, in those assessments that administer tests and/or questionnaires to students. Feasibility tests of ODC for large-scale student assessments still need to be carried out. Additional advantages of applying ODC in the future might be: negligible costs for data entry (see next section); tailored testing; performance testing, such as testing via simulations, practical laboratory skills, communication competencies, etc.; the possibility of providing more direct and timely feedback to respondents (which may increase willingness to participate); more continuous and periodic monitoring for large samples of schools, and possibilities for school self-evaluation; and more possibilities for diagnosis.
with other statistical sources in a country, e.g. from other investigations or national census statistics; checking that the behaviour of variables is plausible, e.g. an unusually high correlation between two variables in one or a few countries may be suspicious and could point to errors in the data; and examining differential item functioning, e.g. relatively high p-values in one country as compared with other countries could potentially (but not necessarily) point to flaws in the translation of test items. When potential problems are discovered in the data, it is often necessary to go back to the returned questionnaires and/or tests to find out what was actually entered by the respondents. It goes without saying that by applying ODC, many of the problems that result from human failures during data entry can be avoided. However, ODC is not a panacea for error-free data, because: respondents may accidentally hit wrong keys (it is not known whether this is more likely than accidentally ticking a wrong answer in a printed questionnaire, e.g. for a filter question); and, for open questions requiring the specification of a number (e.g. the number of students in a school), respondents may accidentally enter a wrong number. Part of the procedure for checking and cleaning data is: inspection of national univariate statistics by each participant; and inspection of international univariates by national research coordinators (NRCs) and the international coordinating centre (ICC). Quite often suspicious statistics are discovered by comparing univariate statistics, and sometimes even at a late stage by inspecting the outcomes of the analyses, for instance as a result of an undiscovered translation error (e.g. 'don't mind' translated as 'don't like' in SITES-M1). Errors in the data are not always caused by data entry failures. They may also result from printing errors or national adaptations in questionnaires. Due to all these checking and validation steps, it can take some time to produce international data sets that are ready for further processing and analysis. However, it is usually not until the first official report is published (about a year after data collection) that the data sets are considered to be in their final shape and ready for public access. This is because, as mentioned above, errors in the data can be detected even at a late stage. The purpose of data processing is to produce the statistics that were envisaged when conceptualising and designing the ICEM. These statistics may be: univariate, based on one variable (e.g. the percentage of students having a computer at home) or composed from a set of variables (e.g. the mean number of possessions, from a set of 10, in students' homes); bivariate, for instance breakdowns of test scores for boys and girls, or correlations, e.g. between the score on a like-math scale and the math-achievement score; or multivariate, e.g. structural models that are fitted on the data.
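One of the screening checks mentioned above, comparing item p-values (proportions correct) across countries to flag possible translation flaws, can be sketched as follows. All figures and country/item codes below are hypothetical, and the fixed deviation threshold is only a stand-in for the more formal differential-item-functioning analyses used in practice.

```python
# Minimal sketch of item-level screening: compare each country's
# proportion-correct (p-value) per test item with the item's cross-country
# mean, and flag large deviations as candidates for translation checks.

def flag_suspect_items(p_values, threshold=0.20):
    """p_values: {item: {country: proportion correct}}.
    Returns (item, country, deviation) triples where a country's p-value
    deviates from the item's cross-country mean by more than `threshold`."""
    flags = []
    for item, by_country in p_values.items():
        mean_p = sum(by_country.values()) / len(by_country)
        for country, p in by_country.items():
            if abs(p - mean_p) > threshold:
                flags.append((item, country, round(p - mean_p, 2)))
    return flags

p_values = {
    "M01": {"AA": 0.61, "BB": 0.58, "CC": 0.93},   # CC suspiciously easy
    "M02": {"AA": 0.45, "BB": 0.48, "CC": 0.44},
}
print(flag_suspect_items(p_values))   # -> [('M01', 'CC', 0.22)]
```

A flagged item is only a candidate for inspection: as the text notes, a deviant p-value could, but need not, indicate a translation error.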
A persistent problem in data processing is how to handle missing data, i.e. when respondents (intentionally or accidentally) have not answered a question. For example, the following cases may occur: missing to be interpreted as 0 (although respondents are explicitly instructed to write a zero if that is the answer to particular open questions, this does not always happen, and bias can be introduced into the statistical estimates); missing to be interpreted as neutral, such as when response scales are used without a neutral answer category; and missing by design, such as when matrix sampling (2) is used. Missing data that result from the design are often replaced by imputed values. Most imputations take place via regression analyses in which a large number of variables are used to predict scores on the variable that contains missing codes. Once the regression weights are known, these are used to predict the scores for the missing answers. How to handle other missing data requires a close inspection of the data, because, as argued above, hypotheses need to be generated about what 'missing' may mean. A very important step in data processing is the calculation of appropriate standard errors for the statistics that are produced. A standard error is an estimate of the sampling inaccuracy. It is used to describe the so-called confidence interval for statistical estimates of population parameters. For simple
(2) Matrix sampling means that a sample of questions is administered to a (sub)sample of respondents.
random samples, this can be calculated simply by dividing the standard deviation of a statistic by the square root of the number of cases. As was explained in the section about populations and sampling, ICEMs are not normally based on simple random samples of students, but rather on so-called cluster samples: first schools are selected, and then students within schools. As the units in these clusters usually resemble each other, the effect of this approach is that less accurate estimates can be made. Hence, using statistical tests from standard software such as SPSS is not correct, and appropriate dedicated procedures therefore need to be developed for each ICEM separately.
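The cluster-sampling point can be put in numbers. With clustered data, the simple-random-sample formula SE = s / sqrt(n) understates the sampling error; one common textbook correction multiplies it by the square root of the design effect, deff = 1 + (m - 1) * rho, where m is the cluster (class) size and rho the intra-class correlation. The figures below are illustrative only, not taken from any particular ICEM, and real studies use dedicated replication procedures rather than this shorthand.

```python
import math

def srs_standard_error(sd, n):
    # Naive standard error, valid only for a simple random sample.
    return sd / math.sqrt(n)

def cluster_standard_error(sd, n, cluster_size, rho):
    # Inflate the naive SE by the square root of the design effect.
    deff = 1 + (cluster_size - 1) * rho
    return srs_standard_error(sd, n) * math.sqrt(deff)

sd, n = 100.0, 4000                      # e.g. an achievement scale, sd 100
se_srs = srs_standard_error(sd, n)       # naive: ~1.58
se_cl = cluster_standard_error(sd, n, cluster_size=25, rho=0.2)   # ~3.81
print(round(se_srs, 2), round(se_cl, 2))
```

With 25 students sampled per school and a modest intra-class correlation of 0.2, the correct standard error is more than twice the naive one, which is exactly why SPSS-style default tests are misleading here.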
Data analysis
The purpose of data analysis in general is to find answers to several types of questions, such as the following. 'Why' questions, e.g. 'Why are the achievement scores in certain countries low?', 'Why are the scores on emerging-practice indicators in some countries much higher than in other countries?'. Questions about hypothesised relationships, e.g. 'Is the availability of ICT related to the extent to which emerging pedagogical practices exist in schools?'. Exploratory questions, e.g. 'Which school factors are associated with the existence of emerging pedagogical practices?'. In the current international descriptive reports, variables can be found that could be of interest for further analysis of the data, e.g. breakdowns of achievement scores by different groups of students: those having computers at home; low, medium and high
social welfare index, etc. The current reports, particularly the PISA reports, also contain initial results of more in-depth analyses. However, these analyses offer no more than a first approach to the analysis of the data. For a comprehensive analysis, the behaviour of a large set of variables needs to be taken into account, which is often done by fitting models on the data (confirmatory, that is, based on an a priori hypothesised structure; or exploratory, aimed at generating a posteriori hypotheses, which is more common: trying out many different models and determining which model fits best). Examples of statistical programs for such modelling are LISREL and AMOS (part of the SPSS package). As the data often have a multilevel character (school, teacher and student level), so-called hierarchical linear modelling (HLM) programs are also used. Finding appropriate models that fit the data well is a time-consuming process, which often takes place after the first descriptive ICEM reports have been published. It should, however, be noted that the OECD included quite a lot of multivariate analyses in the PISA reports. Sometimes special issues of journals or dedicated books are devoted to secondary analyses of the assessment data (e.g. Robitaille and Beaton, 2002). However, there is a lack of up-to-date meta-analyses showing which analyses have been done over the years and which results have been reported. Such an activity is important, among other reasons, because it is not yet very well understood why some variables are highly intercorrelated in some countries but not in others. Also, as mentioned before, the constraints of studies quite often do not allow for enough variables covering the a posteriori research questions. This in itself is not a fundamental problem; rather, the lack of a coherent and long-term research agenda is, or in the words of Martin et al. (2004): 'more work needs to be done to identify the most fruitful variables to capture the dynamic processes that take place within schools and to understand how national and cultural contexts interact with other factors to influence how education is transmitted and received'.
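The multilevel character that HLM programs exploit can be illustrated with a toy variance decomposition: how much of the variation in student scores lies between schools rather than within them. The scores below are hypothetical, and this naive decomposition is only a sketch of the idea; real analyses estimate such components with dedicated multilevel software, as noted above.

```python
# Toy illustration of multilevel structure: split (hypothetical) student
# scores into between-school and within-school variance components and
# report the intra-class correlation (share of variance between schools).

def variance_components(scores_by_school):
    all_scores = [s for school in scores_by_school for s in school]
    grand = sum(all_scores) / len(all_scores)
    within = sum((s - sum(school) / len(school)) ** 2
                 for school in scores_by_school for s in school) / len(all_scores)
    between = sum(len(school) * (sum(school) / len(school) - grand) ** 2
                  for school in scores_by_school) / len(all_scores)
    return between, within, between / (between + within)

schools = [[52.0, 48.0, 50.0],   # three small (hypothetical) schools
           [70.0, 68.0, 72.0],
           [60.0, 61.0, 59.0]]
between, within, icc = variance_components(schools)
print(round(icc, 2))   # -> 0.97: almost all variance lies between schools
```

A high intra-class correlation like this is precisely the situation in which single-level analyses mislead and multilevel models are required.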
Reporting
As argued earlier in this chapter, an important step in any ICEM is the valuation of the results. ICEM reports offer a rich variety of statistics that can help the participants to judge the results for their country. In ICEMs this is usually a relative judgment, that is, country statistics are valued on the basis of comparisons with other countries. A danger in interpreting the statistics is that too atomistic an approach may be taken (focusing on one or a few subject areas) rather than trying to value an education system from a holistic perspective. However, it can be observed that once the final report has been released, absolute judgments also enter the scene, e.g. some people claiming that, despite the high score of a country, the quality of maths achievement is in fact very low. This happened recently in the Netherlands, when a group of researchers from the Freudenthal Institute for Science and Mathematics Education concluded that, despite the country's high international ranking, the level of achievement in the PISA tests was very low.
Secondary analyses
ICEMs result in huge data sets (50 countries with on average
5 000 students per country is not uncommon) that are nowadays easily accessible for several purposes. The background documents on design and methodological issues (sampling, technical standards, psychometrics) also reflect how researchers in the field apply theoretical insights from educational methodology. These data can be of value for examining and illustrating several methodological topics, and for conducting substantive research after the data have been archived, including the following. Conceptualisation (concepts and indicators): every interested person can have access to instruments, conceptual frameworks and data and, hence, can be involved in reflecting on the choices that were made in particular assessments. Questionnaire development: by critically examining questionnaires that have been used in international assessments, forming hypotheses about their strong and weak points and analysing the data to find evidence for these hypotheses, much can be learned about issues of questionnaire development. Sampling: several issues are worth examining in the international data files. Is the accuracy of the population estimates comparable to theoretical expectations? Do education systems where streaming occurs have higher intra-class correlations than systems where this is not the case? Data collection: international comparative assessment projects over the past 30 years have developed a whole set of tips and tricks for collecting high-quality data from large samples of students, teachers and schools in a country.
Data analysis: international comparative data sets nowadays offer a wealth of opportunities to investigate how certain measures behave under different circumstances. Questions include: do attitude measures from Japanese and UK data show the same underlying dimensions? Substantive questions: international comparative assessments typically cover a broad range of topics. For instance, the tests for measuring student achievement may contain hundreds of questions covering a large part of the mathematics domain. Detailed examination of these items may reveal much more than the overall test statistics which are published in the international reports. Whereas in earlier days the use of international databases was complicated (often the data files, which were stored on tape, did not even fit on the hard disks of mainframe computers), nowadays anyone with a relatively simple laptop can download the databases and conduct analyses. Such analyses usually require an in-depth understanding of technical details, such as: which sampling weights are available in the data files and how these should be used; how to calculate standard errors in a correct way, taking into account the sampling design of the studies; and how to apply the so-called plausible values that are stored in the files. However, by carefully studying the technical documentation and user guides that are available for the ICEMs, secondary analyses are possible for almost everyone with some affinity for statistics.
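How sampling weights and plausible values combine in a secondary analysis can be sketched as follows: the statistic is estimated once per plausible value using the final student weight, the estimates are averaged, and the between-plausible-value variance (times 1 + 1/M) is added to the sampling variance. All numbers below are hypothetical, and the sampling variance itself would in practice come from the replicate weights in the data files; here a placeholder value stands in for that step.

```python
# Sketch of combining M plausible values (PVs) with sampling weights.

def weighted_mean(values, weights):
    return sum(v * w for v, w in zip(values, weights)) / sum(weights)

def combine_plausible_values(pv_columns, weights, sampling_variance):
    """pv_columns: one list of scores per PV; weights: final student weights.
    Returns the combined point estimate and its total error variance."""
    m = len(pv_columns)
    estimates = [weighted_mean(col, weights) for col in pv_columns]
    point = sum(estimates) / m
    between = sum((e - point) ** 2 for e in estimates) / (m - 1)
    total_variance = sampling_variance + (1 + 1 / m) * between
    return point, total_variance

weights = [1.0, 2.0, 1.0]                 # final student weights (hypothetical)
pvs = [[500.0, 520.0, 480.0],             # PV1 for three students
       [505.0, 515.0, 485.0],             # PV2
       [495.0, 525.0, 475.0]]             # PV3
point, variance = combine_plausible_values(pvs, weights, sampling_variance=4.0)
print(point)   # -> 505.0
```

Averaging over plausible values rather than picking one of them is what keeps the measurement uncertainty of the test in the reported error variance.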
PISA data can be explored online (3). A tool that may be useful for secondary analyses of IEA data is the IEA International Database Analyzer (IEA IDB Analyzer), a plug-in for SPSS that helps to handle the data correctly; it can be found at http://www.iea.nl/iea_software.html.
use of the Internet as reported by Japanese students (see Chapter 4). Self-ratings: quite often in international as well as national ICT monitors, instead of using objective standardised tests, students and/or teachers are asked to rate their own ICT competencies. Although such measures may be fine as indicators of self-confidence, they are often used as proxies for real competences. Such use is unwarranted, as self-ratings are prone to bias (Stromsheim, 2002; Ross, 2006). Teacher perceptions: some assessments in the past included teachers' perceptions of the impact of ICT on, for instance, the motivation and skills of students. The validity of such measures is highly questionable and the ratings are prone to wishful thinking. Hence, in future assessments, such measures should only be used as an indicator of teachers' attitudes towards ICT.
be calculated in order to provide a quantitative estimate for the targeted indicator. Hence these will be called indicator statistics in this study. For instance, statistics for the indicator definition just mentioned might be the mean number of computers per school in each country, or the median number of computers. Another relevant statistic might be the mean number of students per available computer in a country's education system. It is conceivable that data are available for which no indicator statistics exist, which can quite often happen in international comparative assessments due to space limitations in the final reports. Indicator areas may also exist for which no indicator definitions
are available. Hence, these terminological distinctions were relevant for the purpose of our study. The distinction between primary and secondary indicators was also introduced, and the problem was mentioned of defining appropriate secondary indicators before the questions for the diagnosis phase are generated. It was pointed out that, in order to avoid undesirable impacts on educational decision-making, holistic monitoring is needed. It was also argued that multilevel monitoring may be an important option for the future. Several potential advantages of online data collection were mentioned that may play a role in further discussions about a future EU ICT monitor.
References
Brecko, B. N. and Carstens, R. (2006). Online data collection in SITES 2006: paper versus web survey. Do they provide comparable results? Paper presented at the Second International Research Conference: Proceedings of the IRC-2006, Amsterdam.

Martin, M. O., Mullis, I. V. S., Gonzalez, E. J. and Chrostowski, S. J. (2004). TIMSS 2003 international science report: findings from IEA's trends in international mathematics and science study at the fourth and eighth grades. Chestnut Hill: TIMSS & PIRLS International Study Center, Boston College.

Pelgrum, W. J. and Plomp, T. (2002). Indicators of ICT in mathematics: status and co-variation with achievement measures, in: Robitaille, D. F. and Beaton, A. E. (eds), Secondary analyses of the TIMSS data. Dordrecht: Kluwer Academic Publishers.

Ross, J. A. (2006). The reliability, validity and utility of self-assessment, Practical Assessment, Research and Evaluation, Vol. 11, No 10.

Stromsheim, J. (2002). Information and communication technology (ICT) competencies of pupils and teachers (unpublished).
What do we know about the effective uses of information and communication technologies in education in developing countries?
Michael Trucano (1)
World Bank, ICT Education and Social Sector Innovation Specialist, infoDev
Executive summary
infoDev maintains a series of knowledge maps that attempt to document what is known and what is not known about ICT use in education. These knowledge maps reveal that, despite a decade of large investment in ICT to benefit education in OECD countries, and increasing use of ICT in education in developing countries, important gaps remain in our knowledge. In addition, there appears to be a dearth of useful resources attempting to translate what is known to work and not to work in this field for policymakers and donor staff working on education issues in developing countries, especially on issues related to education for all and other education-related millennium development goals. A lack of reliable data on the impact of ICT on learning and achievement in developing countries, as well as a lack of useful indicators and methodologies to measure such impact, hampers policy guidance in this area. A mismatch also exists between the methods used to measure the effects of ICT use in education in developing countries and the types of learning styles and practices that the introduction of ICT is meant to promote, or at least facilitate. Despite the lack of reliable impact evidence, recent infoDev surveys of World Bank support for ICT components in projects in its education portfolio, and country-level surveys sponsored by infoDev of ICT use in education in Africa and the Caribbean, document tremendous growth in the use of and demand for ICT in the education sector. This mismatch between weak evidence and growing use raises many questions about the nature of ICT-related investments in the education sector in developing countries.
(1) NB: The findings, interpretations and conclusions expressed herein are entirely those of the author and do not necessarily reflect the views of infoDev, the donors of infoDev, the International Bank for Reconstruction and Development/World Bank and its affiliated organisations, the Board of Executive Directors of the World Bank or the governments they represent. The World Bank cannot guarantee the accuracy of the data included in this work.
infoDev maintains a series of knowledge maps outlining what is known, and what is not, about the use of information and communication technologies (ICT) in education. These knowledge maps reveal that, despite a decade of heavy investment in ICT to benefit education in OECD countries, and increasing use of ICT in education in developing countries, significant gaps remain in our knowledge. In addition, there appears to be a dearth of useful resources for policymakers and donor staff working on education issues in developing countries, identifying what is known to work and not to work in this field, especially in support of education for all (EFA) and other education-related millennium development goals (MDGs) (see Trucano, 2005). The knowledge maps, which are used to help guide discussions between donors and governments exploring the use of ICT in the education sector, investigate 10 topics (impact of ICT on learning and achievement, monitoring and evaluation, equity issues, costs, current projects and practices, specific ICT tools, teaching and ICTs, content and curriculum, policy issues, and school-level issues). The key findings are divided into four major themes.
Widely accepted, standard methodologies and indicators to assess the impact of ICT in education do not exist. A disconnection is apparent between the rationales most often presented to advance the use of ICT in education (to introduce new teaching and learning practices and to foster 21st century thinking and learning skills) and their actual implementation (predominantly for use in computer literacy and dissemination of learning materials).
In practice, ICTs are most often used in education in less developed countries (LDCs) to support existing teaching and learning practices with new (and, it should be noted, often quite expensive) tools. While the impact on student achievement is still a matter of reasonable debate, a consensus seems to have formed that the introduction and use of ICT in education can help promote and enable educational reform, and that ICT is a useful tool both to motivate learning and to promote greater efficiencies in education systems and practices.
evaluation studies of key initiatives like NEPAD e-Schools, is a first step in a larger, ongoing, systematic and coordinated initiative to track developments in technology use in the education sector to help inform a wide variety of stakeholders interested in the topic as they seek solutions to larger, more fundamental educational and development challenges in the years ahead.
Key findings
ICT use in schools in Africa and the Caribbean is growing rapidly (from an admittedly low base). This growth is largely the result of bottom-up initiatives, often facilitated by civil society organisations. Barriers to use include high costs (especially of connectivity), poor infrastructure, insufficient human resource capacity, a variety of disincentives for use, and inadequate or insufficient policy frameworks. The process of adoption and diffusion of ICT in education in Africa is in transition and widely variable. A marked shift seems to be emerging from a decade of experimentation in the form of donor-supported, NGO-led, small-scale pilot projects towards a new phase of systemic integration informed by national government policies and multistakeholder-led implementation processes. This shift from projects to policies, and the more systematic development that that implies, would not be possible without the growing commitment to ICT in education on the part of government leaders across the continent (Farrell and Isaacs, 2007a). ICT use in education in the Caribbean, and the context of its
use, varies only within a limited range. ICT use in schools in the region is primarily centred on basic ICT literacy instruction and computer use.
Planning for ICT use in education in developing countries: a way forward for policymakers
As an aid to education policymakers in developing countries, who are under tremendous pressure from parents, vendors, business, technology advocates, etc. to provide schools with a variety of ICT, infoDev, Unesco and other partners have developed and utilised an ICT-in-education toolkit as part of policy consultations in 26 countries (see Haddad, 2007). Feedback from toolkit users consistently states that provisioning ICT for use in schools, no matter how hard and expensive initially, is the easiest and cheapest element in a series of policy choices that could ultimately make ICT use sustainable and/or beneficial for learners. Indeed, the appropriate and effective integration of ICT in schools so as to affect teaching and learning practices is much more complicated. The proliferation of ICT use outside the school, especially the growing use of mobile phones, has yet to have any meaningful impact on the use of ICT within formal education systems. To help guide policy choices around technology use and choice in education in developing countries, a more robust set of shared indicators and evaluation methodologies must be developed and tested in real-world circumstances. As discussed in infoDev's Monitoring and evaluation of ICT in education projects: a handbook for developing countries, evidence to date suggests that policymakers
and project leaders should think in terms of combinations of input factors that can work together to influence impact. Coordinating the introduction of computers with national policies and programmes related to changes in curriculum, pedagogy, assessment and teacher training is more likely to result in greater learning and other outcomes (Wagner et al., 2005). The process of integrating ICT into educational systems and activities can be (and typically is) arbitrary, ad hoc and disjointed, as evidenced through recent infoDev surveys of ICT use in education in 75 developing countries (Farrell et al., 2007a, 2007b, 2007c; Trucano, 2007). Such 'adhocracy' often results in ineffective, unsustainable and wasteful investments. On the other hand, a comprehensive set of analytical, diagnostic and planning tools, such as those promoted through the ICT-in-education toolkit, can force a certain discipline on the process. 'The use of tools does not make policy formulation scientific and rational. Nor will it replace the political/organisational nature of policy formulation' (Haddad, 2007). That said, it is clear that the current tools available to help policymakers make informed decisions about technology choices for schools are quite primitive. Reasonable minds can argue over what is meant by impact and performance, but substituting belief for scientific inquiry does not seem to be a particularly responsible course of action. The power of ICT as an enabler of change, for good as well as for bad, is undeniable. However, the use of ICT in education in many developing countries, especially the poorest of the poor, is associated with high cost and potential failure. Simply wishing away
the existing local political economy of the way technology is implemented and supported in schools does not mean that it actually goes away. With more rigorous analysis and evidence of impact, and better decision tools,
developing country policymakers and their partners in the international community can make wiser and more sustainable choices in deploying ICT to enhance access to, and quality of, education at all levels.
References
Farrell, Glen and Isaacs, Shaka (2007a). Survey of ICT and education in Africa: a summary report, based on 53 country surveys. Washington, DC: infoDev/World Bank. Available at http://www.infodev.org/en/Publication.353.html, last accessed on 22.09.2009.

Farrell, Glen, Isaacs, Shaka and Trucano, Michael (2007b). The NEPAD e-Schools demonstration project: a work in progress (a public report). Washington, DC: infoDev/World Bank; Vancouver, British Columbia: Commonwealth of Learning.

Farrell, Glen, Isaacs, Shaka and Trucano, Michael (eds) (2007c). Survey of ICT and education in Africa: Vol. 2: 53 country reports. Washington, DC: infoDev/World Bank. Available at http://www.infodev.org/en/Publication.354.html, last accessed on 22.09.2009.

Gaible, Edmond (2007). Critical review and survey of ICT in education in the Caribbean. Washington, DC: infoDev/World Bank.

Haddad, Wadi (2007). ICT in education toolkit for policymakers, planners and practitioners (version 2.0). Washington, DC: infoDev/World Bank; Paris: Unesco.

Trucano, Michael (2007). ICT components in World Bank education projects (2001-04). Washington, DC: infoDev/World Bank.

Trucano, Michael (2005). Knowledge maps: ICTs in education. Washington, DC: infoDev/World Bank.

Wagner, Daniel et al. (2005). Monitoring and evaluation of ICT in education projects: a handbook for developing countries. Washington, DC: infoDev/World Bank.
CHAPTER III

CONCEPTUAL FRAMEWORKS
A framework for understanding and evaluating the impact of information and communication technologies in education

ICT to improve quality in education: a conceptual framework and indicators in the use of information communication technology for education (ICT4E)

A conceptual framework for benchmarking the use and assessing the impact of digital learning resources in school education
A framework for understanding and evaluating the impact of information and communication technologies in education
Katerina Kikis, Friedrich Scheuermann and Ernesto Villalba

During the last decades, considerable resources have been invested in hardware, software, connections, training and support actions with the aim of improving the quality of teaching and learning. A major tenet of the policies that supported the introduction of information and communication technologies (ICT) in education was that they can become catalysts for change. Undoubtedly, some countries have made considerable progress in bringing networked ICT into education and have made it possible for teachers and learners to use them on a daily basis. In many other cases, however, implementation policies have not been the consequence of systematic analysis and reflection. As a consequence, we still know little about the impact and effectiveness of ICT in education. To close this gap, the Centre for Research on Lifelong Learning based on benchmarks and indicators (CRELL) established a research project on measuring ICT performance and effectiveness in education. The project explores the effects of ICT on learning outcomes, aiming to stimulate debate on educational policy needs. This paper presents the first step in the process: a conceptual framework to guide the analysis and orient the work towards the study of ICT effectiveness.
important consequences in the articulation of educational policies. The identified gap in assessing the impacts of ICT is especially unsatisfying for policymaking stakeholders who aim at defining evidence-based strategies and regulatory measures for effective ICT implementation and efficient use of resources. Emerging technologies (e.g. smartboards, mobile devices) stimulate change in the contextual conditions for learning. Computer equipment and software are becoming increasingly available inside educational establishments as well as in private households, not only for the school-related activities of young people, but also for
learning at all stages in life. Instructional practices are changing due to new possibilities to access and share information, new roles and new pedagogical paradigms. Furthermore, we observe new ways of learning in the context of the new educational software applications and tools provided, the digital resources available, etc. (see, for example, Redecker, 2009). This justifies once more the need to study the effects of ICT at different levels and to examine the implications for the individual and society. More insights into the multifaceted effects are needed to enable us to conduct cost-benefit studies in an appropriate manner and to react to necessary changes by updating national curricula, designing teacher training programmes and revising school and classroom implementation, keeping in mind that ICT is often a catalyst for change but does not itself determine the direction of change. There is a lack of comprehensive studies of the complex interactions between various types of ICT implementation and the effects of other factors such as school-based interventions, socioeconomic status and expenditure. It appears that, firstly, we are in need of instruments which will allow us to assess and monitor the state of use and the changes effected. Secondly, we need to identify the various sources and gaps in a systematic manner in order to determine the data available and desired. A number of ambitious initiatives exploring the scope of influencing factors have already been carried out (see, for example, Ramboll Management, 2006; Underwood et al., 2007). They provide a good basis for going one step further and designing a systematic approach to identify the use of ICT and its effects at all the different levels and stages concerned.
In many cases, in the context of school education, the massiveness of government top-down ICT-related programmes and reforms implied that policymakers were expecting schools to change sooner rather than later. Unlike books or blackboards, digital technologies tend to age and even become unusable within just a few years. Furthermore, technology changes very fast, and even if older technology is still usable it can be incompatible with new digital products and services or be unsuitable for their full exploitation. Overall, this top-down approach carried its own risks, because the heavy investments could pay off only if schools were ready to start using ICT in productive ways immediately. The massiveness of the programmes and reforms introduced also implied that the changes anticipated were envisaged to take place not just in some or even in the majority, but in all schools within a system. The reformers probably pushed ahead because they wanted to minimise the risk of creating inequalities between schools which make heavy use of ICT and those that, for one reason or another, do not. The scenario that assumed that all schools would start using ICT in productive ways as soon as teachers and pupils put their hands on it was, however, not very realistic. What was more plausible was that the top-down programmes and reforms would gradually help more and more teachers and pupils alter their teaching and learning practices. According to this scenario, the early adopters who used ICT prior to the implementation of massive top-down programmes and reforms will soon be joined by an early majority, and the sceptics, what Rogers (1995) called the late majority, will eventually follow them. As teachers and pupils convert from being non-users to regular users of ICT for teaching and learning, they in
parallel learn how to use them in optimal ways, i.e. as they learn something new, they learn new ways to learn. In other words, according to this scenario, ICT will penetrate and change schools in successive stages.
how teachers and pupils actually use ICT (utilisation indicators), what the outcomes of their use are (outcome indicators) and, more recently, what the impact of their use on school learning is (learning impact indicators). Utilisation indicators often measure how often teachers and students use ICT for school teaching and learning, what they use and for what purposes (for example, what kind of software they use for subject teaching and learning), and how they use it (for example, whole-classroom teaching, group work, individual work, etc.). Outcome indicators often focus on the attitudes of teachers and pupils towards ICT, and their confidence and skills in using ICT. They are also starting to cover wider strategic practices, such as the use of ICT for lifelong learning and professional development, and the assessment of actual ICT skills is starting to be developed in some areas. It is, however, much less common to use indicators to measure the impact of the use of ICT on pupils' attainment in core curriculum subjects. The development and use of indicators is popular among policymakers because it provides them with a wealth of easy-to-use information. However, it is important to bear in mind that the use of indicators has its limitations: generally, indicators support the assessment of a current state, but usually do not cover other important issues, such as the reasons for not using ICT or the mental effects on learners and learning. Moreover, comparative surveys typically only provide a snapshot of a given situation at a very specific moment in time. Furthermore, the choice of mainly input indicators is often driven by political priorities and the philosophy and concerns of the bodies, often government supported, issuing such
studies. Therefore, the indicators tend to focus on areas where there has been a recent policy initiative, and they tend to ignore other areas which, although highly relevant, are not included in the current policy agenda or may reveal disturbing policy failures. For example, using the ratio between pupils and computers and the ratio between teachers and computers as input indicators draws a picture which may be quite different from the picture that would result if the teacher:pupil ratio were also included as a third indicator. From a wider perspective, the indicators approach often reflects the wider top-down, outside-in mentality that was adopted through the implementation of massive programmes and reforms. In a way, it is a consistent part of a wider top-down policymaking culture which assumes that the starting points for generating school change are the actions of policymakers (Kollias and Kikis, 2005). From a European perspective, the development and use of indicators is highly relevant, especially for the development of the monitoring policies established by the European Union. The Lisbon strategy set up the open method of coordination (OMC) in education and training (among other fields). This implies that Member States agreed to be monitored on a series of issues to allow for mutual policy learning. In 2002, five benchmarks were established as the average level to achieve by 2010, and several indicators were proposed for monitoring purposes. In addition, the recent emphasis on evidence-based policies in education (see European Commission, 2007a) (1) also provides a
(1) European Commission (2007). Towards more knowledge-based policy and practice in education and training. SEC (2007) 1098. Luxembourg: Office for Official Publications of the European Communities.
strong policy support for the creation of monitoring tools in education. In 2007, the Commission published the coherent framework of indicators (European Commission, 2007b). This communication established 16 indicators, adopted by the European Council, that can be used to monitor Member States' progress towards the Lisbon goals in education and training; one of these indicators is ICT skills. At present, there is a need to place this indicator within a wider context of ICT use and integration. Likewise, other European programmes, such as i2010, aim at promoting the positive contribution of ICT to the economy, society and quality of life. A framework is needed that allows the impact of ICT to be evaluated for this purpose, particularly its contribution in educational settings.
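The earlier point about input indicators can be illustrated with a toy calculation (the numbers below are invented, not drawn from any survey): two systems with identical pupil:computer ratios can look very different once the teacher:pupil ratio is added as a third indicator.

```python
def ratios(pupils, teachers, computers):
    """Compute the three input indicators discussed in the text."""
    return {
        "pupils_per_computer": pupils / computers,
        "teachers_per_computer": teachers / computers,
        "pupils_per_teacher": pupils / teachers,
    }

# Hypothetical systems: identical ICT provision, very different staffing.
country_a = ratios(pupils=100_000, teachers=10_000, computers=10_000)
country_b = ratios(pupils=100_000, teachers=4_000, computers=10_000)

# Identical on the classic input indicator ...
assert country_a["pupils_per_computer"] == country_b["pupils_per_computer"] == 10.0
# ... yet classroom conditions differ sharply: 10 vs 25 pupils per teacher.
print(country_a["pupils_per_teacher"], country_b["pupils_per_teacher"])
```

The sketch makes the text's warning concrete: a monitoring exercise built only on the first indicator would report the two systems as equivalent.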
Additionally, national experiences and studies are a good source of information. These, however, do not allow for straightforward comparison across countries. They constitute case studies and could be used as lessons learned. For the present paper, the focus remains on comparative, international sources of information. The data compiled by international bodies might be instrumental in providing the context for the effects of ICT in education: any effect necessarily has to be related to the context in which it appeared. In this regard, several international bodies collect information on ICT infrastructure. The OECD, for example, publishes the Communications outlook and the Information technology outlook every two years. These two publications provide an overview of the situation in the telecom market. They contain plenty of information on Internet availability and infrastructure as well as the dynamics of the industries supplying IT goods. Eurostat also provides a substantial amount of statistics through the information society statistics survey (ISS). The ISS is carried out in two main surveys, pertaining to ICT usage in enterprises and ICT usage in households and by individuals. The aggregate numbers can be broken down by age group, sex, educational level, employment situation and region. However, the information provided is limited. In terms of e-skills, for example, it is only possible to obtain the percentage of people who report having done tasks such as 'installed a new device' or 'written a computer program' in the last three months, in the last year or never. Despite the efforts of Eurostat in keeping up with the pace of change and adapting to new developments in ICT, some of the items become
obsolete relatively fast and have to be replaced, which makes it difficult to track changes over time. The survey is mainly directed at assessing ICT and Internet use in the working-age population and thus has limited value for education. The ICT usage in enterprises survey only retrieves information on the so-called core sectors of the economy, which means that services such as education are not covered. The ISS, therefore, can be used to provide a picture of the context in which the effects of ICT in education can be assessed, but it would need to be adapted to allow the study of ICT effects in education. Comparative studies concerning education are carried out by the OECD and the IEA on a regular basis. Their main focus is the assessment of student achievement in different competences: reading, mathematics and science. They also investigate ICT use in education. PISA is probably the best known survey of this type. It has had an important political impact, and PISA results are used within the OMC to monitor progress towards the Lisbon objectives (the percentage of low-skilled readers is one of the five benchmarks agreed by the Council in 2002). PISA has a specific module on ICT. The module has been modified in each of the three rounds of PISA (2000, 2003, 2006) and will probably have a different version in 2009. It strives to gather information from 15-year-olds (the PISA target group) on the use they make of computers and their self-reported capacity for doing certain computer tasks. In 2004, the OECD published a report specifically looking into PISA and ICT: Are students ready for a technology-rich world? The report mainly looks into the effects of the use of ICT on student
performance, but it lacks information on how, and in what way, the computer has been used, because of the limitations of the ICT module questionnaire. TIMSS and PIRLS, carried out under the auspices of the International Association for the Evaluation of Educational Achievement (IEA), also gather specific information on the use of ICT. In TIMSS, for example, information on the use of ICT is linked to the subject, which makes it more feasible to explore the impact of the educational use of ICT on student performance. But we have no information on how the computer has been used. In terms of thematic studies, there are a number of initiatives looking specifically into aspects of ICT in education. Empirica (2006), in a study financed by the European Commission, explores the access to and use of ICT in European schools in 2006. It presents information for 25 EU Member States, Norway and Iceland, but it does not look into student results, so it is not possible to study this important aspect of ICT impact. Another relevant study is SITES, which, like TIMSS, is under the auspices of the IEA. The survey explores the use of computers in teaching by sampling teachers, principals and those responsible for ICT in schools. It does not look into student achievement, but it does look at the perceived impact of ICT on students from the teachers' perspective.
and comparison between countries has to be done cautiously. Trucano (2005) also reviews a series of studies on the impact of ICT in schools. He likewise concludes that the impact of ICT use on learning outcomes is unclear and calls for more widely accepted methodologies and indicators to assess the impact on education (Trucano, 2005, p. 1). In a similar vein, Cox and Marshall (2007) point out that studies and indicators on ICT do not reflect sound effects. They maintain that this relates mainly to three aspects: opposing views on ICT and education; different perspectives on, and goals for, innovation in learning and learning contexts; and missing planning strategies for educational change. Current approaches to evaluating ICT in education often focus on only a few aspects, such as input, utilisation and outcome/impact. Through the use of indicators, they can assess how the input (e.g. funding, infrastructure, resources) relates to the impact. These models may serve several purposes, but they fall short of assessing the integration of ICT in policies and curricula, particularly because they often use a snapshot, one-time and one-level approach. Furthermore, evaluation has to attend to the different stages of the implementation process and analyse changes in the culture of the school system at the micro level (pupils) as well as at the meso (school) and macro (curriculum/attainment targets) levels. Therefore, a conceptual framework is needed to look into the various dimensions of ICT use and to discuss possibilities for measuring the effects of the use of electronic media
in education. Such an orientation aims at constructing a framework for looking at the relevant domains and the interdependence between components related to ICT in educational processes from a holistic perspective. This paper provides a first attempt at an innovative approach to the study of the impact of ICT and ICT innovation in learning. It further provides a multidimensional framework for analysis which can locate heterogeneous indicators from different studies and data sources. This provides a coherent structure to guide the exploration of data and the mapping of complex relationships.
10-year research project (2), identifies five phases of technology integration into schools (see Dwyer et al., 1991). These phases, as described by a more recent report on school technology and readiness prepared by the CEO Forum on Education and Technology (CEO, 1999, p. 14), are: entry, adoption, adaptation, appropriation and invention. These models focus on what teachers and pupils actually do when they use ICT in schools, something that the indicators approach deals with only in a limited way (for example, a common utilisation indicator is the average hours of weekly use for teaching). The above models, when used in combination with indicators such as those described earlier, may offer outcomes of more explanatory power regarding the integration of ICT in education. They may also offer a more solid basis for developing models and other instruments to study the capacity of educational systems to absorb ICT-related pedagogic innovations. For example, reviewing the above technology integration phases in relation to what we defined as ICT-related pedagogic innovations in schools, one can identify the ACOT model's phases of appropriation and invention as those offering the most promising potential for the diffusion of pedagogic ICT-related innovations in schools. A more recent effort to use indicators within a model of ICT integration in education was made in the context of a project carried out by Unesco's Institute for Information Technologies in Education (3) in 2001. The Morel matrix (4), which was adopted as an instrument for
(2) See http://www.apple.com/education/k12/leadership/acot/library.html (3) See http://www.iite.ru/ (4) This matrix was named after Prof. Raymond Morel from Switzerland, who developed it.
evaluating the degree to which ICT has been integrated in an educational system, is based on the assumption that this process progresses through four distinct successive phases: (a) emerging, (b) applying, (c) integrating and (d) transforming. Unesco has further developed this approach to help schools determine their stage of progress in implementing ICT. Variations of the matrix have been used in comparative studies of ICT implementation at various levels of education (see Unesco, 2003a). As with the ACOT model, the transition from one phase of ICT implementation to another in the above matrix presupposes the emergence and diffusion of several types of innovations. Pedagogic innovations are implicitly assumed to be the driving force, in that they are a sine qua non for any other innovation to have a meaningful impact on school teaching and learning. A somewhat newer version of the stages approach is exemplified in e-maturity models (see, for example, Butt and Cebulla, 2006; Underwood and Dillon, 2004; Underwood et al., 2007). Such models focus on what teachers and pupils actually do when they use ICT in schools, something that the indicators approach deals with only in superficial ways. When such models are used to guide evaluation, in combination with the indicators approach, they may offer outcomes of more explanatory power regarding the integration of ICT in education. They may also offer a more solid basis for developing models and other instruments to study the capacity of educational systems to absorb ICT-related pedagogic innovations. Besides the different stages, there are several levels to be considered when studying the effects of ICT. The indicators
and the emphasis of the domains studied may vary depending on which of these levels are taken into consideration: the macro, meso and micro levels. The macro level refers to aspects at the highest level of aggregation. At this level, indicators would refer to global or national socioeconomic characteristics related to the use and integration of ICT in education. In a way, the macro level could be seen as the specific ICT context in which the meso and micro levels are situated. The meso level refers to aspects at the institutional level (schools, organisations, universities, etc.), an intermediate level that shapes the relationship between micro and macro level aspects. The micro level refers to the individual; it portrays individuals in their use of ICT. These levels present different focuses and relate to each other in that lower levels are integrated into (belong to) higher levels (an individual is in a school, a school is in a region, a region is in a country, etc.). These three levels determine the type of indicators that we might use within each of the domains. Some indicators at the macro or meso level might simply be aggregations of micro level data. For example, the percentage of those reporting the use of computers for instruction in a country is the result of the aggregation of individual (micro level) teachers' answers. If we were to analyse these data at the micro level (the impact of ICT on a specific individual teacher, for example), the aggregate level indicator would serve to contextualise his or her answers. Some indicators, on the other hand, might belong exclusively to a specific level, such as the existence of a national policy to have all school materials digitised.
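The aggregation just described can be sketched in a few lines of Python (the survey records below are invented for illustration): micro-level teacher answers are rolled up into a macro-level country indicator.

```python
from collections import defaultdict

# Micro level: one record per teacher (hypothetical survey answers).
answers = [
    {"country": "A", "uses_ict_for_instruction": True},
    {"country": "A", "uses_ict_for_instruction": False},
    {"country": "A", "uses_ict_for_instruction": True},
    {"country": "B", "uses_ict_for_instruction": True},
    {"country": "B", "uses_ict_for_instruction": True},
]

# Aggregate to the macro level: % of teachers per country reporting use.
counts = defaultdict(lambda: [0, 0])  # country -> [users, respondents]
for record in answers:
    counts[record["country"]][1] += 1
    counts[record["country"]][0] += record["uses_ict_for_instruction"]

macro_indicator = {c: round(100 * users / n, 1) for c, (users, n) in counts.items()}
print(macro_indicator)  # {'A': 66.7, 'B': 100.0}
```

Read at the macro level, only the two percentages remain; read at the micro level, each individual record can still be contextualised against its country's aggregate, as the text notes.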
Conceptual framework
All in all, we can say that learning and teaching practices, for a variety of obvious reasons, need to be assessed in different ways. New tools and instruments are required to monitor both achievements and progress made in the context of ICT, but there is no clear position yet on adequate indicators, instruments and scales for measurement. A conceptual framework would help to alleviate this deficit. There is a need for a thorough, rigorous and multifaceted approach to analysing the impact of ICT on education and students' learning (Cox and Marshall, 2007; also Kikis and Kollias, 2005; Aviram and Talmi, 2004). Of interest here is that, as early as 1997, Collins pointed out that research into the contribution of ICT to students' thinking and acting reflects the social and epistemological beliefs of the research community. This has serious implications for evidence-based policies. Major policy analyses that encompass a wide range of settings and look for commonalities and differences as a result of systemic conditions are missing from most previous research agendas. Recent meta-analyses on ICT and attainment suggest that the most robust evidence of ICT use enhancing learning came from those studies that focused on specific uses of ICT (Cox and Marshall, 2007, p. 60). The purpose of a conceptual framework should be to provide an orientation for any kind of measurement required in the decision-making process. A framework serves as the basis for modelling an appropriate assessment approach and for the design of methodologies and instruments. It connects to all aspects of empirical enquiry. When drafting a framework, we would therefore expect
that, contrary to specific models, a conceptual framework acts as a reference which is flexible and adaptable to the purpose of the study to be carried out. To take an example: if we want to study whether technology is having a positive impact on educational performance, a framework would help us to identify the various domains in the given context to be looked at (such as ICT availability and devices used, pedagogies applied in which subject areas, etc.) and the possible perspectives to be taken into account (school level, individual level, etc.). This is important for ensuring that all relevant aspects are considered and that a systematic approach is followed that is transparent and comprehensible for the stakeholders involved. It provides a holistic view and supports the setting of standard orientations when defining the evaluation methodology and selecting appropriate instruments for measurement. In more complex evaluation settings, when conclusions are to be based on a combination of surveys conducted by different research teams worldwide, it would, ideally, also contribute to a coherent common approach to the identification of the phenomena to be analysed and their evidence-based interpretation in the light of a common understanding of the aspects to be studied. In the case of the assessment of ICT effects in education, this benefits the more effective valorisation of the evaluation studies carried out and the quality of analytical work. A conceptual framework could furthermore act as the basis for the design of monitoring tools aimed at informing policy on emerging trends, their effects and their implications for current or future education. It is therefore oriented towards medium- and long-term policies and benchmarks defined for ensuring effective integration into
society. A framework can facilitate the construction of models to explain ICT effects in education, and the adaptation of instruments and data sources that are further analysed and reported (see Figure 1).
A conceptual framework which takes into account the political context of European education is given in Figure 2 for further discussion. It covers several domains relevant to specific EU policy priorities. However, the policy goals and priorities are presented here as an example and could be adapted to any other policy priority which might be dominant in other countries. The framework is divided into domains, indicators and stages. The domains identified by the conceptual framework represent the relevant areas of study. When assessing the effects of ICT in education, such domains should cover the complete range of analytical constructs to be studied in the context of the integration and use of ICT in education. Ideally, each domain should be exclusive and not overlap with other domains. Based on the literature review carried out between 2007 and 2008, covering European projects, case studies and research reports, the following six dominant blocks were identified in the research discussions. Policies: by this term we understand any type of strategy relating to the implementation of ICT and its effective use. This could take place at the national policy level as well as at the institutional level, such as in universities, schools, etc. Resources: this domain refers to the ICT infrastructure in terms of hardware, software, network capacities and any type of digital resource used for teaching and learning. Curriculum: by curriculum we understand the level of ICT integration in the curriculum, including courses on how to use ICT effectively. Organisation: this term refers to organisational measures to implement ICT and its use. One example
is the use of content/learning management systems for educational purposes. Teaching practices: this domain characterises the use of ICT for teaching activities, pedagogical practices, etc. Learning: like the definition provided above, learning focuses on the use of ICT by the learner (student, etc.). It is possible to find specific indicators for each of the domains that describe the state of the domain and that vary from context to context and case to case. For example, in the domain referring to resources, one possible aspect to look at would be ICT availability. As indicated above, the specific indicators to look at here would be determined partially by the level of analysis (macro, meso or micro) to be undertaken. At the macro level, it would be possible to use indicators such as broadband penetration, ICT availability in the country or the percentage of educational software sales in a country, among others. At the meso level, indicators would be slightly different and would refer specifically to school contexts (or to whatever other meso level entity is in focus). In our example, possible indicators would include the presence of a LAN in schools or the percentage of schools reporting having educational software. At the micro level, indicators would refer to individuals in relation to the availability of ICT, for example individuals reporting on having educational software at home and the uses made of it. Furthermore, the present framework permits the identification of ICT maturity stages. Each of the different indicators identified would have certain levels that would suggest a specific stage of ICT maturity. Continuing with our example, ICT resources in schools might have reached a degree that would allow for a transforming stage (let's say all schools in a country have an adequate supply of ICT tools). However, other indicators, for example those relating to the curriculum, might not be as advanced, or there may be no teachers trained in the pedagogical use of ICT; these latter indicators would denote an emerging stage. Under this scope, the framework provides a holistic picture of the range of aspects related to ICT. It is important to note that the different indicators would have a different degree of aggregation depending on the analysis that we want to draw from them (see Figure 1). The framework provides the pre-stage for the analysis, allowing stakeholders to see the relevant aspects in a holistic picture before a specific analysis is carried out. As such, individual reporting of the number of computers at home, for example, can be aggregated at the national level to analyse country-specific patterns in relation to use and possession, or can be used at the individual level to carry out studies on the use and possession of ICT by individuals in relation, for example, to their age. Our framework permits the review of the results of the analysis in light of the bigger picture of ICT within a given setting. This facilitates the consideration of aspects not specifically accounted for in the original level of analysis, but which might play an important role in understanding the results.
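The stage-identification idea can be sketched as follows (a minimal sketch: the normalised scores and thresholds are invented for illustration and are not part of the framework itself). Each domain's indicator score is mapped to one of the four stages, making uneven maturity across domains visible at a glance.

```python
STAGES = ("emerging", "applying", "integrating", "transforming")

def stage(score, thresholds=(0.25, 0.5, 0.75)):
    """Map a normalised indicator score in [0, 1] to a maturity stage."""
    for i, threshold in enumerate(thresholds):
        if score < threshold:
            return STAGES[i]
    return STAGES[-1]

# Hypothetical country profile: resources well supplied, curriculum lagging.
profile = {"resources": 0.9, "curriculum": 0.2, "teaching_practices": 0.55}
maturity = {domain: stage(score) for domain, score in profile.items()}
print(maturity)
# {'resources': 'transforming', 'curriculum': 'emerging', 'teaching_practices': 'integrating'}
```

The output mirrors the example in the text: a system can be at a transforming stage on resources while still emerging on curriculum, which a single aggregate score would conceal.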
Outlook

Conceptual frameworks are important tools for orienting and evaluating policy decisions. They offer policymakers dimensions for consideration when evaluating the effectiveness of policy interventions and provide a basis for further decisions. The framework presented in this paper builds a comprehensive model for the analysis of ICT effects on the educational process from various levels and perspectives. It establishes a structure for reflecting on relevant indicators. The framework takes into account different levels of analysis, therefore allowing for differentiation in scope. The framework further introduces different stages of implementation. This allows policymakers to acquire a holistic view of policy changes and the effects these have on different actors within the educational system. A holistic view is an essential aspect of policy evaluation because it can disclose the maturity of the implementation of policies. In brief, the paper proposes that, in order to deepen our analysis of the impact of ICT on education, we need to shift our attention from technology per se to the processes and skills teachers and learners are currently applying. This will allow us to identify and explore the conditions and factors that are shaping the way ICT is used in education. Under this perspective, we need to shift from approaches that exclusively monitor macro level aspects to an integrated model where the three different levels are considered in conjunction. Such a comprehensive approach to the study of ICT effects and their impact on education needs to be pursued in a coherent manner. The proposed framework allows for the integration of different levels and types of data sources. It is important to bear in mind that there appears to be a need to reflect beyond pure observations and to evaluate more concretely the institutional contexts of learning (schools, universities, etc.), learning situations and teaching processes, to determine under which circumstances ICT-based activities can enhance learning and improve skills. Due to the complexity involved in mapping factors and variables onto one another, the evaluation of the causes of the observed impacts requires a degree of qualitative interpretation. It is highly recommended that the actors engaged in the process define the scope for evaluation and interpret the results on that basis.
References
Aviram, R. and Talmi, D. (2004). The impact of ICT in education: the three opposed paradigms, the lacking discourse. At: http://www.elearningeuropa.info/extras/pdf/ict_impact.pdf (accessed 25.4.2009).
Balanskat, A., Blamire, R. and Kefala, S. (2006). A review of studies of ICT impact on schools in Europe. Brussels: European Schoolnet.
Butt, S. and Cebulla, A. (2006). E-maturity and school performance: a secondary analysis of COL evaluation data. London: National Centre for Social Research.
CEO Forum on Education and Technology (1999). Professional development: a link to better learning (Year Two Report). Washington, DC: CEO Forum on Education and Technology.
Cox, M. J. and Marshall, G. M. (2007). 'Effects of ICT: do we know what we should know?', Education and Information Technologies, Vol. 12, 59-70.
Cuban, L. (1986). Teachers and machines: the classroom use of technology since 1920. New York: Teachers College Press.
Dwyer, D. C., Ringstaff, C. and Sandholtz, J. H. (1991). 'Changes in teachers' beliefs and practices in technology-rich classrooms', Educational Leadership, Vol. 48, No 8, 45-52.
Kollias, A. and Kikis, K. (2005). Pedagogic innovations with the use of ICT: from wider visions and policy reforms to school culture. Barcelona: Universitat de Barcelona.
Moersch, C. (1995). 'Levels of technology implementation (LoTi): a framework for measuring classroom technology use', Learning and Leading with Technology, Vol. 23, No 3, 40-42.
Ramboll Management (2006). E-learning Nordic 2006: impact of ICT on education. Copenhagen: Ramboll Management.
Redecker, C. (2009). Review of learning 2.0 practices: study on the impact of Web 2.0 innovations on education and training in Europe, JRC Scientific and Technical Reports, EUR 23664. Retrieved 10.8.2009 from http://ftp.jrc.es/EURdoc/JRC49108.pdf.
Rogers, E. M. (1995). Diffusion of innovations. New York: Free Press.
Sarason, S. (1990). The predictable failure of educational reform. San Francisco, CA: Jossey-Bass.
Trucano, M. (2005). Knowledge maps: ICT in education. Washington, DC: infoDev/World Bank.
Underwood, J. and Dillon, G. (2004). 'Maturity modelling: a framework for capturing the effects of technology', Technology, Pedagogy and Education, Vol. 13, No 2, 213-225.
Underwood, J., Baguley, T., Banyard, P., Coyne, E., Farrington-Flint, L. and Selwood, I. (2007). Impact 2007: personalising learning with technology. Coventry: Becta.
Unesco (2003). Developing and using indicators of ICT use in education.
Unesco (2003a). Towards policies for integrating information and communication technologies into education.
ICT to improve quality in education: a conceptual framework and indicators in the use of information and communication technology for education (ICT4E)
Marcelo Cabrol and Eugenio Severin (1)
Inter-American Development Bank (2)
1. Introduction
The use of information and communication technologies (ICT) in education is no longer optional. A substantial change in society and individuals (3) has occurred thanks to development
(1) We express our thanks for the revisions and corrections by Carla Jiménez (IADB), as well as the comments and suggestions by Juan Enrique Hinostroza, Claudia Peirano, María Paz Domínguez, Francesc Pedró, Friedrich Scheuermann, Seong Geun Bae and Michael Trucano. (2) The Inter-American Development Bank Technical Notes encompass a wide range of best practices, project evaluations, lessons learned, case studies, methodological notes and other documents of a technical nature that are not official documents of the Bank. The information and opinions presented in these publications are entirely those of the author(s), and no endorsement by the Inter-American Development Bank, its Board of Executive Directors or the countries they represent is expressed or implied. (3) "In this technological environment, computers have become an integral part of our societies and our lives, transforming such diverse matters as the way we work and relax, how businesses operate, the conduct of scientific research, and the ways governments govern. They are integrating into other technologies in cars, phones and many other things that used to be low-tech. There is every reason to suppose that the pace of technological change will continue, though we cannot say precisely in which forms and directions." (OECD, 2008)
in ICT, its penetration into the structures of production, knowledge management, communication and culture, the demand for new skills and competencies, and the declining importance of others. In addition, there has been a change in ways of approaching and understanding the world, and new industries have developed. For all these reasons, schools, countries and regions are compelled to develop new initiatives that incorporate ICT tools in teaching and learning, so that education systems can succeed in linking the new demands of the knowledge society with the new characteristics of learners (4). Some education systems in Latin America have overcome the challenge of access to education and are now confronting the demand for quality improvement; some systems face significant challenges in attempting to include all children in the learning process; others require more radical solutions to support student learning while designing strategies within already fragile institutions. In any case, properly implemented ICT projects offer an alternative route to impact on student learning (5), especially with new millennium learners (NML) (6). Nevertheless, most current evaluations offer no conclusive information to guide decision-makers on how ICT can improve the quality of education (7). The lack of clarity about the different options for, and impact areas of, ICT use in education is an obstacle to the development of successful projects.
(4) "Economic theory describes three factors that can lead to increased productivity: capital deepening (that is, the use of equipment that is more productive than earlier versions), higher-quality labor (a more knowledgeable workforce that is more productive), and technological innovation (the creation, distribution and use of new knowledge)." (Kozma, 2008)
Inadequate assessment of ICT initiatives in education is in many cases a result of intuitive and unsystematic development, but it also reflects the lack of specific tools that would make it possible to measure these impacts with confidence, separately from the myriad other variables present in educational processes, which are themselves dynamically affected by the introduction of ICT. It is very likely that this lack of instruments is a natural consequence of how recent these developments are. Considering that the personal computer has existed for only 30 years, and that the first computers reached some schools only about 20 years ago, it is only logical that we still have many unanswered questions about how ICT can best contribute to improving the quality of education. In fact, recent literature has drawn attention to the innovation phenomenon in educational practices incorporating ICT, with the caveat that so far most experience has been limited to the computerisation of existing processes and practices, which repeats the same actions as in the past, but now with the support of computers and other technological devices. The predictable consequence is that the impact on results will be quite limited (8). The use of ICT in the context of disruptive innovation and comprehensive intervention regarding the above
(8) "Many of today's schools are not teaching the deep knowledge that underlies innovative activity. But it is not just a matter of asking teachers to teach different curriculum, because the structural configurations of the standard model make it very hard to create learning environments that result in deeper understanding." (Sawyer, 2008)
(5) "All the studies reviewed have identified a range of important wider benefits of ICT on learning. These include the positive impact of ICT on student motivation and skills, independent learning and teamwork. Increased motivation leads to more attention during lessons, which can be exploited by the teacher. Aspects of more individualised learning were described in a variety of ways. Students learn more independently, at their own pace and according to their needs. They also take more responsibility for their own learning process. As seen, ICT can benefit likewise academically strong and weak students and students with special needs." (Balanskat et al., 2006) (6) "In times when a lot of emphasis is put on the effectiveness of teaching, more attention should be devoted to the changes occurring in pupils as they increasingly become NML. Their emergence calls for a reconsideration of ICT-based educational innovations, putting pupils' new attitudes and expectations, as well as transformed competences, at the very centre." (Pedró, 2006) (7) The exercise to establish a knowledge map, developed by the World Bank's infoDev (Trucano, 2005), showed how, beyond the large investments made in many countries to use ICT in education systems, the data to support the affirmation of its role in improving education are limited and debatable.
practices is probably a better prognosis for changing results. Nevertheless, even less data, assessments or studies are available on this issue (9). This document presents a general conceptual framework to support the design, implementation, monitoring and evaluation of projects in which information and communication technologies are incorporated to improve education quality. One of the main challenges in the use of ICT in education is the lack of indicators offering clear criteria and objective information to allow policymakers to make the right decisions (10). Projects have not always included rigorous evaluation processes, and in those instances where they have, ICT impact on learning has not been the focus (11). Lastly, the range of projects is so vast that there is no common framework flexible and broad enough to encompass their diverse nature, contexts and different stages. The main hypothesis of the framework is that the goal of all education projects is to improve student learning, regardless of whether the learners are children or adults. The outcomes expected and measured in these projects should therefore be the impact(s) on learning and the changes, brought about by implementation, that enable such learning. Learning outcomes can be broadened by putting children at the centre of the learning process. It is necessary to consider improvements in students' involvement in and commitment to learning as the initial result. This plays a direct role in curricular learning
(11) See, for example, the conclusion of the World Bank evaluation of an ICT programme in Colombia: "The main reason for these (poor) results seems to be the failure to incorporate the computers into the educational process. Although the programme increased the number of computers in the treatment schools and provided training to the teachers on how to use the computers in their classrooms, surveys of both teachers and students suggest that teachers did not incorporate the computers into their curriculum" (The use and misuse of computers in education: evidence from a randomized experiment in Colombia, Barrera-Osorio and Linden, 2009). Also in the Enciclomedia project in México: "no significant differences were found in the knowledge skills, implementation and evaluation of content among children who used Enciclomedia and those who did not have such equipment. Even children from 6th grade who did not use that technology had a better result, reaching 1.48 points against 1.23 for those who did have such a tool, whereas in the application of content learned, the former gained 2.15 points against 2.11 for those who did have this tool. Those 5th grade students without Enciclomedia were best evaluated, with 1.83 points out of 2, above their classmates with this equipment" (Libro Blanco Enciclomedia, ILCE, 2007).
(9) "Schools should use computers and related technologies to help students who are poorly served, or not served at all, by the current technology of education, that is, by the schools most of us grew up with. In addition, elementary and secondary students ought to use computers, the Internet and other digital tools directly, not necessarily through a school. In these ways, schools, students and families will help promising computer-based technologies grow and improve. The schools can pay a huge price for not changing in time to accommodate new technologies." (Christensen et al., 2008) (10) The World Summit on the Information Society (WSIS) concluded that: "We must develop a realistic plan for evaluating results and setting benchmarks (both qualitative and quantitative) at the international level, through comparable statistical indicators and research findings, to monitor the implementation of the action plan goals and objectives, taking into account national circumstances." (WSIS, 2005)
improvement and can be observed in the participation and continuance of students in the learning process, as well as in improvements in teaching practices and learning processes. The framework also takes into account that these changes in practices, and these improvements, are directly linked to the impact on and development of general skills, or 21st-century skills, including ICT skill acquisition (12). The monitoring and evaluation process should be treated more carefully and rigorously as a substantial component of each project, much more so than it has been thus far, in order to account for such impacts. Monitoring and evaluation must be incorporated as an integral part of the process itself. Review of key information before the project (baseline), during it (monitoring) and at its end (final evaluation) is fundamental to the proposed framework. The use of indicators to measure the system's level of development and maturation will be an indispensable tool for making policy decisions based on solid data and targeted knowledge. The proposed framework identifies five domains (inputs) that should be considered in an education system or in each specific project, its planning processes and products, and those processes that, though not directly
(12) "In the case of students from low-income families, the flexibility of schools is even smaller. Wealthier schools attract the best teachers, leaving the least prepared teachers to schools in poor and remote areas. [...] Consequently, these systems perpetuate social inequalities, lose excellent students as victims of boredom, increase the cost of education through the high dropout and repetition rates, and pass the cost of training their graduates to employers or other systems." (Haddad and Drexler, 2002)
involved, can be affected by the development of the project (13). Application of this framework and indicators at different levels of education systems (national or subnational) aims to provide a holistic, integrated vision of ICT incorporation, in order to support decision-making about the actions that can or should be taken on the basis of the available information, taking into consideration all the necessary areas or domains (inputs). At the specific project level, the use of diverse quantitative and qualitative methodologies for data collection and observation will provide a set of indicators. This evaluation will allow measurement of the project's efficiency and monitoring of its development by those carrying out the project and by other stakeholders, making it easier to identify best practices and to promote the development of new initiatives for the use of ICT in education. This framework has been developed taking into account empirical information available from past Inter-American Development Bank experience and from other experts in ICT education project implementation. Considering that every ICT in education project implements different lines of action, the framework is broad in nature, allowing different variables to be reviewed and selected (like a roadmap) depending on their direct or indirect involvement in the project and how they could be affected by it. Regardless of the variables and components included in the project, the goal (and objectives) should be linked to the improvement of learning, and its implementation should include monitoring and evaluation mechanisms linked to those objectives. A good evaluation will allow results
(13) "Since computer availability alone will not have an impact, policymakers and project leaders should think in terms of combinations of input factors that can work together to influence learning. Coordinating the introduction of computers with national policies and programmes related to changes in curriculum, pedagogy, assessment and teacher training is more likely to result in widespread use and impact." (Kozma, 2005)
from one ICT education project to be compared with those of other projects (ICT-related or not) in order to evaluate the efficiency of the investment. This document should be considered a working paper: the conceptual framework will be improved through the development of new projects and continually updated, given the constantly changing nature of ICT education processes and products.
2. Conceptual framework

Definition

The conceptual framework for the design, implementation, monitoring and evaluation of ICT projects in education (ICT4E framework) is presented in the following table.

Final goal: student learning

Development stages: Emerging, Applying, Integrating, Transforming

Inputs:
- Infrastructure
- Resources
- Training
- Support (pedagogy, technical)
- Management (school organization, management systems, incentive plans)
- Sustainability: political (legal framework, community attitudes and expectations, priority and visibility) and financial (budget)

Processes and products:
- ICT layout and tech specs; implementation process; access and use
- Curriculum development; learning organization; resources availability
- Systems use; teachers' performance; ICT experience; ICT training

Impact (intermediate and final):
1. Practices
2. Results
3. Student skills: critical thinking, problem resolution, creativity and innovation, communication, collaboration, ICT

Monitoring and evaluation: baseline, monitoring, final evaluation
As shown in the table, the framework includes the following elements.
- Student learning, as the main goal of all project implementation. Students must be considered the direct beneficiaries of any ICT4E initiative, regardless of whether they are children or adults.
- The Inputs refer not only to project lines of action but also to factors that could be affected by its implementation.
- The Processes and products are those elements that will be modified by the project and should demonstrate the results of the implementation.
- The project's Impact, and the conditions that allow such outcomes, are measured broadly with different variables.
- Development stages: four stages are described which will shape the design, implementation and evaluation of the projects.
- The process of Monitoring and evaluation includes different sources of data and information.
The elements included in the framework are described below.
Results

1. Practices

The use of ICT in education implies the reasonable expectation that modifications in teaching methodologies and student learning processes will occur (14). ICT offers a unique opportunity for access and knowledge construction. In order to achieve effective, comprehensive use of ICT in education, new learning practices, strategies and methodologies must be put into place (15). A review of the literature indicates that, in instances where ICT has been incorporated merely as an additional tool to maintain the status quo, educational impacts are scant or non-existent. This is an important field for innovation, where ICT4E plays an important catalysing role. The link between teaching and learning practices and the growing daily interaction of students with digital, multimedia and interactive environments makes this a key aspect of the framework and an important element connecting projects with expected results (16).

(14) "When learning scientists first went into classrooms (Sawyer, 2006), they discovered that most schools were not teaching the deep knowledge that underlies knowledge work. By the 1980s, cognitive scientists had discovered that children retain material better, and are able to generalise it to a broader range of contexts, when they learn deep knowledge rather than surface knowledge, and when they learn how to use it in real-world social and practical settings. Thus, learning scientists began to argue that standard model schools were not aligned with the knowledge economy." (Benavides et al., 2008) (15) "Measuring changes in learning and teaching processes is a time-consuming task, but one which may yield valuable results. Knowing how educational technology changes teaching practices, as well as the ways in which students learn, is fundamental for evaluating its effectiveness and for developing better tools." (Balanskat et al., 2006)

2. Impact

1. Student learning

Student learning is the purpose and main goal of an education system's actions and must remain so in the use of ICT in educational processes. In each specific project, students are direct beneficiaries, so the expected results should be directly linked to the learning that the project explicitly aims to impact or which will be indirectly impacted by the project's actions. The project's impact (positive, negative or no change) and its effectiveness will depend on evidence of such learning.
2. Student involvement

One of the fundamental components of educational processes is student commitment. Although it may seem obvious, the motivation and ongoing participation of students are necessary for project success. Furthermore, student motivation and enthusiasm in activities have a positive impact, not only on potential learning results and the development of new competencies but also on the learning environment, on stakeholder expectations and on results for student promotion from one level to the next. These processes also generate change in the motivation and expectations of parents and teachers. Both are intertwined with student motivation and expectations, resulting in the ongoing development of learning. Data on attendance, repetition, promotion and drop-out rates are usually available and facilitate straightforward impact analysis. Measuring motivation requires other instruments which, when applied correctly, can yield important information about the effects of ICT4E projects.

3. Student achievement

A country's education curriculum determines the knowledge and skills that students should achieve in each grade, as well as the tasks required of teachers and schools. The first area where impact is evident in ICT4E projects is in learning associated with a specific school subject or topic, or with how the curriculum content is divided according to learning aims or expected competencies for each student. Typically, this impact has been evaluated in subjects such as language, mathematics and science, since these are the subjects evaluated in most standardised tests (in focus groups or by census) and, therefore, data are available in many countries (e.g. standardised tests such as TIMSS and PISA). Even though these instruments have had a small, limited field of measurement to date (limited to only certain skills and content), studies have revealed positive but moderate correlations between ICT projects and test results. There are challenges in countries that do not have national tests or do not participate in international standardised tests. In these cases the project could develop ad hoc standardised tests to be administered before, during and after the project implementation (baseline and evaluation) or among groups that do or do not participate in the project (control and comparison groups). A lack of rigorous studies in this area has made it harder to prove the reasonable expectation that a country's investment in ICT projects can improve learning in different subjects. Therefore it remains to be seen whether this impact is significant and, if so, in what subjects. More important yet is the lack of clarity as to what impacts can reasonably be expected from projects according to their stage of development or maturity. This task is especially complex because the introduction of ICT into education processes is often accompanied by modifications in teaching methodologies. In fact, this is what is intended; with the introduction of ICT, old methodologies could have little or no impact. Evidently both people and governments reasonably expect that the use of ICT in education (usually a complex and expensive process) will improve student learning, and this needs to be proven empirically.

(16) "One of the fundamental lessons to be learnt from European, North American and Australian experiences over the last 20 years has been that those responsible for helping people learn must be confident in the appropriate use of new technologies if the process is to be successful. Introducing new technologies into places of learning should involve a fundamental shift whereby the role of teachers becomes less didactic and more that of facilitating individual learning processes." (Unwin, 2005)
4. Student skills

These include critical thinking and problem solving; creativity and innovation; and communication and collaboration. The development of ICT competences is also considered. Until now, evaluation has not been particularly exact and has mostly been conducted through qualitative studies, interviews and perception surveys that collect information on students' views, or through structured observation exercises. Nevertheless, more objective tools will be developed over time, allowing for more rigorous evaluation exercises. One of the components of the OECD new millennium learners project is developing a working definition framework and evaluation tools for ICT competencies. Another initiative working towards similar objectives is the alliance supported by Cisco, Intel and Microsoft and a group of universities and international institutions: Transforming education: assessing and teaching 21st century skills. Information and communication technologies are instruments that form a regular part of a range of work and development opportunities. Even a basic understanding of ICT use can open opportunities for access and growth, both personal and professional, which can make a difference in a country's overall development. ICT skills and competencies are a clear objective in any project involving the use of ICT in education; therefore it is necessary to evaluate the effectiveness of each project in this regard. To perform these tasks, standardised tests will be used alongside the IDB's own validated test to evaluate student ICT skills before, during and after implementation of activities in primary education.
3. Development stages
Clearly, the type of projects to develop and evaluate (as well as the impacts expected) will depend on the respective stage of development in the use of ICT and on the educational context where each project will be applied (18). The development stage reached through the incorporation of ICT into education systems is strongly correlated with the type and depth of potential changes in application contexts. Thus, the intensity of use and the impact increase to the extent that incorporation efforts are sustained over time. Following Morel's matrix (2001), four project phases are proposed which are vital in the project's design, implementation, follow-up and evaluation steps, and in the follow-up of comparable education systems. Therefore, by analysing the indicators described in the Processes and products column, one can determine the development stage of the project (emerging, applying, integrating and transforming) and inform the expected outcome with results indicators. For example, these stages can be described generically for each domain considered in the general framework, as in the table. The table operates in practice as a key for reading the indicators present in a system or project, which allows its maturity or stage of development to be ascertained. Once this key has been applied to each system or project, the reading may provide criteria for decision-making regarding the domains registering greater or lesser progress and, therefore, the kind of priorities that could guide the development of new actions. The definition of development stages is directly related to reasonable expectations for the impact that ICT has on educational systems, particularly with respect to learning, skills and student competences. It is therefore possible to enter into the table below some examples of the kind of results that can be found in education systems or in project target groups. The analysis of indicators will therefore depend on each stage of development. Until now, limited and partial investments in ICT (implying very small changes in inputs) could rarely be expected to involve changes that quickly translate into new and improved skills and competences in students. Applying this framework has allowed us to recognise that the achievement of significant impacts is the result of a development process
(18) "Countries which are at the initial stages of ICT incorporation in education have different assessment needs than those which already have a long tradition of ICT use. For example, initially it is important that teachers and students have access to software and hardware and that they have acquired basic skills in computer science. In the case of countries at more advanced stages, other considerations such as management of educational innovations, changes in educational curricula and other organizational changes in schools, and ongoing support and training for staff are more important." (Manual for the production of statistics on the information economy, UNCTAD, 2008)
that requires a broad vision, comprehensive and integrated implementation, and development time in order to exhibit genuine impact.
4. Domains or inputs

Domains or inputs considered in project design and evaluation include the following.

1. Infrastructure

a. Physical: Initiatives associated with the provision of the infrastructure necessary for the use of and access to ICT, e.g. laboratories, libraries and furniture.
b. Equipment: Equipment planned for the project or considered part of it (even if not conceived as a direct part of the project), including computers, printers and projectors, and the conditions included in the purchase and use of those items, e.g. guarantee and service support.
c. Connectivity: Access to the Internet and networks that allow their use for education purposes; bandwidth, connection stability and technologies that facilitate better online traffic and provide privacy-protection filters for content accessed by students; implementation of a reliable local network structure that is safe and accessible.
d. Support: Activities aimed at the administration, maintenance and repair of equipment, as well as problem-solving related to project activities and technical support for users.

2. Contents

a. ICT curriculum: Initiatives linked to the implementation and/or adaptation of curriculum content in ICT or other subjects (in the use of ICT).
b. Content: Digital or analog material aimed at teaching and learning with technology tools, e.g. encyclopedias, manuals, textbooks, books, guides, videos and hypertext.
c. Tools: Software development or support initiatives for the development of teaching and learning processes, e.g. productivity applications, virtual simulators and modeling.
d. Information systems: Initiatives aimed at supporting the implementation and distribution of management and education information systems at the school, country and regional levels, as well as those that allow monitoring of educational projects and their stakeholders, including curriculum, pedagogies and possible models of use (19).

3. Human resources

a. Teacher training: Initial and in-service training associated with the adoption, adaptation and updating of curriculum and practices for the integration of ICT into education.
b. ICT competences: Training activities for the acquisition and/or certification of specific ICT skills, general education, and productivity and communication tools.

(19) "Clearly, compared to the traditional structure of the Internet, with few transmitters and many receivers, a new platform begins to be adopted where web applications are easy to use and allow for many transmitters, many receivers and a significantly higher information exchange rate. Some of the most common resources are having an impact on teaching models based on online technologies such as blogs, wikis and others." (Cobo Romaní and Kuklinski, 2007)
Development stages by domain (emergence, application, integration, transformation):

Infrastructure
- Emergence: Isolated PCs for administrative processes; restricted access to computers for students and educators.
- Application: Computer laboratories; broadband Internet access; an educator or administrator prepared to provide technical support.
- Integration: Computer networks in laboratories and classrooms used in combination with other devices (cameras, scanners, etc.); continuous access to computers for students and educators; wireless networks; local staff specialised in support.
- Transformation: Diverse platforms available for communication and learning; web-based communication and collaboration services; self-managed learning systems; local staff highly specialised in support and solutions development.

Contents
- Emergence: Curriculum does not explicitly take into account the use of ICT. Office automation and educational games applications. CDs or local software with educational content (e.g. encyclopedias). Teacher-centred pedagogy.
- Application: Curriculum takes into account the basic development of ICT competencies. Educational portals with access to digital resources that support the curriculum. E-mail and web search services available. Teacher-centred pedagogy.
- Integration: Curriculum contemplates all-inclusive use of ICT. Educational contents and applications enriched and adapted to specific practices. Basic applications for content creation and reconstruction of teaching and learning objects. Collaborative, student-centred pedagogy.
- Transformation: Curriculum comprehensively incorporates the use of ICT as a knowledge-building strategy. Advanced options for the development of content and collaboration among diverse stakeholders. Platforms for experimentation and publication of resources. Student-centred pedagogy: critical-thinking, collaborative, experiential.

Human resources
- Emergence: Training according to individual interests. No pedagogical support for the integration of ICT.
- Application: General training in ICTs through in-service teacher training programmes. No local pedagogical support for ICT integration.
- Integration: Initial and in-service training associated with the curriculum and with educational uses for ICT in the classroom. Training of local staff for support in the pedagogical integration of ICT.
- Transformation: Peer learning networks; self-managed continuing education systems; peer networks and online collaboration.

Administration
- Emergence: Pragmatic view based on individual interests.
- Application: Practical view based on the adoption of new technologies. Information technology administration of some systems, but they are not interconnected. Isolated, partial involvement of the organised community.
- Integration: Holistic view aiming to integrate processes by incorporating technologies. Complex, interconnected information technology systems for system-critical recording and communication. Regular incorporation of the community into formal processes and communications.
- Transformation: Proactive, innovative view aiming to generate developments that allow for new, better systems for information, recording and communication. Community actively seeking solutions and engaged in the collaborative building of shared knowledge.

Policies
- Emergence: Casuistic and experimental development of isolated ICT initiatives. No policies or budgets allocated over the long term. No adjustments to the legal framework, and no specific incentives under consideration.
- Application: Limited development of ICT plans, based on centralised, concentrated decisions. Partial, generic policies that take into account some components at various depth levels. Short-term budgets (associated with specific projects). Indirect, generic adjustments to the legal framework (telecommunications and education plans). Pilot programmes for specific incentives.
- Integration: Development of broad, comprehensive ICT policies covering the set of domains at similar depth levels, allowing flexible areas for specific context-dependent adaptations. Medium-term budgets guaranteed. Legal adjustments facilitating the incorporation of ICT and their use in education. Incentive systems integrated into predefined educational achievements.
- Transformation: Development of educational plans and policies that take ICT into account holistically, together with their strategies and components, allowing broad areas for their specific inclusion in context. Inclusive budgets over the long term. Legal framework completely adapted to new requirements. Incentives associated with the system's overall learning achievements.
95
96
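The staged model is qualitative, but for monitoring purposes each domain can be rated against it. The sketch below is purely illustrative (the framework defines no scoring algorithm, and the domain names and scores here are invented): it simply maps a 1-4 rating per domain onto the four stage labels.

```python
# Illustrative only: the framework describes four qualitative stages per
# domain but does not prescribe any scoring algorithm.
STAGES = ["Emergence", "Application", "Integration", "Transformation"]

def stage_name(score):
    """Map a domain rating of 1-4 onto the corresponding stage label."""
    if not 1 <= score <= 4:
        raise ValueError("score must be between 1 and 4")
    return STAGES[score - 1]

# Hypothetical domain ratings for one education system
domain_scores = {
    "Infrastructure": 3,
    "Contents": 2,
    "Human resources": 2,
    "Administration": 1,
    "Policies": 2,
}

for domain, score in domain_scores.items():
    print(f"{domain}: {stage_name(score)}")
```

A profile like this makes visible, per domain, which stage a system has reached and where it lags.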
Stages in the incorporation of ICT, by domain (continued):

Practices
- Emergence: Predominance of vertical, expository classes, centred on the teacher and his/her knowledge. ICT as specific training content for the students. Students have difficulty accessing technologies for use.
- Application: Teacher-centred classes that sporadically incorporate the use of ICT into some school activity as part of regular curricular planning. Students have regular access to technologies, but seldom connect them with their school experience.
- Integration: Student-centred classes; the teacher assumes the role of presenter and tutor, actively proposing and accompanying the work of students, who use ICT collaboratively in their school work. This use is rather intensive in the context of the school but substantially lower outside of it and of the proposed activities.
- Transformation: Lifelong learning environment; teachers and students continually collaborate in the creation and communication of knowledge. Emphasis on investigation and the development of projects, with increasing autonomy of each actor and abundant use of platforms for communication and collaboration.

Student involvement
- Emergence: Passive attitude of students regarding learning. Low or moderate expectations regarding the impact of studies on their lives in the future.
- Application: Passive attitude of students regarding learning. Moderate expectations regarding the impact of school on their lives in the future; motivations generated outside of school.
- Integration: Active attitude of the students regarding learning. High expectations regarding their learning and personal achievements, though not explicitly connected to their school experience.
- Transformation: Proactive, autonomous attitude throughout life. High expectations regarding their future and the role that education plays in it.

Impact
- Emergence: None.
- Application: Low impact.
- Integration: Medium impact.
- Transformation: Medium impact.
c. Use of ICT for education: Training initiatives for the specific use of ICT in educational contexts (20).
d. Pedagogical support: Efforts to provide educational support and follow-up for participants; guidance or tutoring services developed for the implementation of the proposed activities.

4. Management

a. Administration: Structures and strategies for system and project management and administration at all the levels considered (school, province, country and region), as well as the relationship with other institutional stakeholders associated with the project, e.g. strategic allies and donors.
b. Information dissemination: Activities aimed at providing information about project results, strategies and actions, and at involving all potentially interested stakeholders and beneficiaries of the project.
c. Community involvement: How scope, strategies and actions are communicated, and how all actors concerned and potentially affected by the project's development are involved. Actions that promote (and allow for) the active participation of community members and families in the development of the project (and as its direct beneficiaries).

5. Policy

a. Planning: The project's priority (short or long term) in the context of other initiatives, plans, projects or actions, including visibility (understood as the level of ownership of the project's success and objectives by those leading it).
b. Budget: The long-term budget needed for operational continuity and for the development of the complementary initiatives required for the project's success.
c. Legal framework: Actions to adjust and adapt rules and regulations to enhance the impact of the initiative and minimise its risks. Includes measures to improve the safety and security of minors, regulations associated with the industries concerned, and copyright protection.
d. Incentives: Plans and programmes designed to (positively or negatively) underscore beneficiary commitment and the project results expected by its participants.

(20) Particularly important here is Unesco's work in the development of the use of ICT in education and its standards for teachers.

1. Infrastructure

a. Amenities: Specific references to the technical characteristics of the equipment; the relationship between product characteristics and the specific reasons why the equipment was selected; distribution and the final characteristics of the equipment as it
is implemented. Also included here are the connection with other existing equipment indirectly related to the success of the project, and the characteristics and conditions of connectivity.
b. Implementation process: Description of project logistics, location and equipment distribution; additionally, specifics on the procedure for equipment selection, purchasing, distribution and integration/implementation in projects. Also included are references to investments made in the context of the project that are essential to its success, such as classrooms or buildings (even when they have not been a project-specific component), as well as calendars and the systems in use by ICT users and their availability.
c. Helpdesk: Describes the systems installed to lend support to direct and indirect project beneficiaries in the event of technical or pedagogical difficulties. It should provide the user rate, response time, mechanisms used, most common difficulties, best-rated responses and other indicators describing the support available to participants.

2. Resources

a. Curriculum development: Work developed to connect the curriculum to the learning goals and project objectives associated with ICT4E; inclusion of ICT in the curriculum at the different levels, as a competency or as cross-cutting or vertical content; learning goals specifically proposed by the stakeholders.
b. Learning organisation: Description of how learning activities are structured and organised, including how the curriculum is developed (integrated with or separated from other thematic areas), how often and at what time of day ICT is integrated into the curriculum, pedagogical approach(es) at the institutional level, and knowledge management strategies.
c. Availability of resources: Levels of access to educational resources by direct and indirect beneficiaries; whenever possible, underscore their relevance and importance with respect to project objectives.
d. Access and use: The opportunity for and simplicity of access to the information and management systems by the beneficiaries (direct or indirect); whenever possible, note their relevance to and the quality of the proposed objectives.

3. Human resources

a. Teacher performance: Teacher background information pertinent to the project, e.g. performance, planning activities, student:teacher ratio, performance evaluation and incentives.
b. ICT experience: Previous experience with ICT in educational use, both in and outside the classroom.
c. Models for educational use: Characteristics of the ICT training given to stakeholders in order to capitalise on the use of ICT in educational contexts.
d. Education support system: Mechanisms aimed at motivating and lending support to the work of the different stakeholders involved in the project, such as tutoring or assistantships for teachers, personal or online support plans, training resources, mutual communication among peers and guides for families.
4. Management

a. School organisation: The way the project is integrated into the overall institutional scheme of the school, how many hours each teacher spends on it, and the systems aimed at organising and supervising the project's operation.
b. Management systems: The institutional framework, systems and mechanisms implemented by the project, or that the project modifies and impacts, which allow for follow-up of project activities and objectives.
c. Systems use: The opportunity for and simplicity of access to the information and management systems by the beneficiaries (direct or indirect); whenever possible, state their relevance to and the quality of the proposed objectives.
d. Community attitudes and expectations: Actions involved in the project's implementation aimed at embedding the initiative in its development context; the introduction of participants (direct or indirect) to the project; and communication with those involved who facilitate the project's implementation. Also describes how the project considers its impact on the community, particularly regarding students' families.
5. Sustainability

a. National (subnational) plans: The existence or lack of national plans that comprehensively maintain and describe the use of ICT in education systems, linking them to each other, to the rest of the goals and policies, and to development strategies as well.
b. Budget: The different budget sources and procedures that are directly or indirectly involved in the project's operations. Any difficulties with the procedure and future financing plans should be described. The expenses entailed by the project should be noted, specifying one-time purchases as well as recurring purchases that will therefore be part of the project in the future, together with the mechanisms recommended to secure funding in the future and, for long-term implementation, the project's strengths and weaknesses and how the project itself plans to address them. This will include the total cost of ownership as proposed by GESCI (21).
c. Priority and visibility: The position of those responsible for the project, as well as project objectives and the promotion of such activities.
d. Legal framework: Description of the regulations associated with project implementation.
e. Incentives plans: Programmes or incentive plans associated with the project's beneficiaries and objectives.

(21) Global e-school and communities initiative: http://www.gesci.org/

6. Evaluation

The Conceptual Framework is not proposed as an evaluation model, nor does it develop specific assessment instruments. It should work as a guide for considering the elements involved in ICT-for-education projects. Evaluators using the Conceptual Framework should then apply and develop adequate evaluation models and instruments depending on the context.

1. Baseline

The data that inform the processes and products before the project's implementation, against which the project's impact can be measured.
The baseline is concerned with data that allow for identification of indicator status at the system level upon starting the application, or before project implementation. From these initial data (the zero point), system progress or the impact of project actions will be measured once they are implemented. At the system level, the baseline should take into consideration a broad set of indicators that facilitates precise analysis of the status of ICT incorporation. At the project level, those indicators that explicitly bear on the project's objectives should be selected, including those linked to student learning. Wherever possible, however, data for all processes should be taken into consideration, to facilitate the documentation of unforeseen impacts.
3. Evaluation

This process involves a comprehensive review of a project, its achievements, progress and difficulties, and establishes its impact vis-à-vis the proposed objectives. Evaluation is conducted at project completion or at the end of a given phase of the project's implementation, and its purpose is to measure the proposed actions and strategy against the results obtained, and to monitor their relationship with and impact on system indicators. Along these lines, the impact made on all areas, processes and products must be taken into consideration, not only those where the project has implemented actions. Evaluation is crucial to every project and should be considered an essential component from the outset of project design. Whenever possible, efforts should be made to have the evaluation conducted by an external entity unassociated with the project's direct or indirect executors, to ensure objectivity and impartiality. Whenever possible, experimental evaluation methods should be favoured to complement other data sources and produce more solid, reliable results.
Regarding use in monitoring systems, we propose creating an index based on a set of indicators to help describe the respective system. When applying indicators at the project level, this set of indicators supports and organises the project evaluation process, but it is in no case completely exhaustive, since this process involves many other variables. For purposes of organising the indicators and associating them with the proposed framework, we have considered the need for input, process and impact indicators, depending on the type of data to be described and its scope of application. Nonetheless, process indicators are applied exclusively at the project level and refer specifically to the components that each project proposes to develop; consequently, they are defined ad hoc. The methodological proposal for applying the indicators in the context of this conceptual framework and its associated indicators comprises five instances:
1. System index
At the system level, the IDB proposal is to consider all, or the greatest possible number, of the proposed indicators, in order to achieve the most complete view possible of the development status of the incorporation of ICT into education. This set of indicators, to the extent that complete, up-to-date information can be obtained, allows us to create one or more indices accounting for the status of progress in the incorporation of ICT into education, allowing us to determine the system's phase of development and the areas in which it is more or less advanced.
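As an illustration of how one such index might be assembled (the indicator names, values and weights below are invented, not part of the proposal), a composite index can be computed as a weighted average of indicators normalised to a common scale:

```python
# Hypothetical sketch: a composite ICT-in-education index built from
# indicators normalised to [0, 1]. Names, values and weights are
# illustrative only; the framework itself does not prescribe them.

def composite_index(indicators, weights):
    """Weighted average of indicator scores normalised to [0, 1]."""
    total_weight = sum(weights[name] for name in indicators)
    weighted_sum = sum(indicators[name] * weights[name] for name in indicators)
    return weighted_sum / total_weight

# Example: each indicator already rescaled to [0, 1]
indicators = {
    "students_per_computer": 0.6,    # access to equipment
    "schools_with_broadband": 0.8,   # connectivity
    "teachers_trained_in_ict": 0.4,  # human resources
}
weights = {
    "students_per_computer": 2.0,
    "schools_with_broadband": 1.0,
    "teachers_trained_in_ict": 2.0,
}

print(round(composite_index(indicators, weights), 2))  # 0.56
```

Sub-indices per domain (infrastructure, contents, human resources, and so on) can be built the same way, which is one way to read off the areas in which a system is more or less advanced.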
Data gathering is to be carried out periodically at regular intervals, depending on the availability of data at each level:
- before the project begins: building the baseline;
- mid-term measurement: data gathering at the halfway point, while the project is being implemented; this allows impacts to be determined over the medium term and steps to be taken if necessary;
- end-of-project measurement: gathering information upon completion of the intervention; this facilitates quantification of changes in indicators during the project implementation period. At this point, the status of all the input indicators is ascertained. These indicators provide information about the impact attributable to the project and about changes observed in the overall status of the system undergoing intervention.

A fourth instance of data gathering is advisable whenever possible:
- follow-up measurement: gathering information one or two periods after the project is completed. This allows evaluation of the situation over the medium term, after the project has ended. At this point, drops may be observed in some indicators, due for example to a lack of funding for recurring expenses.

Process indicators on which the project is required to report should also be defined. Reports on these indicators will be of utmost value to the project executor, because they facilitate rigorous monitoring of project implementation and provide the opportunity to make suggestions and, if necessary, to propose remedial measures.
At the outset of the project, it is advisable to agree on a timetable for submitting reports on these indicators. Not all of these indicators will necessarily be relevant to all of the processes. This means agreement must be reached among the parties regarding which indicators will be used for each project management plan and what reporting intervals will be observed.
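The measurement rounds described above (baseline, mid-term, end of project and, where possible, follow-up) can be sketched as a simple structure; the indicator name and figures here are invented for illustration only:

```python
# Hypothetical sketch of the measurement rounds described above.
# The indicator and its values are invented for illustration.
ROUNDS = ["baseline", "mid_term", "end_of_project", "follow_up"]

measurements = {
    "students_per_computer": {
        "baseline": 45,        # before the project begins
        "mid_term": 30,        # halfway through implementation
        "end_of_project": 18,  # at completion of the intervention
        "follow_up": 22,       # one or two periods after completion
    },
}

def change_during_project(indicator):
    """Change observed between the baseline and end-of-project rounds."""
    m = measurements[indicator]
    return m["end_of_project"] - m["baseline"]

print(change_during_project("students_per_computer"))  # -27
```

Note how the follow-up figure (22) is worse than the end-of-project figure (18): exactly the kind of post-project drop, e.g. from unfunded recurring expenses, that the follow-up round is meant to detect.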
5. Impact evaluation

The final evaluation of a project may take into consideration a broad set of tools, models and indicators to report on results. According to the proposal presented herein, we suggest taking into account, in terms of impact, how project results have modified the indicators of the system into which they were introduced. These indicators were established in the definition of the general indicators and in the selection of specific indicators relevant to project action. In this way, the definition of an indicator allows us to set goals for the project, which it proposes to change in the same terms as the indicator. Therefore, for each relevant indicator, the project impact evaluation presents its status before the intervention, the status targeted by the intervention (the goal) and the percentage of the goal achieved.

educational systems and the support of empirical evidence on how to optimally capitalise on the potential of ICT. ICT alone will not make the difference. We are confident that no technological device will solve the enormous challenges facing education systems seeking to meet today's demands. We are not facing a technological challenge, but an educational one (22). We know that training people, a country's human capital, is a complex process involving a myriad of variables with which ICT must interact dynamically to produce the changes required. We acknowledge that we are facing a challenge that is both vast and new, and which changes at speeds heretofore unseen. Therefore, we expect this proposal to undergo continual revisions, adjustments and reformulations. We present it with the humility of someone exploring unknown lands without the benefit of certainties or the necessary tools, but with the urgency of having to move forward with determination.

Currently we are preparing the proposal for indicators that reflect and complement the scale proposed in this conceptual framework. To accomplish this task, we are considering a very important proposal that Unesco UIS has already developed, including over 50 indicators, which we are complementing with additional indicators covering all the areas proposed. We are making this seminal work available to those who wish to collaborate in its continual improvement and in the development of tools that allow us to apply it to our specific contexts. We are still striving to improve it, in collaboration with experts and other agencies and international organisations. It is now being implemented in currently operating bank-supported projects in Latin America and the Caribbean, for the purpose of aligning definitions, specifying and testing indicators and building new instruments for its implementation.

(22) Today's classroom teachers need to be prepared to provide technology-supported learning opportunities for their students. Being prepared to use technology and knowing how that technology can support student learning have become integral skills in every teacher's professional repertoire. Teachers need to be prepared to empower students with the advantages technology can bring. Schools and classrooms, both real and virtual, must have teachers who are equipped with technology resources and skills and who can effectively teach the necessary subject matter content while incorporating technology concepts and skills. (Unesco, 2008)
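The impact evaluation described above reports, for each relevant indicator, its status before the intervention, the status targeted by the intervention (the goal) and the percentage of the goal achieved. That last figure is simple arithmetic, sketched here with invented numbers:

```python
def goal_achieved_pct(baseline, target, observed):
    """Share of the intended change actually realised, as a percentage.

    baseline: indicator status before the intervention
    target:   status the intervention aimed for (the goal)
    observed: status measured at project completion
    """
    if target == baseline:
        raise ValueError("the goal must differ from the baseline")
    return 100.0 * (observed - baseline) / (target - baseline)

# Hypothetical figures: 30 % of teachers used ICT weekly before the
# project, the goal was 70 %, and 58 % was observed at completion.
print(goal_achieved_pct(30, 70, 58))  # 70.0
```

The same calculation works for indicators the project aims to reduce (e.g. students per computer), since both the observed and the intended change then carry the same sign.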
References
Balanskat, A., Blamire, R. and Kefala, S. (2006). The ICT impact report: a review of studies of ICT impact on schools in Europe. Brussels: European Schoolnet.
Barrera-Osorio, F. and Linden, L. L. (2009). The use and misuse of computers in education: evidence from a randomized experiment in Colombia. Washington, DC: World Bank.
Becta (2006). The Becta review: evidence on the progress of ICT in education. Accessed at: http://becta.org.uk/corporate/publications/documents/The_Becta_Review_2006.pdf
Benavides, F., Dumont, H. and Istance, D. (2008). The search for innovative learning environments, in Innovating to learn, learning to innovate. Paris: OECD.
Cobo Romaní, C. and Pardo Kuklinski, H. (2007). Planeta Web 2.0. Inteligencia colectiva o medios fast food. Grup de Recerca d'Interaccions Digitals, Universitat de Vic / Flacso México. Barcelona / México DF.
Christensen, C. M., Horn, M. B. and Johnson, C. W. (2008). Disrupting class: how disruptive innovation will change the way the world learns. McGraw-Hill.
Hepp, P., Hinostroza, J. E., Laval, E. and Rehbein, L. (2004). Technology in schools: education, ICT and the knowledge society. Washington, DC: World Bank. (http://www1.worldbank.org/education/pdf/ICT_report_oct04a.pdf).
Kozma, R. (2008). ICT, education reform, and economic growth: a conceptual framework.
Kozma, R. (2005). Monitoring and evaluation of ICT for education impact: a review, in: M. Trucano (ed.), Monitoring and evaluation of ICT in education projects. infoDev, World Bank.
OECD, CERI (2008). Trends shaping education. Paris: OECD.
Partnership for 21st Century Skills (2009). P21 framework definitions document.
Pedró, F. (2006). The new millennium learners: challenging our views on ICT and learning. Paris: OECD, CERI.
Sawyer, K. (2008). Optimising learning: implications of learning sciences research, in Innovating to learn, learning to innovate. Paris: OECD.
Trucano, M. (2005). Knowledge maps: ICTs in education. Washington, DC: infoDev/World Bank.
Unwin, T. (2005). Capacity building and management in ICT for education, in: M. Trucano (ed.), Monitoring and evaluation of ICT in education projects. infoDev.
Unesco (2008). ICT competency standards for teachers. Unesco.
Unesco (2003). Performance indicators for ICT in education. Bangkok: Unesco. http://www.unescobkk.org/index.php?id=1109
Unesco (n.d.). Indicators for assessing ICT impact in education. Bangkok: Unesco. http://www.unescobkk.org/index.php?id=662
Unesco Institute for Statistics (UIS) (2005). ICTs and education indicators: suggested core indicators based on meta-analysis of selected international school surveys. WSIS Phase II, Tunis. Available at: http://www.itu.int/ITUD/ict/partnership/material/ICT_Education_Paper_Nov_2006.pdf
Unesco Institute for Statistics (UIS) (2008). Proposal for internationally comparable core indicators on ICTs in education.
World Bank (2004). Monitoring and evaluation: some tools, methods and approaches. http://www.worldbank.org/oed/oed_approach.html
A conceptual framework for benchmarking the use and assessing the impact of digital learning resources in school education
Beat Bilbao-Osorio and Francesc Pedró
OECD Centre for Educational Research and Innovation
The comparative study of information and communication technologies (ICT) in school education has primarily focused on investments in infrastructure, equipment and the resulting ratios per pupil, as well as on in-service teacher training and, lately, on the incentives and barriers for classroom use. Less attention has been paid to the development and publication of digital learning resources (DLR) as a means to increase the added value that ICT can bring to teaching and learning. In some countries, governments have started to subsidise programmes, repositories and networks focusing on DLR. However, until now, little empirical evidence has existed on the dimensions and impact of these policies, both on their capacity to foster the development of DLR and on their final effects on the teaching and learning processes.

The OECD Centre for Educational Research and Innovation (CERI) has recently completed a project intended to bridge this knowledge gap by reviewing and evaluating the process of innovation involved in policies and in public as well as private initiatives designed to promote the development, distribution and use of DLR for the school sector. Among its final outputs (23), this project includes the delivery of a conceptual framework for the creation of a system of indicators related to the development, use and effects of DLR.

This chapter presents the resulting initial proposal. It aims at shedding more empirical light on the theoretical and policy debate about the effects of technology-enhanced learning in schools. In this respect, the chapter sets the scene for the ongoing policy debate and then discusses the lack of empirical evidence. It then outlines the objectives of the CERI proposal and describes its main components. The final section comments on the next steps in the process of defining and compiling the appropriate indicators.
(23) The main report is published as CERI-OECD (2009), Beyond textbooks: digital learning resources as systemic innovation in the Nordic countries. Paris: OECD.
The OECD reported early on that education policymakers saw enormous potential for ICT to transform education. In 1999, the limited data available showed trends in ICT investment and use heading up sharply (OECD, 1999). Around that time, an OECD conference warned about the urgency of bridging the digital divide (OECD, 2000). In 2004, PISA data confirmed the exponential growth in the presence of ICT in education (OECD, 2004). In just three years, between 2000 and 2003, student-per-computer ratios dropped by more than half in most countries (and by a factor of 4 to 5 in those that were lagging). While less than a third of secondary schools had Internet access in 1995, it was virtually universal by 2001.

Although there are no internationally comparable data on current expenditure on educational ICT hardware and software, there are signs of unmet demand for additional investment, particularly in the areas of hardware upgrading and the availability of digital content or learning resources. According to the most recent PISA data, a lack of adequate computer software for instruction is cited by school principals as an important hindrance to science instruction (OECD, 2007). However, there seems to be little empirical evidence (24) about the final benefits associated with these investments, in terms of the use of ICT in schools and its impacts throughout the educational system, and claims of unfulfilled promises have opened an academic and policy debate about whether the considerable investment in ICT pays off in any obvious way.
(24) Recently a number of studies have aimed at analysing the impacts of ICT in education. The analytical works of SITES, E-Nordic Learning, Becta and the OECD's PISA reports (based on the 2003 and 2006 results) are the main experiences in the field.
Lack of empirical evidence about the effects of DLR, and ICT more broadly, on education
Computers in education are generally used in two broad contexts: (i) to provide computer skills training, which teaches students how to use computers; and (ii) to provide technology-enhanced learning (TEL), which uses computers to enhance teaching and learning methods, strategies and activities across the whole curriculum. While there is a clear case for the use of ICT to enhance the computer skills of students, the role of TEL is more controversial (Machin et al., 2006). There is neither a strong, well-developed theoretical case nor much empirical evidence supporting the expected benefits accruing from the use of ICT in schools, as different studies find mixed results (Kirkpatrick and Cuban, 1998). There seems to be no conclusive evidence. On the one hand, studies carried out by Becta (2002) and Machin et al. (2006) find a positive relationship between the use of ICT and educational attainment; on the other hand, the research carried out by Fuchs and Woessman (2004), Leuven et al. (2004) or Goolsbee and Guryan (2002) finds no real positive relationship between the use of ICT and educational results once other factors, such as school characteristics or socioeconomic background, are taken into account.

There is a generalised belief that, overall, the no significant difference phenomenon, documented on many occasions in the case of distance learning, also emerges in school education. According to this view, there is insufficient evidence to affirm either the superiority or inferiority of ICT-rich methodologies. This would seem to be the outcome of the two systematic reviews of the literature conducted recently, which conclude that, in general and despite thousands of studies, the impact of ICT use on student attainment is difficult to measure and remains reasonably open to debate (infoDev, 2005), and also that some studies reveal a positive correlation between the availability of computer access or computer use and attainment, others reveal a negative correlation, whilst yet others indicate no correlation whatsoever between the two (Kozma, 2006).

Experiments can only attempt to determine how effective ICT is in teaching specific school subjects, owing to the multitude of compartmentalised methodologies to be found in a single school, and even in different lines or groups of students studying the same subject, albeit with different teachers. Consequently, the experiments designed to date compare the educational attainment of a group of students using an ICT-rich teaching methodology with the achievement of another group with similar characteristics taught using traditional methods. However, an in-depth analysis of the available knowledge base shows that school attainment only improves if certain pedagogical conditions are met. This is the conclusion reached by Kulik (2003), who used the measurements of effects found by eight different meta-analyses covering 335 studies before 1990 and 61 controlled experiments whose outcomes were published after 1990. Most of the studies carried out in the 1990s concluded that stimulation programmes have positive effects when used to enhance reading and writing capabilities and that, albeit less frequently, they have a clearly
positive effect on maths and natural and social sciences. Indeed, simply giving students greater access to both computers and Internet resources often results in improved writing skills. The assessments of primary school pupils using tutorials to improve their writing increased signicantly in this eld. Even very young primary school pupils using computers to write their own stories ended up improving their marks in reading. In short, there is a positive correlation between the frequent use of word processors and improved writing-related capabilities. Much less attention has been paid both by researchers and policymakers to the actual determinants of ICT use in school and their impacts in different dimensions of the educational system. For a long time, as noted above, ICT investments have been channelled towards the construction of an ICT infrastructure in schools, and most available resources have been devoted to the acquisition of ICT equipment, i.e. computers, and of Internet access connections, e.g. broadband networks. While this investment is a clear pre-requisite to foster the use of ICT in schools, it can also be regarded as a necessary but not sufcient condition to assure its use, if other factors are not simultaneously born in mind. More precisely, factors such as the competences and attitudes of teachers to use ICT or the availability of DLR have also been identied as key factors to explain the degree of use of ICT by teachers and students. While teachers attitudes and competences in respect of ICT have been widely recognised as a key factor (Williams et al., 1998) and signicant public investments have aimed at enhancing these competences, much less attention has been paid to
the development of DLR and of content production. Although many big private publishing companies have entered the market for DLR and have acknowledged their potential, until recently they regarded this market as unattractive and major investments were not made. A possible explanation may lie in the role that private publishers play in the development of school content, whether in analogue or digital form. Commercial publishers have traditionally played a key role in developing and distributing printed learning material. However, when it comes to DLR, they seem to find that the market may not yet be ready to use this type of resource, mainly because of the lack of infrastructure, teachers' skills or cultural factors. They may therefore lack the necessary incentive to develop this kind of material. At the same time, the lack of readily available DLR of sufficient quality can also dampen the motivation and attitudes of teachers towards DLR and ICT more broadly, and the perceived need to invest in ICT infrastructures. On the whole, a vicious circle appears in which the lack of significant teacher demand proves a disincentive to publishers' offers, which in turn affects demand negatively, and in which all the determinants are closely intertwined. In addition to private publishers, students and teachers have also started producing DLR by themselves, partly along the lines and rationale which have successfully inspired the production and use of open educational materials in higher education (25). There has been a shift in the paradigm whereby both teachers and students were only
(25) See CERI-OECD, Giving knowledge for free: open educational resources in higher education (Paris: OECD, 2007).
users of learning material: they are now also producing content which they exchange among themselves and which is regarded by their peers as very important. The material of these user-producer teachers and students is increasingly important and will continue to be so as Web 2.0 applications become more generally available. Until now, however, its study has been somewhat neglected in traditional research.
data sources and the possibility (or not) of linking different datasets. 4. To highlight possible options to generate the missing data. As a result of the analysis of the data already available, data gaps will be identified and different strategies and tools to develop the required data will be suggested.
Definition
While there is a clear practical interest in tracking the availability and use of DLR, there is an even greater interest in understanding the causes driving the development and use of DLR, and the impacts they generate on the teaching and learning processes, because the lessons learnt can be used to refine our understanding of the incentives and barriers regarding the broader use of ICT to enhance school education. An analytical framework capable of identifying and explaining these factors, their interrelations and their impacts would allow analysts to enhance their knowledge about the use of DLR and ICT more broadly, and to provide evidence-based recommendations for policymakers. At the moment, however, the lack of a holistic conceptual framework that takes into account all the intervening factors and their possible interrelationships, together with the lack of available data, has prevented the development of more robust results that would make it possible to monitor and evaluate the role that different sources of ICT investment, including investments in DLR, play in the use of ICT, in the teaching and learning processes and in the educational attainment of students. This lack of empirical evidence has also weakened the necessary political support for any further investments and has increased the feeling among stakeholders of
unfulfilled promises related to the use of ICT in the educational system. In light of the information gathered in the OECD project on DLR during the interviews conducted with a number of stakeholders (i.e. departments of education, teachers, headmasters, students, local and regional governments and publishers) and a review of the existing literature on comparative research and recent practices, an analytical framework is proposed below. This framework aims to account for the factors affecting the development, use and impacts of DLR, as well as for the complexity of the interrelationships between these factors. Figure 1 presents a visual representation of this framework. The proposed model presents a number of interrelated investment measures on the left-hand side of the chart. Each of these investments produces a specific output in the form of available computers or Internet access (in the case of ICT infrastructure), digital learning resources or enhanced teachers' ICT competences. The combination of these outputs would influence the actual use of DLR and ICT more broadly, at a particular moment, in the educational system. However, rather than claiming a linear and causal relationship, the model intends to reflect the complex nature of the interaction between each of these factors and the actual use of ICT/DLR. For instance, higher levels of ICT/DLR use could also stimulate higher levels of ICT/DLR investment. In addition to these three main direct investment variables, a number of environmental factors would also affect the levels of DLR/ICT use and should therefore be included in the model. These variables relate to the
Figure 1: Analytical framework for assessing the development, use and impacts of DLR
overall ICT environment in the country, which may push for or against the use of ICT in society in general, and in the educational system in particular. Particular attention has to be paid to the fact that very different factors can be brought into the picture. The degree of public policy influence on these factors could differ largely in both scope and impact, depending on their nature. Teachers' commitment to the use of ICT in class, for example, is a key variable that affects the final use of DLR or ICT in schools, and would be the result of a mix of factors such as policies to promote ICT in schools and teachers' attitudes and convictions regarding the role of ICT in the teaching and learning processes. Pupils' expectations would be another variable that could significantly affect the use of DLR and ICT, and one that may be largely beyond the reach of public intervention. These factors are, in a sense, the soil in which the DLR/ICT investments are sown, and they could be a determinant in obtaining the desired fruit. As a result, policymakers are confronted with a policy dilemma in terms of what to do: invest in infrastructure, DLR, teaching competences (in which ones, and how much?) and/or in improving the ICT environment (how, and how much?) in order to obtain the desired results in terms of enhanced ICT/DLR use. Finally, the model suggests that the use of ICT/DLR could have a final impact on the educational system by allowing students to achieve higher educational attainment, developing stronger digital competences and improving the perceived satisfaction in the teaching and learning processes. However, as before, the relationship between the
variables is not unidirectional, and therefore higher levels of technological competences, better academic performance or higher levels of satisfaction in the teaching and learning processes could in turn foster higher ICT/DLR use, triggering a virtuous circle that would propagate through the whole model. The relationships between the different variables in this model are hypothetical, and their existence (or not) should be investigated empirically, should data become available.
The variables
The model described above presents a number of variables and hypothetical relationships between them that would need to be tested. This section briefly presents the different variables: it only identifies them and provides some initial suggestions for their definition and measurement. The difference in scope of these definitions would therefore affect the type of data required. These variables, classified according to their nature and role in the proposed model, are as follows. 1. Direct investment variables: These are the different sources of investment where a clear connection can be identified between the initial investment and the actual results accruing from it. The model identifies three closely intertwined investment types. ICT infrastructure: This variable deals with investment in equipment (computers, whiteboards, laptops, projectors) and network connections. A number of clear outputs can also be
observed as a direct result of these investments: the number of computers per student or the number of computers with (broadband) Internet connection per student are just a few examples of this type of variable. Digital learning resources (DLR): There have been many definitions of DLR. In this project (26), it has been pointed out that DLR can refer either to any resource used by teachers and students for the purpose of learning, or only to resources particularly designed to be used in learning settings. It is both a strength and a weakness of the former definition that it is very general: it can refer to anything from a stone or a feather to the Encyclopedia Britannica or advanced databases, as long as it is used for learning. The second definition is more limited and hence easier to use, but it excludes open learning resources such as online newspaper articles, most computer games, and applications such as Google Earth. Moreover, it would not take into account the production of DLR carried out by individual teachers and shared exclusively within a closed system or intranet. As a result, it is important to note that this definition and measurement would be a stricter approximation of the overall DLR concept, and any conclusions about the availability and role of DLR should therefore be handled very carefully. Teachers' ICT competences: This variable relates to those investments aiming at making the
(26) Please refer to the full OECD report Beyond Textbooks: Digital Learning Resources in the Nordic Countries for a more thorough definition of DLR.
teachers more competent and, eventually, at fostering a positive attitude towards ICT and its use in school. The input investment would be the resources devoted to teacher training in ICT. The output measure, however, could differ and allows for different definitions and measures. On the one hand, an easy and direct measure could be the number of teachers trained in the system. On the other hand, a more complex measure could relate to the attitudes, and changes in attitudes, of the trained teachers towards the use of ICT/DLR. 2. Outcomes: An intermediate outcome can be linked and traced back to the initial investment variables, but can also be influenced by some external factors. Use of ICT/DLR: The amount and nature of the different uses of DLR and ICT. This broad variable could be broken down into different categories, creating a typology of types of ICT/DLR use according to, for example, the different categories of DLR. Equally, a classification of use by subject and class group would provide further information that could be useful when analysing its relationship with the investment variables. 3. Impacts: These are the final objectives at which the initial investments aim. The model identifies two main types of possible impact. Student performance: The use of ICT and DLR could have an impact on students' performance that could go in two directions:
Development of ICT competences (or 21st century competences): The definition of ICT competences could be restricted to the effective use of the ICT infrastructure, i.e. use of a computer or the Internet, or it could have a broader scope, whereby students would be able to use, search, understand and even produce different content in digital form in order to obtain or show a better understanding of particular subjects. In the latter case, specific definitions of competences should be developed and appropriate tests put in place in order to measure and evaluate the achievement of these competences. Academic performance in basic subjects: The use of ICT in learning different subjects could have an impact on the actual academic attainment of students in these subjects. Analysing these results and comparing them before and after the use of ICT/DLR would be important to establish any causal relationship between the two. Improved or new teaching and learning processes: The use of DLR and ICT could also improve or bring about new processes of both teaching and learning, making them more interesting for students and teachers and improving the communication between the different stakeholders. Having an objective measure of improved processes could be very difficult, as it would require a clear definition and measurement of all the different aspects affecting them, including the always fuzzy concept of quality. However, a subjective measurement of the changes in the process by the different stakeholders could be a way to get around this initial difficulty. 4. Environmental factors: These variables, although they cannot be directly controlled by direct government investment, have a very clear impact on the capacity of the direct investments to achieve the desired results. They are the soil in which the different investments (the seeds) are planted. Teachers' commitment to ICT: The teachers' commitment and determination to use ICT and DLR in their schools is one key variable that may explain differences in the levels of investment in schools and also in the actual use of ICT/DLR by teachers. This is particularly true in decentralised systems, where teachers enjoy a large degree of autonomy. Research has also shown the relevance of school leadership in this domain. Socioeconomic factors: The socioeconomic background, age and gender of students have been pointed out in the literature as key factors that may influence not only their learning expectations but also the degree and scope of the actual use of ICT/DLR (outcome variable), and that also decisively influence students' educational attainment (impact variable). Therefore, any study that aims at drawing causal relationships between the variables should take these factors into account.
In addition to these variables, it is important to note that the model also identifies a very broad variable that affects all the other variables in the model: the overall ICT environment. This variable aims at capturing the overall societal attitude towards the use of ICT, not only in the educational system but more broadly in all aspects of life. It would include: ICT responsiveness: ICT readiness and acceptance in society at large influence the pressure and demand for the inclusion of ICT in the educational system, as well as the attitudes of both teachers and students towards the use of ICT. Possible measures of this responsiveness could be the penetration of ICT in homes or in firms. National curriculum: The inclusion of an obligation to use ICT/DLR in the national curricula, directly or indirectly (by way of mentioning them in the definition of expected pupils' competences), may be a variable that explains differences across countries in the use of ICT/DLR, and may also be a factor affecting the levels of ICT/DLR investment in the educational system.
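The variable taxonomy described above could be organised as a small data structure when designing an indicator database around the framework. The sketch below mirrors the text's grouping (direct investments, outcomes, impacts, environmental factors, overall ICT environment); the concrete indicator examples attached to each variable are illustrative assumptions, not an agreed indicator set.

```python
# A minimal sketch of the chapter's variable taxonomy as a data structure.
# The categories mirror the text; example indicators are hypothetical.
from dataclasses import dataclass, field
from enum import Enum

class Category(Enum):
    DIRECT_INVESTMENT = "direct investment"      # infrastructure, DLR, teacher competences
    OUTCOME = "outcome"                          # use of ICT/DLR
    IMPACT = "impact"                            # performance, teaching/learning process
    ENVIRONMENTAL = "environmental factor"       # commitment, socioeconomic factors
    ICT_ENVIRONMENT = "overall ICT environment"  # responsiveness, national curriculum

@dataclass
class Variable:
    name: str
    category: Category
    example_indicators: list = field(default_factory=list)

framework = [
    Variable("ICT infrastructure", Category.DIRECT_INVESTMENT,
             ["computers per student", "broadband connections per student"]),
    Variable("Digital learning resources", Category.DIRECT_INVESTMENT),
    Variable("Teachers' ICT competences", Category.DIRECT_INVESTMENT,
             ["number of teachers trained", "attitude change after training"]),
    Variable("Use of ICT/DLR", Category.OUTCOME,
             ["hours of use by subject and class group"]),
    Variable("Student performance", Category.IMPACT,
             ["ICT competences", "attainment in basic subjects"]),
    Variable("Teaching and learning process", Category.IMPACT),
    Variable("Teachers' commitment to ICT", Category.ENVIRONMENTAL),
    Variable("Socioeconomic factors", Category.ENVIRONMENTAL,
             ["background", "age", "gender"]),
]

# The model has a single intermediate outcome variable:
outcomes = [v.name for v in framework if v.category is Category.OUTCOME]
print(outcomes)  # ['Use of ICT/DLR']
```

Encoding the taxonomy this way makes the model's structure explicit and keeps any later data collection aligned with the agreed definitions of the variables.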
nities, but in itself access to DLR does not automatically imply better educational processes. How DLR are used, in the wider context of other intervening factors, is the critical variable, and not much is known about it yet. This points to the need for a clear understanding not only of the intervening factors but, in particular, of their interrelationships. CERI's ongoing work in this domain is addressing some of these issues in close cooperation with other international agencies. The main objective is to enlarge the number and quality of indicators about access to and use of ICT in education. To do so, the main activities to be carried out are as follows. 1. Redefinition and refinement of the model: A validation of the model should be carried out. More precisely, this activity would (re)define and identify new factors, map the hypothetical relationships between the variables and revisit the scope of the model. This refinement would help build the consensus necessary to develop internationally agreed and comparable indicators. 2. Redefinition of variables: Alternative definitions for the variables are available, differing in scope and nature. A commonly agreed redefinition of the variables would then be necessary. 3. Evaluation of available data: Based on the agreed model, an evaluation of the existing data sources and of the possibility of linking different datasets in a coherent manner should be carried out.
4. Data needs assessment: Based on the agreed model and the data already available, a data needs assessment should be carried out. As mentioned in the definition of the variables of the conceptual model, the data needs can be defined at different levels of depth. The complexity and cost of obtaining the data should be commensurate with their utility, and a consensus should be reached when defining the variables and developing the methods needed to obtain the required new data.
References
Becta (2002). ImpaCT2: The impact of information and communication technologies on pupil learning and attainment. Coventry: Becta. Empirica (2006). Benchmarking access and use of ICT in European schools 2006: final report from head teacher and classroom teacher surveys in 27 European countries, August 2006. Bonn. Fuchs, T. and Woessmann, L. (2004). 'Computers and student learning: bivariate and multivariate evidence on the availability and use of computers at home and at school', Brussels Economic Review/Cahiers Economiques de Bruxelles, Editions du Dulbea, Université libre de Bruxelles, Department of Applied Economics (Dulbea), Vol. 47, No 3-4, 359-85. Johnson, L., Levine, A. and Smith, R. (2009). The 2009 horizon report. Austin, TX: New Media Consortium. Kausar, T., Choudhry, B. N. and Gujjar, A. A. (2008). 'A comparative study to evaluate the effectiveness of computer assisted instruction (CAI) versus class room lecture (RL) for computer science at ICS level', Turkish Online Journal of Educational Technology (TOJET), October 2008, Vol. 7, Issue 4, Article 2. Kozma, R. (2003). 'Technology and classroom practices: an international study', Journal of Research on Technology in Education, Vol. 36, No 1, 1-14. Kozma, R. B. (2008). Comparative analysis of policies for ICT in education. Center for Technology in Learning, SRI International (to appear in the International Handbook on Information Technology in Education). Kulik, J. (2003). The effects of using instructional technology in elementary and secondary schools: what controlled evaluation studies say. Menlo Park, CA: SRI International. Lee, M. G. (2003). 'Comparative analyses of ICT integration initiatives in Korean, German and American educations', MedienPädagogik (10.1.2003). Machin, S., McNally, S. and Silva, O. (2006). New technologies in schools: is there a pay off? London: Centre for Economic Performance; Bonn: Institute for the Study of Labour.
OECD (1999). OECD science, technology and industry scoreboard 1999. Paris: OECD. OECD (2000). OECD Forum 2000. Paris: OECD. OECD (2007). Giving knowledge for free: the emergence of open educational resources. Paris: OECD. OECD (2009). Working out change: systemic innovation in vocational education and training. Paris: OECD. Ramboll Management (2006). E-learning Nordic 2006: Impact of ICT on education. Copenhagen: Ramboll Management.
CHAPTER IV
CASE STUDIES
Assessing new technological literacies
The impact of ICT in education policies on teacher practices and student outcomes in Hong Kong
Indicators on ICT in primary and secondary education: results of an EU study
Impacts of ICT use on school learning outcome
ICT impact data at primary school level: the STEPS approach
Abstract
As technologies and the contexts of their use increase, characterisations of 21st century skills have grown beyond the operation of computer productivity tools to encompass individuals' use of the Internet, specialised software, and facility with handheld and wireless devices. New literacies have expanded to refer to expertise in the use of a range of digital media and information and communication technologies exercised in academic and applied settings to solve a range of problems (Quellmalz & Haertel, 2008). This paper addresses: (1) distinguishing features of the multiple frameworks for ICT, 21st century skills, and new literacies; (2) alternative assessment designs and prototype student assessments of new literacies; (3) evidence-centered design methods for establishing technical quality; and (4) features of coherent, balanced assessments of new literacies across classroom, district, state, national and international levels.
Introduction
Information and communication technologies (ICT) permeate school, work, personal and civic activities. Their prevalence speaks to the centrality of these powerful, transformative tools in all walks of life. Policymakers throughout the world recognise the significance of technologies for economic, civic and global progress, along with the concomitant need for coherent educational policies to promote and implement skills characterised as new literacies, 21st century skills, information and communication technology skills and technological literacy (ISTE, 2007; Partnership for 21st Century Skills, 2005; Kozma, McGhee, Quellmalz, and Zalles, 2004).
As technologies and the contexts of their use increase, characterisations of 21st century skills have grown beyond the operation of computer productivity tools to encompass individuals' use of the Internet, specialised software and facility with handheld and wireless devices. New literacies have expanded to refer to expertise in the use of a range of digital media and information and communication technologies exercised in academic and applied settings to solve a range of problems (Quellmalz and Haertel, 2008). Technologies are increasingly recognised as transforming schooling as a result of their capacity to extend students' opportunities to access
rich repositories of knowledge and to engage in deep, extended problem solving. Large-scale national and international studies are providing evidence that technologies are truly changing and improving schools by enriching curricula, tailoring learning environments, offering opportunities for embedding assessment within instruction and providing collaborative tools to connect students, teachers and experts locally and globally (Kozma, 2003; Law, Pelgrum and Plomp, 2008). Despite the pervasiveness of technology, there are few traditional large-scale tests or curriculum-embedded, formative measures that directly measure new literacies (Burns and Ungerleider, 2002; Quellmalz and Kozma, 2003). The quest for tests of students' proficiencies with these 21st century skills is hindered by a number of persistent issues. There are myriad definitions of information and communication technologies and of technological literacy knowledge and skills. The contexts in which ICT should be taught and tested vary widely. The extent to which the knowledge and skills about the technologies to be used within a domain-based problem or context can be distinguished from the domain-specific knowledge and skills required is ambiguous (Bennett, Jenkins, Persky and Weiss, 2003; Quellmalz and Kozma, 2003). Methods for designing 21st century assessments and for documenting their technical quality have not been widely used. Finally, a critical issue facing the promotion of 21st century learning is that assessments of ICT should be coherent across the levels of educational systems (Pellegrino et al., 2001). Coherence must start with common or overlapping definitions of the knowledge and skills to be assessed as new literacies. If the designs of international, national, state and classroom
level tests of new literacies are not aligned and articulated, the assessment systems will not be balanced and the validity of inferences about student performance will be compromised.
and skills in subjects such as maths and reading. These test designs aim to reduce or eliminate the demands of the technology, treating it as construct-irrelevant. Equivalence of paper-based and technology-based forms is the goal. Technology-based tests are increasing rapidly in large-scale state, national and international testing, where technology is being embraced as a means to reduce the costs and logistics of assessment functions such as test delivery, scoring and reporting. Technology-based tests typically assume that supportive technology tools such as calculators or word processors are irrelevant to the content constructs being tested and are therefore not to be measured separately. Since these types of testing programs seek comparability of paper and online tests, the tests tend to present static stimuli and use traditional constructed-response and selected-response item formats. For the most part, these conventional online tests remain limited to measuring knowledge and skills that can be easily assessed on paper. Consequently, they do not take advantage of technologies that can measure the more complex knowledge structures and the extended inquiry and problem solving included in 21st century ICT frameworks. In short, a technology-delivered and technology-scored test of traditional subjects is not an assessment of 21st century ICT skills and should not be confused with one. This paper focuses on assessments of technology and assessments with technology, not assessments by technology. It addresses: (i) distinguishing features of the multiple frameworks for ICT, 21st century skills and new literacies; (ii) alternative assessment designs and prototype student assessments of new literacies; (iii) evidence-centered design methods for establishing technical quality; and (iv) features of coherent, balanced assessments of new literacies across classroom, district, state, national and international levels.
ICT technological literacy frameworks. The NAEP technological literacy framework specifies three major assessment areas: technology and society, engineering design and systems, and ICT (see http://naeptech2012.org). The addition of engineering design to ICT frameworks incorporates knowledge and skills about how technologies are developed as well as how they are used. The 2012 NAEP technological literacy framework, which will shape the next decade of designs of the assessments that serve as the nation's report card, deliberately specifies a wider range of technology products and processes than those called out in ICT frameworks. Technologies in the designed world include those in contexts such as transportation, energy, agriculture and health, as well as information and communication technologies (ITEA, 2000). Specifications for assessment vs. curriculum: A critical issue in the assessment of the new technological literacies is the distinction between curriculum and assessment frameworks. The standards specified by ISTE, ITEA and national frameworks set goals for promoting technology understanding and use. These standards aim to shape curriculum, instruction and assessment. In contrast, a national or international assessment framework may limit the content and skills specified in the framework to what can be directly tested and reported in large-scale, on-demand assessments. Thus, extended projects, collaboration and teamwork or creativity are unlikely to be tested in systematic, replicable ways on large-scale tests, but can be promoted and potentially assessed at the classroom level. The role of domain knowledge: Another issue is the role of knowledge about the topics and contexts required
to complete tasks and items using technology. Background knowledge from life experience will present task demands different from those of tasks requiring knowledge from academic subjects. For assessments of students' ability to use technologies in a range of academic and practical problems, assessment frameworks must be explicit about the areas, complexity and familiarity of the content in assessment tasks or items, and about whether that knowledge will be scored in addition to processes and operations. The next section describes a coordinated assessment framework, developed in an international project, that aimed to provide a cross-cutting set of knowledge and skills that could be used to test ICT literacy in academic or applied contexts.
ment of performance assessments of ICT that could be used across the range of technology use in school subjects documented in IEA SITES Modules 1 and 2 (Kozma, 2003). To these ends, a working group of international experts in ICT representing Chile, Finland, Norway, Singapore and the United States was formed. The group aligned standards documents that specified important technology proficiencies with those that focused on mathematics and science (since NSF was the funding agency) and on the role of technology within those domains. To create the coordinated ICT assessment framework, the descriptions and classifications of problem solving and inquiry from the maths and science frameworks were incorporated into the more general categories of information processing, knowledge management and communication in the technology proficiency frameworks. From these frameworks, the project team culled common categories of ICT use
that could shape the coherent collection of evidence, across studies, of students' abilities to use ICT in academic domains. The cross-cutting framework laid out the knowledge and skills to be assessed. It served as the first component of an evidence-centered assessment design for ICT (Mislevy and Haertel, 2006). Figure 1 presents a model of the coordinated ICT assessment framework. The circle depicts the subject matter domains, i.e. the content and processes of the disciplines of science and mathematics addressed in the NSF project. Other academic domains in social science and the humanities were not included, although the generic framework could be applied to domains other than maths and science. The left side of the circle represents the declarative knowledge of the domain, which can vary from content-lean, factual knowledge to content-rich, schematic knowledge composed of interrelated concepts and principles (Baxter and Glaser, 1998). The right
side of the circle represents the process dimension, in which the problem-solving demands of an assessment can range from simple, procedural knowledge for routine problems to complex, strategic knowledge for non-routine problems. Within the problem space, learners use ICT strategies to integrate technologies into the problem-solving activities. The ICT strategies include: taking advantage of the capabilities of technologies to understand and plan how to approach a problem; accessing and organising information and relevant data; representing and transforming data and information; analysing and interpreting information and data; critically evaluating the relevance, credibility and appropriateness of information, data and conclusions; communicating ideas, findings and arguments; designing products within constraints; and collaborating to solve complex problems and manage information. These strategies align with current versions of 21st century skills. The figure deliberately portrays these ICT strategies as non-linear and iterative. Thus, planning may be needed to find relevant digital information and data at the outset of a task and again, at a later stage of the task, to decide what to vary in the test of a model. Various technologies can support collaboration throughout the problem-solving activities. Technology tools appear in the center of the problem space in a tool kit. Internet, productivity and specialised tools such as simulations or visualisations may be chosen to accomplish multiple ICT strategies. The factual and procedural knowledge required to operate specific tools or classes of tools can vary according to the affordances of particular tools and the basic or more advanced features
chosen or required. This framework was designed to focus on generalizable ICT strategies, rather than on discrete, often changing, features of technology tools.
Table 1: ICT assessment scenario: predator-prey
Problem: Parks are being overrun by hares. The government should reintroduce lynx. Science and math content: familiar or given.

Module 1
Sample questions/tasks: Given data in a text message of 4 years of hare and lynx population data, describe the problem. Given data for more years by collaborators, describe the problem.
Strategy components: Analyse problem. Choose appropriate tools. Collaborate to solve problem. Integrate others' data.
ICT strategy: Plan strategies and procedures. Collaborate.
Tools: E-mail.

Module 2
Sample questions/tasks: Type in a search to find how hare and lynx populations are related. Look through these three sites. Take notes and cite sources. Copy and paste information. Pick which search might be better. Are these good search results? Send suggestions to collaborator.
Strategy components: Formulate a search query. Conduct search. Enter information in table or notes. Evaluate quality of search results. Contribute feedback.
ICT strategy: Access information and data. Organise information and data. Critically evaluate. Collaborate.
Tools: Web browser, search box, search results, web directory, web pages, table, word document, e-mail.

Module 3
Sample questions/tasks: Enter the 25 years of population data into a spreadsheet. Create another way to look at the pattern.
Strategy components: Display data in one format, convert to a different form.
ICT strategy: Represent and transform data and information.
Tools: Spreadsheet, table, graph.

Module 4
Sample questions/tasks: What is the relationship in 2003? What trends do you see? What do you predict will happen in 5 years?
Strategy components: Record and read data. Identify and explain trends. Make predictions.
ICT strategy: Analyse and interpret data.
Tools: Table, graph.

Module 5
Sample questions/tasks: Run the model with given settings. What are the populations in 2002 and 2005? What do you predict will happen in 2008? Increase the lynx population. What do you think will happen? Run the model.
Strategy components: Read graphs. Infer trends. Explain predictions.
ICT strategy: Analyse and interpret data. Make predictions.
Tools: Modelling tool, word processor.

Module 6
Sample questions/tasks: Plan your recommendation and presentation. Compose your presentation using information and pictures from websites and data. Present argument. Critique the recommendation from another team (with inaccurate data) by explaining whether you agree with the recommendation, the appropriateness of their data and information, and their support for the recommendation.
Strategy components: Specify position. Identify relevant evidence. Present recommendation, relevant data and information in a coherent argument. Critique position, evidence, support, explanation, organisation.
ICT strategy: Plan argument. Communicate findings and supported argument. Critically evaluate arguments.
Tools: Web form, word processor, tables, graphs, graphics, presentation tool, e-mail.
The affordances of the modular design for ICT performance tasks would permit custom design and adaptive assessment. The next section presents examples of prototype ICT assessments of and with technology that use the modular design approach.
Module 1 assesses ICT planning strategies through questions and tasks for analysing the problem by examining data on hare and lynx populations, while selecting from a set of technology tools. Module 1 assesses collaborative planning through tasks and questions in which the student uses e-mail to examine hare and lynx population data sent by virtual team members. Evidence of skills in operating the technology tools is a by-product of students' use of the tools in the problem-solving tasks. Assessment of strategies for using technology to access and organise information is tested in Module 2 in a series of tasks in which the student formulates a search query, gathers information and data from web pages and organises them in a table. Critical evaluation, tested throughout the modules, is assessed by questions on the credibility of information from a web report produced by a fur trading company and by questions on the effectiveness of web search results. Module 3 assesses the ICT strategies for using technologies to represent and transform information and data. Questions and tasks ask students to convert data sent in an e-mail text message by virtual collaborators to data on a spreadsheet and then transform the data into a graph. Module 4 tests the ICT strategies for using technologies for analysis and interpretation of information and data. Questions and tasks ask students to read specified data presented in tables and graphs and to interpret trends. Module 5 tests analysis and interpretation by using a modelling tool that displays the pattern of hare and lynx
populations. Students answer questions about output of the model at specified years, predict trends, and manipulate population values in the model to test predictions. In Module 6, uses of ICT strategies and technologies for planning a presentation and communicating findings and results are tested. Figures 2 to 9 illustrate the modules. The predator-prey modules were designed to permit flexibility in international administrations in which the assessment would be a national option. Modules such as those using a spreadsheet or modelling tool could be removed if students had not had experience with these tools, but the flow of the problem-solving task would not be disrupted.
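The modular removal described above can be pictured as a simple assembly step. The sketch below is a hypothetical illustration (the names, data structure and tool lists are assumptions, not taken from the NSF project): modules are plain data, and a national option omits modules whose tools students have not used while keeping the remaining problem-solving flow in order.

```python
# Hypothetical sketch: predator-prey modules as data, so an assessment
# form can omit modules whose tools are unavailable to students.

MODULES = [
    {"id": 1, "ict_strategy": "plan and collaborate", "tools": ["e-mail"]},
    {"id": 2, "ict_strategy": "access and organise", "tools": ["web browser", "table"]},
    {"id": 3, "ict_strategy": "represent and transform", "tools": ["spreadsheet", "graph"]},
    {"id": 4, "ict_strategy": "analyse and interpret", "tools": ["table", "graph"]},
    {"id": 5, "ict_strategy": "analyse and interpret", "tools": ["modelling tool"]},
    {"id": 6, "ict_strategy": "communicate", "tools": ["presentation tool"]},
]

def assemble_form(unavailable_tools=()):
    """Return the modules whose tools are all available, preserving
    the original problem-solving order."""
    return [m for m in MODULES
            if not any(t in unavailable_tools for t in m["tools"])]

# A national option removing the spreadsheet and modelling-tool modules:
custom = assemble_form(unavailable_tools=("spreadsheet", "modelling tool"))
print([m["id"] for m in custom])  # [1, 2, 4, 6]
```

Because each module is self-contained, dropping one leaves the others intact, which is the design property the text attributes to the modular approach.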
Assessments with technology: solving complex science and mathematics problems using advanced learning tools: A second assessment design goal in the NSF ICT assessment project was to draw on the coordinated ICT assessment framework and the modular design approach to fashion prototype performance assessments for the secondary school level that would tap transformative uses of learning with technology in advanced science and mathematics (e.g. visualisations, modelling, specialised software). The prototypes addressed assessment targets for science concepts, ICT strategies and the use of technology tools. The prototypes were designed to serve as classroom-level models for teachers and evaluators to assess student learning at the secondary level in innovative technology-supported curricula in which students had the opportunity
to work with the types of technological tools used by professionals. One prototype was designed to test the ability of high school physics students to apply the laws of motion to solve an authentic problem (designing a motorway car crash barrier) with a widely used commercial modelling tool, Interactive Physics. The targeted knowledge and skills included: (a) physics concepts related to force, mass, acceleration and velocity; (b) ICT inquiry strategies for planning/design, conduct of investigations (running the simulation), analysis and interpretation (of acceleration and velocity graphs), evaluation of possible design solutions and communication of a recommendation; and (c) technology proficiencies related to using the modelling tool, graphing tool and presentation tools. The task design consisted of a series of modules in which students planned their design, iteratively predicted and tried out designs using the simulation, interpreted results, evaluated a proposed design and developed a presentation for their recommended design. Evidence of student learning was provided by scores for student work related to physics knowledge, the component inquiry skills and
technology use. Figure 9 presents a screen shot of the module. Another prototype designed according to the modular design approach tested a student model for secondary students to solve an applied problem by using a widely available commercial visualisation tool, ArcView. The targeted knowledge and skills included: (a) science and maths knowledge; (b) inquiry skills for planning and conducting investigations, analysing and interpreting data and communicating recommendations; and (c) technology use. The task design involved presentation of the problem (Which states meet requirements to apply for solar power funds?); accessing, analysing and combining visualisations of different types of data (for solar energy); interpreting data; and presenting a recommendation. Evidence of student learning consisted of scores for the three outcome areas and their components. Figure 10 presents a screen shot of the assessment. The three prototypes developed in the NSF project described above used the coordinated framework for the design of ICT assessments to illustrate how a modular design approach can shape assessments of new literacies that can
Figure 10: High school science and math assessment using visualisations
vary the foregrounding of competencies: in the use of technology tools, in the use of 21st century ICT cognitive strategies, or in domain knowledge and skills. The next section examines contemporary designs of technology-based assessments and their potential for providing evidence of student learning of new literacies advocated in 21st century and ICT skill frameworks.
Many large-scale testing programs are capitalising on the capacities of technology to support logistical assessment functions, including test development, delivery, adaptation, scoring and reporting. A new generation of assessments, however, is attempting to move beyond logistical supports of testing by technology to reformulating task and item formats to test 21st century thinking and reasoning processes with technology, in order to overcome many of the limitations of conventional testing practices. In 2006, the Programme for International Student Assessment (PISA) conducted a pilot of computer-based assessment in science which used animations and simulations of phenomena, such as energy flow in a nuclear reactor, to test science skills
that could not be tested in the paper-based booklets. In 2009, PISA included electronic texts to test reading. Since 2005, the US state of Minnesota has administered computer-based state science tests in grades 5, 8 and 11. These science tasks present animations and simulations of laboratory experiments and phenomena such as the water cycle. In the USA, in the 2011 National Assessment of Educational Progress (NAEP) for writing, word processing and editing tools will be used in the computer-administered test for grade 8 and grade 12 students to compose essays. The large-scale tests described above are assessments of subject matter knowledge and processing skills, i.e. assessments of learning with technology. Data is not collected on how well technologies are used, nor on the use of 21st century skills such as collaboration or multimedia presentation. In fact, these subject area tests are designed to minimise the requirements for knowing how to operate particular technology tools. Large-scale assessments of the new technological literacies that directly test and report on the spectrum of 21st century ICT skills are not yet available. A 2003 ICT feasibility test by PISA was conducted with a small sample of students in Japan, Australia and the USA. The study pilot was an assessment of technology which tested a set of ICT skills for access, management, integration and evaluation. Modules included uses of web (select relevant reliable site, search), desktop (e-mail, database) and e-learning (science simulation) environments. Scored ICT proficiencies related to students' abilities to correctly use the technologies. A full-scale ICT assessment was not funded by PISA.
Recommendations for 21st century ICT assessments are turning from a primary emphasis on summative goals to methods for assessing new literacies within school curricula. Assessment designs are seeking to harness technology to measure understanding of complex and dynamic phenomena that were previously difficult to assess by conventional means. In the domains of reading and written composition, ICT tools such as web browsers, word processors, editing, drawing and multimedia programs can support reading and writing processes. These same tools can expand the cognitive skills that can be assessed, including accessing and finding relevant information, integrating multiple sources of information, planning, drafting, composition and revision. These assessments of learning with technology can vary along a continuum from static to animated and dynamic displays of information, data and phenomena, and from static to interactive ways for students to solve problems and enter responses (Koomen, 2006). At the beginning of the continuum would fall technology-based assessments intended to replicate their paper counterparts. Assessments that would fall at a midpoint on the continuum may permit students to construct tables and graphs, or they may present animations of science experiments or phenomena, such as chemical reactions, for students to observe. Assessments presenting dynamic simulations that allow students to interact by manipulating multiple variables would be placed at the most transformative end of the continuum. Technology-enhanced assessments can offer the following benefits:
- Present authentic, rich, dynamic environments.
- Support access to collections of information sources and expertise.
- Present phenomena difficult or impossible to observe and manipulate in classrooms.
- Represent temporal, causal, dynamic relationships in action.
- Allow multiple representations of stimuli and their simultaneous interactions (e.g. data generated during a process).
- Allow overlays of representations and symbols.
- Allow student manipulations/investigations and multiple trials.
- Allow student control of pacing, replay and reiteration.
- Make student thinking and reasoning processes visible.
- Capture student responses during research, design and problem solving.
- Allow use or simulations of a range of tools (Internet, productivity, domain-based).
Across the disciplines, technologies have expanded the phenomena that can be investigated, the nature of argumentation and the use of evidence. The area of science assessment is perhaps leading the way in exploring the presentation and interpretation of complex, multi-faceted problem types and assessment approaches. Technologies are being used to represent domains, systems, models and data, and their manipulation, in ways that previously were not possible. Dynamic models of ecosystems or molecular structures help scientists visualise and communicate complex interactions. This move from static to dynamic models has changed the nature of inquiry among professionals and the way that academic disciplines can be taught and tested. Moreover, the computer's ability to capture student inputs permits collecting evidence of processes such as problem-solving sequences and
strategy use as reflected by information selected, numbers of attempts and time allocation. Such work involves reconceptualising assessment design and use and tying assessment more directly to the processes and contexts of learning and instruction. Assessments of new literacies at the classroom level: The systematic, direct assessment of new literacies in classrooms remains rare. Although students may be taught to use common and advanced tools, teachers tend not to have specific technological literacy standards to meet, nor testing methods to gather evidence of student skill in using the technologies. Teachers are typically left on their own to figure out how to integrate technology into their curricula. The state of practice for assessing new literacies integrated into instructional activities remains in its infancy. The advent of the 2012 NAEP Technological Literacy probe will provide a set of examples of new literacies in the areas of Technology and Society, Engineering Design and Systems, and ICT. In the USA, assessments of 21st century skills and technological literacy standards are required for all students by grade 8; however, states may report achievement on a state test or from school reports. School reports may be based on teacher reports that may, in turn, be based on questionnaires or rubrics judging students' use of ICT in project work. Most teachers do not have access to classroom assessments of 21st century skills or professional development opportunities to construct their own. Moreover, the lack of technical quality of teacher-made and commercially developed classroom assessments is well documented (Wilson and Sloan, 2000). Even more of a problem is the
lack of clarity for teachers on how to monitor student progression in the development of 21st century skills: not only tool use, but ways to think and reason with the tools. Teachers need formative assessment tools for these purposes. The UK Key Stage 3 ICT assessment programme represented an attempt to provide teachers with assessments to check and monitor their students' operation of ICT tools (National Assessment Agency, 2008). In a 2007 pilot of an ICT test, modules on the use of websites, databases, graphs, images and presentations were administered, and teachers received feedback on where students' proficiencies fell on a continuum of operational tasks. Teachers were then expected to help their students become more proficient with the ICT tools. A major challenge reported from the 2007 pilot was that teachers viewed the time required to prepare students to take the exams as time taken away from their regular instruction. This finding supports the need for assessments of 21st century ICT strategies and operations that are designed as assessments of learning with technology. For direct assessments of new literacies knowledge and strategies to become integrated into classroom formative assessment practices, new literacies assessments must be systematically designed and subjected to technical quality screening. The formative use of assessment has been repeatedly shown to significantly benefit student achievement (Black and Wiliam, 1998). Such effects depend on several classroom practice factors, including alignment of assessments with standards and frameworks, quality of the feedback provided to students, involvement of students in self-reflection and action, and teachers actually making adjustments to their instruction based on the assessment results (1). Technologies are well-suited to supporting many of the data collection, complex analysis and individualised feedback and scaffolding features needed for the formative use of assessment (2). However, for the most part, technology-based assessments that provide students and teachers with feedback on performance on the subject matter tasks and items do not also provide feedback on students' use of embedded technology tools such as graphs, tables or visualisations. The next section describes assessments being developed by WestEd in a SimScientists project funded by the National Science Foundation (Quellmalz, Timms and Buckley, 2009). The project is studying the use of science simulations for end-of-unit, summative, benchmark purposes and for curriculum-embedded formative purposes. The project assesses complex science learning with technology. Students use a range of technology tools and inquiry skills to investigate science problems that relate to understanding increasingly complex levels of grade-appropriate models of science systems. Assessment targets are integrated knowledge about a science system and inquiry skills aligned with 21st century skills such as analysis, evaluation and communication. Although the project does not directly assess students' use of technology tools or their abilities to select appropriate tools for a task, this paper offers
(1) Black, P., Harrison, C., Lee, C., Marshall, B. and Wiliam, D. (2004). Phi Delta Kappan 86, 8. (2) Brown, J., Hinze, S. and Pellegrino, J. W. (2008). In: 21st century education, T. Good (ed.), Sage, Thousand Oaks, CA, Vol. 2, Chap. 77, 245-255.
suggestions for how such assessments could be augmented with tasks, items and feedback to promote 21st century ICT strategies such as tool selection and use or collaborative research. By providing formative feedback and further scaffolding on the use of technologies as they are used during subject matter problem solving, the assessments can encompass new literacies and lessen teachers' perceptions that technological fluency poses additional, irrelevant burdens. Figure 11 presents a screen shot of tasks in a SimScientists assessment designed to provide evidence of middle school students' understanding of ecosystems and inquiry practices. Students are presented with the overarching problem of preparing a presentation and report to describe the ecology of a lake for an interpretive centre. They investigate the roles and relationships of the fish
and algae by observing animations of the interactions between and among organisms in the lake. The assessments then present sets of simulation-based tasks and items that focus on students' understanding of the emergent behaviours of the dynamic ecosystem by conducting investigations with the simulation to predict, observe and explain what happens to population levels when numbers of particular organisms are varied. In a culminating task, students write a report of their findings about the lake ecosystem. In a companion set of curriculum-embedded assessments, the technological infrastructure identifies types of errors and follows up with feedback and graduated coaching. In the assessment screen shown, feedback is provided if the student's saved investigations do not show organisms existing for the specified amount of time. Levels of feedback and coaching
Figure 11: SimScientists assessment screenshot Using a model to conduct investigations about population dynamics
progress from identifying that an error has occurred and asking the student to try again, to showing results of investigations that met the specifications. In the task shown, additional evidence could be collected on technological literacy. The system could score how well students are able to vary values for the number of organisms while using the simulation, use the graph inspector to examine the graphs and tables, and save and enlarge views of graphs of multiple experiments. Such additions would allow assessment of technology, i.e. students' understanding of how and when to use the technology features in the simulations, as well as assessment of learning outcomes with technology.
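The graduated coaching described above follows a simple escalation pattern. The sketch below is a hypothetical illustration, not the SimScientists implementation: the message texts and the number of levels are assumptions, but the logic mirrors the progression the text describes, from flagging the error to showing a worked investigation.

```python
# Hypothetical sketch of graduated feedback: support escalates with each
# failed attempt, capped at the most supportive level.

FEEDBACK_LEVELS = [
    "An error occurred in your investigation. Please try again.",
    "Hint: your saved investigations must show the organisms "
    "surviving for the specified amount of time.",
    "Here are the results of an investigation that met the specifications.",
]

def coach(failed_attempts):
    """Return the coaching message for the nth failed attempt (1-based),
    never going past the final, most supportive level."""
    level = min(failed_attempts, len(FEEDBACK_LEVELS)) - 1
    return FEEDBACK_LEVELS[level]

print(coach(1))  # level 1: identify the error, ask to try again
print(coach(3))  # level 3: show a worked example
```

A real system would branch on the type of error detected rather than a simple attempt count, but the capped-escalation shape would be the same.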
The principles of evidence-centred design represent a best practice in the field of assessment (Mislevy and Haertel, 2008). Assessments of new literacies, then, should specify an assessment framework with three components. The framework should identify the 21st century ICT domain knowledge and processes that define the constructs to be measured. These would include students' declarative knowledge about technology tools, such as their purposes and features, and students' procedural knowledge, or proficiency for operating particular technology tools. The 21st century ICT domain would also define strategies such as information processing, knowledge management, problem solving and communication; each of these is a strategy that individuals must draw on to make use of technologies to address significant, recurring problems in general, applied contexts and in academic disciplines. For new literacy assessments aiming to measure technology use and also to measure academic knowledge and skills, the framework would need to specify, test and report separately 21st century thinking and reasoning strategies (including collaboration and communication), use of desktop or e-learning tools, and domain knowledge and processes. The second component of evidence-centred design for the assessment of new literacies would then specify the features of assessment tasks and items that would elicit observations of achievement of the 21st century ICT and domain knowledge and skills of interest. The types of assessment tasks and items would represent the types of fundamental contexts, problems and activities in which examinees use technology in school and applied settings.
The third component of evidence-centred design would specify: (a) the evidence of student learning that needs to be extracted from student responses to the assessment tasks and items; (b) how the responses will be scored; and (c) the details of the statistical models needed to calibrate items and create proficiency estimates and reports of students' knowledge and skills. By shaping large-scale and classroom assessments according to this principled assessment design approach, new literacies assessments can initiate the process of documenting technical quality by describing a systematic design process. Further technical quality evidence gathered during cognitive labs of students thinking aloud as they solve assessment tasks and items would provide evidence of construct validity. The psychometric data from analyses of student performance on tasks and items would provide further evidence of technical quality. Since the process of documenting technical quality requires considerable expertise, some of the assessment resources made available to teachers for classroom formative assessment should have such processes and data documented.
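The three components above can be expressed as plain data structures, which makes the documentation of an assessment design explicit. The sketch below is a hypothetical illustration (the class and field names are assumptions, not terminology from Mislevy and Haertel): it simply records what is measured, how evidence is elicited, and how responses are scored.

```python
# Hypothetical sketch: the three evidence-centred design components as
# simple dataclasses for documenting a new-literacies assessment.

from dataclasses import dataclass

@dataclass
class StudentModel:              # component 1: constructs to be measured
    declarative_knowledge: list  # e.g. purposes and features of tools
    procedural_knowledge: list   # e.g. operating a spreadsheet
    ict_strategies: list         # e.g. problem solving, communication

@dataclass
class TaskModel:                 # component 2: task/item features
    contexts: list               # school and applied settings
    item_types: list             # e.g. simulation task, constructed response

@dataclass
class EvidenceModel:             # component 3: evidence, scoring, statistics
    observables: list            # what is extracted from responses
    scoring_rules: dict          # how each observable is scored
    measurement_model: str       # statistical model for calibration

@dataclass
class AssessmentDesign:
    student: StudentModel
    task: TaskModel
    evidence: EvidenceModel

design = AssessmentDesign(
    student=StudentModel(["tool purposes"], ["spreadsheet use"], ["communication"]),
    task=TaskModel(["applied science problem"], ["simulation task"]),
    evidence=EvidenceModel(
        ["saved investigations"],
        {"saved investigations": "rubric"},
        "item response theory",
    ),
)
print(design.evidence.measurement_model)  # item response theory
```

Keeping the three components separate in this way also supports the technical quality documentation the text calls for, since each design decision has an explicit home.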
Such coherence will increase the validity of inferences from the assessments and increase the likelihood that information about student performance can be used to describe and promote skilled use of technologies in significant academic and applied tasks.
Summary
The development of assessments of new literacies is in its early stages. Multiple frameworks, contexts and points of view both invigorate and complicate design efforts. Educators differ on whether technology should be assessed as a distinct domain or integrated into assessments within academic disciplines (Quellmalz and Kozma, 2003). Expert panels need to reach consensus on the knowledge and skills that constitute new literacy skills and how those skills align with the knowledge and skills in subject matter frameworks and standards. Research is needed on how to design tasks that integrate the use of technologies into subject matter tests and how to directly test, extract and report the skill with which technologies are operated and strategically used. Experts need to identify the features and functions of technologies that are relevant to academic and 21st century constructs of interest, as well as those features that need to be controlled because they interfere with performance on targeted knowledge and skills. Studies are needed to examine student performance on items and tasks in which technology is assumed to enhance or hinder performance. Work with technology-based assessments that scaffold learning and performance in complex tasks while adapting to student responses is also in its early stages. Research on ways that these adaptive modules can serve
as formative and summative assessments is greatly needed. Changes in scaffolding could be features that are varied in the assessment tasks. Research would examine how changes in the scaffolding levels of assessment task designs relate to student performance. Such efforts would provide the field with interdisciplinary 21st century
ICT assessment frameworks, principled assessment designs, exemplary assessments and evidence of their validity. In the 21st century, students will need to become facile users of technologies, and 21st century educators will need to be able to define, target, measure and promote students' progress on these new literacies.
References
Baxter, G. P. and Glaser, R. (1998). The cognitive complexity of science performance assessments, Educational Measurement: Issues and Practice, Vol. 17, No 3, 37-45. Black, P. and Wiliam, D. (1998). Inside the black box: raising standards through classroom assessment. London: King's College. Bennett, R. E., Jenkins, F., Persky, H. and Weiss, A. (2003). Assessing complex problem solving performances, Assessment in Education, Vol. 10, 347-373. Burns, T. C. and Ungerleider, C. S. (2002). Information and communication technologies in elementary and secondary education, International Journal of Educational Policy, Research, and Practice, Vol. 3, No 4, 27-54. Crawford, V. and Toyama, Y. (2002). Assessment of student technology proficiency and an analysis of the need for technology proficiency assessments: a review of state approaches. Paper presented at the annual meeting of the American Educational Research Association, New Orleans, LA. International Society for Technology in Education (ISTE) (2007). National educational technology standards for students: connecting curriculum and technology. Eugene, OR. International Technology Education Association (ITEA) (2000). Standards for technological literacy. Koomen, M. (2006). The development and implementation of a computer-based assessment of science literacy in PISA 2006. Paper presented at the annual meeting of the American Educational Research Association, San Francisco, CA. Kozma, R. (2003). Technology, innovation, and educational change: a global perspective. Eugene, OR: International Society for Technology in Education. Kozma, R., McGhee, R., Quellmalz, E. and Zalles, D. (2004). Closing the digital divide: evaluation of world links, International Journal of Educational Development, Vol. 24, No 4, 361-381.
Law, N., Pelgrum, W. J. and Plomp, T. (eds) (2008). Pedagogy and ICT use in schools around the world: findings from the IEA SITES 2006 study. Hong Kong: Comparative Education Research Center. Mislevy, R. J. and Haertel, G. D. (2006). Implications of evidence-centred design for educational testing, Educational Measurement: Issues and Practice, Vol. 25, No 4, 6-20. National Assessment Agency (2008). Report on the 2007 key stage 3 ICT test pilot. London: Qualifications and Curriculum Authority. Partnership for 21st Century Skills (2005). Assessment of 21st century skills: the current landscape. Tucson, AZ. Available at: http://www.21stcenturyskills.org/images/stories/otherdocs/Assessment_Landscape.pdf. Pellegrino, J., Chudowsky, N. and Glaser, R. (2001). Knowing what students know: the science and design of educational assessment. Washington, DC: National Academy Press. Quellmalz, E. S. and Haertel, G. D. (2008). Assessing new literacies in science and mathematics, in: D. J. Leu, Jr., J. Coiro, M. Knobel and C. Lankshear (eds), Handbook of research on new literacies. Mahwah, NJ: Erlbaum. Quellmalz, E. S. and Moody, M. (2004). Models for multi-level state science assessment systems. Report commissioned by the National Research Council Committee on Test Design for K-12 Science Achievement. Quellmalz, E. S. and Pellegrino, J. W. (2009). Technology and testing, Science, Vol. 323, 75-79. Venezky, R. L. and Davis, C. (2002). Quo vademus? The transformation of schooling in a networked world. Paris: OECD. Available at: http://www.oecd.org/dataoecd/48/20/2073054.pdf. Wilson, M. and Sloan, K. (2000). From principles to practice: an embedded assessment system, Applied Measurement in Education, Vol. 13, No 2, 181-208.
The impact of ICT in education policies on teacher practices and student outcomes in Hong Kong
Nancy Law, Yeung Lee and H. K. Yuen
University of Hong Kong
1. Introduction
A major theme running through education policy recommendations (Catts and Lau, 2008; CERI, 2001; European Council, 2000; OECD, 2005; Unesco, 2008) and policy initiatives (US Department of Education, 1996; CDC, 2001; Singapore MOE, 1997, 1998) in many parts of the world is the importance for education to prepare its citizenry for life in the 21st century. This has brought about changes in the school curriculum as well as plans for the integration of IT (1) in the teaching and learning process to foster the development of 21st century skills in students. Is there evidence that these education policy initiatives impact on how teaching and learning take place in schools and, even more importantly, on students' learning outcomes? In this paper, we explore this question in the context of the policy initiatives that have taken place in Hong Kong since 1998, when the first IT in education masterplan was launched (EMB, 1998), drawing on the data that have been collected over the period 1998 to 2006 from international and local evaluation studies, with a particular focus on an evaluation study of students' information literacy skills conducted as part of the evaluation of the effectiveness of the implementation of the first and second ICT in education masterplans (Law et al., 2007). This paper begins with an overview of the three IT in education strategies (EMB, 1998, 2004; EDB, 2008) launched in Hong Kong to highlight the policy foci and the changes in emphases that have taken place over time. It then summarises the changes in teaching practice and ICT use in Hong Kong schools between 1998 and 2006, based on findings from international comparative studies of ICT in education. The design and key results from the evaluation study of students' information literacy skills are then described. The paper ends with a discussion of the links between education policy, teaching practice and students' outcomes as revealed by the findings.
(1) IT and ICT are used interchangeably to refer to information and communication technology.
made in the inaugural policy address of Mr Tung Chee Hwa, the first Chief Executive after the return of Hong Kong's sovereignty to China in 1997 (EMB, 1998, p. i). The vision of this first policy was to help students develop an understanding of the pervasive impact of ICT on their daily lives and on society as a whole, as well as higher-order thinking skills and the ability to seek, evaluate, organise and present information. The document indicates that schools need to undergo a paradigm shift for the policy to be implemented successfully, though it does not elaborate on the nature of the shift. It highlights four important missions for achieving this vision.
1. Access and connectivity: to provide students and teachers with adequate and equitable access to IT facilities and to information worldwide.
2. Teacher enablement: to assist teachers' migration to the new teaching mode.
3. Curriculum and resource support: to meet the target of having 25 % of the school curriculum taught with the support of IT.
4. Fostering a community-wide culture: to coordinate all stakeholders within and outside the school sector (school management, teachers, students, parents, the business sector and other community bodies) in taking up their new roles in IT in education and implementing the policy collaboratively.
It is important to note that it was only in 2000, two years after the launch of the first five-year strategy, that the comprehensive curriculum reform initiative to renew the school curriculum with the goal of preparing the younger generation for meeting the challenges
of a knowledge-based society was launched (EC, 2000). This curriculum reform had a major impact on the formulation of the second IT in education policy, Empowering Learning and Teaching with Information Technology (EMB, 2004). This document set the goal of transforming school education 'from a largely teacher-centred approach to a more interactive and learner-centred approach' (EMB, 2004, p. i) as the targeted paradigm shift. The vision of this second policy was to encourage the effective use of ICT as a tool for enhancing learning and teaching to prepare the younger generation for the information age, turning schools into dynamic and interactive learning institutions, and fostering collaboration among schools, parents and the community (EMB, 2004, p. 10). The document used a somewhat different rhetorical language. Instead of missions, it identified seven strategic goals:
1. empowering learners with ICT;
2. empowering teachers with ICT;
3. enhancing school leadership for the knowledge age;
4. enriching digital resources for learning;
5. improving ICT infrastructure and pioneering pedagogy using ICT;
6. providing continuous research and development;
7. promoting community-wide support and community building.
These seven goals have a much stronger educational focus and reflect different priorities and a more comprehensive set of strategies compared with the missions contained in the first policy. Empowering learning is identified as the policy goal while the other six are strategic goals. There is an
underpinning assumption in this document that the process of IT implementation involves innovation, the nature of which is not only technological but also pedagogical. It is within this framework that enhancing school leadership, such that principals and key personnel in schools better understand the nature and process of the change required, and continual research and development were given important strategic consideration in this second policy. The second policy was planned to provide strategic guidance for three years in view of the fluidity of the technology and education arenas. The third IT in education policy document, Right technology at the right time for the right task, was released in 2008 (EDB, 2008). As the title indicates, IT is perceived as purely instrumental in this document; it does not see the need to identify what is 'right'; and the focus is at the task level rather than at the level of an overarching curriculum or educational goal. Instead of identifying missions (as in the first policy) or goals (as in the second policy), this third policy identified six strategic actions.
1. Provide a depository of curriculum-based teaching modules with appropriate digital resources.
2. Continue to sharpen teachers' ICT pedagogical skills.
3. Assist schools in drawing up and implementing school-based ICT in education development plans.
4. Enable schools to maintain effective ICT facilities.
5. Strengthen technical support to schools and teachers.
6. Collaborate with non-governmental organisations to raise the information literacy of parents and launch parental guidance programmes on e-learning at home.
This third policy is clearly a turnaround from the developmental direction taken by the first two. There is avoidance of any indication that value judgments are involved in deciding how and what technology is used, or that the vision and leadership of the school matter. It is a policy document in name only: it does not set a policy directive, and it leaves the least possibility of stimulating any debate or controversy. This policy was also released with an extremely low profile: there was no formal launch and no media publicity. It is not possible to pinpoint what might have caused such change and discontinuity in policy. However, there was a major change in the top-level leadership of the Education Bureau at the time this policy was drafted and approved, and the key people who led the curriculum reform launched in 2000 had stepped down.
3. Teaching practice and ICT use in Hong Kong schools (1998 to 2006)
Hong Kong took part in all three modules of the Second Information Technology in Education Study (SITES) conducted under the auspices of the International Association for the Evaluation of Educational Achievement (IEA). The first module, SITES-M1 (study homepage at http://www.mscp.edte.utwente.nl/sitesm1), focused on describing the status of ICT and its use in schools through a survey of principals and technology coordinators, with data collection conducted at the end of 1998. Details of the design and findings from this study are reported in Pelgrum and Anderson (1999). This study collected information on the percentage of schools having ICT available for use for
instructional purposes within formal or informal educational settings, as well as the extent to which principals perceived that emergent practices in teaching and learning were present in their schools. Emergent practices were defined as those practices designed towards developing students' lifelong learning abilities. These are generally more student-centred, open-ended learning and teaching activities with characteristics not commonly found in traditional classrooms. These emergent characteristics include the following, namely that students:
- develop abilities to undertake independent learning;
- learn to search for, process and present information;
- are largely responsible for controlling their own learning progress;
- learn and/or work during lessons at their own pace;
- are involved in cooperative and/or project-based learning;
- determine for themselves when to take a test.
The third SITES module, SITES 2006, was designed as a survey of schools and teachers to examine the kinds of pedagogical practices adopted in different countries and the use of ICT in them. In this module, the principals were also asked the same question on their perception of the extent to which emergent practices were present in their schools. The mathematics teachers and science teachers surveyed in this study were also asked about the frequency with which different kinds of teaching and learning activities (traditional as well as lifelong learning oriented ones) took place in their classrooms and whether ICT was used in those activities. Details of the design and findings from the SITES 2006
study are reported in Law, Pelgrum and Plomp (2008). As the first IT in education strategy in Hong Kong was only launched in November 1998, computers were not used much for instructional purposes except for the teaching of computing-related subjects in the curriculum. Data collection for SITES-M1 was conducted at the end of 1998. The student-computer ratios in primary and secondary schools in Hong Kong were 53.3 and 35.7 respectively, which represented rather low levels of hardware provision among the participating countries at the time (Pelgrum and Anderson, 1999). Use of computers for instructional purposes in non-computing subjects was extremely rare. The SITES 2006 teacher survey results showed that 70 % of mathematics teachers and 82 % of science teachers in Hong Kong reported having used ICT with the sampled grade 8 classes that they taught in that school year, which was among the highest percentages reported in the participating countries. This finding indicates that, in terms of classroom adoption, the government strategies have achieved noticeable success. Obviously, use is not the only criterion for gauging success in policy implementation. If ICT use were to support students' development of 21st-century skills, it matters whether learning was still organised as traditional teacher-centred instruction or was lifelong learning oriented as characterised above, and how ICT was actually used in classroom settings. The SITES 2006 teacher survey results indicate that the pedagogical orientation of Hong Kong teachers was among the most traditional of the 22 participating educational systems (Law and Chow, 2008a). Further, the extent of
pedagogical deployment of ICT was slightly greater for traditional pedagogical activities than for lifelong learning oriented ones. On the other hand, the strong traditional pedagogical orientation should not be interpreted as a policy outcome. In fact, the SITES-M1 findings show that principals in Asian countries, including Hong Kong, generally reported much lower levels of presence of emerging practices in their schools compared with their counterparts in other participating countries (Pelgrum and Anderson, 1999). The results from the principal survey in SITES 2006 show a remarkable swing in the percentage of principals reporting a lot of presence of emerging practices compared with the same statistic reported in 1998, with countries like Norway, Slovenia and Denmark reporting big decreases while big increases were observed in Asian education systems such as Hong Kong, Japan, Thailand and Singapore, and in some others such as Israel and Italy. Findings from the two SITES studies seem to indicate that there has been a move towards more emerging, lifelong learning oriented pedagogical practices in Hong Kong classrooms over the period 1998 to 2006, though practices as a whole are still very traditional because of the cultural and historical background of the schools.
of the second IT in education policy (EMB, 2004). The goal was to find out whether students were able to make effective use of ICT to tackle learning tasks in the school curriculum at a level that is not normally achievable without the appropriate use of technology. In commissioning this project, the EDB was interested in methodological innovation in assessment that would assess not only technical operational skills, but also students' problem-solving and lifelong learning skills. Hence the focus was less on the psychometric qualities of the evaluation indicators and more on exploring new ways of assessing new kinds of outcomes. In this section, we elaborate on the conceptual framework adopted in this study with respect to IL and the key principles underpinning the design of the assessment tasks, and give a brief description of the assessment instruments, the technology platform used to conduct the assessment and the sampling design for the study.
subject domains. Hence, the assessment of IL should also take account of the domain context. Figure 1 is a diagrammatic representation of the conceptual framework underpinning this study on how IL develops in the context of learning within school curriculum subjects. In this framework, IL encompasses both cognitive and technical proficiency. Cognitive proficiency refers to the desired foundation skills of everyday life at school, at home and at work. Literacy, numeracy, problem-solving and spatial/visual literacy demonstrate these proficiencies. Technical proficiency refers to basic knowledge of hardware, software applications, networks and elements of digital technology. These proficiencies are developed through acquiring generic technical IT skills and applying them for interactive learning within the corresponding subject learning contexts in everyday learning and teaching practices.
ther developed a rubric with four levels of performance: novice, basic, proficient and advanced. Rubrics are scales of performance that can be used to judge the quality of students' performance based on the descriptive criteria provided (Popham, 2003). Rubrics are considered appropriate for use in this study as they can be used across a broad range of subjects in assessing both the process and product of students' learning (Moskal, 2000). Moreover, in assessing complex competences, rubrics providing specific objective criteria for different levels of performance offer a way to provide the desired validity in the grading process (Morrison and Ross, 1998; Wiggins, 1998). The IL rubric developed in the present study is modified from NCREL (2003), which has been validated by Lee (2009) for use in assessing students' IL outcomes as indicated by their performance in learning activities in the classroom and through their authentic learning products. Some examples of the rubrics and their application in the context of specific tasks in the PAs developed in this study are given later in this paper.
- Define: using ICT tools to identify and appropriately represent information needs.
- Access: collecting and/or retrieving information in digital environments.
- Manage: using ICT tools to apply an existing organisational or classification scheme to information.
- Integrate: interpreting and representing information, such as by using ICT tools to synthesise, summarise, compare and contrast information from multiple sources.
- Create: adapting, applying, designing or inventing information in ICT environments.
- Communicate: communicating information properly in its context (audience and media) in ICT environments.
- Evaluate: judging the degree to which the information satisfies the needs of the task in ICT environments, including determining the authority, bias and timeliness of materials.
Table 1: The seven dimensions of IL in the ETS framework adopted in this study (Source: ETS, 2003, p. 18)
The number of tasks that assess achievement in each of the dimensions may also vary across the different PAs, depending on the nature of the subject discipline. For each PA, general guidelines for answering the questions are given to the students at the beginning of the assessment. In addition, the approximate completion time for each main question is indicated at the end of that question in each PA.
4.4. Example performance assessment items illustrating the IL dimension they assess
Some examples of assessment items drawn from the technical PA, the mathematics PA and the science PA are given in this section to illustrate how the dimensions of IL are assessed in the different subject areas.
An item to assess the define dimension. Figure 2 shows an item in the technical PA designed to assess students' ability to define their information needs. Here, students are asked to plan a trip to Hong Kong for their grandfather and grandmother, and to define appropriate keywords for searching the Discover Hong Kong website. The assessment criteria relate to whether students can identify appropriate keywords.
An item to assess the access dimension. Figure 3 shows an item in the mathematics PA designed to assess students ability to access information effectively. In this item, students are asked to use a search engine to retrieve correct fares for adults and children to visit the Hong Kong Ocean Park. The assessment criteria are related to whether students can access relevant and correct information or not.
An item to assess the manage dimension. Figure 4 shows an item in the technical PA designed to assess students ability to manage information effectively. This item asked the students to edit the information in a Word document according to six given formatting requirements. Students were also provided with a sample text formatted according to those six requirements for their reference. The assessment criteria are based on the number of changes that students can make correctly.
Figure 4: An item in the technical PA designed to assess the manage dimension

An item to assess the integrate dimension. Figure 5 shows an item in the mathematics PA designed to assess students' ability to integrate information effectively. In this item, students are asked to manipulate an interactive applet to observe changes in the area of a rectangle with the different length-width configurations formed by a piece of string of fixed length. Students are then asked to deduce the maximum area of a rectangle that can be enclosed by the piece of string. The assessment criteria are based on the comprehensiveness of the students' manipulations and observations, and the correctness of the students' interpretations.

Figure 5: An item in the mathematics PA designed to assess the integrate dimension
An item to assess the create dimension. Figure 6 shows an item in the science PA designed to assess students' ability to effectively create representations of information. In this item, students were asked to use electronic resources to create a classification chart with four categories for nine species, and also to include both the names and photos of those species in the chart. The assessment criteria are based on the complexity of the chart created.
Figure 6: An item in the science PA designed to assess the create dimension

An item to assess the communicate dimension. Figure 7 shows an item in the technical PA designed to assess students' ability to communicate information effectively. This item asks students to use a discussion forum to share and discuss their suggestions on the choice of scenic spots for their grandparents. The assessment criteria are based on whether the students can post their ideas and respond to their peers.

Figure 7: An item in the technical PA designed to assess the communicate dimension
An item to assess the evaluate dimension. Figure 8 shows an item in the science PA designed to assess students' ability to evaluate information effectively. In this item, students have to run and observe the behaviour of an ecological simulation, and then propose a guideline for protecting the pond ecosystem. Hence they need to be able to evaluate the challenges to the pond ecology based on their observations of the simulation as well as on what they have learnt from the other information sources they read. The assessment criteria are based on whether the guidelines students generated applied to the whole ecosystem and whether sufficient reasons were given.

Figure 8: An item in the science PA designed to assess the evaluate dimension
the information) and create (adapting, applying, designing or inventing information in ICT environments). Hence two scoring rubrics are necessary for assessing these two aspects of the students' performance. The scoring rubric shown in Table 2 is for scoring performance in the create dimension only. The specific skill pertaining to the create dimension in this task is the ability to use an advanced tool to create a well-structured chart. The scoring criteria and an illustrative sample of students' work for each level of performance are also provided in Table 2. Experienced teachers were recruited to score the students' performance based on the students' responses to the questions as well as the products they created for the assessment. The
scoring of the PA tasks requires expert judgment based on a thorough understanding of the scoring rubrics. A training workshop, including inter-coder moderation and discussion of discrepant scoring, was conducted before the formal scoring took place. The inter-coder reliabilities for the scoring were 0.95 in mathematics, 0.99 in Chinese language at grade 5, 0.96 in Chinese language at grade 8, 0.95 in science and 0.98 in the technical PA for both grades 5 and 8.
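The paper does not specify which reliability statistic was used. One common choice for scores like these is the Pearson correlation between two coders' ratings of the same set of responses; the sketch below illustrates that calculation with hypothetical scores (not study data):

```python
# Sketch: inter-coder reliability as the Pearson correlation between two
# coders' rubric scores on the same student responses (hypothetical data).
from statistics import mean

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length score lists."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# Hypothetical rubric scores (0 = novice ... 3 = advanced) from two trained coders
coder_a = [3, 2, 2, 1, 0, 3, 2, 1]
coder_b = [3, 2, 1, 1, 0, 3, 2, 2]
print(round(pearson_r(coder_a, coder_b), 2))  # → 0.87
```

Other statistics, such as percentage agreement or Cohen's kappa, would be computed over the same paired scores.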
4.6. Challenges encountered in the design of performance assessment tasks in this study
We encountered serious challenges in the design of the PA tasks. A comprehensive literature review conducted at the start of the study revealed that most of the reported empirical work on the assessment of IL was in the area of assessing technical IL (e.g. ETS, 2003; Lennon et al., 2003; Jacobs, 1999).
Assessment of IL in subject-specific contexts was only found for science (e.g. Quellmalz et al., 1999; Quellmalz and Kozma, 2003). Hence the development of PAs for technical IL and in different subject domains using the same IL
Table 2: Scoring rubric for the create dimension (specific IL skill assessed: able to use an advanced tool to create a well-structured chart)
- Advanced: able to use an advanced tool (diagram function, Excel or other drawing tool) to create a chart with at least 2 levels of hierarchical structure.
- Proficient: able to use an advanced tool (diagram function or other drawing tool) to create a chart with 1 level of hierarchical structure.
- Basic: able to use a simple tool (table) to create a classification table.
- Novice
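For readers implementing similar rubric-based scoring, the four levels map naturally onto a small data structure. The sketch below is a hypothetical encoding: the level names follow the rubric, but the numeric score mapping and the novice description are assumptions, not taken from the study.

```python
# Hypothetical encoding of the four-level create-dimension rubric.
CREATE_RUBRIC = {
    "advanced":   "advanced tool, chart with at least 2 hierarchical levels",
    "proficient": "advanced tool, chart with 1 hierarchical level",
    "basic":      "simple tool (table) used to create a classification table",
    "novice":     "does not meet the basic criteria",  # assumed description
}
LEVEL_SCORES = {"novice": 0, "basic": 1, "proficient": 2, "advanced": 3}

def score(level: str) -> int:
    """Map a coder's level judgement to a numeric score (assumed 0-3 scale)."""
    key = level.lower()
    if key not in LEVEL_SCORES:
        raise ValueError(f"unknown rubric level: {level}")
    return LEVEL_SCORES[key]

print(score("Proficient"))  # → 2
```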
framework is a pioneering attempt made in this study. A number of challenges we faced in the design of the PA tasks have not been satisfactorily resolved. Two of the challenges with important methodological implications are reported below. One methodological challenge in comparing students' IL performance across subject areas is the task dependence of the level of IL performance required for the most satisfactory completion of a task along a specific dimension. For example, an item assessing the create dimension in the mathematics PA, as shown in Figure 5 (question 3.1), only asks students to use an interactive program to create three rectangles and record their lengths and widths. The level of performance required for the most satisfactory completion of this task is at the basic level only. On the other hand, question 3.1 in the science PA (see Figure 6), assessing the same dimension (create), required students to create a classification diagram. The mathematics item simply required students to follow the instructions to create an artifact, whereas the science item required a higher level of competence for satisfactory task completion, since students need to determine the shape of the chart and how many hierarchical levels they need for it. In other words, the levels of skills and competences in the create dimension required in the science PA are higher than those required in the mathematics PA. Matching the levels of performance required for all dimensions on two different PAs is particularly difficult if the task contexts in both have to be authentic. Another challenge for comparability across subject domains is that there are some digital tools and their usage which are core IL performance only for
specific subjects because of the nature of the learning tasks and the discipline. Examples of these are exploratory geometry tools in mathematics and simulation tools for exploring the outcomes of different scenarios in science. In the present study, in assessing evaluation skills in science, students were required to run the simulation program, make observations of how the ecology changes and then discuss with their peers to propose a guideline for protecting the pond ecosystem (see Figure 8 for task details). However, for the same dimension in the technical PA, students only needed to critically evaluate whether the retrieved information was related to the topic, without the need to use any subject-specific tool. It is also not possible to isolate the effect of the students' subject matter knowledge on their IL performance.
pled school to take part in the IL assessment. As both the assessment of IL and online performance assessment were totally new to schools, it was not easy to get schools to agree voluntarily to take part in the study. In the end, 40 primary and 33 secondary schools took part (after replacement). A total of 1 320 grade 5 students and 1 302 grade 8 students took part in the main data collection. Two pilot studies had been conducted before the main data collection: the first to ensure the validity of the instruments, and the second to try out the logistical arrangements of the main study. To ensure that the assessment conducted in this study was fair and valid, it was necessary for students in all schools to have access to a uniform computing environment. As a result of the implementation of the first IT in education strategy, all schools in Hong Kong had been equipped with at least one computer laboratory with broadband Internet access. However, the differences in hardware and software infrastructure and configurations between schools in Hong Kong were still extremely wide. It was also administratively not feasible for the study team to install the same software environment in the computer laboratories of the sampled schools. After exploring possible alternatives, we decided that a remote server system, the Microsoft Windows Terminal Server (WTS), was the most suitable technology platform for the administration of the IL performance assessments in our context. Students worked on the PA tasks on computers in the laboratories of their own schools, which acted as dumb terminals. All assessment task-related computations and manipulations were in fact carried out and saved on the WTS.
5. Students' information literacy outcomes: impact of eight years of ICT in education policy in Hong Kong
We analysed the results of the assessment by computing the percentage score obtained by the students on the items in each dimension. In the following section, we first report on the technical IL achievement of the grade 5 and grade 8 students to examine the differences between them. Next, a summary of the results across the three PAs at the grade 8 level is provided. A comparison of the students' IL achievement across the three domains (technical, Chinese language and science) is then given, taking account of the challenges mentioned and of the limitations of the study, as pointed out in section 4.6.
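The percentage-score aggregation described above can be sketched as follows. The item names, maximum scores and dimension assignments below are hypothetical illustrations, not taken from the study instruments:

```python
# Sketch: aggregate item scores into a percentage score per IL dimension.
from collections import defaultdict

def dimension_percentages(item_scores, item_max, item_dimension):
    """Return {dimension: percentage of the maximum possible score obtained}."""
    got = defaultdict(float)
    top = defaultdict(float)
    for item, sc in item_scores.items():
        dim = item_dimension[item]
        got[dim] += sc
        top[dim] += item_max[item]
    return {dim: 100.0 * got[dim] / top[dim] for dim in top}

# One hypothetical student's rubric scores on four items
item_scores = {"q1": 2, "q2": 1, "q3": 3, "q4": 0}
item_max = {"q1": 3, "q2": 3, "q3": 3, "q4": 3}
item_dimension = {"q1": "define", "q2": "define", "q3": "create", "q4": "create"}
print(dimension_percentages(item_scores, item_max, item_dimension))
# → {'define': 50.0, 'create': 50.0}
```

School means of these per-student percentages then give the distributions plotted in the figures that follow.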
The very large inter-school differences lead to some rather surprising observations. First, the mean achievement of the best-performing primary school was almost the same as or higher than the median school mean of the secondary schools for all the IL dimensions with the exception of manage. On the other hand, the lowest-performing secondary schools had means below the median of all the school means for the primary schools, except for the evaluate dimension. Results also showed that there were significant differences between schools in students' levels of IL competence in technical proficiency. This seems to indicate that school experiences matter in contributing to the IL outcomes of students, and that the number of years of schooling and cognitive maturity contribute less to students' IL outcomes than the curriculum experiences of students.
Figure 9a: Boxplots of the school means of grade 5 students' IL performance in the technical PA across the 40 primary schools
Figure 9b: Boxplots of the school means of grade 8 students' IL performance in the technical PA across the 33 secondary schools
Figure 10: Boxplots of the school means of grade 8 students' IL performance in the Chinese language PA across the 33 secondary schools
Figure 11: Boxplots of the school means of grade 8 students' IL performance in the science PA across the 33 secondary schools
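Each box in boxplots such as Figures 9 to 11 summarises the distribution of school means through a five-number summary (minimum, lower quartile, median, upper quartile, maximum). The sketch below computes that summary for a set of hypothetical school means; the quantile convention (linear interpolation) is an assumption, as plotting packages differ slightly in how they compute quartiles:

```python
# Sketch: five-number summary (min, Q1, median, Q3, max) underlying a
# boxplot of school mean scores. The data are hypothetical.
def five_number_summary(values):
    xs = sorted(values)
    n = len(xs)

    def quantile(p):
        # Linear interpolation between the two closest ranks
        idx = p * (n - 1)
        lo = int(idx)
        hi = min(lo + 1, n - 1)
        return xs[lo] + (xs[hi] - xs[lo]) * (idx - lo)

    return (xs[0], quantile(0.25), quantile(0.5), quantile(0.75), xs[-1])

# Hypothetical school mean percentage scores on one IL dimension
school_means = [48, 35, 61, 50, 42, 70, 55, 58]
print(five_number_summary(school_means))  # → (35, 46.5, 52.5, 58.75, 70)
```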
higher overall student IL achievement levels. In fact, some newer schools with medium student achievement banding known for their engagement in curriculum and pedagogical innovation showed higher student IL achievement than some schools well known for their excellent general academic achievement. These findings indicate that IL achievement in the subject areas depends not only on students' achievement levels in the specific subject area assessed, but also on how ICT has been integrated into the curriculum by teachers in their classrooms. There was wide variation within and between schools in student IL achievement, indicating that both student background and learning experience in school matter.
6. Conclusion
What have the first two IT in education strategies in Hong Kong achieved? The studies reviewed in this paper indicate that some basic measures of infrastructure provision and teacher use have been achieved in all publicly funded schools in Hong Kong. There have been some changes in pedagogy, but pedagogical innovation is still rare and not well integrated with the use of ICT tools specific to subject areas. Students have generally gained some basic IT operational skills but are very poor at tackling the more complex tasks involving the information literacy dimensions of integrate, evaluate, create and communicate. The findings also indicate that learning experience in school matters for students' IL achievement and that there is still a long way to go between students' ICT use in classrooms and the nurturing of 21st-century skills in Hong Kong. Analyses of the SITES 2006 data indicate that school leadership impacts
on teachers' pedagogy (Law, 2008), which in turn also influences the perceived impact of ICT on students' learning outcomes (Law and Chow, 2008b). Further, in-depth analyses of the SITES 2006 and SITES-M1 findings indicate that system-level policy impacts on teachers' pedagogical practice orientation and ICT use (Law, Lee and Chan, in press). The policy changes in Hong Kong, both the first two IT in education strategies and the overall school curriculum reform that started in 2000, have resulted in a stronger lifelong learning orientation in pedagogical practices in Hong Kong classrooms. The various international and local studies indicate that the policy initiatives have brought about positive (though still small) progress towards realising the goal of leveraging the use of ICT to prepare students for life in the 21st century. The apparent change in policy direction in the third strategy is hence somewhat worrying. It has lost the strong focus on pedagogy and on fostering school leadership for ICT use in schools to support curriculum innovation, which have been found to be most important for achieving the educational potential of ICT. Another concern is the absence of any mention of research and development as a strategic goal in the third strategy. The continuing support for local ICT-related research initiatives, as well as Hong Kong's participation in the SITES studies, has provided valuable data and findings to inform policy and practice. It is hoped that this absence is not an indication that such support will not be forthcoming under the third strategy. The study on performance assessment of students' IL skills reported earlier in this paper is only a preliminary study, and this should be a priority area for further research.
References
Catts, R. and Lau, J. (2008). Towards information literacy indicators. Paris: Unesco. Retrieved 17.08.09 from http://www.uis.unesco.org/template/pdf/cscl/InfoLit.pdf
CERI (2001). What schools for the future? Paris: OECD.
Curriculum Development Council (CDC) (2001). Learning to learn – The way forward in curriculum. Hong Kong: Hong Kong Government Printing.
European Council (2000). Lisbon European Council 23 and 24 March 2000, Presidency conclusions. Lisbon: European Parliament. Retrieved 17.08.08 from http://www.europarl.europa.eu/summits/lis1_en.htm
EC (Education Commission) (2000). Learning for life, learning through life: reform proposals for the education system in Hong Kong. Hong Kong: Education Commission, Hong Kong SAR Government. Retrieved 17.08.09 from http://www.edb.gov.hk/FileManager/EN/Content_4079/overview-e.pdf
EDB (2008). Right technology at the right time for the right task. Hong Kong: Education Bureau (EDB), Hong Kong SAR Government.
EMB (1998). Information technology for learning in a new era: five-year strategy 1998/99 to 2002/03. Hong Kong: Education and Manpower Bureau, Hong Kong SAR Government. Retrieved 17.08.09 from http://www.edb.gov.hk/index.aspx?langno=1&nodeID=425
EMB (2004). Empowering learning and teaching with information technology. Hong Kong: Education and Manpower Bureau, Hong Kong SAR Government. Retrieved 17.08.09 from http://www.edb.gov.hk/index.aspx?langno=1&nodeID=2497
Educational Testing Service (ETS) (2003). Succeeding in the 21st century: What higher education must do to address the gap in information and communication technology proficiencies. Princeton, NJ: ETS.
Jakobs, K. (1999). Information technology standards and standardization: A global perspective. Hershey, PA: IGI Global.
Law, N. and Chow, A. (2008a). Pedagogical orientations in mathematics and science and the use of ICT, in: N. Law, W. J. Pelgrum and T. Plomp (eds), Pedagogy and ICT in schools around the world: findings from the SITES 2006 study, 121–179. Hong Kong: CERC and Springer.
Law, N. and Chow, A. (2008b). Teacher characteristics, contextual factors, and how these affect the pedagogical use of ICT, in: N. Law, W. J. Pelgrum and T. Plomp (eds), Pedagogy and ICT in schools around the world: findings from the SITES 2006 study. Hong Kong: CERC and Springer.
Law, N., Lee, M. W. and Chan, A. (in press). Policy impacts on pedagogical practice and ICT use: an exploration of the results from SITES 2006. Journal of Computer Assisted Learning.
Law, N., Yuen, A., Shum, M. and Lee, Y. (2007). Final report on Phase (II) study on evaluating the effectiveness of the Empowering learning and teaching with information technology strategy (2004/07). Retrieved 23.07.09 from http://www.edb.gov.hk/index.aspx?langno=1&nodeID=6363
Lee, Y. (2009). An investigation of 21st century skills in innovative pedagogical practices using technology. Unpublished doctoral dissertation, University of Hong Kong.
Lennon, M., Kirsch, I., Von Davier, M., Wagner, M. and Yamamoto, K. (2003). Feasibility study for the PISA ICT literacy assessment. Report to Network A, a joint project of ACER, ETS, NIER. Paris: OECD. Retrieved 17.08.09 from http://www.pisa.oecd.org/dataoecd/35/13/33699866.pdf
Morrison, G. R. and Ross, S. M. (1998). Evaluating technology-based processes and products, in: R. S. Anderson and B. W. Speck (eds), Changing the way we grade student performance: classroom assessment and the new learning paradigm, 69–77. San Francisco, CA: Jossey-Bass.
Moskal, B. M. (2000). Scoring rubrics: What, when, and how? Practical Assessment, Research and Evaluation, Vol. 7, No 3. Retrieved 23.07.06 from http://pareonline.net/getvn.asp?v=7&n=3
NCREL (2003). enGauge 21st century skills: literacy in the digital age [electronic version]. Retrieved 08.01.06 from http://www.ncrel.org/engauge/skills/indepth.htm
OECD (2005). Definition and selection of key competencies – Executive summary [electronic version]. Retrieved 16.06.06 from http://www.deseco.admin.ch/bfs/deseco/en/index/02.html
Pal, L. A. (2001). Beyond policy analysis – Public issue management in turbulent times (2nd ed.). Scarborough, Ontario: Nelson.
Pelgrum, W. J. and Anderson, R. E. (eds) (1999). ICT and the emerging paradigm for life-long learning. Amsterdam: IEA.
Popham, J. W. (2003). Test better, teach better: the instructional role of assessment. Alexandria, VA: Association for Supervision and Curriculum Development.
Quellmalz, E., Schank, P., Hinojosa, T. and Padilla, C. (1999). Performance assessment links in science, Practical Assessment, Research and Evaluation, Vol. 6, No 10, 1998–1999.
Quellmalz, E. and Kozma, R. (2003). Designing assessments of learning with technology, Assessment in Education: Principles, Policy and Practice, Vol. 10, No 3, 389–407.
Singapore Ministry of Education (1997). Masterplan for IT in education: 1997–2002. Retrieved 17.08.09 from http://www.moe.gov.sg/edumall/mpite/index.html
Singapore Ministry of Education (1998). Learning to think: thinking to learn. Singapore: Author.
Unesco (2008). ICT competency standards for teachers: Policy framework. Paris: Unesco.
US Department of Education (1996). Getting America's students ready for the 21st century: meeting the technology literacy challenge. Washington, DC: US Government Printing Office.
Wiggins, G. (1998). Educative assessment: designing assessments to inform and improve student performance. San Francisco, CA: Jossey-Bass Publishers.
Introduction
The study Indicators on ICT in education was carried out under the auspices of the European Commission from November 2008 until October 2009 (1). The main purposes of the study were the following. 1. To identify a set of indicators that are relevant for enabling the regular monitoring of the use and impact of ICT in primary and secondary education. 2. To describe scenarios for monitoring ICT in education in the European Union. The study focused on the 27 EU Member States, the three candidate countries and the countries from the European Economic Area. This group will hereafter be referred to as EU+. In line with the main steps for monitoring described in Chapter II (Monitoring in education: an overview), the main questions addressed in this study were the following. 1. What are the policy issues regarding ICT in education?
(1) This study was financed (at a cost of EUR 122 200) by the European Commission under Contract EACEA-2007-3278. Opinions presented in this chapter do not reflect or engage the Community.
2. In which areas are indicators needed? 3. Which international comparative data are available and what are the data gaps? 4. Which actions could be undertaken to ensure that monitoring of European benchmarks and international comparisons of educational progress will take place in the future? Each of these questions is addressed in the subsequent sections.
1. Policy issues
As explained in Chapter II, educational monitoring is primarily a tool for policymaking and, hence, a first step in the process of exploring scenarios for monitoring ICT in education in the EU+ consisted of analysing the intentions of policymakers with regard to this area. A distinction will be made between common objectives and common goals/topics. The common objectives were inferred from EU policy documents reflecting common ICT-related objectives that originate from the Lisbon strategy and follow-up declarations. For all the EU+ countries that were targeted in this study, policy documents were collected from several sources, for instance official policy documents issued by ministries,
reports available through the EUN Insight project and/or articles about national ICT policies written by researchers in a recent book edited by Plomp et al. (2009). A qualitative analysis of these documents was conducted, which resulted in a list of policy topics. In the next sections, these topics are summarised and the issues underlying them are described. This constitutes the basis for forming an impression of the current relevance of indicator domains for these topics in the targeted group of countries, for which a survey was conducted among ICT policy experts from the EU+ countries (see Section 1.2).
LexUriServ.do?uri=OJ:L:2006:394:0010:0018:en:PDF), eight main competency areas were distinguished: communication in the mother tongue; communication in foreign languages; mathematical competence and basic competences in science and technology; digital competence; learning to learn; social and civic competences; sense of initiative and entrepreneurship; cultural awareness and expression. These areas will be further referred to as the EU core competency areas. Although most of these competency areas can be considered more or less traditional (as they have always tended to feature in countries' national educational goals), some of them (such as digital competence, learning to learn and sense of initiative and entrepreneurship) are believed to be essential for the information society; an underlying expectation can also be observed that ICT is a crucial facilitator for acquiring and maintaining competencies in these areas. Learning to learn can be conceived as a basic skill underlying the ability for lifelong learning and, against this background, it is relevant to observe that the Council's conclusions on a strategic framework for European cooperation in education and training for the period until 2020 (ET2020) reiterate the importance of lifelong learning (http://www.consilium.europa.eu/uedocs/cms_data/docs/pressdata/en/educ/107622.pdf). The common objectives are not explicit in terms of performance expectations and ICT-related opportunities to learn. It seems fair to infer that the underlying assumption is that countries are
expected to implement opportunities for students that lead to improvements in these core competency areas. With regard to ICT, the access for all implied in relation to these areas could be interpreted as opportunities for students to use ICT for learning in school. However, concrete targets still need to be defined. The implication for our study is that, where students are concerned, competencies and attitudes will mainly refer to these EU core competency areas.
1.2. National ICT-related policy topics for primary and secondary education

An analysis of ICT-related policy documents from the targeted group of EU+ countries was undertaken. All policy documents collected were read and coded, and the topics covered in these documents were listed. This resulted in a long list of categories that were classified in terms of main topics, sub-topics, sub-sub-topics, etc. The main topics that resulted from this analysis were the following.
1. Infrastructure: this concerns issues such as hardware and software, with sub-issues such as access to the Internet, broadband connections and open-source software.
2. Curriculum and content: this covers issues such as pedagogical approach (e.g. autonomous learning), content (e.g. development of methods) and assessment (e.g. portfolios, digital driver's licence).
3. Outcomes, e.g. competencies, digital literacy.
4. School leadership, e.g. change management.
5. Connectedness, e.g. national and/or international cooperation, public–private partnerships.
6. Teacher training, e.g. teacher competencies, pedagogical driver's licence.
7. Support, e.g. the way technical and/or pedagogical support is made available.
8. Transversal issues, e.g. equity, financing, safety.
In the next sections, the policy issues underlying these topics are described.
1.2.1 Infrastructure
Infrastructure as a topic is very broad: it covers sub-topics such as hardware and software, each of which in turn spans a wide range of policy concerns. The overall picture regarding infrastructure policy issues resulting from the analysis of policy documents can be summarised as follows. A first observation is that ICT infrastructure is still an important topic for policy concern. It is addressed in almost all documents and can be considered a crucial condition for the use of ICT: infrastructure must be present before ICT can be used. In the early days of the introduction of computers in education, a shortage of hardware and/or software was often mentioned by educational practitioners as a major obstacle to integrating ICT in teaching and learning. The policy documents refer to the intention to improve the current infrastructure, namely:
equipping classrooms with fast Internet connections (e.g. Austria, Belgium); providing interactive whiteboards to schools (e.g. UK, Denmark); standardising systems and software (e.g. UK); providing laptops for teachers (e.g. UK); improving buildings (e.g. Cyprus); ensuring own e-mail addresses for students and teachers (e.g. France). It is interesting to note that some countries seem to take more initiatives than others regarding the provision of new equipment. For instance, the UK made substantial investments, while countries like Denmark (pilot project) and Sweden were more hesitant. In some countries, equipping schools no longer seems to be a policy priority, for instance in Norway, where there are no national programmes or initiatives for introducing new devices in schools. This last observation is important because it illustrates that countries are at different stages of introducing ICT in education, which has consequences for their monitoring needs. Several policy strategies are in place for allocating and financing equipment in schools, namely: lump-sum financing in the Netherlands; in Estonia, a school must submit a statement indicating how it is currently using ICT in its teaching and learning programmes and detailing how it will use the new equipment; in Slovakia, the schools have to prepare a project proposal in which they
present a vision of how they would use ICT in their schools. In Belgium and other countries, the access of students (and even the local community) to ICT infrastructure outside school hours is stimulated. With regard to software, it can be noted that the development and maintenance of high-quality software for education has been a challenge since the first microcomputers were installed in schools around the mid-1980s. Recent initiatives concern the creation of educational portals offering open content (via Internet-reachable databases containing educational content in many different forms) and the promotion of open-source software which, in principle, can be attuned to the needs of the users (a common problem in education is that teachers do not like pre-cooked content which they cannot change). In the UK, the curriculum online programme (see http://www.dfes.gov.uk/curriculumonline/) was launched in 2003 and provided every teacher and school with e-learning credits that they could spend on approved ICT resources purchased through the website.
1.2.2 Curriculum and content
The policy documents analysed often (in more than 50 % of the documents) refer to curriculum measures that were planned in order to promote the integration of ICT. Frequently mentioned are the intention to integrate ICT in school subjects and the development of methods for ICT-assisted learning. A major distinction can be made between learning about ICT (ICT as an object) and learning with the help of ICT (ICT as a tool). Whereas in some countries the acquisition of ICT skills is organised via a separate subject, in other countries it is assumed that these skills can be acquired via the traditional subject areas (for instance, in some German states it is integrated in media education, while in other countries, in particular the new member countries, a separate informatics subject exists). Some documents are very explicit about the issue of separate ICT subjects, for instance Belgium, where the policy underpinning the plan is to incorporate ICT into different courses rather than to introduce a specific ICT-related course. Next to the expectation that ICT can improve outcomes of learning in traditional subject areas, a number of policy documents also mention that ICT can help to implement new ways of learning whereby the students (with the help of ICT) acquire more control over and responsibility for their own learning processes and outcomes. For instance, digital portfolios are conceived as a tool that can help to keep track of learning activities and the products resulting from these activities. Making explicit links between digital instructional materials and curriculum goals (the Netherlands) is conceived as helping the teacher to choose appropriate ICT applications. In the UK,
mention is made of learning design packages that would enable teachers in all sectors to build their own individual and collaborative learning activities around digital resources. In some countries, the intention is explicitly formulated that ICT should be a daily part of student learning activities (e.g. Belgium, Estonia), which is an example of an explicit educational policy objective dealing with promoting the opportunities of students to learn with and/or about ICT.
1.2.3 Outcomes
The policy plans transmit a clear expectation, phrased in different terms and with different degrees of explicitness, but with an underlying strong conviction: the use of ICT in education can improve access to teaching and learning opportunities, help to enhance the quality of teaching and learning, improve learning outcomes and promote positive reform of education systems. However, these expectations are global in character. The past decades have witnessed a search for better insight into what impact may be expected from applying ICT in education. It is still unclear how to answer questions such as the following. 1. What are the basic functional e-literacy skills that students should master when they leave compulsory education? 2. In which content areas can most added value be expected when ICT is applied? 3. Has the use of ICT in the past decades improved the competencies of our students in core subject areas? Are students better prepared for lifelong learning (in terms, for instance, of motivation to learn, analysing
their own shortcomings, setting out learning trajectories, self-assessment, problem solving, etc.)? In this respect, a recent knowledge mapping exercise conducted by the World Bank's infoDev Group (Trucano, 2005) is relevant. It revealed that, despite decades of large investment in information and communication technologies to benefit education in OECD countries, and despite the increasing use of ICT in education in developing countries, data to support the perceived conviction about the benefits of ICT are limited and evidence of effective impact is elusive or debatable. These findings highlighted various knowledge gaps and recognised the need for internationally accepted standards, methodologies and indicators to better measure the real benefits of ICT in education. This lack of good-quality and unquestionable data, in addition to the absence of standardised guidelines for establishing relevant and comparable indicators, hinders the ability of policymakers to make informed decisions or to demonstrate greater voluntarism towards the integration of ICT into their education systems. The above is not meant to claim that no research has yet been done regarding these questions; many research studies and meta-studies have been conducted over the past decades. Most of these studies, however, do not deal with changes in the total education system and, therefore, when policymakers have to take policy initiatives for the education system at large, they are often left empty-handed. The policy documents that were analysed offer the following exposé regarding (expected) outcomes. Objectives of new or revised curricula for primary
and secondary education should also pay attention to ICT-related competencies, sometimes combined with media literacy (among others, Germany). For this purpose, ICT can be a separate subject or integrated in other subject areas; several countries made a deliberate choice for one of these models. However, countries differ with regard to what is included in the ICT competencies. Some countries have examinations to establish these competencies, such as the junior computer driver's licence. A prerequisite is that all students have the opportunity to use ICT during their schooling or at home; the latter refers to, for example, disadvantaged students in secondary education (United Kingdom). The efforts countries undertake to include ICT in the curriculum fall under the umbrella of the more general goal of bridging the digital gap by providing all citizens with opportunities to acquire basic ICT skills and skills to use all kinds of ICT services. Another goal is that students are well prepared for the labour market. Governments of some countries initiated programmes to promote access to computers and the Internet at home, giving students an extra opportunity to use a computer and to learn with computers. It is not only students who benefit from these programmes, but also their families.
1.2.4 School leadership

School leaders may be important gatekeepers and facilitators in the implementation of ICT. We extracted the following observations from the policy documents. School leaders need appropriate training in a new kind of management in which ICT is, from now on, a permanent factor in their strategy. In the UK, tools are provided to help school leaders to assess how well their organisation uses ICT. Such tools help to modernise the school management (Austria). In Belgium, school leaders have to develop their own ICT policy instead of using an imposed policy document made by the government, as they also do in Germany; the reason is that they can thereby describe their vision and become aware of what is needed to achieve it and of its impact on teaching and learning. Norwegian schools are required to develop an ICT plan. An ICT policy document is not required in Sweden, though local stakeholders ask school leaders to have one; each school has to make a quality report every year, which includes plans for improvement. Schools in Malta have included their ICT policy in their school development plan. One of the topics identified in the research literature as important where school leadership is concerned is the development of a common vision on ICT that is shared by all stakeholders in the school (and preferably consistent with the vision of stakeholders outside the school, such as the ministry, inspectorate and parents). This topic is hardly addressed in the policy documents that were analysed. An exception is the UK, where BECTA aims to deliver a vision for ICT in schools.
1.2.5 Connectedness

ICT can help to open the school to the world, and vice versa, by allowing the real world to enter the school more easily. The walls of the school and the classrooms are no longer difficult blockades to integrating real-life components in the learning process. There is also a growing awareness that ICT innovations within schools cannot be realised without the help of the outside world and that the help of outside colleagues and even business firms is needed. In the policy documents, we find this reflected in several examples in almost all European countries. Most important are the links between schools and private partners (business companies). Several companies in the field of ICT, such as Apple, Intel and Microsoft, are involved in partnerships. For the schools in the respective countries, the public–private partnerships involve training of teachers, development of ICT-related educational materials (including e-learning and portals), infrastructure (hardware and access to the Internet), and support and/or funding. Most of the public–private partnerships take place at national level, but some are regionally based, as in France. In several European countries, projects have been set up to establish a link between the school and its environment. These projects vary in their goals: enabling students to learn at home or in hospital, informing parents of the achievements of their children and enabling them to have contact with teachers, increasing the digital literacy of other family members (including parents), or providing access to the Internet at home.
1.2.6 Teacher training

It is difficult to contradict a statement like this, but it is even more difficult, and quite often impossible, to realise adequate continuous staff development activities for all teachers in an education system. Since the early days of ICT in education, policy solutions have been tried in order to train teachers adequately, but the complaints about the lack of teachers' competencies and confidence have remained, and hence the search for adequate (and also affordable) solutions continues. Many promising initiatives were taken and applied in small contexts, but were probably not scalable. The current policy issues that were inferred from the policy documents are summarised below. In all European countries, the in-service training of teachers is a policy issue. Training programmes and other arrangements have been set up to organise the training of teachers. Teachers are offered opportunities to learn how to use ICT for their own purposes and how to use it in the teaching-learning process. An example is Hungary, where teacher training is beginning to concentrate on ICT-based educational methodology, with particular emphasis on how to make optimal use of educational technology in the classroom. Several countries have formulated ICT competencies for teachers, including the didactical skills to use ICT in the classroom (using ICT as a pedagogical tool). In some countries, the training results in a certificate or the European computer driving licence. In Lithuania, for example, the basic modules of the European computer driving licence have been extended with additional modules specifically related to the use of ICT in schools, as in Denmark, where the pedagogical computer driving licence has been developed. In some countries, such as the United Kingdom and Lithuania, attention is
also paid to the training of librarians in the field of ICT. Within the framework of ICT projects, programmes have been set up for the in-service training of teachers, among others the MoNES programme in Poland, the KK-foundation in Sweden, the POCTI programme in Portugal, FOR TIC in Italy, Infovek in Slovakia and OPE.fi in Finland. In many countries, the teacher training institutes are involved in the in-service training of teachers. One would expect that, next to the in-service training of teachers, the European countries would also consider the pre-service training of teachers an important issue; however, only a few policy(-related) documents address it. In Belgium, the institutes for teacher training have to pay attention to the ICT competencies of their students by setting new attainment targets and goals, not only for basic ICT skills but also for skills related to using ICT in the teaching-learning process.
1.2.7 Support
At the beginning of the computer era, technical support in schools was important: teachers did not have sufficient ICT knowledge to solve hardware and software problems, and at that time hardware and software in schools were less reliable. Nowadays, technical support is quite often provided by professionals or well-trained staff in schools, especially in secondary education. Even more important than technical support is the pedagogical support needed by teachers when applying ICT in teaching and learning. The policy documents that have been reviewed show that teachers may have difficulties in implementing ICT in the teaching-learning process and that they need support to accomplish this task. Mostly, the support is provided by agencies outside the school. In Sweden, Schoolnet offers many different services, functioning as an information centre, a library and a news agency; it provides a platform for the development of new educational approaches opened up by the Internet and new multimedia technologies. In Portugal, the Ministry of Education relaunched the Nónio programme to broaden the network of ICT competence centres so as to support all school groups in the country. Hence, there are indications of a change of emphasis from technical support to pedagogical support. This is, among other things, reflected in the role of school ICT coordinators, which in some countries is no longer limited to technical support: educational support, including in-service training, is a task of the coordinator. Catalonia (Spain) has created a new job description for ICT coordinators in schools (with specific regard to the new breed of technical support services), reformed in-service teacher training and set up new pedagogical support services for ICT, using personnel from pedagogical resource centres. In some countries (e.g. Portugal), the function of ICT coordinator does not exist and, hence, the teachers have to organise the technical and pedagogical support in their schools themselves.
1.2.8 Transversal issues
Therefore, courses in basic ICT skills are set up or people are given access to ICT facilities after office hours. Activities take place within the framework of digital literacy for all, narrowing the digital divide and lifelong learning. A policy goal in Finland is that all citizens have opportunities and the basic capabilities to use electronic services (e-services) and content. Special programmes are aimed at certain groups in society: disadvantaged children in (secondary) education, students who are ill, young sportsmen and sportswomen, young migrants, or certain regions in a country. Several programmes also focus on parents and other groups (elderly persons, disabled persons). The programmes provide training in basic skills, access to (broadband) Internet, computers at home or the digitalisation of (learning) materials. Disabled persons are often faced with ill-adjusted standards and extra costs for hardware, which limits their access to the knowledge society. Documents from Sweden and Portugal state that there is no specific programme in these countries.
Financing
Governments (mostly ministries of education) in several EU countries purchase the hardware, software and access to the Internet and/or they finance the training of teachers. Sometimes local governments are involved too, as in Poland. Initially, hardware was financed by grants and sponsors in Slovakia because the government had not yet set up an information technology programme; programmes were later set up to give schools access to the Internet. By participating in European projects, schools received equipment.
Safety
In policy documents, two aspects are distinguished regarding the safety of ICT use: the first is the protection of children against harmful content; the second is the critical evaluation and use of sources. For instance, in Sweden, a programme was set up to raise the awareness of children, parents and educators with regard to the first aspect, and other countries have started similar initiatives. In Malta, a portal has been developed to protect schools from inappropriate content; it also offers links to useful educational websites. The Greek school network has a protection policy for students. In many countries, protection is often offered by the government through providing filtering techniques, information on how to use the Internet or a telephone line for reporting illegal information. Campaigns have been launched to increase citizen awareness.
Monitoring
In order to be able to evaluate ICT policy in education, monitoring of the implementation takes place in quite a number of countries. The monitoring focuses, among other things, on infrastructure, competencies, integration in the teaching-learning process, perceptions, attitudes and needs.
such expectations certainly exist. For instance, one of the many possible conceptualisations of expected relationships is shown in Figure 1, which contains most of the issues that were identified in the document analysis and can be summarised as follows. The ICT learning opportunities of students have a (hypothesised) impact on the competencies and attitudes that they acquire. These opportunities are believed to depend on the pedagogical practices of teachers (which in turn depend on the extent to which the teachers are trained) and on the availability of and access to ICT infrastructure, which is a crucial condition for creating ICT-OTL at school. On the other hand, these opportunities are also determined by what students learn outside school. Policymakers can influence these conditions via curricula, but countries differ in the extent to which the curricula can be prescriptive. As the use of ICT is an educational change, the role of school leaders is important, as is the availability of teacher training facilities for getting acquainted with the technical and pedagogical aspects of ICT. The fact that the policy documents are not specific with regard to expected ICT-OTL and impact is not surprising, on the one hand, as it is currently still too early to take policy decisions for the education system at large, since it is not yet known what works and what does not beyond the borders of small-scale pilots, case studies, experiments and the like. On the other hand, although policy documents are usually not very specific, given the large investments with regard to ICT infrastructure in education, one would expect more explicit expectations to be formulated. However, if clear policy expectations are lacking, one may wonder what implications this may have for EU monitoring.
Figure 1: The main concepts for monitoring ICT use and impact
1.2.10 Implications for monitoring the use and impact of ICT in the EU
As mentioned in Chapter 2, in order to be able to make inferences about whether progress is being made with regard to educational outcomes, policymakers need monitors that show, on the basis of reliable and valid quantitative indicators, to what extent expected changes are taking place over time. Although the analysis of policy documents does not immediately lead to identifying common goals, it offers a first step for delineating goal domains that can (in principle) be further defined and used for an exploration among ICT policy experts from the EU+ group of countries. The approach for this exploration is described in Section 1.3. Next to finding empirical evidence for the relevance of indicator domains, the common objectives (resulting from the Lisbon strategy) can (in principle) be used as a basis for more concrete indicator definitions. However, as described in Section 1.2.1, these need to be further specified in order to be useful for drawing up indicator definitions and operationalisations.
the results from this survey will be summarised. Firstly, a description will be given of the extent to which the respondents experienced, in general, a need for comparative indicators on ICT in education. Next, an overview will be given of the areas for which the highest needs were expressed. Several caveats should be taken into account when using the ratings presented in the next sections for setting indicator priorities. Firstly, the descriptions for each area were quite general and hence more concrete indicator elaborations could elicit different indicator needs, as is usually the case: the more concrete a proposal, the less consensus may be expected among panel members. Also, one should take into account that the ratings concern subjective estimates of panel members, which do not necessarily reflect the opinions of national educational actors involved in decision-making about educational matters and in particular what to monitor, how extensively and how frequently. Nevertheless, the ratings can be used for a first priority list which, when it eventually comes to monitoring ICT in the EU, can be further revised in subsequent negotiations between countries, taking into account areas other than the ones considered in our study.
[Figure: Percentage of countries expressing a need for international comparative monitoring of ICT in education. Answer categories: 'Yes, definitely', 'No', 'Unlikely'; horizontal axis: percentage of countries (0-60 %).]
international comparative monitoring of ICT in education. In only one country did the panel member have the opinion that this need did not exist. Apparently, the panel members felt confident about their capacity to rate, because the answer category 'Don't know' was not used at all. It could be inferred from this poll that, throughout Europe, there is a need for international comparative monitoring of ICT in primary and/or secondary education.
be given by taking 60 % agreement about high needs as the threshold for selecting indicator areas. The indicator areas which 60 % or more of the panel members qualified as highly needed are shown in Table 1. The issues 'Connectedness', 'Curriculum and content' and 'Infrastructure' did not contain topics for which 60 % or more of the panel members expressed a high need. The fact that 'Curriculum and content' was not rated as highly needed by many panel members is somewhat surprising, as it is often argued that the curriculum is an important handle for introducing educational change.
[Table 1: Indicator areas qualified as highly needed by 60 % or more of panel members; agreement percentages range from 60 % to 82 %.]
2. ICT-related data available in regular assessments from the IEA and OECD
In order to determine which data and instruments with regard to ICT were available in the regular assessments from the OECD and/or IEA which have been conducted since 2000, all questionnaires from these studies were collected and mapped on the list of policy topics that were described in Section 2.2. The ICT-related data which are available in the existing data sets are listed in Table 2 below.
Primary education:
- Number of computers available for instruction (school leader)
- Number of Internet computers at school
- Shortage of computers for instruction in general (perceived by school leaders)
- Shortage of computers for instruction in mathematics/science (school leaders)
- Access to Internet in general (teachers)
- Access to Internet for mathematics/science (teachers)
- Computers available in classroom and/or elsewhere (teacher)
- Computers available for educational purposes (teacher)
- Computers available for mathematics/science (teacher)
- Availability of computer at student's home

Secondary education:
- Availability of computer software at student's home
- Availability of computer at student's home
- Access to Internet at student's home
- Access to Internet in general (teachers)
- Access to Internet for mathematics/science (teachers)
- Shortage of computers for instruction in mathematics/science (teacher)
- Computers available for mathematics/science (teacher)
- Shortage of software for mathematics/science (teacher)
Primary education:
- Computer use in general
- Computer use at school
- Computer use outside school
- Internet use outside school
- Use of computers for communication purposes

Secondary education:
- Computer use in general
- Computer use at school
- Computer use in mathematics
- Computer use outside school
- Use of Internet outside school
- Use of Internet at school for: downloading music; collaboration
- Use of computers for: playing computer games; writing stories or reports; spreadsheets; graphical software; programming; downloading; searching information; communication
Use as reported by teachers

Primary education:
- Use for searching information on Internet
- Use in mathematics for: exploration; practice; searching information
- Use in science for: experiment; practice; searching information; simulation
- Use for reading: use of computers; use of software; writing stories; use of Internet for collaboration

Secondary education:
- Use in mathematics for: exploration; practice; searching information; analysis
- Use in science for: experiment; practice; searching information; analysis; simulation
Competencies

Secondary education:
- Self-ratings by students with regard to: using anti-virus software; programming; PowerPoint presentation; multimedia presentation; downloading a file; sending a file; downloading music; e-mailing; designing web pages
Support

Primary education:
- Availability of educational support (perceived by school leaders)
- Shortage of technical support (perceived by school leaders)
- Person who is providing educational support

Secondary education:
- Shortage of support for mathematics/science (perceived by teachers)
In addition to the above, the OECD databases also contain data about the years of experience in computer use that students had at the time of data collection.

For the purpose of our study, available statistics about students' use of ICT and infrastructure were extracted from the available databases. The statistics that are included in the final report of this project are listed in Table 3.
[Table 3: Statistics per EU+ country extracted from IEA (PIRLS 2001/2006, TIMSS 2003/2007) and PISA (2000/2003/2006) data sets for primary and secondary education. The original table maps each statistic to the studies in which it is available at learner or school level; the statistics cover the following areas.]

ICT-OTL, use overall:
- % students having used computers at all
- % students using computers weekly
- % students using computers for writing
- % students using computers for information retrieval
- % students using computers for collaboration
- % students using spreadsheets
- % students using computers for programming
- % students using computers for e-mailing/chatting
- % students using educational software

Use at school:
- % students having used computers at school overall
- % students having used computers at school weekly
- % students having used computers at school in mathematics overall
- % students having used computers for mathematics and science schoolwork

Use outside school:
- % students having used computers outside school overall
- % students having used computers outside school weekly
- % students having used Internet outside school daily

Attitudes:
- Average score on scale 'self-confidence in learning mathematics'
- Average score on scale 'valuing mathematics'
- Average score on scale 'self-confidence in learning science'
- Average score on scale 'valuing science'

Infrastructure (school):
- Distribution (5 intervals) of average number of computers per 100 students
- Distribution (5 intervals) of average number of Internet computers per 100 students

Infrastructure (home):
- % students having computers in their homes
- % students having Internet access at home
For reasons of space, only a limited number of the statistics listed in Table 2 are shown in the next section.
students at grade 4 (primary education level) had ever used a computer at all. This indicator is based on the questions shown in Box 1. A first observation from Figure 3 regards the data gaps with regard to the coverage of EU+ countries and the incomplete time series. For most countries for which data existed from 2007, the conclusion seems warranted that nearly all students in primary education had used a computer at least once. Steady increases occurred from 2001 onwards (sometimes exceptional, as in Latvia). The cross-study trends have face validity to the extent that an expected steady increase is indeed observed. An illustration of an observation that requires further in-depth research concerns the statistics for Italy, where the percentage in 2007 is lower than in 2003 (a similar phenomenon was observed in the household survey from Eurostat). This could point to, although not necessarily, incomparability of samples. Another interesting observation is that this indicator has reached the end of its lifetime, because it is close to the ceiling of 100 % (already even by 2001 in some countries). This is due to the global character of the indicator (whether computers were ever used), which had value in the early days of the introduction of computers, but
Box 1: Source of indicator
Source: PIRLS2001, TIMSS2003, TIMSS2007
Question: Do you ever use a computer (do not include Nintendo, Gameboy or other TV/video game computers)?
Answers: Yes, no
Calculation: Percentage of 'yes' answers
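The 'percentage of yes answers' rule in Box 1 can be sketched as a simple per-country aggregation. The record layout below is invented for illustration; real PIRLS/TIMSS data files use their own variable names and apply sampling weights, which this sketch omits.

```python
# Hypothetical illustration of the Box 1 indicator: the percentage of
# students answering "yes" to "Do you ever use a computer?", computed
# per country. The record layout is invented for this sketch; actual
# IEA data files use their own variables and sampling weights.

from collections import defaultdict

def ever_used_computer_pct(records):
    """records: iterable of (country, answer) pairs, answer 'yes'/'no'."""
    yes = defaultdict(int)
    total = defaultdict(int)
    for country, answer in records:
        total[country] += 1
        if answer == "yes":
            yes[country] += 1
    return {c: round(100.0 * yes[c] / total[c], 1) for c in total}

sample = [("LV", "yes"), ("LV", "no"), ("LV", "yes"), ("IT", "yes")]
print(ever_used_computer_pct(sample))  # {'LV': 66.7, 'IT': 100.0}
```

In a real extraction, each record would additionally carry the student's sampling weight, and the numerator and denominator would sum weights rather than counts.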
[Figure 3: Percentage of grade 4 students who had ever used a computer, by country and study (PIEA2001, TIEA2003, TIEA2007). Most values lie in the mid-90s and approach 100 %; a few countries show markedly lower earlier values.]
Sources: PIEA2001: the IEA PIRLS assessment (reading) conducted in 2001. TIEA2003 and TIEA2007: the IEA TIMSS assessment (mathematics and science) conducted in 2003 and 2007. For the meaning of country acronyms, see Annex A.
currently it is more appropriate to zoom in on the intensity of ICT use in general by students. Hence, in Figure 4 below, the percentages are shown of students who indicated that they used computers at least weekly. The calculations are based on the questionnaire item shown in Box 2. From Figure 4 one may infer that, in some countries, the weekly use of computers by students in grade 4 substantially increased between 2001 and 2006, particularly in countries that joined the EU more recently (for instance, Bulgaria, Latvia, Lithuania). In other countries (e.g. the Netherlands and the UK), this statistic is reaching a ceiling and, hence, future statistics can be better expressed in terms of daily use of computers, perhaps with

Box 2: Source of indicator presented in Figure 4
Source: PIRLS2001, PIRLS2006
Question: How often do you use a computer in each of these places? At home, at school, other place.
Answers: Every day or almost every day; once or twice a week; once or twice a month; never or almost never
Calculation: Percentage of students answering 'every day or almost every day' or 'once or twice a week' on use at home, at school or at another place
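The calculation rule in Box 2 differs from the one in Box 1: a student counts as a weekly user if at least one of the three places asked about (home, school, other place) was answered with 'every day or almost every day' or 'once or twice a week'. The sketch below illustrates that OR-over-places logic with an invented record layout; sampling weights are again omitted.

```python
# Hypothetical sketch of the Box 2 indicator: percentage of students
# using a computer at least weekly in ANY of the places asked about.
# Record layout is invented; real PIRLS files differ.

WEEKLY = {"every day or almost every day", "once or twice a week"}

def weekly_users_pct(students):
    """students: list of dicts mapping place -> answer category."""
    hits = sum(
        1 for s in students
        if any(ans in WEEKLY for ans in s.values())  # weekly in any place
    )
    return round(100.0 * hits / len(students), 1)

sample = [
    {"home": "every day or almost every day", "school": "never or almost never"},
    {"home": "once or twice a month", "school": "once or twice a month"},
]
print(weekly_users_pct(sample))  # 50.0
```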
[Figure 4: Percentage of grade 4 students using computers at least weekly, by country (PIEA2001 and PIEA2006). Weekly use increased substantially between 2001 and 2006 in most countries, with many countries reaching 80-95 % in 2006.]
Sources: PIEA2001 and PIEA2006: the IEA PIRLS assessment (reading) conducted in 2001 and 2006. For the meaning of country acronyms, see Annex A.
a further differentiation towards the number of hours per day. The few examples above concerned students' use of ICT irrespective of the context (inside or outside school), and the indicators show that most students are engaged with ICT; hence, in principle, there are ample opportunities to learn with and/or about technology. The question is whether students, in general (both inside and outside school), use computers for schoolwork. This question has been addressed in TIMSS2007 (see Box 3).
Box 3: Source of indicator presented in Figure 5
Source: TIMSS2007
Question: How often do you use a computer for your schoolwork (in and out of school)? In mathematics; in science
Answers: Every day; at least once a week; once or twice a month; a few times per year; never
Calculation: Percentage of students answering 'every day', 'at least once a week' or 'once or twice a month'
Figure 5: Monthly use in general for mathematics and science schoolwork, grade 4
Percentage of grade 4 learners using computers for mathematics and science schoolwork
Country  Mathematics  Science
AT       17 %         21 %
CZ       39 %         40 %
DK       53 %         35 %
DE       32 %         39 %
HU       26 %         30 %
IT       27 %         35 %
LV       19 %         31 %
LT       35 %         49 %
NL       58 %         24 %
NO       50 %         26 %
SK       27 %         31 %
SI       34 %         38 %
SE       29 %         21 %
UKE      54 %         50 %
UKS      55 %         41 %
JP       30 %         31 %
US       37 %         34 %
Sources: TIEA2007: the IEA TIMSS assessment (mathematics and science) conducted in 2007. For the meaning of country acronyms, see Annex A.
The statistics in Figure 5 show that, in most countries, large groups of primary school students do not seem to encounter opportunities for learning mathematics and science with the help of computers (either inside or outside school). This not only points to the existence of digital divides in the population of students, but also to underuse of ICT in areas where many good examples of ICT applications exist. It should be noted that ubiquitous use of ICT in schools is still rare. One may wonder whether this should be judged negatively. Rather, the question emerges: 'and so what?' As long as it is not known whether students' skills are seriously hampered by a lack of ICT use in schools, this question cannot be answered. Hence, a plea should be made for measuring the extent to which students lack skills which evidently can be improved by more sophisticated use of ICT in teaching and learning. For planning future monitoring, this implies that the focus should shift from monitoring ICT-related conditions (as used to be the case in the past, for example in SITES2006) to ICT-related student outcomes. This implies substantial investments in designing adequate instruments. With political will this should be possible: if mankind is able to create instruments to measure the characteristics of distant planets, it
is certain that, with adequate investment, it should be possible to offer educational actors the instruments to observe what is happening in educational practices.
3. Recommendations
Indicators for ICT-related student outcomes will have to be developed. International organisations (the EU, OECD, Unesco) could stimulate this development through their regular research programmes. A first step could be to generate frameworks for ICT use in the most important core competency areas and to create for each of these areas item banks containing concrete performance tasks that are perceived as relevant by a substantial number of countries. If, in the short term, the development of concrete performance tasks is too complex, it is advised to focus at first on definitions of these tasks and to monitor the extent to which students have opportunities (in and outside school) to acquire the competencies required by these tasks. In relation to this, it is recommended that international organisations coordinate the development and elaboration of frameworks for monitoring. For the developers of indicators for the other areas, it is recommended that the indicator definitions are tuned to the competency frameworks. It is recommended that international organisations stimulate the creation and use of a worldwide instrument bank containing measures that can be used for assessing the development of ICT in education. Priorities could be based on the overview provided in Table 1. Incentives might, for instance, consist of co-financing national projects in which measures from this instrument bank are used.

The benefit for countries consists of being able to use measures that are of relatively high quality and extensively tested; moreover, where other countries use the same measures, comparative data become available without the need for a heavy international overhead. It is recommended that studies be undertaken in which the characteristics and impact of existing ICT-related school monitors are investigated. It is recommended that international organisations coordinate their efforts to develop a vision regarding the future of monitoring educational change (of which ICT is one component). For the EU, a key question is whether this monitoring will be run fully under the auspices and control of the Commission, addressing the EU core competency areas. This would be a vision for the long term (10-15 years) which could set the scene for developing appropriate solutions for organisational, financial and methodological issues. Several elements that have been dealt with in this chapter (and Chapter II) could be part of such a vision, such as (a) capitalising on highly innovative forms of monitoring (through online data collection and authentic tasks), (b) holistic and multi-level monitoring (e.g. including school monitoring) and (c) tailored monitoring allowing for flexibility according to the indicator needs of countries. Part of this vision would be to sketch the responsibilities and roles of the different international organisations involved in regular international comparative assessments. In the short term, the EU (but maybe this is also applicable to APEC and other organisations) could embark on existing assessments that are run by the OECD
and IEA in order to explore which desirable indicators can be included in these assessments and which options are feasible for guaranteeing an adequate geographical coverage of the EU Member States.
wrong with the students' skills for which ICT could offer solutions?' An implication of our study is that, in years to come, intense efforts need to be undertaken to define 21st century skills and the opportunities that schools should offer students to learn with and about ICT. This calls for international cooperation, as it implies a substantial investment in the development of new curricula and assessment methods, which would probably outstrip the manpower and financial capacities of individual countries. What, then, is the role of the European Commission in ensuring that appropriate and efficient methods for monitoring will ultimately be in place? In this respect, many potential actions could be considered, of which the most prevalent ones were presented in Section 1.3.2. Still, the future trajectory is paved with uncertainties, as much internal EU negotiation and external negotiation with third parties will be needed before a workable operational plan can be made. Nevertheless, the message emerging from our study is that the Commission has a very important potential role in stimulating and facilitating these future developments.
References
Plomp, T., Anderson, R. E., Law, N. and Quale, A. (eds) (2009). Cross-national policies and practices on information and communication technology in education (2nd ed.). Greenwich, CT: Information Age Publishing.
Trucano, M. (2005). Knowledge maps: ICT in education. Washington, DC: infoDev/World Bank.
Acronyms used in some charts showing indicator statistics:
BEfl: Belgium (Flemish)
BEfr: Belgium (French)
UKE: UK (England)
UKS: UK (Scotland)
Myunghee Kang
Ewha Womans University, Korea
I. Introduction
At the start of the 21st century, human society is facing an information and communication revolution, resulting in the advent of new technologies. Computers and network technology have influenced a range of societal and cultural aspects of life as well as individual experiences. People in modern societies have different lifestyles, thinking styles, ways of working and new communication patterns compared to previous societies. This has been well documented by a variety of research findings in human and social science studies. Many enquiring scholars and practitioners have made an effort to discover the effects of technologies on individuals' lifestyles and communication modes. It may be assumed that different lifestyles result in different learning styles and outcomes. Some authors claim that digital technologies could be powerful transformational tools in individuals' learning and growth. Even commercial videogames could have a positive impact on cognitive development and skills. Some other studies present the negative influence of technology use on human behaviours (Meyo, 2009). Even though there are inconsistent findings on the impact of advanced technologies on human life, no doubt is cast on the imperative for the effective use of digital technologies in education. Many efforts have been made to adopt information and communication technologies (ICT) to promote learning excellence in various educational settings. At national and institutional levels, educational policies and regulations have been established to support the educational use of ICT. In schools and classroom settings, teachers and school administrators are attempting to find the best ways to use ICT for their teaching and their students' success. However, accomplishments that are convincingly the result of the direct causal impact of ICT use are not always easily identifiable. It is even hard to ascertain the impact of ICT use in a simple way, because many other factors besides ICT itself might influence an individual's genuine growth in education. Suppose that a 10th grader performed better in mathematics after using ICT in maths classes for a certain period of time. Of course, ICT is an important tool for the student to improve his/her maths performance, but there might be other factors improving the performance, such as the way in which he/she uses ICT, the learning contents, the teacher's support, etc. In spite of all
the limitations, salient studies to demonstrate the impact should be carried out to promote successful educational implementation. Currently, there are a significant number of initiatives assessing and monitoring the quality of ICT use and its impact on education. SITES (the Second Information Technology in Education Study), sponsored by the International Association for the Evaluation of Educational Achievement (IEA), is an exemplary study, which identifies and describes the educational use of ICT across 26 countries. The study collected data from different stakeholders, and compared and interpreted the results based on the relationships of various factors affecting the educational use of ICT (Pelgrum and Anderson, 1999; Kozma, 2003). The OECD has also emphasised the need for clarifying the effects of ICT use by comparing PISA results. European Schoolnet published a technical report providing comprehensive information on the impact of digital technologies on learning and teaching, using international evidence (Balanskat, Blamire and Kefala, 2006). In the meantime, the Korean Ministry of Education, Science and Technology (MEST) has had the opportunity to work on impact studies of ICT use on educational performance in cooperation with the OECD. For a better understanding of the relationships between ICT use and educational performance, this paper will provide a theoretical mapping of various factors affecting ICT use in education by using a conceptual framework, which was part of the findings of the Korean study, and a summary of key findings of a nationwide investigation conducted in Korea. Constructing a conceptual framework is a useful way to connect all aspects in a study, and then it may
ing on school projects with peers and acquiring new knowledge and skills. The second use is to enhance administrative productivity: administrative services such as grading and record-keeping in schools are vital for tracing a student's learning history and monitoring each student's performance. The automated administrative services using ICT are beneficial to all stakeholders in schools. Third, ICT is used to build information literacy: the school curriculum includes ICT as a learning object for students. The ultimate goal of ICT education is to develop ICT skills for problem-solving in real life. The main contents may include computer architecture and cyber ethics. ICT is an indispensable tool for people living in this society. Teachers who have ICT skills can effectively prepare teaching materials using computers and present complex ideas better than those who have fewer ICT skills. Students who have ICT skills can also be successful in their learning and achieve greater outcomes than others who have fewer ICT skills. The irreversible influence of ICT will eventually revolutionise the way we learn and teach, but the revolution may not be remarkable when viewed over a short time. In particular, changes in educational settings are very slow. It is also hard to determine the positive influence of ICT use on educational performance in schools, because assessing the impact is complex and many factors affect the processes and outcomes of ICT use (White, 1997). Educational performance in school settings can be interpreted in various ways. From the perspective of learners, educational performance may refer to learning achievement and outcomes obtained from the prescribed learning contents and activities. These include the mastery of content knowledge, basic skills and attitudes as well as core competencies needed in this modern society. On the teachers' side, educational performance might refer to teaching competencies, pedagogical content knowledge and teachers' roles in the learning processes and outcomes. For educational administrators, educational performance relates to the drop-out rate, underachievement in school work, entrance rates to higher education, reputation ratings from stakeholders outside schools and so forth. The learners' performance, in most cases, will be a key component in assessing educational performance in school settings. That is why we first need to clarify the impact of ICT use on educational performance in learning and from the learner's point of view.
[Figure: Conceptual framework linking ICT use and educational performance. Micro level: students (SES, experience with technology, activities, products, roles, communications), teacher (educational background, innovation history, experience with technology, norms) and classroom factors (organisation, size, type and arrangement of technology facilities). Meso level: school types and location, school organisation, local culture, intended curriculum, staff development, ICT infrastructure, technical support, innovation history. Macro level: economic forces, cultural norms, educational goals and problems, educational funding, curriculum standards, teacher standards, ICT policies, ICT infrastructure.]
ICT use and its impact on educational performance may be influenced by various factors, such as the personal attributes of teachers and students, and curriculum and teaching practices at the micro level. At the meso level, the school environment and its surrounding factors may affect the use of ICT in educational practice. At the macro level, ICT use and educational performance may be influenced by sociocultural norms, economic forces and technological advances. This paper focuses on understanding the effect of ICT use on educational performance at the micro level and controls meso- and macro-level variables as constants, either by random selection or by setting research boundaries.
ICT use

In this study, ICT is characterised as a networked computer that can process and communicate information. However, stand-alone computers and portable devices, such as cellular phones, are included in ICT use as well. Individuals may use ICT in their daily lives, and their use may have a considerable influence on personal performance. The following three dimensions are employed to clarify the patterns and frequency of ICT use.
with others and playing online games. Along with these digital lifestyles, ICT use by children and youths might have some influence on their thinking and learning styles in schools.

a project for solving a problem can use software to present ideas and thoughts. A social context refers to a setting in which two or more learners use one computer together, or in which a learner works with friends to perform collaborative tasks online. Tools such as wikis, blogs and bulletin boards might be used by learners to interact with others. For example, students could use a wiki for the collaborative development of a project.
Educational performance
The meaning of educational performance is vague and diverse, depending on the domain, despite the long history of research and attention from academia as well as practitioners. Based upon previous studies, educational performance may be conceptualised as a futuristic concept that encompasses not only the traditional concept of education but also an extended version of human learning. The educational performance of learners is defined as the processes and results of performance, which are revealed internally
and externally through the integration of essential knowledge, skills and attitudes, and the continuing construction of experiences with ICT use. To make operational definitions of complex educational performance, the study suggests a two-dimensional taxonomy model composed of six cells within the two dimensions: (1) three performance domains (cognitive, affective and sociocultural) by (2) two behaviour levels (internal, external). This model utilises the approaches of Bloom's taxonomy of educational objectives and Krathwohl's taxonomy of affective competencies. It also puts more emphasis on sociocultural aspects and less on psychomotor aspects than other approaches. The performance dimension contains three categories: cognitive, affective and sociocultural. These three categories are assumed to be mutually independent and, at the same time, to be critical for learners in the future.
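The 3 x 2 structure of the taxonomy (three performance domains crossed with two behaviour levels) can be made concrete by enumerating the six cells. The cell labels follow the text; the code itself is only an illustrative sketch, not part of the original study.

```python
# Illustrative sketch of the paper's two-dimensional taxonomy:
# three performance domains crossed with two behaviour levels
# yield the six cells discussed in the text.
from itertools import product

DOMAINS = ["cognitive", "affective", "sociocultural"]
LEVELS = ["internal", "external"]

cells = [f"{d}-{l}" for d, l in product(DOMAINS, LEVELS)]
print(len(cells))  # 6
print(cells[0])    # cognitive-internal
```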
Traditional educational taxonomies emphasised cognitive categories with little, if any, emphasis on the affective and sociocultural dimensions. As the world evolves into a more post-modern society, however, where multiple voices are heard, its citizens, including the younger generation, should be sensitive to sociocultural performance. The presence of ubiquitous computing technology connected in a global network will also accelerate sociocultural dynamism. These categories are assumed to lie along a continuum from internalised (or centripetal) behaviour to externalised (or centrifugal) behaviour. The continuum underlying the behaviour levels is assumed to be the orientation of performance; that is, internal competencies are believed to be oriented more toward the learners themselves, while external competencies relate more to the world and to others outside. In the new millennium, learners are expected to be more participatory and
active practitioners who will contribute to the betterment of the community and the world. To live as active practitioners, learners should understand the cognitive, affective and sociocultural aspects of the world in order to make it a more liveable place. Recent epistemological perspectives, such as Leont'ev's activity theory and Lave and Wenger's situated cognition theory, also confirm this internal-to-external developmental orientation. The following descriptions briefly explain the six cells constructed by the three performance domains and two behaviour levels.

Cognitive-internal competency: the individual's internal ability to select and gather information, and to construct knowledge.

Cognitive-external competency: cognitive-internal competency manifested as useful tools for transforming the individual's situated lifeworld; effective problem solving is a relevant example.

Affective-internal competency: to live as an independent and mature member of many overlapping communities, a learner should hold a set of internal values that recognise the importance of oneself as well as of others. Individuals should also be able to appreciate social norms such as the importance of honesty and integrity.

Affective-external competency: mature individuals act in accordance with their own true values in adverse as well as favourable situations; self-efficacy, goal-setting and perseverance are a few examples.

Sociocultural-internal competency: as future societies will be more socially diverse, individuals need to tolerate and appreciate one another. This sociocultural performance begins with open-mindedness toward uncertainty. Members should also be equipped with global communication skills such as foreign-language proficiency and cross-cultural understanding.

Sociocultural-external competency: one who fully recognises the presence of others and acquires communication skills may be ready to collaborate with others to make the community a better one. Assuming proactive roles, such as leadership, performing social services and maintaining strong ties with others in a community, are exemplary behaviours.
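The six cells amount to a small two-dimensional lookup: every pairing of a performance domain with a behaviour level yields exactly one competency. Purely as an illustration (this sketch is not from the study; the cell descriptors are paraphrased from the six definitions above), the taxonomy can be written out as a data structure:

```python
# Illustrative sketch of the chapter's 3 x 2 taxonomy of educational
# performance: three domains crossed with two behaviour levels.
DOMAINS = ("cognitive", "affective", "sociocultural")
LEVELS = ("internal", "external")

# Descriptors paraphrased from the six cell definitions in the text.
taxonomy = {
    ("cognitive", "internal"): "select and gather information, construct knowledge",
    ("cognitive", "external"): "effective problem solving in one's situated lifeworld",
    ("affective", "internal"): "values recognising oneself and others; social norms",
    ("affective", "external"): "self-efficacy, goal-setting, perseverance",
    ("sociocultural", "internal"): "open-mindedness, cross-cultural understanding",
    ("sociocultural", "external"): "collaboration, leadership, social service",
}

# The model is exhaustive: every domain/level pair maps to exactly one cell.
assert set(taxonomy) == {(d, l) for d in DOMAINS for l in LEVELS}
assert len(taxonomy) == 6
```

The point of the cross-product structure is that the dimensions are independent: adding a domain or a level would mechanically add a row or column of cells, which is how the text argues the model extends traditional, cognition-only taxonomies.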
The overall interpretation of the results of the investigation indicates that ICT use and educational performance were significantly connected. ICT use has a positive influence not only on the cognitive competencies enhanced through traditional education systems, but also on the affective and sociocultural competencies required of individuals in future societies. The findings from the investigation are summarised as follows (Kang, Heo, Jo, Shin, Seo and Shin, 2008). First, using ICT outside school influences an individual's educational performance more than using it in school settings. In most cases, learners can get access more conveniently at home and in commercial computer rooms outside school than in school, since schools still provide only limited access. Teachers probably control ICT use both during and after class hours, and most in-class ICT use consists of teachers presenting information to students; few opportunities are provided for students themselves to use computers except for special needs. This suggests that we should rethink how ICT is used in schools and integrate learners' experiences in informal settings into school learning. Second, when individuals use ICT for learning rather than entertainment, it generates a positive impact on educational performance. This means that activities such as playing games and listening to music may not enhance educational performance much, even though some studies on the educational use of games have reported a positive impact on learning outcomes. According to an investigation of the types of games most individuals use for entertainment, violent games may be more widespread than other types
(Ferguson, 2007). However, participation in online communities as an activity outside school positively affected sociocultural competencies rather than the other two. Third, ICT use in individual contexts had a more positive influence on learners' educational performance than ICT use in social contexts. When individuals use ICT for learning outside school, it can enhance the cognitive, affective and sociocultural competencies of their educational performance; using ICT in social contexts also has a small positive impact. Collaborative learning outside school, as a learning activity in a social context, may enhance educational performance, which indicates that collaborative learning while solving real problems and working on authentic projects should be included for better learning. Web 2.0 tools are now widely used in many situations and are expected to provide more opportunities for sharing ideas and cooperating in social contexts. It is evident that ICT use affects learners' educational performance positively, but its impact falls mainly on the cognitive dimension of that performance. This study assumed that the integration of cognitive, affective and sociocultural competencies is important for individuals to be successful in current and future society. Even though ICT use did not influence affective and sociocultural competencies much, more attention should be paid to possible methods of using ICT to develop those competencies in and out of school settings.
V. Conclusion
The biggest challenge in assessing the impact of ICT on learners' educational performance is to identify the distinctive influence of ICT use on it. As mentioned earlier, educational performance is a vague concept, difficult to define and to measure, and various attributes of learners and complicated features of the external environment surrounding them might affect their performance in the present and the future. However, the question is not whether the impact of ICT use can be measured exactly. We need to pay more attention to what and how to measure, and to making interpretations that promote better performance. The following aspects should be taken into account in further impact studies on ICT use in education. First, more interest needs to be taken in making connections and studying the relationships among the various factors that influence ICT use in education. This paper elucidates diverse factors on three levels, referring to school settings and the related supra-systems. Some of these factors relate directly to learners' performance and others indirectly; tracing these relationships will lead to the construction of another framework for comprehensive interpretation and future development. Second, ICT use in informal learning must be examined for a better understanding of ICT use in learners' performance. Individuals sometimes use ICT in personal contexts (at home, in cafés and at pupils' houses) more than in schools, and those experiences can affect ICT use in schools. Most activities in a school setting might be predetermined by teachers and by national standards, but experiences and activities outside school cannot all be anticipated precisely. As cyberspace and physical space combine into one, the apparent distinction between formal and informal learning may disappear. This being so, ICT use in informal learning, which happens to learners unintentionally, deserves more attention from educational practitioners. Third, quantitative and qualitative approaches to assessing and interpreting the impact of ICT use in education should be combined for a comprehensive understanding of this emerging phenomenon. While the quantitative approach best answers problems requiring a description of trends or an explanation of the relationships among variables, the qualitative approach addresses questions concerning the exploration of little-known situations or a detailed understanding of a central phenomenon (Creswell, 2008). Unknown factors affecting ICT use in education may be found through qualitative methods of evaluation.
References
Balanskat, A., Blamire, R. and Kefala, S. (2006). The ICT impact report: a review of studies of ICT impact on schools in Europe. Retrieved 01.08.09, from http://insight.eun.org/shared/data/pdf/impact_study.pdf
Creswell, J. W. (2008). Educational research: planning, conducting, and evaluating quantitative and qualitative research (3rd ed.). Upper Saddle River, NJ: Prentice Hall.
Ferguson, C. J. (2007). The good, the bad and the ugly: a meta-analytic review of positive and negative effects of violent video games. Psychiatric Quarterly, 78(4), 309–316.
Kang, M., Heo, H., Jo, I., Shin, J., Seo, J. and Shin, S. (2008). The new millennium learners and educational performance: the 2nd year report. Technical report. KERIS.
Kang, M., Kim, D., Lee, I. and Heo, H. (2007). The new millennium learners and educational performance. Background paper of the CERI-KERIS international expert meeting on ICT and educational performance. Cheju Island, South Korea.
Kang, M., Kim, D., Lee, I., Heo, H., Seo, J. and Shin, S. (2007). The new millennium learners and educational performance: the 1st year report. Technical report. KERIS.
Kikis, K., Scheuermann, F. and Villalba, E. (2009). A framework for understanding and evaluating the impact of information and communication technologies in education. Paper presented at the international expert workshop on assessing the effects of ICT in education: indicators, criteria and benchmarks for international comparisons, Ispra, Italy.
Kozma, R. B. (2003). Technology, innovation, and educational change: a global perspective. Eugene, OR: International Society for Technology in Education.
Lim, Cher Ping (2006). The science and art of integrating ICT in Singapore schools. Singapore: iT21 (Singapore) Pte Ltd.
Mayo, M. J. (2009). Video games: a route to large-scale STEM education? Science, 323, 79–82.
Pedró, F. (2006). The new millennium learners: challenging our views on ICT and learning.
Pelgrum, W. J. and Anderson, R. E. (1999). ICT and the emerging paradigm for lifelong learning: a worldwide educational assessment of infrastructure, goals and practices. Enschede: Printpartners Ipskamp.
Smaldino, S. E., Lowther, D. L. and Russell, J. D. (2008). Instructional technology and media for learning (9th ed.). Upper Saddle River, NJ: Pearson Prentice Hall.
Taylor, R. (1980). The computer in the school: tutor, tool, tutee. New York: Teachers College Press.
White, J. N. (1997). Schools for the 21st century. Harpenden: Lennard Publishing.
Introduction
The relationship between information and communication technologies (ICT) and improved teaching and learning has increasingly been the focus of interest for education policymakers, researchers and other education stakeholders after two decades of ICT investment and integration in schools across Europe. What impact or difference can ICT make in education systems? How can ICT be a motor for improvement, progress, educational change and innovation? The interrelationship between policy, practice and research has likewise become an important focus within the area of evidence-based policymaking. The ICT impact report, a review of studies on the impact of ICT in education produced by European Schoolnet in the framework of the European Commission's ICT cluster, revealed considerable gaps in what is known at a European level about the impact of ICT in schools.
Evidence or access to evidence on the impact of ICT in schools is unevenly spread across Europe. Many of the findings relate to the United Kingdom and to England in particular. They are mostly in
English. There are gaps in what is known about other countries. No doubt some evidence exists and efforts should be made to identify it and ensure it is translated. If it does not exist, efforts should be made to support transnational studies to ensure good coverage and reliable results. (Balanskat, Blamire and Kefala, 2006)
The Study of the impact of technology in primary schools (STEPS) sought to close this gap and to provide a more balanced and comprehensive picture of the impact of ICT on primary education. The study was commissioned by the European Commission Directorate-General for Education and Culture (2) and undertaken jointly by European Schoolnet (EUN) and Empirica GmbH between January 2008 and June 2009. Empirica was responsible for the LearnInd survey of 30 000 teachers and head teachers in 27 European countries for the Directorate-General for the Information Society and Media (Empirica, 2006): this provided quantitative evidence on access to and use of ICT in European primary and secondary schools in 2006. Based on the experience of both organisations in the field and the application of different approaches and methods (quantitative and qualitative)
(2) This study was financed (at a cost of EUR 232 545) by the European Commission, contract EACEA-2007-3278. Opinions presented in this chapter do not reflect or engage the Community. European Commission.
(1) This paper draws on longer studies in STEPS written by the author, A. Balanskat, T. Hüsing, W. Korte, B. van Oel and L. Sali.
for gathering and analysing developments in ICT in education, European Schoolnet and Empirica worked in a complementary way to paint a rich portrait of the impact of ICT on primary education. The main purpose of STEPS was to produce a comparative analysis of the main strategies for the integration of ICT in primary schools in the EU-27, Iceland, Liechtenstein and Norway, their impact and their future development perspectives. The study aimed to identify the impact of ICT at three levels: on learning and learners; on teachers and teaching; and on primary school development plans and strategies. It sought to identify the main drivers and enablers for effective and efficient use of ICT, and to propose recommendations on the integration of ICT in education for policymakers and stakeholders. The challenge was considerable: to identify commonalities across 209 866 schools (3) offering primary-level education in the 30 countries surveyed, ranging from 14 schools in Liechtenstein to 55 329 in France. Moreover, compulsory schooling in the countries covered begins between the ages of four and seven, and most primary schools are managed, funded and governed by local municipal councils, so data tend to be held locally and are not always available. The final report amounted to some 66 separate reports totalling over 1 000 pages. In the following sections, the approach and main findings are outlined.
(3) A primary school is defined as one that educates children between the ages of four and 11. The figures do not include private schools or kindergartens. A number of countries have all-age schools or combine primary and lower secondary schools in one school.
Approach
The methodological challenge was considerable. Strategy and impact were the two underlying concepts of the STEPS study. They can be seen as the two ends of a chain: a strategy is always designed with the aim of having impact. Strategies and policies are shaped at several levels, and this makes policy implementation and evaluation a difficult task, especially because they involve attitudinal and work-process changes. How do we know whether it was the intervention that made the impact without taking other factors into account? Can change attributed to an ICT strategy be isolated from other factors? How was policy implemented in practice? How do we measure impact? As Gordezky et al. note:
Changing a large complex school system is a messy business. Results from change efforts are often unpredictable, show up in ways that are difficult to quantify, and can lead to counterintuitive and undesirable consequences. (Gordezky, Martens and Rowan, 2004)
A number of strategic layers play a role when looking at the implementation of ICT. Strategies can be found from the societal level all the way down to an individual teacher making strategic decisions on when and how to use ICT. These levels include, first, society at large and how it tackles ICT; second, the education system (including policy targets and the main actors). The third and fourth layers are formed by governing bodies (e.g. regional or local authorities) and by individual schools. A final layer is the end-user: often the teachers, but also the learners themselves. These end-users develop strategies to comply with national,
regional and local requirements, and of course to satisfy their own targets. Impact can be described as the overall achievement of an intervention across these domains within the educational system, and it can be described by a variety of qualitative and quantitative indicators, such as improvements in national tests or improved learning in schools, depending on the policy target. It is the end point of an intervention involving input, process, output and outcome. Isolating the variable which actually causes the impact is problematic in education. Within STEPS, the following definition of impact was used: a significant influence or effect of ICT on the measured or perceived quality of (parts of) education. The study was based on the assumption that not all impacts are positive or intended, that not all policies are implemented as planned and that classroom practices are hard to change (see McLaughlin, 2005). Although evidence about effective strategies has been identified, policies are generally shaped to local contexts and practices take a long time to change. Years of ICT impact studies confirm this complex picture. ICT impacts cannot always be measured through test scores: sometimes no gain in test scores can be found and no direct link can be established between an ICT intervention and improved attainment. One solution in this study was to look at impact not only in attainment (hence the broad definition of impact) but also in how ICT improves processes of teaching and learning within the school. A multi-perspective approach was adopted for STEPS, taking into account evidence from stakeholders (policymakers, teachers and head teachers), research and site visits to schools (including interviews with
learners). Evidence came from five sources:
- a policymaker survey in the 30 countries, providing an overview of policy approaches to ICT in primary education;
- an analysis of quantitative data from over 18 000 teachers and head teachers interviewed for the 2006 LearnInd ICT benchmarking survey (Korte and Hüsing, 2006);
- a review and analysis of the evidence from over 60 research studies published in more than 20 countries;
- 250 responses to a school survey seeking qualitative insight into the impact of national strategies in schools, and the identification of good practices via self-reporting;
- 25 case studies documenting the good practices identified.
Policy survey
The policy survey was the main tool for deepening knowledge of national and regional strategies and was in three parts:
- general information about the characteristics of the primary school system (ranging from the number of schools, the curriculum and teachers' pay and conditions to school governance) and emerging policy trends and priorities;
- the use of ICT in primary schools, covering ICT resourcing, teacher skills development and ICT support, and the place of ICT in teaching and learning;
- ICT policy for primary schools, including ICT in education policy, examples of strategies and good practice.
The policy survey was completed between July 2008 and March 2009. National correspondents (in most
cases nominated by ministries of education) gathered information on national or regional policy contexts, often translating documents only available in the local language. This was supplemented by information from other STEPS sources (LearnInd data, school surveys and the literature review) and by data in the public domain (EUN insight country reports (4), Eurydice (5)). The results of the policymaker survey were analysed and presented in Report 1, Policy survey results and analysis, providing an overview and comparison of policies and types of strategies. Summaries of national policies were also included in 30 country briefs.
Teacher survey

Quantitative data in the LearnInd surveys came from standardised interviews with head teachers and class teachers (a random sample) in 27 European countries, collected in 2006. The sample was split between primary, lower secondary and upper secondary schools, but STEPS concentrated on the results for primary schools only. In total, 12 379 interviews with classroom teachers and 6 449 interviews with head teachers of schools offering primary education were carried out. The use of ICT in European primary schools was measured using the following criteria:
- teachers' attitudes and motivation with regard to ICT, including the perceived impact of ICT;
- technical infrastructure in schools, including computer equipment and Internet connectivity;
- the use of ICT in class and for educational purposes;
- ICT competence of teachers;
- barriers to ICT use as perceived by teachers and head teachers.
The results of the LearnInd data analysis were presented and discussed in Report 2, LearnInd data results and analysis. Summaries of the data analysis per country can be found in the 30 country briefs.

Literature review

The main scope of the literature review was qualitative rather than quantitative, in order to ensure sufficient coverage of the participating countries. The aim was to identify and summarise in English recent studies (up to four per country) that gave important insights in the field, and to include countries where access to information has so far proven difficult due to language barriers and fragmented research, as revealed by the ICT impact report. The appointment of committed key experts from existing partner networks, from a wide geographical area (northern, southern, eastern and western Europe) and especially in those countries where information had until now been unobtainable, enabled important studies in those countries to be identified and, most importantly, made the results of these studies more widely known. Summaries of research in each country were presented in the country briefs.
(4) http://insight.eun.org/ww/en/pub/insight/misc/ country_report.cfm (5) http://eacea.ec.europa.eu/portal/page/portal/ Eurydice/Overview/OverviewByCountry
School survey
The STEPS school survey aimed to gather examples of the integration of ICT in primary school daily activities and to obtain a snapshot of current
views of teachers on ICT use and impact in their school. The survey consisted of an online questionnaire with both closed and open questions in nine languages.
Case studies

The purpose of the case studies was to find out more about effective use of ICT and the enablers or barriers at different levels of the education system. The case studies sought to show how the strategies of policymakers, schools and teachers impacted on teaching and learning. They were designed to show the richness of implementation and also to describe a number of typical situations in sometimes quite different schools and contexts. In most cases, the visits were related to a specific application of ICT or a project which had been identified by ministries of education or by schools themselves as demonstrating good practice. Within STEPS, the case studies helped to:
- visualise what happened in the classrooms;
- include the voice of teachers, pupils and school leaders;
- complement the evidence base with an in-depth investigation and observation.
In total, 25 contrasting schools in 13 countries were selected for a case study visit. Each case study (written by an evaluation team) followed a fixed format. At school, teacher and learner levels, the reporters were asked to highlight impact, enablers and barriers. All the case studies were analysed in terms of themes, issues and typologies and presented in Report 5, Case study analysis.
Key findings
An analytical framework was developed early in the project and used for the integrated analysis and presentation of the overall findings. The framework visually captures key elements and represents them in a logical and concise way. It is built around a core of teachers, learners and the school as a whole, and helps to describe the context in which ICT is introduced and implemented.
The model consists of five levels: society, education system, school, teachers and learners. These levels represent where strategies, enablers and barriers can be found. The framework reads from left to right, representing not only a hierarchical flow but also a flow from strategy to impact. A synthesis report was compiled taking into account the results of the five contributory reports described above. It presented key findings, conclusions and recommendations for future work. The key findings are summarised below, together with suggestions for further investigation. They are grouped under four headings: impact on learners and learning, impact on teachers and teaching, impact on schools and planning, and system-wide findings.
individual needs, although schools find it hard to isolate the contribution of ICT to test scores. However, research suggests that there is a discrepancy between children's under-use of ICT at school and their more frequent and often more sophisticated use at home. Although a range of digital skills are acquired informally outside school, some basic computer skills are not.
test scores. Virtual learning environments enable the individual tracking of progress and help identify the next learning step, so enabling pupils themselves to detect errors and shortcomings. Achievement can be recorded in e-portfolios.
subjects where resource development by individual teachers is difficult and/or costly;
- almost all aspects of assessment: developing effective tools to measure ICT skills, enabling ICT deployment by students within the assessment process, e-assessment, etc.;
- development of indicators on successful use of ICT in relation to differing learning tasks and contexts;
- understanding the feasibility, costs and benefits of personalised learning.
Figure 3: Spain: after-school on-site training, responsive to needs, with a pedagogical expert on hand
according to the LearnInd data: from around 90 % in the Nordic countries to approximately 35 % in Greece, Latvia and Hungary. Teachers find that ICT supports in equal measure a range of learning and teaching styles, whether didactic or constructivist, in passive activities (exercises, practice) and in more active learning (self-directed learning, collaborative work). The research shows that rich constructivist learning environments improve learning outcomes, especially for learners from disadvantaged areas. Teachers in some countries (the United Kingdom, Cyprus, the Netherlands, Portugal and Poland) are more optimistic about ICT than others (Sweden, France and Austria). Nevertheless, a significant minority (21 %) consider that using computers in class does not in itself have significant learning benefits.
There is little to no correlation between impact-optimism and levels of school equipment, sophistication of use or even teacher skills. There is a cluster of countries with high skill levels and high expectations as to ICT impact: the United Kingdom, the Netherlands, Cyprus and Malta.
tasks, but lack the pedagogical vision to integrate ICT effectively in teaching. The research shows that ICT can promote new pedagogical approaches, but only if ICT is fully integrated into subject lessons. In the Nordic countries, teachers in primary schools more often regard ICT as supporting their pedagogy than teachers in secondary schools.
Quality training increases teachers motivation and digital and pedagogical skills
Teachers responding to the good practice survey consider that using ICT improves their motivation and teaching skills. We know from the policy survey that the 30 countries are investing in developing teachers' ICT skills, but that in a significant number of countries teachers entering the profession may have little formal training in using ICT in teaching. Researchers have drawn some worrying conclusions about the effectiveness of continuing professional development in ICT: teachers have failed to acquire the desired level of ICT skills for classroom instruction, and training has not translated into gains in pupil learning. Research suggests that teachers adapt more easily to new technologies through a step-by-step approach with minimal disruption, and that on-site training is preferable to off-site training. According to the analysis of responses to the policy survey, training courses fail to match needs and lack a pedagogical and practical dimension. The survey also indicates that reliable technical back-up and inspiring pedagogical support for teachers are often missing.
ways ICT specifically can enhance teaching and learning;
- developing fully integrated models of ICT-supported learning delivery which provide examples and templates to guide local development;
- the environment and conditions for continuing professional development for teachers in relation to ICT;
- improving interoperability in the interests of maximum exchange, deployment and sharing of teaching materials.
saturation (the UK, where all primary schools have at least one). Denmark, Estonia and Norway have the highest levels of virtual learning environments that offer access from outside school. Smaller primary schools are disadvantaged in terms of equipment, according to research, yet case studies show that the benefits for schools in small communities are considerable.
ICT makes administration accessible to wider groups through a web interface, and school records are more easily maintained, exchanged and updated. However, research indicates that school ICT plans tend to concentrate more on infrastructure than on how ICT can be used to enhance teaching and learning, and this can actually work against innovation (as found in some case studies). Virtual learning environments are becoming more widespread, but are used more for administration than for learning. Research shows that sufficient time is needed to assimilate virtual learning environments; once introduced, however, they are increasingly used by teachers.
Strategies for ICT tend to feature infrastructure and teachers' digital competence
Responses to the policy survey indicate that all 30 countries have or have recently had at least one ICT policy or initiative affecting primary
schools, usually aimed at improving infrastructure and digital competence among teachers, and less frequently targeted at the supply of digital learning resources, pedagogical reform or leadership. Among the 74 policies, programmes and projects analysed in the study, strategies range from system-wide interventions including ICT to specific projects focused on, for example, equipment, e-safety or ICT training for teacher educators, with the locus of control running from central government to high levels of school autonomy and responsibility. ICT in schools is still a topic that arouses controversy; and where the debate involves the general public, the concerns tend to be about e-safety, according to the policy surveys.
nology can then be evaluated in terms of its contribution to these wider policy aims. Until recently, policy measures to encourage the use of ICT have tended to focus on improving infrastructure and developing teacher competence in ICT. From that narrow perspective it is more difficult to justify the investment. In some recent education policies and initiatives, ICT is invisible, either because it is a given or perhaps because it is perceived as problematic. Yet the evidence suggests that the impact of ICT on schools, teachers and learners can increase the effect of other initiatives aimed at, for example, reducing learner dropout, achieving efficiency gains, developing key competences, improving teaching and increasing school autonomy. Although the studies reviewed in STEPS provided a generally positive picture of ICT impact, information is patchy and tends to focus on inputs. More research is needed into the impact of ICT on learning outcomes and in other sectors, such as secondary education, and to identify transferable interventions. More international cooperation on regular benchmarking and lessons learned, on definitions and on methodologies would help to assess the return on investments in technology in education, and enable teachers, school leaders and policy-shapers to make sound decisions. As Michael Trucano of the World Bank recently said:
It is necessary to have new types of evaluation in place and new monitoring indicators. The impact of ICTs on learning and future employment is still debatable, precisely because there is no standard methodology. (Trucano, 2009).
References
Balanskat, A., Blamire, R. and Kefala, S. (2006). The ICT impact report: a review of studies of ICT impact on schools in Europe. Brussels: European Schoolnet.
Empirica (2006). Benchmarking access and use of ICT in European schools 2006: final report from head teacher and classroom teacher surveys in 27 European countries. Download at http://europa.eu.int/information_society/eeurope/i2010/docs/studies/final_report_3.pdf
Gordezky, R., Martens, K. and Rowan, S. (2004). Influencing system-wide change at the Toronto District School Board. Download at http://thresholdassociates.com/successes/pdf/Futuresearch.pdf
Korte, W. and Hüsing, T. (2006). LearnInd: benchmarking access and use of ICT in European schools. Bonn: Empirica.
McLaughlin, M. (2005). In: A. Lieberman (ed.), The roots of educational change: international handbook of educational change. Dordrecht: Springer.
Trucano, M. (2009). Speech at the Reinventing the classroom seminar, 15 September 2009, Washington DC.
Assessing the effects of ICT in education Indicators, criteria and benchmarks for international comparisons
Luxembourg: Publications Office of the European Union, 2009
2009, 211 pp., 17.6 x 25 cm
ISBN 978-92-79-13112-7
doi:10.2788/27419
Price (excluding VAT) in Luxembourg: EUR 15