Abstract: One of the big challenges faced by research in the Technology Enhanced Learning (TEL) field concerns the injection of innovation into real educational contexts. Very often, innovative technologies fail to be taken up by practitioners because of the difficulty the target contexts have in absorbing both methodological and technological innovation. This may be caused by resistance of the target users associated with the conservatism of the contexts, but also by inadequate approaches to innovation promotion, or even by lack of evidence of the return on investment of the innovation itself. Thus, a crucial need of the TEL field is the ability to evaluate both the efficacy of a new technology in the specific context it is meant to permeate, and the effectiveness and adequacy of the intervention designed to inject this innovation into the intended situation. This paper contributes to filling this gap by proposing an approach that joins aspects of Guskey's model for evaluating the effectiveness of teacher training events with indicators of the well-known Technology Acceptance Model, generally used to predict acceptance of a new technology. The proposed approach, called T&EAM (Technology & Event Acceptance Model), is illustrated. The discussion concerns its strengths and weaknesses and provides inputs for future applications and research.
Introduction
In many TEL projects, the objective is to develop and inject methodological and technological innovation into a 'virgin' context. This process usually leverages teachers and is typically triggered by training events aiming to familiarize them with the technology; it then entails some kind of follow-up, where teachers are scaffolded and guided through their first steps in the use of the new technology in real-life contexts. In these
situations, policy makers and/or researchers need to evaluate the results of such actions, both in terms of the acceptance of the technology and in terms of the effectiveness of the training. To this purpose, this paper proposes the T&EAM (Technology & Event Acceptance Model), built upon the conjunction of two existing and consolidated models, which have been merged to form a single framework for the joint evaluation of a technology-based innovation and of the training events used to introduce it.
Our ambition is to set the basis for the development of a framework that can be adopted in many other TEL projects, provided that they share the need to evaluate the effects of an innovation intervention of this kind.
Theoretical background
Bearing in mind that the issue we intend to address here is the definition of an approach to evaluate the combined effects of the introduction of a new technology in a given context (and of its methodological underpinnings) and of a training event addressing the prospective users, our literature review focuses on both aspects of the problem: the evaluation of the impact of a new technology in a given context, and the evaluation of training events/programmes, specifically teacher training initiatives.
Both of these areas are very rich: there are plenty of models and frameworks addressing these issues, some of which are very well-known and consolidated. With no ambition to be exhaustive, in the following sections we concentrate first on some of the most popular models to evaluate the impact of technological innovation, and then we focus on the evaluation of training programmes.
A number of models have been proposed in the last decades to analyse and predict user
acceptance of new technological tools (Davis, 1989; Rogers, 2010; Thompson, Higgins, &
Howell, 1991; Venkatesh & Davis, 2000; Venkatesh, Thong, & Xu, 2012).
Among these, some of the most well-known aim to predict users’ intentions towards
technology, and actual usage of it, as dependent variables, on the basis of various determinants
(i.e. independent variables) that include: attitudes, perception of usefulness, perception of ease of
use, motivation (both extrinsic and intrinsic), and other social factors. One of the most popular,
the Technology Acceptance Model (TAM) (Chuttur, 2009; Davis, 1989), focuses on two
determinants, Perceived Usefulness and Perceived Ease of Use, and has given rise to several
derivatives and evolutions, often used in educational contexts (Cheung & Vogel, 2013; Liu,
Chen, Sun, Wible, & Kuo, 2010; Persico et al., 2014; Tarhini, Hone, & Liu, 2013). For example,
TAM2 (Venkatesh & Davis, 2000) considers some additional determinants concerning social influence, including for example Subjective Norm, defined as "the person's perception that most people who are important to him think he should or should not perform the behavior in question" (Fishbein & Ajzen, 1975, p. 302). As described in the following, TAM and TAM2
provide the foundations for the development of our evaluation approach, although the three variables (Perceived Usefulness, Perceived Ease of Use and Subjective Norm) are not used as determinants to predict behaviour, but rather as indicators of acceptance after usage of the technology.
Besides those cited so far, the following models have been considered in METIS for possible inspiration. The Motivational Model (Davis et al., 1992) focuses on Extrinsic Motivation and Intrinsic Motivation as determinants. This model has been drawn from motivational theory in the psychological field and adapted to fit the information systems domain, so as to model new technology adoption and use (Vallerand, 1997). The Model of PC Utilization
(MPCU) by Thompson and colleagues (Thompson et al., 1991) aims to predict PC utilization, and complements the perspectives put forward by TAM; MPCU establishes a framework that sits alongside Rogers' renowned Innovation Diffusion Theory (Rogers, 2010). MPCU, in particular, has been widely applied to the ICT field, and focuses on a number of determinants, including Relative Advantage, [perceived] Ease of Use, Image, Visibility, and Voluntariness of Use. The Unified
Theory of Acceptance and Use of Technology (UTAUT) model (Venkatesh et al., 2003, 2012) is grounded on determinants such as Performance Expectancy, Effort Expectancy and Social Influence. Interestingly, other aspects that are usually considered important in technology adoption, such as attitude toward using technology, self-efficacy, and anxiety, according to UTAUT do not have a direct impact on technology usage, while other conditions, such as facilitating conditions, do seem to influence technology adoption.
The second aspect our literature review focuses on is the evaluation of training programmes and training initiatives of different kinds. With no intention to be exhaustive, in the following we mention the models most relevant to our work.
The Kirkpatrick’s 4 levels model is probably one of the most well-known and widely
people involved in the training initiative), learning (a measure of knowledge and skills increase),
behaviour (a measure of change in behaviour) and results (a measure of the effects on the
learning, job behaviour, organization, ultimate value (i.e.: the financial effects, both on the
having been adapted to a teacher training context, thus paying special attention to effects on
school contexts and students. It encompasses the following levels: participant reaction,
participant learning, organizational support and learning, participant use of new knowledge and
Other models that have been explored, and have to some extent influenced our work,
include:
• Tyler's model of curriculum development (Tyler, 1942), which for the first time framed evaluation as the process of checking the extent to which educational objectives are actually attained;
• the utilization-focused evaluation approach, according to which evaluations should be judged by their utility and actual use, and evaluators should facilitate the evaluation process and design any evaluation with careful consideration of how everything that is done, from beginning to end, will affect use;
• the Context, Input, Process, and Product (CIPP) evaluation model (Stufflebeam & Shinkfield, 2007), a framework for guiding evaluations of programmes, projects, personnel, products, institutions, and systems, whose core components are context, input, process and product evaluation;
• the Input-Process-Output (IPO) model (Bushnell, 1990), aimed at enabling decision makers to select the package that will ensure the effectiveness of a training program;
• the Training Valuation System (TVS) model (Fitz-enz, 1994), which includes four steps: situation (collecting data and analysing performance), intervention (identifying the problem and designing the training), impact (the variables that impact on performance), and value (measuring the difference in monetary terms).
The T&EAM approach
This section describes the T&EAM approach, the associated indicators, as well as the means used for data gathering and analysis.
As already mentioned, the TAM and its subsequent evolutions were chosen as the starting point of our approach. It must be acknowledged that this model was originally devised as a predictive tool. However, Persico et al.
(2014) have already shown how the TAM indicators “perceived ease of use” and “perceived
usefulness” can be used for ex-post assessment of the impact of a technology, by collecting
information concerning users’ opinions about these two indicators and complementing them with
data gathered from other sources, such as observation and data tracked by the system itself.
Furthermore, the Subjective Norm indicator introduced by TAM2 is also used.
The reasons for the choice of TAM and TAM2 indicators (Venkatesh & Davis, 2000) as the main indicators of the T&EAM approach are twofold. First, the number of experiences and studies where they have been applied witnesses their capacity to adapt to several different contexts, even when it comes to assessing teachers' acceptance of technology (Huntington & Worrell, 2013; Persico et al., 2014). Especially in studies concerning the barriers to technology uptake by teachers (Delfino, Manca, Persico, & Sarti, 2004; Lambert, Gong, & Cuper, 2008; Lloyd & Albion, 2009), the TAM indicators have proved to be key determinants, and training initiatives can improve some of these factors, thus increasing the chances that the proposed innovation is taken up.
A second reason for this choice is that these models are applicable to any technology, provided that their indicators and the evaluation means are tailored to the system structure, functions and user types. This process of adaptation/tailoring is essential, especially when dealing with formative evaluation, so as to achieve an accurate diagnosis of the problems.
Thus, in our approach the “perceived ease of use” and “perceived usefulness” indicators
are used to build data collection tools aiming to understand the users’ opinions after use of the
technology during ad hoc training event(s). In our model, these subjective data are then
complemented with more objective data about actual usage of the system. This latter information
is typically obtained thanks to tracking mechanisms built into the technology, usually with learning
analytics techniques (Authors, 2014). These data provide, among other things, a measure of
trustworthiness of the users’ opinions. If, for example, a user says that a given functionality was
easy to use, but tracked data show he/she never used it, his/her opinion is less trustworthy than
that of a user who claims the functionality was difficult to use after having engaged with it for a considerable amount of time.
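To make this cross-check concrete, the following minimal Python sketch shows one possible way of weighting self-reported ease-of-use scores by tracked usage. It is only an illustration of the idea: the function name, the dictionary layout and the MIN_USES threshold are hypothetical choices, not part of the T&EAM specification.

    # Weight self-reported ease-of-use ratings by tracked usage (illustrative).
    MIN_USES = 3  # hypothetical threshold below which an opinion is weakly grounded

    def weighted_opinions(ratings, usage_counts):
        """ratings: {user: {function: Likert score 1-5}} from questionnaires;
        usage_counts: {user: {function: tracked uses}} from the platform log.
        Returns each opinion with a trustworthiness weight in [0, 1]."""
        weighted = {}
        for user, scores in ratings.items():
            for function, score in scores.items():
                uses = usage_counts.get(user, {}).get(function, 0)
                weight = min(uses / MIN_USES, 1.0)  # barely-used functions get low weight
                weighted[(user, function)] = (score, weight)
        return weighted

    example = weighted_opinions(
        {"t01": {"design editor": 5}},  # the user says it was easy to use...
        {"t01": {"design editor": 0}},  # ...but tracked data show no usage at all
    )
    # example == {("t01", "design editor"): (5, 0.0)}: high score, zero trust weight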
According to the proposed approach, the evaluation of the training initiative(s) used to introduce the technology in a context can be carried out according to Guskey's model (2002). In this model, derived from Kirkpatrick's work (1994), evidence is collected and analysed at five critical levels: 1) workshop participants' reactions (i.e. perceptions of the training event); 2) workshop participants' learning (i.e. knowledge and skills gained); 3) organization support and change (i.e. impact on the organization where the participants work and the organization's support to the implementation of the innovation); 4) participants' use of new knowledge and skills (i.e. application of the acquired competence in the teaching profession); 5) student learning outcomes (i.e. impact on the students, who are the ultimate beneficiaries of the innovation proposed).
While most evaluation models focus on levels 1 and 2, Guskey’s model also takes into
consideration factors that can facilitate or hinder innovation within an organization (level 3) and
long term effects of the training events on participants (level 4), as well as on their students
(level 5), and this is the main added value of this model with respect to the others.
According to the T&EAM, while levels 1 to 3 are typically gauged at the end of the training event(s), data collection for levels 4 and 5 takes place after the follow-up (medium term). The data collected from training participants are also complemented with data concerning the actual training sessions. These data are collected during the events, typically by an observer taking notes.
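As a purely illustrative encoding of this schedule, the sketch below maps each of Guskey's five levels, as used in T&EAM, to the phase in which it is gauged; the dictionary layout and the function name are hypothetical implementation choices, not prescribed by the approach.

    # Guskey's five levels mapped to the T&EAM data collection phases (illustrative).
    GUSKEY_SCHEDULE = {
        1: ("participants' reactions", "end of training"),
        2: ("participants' learning", "end of training"),
        3: ("organization support and change", "end of training"),
        4: ("use of new knowledge and skills", "after follow-up"),
        5: ("student learning outcomes", "after follow-up"),
    }

    def levels_due(phase):
        """Return the levels to be gauged at a given phase
        ('end of training' or 'after follow-up')."""
        return [(level, name) for level, (name, when) in GUSKEY_SCHEDULE.items()
                if when == phase]

    # levels_due("after follow-up") -> [(4, "use of new knowledge and skills"),
    #                                   (5, "student learning outcomes")]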
Overall, in the T&EAM approach we have merged the TAM and Guskey's models and customized their original indicators, so as to form a single evaluation framework in which data collection and data analysis are conducted by means of joint evaluation tools.
The resulting T&EAM approach (see Fig. 1) allows us to strike a balance between the need to carry out a deep analysis and evaluation of different aspects of the technology and the training events, on the one hand, and the requirement to keep the effort of the users relatively low, so as to keep data collection sustainable and unobtrusive, on the other.
Fig. 1 represents the cyclic process of data collection and evaluation providing feedback
on both the technology and the teacher training events. The data collected concern:
• Participants' opinions, gathered at the end of the training event(s) in a very easy and non-intrusive way, typically through questionnaires and interviews;
• data about actual usage of the system, tracked by the technology itself;
• data concerning the actual training sessions, collected during the events by an observer.
Sometimes, when one injects an innovation into a real context, this is done in the context of complex (European) projects, where several parallel events are held and data need to be collected in a homogeneous and comprehensive way (Authors, 2015). The actors usually involved in projects of this kind comprise a number of institutions/agents that carry out the pilot of the training events in one or more contexts (indicated as the trainers in the following), plus one institution usually leading the evaluation (identified as the evaluator in the following), and one actor in charge of the development and tuning of the technology (the developer) (see Fig. 2).
The coordinating institution (the coordinator) could be any of the above, although it is preferable that the evaluator is in charge of evaluation only, to avoid conflicts of interest. The evaluator usually devises or instantiates the evaluation model, designs and produces the evaluation tools, coordinates data collection (which is carried out on site by the trainers) and analyses the collected data.
In case the evaluation involves institutions in different countries, language problems need
to be handled with the support of local partners; so, for example, the questionnaires should be
developed in one common language (typically English), and translated into the local languages.
A first phase of analysis of any narrative data (answers to open questions or interviews) should be carried out by the trainers, based on common guidelines provided by the evaluators, to produce English narratives for central analysis.
Discussion
The T&EAM approach has been developed and experimented for the first time within the METIS project1, a European project in the field of learning design. In this project, the authors of this paper were in charge of the evaluation workpackage (Authors, 2013; Authors, 2015a; 2015b). Within METIS, the targets of the innovation were three different educational contexts (namely Higher Education, Vocational Training and Adult Education), thus the evaluation approach was applied to these three situations. Indeed, the T&EAM approach proved flexible enough to fit in with the three different contexts, and appears to be potentially exportable to several other educational contexts (Authors, 2015).

1 http://www.metis-project.org/
Furthermore, within the METIS project the application of the T&EAM evaluation approach yielded important results, providing useful feedback and suggestions for improving and tuning both the proposed technology and the training format, so as to increase the possibility that the technology is then taken up by other actors in the same (or similar) contexts.
The approach allowed us to collect the data in a very unobtrusive way, with data collection carried out by the project partners in charge of the training in each context, according to the common guidelines provided by the evaluator.
The questionnaires and the interview rubrics were produced in English and translated into Spanish and Greek by the local partners. A first round of the qualitative analysis was carried out
locally, to produce English narratives corresponding to the open answers to questionnaires and
interview transcripts.
This organization allowed the T&EAM approach to be easily and consistently adopted and managed even by the partners who were not directly involved in its design. The questionnaires were easy to administer once translated into the local languages; the interviews, carried out by the local partners based on a common rubric provided in English, were slightly more complicated, because they required a certain amount of time and effort to produce a synthesis in English of the interviewees' answers. Data collection through interviews was possible as long as the number of interviewees was relatively small; with big numbers, they should probably be replaced by questionnaires or other less time-consuming instruments.
As far as the indicators are concerned, the ones deriving from the TAM model and devoted to evaluating technology acceptance proved very effective. Given that in METIS the technology was rich in functions to be evaluated, in order to make it easier for respondents to recall the functions the questionnaire was investigating, questions were enriched with pictures of the platform, so as to highlight the interface controls associated with the various program functions. This proved to be an effective strategy that allowed the users to straightforwardly understand the questions.
The indicators focusing on the training, coming from Guskey's model, were also very useful: not only did they yield information about the adequacy of the workshops in the different contexts, but they also informed us about the likelihood that the technology would really be taken up in the various situations. Some problems emerged when collecting data for I6 (Student outcomes), as it turned out to be particularly challenging for teachers to collect these data in the field, and almost impossible to compare them with students' outputs obtained before the technology was introduced. In particular, as often happens in TEL research, evidence about student learning appears very difficult to assess, as innovative methods and technologies cannot be easily compared with traditional ones. Structured data collection protocols would probably have helped teachers to systematically collect significant data about student learning, and this is something worth addressing in future applications of the approach.
Another challenge posed by the T&EAM approach regarded the juxtaposition of the data tracked by the system and those coming from the questionnaires and interviews. One of the reasons for these difficulties is the difference in granularity between the data typically tracked by the platform and those collected through the questionnaires and interviews. While the former are usually low-level data, concerning individual actions of the users, the latter are higher-level data referring to the technology functions. Their comparison might require some effort to elaborate and aggregate the tracked data, so that they can be used to put the users' opinions in the right light.
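A minimal sketch of this aggregation step is given below. The event names and the mapping from raw events to user-facing functions are hypothetical, since the platform's log schema is not specified here; the point is only to show how low-level events can be rolled up to the granularity of the questionnaire items.

    from collections import Counter

    # Hypothetical mapping from low-level tracked events to the higher-level
    # technology functions that questionnaire items refer to.
    EVENT_TO_FUNCTION = {
        "open_editor": "design editor",
        "save_design": "design editor",
        "post_comment": "discussion area",
    }

    def usage_per_function(events):
        """events: iterable of (user, event_type) pairs from the platform log.
        Returns {(user, function): count} at the same granularity as the
        questionnaire data, so that the two sources can be compared."""
        counts = Counter()
        for user, event_type in events:
            function = EVENT_TO_FUNCTION.get(event_type)
            if function is not None:  # ignore events outside the mapping
                counts[(user, function)] += 1
        return dict(counts)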
As a last consideration, we should note that the life span of a project is usually rather short and does not allow one to wait for long-term evidence that the innovation has really permeated the target system. However, what can realistically be evaluated is the acceptance of the technology and the impact of the training event, as well as the short/medium-term changes compared to the initial situation.
Conclusions
The T&EAM evaluation approach presented above aims to assess the acceptance of an innovative technology, when this is introduced for the first time into an educational context, together with the effectiveness of the training events designed to support its introduction.
The novelty of the model lies not so much in the indicators and tools used, which mainly
derive from other existing and well-known evaluation models, but rather in the way they are used
and integrated into one coherent evaluation framework, thus producing an overarching model.
The proposed evaluation means jointly assess the technology and the training events and
consider all the variables that may affect the uptake of the innovation, in order to produce a
picture of the forces that may foster or hinder the integration of the innovation into real
conditions.
Even if the T&EAM has been conceived in the framework of one specific project, we believe the problem addressed is frequent in the TEL field, where many of the projects funded by the EC or other funding agencies aim to introduce methodological and technological innovation into established educational systems; for this reason, further research should aim to consolidate and generalise the approach beyond its original setting.
As for the authors, further research efforts will be devoted to the identification of the
invariant factors of the model and of the degrees of freedom left to the evaluators when applying
the model.
Acknowledgement
This study was funded by Project METIS (Meeting teachers co-design needs by means of Integrated Learning Environments).
References
Bushnell, D. S. (1990). Input, Process, Output: A Model for Evaluating Training. Training and Development Journal, 44(3), 41–43.
Cheung, R., & Vogel, D. (2013). Predicting user acceptance of collaborative technologies: An
extension of the technology acceptance model for e-learning. Computers & Education, 63,
160–175. doi:10.1016/j.compedu.2012.12.003
Chuttur, M. (2009). Overview of the Technology Acceptance Model: Origins, Developments and
Future Directions. Sprouts: Working Papers on Information Systems, 9(37).
Davis, F. D. (1989). Perceived Usefulness, Perceived Ease of Use, and User Acceptance of
Information Technology. MIS Quarterly, 13(3), 319–339. doi:10.2307/249008
Davis, F. D., Bagozzi, R. P., & Warshaw, P. R. (1992). Extrinsic and intrinsic motivation to use
computers in the workplace. Journal of Applied Social Psychology, 22(14), 1111–1132.
Delfino, M., Manca, S., Persico, D., & Sarti, L. (2004). Online learning: attitudes, expectations
and prejudices of adult novices. In V. Uskov (Ed.), Proceedings of the IASTED Web-Based
Education Conference (pp. 31–36). Calgary, Canada: ACTA Press. Retrieved from
http://ben.upc.es/butlleti/innsbruck/416-121.pdf
Fishbein, M., & Ajzen, I. (1975). Belief, attitude, intention and behavior: An introduction to theory and research. Reading, MA: Addison-Wesley.
Fitz-enz, J. (1994). Yes...You Can Weigh Training’s Value. Training, 31(7), 54–58.
Guskey, T. R. (2002). Professional Development and Teacher Change. Teachers and Teaching,
8(3/4), 381–391. doi:10.1080/135406002100000512
Kirkpatrick, D. L. (1994). Evaluating training programs: The four levels. San Francisco, CA:
Berrett-Koehler.
Lambert, J., Gong, Y., & Cuper, P. (2008). Technology, Transfer and Teaching: The Impact of a
Single Technology Course on Preservice Teachers’ Computer Attitudes and Ability.
Journal of Technology and Teacher Education, 16(4), 385–410.
Liu, I.-F., Chen, M. C., Sun, Y. S., Wible, D., & Kuo, C.-H. (2010). Extending the TAM model
to explore the factors that affect Intention to Use an Online Learning Community.
Computers & Education, 54(2), 600–610. doi:10.1016/j.compedu.2009.09.009
Lloyd, M., & Albion, P. (2009). Altered Geometry: A New Angle on Teacher Technophobia.
Journal of Technology and Teacher Education, 17(1), 65–84.
Persico, D., Manca, S., & Pozzi, F. (2014). Adapting the technology acceptance model to
evaluate the innovative potential of e-learning systems. Computers in Human Behavior, 30,
614–622. doi:10.1016/j.chb.2013.07.045
Authors (2015a).
Authors (2015b).
Authors (2013).
Authors (2015).
Authors (2014).
Rogers, E. M. (2010). Diffusion of Innovations (4th ed.). New York, NY: Simon and Schuster.
Stufflebeam, D. L., & Shinkfield, A. J. (2007). Evaluation theory, models, and applications (Vol. 3). San Francisco, CA: Jossey-Bass.
Tarhini, A., Hone, K., & Liu, X. (2013). User Acceptance Towards Web-based Learning
Systems: Investigating the Role of Social, Organizational and Individual Factors in
European Higher Education. Procedia Computer Science, 17, 189–197.
doi:10.1016/j.procs.2013.05.026
Thompson, R. L., Higgins, C. A., & Howell, J. M. (1991). Personal Computing: Toward a Conceptual Model of Utilization. MIS Quarterly, 15(1), 125–143. doi:10.2307/249443
Venkatesh, V., & Davis, F. D. (2000). A Theoretical Extension of the Technology Acceptance
Model: Four Longitudinal Field Studies. Management Science, 46(2), 186–204.
doi:10.1287/mnsc.46.2.186.11926
Venkatesh, V., Thong, J., & Xu, X. (2012). Consumer Acceptance and Use of Information
Technology: Extending the Unified Theory of Acceptance and Use of Technology. MIS
Quarterly, 36(1), 157–178.