
The current issue and full text archive of this journal is available on Emerald Insight at:

https://www.emerald.com/insight/1741-5659.htm

Tools for evaluating educational apps for young children: a systematic review of the literature
Stamatios Papadakis
Preschool Education, University of Crete, Rethymnon, Greece
Received 5 August 2020
Revised 22 October 2020
Accepted 23 October 2020
Abstract
Purpose – This study, by critically analyzing material from multiple sources, aims to provide an overview
of what is available on evaluation tools for educational apps for children. To realize this objective, a
systematic literature review was conducted to search all English literature published after January 2010 in
multiple electronic databases and internet sources. Various combinations of search strings were used due to
database construction differences, while the results were cross-referenced to discard repeated references,
obtaining those that met the criteria for inclusion.
Design/methodology/approach – The present study was conducted according to the methods provided
by Khan et al. (2003) and Thomé et al. (2016). The whole procedure included four stages: planning the review,
identifying relevant studies in the literature, critical analysis of the literature, summarizing and interpreting
the findings (Figure 1). Furthermore, in this analysis, a well-known checklist, PRISMA, was also used as a
recommendation (Moher et al., 2015).
Findings – The review results reveal that, although several evaluation tools exist, the majority of them are not considered adequate to help teachers and parents evaluate the pedagogical affordances of educational apps correctly and easily. Indeed, most of these tools are considered outdated. With the emergence
of new issues such as General Data Protection Regulation, the quality criteria and methods for assessing
children’s products need to be continuously updated and adapted (Stoyanov et al., 2015). Some of these tools
might be considered as good beginnings, but their “limited dimensions make generalizable considerations
about the worth of apps” (Cherner, Dix and Lee, 2014, p. 179). Thus, there is a strong need for effective
evaluation tools to help parents and teachers when choosing educational apps (Callaghan and Reich, 2018).
Research limitations/implications – Even though this work is performed by following the systematic
mapping guideline, threats to the validity of the results presented still exist. Although custom strings that
contained a rich collection of data were used to search for papers, potentially relevant publications that would
have been missed by the advanced search might exist. It is recommended that at least two different reviewers
should independently review titles, abstracts and later full papers for exclusion (Thomé et al., 2016). In this
study, only one reviewer – the author – selected the papers and did the review. In the case of a single
researcher, Kitchenham (2004) recommends that the single reviewer should consider discussing included and
excluded papers with an expert panel. The researcher, following this recommendation, discussed the inclusion
and exclusion procedure with an expert panel of two professionals with research experience from the
Department of Preschool Education, University of Crete. To deal with publication bias, the researcher in conjunction with
the expert panel used the search strategies identified by Kitchenham (2004) including: Grey literature,
conference proceedings, communicating with experts working in the field for any unpublished literature.
Practical implications – The purpose of this study was not to advocate any evaluation tool. Instead, the
study aims to make parents, educators and software developers aware of the various evaluation tools
available and to focus on their strengths, weaknesses and credibility. This study also highlights the need for a standardized app evaluation (Green et al., 2014) via reliable tools, which will allow anyone interested to evaluate apps with relative ease (Lubniewski et al., 2018). Parents and educators need a reliable, fast and easy-to-use tool for the evaluation of educational apps that is more than a general guideline (Lee and Kim, 2015). A new generation of evaluation tools would also serve as a reference for software developers and designers to create educational apps with real educational value.
Social implications – The results of this study point to the necessity of creating new evaluation tools based on research, either in the form of rubrics or checklists, to help educators and parents to choose apps with real educational value.
Originality/value – To date, no systematic review has been published summarizing the available app evaluation tools. This study, by critically analyzing material from multiple sources, aims to provide an overview of what is available on evaluation tools for educational apps for children.
Keywords Rubrics, Preschool education, Checklists, Educational mobile applications, Mobile educational applications, Evaluation
Paper type Research paper

Acknowledgements: The author would like to thank his colleagues for their help during this paper.
Funding: The present study was funded by a grant from the Special Account for Research Funds of the University of Crete (SARF UoC).
Disclosure statement: No competing financial interests exist.
Data availability: Data generated or analyzed during this review are included within the manuscript and additional files.
Interactive Technology and Smart Education © Emerald Publishing Limited 1741-5659 DOI 10.1108/ITSE-08-2020-0127

1. Introduction
Access to mobile applications (apps) and smart mobile devices such as tablets is
continuously increasing worldwide, as these tools are becoming cheaper and more easily
accessible. Meanwhile, research indicates that educators and parents accept smart screen
technologies as sources for educational renewal and learning opportunities for children ages
3 to 6 (Neumann, 2018). Since educators have embraced educational apps, app developers
target young children’s parents via these apps (Shing and Yuan, 2016). Vaala et al. (2015)
state that most educational apps on the two popular app stores (Google Play, App Store) are
presented as appropriate for young children. However, just the label “educational” or “for
children” does not indicate that these apps have been validated for educational purposes
(Hirsh-Pasek et al., 2015; Kucirkova, 2019). If educational apps are not designed with an
understanding of how young children develop and learn, educators and parents could waste
valuable time, money, and resources on products that do not teach their children (Callaghan,
2018). Most of the self-proclaimed educational apps targeting children try to teach only basic
skills via rote learning (Shuler, 2012); however, due to the misuse of their multimedia
capabilities, these apps mostly distract young children from the educational process
(Radesky et al., 2015).
For the reasons mentioned above, parents and educators must correctly evaluate an
educational app’s appropriateness in terms of children’s needs, satisfaction, etc. (Bouck et al.,
2016). The majority of educators, as well as parents, are not experts in media technology and
pedagogy. Usually, non-pedagogical attributes such as app price, user ratings, and reviews,
as well as the number of downloads, guide their decision regarding an app choice (Notari
et al., 2016). This selection procedure yields little or no information on app quality (Stoyanov
et al., 2015). Many educators and parents often make the error of not choosing
developmentally appropriate apps (Cooper, 2012), and young children may not fully take
advantage of the opportunity to use high-quality educational apps (Kucirkova, 2014).
Educators must use evaluation tools to measure educational apps’ appropriateness in terms
of the quality and relevance of the children’s educational, emotional and developmental
needs (Lubniewski et al., 2018; More and Travers, 2013).
However, to date, no systematic review has been published summarizing the available
app evaluation tools. This study, by critically analyzing material from multiple sources,
aims to provide an overview of what is available on evaluation tools for educational apps for
children in the form of game-based apps (and not in eBooks apps) that try to foster social,
emotional and cognitive development of young children in the formal and informal learning
environment. A systematic literature review (S.L.R.) was conducted to search all English literature published after January 2010 in multiple electronic databases and internet sources to realize this objective. Various combinations of search strings were used due to database construction differences, while the results were cross-referenced to discard repeated references, obtaining those that met the criteria for inclusion.

2. Review of literature on app evaluation


In the past decade, there has been a sharp increase in the use of touch screen technologies, such as tablets, by young children around the globe (Colliver et al., 2019; Kucirkova et al., 2017).
Taking into consideration additional characteristics such as simplicity, intuitive design,
portability, connectivity and speed, it is not strange that these devices are becoming
increasingly pervasive amongst young-age users in both formal and informal learning
environments (Beschorner and Hutchison, 2013; Falloon, 2014; Marsh et al., 2015; Neumann,
2018).
An additional novel characteristic of smart mobile devices is that they can host a vast number of apps, many of which have been designated as educational and suitable for young-
age children (Kucirkova, 2014; Papadakis, 2020). The marketing and target audience for app
developers are educators and parents looking for apps to help children aged 3 to 6 years
build early math and literacy skills (Guernsey and Levine, 2015a; Notari et al., 2016).
Martens et al. (2018), in their study, found that a simple search using the terms “A.B.C.” or
“Alphabet” in the app store returned between 279 and 286 ABCs apps. The market for
educational apps for children will continue to increase well into the future (Shing and Yuan,
2016), as the ubiquity of smart mobile devices and the low cost of apps are allowing learners
from a more disadvantaged socioeconomic background to have access to the educational
content everywhere and at any time (Callaghan and Reich, 2018; Pew Research Center, 2017;
Shuler, 2012).
Most app developers, due to a lack of knowledge, claim that their product is
“educational” only because it includes “educational” content such as A.B.C.s, Maths and
shapes (Larkin et al., 2019). The issue of technologies being marketed as “educational”
without being properly vetted is not new. In 1999, for instance, Buckleitner warned that
[. . .] choosing the best books, manipulatives, toys, and software is an important and essential task
for anyone who works with children. As computer use becomes more common in home and
classroom learning, the selection of software takes on even more importance (Buckleitner, 1999,
p. 211).
The difference between then and now is that, whether through apps, games or videos, in support of learning at school or investigating interests at home, learning through mobile technology has now become a daily part of family life (Blum-Ross et al., 2018).
Kathy Hirsh-Pasek and her co-researchers state that an educational app must foster
active, engaged, meaningful and socially interactive learning (Hirsh-Pasek et al., 2015).
Research has shown that the majority of educational apps are presented in the form of a
game or a multiple choice quiz, in a closed-ended format, specially designed for simple math
or literacy purposes or drawing, supporting only input from a single-finger touch, without
conceptualizing the affordances of the new technological interfaces (Baccaglini-Frank and
Maracci, 2015). Other apps focus on the “fun” experience so that the emphasis on the specific
educational content is too weak (Kolås et al., 2016). Several reviews of educational apps for
young-age children have also found that most of them simply follow outdated models in the
form of skill and drill practices and flashcards (Callaghan and Reich, 2018). In fact, app
developers determine their app classification and audience (Sawers, 2019). Digital stores
offer some guidance, but their main concern is whether the app complies with applicable
privacy laws worldwide, such as COPPA (Children’s Online Privacy Protection Act) and
GDPR (General Data Protection Regulation). They do not assess educational apps or games
on whether they promote learning or to what extent they are related to basic life skills,
critical thinking and problem solving.
Even if educators or parents had the time to sort through the plethora of apps available in
the digital stores, they would face difficulties determining whether these apps have real
educational value (Kucirkova, 2017). Although there is an app review system in the app
stores (from 1-Star to 5-Star), typical user reviews are not in line with the real educational
value of the apps as mostly they evaluate them in the most positive way possible (Harrison
and Lee, 2018; Author, 2017a). Additionally, app users often prioritize app prices,
downloading mostly free or even low-cost apps regardless of their educational quality
(Rideout and Katz, 2016). Other adult users select apps for children based on visually
appealing graphics (Naidoo, 2014). Although multimedia elements have been found to
facilitate the learning process, this selection method adds little educational value as most of
these apps are full of conflicting and confusing multimedia content (Beschorner and
Hutchison, 2013). For these reasons, the “educational” app market is considered unregulated
and untested, and teachers and parents are likely to spend considerable time and even
money on educational apps without real educational value (Martens et al., 2018; Shing and
Yuan, 2016). It is most likely that the most popular apps in the app stores’ educational
category are not necessarily the most useful ones (Notari et al., 2016). Instead, it is possible
that children, parents, and teachers may not use apps with real educational value (Chen
et al., 2019).
Therefore, teachers need to evaluate apps to ensure that their content is accurate and
appropriate for young children’s specific educational needs (Harrison and Lee, 2018; More
and Travers, 2013). They need evaluation tools that will help them judge an app’s
educational quality quickly and easily (Harrison and Lee, 2018; Lee and Kim, 2015;
Lubniewski et al., 2018; Marsh et al., 2018). It is already known that several app evaluation
tools have been developed (Green et al., 2014). Are these tools adequate to help educators to
find and use “educational” apps with educational value?

3. Review questions
To examine what exists in the literature, the following questions were used:

RQ1. Which tools exist to evaluate educational apps?


RQ2. Which quality elements are evaluated?
RQ3. Are these tools considered adequate?
RQ1 is derived from the need to describe the evaluation tools for children's educational apps to inform teachers, parents, researchers and software developers. RQ2 is derived from the need to determine which quality elements are used by these tools during the evaluation process. RQ3 is derived from the need to determine whether these tools are considered adequate.

4. Review method
A S.L.R. aims to identify, evaluate, and synthesize all available research evidence relevant to
a particular topic or research question (Cronin et al., 2008; Denyer and Tranfield, 2009;
Kitchenham, 2004). This S.L.R. will summarize the field regarding the evaluation tools for
educational apps for young children and form the basis for more targeted future research.
The present study was conducted according to the methods provided by Khan et al. (2003) and Thomé et al. (2016). The whole procedure included four stages: planning the review, identifying relevant studies in the literature, critical analysis of the literature, and summarizing and interpreting the findings (Figure 1). Furthermore, in this analysis, a well-known checklist, PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses), was also used as a recommendation (Moher et al., 2015).

4.1 Search strategy


Based on the relevant literature (Cronin et al., 2008; Kitchenham, 2004; Moher et al., 2015), the following steps were followed: 1. Search terms relevant to the research questions were identified; 2. Possible replacement terms for the search terms were adapted; 3. The appropriate search string was constructed by connecting Boolean logic operators (AND, OR), simple operators (*, ?) and exact phrases; 4. A specified range of online databases and web sources was selected; 5. The search string was applied to the article fields; 6. The searches were restricted to sources from 2010 to date; 7. The search findings were analyzed and discussed.

4.2 Data sources


The sources searched were chosen based on their relevance to the education, social science,
and computing domain, including 20 different databases (Table 1). Greenhalgh and Peacock
(2005) advise using "snowball," "backward," and "forward" searches to improve the
completeness and to avoid publication bias. The backward search involves identifying and
examining the references or works cited in retrieved articles (Webster and Watson, 2002).
Forward search means reviewing studies that cite an additional source after it has been
published. This approach was adopted with Google Scholar. Publications from doctoral
dissertations were also included in the review (Thomé et al., 2016). Additional data were also
collected from web sources (such as webpages and blogs). Furthermore, this study used a
bibliographic package (Reference Manager by Mendeley) to manage the literature research
references.
4.2.1 Search strings. There are minor differences in the terminology used in the context
of mobile learning. For instance, mobile applications, apps, iPad apps, tablet apps, smart
mobile apps, touchscreen apps, etc., can represent a mobile application’s content. In an S.L.
R., it is highly recommended to consider alternative terms with similar meanings to receive

Figure 1. General guidelines of the current research (Source: Adapted from Parahoo, 2006). The figure outlines four phases: planning the review (1. formulate the research question and sub-questions of the review); search of the literature (2. define the sources of the literature search; 3. set inclusion and exclusion criteria; 4. define the search criteria; 5. search the literature; 6. select the literature); analysis of the literature (7. read the selected literature; 8. data extraction and coding); and results report (9. analyze and synthesize the results; 10. generate the review report).
Table 1. Databases and search engines (in alphabetical order)

1 A.C.M. (Association for Computing Machinery) Digital Library – https://dlnext.acm.org/
2 Cambridge Core – Journals and Books Online – www.cambridge.org/core/
3 CiteSeerX – https://citeseerx.ist.psu.edu
4 Digital Library – CSDL | IEEE Computer Society – www.computer.org/csdl
5 EBSCO Education Research Complete – www.ebsco.com/academic-libraries/subjects/education
6 Emerald Insight – www.emerald.com/insight/
7 ERIC – Education Resources Information Center – https://eric.ed.gov
8 Google Scholar – https://scholar.google.com
9 Ingenta Connect – www.ingentaconnect.com
10 JSTOR – www.jstor.org
11 Learning and Technology Library (LearnTechLib) (formerly EdITLib) – www.learntechlib.org/
12 LISTA (Library, Information Science and Technology) – https://libraryresearch.com
13 ProQuest – www.proquest.com/
14 SAGE Journals – https://journals.sagepub.com
15 ScienceDirect – www.sciencedirect.com
16 Scopus – www.scopus.com
17 Springer Science+Business Media – www.springer.com
18 Taylor and Francis – https://taylorandfrancis.com/
19 Web of Science – www.webofknowledge.com
20 Wiley – www.wiley.com

further information (Cronin et al., 2008, p.41). Following this recommendation,
this study gleaned alternative keywords from the database thesaurus and combined
keywords using “Boolean” operators (Cronin et al., 2008). The search string was defined by
identifying core concepts such as mobile educational apps, evaluation, rubric, educational
level, and education, including synonyms, as indicated in Table 2. From these key search
terms, replacement terms were identified.
Boolean logical operators such as AND and OR were applied to various databases to
create a subset of search results.
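For illustration only, a search string combining the core concepts and their synonyms from Table 2 might take the following form (the exact strings differed per database and are not reproduced here):

("mobile app*" OR "tablet app*" OR "iPad app*" OR "touchscreen app*") AND (evaluat* OR assess* OR rubric OR checklist) AND (educat* OR preschool* OR "early childhood" OR "young children")

In this hypothetical example, the asterisk is the truncation operator, quotation marks enforce exact phrases, OR broadens each concept group with its synonyms and AND requires a record to match all three concept groups.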
4.2.2 Inclusion criteria. Following Kitchenham's (2004) recommendations, the following
inclusion criteria were used:
 Inclusion Criteria 1: The study is focused on smart mobile devices – particularly
smartphones and tablets.
 Inclusion Criteria 2: The study describes an educational app assessment tool.
 Inclusion Criteria 3: The study reports on educational apps for children.
 Inclusion Criteria 4: The study is scientifically sound.

In this study, the term “scientifically sound” refers to all tools included in papers published
in peer-reviewed journals, acknowledging their quality. In the inclusion criteria, web sources
such as webpages and blogs that contained educational app assessment tools were also
used, for reasons explained in the section “Execution of the Review”.
4.2.3 Exclusion criteria. Considering Kitchenham's (2004) recommendations, exclusion criteria were used to exclude studies not related to this review's subject. If any one of the following criteria was met, the study was excluded from the present review:
 Exclusion Criteria 1: The study is not written in English.
 Exclusion Criteria 2: The study reports on mobile educational apps for adults.
 Exclusion Criteria 3: The paper is theoretical without providing an assessment tool.
 Exclusion Criteria 4: The paper has already been listed in another database.
 Exclusion Criteria 5: The paper or the assessment model is not accessible.
 Exclusion Criteria 6: The study is just published as an abstract.

5. Execution of the review


Different criteria could be used to decide when to stop the literature search process: time and
logic (Levy and Ellis, 2006; Petticrew and Roberts, 2008). It is advisable to stop the search
process when a new search round adds few notable results to the existing findings.
Haddaway et al. (2015) recommend analyzing the first 200 to 300 search results from various
databases and search engines. In this S.L.R., only the first 300 articles by title and abstract
were examined. When the search string returned fewer than 300 results, all articles were
analyzed within this review. From Google Scholar, the 1,000 most relevant results were
selected. As a result, 5,486 articles were initially analyzed. The S.L.R. was conducted in June
and July 2019. Table 3 summarizes the returned results per data source in alphabetical order.
The review is divided into four stages: At the first stage, the initial selection of studies is
based on an extensive search of the literature. This searching procedure returned 19,834
papers. For the first stage, 5,486 papers were included. Irrelevant and duplicate papers were
removed. In the second stage, the title and abstract of the remaining papers were screened
based on inclusion and exclusion criteria. In all, 322 papers with relevant titles and abstracts
were found. Then, the third stage of screening was performed. In this stage, the articles were
scoped for information on the evaluation tools. Several articles were excluded, such as those
focused on evaluation tools for adults’ educational apps. As a result, 70 articles were
identified as possible primary studies. In the next stage, full articles were carefully reviewed,
emphasizing their relevance to the research questions; 52 articles were excluded. Finally, at the last stage, the remaining 18 articles were examined in more depth to guarantee that they met the current study criteria. Seven irrelevant articles were filtered out. The remaining 11
articles constituted the final dataset for analysis. Figure 2 shows the overall procedure
according to the “Preferred Reporting Items for Systematic Reviews and Meta-Analyses
(PRISMA) Flow Diagram” (Moher et al., 2015).
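For clarity, the screening funnel described above, and depicted in Figure 2, can be summarized using the counts reported in this section and in Table 3: 19,834 records identified; 5,486 records retained after the first stage; 322 papers with relevant titles and abstracts; 70 possible primary studies; 18 full-text articles examined in depth; and 11 articles included in the final dataset.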

Table 2. Core concepts and synonyms

Mobile apps: Mobile applications, iPad apps, tablet apps, smart mobile apps, touchscreen apps
Evaluation: Assessment, validation
Rubric: Instrument, comprehensive rubrics, assessment form, criteria, checklist, critical evaluation, tools, framework, scale, rate model, method, design principles, design guidelines, app design
Educational level: Education, early childhood education, preschool education, nursery school education, infant school education, preprimary education, primary education
Education: Educational, instructional, teaching, learning, training
Table 3. Returned results per data source (initial search; 1st stage: identification; 2nd stage: screening; 3rd stage: eligibility; 4th stage: eligibility; 5th stage: included)

A.C.M. (Association for Computing Machinery) Digital Library: 1,933 / 300 / 20 / 1 / 0 / 0
Cambridge Core – Journals and Books Online: 1,856 / 300 / 3 / 0 / 0 / 0
CiteSeerX: 1,586 / 300 / 5 / 0 / 0 / 0
Digital Library – CSDL | IEEE Computer Society: 1,435 / 300 / 42 / 5 / 2 / 1
EBSCO Education Research Complete: 1,604 / 300 / 15 / 8 / 2 / 1
Emerald Insight: 99 / 5 / 2 / 0 / 0 / 0
ERIC – Education Resources Information Center: 1,474 / 300 / 32 / 17 / 1 / 0
Google Scholar: 2,500 / 1,000 / 22 / 0 / 0 / 0
Ingenta Connect: 108 / 9 / 3 / 1 / 1 / 1
JSTOR: 163 / 163 / 1 / 0 / 0 / 0
Learning and Technology Library (LearnTechLib) (formerly EdITLib): 95 / 95 / 5 / 0 / 0 / 0
LISTA (Library, Information Science and Technology): 1,298 / 300 / 1 / 1 / 1 / 1
ProQuest: 301 / 300 / 27 / 5 / 1 / 0
SAGE Journals: 1,286 / 300 / 11 / 3 / 2 / 1
ScienceDirect: 1,120 / 300 / 36 / 5 / 2 / 1
Scopus: 632 / 300 / 10 / 5 / 1 / 0
Springer Science+Business Media: 775 / 300 / 28 / 8 / 1 / 1
Taylor and Francis: 357 / 14 / 3 / 1 / 1 / 1
Web of Science: 531 / 300 / 44 / 4 / 1 / 1
Wiley: 681 / 300 / 12 / 6 / 2 / 2
Summary: 19,834 / 5,486 / 322 / 70 / 18 / 11

6. Data analysis
A complete list of the 11 articles is shown in Table 4, in tool type and chronological order.
On average, there was one study per year between 2012 and 2018, with only a slight
difference in 2013 (two studies) and 2015 (four studies). Regarding the type of evaluation tool, six studies referred to a rubric and five papers to a checklist, a nearly even split. Surprisingly, despite the incredible popularity of educational apps for young children, there were only three tools for evaluating educational apps targeting young children (one rubric and two criteria checklists).
Additionally, eight studies (73 %) described tools for General Education, and three studies
(27 %) described tools designed for Special Education.
We found that there were also several freely available tools on webpages and blogs
besides the studies located in various publications and databases during the review.
Another interesting point is that some of these tools, especially Walker's (2010) rubric, were cited in many studies and were used as a basis for the creation of tools mentioned in various studies. Furthermore, everyone interested in evaluating educational apps has unrestricted access to these freely available tools, unlike the tools presented in the 11 selected studies, which are kept in databases with restricted access. For these reasons,
Table 4. Detailed information on evaluation tools described in selected articles

Tool / Educational level / Domain / Subdomain / Year / Characteristics / Criteria / Study
1 Rubric K-12 General General 2013 Length: 1 page Curriculum connection Walker, H. C. (2013).
education* educational Education Levels of performance: 4 Authenticity Establishing content validity
apps Scores: 1 - 4 (1: worst – 4: best) Feedback of an evaluation rubric for
Not applicable (N/A) option: Differentiation mobile technology
No User Friendliness applications utilizing the
Criteria: 7 Student Use Delphi method. Doctoral
Sub criteria: 0 Student Performance dissertation, Johns Hopkins
University, Maryland, U.S.A.
2 Rubric K-12 education Science General 2014 Length: 1 page Accuracy Green, L. S., Hechter, R. P.,
learning Education Levels of performance: 3 Relevance of Content Tysinger, P. D., and
Scores: 1-3 (1: worst – 3: best) Sharing Findings Chassereau, K. D. (2014).
Not applicable (N/A) option: Feedback Mobile app selection for 5th
Yes Scientific Inquiry and through 12th grade science:
Criteria: 6 Practices The development of the MASS
Sub criteria: 0 Navigation rubric. Computers and
Education, 75, 65–71.
3 Rubric Early Early Literacy General 2015 Length: 1 page Multimodal Features Israelson, M. H. (2015). The
childhood Learning Education Levels of performance: 4 Literacy Content app map: A tool for systematic
education** Scores: 1-4 (1: worst – 4: best) The intuitiveness of evaluation of apps for early
Not applicable (N/A) option: App Navigation literacy learning. The Reading
No User Interactivity Teacher, 69(3), 339–349.
Criteria: 4
Sub criteria:0
4 Rubric K-12 education Instructional General 2015 Length: 8 pages Instruction Lee, C-Y. and Cherner, T. S.
apps Education Levels of performance: 5 Design (2015). A comprehensive
Scores: 1-5 (1: worst – 5: best) Engagement evaluation rubric for assessing
Not applicable (N/A) option: instructional apps. Journal of
Yes Information Technology
Domain: 3 Education: Research, 14, 21-53
Sub criteria: 24
(continued)

on evaluation tools
Evaluating

articles
Table 4.
apps for young

described in selected
educational

Detailed information
children
ITSE

Table 4.
Educational
Tool level Domain Subdomain Year Characteristics Criteria Study

5 Rubric K-12 education General Special 2016 Length: 3 pages Objective Ok, M. W., Kim, M. K., Kang,
educational Education Levels of performance: 3 Strategy E. Y., and Bryant, B. R. (2016).
apps Scores: 1–3 (1: worst – 3: best) Examples How to find good apps: An
Not applicable (N/A) option: Practice evaluation rubric for
No Error Correction and instructional apps for teaching
Criteria: 13 Feedback students with learning
Sub criteria: - Error Analysis disabilities. Intervention in
Items: - Progress Monitoring School and Clinic, 51(4), 244-
Score: Yes Motivation 252.
Type of score: Equation Navigation
App usage suggestion: Yes Visual and Auditory
Stimuli
Font
Customized Settings
Content Error and Bias
6 Rubric Early General General 2017 Length: 3 pages Educational Content Papadakis, S., Kalogiannakis,
childhood learning Education Levels of performance: 4 Design M., and Zaranis, N. (2017).
education Scores: 1-4 (1: worst – 4: best) Functionality Designing and creating an
Not applicable (N/A) option: Technical educational app rubric for
No characteristics preschool teachers. Education
Domain: 4 and Information
Sub criteria: 18 Technologies, 22(6), 3147-3165.
7 Criteria Early General General 2012 Length: 1 page Educational McManis, L. D., and
childhood learning Education Level of performance: 4 Appropriate Gunnewig, S. B. (2012).
education Scores: 1-4 (1 = No 2 = Unsure Child-Friendly Finding the education in
3 = Somewhat 4 = Yes) Enjoyable/Engaging educational technology with
Not applicable (N/A) option: Progress Monitoring/ early learners. Young
No Assessment Children, 67(3), 14-24.
Criteria: 6 Individualizing
Sub criteria: 18 Features
Score: Yes
Type of score: Equation
(continued)

Proposal for app use: Yes


(A-F)
8 Criteria Early General Special 2013 Length: 1 page Accessibility More, C. M., and Travers, J. C.
childhood educational Education Level of performance: 3 Content (2013). What’s app with that?
education apps Scores: 1-4 (1: Characteristic is Individualization Selecting educational apps for
mostly absent – 4: young children with
Characteristic is mostly disabilities. Young Exceptional
present) Children, 16(2), 15-32.
Not applicable (N/A) option:
No
Criteria: 3
Sub criteria: 27
Score: Yes
Type of score: Summary
Proposal for app use: No
9 Criteria K-12 education General General 2015 Length: 2 pages Teaching and Lee, J. S., and Kim, S. W.
educational Education Level of performance:- Learning (2015). Validation of a tool
apps Scores: - Screen Design evaluating educational apps
Not applicable (N/A) option: Technology for smart education. Journal of
No Economy and Ethics Educational Computing
Criteria: 4 Research, 52(3), 435–450.
Sub criteria: 8
Items: 33
Score: No
Type of score:-
Proposal for app use: No
(continued)


10 Criteria K-12 education General Special 2015 Length: 3 pages Design features Weng, P. L. (2015). Developing
educational Education Level of performance:- Individuation an app evaluation rubric for
apps Scores: (D=Disagree, N= Support practitioners in special
Neutral, A=Agree, na=not Overall impression education. Journal of Special
applicable, u= uncertain) Education Technology, 30(1),
Not applicable (N/A) option: 43-58.
Yes
Criteria: 4
Sub criteria: 11
Items: -
Score: No
Type of score: -
Proposal for app use:-
11 Criteria K-12 education General General 2018 Length: 1 page Student Interest Lubniewski, K. L., Arthur, C.
educational Education Level of performance: 3 Design Features L., and Harriott, W. (2018).
apps Scores: Yes, No, Somewhat Connection to Evaluating instructional apps
Not applicable (N/A) option: Curriculum using the app checklist for
Yes Instruction Features educators (A.C.E.).
Criteria: 4 International Electronic
Sub criteria: 26 Journal of Elementary
Items:- Education, 10(3), 323–329.
Score: Yes
Type of score: Summary
Recommendation for app use:
Yes

Notes: * K12 education is a term for primary and secondary education, including Kindergarten; ** Early childhood education is a term for education (formally
and informally) from birth up to the age of eight
Figure 2. Diagram of the S.L.R. using a PRISMA flow diagram

these tools were also included under the characterization of "nonscientific-based tools" in this review. In this study, the term "nonscientific-based tools" refers to freely available
online tools for evaluating educational apps that have not been “peer-reviewed” and
present their quality criteria without focusing on research methodology and scientific
evidence. Table 5 presents these tools in chronological order.
When examining the publication year of the seven nonscientific-based tools, the earliest publication date was 2010 and the latest was 2017. The number of online evaluation tools increased significantly between 2010 and 2012, with only three tools published in the following years. Regarding the type of tools presented, four tools were rubrics and three tools were checklists.
A question that arises from the information presented in Table 5 is whether these 11 tools
have fundamental characteristics in common. A simple listing of criteria, without further
categorization, revealed that there are over 250 different elements presented in the 11 tools.
Of these, only five elements are common in these 11 tools, such as design features, feedback,
etc. The present study tried to categorize these different elements further. The tool proposed
by Lee and Cherner (2015) was used as a basis for comparison and resulted in a final list of
34 criteria (Table 6). This count is significantly smaller than the initial number of over 250 sub-criteria.
Table 5. Detailed information about non-scientific-based tools

Tool / Educational level / Domain / Subdomain / Year / Characteristics / Criteria/Sub criteria / Source

1 Rubric K-12 General General 2010 Length: 1 page Curriculum Connection Walker, H. C. Evaluation
education educational Education Levels of Authenticity Rubric for iPod Apps. https://
apps performance: 4 Feedback bit.ly/31WG05U
Scores: 1-4 (1: worst – Differentiation
4: best) User Friendliness
Not applicable (N/A) Student Motivation
option: No
Criteria: 6
Sub criteria: -
2 Rubric K-12 General General 2011 Length: 1 page Curriculum Connection Schrock, K. Evaluation rubric
education educational Education Levels of Authenticity for iPod/iPad apps. www.
apps performance: 4 Feedback ipads4teaching.net/uploads/3/
Scores: 1-4 (1: worst – Differentiation 9/2/2/392267/ipad_app_rubric.
4: best) User Friendliness pdf
Not applicable (N/A) Motivation
option: No Reporting
Criteria: 7
Sub criteria: -
3 Rubric K-12 General Special 2011 Length: 1 page Curriculum Connection Van Houten, J. Ievaluate app
education educational Education Levels of Type of Skills practices Rubric. https://bit.ly/30kqobT
apps performance: 4 Age and Grade Level
Scores: 1-4 (1: worst – Languages
4: best) Adjustable levels
Not applicable (N/A) Prompts
option: No Ease of Use
Criteria: 14 Engagement
Sub criteria: - Customization
Score: Yes Alternative Access
Type of score: Simple Data Collected
equation National Curriculum
Recommendation for Gender Neutral
app use: Yes When was the app
updated
(continued)

4 Rubric K-12 General General 2012 Length: 1 page Relevance Vincent, T. Ways to evaluate
education educational Education Levels of Customization educational Apps. Learning in
apps performance: 4 Feedback hand. https://bit.ly/1LhV6ra
Scores: 1-4 (1: worst– Thinking Skills
4: best) Usability
Not applicable (N/A) Engagement
option: No Sharing
Criteria: 7
Sub criteria:-
Score: No
Type of score:-
Recommendation for
app use: -
5 Checklist K-12 General General 2015 Length: 1 page Curriculum connection Schrock, K. Critical evaluation
education educational Education Levels of Authenticity of a content-based iPad/iPod
apps performance: 2 Feedback app. www.ipads4teaching.net/
Scores: Yes - No Differentiation uploads/3/9/2/2/392267/
Not applicable (N/A) User-friendliness evalipad_content.pdf
option: Yes Student motivation
Criteria: 12 Reporting
Sub criteria:- Sound
Score: No Instructions
Type of score: - Support page
Recommendation for Navigation
app use: - Modalities
6 Checklist K-12 General General 2016 Length: 3 pages Technical/User Haines, C. Evaluating Apps
education educational Education Level of performance: Experience and New Media for Young
apps 2 Criteria for story apps Children: A Rubric. https://bit.ly/33UJGHd
Scores: Yes-No Criteria for game, play,
Not applicable (N/A) or creation apps
option: Yes
(continued)


Criteria: 2
(technical/user
experience criteria
additional content
criteria specific to app
type)
Sub criteria:
11 TUE criteria
11 content criteria
(Story Apps)
11 content criteria
(Toy/Creation Apps)
Score: No
Type of score:-
Recommendation for
app use: -
7 Checklist K-12 General General 2017 Length: 6 pages Content: Story, KIDMAP. The dig checklist for
education educational Education Level of Information, and inclusive, high-quality
apps performance:- Activity children’s media. www.
Scores:- Art joinKIDMAP.org/digchecklist/
Not applicable (N/A) Audio
option: No Audience
Criteria: 8 Purpose
Sub criteria: 4 Functionality and
Items: 80 Navigation
Score: No Instructions, Guides,
Type of score: - and Support materials
Recommendation for for grownups
app use:- Creative Team
Table 6. Grouped criteria per study

Sub criteria / Lee and Cherner (2015) / Papadakis et al. (2017) / Green et al. (2014) / Ok et al. (2016) / Israelson (2015) / Walker (2013) / More and Travers (2013) / Lee and Kim (2015) / McManis and Gunnewig (2012) / Lubniewski et al. (2018) / Weng (2015)

1 Enable content creation x x


2 Higher-order thinking x x x x x x x x
skills
3 Content appropriateness x x x x x x x x x
4 Connections to future x x
learning
5 Value of errors x x x x x x x x x
6 Feedback to teacher x x x x x
7 Level of learning x x x x x x
material
8 Cooperative/ social x x
learning
9 Student performance x x
10 Accommodation of x
individual differences
11 Platform integration x x
12 Screen design x
13 Ease of use x x x x x x x x x x
14 Navigation x x x x x x x x
15 Goal orientation x x x x x x x x x x
16 Information x x x x x
presentation
17 Media integration x x x x x
18 Cultural sensitivity x x x
19 Learner control x x x x x x x
20 Pace x x x x x x x x x x
21 Personal preferences x
22 Interest x x
23 Aesthetics x x x x x x
24 Student behavioral x x x x x
intention to use
25 Instructions existence x x x x x
(continued)


26 Performance and x
reliability
27 Interoperability of x x x x
system
28 App cost x x
29 Copyrights x x x
management
30 Data spill x x
31 Support page x x x
32 Team x
33 Free of external links x
34 Developer information x
35 Frequency of updates
This study also tried to determine which of these 34 criteria are most used in the various tools; Figure 3 presents data separately for rubrics and checklists and cumulatively. The "navigation" criterion was present in all rubrics, followed by the "content appropriateness," the "value of errors," the "screen design," and the "learner control" criteria, which were found in five of the six rubrics. Strangely enough, the criterion of "cultural sensitivity" is met in only three rubrics. However, the situation is a little different for the grouped criteria used in checklists, as three grouped criteria were found in all checklists ("screen design," "ease of use," "learner control"). In contrast with the rubrics, almost all the checklists (4 out of 5) mentioned the "cultural sensitivity" criterion.
The present study tried to understand the degree of completeness of the selected studies’
tools by comparing the total number of grouped criteria determined with the criteria covered by each separate tool. The most complete tools were two rubrics (Lee and Cherner, 2015; Papadakis et al., 2017) and one checklist (Lee and Kim, 2015). All the other tools covered the grouped criteria in a
percentage lower than 50% (Table 7).
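As an illustration of how the percentages in Table 7 were derived, each tool's coverage is simply the number of grouped criteria it addresses divided by the 35 grouped criteria identified in this review: for example, Lee and Cherner (2015) cover 24/35 ≈ 69%, Lee and Kim (2015) cover 20/35 ≈ 57% and Israelson (2015) covers 6/35 ≈ 17%.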
As already noted, in this review, nonscientific-based tools were also included. As
described above, the same procedure was followed for these tools to achieve comparable
results. The grouped criteria, as already formed, were used as the basis for the non-scientific-based tools. One extra item appeared in these tools, compared with the 34 items in the scientific-based tools: the "frequency of app updates" (Table 8).
The most common grouped criterion in the non-scientific tools is the “Content
appropriateness,” which is met in all tools, followed by the “Higher-order thinking skills” (6
times), the “Value of errors” (5 times), and the “Level of learning material” (5 times), the
“Navigation” (5 times) and the “Learner control” (5 times). In both tools (scientific and non-
scientific tools), the “Navigation” and the “Learner control” grouped criteria are observed at
higher rates. This study also tried to figure out the degree of non-scientific tools.
Unfortunately, these tools only fulfill the grouped criteria to a limited extent. For instance,
the Walker tool managed to fulfill only 6 group criteria, the Schrock tool only 7, etc. Only the
KIDMAP tool covers the grouped criteria to a satisfactory degree (51 %) (Table 9).
For a detailed explanation of the criteria mentioned above and their relevance to early childhood education, readers can refer to the Lee and Cherner (2015) study.

7. Discussion of results
7.1 RQ1: Which tools exist to evaluate educational apps?
After selecting the eligible studies, this study identified 11 articles describing two different approaches to evaluating educational apps. Six studies present a rubric, and five studies present
a checklist. Additionally, this study also identified seven nonscientific-based tools. Four web
sources present a rubric, while three web sources present a checklist. The rubric provided by
Ok et al. (2016) can be considered "hybrid" as it combines a rubric's content completeness with the format of a single checklist.
In general, the results of RQ1 revealed that the existing body of scientific-based tools is
smaller than expected compared to the number of research papers published about the low
quality of educational apps. During this review, evaluation tools that are available for free
on various websites were also found. Although these sources might be used as “a first step”
in educational app evaluation, they are not based on scientific evidence and significantly
omit important app assessment aspects. Only one freely available tool (KIDMAP) can be
considered as comprehensive enough to evaluate educational apps. The other six tools have
limitations and can be considered inadequate tools in terms of their evaluation power.
Figure 3. Grouped criteria met in the two tools, separately and collectively
Table 7. Number of grouped criteria per tool

Tools: Lee and Cherner (2015) / Lee and Kim (2015) / Papadakis et al. (2017) / More and Travers (2013) / Lubniewski et al. (2018) / McManis and Gunnewig (2012) / Weng (2015) / Green et al. (2014) / Walker (2013) / Ok et al. (2016) / Israelson (2015)
Total number of criteria: 35 / 35 / 35 / 35 / 35 / 35 / 35 / 35 / 35 / 35 / 35
Criteria found: 24 / 20 / 19 / 13 / 13 / 12 / 12 / 9 / 8 / 8 / 6
Percentage: 69 / 57 / 54 / 37 / 37 / 34 / 34 / 26 / 23 / 23 / 17
Table 8. Grouped criteria per tool
Sub criteria Walker (2010) Schrock (2011) Vincent (2012) Schrock (2015) Van Houten (2011) Haines (2016) KIDMAP (2017)

1 Enable content creation x


2 Higher-order thinking skills x x x x x x
3 Content appropriateness x x x x x x x
4 Connections to future learning
5 Value of errors x x x x x
6 Feedback to teacher x x x x
7 Level of learning material x x x x x
8 Cooperative/ social learning
9 Student performance
Accommodation of individual
10 differences x
11 Platform integration x x
12 Screen design
13 Ease of use x x x
14 Navigation x x x x
15 Goal orientation x x x x x
16 Information presentation x x
17 Media integration x x
18 Cultural sensitivity x x
19 Learner control x x x
20 Pace x x x x x
21 Personal preferences x
22 Interest
23 Aesthetics x x x
24 Student behavioral intention to use x
25 Instructions existence x x x
26 Performance and reliability x x
27 Interoperability of system x x
28 App cost
29 Copyrights management x x
30 Data spill
31 Support page
32 Team x
33 Free of external links x x
34 Developer information x x
35 Frequency of updates x
7.2 RQ2: Which quality elements are evaluated?
Martens et al. (2018) state that there is no easy way to determine what constitutes an excellent educational app. They argue that parents and educators should look out for "lack of quality" characteristics such as the presence of advertisements and in-app purchases, poor design, privacy concerns, etc. To answer RQ2, this review analyzed the quality criteria
used by the scientific and non-scientific tools. The initial results revealed that the tools use
different sets of elements to evaluate the educational apps. This wide diversity of criteria
does not allow for the formation of a clear pattern. This problematic situation is found in
many S.L.R.s. For instance, in his study, Rosell-Aguilar (2017) found that a few criteria are
standard among most frameworks. Petri and von Wangenheim (2016) describe the same
problem in their attempt to find educational games’ evaluation tools. Identifying the various
tools’ standard features was demanding, as different terms were used for the same concept
in different tools. For instance, Lee and Cherner’s (2015) rubric used the term “Value of
errors,” while other rubrics for the same characteristic used the term “Feedback.” Other
rubrics used the term “Personal preferences” instead of “Customization.” In most cases, the
characterization of the various elements did not precisely match.
Kathy Hirsh-Pasek and her co-researchers state that an educational app must foster
active, engaged, meaningful, and socially interactive learning. The Lee and Cherner (2015) rubric was used to further homogenize the initially huge number of evaluative dimensions in the various tools. Summarizing, criteria that refer to the "Design" section of the apps such as "The
Screen Design,” “Navigation,” “Learner Control” were found in all scientifically based tools.
This is considered necessary, as Marsh et al. (2015) state that there is evidence that suggests
that apps of appropriate quality and design promote a wide range of play and creativity for
preschoolers (p.34). The same researchers state that creativity is “defined as the production
of original content and evidence of diverse forms of thinking, both often present in young
children’s play and everyday uses of technology” (Marsh et al., 2015 p.2). Furthermore, other
important design and content elements such as the “Content Appropriateness,” the “Value of
Errors” (e.g. feedback), the “Higher-order Thinking Skills,” the “Ease of Use” and the
“Cultural Sensitivity” (e.g. bias-free) were found in most tools. This is a critical issue to
consider as, since the era of personal computers, researchers recognized the need for
educational software that is easily navigated, without errors in design, and with a higher
level of parametrization that is intuitive and engaging for students of all ages (see Haugland
Developmental Software Scale, Haugland, 1999).
It should be noted that, surprisingly, some critical criteria were not met in all tools. For instance, the "Leveling" sub-criterion was found in only 6 out of 11 tools. It is well known that educational content should be neither too generic nor too demanding: if it is too demanding, children will quickly abandon the app, and if it is too generic, they will quickly get bored with it (Department of Education and Training, 2016; Goodwin, 2012). Additionally,
criteria such as “Feedback to teacher/parent” (5 out of 11), “Cooperative/Social learning”
(2 out of 11), “Student performance” (2 out of 11), “Ability to save progress” (2 out of 11),

KIDMAP Schrock Haines Van Houten Vincent Schrock Walker


Tools (2017) (2015) (2016) (2011) (2012) (2011) (2010)
Table 9.
Criteria 35 35 35 35 35 35 35 Number of grouped
Criteria covered 18 13 13 11 9 7 6 criteria per non-
Percentage 51 37 37 31 26 20 17 scientific tool
ITSE “Accommodation of individual differences” (1 out of 11) that are also considered necessary
for educational apps (Hirsh-Pasek et al., 2015) are not evaluated by the majority of the
available tools.

7.3 RQ3: Are these tools considered adequate?


Considering the 18 tools’ content, most of these tools can be classified as inadequate. Most of
these tools were developed almost immediately after the advent of tablets (2010–2012) and
omit essential quality criteria. For instance, criteria such as the absence of advertisements
and the presence of in-app purchases, as well as privacy and GDPR concerns, etc., are not
present. This is considered particularly important, as researchers claim that advertisements
distract the users from the learning process (Papadakis et al., 2019; Wojdynski and Bang,
2016). Meyer et al. (2019), in their study, found that apps with little or no cost, due to their
profit model, had a significantly higher percentage of pop-up advertisements that hinder
learning. Furthermore, advertisement presence, especially in the free apps, may also reveal
an “app gap” as ordinary families from low economic backgrounds or families from
developing countries are found to be using primarily free apps (Guernsey and Levine, 2015b; Mouza and Barrett-Greenly, 2015). Since 2012, researchers have warned that the in-app purchase model is a dangerous trend (Shuler et al., 2012). Only the tool created by Lee and
Kim used copyright and data spill criteria in its evaluation procedure. According to the
Australian Cyber Security Centre (2019), a data spill is the accidental or deliberate exposure
of information into an uncontrolled or unauthorized environment or persons without a need-
to-know. The absence of these criteria could be due to the age of the evaluation tools. Most of the tools were created before personal data protection and the GDPR became pivotal concerns in today's society. The
GDPR 2016/679 is a regulation in E.U. law on data protection and privacy in the European
Union and the European Economic Area (European Commission, 2016).
The “Evaluation Rubric for Educational Apps” (Lee and Cherner, 2015) is the most
comprehensive instrument of evaluation quality and depth. REVEAC (Papadakis et al.,
2017), considering its dimensions, can also be classified as comprehensive. Another
comprehensive approach can be found in the Lee and Kim (2015) tool. It presents a checklist
for the evaluation of educational apps for smart education. The other tools included in this
review are not considered comprehensive as they lack essential elements of quality.
An additional point of concern is that only five scientific-based tools (two rubrics and three criteria checklists) have been created with young children's apps in mind. The other tools have been created for K-12 education. Although they can also evaluate educational apps that target young children, it is essential to highlight this gap, as app developers are increasingly targeting this age group (Hiniker et al., 2015; Kucirkova, 2017; Shuler, 2012). Additionally, young children have special needs and, in any case, should not be treated as "young adults" (Anthony et al., 2014). For instance, an aspect such as "palm rest" (Cohen et al., 2011) is usually a concern for young children but not for teenagers.

7.4 Further questions arise


This study raises several questions. For instance, the "Frequency of app updates" sub-criterion exists
only in one tool. It is widely known that increased student engagement and enthusiasm are
prerequisites of using apps for educational purposes (Lubniewski et al., 2018) and that
children quickly get bored (Department of Education and Training, 2016). So, is this
criterion necessary? Related questions may arise due to the lack of criteria such as the “App
cost,” the “Developer information/support page,” and the “Performance and reliability.” The
educational app price range is vast – from nothing to hundreds of dollars (Statista, 2019).
Not all apps are of the same quality, and app price, of course, does not necessarily correlate
with its quality (Shing and Yuan, 2016). So, how can we evaluate an app in terms of its price? Also, some tools provide information such as a score or a description at the end of the evaluation procedure. Should we be satisfied with the level of information provided? For instance, in his rubric, Walker (2010) provides a minimum score considered necessary for an app to be useful. Vincent (2012) suggests that the more criteria an app meets, the better. Rosell-Aguilar (2017) highlights that, as apps serve different purposes for different learners, insisting that all the criteria are determining factors for the generic evaluation of an app could be misleading.
Additionally, other tools offer a “Not applicable (N/A)” choice. Should evaluation tools
include this possibility? Rosell-Aguilar (2017) notes that, although some criteria are
undoubtedly more crucial than others, one should not dismiss an app’s potential because it
does not meet one specific criterion.
Another question that arises is whether researchers should try to make a “one-size-fits-
all” evaluation tool for all different kinds of educational apps or, instead, they should try to
create different tools optimized for the different types of apps: open-ended apps, close-ended
apps, apps to promote Computational Thinking, Mathematics, Literacy, etc. Walker states
that, given the wide variety of apps in the marketplace, an evaluation tool for measuring app
quality “must be broad enough to address multiple grade/age levels and content areas, yet
specific enough to address the inclusion of best practices in teaching and learning” (Walker,
2013, p. 6). On the contrary, Lee and Cherner (2015) warn that “creating a tool to evaluate all
kinds of educational apps is impossible” (p. 37).

8. Threats to validity
Even though this work was performed following systematic mapping guidelines, threats to the validity of the presented results still exist. Although custom search strings containing a rich collection of terms were used to search for papers, potentially relevant publications missed by the advanced search might still exist. It is recommended that at least two different reviewers independently review titles, abstracts and, later, full papers for exclusion (Thomé et al., 2016). In this study, only one reviewer – the author – selected
the papers and did the review. In the case of a single researcher, Kitchenham (2004)
recommends that the single reviewer consider discussing included and excluded papers
with an expert panel. Following this recommendation, the researcher discussed the inclusion
and exclusion procedure with an expert panel of two professionals with research experience
from the Department of Preschool Education, University of Crete, Greece. To deal with
publication bias, the researcher, in conjunction with the expert panel, used the search
strategies identified by Kitchenham (2004), including:
• searching grey literature;
• searching conference proceedings; and
• communicating with experts working in the field for any unpublished literature.

Cronin et al. (2008, p. 38) state that a literature review must collect information from multiple
sources. This study did not limit the search to only one scientific domain, but it included
databases and publishers from three different domains (education, social science, and
computing domain), as well as standalone journals. Furthermore, to mitigate threats based
on the search string, the study performed extra research with Google Scholar by adopting backward and forward search (Webster and Watson, 2002). Denyer and Tranfield (2009)
note that other sources such as websites, workshops, etc., and other “grey literature” are all
critical in addition to academic papers. Whether to search for specific data sources depends
on the field and the evidence available (Denyer and Tranfield, 2009, p. 684). This study included non-scientifically based tools to provide a comprehensive overview of the available evaluation tools for the reasons already explained.
In this review, tools that referred to Special Education were also included. A preliminary reading of these tools led to the decision that they can also be applied to General Education apps. Furthermore, these tools' inclusion will allow special education teachers, software developers and, of course, parents to become informed. Some other tools refer in their title to iPod/iPad apps or, in general, to Apple products. These tools were also included in the review, as the same guidelines exist for apps for both the iOS and Android operating systems.
A paper entitled “Does the app fit? Using the Apps Consideration Checklist” (Tammaro and Jerome, 2012) is cited in some papers. Although several attempts have been made to find
this paper, including the search of additional databases such as ResearchGate and
Academia or contacting former and current publishers of the publication (C.E.C. and Exinn),
all efforts proved futile. Thus, this paper was not included in the review. Relevant papers
were also found, but they were not included in the results for various reasons. For instance,
Buckler (2012) presented a rubric strictly oriented toward evaluating apps for adults with
special needs. Cherner et al. (2016) published a rubric for evaluating apps from the teachers’
scope, e.g. apps that help teachers complete everyday tasks for their daily teaching practice.
Additionally, the Chen (2016) rubric was also excluded as it evaluates language-learning
apps for second language adult learners. Stoyanov and his colleagues presented an exciting
evaluation tool in the form of a rubric called MARS. It was excluded as it evaluates mobile
health apps (Stoyanov et al., 2015). Finally, Heyman (2018) published a handy paper that
addresses all multimedia learning features that support or hinder learning. As these features
are not presented in a rubric or a checklist, the paper was excluded.
For the same reason, some extremely valuable papers, such as those of Hirsh-Pasek et al. (2015), Marsh et al. (2018), Rosell-Aguilar (2017) and Zosh et al. (2017), among others, were also excluded from the review, as they were in the form of guidelines.
Furthermore, various other non-scientific tools (mainly in the form of rubrics) were
mentioned on several sites, but their links redirected to an error page. We are
reasonably confident that we are unlikely to have missed many significant relevant studies
or tools for the reasons mentioned above.

9. Conclusion and future work


Although there are several evaluation tools, these review results reveal that most of them
are not considered adequate to help teachers and parents evaluate educational apps’
pedagogical affordances correctly and quickly. Quite a few of the tools are somewhat dated
(over 5 years old). With the emergence of new issues such as GDPR, the quality criteria and
methods for assessing children’s products need to be continuously updated and adapted
(Stoyanov et al., 2015). Some of these tools might be considered as good beginnings, but their
“limited dimensions make generalizable considerations about the worth of apps” (Cherner
et al., 2014, p. 179). Thus, there is a strong need for useful evaluation tools to help parents
and teachers choose educational apps (Callaghan and Reich, 2018).
The purpose of this study was not to advocate for any particular evaluation tool. Instead, the study aims to make the community more aware of the available evaluation tools and to focus attention on their strengths, weaknesses and credibility. This study also highlights the need for a
standardized app evaluation tool (Green et al., 2014), which will allow anyone interested to
evaluate apps with relative ease (Lubniewski et al., 2018). Rather than general guidelines, parents and educators need a proper, fast and easy-to-use tool to evaluate educational apps (Lee and Kim, 2015). For instance, Papadakis et al. (2020), taking into consideration the relevant functions of the apps, children's expectations and, of course, the educators' implicit demands, propose that a valid instrument for assessing educational apps for children aged 3–6 years must contain the following four dimensions: usability, efficiency, parental control and security.
Until this happens, parents and educators should refer to independent organizations such as Common Sense Media and Kindertown, which have started describing the desirable characteristics of highly effective learning apps. Additionally, Lisa Guernsey and Michael Levine suggest a series of curation sites that review educational apps for parents and educators.
Those sites are Children’s Technology Review (childrenstech.com), Graphite (graphite.org),
Know What’s Inside (knowwhatsinside.com), Parent’s Choice Foundation (parents-choice.
org), and Teachers with Apps (teacherswithapps.com) (Guernsey and Levine, 2015b).

References
Anthony, L., Brown, Q., Tate, B., Nias, J., Brewer, R. and Irwin, G. (2014), “Designing smarter touch-
based interfaces for educational contexts”, Personal and Ubiquitous Computing, Vol. 18 No. 6,
pp. 1471-1483.
Australian Cyber Security Centre (2019), “Data spill management guide”, available at: www.cyber.gov.
au/publications/data-spill-management-guide (accessed March 2020).
Baccaglini-Frank, A. and Maracci, M. (2015), “Multi-touch technology and preschoolers' development of number sense”, Digital Experiences in Mathematics Education, Vol. 1 No. 1, pp. 7-27.
Beschorner, B. and Hutchison, A. (2013), “iPads as a literacy teaching tool in early childhood”,
International Journal of Education in Mathematics, Science and Technology, Vol. 1 No. 1,
pp. 16-24.
Blum-Ross, A., Donoso, V., Dinh, T., Mascheroni, G., O'Neill, B., Riesmeyer, C. and Stoilova, M. (2018), Looking forward: technological and social change in the lives of European children and young people, Report for the I.C.T. Coalition for Children Online, I.C.T. Coalition, Brussels.
Bouck, E.C., Satsangi, R. and Flanagan, S. (2016), “Focus on inclusive education: evaluating apps for
students with disabilities: supporting academic access and success: Bradley Witzel, editor”,
Childhood Education, Vol. 92 No. 4, pp. 324-328.
Buckleitner, W. (1999), “The state of children’s software evaluation-yesterday, today and in the 21st
century”, Information Technology in Childhood Education Annual, Vol. 1 No. 1, pp. 211-220.
Buckler, T. (2012), “Is there an app for that? Developing an evaluation rubric for apps for use with
adults with special needs”, The Journal of B.S.N. Honors Research, Vol. 5 No. 1, pp. 19-32.
Callaghan, M.N. (2018), “Connecting learning and developmental sciences to educational preschool
apps: analyzing app design features and testing their effectiveness”, Doctoral dissertation, UC
Irvine.
Callaghan, M.N. and Reich, S.M. (2018), “Are educational preschool apps designed to teach? An analysis
of the app market”, Learning, Media and Technology, Vol. 43 No. 3, pp. 280-293.
Chen, X. (2016), “Evaluating language-learning mobile apps for second-language learners”, Journal of
Educational Technology Development and Exchange, Vol. 9 No. 2, p. 3.
Chen, T., Hsu, H.M., Stamm, S.W. and Yeh, R. (2019), “Creating an instrument for evaluating critical
thinking apps for college students”, E-Learning and Digital Media, Vol. 16 No. 6, pp. 433-454.
Cherner, T., Dix, J. and Lee, C. (2014), “Cleaning up that mess: a framework for classifying educational
apps”, Contemporary Issues in Technology and Teacher Education, Vol. 14 No. 2, pp. 158-193.
Cherner, T., Lee, C.Y., Fegely, A. and Santaniello, L. (2016), “A detailed rubric for assessing the quality
of teacher resource apps”, Journal of Information Technology Education: Innovations in Practice,
Vol. 15, pp. 117-143.
Cohen, M., Hadley, M. and Frank, M. (2011), Young Children, Apps and iPad, Michael Cohen Group,
New York, NY.
Colliver, Y., Hatzigianni, M. and Davies, B. (2019), “Why can’t I find quality apps for my child? A model
to understand all stakeholders’ perspectives on quality learning through digital play”, Early
Child Development and Care, pp. 1-15.
Cooper, R.J. (2012), “Hi ConnSENSE fans! RJ Cooper here. ConnSENSE bulletin”, available at: www.
connsensebulletin.com/2012/09/hi-connsense-fans-rj-cooper-here/ (accessed August 2019).
Cronin, P., Ryan, F. and Coughlan, M. (2008), “Undertaking a literature review: a step-by-step
approach”, British Journal of Nursing, Vol. 17 No. 1, pp. 38-43.
Denyer, D. and Tranfield, D. (2009), “Producing a systematic review”, in Buchanan, D. and Bryman, A.
(Eds), The Sage Handbook of Organizational Research Methods, Sage, London, pp. 671-689.
Department of Education and Training (2016), “Evaluation of the early learning language Australia
trial”, Deloitte Access Economics, available at: https://docs.education.gov.au/system/files/doc/
other/2016-ella-evaluation-report.pdf (accessed August 2019).
European Commission (2016), Proposal for a Regulation of the European Parliament and of the Council
on the Protection of Individuals with Regard to the Processing of Personal Data and on the Free
Movement of Such Data (General Data Protection Regulation), European Commission, Brussels.
Falloon, G. (2014), “What’s going on behind the screens? Researching young students’ learning
pathways using iPads”, Journal of Computer Assisted Learning, Vol. 30 No. 4, pp. 318-336.
Goodwin, K. (2012), Use of tablet technology in the classroom, Curriculum and Learning Innovation
Centre, N.S.W. Department of Education and Communities, Strathfield, N.S.W.
Green, L.S., Hechter, R.P., Tysinger, P.D. and Chassereau, K.D. (2014), “Mobile app selection for 5th through
12th grade science: the development of the MASS rubric”, Computers and Education, Vol. 75, pp. 65-71.
Greenhalgh, T. and Peacock, R. (2005), “Effectiveness and efficiency of search methods in systematic
reviews of complex evidence: audit of primary sources”, BMJ, Vol. 331 No. 7524, pp. 1064-1065.
Guernsey, L. and Levine, M.H. (2015a), “Pioneering literacy in the digital age”, Technology and Digital
Media in the Early Years: Tools for Teaching and Learning, pp. 104-114.
Guernsey, L. and Levine, M.H. (2015b), Tap, Click, Read: Growing Readers in a World of Screens, John
Wiley and Sons.
Haddaway, N.R., Collins, A.M., Coughlin, D. and Kirk, S. (2015), “The role of google scholar in evidence
reviews and its applicability to grey literature searching”, PloS One, Vol. 10 No. 9, p. e0138237.
Haines, C. (2016), “Evaluating apps and new media for young children: a rubric”, available at: https://
nevershushed.files.wordpress.com/2016/09/2016evaluatingappsandnewmediaforyoungchildrenarubric.
pdf (accessed August 2019).
Harrison, T.R. and Lee, H.S. (2018), “iPads in the mathematics classroom: developing criteria for
selecting appropriate learning apps”, International Journal of Education in Mathematics, Science
and Technology, Vol. 6 No. 2, pp. 155-172.
Haugland, S. (1999), “Computers and young children: the newest software that meets the developmental
needs of young children”, Early Childhood Education Journal, Vol. 26 No. 4, pp. 245-254.
Heyman, N. (2018), “Identifying features of apps to support using evidence-based language intervention
with children”, Assistive Technology, Vol. 32 No. 6, pp. 1-11, doi: 10.1080/10400435.2018.1553078.
Hiniker, A., Sobel, K., Hong, S.R., Suh, H., Irish, I., Kim, D. and Kientz, J.A. (2015), “Touchscreen
prompts for preschoolers: designing developmentally appropriate techniques for teaching young
children to perform gestures”, Proceedings of the 14th International Conference on Interaction
Design and Children, pp. 109-118.
Hirsh-Pasek, K., Zosh, J.M., Golinkoff, R.M., Gray, J.H., Robb, M.B. and Kaufman, J. (2015), “Putting
education in ‘educational’ apps: lessons from the science of learning”, Psychological Science in the
Public Interest, Vol. 16 No. 1, pp. 3-34.
Israelson, M.H. (2015), “The app map: a tool for systematic evaluation of apps for early literacy learning”, The Reading Teacher, Vol. 69 No. 3, pp. 339-349.
Khan, K.S., Kunz, R., Kleijnen, J. and Antes, G. (2003), “Five steps to conducting a systematic review”, Journal of the Royal Society of Medicine, Vol. 96 No. 3, pp. 118-121.
KIDMAP (2017), “The dig checklist for inclusive, high-quality children's media”, available at: www.joinkidmap.org/digchecklist/ (accessed August 2019).
Kitchenham, B. (2004), Procedures for Undertaking Systematic Reviews, Joint Technical Report,
Computer Science Department, Keele University (TR/SE0401) and National I.C.T. Australia Ltd.
Kolås, L., Nordseth, H. and Munkvold, R. (2016), “Learning with educational apps: a
qualitative study of the most popular free apps in Norway”, 15th International
Conference on Information Technology Based Higher Education and Training (ITHET),
Istanbul, IEEE, pp. 1-8.
Kucirkova, N. (2014), “iPads in early education: separating assumptions and evidence”, Frontiers in
Psychology, Vol. 5 No. 715, pp. 1-3.
Kucirkova, N. (2017), “iRPD – a framework for guiding design-based research for iPad apps”, British
Journal of Educational Technology, Vol. 48 No. 2, pp. 598-610.
Kucirkova, N. (2019), “Reading to your child? Digital books are as important as print books”, available
at: https://sciencenorway.no/books-children-opinion/reading-to-your-child-digital-books-are-as-
important-as-print-books/1606950 (accessed March 2020).
Kucirkova, N., Wells Rowe, D., Oliver, L. and Piestrzynski, L.E. (2017), “Children's writing with and on
screen(s): a narrative literature review”, COST ACTION ISI1410 DigiLitEY.
Larkin, K., Kortenkamp, U., Ladel, S. and Etzold, H. (2019), “Using the ACAT framework to evaluate
the design of two geometry apps: an exploratory study”, Digital Experiences in Mathematics
Education, Vol. 5 No. 1, pp. 59-92.
Lee, C.Y. and Cherner, T.S. (2015), “A comprehensive evaluation rubric for assessing instructional
apps”, Journal of Information Technology Education: Research, Vol. 14, pp. 21-53.
Lee, J.S. and Kim, S.W. (2015), “Validation of a tool evaluating educational apps for smart education”,
Journal of Educational Computing Research, Vol. 52 No. 3, pp. 435-450.
Levy, Y. and Ellis, T.J. (2006), “A systems approach to conduct an effective literature review in support
of information systems research”, Informing Science: The International Journal of an Emerging
Transdiscipline, Vol. 9, pp. 181-212.
Lubniewski, K.L., Arthur, C.L. and Harriott, W. (2018), “Evaluating instructional apps using the app checklist
for educators (A.C.E.)”, International Electronic Journal of Elementary Education, Vol. 10 No. 3,
pp. 323-329.
McManis, L.D. and Gunnewig, S.B. (2012), “Finding the education in educational technology with early
learners”, Young Children, Vol. 67 No. 3, pp. 14-24.
Marsh, J., Plowman, L., Yamada-Rice, D., Bishop, J., Lahmar, J. and Scott, F. (2018), “Play and
creativity in young children’s use of apps”, British Journal of Educational Technology,
Vol. 49 No. 5, pp. 870-882.
Marsh, J., Plowman, L., Yamada-Rice, D., Bishop, J.C., Lahmar, J., Scott, F., Davenport, A., Davis, S., French, K., Piras, M., Thornhill, S., Robinson, P. and Winter, P. (2015), “Exploring play and creativity in preschoolers' use of apps: report for early years practitioners”, available at: www.techandplay.
org/reports/TAP_Final_Report.pdf (accessed August 2019).
Martens, M., Rinnert, G.C. and Andersen, C. (2018), “Child-centered design: developing an inclusive
letter writing app”, Frontiers in Psychology, Vol. 9, p. 2277.
Meyer, M., Adkins, V., Yuan, N., Weeks, H.M., Chang, Y.J. and Radesky, J. (2019), “Advertising in young
children’s apps: a content analysis”, Journal of Developmental and Behavioral Pediatrics : JDBP, Vol. 40
No. 1, pp. 32-39.
Moher, D., Shamseer, L., Clarke, M., Ghersi, D., Liberati, A., Petticrew, M., Shekelle, P. and Stewart, L.A.
(2015), “Preferred reporting items for systematic review and meta-analysis protocols (PRISMA-
P) 2015 statement”, Systematic Reviews, Vol. 4 No. 1, p. 1.
More, C.M. and Travers, J.C. (2013), “What’s app with that? Selecting educational apps for young
children with disabilities”, Young Exceptional Children, Vol. 16 No. 2, pp. 15-32.
Mouza, C. and Barrett-Greenly, T. (2015), “Bridging the app gap: an examination of a professional
development initiative on mobile learning in urban schools”, Computers and Education, Vol. 88,
pp. 1-14.
Naidoo, J.C. (2014), Diversity Programming for Digital Youth: Promoting Cultural Competence in the
Children’s Library, ABC-CLIO.
Neumann, M.M. (2018), “Using tablets and apps to enhance emergent literacy skills in young children”,
Early Childhood Research Quarterly, Vol. 42, pp. 239-246.
Neumann, M.M., Merchant, G. and Burnett, C. (2018), “Young children and tablets: the views of parents
and teachers”, Early Child Development and Care, pp. 1-12.
Notari, M.P., Hielscher, M. and King, M. (2016), “Educational apps ontology”, in Churchill, D., Fox, B.
and King, M. (Eds), Mobile Learning Design, Lecture Notes, Springer, Singapore, pp. 83-96.
Ok, M.W., Kim, M.K., Kang, E.Y. and Bryant, B.R. (2016), “How to find good apps: an evaluation rubric
for instructional apps for teaching students with learning disabilities”, Intervention in School
and Clinic, Vol. 51 No. 4, pp. 244-252.
Papadakis, S. (2020), “Robots and robotics kits for early childhood and first school age”, International
Journal of Interactive Mobile Technologies (IJIM), Vol. 14 No. 18, pp. 34-56.
Papadakis, S., Kalogiannakis, M. and Zaranis, N. (2017), “Designing and creating an educational app
rubric for preschool teachers”, Education and Information Technologies, Vol. 22 No. 6,
pp. 3147-3165.
Papadakis, S., Vaiopoulou, J., Kalogiannakis, M. and Stamovlasis, D. (2020), “Developing and exploring
an evaluation tool for educational apps (ETEA) targeting kindergarten children”, Sustainability,
Vol. 12 No. 10, p. 4201.
Papadakis, S., Zaranis, N. and Kalogiannakis, M. (2019), “Parental involvement and attitudes towards
young Greek children’s mobile usage”, International Journal of Child-Computer Interaction,
Vol. 22, p. 100144.
Parahoo, K. (2006), Nursing Research – Principles, Process and Issues, 2nd ed., Palgrave, Houndsmill.
Petri, G. and von Wangenheim, C.G. (2016), “How to evaluate educational games: a systematic literature review”, Journal
of Universal Computer Science, Vol. 22 No. 7, pp. 992-1021.
Petticrew, M. and Roberts, H. (2008), Systematic Reviews in the Social Sciences: A Practical Guide, John
Wiley and Sons.
Pew Research Center (2017), “A third of Americans live in a household with three or more
smartphones”, available at: www.pewresearch.org/fact-tank/2017/05/25/a-third-of-americans-
live-in-a-household-with-three-or-more-smartphones/ (accessed August 2019).
Radesky, J.S., Schumacher, J. and Zuckerman, B. (2015), “Mobile and interactive media use by young
children: the good, the bad, and the unknown”, Pediatrics, Vol. 135 No. 1, pp. 1-3.
Rideout, V.J. and Katz, V.S. (2016), Opportunity for All? Technology and Learning in Lower-Income
Families. A Report of the Families and Media Project, The Joan Ganz Cooney Center at Sesame
Workshop, New York, NY.
Rosell-Aguilar, F. (2017), “State of the app: a taxonomy and framework for evaluating language
learning mobile applications”, CALICO Journal, Vol. 34 No. 2, pp. 243-258.
Sawers, P. (2019), “Google asks android developers to categorize apps based on content and target age”,
available at: https://venturebeat.com/2020/03/05/microsofts-ai-generates-3d-objects-from-2d-
images/ (accessed March 2020).
Schrock, K. (2011), “Evaluation rubric for iPod/iPad apps”, available at: www.ipads4teaching.net/uploads/3/9/2/2/392267/ipad_app_rubric.pdf (accessed August 2019).
Schrock, K. (2015), “Critical evaluation of a content-based iPad/iPod app”, available at: www.ipads4teaching.net/uploads/3/9/2/2/392267/evalipad_content.pdf (accessed August 2019).
Shing, S. and Yuan, B. (2016), “Apps developed by academics”, Journal of Education and Practice, Vol. 7 No. 33, pp. 1-9.
Shuler, C., Levine, Z. and Ree, J. (2012), iLearn II: An Analysis of the Education Category of Apple’s App
Store, The Joan Ganz Cooney Center at Sesame Workshop, New York, NY.
Statista (2019), “Average price of paid apps in the Apple App Store and Google Play as of 1st quarter
2018 (in U.S. dollars)”, available at: www.statista.com/statistics/262387/average-price-of-
android-ipad-and-iphone-apps/ (accessed August 2019).
Stoyanov, S.R., Hides, L., Kavanagh, D.J., Zelenko, O., Tjondronegoro, D. and Mani, M. (2015), “Mobile
app rating scale: a new tool for assessing the quality of health mobile apps”, JMIR mHealth and
Uhealth, Vol. 3 No. 1, p. e27.
Tammaro, M.T. and Jerome, M.K. (2012), “Does the app fit? Using the apps consideration checklist”, in
Ault, M.J. and Bausch, M.E. (Eds), Apps for All Students: A Teacher’s Desktop Guide,
Technology and Media Division (T.A.M.) for the Council for Exceptional Children, Reston, VA,
pp. 23-31.
Thomé, A.M.T., Scavarda, L.F. and Scavarda, A.J. (2016), “Conducting systematic literature review in
operations management”, Production Planning and Control, Vol. 27 No. 5, pp. 408-420.
Vaala, S., Ly, A. and Levine, M.H. (2015), Getting a Read on the App Stores: A Market Scan and
Analysis of Children’s Literacy Apps, Full Report, Joan Ganz Cooney Center at Sesame
Workshop. 1900 Broadway, New York, NY 10023.
Van Houten, J. (2011), “iEvaluate app rubric”, available at: https://static.squarespace.com/static/
50eca855e4b0939ae8bb12d9/50ecb58ee4b0b16f176a9e7d/50ecb593e4b0b16f176aa97b/
1330388174777/JeanetteVanHoutenRubric.pdf (accessed August 2019).
Vincent, T. (2012), “Ways to evaluate educational apps”, available at: http://learninginhand.com/blog/
ways-to-evaluate-educational-apps.html (accessed August 2019).
Walker, H. (2010), “Evaluation rubric for iPod apps”, available at: http://learninginhand.com/blog/
evaluation-rubric-for-educational-apps.html (accessed August 2019).
Walker, H.C. (2013), “Establishing content validity of an evaluation rubric for mobile technology
applications utilizing the Delphi method”, Doctoral dissertation, Johns Hopkins University, MD,
U.S.A.
Webster, J. and Watson, R. (2002), “Analyzing the past to prepare for the future: writing a literature
review”, M.I.S. Quarterly, Vol. 26 No. 2, pp. xiii-xxiii.
Weng, P.L. (2015), “Developing an app evaluation rubric for practitioners in special education”, Journal
of Special Education Technology, Vol. 30 No. 1, pp. 43-58.
Wojdynski, B.W. and Bang, H. (2016), “Distraction effects of contextual advertising on online news
processing: an eye-tracking study”, Behaviour and Information Technology, Vol. 35 No. 8,
pp. 654-664.
Zosh, J.M., Lytle, S.R., Golinkoff, R.M. and Hirsh-Pasek, K. (2017), “Putting the education back in
educational apps: how content and context interact to promote learning”, in Barr, R. and
Linebarger, D.N. (Eds), Media Exposure during Infancy and Early Childhood, Springer, Cham,
pp. 259-282.

Further reading
Chau, C.L. (2014), “Positive technological development for young children in the context of children’s
mobile apps”, Doctoral dissertation, Tufts University, available at: http://gradworks.umi.com/
3624692.pdf (accessed August 2019).
Common Sense Media (2013), “Zero to eight: children's media use in America”, available at: www.
commonsensemedia.org/research/zero-to-eight-childrens-media-use-in-america-2013 (accessed
August 2019).
Pilar, R.A., Jorge, A. and Cristina, C. (2013), “The use of current mobile learning applications in EFL”,
Procedia - Social and Behavioral Sciences, Vol. 103, pp. 1189-1196.

Corresponding author
Stamatios Papadakis can be contacted at: [email protected]

For instructions on how to order reprints of this article, please visit our website:
www.emeraldgrouppublishing.com/licensing/reprints.htm
Or contact us for further details: [email protected]
