Tools for evaluating educational apps for young children: a systematic review of the literature
Stamatios Papadakis
Preschool Education, University of Crete, Rethymnon, Greece
Received 5 August 2020
Revised 22 October 2020
Accepted 23 October 2020
Abstract
Purpose – This study, by critically analyzing material from multiple sources, aims to provide an overview
of what is available on evaluation tools for educational apps for children. To realize this objective, a
systematic literature review was conducted to search all English literature published after January 2010 in
multiple electronic databases and internet sources. Various combinations of search strings were used due to
database construction differences, while the results were cross-referenced to discard repeated references,
obtaining those that met the criteria for inclusion.
Design/methodology/approach – The present study was conducted according to the methods provided
by Khan et al. (2003) and Thomé et al. (2016). The whole procedure included four stages: planning the review,
identifying relevant studies in the literature, critical analysis of the literature, summarizing and interpreting
the findings (Figure 1). Furthermore, in this analysis, a well-known checklist, PRISMA, was also used as a
recommendation (Moher et al., 2015).
Findings – The review results reveal that, although several evaluation tools exist, most of them are not considered adequate to help teachers and parents evaluate the pedagogical affordances of educational apps correctly and easily. Indeed, most of these tools are considered outdated. With the emergence of new issues such as the General Data Protection Regulation, the quality criteria and methods for assessing children’s products need to be continuously updated and adapted (Stoyanov et al., 2015). Some of these tools might be considered good beginnings, but their “limited dimensions make generalizable considerations about the worth of apps” (Cherner, Dix and Lee, 2014, p. 179). Thus, there is a strong need for effective evaluation tools to help parents and teachers when choosing educational apps (Callaghan and Reich, 2018).
Research limitations/implications – Even though this work was performed following the systematic mapping guidelines, threats to the validity of the presented results still exist. Although custom strings containing a rich collection of terms were used to search for papers, potentially relevant publications missed by the advanced search might exist. It is recommended that at least two different reviewers independently review titles, abstracts and later full papers for exclusion (Thomé et al., 2016). In this study, only one reviewer – the author – selected the papers and performed the review. In the case of a single researcher, Kitchenham (2004) recommends that the single reviewer consider discussing included and excluded papers with an expert panel. Following this recommendation, the researcher discussed the inclusion and exclusion procedure with an expert panel of two professionals with research experience from the Department of Preschool Education, University of Crete. To deal with publication bias, the researcher, in conjunction with the expert panel, used the search strategies identified by Kitchenham (2004), including grey literature, conference proceedings and communicating with experts working in the field for any unpublished literature.
Practical implications – The purpose of this study was not to advocate any particular evaluation tool. Instead, the study aims to make parents, educators and software developers aware of the various evaluation tools available and to focus on their strengths, weaknesses and credibility. This study also highlights the need for a standardized app evaluation (Green et al., 2014) via reliable tools, which will allow anyone interested to evaluate apps with relative ease (Lubniewski et al., 2018). Parents and educators need a reliable, fast and easy-to-use tool for the evaluation of educational apps that is more than a general guideline (Lee and Kim, 2015). A new generation of evaluation tools could also serve as a reference for software developers and designers seeking to create educational apps with real educational value.
Social implications – The results of this study point to the necessity of creating new research-based evaluation tools, in the form of either rubrics or checklists, to help educators and parents choose apps with real educational value.
Originality/value – To date, no systematic review has been published summarizing the available app evaluation tools. This study, by critically analyzing material from multiple sources, aims to provide an overview of what is available on evaluation tools for educational apps for children.
Keywords Rubrics, Preschool education, Checklists, Educational mobile applications, Mobile educational applications, Evaluation
Paper type Research paper
Acknowledgements: The author would like to thank his colleagues for their help during this paper.
Funding: The present study was funded by a grant from the Special Account for Research Funds of the University of Crete (SARF UoC).
Disclosure statement: No competing financial interests exist.
Data availability: Data generated or analyzed during this review are included within the manuscript and additional files.
Interactive Technology and Smart Education, © Emerald Publishing Limited, ISSN 1741-5659, DOI 10.1108/ITSE-08-2020-0127
1. Introduction
Access to mobile applications (apps) and smart mobile devices such as tablets is
continuously increasing worldwide, as these tools are becoming cheaper and more easily
accessible. Meanwhile, research indicates that educators and parents accept smart screen
technologies as sources for educational renewal and learning opportunities for children ages
3 to 6 (Neumann, 2018). Since educators have embraced educational apps, app developers
target young children’s parents via these apps (Shing and Yuan, 2016). Vaala et al. (2015)
state that most educational apps on the two popular app stores (Google Play, App Store) are
presented as appropriate for young children. However, just the label “educational” or “for
children” does not indicate that these apps have been validated for educational purposes
(Hirsh-Pasek et al., 2015; Kucirkova, 2019). If educational apps are not designed with an
understanding of how young children develop and learn, educators and parents could waste
valuable time, money, and resources on products that do not teach their children (Callaghan,
2018). Most of the self-proclaimed educational apps targeting children try to teach only basic
skills via rote learning (Shuler, 2012); however, due to the misuse of their multimedia
capabilities, these apps mostly distract young children from the educational process
(Radesky et al., 2015).
For the reasons mentioned above, parents and educators must correctly evaluate an
educational app’s appropriateness in terms of children’s needs, satisfaction, etc. (Bouck et al.,
2016). The majority of educators, as well as parents, are not experts in media technology and
pedagogy. Usually, non-pedagogical attributes such as app price, user ratings, and reviews,
as well as the number of downloads, guide their decision regarding an app choice (Notari
et al., 2016). This selection procedure yields little or no information on app quality (Stoyanov
et al., 2015). Many educators and parents often make the error of not choosing
developmentally appropriate apps (Cooper, 2012), and young children may not fully take
advantage of the opportunity to use high-quality educational apps (Kucirkova, 2014).
Educators must use evaluation tools to measure educational apps’ appropriateness in terms
of the quality and relevance of the children’s educational, emotional and developmental
needs (Lubniewski et al., 2018; More and Travers, 2013).
However, to date, no systematic review has been published summarizing the available
app evaluation tools. This study, by critically analyzing material from multiple sources,
aims to provide an overview of what is available on evaluation tools for educational apps for
children in the form of game-based apps (and not eBook apps) that try to foster the social, emotional and cognitive development of young children in formal and informal learning environments. A systematic literature review (S.L.R.) was conducted to search all English literature published after January 2010 in multiple electronic databases and internet sources to realize this objective. Various combinations of search strings were used due to database construction differences, while the results were cross-referenced to discard repeated references, obtaining those that met the criteria for inclusion.
3. Review questions
To examine what exists in the literature, the following research question was used:
RQ1. Which tools exist to evaluate educational apps?
4. Review method
An S.L.R. aims to identify, evaluate and synthesize all available research evidence relevant to
a particular topic or research question (Cronin et al., 2008; Denyer and Tranfield, 2009;
Kitchenham, 2004). This S.L.R. will summarize the field regarding the evaluation tools for
educational apps for young children and form the basis for more targeted future research.
The present study was conducted according to the methods provided by Khan et al. (2003) and Thomé et al. (2016). The whole procedure included four stages: planning the review, identifying relevant studies in the literature, critical analysis of the literature, and summarizing and interpreting the findings (Figure 1). Furthermore, in this analysis, a well-known checklist, PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses), was also used as a recommendation (Moher et al., 2015).
[Figure 1. General guidelines of the current research: (1) formulate the research question and sub-questions of the review; (2) define the sources of the literature search; (3) set inclusion and exclusion criteria; (4) define the search criteria; (5) search of literature; (6) selection of literature; (7) reading of the selected literature; (8) data extraction and coding; (9) analyze and synthesize the results; (10) generation of the review report. Source: Adapted from Parahoo (2006)]
further information (Cronin et al., 2008, p. 41). Following the researchers’ recommendation, this study gleaned alternative keywords from the database thesaurus and combined keywords using Boolean operators (Cronin et al., 2008). The search string was defined by identifying core concepts such as mobile educational apps, evaluation, rubric, educational level and education, including synonyms, as indicated in Table 2. From these key search terms, replacement terms were identified.
Boolean logical operators such as AND and OR were applied to various databases to
create a subset of search results.
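To make this concrete, the short Python sketch below (written for illustration here, not part of the original study’s tooling) shows one way a Boolean search string can be assembled from the core concepts and synonyms of Table 2: synonyms are OR-ed within each concept group, and the concept groups are AND-ed together.

# Illustrative only: build a Boolean search string from core concepts
# and a subset of the Table 2 synonyms.

CONCEPTS = {
    "mobile apps": ["mobile applications", "iPad apps", "tablet apps",
                    "smart mobile apps", "touchscreen apps"],
    "evaluation": ["assessment", "validation"],
    "rubric": ["instrument", "checklist", "framework", "scale"],
    "educational level": ["early childhood education", "preschool education",
                          "preprimary education", "primary education"],
}

def build_search_string(concepts: dict) -> str:
    """Quote each term, OR the terms within a concept, AND the groups."""
    groups = []
    for concept, synonyms in concepts.items():
        quoted = [f'"{term}"' for term in [concept] + synonyms]
        groups.append("(" + " OR ".join(quoted) + ")")
    return " AND ".join(groups)

print(build_search_string(CONCEPTS))
# ("mobile apps" OR "mobile applications" OR ...) AND ("evaluation" OR ...) AND ...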
4.2.2 Inclusion criteria. Following Kitchenham’s (2004) recommendations, the following inclusion criteria were used:
Inclusion Criteria 1: The study is focused on smart mobile devices – particularly
smartphones and tablets.
Inclusion Criteria 2: The study describes an educational app assessment tool.
Inclusion Criteria 3: The study reports on educational apps for children.
Inclusion Criteria 4: The study is scientifically sound.
In this study, the term “scientifically sound” refers to all tools included in papers published
in peer-reviewed journals, acknowledging their quality. In the inclusion criteria, web sources
such as webpages and blogs that contained educational app assessment tools were also
used, for reasons explained in the section “Execution of the Review”.
4.2.3 Exclusion criteria. Considering Kitchenham’s (2004) recommendations, exclusion criteria were used to exclude studies not related to this review’s subject. If any one of the following criteria was met, the study was excluded from the present review (a screening sketch combining both criterion sets follows the list):
Exclusion Criteria 1: The study is not written in English.
Exclusion Criteria 2: The study reports on mobile educational apps for adults.
Exclusion Criteria 3: The paper is theoretical without providing an assessment tool.
Exclusion Criteria 4: The paper has already been listed in another database.
Exclusion Criteria 5: The paper or the assessment model is not accessible.
Exclusion Criteria 6: The study is published only as an abstract.
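As a minimal illustration of how the two criterion sets interact, the hypothetical Python sketch below keeps a record only if every inclusion criterion holds and no exclusion criterion fires; the record fields are invented for this example and do not come from the original study.

from dataclasses import dataclass

@dataclass
class Record:
    language: str           # e.g. "English" (Exclusion 1)
    mobile_focus: bool      # focused on smartphones/tablets (Inclusion 1)
    describes_tool: bool    # describes an app assessment tool (Inclusion 2)
    targets_children: bool  # reports on educational apps for children (Inclusion 3)
    peer_reviewed: bool     # "scientifically sound" (Inclusion 4)
    accessible: bool        # paper/assessment model obtainable (Exclusion 5)
    abstract_only: bool     # published only as an abstract (Exclusion 6)

def passes_screening(r: Record) -> bool:
    include = r.mobile_focus and r.describes_tool and r.targets_children and r.peer_reviewed
    exclude = (r.language != "English") or (not r.accessible) or r.abstract_only
    return include and not exclude

candidate = Record("English", True, True, True, True, True, False)
print(passes_screening(candidate))  # True -> proceeds to full-text reading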
Table 2. Core concepts and synonyms
Mobile apps – mobile applications, iPad apps, tablet apps, smart mobile apps, touchscreen apps
Evaluation – assessment, validation
Rubric – instrument, comprehensive rubrics, assessment form, criteria, checklist, critical evaluation, tools, framework, scale, rate model, method, design principles, design guidelines, app design
Educational level – education, early childhood education, preschool education, nursery school education, infant school education, preprimary education, primary education
Education – educational, instructional, teaching, learning, training
[Table: number of records per data source at each selection stage – initial search, 1st stage (identification), 2nd stage (screening), 3rd stage (eligibility), 4th stage (eligibility), 5th stage (included)]
6. Data analysis
A complete list of the 11 articles is shown in Table 4, in tool type and chronological order. On average, one study was published per year between 2012 and 2018, with deviations in 2013 (two studies) and 2015 (four studies). Regarding the type of evaluation tool, the split was nearly equal: six studies presented a rubric and five a checklist. Surprisingly, despite the incredible popularity of educational apps for young children, there were only three tools for evaluating educational apps targeting young children (one rubric and two criteria-based tools). Additionally, eight studies (73%) described tools for General Education, and three studies (27%) described tools designed for Special Education.
Besides the studies located in various publications and databases, several freely available tools were also found on webpages and blogs during the review. Another interesting point is that some of these tools, especially Walker’s (2010) rubric, were cited in many studies and served as a basis for the creation of tools mentioned in various studies. Furthermore, everyone interested in evaluating educational apps has unrestricted access to these freely available tools, unlike the tools presented in the 11 selected studies, which are kept in databases with restricted access.
Table 4. Detailed information on evaluation tools described in selected articles
1. Rubric – K-12 education*, general educational apps, General Education, 2013. Characteristics: length: 1 page; levels of performance: 4; scores: 1-4 (1: worst, 4: best); not applicable (N/A) option: no; criteria: 7; sub-criteria: 0. Criteria: Curriculum Connection, Authenticity, Feedback, Differentiation, User Friendliness, Student Use, Student Performance. Study: Walker, H.C. (2013), “Establishing content validity of an evaluation rubric for mobile technology applications utilizing the Delphi method”, doctoral dissertation, Johns Hopkins University, MD, U.S.A.
2. Rubric – K-12 education, science learning, General Education, 2014. Characteristics: length: 1 page; levels of performance: 3; scores: 1-3 (1: worst, 3: best); N/A option: yes; criteria: 6; sub-criteria: 0. Criteria: Accuracy, Relevance of Content, Sharing Findings, Feedback, Scientific Inquiry and Practices, Navigation. Study: Green, L.S., Hechter, R.P., Tysinger, P.D. and Chassereau, K.D. (2014), “Mobile app selection for 5th through 12th grade science: the development of the MASS rubric”, Computers and Education, Vol. 75, pp. 65-71.
3. Rubric – early childhood education**, early literacy learning, General Education, 2015. Characteristics: length: 1 page; levels of performance: 4; scores: 1-4 (1: worst, 4: best); N/A option: no; criteria: 4; sub-criteria: 0. Criteria: Multimodal Features, Literacy Content, The Intuitiveness of App Navigation, User Interactivity. Study: Israelson, M.H. (2015), “The app map: a tool for systematic evaluation of apps for early literacy learning”, The Reading Teacher, Vol. 69 No. 3, pp. 339-349.
4. Rubric – K-12 education, instructional apps, General Education, 2015. Characteristics: length: 8 pages; levels of performance: 5; scores: 1-5 (1: worst, 5: best); N/A option: yes; domains: 3; sub-criteria: 24. Criteria: Instruction, Design, Engagement. Study: Lee, C.-Y. and Cherner, T.S. (2015), “A comprehensive evaluation rubric for assessing instructional apps”, Journal of Information Technology Education: Research, Vol. 14, pp. 21-53.
5. Rubric – K-12 education, general educational apps, Special Education, 2016. Characteristics: length: 3 pages; levels of performance: 3; scores: 1-3 (1: worst, 3: best); N/A option: no; criteria: 13; score: yes (type of score: equation); app usage suggestion: yes. Criteria: Objective, Strategy, Examples, Practice, Error Correction and Feedback, Error Analysis, Progress Monitoring, Motivation, Navigation, Visual and Auditory Stimuli, Font, Customized Settings, Content Error and Bias. Study: Ok, M.W., Kim, M.K., Kang, E.Y. and Bryant, B.R. (2016), “How to find good apps: an evaluation rubric for instructional apps for teaching students with learning disabilities”, Intervention in School and Clinic, Vol. 51 No. 4, pp. 244-252.
6. Rubric – early childhood education, general learning, General Education, 2017. Characteristics: length: 3 pages; levels of performance: 4; scores: 1-4 (1: worst, 4: best); N/A option: no; domains: 4; sub-criteria: 18. Criteria: Educational Content, Design, Functionality, Technical Characteristics. Study: Papadakis, S., Kalogiannakis, M. and Zaranis, N. (2017), “Designing and creating an educational app rubric for preschool teachers”, Education and Information Technologies, Vol. 22 No. 6, pp. 3147-3165.
7. Criteria – early childhood education, general learning, General Education, 2012. Characteristics: length: 1 page; levels of performance: 4; scores: 1-4 (1 = no, 2 = unsure, 3 = somewhat, 4 = yes); N/A option: no; criteria: 6; sub-criteria: 18; score: yes (type of score: equation). Criteria: Educational, Appropriate, Child-Friendly, Enjoyable/Engaging, Progress Monitoring/Assessment, Individualizing Features. Study: McManis, L.D. and Gunnewig, S.B. (2012), “Finding the education in educational technology with early learners”, Young Children, Vol. 67 No. 3, pp. 14-24.
8-9. [Entries not recoverable from the source text; by the table’s tool-type and chronological ordering these correspond to More, C.M. and Travers, J.C. (2013) and Lee, J.S. and Kim, S.W. (2015).]
10. Criteria – K-12 education, general educational apps, Special Education, 2015. Characteristics: length: 3 pages; scores: D = disagree, N = neutral, A = agree, na = not applicable, u = uncertain; N/A option: yes; criteria: 4; sub-criteria: 11; score: no. Criteria: Design Features, Individuation, Support, Overall Impression. Study: Weng, P.L. (2015), “Developing an app evaluation rubric for practitioners in special education”, Journal of Special Education Technology, Vol. 30 No. 1, pp. 43-58.
11. Criteria – K-12 education, general educational apps, General Education, 2018. Characteristics: length: 1 page; levels of performance: 3; scores: yes, no, somewhat; N/A option: yes; criteria: 4; sub-criteria: 26; score: yes (type of score: summary); recommendation for app use: yes. Criteria: Student Interest, Design Features, Connection to Curriculum, Instruction Features. Study: Lubniewski, K.L., Arthur, C.L. and Harriott, W. (2018), “Evaluating instructional apps using the app checklist for educators (A.C.E.)”, International Electronic Journal of Elementary Education, Vol. 10 No. 3, pp. 323-329.
Notes: *K-12 education is a term for primary and secondary education, including kindergarten; **early childhood education is a term for education (formal and informal) from birth up to the age of eight
[Figure 2. Diagram of the S.L.R. using a PRISMA flow diagram]
For these reasons, these tools were also included in this review under the characterization of “nonscientific-based tools”. In this study, the term “nonscientific-based tools” refers to freely available online tools for evaluating educational apps that have not been peer-reviewed and that present their quality criteria without focusing on research methodology and scientific evidence. Table 5 presents these tools in chronological order.
When examining the publication years of the seven nonscientific-based tools, the earliest publication date was 2010 and the latest was 2017. The number of online evaluation tools increased significantly between 2010 and 2012; only three tools were published in the following years. Regarding the type of tools presented, four were rubrics and three were checklists.
A question that arises from the information presented in Table 4 is whether these 11 tools have fundamental characteristics in common. A simple listing of criteria, without further categorization, revealed over 250 different elements across the 11 tools. Of these, only five elements, such as design features and feedback, are common across the 11 tools. The present study tried to categorize these different elements further. The tool proposed by Lee and Cherner (2015) was used as a basis for comparison, resulting in a final list of 34 grouped criteria (Table 6), a count significantly smaller than the initial 250 sub-criteria.
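The hypothetical Python sketch below conveys the flavor of this consolidation step: raw criterion labels from individual tools are mapped onto a shared vocabulary of grouped criteria so the tools can be compared on a common basis. The mapping shown is invented for illustration; the study’s actual grouping used Lee and Cherner’s (2015) rubric as its reference.

# Illustrative only: map raw criterion labels onto grouped criteria.
GROUPED = {
    "feedback": {"feedback", "error correction and feedback"},
    "navigation": {"navigation", "the intuitiveness of app navigation"},
    "content appropriateness": {"authenticity", "accuracy", "relevance of content"},
}

def group_criteria(raw_labels):
    """Return the set of grouped criteria covered by a tool's raw labels."""
    covered = set()
    for label in raw_labels:
        for group, synonyms in GROUPED.items():
            if label.lower() in synonyms:
                covered.add(group)
    return covered

walker_2013 = ["Curriculum connection", "Authenticity", "Feedback"]
print(group_criteria(walker_2013))  # {'content appropriateness', 'feedback'} (order varies)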
Table 5. Detailed information about nonscientific-based tools
1. Rubric – K-12 education, general educational apps, General Education, 2010. Characteristics: length: 1 page; levels of performance: 4; scores: 1-4 (1: worst, 4: best); N/A option: no; criteria: 6. Criteria: Curriculum Connection, Authenticity, Feedback, Differentiation, User Friendliness, Student Motivation. Source: Walker, H.C., “Evaluation Rubric for iPod Apps”, https://bit.ly/31WG05U
2. Rubric – K-12 education, general educational apps, General Education, 2011. Characteristics: length: 1 page; levels of performance: 4; scores: 1-4 (1: worst, 4: best); N/A option: no; criteria: 7. Criteria: Curriculum Connection, Authenticity, Feedback, Differentiation, User Friendliness, Motivation, Reporting. Source: Schrock, K., “Evaluation rubric for iPod/iPad apps”, www.ipads4teaching.net/uploads/3/9/2/2/392267/ipad_app_rubric.pdf
3. Rubric – K-12 education, general educational apps, Special Education, 2011. Characteristics: length: 1 page; levels of performance: 4; scores: 1-4 (1: worst, 4: best); N/A option: no; criteria: 14; score: yes (type of score: simple equation); recommendation for app use: yes. Criteria: Curriculum Connection, Type of Skills Practiced, Age and Grade Level, Languages, Adjustable Levels, Prompts, Ease of Use, Engagement, Customization, Alternative Access, Data Collected, National Curriculum, Gender Neutral, When Was the App Updated. Source: Van Houten, J., “iEvaluate app rubric”, https://bit.ly/30kqobT
4. Rubric – K-12 education, general educational apps, General Education, 2012. Characteristics: length: 1 page; levels of performance: 4; scores: 1-4 (1: worst, 4: best); N/A option: no; criteria: 7; score: no. Criteria: Relevance, Customization, Feedback, Thinking Skills, Usability, Engagement, Sharing. Source: Vincent, T., “Ways to evaluate educational apps”, Learning in Hand, https://bit.ly/1LhV6ra
5. Checklist – K-12 education, general educational apps, General Education, 2015. Characteristics: length: 1 page; levels of performance: 2; scores: yes/no; N/A option: yes; criteria: 12; score: no. Criteria: Curriculum Connection, Authenticity, Feedback, Differentiation, User-Friendliness, Student Motivation, Reporting, Sound, Instructions, Support Page, Navigation, Modalities. Source: Schrock, K., “Critical evaluation of a content-based iPad/iPod app”, www.ipads4teaching.net/uploads/3/9/2/2/392267/evalipad_content.pdf
6. Checklist – K-12 education, general educational apps, General Education, 2016. Characteristics: length: 3 pages; levels of performance: 2; scores: yes/no; N/A option: yes; criteria: 2 (technical/user experience criteria plus additional content criteria specific to app type); sub-criteria: 11 technical/user experience criteria, 11 content criteria (story apps), 11 content criteria (game, play, toy or creation apps); score: no. Criteria: Technical/User Experience, Criteria for Story Apps, Criteria for Game, Play or Creation Apps. Source: Haines, C., “Evaluating Apps and New Media for Young Children: A Rubric”, https://bit.ly/33UJGHd
7. Checklist – K-12 education, general educational apps, General Education, 2017. Characteristics: length: 6 pages; N/A option: no; criteria: 8; sub-criteria: 4; items: 80; score: no. Criteria: Content (Story, Information and Activity), Art, Audio, Audience, Purpose, Functionality and Navigation, Instructions, Guides and Support Materials for Grownups, Creative Team. Source: KIDMAP, “The DIG checklist for inclusive, high-quality children’s media”, www.joinKIDMAP.org/digchecklist/
Table 6. Grouped criteria per study (columns: Lee and Cherner, 2015; Papadakis et al., 2017; Green et al., 2014; Ok et al., 2016; Israelson, 2015; Walker, 2013; More and Travers, 2013; Lee and Kim, 2015; McManis and Gunnewig, 2012; Lubniewski et al., 2018; Weng, 2015). [Rows 1-25 are not recoverable from the source text; surviving rows are listed with the number of tools covering each criterion.]
26. Performance and reliability (1)
27. Interoperability of system (4)
28. App cost (2)
29. Copyrights management (3)
30. Data spill (2)
31. Support page (3)
32. Team (1)
33. Free of external links (1)
34. Developer information (1)
35. Frequency of updates (–)
This study also tried to determine which of these 34 criteria are most used in the various tools; Figure 3 presents the data separately for rubrics and checklists and cumulatively. The “navigation” criterion was present in all rubrics, followed by the “content appropriateness,” “value of errors,” “screen design” and “learner control” criteria, each of which appeared in five of the six tools. Strangely enough, the “cultural sensitivity” criterion is met in only three rubrics. However, the situation is somewhat different for the grouped criteria used in checklists, as three grouped criteria were found in all tools (“screen design,” “ease of use,” “learner control”). In contrast with the rubrics, almost all the checklists (four out of five) mentioned the “cultural sensitivity” criterion.
The present study tried to understand the degree of completeness of the selected studies’ tools by comparing the total number of grouped criteria with the criteria found in each separate tool. The most complete tools were two rubrics (Lee and Cherner, 2015; Papadakis et al., 2017) and one checklist (Lee and Kim, 2015). All the other tools covered less than 50% of the grouped criteria (Table 7).
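The completeness figures in Table 7 follow from a simple proportion: the number of grouped criteria a tool covers divided by the 35 grouped criteria in total. The short Python sketch below reproduces the reported percentages from the counts in Table 7 (only the first five tools are shown).

TOTAL_CRITERIA = 35

criteria_found = {
    "Lee and Cherner (2015)": 24,
    "Lee and Kim (2015)": 20,
    "Papadakis et al. (2017)": 19,
    "More and Travers (2013)": 13,
    "Lubniewski et al. (2018)": 13,
}

for tool, found in criteria_found.items():
    pct = round(100 * found / TOTAL_CRITERIA)
    print(f"{tool}: {found}/{TOTAL_CRITERIA} = {pct}%")
# Lee and Cherner (2015): 24/35 = 69%, Lee and Kim (2015): 20/35 = 57%, ...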
As already noted, nonscientific-based tools were also included in this review. The same procedure was followed for these tools to achieve comparable results. The grouped criteria, as already formed, were used as a basis for the nonscientific-based tools. One extra item, “frequency of app updates,” was used for these tools, in addition to the 34 items used for the scientific-based tools (Table 8).
The most common grouped criterion in the nonscientific tools is “content appropriateness,” which is met in all tools, followed by “higher-order thinking skills” (six times), “value of errors” (five times), “level of learning material” (five times), “navigation” (five times) and “learner control” (five times). In both groups of tools (scientific and nonscientific), the “navigation” and “learner control” grouped criteria are observed at higher rates. This study also tried to determine the degree of completeness of the nonscientific tools. Unfortunately, these tools fulfill the grouped criteria only to a limited extent. For instance, the Walker tool fulfills only six grouped criteria and the Schrock tool only seven. Only the KIDMAP tool covers the grouped criteria to a satisfactory degree (51%) (Table 9).
For a detailed explanation of the criteria mentioned above and their relevance to early childhood education, readers can refer to the Lee and Cherner (2015) study.
7. Discussion of results
7.1 RQ1: Which tools exist to evaluate educational apps?
After selecting the eligible studies, this study identified 11 articles describing two different approaches to evaluating educational apps: six studies present a rubric, and five studies present a checklist. Additionally, this study identified seven nonscientific-based tools: four web sources present a rubric, while three present a checklist. The rubric provided by Ok et al. (2016) can be considered “hybrid,” as it combines a rubric’s content completeness with the format of a checklist.
In general, the results for RQ1 revealed that the existing body of scientific-based tools is smaller than expected, given the number of research papers published about the low quality of educational apps. During this review, evaluation tools that are available for free on various websites were also found. Although these sources might be used as “a first step” in educational app evaluation, they are not based on scientific evidence and omit important app assessment aspects. Only one freely available tool (KIDMAP) can be considered comprehensive enough to evaluate educational apps. The other six tools have limitations and can be considered inadequate in terms of their evaluation power.
[Figure 3. Grouped criteria met in the two tool types, separately and collectively]
Table 7. Number of grouped criteria per tool (criteria found out of 35, and percentage)
Lee and Cherner (2015) – 24 – 69%
Lee and Kim (2015) – 20 – 57%
Papadakis et al. (2017) – 19 – 54%
More and Travers (2013) – 13 – 37%
Lubniewski et al. (2018) – 13 – 37%
McManis and Gunnewig (2012) – 12 – 34%
Weng (2015) – 12 – 34%
Green et al. (2014) – 9 – 26%
Walker (2013) – 8 – 23%
Ok et al. (2016) – 8 – 23%
Israelson (2015) – 6 – 17%
[Table 8. Grouped criteria per nonscientific-based tool: Walker (2010), Schrock (2011), Vincent (2012), Schrock (2015), Van Houten (2011), Haines (2016), KIDMAP (2017); the table rows are not recoverable from the source text.]
8. Threats to validity
Even though this work was performed following the systematic mapping guidelines, threats to the validity of the presented results still exist. Although custom strings containing a rich collection of terms were used to search for papers, potentially relevant publications missed by the advanced search might exist. It is recommended that at least two different reviewers independently review titles, abstracts and later full papers for exclusion (Thomé et al., 2016). In this study, only one reviewer – the author – selected
the papers and did the review. In the case of a single researcher, Kitchenham (2004)
recommends that the single reviewer consider discussing included and excluded papers
with an expert panel. Following this recommendation, the researcher discussed the inclusion
and exclusion procedure with an expert panel of two professionals with research experience
from the Department of Preschool Education, University of Crete, Greece. To deal with
publication bias, the researcher, in conjunction with the expert panel, used the search
strategies identified by Kitchenham (2004), including:
grey literature;
conference proceedings; and
communicating with experts working in the field for any unpublished literature.
Cronin et al. (2008, p.38) state that a literature review must collect information from multiple
sources. This study did not limit the search to only one scientific domain, but it included
databases and publishers from three different domains (education, social science, and
computing domain), as well as standalone journals. Furthermore, to mitigate threats based
on the search string, the study performed extra searches with Google Scholar by adopting backward and forward search (Webster and Watson, 2002). Denyer and Tranfield (2009)
note that other sources such as websites, workshops, etc., and other “grey literature” are all
critical in addition to academic papers. Whether to search for specific data sources depends
on the field and the evidence available (Denyer and Tranfield, 2009, p. 684). This study
included nonscientific-based research to provide a comprehensive overview of the available
evaluation tools for reasons already explained.
In this review, tools that referred to Special Education were also included. A preliminary reading of these tools led to the decision that they also apply to General Education apps. Furthermore, these tools’ inclusion will allow special education teachers, software developers and, of course, parents to become informed. Some other tools refer in their title to iPod/iPad apps or, in general, to Apple products. These tools were also included in the review, as the same guidelines apply to apps for both the iOS and Android operating systems.
A paper entitled “Does the app fit? Using the Apps Consideration Checklist” (Tammaro and Jerome, 2012) was cited in some papers. Although several attempts were made to find this paper, including searching additional databases such as ResearchGate and Academia and contacting former and current publishers of the publication (C.E.C. and Exinn), all efforts proved futile. Thus, this paper was not included in the review. Relevant papers
were also found, but they were not included in the results for various reasons. For instance,
Buckler (2012) presented a rubric strictly oriented toward evaluating apps for adults with
special needs. Cherner et al. (2016) published a rubric for evaluating apps from the teachers’
scope, e.g. apps that help teachers complete everyday tasks for their daily teaching practice.
Additionally, the Chen (2016) rubric was also excluded as it evaluates language-learning
apps for second-language adult learners. Stoyanov and his colleagues presented an interesting evaluation tool in the form of a rubric called MARS (Mobile App Rating Scale); it was excluded as it evaluates mobile health apps (Stoyanov et al., 2015). Finally, Heyman (2018) published a useful paper that addresses all the multimedia learning features that support or hinder learning. As these features are not presented in a rubric or a checklist, the paper was excluded.
For the same reason, some extremely valuable papers, such as those of Hirsh-Pasek et al. (2015), Marsh et al. (2018), Rosell-Aguilar (2017) and Zosh et al. (2017), among others, were also excluded from the review, as they were in the form of guidelines. Furthermore, various other nonscientific tools (mainly in the form of rubrics) were mentioned on several sites, but their links redirected to an error page. For the reasons mentioned above, we are reasonably confident that we are unlikely to have missed many significant relevant studies or tools.
References
Anthony, L., Brown, Q., Tate, B., Nias, J., Brewer, R. and Irwin, G. (2014), “Designing smarter touch-
based interfaces for educational contexts”, Personal and Ubiquitous Computing, Vol. 18 No. 6,
pp. 1471-1483.
Australian Cyber Security Centre (2019), “Data spill management guide”, available at: www.cyber.gov.
au/publications/data-spill-management-guide (accessed March 2020).
Baccaglini-Frank, A. and Maracci, M. (2015), “Multi-touch technology and preschoolers’ development of number sense”, Digital Experiences in Mathematics Education, Vol. 1 No. 1, pp. 7-27.
Beschorner, B. and Hutchison, A. (2013), “iPads as a literacy teaching tool in early childhood”,
International Journal of Education in Mathematics, Science and Technology, Vol. 1 No. 1,
pp. 16-24.
Blum-Ross, A., Donoso, V., Dinh, T., Mascheroni, G., O’Neill, B., Riesmeyer, C. and Stoilova, M. (2018), Looking Forward: Technological and Social Change in the Lives of European Children and Young People, report for the ICT Coalition for Children Online, ICT Coalition, Brussels.
Bouck, E.C., Satsangi, R. and Flanagan, S. (2016), “Focus on inclusive education: evaluating apps for
students with disabilities: supporting academic access and success: Bradley Witzel, editor”,
Childhood Education, Vol. 92 No. 4, pp. 324-328.
Buckleitner, W. (1999), “The state of children’s software evaluation-yesterday, today and in the 21st
century”, Information Technology in Childhood Education Annual, Vol. 1 No. 1, pp. 211-220.
Buckler, T. (2012), “Is there an app for that? Developing an evaluation rubric for apps for use with
adults with special needs”, The Journal of B.S.N. Honors Research, Vol. 5 No. 1, pp. 19-32.
Callaghan, M.N. (2018), “Connecting learning and developmental sciences to educational preschool
apps: analyzing app design features and testing their effectiveness”, Doctoral dissertation, UC
Irvine.
Callaghan, M.N. and Reich, S.M. (2018), “Are educational preschool apps designed to teach? An analysis
of the app market”, Learning, Media and Technology, Vol. 43 No. 3, pp. 280-293.
Chen, X. (2016), “Evaluating language-learning mobile apps for second-language learners”, Journal of
Educational Technology Development and Exchange, Vol. 9 No. 2, p. 3.
Chen, T., Hsu, H.M., Stamm, S.W. and Yeh, R. (2019), “Creating an instrument for evaluating critical
thinking apps for college students”, E-Learning and Digital Media, Vol. 16 No. 6, pp. 433-454.
Cherner, T., Dix, J. and Lee, C. (2014), “Cleaning up that mess: a framework for classifying educational
apps”, Contemporary Issues in Technology and Teacher Education, Vol. 14 No. 2, pp. 158-193.
Cherner, T., Lee, C.Y., Fegely, A. and Santaniello, L. (2016), “A detailed rubric for assessing the quality
of teacher resource apps”, Journal of Information Technology Education: Innovations in Practice,
Vol. 15, pp. 117-143.
Cohen, M., Hadley, M. and Frank, M. (2011), Young Children, Apps and iPad, Michael Cohen Group, New York, NY.
Colliver, Y., Hatzigianni, M. and Davies, B. (2019), “Why can’t I find quality apps for my child? A model
to understand all stakeholders’ perspectives on quality learning through digital play”, Early
Child Development and Care, pp. 1-15.
Cooper, R.J. (2012), “Hi ConnSENSE fans! RJ cooper here. ConnSENSE bulletin”, available at: www.
connsensebulletin.com/2012/09/hi-connsense-fans-rj-cooper-here/ (accessed August 2019).
Cronin, P., Ryan, F. and Coughlan, M. (2008), “Undertaking a literature review: a step-by-step
approach”, British Journal of Nursing, Vol. 17 No. 1, pp. 38-43.
Denyer, D. and Tranfield, D. (2009), “Producing a systematic review”, in Buchanan, D. and Bryman, A.
(Eds), The Sage Handbook of Organizational Research Methods, Sage, London, pp. 671-689.
Department of Education and Training (2016), “Evaluation of the early learning language Australia
trial”, Deloitte Access Economics, available at: https://docs.education.gov.au/system/files/doc/
other/2016-ella-evaluation-report.pdf (accessed August 2019).
European Commission (2016), Proposal for a Regulation of the European Parliament and of the Council
on the Protection of Individuals with Regard to the Processing of Personal Data and on the Free
Movement of Such Data (General Data Protection Regulation), European Commission, Brussels.
Falloon, G. (2014), “What’s going on behind the screens? Researching young students’ learning
pathways using iPads”, Journal of Computer Assisted Learning, Vol. 30 No. 4, pp. 318-336.
Goodwin, K. (2012), Use of tablet technology in the classroom, Curriculum and Learning Innovation
Centre, N.S.W. Department of Education and Communities, Strathfield, N.S.W.
Green, L.S., Hechter, R.P., Tysinger, P.D. and Chassereau, K.D. (2014), “Mobile app selection for 5th through
12th grade science: the development of the MASS rubric”, Computers and Education, Vol. 75, pp. 65-71.
Greenhalgh, T. and Peacock, R. (2005), “Effectiveness and efficiency of search methods in systematic
reviews of complex evidence: audit of primary sources”, BMJ, Vol. 331 No. 7524, pp. 1064-1065.
Guernsey, L. and Levine, M.H. (2015a), “Pioneering literacy in the digital age”, Technology and Digital
Media in the Early Years: Tools for Teaching and Learning, pp. 104-114.
Guernsey, L. and Levine, M.H. (2015b), Tap, Click, Read: Growing Readers in a World of Screens, John
Wiley and Sons.
Haddaway, N.R., Collins, A.M., Coughlin, D. and Kirk, S. (2015), “The role of Google Scholar in evidence reviews and its applicability to grey literature searching”, PLoS One, Vol. 10 No. 9, p. e0138237.
Haines, C. (2016), “Evaluating apps and new media for young children: a rubric”, available at: https://
nevershushed.files.wordpress.com/2016/09/2016evaluatingappsandnewmediaforyoungchildrenarubric.
pdf (accessed August 2019).
Harrison, T.R. and Lee, H.S. (2018), “iPads in the mathematics classroom: developing criteria for
selecting appropriate learning apps”, International Journal of Education in Mathematics, Science
and Technology, Vol. 6 No. 2, pp. 155-172.
Haugland, S. (1999), “Computers and young children: the newest software that meets the developmental
needs of young children”, Early Childhood Education Journal, Vol. 26 No. 4, pp. 245-254.
Heyman, N. (2018), “Identifying features of apps to support using evidence-based language intervention
with children”, Assistive Technology, Vol. 32 No. 6, pp. 1-11, doi: 10.1080/10400435.2018.1553078.
Hiniker, A., Sobel, K., Hong, S.R., Suh, H., Irish, I., Kim, D. and Kientz, J.A. (2015), “Touchscreen
prompts for preschoolers: designing developmentally appropriate techniques for teaching young
children to perform gestures”, Proceedings of the 14th International Conference on Interaction
Design and Children, pp. 109-118.
Hirsh-Pasek, K., Zosh, J.M., Golinkoff, R.M., Gray, J.H., Robb, M.B. and Kaufman, J. (2015), “Putting
education in ‘educational’ apps: lessons from the science of learning”, Psychological Science in the
Public Interest, Vol. 16 No. 1, pp. 3-34.
Israelson, M.H. (2015), “The app map: a tool for systematic evaluation of apps for early literacy learning”, The Reading Teacher, Vol. 69 No. 3, pp. 339-349.
Khan, K.S., Kunz, R., Kleijnen, J. and Antes, G. (2003), “Five steps to conducting a systematic review”, Journal of the Royal Society of Medicine, Vol. 96 No. 3, pp. 118-121.
KIDMAP (2017), “The dig checklist for inclusive, high-quality children’s media”, available at: www.joinkidmap.org/digchecklist/ (accessed August 2019).
Kitchenham, B. (2004), Procedures for Undertaking Systematic Reviews, Joint Technical Report,
Computer Science Department, Keele University (TR/SE0401) and National I.C.T. Australia Ltd.
Kolås, L., Nordseth, H. and Munkvold, R. (2016), “Learning with educational apps: a
qualitative study of the most popular free apps in Norway”, 15th International
Conference on Information Technology Based Higher Education and Training (ITHET),
Istanbul, IEEE, pp. 1-8.
Kucirkova, N. (2014), “iPads in early education: separating assumptions and evidence”, Frontiers in
Psychology, Vol. 5 No. 715, pp. 1-3.
Kucirkova, N. (2017), “iRPD – a framework for guiding design-based research for iPad apps”, British
Journal of Educational Technology, Vol. 48 No. 2, pp. 598-610.
Kucirkova, N. (2019), “Reading to your child? Digital books are as important as print books”, available
at: https://sciencenorway.no/books-children-opinion/reading-to-your-child-digital-books-are-as-
important-as-print-books/1606950 (accessed March 2020).
Kucirkova, N., Wells Rowe, D., Oliver, L. and Piestrzynski, L.E. (2017), “Children’s writing with and on
screen(s): a narrative literature review”, COST ACTION ISI1410 DigiLitEY.
Larkin, K., Kortenkamp, U., Ladel, S. and Etzold, H. (2019), “Using the ACAT framework to evaluate
the design of two geometry apps: an exploratory study”, Digital Experiences in Mathematics
Education, Vol. 5 No. 1, pp. 59-92.
Lee, C.Y. and Cherner, T.S. (2015), “A comprehensive evaluation rubric for assessing instructional
apps”, Journal of Information Technology Education: Research, Vol. 14, pp. 21-53.
Lee, J.S. and Kim, S.W. (2015), “Validation of a tool evaluating educational apps for smart education”,
Journal of Educational Computing Research, Vol. 52 No. 3, pp. 435-450.
Levy, Y. and Ellis, T.J. (2006), “A systems approach to conduct an effective literature review in support
of information systems research”, Informing Science: The International Journal of an Emerging
Transdiscipline, Vol. 9, pp. 181-212.
Lubniewski, K.L., Arthur, C.L. and Harriott, W. (2018), “Evaluating instructional apps using the app checklist
for educators (A.C.E.)”, International Electronic Journal of Elementary Education, Vol. 10 No. 3,
pp. 323-329.
McManis, L.D. and Gunnewig, S.B. (2012), “Finding the education in educational technology with early
learners”, Young Children, Vol. 67 No. 3, pp. 14-24.
Marsh, J., Plowman, L., Yamada-Rice, D., Bishop, J., Lahmar, J. and Scott, F. (2018), “Play and
creativity in young children’s use of apps”, British Journal of Educational Technology,
Vol. 49 No. 5, pp. 870-882.
Marsh, J., Plowman, L., Yamada-Rice, D., Bishop, J.C., Lahmar, J., Scott, F., Davenport, A., Davis, S., French, K., Piras, M., Thornhill, S., Robinson, P. and Winter, P. (2015), “Exploring play and creativity in
preschoolers’ use of apps: report for early years practitioners”, available at: www.techandplay.
org/reports/TAP_Final_Report.pdf (accessed August 2019).
Martens, M., Rinnert, G.C. and Andersen, C. (2018), “Child-centered design: developing an inclusive
letter writing app”, Frontiers in Psychology, Vol. 9, p. 2277.
Meyer, M., Adkins, V., Yuan, N., Weeks, H.M., Chang, Y.J. and Radesky, J. (2019), “Advertising in young
children’s apps: a content analysis”, Journal of Developmental and Behavioral Pediatrics, Vol. 40
No. 1, pp. 32-39.
Moher, D., Shamseer, L., Clarke, M., Ghersi, D., Liberati, A., Petticrew, M., Shekelle, P. and Stewart, L.A.
(2015), “Preferred reporting items for systematic review and meta-analysis protocols (PRISMA-
P) 2015 statement”, Systematic Reviews, Vol. 4 No. 1, p. 1.
More, C.M. and Travers, J.C. (2013), “What’s app with that? Selecting educational apps for young
children with disabilities”, Young Exceptional Children, Vol. 16 No. 2, pp. 15-32.
Mouza, C. and Barrett-Greenly, T. (2015), “Bridging the app gap: an examination of a professional
development initiative on mobile learning in urban schools”, Computers and Education, Vol. 88,
pp. 1-14.
Naidoo, J.C. (2014), Diversity Programming for Digital Youth: Promoting Cultural Competence in the
Children’s Library, ABC-CLIO.
Neumann, M.M. (2018), “Using tablets and apps to enhance emergent literacy skills in young children”,
Early Childhood Research Quarterly, Vol. 42, pp. 239-246.
Neumann, M.M., Merchant, G. and Burnett, C. (2018), “Young children and tablets: the views of parents
and teachers”, Early Child Development and Care, pp. 1-12.
Notari, M.P., Hielscher, M. and King, M. (2016), “Educational apps ontology”, in Churchill, D., Fox, B.
and King, M. (Eds), Mobile Learning Design, Lecture Notes, Springer, Singapore, pp. 83-96.
Ok, M.W., Kim, M.K., Kang, E.Y. and Bryant, B.R. (2016), “How to find good apps: an evaluation rubric
for instructional apps for teaching students with learning disabilities”, Intervention in School
and Clinic, Vol. 51 No. 4, pp. 244-252.
Papadakis, S. (2020), “Robots and robotics kits for early childhood and first school age”, International
Journal of Interactive Mobile Technologies (iJIM), Vol. 14 No. 18, pp. 34-56.
Papadakis, S., Kalogiannakis, M. and Zaranis, N. (2017), “Designing and creating an educational app
rubric for preschool teachers”, Education and Information Technologies, Vol. 22 No. 6,
pp. 3147-3165.
Papadakis, S., Vaiopoulou, J., Kalogiannakis, M. and Stamovlasis, D. (2020), “Developing and exploring
an evaluation tool for educational apps (ETEA) targeting kindergarten children”, Sustainability,
Vol. 12 No. 10, p. 4201.
Papadakis, S., Zaranis, N. and Kalogiannakis, M. (2019), “Parental involvement and attitudes towards
young Greek children’s mobile usage”, International Journal of Child-Computer Interaction,
Vol. 22, p. 100144.
Parahoo, K. (2006), Nursing Research – Principles, Process and Issues, 2nd ed., Palgrave, Houndsmill.
Petri, G. and von Wangenheim, C.G. (2016), “How to evaluate educational games: a systematic literature review”, Journal
of Universal Computer Science, Vol. 22 No. 7, pp. 992-1021.
Petticrew, M. and Roberts, H. (2008), Systematic Reviews in the Social Sciences: A Practical Guide, John
Wiley and Sons.
Pew Research Center (2017), “A third of Americans live in a household with three or more
smartphones”, available at: www.pewresearch.org/fact-tank/2017/05/25/a-third-of-americans-
live-in-a-household-with-three-or-more-smartphones/ (accessed August 2019).
Radesky, J.S., Schumacher, J. and Zuckerman, B. (2015), “Mobile and interactive media use by young
children: the good, the bad, and the unknown”, Pediatrics, Vol. 135 No. 1, pp. 1-3.
Rideout, V.J. and Katz, V.S. (2016), Opportunity for All? Technology and Learning in Lower-Income
Families. A Report of the Families and Media Project, The Joan Ganz Cooney Center at Sesame
Workshop, New York, NY.
Rosell-Aguilar, F. (2017), “State of the app: a taxonomy and framework for evaluating language
learning mobile applications”, CALICO Journal, Vol. 34 No. 2, pp. 243-258.
Sawers, P. (2019), “Google asks android developers to categorize apps based on content and target age”,
available at: https://venturebeat.com/2020/03/05/microsofts-ai-generates-3d-objects-from-2d-
images/ (accessed March 2020).
Schrock, K. (2011), “Evaluation rubric for iPod/iPad apps”, available at: www.ipads4teaching.net/uploads/3/9/2/2/392267/ipad_app_rubric.pdf (accessed August 2019).
Schrock, K. (2015), “Critical evaluation of a content-based iPad/iPod app”, available at: www.ipads4teaching.net/uploads/3/9/2/2/392267/evalipad_content.pdf (accessed August 2019).
Shing, S. and Yuan, B. (2016), “Apps developed by academics”, Journal of Education and Practice, Vol. 7 No. 33, pp. 1-9.
Shuler, C., Levine, Z. and Ree, J. (2012), iLearn II: An Analysis of the Education Category of Apple’s App
Store, The Joan Ganz Cooney Center at Sesame Workshop, New York, NY.
Statista (2019), “Average price of paid apps in the Apple App Store and Google Play as of 1st quarter
2018 (in U.S. dollars)”, available at: www.statista.com/statistics/262387/average-price-of-
android-ipad-and-iphone-apps/ (accessed August 2019).
Stoyanov, S.R., Hides, L., Kavanagh, D.J., Zelenko, O., Tjondronegoro, D. and Mani, M. (2015), “Mobile
app rating scale: a new tool for assessing the quality of health mobile apps”, JMIR mHealth and
Uhealth, Vol. 3 No. 1, p. e27.
Tammaro, M.T. and Jerome, M.K. (2012), “Does the app fit? Using the apps consideration checklist”, in
Ault, M.J. and Bausch, M.E. (Eds), Apps for All Students: A Teacher’s Desktop Guide,
Technology and Media Division (T.A.M.) for the Council for Exceptional Children, Reston, VA,
pp. 23-31.
Thomé, A.M.T., Scavarda, L.F. and Scavarda, A.J. (2016), “Conducting systematic literature review in
operations management”, Production Planning and Control, Vol. 27 No. 5, pp. 408-420.
Vaala, S., Ly, A. and Levine, M.H. (2015), Getting a Read on the App Stores: A Market Scan and
Analysis of Children’s Literacy Apps, Full Report, Joan Ganz Cooney Center at Sesame
Workshop. 1900 Broadway, New York, NY 10023.
Van Houten, J. (2011), “Ievaluate app rubric”, available at: https://static.squarespace.com/static/
50eca855e4b0939ae8bb12d9/50ecb58ee4b0b16f176a9e7d/50ecb593e4b0b16f176aa97b/
1330388174777/JeanetteVanHoutenRubric.pdf (accessed August 2019).
Vincent, T. (2012), “Ways to evaluate educational apps”, available at: http://learninginhand.com/blog/
ways-to-evaluate-educational-apps.html (accessed August 2019).
Walker, H. (2010), “Evaluation rubric for iPod apps”, available at: http://learninginhand.com/blog/
evaluation-rubric-for-educational-apps.html (accessed August 2019).
Walker, H.C. (2013), “Establishing content validity of an evaluation rubric for mobile technology
applications utilizing the Delphi method”, Doctoral dissertation, Johns Hopkins University, MD,
U.S.A.
Webster, J. and Watson, R. (2002), “Analyzing the past to prepare for the future: writing a literature
review”, M.I.S. Quarterly, Vol. 26 No. 2, pp. xiii-xxiii.
Weng, P.L. (2015), “Developing an app evaluation rubric for practitioners in special education”, Journal
of Special Education Technology, Vol. 30 No. 1, pp. 43-58.
Wojdynski, B.W. and Bang, H. (2016), “Distraction effects of contextual advertising on online news
processing: an eye-tracking study”, Behaviour and Information Technology, Vol. 35 No. 8,
pp. 654-664.
Zosh, J.M., Lytle, S.R., Golinkoff, R.M. and Hirsh-Pasek, K. (2017), “Putting the education back in
educational apps: how content and context interact to promote learning”, in Barr, R. and
Linebarger, D.N. (Eds), Media Exposure during Infancy and Early Childhood, Springer, Cham,
pp. 259-282.
Corresponding author
Stamatios Papadakis can be contacted at: [email protected]