Dissertation and Scholarly Research Recipes For Success
Dissertation and Scholarly Research: Recipes for Success, 2018 Edition
Copyright © 2018 by Marilyn K. Simon and Jim Goes
ISBN-10: 1546643885
ISBN-13: 978-1546643883
Dissertation Success, LLC has the exclusive rights to reproduce this work,
to prepare derivative works from this work, to publicly distribute this work,
to publicly perform this work, and to publicly display this work.
All rights reserved. No part of this publication may be reproduced,
stored in a retrieval system, or transmitted, in any form or by any means,
electronic, mechanical, photocopying, recording, or otherwise,
without the prior written permission of the copyright owner.
Printed in the United States of America
Preface
A Dissertation Guide for Professional Learners
Doctoral education has changed dramatically over the last three decades.
Traditionally, the pursuit of a doctoral or research credential involved study at a
large, traditional research university, and was reserved for those seeking careers
in academia or research. The process of completing doctoral level work usually
required a commitment to full time study, varied tremendously between
institutions, and was often somewhat mysterious. As a result of time and place
demands, few professionals pursued doctoral degrees.
Today all of this has changed. The emergence and growth of online education,
and the competitive moves of large online universities upmarket from
undergraduate and master’s level programs into the doctoral ranks, have led to a
proliferation of doctoral learning possibilities for busy professionals. Along
with greater access to doctoral training for nontraditional learners has come
growing value in the doctoral credential within professional ranks. More and
more, accomplished individuals in industry, the nonprofit sector, education, and
other professional arenas pursue doctoral study as a means to advance their
careers, their market value in the workplace, and their professional stature. If this
sounds like you, then you have come to the right place. Recipes for Success is
the right book to launch you on a successful quest for the doctoral degree.
The research-based dissertation or doctoral study is the hallmark of most
doctoral programs, and sets doctoral-level study apart from other levels of
learning. Yet few busy professionals have much of a sense of how to begin the
process of dissertation development, how to do original research of credible
academic quality that contributes to their profession, and how to craft research
results into a winning dissertation. This book fills that gap. From the very
beginning of your doctoral journey to the ultimate achievement of degree
completion, this book is your guide to the process and content of dissertation and
research creation.
Choosing a research topic and developing conceptual and methodological
frameworks around this topic challenges most learners. Unlike other levels of
study, dissertation writing is a profound act of original scholarship, involving
deep original thought, critical thinking, the highest level of writing, and creation
of new and actionable knowledge. As a result, there is no shortage of books
about how to write a dissertation. But most of them are not written for most
of us.
In Recipes for Success, we articulate a process by which you can build the pieces
of a successful dissertation. Using a workbook approach rich in tools, templates,
frameworks, examples, web integration, and hard-won lessons from experience,
Recipes provides a friendly, easy to navigate process for crafting issues and ideas
into research and results.
Dissertations are very personal endeavors and accomplishments, and originate
with problems and issues that are meaningful and important to the doctoral
learner. Most professionals are deeply grounded in their understanding of the
issues and needs of their profession. Recipes builds on this understanding,
helping learners to discover and frame issues they are passionate about, and to
build a credible and influential research study around this passion. While most
dissertation guides focus largely or exclusively on the mechanics of writing and
organization, Recipes approaches dissertation development as an iterative
process of thinking and self-reflection that leads learners to discover what
matters most to them and to their professions, and enables them to frame this
meaning into a research problem and purpose, and to organize and execute a
study design that fits, thus solving the problem and achieving the purpose. Once
this basis of meaning for the dissertation is established, the entire process and
organization of dissertation writing becomes more natural, more understandable,
and even more fun, and thus has a much higher likelihood of success,
satisfaction, and professional value.
As you embark on your research and dissertation journey, you may encounter
barriers, roadblocks, and the occasional dead end. Recipes is your guide and
companion to navigate around these bumps on the road to completion. Based on
our 40+ years of collective experience in the online educational setting,
mentoring over 300 professional learners to success in completing their
doctorates (including numerous award winners), we identify the most important
factors for success and the traps to avoid.
Whether you are just considering doctoral study, are already in a doctoral
program, or are working to develop and complete your dissertation, you will find
Recipes for Success a key ingredient in your success as a doctoral learner.
Good luck on your doctoral journey!
The 2018 edition of Recipes features:
Updates and changes to all sections.
Updated chapters on interviews, questionnaire design, surveys, and the
literature review.
Links to web pages that support your research.
Improved coverage of qualitative and quantitative methods of data
analysis, including practical instruction on the latest versions of software
packages such as NVivo and ATLAS.ti.
An attractive new layout which aids navigability and enhances the book's
student learning experience.
More practical examples helping bring theory to practice!
More checklists to guide you on creating a delectable feast!
Source: Harburg, Ernest. (1966). Research Map. American Scientist, 54, 470. Used by permission.
Introduction
Congratulations! By procuring your copy of Recipes for Success, and by reading
the information that you are now reading, you have taken an important first step
toward securing the successful realization of your goal—you have shown
interest and intent to join the ranks of doctoral scholars. Your next step is to turn
your interest into actions that get results. Putting together an excellent
dissertation is like planning and preparing a gourmet feast for a gathering of
distinguished guests who are connoisseurs of fine cooking. You, the researcher,
can think of yourself as the chef and chief meal engineer for this elegant repast.
Careful preparation is needed each step of the way, along with a formal means to
reach your desired goal.
Recipes for Success is presented in three phases. In PHASE 1 you will start your
initial preparation, gather ingredients, and prepare the menu for your feast. This
includes your mental, physical, and psychological preparation along with the
selection of the type of meal (topic and research method) you will serve. In
PHASE 2 you will gather your accoutrements and utensils to collect and analyze
data to help you solve the problem you pose, answer your research questions,
and achieve your purpose. In PHASE 3 you will learn how to put your meal
(dissertation) together to ensure a delicious high-quality study to serve at your
feast. Included are several presentations and numerous web links to serve as
your maître d' for your banquet.
Cutting Board
1. Take a few minutes to reflect on the benefits you will receive upon the
successful completion of your goal. What are some of those benefits?
2. Visit ProQuest: http://www.proquest.com/en-US/products/dissertations/
and find a current dissertation in your field and one from your university. If
you cannot gain entry into this URL, check with your university. Take time
to digest this information. Check the style and form, table of contents, and
number of pages in each chapter.
3. Imagine what your dissertation will look like. Can you see it bound with
your name and degree on the cover? Will you have it hardbound, soft
bound, or both? How many copies do you think you will make of your
final dissertation text? Who will appear in your acknowledgment section?
To whom will you dedicate your dissertation?
4. Most dissertations are between 100 and 200 pages. Approximately how
many pages do you envision your dissertation to be?
5. Do you envision using figures (graphs and charts) and tables? How many
references will you have consulted?
6. Ask yourself: what will change once I get my degree?
7. How will your research and your degree affect your professional work?
How might it change the way you think about your work and your life?
8. Whom will you tell about your successes?
9. What do you expect to see as their reaction?
10. What type of support system do you have or need to obtain to complete
your degree?
11. The next time you answer a robocall, introduce yourself as Dr. (surname),
and see how that title feels to you.
It is important to keep your attitude positive once you begin your
dissertation process. If you have the good fortune of receiving feedback
from your committee members, consider this a gift of their time and
expertise. Your committee wants to see you produce a quality study that
both you and committee members are proud to sign. Make sure you
respond graciously to feedback, and ask for clarification and elaboration if
you are uncertain about any of the directives you are given to improve your work. Remember this is a
learning process. When you change one component of your study make sure you change all related
areas as well. Be diligent in correcting problems identified by your committee wherever they are
present in your dissertation proposal or final draft. Your committee members will appreciate your
diligence and favorable mind-set. Providing your committee members with a change chart or revision
log that acknowledges their comments and how you responded to their comments and concerns will
help expedite the next review and honor the feedback received. Examples of change charts can be
found on our support website at dissertationrecipes.com.
Your body and mind are closely related. Your mental efficiency is affected by
the state of your body. Check to see that your diet is healthful. Many studies
suggest that protein helps keep the brain alert and that the brain’s performance
is also affected by choline (a nutrient often grouped with the B-complex vitamins, found in egg yolks,
beef liver, fish, raisins, walnuts, and legumes). B vitamins such as
niacin and folic acid, as well as Vitamin C and iron are also essential in
maintaining a healthy brain. To keep your memory sharp, eat lots of folic
acid, found in leafy green vegetables, lean meat, fish, legumes, dairy
products, grains, citrus fruits, and dried beans and nuts.
Stay away from alcohol during your research and writing days. See to it that
your exercise is efficient and enjoyable. At the very minimum, you should be
doing 20 minutes of cardiovascular exercise three times a week. Thirty
minutes a day of a fitness program that combines flexibility, endurance, and
strength is even better. The mental effects of regular exercise are profound
and extensive, affecting your intellect, memory, and emotions. Even if you
exercise on a regular basis that may not be enough. There is mounting
evidence that exercise will not undo the damage done by prolonged sitting.
We can't all stand up at work but even small adjustments, like standing while
talking on the phone, going over to talk to a colleague rather than sending an
email, or simply taking the stairs, will help. Consider setting up a standing
desk, or a table that will allow you to do your research and writing while
upright.
1. What in your diet needs to be improved so that you have the best possible
nutrition?
_______________________________________________________________
2. What type of exercise do you enjoy doing that could help you become
more fit?
_______________________________________________________________
3. What other measures can you take to support yourself in the
successful obtainment of your goal?
_______________________________________________________________
4. Sleep is important for the renewed health of the brain. When you drift off
into dreamland—a process that happens in stages—your brain goes
through a series of psychological processes that restore both mind and
body. At certain stages, memories are consolidated and at other stages your
brain is working out resolutions to unconscious conflicts. Certain factors
such as the use of alcohol or drugs, a noisy bedroom, an uncomfortable
bed, “dis”stress carried over from the day, or other disruptions may upset
this pattern. What can you do to improve your sleep?
_______________________________________________________________
Too much sleep can be as detrimental as too little sleep. Most adults
need 7–10 hours of sleep. It is important to experiment and find out what is
ideal for you. How much sleep do you really need to feel great? See that
you get that amount during your research working days and try to
eliminate any conditions that disrupt your sleep.
5. What are the good and bad stresses in your life?
6. What can you do to decrease the bad stresses?
7. What can you do to celebrate your successes?
__1. Take time to reflect on what it is you are hoping to find before you
begin to read.
__2. Survey the table of contents and note major headings.
__3. If there are chapter summaries in a text, or an abstract for a paper,
read them before exploring the chapter or paper.
__4. As you read, try to relate the information to something you are
already familiar with.
__5. Take notes or highlight important ideas. If you can do this at the
keyboard, you can save yourself quite a bit of time.
__6. Check for patterns that the author might be applying, such as:
Cause and effect: The author explains a situation or theory and then
delves into the consequences of its application.
Compare/contrast: The author examines two or more different
theories or situations and their relationship to each other.
Process-description: A certain concept, program, or project is
delineated and then examples are provided.
Sequential: A case is built in a linear or historical manner.
__7. Be an active reader. Always ask yourself questions about what you
are reading: What is the author’s purpose? Why am I reading this? What
conclusions does the author come to? Is this reasonable? Who else
supports this view? How does this document relate to my study?
__8. Where you read is important. Reading at night, in bed, doesn't work
for many people because it makes them sleepy (which means that you
may not comprehend the information). Everyone is different, however,
so read in a place that's comfortable, free of distractions, and that has
good lighting – this is important even when you are reading from a
screen.
__9. Imagine the author is personally speaking with you (just like your
Recipes for Success does).
When you check out http://www.proquest.com/en-US/products/dissertations/ to find
dissertations in your field, you will notice there is a great deal of variety in
quality and style among dissertations. However, checking out recent dissertations from
your university in areas that you wish to investigate can be extremely helpful. Your
university will likely grant you free online access or have a place where you can
view recent dissertations approved by the university.
Academic/Scholarly/Doctoral Writing
The ink of the scholar is more sacred than the blood of the martyr. —
Mohammed
In academic writing, a specialized form of discourse often develops. At times
this rarefied language is necessary to capture the complexity and distinctiveness
of processes not easily described in colloquial terms. At other times, however,
writers use terms that are understood only by an in-group of ideologically
sympathetic theorists. The purpose of scholarly writing is to demonstrate
comprehensive knowledge of a subject; to assert the validity of all claims made;
and to convey the value of the author’s ideas through evidence, authoritative
style, and scholarly voice to consumers of the research.
Academic writing needs to be specific and objective. It is very important to
present a variety of opinions on controversial issues to achieve the highest level
of scholarship. It is also important to check for possible biases and use primary
sources whenever possible. You need sources to substantiate for readers
everything that is not common knowledge. It is not enough to say you found
something in multiple places, or that “research suggests.” You need to
specifically cite those sources. One source is usually not enough for
controversial or complicated issues. It is much stronger to use multiple sources
to allow the reader to see the balance of evidence, but choose the best sources to
support your case. For example, if you claim that charter schools have improved
educational opportunities for middle school children, several current sources
need to be cited, along with reliable data that provide evidence to support this
claim.
Choosing only the evidence that supports an argument detracts from the writer’s
credibility. If there is evidence that appears to refute the researcher’s claim, that
evidence must be addressed and effectively challenged. The writing needs to be
clear, precise, and devoid of redundancy and hyperbole. Use no more words than
are necessary to convey your meaning. Your meal should be prepared al dente
(only to the point of doneness).
Both over- and understatement should be avoided. Statements should be specific
and topical sentences established for all paragraphs. The flow of words should
be smooth and comprehensible, and bridges should be established between ideas.
To truly appreciate the complexities of the world and the intricacies of human
experience, it is essential that we understand how we can be misled by the
apparent evidence of our experiences (Gilovich, 1991). If an author agrees with
our thinking, we are less apt to question his or her research and the evidence
provided. Thus, we need to think clearly about our experiences, question our
own assumptions, and challenge what we think we know, even when the data
agree with what we believe is true.
APA formatting requires data to be treated as the plural of datum (APA 6th ed., 3.19, p.
79). If you have any doubt on how to use the term data in a sentence, substitute the word
toys—just like toys ARE us, data ARE us.
How you state something is almost as important as what you state. Paraphrasing
is preferred over direct quotes. Often writers who are developing their own voice
have a tendency to use too many direct quotes from other authors. This is
tedious for the reader, and likely to leave him or her wondering what you have to
say that is original.
Wherever possible, paraphrase the work of other authors instead of quoting them
directly. A paraphrase is not a summary. You can think about a paraphrase as
explaining a main or complex point to a colleague. When you paraphrase you
include the ideas or information from a source by rephrasing those thoughts or
information in your own words. A successful paraphrasing is concise, precise,
does not change the meaning you are trying to convey, and is properly cited.
Without proper citation, your paraphrase could be construed as plagiarism.
When using scholarly paraphrasing, your voice is heard and supports the
research findings. This means that you should not begin a sentence by naming
the author you are paraphrasing (do not start a sentence with According to…,
Research supports, or Research shows). The focus of the sentence should not
be on the author(s) but rather on the important thoughts offered in the
paraphrase. Ideas are usually more important than who presented them, unless
you are comparing/contrasting different authors. By keeping the ideas in the
foreground (and citations in the background), continuity, clarity, and
comprehension are all improved. This reflects a high level of scholarly writing.
Incorrect: Matthews (2015) pointed out that there are three major forms of
research: quantitative, qualitative, and mixed-methods.
Correct: There are three major forms of research: quantitative, qualitative, and
mixed methods (Matthews, 2015).
Incorrect: Miller (2016) said “students should buy all my books” (p. 43).
Correct: In an effort to increase book sales, Miller (2016) intimated that students
should purchase all his books.
Incorrect: In a study by Smith (2014) it was found that disasters occur more in
the summer than winter.
Correct: Disasters occur more in the summer than in the winter (Smith, 2014).
When you paraphrase correctly, your voice is heard and supports the
research findings.
To reiterate: Do not use the phrase ‘according to,’ or start a sentence with an
author’s name, ‘The author found…,’ or ‘Researchers
determined/showed/found….’ When reporting the findings from a study, start
with the findings rather than the author.
Incorrect: Simon and Goes (2018) found that doctoral students have difficulty
paraphrasing.
Correct: Doctoral students have difficulty paraphrasing (Simon & Goes, 2018).
Words are weapons vested with great power. They create, as well as mirror,
reality. They serve to advance certain ideals, images, stereotypes, paradigms, and
sets of assumptions. They play an important role in creating the conditions for
scholarly discourse. They frame what is considered to be the limits of acceptable
practices, philosophies, and purposes. To be effective, they need to be governed
by logic at the most abstract level of critical analysis.
For example, the following statement found at
http://www.connected.org/learn/bangemann.html does not adhere to good
scholarship:
New information and communication technologies are key contributors to the
evolution of teaching and learning methods, and must therefore be fully
integrated in the education system.
Critique: It does not necessarily follow that because technologies could change
teaching and learning methods that they therefore must be used. This apparently
logical statement is misleading in its attempt to convince the reader that the use
of technology in teaching and learning is inevitable and beneficial. To evaluate
the efficacy of this argument, it is prudent to ask some basic and underlying
questions. What exactly needs changing, if anything? Why should it be changed?
How can communication technologies help achieve these desired changes, if at
all? Also, what is meant by the term “fully integrated”?
When you write in an academic manner, you are identified as a member of the
club of scholars. Those who consume what you write can substantiate it.
Academic writing enables academicians to express ideas more forcefully and
intelligently and helps eliminate ambiguity. A negative effect is that academic
writing can also be used (intentionally or unintentionally) to intimidate those not
in the club. Scholarly literacy is a moving target, and it is crucial that you keep
up with the professional literature to be aware of the terminology and guidelines
in current use. A problem delineated in 1995 could have been solved by the date
you begin your research. When you frame the problem for your dissertation, be
certain to baste your study with the most current research.
To cope with the demands of a discipline, you must be able to grasp the
implications of important concepts that permeate the literature. For example, in
reading scholarly work you frequently come across terms such as paradigm,
theory, validity, and bias. It is important to understand the meaning of these
terms in the context in which they are found. Many of these important terms are
in various sections of your Recipes for Success.
Scholars are expected to analyze and synthesize rather than merely summarize
information. A scholar studying the causes of the Civil Rights movement of the
1960s does more than summarize the movement; rather, the scholar would
examine closely the reasons—practical, moral, psychological, social, economic,
and dramatic—that led to the Civil Rights movement. Analysis and synthesis
reveal the essence of the subject and lead to a greater understanding of
truth.
Example: In the event of the case occurring where a social services worker is
unable to find the location of the domicile of the applicant who has been
involved in the initiation of the request for exemption request (RER) form,
he/she shall make a note of the incident of the Unsuccessfully Attempted (UA)
files. [53 words]
Revision: Social workers who cannot find where an RER applicant lives should
write a note in the UA file. [18 words]
Lard Factor = (53-18)/53 = 66%
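Stated generally, the calculation above is the proportion of words trimmed in revision:
Lard Factor = (original word count - revised word count) / original word count
In this example, (53 - 18) / 53 is approximately 0.66, or 66%.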
“George chopped down the cherry tree” sounds a lot better than “The cherry tree was
chopped down by George.” The former is simple and straightforward; the latter is
wordy, clumsy, and lardy. Occasionally you will have no choice but to use passive voice
—for instance, when the subject of the sentence is unknown—but in most cases use the active voice.
Right or wrong, most significant research contributions are hidden in education
and/or discipline-specific journals and use scholarly jargon that could limit the
practical application of the findings. The following sections will help you
become familiar with the language of the scholar, help you sound more like a
scholar when you use these terms appropriately, and assist you in devouring and
digesting the scholarly literature.
Once your study is complete, you will reference your study in the past tense
as well. However, any statement regarding a theory, program, concept, or policy
that is still in effect should be in the present tense.
For example:
1. If simulation technology is still in use, then write: “Simulation technology
provides techniques designed to enhance the skills of healthcare
providers,” rather than ‘provided’ techniques designed to enhance the
skills of healthcare providers.
2. If the simulation technology was used in a study, then write: “To determine
the efficacy of simulation technology, Brown (2016) surveyed 140
healthcare professionals who used this technology and 140 healthcare
professionals who did not use the technology.”
Per pp. 42-43 of the APA Publication Manual, use the past tense or
present perfect tense to discuss the literature or an action or condition that
occurred at a specific time in the past.
For example, Smith (2015) found or Smith (2015) has found..., or “Children
confuse the source of their memories more often than adults” (Barney, 2013;
Jones, 2015).
Tense Usage in Your Final Study
In the proposal chapters of a dissertation (1-3), a common error is to neglect to
change future tense to past tense and to remove language referring to the
proposal. If you search for will or propose, you can locate proposal remnants and
areas to update the dissertation so that the completed study is referenced only in
the past tense.
Use present tense to discuss implications and to present conclusions. There are
ways to write in active voice and use past tense by rephrasing sentences, such as
in the following examples:
Incorrect: Passive voice: “Semi-structured interviews were
conducted with 20 mid-level managers to explore their lived experiences”.
Incorrect: Anthropomorphism: “Semi-structured interviews
identified the lived experiences of 20 mid-level managers (See APA page
69).”
Correct: Twenty mid-level managers participated in semi-structured
interviews and shared their lived experiences.
Use the past tense to describe the results, but the present tense to discuss their
implications and your conclusions.
Example: “The weight of livestock increased as the nutritional value of
feed increased. These results suggest that feeds higher in nutritional value
contribute to greater weight gain in livestock.” (Use past tense to indicate
what you found [weight increased], but present tense to suggest what the result
implies.)
Chapter Introduction: When you are explaining the contents of a chapter in the
chapter, the present tense is used.
Example: Chapter 2 includes a review of the literature.
Chapter Summary: Use the past tense to explain what the current chapter
included, and the present tense to explain the contents of the next chapter.
Example: Chapter 2 included a review of the literature. Chapter 3 includes
a discussion of the methodology used in the study.
Qualitative and quantitative research approaches or paradigms are like fraternal twins. They both have
similar origins, yet do not necessarily have similar appearances. Of the fraternal twins, the quantitative
approach is the left-brained sibling that is more analytical and numbers focused, answering questions
about relationships among measured variables with the purpose of explaining, predicting, and controlling
a phenomenon. The qualitative approach is the right-brained sibling, or a more global form of research,
being more interpretative in nature and seeking to describe phenomena from the point of view of the
participant. The qualitative twin is usually more verbose and explained in narrative form instead of
numbers. The left-brained sibling is more taciturn and to the point.
Qualitative researchers assume reality is subjective and multiple as seen by participants in their study.
Qualitative researchers assume that research is context bound, but that patterns and theories can be
explicated to develop a profound understanding of a situation or phenomenon. The key philosophical
assumption of qualitative research, as noted by Merriam (1997), is the view that reality is constructed by
individuals interacting with their social worlds. It is assumed that meaning is embedded in people's
experiences, and that this meaning can be mediated through the investigator's own perceptions. The key
concern is to understand the phenomenon of interest from the participants' perspectives, not the
researcher's.
Researchers who select a mixed-methods paradigm make the assumption that integration of the qualitative
and quantitative traditions within the same study can be seen as complementary to each other, especially
if the aim of a study is to determine the efficacy of a program, policy, or treatment. This assumption is
supported by the research of Greene and Caracelli (2003). These authors contend that underlying the
notion of a mixed methods approach is the pragmatic assumption that to judge the value of a social
program or policy, or the efficacy of a treatment, an evaluator should employ whatever methods will
generate supportive evidence to draw conclusions and make decisions.
There have been many lengthy and complex discussions and arguments surrounding the topic of which
twin is better. Different methodologies become popular at different social, political, historical, and
cultural times. Every methodology has its specific strengths and weaknesses. What you will find, in
selecting your methodology, is that your instincts probably lean toward one of the twins. Listen to these
instincts as you will find it more productive to conduct the type of research with which you will feel
comfortable, especially if you’re to keep your motivation levels high. However, be aware that the problem
you pose might lend itself better to one type of research paradigm and method over another. If this is the
case, you might have a harder time justifying your chosen design if it goes against finding the solution to
the problem you pose and the purpose of your study.
Camp 1: Epistemology presupposes ontology
This is the realist view, which contends that in order to know (episteme) there
must be something real (ontos) to know. It is a belief favored by those who
employ quantitative methodologies. Members of this camp contend there is a
solution to a problem that can be found using the scientific method of deduction.
Researchers who prefer the quantitative paradigm support this view.
Camp 2: Ontology presupposes epistemology
This is the constructivist view, which contends that what we take to be real
(ontos) is shaped by how we come to know it (episteme). It is a belief favored by
those who employ qualitative methodologies. Members of this camp contend that
reality is constructed by individuals interacting with their social worlds, so
understanding must be built inductively from participants' perspectives.
Researchers who prefer the qualitative paradigm support this view.
Cutting Board
1. Which of the active reading strategies above do you already employ?
_____________________________________
2. Which of these suggestions do you need to practice more fully?
________________________________________________________________
3. Of the two camps, which do you favor?
_______________________________
Why?
________________________________________________________________
1 cup “O”rganize Your Time
By planning your future, you can live in the present.... Time is one of your most
valuable resources, and it is important that you spend it wisely.
—Lee Berglund, founder of Personal Resource Systems
The key point in time management is recognizing the finite nature of time as a
resource. This is both good news and bad news. The bad news, of course, is that
time is limited. It moves at the same rate and there is no way to manipulate the
passage of time. The good news is that time is a constant. It is known and, hence,
its stability provides a basis for predicting future outcomes.
Time management includes good program planning whereby resources (people,
time) can be used effectively. Daily work is easier when a model provides a
continuing guide for action and various levels of accountability and
responsibility and when essential tasks and sequences of tasks are specified
along with a timeline for completion.
Managing time is a decision process. It is a set of choices that parse time as a
finite resource among tasks that are competing for this resource. The
effectiveness of such decisions is an outcome of task achievement skills as well
as the priority to which each task was assigned. The quality and quantity of any
outcome are dependent on the skill with which the task was addressed and the
amount of time that was devoted to the task.
The late Stephen Covey (1996) once told a great story about time management.
The story involved a science teacher who asked her students to comment on her
attempt to fill a Mason jar. The first items she put in were big rocks. When the students
thought the jar was filled to capacity, she then added gravel. When the students
agreed it was now full, she added sand. When the jar appeared full, she finally
added water. The lesson from this story is that in our lives, we have big rocks,
gravel, sand and water. The natural tendency seems to favor the latter three
elements, leaving little space for the big rocks. However, if we don’t put our big
rocks in first, they will not make it into our jar.
Make a list of your big rocks. Then make a plan to ensure that your big rocks are
put first into your schedule. Amazingly, the other stuff will still get done. Make
sure your dissertation is a big rock! Also remember, your rocks cannot be larger
than your jar. An excellent first step in effective time and activity management is
to write down your plan for completing your dissertation. On the Cutting Board
below, write down your big rocks and also the day that you plan to complete
your dissertation (DCD). (You might want to revisit this after you have
completed PHASE 1 of your Recipes for Success.)
Cutting Board
My big rocks are: _______________________________________________
My DCD will be __________________ (date). At that time I will have
successfully completed the written part of my dissertation/research project and
sent it to the proper authorities.
Next, it is important that you recognize other things that you have to do and want
to do between now and DCD.
On the Cutting Board below, write down the things in your life that you have to
do and then the things that are not on the list that you want to do between now
and DCD.
Cutting Board
1. I have to do the following activities between now and DCD:
_______________________________________________________________
2. In addition, I want to do the following activities:
_______________________________________________________________
Good Job! Now let us break this down into smaller bites and make a plan for
next week. First, fill in the following calendar with all the time that you will be
attending to your “have to” tasks. Next, fill in quality time that you can dedicate
to your research. Choose something that you want to do that is not on the
schedule and plan for that as well.
Monday Tuesday Wednesday Thursday Friday Saturday Sunday
Cutting Board
1. Make an affirmation for the next 7 days. Share this with a colleague.
By ____________, I will have achieved the following goals in my research:
_________________________________________________________________
__________________________________________________________________
2. Make an affirmation for the next month.
______________________________________________________________
By ___________, I will have achieved the following goals in my research:
3. Share this with a colleague.
4. Begin each new week with a similar affirmation until DCD.
5. Remember, to achieve a goal it must be
Conceivable—capable of being put into words
Believable—to you
Achievable—so you have the strength, energy, and time to accomplish
it
There are many activities that you can do to support yourself in the preparation
and serving of your feast. You might want to learn how to use a software
program such as Excel, SPSS, NVivo, or ATLAS.ti; familiarize yourself with
American Psychological Association (APA) formatting; learn how to do online
searches on the Internet (http://tinyurl.com/9tb2rqt); or purchase a new
computer. Software companies such as Guilford (http://www.guilford.com)
provide programs to assist you with some helpful research tasks. PERRLA
(http://tinyurl.com/5s34fzp) integrates with Word and walks you through, step-
by-step, a correctly formatted APA paper. The price is under $30 and it can be
used for all your research papers. You also might need to obtain office supplies,
research a variety of preliminary topics, take a refresher course in statistics at a
local college, arrange for child care, join a Listserv or web discussion on
dissertation writing (such as dissertationrecipes.com), join a professional
organization, consult with advisors in your field or in other fields, relieve
yourself of prior responsibilities, and so on. Consider keeping a daily
dissertation journal where you will record your successes and the challenges you
face.
On the Cutting Board below, make a list of five things that will support you in
completing your research by DCD (dissertation completion date) and the times
that you will be able to attend to these things.
Cutting Board
You might wish to consider using different colors to indicate different types of
references; for example, pink (or highlighting) could be used for texts, green for
periodicals, yellow for research reports, etc. This system will be extremely
helpful to you when preparing the research/literature chapter of your dissertation
(see PHASE 3) or research project and when compiling your bibliography or
reference list. You might also wish to create real and virtual folders to store
articles and references that you obtain related to your topic.
You can use your laptop computer, PDA, or smartphone to take notes; you’ll find
excellent note-taking capabilities in the simple text editors that come bundled
with most machines, and strong organizing capabilities in the database software
that is easily available.
Note Taking
An alternative to the traditional outline form of note taking is
mind mapping. Some characteristics of a mind map are that it
1. simulates the way that most people think
2. is a means of brainstorming that allows your thoughts to flow freely
3. helps you to categorize information and determine how this
information relates to other information
4. gives you an overview of your project
Figure 1 illustrates a mind map of a mind map. Carefully study the mind map for
its structure, purpose, and usefulness. Notice a mind map requires only one page
(preferably a blank page held horizontally) where related ideas are linked
together. The Roman numerals that are used in traditional outlining appear as
branches on a mind map.
Note: Researchers claim that people who have switched from traditional
outlining to mind mapping significantly increase their retention and heighten
their organizational skills. In addition, mind mapping is fun, easy, and creative.
Try using colored pens or pencils when creating mind maps. Experiment with
each branch construed in a different color. You might want different shades of a
particular color to signify supporting ideas. Check out a review of major mind
mapping software packages at http://tiny.cc/jmjukw
Mind Mapping
Tony Buzan of the Learning Methods Group in England originated mind
mapping. This technique is based on research findings that show that the brain
works primarily with key concepts in an interrelated and integrated manner.
Traditional thinking opts for columns and rows as illustrated by traditional
outlining techniques. Buzan felt that working out from a core idea would suit the
brain's thinking patterns better. The brain also needs a way to slot in ideas that
are relevant to the core idea. To achieve those ends, Buzan developed mind
mapping.
Various theories from brain research support the construction of mind
maps as a tool for learning. Gardner’s theory of multiple intelligences posits
that there are different methods of processing learning and different ways of
knowing. The flexibility and variety of structure of the mind map enable you to
draw on your learning strengths to construct connections and relationships that
are most meaningful. Sternberg’s triarchic theory of intelligence posits that one
must go beyond a linear structure of knowledge to a synthesis of relationships
and meanings before one can adequately use that knowledge. Mind maps
encourage the synthesis of topics and the independent development of a
structure of relationships and connections.
Finally, study your mind map and look for interrelationships and terms that
appear more than once. Mind mapping is an excellent technique not only for
generating new ideas, but also for developing your intuitive capacity. It is
especially useful for identifying all the issues and sub issues related to a
problem, as well as possible solutions to a problem. To do that, use the main
branches on your mind map for solutions. The sub branches from each of them
become the perceived benefits and obstacles related to these solutions. Mind
mapping also works well for outlining presentations, papers, and book chapters.
In fact, it is useful in a wide variety of situations. For more information on mind
mapping see http://www.edrawsoft.com/freemind.php and http://mindjet.com/.
Figure 1. A mind map of mind mapping
A helpful tool closely related to mind mapping is concept mapping. Concept
mapping was developed by Prof. J. D. Novak at Cornell University in the 1960s
and is based on the theories of David Ausubel, stressing the importance of prior
knowledge in being able to learn about new concepts. Novak posited,
"Meaningful learning involves the assimilation of new concepts and propositions
into existing cognitive structures." For more on concept mapping, check out
http://tiny.cc/xrjukw.
Cutting Board
Use a blank sheet of paper to create a mind map of a topic that you might want
to research. Put the name of the topic in the middle of the paper. Use branches to
indicate main ideas (beliefs, attitudes, and opinions) related to this topic. Several
excellent computer programs are available for mind mapping, including Visio by
Microsoft and Inspiration.
PIE Writing
To transform your notes into written passages, you might want to try a formula
developed by Hanau (1975). Hanau advanced the idea that written materials
contain statements that are the declaration of beliefs, attitudes, or opinions.
These statements are the keys that allow the reader to understand what you, the
writer, are trying to convey. Each paragraph should contain a statement or
statements in conjunction with supporting evidence that elucidates the
statement. These supporting elements can be classified into one
of three categories, which can be remembered by the acronym PIE: Proof,
Information, or Examples.
1. Proof - any kind of supporting documentation that a statement is true
and/or important. In a dissertation or research paper, proof usually comes
from a review of related research, a quote from a well-known person,
current statistical data, a statement by an authority figure, or information
from archival data.
2. Information - any clarifying material, such as a definition, that limits the
scope of your statements and seeks to clarify what your statements mean
within a certain context. This clarification brings your supporting material
to bear in an effective manner.
3. Example – a concrete illustration that serves to elucidate any statement
that you make while attesting to a statement’s truth or importance.
It is not necessary to have all three supporting elements in every paragraph of
your paper, but you will probably wish to include at least two pieces of PIE per
paragraph. You also need not adhere to any particular order of presenting a
statement with its accompanying pieces of PIE, so long as your paragraphs are
clear, sufficiently detailed, and coherent. You want to make sure that there is
sufficient evidence for you to make a strong point and that the evidence is
relevant, reliable, and representative.
Also, ensure that you are including a complete scholarly argument for all of your
paragraphs. Such an argument includes your main points supplemented by
paraphrased source material and evidence where appropriate, and a critical
analysis of this evidence and how it relates to your argument or proposed
research. Any assertion of fact must include a credible, relevant source as
support for the assertion, or must be written to clearly indicate that the assertion
is hypothesized but not yet supported in evidence. Rather than just summarizing
sources, string the ideas from different sources together to make a persuasive
argument. Include any critical assessments of the source as appropriate, e.g.
weak methodology, questionable model, small sample, inadequate data analysis,
etc. Finally, connect the ideas in this paragraph to your next paragraph, so the
narrative flows in a coherent direction.
Evidence can come from either primary or secondary sources. A primary source
is an original source of data that puts as few intermediaries as possible between
the production and the study of data. For example, if one was studying the way
Shakespeare used metaphors, books written by Shakespeare would be primary
sources and books written about Shakespeare would be secondary sources. A
primary source is an original document containing firsthand information about a
topic.
Secondary sources are opinions or interpretations of others on the topic (your
published research will become a secondary source when someone wishes to
quote it).
A secondary source contains commentary on, or discussion about, a primary
source. The most important feature of secondary sources is that they offer an
interpretation of information gathered from primary sources. In most
dissertations and formal research projects, the overwhelming sources of evidence
should come from peer-reviewed current primary sources. An individual
document might be a primary source in one context and a secondary source in
another. Time is a defining element. For example, a newspaper article reporting
on a murder is not a primary source unless the author was at the scene of the
crime, but a newspaper article from the 1860s might be a primary source for
Civil War research. When in doubt, check with your committee members.
If your paragraphs are only statements without any pieces of PIE, the
predominant impression that comes across is assertion without foundation.
Similarly, having pieces of PIE without a statement makes it difficult for the
reader to comprehend the point of what is written. Once your paragraph has a
statement with a satisfactory helping of PIE, you are ready to move to the next
paragraph.
Statements of fact must be supported with sufficient evidence. Assertions made without
adequate substantiation can be perceived as rhetoric or opinion. Your extensive
research of the topic might convince you that your statements are supportable, but you
must convince readers by providing validation for your assertions. An exception would
be information considered common knowledge.
Common knowledge is a bit ephemeral. Generally, it is considered information that the average, educated
reader would likely accept as reliable without a thorough search or exhaustive research. Common
knowledge is presumed to be shared by members of a specific community — an institution, a
geographical region, a particular race, ethnic group, religion, industry, academic discipline, professional
association, or other such classification. Regardless of how common certain knowledge is considered, if
an exact quotation or close paraphrasing of this knowledge is taken from a published source, then the
statement must be credited to the original author and source to avoid plagiarism.
If there is any doubt about whether or not to cite a source, the formal nature of academic writing expects
the source to be cited. It is preferable to err by assuming information is not commonly known, than to
make a false assumption that information is commonly known. The lack of a necessary citation may leave
the reader with an impression that an author is sloppy with their scholarship or even plagiarizing a source.
In short, when in doubt, cite the source.
Cutting Board
Look at the main branches of the mind map that you created on a topic that you
might wish to research. See if you can support these ideas (main branches) with
the PIE elements described in this section.
1 cup “E”nter Information Into a Computer, Journal, or Tape Deck
Alfieri, the great Italian dramatist, allegedly had his servants tie him to his
writing table so that he would be forced to write.
Hopefully, you will not need to go to the extremes that Alfieri did to discipline
yourself into putting your thoughts and notes into the formation of your feast.
Much of what makes writing challenging is trying to write and edit at the same
time. Get your ideas on paper, or in a Word document, first; critique, rework, and
polish them later. Critiquing ideas as you are trying to express them often
represses them. If you reach a stumbling block, try to write past it—often great
ideas lie right beyond the hurdle, tempting you to give up. If you are still stuck,
take a break and think over the problem in a relaxed setting. By the time you
return, you will probably have the answer.
Be prepared to write at any time, in any place. Keep pen and paper in your car,
purse, gym bag, pocket, etc., or keep your PDA or iPad handy or your laptop in
standby mode. Most modern mobile phones now have excellent voice-
recognition capabilities, and if you wish to catch a great idea or thought easily,
you can simply speak into your mobile. This is particularly useful because great
ideas often come when you are not trying for them. You can save yourself a
significant amount of time by transcribing your written notes, on a regular basis,
into your computer. It would be good to have a computer nearby as you read
articles and books for your dissertation research.
If you are fortunate enough to have a dedicated secretary who can transcribe
your dictation onto a word processor, that might be your ideal method for
formulating your dissertation. This method of transcription offers you the liberty
of creating your feast while in the midst of traffic. You can experiment with tape
recording your ideas and using voice dictation software like Dragon
NaturallySpeaking™ (http://www.nuance.com/naturallyspeaking/) to transcribe your
notes. Voice-activated word processing programs are getting better each day.
Cutting Board
1. What will you use to create your feast?
_________________________________________________________
2. When will be your first (next) time to use this method?
_________________________________________________________
The following is a rubric that can be used to help develop better writing skills.
You might want to write a few pages and then ask someone whose opinion
and editing skills you value to critique what you have written based on this
rubric. (See Scoring Key below.)
Creating a Journal
Journals and diaries have a long history as forms of self-expression. By
keeping a dissertation journal, you can write away stress, anxiety, indecision,
problems, unfinished business, confusion, writer's block, and procrastination.
There are many benefits to be gained by keeping a journal, including the
following:
1. Writing can flow without self-consciousness or inhibition.
2. Your thought processes and mental habits can be revealed.
3. You can improve your memory.
4. You can provide tangible evidence of mental processes.
5. You can obtain mental growth through critical reflection.
6. You can help make meaning out of what is experienced or read as it
relates to your research.
7. You can articulate connections between new information and what you
know.
One type of journal you might wish to keep is known as a reader response
journal or literature log. Here you can record your responses to your readings. It
enables you to enter the literature in your own voice. If you would like to
keep your personal diary or journal online, Cam Development at
http://www.camdevelopment.com/mpd.htm has software to assist you.
Cutting Board
1. Put an asterisk (*) near the objects that you now own.
2. Put an exclamation point (!) next to the ones that you feel you
should own.
3. Write down any other objects you will be needing or wanting to
have in your dissertation kitchen/workspace:
_____________________________________________________________________
PICK YOUR REPAST
[Choose Your Topic]
½ cup “P”ossess Knowledge of What a Dissertation Is and Is Not
Before you go through the process of selecting a dissertation project and
topic, keep in mind that a dissertation is not necessarily
1. a Nobel Prize project
2. the final answer to a pressing problem
3. the last research paper that you will write
4. about the hottest topic in your field
5. going to excite all your friends.
What then is a dissertation (I hear you cry)? A dissertation is the written
report of a formal research project required to complete a doctoral degree,
designed around solving an important problem in your profession that you
1. could “put your arms around”; in a couple of minutes you could tell
someone in your profession, as well as in another profession, what it is
about
2. already know a great deal about, including how it fits into a larger picture
3. are truly concerned and curious about
4. could conceivably present at a professional meeting
5. are willing to dedicate a great deal of time to complete
6. can call your own
7. can proclaim is researchable, original, and contributory to your
profession and to society [see (K) conduct a ROC bottom test]
8. will know when you have completed it, i.e., when you have ascertained enough
information to answer your research questions and to accept or reject
your hypotheses
One major difference between dissertations and research papers written in
undergraduate classes or for master’s theses is size. Most college papers are
around 10 pages, and a master’s thesis averages about 30 pages. By contrast,
the average dissertation is over 150 pages in length. The number of pages
required indicates the breadth and scope of the research necessary to complete
a dissertation. For a regular research paper, it is only necessary to conduct
enough research to answer the paper topic. For a dissertation, it is necessary
to conduct research that thoroughly investigates the issues and themes
concomitant with the problem that you will resolve, and to pose either a new
question or a new answer to a valid topic. A dissertation is judged as to
whether or not it makes an original and unique contribution to scholarship.
Other research projects, such as a master's thesis, are judged by whether or
not they demonstrate mastery of available scholarship in the presentation of
an idea.
Since the dissertation serves as the starting point for producing publications,
ideally the topic should be the foundation for a substantial research stream
(Van Slyke et al., 2003). Make sure you choose a topic that is of great interest
to you. For most people, the dissertation is the largest research project they
will ever do. It is important that the topic matches your long-term interests
and abilities. Your interest is even more important when considering how long
you are likely to be working on the topic. It will likely take longer than you
think to complete. With some finesse, you will be crafting articles based on
the dissertation for some time.
As stated earlier, doctoral writing is the highest level of academic writing. The
Council of Graduate Schools (2005) described the purpose for doctoral-level
research as being able to apply generally accepted theory to a current problem
in order to find a viable solution. Thus, you need to identify a research
problem, design an empirical study to address the problem in an appropriate
manner, and then carry out the research while abiding by ethical principles.
Research ethics is essential for the protection of participants’ rights, safety,
dignity, and wellbeing (U.S. Department of Health and Human Services,
1979). Ethical protocols and standards include informed consent, privacy of
participants, avoidance of harm, cognizance of vulnerable groups,
participants’ rights, data restriction, data storage, and conflicts of interest
(Stacey & Stacey, 2012). Most universities, healthcare organizations, and
schools have an Institutional Review Board (IRB)
that must grant ethical approval prior to conducting the research and enrolling
participants in a study. When the research is conducted, you will write up the
study and, in so doing, make an original contribution to your profession that
provides evidence of originality in thinking.
1 cup “I” dentifying Your Style
On the Cutting Board that follows, you will find a typology of major ways in
which people make inquiries, adapted from Mitroff and Kilmann’s
Methodological Approaches to Social Science (1978). Answer each question
and record your answers in the spaces provided. This will give you an
opportunity to discover what method(s) of doing research would work well
for you.
Cutting Board
Read each statement below and indicate on the accompanying Likert-type
scale how strongly you agree with each declaration.
Note: This activity is not designed to serve as a model for survey design. It is instead intended to
give you a taste of a variety of research methodologies that might fit your research style and enable
you to solve the problem you pose. This survey is based on Mitroff and Kilmann’s (1978, 1983)
typologies of research and uses a 4-point Likert-type scale. The questions are intentionally complex
and force a commitment to one view rather than allowing for a neutral or no-opinion option.
Below are research topics that would likely appeal to people with the
archetypes described above who wish to conduct an investigation of the
relationship between smoking and health. Note: Smoking is the leading cause
of statistics. Read each research topic and see if the topic described for your
archetype appeals to you.
If asked to choose a research topic on smoking and health, where funding is
not a concern, the following topics would be of interest to:
I. Conceptual Theorist: Determine the correlation between smoking
and diseases, smoking and personality types, why people smoke, and as
many multiple correlations as one can ascertain between smoking and
other factors.
II. Analytical Scientist: Determine definitively if cigarette smoking
causes cancer. Simulate smoking in laboratory animals and determine if
cancer is caused.
III. Particular Humanist: Study a smoker and determine why this person
started smoking and any ill effects attributed to smoking. Have cancer
patients who have smoked keep a diary and study their feelings and
concerns.
IV. Conceptual Humanist: Survey ex-smokers and determine the most
effective ways each person was able to stop smoking. Use this
information to develop a program to help people stop smoking.
Phenomenology: Here the meaning of an experience is narrated using
story and description. Phenomenology is an attempt by qualitative
researchers to “discover participants’ lived experiences and how they
make sense of them” (Babbie, 1998, p. 281). Phenomenologists focus
on persons who have shared the same experiences and on eliciting
commonalities and shared meanings. For example, a researcher might
want to know how the voters of Palm Beach County, Florida, viewed
the controversy surrounding the presidential election of 2000. The
election was a specific event and the voters might have a variety of
responses that can be analyzed for common threads. The research is
very personal, and the results are written more as stories than as
principles, yet the researcher stays somewhat detached. Any way in which
participants can describe their lived experience can be used to gather data in
a phenomenological study.
The primary means of data collection is usually through interviews to
gather the participants' descriptions of their experience; however, the
participants' written or oral self-reports, or even their aesthetic
expressions (e.g., art, narratives, or poetry), can be used to understand
their lived experiences around a phenomenon. These expression
methods are particularly helpful when working with children. A
phenomenological long interview consists of a series of pre-
determined open-ended questions that prompt, but do not lead,
participants in the discussion and explanation of their experiences
through the narrative process. The initial questions are followed by
probing questions to reveal lived experiences and perceptions
(Merriam & Tisdell, 2015). These include, “Can you expand on …”
or “Tell me more about …”. Additional inquiry is generated from the
developing concepts and categories identified from participants
during the interviews and from previous study participants’
interviews.
There are multiple types of phenomenological designs (empirical,
interpretive, Husserlian, among others). If you are planning to conduct a
phenomenological study, you will need to explain why the type of
phenomenological design you chose is preferable to other types, relative to
the problem and purpose of the study.
Quasi-experimental: When a true experimental design is not available
to a researcher for various reasons, e.g., where intact groups are already
formed, when treatment cannot be withheld from a group, or when no
appropriate control or comparison groups are available, the researcher
can use a quasi-experimental design. As in the case of the true
experimental design, quasi-experiments involve the manipulation of one
or more independent variables and the measurement of a dependent
variable. There are three major categories of quasi-experimental design:
the nonequivalent-groups designs, cohort designs, and time-series
designs (Cook & Campbell, 1979).
Cutting Board
The table below summarizes these different options and relates them to the
type of researcher who is most likely to use these methods and designs:
Research Method | Brief Description | Type
Action research | Participatory: problem identification, solution, solution review | III
Appreciative inquiry | Helps groups identify solutions | III, IV
Case study research | Group observation to determine how and why a situation exists | III
Causal-comparative research | Identify causal relationships among variables that can't be controlled | IV
Content analysis | Analyze text and make inferences | IV
Correlational research | Collect data and determine level of correlation between variables | I
Critical incident technique | Identification of the determining incident of a critical event | III
Delphi research | Analysis of expert knowledge to forecast future events | I, IV
Descriptive research | Study of "as is" phenomena | I
Design-based research/decision analysis | Identify meaningful change in practices | II
Ethnographic | Cultural observation of a group | III, IV
Evaluation research | Study the effectiveness of an intervention or program | IV
Experimental research | Study the effect of manipulating a variable or variables | II
Factor analysis | Statistically assess the relationship between large numbers of variables | I
Grounded theory | Produce a theory that explains a process based on observation | III, IV
Hermeneutic research | Study the meaning of subjects/texts (exegetics is text only) by concentrating on the historical meaning of the experience and its developmental and cumulative effects on the individual and society | III
Historical research | Historical data collection and analysis of a person or organization | IV
Meta-analysis research | Seek patterns in data collected by other studies and formulate principles | I, II
Narrative research | Study of a single person's experiences | IV
Needs assessment | Systematic process of determining the needs of a defined demographic population | II
Phenomenography | Answer questions about thinking and learning | I, II
Phenomenology | Make sense of lived experiences of participants | III, IV
Quasi-experimental | Manipulation of variables in populations without benefit of random assignment or a control group | II
Q-method | A mixed-method approach to study subjectivity (patterns of thought) | I
Regression-discontinuity design (RD) | Cut-off score assignment of participants to groups (nonrandom) used to study effectiveness of an intervention | II
Repertory grid analysis | Interview process to determine how a person interprets the meaning of an experience | I
Retrospective record review | Study of historic data collected about a prior intervention (both affected and control groups) | II
Semiology | Studies the meaning of symbols | II, III
Cutting Board
1. Which of these study methods appealed the most to you?
________________________________________________
2. Which appealed the least to you?
_____________________________________________________________
3. Did you approve of the study defined for your archetype? ___
Explain: _______________________________
4. Which three methodologies appeal to you the most?
_________________________________________________
Why?
__________________________________________________________
5. Which three methodologies appeal the least to you?
________________
Why?
__________________________________________________________
Keep this knowledge in mind when you select your dissertation topic. Check
out http://tinyurl.com/2vkamsn for more suggestions on choosing a topic to
research.
Cutting Board
1. What is (are) your professional role(s) or the role that you are
seeking (e.g., are you an educator, physician, nurse, administrator,
actress, lawyer, political scientist, manager, accountant, salesperson,
media person, anthropologist, engineer, computer scientist)?
Dr. M. is an educator.
Dr. I. is
__________________________________________________________________
2. What is (are) your principal area(s) of interest (PI) or your
subspecialty within your profession?
Dr. M. is a mathematics educator and consultant.
Dr. I. is
_______________________________________________________________
3. What area(s) of your PI are you most enthusiastic about or involved
with? (What made you decide to go into this profession? What keeps
you in the profession?)
Dr. M. is interested and involved with helping people overcome
mathematics anxiety, technology in the classroom, teacher training,
statistics, and the future of mathematics education. She enjoys
mathematics and believes that every person can be successful in
mathematics if they are given the opportunity to do math their way.
Dr. I. is interested and involved with:
The reasons Dr. I chose this profession are:
4. What are some problems that you are interested in that you believe
need some new light or need to be looked at critically for the first time?
Dr. M. believes that calculators are not being used in the elementary
school classroom because of the anxiety of elementary school teachers.
Dr. I. believes that:
______________________________________________________________
5. Restate the most pressing problem you have described using a
preferred style of inquiry and/or method:
Dr. M. (a conceptual theorist who favors correlational research): What
is the relationship between mathematics anxiety and lack of calculator
use in the classroom?
Dr. I.:
________________________________________________________________
6. Select a title (topic) based on this problem:
Dr. M.: The Wasted Resource: Attitudinal Problems in Calculator Use
among Elementary School Teachers.
Dr. I.:
___________________________________________________________________
Titles "should be a concise statement of the main topic and should identify the actual
variables or theoretical issues under investigation and the relationship between them"
(APA 6th ed, p. 23). Your title is limited to 15 words. The title needs to be very clear, and
provide readers with information about what to expect from the research paper. The
problem and the type of investigation should be discernable from the title:
Fantastic! You have yourself a research topic, Dr. I. Now, before you start
your celebration, you will need to (K)conduct the ROC bottom test to see if
the topic you have selected has the attributes of researchability and originality
and if it is contributory.
1 cup “K” (c)onduct the ROC bottom test
Cutting Board
1. How do you know that the research problem is important?
__________________________________________
2. What are the journals, texts, and periodicals that deal with this topic?
_________________________________
3. What Web sites can you visit to obtain information?
___________________________________________
4. How can you obtain access to files and documents you will need?
_______________________________________
5. How will you access a sample of your population or the population
itself? ________________________________
6. How will you know when you have obtained the information you are
seeking? _____________________________
7. How will you obtain authorization to do your research?
________________________________________________
8. How do you know that the research is free of any ethical problems?
___________________________________________________
9. What knowledge and skills do you have to conduct the research?
_____________________________________________________
10. What are your qualifications to undertake this research?
___________________________________________________
11. How will you obtain the support of people that are essential to your
project?
___________________________________________________
12. What assumptions will you need to make to assure that you can obtain
reliable information? What can you do to help ensure those
assumptions are met?
___________________________________________________
13. Do you have the financial resources you need to conduct the study?
______________________________________________________
14. How will you overcome the limitations and obstacles you face in
conducting the study?
______________________________________________________
1/3 cup “O” riginality
According to the Council of Graduate Schools (2005)
(http://www.socialresearchmethods.net/kb/index.htm), in its most general
sense, original describes research that has not been done previously or a
project that creates new knowledge; it implies that there is some novel twist,
fresh perspective, new hypothesis, or innovative method that makes the
research project a distinctive and unique contribution. An original project,
although built on existing research, should not duplicate someone else's work.
A “yes” to one or more questions on the Cutting Board below will satisfy the
“O” requirement and indicate that your topic has originality. If all the answers
are no’s, you might want to go FISH.
Cutting Board
1. Will this study provide some new way to look at an existing
problem? ___
Cutting Board
1. Is there a need in your profession, community, or society to know the
results of this study?
2. Will there be people in your profession, or people who plan to enter this
profession, who will need the information that this study will
ascertain? ___
3. Will people outside of your profession gain new insight into something
in your profession after this study is complete? ___
4. Will some members of society, or society at large, suffer if this study is
NOT done? ___
5. Will the results of the study likely change the perception of people in
your field or profession? ___
Once you have passed the ROC bottom test, you will have made a major step
toward obtaining your goal. Congratulations! You should feel proud and
happy.
A doctoral dissertation or formal research project must have a very high level
of quality and integrity. The entire research project and paper must be clear,
lucid, and logical; have an appropriate theoretical base; contain appropriate
statistical analysis (if needed); and have proper citations.
Now that you have nailed down your topic, you will need to develop a solid
problem statement. When this major task is accomplished, put this statement
in your working environment and carry a copy of it with you whenever you
are working on your dissertation. It is important to never lose sight of what
you are researching and why you are conducting this research.
Cutting Board
In bold print write out your research topic again in the space below:
The Problem Statement
The greatest challenge to any thinker is stating the problem in a way that will
allow a solution. —Bertrand Russell (1872 - 1970)
The heart of a doctoral dissertation, and most formal research projects, is the
PROBLEM STATEMENT. This is the place where most assessors go first to
understand and appraise the merits of your proposal or your research. After
reading the problem statement, the reader will know why you are doing this
study and be convinced of its importance. In 250 words or less (about 1–3
paragraphs) you must convince the reader that this study must be done!
The reason you write a doctoral dissertation or formal research study is
because society, or one of its institutions, has some pressing problem that
needs closer attention. The problem statement delineates this problem while
hinting at the nature of the study—correlational, evaluative, historical,
experimental, etc.—that is, how you will (did) solve the problem. A problem
is appropriate for doctoral research if the problem leads to a “study [that] will
contribute to knowledge and practice” (Creswell, 2005, p. 64). To paraphrase
Maslow (1970), a problem not worth solving is not worth solving well. The
research problem serves as the basis for the interrelatedness of the distinct
elements entailed in the study.
Once a clear and lucid problem statement is formed, all the research you put
into your dissertation should be focused on obtaining a solution. You will be
judged by the degree to which you find the answer to the problem you pose
and thus achieve your purpose.
Warning: Do not solve a problem in research as a ruse for achieving self-
enlightenment.
A problem (or research question) that results in a “yes” or “no” response is not suitable
for formal research. For example, a problem such as determining how many hours of
homework is appropriate for elementary school students is not a research problem, but
the researcher can form a suitable problem statement around this topic with a bit of
finesse. If you can present evidence that elementary school students and parents do not
understand why students are given several hours of homework a day, then a study can be
designed to determine what benefits, if any, homework has for elementary school students.
Determining if stock options are beneficial for employee morale is not a problem (actually it is a
proposed solution) and is not appropriate for research, because this statement leads to a binary
conclusion (either it is beneficial or not). However, if a problem retaining quality employees exists,
then a study can be conducted to determine what types of benefits can increase job commitment.
A problem statement that is too narrowly focused might direct the researcher
only toward trivia. A statement that is too broad might not adequately
delineate the relationships or concepts involved in the study. Development of
a well-constructed problem statement leads to the logical outgrowth of well-
constructed research questions or hypotheses and supports all aspects of a
research project. Many researchers have difficulty formulating a succinct
problem statement. The following activity can assist you in preparing a
delectable problem statement. Further suggestions are offered in PHASE 3.
Cutting Board
By answering the following questions, you will be able to develop a mouth-
watering problem statement. Fill in the blanks as best you can, and from these
ingredients try to cook up a delicious problem statement.
1. What is wrong that needs to be addressed?
______________________________________
2. Where is this problem found (what profession(s), subspecialty)? [This
will help in your literature review]
_____________________________________________________________
3. What are some of the ill effects of this problem on society at large
and/or some subset of society? [This will help with your background
section]
_____________________________________________________________
4. Why are you interested in this problem? [This will help in your
significance statement] Why would someone else be interested in this
problem? __________________________________
5. Who is affected? (What group would care about this problem?) [This
will help define the sample and population and help justify significance]
What part of this problem can this study help solve?
___________________________
6. How can this study help (assist in making wiser choices, debunk a
myth)? [This will help define your purpose and significance]
______________________________________________________________
7. What professional value will the research have? (Clarify an ambiguous
point or theory, look at a new aspect of a problem, aid in an important
decision-making process, etc.) [This helps establish the purpose and
significance] What types of journals would be interested in publishing
this study?________________ _________________________
8. What needs to be done (analyze, describe, evaluate, test, understand,
determine)? [This will help decide the method(s) and instruments to be
used] ___________________________________________________
9. What topics, subjects, or issues are involved (stock market, drugs,
violence, language development, glass ceiling, assessment, euthanasia,
etc.)? [This will help in the literature review]
_________________________
10. How does the study relate to the development or the refinement of
theory? [This will help with your theoretical framework]
___________________________________________________
11. What could result from this study (clarify, debunk, relieve, assist, create,
recommend)? [This will help in interpreting the results]
__________________________________
12. What harm would (could) be done if this study was NOT done? [This
will help with the significance of the study].
______________________________________
13. (optional) What has already been done about the situation? What hasn’t
been done? Who is requesting such a study? [This will help with your
literature review]. ________________________________
14. Where can you find the latest data or evidence to support the depth of
this problem?
___________________________________________
Cutting Board
1. To help you develop your problem statement, fill in the blanks:
There is a problem in ___________ (societal organization). Despite
_________________ (something that should be happening),
________________ is occurring. This problem has negatively impacted
____________ (victims of problem) because ______________. A
possible cause of this problem is ___________. Perhaps a study that
investigates ______________ by ______________ (method) could
remedy the situation.
2. Use the following checklist to make certain the problem statement
meets doctoral level requirements.
QUALITATIVE | QUANTITATIVE
Theory development | Theory testing
Naturalistic or organic settings | Synthetic settings
Subjective | Objective
Observations, interviews | Tests, surveys
Descriptive statistics | Descriptive and inferential statistics
Generates hypothetical propositions | Generates predictive relationships
Philosophical roots: Phenomenology | Philosophical roots: Positivism
Goal: Understanding, description, generate hypotheses | Goal: Prediction, control, confirmation, test hypotheses
Some questions to answer in designing a qualitative study:
Are the basic characteristics or assumptions of a qualitative study
clearly stated?
Will the reader have an understanding of how this qualitative study
differs from a quantitative study?
Is there information provided so that a reader will understand the
origins of the qualitative design for this research study?
Will the reader gain an understanding of how the experiences of the
researcher shape his or her values and potential bias in the research?
Is information provided on how the researcher will gain entry to
research sites (if needed) and how approval will be obtained to collect
data?
Are the procedures for collecting data thoroughly and clearly discussed?
Are reasons provided for the particular method of data collection?
Are methods to code information set forth?
Are the specific data analysis procedures identified in relation to
specific research designs, such as for ethnographic approaches,
grounded theory, case studies, and phenomenology?
How will information validity (i.e., the instrument measures what it
purports to measure) and reliability (consistency) be assured [see
verification of information in a qualitative study]? Are definitions,
delimitations (boundaries), and limitations (weaknesses) stated?
Are the research outcomes presented in view of existing theory and the
literature? Is there a contribution to the existing theory base? Are you
developing altogether new theory?
FOR YOUR INFORMATION AND EDUCATION
Internal validity is the extent to which one can draw valid conclusions
about the causal effects of one variable on another. It depends on the
extent to which extraneous variables have been controlled by the
researcher.
Internal reliability is the extent to which items in an instrument are
correlated with one another and measure the same construct. Cronbach’s
alpha is usually used to measure this (see the computational sketch after
these definitions).
External validity is the generalizability or the extent to which the
findings in the study are relevant to participants and settings beyond
those in the study.
External reliability is consistency or stability of a measure when
repeated measurement gives the same result.
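To make the idea of internal reliability concrete, the short sketch below computes
Cronbach's alpha in plain Python from its usual formula, alpha = [k / (k - 1)] * (1 - sum
of item variances / variance of total scores). It is only an illustration: the four-item,
five-respondent data set is hypothetical, and in practice you would rely on statistical
software such as SPSS or a vetted library rather than hand-rolled code.

def variance(values):
    # Sample variance (n - 1 in the denominator).
    mean = sum(values) / len(values)
    return sum((v - mean) ** 2 for v in values) / (len(values) - 1)

def cronbach_alpha(item_scores):
    # item_scores: one list per survey item, each holding one score per respondent.
    k = len(item_scores)                                # number of items
    totals = [sum(row) for row in zip(*item_scores)]    # total score per respondent
    item_variance_sum = sum(variance(item) for item in item_scores)
    return (k / (k - 1)) * (1 - item_variance_sum / variance(totals))

# Hypothetical responses to a 4-item survey scored 1-5, from five respondents.
items = [
    [4, 5, 3, 4, 2],
    [4, 4, 3, 5, 2],
    [3, 5, 4, 4, 1],
    [4, 4, 3, 5, 2],
]
print(round(cronbach_alpha(items), 3))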
Trochim (2004) points out the relationship between reliability and
validity through an interesting metaphor. He uses a set of concentric
circles, where the center is the target or the true value of the variable
that the researcher is trying to measure. See the illustration below.
(Source: University of North Texas Health Science Center,
http://www.hsc.unt.edu/departments/cld/AssessmentReliabilityValidity.cfm)
Imagine that your instrument is a means of shooting at the target and
each “shot” is a measure of the variable. If you measure the concept
perfectly you hit the target at the bull’s-eye; if you don’t, you are
missing the bull’s-eye. If you find most of your shot points are clustered
in one small place, but away from the center, the instrument is reliable,
but not valid. If the points are scattered all over the target such that the
mean value would be the center, the instrument is valid but not reliable
[However, it is important to note the school of thought that in scholarly
research reliability is a prerequisite for measurement validity]. If most
of your points are scattered in one half of the target, but not near the
center, the instrument is neither valid nor reliable. If most of the points
are near the center then your instrument is both valid and reliable. As a
scholarly researcher you strive to have your instruments be both valid
and reliable. http://tinyurl.com/d5akva
Other types of validity and reliability exist. For example, concurrent
validity is a method of determining validity by correlating results with
known objective measures. An example would be to validate a measure
of political conservatism by correlating it with reported voting behavior.
Construct validity hypothesizes a relationship between scores obtained
for one variable with scores obtained from another variable that is
known to be associated with it. For example, depression is a construct
regarding a personality trait manifested by behaviors such as lethargy,
loss of appetite, difficulty in concentrating on tasks, and so forth.
Verification in a Qualitative Study
Validity and reliability must be addressed in a qualitative study. The accuracy,
dependability, and credibility of the information depend on it. In quantitative
research, reliability refers to the ability to replicate the results of a study. In
qualitative research, there is no expectation of replication; indeed, there is some
debate surrounding the use of the term reliability in qualitative work, and
some prefer the term dependability.
There are various ways to address validity and reliability (or dependability)
in qualitative studies, including triangulation of information among different
sources, receiving feedback from informants (member checking), and forming
a unique interpretation of events. Other methods to help with dependability
include creating an audit trail and having another person code part or all of
the data for coding comparison. An audit trail should include all field notes and
any other records kept of what you do, see, hear, think, and so on. These notes
record where and when the inquiry took place, what was said and observed,
and describe your thoughts about how to proceed with the study, sampling
decisions, ethical concerns, and so on.
In addition, qualitative analysis software helps to assure accuracy. When
working with qualitative software such as NVivo or ATLAS.ti, you record the
attributes of a case by classifying the node, or setting the node's classification
in the node's properties, and then setting attribute values related to the study. In
ATLAS.ti, the overall project file is referred to as a Hermeneutic Unit. You
need to share these
nodes, units, tables, and graphs with the reader. For more information check
out: http://tinyurl.com/zrslcpq and http://tinyurl.com/zfalgro
Creswell (1997) provides an example of a qualitative procedure. The opening
descriptions of the qualitative research paradigm, which have been taken from
several authors (as cited in Creswell, p. 161), are as follows:
The intent of qualitative research is to understand a particular social
situation, event, role, group, or interaction. It is largely an investigative
process where the researcher gradually makes sense of a social
phenomenon by contrasting, comparing, replicating, cataloguing and
classifying the object of study. . . . This entails immersion in the
everyday life of the setting chosen for the study; the researcher enters
the informant’s world and through ongoing interaction, seeks
informants' perspectives and meanings.
When you run a Word Frequency query in NVivo, the results are
displayed in Detail View. You can view the results as a list on the
Summary pane or as a visualization on the Word Cloud pane. The
word cloud visualization displays up to 100 words in varying font
sizes, where frequently occurring words are in larger fonts.
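As a rough illustration of what such a word frequency query computes (this is plain
Python, not NVivo's interface, and the short transcript is hypothetical), the sketch
below tokenizes a passage, drops common stop words, and lists the most frequent
remaining words, the same counts a word cloud scales into font sizes.

import re
from collections import Counter

STOP_WORDS = {"the", "a", "an", "and", "or", "of", "to", "in", "as", "i", "my", "was"}

def word_frequencies(text, top_n=10):
    # Tokenize, drop stop words, and count the remaining words.
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter(w for w in words if w not in STOP_WORDS)
    return counts.most_common(top_n)

# Hypothetical interview excerpt.
transcript = ("I felt anxious before the exam, and the anxiety grew as the exam "
              "approached. Talking with my advisor reduced the anxiety.")
for word, count in word_frequencies(transcript, top_n=5):
    print(word, count)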
Criteria that are meaningful within an evaluative process include the
following: the credibility criterion, persistent observation, member checks, and
expert review (Guba & Lincoln, 1989). The credibility criterion is similar to
internal validity, with the focus of establishing a match between the responses
of the experts (e.g., teachers, administrators, and parents in an educational
study) and those realities represented by the evaluator and designer of the
instrument (the researcher and the research in this study).
Persistent observation (Guba & Lincoln, 1986, pp. 303-304) requires
sufficient observation to enable the evaluator to identify those characteristics
and elements in the situation that are most relevant to the issue pursued and to
focus on the details.
Member checking is the process of verifying information with the targeted
group. It allows the stakeholder the chance to correct errors of fact or errors of
interpretation. Member checks add to the validity of the observer’s
interpretation of qualitative observations. Member checking includes
verifying what was said against transcripts of the dialogues for accuracy. During these
checks, you can ask participants for clarification. Member checking also
involves the process of checking with research participants whether the
identified concepts and codes fit one’s personal experience. In your research
report, make certain that you describe how the results of the check elaborated
or restricted conclusions.
Expert review is one of the primary evaluation strategies used in both
formative (How can this program be improved?) and summative (What is the
effectiveness and worth of program?) evaluation. It is often a good idea to
provide experts with some sort of instrument or guide to ensure that they
critique all of the important aspects of the program to be reviewed.
VREP ensures the clarity of the items and their relevance to the problem and
constructs of the study. VREP requires the researcher to clarify operational
definitions and include the domains and constructs under investigation. To use
VREP effectively, you need to assign meaning to a variable by specifying the
activities and operations necessary to measure, categorize, or manipulate the
variable. For example, to measure the construct of successful aging, the
following domains could be included: degree of physical disability (low
number), prevalence of physical performance (high number), and degree of
cognitive impairment (low number). If you were to measure creativity, this
construct is generally recognized to consist of flexibility, originality, elaboration,
and other concepts. Prior studies can be helpful in establishing the domains of a
construct.
Now that you have selected your topic, you are in an excellent position to
determine what you will use to cook up your research. You need to commit to
a method. Research methodology refers to the broad perspective from which
you will view the problem, make the investigation, and draw inferences. Most
methods are subsets of qualitative and quantitative paradigms. A brief
discussion of several research methods was presented earlier. We will now
take a closer look at some of these methods through the perspective of time.
This will give you an opportunity to validate that you are using the proper
recipe to successfully prepare your feast.
Although no single research method is likely to describe each aspect of the
problem you are planning to investigate, there are most likely general
categories into which your study will fall. There is no universal standard for
categorizing research designs and different authors might use different names
of designs in their discussions of them. Thus what is shown here is intended
more to be informative than exhaustive. This lack of universalism also causes
problems when critiquing research as many published studies do not identify
the design used. Selecting an appropriate design for a study involves
following a logical thought process. A calculating mind is required to explore
all possible consequences of using a particular design in a study. It is highly
recommended that you find an excellent primer on the methodology you
choose and cite this in your study at appropriate times.
Choose Your Method Wisely
Don’t be too quick in running away from using a quantitative method because
you fear statistics. A qualitative approach to research can yield new and
exciting understandings, but it should not be undertaken because of a fear of
quantitative research. A well-designed quantitative research study can often
be accomplished in very clear and direct ways. A similar study of a qualitative
nature usually requires considerably more time and carries the burden of creating
new paths for analysis where previously no path existed. Choose your method
wisely!
After reading the descriptions below, find the classification that best describes
the nature of your study. We will primarily use problems associated with low
socioeconomic class and its relation to education. An example will be
provided to show how each of these methods could analyze a different aspect
of the problem.
Past Perspective
If your primary interest is in past events or factors in the past that have
contributed to the problem you are researching, then your method will likely
be historical or causal-comparative.
Historical Research
The researcher looks back at significant events in the relatively distant past
and seeks, by gathering and analyzing contemporary descriptions of the event,
to provide a coherent and objective picture of what happened and arrive at
conclusions about the causes, effects, or trends of past events that might be
helpful in explaining the present or anticipating future events. The historical
researcher deals with the meaning of events. There is usually a reconstruction
of the past in relation to a particular theory or conceptual scheme. The heart
of this research is the interpretation of facts and events to determine not just
what happened, but why they happened. The data of historical research are
subject to two types of evaluation: to determine if a document is authentic
and, if indeed it is authentic, what the document means. The researcher is
concerned with external or internal evidence and subjects the data to external
or internal criticism.
Historical research deals with the meaning of events. The heart of the
historical method is not the accumulation of facts, but rather the interpretation
of the facts (Leedy & Ormrod, 2001). The principal product of historical
research is context—an understanding of the organizational, individual,
social, political, and economic circumstances in which phenomena occur
(Mason & McKenney, 1997).
A study of 19th century teaching practices with children of
low socioeconomic class using teacher diaries as primary
sources.
Content Analysis
The researcher examines a class of social artifacts, typically written
documents. Topics appropriate for content analysis include any form of
communication answering who says what? to whom? why? how? and with
what effect? This is an unobtrusive method of doing research, but it is limited
to recorded information. Coding is used to transform raw data into
standardized, quantitative form. Data are analyzed through the use of official
or quasi-official statistics.
Content analysis examines words or phrases within a wide range of texts,
including books, book chapters, essays, interviews, and speeches as well as
informal conversation and headlines. By examining the presence or repetition
of certain words and phrases in these texts, a researcher is able to make
inferences about the philosophical assumptions of a writer, a written piece,
the audience for which the piece is written, and even the culture and time in
which the text is embedded. Due to its wide array of applications, researchers
in literature and rhetoric, marketing, psychology, and cognitive science, as
well as many other fields use content analysis. http://tinyurl.com/9voe7mq
Documents from Title 1 programs are analyzed over a 10-year
period to determine any patterns or trends in entitlements.
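As a purely hypothetical illustration of the coding step described above, turning
recorded text into standardized, quantitative form, the plain-Python sketch below
tallies how often each category in a small, invented coding scheme appears across two
invented document excerpts. Real content analysis would use a validated coding scheme
and far more material.

from collections import Counter

# Invented coding scheme: category name -> keywords that signal the category.
CODING_SCHEME = {
    "funding":  {"entitlement", "grant", "allocation", "budget"},
    "staffing": {"teacher", "aide", "tutor", "staff"},
    "outcomes": {"achievement", "score", "graduation", "proficiency"},
}

def code_document(text):
    # Return a Counter of category hits for one document.
    words = set(text.lower().split())
    return Counter({category: len(words & keywords)
                    for category, keywords in CODING_SCHEME.items()})

documents = [
    "The grant allocation paid for one tutor and one aide",
    "Reading achievement and proficiency rose after the budget increase",
]
totals = sum((code_document(doc) for doc in documents), Counter())
print(totals)  # category frequencies across all documents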
Present Perspective
If your study adopts a viewpoint that is in the present time, then you will
likely be examining a phenomenon as it occurs with a view to understanding
its nature, organization, and the way it changes.
Developmental Research
The researcher examines patterns and sequences of growth and change over
time. This research can be done as a longitudinal study (the same group
examined over a period of time) or as a cross-sectional study (different groups
examined at the same time that might represent different ages or other
classifications). Check out the following URL to learn more about
developmental research techniques: http://tinyurl.com/2flrtms
A group of freshman students from a high-risk school are
studied to examine the factors that affect the ability to
graduate in 4 years.
Descriptive Research
Descriptive research is the study of a phenomenon “as it is” without making
any “changes or modifications” to it (Leedy & Ormrod, 2001, p. 191).
Descriptive designs address the “what” and “how” rather than “why”
questions. Although a descriptive approach to a study requires the researcher to
observe and describe the phenomenon of interest, the process of description is
more precise, accurate, and carefully done than is usual in casual descriptions.
In descriptive studies, both surveys and interviews can be used to collect data
(Babbie, 1973). Surveys can result in large samples for the study, while
interviews can provide detailed insights into the experiences of individuals.
Unlike experimental research, no treatment is manipulated or controlled by
the researcher. This method is not used to determine “cause and effect”
relationships (p. 191).
The descriptive researcher makes a systematic analysis and description of the
facts and characteristics of a given population or event of interest. The
purpose of this form of research is to provide a detailed and accurate picture
of the phenomenon as a means of generating hypotheses and pinpointing
areas of needed improvements. Descriptive studies are designed to gain more
information about a particular characteristic within a particular field of study.
A descriptive study may be used to develop theory, identify problems with
current practice, justify current practice, make judgments, or identify what
others in similar situations may be doing. Descriptive research can combine
correlational, developmental, and observation methods. A descriptive study
tries to discover answers to the questions who, what, when, where, and,
sometimes, how. The researcher creates a profile of a group of problems,
people, or events. The descriptive study is popular in business research
because of its versatility across disciplines (Cooper & Schindler, 2002).
A descriptive study of an urban ghetto is carried out to
understand what programs are available to preschool children
of low socioeconomic status and how effective these
programs are in accomplishing their goals.
Correlational Research
The researcher investigates one or more characteristics of a group to discover
the extent to which the characteristics vary together. Descriptive and
correlational studies examine variables in their natural environments and do
not include researcher-imposed treatments. Correlational studies display the
relationships among variables by such techniques as cross-tabulation and
correlations. Correlational studies are also known as ex post facto studies.
This literally means from after the fact. The term is used to identify that the
research has been conducted after the phenomenon of interest has occurred
naturally. The main purpose of a correlational study is to determine
relationships between variables and, if a relationship exists, to determine a
regression equation that could be used to make predictions about a population. In
bivariate correlational studies, the relationship between two variables is
measured. Through statistical analysis, the relationship will be given a degree
and a direction. The degree of relationship indicates how closely the
variables are related. This is usually expressed as a number between -1 and
+1, and is known as the correlation coefficient. A zero correlation indicates no
relationship. As the correlation coefficient moves toward either -1 or +1, the
relationship gets stronger until there is a perfect correlation at the end points.
The significant difference between correlational research and experimental or
quasi-experimental design is that causality cannot be established through
manipulation of independent variables. This leads to the pithy truism:
Correlation does not imply causation. For example, in studying the
relationship between smoking and cancer, the researcher begins with a sample
of those who have already developed the disease and a sample of those who
have not. The researcher then looks for differences between the two groups in
antecedents, behaviors, or conditions such as smoking habits. If it is found
that there is a relationship between smoking and a type of cancer, the
researcher cannot conclude that smoking caused the cancer. Further research
would be needed to draw such a conclusion.
The relationship between socioeconomic status and school
achievement of a group of urban ghetto children is examined.
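To make the correlation coefficient concrete, the plain-Python sketch below computes a
bivariate Pearson r for a small, hypothetical set of paired scores (family income and a
reading score). The result describes the strength and direction of association only; as
noted above, it says nothing about causation, and a real study would use statistical
software on a much larger sample.

import math

def pearson_r(x, y):
    # Bivariate Pearson correlation coefficient; the result falls between -1 and +1.
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    covariance = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    spread_x = math.sqrt(sum((a - mean_x) ** 2 for a in x))
    spread_y = math.sqrt(sum((b - mean_y) ** 2 for b in y))
    return covariance / (spread_x * spread_y)

# Hypothetical paired data: family income (in $1,000s) and a reading score.
income = [18, 22, 25, 31, 40, 47, 52]
score = [61, 58, 66, 70, 72, 75, 81]
print(round(pearson_r(income, score), 3))  # positive r: the variables rise together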
Causal-Comparative Research
The researcher looks at present characteristics of a problem, views them as
the result of past causal factors, and tries, by examining those past factors, to
discover the causes, critical relationships, and meanings suggested by the
characteristics. Usually two or more groups are compared using these criteria.
Causal-comparative and correlational methods are similar in that both are
nonexperimental methods: they do not involve manipulation of an
independent variable under the control of an experimenter, and
random assignment of participants is not possible. This implies that variables
need to be observed as they occur naturalistically. As a result, the key, and
omnipresent problem, in nonexperimental research is that an observed
relationship between an independent variable and a dependent variable might
not be causal but instead the result of the operation of a third variable.
Quasi-experimental design
Quasi-experimental designs were developed to provide alternate means for
examining causality in situations not conducive to experimental control. As
in the case of the true experimental design, quasi-experiments involve the
manipulation of one or more independent variables and the measurement of a
dependent variable. The designs have been developed to control threats to
validity as much as possible in situations where at least one of the three elements of
true experimental research is lacking (i.e., manipulation, randomization,
control group). There are many types of quasi-experimental design. Most are
adaptations of experimental designs where one of the three elements is
missing. The three major categories of quasi-experimental design are the
nonequivalent-groups designs, cohort designs, and time-series designs (Cook
& Campbell, 1979). The nonequivalent-groups design is the most frequently
used quasi-experimental design (Heppner, Kivlighan, & Wampold, 1992;
Huck & Cormier, 1996). This design is similar to the pretest-posttest control
group experimental design. The difference is the nonrandom assignment of
participants to their respective groups in the quasi-experimental design.
Cohort designs are typically stronger than nonequivalent-groups design
because cohorts are more likely to be closer to equal at the outset of the
experiment (Heppner et al., 1992). An example of such a cohort would be
students at middle school Alpha and students at middle school Beta during a
similar time frame. The third class of quasi-experimental designs is the time-
series design, characterized by multiple observations over time (e.g.,
Kivlighan & Jauquet, 1990), involving repeated observations of the same
participants, or of similar but different participants, to record differences
attributed to some treatment. In the interrupted time-series design (the most basic of this
class), a treatment is introduced at some point in the series of observations
(Heppner et al., 1992).
The researcher studies groups of prodigious young musicians from two inner-city
schools to determine which group progressed more in a 6-month period. One group
participated in a mentorship program, and the other did not. For more information,
check out Burns and Grove (1993, pp. 305–316).
Case Study
Case study is a type of qualitative research that concentrates on a single unit
or entity, with boundaries established by the researcher (Lichtman & Taylor,
1993). The case study method refers to descriptive research based on a real-
life situation, problem, or incident and situations calling for analysis,
planning, decision making, or action with boundaries established by the
researcher. Case study research is often used when the questions are how and
why, rather than what and how many, and when particularistic, descriptive,
heuristic, and inductive phenomena are considered. Sudzina and Kilbane
(1992) maintain that the method requires that every attempt be made to
provide an unbiased, multidimensional perspective in presenting the case and
arriving at solutions.
According to Goetz and LeCompte (1984), there are eight points in the case
study process where important theoretical decisions need to be made: focus
and purpose, research design, choice of participants, settings and context, the
role of the researcher, data collection strategies, data analysis methods, and
findings and interpretations. Case studies use inductive logic to discover the
reality behind the data collected through the study. Good case studies benefit
from multiple sources of evidence. Experts in case method recommend a
combination of the following sources: a) direct observations (e.g., human
actions or a physical environment); b) interviews (e.g., open-ended
conversations with key participants); c) archival records (e.g., student or
medical records); d) documents (e.g., newspaper articles, letters, e-mails,
memos, reports); e) participant-observation (e.g., being identified as a
researcher but also filling a real-life role in the scene being studied); and f)
physical artifacts (e.g., computer downloads, photos, posters; that is, objects
that surround people physically and provide them with immediate sensory
stimuli to carry out activities).
A high school in a low socioeconomic area is studied to gather data for an analysis of
attitudes and practices as they relate to drug education.
Phenomenology
This type of research has its roots in existentialism. Phenomenology is a 20th-
century philosophical movement dedicated to describing the structures of
experience as they present themselves to consciousness, without recourse to
theory, deduction, or assumptions from other disciplines such as the natural
sciences. Phenomenology is both a philosophy and a research method. The
purpose of phenomenological research is to describe experiences as they are
lived in phenomenological terms (i.e., to capture the lived experience of study
participants). The philosophers from which phenomenology emerged include
Husserl, Kierkegaard, Heidegger, and Sartre.
The phenomenological perspective includes beliefs in the investigator as a
learner, the plurality and multiplicity of internalized culture, and the
uniqueness of individuals. The researcher acknowledges and embraces
individual differences within groups or cultures, accepting that individuals
may internalize different elements from diverse phenomena to meet their
respective needs.
Phenomenologists view the person as integral with the environment. The
focus of phenomenological research is people’s experience in regard to a
phenomenon and how they interpret their experiences. Phenomenologists
agree that there is not a single reality; each individual has his or her own
reality. This is considered true even of the researcher’s experience in
collecting data and analyzing it. “Truth is an interpretation of some
phenomenon; the more shared that interpretation is the more factual it seems
to be, yet it remains temporal and cultural” (Munhall & Stetson, 1989). There
are four aspects of the human experience that are of interest to the
phenomenological researcher:
1. Lived space (spatiality)
2. Lived body (corporeality)
3. Lived human relationships (relationality)
4. Lived time (temporality)
All of these aspects are taken into consideration with the understanding that
people see different realities in different situations, in the company of
different people, and at different times. The feelings expressed about one’s
life in an interview given at a certain time might be different from those given
at another time.
The broad question that phenomenologists want answered is as follows: What
is the meaning of one’s lived experience? The only reliable source of
information to answer this question is the person who has experienced this
phenomenon. Understanding human behavior or experience requires that the
person interpret the action or experience for the researcher, and the
researcher must then interpret the explanation provided by each person.
The first step in conducting a phenomenological study is to identify the
phenomenon to explore. Next, the researcher develops research questions.
Two factors need to be considered in developing the research questions:
What are the necessary constituents of this feeling or experience?
What does the existence of this feeling or experience indicate
concerning the nature of the human being?
After developing the research question, the researcher identifies the sources of
the phenomenon being studied and from these sources seeks individuals who
are willing to describe their experience(s) with the phenomenon in question.
These individuals must understand and be willing to express their inner
feelings and describe any physiological experiences that occur with the
feelings.
Data are collected through a variety of means: observation, interactive
interviews, videotapes, and written descriptions by participants. Typically, the
majority of data are collected by in-depth conversations in which the
researcher and the participant are fully interactive. In most phenomenological
studies, the investigator collects data from individuals who have experienced
the phenomenon under investigation. Typically, this information is collected
through long interviews. A phenomenological long interview consists of a
series of pre-determined open-ended questions that prompt, but do not lead,
participants in the discussion and explanation of their experiences through the
narrative process. The initial questions are followed by probing questions to
reveal lived experiences and perceptions (Moustakas, 1994). These include,
“Can you expand on …” or “Tell me more about …”. Additional inquiry is
generated from the developing concepts and categories identified from
participants during the interviews and from previous study participants’
interviews.
Analysis begins when the first data are collected. This analysis will guide
decisions related to further data collection. The meanings attached to the data
are expressed within the phenomenological philosophy. The outcome of
analysis is a theoretical statement responding to the research question.
Statements are validated by examples of the data, often direct quotes from the
participants. The researcher also depends heavily on his or her intuitive skills.
It is usually wise for the researcher to frame his or her own feelings, attitudes,
biases, and understandings of the phenomenon prior to conducting a
phenomenological study and bracket this information prior to conducting the
interviews.
Husserl conceived of phenomenology as a means of philosophical inquiry to
examine and suspend all assumptions about the nature of any reality. Three
terms emerged from Husserl’s concept of phenomenological inquiry: epoché,
reduction, and bracketing. Epoché, borrowed from the Greek skeptics, refers
to questioning of assumptions to examine a phenomenon fully.
Reduction is the consideration of only the basic elements of an inquiry
without concern for what is accidental or trivial. Bracketing is the setting
aside of some portion of an inquiry, so as to look at the whole. These three
concepts are often used synonymously to explain the suspended judgment
necessary for phenomenological inquiry. From the Husserlian philosophical
stance, only from this point of suspended judgment can inquiry proceed
unencumbered from masked assumptions about the nature of the phenomenon
observed. More information on phenomenology is available at
http://www.phenomenologycenter.org/phenom.htm
A researcher spends several months at an inner-city high school to determine the perceptions of the
teachers and students with respect to school policies.
Q-method
This methodology was invented in 1935 by British physicist-
psychologist William Stephenson (1953) and is most often associated with
quantitative analysis due to the statistical procedures involved. However,
Stephenson was looking to reveal the subjectivity involved in any situation—
e.g., in aesthetic judgment, choosing a particular profession, perceptions of
organizational roles, political attitudes, appraisals of health care, experiences
of bereavement—which is most often associated with qualitative methods.
Proponents of Q-methodology claim that it “combines the strengths of both
qualitative and quantitative research traditions” (Dennis & Goldberg, 1996, p.
104) and serves as a bridge between the two (Sell & Brown, 1984).
Some of the quantitative obstacles to the wider use of Q-method are reduced
with the advent of the software package Q-Method (Atkinson, 1992).
A researcher invites Head Start graduates to characterize the education rendered
by sorting statements (each typed on a separate card) into a quasi-normal
distribution ranging from “most like the education provided” (+5) to “most unlike
the education provided” (-5), the result being a Q-sort table. The Q-sorting
session is followed by focused interviews during which Head Start graduates are
invited to expand on their experiences.
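For readers comfortable with a bit of programming, a forced quasi-normal Q-sort
distribution can be represented and checked with a few lines of Python. The sketch
below is purely illustrative: the column quotas, the 36-statement deck, and the
function name are hypothetical and are not drawn from any particular Q-method
software package.

from collections import Counter

# Hypothetical forced quasi-normal distribution for a 36-statement Q-sort,
# with columns running from -5 ("most unlike") to +5 ("most like").
FORCED_DISTRIBUTION = {-5: 1, -4: 2, -3: 3, -2: 4, -1: 5, 0: 6,
                       1: 5, 2: 4, 3: 3, 4: 2, 5: 1}

def q_sort_is_valid(sort):
    # sort maps each statement number to the column (-5..+5) where it was placed.
    counts = Counter(sort.values())
    return all(counts.get(column, 0) == quota
               for column, quota in FORCED_DISTRIBUTION.items())

# Example: assign statements 1-36 to columns so that every column quota is filled.
columns = [c for c, quota in FORCED_DISTRIBUTION.items() for _ in range(quota)]
example_sort = {statement: column
                for statement, column in enumerate(columns, start=1)}
print(q_sort_is_valid(example_sort))  # True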
Future Perspective
If your prime interest is the future, and you plan to study a current situation
for the purpose of contributing to a decision about it, changing it, or
establishing a policy about it, you will probably use one of the following
research methodologies:
Applied or Evaluative Research
This type of research is concerned primarily with the application of new
knowledge to the solution of day-to-day problems. The knowledge obtained is
thus contextual. Its purpose is to improve a process by testing theoretical
constructs in actual situations. This approach is based on the premise that the
development and application of theories of explanation and new methods of
analyses are essential to guide empirical research and to assist in the informed
interpretation of evaluative findings. In medical research, a cardiologist might
monitor a group of heart disease patients to see if the diet prescribed by the
American Heart Association is truly effective. A great deal of social research
fits into this category as it attempts to establish whether various organizations
and institutions are fulfilling their purpose and if implemented policies are
effective. The relationship between researcher and participant is one of expert
and client.
Many social action programs have been researched in this manner. It
highlights the symbols of measurement and scientific neutrality but attempts
to minimize the influence of the behavioral science perspective.
An income-enhanced program for raising the socioeconomic status of parents of
preschool children is evaluated for its effects upon school performance of children.
Action Research
This is a type of applied research that is more concerned with immediate
application, rather than the development of a theory. Slight variations on
action research include participatory research, collaborative inquiry,
emancipatory research, action learning, and contextual action research. Action
research focuses on specific problems in a particular situation and usually
involves those who can immediately create change. Bogdan and Biklen
(1992) described action research as a systematic collection of information that
is designed to bring about social change. This kind of research allows that
there could be more than one right way to develop solutions to problems.
The beginnings of action research date back to Lewin (1946). In his study of
group decision and social change, Lewin used his model to describe how to
change people’s relationship to food. His research consisted of analysis, fact-
finding, conceptualization, planning, execution, more fact-finding,
conceptualization, etc. Marrow (1969) saw the Lewin model as a means of
studying participants through changing them and seeing the effect. This type
of inquiry is based on the belief that in order to gain insight into a process,
one must introduce a change and then observe its variable effects and new
dynamics.
Action research is neither quantitative nor qualitative research. It has been
argued that it is more of a tool for change than true research. Action research
“is a way of doing research and working on solving a problem at the same
time” (Cormack, 1991, p. 155). Consulting projects often take on an element
of action research. The research takes place in real-world situations and aims
to solve real problems. The initiating researcher makes no attempt to remain
objective, but openly acknowledges his or her bias to the other participants.
Be sure to check in advance with your dissertation mentor and committee to
make certain they are amenable to this type of design.
The method was developed to allow researchers and participants to work
together to analyze social systems with a view to changing them. In other
words, it was developed to achieve specific goals. It is seen as a community-
based method and has frequently been employed in a wide range of settings
from schools and health clinics to businesses and industry.
The approach might include doing some baseline measures using
questionnaires, observation, or other research methods as an assessment of the
problem. Objectives are then set and decisions made about how to bring about
a change. When change plans are put into action, progress is monitored,
changing the plans as necessary or appropriate. Once the change has been
implemented, a final assessment is made and conclusions drawn,
accompanied by the writing of a report on the project for those involved or for
dissemination to others.
Therefore, action research “is a process containing both investigation and the
use of its findings” (Smith, 1986, as cited in Cormack, 1991, p. 155). The role
of the researcher is to assist practitioners to take control of and change their
own work.
Action research, generally, has the following characteristics and components:
1. Includes an educational component
2. Deals with individuals as members of social groups
3. Is problem focused, context specific, and future oriented
4. Involves a change intervention
5. Aims at improvement and involvement
6. Involves a cyclic process in which research, action, and evaluation are
interlinked
7. Creates an interrelationship where those involved are participants in the
change process
Stringer (1996) used the phrase Look, Think, Act in his book Action
Research: A Handbook for Practitioners. To conduct action research involves
identifying the problem, discussing the problem with practitioners,
conducting a thorough search of the literature, redefining the problem,
selecting an evaluation model, implementing a change, collecting data,
receiving feedback, making recommendations, and disseminating the results
to a larger audience.
A program in which teachers are given in-service workshops and new materials to use
with low socioeconomic status children is implemented in two pilot schools, evaluated
as it progresses, and continually modified to become more effective.
Delphi Method
In the Delphi method, a group of experts are asked, through a series of
surveys, to elucidate a current situation, and make forecasts regarding likely
scenarios and possible resolutions. Responses are initially made
independently and subsequently by consensus, in order to discard any
extreme views (Discenza, Howard, & Schenk, 2002). A traditional Delphi
method brings together panelists in at least three rounds of surveys. This
technique engages experts in issues around interests affecting the topic of
research. You can obtain names and addresses of experts by using a snowball
sample, where you ask experts in the field for lists of other experts in the
field. During the first round, open-ended questions are sent to experts in the
area of study. A second round of questions is developed based on the
responses obtained in the first round of questioning. The respondents usually
rate the responses on a Likert-type scale. The third round of questioning
summarizes the questions of the second round, including the group’s mean
response to the questions from the Likert-type scale. The panel members are
asked to reconsider previous answers in reference to the group’s mean and
revise their answers if desired. The participants are requested to provide a
rationale for answers outside the mean. This justification provides the
researcher with information why an expert’s response differs from the
majority of the group and adds richness to the data. In some circumstances,
this wide array of expert opinion can generate a range of alternative solutions
to issues and problems facing the researcher (Discenza et al., 2002). Delphi
techniques are used when the problem does not lend itself to precise
analytical techniques, but can benefit from subjective judgments on a
collective basis.
Delphi is primarily used in two modes: exploratory (to find out what’s out
there) and refinement (using expert judgments anonymously elicited to fine-
tune quantitatively oriented estimates). For the technique to work, the
respondents’ estimates need to be calibrated for over- or underestimation
errors, the questions need to be neutrally phrased, and some technique or
researcher oversight is necessary to control for the inclusion of mutually
exclusive data components in the Delphi analysis. This technique is gaining
more popularity as members of e-mail lists feed back information and perhaps
try to come to a consensus on future directives. The Delphi method was
originally developed at the RAND Corporation by Olaf Helmer and Norman
Dalkey.
As it is difficult to make summaries of other than quantitative responses, the
questions used in the Delphi process are often quantitative, e.g., “What will
the price of crude oil be in 20 years?” On the basis of this type of response,
the researcher could calculate descriptive statistics such as the mean, standard
deviation, and the ranges. One advantage of the method is that you can
readily use the range as a measure of the reliability of the forecast. Of course,
nothing prevents using qualitative or any other type of questioning if the
nature of the object so requires. If the respondents are amenable to the extra
effort, they may be asked to justify their opinion, especially if it differs from
that of the majority. The Delphi procedure is normally repeated until the
respondents are no longer willing to adjust their responses. You should be
aware that this design tends to have a high dropout rate among participants,
given the multiple rounds of data collection and review.
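If you are comfortable with a little programming, the descriptive statistics described
above are easy to compute. The Python sketch below uses hypothetical second-round
estimates; the numbers and the one-standard-deviation cutoff for requesting a
justification are illustrative assumptions only.

import statistics

# Hypothetical second-round Delphi estimates (e.g., the forecast price of crude
# oil in 20 years, in dollars per barrel) from a ten-member expert panel.
estimates = [85, 90, 95, 100, 100, 105, 110, 120, 150, 60]

mean = statistics.mean(estimates)
sd = statistics.stdev(estimates)          # sample standard deviation
spread = max(estimates) - min(estimates)  # range, a rough gauge of agreement

print(f"Panel mean: {mean:.1f}, SD: {sd:.1f}, range: {spread}")

# Panelists whose answers fall well outside the group mean (here, more than one
# standard deviation away) would be asked to justify or revise them in the next round.
outliers = [e for e in estimates if abs(e - mean) > sd]
print("Estimates needing justification:", outliers)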
The modified Delphi technique, according to Custer, Scarcella, and Stewart
(1999, p. 2), is similar to the full Delphi in terms of procedure (i.e., a series of
rounds with selected experts) and intent (i.e., to elucidate current situations,
predict a likely future occurrence, and to arrive at consensus regarding the
resolution of a profound problem). The major modification consists of
beginning the process with a set of carefully selected items. These preselected
items may be drawn from various sources including related competency
profiles, synthesized reviews of literature, and interviews with selected
content experts. The primary advantage of this modification to the Delphi is
that it (a) typically improves the initial round response rate and (b) provides a
solid grounding in previously developed work.
A researcher directs identical questions to a group of experts, asking them to give their opinions on
how the future of the Internet might affect the future of education. In the next step, the researcher
creates a summary of all the replies she or he has received, sends this to the respondents, and asks if
any expert wants to revise his or her original response.
Nouveau Cuisine
Below is a list of some nontraditional meals that have been successfully
served at modern-day banquets.
Heuristic Research
In action research, hypotheses are created and tested, whereas in
heuristic research the investigator encourages individuals to discover their
own hypotheses in relation to a problem and decide on methods that would
enable them to investigate further on their own. The heuristic approach
seeks neither to predict nor to determine causal relationships. The
heuristic methodology, by design, does not quantify the experience with tools of
measurement, ratings, or scores. Heuristic researchers seek to reveal more
fully the essence or meaning of a phenomenon of human experience; discover
the qualitative aspects, rather than the quantitative dimensions of the
phenomenon; engage one’s total self; and evoke a personal and passionate
involvement and active participation in the process. The researcher
illuminates thought through careful descriptions, illustrations, metaphors,
poetry, dialogue, and other creative renderings.
In heuristic research, the emphasis is on personal commitment rather than
linear methodologies. Its purpose is to describe a meaningful pattern as it
exists in the universe without any predesigned plan, thus eliminating
suggestive speculation. This type of research intrinsically tends to be more
open-ended than most.
FOR YOUR INFORMATION AND EDUCATION
Ethnographic
An ethnographic study has its roots in anthropology and seeks to develop an
understanding of the cultural meanings people use to organize and interpret
their experiences. This can be done through an “emic” approach (studying
behaviors from within a culture) or through an “etic” approach (studying
behaviors from outside the culture and examining similarities and differences
across cultures). Data are usually obtained through participant observation by
the researcher or research assistant and then verified with the group living the
phenomenon. Ethnography focuses on the culture of a group of people.
The goal of critical ethnographic research is to empower groups and
individuals, thereby facilitating social change. In contrast to traditional
ethnographic research where the researcher seeks primarily to describe or
understand (not change) the conditions of the group being studied, critical
ethnography is more in line with participatory action research where a
researcher assumes a critical stance and the researcher becomes a change
agent who is collaboratively developing structures intended to critique and
support the transformation of the communities being studied (Barab et al.,
2004). In this methodology, the researcher often plays many roles in the
research process, including observer, facilitator, mentor, tutor, and advocate.
Ethnographic researchers can study broadly defined cultures (e.g.,
Californians, incarcerated teens, and indigenous people) in what is sometimes
referred to as a macro-ethnography. Alternatively, they might focus on more
narrowly defined cultures (e.g., the culture of the homeless in San Francisco,
online mathematics teachers in traditional universities) referred to as micro-
ethnography. An underlying assumption of the ethnographer is that every
human group eventually evolves a culture that guides the members’ view of
the world and the way they structure their experiences.
The aim of the ethnographer is to learn from (rather than to study) members
of a cultural group to understand their worldview as they define it.
Ethnographic researchers sometimes refer to emic and etic perspectives. An
emic perspective refers to the way the members of the culture envision their
world—it is the insider’s view. The etic perspective, by contrast, is the
outsider’s interpretation of the experiences of that culture.
Ethnographers strive to acquire an emic perspective of a culture under study.
Moreover, they strive to reveal what has been referred to as tacit knowledge.
Tacit knowledge is information about the culture that is so deeply embedded
in cultural experiences that members do not talk about it or might not even be
consciously aware of it.
Ethnographers almost invariably undertake extensive fieldwork to learn about
the cultural group in which they are interested. Ethnographic research is
typically a labor-intensive endeavor that requires long periods of time in the
field; months and even years of fieldwork might be required. In most cases,
the researcher strives to participate actively in cultural events and activities.
The study of a culture requires a certain level of intimacy and trust with
members of the cultural group, which can best be developed over time and by
working directly with those members as an active participant. The concept of
researcher as instrument is frequently used by anthropologists to describe the
significant role the ethnographer plays in analyzing and interpreting a culture.
The steps of ethnographic research include identifying the
culture to be studied, conducting a thorough literature
review, identifying the significant variables within the
culture, gaining entrance into the culture, immersing oneself in the culture,
acquiring informants, gathering data, analyzing data, describing the culture,
and developing theory. Data collection involves primarily observation and
interview. The researcher might become a participant/observer in the culture
during the course of the study. Analysis involves identifying the meanings
attributed to objects and events by members of the culture. Members of the
culture often validate these meanings before finalizing the results. More
information on ethnography can be found at http://tinyurl.com/9gt2koy
High school students from low socioeconomic families videotape different types of educational
institutions that they have attended; determine, from their perspective, the most pressing problems
within these institutions; and make recommendations as to how these problems might best be
remedied.
FOR YOUR INFORMATION AND EDUCATION
Cutting Board
And the winner is . . .
1. Which research methodology best describes the way you plan to do your
study?
2. Why is this the best methodology for the problem you want to study?
3. Which methodology was the runner up? Why?
4. If you plan to triangulate your findings, explain how you will do this.
Terrific! You have just taken another important step toward successfully
completing your research project. Keep cooking! In the next Cutting Board,
you can demonstrate your scholarly literacy by matching the research term
with its description. Check your answers and make sure you master the
nuances of the ones that you missed. It would be a good idea to create a
flowchart showing the steps you will need to take to apply your selected
methodology to the problem you will resolve.
Cutting Board
BE AWARE OF HEALTH HAZARDS
[Ethics of Research]
Now that you have selected a research topic and have placed it into a
particular category, you are in an excellent position to digest research ethics.
The dictionary defines ethics as moral principles or rules of conduct. Morals
are principles concerning right and wrong. Research ethics are,
therefore, the rules of right and wrong concerning research. Since research
almost always involves people, it is important that your research does not
affect people in a negative way.
Research inherently contains many paradoxes. As a researcher you need
freedom to investigate, that is, find out as much information as possible about
the population you are studying, while adhering to an individual’s right to
privacy. Ethical principles have been established to balance these issues.
Starting in the 4th edition of the APA manual (a writing standard for social science
research published by the American Psychological Association), it was suggested
that researchers “write about the people in their study in a way that acknowledges
their participation.” Before this edition was published, researchers often described
the individuals in their study as “subjects” (p. 49). However, most researchers now
use less impersonal terms, such as participants, co-researchers, individuals,
respondents, etc. In general, if a person signs a permission slip, he or she should be
referred to as a participant.
Ethical guidelines, delineated in the Belmont Report, include respect for
personal autonomy (and protection of those with diminished autonomy),
adherence to the principles of beneficence and justice, gaining informed consent,
assessing risks and benefits, and selecting subjects fairly (U.S. Department of Health and Human
Services, 1979). Participation requires informed consent. Certain vulnerable
populations or classes of individuals — such as children — may have limited
capacity to make voluntary and informed decisions. To this end, there are
additional safeguards for research involving children. Approval for working
with children requires assent from the child as well as consent from the
parent/guardian and requires a full IRB board review.
1 cup RESPONSIBILITY
Ad hoc definitions: Whenever a term can be defined in more than one way,
you must decide which of the possible definitions seems most sensible and
which definition lends itself best to efficient data collection. It is also
important that you clearly define all concepts in your study that might be
unfamiliar to your reader.
A classic case of the need for a proper definition was found in 1955, when the
population of London was reported in three different studies as 5,200;
325,000; and 8,315,000, whereas New York City was reported (in three
different studies) to have a population of 1,910,000; 10,350,000; and
8,050,000. What became clear from these accounts is how meaningless a
comparison between the populations of these cities (or any city) is without
clearly defining geographic boundaries.
Rubber graphs: The human eye has difficulty assimilating raw data or
columns of numbers. Graphs often aid in making information more easily
understood. A line graph is customarily used to note trends or compare
amounts. The vertical axis generally has the measures (quantities)
represented. The scales that are used will affect the appearance of the graph
and should not be used to deceive people.
The murky notion of cause and effect: The story is told about a man who
wrote a letter to an airline requesting that their pilots cease turning on the
little light that says “FASTEN SEAT BELTS,” because every time that light
went on, the ride got bumpy. Be aware that in all correlational studies, effects
may be wrongly attributed to factors that were merely associated with the
outcome rather than causally related to it.
1 cup COMPETENCE
You should be qualified to carry out your research project. You should look at
the problem you plan to study critically, and then as objectively as possible,
and judge your own abilities to devise procedures appropriate for examining
the problem.
A competent researcher possesses certain personal qualities such as creativity,
flexibility, curiosity, determination, objectivity, tolerance of frustration,
logical reasoning abilities, and the ability to make scholarly observations.
(Having come this far in your Recipes for Success, you have already
demonstrated many of these qualities.)
1 cup MORAL AND LEGAL ISSUES
The legal and social rules of the community you are investigating should be
respected. If you think your work could break either the legal or the social
rules of the community, your research efforts should be curtailed until these
issues are resolved.
Researchers are expected to be proactive in designing and performing
research to ensure that the dignity, welfare, and privacy of research
participants are protected and that information about an individual remains
confidential. Participants often need to be assured that all personal
information given to the researcher will be seen only by those who are
carrying out the research project. It is unethical to discuss a person or the
information he or she gives you in confidence with your family or friends.
There is one exception to the confidentiality rule—legal obligation. If
someone tells you about a serious legal offense or crime, you may have to
break the confidentiality rule and notify the proper authorities.
Most research reports on groups of people keep the identity of individuals
anonymous. When reporting anecdotal cases, identities can be hidden by the
use of a false name or initials.
When you interview or test people you should explain to them the
following:
1. Who will see the information they give,
2. What will be done with the information, and
3. How their privacy will be protected.
It is important for you, the researcher, to be viewed as a person concerned
about the people being studied. You should not leave confidential
questionnaires, surveys, papers, or interview notes lying around so that
anyone can read them. It is best to keep these items somewhere safe or, if
possible, locked away so that they don’t fall into the wrong hands.
Plagiarism
In conducting research, we continually engage with other people’s ideas: we
read them in texts, hear them in lectures we attend, share ideas on e-mail,
discuss them with others, and incorporate them into our own writing. As a
result, it is very important that we give credit where credit is due. Plagiarism
is using someone else’s ideas and words without clearly acknowledging the
source of the information. As a general rule, if the idea is not totally yours,
give credit to the source of the idea. If ever there is a question of "I don't
remember the source," abandon the particular citation or conduct a search to
locate the reference. A student needs to always follow the specific
guidelines issued by the university to avoid complications. If you find as you
write that you're following one or two of your sources too closely,
deliberately look back in your notes for other sources that take different or
contrasting views; there is a set limit to how many words you can quote and
paraphrase before you need to obtain permission from the copyright holder.
Overreliance on a few sources makes the authenticity of your research
questionable.
It is important to remember that when you cite a source you are engaging in a
conversation with other researchers and scholars and adding credibility to
your own research. By responding reasonably to those who oppose your
views, you are acknowledging that there are valid counterarguments. Thus,
appropriate quoting and citing of sources shows respect to the creators of
these ideas and arguments—honoring thinkers and their intellectual property
—and adds integrity to your work. As a researcher, you have a vested interest
in maintaining a respect for intellectual property and giving proper attribution
to ideas and words. Other forms of plagiarism include having someone else
write a paper for you, paying someone else to write a paper for you, and
submitting as your own someone else’s unpublished work, either with or
without permission. There are also serious consequences to face if you are
found guilty of plagiarism.
Harris (2001) offers some helpful suggestions on how to avoid plagiarism.
To prevent plagiarism, you must give credit whenever you use:
another person’s idea, opinion, or theory;
any facts, statistics, graphs, drawings—any pieces of information—that
are not common knowledge;
quotations of another person’s actual spoken or written words; or
paraphrases of another person’s spoken or written words.
These guidelines are adapted from the Student Code of Rights,
Responsibilities, and Conduct found at Indiana University’s Web page:
http://tinyurl.com/6hma3. With the advent of the Web and the proliferation of
distance learning institutions, a limitless reservoir of digital documents can be
downloaded, studied, and—in some cases, unfortunately—plagiarized. If
plagiarism is suspected, the Web can also be used to search for the original
source. Pasting key, specific phrases into a search engine like yahoo.com or
google.com will often lead to the original source. In addition, websites like
http://www.plagiarismchecker.com/ and turnitin.com can be used to run a
plagiarism check and find documents with contents that match or are similar
to the contents in a research paper. Increasingly, universities require a
plagiarism check on all proposals and final dissertation drafts submitted for
review. You can avoid problems by checking your own work prior to sharing
it with your mentor, committee members, or the university.
To use the Turnitin service, you need to go to Turnitin.com, have a class account
number, and a class password. You will then be able to set up your own student
login account. Once you have an account, you will be guided on how to submit
your papers electronically from your own computer. You will select the “Submit”
icon for the appropriate assignment, navigate to the electronic version of your
paper, and click the “Submit” button. The Turnitin.com service compares your
paper to several types of sources: student papers, paper mills (services that sell term
papers), online books and journals, and Internet sites. Any matches found will be
highlighted in the originality report and linked to their original sources. Two views are available: side-
by-side and print version. Compare both views to decide which works best for you. If the highlighted
region is properly cited, then this is not plagiarism. It would be a good idea to view the videos that
Turnitin provides at http://www.turnitin.com/en_us/training/student-training.
Another method to check for plagiarism is to review the document in Word or
Excel and check File > Properties to see who originally drafted the document
and when.
Another site used to combat plagiarism is http://scout.cs.wisc.edu. The Glatt
Company at http://www.plagiarism.com/ provides software that helps “detect
and deter” plagiarism. There is a student tutorial that provides computer-
assisted instruction on what constitutes plagiarism and how to avoid it. This
includes definitions of direct and indirect plagiarism, when and how to
provide attribution, and a mastery test of concepts. This software is typically
used in academic institutions or in the legal profession for cases of copyright
infringement.
The research to be undertaken should be the researcher’s and the researcher’s
alone; however, expert assistance may be sought. If APA editors or
statisticians, for example, are consulted, it is still the responsibility of the
researcher to understand, explain, defend, and synthesize this outsourced
work and to be accountable for every aspect of the research and the research
paper. A good consultant is a good coach who will teach you as well as assist
you with improving your study. A good online source on how to avoid
plagiarism can be found at http://tinyurl.com/2u9w72k
An excellent APA and dissertation editor and coach who has assisted hundreds of doctoral
students make it to the finish line is Toni Williams: www.formandstyle.com or
http://www.linkedin.com/pub/toni-williams/8/195/966.
Institutional Review Boards: Ethical Issues Related to Conducting Research
Using Human Participants
FOR YOUR INFORMATION AND EDUCATION
Here are some solutions to the most frequently occurring ethical
challenges in doctoral research as identified by Institutional Review Boards
(IRB) at a variety of universities:
1. Use anonymous or non-identifying methods when possible. Anonymity is
the simplest way to avoid pressuring subordinates, students, or other
vulnerable individuals to participate in your doctoral research. However, if
you know or can identify a participant, this person is not anonymous.
There are steps you can take to protect a person's identity so that the
consumers of your study will not be able to ascertain who this person is.
This is usually accomplished through assigning pseudonyms or
establishing a code like P1 to indicate the first participant.
2. Separate your researcher role from all others. This avoids unethical
dynamics in which a subordinate, colleague, student, or client mistakenly
believes that the data collection is part of their job, education, treatment, or
any other aspect of your professional role.
3. Pay close attention to alignment among the research question, planned
analyses, and types of data collection proposed. The IRB can only approve
those specific components of data collection that show promise of
effectively addressing the research question(s).
4. Use existing data whenever possible. This avoids burdening others with
risky or time-consuming tasks, just for the benefit of your doctoral study.
When collection of new data poses substantial time demands or
privacy/safety risks to participants, the research design will be closely
examined so that the potential benefits can be weighed against potential
risks.
5. Use existing measures whenever possible. Make certain that an appropriate
instrument does NOT exist before constructing your own instrument.
6. Check and DOUBLECHECK that all IRB materials reflect the final set of
research questions and procedures. The IRB often does not review the
entire proposal and can only approve the procedures that are listed in the
IRB application itself. Thus, all participant recruitment and data collection
procedures MUST be described in the IRB application. If an audit reveals
that a student deviated from that specific list of IRB-approved procedures,
then the data can be invalidated and the final doctoral study rejected.
7. Include expected study outcomes.
8. Make sure you describe the characteristics of the population and of the
sample. What data will be collected? What is expected of the subjects as
participants in this study? What questions will be asked of the
participants? Explain, in detail, your research methodology and data
collection processes. Consider the need for a Premises Permission form
for this study, and include the name of the organization on that form.
Address any stressors or risks that may be associated with this study as
they pertain to the participants. If you are conducting a survey, make
certain that you include a copy of the survey with your application.
9. Make certain you describe the selection criteria that will be used and the
processes for selection, recruitment, and enlistment. Explain how potential
participants will be identified. If you plan to use a specific cultural group
(and exclude others), you must justify the exclusion. For example, if you
are studying Latina dropouts, you need to justify why other student
dropouts are not part of your study. You can do this by providing current
statistics that indicate this group has the highest dropout rate compared to
their cohorts. However, it is unethical to use classified information to
locate your participants. You can, instead, post a flyer requesting those
who meet the criteria of your study to contact you.
10. If you wish to work with children under the age of 18, you must
obtain both informed consent (from a parent or guardian) and informed
assent (from the minor child).
11. Explain how the information concerning withdrawal will be communicated
to the participants.
12. When discussing outcomes of the study do NOT use the word will. Be
scholarly by using the words may or could instead.
13. Provide a rationale for the sample size – make sure this is in accord with
the methodology. The program G*Power 3.1.7 can calculate the required
sample size in quantitative studies. Considering a moderate effect size of
0.15, a generally accepted power of 0.80, and a significance level of 0.05,
the desired sample size may be calculated. This assists the researcher in
presenting the probability of correctly rejecting the null hypothesis when
it is false in a given sample (see the sketch following this list).
14. If you will be conducting interviews, make sure you indicate explicitly on
the IRB form whether the intention is to record the interviews, as this is an
aspect of consent. This can be done by customizing point four of the six
points of understanding. Customize the Informed Consent form with your
name and contact information as indicated. Describe in detail the coding
system that will be used, as it ensures data confidentiality. Explain how
data will be disposed of at the end of the retention time period.
15. Make certain your name and signature match exactly.
16. If conducting interviews, make sure you state how many hours you expect
each interview to take.
17. If no compensation is offered, make that clear. If you plan to offer a token
gift, explain how this is not related to compensation.
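Here is the power-analysis sketch mentioned in item 13. It uses the Python package
statsmodels as a stand-in for G*Power, and it assumes an independent-samples t test
with a medium effect of Cohen's d = 0.5; note that the 0.15 effect size cited in item
13 refers to a different metric (Cohen's f-squared, typically used for regression
models), so be sure the effect-size measure matches your planned analysis.

# A minimal a priori power-analysis sketch; statsmodels stands in for G*Power,
# and the scenario (an independent-samples t test with Cohen's d = 0.5) is a
# hypothetical illustration.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=0.5,  # Cohen's d (assumed)
                                   alpha=0.05,       # significance level
                                   power=0.80,       # desired power
                                   alternative='two-sided')
print(f"Required sample size per group: {n_per_group:.0f}")  # about 64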
Lists (Seriation: APA Section 3.04 (6th ed.)): For listed items within a
paragraph like this (a) use letters, not numbers, in parentheses; (b) separate
each item with a comma; and (c) use a semicolon if there’s already a comma
in one of the clauses. When listing items vertically, or breaking them out of
the paragraph, use 1, 2, 3, and so forth, each followed by a period. Tab the
first number. If sentences run longer than the first line, keep typing back to
the left margin.
Numbers (APA 6th ed., Sections 4.31-4.34): In general, numbers less than 10
are written out while numbers 10 and greater appear as Arabic numerals.
Exceptions include a series of numbers, numbers preceding elements of time
or measurement (e.g., 4 miles; 2 months, 6 decades, and 18 years), and a
number beginning a sentence (e.g., Fifty-seven percent of those surveyed
disagreed with the statement.). Other exceptions include the number of points
in a Likert-type scale (e.g., a 7-point Likert-type scale). In the APA 5th edition,
numbers less than 10 appeared as numerals when grouped with
numbers greater than 10, and the number of participants was expressed as a
numeral as well (e.g., 3 participants). Both these exceptions were removed in
the 6th edition. APA 6th edition also indicates approximations of numbers
referring to time should not use numerals (e.g., approximately four weeks
ago).
Examples: Four companies were selected for the study, including 30
employees and 12 managers who agreed to be interviewed.
Reporting statistics: Statistical abbreviations are usually italicized: n, t, SD, p.
Uppercase N indicates the total population; lowercase n is used for samples.
The correct form is t test, no hyphen, italicized t.
Using tables and figures: Permission must be granted by copyright holders if
in your dissertation you plan to use tables and figures from published works
not in the public domain. (APA 6th ed., Section 2.12). Make sure it is clear
why each figure and table is included and what information they provide to
the reader. Figure numbers (and titles about the figure) are placed below the
actual figure.
Table numbers (and titles about the table) are placed above the actual table.
Age (APA 6th ed., Section 3.16)
Girl and boy are the correct terms for individuals under age 12, while
young man and young woman or female adolescent or male adolescent
may be used for ages 13-17. Use men and women for ages 18 and over.
The terms elderly and senior are not acceptable as nouns; use older
adults instead.
ACCOUTREMENTS
Utensils and Ingredients
[Instruments]
Assistant Chefs
[Population and Sample]
Serving Platters/Spices
[Statistics]
WHAT WILL YOU USE TO GATHER DATA?
Tests, Inventories, Questionnaires, Interviews, Observations, Archived
Documents
In PHASE 2 of your Recipes for Success you will acquire proper utensils for
the creation and serving of your feast. In addition, during this phase you will
be forming the bulk of ingredients for your main course, which you will
ultimately complete in PHASE 3.
Prepackaged Tests and Inventories
(Just Stir and Serve)
Prepackaged tests and inventories are among the most useful tools for the
chef/researcher. They have been seen at many elegant banquets in the past.
The benefits of using prepackaged and standardized tests are that the items
and total scores have been carefully analyzed and their validity and reliability
have most likely been established by careful statistical controls.
Most prepackaged tests have norms based upon the performance of many
participants of various ages living in many different types of communities and
geographic areas. Among those to choose from are
1. Achievement tests, which attempt to measure what an individual has
learned. Achievement tests are designed to quantify an individual’s
level of performance based on information that has been deliberately
taught. Most tests used in schools are achievement tests. They are used
to determine individual or group status in academic learning; to
ascertain strengths and weaknesses defined by the test preparer; and as a
basis for awarding prizes, scholarships, or degrees.
2. Aptitude tests, which attempt to predict the degree of achievement
that may be expected from individuals in a particular activity. They are
similar to achievement tests in their measuring of past learning, but
differ in their attempt to measure nondeliberate or unplanned learning.
They are often used to divide students into relatively homogeneous
groups for instructional purposes, identify students for scholarship
grants, screen for educational programs, and purport to predict future
successes.
Cutting Board
1. If you are planning to use a prepackaged test, write the name of the test
and its reliability and validity information in the space below, and
determine how you will obtain the test for use.
________________________________________________________________
2. List the questions that you feel this test will answer with respect to
some variable or characteristic you wish to measure in your study.
Preparing Your Own Questionnaires or Surveys
(Making your meal from scratch)
Questionnaires and surveys are perhaps the most frequently used instruments
for gathering data on population variables. Their appeal rests in their ability to
get to the heart of the research under investigation. There are subtle
differences between a questionnaire and a survey. The questionnaire is more
like fast food and attempts to gather background (demographic)
characteristics such as age, education, and gender. It is used to help feed
hungry policy makers, program planners, evaluators, and researchers by
gathering simple information directly from the people affected. A survey is
generally more like good home cooking; it is complex and more probing and
seeks to elicit the feelings, beliefs, knowledge, experiences, or activities of the
respondents. Both are used in research when a sample of participants is drawn
from a population and the data obtained are analyzed to make inferences
about a population.
Good questionnaires and surveys maximize the relationship between the
answers recorded and the variables the researcher is measuring. The answers
are valuable to the extent that they can predict a relationship to facts or
subjective states of interest. Researchers use three basic types of questions:
multiple choice, numeric open end, and text open end or essays.
One means of ensuring that the questions are germane to the study is for the
researcher to prepare a working table containing a list of questions related to
the hypothesis under investigation. It might help to think of a survey as a test
to measure each variable or construct (something that exists theoretically, like
intelligence, but cannot be directly observed). To grade the test, you need to
create an index to measure each variable and construct. For example, if you
were measuring the construct political correctness with 10 questions, scores
for each question could be assessed on an index ranging from 0 to 4 on a
Likert-type scale, and a total of 0 to 40 points could measure the degree of
political correctness, with 0 indicating an absence of political correctness and
40 indicating complete political correctness.
According to Perseus Development Corporation (http://tinyurl.com/d2w8pgh),
after you have the foundation of your planning—budgets, deadlines, goals,
and information uses—it is time to plan the survey itself. This involves five
“right” steps.
1. Choosing the right people (sampling)
2. Using the right vehicle (survey formation)
3. Asking the right questions (survey formation)
4. Obtaining the right interpretations (data analyses)
5. Persuasively presenting results the right way (writing the report)
You also need to be aware of what constructs you will measure. These
include, but are not limited to, the following:
1. Demographics - the categories someone fits into, such as age, marital
status, business title, industry, ethnicity, or socioeconomic level
2. Attitude - how people think and feel about something
3. Cognition - the knowledge of subject matter
4. Perception - the way people receive messages and interpret them;
insight, intuition, or knowledge gained by perceiving
5. Needs - what people feel are missing from their lives or a program
6. Behavior - how people react to situations and opportunities and also
how they think they’d react
7. Efficacy - how effective a program or treatment was
A consideration that needs to be made is the type of data you will
obtain. There are four types of data that you can acquire. A mnemonic
device used to remember these types of data can be found in the French
word for black, NOIR (nominal, ordinal, interval, and ratio): Nominal
and ordinal data are considered nonparametric data (nonnumerical),
whereas interval and ratio are considered parametric (numerical) data.
1. Nominal (name only) data, or levels of measurement, are characterized
by information that consists of names, labels, or categories only. These
data cannot be arranged in an ordering scheme and are considered to be
the lowest level of measurement. There is no criterion by which values
can be identified as greater than or less than other values. Researchers
cannot, for example, average 12 Democrats and 15 Republicans and
come up with 13.5 Independents. We can, however, determine ratios
and percentages and compare the results to other groups.
2. Ordinal (or ranked) levels of measurement generate data that may be
arranged in some order, but differences between data values either
cannot be determined or are meaningless. For example, we can classify
income as low, middle, or high to provide information about relative
comparisons, but the degrees of differences are not available.
3. Interval level of measurement is similar to the ordinal level, with the
additional property that you can determine meaningful amounts of
differences between data. This level, however, often lacks an inherent
starting point. For example, in comparing the annual mean
temperatures of states, the value of 0 degrees does not indicate no heat
and it would be incorrect to say that 40 degrees is half as warm as 80
degrees. Grade point averages (GPAs) are also considered interval
levels of measuring knowledge. If someone has a 0.0 GPA this does not
mean that they have no knowledge.
4. Ratio level of measurement is considered the highest level of
measurement. It includes an inherent zero starting point and fractional
values. As the name implies, ratios are meaningful for this type of
measurement. The heights of children, distances traveled, waiting
times, and the amount of gasoline consumed are ratio levels of
measurement. A special form of ratio-level measurement is the binary
(or dummy) variable of 1,0. This code represents the presence (1) or
absence (0) of a certain characteristic.
Ordering of Questions
With regard to the ordering of questions on a survey or questionnaire, the
researcher needs to consider ways to encourage the participants to respond
and answer honestly. Ideally, the early questions should be easy and pleasant
to answer. These kinds of questions encourage people to continue the
survey. In telephone or personal interviews they help build rapport with the
interviewer. Grouping together questions on the same topic also makes the
questionnaire or survey easier to answer. Whenever possible, place difficult or
sensitive questions near the end of your survey. Any rapport that has been
built up will make it more likely people will answer these questions. If people
quit at that point anyway, at least they will have answered some of your
questions. Whenever there is a logical or natural order to answer choices, use
it. Make certain that all possible responses are included in the choices. In
general, when using numeric rating scales, higher numbers should mean a
more positive or more agreeing answer. However, an obvious answer to a
question should be avoided.
A general and important guideline to follow is that statistics based on one
level of measurement should not be used for a lower level. Implications made
from interval and ratio data can usually be determined through parametric
methods, whereas implications from ordinal and nominal data require the use
of less sensitive, nonparametric methods.
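As a brief illustration of this guideline, the Python sketch below (using the scipy
library and invented data) pairs interval-level scores with a parametric t test and
ordinal ratings with the nonparametric Mann-Whitney U test; the data values and group
labels are hypothetical.

from scipy import stats

# Interval/ratio example: test scores (0-100) for two hypothetical groups.
group_a_scores = [72, 85, 90, 66, 78, 88, 95, 70]
group_b_scores = [60, 75, 80, 58, 69, 74, 82, 65]
t_stat, t_p = stats.ttest_ind(group_a_scores, group_b_scores)

# Ordinal example: satisfaction ratings on a 1-5 Likert-type item.
group_a_ratings = [4, 5, 3, 4, 5, 4, 3, 5]
group_b_ratings = [3, 2, 4, 3, 2, 3, 4, 2]
u_stat, u_p = stats.mannwhitneyu(group_a_ratings, group_b_ratings,
                                 alternative='two-sided')

print(f"t test: t = {t_stat:.2f}, p = {t_p:.3f}")
print(f"Mann-Whitney: U = {u_stat:.1f}, p = {u_p:.3f}")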
Another decision that you need to make is whether to use open-ended
questions or closed-ended questions. There are several advantages to open-
ended questions:
1. You will be able to obtain answers that were unanticipated.
2. They tend to describe more closely the real views of the respondent.
3. Respondents will be able to answer questions in their own words.
However, closed-ended questions are usually an easier way of collecting data:
1. The respondent can perform more reliably the task of answering the
question when response alternatives are given.
2. The researcher can perform more reliably the task of interpreting the
meaning of answers when the alternatives are given to the respondent.
3. Providing respondents with a constrained number of categories
increases the likelihood that there will be enough people in any given
category to be analytically interesting.
4. There is a strong belief that respondents find closed-ended questions
to be less threatening than open questions.
Cutting Board
If you are planning to create a questionnaire or survey, fill out the
information below:
1. List four demographic questions you feel would be helpful to know
about your sample:
2. What questions are you seeking answers to in your study?
3. List four other broad questions that you would like your sample to
answer:
4. Underline the types of measurements you will most likely use:
a. Nominal (name only, certain trait)
b. Ordinal (a ranking system; Likert-type scale)
c. Interval (fixed differences, but no fixed zero—temperature)
d. Ratio (interval with a fixed zero; height, time, weight)
5. Underline the type of questions you will most likely be asking:
a. Open-ended (subjects fill in the blanks)
b. Closed-ended (multiple choice)
6. What is the population you are studying? Will you be sending
the questionnaire or survey to the whole population or to a subset
(sample) of the population?
7. Will the survey administered be cross-sectional (just once) or
longitudinal (over time)?
8. How will the survey or questionnaire be administered?
Through the mail? Personal interview? Group setting?
9. Approximately how many questions do you plan to have?
10. Will you need permission or help to administer the
questionnaire or obtain a mailing list? If yes, how will you obtain
this assistance?
The following suggestions can help you to eliminate some obstacles that
questionnaire and survey designers often encounter. As you prepare your
questions, check to see that each question you create adheres to the warnings
given.
___1. Use standard English when writing your questions.
___2. Keep the questions concrete and close to the respondents’
experience.
___3. Be aware of words, names, and views that might automatically bias
results.
___4. Use a single thought per question.
___5. Use short questions and ask for short responses if possible.
___6. Avoid words that might be unfamiliar to the respondent.
___7. Define any word whose meaning might be vague.
___8. Avoid questions with double negatives, such as “This class is
not the worst math class I have ever taken.”
___9. When using multiple-choice questions, make sure all
possibilities are covered.
___10. Be as specific as possible.
___11. Avoid questions with two or more parts.
___12. Give points of reference as comparisons. For example,
instead of asking, “Do you like mathematics?”, you might ask:
Please rank your academic classes from most favorite (4) to
least favorite (1):
Social studies ______ English ______ Science ______
Mathematics____
___13. Underline or use bold print for words that are critical to the
meaning of the questions, especially negative words like not.
___14. Ask only important questions since most respondents
dislike long questionnaires or surveys that ask too many
unimportant questions.
___15. Avoid suggestive questions or questions that contain biases.
For example, “Would you support more money in mathematics
education if the schools continue to use the same outdated teaching
methods?” reflects the writer’s bias on mathematics education.
___16. When asking questions regarding ethnic background or
political affiliations, it is a good idea to use alphabetical order.
In addition, the following receive high honors in the culinary research arts:
___17. Efficiency and brevity - it should only be as long as necessary.
___18. Objectivity - the questions should be as objective as the
situation dictates.
___19. Interesting - the questions should be as interesting and as
enjoyable as possible.
___20. Simplicity - it should be simple to administer, score, and
interpret.
___21. Clarity - it is important that the directions be clear so that
each participant can understand exactly the manner in which the
test is to be taken.
A widely used type of ordinal measurement used on closed questionnaires is
the Likert-type scale, named after its creator, Rensis Likert (1932). Likert’s
original scale used five categories: strongly approve, approve, undecided,
disapprove, and strongly disapprove. In a Likert-type scale, points are
assigned to each of the categories being used. The most favorable response is
usually given the most points, that is, favorableness of the attitude, not the
response category itself. A Likert-type scale may use fewer or more than five
categories. In general, the more categories, the better the reliability.
Check out the following URL for more information regarding Likert-type scales:
http://www.socialresearchmethods.net/kb/scallik.php.
The placement of items should be randomized. Placing all of the favorably
worded items first might produce a response set or tendency among respondents
(e.g., the participants might fill in all 5s without reading the questions).
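A quick Python sketch of randomizing item order (the item labels are hypothetical):

import random

items = [f"Item {i}" for i in range(1, 26)]  # 25 hypothetical questionnaire items
random.shuffle(items)                        # present the items in a random order
print(items[:5])                             # the first five items this respondent sees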
The score that the individual receives on a Likert-type scale is the sum of the
scores received on each item. For example, if 25 items are on a questionnaire
and each item contains a minimum of 1 point and a maximum of 5 points,
then the highest possible score would be 125, whereas the lowest possible
score would be 25 (assuming no items were missing).
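The scoring arithmetic just described can be illustrated in a few lines of Python;
the responses below are hypothetical.

# Hypothetical answers from one respondent to a 25-item questionnaire,
# where each item is scored from 1 to 5.
responses = [4, 5, 3, 2, 5, 4, 4, 3, 5, 2,
             1, 4, 5, 3, 4, 2, 5, 4, 3, 4,
             5, 2, 3, 4, 5]

total = sum(responses)        # this respondent's Likert-type score
lowest = 1 * len(responses)   # lowest possible score: 25
highest = 5 * len(responses)  # highest possible score: 125
print(f"Total score: {total} (possible range {lowest}-{highest})")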
Cutting Board
Pilot Study
Before the final form of the survey or questionnaire is constructed, it is useful
to conduct a pilot study (or dress rehearsal) to determine if the items are
yielding the kind of information that is needed. The term pilot study is used in
two different ways in social science research. It can refer to so-called
feasibility studies, which are "small scale version[s], or trial run[s], done in
preparation for the major study" (Polit, Beck, & Hungler, 2001, p. 467). It is
also used to refer to the pretesting, or trying out, of a particular research
instrument (Baker, 1994, pp. 182-183). One of the advantages of conducting a
pilot study is that it can give advance warning about where the main research
study could fail, where research protocols might not be followed, or whether
proposed methods or instruments are inappropriate or too complicated. De
Vaus (1993) advised researchers to “check to see if there is any ambiguity, or
if the respondents have any difficulty in responding” (p. 54). Surveys are
pilot-tested to avoid misleading, inappropriate, or redundant questions. Pilot-
testing ensures that the survey or questionnaire can be used properly and that
the information obtained is consistent. Fink and Kosekoff (1985) indicated
that when pilot-testing, look out for a failure to answer questions, respondents
giving several answers to the same question, and written comments in the
margin. These may be indications that the questionnaire or survey is
unreliable and needs revision. Remember, pilot tests should occur after IRB
approval, since you are using part of the study population to pilot the survey.
Administering the questionnaire or survey personally and individually to a
small group of respondents is a good way to proceed with your pilot study,
but it could be conducted electronically or through other means. Well-
designed and well-conducted pilot studies can inform the researcher on the
research process and about likely outcomes. It is important to report the
findings of the pilot studies (this usually appears in chapters 3 or 4 in the
dissertation) and detail the actual improvements made to the study design and
the research process as a result of the pilot findings.
The pilot instrument should invite comments about the perceived relevance of
each question to the stated intent of the research. It would also be beneficial to
provide a means for the respondent to suggest additional questions that the
researcher did not include. Check out http://sru.soc.surrey.ac.uk/SRU35.html
for more information on pilot studies.
Cover Letter
If a questionnaire or survey is administered to an intact group, such as
students in a class or members of a congregation, then you will have the
opportunity to inform the respondents of the intent of the study and motivate
them to complete the questionnaire or survey. However, when questionnaires
or surveys are sent through the mail, it may be difficult to motivate
respondents to fill them out and to return them within a reasonable period of
time. Unless the potential respondents believe that the questionnaire or survey
is of value, it is likely that they will become nonrespondents. For this purpose,
a cover letter usually accompanies the questionnaire or survey.
It is important to inform potential participants that this is part of a doctoral dissertation
for your university. This information should be stated in the cover letter since such
information often adds credibility to the study.
The cover letter should also state the following:
1. The questionnaire or survey will not take a great deal of time to
complete.
2. Each individual’s personal attention to the questionnaire or survey is
of extreme importance to the study.
In addition, the following ingredients would enhance the efficacy of a
cover letter:
1. An introduction: The name of the researcher and the company,
organization, or university that is requesting or approving this study.
2. A purpose: The reason for conducting the study, the use for this
questionnaire or survey, and its value to the investigation should be
explained. The sole intention of a study should not be that a student
expects to obtain a degree by means of a research study that includes the
use of this questionnaire or survey (however important that is). It is
unlikely that a potential respondent would take the time to carefully fill
out a questionnaire for this goal.
Note: You must also be careful not to reveal too much. This might bias
the study and make the results invalid.
3. A set of directions: Explain how the questions are to be answered,
how the questionnaire or survey is to be returned, and if there is some
reasonable deadline for returning it. Indicate whether the respondent
needs to put his or her name on the form and any other relevant
information that should be included with the questionnaire or survey.
4. Return postage: It is unreasonable to ask the respondent to answer
your questionnaire and provide postage for its return to you.
5. The researcher should avoid the use of obvious form letters or letters
for which the initial salutation is
Dear _______________, or “To Whom it may concern:”
6. You should also sign the letter personally.
Extra attention and personal touches demonstrate the sincerity of the research
effort and the importance of the respondents’ participation.
If you do not take the aforementioned information into consideration, it is likely that the
questionnaire or survey will only make it to the nearest trash receptacle.
Cutting Board
This would be an excellent time to create your cover letter. Insert a separate
sheet of paper and compose your cover letter NOW! Check to see that you
have incorporated the ideas suggested in this section.
Going the Extra Mile
Some ways to encourage respondents to perform the task of filling out a
mailed or online questionnaire or survey include:
1. Making personal contact by phone or in person, prior to sending out the
questionnaire.
2. Offering some type of financial compensation or gift.
FOR YOUR INFORMATION AND EDUCATION
One type of incentive to increase response rate is sending a dollar bill
along with the survey
(or offering to donate a dollar to a charity specified by the
respondent). Make certain that you indicate that the dollar is a way of
saying thanks, rather than payment for time spent on the survey. For short
questionnaires, you could put a questionnaire on the back of a small
check. Another possible incentive is to enroll the people who return
completed surveys in a drawing for a prize. You could also offer a copy
of the (nonconfidential) result to the participants.
3. Make the cover letter and questionnaire or survey attractive.
4. Make the cover letter personal.
5. Make repeated contact with nonrespondents.
Remember that if you want (or need) a sample of 500 participants, and
you estimate a 10% response level, you need to mail 5,000 surveys. You
might want to check with your local post office about bulk mail rates—
you can save on postage using this mailing method. However, many
people associate bulk with junk and will throw out the survey without
opening the envelope, thereby lowering the response rate. Also, bulk mail
moves slowly, which increases the time needed to complete your study.
A reasonable sequence of events might be as follows:
1. About 10 days after the initial post or e-mailing, send all
nonrespondents a reminder emphasizing the importance of the study
and the need for a high response rate.
2. About 10 days after the first reminder, mail the remaining
nonrespondents a letter, again emphasizing the importance of a high
rate of return and including another questionnaire for those who
might have thrown away, or expunged, the first one.
3. If the response rate is still not satisfactory, it would be advisable to
call nonrespondents on the telephone.
The challenges of getting the response rate to a reasonable level will
depend on the nature of the sample, the nature of the study, the
motivation of the people to participate in the study, the ease with which
the questionnaire or survey might be completed, and your tenacity.
Cutting Board
If you are planning to mail out questionnaires or surveys, which of the
methods above do you think you will employ to increase the response rate?
___________________________________________________________________
The Personal Interview
The personal interview has many similarities to the questionnaire or survey.
The major advantages of using an interview instead of a questionnaire or
survey are as follows:
1. The response rate is generally high.
2. It is an especially useful technique when dealing with children or an
illiterate population.
3. It reduces the misinterpretation of questions.
4. The participant is more likely to clarify any misunderstandings.
5. It can encourage a relaxed conversation during which questions can be
asked in any order depending on the response of the interviewee.
6. It provides an opportunity to find out what people really think and believe
about a certain topic through questioning.
7. It is more flexible and allows the interviewer to follow leads during the
interview.
8. The interviewer can interpret body language as an extra
source of information.
A good interviewer exudes HEARTS: honesty, earnestness, adaptability,
reliability, trustworthiness, and sincerity.
Some disadvantages of the interview method are as follows:
1. Time and economy: Questionnaires and surveys can usually be sent
through the postal mail or e-mail; thus, for the price of postage and
printing the questionnaire or survey, or the time to compose an
electronic document, you can reach practically anyone under
consideration. Furthermore, the expense and time involved in training
interviewers and sending them to interview the respondents need to be
considered.
2. Reliability of information can be questioned because of interviewer
bias.
3. Difficulties often arise in quantifying or statistically analyzing data
obtained from interviews.
To help you put your participants at ease:
1. Begin with easy nonthreatening questions
2. Follow with increasingly specific questions
3. Follow with questions of a sensitive nature
4. Assure participants that they are free to refuse to respond if questions get too
personal
FOR YOUR INFORMATION AND EDUCATION
Interviews are a very common approach in research, particularly in
qualitative designs. Here are some tips to make sure the data you
obtain through interviews are valid, trustworthy, and usable in your
study.
Note: Taping or digitally recording the interviews allows you to do a
more thorough and objective analysis of the data. People are
becoming more comfortable with having their conversations taped. It
is increasingly common to be told that your conversation may be
recorded during a phone interview, and most focus groups in
marketing studies use unobtrusive video recording equipment to
capture what's being said. However, some people are not comfortable
knowing their remarks are being recorded word-for-word. Respect
this. Although it is better to have your interviews recorded, be mindful
of those who prefer that their conversations not be recorded.
1. Distinguish between research questions and interview questions.
Keep these questions separated into systematic components of your
study. Research questions are usually 1-4 overarching questions
guiding your study, are not directly asked of participants, and should
not be yes/no questions (see http://dissertationrecipes.com/wp-
content/uploads/2011/04/Developing-Research-Questions.pdf).
Interview questions are built around the research questions, help you
answer the research questions, and are directed to the person you are
interviewing.
2. Script the interview from start to finish, including introductions,
instructions, probes, etc. A common sequence of events would be: a)
establish rapport, b) briefly explain the intent of your study (without
giving away too much), c) conduct the interview, d) debrief at the end
of the interview.
3. All data obtained through questionnaires, surveys, or interviews
adhere to the same ethical system: The privacy of the individual is
respected and weighed against the public’s right to know. Make
certain that you have established defensible safeguards for participant
confidentiality and data security, and that you “do no harm” either
during the interview or in reporting the results.
Participant interviewing is a fun and enlightening element of research.
Good luck with it!
Populations who realize that it is in their best interest for investigators to
have accurate information, yet do not wish to be identified, are more willing
to participate in a randomized response survey, in which a chance device
(such as a coin flip) determines how a sensitive question is answered. It is
reasonable to expect that if people could be assured anonymity they would
give honest answers.
Another example: Suppose Question 10 on a survey is, “Have you used
illegal drugs in the past week?” Respondents are told to read the question
and flip a coin. They are to answer “no” only if the coin comes up tails
and they have not used illegal drugs. Otherwise, they should answer
“yes.” The proportion of the group that would have answered “no” is then
computed to be twice those that actually responded “no.” (The other half
got heads.) For example, if 40% wrote “no,” then 80% of the sample is
determined to have not used illegal drugs in the past week, and 20% is
determined to have used illegal drugs during the past week.
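Here is a minimal sketch in Python of the coin-flip estimate just described; the 40% figure is the same hypothetical value used above:

# A minimal sketch of the randomized response estimate. Respondents answer
# "no" only if the coin lands tails AND they have not used illegal drugs;
# otherwise they answer "yes". Because only half the truthful non-users are
# expected to get tails, the true proportion of non-users is estimated as
# twice the observed proportion of "no" answers.

def randomized_response_estimate(prop_no: float) -> tuple[float, float]:
    """Return the estimated (proportion not used, proportion used)."""
    not_used = min(2 * prop_no, 1.0)   # cap at 100% to guard against sampling noise
    return not_used, 1.0 - not_used

not_used, used = randomized_response_estimate(0.40)   # 40% answered "no"
print(f"Estimated non-users: {not_used:.0%}, users: {used:.0%}")  # 80%, 20%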
Cutting Board
1. If you are planning to use the personal interview, list the reasons for
your decision.
2. Who will do the interviewing? Why?
Observation
Obtaining data through observation, both participant and nonparticipant, has
become common. It is perhaps the most direct means of finding out
information, especially if your study is focused on deeds rather than words.
The extent of your personal involvement depends on which of the two
methods you choose.
In nonparticipant observation:
1. Your presence might be known or unknown.
2. You might observe through a device such as a one-way glass or rely
on observations from video or audio taping.
3. The data obtained tend to be fairly subjective.
The major advantage to using participant observation is that you can
experience firsthand the psychological and social conditions that produce
different decisions and practices.
The disadvantages to using participant observation are as follows:
1. You could be influenced by your own interpretation and personal
experiences.
2. Questions of reliability exist since others might interpret an
experience differently than you.
3. Your presence might affect the participants and the situation
observed.
One way to reduce the disadvantages is to read your report to the people
observed and to ask for comments, additions, or deletions prior to its
formalization. Accuracy is the key to making this type of data collection
effective. Special training is needed to move from casual observer to
systematic observer. In using structured observation techniques, the
researcher usually searches for a relationship between independent variables
and a dependent variable. The researcher must thus be able to code and
recode data in a meaningful way and be aware of the potential biases he or
she brings to research. Read more: http://tinyurl.com/22ucdcy
A confounding variable is an extraneous variable that is not a focus of the study but
is statistically related to (or correlated with) the independent variable. This means
that as the independent variable changes, the confounding variable changes along
with it. When there is some other variable that changes along with the independent
variable, then this confounding variable could be the cause of any difference.
Studies that indicate people who eat fast food are less healthy than people who eat
gourmet food might neglect confounding variables such as socioeconomic status, which may be the
root cause of health status.
If you plan to observe a K-12 teacher in his or her classroom, since children are involved (a protected
class) you may need to obtain permission from the parents or guardians of these children.
Archival Documents
Archival documents are existing records that contain information about the
past. Data-based archives are important in social and behavioral research. The
National Archives and Records Administration, located online at
http://www.archives.gov/welcome/index.html, is an independent federal
agency that serves as a national record keeper and ensures ready access to
essential evidence that documents the rights of American citizens, the actions
of federal officials, and the national experience. These records document the
common heritage and the individual and collective experiences of U.S.
citizens. You can also find a listing of over 5,000 Web sites describing
holdings of manuscripts, archives, rare books, historical photographs, and
other primary sources for the research scholar at http://tinyurl.com/29gpjdk.
Most important, the sample size must represent the characteristics or behavior
of the larger population. More information on sample size can be found at
http://tinyurl.com/2eqpvqc.
FOR YOUR INFORMATION AND EDUCATION
Effect size is a descriptive statistic referring to the measurement of the
strength of a relationship between variables under a specific situation
(Wilkinson, 1999). For instance, if we have data on the salaries of male
and female engineers working for a particular company, and we notice
that on average, male engineers make more money than females in a
particular organization, the difference between the salaries of men and
women is known as the effect size. The greater the effect size, the greater
the salary difference between men and women. A question remains about
whether the effect size is statistically significant or not.
When evaluating an intervention, program, or treatment, a question
often arises: How much effect did this intervention/program/treatment have?
Program evaluations often access data from an entire population of interest,
e.g., all participants in a professional development program for teachers. In
such situations, data on the entire population are available and there is no
need to use inferential testing because there is no need to generalize beyond
the participants. In these situations, descriptive statistics and effect sizes may
be all that is needed to determine the efficacy of the intervention/program or
treatment.
An effect size does not make any statement about whether the apparent
relationship in the data reflects a true relationship in a population. In that way,
effect size complements inferential statistics such as p-values. Among other
uses, effect size measures play an important role in meta-analysis studies that
summarize findings from a specific area of research, and can be used in lieu
of statistical power analysis, which helps to determine how big a sample size
should be selected for a study so that the results can be generalized to a larger
population.
The concept of effect size is ubiquitous in many claims made
by various companies regarding products or services. For example, a weight
loss program may boast that Plan D leads to an average weight loss of 25
pounds in a month. An oil company might claim that adding product X to
gasoline increases fuel efficiency by 12 mpg. These are examples of absolute
effect sizes, meaning that they convey the average difference between two
groups (those who participate in a program or treatment and those who do
not) without any discussion of the variability within the groups. For example,
in Plan D an average loss of 25 pounds could indicate that every participant
lost exactly 25 pounds, or half the participants lost 50 pounds and the other
half nada.
The reporting of effect sizes facilitates the interpretation of the
substantive, as opposed to the statistical, significance of a research result.
Effect sizes are particularly prominent in social and medical research.
Relative and absolute measures of effect size convey different information,
and can be used complementarily. It is good practice to present effect sizes for
primary outcomes, that is, outcomes that are expected to be analyzed relevant
to the effects of an intervention/program/treatment under review.
According to Valentine and Cooper (2003), effect size can help
determine whether a difference is real or more likely due to chance
factors. In meta-analysis, effect size is concerned with different studies that
are combined into a single analysis. The effect size is often measured in three
ways: Standardized mean differences; Odds Ratio; and Correlation
Coefficient.
A standardized mean difference is a summary statistic used in a meta-
analysis when more than one study was conducted to assess the same
outcome but the studies measured the outcome in a variety of ways. For example, if
several studies were conducted to measure mathematics anxiety in high
school students but different psychometric scales were used, it would be
necessary to standardize the results of the studies to a uniform scale before
they can be combined into one summary analysis. In meta-analysis,
standardized effect sizes are used as a common measure that can be
calculated for different studies and then combined into an overall summary.
ZCalc is an Excel spreadsheet add-on that is used to convert a standardized
mean effect size (ES) into a z-score. Cohen's d and Hedges' g are common
measures of ES. A calculator to help you determine the effect size for a
multiple regression study (i.e., Cohen's f2), given a value of R2 is found at:
http://danielsoper.com/statcalc3/calc.aspx?id=5
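If you prefer to compute a standardized mean difference yourself rather than with an add-on, here is a minimal sketch in Python of Cohen's d using a pooled standard deviation; the two groups of scores are hypothetical:

# A minimal sketch of Cohen's d for two independent groups, using the
# pooled standard deviation of the two samples.

import statistics

def cohens_d(group1, group2):
    """Standardized mean difference based on the pooled standard deviation."""
    n1, n2 = len(group1), len(group2)
    s1, s2 = statistics.stdev(group1), statistics.stdev(group2)  # sample SDs
    pooled_sd = (((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)) ** 0.5
    return (statistics.mean(group1) - statistics.mean(group2)) / pooled_sd

# Hypothetical anxiety scores from a treatment group and a control group
treatment = [12, 15, 14, 10, 13, 11]
control = [18, 17, 20, 16, 19, 15]
print(f"Cohen's d = {cohens_d(treatment, control):.2f}")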
An odds ratio is a relative measure of risk, indicating how much more
likely it is that someone exposed to a certain factor or treatment under study
will develop an outcome as compared to someone who is not exposed. An
odds ratio of 1 indicates no association between exposure and outcome. Odds
ratios measure both the direction and strength of an association. A free
calculator to measure odds ratio can be downloaded at: http://www.all-
freeware.com/results/odds/ratio/calculator
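Here is a minimal sketch in Python of an odds ratio computed from a hypothetical 2 x 2 table of counts (the counts are illustrative only):

# A minimal sketch of an odds ratio. An odds ratio of 1 indicates no
# association between exposure and outcome.

def odds_ratio(exposed_yes, exposed_no, unexposed_yes, unexposed_no):
    """Odds of the outcome among the exposed divided by the odds among the unexposed."""
    odds_exposed = exposed_yes / exposed_no
    odds_unexposed = unexposed_yes / unexposed_no
    return odds_exposed / odds_unexposed

# Hypothetical counts: outcome present/absent for exposed and unexposed groups
print(f"Odds ratio = {odds_ratio(30, 70, 10, 90):.2f}")   # (30/70) / (10/90), about 3.86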
A correlation coefficient measures the strength and the direction of a
linear relationship between two variables. The linear correlation coefficient is
sometimes referred to as the Pearson product moment correlation coefficient
in honor of its developer Karl Pearson. The correlation coefficient indicates
the degree of a linear relationship between two variables. The correlation
coefficient always lies between -1 and +1. A correlation of -1 indicates perfect
linear negative relationship between two variables, +1 indicates perfect
positive linear relationship, and 0 indicates lack of any linear relationship.
When measuring the effect size using correlation coefficients, a correlation of
r ≥.5 can be characterized as a large correlation, .3 = medium, and .1 = small
(Cohen, 1988). A free calculator to determine a correlation coefficient can be
found at: http://www.alcula.com/calculators/statistics/correlation-coefficient/
When to use Effect Size
The following situations would benefit from reporting the effect size:
1. Program evaluation studies with fewer than 50 participants tend to lack
sufficient statistical power (this is a determination of sample size to obtain
a given level of significance) for detecting small, medium or possibly even
large effects. In such situations, the results of significance tests can be
misleading because they are subject to Type II errors (incorrectly failing to
reject the null hypothesis). In these situations, it can be more informative
and beneficial to use the effect sizes, possibly complemented with
confidence intervals.
2. For studies involving large sample sizes (e.g., n > 400), a different problem
occurs with significance testing because even small effects are likely to
become statistically significant, although these effects may in fact be
trivial. In these situations, more attention should be paid to effect sizes
than to statistical significance testing.
3. When there is no interest in generalizing the results (e.g., we are only
interested in the results for the sample). In these situations, effect sizes are
sufficient and suitable to determine efficacy.
4. When evaluating an intervention/program/treatment the use of effect sizes
can be combined with other data, such as cost, to provide a measure of
cost-effectiveness. In other words, as noted by McCartney and Dearing
(2002), how much bang (effect size) for the buck (cost) is an
intervention/program or treatment worth?
Advantages and disadvantages of Using Effect Sizes
Some advantages of effect size reporting are that:
1. It tends to be easier for practitioners to intuitively relate to effect sizes
(once the idea of effect size is explained) than to significance testing.
2. Effect sizes facilitate comparisons with internal and external benchmarks.
3. Confidence intervals can be placed around effect sizes (providing an
equivalent to significance testing).
However, disadvantages of using effect sizes can include:
1. Most software packages tend to offer limited functionality for creating
effect sizes.
2. Most research methods and statistics courses tend to teach primarily, or
exclusively, classical test theory and inferential statistical methods, and
underemphasize effect sizes, so the academic community might be a bit
skeptical of solely using effect size. In response, there has been a
campaign since the 1980s (see Wilkinson, 1999) to educate social
scientists about the misuse of significance testing and the need for more
common reporting of effect sizes.
FOR YOUR INFORMATION AND EDUCATION
Another very important part of sampling is the nonresponse rate, which
includes people who could not be contacted or who refused to answer
questions. A general rule is to try to keep the nonresponse rate under
25%. To keep the nonresponse rate small, you could ask for the
assistance of a community leader and have that person explain the
purpose and importance of your study in great detail to potential
respondents.
The size of the survey sample may be decided with statistical precision. A major
concern in choosing a sample size is that it should be large enough so
that it will be representative of the population from which it comes and
from which you wish to make inferences. It ought to be large enough so
that important differences can be found in subgroups such as men and
women, Democrats and Republicans, groups receiving treatment and
control groups, etc.
One issue to consider when using statistical methods in choosing a
sample size is sampling error, also known as margin of error. This is not
an error in the sense of making a mistake, but rather a measure of the
possible range of approximation in the results because a sample was
used. Small differences will almost always exist among samples, and
between them and the population from which they are drawn. One
approach to measuring sample error is to report the standard error of
the mean, computed by dividing the population standard deviation
(if known) by the square root of the sample size. Minimizing sampling
error helps maximize sample representativeness.
If the Stanford-Binet IQ test (where the standard deviation is 15) is administered to
100 participants, then the standard error of the mean would be 15/10 or 1.5.
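Here is a minimal sketch in Python of that computation:

# A minimal sketch of the standard error of the mean: the population
# standard deviation divided by the square root of the sample size.

import math

sigma = 15      # standard deviation of the IQ test
n = 100         # number of participants

standard_error = sigma / math.sqrt(n)
print(standard_error)   # 1.5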
We can use the following formula to determine the sample size necessary to
discover the true mean value from a population:

n = (z * σ / E)²

where z corresponds to a confidence level (found in a table or computer program). Some common
values are 1.645 or 1.96, which might reflect a 95% confidence level (depending on the statistical
hypothesis under investigation), and 2.33, which could reflect a 99% confidence level in a one-tailed
test and 2.575 for a two-tailed test. σ is the standard deviation, and E is the margin of error.
Example: If we need to be 99% confident that we are within 0.25 lbs of the true mean weight of babies
in an infant care facility, and s = 1.1, we would need to sample 129 babies:
n = [2.575(1.1)/0.25]² = 128.37, which rounds up to 129
A formula that we can use to determine the sample size necessary to test a hypothesis involving
percentages is:

n = z² * p * q / E²

where n = sample size, z = the standard score corresponding to a certain confidence level, p is an
estimate of the population proportion (or incidence of cases), q = 1 - p, and E is the proportion of
sampling error.
Example: Suppose that studies conducted 2 years ago found that 18% of drivers talk on a cell phone
while driving. We want to do a study to check if this percentage is still true. We want to estimate, with
a margin of error of 3 percentage points, the percentage of drivers who talk while driving today. If we
need to be 95% confident of our result, how many drivers would we need to survey?
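Here is a minimal sketch in Python of both sample-size formulas; under these assumptions the baby-weight example requires 129 babies and the driver example works out to about 631 drivers:

# A minimal sketch of the two sample-size formulas above. math.ceil rounds
# up, since a fraction of a participant cannot be sampled.

import math

def sample_size_for_mean(z, sigma, E):
    """n = (z * sigma / E)^2, rounded up."""
    return math.ceil((z * sigma / E) ** 2)

def sample_size_for_proportion(z, p, E):
    """n = z^2 * p * (1 - p) / E^2, rounded up."""
    return math.ceil(z ** 2 * p * (1 - p) / E ** 2)

print(sample_size_for_mean(2.575, 1.1, 0.25))        # 129 babies
print(sample_size_for_proportion(1.96, 0.18, 0.03))  # about 631 drivers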
Cutting Board
1. What is the population and sample that you will be studying?
_______________________________________________________________________
2. What measures will you take to see that your sample size is adequate for
your study?
_______________________________________________________________________
SERVING PLATTERS/SPICES
[Statistics]
Featuring: What’s Stat? (You Say?)
How to Exhibit Your Date (a)
How to (Ap)praise Your Date (a)
What’s Stat? (You Say?)
Statistics is like trying to determine how many different colored M&M’s are in a
king size bag by looking at only a carefully selected handful.
The Job of a Statistician Involves C O A I P ing Data (Collecting, Organizing, Analyzing, Interpreting, and Predicting):
After data are collected, they are used to produce various statistical numbers
such as means, standard deviations, and percentages. These descriptive numbers
summarize or describe the important characteristics of a known set of data. In
hypothesis testing, descriptive numbers are standardized so that they can be
compared to fixed values (found in tables or in computer programs) that indicate
how unusual it is to obtain the data you collected. Once data are standardized
and significance determined, you may be able to make inferences about an entire
population (universe).
Your Recipes for Success gives you a substantial taste of statistics so that you will
feel comfortable with this aspect of your feast preparation. You have already nibbled
on statistics in the last section when you explored different methods of collecting
data.
You might wish to seek further condiments to add to the knowledge you will acquire
from your Recipes for Success or consult with a statistician after reading the information in PHASE 2
to help you decide which statistics, if any, would be applicable to your study. Mario Triola’s statistics
books are user friendly and offer an excellent foundation for quantitative analyses. You are also
encouraged to use statistical programs like SPSS, Excel, or business calculators to perform the
hackneyed computations that often arise during statistical testing. Dr. Jim Mirabella has written an
excellent manual on how to use SPSS to analyze data for a doctoral dissertation. This is available at
http://www.drjimmirabella.com/ebook . Consulting with a statistician is another viable option to help
you select the proper statistics.
Remember: You are ultimately responsible for the results. You must be aware of why you are using a
certain test, know what assumptions are made when such a test is used, understand what the test results
indicate, and understand how this analysis fits in with your study. Good news: This is not as hard as it
sounds.
The Role of Statistics
Statistics is merely a tool. It is not the be-all and end-all for the researcher. Those
who insist that research is not research unless it is statistical display a myopic
view of the research process. These are often the same folks who are equally
adamant that unless research is experimental research it is not research.
However, without statistics, life would be pretty boring. We would not be able to
plan our budgets, evaluate performances, or enjoy sporting events. (Can you
imagine your favorite sport without any score keeping or statistics?)
One cardinal rule applies: The nature of the data and the problem under
investigation govern which method is appropriate to interpret the data and the
tool of research required to process those data. A historian seeking to answer
problems associated with the assassination of Dr. Martin Luther King, Jr., would
be hard put to produce either a statistical or an experimental study, and yet the
research of the historian can be quite as scholarly and scientifically respectable
as that of any quantitative or experimental study.
Statistics many times describes a quasi-world rather than the real world. You
might find that the mean grade for a class is 81 but not one student actually
received a grade of 81. Consider the person who found out that the average
family has 1.75 children and with heartfelt gratitude exclaimed, “Boy, am I
grateful that I was the first born!” What is accepted statistically is sometimes
meaningless empirically. However, statistics is a useful mechanism and, as Ian
Stewart noted, Statistics is a means of panning precious simplicity from the sea
of complexity. It is a tool that can be applied to practically every discipline!
Quality Schools that have rejected No Child Left Behind (NCLB) have a
more authentic reading program than schools that have embraced
NCLB.
If the null hypothesis is not rejected, this does not lead to the conclusion that no association or differences
exist, but instead that the analysis did not detect any association or difference between the variables or
groups. Failing to reject the null hypothesis is comparable to a finding of not guilty in a
trial. The defendant is not declared innocent. Instead, there is not enough evidence to
be convincing beyond a reasonable doubt. In the judicial system, a decision is made
and the defendant is set free.
9. What is the connection between hypothesis testing and
confidence intervals?
There is an extremely close relationship between confidence intervals and
hypothesis testing. When a 95% confidence interval is constructed, all values in
the interval are considered plausible values for the parameter being estimated.
Values outside the interval are rejected as implausible. If the value of the
parameter specified by the null hypothesis is contained in the 95% interval, then
the null hypothesis cannot be rejected at the 0.05 level. If the value specified by
the null hypothesis is not in the interval, then the null hypothesis can be rejected
at the 0.05 level. If a 99% confidence interval is constructed, then values outside
the interval are rejected at the 0.01 level.
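Here is a minimal sketch in Python of this connection, borrowing for illustration the reading-score figures (sample mean 7.8, population standard deviation 0.76, n = 36, hypothesized mean 7.5) from the Ms. R example later in this section:

# A minimal sketch of the link between a 95% confidence interval and a
# two-tailed z test of a mean: if the value stated in the null hypothesis
# lies outside the 95% interval, the null hypothesis can be rejected at
# the 0.05 level.

import math

def ci_for_mean(sample_mean, sigma, n, z=1.96):
    margin = z * sigma / math.sqrt(n)
    return sample_mean - margin, sample_mean + margin

null_value = 7.5
low, high = ci_for_mean(sample_mean=7.8, sigma=0.76, n=36)
print(f"95% CI: ({low:.2f}, {high:.2f})")
print("Reject H0 at the 0.05 level" if not (low <= null_value <= high)
      else "Fail to reject H0 at the 0.05 level")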
10. What does statistically significant mean?
In English, significant means important. In statistics, it means probably true.
Significance levels show you how likely a result is due to chance. The most
common level, which usually indicates “good enough,” is 0.95. This means that
the finding has a 95% chance of being true. However, this is reported as a 0.05
level of significance, meaning that the finding has a 5% (0.05) chance of not
being true, which is the converse of a 95% chance of being true. To find the
significance level, subtract the number shown from 1. For example, a value of
0.01 means there is a 99% (1 - 0.01 = 0.99) chance of it being true.
11. What is data mining?
Data mining is an analytic process designed to explore large amounts of data in
search of consistent patterns or systematic relationships between variables and
then to validate these findings by applying the detected patterns to new subsets
of data. There are three basic stages in data mining: exploration, model building
or identifying patterns, and validation and verification. If the nature of available
data allows, it is typically repeated until a model is identified. However, in
business decision making, options to validate the model are often limited. Thus,
the initial results often have the status of general recommendations or guides
based on statistical evidence (for example, soccer moms appear to be more likely
to drive a minivan than an SUV).
12. What are the different levels of measurement?
Data come in four types and four levels of measurement, which can be
remembered by the French word for black:
NOIR: nominal (lowest), ordinal, interval, and ratio (highest)
Nominal scale: Measures in terms of names or designations of discrete units or
categories. Example: gender, color of home, religion, type of business.
Ordinal scale: Measures in terms of such values as more or less, larger or
smaller, but without specifying the size of the intervals. Example: rating
scales, ranking scales, Likert-type scales.
Interval scale: Measures in terms of equal intervals or degrees of difference,
but without a true zero point. Ratios do not apply. Example: temperature, GPA,
IQ.
Ratio scale: Measures in terms of equal intervals and an absolute zero point of
origin. Ratios apply. Example: height, delay time, weight.
A general and important guideline is that the statistics based on one level of
measurement should not be used for a lower level, but can be used for a higher
level. An implication of this guideline is that data obtained from using a Likert-
type scale (a scale in which people set their preferences from say 1 = totally
agree to 7 = totally disagree) should, generally, not be used in parametric tests.
However, there is controversy regarding treating Likert-type scales as interval
data (see below). In any case, if you cannot use a parametric test, there is almost
always an alternative approach using nonparametric tests.
If the mean, median, and mode are identical, then the shape of the distribution
will be unimodal, symmetric, and resemble a normal distribution. A distribution
that is skewed to the right and unimodal will have a long right tail, whereas a
distribution that is skewed to the left and unimodal will have a long left tail. A
unimodal distribution that is skewed has its mean, median, and mode occur at
different values. For highly skewed distributions, the median is the preferred
measure of central tendency, since a mean can be greatly affected by a few
extreme values on one end.
Cutting Board
1. Arrange your data in a frequency table. If you have administered a
questionnaire or survey, you might wish to list the different responses to
each question in conjunction with the frequency that they were selected.
2. Construct graphs to illustrate the distributions determined by your
frequency table.
3. Compute any statistical numbers that are descriptive of your data such as
means, standard deviations, proportions, percentages, or quartiles. (You
may wish to use a calculator or a computer programmed to determine your
mean and standard deviation with the mere press of a button once
information has been entered in a befitting manner. The manual that comes
with the machine or computer program could prove helpful for this task.)
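Here is a minimal sketch in Python of the first and third steps above, using hypothetical responses to a single survey question:

# A minimal sketch of a frequency table and a few descriptive statistics
# for hypothetical responses to one survey question.

from collections import Counter
import statistics

responses = [5, 4, 4, 3, 5, 2, 4, 5, 3, 4]   # hypothetical item scores

frequency_table = Counter(responses)
for value in sorted(frequency_table):
    print(f"Response {value}: chosen {frequency_table[value]} times")

print(f"Mean = {statistics.mean(responses):.2f}")
print(f"Standard deviation = {statistics.stdev(responses):.2f}")
print(f"Quartiles = {statistics.quantiles(responses, n=4)}")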
Definitions
One of the keys to understanding a specialized field is getting to know its
technical vocabulary. As you continue to put together your research project, you
might come across unfamiliar words. The following words often appear in
quantitative studies. Remember to refer to them as needed.
Alternative hypothesis: The hypothesis that is accepted if the null hypothesis is
rejected.
Analysis of variance (ANOVA): A statistical method for determining the
significance of the differences among a set of sample means.
Aggregated data: Data for which individual scores on a measure are combined
into a single group summary score.
Central limit theorem: A mathematical conjecture that informs us that the
sampling distribution of the mean approaches a normal curve as the sample size,
n, gets larger.
Chi-square (χ²) distribution: A continuous probability distribution used directly
or indirectly in many tests of significance. The most common use of the chi-
square distribution is to test differences between proportions. Although this test
is by no means the only test based on the chi-square distribution, it has come to
be known as the chi-square test. The chi-square distribution has one parameter,
its degrees of freedom (df). It has a positive skew; the skew is less with more
degrees of freedom. The mean of a chi-square distribution is its df, the mode is
df – 2, and the median is approximately df – 0.7.
Confidence interval: A range of values used to estimate some population
parameter with a specific level of confidence. In most statistical tests, confidence
levels are 95% or 99%. The wider the confidence interval, the higher the
confidence level will be.
Confidence level: A desired percentage of scores (often 95% or 99%) that the
true parameter would fall within a certain range. If a study indicates that the
Democratic candidate will capture 75% of the vote with a 3% margin of error (or
confidence interval) at the 95% level of confidence, then the Democratic
candidate can be 95% sure that she will capture between 72% and 78% of the
votes.
Confounding variable: An extraneous variable that is not a focus of the study but
is statistically related to (or correlated with) the independent variable. This
means that as the independent variable changes, the confounding variable
changes along with it.
Correlation: A relationship between variables such that increases or decreases in
the value of one variable tend to be accompanied by increases or decreases in the
other.
Correlation coefficient: A measurement between -1 and 1 indicating the strength
of the relationship between two variables.
Critical region: The area of the sampling distribution that covers the value of the
test statistic that is not due to chance variation. In most tests it represents
between 1 and 5% of the graph of the distribution.
Critical value: The value from a sampling distribution that separates chance
variation from variation that is not due to chance.
Cronbach’s alpha: This is a measure of internal reliability or consistency of the
items in an index. Used often with tests that employ Likert-type scales. Values
range from 0 to 1.0. Scores toward the high end of that range (above 0.70)
suggest that the items in an index are measuring the same thing.
Data: Facts and figures collected through research. The word data is plural, just
like the word “toys.” Data are us. :) Datum is the singular form of data.
Degrees of freedom (df): The number of values free to vary after certain
restrictions have been imposed on all values. The df depends on the sample size
(n) and dimensionality (number of variables (k)).
Dependent variable: The variable that is measured and analyzed in an
experiment. In traditional algebraic equations of the form y =___ x + ____, it is
usually agreed that y is the dependent variable.
Dependent samples: The values in one sample are related to the values in
another sample. Before and after results are dependent samples.
Descriptive statistics: The methods used to summarize the key characteristics of
known population and sample data.
Effect size: The degree to which a practice, program, or policy has an effect
based on research results, measured in units of standard deviation. If a researcher
finds an effect size of d = .5 for the effect of a test preparation program on SAT
scores, this means the average student who participates in the program will
achieve one-half standard deviation above the average student who does not
participate. If the standard deviation is 20 points, then the effect size translates
into ten additional points on the average participant's test score.
Experiment: A process that allows observations to be made. In probability an
experiment can be repeated over and over under the same conditions.
External validity: The degree to which results from a study can be generalized to
other participants, settings, treatments and measures.
Exploratory data analysis: Any of several methods, pioneered by John Tukey, of
discovering unanticipated patterns and relationships by presenting quantitative
data visually.
F distribution: A continuous probability distribution used in tests comparing two
variances. It is used to compute probability values in the ANOVA. The F
distribution has two parameters: degrees of freedom numerator (dfn) and degrees
of freedom denominator (dfd).
Goodness of fit: Degree to which observed data coincide with theoretical
expectations.
Histogram: A graph of connected vertical rectangles representing the frequency
distribution of a set of data.
Hypothesis: A statement or claim that some characteristic of a population is true.
Hypothesis test: A method for testing claims made about populations; also called
the test of significance. In Recipes for Success, the CANDOALL method is
presented to help understand how to test hypotheses.
Independent variable: The treatment variable. In traditional algebraic equations
of the form y =___ x + ____, it is usually agreed that x is the independent
variable.
Inferential statistics: The methods of using sample data to make generalizations
or inferences about a population.
Interval scale: A measurement scale in which equal differences between numbers
stand for equal differences in the thing measured. The zero point is arbitrarily
defined. Temperature is measured on an interval scale.
Kurtosis: The shape (degree of peakedness) of a curve that is a graphic
representation of a unimodal frequency distribution. It indicates the degree to
which data cluster around a central point for a given standard deviation. It can be
expressed numerically and graphically.
Kruskal-Wallis test: A nonparametric hypothesis test used to compare three or
more independent samples. It is the nonparametric version of a one-way
ANOVA for ordinal data.
Left-tail test: Hypothesis test in which the critical region is located in the
extreme left area of the probability distribution. The alternative hypothesis is the
claim that a quantity is less than (<) a certain value.
Level of significance: The probability level at which the null hypothesis is
rejected. Usually represented by the Greek letter alpha (α).
Linear Structural Relations (LISREL): A computer program developed by
Jöreskog that is used for analyzing covariance structures through structural
equation models. It can be used to analyze causal models with multiple
indicators of latent variables and relationships between the latent variables. It
goes beyond more typical factor analysis.
Mean: A measure of central tendency, the arithmetic average; the sum of scores
divided by the number of scores.
Median: A measure of central tendency that divides a distribution of scores into
two equal halves so that half the scores are above the median and half are below
it.
Mode: A measure of central tendency that represents the most fashionable, or
most frequently occurring, score.
Multiple regression: Study of linear relationships among three or more variables.
Nominal data: Data that are names only, with no real quantitative value. Often
numbers are arbitrarily assigned to nominal data, such as male = 0, female = 1.
Nonparametric statistical methods: Statistical methods that do not require a
normal distribution or that data be at the interval or ratio level.
Normal distribution: Gaussian curve. A theoretical bell-shaped, symmetrical
distribution based on frequency of occurrence of chance events.
Null hypothesis: The null hypothesis is a hypothesis about a population
parameter. It assumes no change or status quo (=). The purpose of hypothesis
testing is to test the viability of the null hypothesis in the light of the data.
Depending on the data, the null hypothesis either will or will not be rejected as a
viable possibility. We do not use the term accept when referring to the results of
a statistical test.
Odds in favor: The number of ways an event can happen compared to the
number of ways that it cannot happen.
Ogive: A graphical method of representing cumulative frequencies.
One-tailed test: A statistical test in which the critical region lies in one tail of the
distribution. If the alternative hypothesis has a <, then you will conduct a left-
tailed test. If it contains a >, then it will be right-tailed test.
One-way ANOVA: Analysis of variance involving data classified into groups
according to a single criterion.
Operational definition: A concise definition of a term characterized by the
functional use of that term. Operational definitions focus on prototypical usage
or usage in practice. Operational definitions need to be concise and no more than
one to three sentences in length.
Ordinal scale: A rank-ordered scale of measurement in which equal differences
between numbers do not represent equal differences between the things
measured. A Likert-type scale is a common ordinal scale.
Outlier: A single observation far away from the rest of the data. One definition
of "far away" is less than Q1 − 1.5 × IQR or greater than Q3 + 1.5 × IQR where
Q1 and Q3 are the first and third quartiles, respectively, and IQR is the
interquartile range (equal to Q3 − Q1). These values define the so-called inner
fences, beyond which an observation would be labeled a mild outlier. Outliers
can be indicative of the occurrence of a phenomenon that is qualitatively
different than the typical pattern observed or expected in the sample; thus, the
relative frequency of outliers could provide evidence of how often cases depart
from the process or phenomenon that is typical for the majority of a group.
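Here is a minimal sketch in Python of the inner-fence rule in this definition, using hypothetical scores:

# A minimal sketch of the inner-fence rule: values below Q1 - 1.5*IQR or
# above Q3 + 1.5*IQR are flagged as mild outliers.

import statistics

data = [12, 14, 14, 15, 16, 17, 18, 19, 41]   # hypothetical scores; 41 looks suspect

q1, _, q3 = statistics.quantiles(data, n=4)   # first and third quartiles
iqr = q3 - q1
lower_fence, upper_fence = q1 - 1.5 * iqr, q3 + 1.5 * iqr

outliers = [x for x in data if x < lower_fence or x > upper_fence]
print(f"Inner fences: ({lower_fence:.1f}, {upper_fence:.1f}); outliers: {outliers}")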
Parameter: Some numerical characteristic of a population. If the mean score on a
midterm exam for a statistics class was 87%, this score would be a parameter. It
describes the population composed of all those who took the test. Population
parameters are usually symbolized by Greek letters, such as μ for the mean and
σ for the standard deviation.
Parametric methods: Types of statistical procedures for testing hypotheses or
estimating parameters based on population parameters that are measured on
interval or ratio scales. Data are usually normally distributed.
Pie chart: Graphical method of representing data in the form of a circle
containing wedges.
Population: All members of a specified group.
Probability: A measure of the likelihood that a given event will occur.
Mathematical probabilities are expressed as numbers between 0 and 1.
Probability distribution: Collection of values of a random variable along with
their corresponding probabilities.
p value: The probability that a test statistic in a hypothesis test is at least as
extreme as the one actually obtained. A p value is found after a test statistic is
determined. It indicates how likely the results of an experiment were due to a
chance happening.
Qualitative variable: A variable that is often measured with nominal data.
Quantitative variable: A variable that is measured with interval or ratio data.
Random sample: A subset of a population chosen in such a way that any member
of the population has an equal chance of being selected.
Range: The difference between the highest and the lowest score.
Ratio scale: A scale that has equal differences and equal ratios between values
and a true zero point. Heights, weights, and time are measured on ratio scales.
Raw score: A score obtained in an experiment that has not been organized or
analyzed.
Regression line: The line of best fit that runs through a scatterplot.
Right-tailed test: Hypothesis test in which the critical region is located in the
extreme right area of the probability distribution. The alternative hypothesis is
the claim that a quantity is greater than (>) a certain value.
Sample: A subset of a population.
Sampling error: Errors resulting from the sampling process itself.
Scattergram: The points that result when a distribution of paired values are
plotted on a graph.
Sign test: A nonparametric hypothesis test used to compare samples from two
populations.
Significance level: The probability that serves as a cutoff between results
attributed to chance happenings and results attributed to significant differences.
Skewed distribution: An asymmetrical distribution.
Spearman’s rank correlation coefficient: Measure of the strength of the
relationship between two variables.
Spearman’s rho: A correlation statistic for two sets of ranked data.
Standard deviation: The weighted average amount that individual scores deviate
from the mean of a distribution of scores, which is a measure of dispersion equal
to the square root of the variance. At least 75% of all scores will fall within the
interval of two standard deviations from the mean. At least 89% of all scores
will fall within three standard deviations from the mean. The 68-95-99.7 rule
applies generally to a variable X having a normal (bell-shaped or mound-shaped)
distribution with mean μ (the Greek letter mu) and standard deviation σ (the
Greek letter sigma). However, this rule does not apply to distributions that are
“very” nonnormal. The rule states: approximately 68% of the observations fall
within one standard deviation of the mean; approximately 95% of the
observations fall within two standard deviations of the mean; and approximately
99.7% of the observations fall within three standard deviations of the mean.
Another general rule is this: If the distribution is approximately normal, the
standard deviation is approximately equal to the range divided by 4.
Standard error of the mean: The standard deviation of all possible sample means.
Standard normal distribution: A normal distribution with a mean of 0 and a
standard deviation equal to 1.
Statistic: A measured characteristic of a sample.
Statistics: The collection, organization, analysis, interpretation, and prediction of
data.
t distribution: Theoretical, bell-shaped distribution used to determine the
significance of experimental results based on small samples. Also called the
Student t distribution.
t test (Student t test): Significance test that uses the t distribution. A Student t test
deals with the problems associated with inference based on small samples.
Test statistic: Used in hypothesis testing, it is the sample statistic based on the
sample data. We obtain test statistics by plugging in data we gathered into a
formula.
Two-tailed test of significance: Any statistical test in which the critical region is
divided into the two tails of the distribution. The null hypothesis usually states
that a parameter equals a certain quantity; the alternative hypothesis states that
the parameter is not equal to that quantity (≠), which allows for either < or >.
This yields a two-tailed test.
Type I error: The mistake of rejecting the null hypothesis when it is true.
Type II error: The mistake of failing to reject the null hypothesis when it is false.
Uniform distribution: A distribution of values evenly distributed over the range
of possibilities.
Variable: Any measurable condition, event, characteristic, or behavior that is
controlled or observed in a study.
Variance: The square of the standard deviation; a measure of dispersion.
Wilcoxon rank-sum test: A nonparametric hypothesis test used to compare two
independent samples.
z score: Also known as a standard score. The z score indicates how far and in
what direction an item deviates from its distribution’s mean, expressed in units
of its distribution’s standard deviation. The mathematics of the z score
transformation are such that if every item in a distribution is converted to its z
score, the transformed scores will necessarily have a mean of 0 and a standard
deviation of 1.
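Here is a minimal sketch in Python of the z-score transformation, using hypothetical raw scores:

# A minimal sketch of the z-score transformation: after the transformation
# the scores have a mean of 0 and a standard deviation of 1.

import statistics

scores = [70, 75, 80, 85, 90]   # hypothetical raw scores

mean = statistics.mean(scores)
sd = statistics.pstdev(scores)  # population standard deviation

z_scores = [(x - mean) / sd for x in scores]
print([round(z, 2) for z in z_scores])
print(round(statistics.mean(z_scores), 2), round(statistics.pstdev(z_scores), 2))  # 0.0 and 1.0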
How to (Ap)praise your Date (a)
Statistical Hypothesis Testing
Featuring:
Essential Steps in Hypothesis Testing: S3d2 and CANDOALL
How to Choose Desirable Spices (Tests)
Testing Claims About: Means, Standard Deviations, Proportions, and
Relations
Nonparametric Tests
In this section you will explore statistical hypothesis testing to determine how
much of what you anticipated would happen actually did happen. The model for
hypothesis testing in Recipes includes an 8-step process called the CANDOALL
model. For another approach, you might wish to check out
http://tinyurl.com/379u8gv.
Hypotheses are educated guesses derived by logical analysis using induction or
deduction based on your knowledge of the problem and your purpose for
conducting the study. They can range from very general statements to highly
specific ones. Most quantitative research studies focus on testing hypotheses.
After the data are collected and organized in some logical manner, such as a
frequency table, and a descriptive statistic (mean, standard deviation, percentage,
etc.) is computed, then a statistical test is often used to analyze the data, interpret
what this analysis means in terms of the problem, and make predictions about a
population based on these interpretations.
You might wish to visit this section once before all your data are collected and
then plan to revisit once your data are known.
When you use statistics, you are comparing your numerical results to a number that is
reflective of a chance happening and determining the significance of the difference
between these two numbers. If you are planning to use statistical hypothesis testing as
part of your dissertation, thesis, or research project, you should read this section
slowly and carefully, paying close attention to key words and phrases. Make sure you
are familiar with all the terminology employed.
If your study involves quantitative data and the testing of hypotheses, you will undoubtedly find the
examples in this part of your Recipes for Success extremely beneficial. As you continue to read this
information actively, you will become familiar with the techniques of the statistician and determine
which statistical tests would work best for your study. Remember to keep a positive mental attitude as
well as an open and inquisitive mind as you digest the information in this section.
There is a myth that statistical analysis is a difficult and unpleasant process that
requires a thorough understanding of advanced mathematics. This is not the
case. Statistical hypothesis testing can be fun and easy. Although many esoteric
tests exist (just as there are many exotic spices in the universe), most researchers
use mundane tests (the way most chefs prepare delicious meals with common
spices). The mundane spices for statistical hypothesis testing are z tests, t tests,
chi-square tests, ANOVA (F tests), and rho tests. As you carefully and cheerfully
read this section, you will learn which of these spices might best complement
your meal. Just as in cooking, sometimes you will find more than one spice that
could be apropos and could enhance your dishes. In analyzing your data, you
will likely find more than one type of statistical test that would be appropriate
for your study, and the choice is often yours to make.
Consider an example. Ms. R claims that a new reading program, NMTR, will significantly increase the average reading level of seventh-grade students by mid-year; reading scores for seventh graders in the population have a mean of 7.5 and a standard deviation of 0.76.
To test her claim, Ms. R samples 36 students (n = 36) who have been using NMTR and finds that by mid-year the mean reading level of this group is 7.8. However, since the standard deviation of the population is 0.76, a sample mean of 7.8 could still fall within the normal range of chance variation.
Statistical hypothesis testing will be used to determine whether the sample mean score of 7.8 represents a statistically significant increase over the population mean of 7.5 or whether the difference is more likely due to chance variation in reading scores.
Before the eight-step statistical test (CANDOALL) recipe is employed, you need
to procure five preliminary pieces of information: three begin with the letter “s”
and two with the letter “d”: S3d2
(s) What is the substantive hypothesis?
(What does the researcher think will happen based on a sound theoretical
framework?)
Ms. R claims that NMTR will significantly increase the average reading level of
seventh-grade students by mid-year.
Cutting Board
Write one substantive hypothesis that is apropos to your study, i.e., what do you
think (claim) that your study will reveal?
(s) How large is the sample that was studied?
Ms. R sampled 36 seventh-grade students (n = 36) who have been using
NMTR.
(s) What descriptive statistic was determined by the sample?
The mean of the sample, x̄, was 7.8.
(d) What type of data were collected?
Ms. R used interval data.
(If data were nominal or ordinal then a nonparametric test would be called
for.)
(d) What type of distribution did the data form? To use parametric statistical
tests usually requires a normal (or close to normal) distribution of the data.
Graphical methods such as histograms are very helpful in identifying skewness
in a distribution. If the statistic you are testing is a mean and the data type is ratio
or interval, then a z test or t test will likely be applied. These tests are pretty
robust and can be applied even if the data are skewed.
Ms. R will not need to be concerned about the distribution of her data.
Cutting Board
1. Determine the sample size for your study (n = ____).
2. What descriptive statistic was obtained from your study?
___________________
(You may wish to give an approximate value or result that you think you
might obtain from your study to practice applying this process.)
______________________________
Now we are ready to take the information obtained by the three Ss and two Ds
and employ an eight-step recipe to create a delectable statistical test.
We will determine if Ms. R’s claim, “NMTR increases the reading level of
seventh graders,” is statistically correct.
1. Identify the claim (C) to be tested and express it in symbolic form. The claim is about the population mean, which is represented by the Greek letter μ:
μ > 7.5.
That is, Ms. R claims that the mean reading score, μ, of the seventh-grade students who use NMTR is greater than the population mean, 7.5.
Write your claim in symbolic form.
2. Express in symbolic form the statement that would be true, the
alternative (A), if the original claim is false. All cases must be covered.
Write the opposite of your claim in symbolic form (remember to cover all
possibilities).
_____________________________________________________________
3. Identify the null (N) and alternative hypothesis.
Note: The null hypothesis should be the one that contains no change (an equal sign).
H0: μ ≤ 7.5 (null hypothesis)
H1: μ > 7.5 (alternative hypothesis)
Determine the null and alternative hypotheses in your study.
_____________________________________________________________
Note: A statistical test is designed to reject or fail to reject the statistical null
hypothesis being examined.
4. Decide (D) the level of significance, α, based on the seriousness of a type I error, which is the mistake of rejecting the null hypothesis when it is in fact true. Make α small if the consequences of rejecting a true H0 are severe. The smaller the α value, the less likely you will be to reject the null hypothesis. Alpha values of 0.05 and 0.01 are very common. The default is 0.05.
Ms. R chooses α = 0.05.
5. Order (O) a statistical test relevant to your study—see Table 1. Since the
claim involves a sample mean and n > 30, we can compute a z value and
use a z test. A z value, or test value, is a number we compute that can be
graphed as a point on the horizontal scale of the standard normal
distribution (bell-shaped) curve. This point indicates how far from the
population mean (expected mean under the null conditions) our sample
mean is and thus enables us to determine how unusual our research
findings are.
True state of the claim being tested

Decision                       H0 is true           H1 is true
Fail to reject null (H0)       Correct decision     Type II error
Reject null (accept H1)        Type I error         Correct decision
Note that two kinds of errors are represented in the table. Many statistics textbooks present a point of view that is common in business decision making: α, the type I error rate, must be kept at or below 0.05, and that, if at all possible, β (beta), the type II error rate, must be kept low as well. Statistical power, which is equal to 1 - β, must be kept correspondingly high. Ideally, power should be at least .90 to detect a reasonable departure from the null hypothesis.
FOR YOUR INFORMATION AND EDUCATION
The central limit theorem (the distribution of a mean will tend to be normal
as the sample size increases, regardless of the distribution from which the
mean is taken) implies that for samples of sizes larger than 30 (n > 30), the
sample means can be approximated reasonably well by a normal (z)
distribution. The approximation gets better as the sample size, n, becomes
larger. Do not confuse the CLT with a BLT (Bacon/Lettuce/Tomato
sandwich) .
When you compute a z value, you are converting your mean to a mean of 0
and your standard deviation to a standard deviation of 1. This allows you to
use the standard normal distribution curve and its corresponding table to
determine the significance of your values regardless of the actual value of
your mean or standard deviation.
A standard normal probability distribution is a bell-shaped curve (also called a Gaussian curve, after Carl Friedrich Gauss) for which the mean, or middle value, is 0 and the standard deviation, the place where the curve starts to bend, is equal to 1 on the right and -1 on the left. The area under every probability distribution curve is equal to 1 (or 100%). Since a Gaussian curve is symmetric about the mean, the mean divides the curve into two equal areas of 50%. Approximately 68% of the data are within one standard deviation of the mean.
Blood cholesterol levels, heights of adult women, weights of 10-year-old boys, diameters of apples, scores on standardized tests, and so on are all examples of collections of values whose frequency distributions resemble the Gaussian curve.
If you were to record all the possible outcomes from the toss of 100
different coins by graphing the number of heads that could occur on the
horizontal axis (0,1,2,3...100) and the frequency with which each of the
number of heads could occur on the vertical axis, you will produce a
graph resembling the normal distribution. (The most frequent results
would cluster around 50 heads and become less and less frequent as you
consider values further and further from 50.)
6. Do the Arithmetic (A): Determine the test statistic, critical value or
values, and the critical region.
The sample mean, x̄, of 7.8 is equivalent to a z value of 2.37. This z value is the test statistic and was computed using the following formula:
z = (x̄ - μ) / (σ/√n),
where x̄ = sample mean, μ = population mean, n = size of sample, and σ = population standard deviation.
Thus, z = (7.8 - 7.5) / (0.76/√36) = 0.3 / 0.1267 ≈ 2.37 (to the nearest hundredth).
Z values usually vary between -3 and +3. If they are outside this range, the null hypothesis will almost always be rejected (or an error was made).
The quantity σ/√n is often called the standard error of the mean, or the standard deviation of the sample means.
Sometimes you can substitute the standard deviation of the sample, s, for the population standard deviation if sigma (σ) is unknown.
The α = 0.05 level requires us to find a z value that will separate the curve into two unequal regions: the smaller one with an area of 0.05 (5%) and the larger one with an area of 100% - 5%, or 95% (0.95, often referred to as a 95% confidence level).
Z-table entries indicate the percentage of area under the bell-shaped curve from the mean (middle) toward the right tail of the curve. Thus, for an alpha of 0.05, we need to determine what z value will cut off an area of 45% (0.4500) from the mean toward the right tail (we already know that 50% of the area is on the left side of the mean; we obtain 95% by adding 45% to 50%).
Hunting through the vast array of four-digit numerals in the table, we find our
critical value to be between 1.6 (see z column) + .04 (1.64), which (reading
down the .04 column) determines an area of .4495, and 1.6 +.05 (1.65), which
determines an area of .4505. Thus, if we take the mean average of 1.64 and 1.65,
we can blissfully determine the critical value to be 1.645 and the critical region to be all z values greater than 1.645. This determination requires us to reject the null hypothesis if our test statistic (z value) is greater than 1.645.
Because the alternative hypothesis (H1), μ > 7.5, points in only one direction, we call this a one-tailed (right-tailed) test. If our alternative hypothesis were μ ≠ 7.5, a two-tailed test would be used, since there would be two alternatives: μ < 7.5 or μ > 7.5.
An entry in the table is the proportion of the area under the entire standard
normal curve, which is between z = 0 and a positive value of z. Areas for
negative values are obtained by symmetry.
7. Look (L) to reject the null hypothesis if the test statistic is in the critical
region. Fail to reject the null hypothesis if the test statistic is not in the
critical region.
Ms. R’s z value is in the critical region since 2.37 > 1.645. Thus we will reject
the null hypothesis.
8. Restate the previous decision in lay (L) or simple nontechnical terms.
We have reason to believe that NMTR improves the reading level of seventh-
grade students.
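The entire CANDOALL calculation for this example can be checked with a few lines of code. Below is a minimal sketch in Python using scipy.stats; the numbers are the ones from Ms. R's example, and the variable names are ours, chosen only for illustration.

    from math import sqrt
    from scipy.stats import norm

    # Ms. R's example: population mean 7.5, population sd 0.76, n = 36, sample mean 7.8
    mu, sigma, n, x_bar = 7.5, 0.76, 36, 7.8
    alpha = 0.05

    z = (x_bar - mu) / (sigma / sqrt(n))   # test statistic, about 2.37
    critical = norm.ppf(1 - alpha)         # right-tailed critical value, about 1.645
    p_value = norm.sf(z)                   # area to the right of z, about 0.009

    print(f"z = {z:.2f}, critical value = {critical:.3f}, p = {p_value:.4f}")
    if z > critical:
        print("Reject the null hypothesis: the increase appears statistically significant.")
    else:
        print("Fail to reject the null hypothesis.")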
FOR YOUR INFORMATION AND EDUCATION
If your sample size, n, is less than 30 and the population standard deviation is unknown, but there is a normal distribution in the population, then you can compute a t statistic:
t = (x̄ - μ) / (s/√n),
where s is the standard deviation of the sample, μ is the mean under contention, and n is the size of the sample.
Notice that z and t statistics are computed exactly the same way; the only
difference is in their corresponding values of significance when you (or your
computer) checks these values on the graph or table.
In this example, we tested a claim about a numerical value (the mean score on a standardized test). The sample data came from a population known to have a normal distribution. We were thus able to use parametric methods in our hypothesis testing.
In general, when you test claims about interval or ratio parameters (such as means, standard deviations, or proportions), and some fairly strict requirements (such as the sample data coming from a normally distributed population) are met, you should be able to use parametric methods.
If you do not meet the necessary requirements for parametric methods, do
not despair. It is very likely that there are alternative techniques,
appropriately named nonparametric methods, that you will be able to use
instead.
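When n is small and the population standard deviation is unknown, the same kind of check can be run as a one-sample t test. The sketch below uses scipy.stats.ttest_1samp on made-up reading scores; the data, the sample size, and the hypothesized mean of 7.5 are purely illustrative, and the alternative keyword assumes a reasonably recent version of SciPy.

    from scipy.stats import ttest_1samp

    # Hypothetical reading scores for a small sample (n = 12); not data from the study
    scores = [7.9, 7.4, 8.1, 7.6, 7.8, 7.5, 8.0, 7.7, 7.3, 7.9, 8.2, 7.6]

    # Right-tailed test of H0: mu <= 7.5 against H1: mu > 7.5
    t_stat, p_value = ttest_1samp(scores, popmean=7.5, alternative="greater")
    print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
    # Reject H0 at the 0.05 level only if p < 0.05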
Cutting Board
Testing Claims About Two Means
In this section we will discuss a claim made about two means (such that the
mean of one group is less than, greater than, or equal to the mean of another
group). The researcher will first need to determine whether the groups are dependent, i.e., the values in one sample are related to the values in the other sample (as in before-and-after tests, tests involving spouses, or pairs such as an employer and an employee), or independent, i.e., the values in one sample are not related to the values in the other sample (as when an experimental group is compared to a control group, or when samples come from two different populations, such as the eating habits of people in Michigan versus Hawaii).
If the researcher can answer “yes” to one of the questions below, then the
identical statistical test described in this section can be employed. Is the
researcher claiming
___1. One product, program, or treatment is better than another?
___2. One group is better (or worse ) than another (with respect to
some variable)?
___3. An experimental program was effective?
Many real and practical situations involve testing hypotheses made about two
population means. For example, a manufacturer might want to compare output
on two different machines to see if they obtain the same result. A nutritionist
might wish to compare the weight loss that results from patients on two different
diet plans to determine which is more effective. A chef might want to decide
which entrée goes better with a meal. A psychologist might want to test for a difference in mean reaction times between men and women to determine if women respond more quickly than men in an emergency situation.
If the two samples (groups) are dependent—the values in one sample are related
to the values in the other in some way—a t statistic is computed and a simple
paired t test may be used to test your claim. Computing the differences between
the related means and then obtaining the mean of all these differences leads to
this t statistic.
If the two samples are independent, i.e., the values in one sample are not related to the values in the other, and the size of each group is n ≥ 30, or the standard deviations of the populations are known, then a two-sample z statistic might be computed and a z test could be ordered. In this case, the difference in the population means is subtracted from the difference in the sample means, and the result is divided by the square root of the sum of each variance divided by its respective sample size:
z = [(x̄1 - x̄2) - (μ1 - μ2)] / √(σ1²/n1 + σ2²/n2).
BUT, if the two samples are independent, the sample size (n) < 30 for each group, and the population standard deviations are not known, then whom do you call? Answer: the F (team) test. The F test is used first to see if the standard deviations are equal. (A relatively small F value indicates that the standard deviations are about the same.) If the F value is relatively small, then the researcher, or much more likely a computer, would perform a pooled t test to test the claim; this involves a rather tedious computation. However, if the F test yields a relatively large F value, a separate-variance t test is used instead.
Once the mean and standard deviation are computed for each sample, it is
customary to identify the group with the larger standard deviation as Group 1
and the other sample as Group 2.
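A minimal sketch of this decision process in Python, with two invented small independent samples (the weight-loss numbers are illustrative only; scipy and numpy are assumed):

    import numpy as np
    from scipy import stats

    # Hypothetical weight loss (pounds) for two small independent diet groups
    group_1 = [4.2, 5.1, 3.8, 6.0, 4.9, 5.5, 4.4, 5.2]
    group_2 = [3.1, 2.8, 4.0, 3.5, 2.9, 3.7, 3.3, 3.9]

    # F test for equal standard deviations: put the larger variance on top (Group 1)
    v1, v2 = np.var(group_1, ddof=1), np.var(group_2, ddof=1)
    if v1 < v2:
        v1, v2, group_1, group_2 = v2, v1, group_2, group_1
    F = v1 / v2
    df1, df2 = len(group_1) - 1, len(group_2) - 1
    p_equal_sd = min(2 * stats.f.sf(F, df1, df2), 1.0)   # two-tailed p for H0: equal variances

    # Pooled t test if the variances look equal, separate-variance (Welch) t test otherwise
    t_stat, p_value = stats.ttest_ind(group_1, group_2, equal_var=(p_equal_sd > 0.05))
    print(f"F = {F:.2f} (p = {p_equal_sd:.3f}), t = {t_stat:.2f}, p = {p_value:.4f}")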
The nonparametric counterpart of the paired z or t test is the Wilcoxon signed-
rank test if samples are dependent and the Wilcoxon rank-sum test if samples are
independent.
If you can answer yes to both questions below, you can use the identical
statistical test described in this section. Are you claiming that
__ 1. There is a relationship or correlation between two factors, two events,
or two characteristics?, and
__ 2. The data are at least of the interval measure?
To perform regression and correlational analyses:
1. Record the information in table form.
2. Create a scatter diagram to see any obvious relationships or trends.
3. Compute the correlation coefficient r, also known as the Pearson correlation coefficient, to obtain an objective analysis that will uncover the magnitude and significance of the relationship between the variables.
4. Determine if r is statistically significant. If r is statistically significant, then regression analysis can be used to determine what the relationship between the variables is.
Example: Suppose a randomly selected group of teachers is given the Survey on
Calculator Use (SOCU) to measure how they integrate calculators in their
classrooms and then tested for their levels of math anxiety using the Math
Anxiety Rating Scales or MARS test:
1. The results for each participant are recorded in table form (the values appear below):
MARS SOCU
123.00 15.00
145.00 12.00
154.00 11.00
121.00 16.00
230.00 5.00
300.00 4.00
145.00 10.00
124.00 17.00
145.00 11.00
165.00 12.00
138.00 14.00
312.0 4.00
The researcher’s hypothesis is that teachers who have lower levels of math
anxiety are more likely to use calculators in their classes. (Note: The
independent variable (x) is the math anxiety level, determined by MARS, and is
being used to predict the dependent variable (y), the use of calculators, as
measured by SOCU.)
H0: r = 0 (there is no relationship)
H1: r ≠ 0 (there is a relationship)
Note: These will usually be the hypotheses in correlation and regression analysis.
2. Draw a scatter diagram:
The points in the scatter diagram appear to follow a downward pattern, so we suspect that there is a relationship between level of math anxiety and the use of calculators by the teachers surveyed, but this impression is somewhat subjective.
3. Compute r.
To obtain a more precise and objective analysis we can compute the linear correlation coefficient, r. Computing r is a tedious exercise in arithmetic, but practically any statistical computer program or scientific calculator will willingly help you along. For the values shown above, the very user-friendly program SPSS determined that r = -0.917.
Some of the properties of this number r are as follows:
1. The computed value of r must be between -1 and +1. (If it's not then
someone or something messed up.)
2. A strong positive correlation would yield an r value close to +1; a
strong negative linear correlation would be close to -1.
3. If r is close to 0, we conclude that there is no significant linear
correlation between x and y.
Checking a table of critical values for r, we find that for this sample of 12 pairs the computed value r = -0.9169 is well beyond the critical value, indicating a strong, statistically significant negative correlation between the use of calculators and measures of math anxiety levels. The r-squared value (0.84) indicates that a teacher's math anxiety might explain about 84% of his or her calculator usage (or nonusage).
4. If there is a significant relation, then regression analysis is used to
determine what that relationship is.
5. If the relation is linear, the equation of the line of best fit can be
determined. (For two variables, the equation of a line can be expressed as y = mx
+ b, where m is the slope and b is the y–intercept.)
Thus, the equation of the line of best fit would be
S = -0.061M + 21.614,
where M is the MARS score and S is the predicted SOCU score.
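A sketch of this analysis in Python, using the 12 pairs of MARS and SOCU scores listed above (scipy is assumed to be available):

    from scipy import stats

    mars = [123, 145, 154, 121, 230, 300, 145, 124, 145, 165, 138, 312]
    socu = [15, 12, 11, 16, 5, 4, 10, 17, 11, 12, 14, 4]

    r, p_value = stats.pearsonr(mars, socu)   # r is about -0.917
    line = stats.linregress(mars, socu)       # least-squares line of best fit
    print(f"r = {r:.4f}, p = {p_value:.6f}")
    print(f"SOCU = {line.slope:.3f} * MARS + {line.intercept:.3f}")
    # Expected: a slope of about -0.061 and an intercept of about 21.61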
The nonparametric counterpart to the Pearson r is the Spearman rank correlation coefficient (rs), Spearman's rho, or Kendall's tau (τ).
Cutting Board
How alike are two people’s tastes in television shows? The following activity
will employ the nonparametric, Spearman rank correlation coefficient test to
help determine the answer to this question. You will need a friend or a relative to
perform this activity.
1. In Column I of the chart provided in Step 3, list 10 different TV shows
that you and a friend or relative are familiar with. Try to have at least one
news show, a situation comedy, a mystery, a variety show, a talk show, and
a drama. Include shows that you like as well as those that you dislike.
2. In Column II, rank the shows that are listed, where 1 is your favorite (the
one you would be most inclined to watch) and 10 is your least favorite (the
one you would be least inclined to watch).
3. Have your friend or relative do a similar ranking in Column III.
I             II             III            IV      V
TV Show       Your Rating    F/R Rating     d       d²
A.
B.
C.
D.
E.
F.
G.
H.
I.
J.
4. Use the graph below to plot the ordered pairs consisting of the two rankings. Label the points with the letters corresponding to the shows in the list. If the two rankings were identical, the points would lie on a straight line pointing northeast and forming a 45-degree angle with both axes. If you were in total disagreement, then the points would lie on a straight line pointing southeast and also forming a 45-degree angle with both axes.
[Graph: a blank grid with Your Rating on the horizontal axis and Friend’s Rating on the vertical axis]
5. Although the scattergram you created might give you an impression of how the two ratings match or correlate with each other, it is probably not very definitive. To determine how closely correlated these rankings are, we can use rank statistics and compute the Spearman rank correlation coefficient in the steps that follow.
6. Go back to the chart in step 3 and compute d, the difference between the two ratings for each show, and d², that is, (d)(d). After you have all the d² values, add them up.
7. The formula for finding the rank correlation is
rs = 1 - (6Σd²) / (n(n² - 1)).
8. To do this on your calculator, multiply the sum of your d² values by 6. Divide this product by 990, which is the denominator, (10)(99), for n = 10 shows. Subtract the result from 1. The number you obtain is your rs value; it should be between -1 and +1.
9. A Spearman table indicates that for your sample size of 10, an r value of
.564 or greater would indicate a positive correlation with an alpha of 0.10,
or a negative value less than -.564 would indicate a negative correlation
with an alpha value of 0.10. The closer r is to 1 or -1, the stronger the
relation. An r value close to 0 indicates no particular relation. What can
you conclude from this test? Should you and this other person turn on the
tube when you are together or would it be better to find a different
activity?
Critical Values of Spearman’s Rank Correlation Coefficient: rs (rho)
n 0.10 0.05 0.02 0.01
10 .564 .648 .745 .794
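If you would rather let software do the arithmetic, the sketch below computes Spearman's rho both from the formula in step 7 and with scipy.stats.spearmanr; the two TV-show rankings are invented for illustration.

    from scipy.stats import spearmanr

    # Hypothetical rankings of 10 TV shows by you and a friend (1 = favorite)
    your_ranks   = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
    friend_ranks = [2, 1, 4, 3, 7, 5, 6, 10, 8, 9]

    n = len(your_ranks)
    d_squared = sum((a - b) ** 2 for a, b in zip(your_ranks, friend_ranks))
    rs_formula = 1 - (6 * d_squared) / (n * (n ** 2 - 1))

    rs_scipy, p_value = spearmanr(your_ranks, friend_ranks)
    print(f"rs (formula) = {rs_formula:.3f}, rs (scipy) = {rs_scipy:.3f}, p = {p_value:.4f}")
    # With no tied ranks the two values agree; compare rs to the critical value of .564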
A regression equation based on old data is not necessarily valid. The regression
equation relating used car prices and ages of cars is no longer usable if it is based
on data from the 1960s. Often a scattergram is plotted to get a visual view of the
correlation and possible regression equation.
Nonlinear relationships can also be examined, but because more complex mathematics is needed to describe and interpret the data, they are used considerably less often. The following are characteristics of all linear correlational studies:
1. Main research questions are stated as null hypotheses, i.e., no
relationship exists between the
variables being studied.
2. In simple correlation, there are two measures for each individual in the sample.
3. To apply parametric methods, there must be at least 30 individuals in the study.
4. Can be used to measure the degree of relationships, not simply whether a relationship
exists.
5. A perfect positive correlation is 1.00; a perfect negative (inverse) is -1.00.
6. A correlation of 0 indicates no linear relationship exists.
7. If two variables, x and y, are correlated so that r = .5, then we say that (0.5)², or 0.25, or 25% of their variation is common; that is, variable x can predict 25% of the variance in y.
TABLE 2. Types of Correlations
Bivariate correlation is when there are only two variables being investigated.
These definitions help us determine which statistical test can be used to
determine correlation and regression.
Continuous scores: Scores that can be measured on an interval or ratio scale
Ranked data: Likert-type scales, class rankings
Dichotomy: Participants classified into two categories, such as Republican versus Democrat
Artificial dichotomy: pass/fail (an arbitrary cutoff); true dichotomy: male/female
The Pearson product-moment correlation coefficient (that is a mouthful!), or
Pearson r, is the most common measure of the strength of the linear relationship
between two variables. The Spearman rank correlation coefficient, or Spearman
r (which we performed above), used for ranked data or when you have a sample
size less than 30 (n < 30), is the second most popular measure of the strength of
the linear relationship between two variables. To measure the strength of the
linear relationship between test items for reliability purposes, Cronbach alpha is
the most efficient method of measuring the internal consistency. Table 2 below
can be used to determine which statistical technique is most appropriate with
respect to the type of data the researcher collects.
Multivariate Correlational Statistics
If you wish to test a claim that multiple independent variables might be used to
make a prediction about a dependent variable, several possible tests can be
constructed. Such studies involve multivariate correlational statistics.
Discriminant Analysis – This is a form of regression analysis designed for
classification. It is used to determine the correlation between two or more
predictor variables and a dichotomous criterion variable. The main use of
discriminant analysis is to predict group membership (e.g., success/nonsuccess)
from a set of predictors. If a set of variables is found that provides satisfactory
discrimination, classification equations can be derived, their use checked out
through hit/rate tables, and if good, they can be used to classify new participants
who were not in the original analysis. In order to use discriminant analysis, the
following ingredients, or assumptions (conditions), are needed:
1. At least twice the number of participants as variables in study
2. Groups have the same variance/covariance structures
3. All variables are normally distributed
For more information, check out http://tinyurl.com/33snxtz
Canonical Correlation – This is also a form of regression analysis used with two
or more independent variables and two or more dependent variables. It is used to
predict a combination of several criterion variables from a combination of
several predictor variables. For example, suppose a researcher was interested in
the relationship between a student’s conation and school achievement. She or he
may wish to use several measures of conation (number of hours spent on
homework, receiving help when needed, class participation) and several
measures of achievement (grades, scores on achievement tests, teacher
evaluation). The two clusters of measurement could be studied with canonical
correlations.
Path Analysis – A type of multivariate analysis in which causal relations among
several variables are represented by graphs or path diagrams showing how
causal influences traveled. It is used to test theories about hypothesized causal
links between variables that are correlated. Researchers can calculate direct and indirect effects of independent variables, something that is not usually done with ordinary multiple regression analysis.
Factor Analysis – Used to reduce a large number of variables to a few factors by
combining variables that are moderately or highly correlated with one another.
Factor analysis is often used in survey research to see if a long series of
questions can be grouped into shorter sets of questions, each of which describes
an aspect or factor of the phenomenon being studied.
Differential Analysis – Used to examine correlation between variables among
homogeneous subgroups within a sample; can be used to identify moderator
variables that improve a measure's predictive validity.
Multiple Linear Regression – Used to determine the correlation between a
criterion variable and a combination of two or more predictor variables. The
coefficient for any particular predictor variable is an estimate of the effect of that
variable while holding constant the effects of the other predictor variables. As in any regression method, we need the following conditions to be met: we are investigating linear relationships; for each x value, y is a random variable having a normal distribution; all of the y values have the same variance; and, for a given value of x, the distribution of y values has a mean that lies on the regression line.
Results are not seriously affected if departures from normal distributions and equal
variances are not too extreme.
The following example illustrates how a researcher might use different
multivariate correlational statistics in a research project:
Suppose a researcher has, among other data, scores on three measures for a group of
teachers working overseas:
1. Years of experience as a teacher
2. Extent of travel while growing up
3. Tolerance for ambiguity
Research Question: Can these measures (or other factors) predict the degree of adaptation to the overseas culture in which they are working?
Discriminant Analysis - Hypothesis 1: Based on these three measures, teachers can be classified into a dichotomous outcome: those who adapted well and those who adapted poorly. Hypothesis 2: Knowing these three factors could be used to predict success.
Multiple Regression - Hypothesis: Some combination of the three predictor measures
correlates better with predicting the outcome measure than any one predictor alone.
Canonical Correlation - Hypothesis: Several measures of adaptation could be quantified,
i.e., adaptation to food, climate, customs, etc. based on these predictors.
Path Analysis - Hypothesis: Childhood travel experience leads to tolerance for
ambiguity and desire for travel as an adult, and this makes it more likely that a teacher
will score high on these predictors, which will lead them to seek an overseas teaching
experience and adapt well to the experience.
Factor Analysis - Suppose there are five more (a total of eight) adaptive measures that
could be determined. All eight measures can be examined to determine whether they
cluster into groups such as education, experience, personality traits, etc.
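The multiple regression hypothesis above could be explored with a sketch like the following. The three predictor columns and the adaptation scores are invented solely to show the mechanics, and numpy's least-squares routine stands in for the regression a package such as SPSS would run:

    import numpy as np

    # Hypothetical rows: years of experience, extent of travel, tolerance for ambiguity
    predictors = np.array([
        [5,  2, 30],
        [10, 5, 45],
        [3,  1, 25],
        [8,  4, 40],
        [12, 6, 50],
        [6,  3, 35],
    ])
    adaptation = np.array([55, 72, 48, 66, 80, 60])   # criterion variable

    # Add a column of 1s for the intercept and solve the least-squares problem
    X = np.column_stack([np.ones(len(predictors)), predictors])
    coefficients, *_ = np.linalg.lstsq(X, adaptation, rcond=None)
    print("intercept and coefficients:", np.round(coefficients, 3))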
To compute the correlation between gender (male/female) and employment status
(employed/unemployed), you could use a phi coefficient. You couldn’t use it for age
and income, however, because these are not dichotomous variables.
Kendall’s tau could be used to compute the correlation between feelings about a new
health plan (not in favor/in favor/highly in favor) and health of a patient
(unhealthy/healthy/very healthy).
Before you move into your final PHASE, use the Cutting Board below to assist
you in deciding what spices you can use in your study.
Cutting Board
1. Underline the terms that best complete the sentence. I will be testing a
claim about
a mean, a standard deviation, a proportion, 2 means, 2 variances, a
relationship between 2 variables, the independence of 2 variables,
relationship between more than 2 variables.
2. If you will be using nonparametric testing, underline the test(s) that you
think you will use (you might wish to read the information on
nonparametrics first).
Sign test, Wilcoxon signed-rank test, Wilcoxon rank-sum test, Kruskal-
Wallis test, Spearman rank correlation, runs test, Friedman, McNemar,
Mann-Whitney U, Fisher
3. If you plan to use parametric testing, underline the test(s) that you plan to
use:
Z test, t test, paired t or z test, χ2, r, F test, Pearson r, other: ___________
4. Why did you choose the test(s) in (2) and/or (3)?
After your data are collected, make sure you visit this section again and fill
out all information that is relevant to your research study. Once this is
accomplished, you will be able to fully digest chapter 4 of your
dissertation.
5. What assurance do you have that you have met the assumptions and prerequisites to use this test?
TEST YOUR QUANTITATIVE ACUMEN
1. Descriptive Statistics      A) The consistency with which the same results occur.
2. Ex Post Facto               B) Experimental studies that are not double-blinded and might cause bias on the part of the researcher.
3. Inferential Statistics      C) A mode of inquiry in which a theory is proposed and hypotheses are made in advance of gathering data about a specific phenomenon; hypothetical-deductive theory.
4. Factorial Design            D) A form of descriptive research in which the investigator looks for relationships that may explain phenomena that have already taken place.
5. A Priori                    E) A method used to depict systematically the facts and characteristics of a given population or area of interest.
6. Validity                    F) Differences in independent variables relevant to a study are controlled.
7. Reliability                 G) A method that rigorously explores the efficacy of a program, treatment, or product.
8. Rosenthal Effect            H) A method used to study the effects of more than one independent variable on more than one dependent variable.
9. Hawthorne Effect            I) A set of procedures used to test hypotheses or estimate the parameters in a population.
10. Quasi-Experimental         J) Participants appear to make progress just because they are in a study.
11. Covariant Analysis         K) A method in which a sample of convenience is used and then treated to determine if there are any significant differences pre- and post-treatment.
12. Evaluative Research        L) The extent to which data measure what they purport to.
Answers: 1- E, 2-D, 3-I, 4-H, 5-C, 6-L, 7-A, 8-B, 9-J, 10-K, 11-F, 12-G
Bootstrapping
If you have a limited amount of data from which to obtain estimates of statistics for a population, consider bootstrapping. The sampling distribution for those estimates can be approximated by repeatedly drawing new samples, with replacement, from the original data and then computing the statistic of interest from each sample obtained.
Bootstrapping is often used as an alternative to inferences based on parametric
assumptions when those assumptions are in doubt, or where parametric inference
is impossible (lack of normality, inadequate sample size, large variances, etc.) or
requires very complicated formulas to obtain standard errors.
An advantage of bootstrapping is its simplicity. It is straightforward to derive
estimates of standard errors and confidence intervals for complex estimators of
complex parameters of the distribution, such as percentile points, proportions,
odds ratio, and correlation coefficients. It is also an appropriate way to control
and check the stability of the results.
Although bootstrapping is (under some conditions) asymptotically consistent, it does not provide general finite-sample guarantees. Furthermore, the results can be overly optimistic. The apparent simplicity may conceal the fact that important assumptions are being made when undertaking the bootstrap analysis (e.g., independence of samples) that would be more formally stated in other approaches. In essence, bootstrapping is a way of testing the reliability of the dataset.
Adèr et al. (2008) recommend the bootstrap procedure for the following situations:
1. When the theoretical distribution of a statistic of interest is complicated or
unknown. Since the bootstrapping procedure is distribution-independent it
provides an indirect method to assess the properties of the distribution
underlying the sample and the parameters of interest that are derived from
this distribution.
2. When the sample size is insufficient for straightforward statistical
inference. If the underlying distribution is well-known, bootstrapping
provides a way to account for the distortions caused by the specific sample
that may not be fully representative of the population.
3. When power calculations have to be performed, and a small pilot sample is
available. Most power and sample size calculations are heavily dependent
on the standard deviation of the statistic of interest. If the estimate used is
incorrect, the required sample size will also be wrong. One method to get
an impression of the variation of the statistic is to use a small pilot sample
and perform bootstrapping on it to get an impression of the variance.
To see an example of bootstrapping check out: http://tinyurl.com/9v2cr5f
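A minimal bootstrap sketch in Python (the sample values are invented, and numpy is assumed to be available):

    import numpy as np

    rng = np.random.default_rng(42)

    # A small, hypothetical original sample (e.g., test scores)
    sample = np.array([72, 85, 78, 90, 66, 81, 74, 88, 79, 83])

    # Resample with replacement many times and record the mean of each resample
    boot_means = [rng.choice(sample, size=len(sample), replace=True).mean()
                  for _ in range(5000)]

    # A simple 95% percentile confidence interval for the population mean
    low, high = np.percentile(boot_means, [2.5, 97.5])
    print(f"observed mean = {sample.mean():.2f}, 95% bootstrap CI = ({low:.2f}, {high:.2f})")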
Nonparametric Tests
To understand the idea of nonparametric statistics (the term nonparametric was
first used by Wolfowitz, 1942) first requires a basic understanding of parametric
statistics that we have just studied in some detail. The concept of statistical
significance testing is based on the sampling distribution of a particular statistic
as well as a basic knowledge of the underlying distribution of a variable. Once
these are known, then we can make predictions about how, in repeated samples
of equal size, this particular statistic will behave, that is, how it is distributed.
For example, if we draw 100 random samples of 100 female children aged 10,
each from the general population, and compute the mean height in each sample,
then the distribution of the standardized means across samples will likely
approximate the normal distribution. Now imagine that we take an additional
random sample in a particular city ("Kiddysville”) where we suspect that 10-
year-old children are taller than the average population. If the mean height in
that sample falls in the upper 5% tail of the z distribution (beyond the 95th percentile), then we conclude that the 10-year-old children of Kiddysville appear to be taller than the average population; that is, with 95% confidence we conclude that the children of Kiddysville are taller than normal.
In the above example, we relied on our knowledge that, in repeated samples of
equal size, the standardized means (for height) will be distributed following the z
distribution (with a particular mean and variance). However, this will only be
true if in the population the variable of interest (height in our example) is
normally distributed, that is, if the distribution of people of particular heights
follows the normal distribution (the bell-shaped distribution).
For many variables of interest, we simply do not know for sure that this is the
case. For example, is income distributed normally in the population? Probably
not. The incidence rates of AIDS are not normally distributed in the population.
The number of car accidents is also not normally distributed, and neither are
many other variables in which a researcher might be interested. Another factor
that often limits the applicability of tests based on the assumption that the
sampling distribution is normal is the size of the sample of data available for the
analysis (sample size, n). We can assume that the sampling distribution is normal
even if we are not sure that the distribution of the variable in the population is
normal, as long as our sample is large enough (e.g., 100 or more observations).
However, if our sample is very small, then those tests can be used only if we are
sure that the variable is normally distributed, and there is often no way to test
this.
After this somewhat lengthy statistical feast, you may hunger for statistical
procedures that allow you to process data of low quality, from small samples, on
variables about which little is known (about their distribution). Nonparametric
methods were developed to fill a need when little is known about the parameters
of the variable of interest in the population (hence the name nonparametric). In
more technical terms, nonparametric methods do not rely on the estimation of
parameters (such as the mean or the standard deviation) describing the
distribution of the variable of interest in the population. Therefore, these
methods are also often (and more appropriately) called parameter-free or
distribution-free methods.
Advantages of Nonparametric Methods
1. Can be applied to a wide variety of situations since they do not require
normally distributed populations.
2. Can be applied to nominal data.
3. Computations are usually simpler.
4. Tend to be easier to understand.
Disadvantages of Nonparametric Methods
1. They tend to waste data. Exact numerical data are reduced to qualitative
form.
2. The tests are less sensitive; therefore, we need stronger evidence to reject
the null hypothesis.
A few of the most popular parametric tests, their nonparametric equivalents, and the efficiency of each nonparametric test are found in the table that follows.
TABLE 3. Parametric and Nonparametric Tests

Application                        Parametric Test     Nonparametric Test                  Efficiency with Normal Population
Two dependent samples              t test or z test    Sign test or Wilcoxon signed-rank   0.63
Two independent samples            t test or z test    Wilcoxon rank-sum                   0.95
Several independent samples        ANOVA (F test)      Kruskal-Wallis test                 0.95
Linear correlation                 Pearson             Spearman rank correlation           0.95
One sample against a population    t test or z test    Mann-Whitney U                      -
Change in nominal data             -                   McNemar test                        -
The sign test is the oldest of all nonparametric statistical tests and one of the
easiest nonparametric tests to use. It is also considered crude and insensitive
because it has a tendency to waste information. Using this test, for example, a
person who lost 80 pounds on a diet would be considered the same as a person
who lost 1 pound! The level of significance can be estimated without the help of
a calculator or table. If the sign test indicates a significant difference and another
test does not, you should seriously rethink whether the other test is valid. It may
be of use when it is only necessary (or possible) to know if observed differences between two conditions are significant. It is commonly used in a before-and-after experiment where the researcher can simply assign a + to each case where the result was higher after treatment and a – where the opposite was true, or where two treatments are being compared for the same participants.
To use the sign test, we need two dependent samples. The sign test can be used
with any type of data where a change can be determined.
Example: 14 right-handed pilots were tested to determine if there was a difference
between reaction times using their right and left hand. Use a 0.05 significance level to
test the claim of no difference in reaction times.
Right: 189   97  116  165  116  129  171  155  112  102  188  158  121  133
Left:  220  171  121  191  130  134  168  187  123  111  180  186  143  156
Sign of difference (Right - Left):  -  -  -  -  -  -  +  -  -  -  +  -  -  -
Using the S3d2 CANDOALL recipe: We will decide on the sign test. We perform the arithmetic to determine x, the number of times the less frequent sign occurs: 12 negative signs and 2 positive signs give us x = 2. This is a two-tailed test, since we are testing to see if there is a difference between using the right or the left hand. The table below comes from a critical sign test chart. We will look to reject the null hypothesis if x is less than or equal to the value in the table; otherwise we fail to reject it. Since x = 2 is less than or equal to the tabled value of 2 for a two-tailed test at the 0.05 level, we reject the null, and in lay terms we can conclude that there is reason to believe there is a difference in reaction time.
Critical values for the sign test

       0.005 (one tail)     0.01 (one tail)      0.025 (one tail)     0.05 (one tail)
n      or 0.01 (two tails)  or 0.02 (two tails)  or 0.05 (two tails)  or 0.10 (two tails)
14             1                    2                    2                    3
A sign test can be used to compare participants’ attitudes about purchasing a software
program (interested or not interested) before and after having viewed a demonstration
of the software.
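Because the sign test reduces to counting plus and minus signs, its p value is just a binomial probability. A minimal sketch in Python for the pilot example above (scipy is assumed):

    from scipy.stats import binom

    n = 14   # number of pilots (pairs with a nonzero difference)
    x = 2    # number of times the less frequent sign (+) occurred

    # Two-tailed p value: chance of a result at least this lopsided if + and - were equally likely
    p_value = 2 * binom.cdf(x, n, 0.5)
    print(f"p = {p_value:.4f}")   # about 0.013, so we reject the null at the 0.05 level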
The Wilcoxon Rank-Sum Test
The Wilcoxon rank-sum test is a nonparametric equivalent of the unpaired t test.
It is used to test the hypothesis that two independent samples have come from
the same population. Because it is nonparametric, it makes no assumptions about
the distribution of the data.
The t test examines the hypothesis that the means of the two groups differ. The Wilcoxon rank-sum test tells us more generally whether the values in one group tend to be higher than those in the other.
The way the test works is to rank all the data from both groups. Thus, the
smallest value will be given a rank of 1, the second smallest will have a rank of
2, and so on. Where values are tied, they are given an average rank. The ranks
for each group are added together (hence the term rank-sum test). The sums of
the ranks used to be compared with tabulated critical values to generate a p
value, although now computer programs are better suited for the job. For small
sample sizes, it is still perfectly feasible to do the test manually if you don’t have
the necessary software or if you like to get in and work with the data at a more
basic level.
The Wilcoxon Signed-Rank Test
The Wilcoxon signed-rank test is a nonparametric equivalent of the paired t test.
It is used to test the hypothesis that two paired samples have come from the
same population. Because it is nonparametric, it makes no assumptions about the
distribution of the data.
For example, suppose we are interested in whether a particular drug given for
depression affects the liver enzyme ALT. If we measure the study participants’
ALT before and after they take the drug, we have matching pairs of data. We
might think of testing the data with a paired t test, but this would not be
appropriate because ALT values are not normally distributed. The Wilcoxon
signed-rank test can be used instead.
The test works as follows: For each participant, we subtract the post-drug ALT
value from the pre-drug value. This gives us a number for each participant,
which may be positive, negative, or zero. We then rank all those numbers in
order, ignoring the sign. Finally, we add the ranks of the positive numbers and
the ranks of the negative numbers (in a similar way to the ranks of the two
groups in the Wilcoxon rank-sum test). The summed ranks can then, if necessary, be compared with tabulated critical values to generate a p value, although it is far more likely that the test would be done by appropriate software, such as SPSS.
A comparison of student attitudes about school (very excited, moderately excited,
neutral, moderately bored, very bored) before and after taking a course on study habits.
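A minimal paired-data sketch in Python; the before-and-after ALT-style values are invented for illustration (scipy is assumed):

    from scipy.stats import wilcoxon

    # Hypothetical ALT measurements for 8 participants before and after the drug
    before = [31, 28, 45, 37, 29, 52, 40, 33]
    after  = [35, 30, 44, 41, 34, 55, 46, 36]

    stat, p_value = wilcoxon(before, after)
    print(f"W = {stat}, p = {p_value:.4f}")
    # A small p value would suggest the drug shifts ALT; a large one would not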
Examples of Other Nonparametric Statistical Tests
The McNemar Test
The philosophy of the McNemar test is similar to that of the chi-square
test. It assumes that you are dealing with research questions where two
variables are related. Here, our hypothesis usually is that the difference is
significant between the precondition and postcondition in the same groups.
The McNemar test can be used with either nominal or ordinal data and is
especially useful with before-and-after measurements of the same
participants.
The McNemar test determines the significance of any observed change by
setting up a fourfold table of frequencies to represent the first and second
responses.
Example: Suppose a group was interested in the support for a new mental
health clinic in a certain area. A town meeting was held to discuss the pros
and cons of such a clinic in the area. Participants were surveyed before and
after the town meeting.
In the table that follows, A represents those who were in favor before and
not in favor after, B represents those who were in favor before and after, C
represents those not in favor before and not in favor after, and D represents those
not in favor before and in favor after. A + D represents the number who changed
their mind. Since the data are nominal and the study involves before-and-after
measurements of two related samples, the McNemar test can be used to see if the
change from one point of view is different from the change to the other point of
view (i.e., is there is a difference between A and D). The null hypothesis would
be that there is no change. If the test value is higher than the critical value, we
would reject the null hypothesis.
                        After: Do Not Favor     After: Favor
Before: Favor                    A                    B
Before: Oppose                   C                    D
The chi-square distribution is used for this test. The test value, with df = 1, is
χ² = (|A - D| - 1)² / (A + D).
Compute this test value and compare it to the critical value of 3.84. If the test value > 3.84, reject the null; otherwise, fail to reject the null.
High school students are surveyed on their knowledge of family planning and birth
control. Half the group is given a workshop on this topic. The survey is readministered
to both groups after the seminar to assess the knowledge gained.
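The McNemar arithmetic is simple enough to sketch directly; the cell counts below are invented, and scipy is used only to turn the chi-square value into a p value:

    from scipy.stats import chi2

    # Hypothetical fourfold table: A and D are the participants who changed their minds
    A, B, C, D = 15, 40, 30, 5

    test_value = (abs(A - D) - 1) ** 2 / (A + D)   # continuity-corrected McNemar statistic
    p_value = chi2.sf(test_value, df=1)
    print(f"chi-square = {test_value:.2f}, p = {p_value:.4f}")
    # Here the test value (4.05) exceeds 3.84, so we would reject the null hypothesis of no change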
Kruskal-Wallis Test
We used ANOVA to test hypotheses that differences among several (k) sample
means are due to chance. The parametric F test requires that all the involved
populations possess normal distributions with variances that are approximately
equal. The Kruskal-Wallis test is a nonparametric alternative that does not
require normal distributions or equal variances. Like many nonparametric tests,
the Kruskal-Wallis test uses the ranks of the data rather than their raw values to
calculate the statistic. In using the Kruskal-Wallis test (also called the H test) we
test the null hypothesis that independent and random samples come from the
same or identical populations. We compute the test statistic, H, which has a
distribution that can be approximated by the chi-square distribution with k - 1
degrees of freedom, as long as there are at least three random samples and each
sample has at least five observations.
The computation of an H value involves considering all observations as if they
came from the same group, ranking the entire group from lowest to highest, and
in cases of ties, assigning to each observation the mean of the ranks involved.
Then return each number to its sample and find the sum of the ranks and the
sample size.
The income of a random sample of 50 parents from five different schools (10 parents from each school) in one school district is collected to test the claim that the income levels of the schools are equal.
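A minimal sketch with invented income figures for three of the schools (scipy is assumed; the H statistic and its p value are printed rather than looked up in a chi-square table):

    from scipy.stats import kruskal

    # Hypothetical parent incomes (in thousands) sampled from three schools
    school_1 = [42, 55, 61, 48, 39, 52, 47, 58, 44, 50]
    school_2 = [65, 72, 58, 69, 75, 62, 71, 66, 60, 68]
    school_3 = [45, 51, 49, 56, 43, 54, 48, 52, 46, 55]

    H, p_value = kruskal(school_1, school_2, school_3)
    print(f"H = {H:.2f}, p = {p_value:.4f}")
    # H is compared to a chi-square distribution with k - 1 = 2 degrees of freedom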
Fisher’s Exact Test
In Fisher’s exact test, we compute a test statistic for measures of association that
relate two nominal variables. It is used mainly in 2 x 2 frequency tables when the
expected frequency is too small to use the chi-square test. A phi (φ) coefficient
can be generated, which is a symmetric measure equivalent to a Pearson’s
correlation coefficient.
One of the limitations of Fisher’s exact test is that the data must be dichotomous
and the elements must originate from two different sources. To compute the
correlation between sex (male/female) and employment status
(employed/unemployed), you could use a phi coefficient. However, this cannot
be used with age and income, per se, since these are not dichotomous variables.
A 2 x 2 contingency table is constructed and usually set up as follows:

                    Accepted into    Rejected from
                    SW Program       SW Program       Total
From the East            9                2             11
From the West            7                6             13
Total                   16                8             24

For this table, Fisher's exact test gives:
Left-tail p value = 0.9726
Right-tail p value = 0.1557
Two-tail p value = 0.2108
Conclusion: There would be no reason to believe, based on the data, that a person from one region was more likely than a person from the other to be accepted into an SW program.
Computations are based on factorials, which become prohibitive if the numbers
get large. 8! = 40,320!!!
Compare attitudes toward marijuana (harmful or not harmful) of 12th graders
who did (n = 9) and did not (n = 6) complete a DARE program in elementary
school.
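The p values quoted above can be reproduced with scipy's built-in routine; the sketch below uses the same 2 x 2 table of acceptances:

    from scipy.stats import fisher_exact

    table = [[9, 2],   # East: accepted, rejected
             [7, 6]]   # West: accepted, rejected

    odds_ratio, p_two_tail = fisher_exact(table, alternative="two-sided")
    _, p_right = fisher_exact(table, alternative="greater")
    print(f"odds ratio = {odds_ratio:.2f}, two-tail p = {p_two_tail:.4f}, right-tail p = {p_right:.4f}")
    # Expected: a two-tail p of about 0.211 and a right-tail p of about 0.156, matching the values above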
Calculate the chi-square (χ²) test value by completing the following steps:
For each observed number in the table, subtract the corresponding expected number (O - E).
Square the difference: (O - E)².
Divide the square obtained for each cell in the table by the expected number for that cell: (O - E)²/E.
Sum all the values of (O - E)²/E. This sum is the chi-square statistic.
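A minimal goodness-of-fit sketch following those steps, using an invented example of 60 rolls of a supposedly fair die (scipy is assumed):

    from scipy.stats import chi2

    observed = [8, 9, 12, 11, 10, 10]   # counts of each face in 60 hypothetical rolls
    expected = [10] * 6                 # a fair die should give about 10 of each face

    chi_square = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
    p_value = chi2.sf(chi_square, df=len(observed) - 1)
    print(f"chi-square = {chi_square:.2f}, p = {p_value:.4f}")
    # A large p value means the observed counts are consistent with the hypothesized distribution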
Assumptions in using the chi-square test of goodness of fit:
The sample values are independent and identically distributed.
The sample values are grouped in categories, and the counts of the number
of sample values occurring in each category are recorded.
The hypothesized distribution is specified in advance, so that the number of observations that should appear in each category, assuming the hypothesized distribution is the correct one, can be calculated without reference to the sample values.
Guidance
The chi-square test involves using the chi-square distribution to approximate the
underlying exact distribution. The approximation becomes better as the expected
cell frequencies grow larger and may be inappropriate for tables with very small
expected cell frequencies. For tables with expected cell frequencies less than 5,
the chi-square approximation might not be reliable.
A standard (and conservative) rule to follow is to avoid using the chi-square test
for tables with expected cell frequencies less than 1, or when more than 20% of
the table cells have expected cell frequencies less than 5.
Koehler and Larntz (1980) suggested that if the total number of observations is
at least 10, the number of categories is at least 3, and the square of the total
number of observations is at least 10 times the number of categories, then the
chi-square approximation should be reasonable.
A key assumption of the chi-square test of independence is that each participant
contributes data to only one cell. Therefore the sum of all cell frequencies in the
table must be the same as the number of participants in the experiment. Consider
an experiment in which each of 12 participants threw a dart at a target once using
their preferred hand and once using their nonpreferred hand. The data are shown
below:
Hit Missed
Preferred hand 9 3
Non-preferred hand 4 8
It would not be valid to use the chi-square test of independence on these data
because each participant contributed data to two cells: one cell based on their
performance with their preferred hand and one cell based on their performance
with their nonpreferred hand. The total of the cell frequencies in the table is 24,
but the total number of participants is only 12.
In the test for independence, the claim is that the row and column variables are
independent of each other. This is the null hypothesis. The multiplication rule
states that if two events are independent, then the probability of both occurring is the product of their probabilities. This is the theory upon which the test of independence is based. If we reject the null hypothesis, then the independence assumption is judged to be wrong and the row and column variables are dependent. There is an
excellent application of this test at http://tinyurl.com/39k32gj.
Mann Whitney U
This is a nonparametric test to determine if there is a difference between two
independent groups. It is used when the data for two samples are measured on at
least an ordinal scale in rank order. It is the nonparametric equivalent of the t
test. Although ordinal measures are used, it is assumed that data are continuously
distributed. The test assesses whether the degree of overlap between the two
observed distributions is less than would be expected by chance on the null
hypothesis that the two samples are drawn from a single population.
The Mann-Whitney U test is one of the most popular of the nonparametrics.
There is no such thing as a free lunch, of course, so the Mann-Whitney U is less
powerful (more conservative or less likely to find a difference if a real difference
exists) than a t test.
It is for
1. Independently drawn random samples, the sizes of which need not be the
same.
2. Sample sizes where the larger is n ≥ 9.
(If both sample sizes are eight or fewer measures, then other tests can be
applied.)
Like the Kruskal-Wallis test, the Mann-Whitney U works by first ranking the data.
The way it works is that the scores in both groups are combined into one data set
ranked from lowest to highest. The rank of each score is recorded and when two
or more scores are tied, all of the tied scores get the same rank—a rank equal to
the average of the positions in the ordered array. For example, if three scores are
tied for positions 3, 4, and 5, all would be assigned the rank of 4. If the sums of
the ranks are very different, the p value will be small.
The p value answers the following question: If the populations really have the
same median, what is the chance that random sampling would result in a sum of
ranks as far apart (or more so) as observed in this experiment?
A group of students were given a sensitivity test and scores were ranked by gender as
follows:
The sum of the rank numbers for males equals 182 (1+2+3+7+8+...+26+27), while the sum of the rank numbers for females equals 314 (4+5+6+11+...+30+31).
The expected sums for males and females are 224 and 272, respectively; the standard
deviation of the expected sums is 25.19; and the p value of the observed divergence
equals 0.04779. Thus, at the 0.05 level of significance, we can conclude that females
tend toward the higher rank numbers while males tend toward the lower rank numbers.
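The normal approximation used above can be sketched directly. The rank sums are the ones from the sensitivity example; the group sizes of 14 males and 17 females are inferred from the expected sums quoted, and scipy is used only for the normal tail area:

    from math import sqrt
    from scipy.stats import norm

    n_males, n_females = 14, 17
    N = n_males + n_females
    rank_sum_males = 182

    expected_males = n_males * (N + 1) / 2          # 224
    sd = sqrt(n_males * n_females * (N + 1) / 12)   # about 25.19
    z = (rank_sum_males - expected_males) / sd      # about -1.67
    p_one_tail = norm.cdf(z)                        # about 0.048
    print(f"z = {z:.3f}, one-tailed p = {p_one_tail:.5f}")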
The Cochran Q Test and the Friedman Test
Recall that the philosophy of the McNemar test is similar to that of the chi-
square test. It assumes that you are dealing with research questions in which two
categorical variables are related. Here, our hypothesis usually is that the
difference is significant between the precondition and the postcondition in the
same groups’ responses. The Cochran Q test is an extension of these tests for
studies having more than two dependent samples. It tests the hypothesis that the
proportion of cases in a category is equal for several related categories. The
object of the Cochran Q test is to investigate the significance of the differences
between many treatments (k) on the same n elements with a binomial
distribution. Some of the limitations of this test are that it is assumed that there
are k series of observations on the same n elements. The observations are
dichotomous and are coded 0 or 1 to represent the two classes. In
addition, the number of elements must be sufficiently large, usually greater than
10.
The test statistic computed is referred to as a Q value. This approximately
follows a chi-square distribution with k - 1 degrees of freedom. The null
hypothesis that the k samples come from one common dichotomous distribution
is rejected if Q is larger than the critical value.
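As a hedged sketch of the computation, the Python code below uses an invented data set of 12 participants (rows) giving dichotomous (0/1) responses under k = 3 related conditions (columns); Q is computed from the row and column totals and compared with the chi-square distribution with k - 1 degrees of freedom. (Libraries such as statsmodels also provide a ready-made Cochran Q routine, but the explicit formula is shown here for clarity.)

import numpy as np
from scipy.stats import chi2

# Hypothetical dichotomous (0/1) responses: 12 participants x 3 conditions.
data = np.array([
    [1, 1, 0], [1, 0, 0], [1, 1, 1], [0, 0, 0],
    [1, 1, 0], [1, 0, 1], [1, 1, 0], [0, 1, 0],
    [1, 0, 0], [1, 1, 0], [1, 1, 1], [1, 0, 0],
])

k = data.shape[1]              # number of related conditions
col_totals = data.sum(axis=0)  # successes per condition
row_totals = data.sum(axis=1)  # successes per participant
n_total = data.sum()           # grand total of successes

# Cochran's Q statistic computed from row and column totals.
q = ((k - 1) * (k * np.sum(col_totals ** 2) - n_total ** 2)
     / (k * n_total - np.sum(row_totals ** 2)))

p_value = chi2.sf(q, df=k - 1)  # upper-tail chi-square probability
print("Q =", round(q, 3), " p =", round(p_value, 4))
# Reject the null hypothesis of equal proportions across the conditions
# when Q exceeds the critical chi-square value (i.e., when p < alpha).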
When the data are at least ordinal, the Friedman two-way ANOVA is
appropriate. The object of the Friedman test for multiple treatments of a series of
participants is to investigate the significance of the differences in responses for
several treatments (k) applied to several participants (n). It tests matched
samples, ranking each case and calculating the mean rank for each variable
across all cases. It uses these ranks to compute a test statistic. The product is a
two-way table where the rows represent participants and the columns represent
the treatment conditions.
Characteristics
The Friedman test is frequently called a two-way analysis on ranks. It is at the
same time a generalization of the sign test and the Spearman rank correlation
test. The Friedman test models the ratings of n (rows) judges on k (columns)
treatments. One popular application of the Friedman test is found in wine
tastings, where each judge rates a collection of wines independently of the other
judges. The null hypothesis would be that the ratings of the judges are not
related (e.g., they cannot distinguish the wines).
The limitation of the Friedman test is that we need to assume that a participant’s
response to one treatment is not affected by the same participant’s response to
another treatment and that the response distribution for each participant is
continuous. The test statistic is often referred to as a G value. If this value
exceeds the critical chi-square value obtained from a chi-square table with k - 1
degrees of freedom, the null hypothesis that the effects of the k treatments are all
the same is rejected.
If ties occur in the ranking procedure, one has to assign the average rank for each
series of equal results. For example, four entries tied for positions 11 through 14
would each be assigned the rank of 12.5.
How the Friedman Test Works
The Friedman test is a nonparametric test that compares three or more paired
groups. The Friedman test first ranks the values in each matched set (each row)
from low to high. Each row is ranked separately. It then sums the ranks in each
group (column). If the sums are very different, the p value will be small (and the
null hypothesis will be rejected). The whole point of using a matched test is to
control for experimental variability between participants, thus increasing the
power of the test. Some factors you don’t control in the experiment will increase
(or decrease) all the measurements in a participant. Because the Friedman test
ranks the values in each row, it is not affected by sources of variability that
equally affect all values in a row (since that factor won’t change the ranks within
the row).
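A brief hypothetical illustration in Python, assuming the scipy library and an invented table of ratings from 12 judges (rows) of three wines (columns): scipy's friedmanchisquare function performs the within-row ranking described above and reports the test statistic and its chi-square-based p value.

from scipy.stats import friedmanchisquare

# Hypothetical ratings: 12 judges, each rating wines A, B, and C.
wine_a = [7, 6, 8, 5, 7, 6, 9, 7, 6, 8, 7, 5]
wine_b = [5, 5, 6, 4, 6, 5, 7, 6, 5, 7, 6, 4]
wine_c = [8, 7, 9, 6, 8, 8, 9, 8, 7, 9, 8, 6]

# Each argument is one treatment (column); rows are the matched judges.
stat, p_value = friedmanchisquare(wine_a, wine_b, wine_c)
print("Friedman statistic =", round(stat, 3), " p =", round(p_value, 4))
# A small p value means the rank sums of the treatments differ more than
# chance alone would allow, so the null hypothesis of identical treatment
# effects is rejected.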
If clinical therapists take part in a new intervention program on drug abuse and their
attitudes about this program are measured before, during, and after the program, and
they are asked if this program is worthwhile, Cochran's Q could test the hypothesis that
the proportion of strongly agree responses will differ for the clinicians taking part in
the intervention depending on which of the three time periods is being considered. The
Friedman test could be used to test the hypothesis: There will be at least one difference
among the median attitude scores at pre-intervention, at post-intervention, and
at a 6-month follow-up for the clinicians who took part in the intervention.
Children who are living in a residential treatment program return to their families after
staying for 3 months, 6 months, or 12 months. Cochran's Q test can be used to
determine if there is a difference in the level of comfort in taking home a family
member after these time periods. Friedman’s test could be used to determine whether
there is a difference in the family satisfaction scores of 64 children who were
discharged from the residential treatment center at these different time intervals.
Qualitative Analysis
Bogdan and Biklen (1982) defined qualitative data analysis as “working with
data, organizing it, breaking it into manageable units, synthesizing it, searching
for patterns, discovering what is important and what is to be learned, and
deciding what you will tell others” (p. 145). Qualitative researchers tend to use
inductive analysis of data, meaning that the critical themes emerge from the data
(Patton, 1990). If you are planning to use qualitative analysis in your
dissertation, you need to place the raw data into logical, meaningful categories,
examine them in a holistic fashion, and find a way to communicate this
interpretation to others.
Sitting down to organize a pile of raw data can be a daunting task. It can involve
literally hundreds of pages of interview transcripts, memos, e-mails, field notes,
and other documents. The mechanics of handling large quantities of
qualitative data can range from physically sorting using Post-its, index cards, or
slips of paper to using one of the several computer software programs designed
to aid in this task. To save time, it is highly recommended that you consider one of
the programs like NVivo, Qualrus, or Atlas.ti. There is a bit of a learning curve
when using these software programs. NVivo 10 software includes a multiple-
language base and a team research application, and provides users with options for
searching and simple data manipulation. Creswell (2007), evaluating an earlier
version (NVivo 7), described the software as simple to use, with a security
feature that allows storage of multiple databases in a single file.
An overview of NVivo is at http://tinyurl.com/8gt6o2g. Many doctoral students
seek coaching on NVivo or other qualitative software. Our recommendation is:
Karen I. Conger, Ph.D., DataSense, LLC, Research Consultant Specialists in
QSR Software Ph: (661) 831-3521 Fax: (661) 215-9379 Email:
[email protected] Web: http://www.datasense.org/
Analysis begins with identification of the themes emerging from the raw data, a
process sometimes referred to as open coding (Strauss & Corbin, 1990). Open
coding is the first level of the conceptual analysis (Shank, 2006). In analyzing in-
depth interviews, the objective is to break down the interview narratives in order
to compare responses for similarities and differences. Interview responses are
analyzed line by line to ensure the saturation of all identified categories is
achieved. During open coding, you will identify and tentatively name the
conceptual categories into which the phenomena observed will be grouped. The
goal is to create descriptive, multidimensional categories that form a preliminary
framework for analysis. Words, phrases, or events that appear to be similar can
be grouped into the same category. These categories may be gradually modified
or replaced during the subsequent stages of analysis.
As the raw data are broken down into manageable chunks, you should also
devise an audit trail—that is, a method for identifying these data chunks
according to the source and the context. If the data are generated from a
participant, it is a good idea to acknowledge this in the research report. Most
participants, however, are provided with code names such as P1, P2, etc.
Qualitative research reports are characterized by the use of voice in the text; that
is, participant quotes that illustrate the themes being described.
The next stage of analysis involves a reexamination of the categories identified
to determine how they are linked, a process sometimes called axial coding
(Strauss & Corbin, 1990). According to Shank (2006), the axial coding is the
second level of data analysis in which the information from the open coding is
assembled and presented into a coding paradigm. Whereas the open coding stage
allows the researcher to classify the data into categories, axial coding
reassembles the data in new connected categories. During this stage, the
categories identified in open coding are compared and combined in new ways as
the researcher begins to assemble the “big picture.” Coding serves a dual
purpose—to describe and to acquire a new understanding of a phenomenon of
interest. During axial coding, you will build a conceptual model and determine
whether sufficient data exist to support that interpretation.
Finally, you will translate the conceptual model into the story line that will be
read by others. Ideally, the analysis will be a rich account that “closely
approximates the reality it represents” (Strauss & Corbin, 1990, p. 57).
Although the stages of analysis appear to be linear, in practice they may occur
simultaneously and repeatedly. When you are conducting axial coding, you
might determine that the initial categories identified must be revised, leading to a
reexamination of the raw data. Additional data collection might be needed if you
uncover gaps in the data. Informal analysis begins the moment you begin your
data collection, and this will help guide subsequent data collection.
An interesting data analysis exercise was presented by “Deborah” and found in
the Academic Exchange Quarterly (Spring 2005). Deborah asks students to
review the items in their recycling bins and suggests the following scenario:
Imagine you are an archeologist in the year 3000 visiting your geographical
location. This area has been unoccupied for hundreds of years, but recent
excavations have revealed that there was a thriving civilization here until some
natural or human disaster forced the population to abandon the city. Little is
known of these people’s way of life. The data in this bag (recycled items)
recovered from the site appear to represent the contents of a home or
neighborhood refuse site. Due to special conditions in the soil, these items have
been preserved in remarkably good condition.
Your job is to sort these items into three to seven categories to shed light on
the general research question: “What was the culture of these people like?” Give
each category a name and be prepared to present your categories to the class,
explaining and interpreting them in order to arrive at a tentative description and
analysis of this culture. This activity can elucidate an understanding of the
potential of carefully conducted qualitative research to uncover cultural
meanings, to build theory, and to provide recommendations for further study.
The 5 P’s
Preliminary Preparation: Prospectus (Concept paper) and Proposal Planning
(A recipe for the construction of a dissertation research proposal)
Writing a Concept Paper or Prospectus
Cutting Board
The specific elements of the concept paper (prospectus) may vary depending
upon the academic program and the chosen degree. Programs typically provide
a grading rubric that serves as an outline for the required components, and
students are encouraged to follow those rubrics closely in developing their
Concept Paper/Prospectus.
Carefully examine the following sample concept paper (prospectus) and then
vigilantly put together one for your research proposal. Make sure you share this
with the members of your committee and those who will be closely involved
with approving your research.
Prospectus for YOUR Study
Title:
Introduction: (sketch)
Purpose: (sketch)
Significance: (sketch)
Limitations/Scope:
Delimitations:
Assumptions:
Theoretical Framework:
Background: (sketch)
Nature of the Study: (select type(s)): Provide a rationale for the paradigm
(qualitative/quantitative/mixed), as well as your reasons for choosing a particular
methodology.
Definitions: Make sure these are unique connotations for terms in the study.
Provide references for each definition.
Population/sample
Cutting Board
1. Which of the methods (1-4) above would you be most comfortable using in
your introduction?______
2. Put yourself in the position of the reader. What about this study would
capture your interest? Why is it important?
3. On a separate piece of paper, do a mind map of your introduction and
attach it here:
¼ cup Problem Statement
The Problem Statement section is the heart of the research paper. The mind
seems to follow its own equivalent of Newton’s law of inertia and becomes
aroused to intense analysis only when some dilemma presents itself. Systematic
thought is driven by the failure of established ideas, by a sense that something is
wrong, by a belief that something needs closer attention, or by old ideas and
methods that are no longer adequate. The problem statement provides the logical
foundation upon which you will build the rest of the study. The scope of your
study, its ability to make a point, and the amount of research you need to do to
make that point depend heavily on the initial specification of the problem or
problems under investigation. The Problem Statement section deals with the
reality of the problem you are investigating or the necessity of a program you are
analyzing. The objective of a problem statement is
1. To persuade your reader that the project is feasible, appropriate, and
worthwhile
2. To capture and maintain your reader’s attention
The research methodology being employed often helps to dictate the problem
statement. A viable problem statement is both concise and precise, explains
something that is wrong and in need of correction, and is often anchored with the
most current numbers or statistics (from reliable sources) to illustrate its
significance. A lack of research on the topic, often termed a “gap” in the research,
is not in and of itself a problem. The general and specific problems need to be
identified in addition to the specific gap in the literature that the study is
designed to fill. It needs to be clear that the paradigm and methodology selected
are capable of resolving the specific problem. There should be few or no direct
quotes in the problem statement. Most problem statements are 200 to 250
words.
The following are drafts of potential problem statements that can be used in
conjunction with the research methodologies specified for investigating the
relationship between socioeconomic class and education. It is important that
references and citations be included in the actual problem statement when
appropriate.
Historical Research. Following the Civil War, teachers perceived children from
the low socioeconomic strata of society as less intelligent. Such perceptions have
had a detrimental effect on children in this group (Smith, 2016). It is important
that a historical study be conducted to learn about causes, effects, or trends after
the Civil War that caused this problem to arise in order to explain present events
and anticipate future events.
Phenomenological Research. *Excerpt from A Phenomenological Study of
Female Executives in Information Technology Companies in the
Washington, D.C., Area: Dissertation by Dr. Tammie Page (2005), University of
Phoenix, School of Advanced Studies.
… Despite equal opportunity and antidiscrimination laws, less than % of
executive-level positions are held by women (Meyerson & Fletcher, 2004).
Discriminatory practices, collectively known as the glass ceiling, contributed to
the current situation (van Vianen & Fischer, 2004). This is particularly evident in
the IT industry, where only 5.1% of executives are women (Melymuka, 2005).
Gender inequity violates the principle of equal treatment for all employees and
can lead to problems with retention, morale, and performance. Focusing on
executive-level women’s perceived gender inequity rather than actual gender
inequity is beneficial because perceptions of organizational conditions affect
work-related attitudes and behaviors (Sanchez & Brock, 2003). This
phenomenological study identified skill sets, coping mechanisms, and strategies
used by executive-level women in IT companies to sustain their positions within
such companies. Although various studies have found males and females to be
equal in leadership competence (Maher, 1997; Pounder & Coleman, 2002;
Thompson, 2005), women often face socially prompted stereotypes about
masculinity and femininity that undermine their credibility as organizational
leaders (Carli & Eagly, 2001).
Check to see if the Problem Statement you developed in PHASE 1 seeks an
answer to one or more of the questions listed below:
My research will determine or examine the following:
___1. What is wrong with society, or with one of its institutions, that has
caused this problem or allowed this problem to exist?
___2. What has failed in society that has caused this problem?
___3. What is missing in society that has allowed this problem to
develop?
___4. What happened that has become interesting and important enough
to study?
___5. What historical description of an event has become open to
reexamination?
___6. A program that was in need of studying, evaluating, or analyzing.
___7. A need to develop a program that could contribute to society or one
of its institutions.
___8. A need to analyze a current theory in light of new events.
___9. A relationship between the problem and a factor or factors that
could be contributing to the problem.
___10. An inequity that exists in society.
Despite this lengthy description of how to develop the Problem Statement, and
as we saw in PHASE 1, the crafting of the statement itself, when complete,
should be relatively brief (one or two paragraphs). There is much to think about,
but not a great deal to write. In fact, as long as it adequately conveys what you
intend, the shorter the problem statement, the better. Strive to make your
problem statement succinct and specific. Remember: a lack of research in and
of itself is not a problem, but it can point to opportunities for further research.
Cutting Board
1. Which of the question(s) above does your study address?
2. What research methodology best describes your study? (Check back to
PHASE 1—What’s Cooking?)
3. In PHASE 1 you created a problem statement prior to conducting your
research. Rewrite that statement in the space below:
_________________________________________________________________________
_________________________________________________________________________
4. Make sure that you have stated the problem precisely and concisely. If that
is not the case, rewrite the problem sentence with your new insight:
_________________________________________________________________________
2 cups Background
The Background section offsets the brevity of the Problem Statement section.
Here you will elaborate on why the problem you are investigating is of pressing
societal concern or theoretical interest. This is the place in your paper where you
want to make your reader as interested in the problem as you are and help
elucidate the need to shed further light on this problem. Try to find a natural
starting point. For example, the call for educational change is often traced back
to the 1983 document “A Nation at Risk” (Check out
http://www.ed.gov/pubs/NatAtRisk/risk.html). Follow this with germinal or
classical works that have contributed to furthering the problem or added to the
solution of the problem.
Carefully read the statements below. Put a check next to the ones that apply to
your research project and could potentially be used in the Background section of
your paper.
___1. There are knowledgeable observers (political figures, theorists,
newscasters, professionals in the field, etc.) who have attested to the
importance of this problem.
___2. There are statistics that attest to the depth and spread of this problem.
___3. There is documentation of the failure of certain aspects of society
that have contributed to this problem and call for further examination.
___4. There are theoretical issues in need of reexamination.
___5. There are programs, events, mandates, rulings, and/or documents
that have called attention to this problem.
Cutting Board
1. When was the problem first acknowledged?
2. Which statement(s) above pertain to your problem?
3. How will (did) you obtain information to support these statements
(books, videos, articles, consulting with authorities)?
4. Give at least three reasons why the problem you chose is (was)
important and valid to you, society, or some institution in society:
5. If applicable, give at least two concrete examples of the problem:
6. If applicable, what programs, documents, rulings, or mandates have
addressed similar issues?
7. To what public statistics, political trends, theoretical controversy does
your study relate?
8. What group has been adversely affected by this problem?
9. How was attention first called to the problem? (Name any key figure or
figures that assisted in bringing this problem into focus.)
10. What are the most important critical events related to this problem?
½ cup Purpose
The Purpose Statement section deals with the reason the study was or will be
conducted and describes what your study will accomplish or has accomplished.
It succinctly creates direction, scope, and the means of data collection. The
purpose statement includes a list of specific objectives accomplished. The
objectives should be stated as outcomes, not as procedures; however, the
procedures will enable the outcomes to be realized or to be found
unattainable (COEHS, 2005).
If the intent of the problem statement is to appeal to the heart, the intent of the
purpose statement is to appeal to the brain. The purpose statement is like a
compass; changes in the purpose statement change the direction, and in turn the
focus, of the study. The purpose of the study needs to be described in a logical,
explicit manner. The purpose statement is also like a menu that focuses your
reader’s attention on the essentials and intentions of your study (feast). Thus, the
reader will be better able to judge whether your approach is or was effective.
Purpose statements are usually supplemented with additional information for
clarification, but a single, succinct sentence that captures the essence of the study
should identify the (a) research method, (b) the problem investigated, (c) the
audience to which the problem is significant, and (d) the setting of the study.
This example illustrates these key elements: “The purpose of this (a) qualitative,
descriptive research study is (was) to analyze (b) the personal value patterns and
profiles of (c) Generation X and Generation Y managers at an (d) information
technology company in the Pacific Northwest.” In most proposals and
dissertations, the section relating to the purpose is about ½ to ¾ of a page.
Put a check next to the phrases that could best be used to complete the
statement:
The purpose of this research is to
___1. advance knowledge by understanding cause and effect;
___2. provide new answers to old problems;
___3. elucidate what makes the program under investigation
successful or unsuccessful;
___4. change a regretful situation and make it better;
___5. interpret, evaluate, or analyze existing conditions that lead to an
unacceptable situation;
___6. determine to what extent certain factors contributed to the
problem;
___7. determine the need for a particular program or study;
___8. describe a problem that has been given little attention up until
this point, but could have a great impact on society;
___9. understand why a particular condition exists and who is affected
by this condition;
___10. elucidate what aspects of a program are successful and what
aspects are not successful.
Any time you mention the goals, intent, purpose, aim, or objective of
your study, there must be alignment and consistency with the methodology and
the scope of the study.
Cutting Board
1. Which of the statements above apply to your study?
__________________________
2. State briefly and precisely what your study intended to do about the
problem you have specified by completing the following sentence: The
purpose of this study was (is) to:
____________________________________________________________________
____________________________________________________________________
3. Who (sample) or what will be part of your study?
____________________________________________________________________
4. Where did (will) your study take place?
____________________________________
5. What variables are measured in your study?
_____________________________________________________________________
Purpose Checklist √
1. Key identifier words are used to signal the reader, such
as “The purpose of this study is…” This purpose is in
accord with the problem statement. A descriptive study
would use words like determine, a phenomenological study
would use words like understand the perception of lived
experiences around a phenomenon, a grounded theory
study would use words like develop a theory regarding a
phenomenon.
2. The type of paradigm(s) (qualitative/quantitative/mixed)
is indicated or implied and appropriate for this purpose.
3. The type of method (phenomenological, correlational,
grounded…) is indicated or implied and appropriate for this
purpose.
4. The central phenomena being explored are explicated.
The variables, if quantitative, are clearly defined.
5. The intent of the study (to analyze, determine,
evaluate…) is delineated with words that reflect higher
order thinking skills.
6. The participants in the study (sample and population) are
mentioned.
7. The setting (including geographic location) of the study
is explained.
8. Words are well chosen; statements are free from
contradiction; the statement is free of jargon and clichés.
No unnecessary words are used.
9. The writing has cadence and flows easily; the reader can
sense the person behind the words.
10. The purpose is in accord with and complements the
problem statement.
1 cup Significance
Just as the Background section elaborates on the problem statement, the
Significance section provides garnishing for the purpose statement. A significant
piece of work provides information that is useful to other scholars in the field
and, ideally, is of such importance that it alters the thinking of scholars in your
profession and society at large. In the Significance section you will justify why
you chose to investigate this problem and the type of research methodology you
chose.
Besides your personal desire and motivation to do research, your wish to obtain
a degree, your need for a good grade, and your craving to get something
published, there needs to be a more global reason for doing a worthwhile study.
You should state who, besides yourself, your immediate family, your teachers,
and close friends, cares that this research is conducted or not conducted. You
should use about ¾ of a page to explain why this is such a unique approach and
who will be thrilled (besides yourself, your family, your teachers, and friends!)
that this study is done. Here is where you tell us what type of contribution you
will be making to your profession and to society.
The statements below are valid reasons for doing research. Put a check next to
the ones that apply to your research project and can potentially be used in the
Significance section of your paper.
__1. This study is able to reach people that were not reached by other
similar studies (i.e., a different population was studied).
__2. This study gives a different perspective on an established problem.
__3. This is an appropriate approach to this particular research problem
although it had not been embraced before.
__4. There was an important benefit to doing the study this way so that
there could be a better understanding of the problem.
__ 5. If this study were not done, some aspect of society would be in
danger.
__ 6. This is the first time an important problem was examined in this
vein.
__ 7. This study has the potential to effect social change.
__ 8. This program was needed to rectify certain wrongs in society.
__ 9. This study provided an objective measure of the success or failure
of an important program.
__ 10. This study will add to the scholarly literature in your field.
__ 11. Policy makers need this information to right a wrong or make
better decisions.
Thus, the significance section addresses the “so what” of the study and report. It
describes or explains the potential value of the study. This section should
identify the audience for the study and how the results will be beneficial to them.
Remember, a dissertation is conducted to add to the existing knowledge base and
solve a problem – how your particular research will do this should be articulated
in this section.
Cutting Board
1. Which statement(s) above pertain to your study?
2. State in your own words why this study is so important.
3. To whom is your study important, other than yourself?
4. How will society benefit from your study?
5. How will policy makers benefit from your study?
6. How would you respond (in a nice way) to a person who says, “So
what?” to your project?
7. How would you provide a persuasive rationale to the person who says,
“So what?”
8. Write down several reasons why you chose to study the problem in this
way, and what or who will benefit from the results of the study.
1½ cups Nature of the Study
The Nature of the Study section (about 2–3 pages) presents the rationale for
choosing your research design. Here you will address the appropriateness of the
research methodology you chose and how you plan to answer your research
questions and solve the problem you posed based on the tenets of the selected
method. This is the place where you defend your selected methodology and
distinguish it from other research methodologies that have been, or could be,
conducted to investigate the problem. It gives the reader a synopsis of the meal
you are cooking up and places your study with similar types—case study,
historical, correlational, evaluative, phenomenological, experimental, quasi-
experimental, etc., or the way you plan to prepare your feast. Thus, this is where
you will elaborate on the methodology you have chosen and justify why this is
(was) such a great way to investigate this problem. If you chose a qualitative
research method you will probably need to do a little more explaining than if you
chose a quantitative design. Provide details connecting the type of methodology
to the theoretical framework guiding your study. The quantitative researcher will
generally test theories and hypotheses with the intent of generalizing results,
whereas the qualitative researcher will seek to determine patterns to help explain
a phenomenon.
In summary, this section serves two purposes: (a) describing and justifying the
methodology (i.e., quantitative, qualitative, or mixed methods) and (b) describing
and justifying the design (e.g., case study, phenomenological, correlational). A well-
crafted Nature of the Study can usually be presented in one page and explains
why other methods and designs were not selected. Methodological experts
(rather than authors of general research textbooks) should be cited in this
section. A common error in this section is to restate the purpose and rehash the
problem. Keep it thorough, simple, and clear.
In PHASE 1, we discussed different types of research methodologies. Refer to
this section now, and then answer the questions on the Cutting Board.
Cutting Board
1. From what perspective did you view your problem: past, present, or
future?
2. Which subset(s) of the past, present, or future perspective seemed to apply
the most to your study (e.g., descriptive, correlational, grounded theory,
action, heuristic, etc.)?
3. Within the perspective of 1 and 2 above, which of the following do (did)
you do?
a) Describe facts
b) Suggest causes
c) Analyze changes
d) Investigate relationships
e) Test causal hypotheses
f) Evaluate efficiency or effectiveness
g) Develop a program
h) Develop a theory
4. To summarize, complete the following statement: The methodology that I
used in my study could best be classified as a (an) ______________ study
because I
_______________________________________________________________
5. Name another type of methodology that could have been used to study the
problem. ________________________________Why did you reject this
methodology?
_______________________________________________________________
Cutting Board
Here are some questions that can help you articulate the theory behind your
inquiry:
1. Examine your title, thesis, topic, research problem, or research questions. In
one sentence, what is the concern you are investigating?
Example: Minority students in urban high schools are not doing well on
standardized tests in mathematics.
_________________________________________________________________________
2. Brainstorm on what you consider to be the key variables in your research.
Example: Mathphobia, high stakes testing, high school students, unprepared
teachers, racism, poor funding, teaching techniques, socio-economic
conditions
_________________________________________________________________________
3. Read and review related current literature on this topic. Conduct a key word
search to locate articles related to your topic.
4. Identify germinal and key authors who have advanced this area of inquiry:
Example: Coleman, Freire, Kohn, Oakes, Thomas, Rothstein, Jacobsen,
Tobias, Wigfield, Silver….
_________________________________________________________________________
5. List the constructs and variables that might be relevant to your study. In a
quantitative study, list the possible IVs and DVs.
Example: Independent variables: mathematics anxiety, self-efficacy, socio-
economic class, ethnicity, race, teaching philosophies, teaching techniques
Dependent variable: performance on high-stakes mathematics tests.
_________________________________________________________________________
6. Consider how these variables directly relate to the theory. Does the theory or
theories provide guidance for how these variables might behave? Explain the
connection between the theory and the variables.
_________________________________________________________________________
7. Revise your search and add the word “theory” to your key words to find the
theories and theorists most in line with your thinking.
Example: Critical Race Theory, Constructivism, Social Cognitive Theory
_________________________________________________________________________
8. Discuss the assumptions or propositions of each theory and point out its
relevance to your research.
Example: Constructivism holds that learning always builds upon knowledge that
a student already possesses; this prior knowledge and experience is known as a
schema. Because all learning is filtered through pre-existing schemata,
constructivists suggest that learning is more effective when a student is actively
engaged in learning mathematics rather than attempting to receive knowledge
passively. A wide variety of methods claim to be based on constructivist learning
theory. Most of these methods rely on some form of guided discovery where the
teacher limits direct instruction and attempts to lead the student through
questions and activities to discover, discuss, appreciate, and verbalize the new
knowledge.
Critical Race Theory: CRT theorists question the social and cultural assumptions
surrounding whiteness, blackness, and racial inequality. Building awareness across cultural
groups is difficult when dealing with racial differences, and might explain why
some ethnic groups perform differently in mathematics or other academic fields.
Research Questions
Qualitative research questions tend to be open and probative in nature and must
reflect the intent of the study. Research questions should be manageable and
contain appropriate restriction, qualification, and delineation. The formulation of
research questions reflects the selection of the research method and design.
Many qualitative research questions ask how or why events occur, or what are
the perceptions and experiences of participants. Qualitative research questions
are often exploratory in nature, and are designed to generate hypotheses that
could be tested later in quantitative studies. The questions are in accord with the
chosen design.
For example, in a phenomenological study the research questions should look to
determine what the lived experiences of participants are regarding a specific
phenomenon (e.g. “What are the lived experiences of teachers in Louisiana who
taught during the aftermath of Katrina?”). In a grounded theory study the
research questions should seek to develop a theory grounded in data (e.g., “What
emergent theory or theories connect(s) the need for intrapreneurship with the
actions leading to enacting intrapreneurial activities in the biotechnology
industry?”) In a Delphi study, the questions need to be future oriented (e.g.,
“How can future teacher training programs prepare faculty for improving the
education of African American males?”). Most dissertations are guided by 1-3
substantive and specific research questions.
Keep your questions close to the topic you are researching. Questions that are
too abstract or obtuse make it difficult for the reader to determine their relevance
and intent. At the same time, link each question to a larger context and make sure the
questions are consistent with the problem, purpose, and methodology.
After forming a research question, ask yourself: What possible answers could be
given to this question? Once you write your research questions, step aside for a
while. Let the questions aerate, and then come back and think about possible
responses to the research questions. Make certain that a profound answer is
required.
Use the following table to check your proposed research questions:
Statement Check
1. The research questions are precise and concise;
there are no unnecessary words.
2. The research questions are manageable and
contain appropriate restriction, qualification, and
delineation.
3. The research questions arise logically from the
problem statement.
4. The research questions reflect the type of study
that will be conducted.
5. The research questions are probative in nature.
Words like how, what, or why are used.
6. The research questions are of sufficient depth to
warrant graduate level research.
7. The research questions do not require a binary
(yes/no) or numerical response.
8. There are no pronouns such as you, they, we, us,
etc. in the research questions.
9. The research questions are broad enough to guide
the entire study.
10. The purpose statement explains how the research
questions will be answered.
11. Each research question is answerable by the
methodological tools available to you.
12. In quantitative studies, the Independent
Variable(s) [IV] and the Dependent Variable(s)
[DV] are delineated.
Hypotheses
A research hypothesis is a conjectural declarative statement of the results the
researcher expects to find among the variables a researcher intends to study. For
quantitative studies, hypotheses are testable and variables are measurable. If
confirmed, a hypothesis will support a theory. A research question might include
several variables (constructs) and thus several research hypotheses might be
needed to indicate all the anticipated relationships (Cooper & Schindler, 2003).
Accordingly, the number of hypotheses is determined by the relationships among
variables (constructs) or the comparisons to be studied. A
research hypothesis is essential to quantitative studies, but usually does not
appear in qualitative studies. The rationale for making predictions (hypotheses)
usually comes from hypothesized relationships suggested by prior research or
from personal experiences and anecdotal data.
If a social psychologist theorized that racial prejudice is due to ignorance, then the
more highly educated a person is, the less prejudiced that person should be. A hypothesis to test
this theory:
There is an inverse relationship between the education of a person and the degree of
racial prejudice.
The following should be clear from the hypothesis and research questions:
1. What variable is the researcher manipulating or is the presumed cause, or
predictor, in a study? (This is the independent variable, or IV.) In the example, the
IV is education level.
2. What results are expected or what is the presumed effect of the study or the
predicted result? (This is the dependent variable, or DV). In the example, the DV
is the level of racial prejudice.
To test this hypothesis, the researcher could survey people with varying degrees of
education. The survey would consist of a way to measure education level and a way to
measure the degree of racial prejudice. A Pearson test could be used to test the
hypothesis.
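A minimal sketch of how this test might look in Python, assuming the scipy library and invented survey data (education coded as years of schooling, prejudice as a score on an attitude scale): scipy's pearsonr returns the correlation coefficient and the p value for the null hypothesis that r = 0.

from scipy.stats import pearsonr

# Hypothetical survey data for ten respondents.
years_of_education = [10, 12, 12, 14, 14, 16, 16, 18, 20, 22]
prejudice_score = [78, 74, 70, 69, 65, 60, 62, 55, 50, 45]

r, p_value = pearsonr(years_of_education, prejudice_score)
print("r =", round(r, 3), " p =", round(p_value, 4))
# A negative r with a small p value supports the research hypothesis of an
# inverse relationship between education and prejudice; an r near 0 with a
# large p value would fail to reject the null hypothesis (r = 0).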
When formulating your hypotheses, the rationale for these expectations should
be made explicit in light of your review of the research and statement of theory.
If a survey is used to measure the IV and DV, there should be consistency among
the answers and a way to grade each participant on each variable. Check out the
discussion of survey research in PHASE 2. When you construct your data
collection instrument, make certain that you are aware of how you will measure
each variable. For further assistance, check out
http://www.socialresearchmethods.net/.
Research hypotheses are sometimes referred to as working or substantive
hypotheses. They are usually directional; that is, a researcher might believe there
is a positive or inverse relationship, or something is more or less than a certain
accepted notion or condition. For example, There is a positive relationship
between the amount of homework and test scores, or This new program will
require fewer hours of time to train technicians. However, they can also be
nondirectional, such as, There is a relationship between homework and test
scores, or There is a difference in training time between the two programs. The
nondirectional hypotheses show less bias and are appropriate when conflicting
information exists.
There is a difference between a substantive hypothesis and a statistical
hypothesis. The former speculates, somewhat informally, on what you assume
your study will reveal. The latter is a formal, testable conjecture that can be
translated into mathematical symbols.
Example of a substantive hypothesis: Teachers who have integrated calculators
into their personal lives are more likely to use calculators in their classrooms
than teachers who rarely use calculators in their personal lives.
Example of statistical hypotheses:
H0: There is no relationship between teachers using calculators in the
classroom and using calculators every day.
r = 0
and the alternative hypothesis:
H1: There is a relationship between teachers using calculators in the
classroom and using calculators every day.
r ≠ 0
Statistical hypotheses (as discussed in PHASE 2) usually come in pairs (the null or no
change hypothesis, which contains =, and the alternative or opposite hypothesis) and are
expressed symbolically. In chapter 1, only the substantive hypotheses or your
expectations need to be expressed. (Statistical hypotheses belong in chapter 3).
Check the phrase(s) below that best complete(s) the following sentence: I
believe that my study will disclose
___1. The extent to which a problem affects society or one of its
institutions
___2. A relationship between an independent and dependent variable(s)
___3. A new theory to an old or new problem or condition
__4. That a program (or treatment) evaluated is effective (or ineffective)
___5. A significant relationship between factors scrutinized and a
problem under investigation
___6. A need to make a change in an attitude/condition
___7. Conditions that exist that contribute to a problem studied
___8. Specific conditions that contribute to solutions of a problem studied
___9. One program is more effective than another program
___10. A need for a particular study or program
Cutting Board
State as clearly and succinctly as possible what you expected the results of
your study to show:
½ cup Scope, Limitations, and Delimitations
The scope is what the study covers, and is closely connected to the problem
framed. The limitations are the constraints that are beyond your control but
could affect the study outcomes. The type of methodology you select will have
limitations regarding aspects such as generalizability. The delimitations are set
by you. Deciding which group to study or the type of data collected are your
choices.
Scope of the Study
The scope of the study refers to the parameters under which the study will be
operating. The problem you seek to resolve will fit within certain parameters.
Think of the scope as the domain of your research—what’s in the domain, and
what is not. You need to make it as clear as possible what you will be studying
and what factors are within the accepted range of your study. For example, if you
are studying the ill effects of bullying on middle school children, the scope could
include both face-to-face bullying and cyber-bullying in grades 6 through 8.
Limitations
Limitations are potential weaknesses in your study and are out of your control. If
you are using a conventional oven, food on the middle racks is often
undercooked while the food closest to the burner and the top can be well done. If
you are using a sample of convenience, as opposed to a random sample, then the
results of your study cannot be generally applied to a larger population, only
suggested. If you are looking at one aspect, say achievement tests, the
information is only as good as the test itself. Another limitation is time. A study
conducted over a certain interval of time is a snapshot dependent on conditions
occurring during that time. You must explain how you intend to deal with the
limitations so as not to affect the outcome of the study.
Limitations are matters and occurrences that arise in a study which are out of the
researcher's control. Limitations are conditions that restrict the scope of the
study, may affect the outcome, and cannot be controlled by the researcher. They
limit the extent to which a study can go, and sometimes affect the end result
and conclusions that can be drawn. Every study, no matter how well it is
conducted and constructed, has limitations. This is one of the reasons why we
do not use the words "prove" and "disprove" with respect to research findings. It
is always possible that future research may cast doubt on the validity of any
hypothesis or conclusion from a study. Your study might have access to only
certain people in an organization, certain documents, and certain data. These are
limitations. Subsequent studies may overcome these limitations. A small sample
size is a limitation; research findings may not apply to a broader population.
Participant and researcher biases can be another limitation affecting the
generalizability of the study.
Limitations of Qualitative Studies
A limitation associated with qualitative study is related to validity and reliability.
“Because qualitative research occurs in the natural setting it is extremely
difficult to replicate studies” (Wiersma, 2000, p. 211). When you select certain
methodologies and designs, for example phenomenology, they come with
limitations over which you may have little control.
Limitations of Case Studies
We cannot make causal inferences from case studies, because we cannot rule out
alternative explanations. The generality of the findings of a case study is always
unclear. A case study involves the behavior of one person, group, or
organization. The behavior of this one unit of analysis may or may not reflect
the behavior of similar entities. Case studies may be suggestive of what may be
found in similar organizations, but additional research would be needed to verify
whether findings from one study would generalize elsewhere.
Limitations of Correlational Studies
Correlational research merely demonstrates that we can predict the behavior of
one variable from the behavior of another variable. If a relationship exists then
there is an association between variables. However, two variables can be
associated without there being a causal relationship between the variables. If we
find that X is associated with Y, it could mean that X caused Y, or Y caused X, or
some “third” (confounding) variable caused both X and Y without there being
any causal relationship between X and Y. Correlational research may also have
limitations with respect to the generality of the findings. Perhaps the study
involved a specific group of people, or the relationship between the
variables was investigated only under a particular situation or circumstance. Thus, it
may be uncertain whether the correlational findings will generalize to other
people or situations.
½ cup Assumptions
Assumptions in your study are things that are somewhat out of your control, but
if they disappear your study would become irrelevant. For example, if you are
doing a study on the middle school music curriculum, there is an underlying
assumption that music will continue to be important in the middle school
program. If you are conducting a survey, you need to assume that people will
answer truthfully. If you are choosing a sample, you need to assume that this
sample is representative of the population to which you wish to make inferences.
Leedy and Ormrod (2001) noted, “Assumptions are so basic that, without them,
the research problem itself could not exist” (p. 62).
You must justify that each assumption is “probably” true, otherwise the study
cannot progress. To assume, for example, that participants will answer honestly,
you can explain how anonymity or confidentiality will be preserved and that the
participants are volunteers who may withdraw from the study at any time and
with no ramifications. To assure the reader that a survey will get to the heart of
the research problem and enable the researcher to answer the research questions,
a pilot study is often performed.
It is important to be mindful that the paradigm you choose
(qualitative/quantitative/mixed) comes equipped with assumptions. A
quantitative researcher contends that reality is objective and singular and distinct
from the researcher. A qualitative researcher contends that reality is subjective
and multiple as revealed through the perspective of the participants in a study. A
mixed-methods researcher assumes that these ontological assumptions can be
resolved.
CHAPTER 2
SOUP/SALAD
(Research Review)
Sit down before fact as a little child; be prepared to give up every conceived
notion, follow humbly wherever and whatever abysses nature leads, or you
will learn nothing.
—Thomas Huxley
A thorough, sophisticated, and extensive literature review is the foundation and
inspiration for substantial and contributory research. The complex nature of
scholarly research requires a thorough and critical review of studies related to
the problem you plan to solve. Acquiring the skills and knowledge required to be
a scholar includes the ability to analyze and synthesize the research in a field of
specialization. Such scholarship is a prerequisite for making a significant and
original contribution to your field or to your profession. It is important to pare
down information, looking for the most current and most relevant information.
The literature review is an integrated critical essay that analyzes and synthesizes
the most relevant and current published knowledge on the topic under
investigation. This review should reflect a critical conversation between your
sources about themes related to your study topic. On a paragraph level, there
should be multiple sources within a paragraph where findings are
compared/contrasted/analyzed, and if possible, synthesized, along with a
discussion on how these references relate to your study.
The review is organized around major ideas and themes. It generally consists of
about 40-60 pages in the proposal, and usually more than that in the dissertation.
Most dissertations contain between 150 and 250 references.
You need to keep track of all the sources that you reference in your literature
review. You need to review critically other studies that have tried to answer the
questions that you are asking and solve problems similar to the one you framed.
You need to summarize these studies, compare them, contrast them, organize
them, comment on their validity, and stir similar ones together. You need to
make certain that you properly cite each quote, paraphrase, or idea that you get
from a source. When analyzing a research study, report on the samples that were
used and how they were selected, what instruments were used to obtain data, and
the conclusions made. A substantive, thorough, and scholarly literature review is
a prerequisite for doing substantive, thorough, and scholarly research. To be
useful, scholarly research must be cumulative; it must build on and learn from
prior research on the same or related problem under investigation. It must also
clarify and resolve inconsistencies and tensions in the literature and thereby
make a genuine contribution to the state of knowledge in the field (Boote &
Beile, 2005).
Scholarly, peer-reviewed journal articles, as opposed to other types
of publications, require authors to document and make verifiable the sources of
the facts, ideas, and methods used to arrive at the authors' insights and
conclusions. Scholarly journal articles, unlike web-based or popular magazine
articles, are designed and structured to provide the elements necessary to most
thoroughly evaluate the validity and truth of the authors' position. Most
dissertations require that the overwhelming majority of references (published within
the past 5 years) come from scholarly journal articles.
Primary sources are also preferred over secondary sources. Primary sources
enable the researcher to get as close as possible to what actually happened and
reflect the individual viewpoint of a participant or observer. A secondary source
is a work that interprets or analyzes a historical event or phenomenon. It is
generally at least one step removed from the event. Many people consider
secondary references hearsay. If you find a quote in a secondary source, verify it
against the original.
Locating original sources helps ensure the information presented is accurate in
the context of the original intent of the study. For example, if you are interested
in the works of Max Weber, and you are not fluent in German, then consider
reading Weber’s Theory of Social and Economic Organization, translated by
Henderson and Parsons in 1947, rather than depending on another author’s
interpretation of Weber’s works.
Hernandez and White (1989) revealed a 43.7% inaccuracy rate for direct quotes
used in secondary sources. Previous studies concerning paraphrasing cited in the
Hernandez and White research revealed a 30% error rate, both minor and major.
At issue is original intent. “Many changes do not adversely affect meaning, but
as we have tried to illustrate, changes do occur which alter meaning but which
often cannot be recognized by the reader as a deviation from the original”
(Hernandez & White, 1989, p. 510).
It is important to locate original sources in your literature review to ensure that
you are representing the full and correct content of the source. Choosing not to
use the source document puts you at risk of adopting the bias or paraphrasing
of an intermediary that might not have accurately represented the original source.
By reviewing the original source, you will ensure that meaning is not lost in
translation. According to Wright and Armstrong (2007), there is a high
prevalence of faulty citations in scholarly papers. These infractions impede the
growth of scientific knowledge. Faulty citations include omissions of relevant
papers, incorrect references, and quotation errors that misreport findings. Please
check out: http://tinyurl.com/2cvtg3b.
Make sure you perform a 180-degree search; that is, conduct searches using words that
support and refute your beliefs. For example, if you believe that preschool is important to a
healthy start in education, you should conduct searches on the advantages and
disadvantages of preschool education.
Encyclopedias or dictionaries, of any kind, including the very popular
Wikipedia, dictionary.com, and Merriam Webster, are not primary sources and should not
be cited or used as evidence in doctoral research. They can, however, be useful to help
gather some background information and to point the way to more reliable sources.
A useful acronym for what the research/literature review chapter does is LEADS: it leads the reader to an understanding of how your study fits into a larger picture of things, how others have dealt with and been affected by the problem, and why you chose to study the problem the way you did.
The literature/research review chapter is one of the most important parts of your
dissertation or research project. It puts your research into a set with other studies
and documents that have dealt with comparable issues. It gives you the
knowledge to become an expert in the area that you are investigating and points
out what your study will do that others have not done. A thorough review of the
literature also safeguards against undertaking a study that might have already
been conducted, might not be feasible, or might not be of much value when set
against what needs to be researched in a particular field. A good review of the
literature that critically synthesizes ideas and methods related to your topic is an
indication of an accomplished scholar.
It is your responsibility to present a fair and balanced discussion of alternative
viewpoints. For example, if you are researching the ill effects of the glass ceiling
for women executives, you need to look at studies that claim that there is no such
thing as a glass ceiling. You are expected to scrutinize each study you present
and challenge dubious beliefs based on sound logic and empirical evidence. It is
imperative that you explain how you searched the literature and how you judged
the suitability and quality of the literature reviewed.
The review will also include the most important aspects of the theory that you
will examine or test and substantiate the rationale or conceptual framework for
your study. You will also present relevant studies to justify each variable that is
part of your study. The literature review should articulate what research needs to
be conducted and provide a basis to compare your research to prior studies.
If there is a limited amount of literature on your topic, then place your topic in a
larger set or sets and describe the literature in these areas. For example, a
researcher investigating the effectiveness of employee leasing could look at
leasing machinery and new ideas in business. In the past, a literature review was
expected to be exhaustive, but this is no longer possible. Instead the review
should be extensive and place your study among existing literature. Your job is
to tie this literature into a cogent whole. Your approach should be analytical as
well as descriptive.
Conducting a thorough and extensive literature review is one of the most
important early steps in a research project. It is also one of the most humbling experiences you are likely to have, because you will probably discover that just about any worthwhile idea you have has been thought of before, at least to some degree. Do not despair; you will also find holes in prior studies that your study can plug.
A personal note from Simon: Every time I teach a research methods course, I have at least one student
complain that he or she could not find anything in the literature related to his or her topic. And virtually
every time, I am able to determine that the student was only looking for articles that were exactly the
same as the research topic posed. A literature review is designed to identify related research and to set the
current research project within a conceptual and theoretical context. When looked at that way, there is
almost no topic that is so new or unique that we cannot locate relevant and informative related research.
One good search engine you should check out is www.scholar.google.com. Another is found at
http://www.highbeam.com/library/index.asp . Most universities provide entry into professional databases appropriate for your degree.
Overall the literature review provides a path from prior studies to the current
study, integrates knowledge, and stimulates new ideas. Most paragraphs of the
review contain at least two peer-reviewed studies that are compared and
contrasted. Existing and historically germinal literature provide a contextual
framework within which the research design is situated. The review also needs
to provide an academic foundation for the methods and research design chosen.
Similar studies that used the same and different methodologies to resolve similar
problems should be included in the review.
Check back to PHASE 1 in your Recipes for Success’s PROCEED section and
review the information on how to read efficiently to make sure that you are
effective in your probing for information. Although there is no set rule on how
many sources you need to consult for your dissertation, most chefs tend to
review between 100 and 200 dishes or studies related to their topic, and most
literature/research reviews constitute about ¼ to ½ of the written research paper.
These numbers will vary depending on
1. How unique your study is
2. How far back in time you choose to go
3. How you define the related topics
Just as there are restaurants that only serve soup and salad as a meal, the
research/literature review itself could be a study. However, in a dissertation, this would be extremely rare. Sometimes a keyword search does not yield a sufficient number of references. Do not despair… there is a software program that can help with finding keywords. The program can also search the keywords and provide you with a list of search engines that can assist. The program costs about $40 and can be found at www.brainstormsw.com/
In the research/literature review chapter, you will slowly illuminate how careful
you were in preparing your exemplary meal and how familiar you are with the
previous meals that have been prepared in this area. As your readers nibble on
the information you adeptly dish out, you can unveil in this chapter why you
chose your main course, why you decided to serve the meal the way you did,
and why the utensils you chose were appropriate for this type of feast. Keep in
mind that every reference should relate back to your study. Every reference cited
in chapter 1 should be elaborated on in chapter 2. Most universities (and research
journals) require that the overwhelming majority of the citations come from
peer-reviewed journals that publish refereed articles. A refereed article is an
article that has been carefully reviewed and scrutinized by scholars or experts in
the research topic of the article who are not members of the editorial staff or
board. In many cases, one or more external readers have subjected the article to a
blind review process. Walden University has prepared a guide to assist in finding
acceptable scholarly material at http://library.waldenu.edu/HowDoI_23037.htm
Evaluating a source can begin even before you have the source in hand. You can
initially appraise a source by first examining the bibliographic citation—a
written description of a book, journal article, essay, or some other published
material. Bibliographic citations characteristically have three main components:
author, title, and publication information. These components can help you
determine the usefulness of this source for your paper.
Make certain that you explain all of the following in your research report: title
searches, keyword searches, the number of articles you reviewed, and the
journals you researched.
Recipe for Appraising an Author
1. What are the author’s credentials—educational background, past
writings, or experience—in this area? Is the book or article written on a
topic in the author’s area of expertise?
2. Is the author associated with an institution or organization? What are
the basic values or goals of the organization or institution?
3. Have you seen the author’s name cited in other sources or
bibliographies? Other scholars cite respected authors frequently. For this
reason, always note those names that appear in many different sources.
4. What is the author’s worldview (What presuppositions or assumptions
are held consciously or subconsciously about the basic makeup of the
world)?
As Merriam (1997) pointed out, “How the investigator views the world affects
the entire process—from conceptualizing a problem, to collecting and analyzing
data, to interpreting the findings” (p. 53). To know how a researcher construes
the shape of the social world, and aims to give us a credible account of it, is to
know our conversational dinner partner.
If a critical realist, a critical theorist, and a social phenomenologist are
competing for our attention, we need to know where each is coming from. Each
will have diverse views of what is real, what can be known, and how these social
facts can be faithfully rendered (Creswell, 2002). In a quantitative study, theories
are usually employed deductively and need to be placed in the beginning of the
study. The researcher will generally present a theory (for example, why
calculators are not being used in a classroom), gather data to test the theory, and
then return to the theory at the end of the study to confirm or disconfirm it.
In a qualitative study, an inductive mode of development tends to be used.
Usually the qualitative researcher is more concerned with building a theory than
testing it. A theoretical framework can be introduced in the beginning but will
generally be modified and adjusted as the study proceeds. The theory or theories
presented should be consistent with the type of qualitative design. The theory is generally something to develop rather than to test; it shapes the research process, and the researcher creates a visual model of the theory as it emerges. The emergent theory can also be compared and contrasted with existing theories at the completion of the research (Simon & Francis, 2001). The theoretical framework serves as a sieve from which
information flows.
A worldview should pass certain tests. First, it should be rational. It should not
ask us to believe contradictory things. Second, it should be supported by
evidence and consistent with what we observe. Third, it should give a satisfying
comprehensive explanation of reality and enable us to explain why things are the
way they are. Fourth, it should provide a satisfactory basis for living. It should
not leave us feeling compelled to borrow elements of another worldview in order
to live in this world. How you determine right from wrong helps determine your
worldview. Some people believe that ethics are relative or situational, while
others assert that ethical behavior is a universal fixed idea. Some people believe
they have no free choice since all acts are entirely determined, while others
believe the opposite.
In determining your worldview regarding the meaning of history, you might find
you believe that history is determined as part of a mechanistic universe. Or you
might believe that history is a linear stream of events linked by cause and effect
but without purpose. Some people believe that history is meaningless because
life is absurd.
One who adopts a postmodern worldview believes that everything is
predominantly contextual. A realist’s worldview, in contrast, is that there are
predominantly absolutes—good versus bad; you are either with us or against us.
History has shown the tragic results of a "might makes right" worldview held by
despots and anarchists. Frequently the worldview within expository writing is
not overtly written. It is written between the lines. (That is why most school
curricula have hidden curricula within them.)
When you are researching articles and books, you should be trying to discern the
worldview of the authors. If the worldview is not overtly stated, it might be
because of the following reasons:
1. The writing is sloppy.
2. The writer knows her or his audience and knows that those particular
readers already are aware of the unstated worldview.
3. The writer just expects the audience to know the unstated worldview as
part of the general background information and through silence tells the
reader that the reader is expected to know the unstated worldview. (This
position contains the hidden message that tells the reader, "If you haven't
done the requisite background studies, then do so if you want to fully
comprehend what I've written.")
4. The writer assumes and presumes that the readers already know what the
underpinning worldview is within the writing. This kind of assuming and
presuming might be irresponsible on the part of the writer, but not
necessarily. (That is part of the writer's worldview: he or she assumes and
presumes that the readers within a given discipline will have already done
their homework and assumes and presumes that the readers will happily
accept that responsibility.)
5. The writer might tacitly hold the worldviews that underpin the writing and
not realize that he or she holds those worldviews.
Any worldview model is an abstraction derived from certain observed
phenomena, but is not a picture of those phenomena. Most would grant that in
ethnically diverse classrooms a prima facie case can be made for worldview
variations as a factor in the education process. The principal assumptions in this
author’s worldview theory in education are that the students in most, if not all,
classrooms have subtle worldview variations and that these variations constitute
an important factor in achievement and attitude development.
Check the Date of Publication
1. When was the source published? This date is often located on the face of the
title page below the name of the publisher. If it is not there, look for the
copyright date on the reverse of the title page. On Web pages, the date of the
last revision is usually at the bottom of the home page and sometimes on every
page.
2. Is the source current or out of date for your topic? Topic areas of continuing
and rapid development, such as technology, demand more current information.
On the other hand, topics in the arts often require material written many years
ago.
Check the Edition or Revision
Is this a first edition of this publication or not? Further editions indicate a source
has been revised and updated to reflect changes in knowledge, correct omissions, and harmonize with its intended readers’ needs. Also, many printings
or editions may indicate that the work has become a standard source in the area
and is reliable.
Check the Publisher
If a university press publishes the source, it is likely to be scholarly. However,
the fact that the publisher is reputable does not necessarily guarantee quality. It
does show that the publisher has a high regard for the source being published.
Check the Title of Journal
Is this a scholarly or a popular journal? This distinction is important because it
indicates different levels of complexity in conveying ideas. If you need help in
determining the type of journal, you may wish to check your journal title in the
latest edition of Katz’s Magazines for Libraries (Uri’s Ref Z 6941 .K21 1995) or
Ulrich's Serials Analysis System: http://tinyurl.com/dk2wdk.
Ulrich’s Publication Directory provides information on over 300,000 serial publications. By consulting Ulrich’s Publication Directory, you can evaluate publications to determine their credibility as a doctoral source. By typing the name of a source in the search field, Ulrich’s will indicate the type of source, such as trade publication, scholarly/academic journal, or consumer magazine. It is likely
that your university will provide free access to this site.
Keyword searching is a powerful and flexible way to find books, periodicals,
and other materials. A keyword search looks for any word or combination of
words in the author, title, and subject fields of databases. Keyword searches use
connectors to search for two or more words in specific ways. The three most
useful connectors are AND, OR, and ADJ. AND specifies both words must
appear somewhere in the document, narrowing your search. OR specifies that
either word may appear in the record. ADJ specifies that the words must be
adjacent and in the same order, thus guaranteeing that the words are searched as
a phrase. This can also be accomplished using quotation marks: “child
psychology” works the same as child adj psychology.
In addition, most databases allow you to use a truncation symbol. Although the
symbol used for truncation varies, the most common symbol is *. A truncation
symbol placed at the end or middle of a term will retrieve variations of that
word. For example, “child*” will return hits on child, child’s, childhood,
children, etc. Keyword searching allows you to enter any word or string of words. The database will search for all occurrences of the word(s) in citations, abstracts, and, depending on availability, the full text.
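These connectors are easy to see in miniature. The sketch below, written in Python, is purely illustrative: the toy records, field names, and query terms are invented, and real library databases use their own query parsers, but it shows how AND, OR, a quoted phrase, and a * truncation symbol translate into simple matching rules.

    import re

    # Toy "database": each record is a dict of searchable fields (all content invented).
    records = [
        {"title": "Child psychology in early education",
         "abstract": "Advantages and disadvantages of preschool programs"},
        {"title": "Children's mathematics anxiety",
         "abstract": "A study of childhood attitudes toward mathematics"},
        {"title": "Adult learning theory",
         "abstract": "Andragogy in online doctoral programs"},
    ]

    def matches_term(term, text):
        """Match one term; a trailing * is a truncation wildcard (child* matches child, childhood, children)."""
        if term.endswith("*"):
            pattern = r"\b" + re.escape(term[:-1]) + r"\w*"
        else:
            pattern = r"\b" + re.escape(term) + r"\b"
        return re.search(pattern, text, re.IGNORECASE) is not None

    def search(recs, all_of=(), any_of=(), phrase=None):
        """AND = every term in all_of; OR = at least one term in any_of; phrase = words adjacent and in order."""
        hits = []
        for rec in recs:
            text = " ".join(rec.values())
            if (all(matches_term(t, text) for t in all_of)
                    and (not any_of or any(matches_term(t, text) for t in any_of))
                    and (phrase is None or phrase.lower() in text.lower())):
                hits.append(rec)
        return hits

    # child* AND (preschool OR mathematics); then a phrase search equivalent to child ADJ psychology.
    print(search(records, all_of=["child*"], any_of=["preschool", "mathematics"]))
    print(search(records, phrase="child psychology"))

The same little function also supports a 180-degree search: run it once with terms that support your belief (for example, advantages) and again with terms that refute it (for example, disadvantages).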
The questions below can be used to describe and assess the merits of previous
studies and could be included when writing the literature/research review in your
research paper. It is unlikely that any one study will provide the answers to all
these questions, but the questions can serve as a guideline for critical reviews.
1. What was done? Was it effective?
2. When did the study take place? What was the accepted belief at this
time?
3. Where did this study or event take place?
4. Who was involved?
5. What methodologies were used? How does the methodological choice
affect the research findings?
6. What were the limitations? How were these limitations addressed?
7. What types of instruments were used?
8. What was the sample and population studied?
9. What did this add to the knowledge or solution of the problem?
10. What recommendations were made?
11. What contributions were made at the practical and scholarly level?
12. Who was affected by this study or program?
13. What are the similarities between this study and your study?
14. Was this an appropriate means of dealing with the problem?
15. How does this study compare to other similar studies?
16. How does this study compare to the study you are conducting?
Cutting Board
As you examine articles, textbooks, speeches, video presentations, Web pages,
documentaries, etc. that are related to your topic and the problem you are
investigating, determine how logically the arguments were presented. If you find
flaws in logic, make certain they are noted. Elaborate on how the materials dealt
with the problem you are investigating and what you are doing differently.
In the space below, write down key words that are closely related to your
research.
Remember to write down the following information, if applicable, after you have
examined a source: Author, publisher, city of publisher, copyright date, title,
page number, name of periodical, date, volume number, quotes you plan to use,
and page number of quotes. An excellent source to help you evaluate a reference
can be found at http://tinyurl.com/327scqq.
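If you prefer to keep these details in an electronic form rather than on note cards, a small structured record can help. The sketch below is merely one way to capture the fields listed above; the field names simply mirror that checklist, the sample entry is entirely hypothetical, and the example is written in Python only for illustration.

    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class SourceRecord:
        """One entry in a working bibliography, mirroring the checklist above."""
        author: str
        title: str
        publisher: Optional[str] = None
        publisher_city: Optional[str] = None
        copyright_date: Optional[str] = None
        periodical: Optional[str] = None              # name of journal or magazine, if applicable
        periodical_date: Optional[str] = None
        volume: Optional[str] = None
        pages: Optional[str] = None                   # page numbers of the article or chapter
        quotes: List[str] = field(default_factory=list)        # quotes you plan to use
        quote_pages: List[str] = field(default_factory=list)   # page number of each quote

    # A hypothetical entry, for illustration only.
    note = SourceRecord(
        author="Hypothetical, A.",
        title="An illustrative study of mathematics anxiety",
        periodical="Journal of Example Studies",
        periodical_date="2016",
        volume="12",
        pages="101-118",
        quotes=["Anxiety declined when instruction was explicit."],
        quote_pages=["110"],
    )
    print(note.title, "-", note.periodical, note.volume)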
CHAPTER 3
½ MAIN COURSE
(Methodology: What Did You Do?)
Good job! Now that you have reached this point in your Recipes for Success,
you are ready to put together many of the ingredients that you have carefully
amassed in PHASES 1 and 2 and create a splendid main course.
In chapter 3 you will spoon-feed your guests as you elaborate, in great detail, the
research design that you selected and how it applies to your study. A research
design is the “procedures for collecting, analyzing, and reporting research”
(Creswell, 2002, p. 58) in a quantitative, qualitative, or mixed paradigm
approach. Creswell suggested a litmus test to understand or decide between these
paradigms. The test of a quantitative approach is whether explaining or
predicting relationships among the variables is important, along with measuring,
assessing the impacts of the variables, testing theories or broad explanations, and
applying the results to a wider group than the population being studied. A
qualitative study generally uses a naturalistic approach that seeks to understand
phenomena in context-specific settings. Where quantitative researchers seek
causal determination, prediction, and generalization of findings, qualitative
researchers seek, instead, illumination, understanding, and extrapolation to
similar situations. For further elaboration, check out http://tinyurl.com/2c6b9ck.
Chapter 3 is where you elaborate on why the method and design you chose are
appropriate to solve the problem you posed. If a qualitative design was chosen,
an argument about how a quantitative method would not solve the problem
should be included, with sources. Make certain to use a germinal book on the
method to help justify your selection. Also let your work marinate so all parts
come together and tenderize as needed to make your feast palatable to your
guests.
The Cutting Board activity that follows can be used to prepare a delicious and
nutritious chapter 3. For your proposal in a quantitative study, this is usually 10-
15 pages; in your dissertation, it is usually 15-20 pages. In qualitative studies
this is usually doubled. Make certain you have obtained a cookbook, that is, a
classical or germinal text to help guide you through this section. For example, if
you are doing a case study design, you should consult with Yin or Stake, for
grounded theory Glaser or Strauss are your “men,” and for appreciative inquiry
you will likely turn to Cooperrider and Srivastava for guidance.
Cutting Board
Begin chapter 3 with a restatement of your problem and purpose and describe
how the selected research design derives logically from both the problem and
purpose. Give a brief overview of the dishes you are serving or what the reader
can look forward to in this chapter. Next, elaborate on the rationale of the
paradigm you chose (qualitative, quantitative, or mixed methods) and the
appropriateness, including a discussion of why the proposed design
(experimental, Delphi, phenomenology, correlational, etc.) will accomplish the
study goals, why the design is the optimal choice for this specific study, and why
other likely choices would be less effective.
Continue with a discussion of the population and the sample. Justify the sample
size and explain the geographical region where the study takes place. Explain
how the participants in the study are protected from any harm or ill effects. Then
discuss the type and appropriateness of the data collected. Elaborate on the
instruments chosen and their reliability and validity. Identify and justify the type
of data analyses that will be done.
Each section of chapter 3 should be highlighted in some way. You want to
convince the reader that you have (had) a well thought out plan to collect,
organize, analyze, and interpret data. You must convince the reader that you can
(did) achieve the purpose of your study.
It is usually easier to use an instrument that has an established cooking record
rather than to create your own. This means that it has probably already been
shown to be both valid and reliable. An excellent place to check for an
appropriate instrument is at http://www.unl.edu/buros/. However, if you have
created your own instrument for data collection, then you must describe what
you have done to see that it is valid (does what it purports to do) and how you
know it is reliable (consistent). Panels of experts, pilot studies, and content
analysis can help in this respect.
Qualitative researchers have few strict guidelines for when to stop the
data collection process. Criteria include (a) exhaustion of resources, (b)
emergence of regularities, and (c) overextension or going too far beyond
the boundaries of the research (Guba, 1978). The decision to stop
sampling must take into account the research goals, the need to achieve
depth through triangulation of data sources, and the possibility of greater
breadth through an examination of a variety of sampling sites.
Bogdan and Biklen (1982) defined qualitative data analysis as “working with
data, organizing it, breaking it into manageable units, synthesizing it, searching
for patterns, discovering what is important and what is to be learned, and
deciding what you will tell others” (p. 145). Qualitative researchers tend to use
inductive analysis of data, meaning that the critical themes emerge out of the
data (Patton, 1990). Qualitative analysis requires some creativity, for the
challenge is to place the raw data into logical, meaningful categories; to examine
them in a holistic fashion; and to find a way to communicate this interpretation
to others. The role of the researcher in the data collection procedure should be
described.
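As a rough, mechanical illustration of only one small step in that process, the toy sketch below groups researcher-assigned codes and counts how often each recurs. It assumes excerpts are stored as plain strings, the excerpts and codes are invented, and it is in no way a substitute for careful qualitative analysis or for a dedicated tool such as NVivo.

    from collections import defaultdict

    # (excerpt, code) pairs assigned by the researcher while reading transcripts; all text is hypothetical.
    coded_excerpts = [
        ("I froze whenever the teacher called on me.", "fear of exposure"),
        ("Timed tests made my mind go blank.", "time pressure"),
        ("Working in small groups helped me relax.", "supportive environment"),
        ("The teacher explained each step slowly.", "clear explanation"),
        ("I panicked when the clock was running.", "time pressure"),
    ]

    # Group excerpts by code; themes emerge from what recurs across participants.
    themes = defaultdict(list)
    for excerpt, code in coded_excerpts:
        themes[code].append(excerpt)

    for code, excerpts in sorted(themes.items(), key=lambda kv: -len(kv[1])):
        print(f"{code}: {len(excerpts)} excerpt(s)")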
Stir and fry all the ingredients together and arrange them in a pleasing and
delectable manner and you will have ½ of your main course and chapter 3 of
your dissertation complete! Savor the taste.
CHAPTER 4
OTHER ½ OF MAIN COURSE
(Presentation and Analysis of Data)
Here is where you provide the punch line, or tell the reader what you discovered
from your study. You have already made the preliminary preparation for this
chapter in PHASE 2 of your Recipes for Success. You can use that information
to guide you through the writing of chapter 4 of your dissertation. Chapter 4
presents, in sufficient detail, the research findings and data analyses and
describes the systematic and careful application of the research methods. There
is no single way to analyze the data; therefore, the organization of chapter 4 and
analysis procedures will relate to the research design and research methods you
selected. However, there are general guidelines to follow and components to
include. The presentation and analysis chapter of your dissertation usually
contains many of the garnishes listed below and provides an affriander (addition
to a dish to give it a more appetizing appearance).
Check each ingredient that you plan to include. (Once you have successfully
incorporated a particular component into the body of your paper, acknowledge
that accomplishment by highlighting that task with a colorful pen or form an
electronic list and use the highlighting feature in Word.)
___1. A detailed description of the data uncovered and the data that
were analyzed (include means, percentages, standard deviations, t or z
values, rho values, chi-square values, p values, alpha values, ANOVA,
etc.)
___2. Tables and graphs depicting your data
___3. The results of your hypothesis testing, including the assumptions
___4. The statistical significance of your findings
___5. The answer to every research question you posed
___6. A summary of any interviews conducted, with direct quotes to
support your analyses
___7. Any observations that you, or a research assistant, made in
relationship to the problem
___8. If you used surveys or tests, explain how each item was
weighted and how it was used to help you arrive at your conclusions
Excellent software tools are available to help with both quantitative and
qualitative data analyses. One of the most popular software packages to
summarize and analyze quantitative data, and generate tables and graphs, is
SPSS. Statistical analyses range from basic descriptive statistics, such as
determining means and standard deviations, to advanced inferential statistics,
such as regression models, ANOVA, and factor analysis. SPSS also contains
several tools for manipulating data (in a good and ethical way), including
functions for recoding data and computing new variables as well as merging and
aggregating datasets. A good introduction to SPSS is found at
http://tinyurl.com/963dj8b. For qualitative data collected by text, images, or
sound, an excellent software program is NVivo, created by QSR International
(Richards, 2002). NVivo software tools require sensitivity to detail, content, and
accurate access to information (Richards, 2002). The program allows the data to be examined at increasing levels of understanding and helps generate an informed range of alternative solutions to the complex issues and problems facing the qualitative researcher. Make sure you include some of the excellent charts,
graphs, and figures that NVivo can generate.
The only drawback is that most qualitative software programs are quite
challenging to the novice user and have a fairly steep learning curve to master.
Data Sense http://www.datasense.org offers excellent face-to-face and online
training and project consultation to individuals and groups utilizing the most
current version of QSR software. Data Sense can work with neophytes as well as professionals and has successfully guided hundreds of scholarly researchers
through the myriad of documents obtained through interview transcripts, field
notes, brochures, e-mails, and memos.
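Whichever package you settle on, the basic descriptive and inferential statistics mentioned above are straightforward to sketch. The example below uses Python with pandas and scipy purely as generic stand-ins for a package such as SPSS; the scores and group names are invented for illustration only.

    import pandas as pd
    from scipy import stats

    # Invented test scores for three instructional groups (illustration only).
    df = pd.DataFrame({
        "group": ["lecture"] * 5 + ["hands_on"] * 5 + ["blended"] * 5,
        "score": [72, 68, 75, 70, 66, 81, 85, 79, 88, 83, 77, 74, 80, 76, 79],
    })

    # Descriptive statistics: mean and standard deviation for each group.
    print(df.groupby("group")["score"].agg(["mean", "std"]))

    # One-way ANOVA: do the group means differ more than chance alone would suggest?
    groups = [g["score"].values for _, g in df.groupby("group")]
    f_stat, p_value = stats.f_oneway(*groups)
    print(f"F = {f_stat:.2f}, p = {p_value:.4f}")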
Prior to the oral defense, talk to your committee chair regarding areas of concern
based on comments received from committee members. Be well prepared for
your presentation—academically, mentally, and physically. Make sure that you
practice your presentation and pace yourself well. If possible rehearse in a mock
defense. A friend or family member can drill you with some of the questions that
appear in this document. You need to be well rested and focused before your
defense. It would be good to have a glass of water available, and remind yourself
to periodically take deep breaths. In your preparation, don’t try to memorize all
the studies cited in your dissertation, but do know the details of a few key studies
that form the basis of your conceptual and theoretical framework and the reason
for your investigation.
Make certain you begin your presentation by introducing yourself and thanking
your audience for attending. Your chair cannot tell you the specific questions the
examiners will ask, but she or he can direct your attention to issues or areas that
require some thinking or additional research. If possible, speak to graduates of
your program about their experiences at their oral defense.
Your dissertation committee chair is usually the moderator for your defense, and
he or she will explain the rules on procedure and protocol. During the defense,
the committee could ask for further elaboration on the research methods employed in the study; question your findings, conclusions, and contributions; or ask you to elaborate on the relevance of your study to your profession and society at large. Specific to your study, you need to be ready to discuss: why and how you
selected the problem to investigate; the instrument for data collection you chose;
the basic assumptions of your study; the theoretical and conceptual framework;
the methodology you chose; the way your data were analyzed; and how you
solved your problem, reached your conclusions, answered your research
questions, and obtained your purpose. In this way, you and your examiners can
reach more extensive insights into the area that you researched.
Some other helpful hints include treating your presentation as a public speaking
engagement. There could be people outside your profession at your
presentation, so avoid using jargon from your field or presenting too much
detail. You need to explain in simple, concise language (a) what you did, (b)
why you did it, (c) how you did it, (d) what you found, and (e) what the results
mean.
Don’t speak too fast and don’t read from your notes. Try to keep friendly chit-
chat to a minimum. Don’t spend too much time on any one issue. Don’t rush to
answer each question. It is perfectly acceptable to think for a couple of seconds,
or ask the questioner if you are on the right track. If you are not clear about the
question ask for clarification. Try to be concise and to the point, but at the same
time demonstrate that you have a good grasp of the complex issues involved in
your study. In other words, do not give superficial answers, but at the same time,
stay focused and speak with authority. Balance is important. Spend
proportionately larger amounts of time with very important matters and less
time with matters of medium importance. Quickly mention or pass by matters of
small importance.
5. Usually participants will be given a phone number and code for the
conference call. It is a good idea to send a friendly e-mail reminder a day or
two before the defense. It is also good to have a phone number to reach the
participants, just in case. Make sure you provide your personal phone and
mobile phone numbers. Bad connections, disconnections, and such may
(and do) happen. Be prepared. Allow for mishaps. Keep the agenda flexible.
Your chair will likely have alternative scenarios if a committee member
loses connection, but just in case, be sure to have some alternatives of your
own, especially if it is your chair that loses his or her connection.
6. It is a good idea to record the teleconference. Most free conferencing
services provide this, and your academic institution may use their own
conferencing system for capturing and documenting the conference.
7. Review the recording of the oral presentation while revising your
dissertation.
8. Try to remove all distracting noise from your environment. A barking
dog or a radio in the background can divert your audience’s attention from
your presentation.
9. Follow up with a thank-you note. These are always appreciated.
ABSTRACT
Menu
An abstract is a concise summary of your study and a useful tool for others to
have a clear grasp of the research that was conducted. Because on-line search
databases typically contain only abstracts, it is extremely important to write a
complete but concise description of your work to entice potential readers into
obtaining a copy of the full paper. An abstract is one paragraph that is not
indented. The abstract includes a summary of the problem, purpose,
methodology, conclusions, and results. The abstract should be concise and
precise and contain no redundancies. References and citations are not used in an
abstract. This serves as a menu for your feast. When putting together your
abstract, gather the following ingredients:
1. A statement of the problem you have investigated
2. A brief description of the research method and design
3. A brief statement of the research questions (do not write the questions)
4. A brief description of the theoretical framework
5. Major findings and their significance (if you have conducted a test of
hypotheses, include the critical value and the p value)
6. Conclusions and recommendations
A reader should be able to decide from the abstract whether or not to read the
entire dissertation. Since it is not part of the dissertation, it should neither be
numbered nor counted as a page. To ensure the abstract title does not appear in
the table of contents, use the “Normal Indent” formatting (APA Formatting
toolbar or heading 5a on the headings toolbar). Note it is preformatted.
The abstract should be completed after you have written the research paper. The
abstract provides a clear summary of the paper, indicating both content and tone
of the paper. First-person narrative should not be used in the abstract. If learners
want to publish the abstract of their research project to Dissertation Abstracts
International (DAI), a clearinghouse of abstracts, the abstract should be no
longer than 350 words. APA-publishable abstracts must be no longer than 120
words. The abstract paragraph should not be indented.
Nine items usually need to be present in the abstract for a proposal and 10 items
for a dissertation:
1. State problem in one sentence
2. The paradigm (qualitative, quantitative, or mixed methods)
3. The method
4. A statement of the purpose
5. Rephrase the research question as a statement
6. The theoretical framework
7. The participants
8. The data collection techniques
9. An explanation of how data were or will be analyzed and managed
10. In the dissertation – the results and recommendations
In addition, you might need to explain how your study is aligned with the
university’s mission and goals.
An abstract is not broken into multiple paragraphs; the text is one long paragraph with no indentation. Check with the university on the number of words permitted. Rarely
can an abstract be more than one page.
Here is a template and sample abstract:
[briefly state the problem] Despite more than 50 years of attempts to improve mathematics education and the simultaneous prevalence of fears associated with learning mathematics in the United States, the problem of mathematics anxiety among students still remains. [briefly state the purpose and nature of the study] This qualitative phenomenological study focused on understanding college students’ perceptions regarding the phenomenon of mathematics anxiety. [briefly state the research questions as statements] The research questions explored the lived experiences of participants regarding mathematics anxiety. [briefly state the theoretical or conceptual framework] Conceptually this study was framed within theories of motivation, disposition, and constructivist learning. [briefly state the means of data collection] Data were collected through in-depth interviews, which provided detailed descriptions of the participants’ experiences and created the basis for analysis. [briefly state the sample and population] Twelve participants from a university in the Northeastern United States were selected for participation based on their self-disclosures of overcoming fear of mathematics. A series of taped and transcribed interviews were conducted. [briefly state how data were analyzed] A line-by-line analysis of participants’ responses was conducted, leading to the disclosure of critical themes that included causes of and strategies for reducing mathematics anxiety. [briefly state the findings] The results of this study provide insight for mathematics teachers at all grade levels on how clear, methodical explanations of mathematical principles and algorithms, motivational practices, hands-on activities, use of different models, and positive and supportive learning environments can enhance student attitudes toward mathematics. [briefly state how positive social change, or the university’s mission, could come from the study] This study contributes to positive social change by providing practical classroom strategies that can reduce students’ mathematics anxiety. By reducing mathematics anxiety, more students may elect to take math-related courses and enter rewarding math-related careers.
Form and style:
(2.5 cm)
Abstract [or ABSTRACT]
(double space)
Title
(double space)
by
(double space)
Author
(double space)
Text (double spaced and about 1.5 pages)
Before you submit your dissertation to committee, make certain you check each of these items:
___ Abstract Content
An abstract is a concise summary of a research study and a useful tool for others to have a clear grasp of the research that was conducted. Because on-line search databases typically contain only abstracts, it is extremely important to write a complete but concise description of your work to entice potential readers into obtaining a copy of the full paper. An abstract consists of one unindented paragraph. The abstract includes summaries of the problem, purpose, methodology, conclusions, and results. The abstract should begin with a restatement of the general problem, which still likely exists. The abstract should be concise and precise, and contain no redundancies or conflicting information. References and citations are not used in an abstract. An abstract should never be more than 350 words or one page, and some universities have much shorter word limits. An abstract (like the dissertation) is written in the past tense.

___ Abstract Form & Style
Here are some form and style tips for the abstract:
A. limit the abstract to one unindented paragraph that does not exceed one page;
B. maintain the scholarly language;
C. keep the abstract concise, accurate, and readable;
D. use correct English;
E. ensure each sentence adds value to the reader’s understanding of the research; and
F. use the full name of any acronym and include the acronym in parentheses.
Do not include references or citations in the abstract. Per APA 6.0 style, except at the start of a sentence, use numerals in the abstract, not written-out numbers. The abstract, like the dissertation, is written in the past tense.

___ Alpha Versus p-value
Note the difference between an alpha value and a p-value. Alpha (α) sets the standard for how extreme the data must be before the decision to reject the null hypothesis is made. The p-value indicates how extreme the data are. We compare the p-value with the alpha to determine whether the observed data are statistically significantly different from the null hypothesis. See http://statistics.about.com/od/Inferential-Statistics/a/What-Is-The-Difference-Between-Alpha-And-P-Values.htm
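To make the decision rule concrete, here is a minimal sketch, written in Python with scipy and invented sample data, of comparing a computed p-value to a preselected alpha. It illustrates only the logic described above, not any particular study.

    from scipy import stats

    alpha = 0.05                                   # chosen before looking at the data
    sample = [72, 75, 68, 80, 77, 74, 79, 71]      # invented scores
    null_mean = 70                                 # value claimed under the null hypothesis

    t_stat, p_value = stats.ttest_1samp(sample, null_mean)

    # Alpha is the threshold set in advance; the p-value describes how extreme the data are.
    if p_value < alpha:
        print(f"p = {p_value:.3f} < alpha = {alpha}: reject the null hypothesis")
    else:
        print(f"p = {p_value:.3f} >= alpha = {alpha}: fail to reject the null hypothesis")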
Acknowledgments
You can now celebrate the birth of an excellent contribution to society and a
remarkable repast. You probably want to write thank-you cards by adding an
acknowledgments page after the table of contents to show your appreciation to
everyone who helped you create this feast.
Thank you for using your Recipes for Success to assist you in the preparation of
this culinary delight. If you know a person who is ABD (All But Dissertation) or
someone in need of a guide to successfully complete a research paper, please
contact us at dissertationrecipes.com or fill out the form below and
we will see that they receive their own copy of the Dissertation and Scholarly
Research Recipes for Success.
BON APPETIT!
SUGGESTED READINGS
Fundamentals of Educational Research, by Gilbert Sax, Prentice Hall, New
Jersey, 1979
This work is a practical guide to graduate-level research in education. It shows
how to select a research project, how to conduct the research, and how to
interpret the research. It carries the reader from analysis to presentation of
research.
How We Know What Isn’t So: The Fallibility of Human Reason in Everyday Life,
by Thomas Gilovich, Free Press, New York, 1991
Gilovich explains in detail the truth to Artemus Ward’s famous expression, “It
ain’t so much the things we don’t know that get us in trouble. It’s the things we
know that just ain’t so.” He examines how questionable and erroneous beliefs
are formed and how they are maintained. Despite popular opinion, people do not
hold questionable beliefs simply because they have not been exposed to the
relevant evidence or because they are unintelligent or gullible. Many
questionable and erroneous beliefs have purely cognitive origins and can be
traced to imperfections in our capacities to process information and draw
conclusions. They are not the products of irrationality, but of flawed rationality.
Research Methods in Education: A Practical Guide, by Robert Slavin, Prentice
Hall, New Jersey, 1984
This text is primarily designed to serve as a basic resource for a course on
research methods of education but it can also be used by anyone who expects to
conduct social science research. Its intent was to show how to use research
designs and procedures to get the best possible answers to the best possible
questions. It discusses research design issues in the light of the limitations and
realities of institutional settings.
How to Conduct Surveys: A Step-by-Step Guide, by Arlene Fink and Jacqueline
Kosekoff, Sage Publications, Beverly Hills, CA, 1985
The purpose of this guide is to help the reader organize a rigorous survey and
evaluate the credibility of existing surveys. Its aim is for simplicity rather than
embellishment.
Research in Education: An Introduction, by Bill Turney and George Robb,
Dryden Press, Hinsdale, Illinois, 1971
This book deals with issues such as What constitutes research? What is the
scientific approach to research? How do you select and evaluate a research
problem? It offers advice on using the library in educational research and
discusses in detail techniques and tools of the educational researcher.
Tests, Measurement and Evaluation: A Developmental Approach, by Arthur
Bertrand and Joseph P. Cebula, Addison Wesley, Menlo Park, CA, 1980.
The testing movement in America has come under severe criticism in recent
years by those who claim that there is too much emphasis on standardized
instruments to measure intelligence and achievement. Some have even referred
to such tests as dehumanizing and have claimed that they do not provide an accurate assessment of individual differences.
The authors take the view that tests in and of themselves are not dangerous, and feel that when used properly they can provide the classroom teacher with a helpful set of assessment tools. The book takes a developmental approach to learning
and growth, emphasizing the need to understand each developmental stage of
physical, cognitive, and personal growth and how each stage dramatically affects
the others throughout a child’s life.
MARILYN K. SIMON, PhD, has been actively involved in Mathematics and
Computer Education since 1969 and has taught all levels of mathematics and
study skill development from pre-school through graduate school with
extraordinary results. She has published numerous books on mathematics
education, scholarly research, high stakes test-preparation, and online learning.
Dr. Simon is a faculty member at Walden University and the University of
Phoenix, School of Advanced Studies, where she supervises doctoral students.
She is also one of the nation’s authorities on Overcoming Mathanxiety and
online learning, and is an adjunct faculty member at Pacific Oaks College,
Colorado State University Global, and for school districts and businesses across
the nation, and is an international lecturer on online learning and women and
mathematics. Dr. Simon is also a quality reviewer of dissertations for several
graduate schools.
Dr. Simon is the president of MathPower, and co-founder of Best-Prep,
educational consulting firms. She has conducted postdoctoral research at the
Institute of Advanced Studies in Princeton, NJ, and was selected as an
Outstanding Woman of America, and as a mathematics education delegate to
South Africa.
http://www.linkedin.com/pub/marilyn-simon/2/b88/509
JIM GOES, PhD is Managing Partner of Cybernos, LLC, Adjunct Associate
Professor of Management at the University of Oregon, and Faculty Mentor at
Walden University. His teaching includes corporate strategy, organization theory,
entrepreneurship, public health, and various executive education seminars. His
current research and consulting work is focused on change and innovation in
business, health systems, public health, and rural communities.
Dr. Goes has authored a variety of published works in the areas of strategic
management, organizational change, and business and public policy, and has
given well over 100 presentations in business and academic venues. He serves
as co-editor of Advances in Health Care Management, and his research and
writing have been recognized with awards from the Academy of Management,
A. T. Kearney, and the American College of Healthcare Executives. He was a
Fulbright Teaching Fellow in India in 2013.
http://www.linkedin.com/in/jimgoes
ACKNOWLEDGMENTS
I want to take the opportunity to thank many individuals. First and foremost, I
thank my husband, Dr. Ronald Simon, my sons Matthew and Jonathon, their
amazing wives Meg and Cristy, and our incredible grandson Oliver for their
continuous emotional support of my efforts. The simultaneous hard work and
delight of writing this book was always wonderfully encircled by the context of
their love. A special thank you to Jim Goes for helping to raise the quality of Recipes to a must-have resource, and to the amazing Toni Williams, an uber APA/dissertation editor!
The first edition of this book was simmering in my mind for many years and, for
that reason, I thank all of the students who have sat through my statistics and
research methods classes and all the wonderful doctoral students that I have had
the honor to mentor. Each of you has added to the original book, and subsequent
editions, in your own special way and has a special place in my heart.
- Marilyn K. Simon, Ph.D.
There are many books and guides on the market about dissertation writing and
development, but this book is unique in tone and character -- truly a useful guide for those who learn best by doing, experimenting, and feeling their way into
a great dissertation. My thanks first go to my longtime colleague Marilyn
Simon, for her vision of a different model of dissertation mentoring, and for
inviting me to collaborate on this fully revised volume and companion website.
The many compliments we have received on earlier editions point to a real need
for a “dissertation guide for the rest of us”.
Special thanks also to my family (Susan, Christopher, and Matthew) for their
support in this project and sacrifice of my time to the writing and editing
process. Finally, thanks to many colleagues at Walden University and the
University of Phoenix for sharing their collective wisdom on dissertation
mentoring and for their review and suggestions on previous versions of this
book.
- Jim Goes, Ph.D.
REFERENCES
Adèr, H. J., Mellenbergh, G. J., & Hand, D. J. (2008). Advising on research
methods: A consultant's companion. Huizen, The Netherlands: Johannes
van Kessel Publishing.
Agar, M. (1996). Professional stranger: An informal introduction to
ethnography (2nd ed.). New York: Academic Press.
American Psychological Association. (1994). Publication manual of the
American Psychological Association (4th ed.). Washington, DC: Author.
American Psychological Association. (2001). Publication manual of the
American Psychological Association (5th ed.). Washington, DC: Author.
American Psychological Association. (2009). Publication manual of the
American Psychological Association (6th ed.). Washington, DC: Author.
Atkinson, J. R. (1992). Q-Method (Version 1.0) [Computer software].
Kent, OH: Computer Center, Kent State University.
Babbie, E. (1998). The practice of social research (8th ed.). Belmont, CA:
Wadsworth.
Babbie, E. (2001). The practice of social research. Australia: Wadsworth
Thomson Learning.
Baker, T. L. (1994). Doing social research (2nd ed.). New York: McGraw-
Hill.
Barab, S., Thomas, M. K., Dodge, T., Squire, K., & Newell, M. (2004). Critical design ethnography: Designing for change. Anthropology & Education Quarterly, 35(2), 254-268. doi:10.1525/aeq.2004.35.2.254
Barrett, F. J., Thomas, G. F., & Hocevar, S. P. (1995). The central role of
discourse in large-scale change: A social construction perspective. Journal
of Applied Behavioral Science, 31, 352-372.
Beane, A. L. (1999). Bully free classroom. Minneapolis: Free Press.
Bertrand, A., & Cebula. J. P. (1980). Tests, measurement and evaluation.
Menlo Park, CA: Addison Wesley.
Blair, J. (2003, February 5). Cyber bullying. Education Week, pp. 3-4.
Bloland, P. A. (1992). Qualitative research in student affairs. Los Angeles,
CA: University of California at Los Angeles. (ERIC Document
Reproduction Service No. ED 347 487)
Bogdan, R. C., & Biklen, S. K. (1982). Qualitative research for education:
An introduction to theory and methods. Boston: Allyn & Bacon.
Bogdan, R. C., & Biklen, S. K. (1992). Introduction to qualitative
research for education: An introduction to theory and methods. Boston:
Allyn & Bacon.
Boote, D. N., & Beile, P. (2005). Scholars before researchers: On the
centrality of the dissertation literature review in research preparation.
Educational Researcher, 34(6), 3-15.
Borg, W. R. (1987). Applying educational research. New York: Longman.
Borgatti, S. B., Everett, M. G., & Freeman, L. C. (1999). Ucinet 5 for
Windows: Software for social network analysis. Boston: Analytic
Technologies.
Bradburn, N., Sudman, S., and Associates. (1979). Improving interview
methods and questionnaire design. San Francisco: Jossey-Bass.
Brooks, A. J., & Penn, P. E. (1999). Final report to National Institute on
Drug Abuse. Five years, twelve steps, and REBT in the treatment of dual
diagnosis. Journal of Rational Emotive and Cognitive-Behavior Therapy,
18, 197-208.
Burns, J. M. (1989). The American experiment. New York: Knopf.
Burns, N., & Grove, K. (1993). The practice of nursing research: Conduct,
critique and utilization (2nd ed.). Philadelphia: Saunders.
Bushe, G. (1995). Advances in appreciative inquiry as an organization
development intervention. Organization Development Journal, 13(3), 14-
22.
Charles, J. (2003). Diversity management: An exploratory assessment of
minority group representation in state government. Public Personnel
Management, 32, 561-567. Retrieved July 6, 2005, from EBSCOhost:
Business Source Premier database.
Charmaz, K. (2000). Grounded theory: Objectivist and constructivist
methods. In N. K. Denzin & Y. S. Lincoln (Eds.), Handbook of qualitative
research (2nd ed., pp. 509-535). Thousand Oaks, CA: Sage.
Clements, D. (1990). Mathematical modeling: A case study approach.
Cambridge, MA: Cambridge University Press.
Clough, P., & Nutbrown, C. (2002). A student’s guide to methodology.
London, England: Sage.
COEHS. (2005, January). Guidelines for MS Plan: A thesis and doctoral
dissertation research proposal. Retrieved May 16, 2005, from
http://www.coe.usu.edu/brs/PLANA.htm
Cook, T. D., & Campbell, D. T. (1979). Quasi-experimentation: Design &
analysis for field settings. Chicago: Rand McNally.
Cooper, D. R., & Schindler, P. S. (2002). Business research methods (8th
ed.). Boston: Irwin.
Cooperrider, D. (1990) Positive image, positive action: The affirmative
basis of organizing. In S. Srivastava & D. Cooperrider (Eds.), Appreciative
management and leadership: The power of positive thought and action in
organizations. San Francisco: Jossey-Bass.
Cormack, D. (1991). Team spirit: Motivation and commitment, team leadership and membership, team evaluation. Grand Rapids, MI: Pyranee Books.
Council of Graduate Schools. (2005). Distinguishing characteristics of the
dissertation research and dissertations. Retrieved May 11, 2005,
from http://www.cgsnet.org/PublicationsPolicyRes/appendixa.htm
Covey, S. (1996). First things first. New York: Free Press.
Crane, B. (2004). Retrieved August 1, 2004, from
http://web.isp.cz/jcrane/IB/triangulation.html
Creswell, J. W. (1994). Research design: Qualitative & quantitative
approaches. Thousand Oaks, CA: Sage.
Creswell, J. (1997). Research design: Qualitative and quantitative
approaches. Thousand Oaks, CA: Sage.
Creswell, J. (2002). Research design: Qualitative and quantitative
approaches. Thousand Oaks, CA: Sage.
Creswell, J. W. (2003). Research design: Qualitative, quantitative, and
mixed methods approaches (2nd ed.). Thousand Oaks, CA: Sage.
Creswell, J. W. (2005). Educational research: Planning, conducting, and
evaluating quantitative and qualitative research (2nd ed.) [electronic
version]. Upper Saddle River, NJ: Pearson Education.
Crossen, C. (1995). The tainted truth: The manipulation of fact in
America. New York: Simon & Schuster.
Crowl, T. K. (1993). Fundamentals of educational research. Madison, WI:
Brown and Benchmark.
Custer, R. L., Scarcella, J. A., & Stewart, B. R. (1999). The modified
Delphi technique: A rotational modification. Journal of Vocational and
Technical Education, 15(2), 1–11. Retrieved August 31, 2004, from
http://scholar.lib.vt.edu/ejournals/JVTE/v15n2/custer.html
Dalkey, N. (1984). The Delphi method: An experimental study of group
opinion. Thousand Oaks, CA: Sage.
Dennis, K. E., & Goldberg, A. P. (1996). Weight control self-efficacy types
and transitions affect weight-loss outcomes in obese women. Addictive
Behaviors, 21, 103–116.
Delva, M. D., Kirby, J. R., Knapper, C. K., & Birtwhistle, R. V. (2002).
Postal survey of approaches to learning among Ontario physicians:
Implications for continuing medical education. British Medical Journal,
325, 1218-1222.
Denyer, D., & Pilbeam, C. (2013, September). Doing a literature review in
business and management. Presentation to the British Academy of
Management Doctoral Symposium in Liverpool, UK.
Denzin, N. K., & Lincoln, Y. S. (Eds.). (2000). Handbook of qualitative
research (2nd ed.). Thousand Oaks, CA: Sage.
De Vaus, D. A. (1993). Surveys in social research (3rd ed.). London: UCL
Press.
Discenza, R., Howard, C., & Schenk, K. (Eds.). (2002). The design and
management of effective distance learning programs. Hershey, PA: Idea.
Durkheim, E. (1951). Suicide (J. A. Spaulding, Trans., & G. Simpson,
Ed.). New York: Free Press.
Dzurec, L. C. (1989). The necessity for and evolution of multiple
paradigms for nursing research: A poststructuralist perspective. Advances
in Nursing Science, 11(4), 69-77.
Eisenhart, M. (2001). Changing the conceptions of culture and
ethnographic methodology: Recent thematic shifts and their implications
for research on teaching. In V. Richardson (Ed.), The handbook of research
on teaching (4th ed.). Washington, DC: American Educational Research
Association.
Eisner, E. W. (1997). The new frontier in qualitative research
methodology. Qualitative Inquiry, 3, 259–273.
Erlandson, D., Harris, E., Skipper, B., & Allen, S. (1993). Doing
naturalistic inquiry: A guide to methods. Newbury Park, CA: Sage.
Fink, A., & Kosecoff, J. (1985). How to conduct surveys: A step-by-step
guide. Beverly Hills, CA: Sage.
Fowler, F. J., Jr. (1988). Survey research methods. Newbury Park, CA:
Sage.
Francis, J. J., Johnston, M., Robertson, C., Glidewell, L., Entwistle, V.,
Eccles, M. P., & Grimshaw, J. M. (2010). What is an adequate sample
size? Operationalizing data saturation for theory-based interview studies.
Psychology and Health, 25, 1229-1245. doi:10.1080/08870440903194015
Gall, M., Borg, W., & Gall, J. (1996). Educational research (6th ed.). New
York: Longman.
Gardner, H. (1983). Frames of mind. New York: Basic Books.
Gay, L. R. (1996). Educational research: Competencies for analysis and
application (4th ed.). Beverly Hills, CA: Sage.
Gay, L. R., & Airasian, P. (2000). Educational research: Competencies for
analysis and application (6th ed.). Upper Saddle River, NJ: Prentice Hall.
Gergen, K. J. (1990). Beyond life narratives in the therapeutic encounter.
In J. E. Birren et al. (Eds.), Aging and biography (pp. 205-223). New
York: Springer.
Gilovich, T. (1991). How we know what isn’t so. New York: Free Press.
Glaser, B. (1992). Basics of grounded theory analysis. Mill Valley, CA:
Sociology Press.
Glaser, B., & Strauss, A. (1967). The discovery of grounded theory.
Chicago: Aldine.
Glaser, B. G. (1998). Doing grounded theory: Issues and discussions. Mill
Valley, CA: Sociology Press.
Goetz, J. P., & LeCompte, M. D. (1984). Ethnography and qualitative
design in educational research. San Diego, CA: Academic Press.
Goldstein, M., & Goldstein, N. (1985). How we know: The experience of
science: An interdisciplinary approach. New York: Plenum Press.
Greene, J. C., Caracelli, V. J., & Graham, W. F. (1989). Toward a
conceptual framework for mixed-method evaluation design. Educational
Evaluation and Policy Analysis, 11, 255-274.
Guba, E. (1978). Toward a methodology of naturalistic inquiry in
educational evaluation. Monograph 8. Los Angeles: UCLA Center for the
Study of Evaluation.
Guba, E. (1990). The paradigm dialog. Newbury Park, CA: Sage.
Guba, E., & Lincoln, Y. S. (1986). Effective evaluation. San Francisco:
Jossey-Bass.
Guba, E., & Lincoln, Y. S. (1989). Fourth generation evaluation. San
Francisco: Jossey-Bass.
Hanau, L. (1975). The study game: How to play and win with "statement-
pie." New York: Barnes & Noble Books.
Harris, R. (2001). The plagiarism handbook: Strategies for preventing,
detecting, and dealing with plagiarism. Los Angeles, CA: Pyrczak.
Heckerman, D., & Breese, J. S. (1996). Causal independence for probability
assessment and inference using Bayesian networks. IEEE Transactions on
Systems, Man, and Cybernetics, 26, 826-831.
Heppner, P. P., Kivlighan, D. M., & Wampold, B. E. (1992). Research
design in counseling. Pacific Grove, CA: Brooks/Cole.
Hernandez, N., & White, A. (1989). Pass it on: Errors in direct quotes in a
sample of scholarly journal articles. Journal of Counseling and
Development, 67, 509.
Hitchcock, G., & Hughes, D. (1995). Research and the teacher: A
qualitative introduction to school based research. London: Routledge.
Huck, S. W., & Cormier, W. H. (1996). Principles of research design. In C.
Jennison (Ed.), Reading statistics and research (2nd ed., pp. 578-622).
New York: Harper Collins.
Hummel, J., & Huitt, W. (1994). What you measure is what you get.
GaASCD Newsletter: The Reporter, pp. 10–11.
Hurston, L. A. (2004). Speak, so you can speak again: The life of Zora
Neale Hurston. New York: Doubleday.
Jaccard, J., & Wan, C. K. (1996). LISREL approaches to interaction effects
in multiple regression. Thousand Oaks, CA: Sage.
Kerlinger, F. N. (1986). Foundations of behavioral research (3rd ed.). Fort
Worth, TX: Holt, Rinehart, and Winston.
Kivlighan, D. M., & Jauquet, C. A. (1990). Quality of group member
agendas and group session climate. Small Group Research, 21, 205-219.
Koehler, K. J., & Larntz, K. (1980). An empirical investigation of
goodness-of-fit statistics for sparse multinomials. Journal of the American
Statistical Association, 75, 336–344.
Kuder, G. F., & Richardson, M. W. (1937). The theory of the estimation of
test reliability. Psychometrika, 2(3), 151–160.
Kuhn, T. (1962). The structure of scientific revolutions. Chicago:
University of Chicago Press.
Leedy, P., & Ormrod, J. E. (2001). Practical research: Planning and design
(8th ed.). New York: Macmillan.
Legewie, H., & Schervier-Legewie, B. (2004). Forschung ist harte Arbeit,
es ist immer ein Stück Leiden damit verbunden. Deshalb muss es auf der
anderen Seite Spaß machen [Research is hard work, it always involves a
measure of suffering; that is why, on the other hand, it has to be fun].
Anselm Strauss interviewed by Heiner Legewie and Barbara
Schervier-Legewie. Forum: Qualitative Social Research On-line Journal,
5(3), Art. 22.
Lewin, K. (1946). Group decision and social change. In S. B. Merriam &
E. L. Simpson (Eds.), A guide to research for educators and trainers of
adults. Malabar, FL: Krieger.
Lewin, K. (1952). Field theory in social science. London: Tavistock.
Lichtman, M., & Taylor, S. I. (1993). The first book of Lotus 1-2-3 (3rd
ed.). Indianapolis, IN: Sams.
Liehr, P., & Smith, M. J. (n.d.). Frameworks for research. Retrieved June 7,
2005, from http://64.233.167.104/search?
q=cache:u5koEnvEKNEJ:homepage.psy.utexas.edu
/HomePage/Class/Psy394/
Likert, R. (1932, June). A technique for the measurement of attitudes.
Archives of Psychology, 140.
Linstone, H. A., & Turoff, M. (1975). The Delphi method: Techniques and
applications. London: Addison-Wesley.
Marrow, A. F. (1969). The practical theorist: The life and work of Kurt
Lewin. New York: Basic Books.
Marshall, J., & Friedman, H. L. (2012). Human versus computer-aided
qualitative data analysis ratings: Spiritual content in dream reports and
diary entries. The Humanistic Psychologist, 40, 329-342.
doi:10.1080/08873267.2012.724255
Maslow, A. (1970). Motivation and personality. New York: Harper &
Row.
Mason, R., & McKenney, J. (1997). An historical method for MIS
research: Steps and assumptions. Retrieved September 17, 2004, from
GALILEO.
McPhillip, J. (1997). Needs analysis: Tools for the human services and
education. Beverly Hills, CA: Sage.
Mead, G. H. (1934). Mind, self, and society. Chicago: University of
Chicago Press.
Merriam, S. B. (1988). Finding your way through the maze: A guide to the
literature on adult learning. Lifelong Learning, 11(6), 4–7.
Merriam, S. B. (1997). Qualitative research and case study applications in
education. San Francisco: Jossey-Bass.
Merriam, S. B., & Tisdell, E. (2015). Qualitative research: A guide to
design and implementation. San Francisco, CA: Jossey-Bass.
Miles, M., & Huberman, M. (1984). Qualitative data analysis: A
sourcebook of methods. Newbury Park, CA: Sage.
Mills-Novoa, B. (1997). The use of qualitative methods in the evaluation
of grant-funded projects. In J. Ferguson (Ed.), The grantseeker’s guide to
project evaluation (2nd ed., pp. 63-69). Alexandria, VA: Capitol.
Mitroff, I., & Kilmann, R. H. (1978). Methodological approaches to social
science. San Francisco: Jossey-Bass.
Mitroff, I., & Kilmann, R. H. (1983). Intellectual resistance to useful
knowledge: An archetypal social analysis. In R. H. Kilmann, K. W.
Thomas, D. P. Slevin, R. Nath, & S. L. Jerrell (Eds.), Producing useful
knowledge for organizations (pp. 266–280). New York: Praeger.
Morris, M., & Muzychka, M. (2002). Participatory research and action: A
guide to becoming a researcher for social change. Ottawa, ON: Canadian
Research Institute for the Advancement of Women.
Morse, J. (1989). Qualitative nursing research: A free-for-all? In J. M.
Morse (Ed.), Qualitative nursing research: A contemporary dialogue (pp.
14-22). Rockville, MD: Aspen.
Moustakas, C. (1961). Heuristic research. In J. Bugental (Ed.), Challenges
of humanistic psychology. New York: McGraw-Hill.
Munhall, K. G., & Stetson, R. H. (1989). R. H. Stetson’s motor phonetics
(2nd ed.). New York: Little, Brown.
Parker, P., & Parker, J. (2003). Alcohol abuse: A medical dictionary,
bibliography, and annotated research guide to Internet references. San
Diego, CA: Icon Health.
Patton, M. Q. (1990). Qualitative evaluation and research methods (2nd
ed.). Newbury Park, CA: Sage.
Philips, G., & Brown, W. (1989). Making sense of your world. Salem, WA:
Sheffield.
Polit, D., Beck, C., & Hungler, B. (2001). Essentials of nursing research:
Methods, appraisal and utilization (5th ed.). Philadelphia: Lippincott
Williams & Wilkins.
Polit, D., & Hungler, B. (1991). Nursing research: Principles and methods
(4th ed.). Philadelphia: Lippincott.
Popper, K. (1935). Logik der Forschung [The logic of scientific discovery].
Berlin: Springer Verlag.
Reason, P., & Rowan, J. (1987). A sourcebook of new paradigm research.
New York: Longman.
Remenyi, D. (1998). Central ethical considerations for masters and
doctoral research in business and management studies. South African
Journal of Business Management, 29(3), 109–118. Retrieved May 6, 2005,
from EBSCOhost database.
Richards, L. (2002). NVivo: Using NVivo in qualitative research.
Melbourne, Australia: QSR International.
Rosnow, R. L. (1991). Inside rumor: A personal journey. American
Psychologist, 46, 484-496.
Rosnow, R. L., & Rosenthal, R. (2013). Beginning behavioral research: A
conceptual primer (7th ed.). Pearson.
Rubin, A. (2007). Practitioner's guide to using research for evidence-
based practice. New York: Wiley.
Rudestam, K. E., & Newton, R. R. (2007). Surviving your dissertation: A
comprehensive guide to content and process (3rd ed.). Los Angeles: Sage.
Salkind, N. J. (1985). Theories of human development (2nd ed.). New
York: Wiley.
Sandelowski, M. (1986). The problem of rigor in qualitative research.
Advances in Nursing Science, 8(3), 27-37.
Sax, G. (1979). Fundamentals of educational research. Englewood Cliffs,
NJ: Prentice Hall.
Seidman, I. (2006). Interviewing as qualitative research: A guide for
researchers in education and the social sciences (3rd ed.). New York:
Teachers College Press.
Sell, D. K., & Brown, S. R. (1984). Q methodology as a bridge between
qualitative and quantitative research: Application to the analysis of attitude
change in foreign study program participants. In J. L. Vacca & H. A.
Johnson (Eds.), Qualitative research in education (Graduate School of
Education Monograph Series) (pp. 79–87). Kent, OH: Kent State
University, Bureau of Educational Research and Service.
Shank, G. D. (2006). Qualitative research: A personal skills approach
(2nd ed.). Upper Saddle River, NJ: Pearson Education.
Siegel, L. (1992). Criminology. St. Paul, MN: West.
Simon, M., & Francis, B. (2001). The dissertation and research cookbook
(3rd ed.). Dubuque, IA: Kendall/Hunt.
Simpson, M. (1989). A guide to research for educators and trainers of
adults. Malabar, FL: Krieger.
Sire, J. (1997). The universe next door: A basic worldview catalog.
Downers Grove, IL: InterVarsity Press.
Slavin, R. (1984). Research methods in education: A practical guide.
Englewood Cliffs, NJ: Prentice Hall.
Sproull, N. (1995). Handbook of research methods: A guide for
practitioners in the social sciences. Metuchen, NJ: Scarecrow Press.
Stacey, A., & Stacey, J. (2012). Integrating sustainable development into
research ethics protocols. Electronic Journal of Business Research
Methods, 10, 54-63. Retrieved from http://www.ejbrm.com
Stake, R. E. (1978, February). The case study method in social inquiry.
Educational Researcher, 7(2), 5–8.
Stake, R. E. (1995). The art of case study research. Thousand Oaks, CA:
Sage.
Stephenson, W. (1953). The study of behavior: Q-technique and its
methodology. Chicago: University of Chicago Press.
Strauss, A. L., & Corbin, J. M. (1998). Basics of qualitative research:
Techniques and procedures for developing grounded theory. Thousand
Oaks, CA: Sage.
Stringer, E. T. (1996). Action research: A handbook for practitioners.
Thousand Oaks, CA: Sage.
Sudzina, M., & Kilbane, C. (1992). Applications of a case study text to
undergraduate teacher preparation. In H. E. Klein (Ed.), Forging new
partnerships with cases, simulations, games and other interactive methods.
Needham, MA: WACRA.
Suskie, L. (1996). Questionnaire survey research: What works (2nd ed.).
Washington, DC: Association for Institutional Research.
Szent-Györgyi, A. (1947). Chemistry of muscular contraction. New York:
Academic Press.
Taylor, R., & Meinhardt, R. (1985). Defining computer information needs
for small business: A Delphi method. Journal of Small Business
Management, 23, 3.
Thomeé, R., Grimby, G., Wright, B. D., & Linacre, J. M. (1995). Rasch
analysis of Visual Analog Scale measurements before and after treatment
of patellofemoral pain syndrome in women. Scandinavian Journal of
Rehabilitation Medicine, 27, 145–151.
Thompson, J. (1990). Hermeneutic inquiry. In L. E. Moody (Ed.),
Advancing nursing science through research. Newbury Park, CA: Sage.
Tower, J. G., Brown, J., & Cheek, W. K. (1992). Verification: The key to
arms control in the 1990s. Dulles, VA: Brassey’s.
Triola, M. (1999). Elementary statistics (7th ed.). Chicago: Addison-
Wesley.
Trochim, W. M. (2004). The research methods knowledge base. Retrieved
from http://www.socialresearchmethods.net/
Turney, B., & Robb, G. (1971). Research in education: An introduction.
Hinsdale, IL: Dryden Press.
U.S. Department of Health and Human Services. (1979). The Belmont
Report. Retrieved from http://www.hhs.gov
Van Slyke, C., Bostrom, R., Courtney, J., McLean, E., Snyder, C., &
Watson, T. (2003). Experts' advice to information systems doctoral
students. Communications of AIS, 12, 469–480.
Vohra, V. (2014). Using the multiple case study design to decipher
contextual leadership behaviors in Indian organizations. Electronic Journal
of Business Research Methods, 12, 54-65. Retrieved from
http://www.ejbrm.com
Vygotsky, L. S. (1978). Mind in society: The development of higher
psychological processes (M. Cole, V. John-Steiner, S. Scribner, & E.
Souberman, Eds.). Cambridge, MA: Harvard University Press.
Weber, R. P. (1990). Basic content analysis (2nd ed.). Newbury Park, CA:
Sage.
Wiersma, W. (2000). Research methods in education: An introduction.
Boston, MA: Allyn and Bacon.
Wood & Brink. (1989). Principles of string theory. New York: Plenum
Press.
Yager, J. (1991). Business protocol: How to survive and succeed in
business. New York: Wiley.
Yin, R. K. (2003). Case study research: Design and methods (3rd ed.).
Thousand Oaks, CA: Sage.
Yin, R. (2004). The case study anthology. London: Sage.
Yin, R. (2005). Introducing the world of education: A case study reader.
Thousand Oaks, CA: Sage.
Zuber-Skerritt, O. (1996). New directions in action research. London:
Falmer Press.
INDEX