
CHAPTER THREE: METHODOLOGY

Chapter Two provides the theoretical and empirical background for this study, drawing on the wellness and spa literatures. The review of literature discusses a variety of definitions and wellness construct modalities. The review also identifies that the components, products and services that collectively constitute the construct of wellness within a spa setting are incompletely specified, which represents a gap in the literature. The review also highlights the absence of a stakeholder orientation towards the wellness spa construct and towards how the construct can be measured. While there is evidence of a need to address this gap, a comprehensive theoretical and empirical framework for a solution has not so far been developed. To address this need, this study investigates how wellness and spa stakeholders define the domain of the construct of a wellness spa and what they consider to be the principal components of the construct.

The purpose of this chapter is to explain and justify the research design of the study. This is preceded by an explanation of the epistemological underpinning of the study and the research approach used to address the research question. The chapter is structured in two interrelated sections: firstly, research methodology, and secondly, research design. The research methodology section introduces the major paradigms and their assumptions, including the post-positivist paradigm, which guides the study. The rationale for adopting a post-positivist paradigm in the study is also discussed.


The second section of this chapter focuses on the research design and includes an extensive review of the C-OAR-SE procedure (Rossiter, 2002), which forms the framework for construct definition and the development of a measurement tool.

3.1 Paradigms

All research should be situated within a philosophical paradigm which informs and guides the researcher's approach to inquiry (Polit, Beck & Hungler, 2001). A paradigm describes a way of looking at natural phenomena that encompasses a set of philosophical assumptions and acts as a theoretical framework to underpin scientific inquiry (Kuhn, 1970; Polit et al., 2001). The selection of a research paradigm should reflect assumptions concerning knowledge, reality and the role of the researcher. According to Greene (2008), the development of a methodological or research paradigm in the social and behavioral sciences requires a thorough critique of four interrelated but distinct domains. The domains are, firstly, philosophical assumptions and stances; secondly, inquiry logic (methodology); thirdly, guidelines for research practice (the strategies and tools that are used to conduct research); and fourthly, sociopolitical commitments (the interests, commitments and power relations surrounding the location in society) (Greene, 2008).

The broad paradigms of positivism and interpretivism govern much of the research in health, leisure and tourism (Henderson, 2009; Polit et al., 2001). While some purists believe that positivism and interpretivism cannot and should not be mixed (Johnson & Onwuegbuzie, 2004), others argue against this perceived incompatibility thesis (Howe, 1988).


3.1.1 The Interpretive paradigm

The interpretive paradigm is based on the philosophy that reality is socially constructed and interpreted through language, consciousness and shared meanings (Schulenkorf, 2009). According to Crotty (1998, p. 67), the interpretive approach "looks for culturally derived and historically situated interpretations of the social life-world", and interpretive studies aim to understand the context of a phenomenon through the meanings that people assign to it (Myers, 1997). Interpretive researchers generally use qualitative methods such as field research, in-depth interviews and participant observation to gain rich insight into people's conditions, behaviors and perceptions (Grant & Giddings, 2002). Interpretive research, however, is sometimes criticized for failing to demonstrate rigor and credibility (Decrop, 1999).

3.1.2 The Positivist paradigm

In contrast to the interpretive paradigm, positivism considers the object being researched as possessing measurable properties. These properties exist independently of observers and their methods, and the relationships between these properties can be established as scientific facts (Smith, 1998; Veal, 1997). Science, according to the positivist paradigm, is a means of achieving empirical facts and of understanding the world well enough that it can be predicted and controlled under the laws of cause and effect (Social Research Methods, 2012).

Within positivism, deductive reasoning is used to postulate theories that can be


tested. Positivism emphasizes the importance of objectivity, systematic and

detailed observation and hypothesis testing (Grant & Giddings, 2002).

Positivism is associated with quantitative research methods such as surveys and statistics in search of rigor and "how things really are" (Guba, 1990). Quantification is useful because it provides a broad familiarity with cases, examines patterns across many cases, shows that a problem is numerically significant and provides unambiguous information (Ryan, 2006).

While many embrace positivism for its accuracy, it receives criticism on many

levels. Quantitative approaches are not well suited to individual case studies,

thus they will likely exclude the rich complexity of social life. Quantitative

approaches have a tendency to reduce people to numbers. The procedures are

highly structured, thus often preventing the researcher from pursuing

unexpected outcomes or information (Neumann, 2003).

In summary, positivism and interpretivism occupy diametrically opposite positions. Both have strengths and weaknesses, and researchers are increasingly willing to acknowledge that neither position alone is capable of addressing all research questions. In social science research there is not the same purity in the positions as may be found in other sciences (Denscombe, 2002). Despite the two paradigms' different assumptions about social reality, social scientists utilize both paradigms whenever necessary (Denscombe, 2002).


3.1.3 The Post-positivist paradigm

The post-positivist paradigm reflects a wholesale rejection of the central beliefs of positivism. Post-positivism provides an alternative to the traditions and foundations of positivism by arguing that there are multiple and competing views of science as well as multiple truths in the empirical world (Guba & Lincoln, 1994). Post-positivism views researchers as being value-laden as a result of their cultural experiences and world-views. The paradigm recognizes that people are always biased by their perceptions of reality. Thus post-positivist researchers assert that the truth can only be approximated and can never be explained perfectly or completely (Onwuegbuzie, Johnson, & Collins, 2009). The post-positivist approach enables the integration of quantitative and qualitative methods so that a problem can be investigated by incorporating the subjects' experiences of the phenomenon (Giddings, 2006). This integration is described as 'critical multiplism' (Guba & Lincoln, 1994). 'Critical' implies that, as in positivism, rigor, precision, logical reasoning and attention to evidence are required. Unlike positivism, however, the post-positivist approach is not confined to what can be physically observed. Kant (as cited in Stanford Encyclopedia of Philosophy, 2012) asserts that one's knowledge of the world is based on knowledge of phenomena. 'Multiplism' refers to the fact that research can generally be approached from several perspectives. These multiple perspectives are used to define research goals and to choose research questions, research methods, and the analysis and interpretation of results (Cook, 1985). Post-positivist values in research are not about being either subjective or objective. Rather, the values emphasize multiplicity and complexity as hallmarks of humanity (Ryan, 2006). This perspective is shared by Kant (as cited in Stanford Encyclopedia of Philosophy, 2012) in that a person prescribes the structure of the world as she experiences it phenomenally, not in itself, or noumenally.
3.2 Justification for using the post-positivist research paradigm

People are attracted to and shape research problems that match their personal way of seeing and understanding the world (Schwandt, 1989; Glesne, 1999).

The selection of the post-positivist paradigm for this study is not only influenced

by the specific nature of the research questions, but also by the ontological and

epistemological positions selected by the researcher (Giddings, 2006).

Ontology is the nature of the reality that researchers investigate; epistemology is the relationship between the reality being investigated and the researcher; methodology is the technique used by the researcher to explore the reality in question (Healy & Perry, 2000).

It is this researcher's ontological belief that the nature of wellness spas is not necessarily subjective or objective; the nature of wellness spas is multiple and complex. Additionally, given that the literature provides no clear definition of the domain of the wellness spa construct, multiple stakeholder views should be sought (L. S. Giddings, personal communication, May 4th, 2011). Stakeholder definitions of the domain of the construct will be used to build a framework for scale development and the development of an effective measurement tool to evaluate the construct.

Epistemologically, the researcher's own experience with spas and with wellness does not permit an objective perception of reality. While the concepts of 'wellness' and 'spa' are embedded in history, the wellness spa construct itself is relatively new, and the truth about what constitutes a wellness spa may evolve over time. Therefore, this study can only establish warranted assertability as opposed to absolute truth (Crossan, 2003).

3.3 Research design

Before data collection and analysis in social science can be carried out, a research design is needed. The research design should reflect the purpose(s) of the study and detail the researcher's overall plan for addressing the research question (Polit et al., 2001).

This study involves identifying defining items, which will be used to form a scale to measure the wellness attributes of a spa. As highlighted in the literature, a clear definition of the wellness spa construct is needed to identify the items to be included in the scale.

3.3.1 The C-OAR-SE procedure

The research design selected for this study follows a six-step procedure called C-OAR-SE (Rossiter, 2002), an acronym for: Construct definition, Object classification, Attribute classification, Rater identification, Scale formation, and Enumeration and reporting.

The C-OAR-SE procedure (Rossiter, 2002) focuses on the generation and

selection of items to form a scale to measure a construct. C-OAR-SE is a

thorough process providing content validity and a measurement approach


(Lloyd, 2011). Rossiter (2002) draws on previous works on the conceptualization of constructs and attribute classification, including McGuire (1989), Blalock (1964), Edwards and Bagozzi (2000), Bollen and Lennox (1991) and Law and Wong (1999); however, the total C-OAR-SE procedure is new.

The C-OAR-SE approach is easily differentiated from other scale development

procedures. C-OAR-SE is grounded in rationalism rather than empiricism in

that it is based on rational, expert judgment, rather than statistical validation

(Rossiter, 2007). Other scale development approaches (Churchill, 1979; Nunnally, 1967) adopt exploratory factor analysis models to identify the componentality of constructs, with the expectation that unless a 0.80 level of coefficient alpha is reached, a multi-item measure cannot be useful (Diamantopoulos, 2005).

C-OAR-SE satisfies the appeal for greater relevance to the people behind the

numbers (Churchill, 1979). C-OAR-SE encourages a more flexible and open-minded approach to scale
development by relying on content validity, which

ensures the items properly represent the construct (Nunnally, 1978; Rossiter,

2002). C-OAR-SE allows for reflective and formative perspectives and for

single or multi-item scales to be included (Lloyd, 2007). C-OAR-SE is based on

expert content validation and does not rely solely on statistics or psychometrics

(Rossiter, 2011).
The COAR-SE procedure involves six steps. These are outlined in the following

subsections.

3.3.1.1 Step 1: Construct definition (C-OAR-SE)

Construct definition involves the conceptual definition of the construct. Rossiter (2002, p. 308) describes a construct as 'a conceptual term used to describe a phenomenon of theoretical interest'. Rossiter (2002) and Diamantopoulos (2005) agree that construct definition should specify the object, the attribute, and the rater entity, the raters being the judges or the perceivers (Rossiter, 2002). Once the conceptual definition is identified in this step, steps two and three need to be carried out to arrive at a more complete definition.
3.3.1.2 Step 2: Object representation (C-OAR-SE)

Object representation classifies the object as being concrete or abstract.

According to Rossiter (2002), a concrete object is one in which nearly all raters

describe the object identically; an abstract object means different things to

different raters. Objects can be singular or can have multiple constituents or

components that form the object. Hadwich, Georgi, Tuzovic, Buttner & Bruhn

(2010) highlight the importance of this step in that the measurement of

constructs would not require multiple items if object and attribute could be

conceptualized as concrete or singular.

3.3.1.3 Step 3: Attribute classification (C-OAR-SE)

Attribute classification clarifies the principal components on which the object is being judged. The attribute of the construct can be classified as concrete, formed or eliciting. The attribute can be singular or can have multiple constituents or components that form the attribute. If the attribute is concrete, then the construct can be measured by a single item. If the attribute is formed, the main components add to form the attribute; the components must be concrete and all components must be included in the scale (Rossiter, 2002). An eliciting attribute has internal traits or states that have outward manifestations; these mental and physical activities are its concrete components (Rossiter, 2002). Understanding an attribute's classification indicates which measurement model is to be used (Hadwich et al., 2010).

3.3.1.4 Step 4: Rater-entity identification (C-OAR-SE)

Rossiter (2002, p. 318) states that 'constructs differ depending on whose perspective they represent' and concludes that 'the rater entity is part of the construct'. Rater-entity identification categorizes raters into three types:

individual, expert and group. An individual rater self-rates a personal attribute

when the object is oneself. Group raters are generally a sample of consumers,

industry buyers, managers or employees who rate an external object. Expert

raters can be described as a small group of judges with expertise regarding the

construct (Rossiter, 2002). Each rater type requires a different approach to

reliability assessment (Diamantopoulos, 2005).

3.3.1.5 Step 5: Scale formation (C-OAR-SE)

Scale formation refers to the combination of object and attributes to form a common scale, and involves putting together object item parts with their corresponding attribute item parts to form scale items. Scale items include the question, or 'stem', and the answer, or 'leaves'. Within this step the number of questions needed to form the scale is identified and the appropriate rating scale decided. Rossiter (2002) suggests that the question wording should refer to the object and the answer format should refer to the attribute. Each question and answer should be pre-tested for comprehension. Finally, the selected items are randomized in terms of order within the scale with a view to minimizing response-set artifacts in the obtained scores (Rossiter, 2002). Andrews (1984) highlights the necessity of this requirement if, when the same answer format is used for multiple items, 'methods variance' is to be held to a minimum.
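As a minimal illustration of the final randomization step (this sketch is not from Rossiter, 2002, and the item wordings are invented), the ordering of scale items might be handled as follows:

```python
import random

# Hypothetical wellness spa scale items: each pairs an object 'stem'
# (the question) with an attribute-focused answer format, as Rossiter
# (2002) suggests. The wordings here are invented examples only.
items = [
    "This spa's atmosphere promotes relaxation.",
    "This spa's treatments contribute to my sense of wellness.",
    "This spa's staff are knowledgeable about wellness.",
]

rng = random.Random(2011)  # fixed seed so the presented order is reproducible
presented = list(items)    # copy, so the master item list keeps its order
rng.shuffle(presented)     # randomize order to minimize response-set artifacts
```

The shuffle changes only the presentation order; the item content, and hence the scale's content validity, is untouched.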

3.3.1.6 Step 6: Enumeration (C-OAR-SE)

Enumeration is the method that produces a total scale score derived from the

indices and averages of the scale items. The type of scale varies depending on

the combination of object and attribute classification. Rossiter (2002) proposes

six distinct ways to enumerate; these vary "from single-item score equaling the

total score to two types of index, a double index, an average, and averages

which are then indexed" (Rossiter, 2002, p. 324). An index is usually a

summation of item scores (Rossiter, 2002); when using formed attributes,

Rossiter (2002) follows the profile rule (Law, Wong, & Mobley, 1998) where a

minimum level for each component must be exceeded. Finally, the scores are transformed into a meaningful range and the reliability of the scale score is reported. The enumeration rule implies that indices will receive an absolute total score and that items of eliciting attributes will receive average scores.
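As a hypothetical sketch of the enumeration options just described (the component names, scores and minimum levels below are invented for illustration, not taken from the study), the index, average and profile rules could be expressed as:

```python
# Hypothetical sketch of three C-OAR-SE enumeration options.
# Component names, scores and minimum levels are invented examples.

def index_score(item_scores):
    """Index: an absolute total formed by summing item scores."""
    return sum(item_scores)

def average_score(item_scores):
    """Average: used for the items of eliciting attributes."""
    return sum(item_scores) / len(item_scores)

def meets_profile(component_scores, minimum_levels):
    """Profile rule (Law, Wong, & Mobley, 1998): for a formed attribute,
    a minimum level must be exceeded on every component."""
    return all(component_scores[c] > minimum_levels[c] for c in minimum_levels)

scores = {"facilities": 4, "treatments": 5, "staff": 3}    # invented ratings
minimums = {"facilities": 2, "treatments": 3, "staff": 2}  # invented thresholds

total = index_score(scores.values())         # absolute total: 12
qualifies = meets_profile(scores, minimums)  # True: every minimum exceeded
```

The profile rule illustrates why, for formed attributes, a high total alone is not sufficient: a single component falling below its minimum disqualifies the object regardless of the summed score.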
3.3.2 Validity of the C-OAR-SE procedure

Rossiter's (2002) procedure for scale development challenges conventional

procedures by emphasizing the need to ensure that a measure represents the

construct in a valid way as opposed to using statistical analysis of the measure

itself to define the construct (Lloyd, 2010a). The C-OAR-SE procedure for scale development therefore relies totally on content validity, arguing that this provides better measures than traditional procedures such as those of Churchill (1979) and Nunnally (1978).

Rossiter (2002, p. 315) suggests that all that is needed is a 'set of distinct components as decided by raters' and the involvement throughout of expert judges. These main components are present in the measuring scale for each rater group (in this study, rater groups are groups of stakeholders) because the items representing them are the defining items for the attribute. Rossiter (2002, p. 311) believes 'content validity is established in that the items are a good representation of the construct' and are sufficient for use in the scale (Lloyd, 2011).

Content validity is established by conducting pre-interviews with raters followed

by the involvement of expert raters throughout the process. This rational

approach can utilize both single-item and multiple-item measures. Rossiter (2002) believes this approach is sufficient if the object and attribute are identified as concrete and singular, and that statistical analysis of the measure results in the loss of a scale's validity (Lloyd, 2007).


In contrast to the C-OAR-SE procedure, procedures developed by Churchill

(1979) and Nunnally (1967) use a multiple-item approach to identify the

componentality of constructs. Most marketing research, in particular with regard to measuring marketing constructs, has been influenced by the conventional procedure designed by Churchill (1979) (Bergkvist & Rossiter, 2007; West, 2006). Rossiter (2002) suggests that Churchill (1979) appears to focus

on psychometric measure-to-score correspondence rather than on construct

definition-to-measurement correspondence (Lloyd, 2010). Rossiter (2011)

argues that if a measure is not highly content-valid then no subsequent

psychometric properties can help it.

The wording of the question part of the item is seen as a fundamental aspect of

content validity within C-OAR-SE. The common focus on scale statistics by

researchers neglects to examine items on which the statistics are based

(Rossiter, 2007). Researchers developing new scale measures, such as Collier and Bienstock (2006), believe that a large number of poorly worded items will somehow 'cancel out' content errors, thus ensuring content validity.

3.3.3 Reliability of C-OAR-SE

Reliability can be estimated for the score obtained from a scale, but not for the scale itself (Weiss & Davison, 1981). Therefore the content validity of the scale must be correctly established before precision scores can be assumed to represent what they are meant to represent.
The method for estimating scale-score reliability utilized in C-OAR-SE differs

according to the rater entity and the type of attribute in the construct. Rossiter

(2002) believes the rater entity makes a fundamental difference to the way

reliability is assessed in the C-OAR-SE procedure by affecting the way

precision is estimated. If experts are used as the rater entity, the reliability of the

mean score increases with the number of experts. In addition, if an attribute is formed, then the list of main components and their sub-components, as ratified by expert agreement, is reliable. That is, if the rater understands the items and rates them truthfully, the final score will be precise and thus reliable (Rossiter, 2002, p. 328).
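The claim that the reliability of a mean expert score increases with the number of experts can be illustrated through the standard error of the mean, which shrinks as the panel grows. This is a general statistical illustration, not Rossiter's own formula, and the ratings below are invented:

```python
import statistics

def sem(ratings):
    """Standard error of the mean: sample standard deviation / sqrt(n).
    A smaller SEM indicates a more precise, and thus more reliable,
    mean score."""
    return statistics.stdev(ratings) / len(ratings) ** 0.5

small_panel = [4, 5, 3, 4]              # four hypothetical expert ratings
large_panel = [4, 5, 3, 4, 4, 5, 3, 4]  # eight experts with the same spread

# With the same spread of opinion, doubling the panel tightens the mean.
more_reliable = sem(large_panel) < sem(small_panel)  # True
```

Because the denominator grows with the square root of the panel size, each additional expert yields a diminishing, but always positive, gain in the precision of the mean score.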

Rossiter (2002) also dismisses the use of other approaches to the

establishment of reliability, such as test-retest reliability, arguing that if people

give different answers to the same items on two occasions for no apparent

reason, then all that this inconsistency shows is that the item is ambiguous and

not concrete. Such ambiguity indicates a content validity issue (Rossiter, 2002,

p. 328). Therefore test-retest reliability provides no information about the

accuracy of scores obtained from the test.

3.3.4 Criticisms of the C-OAR-SE procedure

Rossiter (2002) posits that the C-OAR-SE procedure is grounded in rationalism

rather than in empiricism. Therefore, there is no empirical test, beyond expert

agreement, that can prove that C-OAR-SE produces scales that are more valid

than those produced by the traditional procedures.


Diamantopoulos (2005) criticizes the C-OAR-SE procedure for its sole reliance on content validity. However, since the first publication of the C-OAR-SE article, the conceptualization of validity and reliability proposed by Borsboom, Mellenbergh and van Heerden (2004) has supported Rossiter's definitions of validity and reliability. Borsboom et al. (2004) believe the question is whether a scale or test measures what it is supposed to measure (i.e., is it content-valid?). Establishing content validity is the main purpose of C-OAR-SE (Lloyd, 2007).

Diamantopoulos (2005) is also critical of the inclusion of the rater entity in the construct definition. Other researchers (Aaker, 1996; Aaker, Kumar, & Day, 2004), however, find the rater entity highly relevant.

3.4 Justification for adopting the C-OAR-SE procedure for this study

To design and develop an accurate measurement tool it is important to

understand what it is that the tool is actually trying to measure. This requires a clear definition of the construct in question and an understanding of its traits (Hair et al., 2002). The definition of a construct must therefore precede the

identification of the properties or attributes of the object being studied. A good

measurement scale should also be reliable and valid: it should measure what it
