Multi-Criteria Decision Analysis
Abstract
In multi-criteria decision analysis, a crucial step is the assignment of weights to criteria. These weights establish the
relative level of importance of each of the relevant criteria in making a complex decision. Many methods have been
developed for conducting this operation, and they generally fall into two classes: ratio assignment and approximate
techniques. This paper reviews the major methods available in the literature and discusses their use, strengths, and
limitations. Our goal is to provide a taxonomy for aiding future decision analysts in determining the best weighting
methods to use given the particulars of the decision to be made. The paper concludes with a summary of the methods
to act as this guide for method selection in multi-criteria decision environments.
Key Words: multi-criteria decision making, multi-criteria decision analysis, attribute weighting, MCDM, MCDA
1. Introduction
Keeney and Raiffa (1976) describe decision analysis as a "prescriptive approach...to think hard and systematically
about some important real problems" (p. vii). Thus, at its core, it helps us to understand how we should make decisions.
In decision making, the guiding principle is to choose the best alternative, or course of action, from a multitude of alternatives; it may also involve constructing a total or partial order over that same multitude (Ahn, 2011). Decision analysis is the formal process of choosing from among a candidate set of decisions to determine which alternative is most valuable to the decision maker. Complicating most real decisions is that we wish to achieve
multiple aims; that is, we evaluate candidate solutions based on a potentially large number of criteria. For an
apartment, we may consider cost, size, location, and amenities, whereas a job selection problem may cause us to
consider salary, growth potential, location, and benefits. Each of these criteria is important to us and we need to
consider them all to comprehensively evaluate our problem using multi-criteria decision analysis (MCDA).
While the literature on individual techniques and MCDA in general is vast, this paper focuses on summarizing common weighting methods and their advantages and disadvantages, and ultimately on providing a taxonomy to aid the reader in determining an appropriate method given the particulars of the decision to be made.
Research has shown that additive models are the most extensively used model in multi-criteria decision analysis (von
Nitzsch & Weber, 1993). A variety of criteria weighting methods are in global use for solving multi-criteria decision problems; among the more prominent, robust, and accurate of these approaches is the introduction of surrogate weights (Danielson & Ekenberg, 2016). Additionally, many weighting methods are built on the assumption of the use
of an additive value function. As such, we present the analysis in this paper under the assumption that an additive
value model of the following form is being used by the decision maker:
$V(\mathbf{x}) = \sum_{i=1}^{n} w_i v_i(x_i)$ (1)
where $n$ is the total number of criteria being considered, $w_i$ is the weight of the $i$th criterion, $v_i(x_i)$ is the $i$th value function, and $\mathbf{x}$ is the vector of all criteria values. Within a value function, all weights must sum to 1:
$\sum_{i=1}^{n} w_i = 1$ (2)
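To make Eqs. 1 and 2 concrete, the following is a minimal Python sketch of an additive value model, anticipating the car-buying example used later in the paper; the weights, attribute ranges, and linear value functions are illustrative assumptions, not values taken from the paper.

```python
# Minimal sketch of the additive value model in Eq. 1 under assumed,
# illustrative weights and linear single-criterion value functions.

def linear_value(worst, best):
    """Return a value function mapping [worst, best] linearly onto [0, 1]."""
    def v(x):
        return (x - worst) / (best - worst)
    return v

# Assumed weights; they must sum to 1 per Eq. 2.
weights = {"price": 0.40, "reliability": 0.30, "safety": 0.15,
           "attractiveness": 0.10, "gas_mileage": 0.05}

# Assumed attribute ranges; note that for price, cheaper is better.
value_fns = {"price": linear_value(30000, 20000),
             "reliability": linear_value(1, 5),
             "safety": linear_value(1, 5),
             "attractiveness": linear_value(1, 10),
             "gas_mileage": linear_value(15, 45)}

def additive_value(x):
    """Eq. 1: V(x) = sum_i w_i * v_i(x_i)."""
    return sum(w * value_fns[c](x[c]) for c, w in weights.items())

car = {"price": 24000, "reliability": 4, "safety": 5,
       "attractiveness": 6, "gas_mileage": 32}
print(f"V(car) = {additive_value(car):.3f}")  # a single scalar in [0, 1]
```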
The use of an additive utility model requires that criteria used in the model are mutually preferentially independent
(Keeney, 1974). This means that the weight assigned to any particular criterion is not influenced by the weight
assigned to any other attribute. For example, consider choosing between departure times of 6 a.m. and 10 a.m. for a flight, with respective costs of $250 and $300. Mutual preferential independence means that you prefer the
cheaper flight to the more expensive one regardless of departure time, and you prefer the later flight to the earlier one
regardless of cost. If you prefer the later flight regardless of the ticket price, but the price dictates your departure
preference, then departure time is preferentially independent of cost, but they are not mutually preferentially
independent. If the attributes are not mutually preferentially independent, there are techniques to combine them using a joint utility function that are beyond the scope of this paper. Additionally, all criteria should be mutually exclusive and collectively exhaustive; that is, they represent the entirety of relevant criteria and each is independent of all the others.
This paper does not consider strategies for assessing weight factors in other choice frameworks (e.g., ordered-
weighted averaging or multiplicative utility models). Nor does it consider techniques for obtaining the coefficients of
linear models, proper or improper, in general. Nor does this research focus on defining value functions for use in Eq. 1; rather, our focus is on how a decision maker should best determine appropriate weights for different
criteria in a MCDA problem. The aim is to provide a comprehensive assessment of the state of the art of weighting
methods, as well as a comparison analysis of the use of these methods in the context of a realistic problem. Throughout
this paper, we refer to the person or persons using any particular technique for eliciting attribute weights as simply the
user. Such users include decision analysts, risk managers, engineers, and decision makers. Human judgment and decision making are subject to a large number of biases that produce significant deviations from the normative rules of probability and utility theory; decision and risk analysts cite these very biases to argue for the use of modeling and analysis tools to minimize errors in decision making (Montibeller & Winterfeldt, 2015).
As observed in the literature over the past three decades, there are several important pitfalls to be aware of when
assigning attribute weights as described below:
Objective and attribute structure. The structure of the objectives and the selection of weighting methods
affect results, and should be aligned to avoid bias.
Attribute definitions affect weighting. The detail with which certain attributes are specified affects the weight assigned to them; that is, the division of an attribute can increase or decrease its weight. For example, weighing price, service level, and distance separately as criteria for a mechanic selection led to different results than weighing shop characteristics (comprised of price and service level) and distance did (Pöyhönen, Vrolijk, & Hämäläinen, 2001).
Number of attributes affects method choice. Weighting attributes directly or indirectly becomes very difficult when one must consider many of them (e.g., ten or more), owing to the greater difficulty of answering all the questions needed to develop attribute weights; Miller (1956) advocates the use of five to nine attributes to avoid cognitive overburden.
More attributes is not necessarily better. As the number of attributes increases, there is a tendency for
the weights to equalize, meaning that it becomes harder to distinguish the difference between attributes in
terms of importance as the number of significant attributes increases (W. G. Stillwell, von Winterfeldt, &
John, 1987).
Attribute dominance. If one attribute is weighted more heavily than all other attributes combined, the correlation between that attribute's score and the total preference score approaches one.
Weights compared within but not among decision frameworks. The interpretation of an attribute weight within a particular modeling framework should be the same regardless of the method used to obtain weights (Pöyhönen, 1998); however, the same consistency in attribute weighting cannot be said to be present across all multi-criteria decision analysis frameworks (Choo, Schoner, & Wedley, 1999).
Consider the ranges of attributes. People tend to neglect attribute ranges when assigning weights using methods that do not stress them (Fischer, 1995; von Nitzsch & Weber, 1993); rather, these individuals seem to apply some intuitive interpretation of weights as a very generic degree of importance of attributes, as opposed to explicitly stating ranges, which is preferred (Belton & Gear, 1983; Korhonen & Wallenius, 1996; Salo & Hämäläinen, 1997). This problem could occur when evaluating job opportunities. People may assume that salary is the most important factor, but if the salary range is very narrow (e.g., a few hundred dollars), then other factors such as vacation days or available benefits may in fact be more important to the decision maker's happiness.
There is no superior method for eliciting attribute weights independent of a problem's context. Flexible elicitation has seen continued use to improve the applicability of the classical tradeoff elicitation procedure (de Almeida, de Almeida, Costa, & de Almeida-Filho, 2016). Consequently, users should be aware of how each
method works, its drawbacks and advantages, the types of questions asked by the method, how these answers are used
to generate weights, and how different the weights might be if other methods are used. Peer reviewers should be mindful of how each of these methods for eliciting attribute weights is used in practice and how users of these methods interpret the results. The specific elicitation method is not itself the only ingredient in stimulating this discussion: the weighting methods are only tools used in the analysis, and one should focus on the process by which the weights are used (Pöyhönen, 1998).
While other categorizations of methods could be examined, this paper will concentrate on two general approaches
for assigning weights to preference attributes within the context of a multi attribute utility model: ratio assignment
and approximate techniques. The difference between these methods lies in the nature of the questions posed to elicit
weights. Ratio assignment techniques assign a score to each attribute based on its absolute importance relative to a
standard reference point or relative importance with respect to other attributes. The resulting weights are obtained by
taking the ratio of each individual attribute score to the sum of the scores across all attributes. Approximate techniques
assign an approximate weight to each attribute strictly according to their ranking relative to other attributes with
respect to importance. Approximate techniques appeal to principles of order statistics to justify weights in the absence
of additional information on relative preference. Ratio assignment techniques will first be discussed, followed by
approximate techniques.
This paper presents several weighting techniques in detail. Section 2 presents ratio assignment techniques and Section 3 presents approximate techniques. The techniques are
ordered first by direct and then indirect methods and then notionally from easiest for the decision maker to implement
and use to the more difficult weighting methods, requiring more time and resources to set up the weights for the
attributes. Borcherding, Eppel, and Von Winterfeldt (1991) suggest that the ratio, swing, tradeoff, and pricing out methods are the most commonly used in MCDA; however, more recently researchers have focused on direct methods for determining weights, including equal and rank-order weighting methods. Another approach involving ranked weights develops weights based on intensity of dominance rather than on approximate weights; this method has yielded more accurate results in multiattribute decision making (Ahn & Park, 2008).
Weights are often obtained judgmentally with indirect methods (W. Stillwell, Seaver, & Edwards, 1981); therefore,
direct methods that remove some of the subjectivity while determining appropriate weights have become increasingly
popular. Finally, Section 4 provides a comprehensive overview and comparison of the techniques and concludes the paper with final thoughts.
2. Ratio Assignment Techniques
Ratio assignment techniques ask decision makers questions whose answers imply a set of weights corresponding to the user's subjective preferences. The result of this questioning is a set of scores or points assigned to each attribute
from which the corresponding weights are calculated after normalizing each attribute score with respect to the total
score across all attributes. A variety of ratio assignment techniques exist, including: 1) the direct assignment technique (DAT), 2) the simple multi attribute rating technique (SMART) and its variants, 3) the swing weighting technique (SWING), and 4) simple pairwise comparison (PW). Each is discussed in the following subsections: the method is introduced, its steps are described, and then its strengths and limitations are presented.
2.1 Direct Assignment Technique (DAT)
The Direct Assignment Technique (DAT) asks users to assign weights or scores directly to preference attributes.
For example, the user may need to divide a fixed pot of points (e.g., 100) among the attributes. Alternatively, users
may also be asked to score each attribute over some finite scale (e.g., 0 to 100) and the resulting weights are calculated
by taking the ratio of individual scores to the total score among all attributes.
The first approach asks users to divide a fixed pot of points among the attributes according to importance, where higher importance attributes receive a larger share than those of lesser importance. For example, if the total pot consists of 100 points, users would assign a portion of this
total among the set of attributes.
The second approach considers a finite range of potential scores and asks users to assign a point value to each attribute according to its importance, where higher importance attributes receive more points than those of lesser importance. For example, if scores ranging from 0 to 100 are considered, users would choose a point value
between these limits to establish the relative importance among attributes.
As previously mentioned, it is important that an objective is established and the ranges (swing) for each attribute
be defined, so a common example will be used throughout this paper. We will consider the purchase of a car for a
small family, early in their careers, with one small child and a short commute. One could see that the relative weighting
of attributes might change if the problem definition changed, e.g., if the decision maker had a long commute or large
family.
The attributes will use notional ranges for the remainder of this paper. Again, the relative weight that a decision
maker would apply may be impacted by the range. A narrow purchase price range of $20,000 to $20,200 would have
less importance than that of a larger range. The criteria used for the analysis of this choice are Purchase Price,
Attractiveness, Reliability, Gas Mileage, and Safety Rating. Assume a fixed pot of 1000 points to be divided among
the five attributes. These criteria, their abbreviations, their least and most preferred value, and scores are shown in
Table 1.
$w_i = \frac{s_i}{\sum_{j=1}^{n} s_j}$ (3)

where $s_i$ is the score assigned to the $i$th attribute.
In the car buying example, based on a fixed budget of points, the weights for each attribute can be readily
calculated using Eq. 3. This result is shown in Table 2.
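As a sketch of this calculation, the snippet below normalizes a fixed pot of points into weights via Eq. 3; the individual allocations are illustrative assumptions consistent with the example's 1000-point pot, not the paper's elicited values.

```python
# Direct Assignment Technique (Eq. 3): divide a fixed pot of points among
# the attributes and normalize. The allocations below are an assumed
# division of the 1000-point pot from the car-buying example.
points = {"Purchase Price": 400, "Reliability": 300, "Safety": 150,
          "Attractiveness": 100, "Gas Mileage": 50}

total = sum(points.values())                   # 1000
weights = {attr: p / total for attr, p in points.items()}

for attr, w in weights.items():
    print(f"{attr}: {w:.3f}")                  # weights sum to 1 (Eq. 2)
```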
2.1.2 Strengths of this Approach
This approach is the most straightforward of the techniques presented in this paper for eliciting attribute weights
in that it does not require the user to formally establish a rank order of attributes a priori. The number of questions
needed to assign weights using the direct assignment technique is equal to the number of preference attributes. Thus,
the effort required to obtain attribute weights scales linearly with the number of attributes.
2.2 Simple Multi Attribute Rating Technique (SMART)
The Simple Multi Attribute Rating Technique (SMART) (Edwards, 1977; von Winterfeldt & Edwards, 1986) is
an approach for determining weighting factors indirectly through systematic comparison of attributes against the one
deemed to be the least important. SMART consists of two general activities: 1) Rank order attributes according to the
relative importance overall, and 2) Select either the least or most important attribute as a reference point and assess
how much more or less important the other attributes are with respect to the reference point. This step involves
calculating attribute weights from ratios of individual attribute scores to the total score across all attributes.
Methodological improvements to SMART, known as SMARTS and SMARTER, were proposed by Edwards and
Barron (1994). SMARTS (SMART using Swings) uses linear approximations to single-dimension utility functions,
an additive utility model, and swing weights to improve weighting (Edwards & Barron, 1994). SMARTER (SMART
Exploiting Ranks) builds on SMARTS but substitutes the second of the SMARTS swing weighting steps, instead
using calculations based on ranks.
SMART Elicitation Step 1 asks the user to holistically rank order the attributes; the ordering may be either from most to least important or from least to most important. A number of approaches exist to assist in
holistic ranking, the most popular and well known being a pairwise ranking (Saaty, 1980).
For example, consider our automobile purchase problem. The output from Elicitation Step 1 would be a rank
ordering of the five relevant criteria from least to most important as shown in Table 3.
Using the point scores assigned to each of the attributes in Step 3, the final step of the SMART process is to
calculate attribute weights. This is done by normalizing each attribute score against the total score among all attributes
as shown in Eq. 3. In the car buying example, the total points distributed among all five preference attributes is 50 +
100 + 150 + 300 + 400 = 1000 points. The corresponding weights for each attribute are calculated as shown in Table
5. Note that this method can generate precisely the same weights as the DAT method (assuming the correct points are
used in both).
Table 5: Weight Calculation for SMART
Abbreviation Criteria Formula Weight
(G) Gas Mileage 50/1000 =0.050
(A) Attractiveness 100/1000 =0.100
(S) Safety 150/1000 =0.150
(R) Reliability 300/1000 =0.300
(P) Purchase Price 400/1000 =0.400
Sum 1000 points =1.00
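The following is a minimal sketch of the SMART arithmetic; the base score of 10 and the "times more important" multipliers are assumptions chosen so that the normalized scores reproduce the weights of Table 5.

```python
# SMART sketch: rank attributes from least to most important, anchor the
# least important at a base score, score the rest as multiples of it, and
# normalize. Base score and multipliers are illustrative assumptions.
base_score = 10
multipliers = [("Gas Mileage", 1), ("Attractiveness", 2), ("Safety", 3),
               ("Reliability", 6), ("Purchase Price", 8)]

scores = {attr: base_score * m for attr, m in multipliers}
total = sum(scores.values())                   # 200
weights = {attr: s / total for attr, s in scores.items()}
print(weights)  # 0.05, 0.10, 0.15, 0.30, 0.40, matching Table 5
```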
2.3 Swing Weighting Technique (SWING)
The Swing Weighting Technique (von Winterfeldt & Edwards, 1986) is an approach for determining weighting
factors indirectly through systematic comparison of attributes against the one deemed to be most important. SWING
consists of two general activities: 1) Rank order attributes according to the relative importance of incremental changes
in attribute values considering the full range of possibilities; and 2) Select either the least or most important attribute
as a reference point and assess how much more or less important the other attributes are with respect to the reference
point. This step involves the calculation of attribute weights as the ratio of points assigned to an attribute to the total
points assigned to all attributes.
Just as was the case with the SMART method, this method begins by rank ordering relevant attributes. For
illustration purposes, we will use the same rank ordering as before (shown in Table 3).
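A minimal sketch of the swing weighting arithmetic follows; anchoring the most important swing at 100 points is the conventional SWING device, while the remaining swing scores are illustrative assumptions rather than the paper's elicited values.

```python
# SWING sketch: the most important attribute's worst-to-best swing anchors
# at 100 points; every other swing is scored relative to it, and all
# scores are then normalized into weights. Scores below are assumptions.
swing_scores = {"Purchase Price": 100,         # reference swing
                "Reliability": 75, "Safety": 40,
                "Attractiveness": 25, "Gas Mileage": 15}

total = sum(swing_scores.values())
weights = {attr: s / total for attr, s in swing_scores.items()}
for attr, w in weights.items():
    print(f"{attr}: {w:.3f}")
```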
2.3.2 Strengths of this Approach
SWING considers the utility over the full range of each attribute. SWING need not be repeated when attributes are removed or added, unless the attribute being removed is the most important one or the attribute being added assumes that role.
The number of questions needed to assign weights using the SWING technique is equal to one less than the
number of preference attributes. Thus, the effort required to obtain attribute weights scales linearly with the number
of attributes.
2.4 Simple Pairwise Comparison Technique (PW)
The simple pairwise comparison technique for eliciting weights systematically considers all pairs of attributes in
terms of which is more important. For each pairwise comparison, a point is assigned to the attribute that is considered
more important. In the end, attribute weights are determined as the ratio of points assigned to each attribute divided
by the total number of points distributed across all attributes.
Purchase Price vs. Attractiveness Purchase Price Wins
Purchase Price vs. Reliability Purchase Price Wins
Purchase Price vs. Gas Mileage Purchase Price Wins
Purchase Price vs. Safety Rating Purchase Price Wins
Attractiveness vs. Reliability Reliability Wins
Attractiveness vs. Gas Mileage Attractiveness Wins
Attractiveness vs. Safety Rating Safety Wins
Reliability vs. Gas Mileage Reliability Wins
Reliability vs. Safety Rating Reliability Wins
Gas Mileage vs. Safety Rating Safety Wins
The point distribution obtained using these ten comparisons is shown in Table 8.
Note that the least important attribute in the above example has a score of zero points (as it won none of the
pairwise comparisons). The resulting weight factor in this case will be zero unless some constant offset or systematic
bias is applied to all scores. Such an offset or bias desensitizes the resulting weights to changes in the points distributed via the pairwise ranking procedure: the greater the offset, the less sensitive the resulting weighting distribution will be to small changes in attribute scores. For example, if an offset of 2 points or 10 points is
used, the revised score distributions shown in Table 9 would result.
Table 10: Weight Calculation for Pairwise Comparison
Abbreviation Criteria Formula Weight
(P) Purchase Price 4/10 =0.4
(R) Reliability 3/10 =0.3
(S) Safety 2/10 =0.2
(A) Attractiveness 1/10 =0.1
(G) Gas Mileage 0/10 =0.0
Sum 10 points =1.00
To demonstrate the impact of imposing an offset or systematic bias to the attribute scores, the weights obtained
from adding 2 points and 10 points to each are shown in Table 11.
As the size of the offset or bias increases, the weights become more equally distributed across attributes. In the limit of an infinite offset, the approach's results mirror those of the equal weighting technique.
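The effect of the offset can be sketched directly; the win counts below follow the ten comparisons listed above, and the offsets mirror those considered in Table 9.

```python
# Pairwise comparison sketch: each attribute's score is its number of wins
# across the C(5, 2) = 10 comparisons; an optional constant offset is
# added to every score before normalizing (Eq. 3).
wins = {"Purchase Price": 4, "Reliability": 3, "Safety": 2,
        "Attractiveness": 1, "Gas Mileage": 0}

def offset_weights(wins, offset=0):
    scores = {a: n + offset for a, n in wins.items()}
    total = sum(scores.values())
    return {a: s / total for a, s in scores.items()}

for off in (0, 2, 10, 1000):
    w = offset_weights(wins, off)
    print(off, {a: round(v, 3) for a, v in w.items()})
# As the offset grows, all weights approach the equal weight 1/5 = 0.2.
```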
More elaborate pairwise methods carry difficulties of their own; for example, a key problem with the Analytic Hierarchy Process (AHP) is defining the actual meaning of the priority vector derived from its principal eigenvalue method (e Costa & Vansnick, 2008).
3. Approximate Techniques
Approximate techniques establish weights based primarily on the ordinal ranking of attributes by relative
importance. Approximate techniques adopt the perspective that the actual attribute weights are samples from a
population of possible weights, and the distribution of weight may be thought of as a random error component to a
true weight (Jia, Fischer, & Dyer, 1998). As a result, approximate techniques seek the expected value of attribute
weights and use these expected weights in utility models. A variety of approximate techniques exist, including: 1)
equal weighting, 2) rank ordered centroid technique, 3) rank summed weighting technique, and 4) rank reciprocal
technique.
3.1 Equal Weighting Technique
The Equal Weighting Technique assumes that no information is known about the relative importance of
preference attributes or that the information pertinent to discriminating among attributes based on preference is
unreliable. Under these conditions, one can adopt maximum entropy arguments and assume that the distribution of
true weights follow a uniform distribution (Kapur, 2009).
Under this assumption, each of the $N$ attributes receives the same weight:

$w_i = 1/N$ (4)
Alternative techniques should be used if this assumption is not applicable, or if more information exists that could
assist in establishing a quantitative difference between attributes.
When some information is available to help distinguish between attributes on the basis of importance, alternative
techniques will produce better estimates of attribute weights. When the number of attributes is 10 or fewer, it is more useful to spend resources first establishing a rank ordering of the attributes using group discussion or pairwise ranking and then following up with an alternative approximate technique.
3.2 Rank Ordered Centroid Technique (ROC)
The Rank Ordered Centroid Technique assumes knowledge of the ordinal ranking of preference attributes with
no other supporting quantitative information on how much more important one attribute is relative to others (Barron
& Barrett, 1996). As a consequence of this assumption, it is assumed that the weights are uniformly distributed on
the simplex of rank ordered weights (Jia et al., 1998).
ROC Step 1: Rank order the attributes from most to least important
ROC Step 2: Calculate the rank ordered centroid for each attribute
The Rank Ordered Centroid Technique assigns to each of N rank ordered attributes a weight $w_i$ according to Eq. 5:
$w_i = \frac{1}{N} \sum_{k=i}^{N} \frac{1}{k}$ (5)
where again the attributes are ordered from most important (i = 1) to least important (i = N).
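Eq. 5 is straightforward to compute; the sketch below evaluates it for the five-attribute car-buying example.

```python
# Rank Ordered Centroid (Eq. 5): w_i = (1/N) * sum_{k=i}^{N} (1/k).
def roc_weights(n):
    return [sum(1.0 / k for k in range(i, n + 1)) / n
            for i in range(1, n + 1)]

print([round(w, 3) for w in roc_weights(5)])
# -> [0.457, 0.257, 0.157, 0.09, 0.04], from most to least important
```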
In our car buying example, the weights assigned to each attribute can be calculated as shown in Table 13. Note
that the predefined formula used in the approximate techniques limits the number of weights available for assignment.
Thus, while we can see the ordinality of weight preferences remains, the magnitude of weights and distance between
them has changed. This is the tradeoff a decision maker must make: if more control over weights is preferred, a ratio assignment technique provides it; if time is more crucial or if decision makers are not as informed regarding the problem, an approximate technique may prove more appropriate.
3.3 Rank Summed Weighting Technique (RS)
To approximate attribute weights, the Rank Summed Weighting technique uses information on the rank order of attributes on the basis of importance, weighting each attribute in proportion to its rank (W. Stillwell et al., 1981). The technique assigns to each of N rank ordered attributes a weight $w_i$ according to Eq. 6:

$w_i = \frac{2(N + 1 - i)}{N(N + 1)}$ (6)

where again the attributes are ordered from most important (i = 1) to least important (i = N).
The rank exponent weighting technique is a generalization of the rank sum weighting technique, as shown in Eq. 7.
$w_i = \frac{(N - i + 1)^p}{\sum_{j=1}^{N} (N - j + 1)^p}$ (7)
In this case, a p of 0 results in equal weights, p = 1 is the rank sum, and increasing p values further disperse the
weight distribution among attributes.
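A short sketch of Eq. 7 follows; setting p = 1 recovers the rank sum weights of Eq. 6.

```python
# Rank exponent weights (Eq. 7); p = 1 recovers the rank sum weights of
# Eq. 6 and p = 0 yields equal weights.
def rank_exponent_weights(n, p=1.0):
    raw = [(n - i + 1) ** p for i in range(1, n + 1)]
    total = sum(raw)
    return [r / total for r in raw]

print([round(w, 3) for w in rank_exponent_weights(5, p=1)])
# -> [0.333, 0.267, 0.2, 0.133, 0.067], the rank sum weights of Table 14
print([round(w, 3) for w in rank_exponent_weights(5, p=2)])
# -> [0.455, 0.291, 0.164, 0.073, 0.018]; weight concentrates at the top
```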
In the car buying example above, the weights assigned to each attribute can be calculated using the Rank Summed
Weighting technique as shown in Table 14. Once again, ordinality of criteria preference remains when compared with
previous methods, but the spread of weights changes due to the predetermined rank summed formula.
Table 14: Weight Calculation for Rank Sum
Abbreviation Criteria Formula Weight
(P) Purchase Price 2(5 + 1 - 1)/(5(5 + 1)) =0.333
(R) Reliability 2(5 + 1 - 2)/(5(5 + 1)) =0.267
(S) Safety 2(5 + 1 - 3)/(5(5 + 1)) =0.200
(A) Attractiveness 2(5 + 1 - 4)/(5(5 + 1)) =0.133
(G) Gas Mileage 2(5 + 1 - 5)/(5(5 + 1)) =0.067
Sum =1.000
3.3.2 Strengths of this Approach
The Rank Summed Weighting technique provides a means of deriving meaningful weights based solely on ordinal rankings of attributes by importance. This is particularly helpful in situations involving many users with diverse opinions, where a rank ordering of attributes may be the only aspect of preference on which consensus can be achieved. Weights under the Rank Summed Weighting technique can be easily computed using standard spreadsheet tools or a hand calculator.
3.4 Rank Reciprocal Weighting Technique (RR)
The rank reciprocal method is similar to the ROC and RS methods. It uses the reciprocal of each rank, divided by the sum of the reciprocals of all ranks (W. Stillwell et al., 1981).
RR Step 1: Rank order the attributes from most to least important
RR Step 2: Calculate the rank reciprocal weight for each attribute
The Rank Reciprocal Weighting Technique assigns to each of N rank ordered attributes a weight $w_i$ according to Eq. 8:

$w_i = \frac{1/i}{\sum_{j=1}^{N} 1/j}$ (8)
where again the attributes are ordered from most important (i = 1) to least important (i = N). In the car buying example
above, the weights assigned to each attribute can be calculated using the Rank Reciprocal Weighting technique as
shown in Table 15. Again, ordinality of criteria preference remains when compared with previous methods, but the
spread of weights changes due to the predetermined rank reciprocal formula.
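Eq. 8 can be sketched in the same fashion as the other approximate techniques:

```python
# Rank reciprocal weights (Eq. 8): w_i = (1/i) / sum_{j=1}^{N} (1/j).
def rank_reciprocal_weights(n):
    total = sum(1.0 / j for j in range(1, n + 1))
    return [(1.0 / i) / total for i in range(1, n + 1)]

print([round(w, 3) for w in rank_reciprocal_weights(5)])
# -> [0.438, 0.219, 0.146, 0.109, 0.088], from most to least important
```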
4. Conclusions
This paper has discussed eight major techniques for computing weights in a multi-criteria decision making
environment that have been developed and utilized over the last several decades. In a comparison of the categories of
ratio assignment and approximate techniques, Jia et al. (1998) found that:
The selection accuracy of quantitatively stated ratio weights was as good as or better than that of the best approximate methods under all conditions studied (except when the assessed weights are purely random).
Because linear decision models are quite robust with respect to change of weights (Dawes & Corrigan, 1974),
using approximate weights yields satisfactory quality under a wide variety of circumstances.
Despite the robustness of linear models, even noisy information about the ranking of attributes improves
decisions substantially. When response error is present, decision quality decreases as the number of attributes
or the number of alternatives rated against these attributes increases.
Based on the literature that was reviewed, the observed advantages and disadvantages of each technique, as well
as its potential uses, are summarized in the taxonomy shown in Table 16.
When faced with many attributes, it is often more convenient to use approximate techniques for assigning attribute
weights, in the absence of more information. One can also use approximate techniques for an initial weighting and
further refine using a ratio assignment technique. Whenever possible, rationale should accompany any judgments
leading to attribute weights. Rationale includes documenting the information and reasoning that support each
individual judgment, even if it is based strictly on intuition. Providing justification increases model transparency and
exposes the model to critical review.
Further, when possible, it is useful to apply more than one technique for eliciting weights of preference attributes.
If the results following the honest application of two or more techniques are the same, the credibility of the
corresponding utility model is increased. In contrast, if the results are not the same, the disagreement provides a basis
for settling differences in opinion, discussing model limitations and assumptions, and diagnosing hidden biases.
Ultimately, there is no one universal right way to conduct weighting for an MCDA problem. As discussed earlier, ordinality is preserved when any of the techniques is applied correctly. However, coarser weights result from approximate techniques, while more refined weights are possible with ratio techniques. Which method is appropriate depends on the problem context. This paper has attempted to benefit practitioners by providing a comprehensive review and comparison of common weighting methods. It is the hope of the authors that it has provided a comprehensive taxonomy and guide for those who need to choose a method for weighting.
5. References
Ahn, B. S. (2011). Compatible weighting method with rank order centroid: Maximum entropy ordered weighted
averaging approach. European Journal of Operational Research, 212(3), 552-559.
Ahn, B. S., & Park, K. S. (2008). Comparing methods for multiattribute decision making with ordinal weights.
Computers & Operations Research, 35(5), 1660-1670.
Barron, F., & Barrett, B. (1996). Decision quality using ranked attribute weights. Management Science, 42(11),
1515-1523.
Belton, V., & Gear, T. (1983). On a short-coming of Saaty's method of analytic hierarchies. Omega, 11(3), 228-230.
Borcherding, K., Eppel, T., & Von Winterfeldt, D. (1991). Comparison of weighting judgments in multiattribute
utility measurement. Management Science, 37(12), 1603-1619.
Choo, E. U., Schoner, B., & Wedley, W. C. (1999). Interpretation of criteria weights in multicriteria decision making.
Computers & Industrial Engineering, 37, 527-541.
Danielson, M., & Ekenberg, L. (2016). A robustness study of state-of-the-art surrogate weights for MCDM. Group
Decision and Negotiation, 1-15.
Dawes, R. M., & Corrigan, B. (1974). Linear models in decision making. Psychological Bulletin, 81, 95-106.
de Almeida, A. T., de Almeida, J. A., Costa, A. P. C. S., & de Almeida-Filho, A. T. (2016). A new method for elicitation
of criteria weights in additive models: Flexible and interactive tradeoff. European Journal of Operational
Research, 250(1), 179-191.
Dyer, J. S. (1990). Remarks on the Analytic Hierarchy Process. Management Science, 36(3), 249-258.
e Costa, C. A. B., & Vansnick, J.-C. (2008). A critical analysis of the eigenvalue method used to derive priorities in
AHP. European Journal of Operational Research, 187(3), 1422-1428.
Edwards, W. (1977). How to use multiattribute utility measurement for social decisionmaking. IEEE Transactions on
Systems, Man, and Cybernetics, 7(5), 326-340.
Edwards, W., & Barron, F. (1994). SMARTS and SMARTER: Improved simple methods for multiattribute utility
measurement. Organizational Behavior and Human Decision Processes, 60, 306-325.
Fischer, G. W. (1995). Range sensitivity of attribute weights in multiattribute value models. Organizational
Behavior and Human Decision Processes, 62, 252-266.
Jia, J., Fischer, G. W., & Dyer, J. S. (1998). Attribute weighting methods and decision quality in the presence of
response error: A simulation study. Journal of Behavioral Decision Making, 11(2), 85-105.
Kapur, J. N. (2009). Maximum entropy principles in science and engineering. New Delhi, India: New Age.
Keeney, R. L. (1974). Multiplicative utility functions. Operations Research, 22(1), 22-34.
Keeney, R. L., & Raiffa, H. G. (1976). Decisions with multiple objectives: Preferences and value tradeoffs. New York:
Wiley & Sons.
Korhonen, P., & Wallenius, J. (1996). Behavioral Issues in MCDM: Neglected research questions. Journal of
Multicriteria Decision Analysis, 5, 178-182.
Miller, G. A. (1956). The magical number seven, plus or minus two: Some limits on our capacity for processing information. Psychological Review, 63(2), 81-97.
Montibeller, G., & Winterfeldt, D. (2015). Cognitive and motivational biases in decision and risk analysis. Risk
Analysis, 35(7), 1230-1251.
Pöyhönen, M. (1998). On attribute weighting in value trees. PhD Dissertation, Helsinki University of Technology.
Pöyhönen, M., Vrolijk, H., & Hämäläinen, R. P. (2001). Behavioral and procedural consequences of structural
variation in value trees. European Journal of Operational Research, 134(1), 216-227.
doi:http://doi.org/10.1016/S0377-2217(00)00255-1
Riabacke, M., Danielson, M., & Ekenberg, L. (2012). State-of-the-art prescriptive criteria weight elicitation.
Advances in Decision Sciences, 2012.
Saaty, T. L. (1980). The analytic hierarchy process. New York: McGraw Hill.
Salo, A. A., & Hämäläinen, R. P. (1997). On the measurement of preferences in the Analytic Hierarchy Process.
Journal of Multicriteria Decision Analysis, 6, 309-343.
Stillwell, W., Seaver, D., & Edwards, W. (1981). A comparison of weight approximation techniques in multiattribute
utility decision making. Organizational Behavior and Human Performance, 28, 62-77.
Stillwell, W. G., von Winterfeldt, D., & John, R. S. (1987). Comparing hierarchical and non-hierarchical weighting
methods for eliciting multiattribute value models. Management Science, 33(4), 442-450.
U.S. Coast Guard. (1994). Coast guard process improvement guide: Total quality tools for teams and individuals,
2nd Ed. Boston, MA: U.S. Government Printing Office.
Velasquez, M., & Hester, P. T. (2013). An analysis of multi-criteria decision making methods. International Journal
Of Operations Research, 10(2), 56-66.
von Nitzsch, R., & Weber, M. (1993). The effect of attribute ranges on weights in multiattribute utility
measurements. Management Science, 39(8), 937-943.
von Winterfeldt, D., & Edwards, W. (1986). Decision analysis and behavioral research. Cambridge, MA: Cambridge
University Press.
Wallenius, J., Dyer, J. S., Fishburn, P. C., Steuer, R. E., Zionts, S., & Deb, K. (2008). Multiple Criteria Decision Making,
Multiattribute Utility Theory: Recent Accomplishments and What Lies Ahead. Management Science, 54(7),
1339-1340.