Music Perception, 2004, Vol. 22, No. 1, 145–158

JOSHUA E. RESNICOW AND PETER SALOVEY
Yale University

BRUNO H. REPP
Haskins Laboratories
Expression of emotion in music performance is a form of nonverbal communication to which people may be differentially receptive. The recently developed Mayer-Salovey-Caruso Emotional Intelligence Test assesses individual differences in the ability to identify, understand, reason with, and manage emotions using hypothetical scenarios that are conveyed pictorially or in writing. The test currently does not include musical or spoken items. We asked 24 undergraduates to complete both that test and a listening test in which they tried to identify the intended emotions in performances of classical piano music. Emotional intelligence and emotion recognition in the music task were significantly correlated (r = .54), which suggests that identification of emotion in music performance draws on some of the same sensibilities that make up everyday emotional intelligence.

Received September 17, 2003; accepted March 3, 2004
In recent years, the amount of research on the emotions conveyed by music has increased considerably (for reviews, see Juslin & Sloboda, 2001; Juslin & Laukka, 2003). The most systematic research program is being pursued by Patrik Juslin (e.g., Juslin, 1997a, 1997b, 2000), following groundbreaking work by Alf Gabrielsson (Gabrielsson & Juslin, 1996; Gabrielsson & Lindström, 1995). The focus in that research is on communication of basic emotions from a performer to a listener via music performance. Musicians are instructed to play a tune in different ways, so as to convey happiness, sadness, anger, or fear, and listeners are required to identify these emotions or rate the degree to which they are expressed by each performance.
Address correspondence to Bruno H. Repp, Haskins Laboratories, 270 Crown St., New Haven, CT 06511-6695 (e-mail: [email protected]).
In addition, the acoustic properties of the performances (the cues conveying the emotions) and their relations to both the performers' intentions and the listeners' responses are analyzed in detail, using Brunswik's (1956) lens model as a theoretical and methodological framework.

The studies of Juslin and others (reviewed in Juslin & Laukka, 2003) have demonstrated that the four basic emotions can be communicated quite effectively through music performance. Although both performers and listeners have been found to exhibit individual differences with regard to their use of different performance cues, individual differences in sensitivity to emotional information have not received special attention in this research. It is likely, however, that both performers and listeners do vary in their general emotional sensitivity and in their receptivity to emotional information in music. Interestingly, Juslin (1997a) found little effect of musical training on listeners' ability to recognize emotions in music. Thus, this ability may be part of a more general ability to recognize emotions, which may apply also to facial and vocal expressions, and perhaps even to other situations in which emotions must be recognized or dealt with in some way. Mayer and Salovey (1993, 1997; Salovey & Mayer, 1990) use the term emotional intelligence for this more general ability, which has received much discussion in both scientific forums and the popular press (e.g., Goleman, 1995; Matthews, Zeidner, & Roberts, 2003).

In recent years, Mayer, Salovey, and Caruso (2002a, 2002b) have developed an ability-based test of emotional intelligence, the Mayer-Salovey-Caruso Emotional Intelligence Test (MSCEIT). An earlier version of this test, the Multifactor Emotional Intelligence Scale (MEIS), included some musical items (Mayer, Caruso, & Salovey, 1999), but administrative difficulties led to their being discarded for the MSCEIT, which contains only pictorial and written items. The 141 MSCEIT items are intended to measure four aspects of emotional intelligence: (1) perceiving emotions, (2) using emotions to facilitate thought, (3) understanding emotions, and (4) managing emotions. Each of the four branches of the test contains two tasks with multiple items. Perceiving emotions is measured with Faces and Pictures (identifying the extent to which different emotions are expressed in a series of faces and abstract designs). Using emotions to facilitate thought is measured with Sensations (generating emotions and matching sensations to them) and Facilitation (judging moods that best accompany or assist certain cognitive tasks and behaviors). Understanding emotions is measured with Blends (identifying emotions that could be combined to form other emotions) and Changes (selecting an emotion that results from intensifying another emotion). Managing emotions is measured with Emotion Management (judging the efficiency of actions to obtain a specific emotional outcome for a character in a story) and Emotional Relationships (judging the efficiency of actions to use in managing another person's feelings).
The MSCEIT demonstrates adequate reliability and does not overlap substantially with standard measures of personality or analytic intelligence (Lopes, Salovey, & Straus, 2003; Mayer, Salovey, Caruso, & Sitarenios, 2001, 2003).

The purpose of the present study was to explore whether the recognition of emotions in music performance is related to emotional intelligence, as assessed by the MSCEIT, and particularly to the first branch of the MSCEIT, which assesses emotion identification from faces and pictures. Although we created our own musical materials, we followed the procedures used by Juslin (2000) fairly closely.
Method
PARTICIPANTS
Twenty-four undergraduate students (15 women, 9 men) at Yale University between the ages of 18 and 24 volunteered to participate and were paid $12 each. They were recruited through advertisements on campus and on student activity e-mail lists. Their musical training ranged from 0 to 15 years of instruction on one or more instruments.
MATERIALS
Emotional Intelligence Test

Emotional intelligence was measured with the MSCEIT, Version 2.0, which is distributed and scored by Multi-Health Systems, Inc. (http://www.eqi.mhs.com). Scoring was done according to a general consensus criterion, based on the responses of a large number of individuals who have taken the test in the past. For example, if 84% of these individuals said that there is a moderate amount of happiness in a particular abstract design, then an individual participant's score is incremented by .84 if he or she gives that particular response (Mayer et al., 2002b; Salovey, Kokkonen, Lopes, & Mayer, 2004); a schematic illustration of this scoring rule is given below. The summed item scores are subsequently converted to normed standard scores with a population mean of 100 and a standard deviation of 15, as is customary in psychometric tests of intelligence-related constructs. Based on a sample of more than 2000 takers of the MSCEIT selected randomly from a normative sample of 5000, the split-half reliability of the full-scale MSCEIT using this consensus scoring approach is .93, and of the four branches, .91, .79, .80, and .83, respectively (Mayer et al., 2003).

Musical Materials

The musical stimuli consisted of three short piano pieces: Prelude No. 6 in D minor (Andante espressivo) from Johann Sebastian Bach's Twelve Little Preludes (Vienna: Universal-Edition, 1951), "Children's Song" in C major (No. 2, Andante) from Béla Bartók's For Children (London: Boosey & Hawkes, 1947), and "Dialogue" (No. 3, Andante) from Vincent Persichetti's Little Piano Book (Bryn Mawr, PA: Elkan-Vogel, 1954). Their beginnings are shown in Figure 1.
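To make the consensus-scoring rule described above concrete, here is a minimal Python sketch written under our own assumptions: the item name, response options, and endorsement proportions are hypothetical, and the actual MSCEIT scoring (including the conversion to normed standard scores) is performed by Multi-Health Systems.

```python
# Hypothetical illustration of consensus scoring (not the MSCEIT scoring routine).
# Each response earns the proportion of the normative sample that chose the same option.
CONSENSUS = {
    # item -> proportion of the normative sample endorsing each response option
    "abstract_design_3_happiness": {"none": 0.04, "a little": 0.09, "moderate": 0.84, "much": 0.03},
}

def raw_consensus_score(responses):
    """Sum the normative endorsement proportions of the options a participant chose."""
    return sum(CONSENSUS[item][choice] for item, choice in responses.items())

# A participant who answers "moderate" on this item gains .84, as in the example above.
score = raw_consensus_score({"abstract_design_3_happiness": "moderate"})
```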
Fig. 1. Beginnings of the three musical pieces: Bach, Bartók, and Persichetti.
The pieces were selected with the following criteria in mind: They should be short (although the Bach piece was longer than the other two pieces), in different compositional styles (baroque, folk song, and 20th-century tonal, respectively), relatively unfamiliar, structurally homogeneous, and sufficiently neutral in inherent emotional content to lend themselves to being performed with different emotional intentions.

The pieces were performed by author B.R. (age 58), a classically trained amateur pianist, on a Yamaha Clavinova CLP-611 digital piano and recorded in MIDI format on a Macintosh Quadra 660AV computer. Each piece was recorded five times, first with an expression deemed appropriate for the music (referred to as "normal" henceforth), and then with four different emotional intentions: happiness, sadness, anger, and fearfulness (in that order). In carrying out these intentions, B.R. relied primarily on his musical intuitions (rather than on his explicit knowledge of previous research findings on expressive cues to emotion) and also tried to keep the performances within aesthetically acceptable bounds. The performances were later played back on the same instrument and recorded onto a compact disc.
Later analyses confirmed that the performances had some of the properties that have been found to be associated with happy, sad, angry, and fearful expressions in previous studies of music performance (e.g., Juslin, 2000; for a summary, see Table 11 in Juslin & Laukka, 2003). As can be seen in Table 1, happy and angry performances had a much faster tempo (shorter mean beat duration) than sad and fearful performances; fearful performances had much higher timing variability (coefficient of variation of beat duration) than other performances; and angry performances were louder (higher mean key depression velocity) than happy performances, whereas sad and fearful performances were softer. (To listen to the performances, find the link following the Resnicow et al. reference at <http://www.haskins.yale.edu/haskins/STAFF/repp.html>.)
PROCEDURE
Participants completed the MSCEIT before the music test, with at least 1 day in between. They were e-mailed login information for the MSCEIT web page and took the test on the Internet at their leisure.

The music test was conducted in a reasonably quiet room. The performances were played with Windows Media Player 9 Series on a Dell Inspiron 4100 laptop computer that was connected to an Aiwa NSH-220 stereo system. The volume was set to a level that was considered comfortable by all participants. The performances were blocked by piece. Bach was always first, Persichetti second, and Bartók last. For each piece, the normal performance was played first, and then the other four performances were played in a random order that was different for each piece and for each participant. Participants were told that the normal performance served as a standard relative to which the other performances should be judged. After each performance, participants rated the degree to which each of the four emotions (happy, sad, angry, and fearful; always in that order) was conveyed by the performance. A numerical scale ranging from 0 to 10 was provided for each emotion on a response sheet, and participants circled one of the numbers. None of the participants, when interviewed after the test, reported familiarity with any of the pieces.
Table 1 (C). Relative loudness (MIDI velocity units)

              Normal   Happy    Sad    Angry   Fearful
Bach           61.8     65.5    54.1    71.0    52.1
Bartók         50.1     58.8    44.9    68.3    44.9
Persichetti    54.8     61.9    49.9    70.4    49.2

NOTE: (A) mean duration of intervals between quarter-note beats (inversely related to tempo), with the final beat excluded; (B) coefficient of variation of the interbeat interval (standard deviation as a percentage of the mean); (C) mean velocity (positively related to loudness) of all keystrokes, in MIDI units (range: 0–127).
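The three cue measures summarized in the table note are simple to compute from MIDI data. The following Python sketch is an illustration under our own assumptions (hypothetical function and variable names); it is not the analysis code used for the performances in this study.

```python
# Sketch of the Table 1 cue measures for one performance (hypothetical names).
# beat_times: onset times (in seconds) of successive quarter-note beats;
# velocities: MIDI key-press velocities (0-127) of all notes in the performance.
from statistics import mean, stdev

def performance_cues(beat_times, velocities):
    # Interbeat intervals, with the final interval excluded, as in the note above.
    ibis = [b - a for a, b in zip(beat_times[:-2], beat_times[1:-1])]
    mean_ibi = mean(ibis)                    # (A) inversely related to tempo
    cv_ibi = 100 * stdev(ibis) / mean_ibi    # (B) timing variability, % of the mean
    mean_velocity = mean(velocities)         # (C) positively related to loudness
    return mean_ibi, cv_ibi, mean_velocity
```

Whether the final beat or the final interbeat interval is excluded, and whether the sample or population standard deviation is used, are our guesses; the published values could differ accordingly.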
Results
EMOTIONAL INTELLIGENCE TEST
Each participant received a total score for the MSCEIT as well as separate scores for two area levels (experiential and strategic emotional intelligence). The experiential area level is defined as the combination of the perceiving and facilitating branches, and the strategic area level is defined as the combination of the understanding and managing branches. Participants also received separate scores for each of the four branches of the test and for each subtest in each branch, although the test authors do not recommend interpreting these subtest scores because of insufficient reliability (Mayer et al., 2003).

Total scores ranged from 78 to 142, with a mean of 110.2 and a standard deviation of 16.4. Women tended to have higher scores than men (M = 114.2 vs. 103.7), but the difference did not reach significance, t(22) = 1.57, p < .14, because the highest score was obtained by a man. (The next 11 rank-ordered scores were all obtained by women.) Years of musical training were not correlated with the overall score (r = .08, ns).
MUSIC TEST
The music test was of course not a normed psychometric instrument like the MSCEIT. However, we did not notice any abnormalities in the distributions of ratings. The overall distribution was concentrated at the low end of the scale, with ratings of 1 being most frequent (although ratings of 0 were infrequent). This is not surprising, because only 20% of the ratings concerned intended emotions.

Figure 2 shows the mean ratings for the musical performances. The normal performance of the Bach piece (top panel) was rated as rather sad, which is consistent with its relatively slow tempo and minor key. The Bach performance intended as happy was rated as only slightly more happy, but as much less sad and more angry than the normal performance. The ratings of the sad performance were indistinguishable from those of the normal performance. Apparently, there was a limit to how sad this piece could sound, or to how sad the performer could make it sound. The angry performance was rated as more angry and less sad than the normal performance, similar to the happy performance. The fearful performance was rated only as somewhat less sad than the normal performance, but not as more fearful. Overall, it seems that the Bach performances were not very successful in conveying the intended emotions.

The Bartók piece (center panel) was rated as moderately happy when performed the normal way, which is consistent with its major key and moderate tempo.
Fig. 2. Mean emotion ratings for the five performances of each musical piece, with standard-error bars.
The happy performance was rated as only slightly more happy, but as less sad. The sad performance was clearly rated as more sad as well as less happy than the normal performance. The angry performance was rated as much more angry and less sad, and the fearful performance was rated as more fearful as well as less happy. Thus, the Bartók performances were more successful in conveying the intended emotions.

The Persichetti piece (bottom panel) was rated as somewhat sad in the normal performance. The happy performance was rated as more happy as well as less sad and more angry. The sad performance was rated as more sad, the angry performance as much more angry as well as less sad, and the fearful performance as more fearful than the normal performance. Overall, the Persichetti performances were most successful in conveying the intended emotions.

From the ratings given to each performance by each participant, we calculated emotion recognition scores as follows: For the normal performance, we expressed the degree to which it was judged as happy, sad, angry, or fearful by dividing the rating of the relevant emotion by the sum of all four emotion ratings. This resulted in four baseline scores, one for each judged emotion. For each of the other performances, we divided the rating of the intended emotion by the sum of all four emotion ratings, and then subtracted the baseline score for the intended emotion. The resulting difference score expressed the extent to which an emotion was conveyed when it was intended, relative to when it was not specifically intended.

The mean difference scores for the three pieces are shown in Figure 3. The double standard-error bars, based on between-participant variability, are equivalent to 95% confidence intervals. A mean difference score significantly greater than zero indicates successful communication of an emotion. As was already suggested by the data in Figure 2, the Bach performances successfully conveyed only two of the four emotions (happiness and anger), the Bartók performances conveyed three (all but happiness), and the Persichetti performances conveyed all four emotions. Although a nonsignificant mean difference score indicates that the intended emotion was difficult to recognize in a particular performance, variation in individual scores for that performance may still be meaningful.

Individual participants' total scores for the music test were obtained by averaging their difference scores across the four emotions and the three pieces. Total scores ranged from .034 to .281, with a mean of .116 and a standard deviation of .052. Women had slightly higher scores than men, but the difference was far from significance, F(1,22) = 1.7, p < .21. The correlation with years of musical training was low (r = .08) and nonsignificant.
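As a concrete illustration of the scoring procedure just described, the following Python sketch computes the baseline and difference scores for one participant and one piece. The data structure and names are hypothetical; this is not the authors' analysis code.

```python
# Illustrative computation of the emotion recognition (difference) scores
# described above. `ratings[performance][emotion]` holds one participant's
# 0-10 rating of each emotion for each of the five performances of one piece.
EMOTIONS = ["happy", "sad", "angry", "fearful"]

def recognition_scores(ratings):
    # Baseline: proportion of each emotion in the ratings of the normal performance.
    normal = ratings["normal"]
    normal_sum = sum(normal[e] for e in EMOTIONS)
    baseline = {e: normal[e] / normal_sum for e in EMOTIONS}
    # Difference score: proportion of the intended emotion in the corresponding
    # emotional performance, minus the baseline proportion of that same emotion.
    scores = {}
    for intended in EMOTIONS:
        perf = ratings[intended]
        proportion = perf[intended] / sum(perf[e] for e in EMOTIONS)
        scores[intended] = proportion - baseline[intended]
    return scores

# A participant's total music score is then the mean of these difference scores
# across the four emotions and the three pieces.
```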
Fig. 3. Mean difference scores (score of the intended emotion minus score of the same emotion in the normal performance) for the intended emotions in the three musical pieces, with double standard-error bars.
The correlation between the total scores of the two tests was significant, r(22) = .54, p < .01, and this constitutes the main finding of our study. The total music test score correlated significantly with the experiential area level score of the MSCEIT, r(22) = .58, p < .01, but not with the strategic score, r(22) = .31, p > .10. The higher correlation with the experiential score makes sense because that score reflects "how accurately a person can read and express emotion, and how well a person can compare that emotional stimulation to other sorts of sensory experiences (e.g., colors or sounds)," whereas the strategic score indexes "how accurately a person understands what emotions signify (e.g., that sadness typically signals a loss) and how emotions in him/herself and others can be managed" (quoted from the MSCEIT Interpretive Guide provided by Multi-Health Systems, Inc.). The two area level scores themselves were moderately correlated, r(22) = .51, p < .01, as well.

Of the two branches of the MSCEIT that contribute to the experiential score, Branch 2 (Using Emotions to Facilitate Thought) correlated more highly with the total music test score, r(22) = .51, p < .01, than did Branch 1 (Perceiving Emotions), r(22) = .47, p < .05. This may seem surprising, but it could easily have been due to sampling error in this small sample, and it certainly does not represent a significant difference. Correlations with the branches contributing to the strategic score, Branch 3 (Understanding Emotions), r(22) = .21, p > .10, and Branch 4 (Managing Emotions), r(22) = .20, p > .10, were positive but not significant. The branch scores themselves were all positively intercorrelated, with the highest correlation obtaining between Branches 1 and 2, r(22) = .49, p < .01, and the lowest between Branches 2 and 3, r(22) = .30, p > .10, as appears to be the case in most research involving the MSCEIT. (The correlation matrix of branch scores has a positive manifold, as it should; Mayer et al., 2003.) Both subtests of Branch 1, Faces and Pictures, correlated only weakly with the total music score, r(22) = .38, p < .10, in each case. Interestingly, the highest correlation with any component test was with the Sensations task of Branch 2, r(22) = .55, p < .01. This task requires "the generation of a certain mood in order to then reason with that mood" (MSCEIT Interpretive Guide). It should be kept in mind, however, that the test authors do not recommend interpreting subtest-level scores.

When the music scores for the three pieces were considered separately, it was the Bach score that showed the highest correlation with the total MSCEIT score, r(22) = .46, p < .05. By contrast, the Bartók and Persichetti scores were not significantly correlated with the MSCEIT, r(22) = .33, p > .10, in both cases.
The intercorrelations of the three music scores were quite low, ranging from .16 to .28. This may suggest that the three pieces assessed different aspects of emotion recognition, but it could also be due to low reliability. Like the subtest scores of the MSCEIT, the scores for individual pieces are best not interpreted.
Discussion
The significant correlation between the overall scores of the MSCEIT and the music test suggests that individual differences in sensitivity to emotion conveyed by music performance are related to individual differences in emotional intelligence. In particular, they seem to be related to the ability to generate a mood in the service of cognitive tasks and, to a lesser extent, to the ability to recognize emotional information in faces and pictures. The former relationship makes sense because, in order to judge the emotion conveyed by a music performance, a person may have to internally simulate correlates of that emotion (i.e., empathy) or access explicit knowledge about such correlates. The second relationship is also reasonable because both tasks involve recognizing emotions in sensory input. However, whereas the information is visual and static in the MSCEIT, it is auditory and dynamic in the musical test.

We suspect that an even higher correlation might be obtained between the musical test score and that of a test that assesses the ability to recognize emotion in spoken language, which is likewise auditory and dynamic. Recognizing emotion in speech is undoubtedly an important aspect of emotional intelligence, although it is not currently part of the MSCEIT. Although recognizing emotion in music performance is of less importance in everyday life, it probably requires much the same processes and sensitivities as recognizing emotion in speech. This would be entirely consistent with evidence suggesting that the emotional cues in music performance are very similar to those in speech (Juslin & Laukka, 2003). Thus, recognition of emotion in both speech and music performance may be a legitimate aspect of emotional intelligence and might be considered for inclusion in future versions of the MSCEIT.

There was a tendency, albeit nonsignificant, for women to achieve higher scores on both the MSCEIT and the music test. A small difference between sexes in the same direction has been observed among earlier takers of the MSCEIT (Mayer et al., 2002b), as well as in other studies of nonverbal communication of emotion (Hall, 1978, 1984).

The recognition of emotion in music performance must be distinguished from the recognition of emotional content in musical structure that remains constant across different manners of performance.
Such content includes mode; pitch register, range, and contour; dissonance; harmonic progression; and rhythm, which in turn exert constraints on the aesthetically permissible range of variation of performance parameters such as tempo and loudness. Thus, the Bach Prelude used in our study is inherently a somewhat sad piece on account of its minor key and slow tempo, whereas the major-key and moderately paced Bartók piece makes a quietly happy impression, and the Persichetti piece is rather neutral, lacking a strong tonality. These inherent characteristics were reflected in participants' ratings of the normal performances, which represented the performer's intuitive responses to the respective musical structures. By calculating emotion recognition scores as deviations from baseline scores for the normal performances, we took into account each piece's inherent mood.

It should be noted that the emotions judged in this study (following Juslin, 2000) differ in the extent to which they are associated with music and music performance, and with piano performance in particular. It is easy to name examples of piano pieces (regardless of style) that are inherently happy or sad, because of the close relationship of these emotions to tempo and mode. However, inherently angry piano music is much less common, and inherently fearful piano music is extremely rare. Likewise, a fearful performance is not a natural occurrence in musical practice, and author B.R. felt least confident in producing it. This may explain why fearfulness was not recognized as well as the other emotions. Although fearfulness can be conveyed successfully by musical means (Juslin, 1997b; Juslin & Laukka, 2003), a fearful performance is a laboratory construct, not an option that musicians would consider spontaneously.

This leads us to add a caveat about performances of music with explicit emotional intent. Essentially, these are experimental stimuli that impose modes of performance that are rarely required in realistic situations. Musicians seek to understand the inherent characteristics of a composition and to render these as faithfully as possible. Their performance may convey happiness or sadness, but only if the music being played is indeed inherently happy or sad, respectively. To play an inherently sad, or even an inherently neutral, piece in a happy manner would be aesthetically inappropriate and self-indulgent. Thus, the methods used so successfully by Juslin, and copied by us, may be seen as the imposition of extramusical intentions onto music performance, somewhat like changing one's tone of voice or facial expression in order to disguise one's true feelings. Although emotion is conveyed thereby via music, it is more or less inappropriate emotion because the performer's intentions are different from what the music seems to ask for. Nevertheless, research using these methods has been highly successful in revealing performance cues that may also communicate emotional content in normal music performance.
Because these cues are likely to be more subtle in normal performance than in the typical psychological experiment, they may especially require emotional intelligence to be detected and appreciated.

In summary, our study suggests a connection between sensitivity to musical emotion and everyday emotional intelligence that should be of interest to researchers working in both areas. To be sure, replication of our findings is desirable because they are based on a small sample of participants (hence, the confidence intervals surrounding the reported correlations are broad), and because the musical materials were produced by a single individual who was not a professional musician. Nevertheless, researchers concerned with musical emotion can now be even more confident that they are dealing with an aspect of human communication that is related to real-life situations in which correct recognition of emotion is important. Conversely, our results may encourage researchers concerned with everyday emotional intelligence to pay more attention to the information conveyed by dynamic auditory events that are the result of emotionally charged action, such as speech and music.1

1. This paper is based on an undergraduate senior research project by J.R. conducted under the supervision of B.R., with advice from P.S. The research was supported by NIH grant MH51230 to B.R. We are grateful to Susan Holleran for doing the performance analyses and to three reviewers for helpful comments.
References
Brunswik, E. (1956). Perception and the representative design of experiments. Berkeley, CA: University of California Press.
Gabrielsson, A., & Juslin, P. N. (1996). Emotional expression in music performance: Between the performer's intention and the listener's experience. Psychology of Music, 24, 68–91.
Gabrielsson, A., & Lindström, E. (1995). Emotional expression in synthesizer and sentograph performance. Psychomusicology, 14, 94–116.
Goleman, D. (1995). Emotional intelligence. New York: Bantam.
Hall, J. A. (1978). Gender effects in decoding nonverbal cues. Psychological Bulletin, 85, 845–857.
Hall, J. A. (1984). Nonverbal sex differences: Communication accuracy and expressive style. Baltimore: Johns Hopkins University Press.
Juslin, P. N. (1997a). Emotional communication in music performance: A functionalist perspective and some data. Music Perception, 14, 383–418.
Juslin, P. N. (1997b). Perceived emotional expression in synthesized performances of a short melody. Musicae Scientiae, 1, 225–256.
Juslin, P. N. (2000). Cue utilization in communication of emotion in music performance: Relating performance to perception. Journal of Experimental Psychology: Human Perception and Performance, 26, 1797–1813.
Juslin, P. N., & Laukka, P. (2003). Communication of emotions in vocal expression and music performance: Different channels, same code? Psychological Bulletin, 129, 770–814.
Juslin, P. N., & Sloboda, J. A. (2001). Music and emotion: Theory and research. Oxford: Oxford University Press.
Juslin, P. N., & Zentner, M. (2001). Current trends in the study of music and emotion. Musicae Scientiae, Special Issue 2001–2002, 3–21.
Lopes, P. N., Salovey, P., & Straus, R. (2003). Emotional intelligence, personality, and the perceived quality of social relationships. Personality and Individual Differences, 35, 641–659.
Matthews, G., Zeidner, M., & Roberts, R. D. (2003). Emotional intelligence: Science and myth. Cambridge, MA: MIT Press.
Mayer, J. D., & Salovey, P. (1993). The intelligence of emotional intelligence. Intelligence, 17, 433–442.
Mayer, J. D., & Salovey, P. (1997). What is emotional intelligence? In P. Salovey & D. Sluyter (Eds.), Emotional development and emotional intelligence: Educational implications (pp. 3–31). New York: Basic Books.
Mayer, J. D., Caruso, D. R., & Salovey, P. (1999). Emotional intelligence meets traditional standards for an intelligence. Intelligence, 27, 267–298.
Mayer, J. D., Salovey, P., & Caruso, D. (2002a). Mayer-Salovey-Caruso Emotional Intelligence Test (MSCEIT), Version 2.0. Toronto, Canada: Multi-Health Systems.
Mayer, J. D., Salovey, P., & Caruso, D. (2002b). Mayer-Salovey-Caruso Emotional Intelligence Test User's Manual. Toronto, Canada: Multi-Health Systems.
Mayer, J. D., Salovey, P., Caruso, D., & Sitarenios, G. (2001). Emotional intelligence as a standard intelligence. Emotion, 1, 232–242.
Mayer, J. D., Salovey, P., Caruso, D. R., & Sitarenios, G. (2003). Measuring emotional intelligence with the MSCEIT V2.0. Emotion, 3, 97–105.
Salovey, P., Kokkonen, M., Lopes, P., & Mayer, J. (2004). Emotional intelligence: What do we know? In A. S. R. Manstead, N. H. Frijda, & A. H. Fischer (Eds.), Feelings and emotions: The Amsterdam Symposium (pp. 319–338). New York: Cambridge University Press.
Salovey, P., & Mayer, J. D. (1990). Emotional intelligence. Imagination, Cognition, and Personality, 9, 185–211.