Tools For Analyzing Talk Part 1: The CHAT Transcription Format
Brian MacWhinney
Carnegie Mellon University
July 1, 2018
https://doi.org/10.5072/FK2NS0V31W
When citing the use of TalkBank and CHILDES facilities, please use this reference to
the last printed version of the CHILDES manual:
MacWhinney, B. (2000). The CHILDES Project: Tools for Analyzing Talk. 3rd
Edition. Mahwah, NJ: Lawrence Erlbaum Associates
This allows us to systematically track usage of the programs and data through
scholar.google.com.
1 Introduction
2 The CHILDES Project
2.1 Impressionistic Observation
2.2 Baby Biographies
2.3 Transcripts
2.4 Computers
2.5 Connectivity
3 From CHILDES to TalkBank
3.1 Three Tools
3.2 Shaping CHAT
3.3 Building CLAN
3.4 Constructing the Database
3.5 Dissemination
3.6 Funding
3.7 How to Use These Manuals
3.8 Changes
4 Principles
4.1 Computerization
4.2 Words of Caution
4.2.1 The Dominance of the Written Word
4.2.2 The Misuse of Standard Punctuation
4.2.3 Working With Video
4.3 Problems With Forced Decisions
4.4 Transcription and Coding
4.5 Three Goals
5 minCHAT
5.1 minCHAT – the Form of Files
5.2 minCHAT – Words and Utterances
5.3 Analyzing One Small File
5.4 Next Steps
5.5 Checking Syntactic Accuracy
6 Corpus Organization
6.1 File Naming
6.2 Metadata
6.3 The Documentation File
7 File Headers
7.1 Hidden Headers
7.2 Initial Headers
7.3 Participant-Specific Headers
7.4 Constant Headers
7.5 Changeable Headers
8 Words
8.1 The Main Line
8.2 Basic Words
12 CHAT-CA Transcription
13 Disfluency Transcription
14 Transcribing Aphasic Language
15 Arabic and Hebrew Transcription
16 Specific Applications
16.1 Code-Switching
16.2 Elicited Narratives and Picture Descriptions
16.3 Written Language
16.4 Sign and Speech
17 Speech Act Codes
17.1 Interchange Types
17.2 Illocutionary Force Codes
18 Error Coding
18.1 Word level error codes
18.1.1 Phonological errors [* p]
18.1.2 Semantic errors [* s]
18.1.3 Neologisms [* n]
18.1.4 Morphological errors [* m:a]
18.1.5 Dysfluencies [* d]
18.1.6 Missing Words
18.1.7 General Considerations
18.2 Utterance level error coding (post-codes)
References
1 Introduction
This electronic edition of the CHAT manual is being continually revised to keep pace
with the growing interests of the language research communities served by TalkBank
and CHILDES. The first three editions were published in 1990, 1995, and 2000 by
Lawrence Erlbaum Associates. After 2000, we switched to the current electronic
publication format. However, in order to track usage easily through systems such as
Google Scholar, we ask that users cite the version of the manual published in 2000
when using data and programs in their published work. This is the citation:
MacWhinney, B. (2000). The CHILDES project: Tools for analyzing talk. 3rd edition.
Mahwah, NJ: Lawrence Erlbaum Associates.
In its earlier version, this manual focused exclusively on the use of the programs for
child language data in the context of the CHILDES system (https://childes.talkbank.org).
However, beginning in 2001 with support from NSF, we introduced the concept of
TalkBank (https://talkbank.org) to include a wide variety of language databases. These
now include:
1. AphasiaBank (https://aphasia.talkbank.org) for language in aphasia,
2. ASD Bank (https://asd.talkbank.org) for language in autism,
3. BilingBank (https://biling.talkbank.org) for the study of bilingualism and code
switching,
4. CABank (https://ca.talkbank.org) for Conversation Analysis, including the large
SCOTUS corpus,
5. CHILDES (https://childes.talkbank.org) for child language acquisition,
6. ClassBank (https://class.talkbank.org) for studies of language in the classroom,
7. DementiaBank (https://dementia.talkbank.org) for language in dementia,
8. FluencyBank (https://fluency.talkbank.org) for the study of childhood fluency development,
9. HomeBank (https://homebank.talkbank.org) for daylong recordings in the home,
10. PhonBank (https://phonbank.talkbank.org) for the study of phonological
development,
11. RHDBank (https://rhd.talkbank.org) for language in right hemisphere damage,
12. SamtaleBank (https://samtalebank.talkbank.org) for Danish conversations,
13. SLABank (https://slabank.talkbank.org) for second language acquisition, and
14. TBIBank (https://tbi.talkbank.org) for language in traumatic brain injury.
The current manual maintains some of the earlier emphasis on child language,
particularly in the first sections, while extending the treatment to these further areas and
formats in terms of new codes and several new sections. We are continually adding
corpora to each of these separate collections. As of 2018, the size of the text database is
800MB, and there is an additional 5TB of media. All of the data in TalkBank are freely
open to downloading and analysis, with the exception of the data in the clinical language
banks, which are open to clinical researchers using passwords. The CLAN program and
the related morphosyntactic taggers are all free and open-sourced through GitHub.
Fortunately, all of these different language banks make use of the same transcription
format (CHAT) and the same set of programs (CLAN). This means that, although most
of the examples in this manual rely on data from the CHILDES database, the principles
extend easily to data in all of the TalkBank repositories. TalkBank is the largest open
repository of data on spoken language. All of the data in TalkBank are transcribed in the
CHAT format which is compatible with the CLAN programs.
Using conversion programs available inside CLAN (see the CLAN manual for
details), transcripts in CHAT format can be automatically converted into the formats
required for Praat (praat.org), Phon (phonbank.talkbank.org), ELAN
(tla.mpi.nl/tools/elan), CoNLL, ANVIL (anvilsoftware.org), EXMARaLDA
(exmaralda.org), LIPP (ihsys.com), SALT (saltsoftware.com), LENA
(lenafoundation.org), Transcriber (trans.sourceforge.net), and ANNIS (corpus-tools.org/ANNIS).
TalkBank databases and programs have been used widely in the research literature.
CHILDES, which is the oldest and most widely recognized of these databases, has been
used in over 7000 published articles. PhonBank has been used in 480 articles and
AphasiaBank has been used in 212 presentations and publications. In general, the longer
a database has been available to researchers, the more the use of that database has
become integrated into the basic research methodology and publication history of the
field.
Metadata for the transcripts and media in these various TalkBank databases have been
entered into the two major systems for accessing linguistic data: OLAC and VLO
(Virtual Language Observatory). Each transcript and media file has been assigned a PID
(permanent ID) using the Handle System (www.handle.net), and each corpus has
received an ISBN and DOI (digital object identifier) number.
For ten of the languages in the database, we provide automatic morphosyntactic
analysis using a series of programs built into CLAN. These languages are Cantonese,
Chinese, Dutch, English, French, German, Hebrew, Japanese, Italian, and Spanish. The
codes produced by these programs could eventually be harmonized with the GOLD
ontology. In addition, we can compute a dependency grammar analysis for each of these
10 languages. As a result of these efforts, TalkBank has been recognized as a Center in
the CLARIN network (clarin.eu) and has received the Data Seal of Approval
(datasealofapproval.org). TalkBank data have also been included in the SketchEngine
corpus tool (sketchengine.co.uk).
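As an illustration of what this morphosyntactic analysis produces, a main line and its %mor (morphological) and %gra (grammatical dependency) tiers have roughly the following shape. This is only a schematic sketch; the exact part-of-speech labels and grammatical relation names depend on the MOR grammar and tagger version for each language.
*CHI: this is my ball .
%mor: pro:dem|this cop|be&3S det:poss|my n|ball .
%gra: 1|2|SUBJ 2|0|ROOT 3|4|DET 4|2|PRED 5|2|PUNCT
On the %gra tier, each triple gives a word's position in the utterance, the position of its head, and the name of the dependency relation linking the two.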
stormy intercourse of human life, yet depending on parental authority and the beck of
elders.
Augustine's outline of early word learning drew attention to the role of gaze, pointing,
intonation, and mutual understanding as fundamental cues to language learning. Modern
research in word learning (Bloom, 2000) has supported every point of Augustine's
analysis, as well as his emphasis on the role of children's intentions. In this sense,
Augustine's somewhat fanciful recollection of his own language acquisition remained the
high water mark for child language studies through the Middle Ages and even the
Enlightenment. Unfortunately, the method on which these insights were grounded
depends on our ability to actually recall the events of early childhood – a gift granted to
very few of us.
2.3 Transcripts
The limits of the diary technique were always quite apparent. Even the most highly
trained observer could not keep pace with the rapid flow of normal speech production.
Anyone who has attempted to follow a child about with a pen and a notebook soon
realizes how much detail is missed and how the note-taking process interferes with the
ongoing interactions.
The introduction of the tape recorder in the late 1950s provided a way around these
limitations and ushered in the third period of observational studies. The effect of the tape
recorder on the field of language acquisition was very much like its effect on
ethnomusicology, where researchers such as Alan Lomax (Parrish, 1996) were suddenly
able to produce high quality field recordings using this new technology. This period was
characterized by projects in which groups of investigators collected large data sets of tape
recordings from several subjects across a period of 2 or 3 years. Much of the excitement
in the 1960s regarding new directions in child language research was fueled directly by
the great increase in raw data that was possible through use of tape recordings and typed
transcripts.
This increase in the amount of raw data had an additional, seldom discussed,
consequence. In the period of the baby biography, the final published accounts closely
resembled the original database of note cards. In this sense, there was no major gap
between the observational database and the published database. In the period of typed
transcripts, a wider gap emerged. The size of the transcripts produced in the 60s and 70s
made it impossible to publish the full corpora. Instead, researchers were forced to publish
only high-level analyses based on data that were not available to others. This led to a
situation in which the raw empirical database for the field was kept only in private stocks,
unavailable for general public examination. Comments and tallies were written into the
margins of ditto master copies and new, even less legible copies, were then made by
thermal production of new ditto masters. Each investigator devised a project-specific
system of transcription and project-specific codes. As we began to compare handwritten
and typewritten transcripts, problems in transcription methodology, coding schemes, and
cross-investigator reliability became more apparent.
Recognizing this problem, Roger Brown took the lead in attempting to share his
transcripts from Adam, Eve, and Sarah (Brown, 1973) with other researchers. These
transcripts were typed onto stencils and mimeographed in multiple copies. The extra
copies were lent to and analyzed by a wide variety of researchers. In this model,
researchers took their copy of the transcript home, developed their own coding scheme,
applied it (usually by making pencil markings directly on the transcript), wrote a paper
about the results and, if very polite, sent a copy to Roger. Some of these reports (Moerk,
1983) even attempted to disprove the conclusions drawn from those data by Brown
himself!
During this early period, the relations between the various coding schemes often
remained shrouded in mystery. A fortunate consequence of the unstable nature of coding
systems was that researchers were very careful not to throw away their original data, even
after it had been coded. Brown himself commented on the impending transition to
computers in this passage (Brown, 1973, p. 53):
It is sensible to ask and we were often asked, “Why not code the sentences for
grammatically significant features and put them on a computer so that studies could
readily be made by anyone?” My answer always was that I was continually discovering
new kinds of information that could be mined from a transcription of conversation and
never felt that I knew what the full coding should be. This was certainly the case and
indeed it can be said that in the entire decade since 1962 investigators have continued to
hit upon new ways of inferring grammatical and semantic knowledge or competence
from free conversation. But, for myself, I must, in candor, add that there was also a factor
of research style. I have little patience with prolonged “tooling up” for research. I
always want to get started. A better scientist would probably have done more planning
and used the computer. He can do so today, in any case, with considerable confidence
that he knows what to code.
With the experience of three more decades of computerized analysis behind us, we
now know that the idea of reducing child language data to a set of codes and then
throwing away the original data is simply wrong. Instead, our goal must be to
computerize the data in a way that allows us to continually enhance it with new codes and
annotations. It is fortunate that Brown preserved his transcript data in a form that
allowed us to continue to work on it. It is unfortunate, however, that the original
audiotapes were not kept.
2.4 Computers
Just as these data analysis problems were coming to light, a major technological
opportunity was emerging in the shape of the powerful, affordable microcomputer.
Microcomputer word-processing systems and database programs allowed researchers to
enter transcript data into computer files that could then be easily duplicated, edited, and
analyzed by standard data-processing techniques. In 1981, when the Child Language
Data Exchange System (CHILDES) Project was first conceived, researchers basically
thought of computer systems as large notepads. Although researchers were aware of the
ways in which databases could be searched and tabulated, the full analytic and
comparative power of the computer systems themselves was not yet fully understood.
Rather than serving only as an “archive” or historical record, a focus on a shared
database can lead to advances in methodology and theory. However, to achieve these
additional advances, researchers first needed to move beyond the idea of a simple data
repository. At first, the possibility of utilizing shared transcription formats, shared codes,
and shared analysis programs shone only as a faint glimmer on the horizon, against the
fog and gloom of handwritten tallies, fuzzy dittos, and idiosyncratic coding schemes.
Slowly, against this backdrop, the idea of a computerized data exchange system began to
emerge. It was against this conceptual background that CHILDES (the name uses a
one-syllable pronunciation) was conceived. The origin of the system can be traced back to the
summer of 1981 when Dan Slobin, Willem Levelt, Susan Ervin-Tripp, and Brian
MacWhinney discussed the possibility of creating an archive for typed, handwritten, and
computerized transcripts to be located at the Max-Planck-Institut für Psycholinguistik in
Nijmegen. In 1983, the MacArthur Foundation funded meetings of developmental
researchers in which Elizabeth Bates, Brian MacWhinney, Catherine Snow, and other
2.5 Connectivity
Since 1984, when the CHILDES Project began in earnest, the world of computers has
gone through a series of remarkable revolutions, each introducing new opportunities and
challenges. The processing power of the home computer now dwarfs the power of the
mainframe of the 1980s; new machines are now shipped with built-in audiovisual
capabilities; and devices such as CD-ROMs and optical disks offer enormous storage
capacity at reasonable prices. This new hardware has now opened up the possibility for
multimedia access to digitized audio and video from links inside the written transcripts.
In effect, a transcript is now the starting point for a new exploratory reality in which the
whole interaction is accessible from the transcript. Although researchers have just now
begun to make use of these new tools, the current shape of the CHILDES system reflects
many of these new realities. In the pages that follow, you will learn about how we are
using this new technology to provide rapid access to the database and to permit the
linkage of transcripts to digitized audio and video records, even over the Internet.
homepage you click on “Index to Corpora” and then Dutch. From there,
you might want to read about the contents of the CLPF corpus for early
phonological development in Dutch. You then click on the CLPF link and it
takes you to the fuller corpus description with photos from the contributors.
From links on that page you can either browse the corpus, download the
transcripts, or download the media.
In addition to these basic manual resources, there are these further facilities for
learning CHAT and CLAN, all of which can be downloaded from the talkbank.org
and childes.talkbank.org server sites:
1. Nan Bernstein Ratner and Shelley Brundage have contributed a manual
designed specifically for clinical practitioners called the SLP’s Guide to
CLAN.
2. There are versions of the manuals in Japanese and Chinese.
3. Davida Fromm has produced a series of screencasts describing how to use
basic features of CLAN.
Sacco, and Gergely Sikuta. Barbara Pan, Jeff Sokolov, and Pam Rollins also provided a
reading of the final draft of the 1995 version of the manual.
3.5 Dissemination
Since the beginning of the project, Catherine Snow has continually played a pivotal
role in shaping policy, building the database, organizing workshops, and determining the
shape of CHAT and CLAN. Catherine Snow collaborated with Jeffrey Sokolov, Pam
Rollins, and Barbara Pan to construct a series of tutorial exercises and demonstration
analyses that appeared in Sokolov & Snow (1994). Those exercises form the basis for
similar tutorial sections in the current manual. Catherine Snow has contributed six major
corpora to the database and has conducted CHILDES workshops in a dozen countries.
Several other colleagues have helped disseminate the CHILDES system through
workshops, visits, and Internet facilities. Hidetosi Sirai established a CHILDES file
server mirror at Chukyo University in Japan and Steven Gillis established a mirror at the
University of Antwerp. Steven Gillis, Kim Plunkett, Johannes Wagner, and Sven
Strömqvist helped propagate the CHILDES system at universities in Northern and
Central Europe. Susanne Miyata has brought together a vital group of child language
researchers using CHILDES to study the acquisition of Japanese and has supervised the
translation of the current manual into Japanese. In Italy, Elena Pizzuto organized
symposia for developing the CHILDES system and has supervised the translation of the
manual into Italian. Magdalena Smoczynska in Krakow and Wolfgang Dressler in Vienna
have helped new researchers who are learning to use CHILDES for languages spoken in
Eastern Europe. Miquel Serra has supported a series of CHILDES workshops in
Barcelona. Zhou Jing organized a workshop in Nanjing and Chienju Chang organized a
workshop in Taipei.
The establishment and promotion of additional segments of TalkBank now relies on a
wide array of inputs. Yvan Rose has spearheaded the creation of PhonBank. Nan
Bernstein Ratner has led the development of FluencyBank. Audrey Holland, Davida
Fromm, and Margie Forbes have worked to create AphasiaBank. Johannes Wagner has
created SamtaleBank and segments of CABank. Jerry Goldman developed the SCOTUS
segment of CABank. Roy Pea contributed to the development of ClassBank. Within each
of these communities, scores of other scholars have helped with donations of corpora,
analyses, and ideas.
3.6 Funding
From 1984 to 1988, the John D. and Catherine T. MacArthur Foundation supported
the CHILDES Project. In 1988, the National Science Foundation provided an equipment
grant that allowed us to put the database on the Internet and on CD-ROMs. Since 1989,
the CHILDES Project has been supported by an ongoing grant from the National
Institutes of Health.
If you are primarily interested in analyzing data already stored in TalkBank, you do
not need to learn the CHAT transcription format in much detail and you will only need to
use the editor to open and read files. In that case, you may wish to focus your efforts on
learning to use the CLAN programs. If you plan to transcribe new data, then you also
need to work with the current manual to learn to use CHAT.
Teachers will also want to pay particular attention to the sections of the CLAN
manual that present a tutorial introduction. Using some of the examples given there, you
can construct additional materials to encourage students to explore the database to test
out particular hypotheses.
The TalkBank system was not intended to address all issues in the study of language
learning, or to be used by all students of spontaneous interactions. The CHAT system is
comprehensive, but it is not ideal for all purposes. The programs are powerful, but they
cannot solve all analytic problems. It is not the goal of TalkBank to provide facilities for
all research endeavors or to force all research into some uniform mold. On the contrary,
the programs are designed to offer support for alternative analytic frameworks. For
example, the editor now supports the various codes of Conversation Analysis (CA)
format, as alternatives and supplements to CHAT format. Moreover, we have developed
programs that convert between CHAT format and other common formats, because we
know that users often need to run analyses in these other formats.
3.8 Changes
The TalkBank tools have been extensively tested for ease of application, accuracy,
and reliability. However, change is fundamental to any research enterprise. Researchers
are constantly pursuing better ways of coding and analyzing data. It is important that the
tools keep pace with these changing requirements. For this reason, there will be
revisions to CHAT, the programs, and the database as long as the TalkBank Project is
active.
4 Principles
The CHAT system provides a standardized format for producing computerized
transcripts of face-to-face conversational interactions. These interactions may involve
children and parents, doctors and patients, or teachers and second-language learners.
Despite the differences between these interactions, there are enough common features to
allow for the creation of a single general transcription system. The system described here
is designed for use with both normal and disordered populations. It can be used with
learners of all types, including children, second-language learners, and adults recovering
from aphasic disorders. The system provides options for basic discourse transcription as
well as detailed phonological and morphological analysis. The system bears the acronym
“CHAT,” which stands for Codes for the Human Analysis of Transcripts. CHAT is the
standard transcription system for the TalkBank and CHILDES (Child Language Data
Exchange System) Projects. All of the transcripts in the TalkBank databases are in CHAT
format.
What makes CHAT particularly powerful is the fact that files transcribed in CHAT
can also be analyzed by the CLAN programs that are described in the CLAN manual,
which is an electronic companion piece to this manual. The CLAN programs can track a
wide variety of structures, compute automatic indices, and analyze morphosyntax.
Moreover, because all CHAT files can now also be translated to a highly structured form
of XML (a language used for text documents on the web), they are now also compatible
with a wide range of other powerful computer programs such as ELAN, Praat,
EXMARaLDA, Phon, Transcriber, and so on.
The TalkBank system has had a major impact on the study of child language. At the
time of the last monitoring in 2016, there were over 7000 published articles that had
made use of the programs and database. In 2016, the size of the database had grown to
over 110 million words, making it by far the largest database of conversational
interactions available anywhere. The total number of researchers who have joined as
members across the length of the project is now over 5000. Of course, not all of these
people are making active use of the tools at all times. However, it is safe to say that, at
any given point in time, well over 100 groups of researchers around the world are
involved in new data collection and transcription using the CHAT system. Eventually the
data collected in these various projects will all be contributed to the database.
4.1 Computerization
Public inspection of experimental data is a crucial prerequisite for serious scientific
progress. Imagine how genetics would function if every experimenter had his or her own
individual strain of peas or drosophila and refused to allow them to be tested by other
experimenters. What would happen in geology if every scientist kept his or her own set of
rock specimens and refused to compare them with those of other researchers? In some
fields the basic phenomena in question are so clearly open to public inspection that this is
not a problem. The basic facts of planetary motion are open for all to see, as are the basic
facts underlying Newtonian mechanics.
Unfortunately, in language studies, a free and open sharing and exchange of data has
not always been the norm. In earlier decades, researchers jealously guarded their field
notes from a particular language community or subject type, refusing to share them
openly with the broader community. Various justifications were given for this practice. It
was sometimes claimed that other researchers would not fully appreciate the nature of the
data or that they might misrepresent crucial patterns. Sometimes, it was claimed that only
someone who had actually participated in the community or the interaction could
understand the nature of the language and the interactions. In some cases, these
limitations were real and important. However, all such restrictions on the sharing of data
inevitably impede the progress of the scientific study of language learning.
Within the field of language acquisition studies it is now understood that the
advantages of sharing data outweigh the potential dangers. The question is no longer
whether data should be shared, but rather how they can be shared in a reliable and
responsible fashion. The computerization of transcripts opens up the possibility for many
types of data sharing and analysis that otherwise would have been impossible. However,
the full exploitation of this opportunity requires the development of a standardized
system for data transcription and analysis.
nonstandard learner strings to standard forms of the adult language. For example, when a
child says “put on my jamas,” the transcriber may instead enter “put on my pajamas,”
reasoning unconsciously that “jamas” is simply a childish form of “pajamas.” This type
of regularization of the child form to the adult lexical norm can lead to misunderstanding
of the shape of the child's lexicon. For example, it could be the case that the child uses
“jamas” and “pajamas” to refer to two very different things (Clark, 1987; MacWhinney,
1989).
There are two types of errors possible here. One involves mapping a learner's spoken
form onto an adult form when, in fact, there was no real correspondence. This is the
problem of overnormalization. The second type of error involves failing to map a learner's
spoken form onto an adult form when, in fact, there is a correspondence. This is the
problem of undernormalization. The goal of transcribers should be to avoid both the
Scylla of overnormalization and the Charybdis of undernormalization. Steering a course
between these two dangers is no easy matter. A transcription system can provide devices
to aid in this process, but it cannot guarantee safe passage.
Transcribers also often tend to assimilate the shape of sounds spoken by the learner to
the shapes that are dictated by morphosyntactic patterns. For example, Fletcher (1985)
noted that both children and adults generally produce “have” as “uv” before main verbs.
As a result, forms like “might have gone” assimilate to “mightuv gone.” Fletcher
believed that younger children have not yet learned to associate the full auxiliary “have”
with the contracted form. If we write the children's forms as “might have,” we then end
up mischaracterizing the structure of their lexicon. To take another example, we can note
that, in French, the various endings of the verb in the present tense are distinguished in
spelling, whereas they are homophonous in speech. If a child says /mãʒ/ “eat,” are we to
transcribe it as first person singular mange, as second person singular manges, or as the
imperative mange? If the child says /mãʒe/, should we transcribe it as the infinitive
manger, the participle mangé, or the second person formal mangez?
CHAT deals with these problems in three ways. First, it uses IPA as a uniform way
of transcribing discourse phonetically. Second, the editor allows the user to link the
digitized audio record of the interaction directly to the transcript. This is the system
called “sonic CHAT.” With these sonic CHAT links, it is possible to double-click on a
sentence and hear its sound immediately. Having the actual sound produced by the child
directly available in the transcript takes some of the burden off of the transcription
system. However, whenever computerized analyses are based not on the original audio
signal but on transcribed orthographic forms, one must continue to understand the limits
of transcription conventions. Third, for those who wish to avoid the work involved in IPA
transcription or sonic CHAT, there is a system for using nonstandard lexical forms, so that
the form “might (h)ave” would be universally recognized as the spelling of “mightof”,
the contracted form of “might have.” More extreme cases of phonological variation can
be annotated as in this example: popo [: hippopotamus].
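For instance, a single main line can combine both of these devices; the speaker code and utterance below are invented purely for illustration.
*CHI: I might (h)ave seen a popo [: hippopotamus] .
The parentheses enclose material that was not actually pronounced, and the [: text] code supplies the standard target form while preserving the form the speaker actually produced.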
whether an omission is grammatical or not. In that case, it may be helpful to have some
way of blurring the distinction. CHAT has certain symbols that can be used when a
categorization cannot be made. It is important to remember that many of the CHAT
symbols are entirely optional. Whenever you feel that you are being forced to make a
distinction, check the manual to see whether the particular coding choice is actually
required. If it is not required, then simply omit the code altogether.
Slobin's view of the pressures shaping human language can be extended to analyze
the pressures shaping a transcription system. In many regards, a transcription system is
much like any human language. It needs to be clear in its markings of categories, and still
preserve readability and ease of transcription. However, transcripts address rather
different audiences. One audience is the human audience of transcribers, analysts, and
readers. The other audience is the digital computer and its programs. To deal with these
two audiences, a system for computerized transcription needs to achieve the following
goals:
Clarity: Every symbol used in the coding system should have some clear and
definable realworld referent. Symbols that mark particular words should always be
spelled in a consistent manner. Symbols that mark particular conversational patterns
should refer to consistently observable patterns. Codes must steer between the Scylla of
overnormalization and the Charybdis of undernormalization discussed earlier.
Distinctions must avoid being either too fine or too coarse. Another way of looking at
clarity is through the notion of systematicity. Codes, words, and symbols must be used in
a consistent manner across transcripts. Ideally, each code should always have a unique
meaning independent of the presence of other codes or the particular transcript in which it
is located. If interactions are necessary, as in hierarchical coding systems, these
interactions need to be systematically described.
Readability: Just as human language needs to be easy to process, so transcripts need
to be easy to read. This goal often runs directly counter to the first goal. In the TalkBank
system, we have attempted to provide a variety of CHAT options that will allow a user to
maximize the readability of a transcript. We have also provided CLAN tools that will allow
a reader to suppress the less readable aspects of a transcript when the goal of readability is
more important than the goal of clarity of marking.
Ease of data entry: As distinctions proliferate within a transcription system, data
entry becomes increasingly difficult and error-prone. There are two ways of dealing with
this problem. One method attempts to simplify the coding scheme and its categories. The
problem with this approach is that it sacrifices clarity. The second method attempts to
help the transcriber by providing computational aids. The CLAN programs follow this
path. They provide systems for the automatic checking of transcription accuracy,
methods for the automatic analysis of morphology and syntax, and tools for the
semi-automatic entry of codes. However, the basic process of transcription has not been
automated and remains the major task during data entry.
5 minCHAT
CHAT provides both basic and advanced formats for transcription and coding. The
basic level of CHAT is called minCHAT. New users should start by learning minCHAT.
This system looks much like other intuitive transcription systems that are in general use
in the fields of child language and discourse analysis. However, eventually users will find
that there is something they want to be able to code that goes beyond minCHAT. At that
point, they should move on to learning midCHAT.
Here is a sample that illustrates these principles. This file is syntactically correct and
uses the minimum number of CHAT conventions while still maintaining compatibility
with the CLAN commands.
@Begin
@Languages: eng
@Participants: CHI Ross Target_Child, FAT Brian Father
@ID: eng|macwhinney|CHI|2;10.10||||Target_Child|||
@ID: eng|macwhinney|FAT|35;02.||||Father|||
*CHI: why isn't Mommy coming?
%com: Mother usually picks Ross up around 4 PM.
*FAT: don't worry.
*FAT: she'll be here soon.
*CHI: good.
@End
learn just enough about minCHAT and minCLAN to see your path through these four
crucial steps:
1. entry of a small set of your data into a CHAT file,
2. successful running of the CHECK command inside the editor to guarantee accuracy in your CHAT file,
3. development of a series of codes that will interface with the particular CLAN commands most appropriate for your analysis, and
4. running of the relevant CLAN commands, so that you can be sure that the results you will get will properly test the hypotheses you wish to develop (sample commands are sketched below).
If you go through these steps first, you can guarantee in advance the successful
outcome of your project. You can avoid ending up in a situation in which you have
transcribed hundreds of hours of data in a way that does not match correctly with the
input requirements for CLAN.
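As a rough sketch of what steps 2 and 4 can look like, the following commands could be typed into the CLAN Commands window for a transcript named sample.cha; the file name is hypothetical, and the CLAN manual documents the full range of switches.
check sample.cha
freq +t*CHI sample.cha
mlu +t*CHI sample.cha
Here CHECK verifies that the file conforms to CHAT, while FREQ and MLU compute a word frequency count and the mean length of utterance; the +t*CHI switch restricts each analysis to the utterances of the CHI speaker.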
brushing your teeth. It may be hard at first to remember to use the command, but the
more you use it the easier it becomes and the better the final results.
6 Corpus Organization
6.1 File Naming
Each TalkBank database consists of a collection of corpora, organized into larger
folders by languages and language groups. For example, there is a top-level folder called
Romance in which one finds subfolders for Spanish, French, and other Romance
languages. Within the Spanish folder, there are then dozens of further folders, each of
which has a single corpus. Within a corpus, files may be further grouped by individual
children or groups of children. For longitudinal corpora, we recommend that file names
use the age of the child followed by a letter if there are several recordings from a given
day. For example, the transcript from the fourth taping session when the child was 2;3.22
would be called 20322d.cha. It is better to use ages for file names, rather than dates or
other material.
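Under this scheme, the age is collapsed into digits (years, then two-digit months, then two-digit days) and a final letter distinguishes recordings made on the same day. The second file name below is invented simply to illustrate the pattern.
20322d.cha   the fourth recording made at age 2;3.22
30007a.cha   the first recording made at age 3;0.07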
6.2 Metadata
Increasingly, researchers rely on Internet systems to locate and retrieve language data
and resources. There are currently several systems designed to facilitate this process and
we have adapted the indexing and registration of materials in the CHILDES and
TalkBank systems to provide information that can be incorporated into these systems.
The two systems designed specifically to deal with linguistic data are OLAC (Open
Language Archives Community at www.language-archives.org) and VLO (Virtual
Language Observatory at vlo.clarin.eu). These systems allow researchers to search for
whole corpora or single files, using terms such as Cantonese, video, gesture, or aphasia.
In order to publish or register TalkBank data within these systems, we create a
0metadata.cdc file at the top level of each corpus in TalkBank. Some of the fields in this
metadata file are designed for indexing in OLAC and some are designed for the CMDI
system used by VLO and the related facility called The Language Archive (tla.mpi.nl).
Because of the highly specific nature of the terms and the software used for regular
harvesting and publication of these data, we do not require users to create the
0metadata.cdc files. The following table explains what keywords are expected within
each field of these files. The first fields listed are for OLAC and the later ones are for
CMDI. For CMDI, the values unknown and unspecified are also available for most of the
fields.
For the CMDI/VLO/CLARIN system, there must be a cmdi.xml file for each
transcript. To create these several thousand files, we use a CLAN program that takes the
information from the 0metadata.cdc files and from the header lines in each transcript.
The information in the @ID field is particularly important in this process. It also relies
on the fact that we use an isomorphic file system for indexing media files. Fortunately,
users do not need to concern themselves with all these many additional technical details.
pseudonyms. Anonymization is not necessary when the subject of the transcriptions is the
researcher's own child, as long as the child grants permission for the use of the data.
History. There should be detailed information on the history of the project. How was
funding obtained? What were the goals of the project? How was data collected? What
was the sampling procedure? How was transcription done? What was ignored in
transcription? Were transcribers trained? Was reliability checked? Was coding done?
What codes were used? Was the material computerized? How?
Codes. If there are projectspecific codes, these should be described.
Biographical data. Where possible, extensive demographic, dialectological, and
psychometric data should be provided for each informant. There should be information
on topics such as age, gender, siblings, schooling, social class, occupation, previous
residences, religion, interests, friends, and so forth. Information on where the parents
grew up and the various residences of the family is particularly important in attempting to
understand sociolinguistic issues regarding language change, regionalism, and dialect.
Without detailed information about specific dialect features, it is difficult to know
whether these particular markers are being used throughout the language or just in certain
regions.
Situational descriptions. The readme file should include descriptions of the contexts
of the recordings, such as the layout of the child's home and bedroom or the nature of the
activities being recorded. Additional specific situational information should be included
in the @Situation and @Comment fields in each file.
7 File Headers
The three major components of a CHAT transcript are the file headers, the main tier,
and the dependent tiers. In this chapter we discuss creating the first major component –
the file headers. A computerized transcript in CHAT format begins with a series of
“header” lines, which tell us about things such as the date of the recording, the names of
the participants, the ages of the participants, the setting of the interaction, and so forth.
A header is a line of text that gives information about the participants and the setting.
All headers begin with the “@” sign. Some headers require nothing more than the @ sign
and the header name. These are “bare” headers such as @Begin or @New Episode. How
ever, most headers require that there be some additional material. This additional material
is called an “entry.” Headers that take entries must have a colon, which is then followed
by one or two tabs and the required entry. By default, tabs are usually understood to be
placed at eight-character intervals. The material up to the colon is called the “header
name.” In the following example, “@Media” and “@Date” are both header names:
@Media: abe88, video
@Date: 25-JAN-1983
The text that follows the header name is called the “header entry.” Here, “abe88,
video” and “25-JAN-1983” are the header entries. The header name and the header entry
together are called the “header line.” The header line should never have a punctuation
mark at the end. In CHAT, only utterances actually spoken by the subjects receive final
punctuation.
This chapter presents a set of headers that researchers have considered important.
Except for the @Begin, @Languages, @Participants, @ID, and @End headers, none of
the headers are required and you should feel free to use only those headers that you feel
are needed for the accurate documentation of your corpus.
@Font
This header is used to set the default font for the file. This line appears at the
beginning of the file and its presence is hidden in the CLAN editor. When this header is
missing, CLAN tries to determine which font is most appropriate for use with the current
file by examining information in the @Languages and @Options headers. If CLAN’s
choice is not appropriate for the file, then the user will have to change the font. After this
is done, the font information will be stored in this header line. Files that are retrieved
from the database often do not have this header included, thereby allowing CLAN and
the user to decide which font is most appropriate for viewing the current file.
@UTF8
This hidden header follows after the @Font header. All files in the database use this
header to mark the fact that they are encoded in UTF-8. If the file was produced outside
of CLAN and this header is missing, CLAN will complain and ask the user to verify
whether the file should be read in UTF-8. Often this means that the user should run the
CP2UTF program to convert the file to UTF-8.
@PID
This hidden header follows after the @UTF8 header, and it declares the PID value of the
transcript for the Handle System (www.handle.net), which allows for persistent
identification of the location of digital objects. These numbers are then further processed
using the CMDI metadata scheme for publication and harvesting over the web through
the CLARIN (www.clarin.eu) schema that creates access through TLA (The Language
Archive; https://tla.mpi.nl/) and the VLO (Virtual Language Observatory;
https://vlo.clarin.eu), as well as parallel methods from OLAC (Open Language
Archives Community).
These values can be entered into any system that resolves PIDs to locate the required
resource, such as the server at https://128.2.71.222:8000. For example, one of the files
from the MacWhinney corpus has the number 11312/c000440681, which refers to the
CMDI metadata file for that transcript. If you change the final 1 to 2, it refers to the
transcript itself. If you change the final 1 to 3, it refers to the media, if that exists. There are
also PID numbers in the 0metadata.cdc file that accompanies each corpus. When those
numbers end in 1, they refer to the CMDI file associated with the corpus. If you change
that 1 to 2, it refers to the .zip file that you can download for the corpus.
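Laid out concretely, the handles for the example transcript mentioned above resolve as follows, with only the final digit changing:
11312/c000440681   the CMDI metadata file for the transcript
11312/c000440682   the transcript itself
11312/c000440683   the linked media, if it exists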
@ColorWords
This hidden header stores the color values that users create when using the Color
Keywords dialog.
@Begin
This header is always the first visible header placed at the beginning of the file. It is
needed to guarantee that no material has been lost at the beginning of the file. This is a
“bare” header that takes no entry and uses no colon.
@Languages:
This is the second visible header; it tells the programs which language is being used
in the dialogues. Here is an example of this line for a bilingual transcript using Swedish
and Portuguese.
@Languages: swe, por
The language codes come from the international ISO 639-3 standard. For the
languages currently in the database, these three-letter codes and extended codes are used:
We continually update this list, and CLAN relies on a file in the lib/fixes directory
called ISO639.cut that lists the current languages. There are special conditions for
certain languages. For example, tone languages like Cantonese, Mandarin, and Thai are
allowed to have Romanized word forms that include tone numbers. In addition, Chinese
words in non-Roman characters can use numbers to disambiguate homonyms.
In multilingual corpora, several codes can be combined on the @Languages line. The
first code given is for the language used most frequently in the transcript. Individual
utterances in the second or third most frequent language can be marked with precodes, as
in this example:
*CHI: [- eng] this is my juguete@s .
In this example, Spanish is the most frequent language, but the particular sentence is
marked as English. The @Languages header lists spa for Spanish, and then eng for
English. Within this English sentence, the use of a Spanish word is then marked with @s.
When the @s is used in the main body of the transcript without the [- eng], it indicates a
shift to English, rather than to Spanish. Please see the section on code-switching
annotation for further details on the use of these codes for interactions with
code-switching.
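Putting these conventions together, the relevant header and main line of such a Spanish-dominant transcript would look like this:
@Languages: spa, eng
*CHI: [- eng] this is my juguete@s .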
@Participants:
This is the third visible header. Like the @Begin and @Languages headers, it is
obligatory. It lists all of the actors within the file. The format for this header is XXX
Name Role, XXX Name Role, XXX Name Role. XXX stands for the three-letter speaker
ID. Here is an example of a completed @Participants header line:
@Participants: SAR Sue_Day Target_Child, CAR Carol Mother
Participants are identified by three elements: their speaker ID, their name and their
role:
Speaker ID. The speaker ID is usually composed of three letters. The code may be
based either on the participant's name, as in *ROS or *BIL, or on her role, as in *CHI or
*MOT. In corpora studying single children, the form *CHI should always be used for the
Target_Child, as in this example.
@Participants: CHI Mark Target_Child, MOT Mary Mother
Name. The speaker's name can be omitted. If CLAN finds only a three-letter ID and a
role, it will assume that the name has been omitted. In order to preserve anonymity, it is
often useful to include a pseudonym for the name, because the pseudonym will also be
used in the body of the transcript. For CLAN to correctly parse the participants line,
multiple-word name definitions such as “Sue Day” need to be joined in the form
“Sue_Day.”
Role. After the ID and name, you type in the role of the speaker. There are a fixed set
of roles specified in the depfile.cut file used by CHECK and we recommend trying to use
these fixed roles whenever possible. Please consult that file for the full list. You will also
see this same list of possible roles in the “role” segment of the “ID Headers” dialog box.
All of these roles are hardwired into the depfile.cut file used by CHECK. If one of these
standard roles does not work, it would be best to use one of the generic age roles, like
Adult, Child, or Teenager. Then, the exact nature of the role can be put in the place of the
name, as in these examples:
@Participants: TBO Toll_Booth_Operator Adult, AIR Airport_Attendant Adult, SI1 First_Sibling Sibling, SI2 Second_Sibling Sibling, OFF MOT_to_INV OffScript, NON Computer_Talk Non_Human
@Options:
This header is not obligatory, but it is frequently needed. When it occurs, it must
follow the @Participants line. This header allows the checking programs (CHECK and
the XML validator) to suspend certain checking rules for certain file types. The spelling
of these options is case-sensitive. A minimal example appears after the list below.
1. CA (Conversation Analysis). Use of this option suspends the usual requirement
for utterance terminators.
2. CAUnicode. This option is needed for CA transcripts using East Asian scripts
in order to automatically load Arial Unicode instead of CAFont. Unfortunately,
overlap alignment is not right in a variable-width font like Arial Unicode.
3. heritage. Use of this option tells CHECK and the validator not to look at the
content of the main lines at all. This radical blockage of the function of CHECK
is only recommended for people working with CA files done in the traditional
Jeffersonian format. When this option is used, text may be placed into italics, as
in traditional CA.
4. sign. Use of this option permits the use of all capitals in words for Sign Language
notation.
5. IPA. Use of this option permits the use of IPA notation on the main line.
6. multi. Use of this option tells CHECK and Chatter to expect multiple bullets on a
single line. This can be used for data that come from programs like Praat that
mark time for each word.
7. caps. This option turns off CLAN’s restriction against having capital letters
inside words.
8. bullets. This option turns off the requirement that each time-marking bullet
should begin after the previous one.
@ID:
This header is used to control programs such as STATFREQ, output to Excel, and
new programs based on XML. The form of this line is:
@ID: language|corpus|code|age|sex|group|SES|role|education|custom|
There must be one @ID field for each participant. Often you will not care to encode
all of this information. In that case, you can leave some of these fields empty. Here is a
typical @ID header.
@ID: en|macwhinney|CHI|2;10.10||||Target_Child|||
To facilitate typing of these headers, you can run the CHECK program on a new
CHAT file. If CHECK does not see @ID headers, it will use the @Participants line to
insert a set of @ID headers to which you can then add further information. Alternatively,
you can use the INSERT program to create these fields automatically from the
information in the @Participants line. For even more complete control over creation of
these @ID headers, you can use the dialog system that comes up when you have an open
CHAT file and select “ID Headers” under the Tiers Menu pull-down. Here is a sample
version of this dialog box:
Here are some further specifications of the codes in the fields for the @ID header.
Language: as in the ISO codes table given above
Corpus: a one-word label for the corpus in lowercase
Code: the three-letter code for the speaker in capitals
Age: the age of the speaker (see below)
Sex: either “male” or “female” in lowercase
Group: any single-word label.
Common abbreviations include: LI language impaired, LT late talker, SLI
specific language impaired, TD typically developing, ASD, RHD, AD,
TBI.
Eth, SES: Ethnicity (White, Black, Latino, Asian, Pacific), SES (WC for working
class, UC for upper class, MC for middle class, LI for limited income)
Note: if both Ethnicity and SES are given, there is a comma separating
them. If Ethnicity is not listed, it is assumed to be White.
Role: the role as given in the @Participants line
Education: educational level of the speaker (or the parent): Elem, HS, UG, Grad, Doc
Custom: any additional information needed for a given project
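To make these fields concrete, here is a hypothetical @ID line for an invented corpus and participant; the corpus name, age, and other values are purely illustrative and use the abbreviations listed above (TD for typically developing, MC for middle class):
@ID: en|sample|CHI|3;06.14|female|TD|MC|Target_Child|Grad||
Because no ethnicity is listed before the MC value, it would be assumed to be White, as noted above.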
It is important to use the correct format for the Target_Child’s age. This field uses
the form years;months.days as in 2;11.17 for 2 years, 11 months, and 17 days. The fields
for the months and days should always have two places. Using this format is important
when it comes to ordering data by age in spreadsheet systems such as Excel. This often
means that you need to add leading zeroes, as in 2;05.06 and 5;09.01. However, you do
not need to add any leading zeroes before the years. If you do not know the child's age in
days, you can simply use years and months, as in 6;04. with a period after the months. If
you do not know the months, you can use the form 6; with the semicolon after the years.
If you only know the child’s birthdate and the date of the transcript, you can use the
DATES program to compute the child’s age.
@Media:
This header is used to tell CLAN how to locate and play back media that are linked to
transcripts. The first field in this header specifies the name of the media file. Extensions
should be omitted. If the media file is abe88.wav, then just enter “abe88”. Then declare
the format as “sound” or “video”. It is also possible to add the terms “missing” or
“unlinked” after the media type. So the line has this shape:
@Media: abe88, sound, missing
@End
Like the @Begin header, this header uses no colon and takes no entry. It is placed at
the end of the file as the very last line. Adding this header provides a safeguard against
the danger of undetected file truncation during copying.
Additional headers follow the @Media field. These headers, which are all optional, describe
various general facts about the file.
@Exceptions:
This header allows for the use of special word forms in certain corpora.
@Interaction Type:
The possible entries here include: constructed computer phonecall telechat meeting
work medical classroom tutorial private family sports religious legal face_to_face
@Location:
This header should include the city, state or province, and country in which the
interaction took place. Here is an example of a completed header line:
@Location: Boston, MA, USA
@Number:
The possible entries here include: two three four five more audience
@Recording Quality:
Possible entries here are: poor, fair, good, and excellent.
@Room Layout:
@Tape Location:
This header indicates the specific tape ID, side and footage. This is very important for
identifying the spot on the analog tape from which the transcription was made. The entry
for this header should include the tape ID, side and footage. Here is an example of this
header:
@Tape Location: tape74, side a, 104
@Time Duration:
It is often necessary to indicate the time at which the audiotaping began and the
amount of time that passed during the course of the taping, as in the following header:
@Time Duration: 12:30-13:30
This header provides the absolute time during which the taping occurred. For most
projects what is important is not the absolute time, but the time of individual events
relative to each other. This sort of relative timing can be provided by coding on the %tim
dependent tier in conjunction with the @Time Start header described next. However, this
type of coding is really only needed for older transcripts for which there is no media.
None of the CLAN programs make use of this information.
@Time Start:
If you are tracking elapsed time on the %tim tier, the @Time Start header can be used
to indicate the absolute time at which the timing marks begin. If a new @Time Start
header is placed in the middle of the transcript, this “restarts” the clock. This method is
really only appropriate for older transcripts for which there is no media. Transcripts
linked to media will not need this information. None of the CLAN programs make use of
this information.
@Time Start: 12:30
@Transcriber:
This line identifies the people who transcribed and coded the file. Having this
indicated is often helpful later, when questions arise. It also provides a way of
acknowledging the people who have taken the time to make the data available for further
study.
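For example, an entry naming a (purely hypothetical) transcriber might read:
@Transcriber: Mary Jones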
@Transcription:
The possible entries here are: eye_dialect, partial, full, detailed, coarse, checked
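For example, a file whose transcription has been checked might carry the entry (the choice of value is illustrative):
@Transcription: checked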
@Warning:
This header is used to warn the user about certain defects or peculiarities in the
collection and transcription of the data in the file. Some typical warnings are as follows:
1. These data are not useful for the analysis of overlaps, because overlapping was
not accurately transcribed.
2. These data contain no information regarding the context. Therefore they will be
inappropriate for many types of analysis.
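Such a warning would appear in the headers as a line like the following, here using the first of the examples above:
@Warning: these data are not useful for the analysis of overlaps, because overlapping was not accurately transcribed.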
@Activities:
This header describes the activities involved in the situation. The entry is a list of
component activities in the situation. Suppose the @Situation header reads, “Getting
ready to go out.” The @Activities header would then list what was involved in this, such
as putting on coats, gathering school books, and saying goodbye.
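Continuing that example, the header line itself might read:
@Activities: putting on coats, gathering school books, saying goodbye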
@Bck:
Diary material that was not originally transcribed in the CHAT format often has
explanatory or background material placed before a child's utterance. When converting
this material to the CHAT format, it is sometimes impossible to decide whether this
background material occurs before, during, or after the utterance. In order to avoid having
to make these decisions after the fact, one can simply enter it in an @Bck header.
@Bck: Rachel was fussing and pointing toward the cabinet where the
cookies are stored.
*RAC: cookie [/] cookie.
@Bg and @Bg:
These headers are used to mark the beginning of a “gem” for analysis by GEM. If
there is a colon, you must follow the colon with a tab and then one or more code words.
@Blank
This header is created by the TEXTIN program. It is used to represent the fact that
some written text includes a blank line or new paragraph. It should not be used for
transcripts of spoken language.
@Comment:
This header can be used as an all-purpose comment line. Any type of comment can be
entered on an @Comment line. When the comment refers to a particular utterance, use
the %com line. When the comment refers to more general material, use the @Comment
header. If the comment is intended to apply to the file as a whole, place the @Comment
header along with the constant headers before the first utterance. Instead of trying to
make up a new coding tier name such as “@Gestational Age” for a special purpose type
of information, it is best to use the @Comment field, as in this example:
@Comment: Gestational age of MAR is 7 months
@Comment: Birthweight of MAR is 6 lbs. 4 oz
Another example of a special @Comment field is used in the diary notes of the
MacWhinney corpus, where they have this shape:
@Comment: DiaryBrian – Ross said “I don’t need to throw my blocks
out the window anymore.”
@Date:
This header indicates the date of the interaction. The entry for this header is given in
the form day-month-year. The date is abbreviated in the same way as in the @Birth
header entry. Here is an example of a completed @Date header line:
@Date: 01-JUL-1965
Because we have some corpora going back over a century, it is important to include
the full value for the year. Also, because the days of the month should always have two
digits, it is necessary to add a leading “0” for days such as “01”.
@Eg and @Eg:
These headers are used to mark the end of a “gem” for analysis by the GEM
command. If there is a colon, you must follow the colon with a tab and then one or more
code words. Each @Eg must have a matching @Bg. If the @Eg: form is used, then the
text following it must exactly match the text in the corresponding @Bg: line. You can nest
one set of @Bg-@Eg markers inside another, but double embedding is not allowed. You
can also begin a new pair before finishing the current one, but again this cannot be done
for three beginnings.
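As an illustration, here is a hypothetical gem bounded by matching @Bg: and @Eg: headers; the code word bookreading and the utterances are invented, and a tab follows each colon:
@Bg:	bookreading
*MOT: let's read your book now.
*CHI: read it again.
@Eg:	bookreading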
@G:
This header is used in conjunction with the GEM program, which is described in the
CLAN manual. It marks the beginning of “gems” when no nesting or overlapping of
gems occurs. Each gem is defined as material that begins with an @g marker and ends
with the next @g marker. We refer to these markers as “lazy” gem markers, because
they are easier to use than the @Bg: and @Eg: markers. To use this feature, you need to
also use the +n switch in GEM. You may nest at most one @Bg-@Eg pair inside a series
of @G headers.
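For example, a stretch of lazy gem markers might look like this, assuming that, as with @Bg:, the colon is followed by a tab and a label; the labels and utterances are invented for illustration:
@G:	stacking blocks
*CHI: I put this one on top.
@G:	knocking them down
*CHI: all fall down.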
@New Episode
This header simply marks the fact that there has been a break in the recording and that
a new episode has started. It is a “bare” header that is used without a colon, because it
takes no entry. There is no need to mark the end of the episode because the @New
Episode header indicates both the end of one episode and the beginning of another.
@New Language:
This header is used to indicate the shift from the initially most frequent language
listed in the @Languages header to a new most frequent language. This header should
only be used when there is a marked break in a transcript from the use of one language to
a fairly uniform use of another language.
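For example, if a transcript that begins mostly in English shifts to a sustained stretch of Spanish, the break could be marked as follows (assuming spa as the code for Spanish):
@New Language: spa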
@Page:
This header is used to indicate the page from which some text is taken. It should not
be used for spoken texts.
@Situation:
This changeable header describes the general setting of the interaction. It applies to
all the material that follows it until a new @Situation header appears. The entry for this
header is a standard description of the situation. Try to use standard situations such as:
“breakfast,” “outing,” “bath,” “working,” “visiting playmates,” “school,” or “getting
ready to go out.” Here is an example of the completed header line:
@Situation: Tim and Bill are playing with toys in the hallway.
There should be enough situational information given to allow the user to reconstruct the
situation as much as possible. Who is present? What is the layout of the room or other
space? What is the social role of those present? Who is usually the caregiver? What
activity is in progress? Is the activity routinized and, if so, what is the nature of the
routine? Is the routine occurring in its standard time, place, and personnel configuration?
What objects are present that affect or assist the interaction? It will also be important to
include relevant ethnographic information that would make the interaction interpretable
to the user of the database. For example, if the text is parentchild interaction before an
observer, what is the culture's evaluation of behaviors such as silence, talking a lot,
displaying formulaic skills, defending against challenges, and so forth?
8 Words
Words are the basic building blocks for all sentential and discourse structures. By
studying the development of word use, we can learn an enormous amount about the
growth of syntax, discourse, morphology, and conceptual structure. However, in order to
realize the full potential of computational analysis of word usage, we need to follow
certain basic rules. In particular, we need to make sure that we spell words in a consistent
manner. If we sometimes use the form doughnut and sometimes use the form donut, we
are being inconsistent in our representation of this particular word. If such inconsistencies
are repeated throughout the lexicon, computerized analysis will become inaccurate and
misleading. One of the major goals of CHAT analysis is to maximize systematicity and
minimize inconsistency. In the Introduction, we discussed some of the problems involved
in mapping the speech of language learners onto standard adult forms. This chapter spells
out some rules and heuristics designed to achieve the goal of consistency for word-level
transcription.
One solution to this problem would be to avoid the use of words altogether by
transcribing everything in phonetic or phonemic notation. But this solution would make
the transcript difficult to read and analyze. A great deal of work in language learning is
based on searches for words and combinations of words. If we want to conduct these
lexical analyses, we have to try to match up the child's production to actual words. Work
in the analysis of syntactic development also requires that the text be analyzed in terms of
lexical items. Without a clear representation of lexical items and the ways that they
diverge from the adult standard, it would be impossible to conduct lexical and syntactic
analyses computationally. Even for those researchers who do not plan to conduct lexical
analyses, it is extremely difficult to understand the flow of a transcript if no attempt is
made to relate the learner's sounds to items in the adult language.
At the same time, attempts to force adult lexical forms onto learner forms can
seriously misrepresent the data. The solution to this problem is to devise ways to indicate
the various types of divergences between learner forms and adult standard forms. Note
that we use the term “divergences” rather than “errors.” Although both learners
(MacWhinney & Osser, 1977) and adults (Stemberger, 1985) clearly do make errors,
most of the divergences between learner forms and adult forms are due to structural
aspects of the learner's system.
This chapter discusses the various tools that CHAT provides to mark some of these
divergences of child forms from adult standards. The basic types of codes for divergences
that we discuss are:
1. special learner-form markers,
2. codes for unidentifiable material,
3. codes for incomplete words,
None of these characters or the space can be used within words. Other nonletter
characters such as the plus sign (+) or the at sign (@) can be used within words to express
special meanings. This punctuation set applies to the main lines and all coding lines with
the exception of the %pho and %mod lines which use the system described in the chapter
on Dependent Tiers. Because those systems make use of punctuation markers for special
characters, only the space can be used as a delimiter on the %pho and %mod lines. As the
CLAN manual explains, this default punctuation set can be changed for particular
analyses.
*SAR: I got a bingbing@c.
Here the child has invented the form bingbing to refer to a toy. The word bingbing is not
in the dictionary and must be treated as a special form. To further clarify the use of these
@c forms, the transcriber should create a file called “0lexicon.cdc” that provides glosses
for such forms.
The @c form illustrated in this example is only one of many possible special form
markers that can be devised. The following table lists some of these markers that we have
found useful. However, this categorization system is meant only to be suggestive, not
exhaustive. Researchers may wish to add further distinctions or ignore some of the
categories listed. The particular choice of markers and the decision to code a word with a
marker form is one that is made by the transcriber, not by CHAT. The basic idea is that
CLAN will treat words marked with the special learner-form markers as words and not as
fragments. In addition, the MOR program will not attempt to analyze special forms for
part of speech, as indicated in the final column in this table.
We can define these special markers in the following ways:
Addition can be used to mark an unintelligible string as a word for inclusion on the
%mor line. MOR then recognizes xxx@a as w|xxx. It also recognizes xxx@a$n as, for
example, n|xxx. Adding this feature will still not allow inclusion of sentences with
unintelligible words for MLU and DSS, because the rules for those indices prohibit this.
Babbling can be used to mark low-level early babbling. These forms have no
obvious meaning and are used just to have fun with sound.
Child-invented forms are words created by the child, sometimes from other words,
without obvious derivational morphology. Sometimes they appear to be sound variants of
other words. Sometimes their origin is obscure. However, the child appears to be
convinced that they have meaning and adults sometimes come to use these forms
themselves.
Dialect form is often an interesting general property of a transcript. However, the
coding of phonological dialect variations on the word level should be minimized, because
it often makes transcripts more difficult to read and analyze. Instead, general patterns of
phonological variation can be noted in the readme file.
Echolalia form can be marked for individual words. If a whole utterance is echoed,
then it is better to use the [+ imit] postcode.
Family-specific forms are much like child-invented forms that have been taken over
by the whole family. Sometimes the source of these forms is a child, but they can also
be older members of the family. Sometimes the forms come from variations of words in
another language. An example might be the use of undertoad to refer to some mysterious
being in the surf, although the word was simply undertow initially.
General special form marking with @g can be used when all of the above fail.
However, its use should generally be avoided. Marking with the @ without a following
letter is not accepted by CHECK.
Interjections can be indicated in standard ways, making the use of the @i notation
usually not necessary. Instead of transcribing “ahem@i,” one can simply transcribe ahem
following the conventions listed later.
Letters can either be transcribed with the @l marker or simply as single-character
words. If it is necessary to mark a letter name as plural, it is possible to add a suffix, as in
m@ls.
Multiple letters or strings of letters are marked as @k (as in “kana”).
Neologisms are meant to refer to morphological coinages. If the novel form is
monomorphemic, then it should be characterized as a child-invented form (@c), family-specific
form (@f), or a test word (@t). Note that this usage is only really sanctioned for
CHILDES corpora. For AphasiaBank corpora, neologisms are considered to be forms
that have no real word source, as is typical in jargon aphasia.
Nonvoiced forms are typically produced by hearing-impaired children or their
parents who are mouthing words without making their sounds.
Onomatopoeias include animal sounds and attempts to imitate natural sounds.
Phonologically consistent forms (PCFs) are early forms that are phonologically
consistent, but whose meaning is unclear to the transcriber. Often these forms are
protomorphemes.
Quoting or Metalinguistic reference can be used to either cite or quote single
standard words or special child forms.
Second-language forms derive from some language not usually used in the home.
These are marked with @s: followed by the code for the second language, as in
@s:zh for Mandarin words inside an English sentence.
Part of speech codes. You can also mark the part of speech of a second language
word by using the form @s$ as in perro@s$n to indicate that the Spanish word perro
(dog) is a noun. You can use the same method without the @s for L1 words. Thus, the
form goodbyes$n will be recognized as n|goodbyes.
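For instance, borrowing the perro example above, a hypothetical main-line utterance using this notation might be:
*CHI: look at the perro@s$n.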
Sign language use can be indicated by the @sl marker.
Sign and speech use involves making a sign or informal sign in parallel with saying
the word.
Singing can be marked with @si. Sometimes the phrase that is being sung involves
nonwords, as in lalaleloo@si. In other cases, it involves words that can be joined by
underscores. However, if a larger passage is sung, it is best to transcribe it as speech and
just mark it as being sung through a comment line.
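For example, a hypothetical sung phrase with its words joined by underscores could be transcribed as:
*CHI: twinkle_twinkle_little_star@si .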
Test words are nonce forms generated by the investigators to test the productivity of
the child's grammar.
Unibet transcription can be given on the main line by using the @u marker.
However, if many such forms are being noted, it may be better to construct a %pho line.
With the advent of IPA Unicode, we now prefer to avoid the use of Unibet, relying
instead directly on IPA.
Word play in older children produces forms that may sound much like the forms of
babbling, but which arise from a slightly different process. It is best to use the @b for
forms produced by children younger than 2;0 and @wp for older children.
Excluded forms can be marked with @x.
User-defined special forms can be marked with @z followed by up to five letters of
a user-defined code, such as in word@z:rftd. This format should be used carefully,
because it will be difficult for the MOR program to evaluate words with these codes
unless additional detailed information is added to the sf.cut file.
The @b, @u, and @wp markers allow the transcriber to represent words and
babbling words phonologically on the main line and have CLAN treat them as full lexical
items. This should only be done when the analysis requires that the phonological string
be treated as a word and it is unclear which standard morpheme corresponds to the word.
If a phonological string should not be treated as a full word, it should be marked by a
beginning &, and the @b or @u endings should not be used. Also, if the transcript
includes a complete %pho line for each word and the data are intended for phonological
analysis, it is better to use yy (see the next section) on the main line and then give the
phonological form on the %pho line. If you wish to omit coding of an item on the %pho
line, you can insert the horizontal ellipsis character … (Unicode character number 2026).
This is a single character, not three periods, and it is not the ellipsis character used by
MS Word.
Family-specific forms are special words used only by the family. These are often
derived from child forms that are adopted by all family members. They also include certain
“caregiverese” forms that are not easily recognized by the majority of adult speakers but
which may be common to some areas or some families. Familyspecific forms can be
used by either adults or children.
The @n marker is intended for morphological neologisms and overregularizations,
whereas the @c marker is intended to mark nonce creation of stems. Of course, this
distinction is somewhat arbitrary and incomplete. Whenever a childinvented form is
clearly onomatopoeic, use the @o coding instead of the @c coding. A fuller
characterization of neologisms can be provided by the error coding system presented in a
separate chapter.
Use the symbol xxx when you cannot hear or understand what the speaker is saying.
If you believe you can distinguish the number of unintelligible words, you may use
several xxx strings in a row. Here is an example of the use of the xxx symbol:
*SAR: xxx .
*MOT: what ?
*SAR: I want xxx .
Sarah's first utterance is fully unintelligible. Her second utterance includes some
unintelligible material along with some intelligible material. The MLU and MLT
commands will ignore the xxx symbol when computing mean length of utterance and
other statistics. If you want to have several words included, use as many occurrences of
xxx as you wish.
Use the symbol yyy when you plan to code all material phonologically on a %pho
line. If you are not consistently creating a %pho line in which each word is transcribed in
IPA in the order of the main line, you should use the @u or & notations instead. Here is
an example of the use of yyy:
*SAR: yyy yyy a ball .
%pho: ta gə ə bal
The first two words cannot be matched to particular words, but their phonological
form is given on the %pho line.
The symbol www must be used in conjunction with an %exp tier, which is discussed in the
chapter on dependent tiers. This symbol is used on the main line to indicate material that
a transcriber does not know how to transcribe or does not want to transcribe. For
example, it could be that the material is in a language that the transcriber does not know.
This symbol can also be used when a speaker says something that has no relevance to the
interactions taking place and the experimenter would rather ignore it. For example, www
could indicate a long conversation between adults that would be superfluous to
transcribe. Here is an example of the use of this symbol:
*MOT: www.
%exp: talks to neighbor on the telephone
Disfluencies such as fillers, phonological fragments, and repeated segments are all
coded by a preceding ampersand. More specifically, &- is used for fillers and &+ for
fragments (please see the chapter on disfluency coding for the details). Material
following the ampersand symbol will be ignored by certain CLAN commands, such as
MLU, which computes the mean length of the utterance in a transcript. If you want a
command such as FREQ to count all of the instances of phonological fragments, you
would have to add a switch such as +s"&*" (or +s"&+*").
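As a hypothetical illustration of both notations, the following utterance contains the filler &-um and an invented phonological fragment &+fr:
*CHI: &-um I want the &+fr frog .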
When a word is incomplete, but the intended meaning seems clear, insert the missing
material within parentheses. Do not use this notation for fully omitted words, only for
words with partial omissions. This notation can also be used to derive a consistent
spelling for commonly shortened words, such as (un)til and (be)cause. CLAN will treat
items that are coded in this way as full words. For programs such as FREQ, the
parentheses will essentially be ignored and (be)cause will be treated as if it were because.
The CLAN programs also provide ways of either including or excluding the material in
the parentheses, depending on the goals of the analysis.
*RAL: I been sit(ting) all day .
The inclusion or exclusion of material enclosed in parentheses is well supported by
CLAN and this same notation can also be used for other purposes when necessary. For
example, studies of fluency may find it convenient to code the number of times that a
word is repeated directly on that word, as in this example with three repetitions of the
word dog.
*JEF: that's a dog [x 3].
By default, the programs will remove the [x 3] form and the sentence will be treated
as a three-word utterance. This behavior can be modified by using the +r switch.
The coding of word omissions is a difficult and unreliable process. Many researchers
will prefer not to even open up this particular can of worms. On the other hand,
researchers in language disorders and aphasia often find that the coding of word
omissions is crucial to particular theoretical issues. In such cases, it is important that the
coding of omitted words be done in as clear a manner as possible.
To code an omission, the zero symbol is placed before a word on the text tier. If what
is important is not the actual word omitted, but its part of speech, then a code for the part
of speech can follow the zero. Similarly, the identity of the omitted word is always a
guess. The best guess is placed on the main line. Here is an example of its use:
*EVE: I want 0to go.
It is very difficult to know when a word has been omitted. However, the following
criteria can be used to help make this decision for English data:
1. 0art: Unless there is a missing plural, a common noun without an article is
coded as 0art.
2. 0v: Sentences with no verbs can be coded as having missing verbs. Of
course, often the omission of a verb can be viewed as a grammatical use of
ellipsis.
3. 0aux: In standard English, sentences like “he running” clearly have a
missing auxiliary.
4. 0subj: In English, every finite verb requires a subject.
5. 0pobj: Every preposition requires a prepositional object (pobj). However,
often a preposition may be functioning as an adverb. The coder must look at
the verb to decide whether a word is functioning as a preposition as in “John
put on 0pobj” or an adverb as in “Mary jumped up.”
In English, there seldom are solid grounds for assigning codes like 0adj, 0adv, 0obj,
0prep, or 0dat. Items marked as omitted are not included in the MLU count.
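As hypothetical illustrations of the criteria above, a missing article and a missing auxiliary could be coded as follows:
*CHI: I want 0art cookie .
*CHI: he 0aux running .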
become impossible. On the other hand, there is no reason to avoid using these forms if a
set of standards can be established for their use. Other programs rely on the use of dic
tionaries of words. If the spellings of words are indeterminate, the analyses produced will
be equally indeterminate. For that reason, it is helpful to specify a set of standard
spellings for marginal words. If you have doubts about the spellings of certain words,
you can look in the 0allwords.cdc file that is included in the /lex folder of the MOR
grammar for each language. The words there are listed in alphabetical order.
8.6.1 Letters
To transcribe letters, use the @l symbol after the letter. For example, the letter “b”
would be b@l. Here is an example of the spelling of a letter sequence.
*MOT: could you please spell your name ?
*MAR: it's m@l a@l r@l k@l .
The dictionary says that “abc” is a standard word, so that is accepted without the @l
marking. In Japanese, many letters refer to whole syllables or “kana” such as ro or ka.
To represent this as well as strings of letters in English, use the @k symbol, as in ka@k
or jklmn@k. Using this form, the above example could better be coded as:
*MOT: could you please spell your name?
*MAR: it's mark@k.
However, in this case, the spelling is counted as one word, not four.
line to represent a multi-word English gloss for a single stem, as in “lose_flowers” for
defleurir.
Because the hyphen is used on the %mor line to indicate suffixation, it is important to
avoid confusion between the standard use of the hyphen in compounds such as
“blue-green” and the use of the dash in CHAT. To do this, use the plus sign to replace the
hyphen, as in blue+green instead of blue-green.
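For example, a hypothetical utterance containing this compound would be transcribed as:
*CHI: I want the blue+green crayon .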
8.6.3 Capitalization
In general, capitals are only allowed at the beginnings of words. However, they can
also occur later in a word in these cases:
1. if the @Options tier includes "sign" or "CA",
2. if @u is used at the end of the word,
3. after the + symbol,
4. after the _ underscore symbol,
5. if the word is listed on the @Exceptions tier, or
6. if the capital letter is preceded by a prefix, like "Mac", which is specified in the
depfile using the code [UPREFS Mac].
8.6.4 Acronyms
Acronyms should be transcribed by using the component letters as a part of a “linked”
form. In compounds, the @l marking is not used, since it would make the acronym
unreadable. Thus, USA is written as U_S_A. In this case, the first letter is capitalized
in order to mark it as a proper noun. Other examples include M_I_T, C_M_U, M_T_V,
E_T, I_U, C_three_P_O, and R_two_D_two. The recommended way of transcribing the
common name for television is just tv. This form is not capitalized, since it is not a proper
noun. Similarly, we can write cd, vcr, tv, and dvd. The underscore is the best mark for
combinations that are not true compounds such as m_and_ms for the M&M candy.
Acronyms that are not actually spelled out when produced in conversation should be
written as words. Thus UNESCO would be written as Unesco. The capitalization of the
first letter is used to indicate the fact that it is a proper noun. There must be no periods
inside acronyms and titles, because these can be confused with utterance delimiters.
fractions, logarithms, and so on. All should be written out in words, as in “eight thousand
two hundred and twenty dollars” for $8220, “twenty nine point five percent” for 29.5%,
“seven fifteen” for 7:15, “ten o'clock a@l m@l” for 10:00 AM, and “four and three
fifths.”
Titles such as Dr. or Mr. should be written out in their full capitalized form as Doctor
or Mister, as in “Doctor Spock” and “Mister Rogers.” For “Mrs.” use the form “Missus.”
Kinship Forms
8.6.7 Shortenings
One of the biggest problems that the transcriber faces is the tendency of speakers to
drop sounds out of words. For example, a speaker may leave the initial “a” off of
“about,” saying instead “ 'bout.” In CHAT, this shortened form appears as (a)bout. CLAN
can easily ignore the parentheses and treat the word as “about.” Alternatively, there is a
CLAN option to allow the commands to treat the word as a spelling variant. Many
common words have standard shortened forms. Some of the most frequent are given in
the table that follows. The basic notational principle illustrated in that table can be
extended to other words as needed. All of these words can be found in Webster's Third
New International Dictionary.
More extreme types of shortenings include: “(what)s (th)at” which becomes “sat,”
“y(ou) are” which becomes “yar,” and “d(o) you” which becomes “dyou.” Representing
these forms as shortenings rather than as nonstandard words facilitates standardization
and the automatic analysis of transcripts.
Two sets of contractions that cause particular problems for morphological analysis in
English are final apostrophe s and apostrophe d, as in John’s and you’d. If you transcribe
these as John (ha)s and you (woul)d, then the MOR program will work much more
efficiently.
Shortenings
Examples of Shortenings
(a)bout don('t) (h)is (re)frigerator
an(d) (e)nough (h)isself (re)member
(a)n(d) (e)spress(o) -in(g) sec(ond)
(a)fraid (e)spresso nothin(g) s(up)pose
(a)gain (es)presso (i)n (th)e
(a)nother (ex)cept (in)stead (th)em
(a)round (ex)cuse Jag(uar) (th)emselves
ave(nue) (ex)cused lib(r)ary (th)ere
(a)way (e)xcuse Mass(achusetts) (th)ese
(be)cause (e)xcused micro(phone) (th)ey
(be)fore (h)e (pa)jamas (to)gether
(be)hind (h)er (o)k (to)mato
b(e)long (h)ere o(v)er (to)morrow
b(e)longs (h)erself (po)tato (to)night
Cad(illac) (h)im prob(ab)ly (un)til
doc(tor) (h)imself (re)corder wan(t)
The marking of shortened forms such as (a)bout in this way greatly facilitates the
later analysis of the transcript, while still preserving readability and phonological
accuracy. Learning to make effective use of this form of transcription is an important part
of mastering use of CHAT. Underuse of this feature is a common error made by
beginning users of CHAT.
8.6.8 Assimilations
Words such as “gonna” for “going to” and “whynt cha” for “why don't you” involve
complex sound changes, often with assimilations between auxiliaries and the infinitive or
a pronoun. None of these forms can be found in Webster's Third New International
Dictionary. If you transcribe these forms as single morphemes, they will be counted as
single morphemes and programs like MOR will recognize them as wholes and give them
the part of speech category of mod:aux. The verbs that are treated this way by MOR are:
coulda, gonna, gotta, hadta, hafta, hasta, mighta, oughta, shoulda, and wanna.
In addition to the mod:aux verbs, other common assimilations include forms listed in
this table.
Assimilations
Unlike the mod:aux group, further types of assimilations are nearly limitless. Some of the
most common assimilations are listed in the vclit.cut file in MOR. However, it is not
possible to list all possible assimilations or to assign them to particular parts of speech.
So, these other assimilations need to be treated as two or more morphemes. To do this,
you should use the replacement notation, as in
*CHI: lemme [: let me]
If you do this, MOR and the other programs will work on the material in the square
brackets, rather than the lemme form. An even simpler way of representing some of these
forms is by noting omitted letters with parentheses as in: “gi(ve) me” for “gimme,” “le(t)
me” for “lemme,” or “d(o) you” for “dyou.”
Filled pauses are treated in a different way. They are preceded by the ampersand-hyphen
mark (&-), which allows them to be ignored as words. Specifically, these forms
are used to mark the various forms of filled pauses: &-ah, &-eh, &-er, &-ew, &-hm,
&-mm, &-uh, &-uhm, and &-um.
Colloquial Forms
A third solution is to ignore the dialectal variation and simply transcribe the standard form.
If this is being done, the practice must be clearly noted in the readme file. None of these
forms are in Webster's Third New International Dictionary.
Dialectal Variants
Baby Talk
Abbreviations in Dutch
Some forms that should probably remain with their standard apostrophes include
'smorgens, 'sochtends, 'savonds, 'snachts, and the apostrophe-s plural form.
9 Utterances
The basic units of CHAT transcription are the morpheme, the word, and the utterance.
In the previous two chapters we examined principles for transcribing words and
morphemes. In this chapter we examine principles for delimiting utterances.
These five forms of transcription will lead to markedly different analytic outcomes
for programs such as MLU (mean length of utterance). The first two forms will all be
counted as having one utterance with four morphemes for an MLU of 4.0. The third and
fourth forms will be counted as having one utterance with one morpheme for an MLU of
1.0. The fifth form will be counted as having four utterances each with one morpheme for
an MLU of 1.0.
Of course, not all analyses depend crucially on the computation of MLU, but problems
with deciding how to compute MLU point to deeper issues in transcription and analysis.
In order to compute MLU, one has to decide what is a word and what is an utterance
and these are two of the biggest decisions that one has to make when transcribing and
analyzing child language. In this sense, the computation of MLU serves as a
methodological trip wire for the consideration of these two deeper issues. Other analyses,
including lexical, syntactic, and discourse analyses also require that these decisions be
made clearly and consistently. However, because of its conceptual simplicity, the MLU
index places these problems into the sharpest focus.
The first two forms of transcription all make the basic assumption that there is a
single utterance with four morphemes. Given the absence of any clear syntactic relation
between the four words, it seems difficult to defend use of this form of transcription.
The third and fourth forms of transcription treat the successive productions of the
word “milk” as repeated attempts to produce a single word. This form of transcription
makes sense if the child was simply perseverating. If the third form of transcription is
used, the commands will, by default, treat the utterance as having only one morpheme.
For the fourth form of transcription, CLAN provides two possibilities. The default is to
treat the fourth type as a variant of the third form. However, there is also a CLAN option
that allows the user to override this default and treat each word as a separate morpheme.
This then allows the researcher to compute two different MLU values. The analysis with
repetitions excluded could be viewed as the one that emphasizes syntactic structure and
the one with repetitions included could be viewed as the one that emphasizes
productivity.
Finally, if there is evidence that the word is not simply a repetition, it would seem
best to use the fifth form of transcription. This is particularly true if the intonation pattern
indicates repeated insistence on a basic single-word message.
The example we have been discussing involves a simple case of word repetition. In
other cases, researchers may want to group together nonrepeated words for which there
is only partial evidence of syntactic or semantic combination. Consider the contrast
between these next two examples. In the first example, the presence of the conjunction
“and” motivates treatment of the words as a syntactic combination:
*CHI: red, yellow, blue, and white.
However, without the conjunction or other intonational evidence, the words are best
treated as separate utterances:
*CHI: red.
*CHI: yellow.
*CHI: blue.
*CHI: white.
As the child gets older, the solidification of intonational patterns and syntactic
structures will give the transcriber more reason to group words together into utterances and to
code retracings and repetitions as parts of larger utterances.
The types of elements that occur as initial satellites include vocatives and
communicators (well, but, sure, gosh). Elements that occur as final satellites include
question markers (okay?, see?) and sentence final particles, as well as vocatives and
communicators. The use of these prefixing and suffixing interactional markers is
particularly important for Asian languages that use sentence final particles. These satellite
markers should be surrounded by spaces, since they will be treated as separate word
forms by MOR and GRASP. Use of these markers helps improve syntactic analysis, and
provides a more realistic characterization of utterances.
Or the child may use a single utterance repeatedly, but each time with a slightly
different purpose. For example, when putting together a puzzle, the child may pick up a
piece and ask:
*CHI: where does this piece go?
This may happen nine times in succession. In both of these examples, it seems unfair
from a discourse point of view to treat each utterance as a mere repetition. Instead, each
is functioning independently as a full communication. One may want to mark the fact that
the lexical material is repeated, but this should not affect other quantitative measures.
9.5 Retracing
When a speaker abandons an utterance, sometimes another speaker will take a turn.
In that case, the first utterance can be marked with a trailing off terminator, as discussed
below. However, if the first speaker continues, then the transcriber has a choice to make.
One possibility is to mark the abandoned and retraced material with the [//] symbol along
with some scoping markers. However, if the abandonment of the first segment is
followed by a significant pause, then it would be better to consider it as a trailing off and
then to begin a new utterance with the following material. In either case, it is a good idea
to mark the fact that there was a long pause by inserting a pause marker with a time
value, such as (3.5) for 3.5 seconds.
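As a hypothetical illustration, a retracing marked with angle brackets and the [//] symbol might look like this:
*CHI: <I want> [//] I need the ball .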
*MOT: if this is the one you want, you will have to take your
spoon out of the other one.
The utterance in this main tier extends for two lines in the computer file. When it is
necessary to continue an utterance on the main tier onto a second line, the second line
must begin with a tab. CLAN is set to expect no more than 2000 characters in each main
tier, dependent tier, or header line.
Period .
A period marks the end of an unmarked (declarative) utterance. Here are some
examples of unmarked utterances:
*SAR: I got cold.
*SAR: pickle.
*SAR: no.
For correct functioning of CLAN, periods should be eliminated from abbreviations.
Thus “Mrs.” should be written as Mrs and E.T. should become E+T. Only proper nouns
and the word “I” and its contractions are capitalized. Words that begin sentences are not
capitalized.
Question Mark ?
The question mark indicates the end of a question. A question is an utterance that uses
a wh-question word, subject-verb inversion, or a tag question ending. Here is an example
of a question:
*FAT: is that a carrot ?
The question mark can also be used after a declarative sentence when it is spoken
with the rising intonation of a question.
Exclamation Point !
An exclamation point marks the end of an imperative or emphatic utterance. Here is
an example of an exclamation:
*MOT: sit down!
If this utterance were to be conveyed with final rising contour, it would instead be:
*MOT: sit down ?
9.7 Separators
CHAT allows for the use of several conventional punctuation features that have no
formal role in the transcription system. We call these “separators” and distinguish them
from terminators, which have a formal role, and the various CA intonation marks.
Comma ,
The comma is used widely throughout CHAT transcripts to represent a combination
of features such as pause, syntactic juncture, intonational drop, and others. Although it
has no formal definition or systematic characterization, it is fine to use this symbol. The
use of comma to mark level intonation in CA is replaced by the use of the mark →.
Semicolon ;
The semicolon is used primarily to mark syntactic structures in corpora such as the
SCOTUS oral arguments from the Supreme Court. Most conversational transcripts do not
need to use this mark. The use of semicolon to mark a light final drop in CA is replaced
by the use of the mark ↘.
Colon :
In order to use the colon as a separator, it must be surrounded by spaces. The colon is
also used within words to mark lengthening.
Other
Transcribers should avoid using other separators, because most of them have special
meanings in CHAT.
with the falling mark ↓ after the final word, then followed by the question mark, as in
this example:
*MOT: are you going to store↓ ?
Final rise-fall contour can be represented with ↑↓ and final fall-rise can be
represented with ↓↑ .
Primary Stress ˈ
The Unicode symbol (U02C8) can be used to mark primary stress. It is placed right
before the stressed syllable, as in this example:
*MOT: baby want baˈna:nas ?
Secondary Stress ˌ
The Unicode symbol (U02CC) can be used to mark secondary stress. It is placed
right before the stressed syllable, as in this example:
*MOT: baby want ˌbaˈna:nas ?
Lengthened Syllable :
A colon within a word indicates the lengthening or drawling of a syllable. This mark
should be attached to a vowel or continuant, because it is difficult to drawl an obstruent:
*MOT: baby want bana:nas?
Pause Between Syllables ^
A pause between syllables may be indicated with the ^ symbol, as in this example:
*MOT: is that a rhi^noceros ?
There is no special CHAT symbol for a filled pause. Instead, &-ah, &-eh, &-er,
&-ew, &-hm, &-mm, &-uh, &-uhm, and &-um are used to mark the various forms of filled
pauses.
Blocking ^
Speakers with marked language disfluencies often engage in a form of word attack
known as “blocking” (Bernstein Ratner et al., 1996). This form of word attack is marked
by a caret or up arrow placed directly before the word.
It is important to remember that these codes must fully characterize complete local
events. If your intention is to mark that a stretch of words has been mumbled, then you
should use the scoped codes discussed in the next chapter. However, if you only wish to
code that some mumbling or singing occurs at a particular point, then you can use this
simpler form.
Simple event forms can also be used to mark actions such as running and reading.
When these actions are transitive, as in imit: (imitation), point:, and move:, they can also
take an object. For example, a very common vocalizer is &=imit:motor for an imitation
of the sound of a motor. The table below illustrates this use of compound simple codes.
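For instance, a hypothetical utterance using the &=imit:motor code mentioned above might be transcribed as:
*CHI: and the truck goes &=imit:motor .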
The object of the &=imit codes indicates the noise source being imitated vocally.
The objects of the &=ges codes indicate the meaning of the gestures being used. The
objects of activities such as &=walk and &=run indicate the direction or goal of the
walking or running. For actions such as &=slurp and &=eat used by themselves, the code
represents the auditory results of the slurping or eating.
Finally, you can compose codes using parts of the body as in &=head:yes to indicate
nodding “yes” with the head. Some codes of this type include: &=head:yes, &=head:no,
&=head:shake, &=hands:no, &=hands:hello, &=eyes:open, &=mouth:open, and
&=mouth:close.
This form of coding is compact and can be easily searched. Moreover, it is easy to
locate at a point within an ongoing utterance without breaking up the readability of the
utterance. Whenever possible, try to use this form of coding as a substitute for writing
longer comments on the comment line or inserting complex local events on the main line.
Like the simple local events, these complex local events are assumed to occur exactly
at the position marked in the text and not to extend over some other events. If the
material is intended as a comment over a longer scope of events, use the form of the
scoped comments given in the next section. This form of coding can also be used at the
very beginning of utterances to replace the earlier “precodes” that marked things like the
specific addressee, events just before the utterance, or the background to the utterance.
9.10.4 Pauses
The third type of local event is the unfilled pause, which takes up a specified duration
of time at the point marked by the code. Pauses that are marked only by silence are coded
on the main line with the symbol (.). Longer pauses between words can be represented as
(..) and a very long pause as (...). This example illustrates these forms:
*SAR: I don't (..) know .
*SAR: (...) what do you (...) think ?
If you want to be exact, you can code the exact length of the pauses in seconds, as in
these examples.
*SAR: I don't (0.15) know .
*SAR: (13.4) what do you (2.) think ?
Here the asterisk marks some description of the long event. For example, a speaker
could begin pounding on the table at the point marked by &{l=pounding:table and then
continue until the end marked by &}l=pounding:table.
Here the asterisk marks some description of a long nonverbal event. For example, a
speaker could begin waving their hands at the point marked by &{l=waving:hands and
then continue until the end marked by &}l=waving:hands.
Trailing Off +…
The trailing off or incompletion marker (plus sign followed by three periods) is the
terminator for an incomplete, but not interrupted, utterance. Trailing off occurs when
speakers shift attention away from what they are saying, sometimes even forgetting what
they were going to say. Usually the trailing off is followed by a pause in the conversation.
After this lull, the speaker may continue with another utterance or a new speaker may
produce the next utterance. Here is an example of an uncompleted utterance:
*SAR: smells good enough for +...
*SAR: what is that?
If the speaker does not really get a chance to trail off before being interrupted by
another speaker, then use the interruption marker +/. rather than the incompletion symbol.
Do not use the incompletion marker to indicate either simple pausing (.), repetition [/], or
retracing [//]. Note that utterance fragments coded with +… will be counted as complete
utterances for analyses such as MLU, MLT, and CHAINS. If your intention is to avoid
treating these fragments as complete utterances, then you should use the symbol [/]
discussed later.
Trailing Off of a Question +..?
If the utterance that is being trailed off has the shape of a question, then this symbol
should be used.
Question With Exclamation +!?
When a question is produced with great amazement or puzzlement, it can be coded
using this symbol. The utterance is understood to constitute a question syntactically and
pragmatically, but an exclamation intonationally.
Interruption +/.
This symbol is used for an utterance that is incomplete because one speaker is
interrupted by another speaker. Here is an example of an interruption:
*MOT: what did you +/.
*SAR: Mommy.
*MOT: +, with your spoon.
Some researchers may wish to distinguish between an invited interruption and an
uninvited interruption. An invited interruption may occur when one speaker is prompting
his addressee to complete the utterance. This should be marked by the ++ symbol for
other-completion, which is given later. Uninvited interruptions should be coded with the
symbol +/. at the end of the utterance. An advantage of using +/. instead of +... is that
programs like MLU are able to piece together the two segments and treat it as a single
utterance when a segment with +/. is followed by +, on the next utterance.
Interruption of a Question +/?
If the utterance that is being interrupted has the shape of a question, then this symbol
should be used.
Self-Interruption +//.
Some researchers wish to be able to distinguish between incompletions involving a
trailing off and incompletions involving an actual self-interruption. When an
incompletion is not followed by further material from the same speaker, the +... symbol
should always be selected. However, when the speaker breaks off an utterance and starts
up another, the +//. symbol can be used, as in this example:
*SAR: smells good enough for +//.
*SAR: what is that?
There is no hard and fast way of distinguishing cases of trailing off from
self-interruption. For this reason, some researchers prefer to avoid making the distinction;
in that case, only the +... symbol should be used.
If the utterance being self-interrupted is a question, you can use the +//? symbol.
Transcription Break +.
It is often convenient to break utterances at phrasal boundaries in order to mark
overlaps. When this is done, the first segment is ended with the +. terminator, as in this
example:
*SAR: smells good enough for me +.
*MOT: but +.
*SAR: if I could have some.
*MOT: why would you want it?
Quotation “ and ”
For marking short quotation stretches inside an utterance, the begin double-quote (“,
Unicode 201C) and end double-quote (”, Unicode 201D) symbols can be used. These can
be entered in the CLAN editor using F2' and F2" respectively.
Quotation on Next Line +"/.
During story reading and similar activities, a great deal of talk may involve direct
quotation. In order to mark off this material as quoted, a special symbol can be used, as in
the following example:
*CHI: and then the little bear said +"/.
*CHI: +" please give me all of your honey.
*CHI: +" if you do, I'll carry you on my back.
The use of the +"/. symbol is linked to the use of the +" symbol. Breaking up quoted
material in this way allows us to maintain the rule that each separate utterance should be
on a separate line. This form of notation is only used when the material being quoted is a
complete clause or sentence. It is not needed when a few words are being quoted in
noncomplement position. In those cases, use the standard single and double quotation
marks described just above.
Quotation Precedes +".
This symbol is used when the material being directly quoted precedes the main
clause, as in the following example:
*CHI: +" please give me all of your honey.
*CHI: the little bear said +".
Quoted Utterance +"
This symbol is used in conjunction with the +"/. and +". symbols discussed earlier. It
is placed at the beginning of an utterance that is being directly quoted.
Quick Uptake +^
Sometimes an utterance of one speaker follows quickly on the heels of the last
utterance of the preceding speaker without the customary short pause between utterances.
An example of this is:
*MOT: why did you go?
*SAR: +^ I really didn't.
Self Completion +,
The symbol +, can be used at the beginning of a main tier line to mark the completion
of an utterance after an interruption. In the following example, it marks the completion of
an utterance by CHI after interruption by EXP. Note that the incomplete utterance must
be terminated with the incompletion marker.
*CHI: so after the tower +/.
*EXP: yeah.
*CHI: +, I go straight ahead.
Other Completion ++
A variant form of the +, symbol is the ++ symbol, which marks “latching” or the
completion of another speaker's utterance, as in the following example:
*HEL: if Bill had known +...
*WIN: ++ he would have come.
10 Scoped Symbols
Up to this point, the symbols we have discussed are inserted at single points in the
transcript. They refer to events occurring at particular points during the dialogue. There is
another major class of symbols that refers not to particular points in the transcript, but to
stretches of speech. These marker symbols are enclosed in square brackets and the
material to which they relate can be enclosed in angle brackets. The material in the square
brackets functions as a descriptor of the material in angle brackets. If a scoped symbol
applies only to the single word preceding it, the angle brackets need not be marked,
because CLAN considers that the material in square brackets refers to a single preceding
word when there are no angle brackets. There should be no other material entered
between the square brackets and the material to which it refers. Depending on the nature
of the material in the square brackets, the material in the angle brackets may be
automatically excluded from certain types of analysis, such as MLU counts and so forth.
Scoped symbols are useful for marking a wide variety of relations, including
paralinguistics, explanations, and retracings.
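For researchers who post-process CHAT files outside of CLAN, this scoping rule is easy to implement directly. The following Python sketch is purely illustrative (the function name and regular expression are not part of CLAN); it pairs each square-bracketed code with the preceding angle-bracketed group, or with the single preceding word when no angle brackets are present.

import re

# A scoped code such as [=! cries] or [?] covers either the immediately
# preceding <...> group or, if there is none, the single preceding word.
SCOPED = re.compile(r'(?:<(?P<group>[^<>]*)>|(?P<word>\S+))\s*\[(?P<code>[^\[\]]+)\]')

def scoped_codes(utterance):
    """Yield (covered_text, code) pairs for every scoped symbol on a main line.

    Stacked codes after the same word (e.g. goed [: went] [*]) would need an
    extra pass; this sketch only picks up the first code in such a sequence.
    """
    for m in SCOPED.finditer(utterance):
        covered = m.group('group') if m.group('group') is not None else m.group('word')
        yield covered, m.group('code')

# list(scoped_codes("that's mine [=! cries]."))   -> [("mine", "=! cries")]
# list(scoped_codes("<that's mine> [=! cries].")) -> [("that's mine", "=! cries")]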
This marker is used to insert a bullet that can be clicked to display a picture. This
field is also used in the gesture coding system discussed in the CLAN manual. The
format of these files is not fixed by CHAT, but many of the same conventions are used.
One additional code used there is the @T: header which marks the place of the insertion
of a video picture taken from a movie as a thumbnail representation of what is happening
at a particular moment in the interaction.
This marker is used to insert a bullet that can be clicked to display a text file.
Paralinguistic events, such as “coughing,” “laughing,” or “yelling” can be marked by
using square brackets, the =! symbol, a space, and then text describing the event.
*CHI: that's mine [=! cries].
This means that the child cries while saying the word “mine.” If the child cries
throughout, the transcription would be:
*CHI: <that's mine> [=! cries].
In order to indicate crying with no particular vocalization, you should use the &=cries
“simple form” notation discussed earlier, as in
*CHI: &=cries .
This same format of [=! text] can also be used to describe prosodic characteristics
such as “glissando” or “shouting” that are best characterized with full English words.
Paralinguistic effects such as soft speech, yelling, singing, laughing, crying, whispering,
whimpering, and whining can also be noted in this way. For a full set of these terms and
details on their usage, see Crystal (1969) or Trager (1958). Here is another example:
*NAO: watch out [=! laughing].
Stressing [!]
This symbol can be used without accompanying angle brackets to indicate that the
preceding word is stressed. The angle brackets can also mark the stressing of a string of
words, as in this example:
*MOT: Billy, would you please <take your shoes off> [!].
Contrastive Stressing [!!]
This symbol can be used without accompanying angle brackets to indicate that the
preceding word is contrastively stressed. If a whole string of words is contrastively
stressed, they should be enclosed in angle brackets.
Duration [# time]
This symbol indicates the duration in seconds of the preceding material that has been
marked with angle brackets as in:
*MOT: I could use <all of them> [# 2.2] for the party.
Explanation [= text]
This symbol is used for brief explanations on the text tier. This symbol is helpful for
specifying the deictic identity of objects and people.
*MOT: don't look in there [= closet]!
Explanations can be more elaborate as in this example:
*ROS: you don't scare me anymore [= the command “don't scare me
anymore!”].
An alternative form for transcribing this is:
*ROS: you don't scare me any more.
%exp: means to issue the imperative “Don't scare me anymore!”
Replacement [: text]
Earlier we discussed the use of a variety of nonstandard forms such as “gonna” and
“hafta”. In order for MOR to morphemicize such words, the transcriber can use a
replacement symbol that allows CLAN to substitute a target-language form for the form
actually produced. Here is an example:
*BEA: when ya gonna [: going to] stop doin(g) that?
*CHA: whyncha [: why don’t you] just be quiet!
In this example, “gonna” is followed by its standard form in brackets. The colon that
follows the first bracket tells CLAN that the material in brackets should replace the
preceding word. The replacing string can include any number of words, but the thing
being replaced can only be a single word, not a series of words. There must be a space
following the colon, in order to keep this symbol separate from other symbols that use
letters after the colon. This example also illustrates two other ways in which CHAT and
CLAN deal with nonstandard forms. The lexical item “ya” is treated as a lexical item
distinct from “you.” However, the semantic equivalence between “ya” and “you” is
maintained by the formalization of a list of dialectal spelling variations. The string
“doin(g)” is treated by CLAN as if it were “doing.” This is done by simply having the
programs ignore the parentheses, unless they are given instructions to pay attention to
them, as discussed in the CLAN manual. From the viewpoint of CLAN, a form like
“doin(g)” is just another incomplete form, such as “bro(ther).”
In order for replacement to function properly, nothing should be placed between the
replacing string and the string to be replaced. For example, to mark replacement and error
using the [*] code, one should use the form:
goed [: went] [*]
rather than:
goed [*] [: went]
When the error involves the incorrect use of a real word, the double colon form of the
replacement string may be used, as in:
piece [:: peach] [*]
For further details on this usage, please see the chapter on Error Coding.
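Because the replacement convention is completely regular, nonstandard forms can be expanded into their target-language forms by a simple script. The sketch below is only an illustration of the convention (it is not how MOR or CLAN handle replacement internally): it rewrites each word followed by [: text] as the bracketed text and, for the double-colon [:: text] form, keeps the word actually produced.

import re

REPLACEMENT = re.compile(r'(\S+)\s+\[(::?)\s+([^\]]+)\]')

def apply_replacements(utterance):
    """Expand "word [: target]" pairs into the target form.

    [: text]  -- single colon: the bracketed target replaces the word
    [:: text] -- double colon: the word actually produced is kept
    """
    def substitute(match):
        word, colons, target = match.groups()
        return word if colons == '::' else target
    return REPLACEMENT.sub(substitute, utterance)

# apply_replacements("when ya gonna [: going to] stop doin(g) that?")
#   -> "when ya going to stop doin(g) that?"
# apply_replacements("piece [:: peach] [*]")
#   -> "piece [*]"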
Sometimes it is difficult to choose between two possible transcriptions for a word or
group of words. In that case an alternative transcription can be indicated in this way:
*CHI: we want <one or two> [=? one too].
Instead of placing comment material on a separate %com line, it is possible to place
comments or any type of code directly on the main line using the % symbol in brackets.
Here is an example of this usage:
*CHI: I really wish you wouldn't [% said with strong raising of
eyebrows] do that.
You should be careful with using comments on the main line. Overuse of this particular
notational form can make a transcript difficult to read and analyze. Because placing a
comment directly onto the main line tends to highlight it, this form should be used only
for material that is crucial to the understanding of the main line.
Often audiotapes are hard to hear because of interference from room noise, recorder
malfunction, vocal qualities, and so forth. Nonetheless, transcribers may think that,
through the noise, they can recognize what is being said. There is some residual
uncertainty about this “best guess.” This symbol marks this in relation to the single preceding
word or the previous group of words enclosed in angle brackets.
*SAR: I want a frog [?].
In this example, the word that is unclear is “frog.” In general, when there is a symbol
in square brackets that takes scoping and there are no preceding angle brackets, then the
single preceding word is the scope. When more than one word is unclear, you can
surround the unclear portion in angle brackets as in the following example:
*SAR: <going away with my mommy> [?] ?
During the course of a conversation, speakers often talk at the same time.
Transcribing these interactions can be trying. This and the following two symbols are
designed to help sort out this difficult transcription task. The “overlap follows” symbol
indicates that the text enclosed in angle brackets is being said at the same time as the
following speaker's bracketed speech. They are talking at the same time. This code must
be used in combination with the “overlap precedes” symbol, as in this example:
*MOT: no (.) Sarah (.) you have to <stop doing that> [>] !
*SAR: <Mommy I don't like this> [<].
*SAR: it is nasty.
Using these overlap indicators does not preclude making a visual indication of
overlap in the following way:
*MOT: no (.) Sarah (.) you have to <stop doing that> [>] !
*SAR:                              <Mommy I don't like this> [<].
*SAR: it is nasty.
CLAN ignores the series of spaces, treating them as if they were a single space.
The “overlap precedes” symbol indicates that the text enclosed in angle brackets is
being said at the same time as the preceding speaker's bracketed speech. This code must
be used in combination with the “overlap follows” symbol. Sometimes several overlaps
occur in a single sentence. It is then necessary to use numbers to identify these overlaps,
as in this example:
*SAR: and the <doggy was> [>1] really cute and
it <had to go> [>2] into bed.
*MOT: <why don't you> [<1] ?
*MOT: <maybe we could> [<2].
If you don't want to mark the exact beginning and end of overlaps between speakers
and only want to indicate the fact that two turns overlap, you can use this code at the
beginning of the utterance that overlaps a previous utterance, as in this example:
*CHI: we were taking them home.
*MOT: +< they had to go in here.
This marking simply indicates that the mother's utterance overlaps the previous child
utterance. It does not indicate how much of the two utterances overlap. If you need to
combine this mark with other utterance linker marks, place this one first, followed by a
space. It would not make sense to combine this mark with the +^ mark.
Repetition [/]
When a speaker repeats a word or phrase without change, the repeated material is
enclosed in angle brackets and followed by the [/] symbol. If there are pauses and fillers
between the initial material and the repetition, they should be placed after the repetition
symbol, as in:
*HAR: it's [/] (.) &um (.) it's [/] it's (.) a &um (.) dog.
When a word or group of words is repeated several times with no fillers, all of the
repetitions except for the last are placed into a single group, as in this example:
*HAR: <it's it's it's> [/] it's (.) a &um (.) dog.
By default, all of the CLAN commands except MLU, MLT, and MODREP include repeated
material. This default can be changed by using the +r6 switch.
Multiple Repetition [x N]
An alternative way of indicating several repetitions of a single word uses this form:
*HAR: it's [x 4] (.) a &um (.) dog.
This form indicates the fact that a word has been repeated four times. If this form is used,
it is not possible to get a count of the repetitions to be added to MLU. However, because
this is not usually desirable anyway, there are good reasons to use this more compact
form when single words are repeated. For some illustrations of the use of this type of
coding for the study of disfluencies such as stuttering, consult Bernstein Ratner, Rooney,
and MacWhinney (1996).
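If raw repetition counts are needed outside of CLAN, the compact [x N] form can be tallied in a few lines. This is only a sketch and assumes the marker always has exactly the shape shown above, with a space between the x and the number.

import re

X_MARKER = re.compile(r'(\S+)\s+\[x\s+(\d+)\]')

def repetition_counts(utterance):
    """Return {word: number_of_times_produced} for words marked with [x N]."""
    return {word: int(n) for word, n in X_MARKER.findall(utterance)}

# repetition_counts("it's [x 4] (.) a &um (.) dog.")  -> {"it's": 4}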
Retracing [//]
This symbol is used when a speaker starts to say something, stops, and then repeats the
basic phrase while changing the syntax but maintaining the same idea. Usually, the correction moves
closer to the standard form, but sometimes it moves away from it. The material being
retraced is enclosed in angle brackets. If there are no angle brackets, CLAN assumes that
only the preceding word is being retraced. In retracing with correction, it is necessarily
true that the material in the angle brackets is different from what follows the retracing
symbol. Here is an example of this:
*BET: <I wanted> [//] &uh I thought I wanted to invite Margie.
Retracing with correction can combine with retracing without correction, as in this
example:
*CHI: <the fish is> [//] the [/] the fish are swimming.
Sometimes retracings can become quite complex and lengthy. This is particularly true
in speakers with language disorders. It is important not to underestimate the extent to
which retracing goes on in such transcripts. By default, all of the CLAN commands except
MLU, MLT, and MODREP include retraced material. This default can be changed by using the
+r6 switch.
Reformulation [///]
When none of the material being corrected is included in the retracing, it is better to
use the [///] marker than the [//] marker.
False Start Without Retracing [/-]
In some projects that place special emphasis on counts of particular disfluency types,
it may be more convenient to code retracings through a quite different method. For
example, the symbols [/] and [//] are used when a false start is followed by a complete
repetition or by a partial repetition with correction. If the speaker terminates an
incomplete utterance and starts off on a totally new tangent, this can be coded by using
the [/-] symbol:
*BET: <I wanted> [/-] uh when is Margie coming?
If the material is coded in this way, CLAN will count only one utterance. If the coder
wishes to treat the fragment as a separate utterance, the +... and +//. symbols that were
discussed on page 73 should be used instead. By default, all of the CLAN programs
except MLU, MLT, and MODREP include repeated material. This default can be
changed by using the +r6 switch.
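The exclusion of repeated and retraced material that MLU and MLT perform by default can be approximated outside of CLAN with a small filter. The sketch below is a simplification rather than the actual CLAN logic: it removes each angle-bracketed group, or single preceding word, that is followed by [/], [//], [///], or [/-].

import re

RETRACE = re.compile(r'(?:<[^<>]*>|\S+)\s*\[/(?:/{1,2}|-)?\]\s*')

def strip_retraces(utterance):
    """Drop material marked as repeated or retraced, keeping only the final version."""
    return re.sub(r'\s+', ' ', RETRACE.sub(' ', utterance)).strip()

# strip_retraces("<the fish is> [//] the [/] the fish are swimming.")
#   -> "the fish are swimming."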
This symbol is used primarily when reformatting SALT files to CHAT files, using the
SALTIN command. SALT does not distinguish between filled pauses such as “uh”,
repetitions ([/]), and retracings ([//]); all three phenomena and possibly others are treated
as “mazes.” Because of this, SALTIN uses the [/?] symbol to translate SALT mazes into
CHAT hesitation markings.
Material marked in this way will automatically be excluded by analysis on the %mor line
and from the other programs such as DSS, IPSyn, VOCD, GRASP etc that operate on
that line.
Clause Delimiter [^c]
If you wish to conduct analyses such as MLU and MLT based on clauses rather than
utterances as the basic unit of analysis, you should mark the end of each clause with this
symbol. It is not necessary to mark the scope of this symbol, since it is assumed to apply
to all the material before it up to the beginning of the utterance or up to the preceding [^c]
marker. It is possible to create additional user-defined codes using the
format of [^c *], such as [^c err], which could be defined as a marker of a clause that
includes an error, or [^c 0s] for a clause with no subject, etc. Then, inside the MLU and
MLT programs, you need to add the +c switch to specify exactly which codes of this type
should be recognized.
Postcodes [+ text]
Postcodes are symbols placed into square brackets at the end of the utterance. They
should include the plus sign and a space after the left bracket. There is no predefined set
of postcodes. Instead, postcodes can be designed to fit the needs of your particular
project. Unlike scoped codes, postcodes must apply to the whole utterance, as in this
example:
*CHI: not this one. [+ neg] [+ req] [+ inc]
Language precodes are used to mark the switch to a different language in multilingual
interactions. The text in these codes should come from the three-letter ISO codes used in
the @Languages header.
Sometimes we want to have a way of marking utterances that are not really a part of
the main interaction, but are in some “back channel.” For example, during an interaction
that focuses on a child, the mother may make a remark to the investigator. We might
want to exclude remarks of this type from analysis by MLT and MLU, as in this
interaction:
*CHI: here one.
*MOT: no, here.
%sit: the doorbell rings.
*MOT: just a moment. [+ bch]
*MOT: I'll get it. [+ bch]
In order to exclude the utterances marked with [+ bch], the -s"[+ bch]" switch must
be used with MLT and MLU.
The [+ trn] postcode can force the MLT command to treat an utterance as a turn when
it would normally not be treated as a turn. For example, utterances containing only “0”
are usually not treated as turns. However, if one believes that the accompanying
nonverbal gesture constitutes a turn, one can note this using [+ trn], as in this example:
*MOT: where is it?
*CHI: 0. [+ trn]
%act: points at wall.
Later, when counting utterances with MLT, one can use the +s+"[+ trn]" switch to
force counting of actions as turns, as in this command:
mlt +s+"[+ trn]" sample.cha
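Outside of CLAN, postcodes are also easy to act on in a script, because they always appear in square brackets with a plus sign at the end of the utterance. The following sketch is illustrative only (the real exclusion and inclusion are done with the -s and +s switches shown above); it simply drops main lines carrying a given postcode before further analysis.

import re

POSTCODE = re.compile(r'\[\+ ([^\]]+)\]')

def postcodes(main_line):
    """Return the set of postcodes on a main line, e.g. {'bch'} or {'neg', 'req', 'inc'}."""
    return set(POSTCODE.findall(main_line))

def drop_postcoded(lines, code='bch'):
    """Keep only the main lines that do not carry the given postcode."""
    return [line for line in lines if code not in postcodes(line)]

# drop_postcoded(["*MOT: just a moment. [+ bch]", "*CHI: here one."])
#   -> ["*CHI: here one."]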
11 Dependent Tiers
In the previous chapters, we have examined how CHAT can be used to create file
headers and to code the actual words of the interaction on the main line. The third major
component of a CHAT transcript is the ancillary information given on the dependent
tiers. Dependent tiers are lines typed below the main line that contain codes, comments,
events, and descriptions of interest to the researcher. It is important to have this material
on separate lines, because the extensive use of complex codes in the main line would
make it unreadable. There are many codes that refer to the utterance as a whole. Using a
separate line to mark these avoids having to indicate their scope or cluttering up the end
of an utterance with codes.
It is important to emphasize that no one expects any researcher to code all tiers for all
files. CHAT is designed to provide options for coding, not requirements for coding.
These options constitute a common set of coding conventions that will allow the
investigator to represent those aspects of the data that are most important. It is often
possible to transcribe the main line without making much use at all of dependent tiers.
However, for some projects, dependent tiers are crucial.
All dependent tiers should begin with the percent symbol (%) and should be in lower
case letters. As in the main line, dependent tiers consist of a tier code and a tier line. The
dependent tier code is the percent symbol, followed by a three-letter code ID and a colon.
The dependent tier line is the text entered after the colon that describes fully the elements
of interest in the main tier. Except for the %mor and %gra tiers, these lines do not require
ending punctuation. Here is an example of a main line with two dependent tiers:
*MOT: well go get it!
%spa: $IMP $REF $INS
%mor: ADV|well V|go&PRES V|get&PRES PRO|it!
The first dependent tier indicates certain speech act codes and the second indicates a
morphemic analysis with certain part-of-speech coding. Coding systems have been
developed for some dependent tiers. Often, these codes begin with the symbol $. If there is
more than one code, they can be put in strings with only spaces separating them, as in:
%spa: $IMP $REF $INS
Multiple dependent tiers may be added in reference to a single main line, giving you
as much richness in descriptive capability as is needed.
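Because every line begins either with *XXX: for a main tier or with %xxx: for a dependent tier, a transcript is straightforward to parse into records. The Python sketch below shows one illustrative way to group each main line with its dependent tiers; it is not the CLAN parser and ignores @ headers and line continuations.

def group_records(lines):
    """Group each *SPE: main line with the %xxx: tiers that follow it.

    Returns a list of (speaker, utterance, {tier_code: tier_text}) entries.
    Header lines (@...) and continuation lines are not handled in this sketch.
    """
    records = []
    for line in lines:
        if line.startswith('*'):
            speaker, _, text = line.partition(':')
            records.append((speaker.lstrip('*'), text.strip(), {}))
        elif line.startswith('%') and records:
            tier, _, text = line.partition(':')
            records[-1][2][tier.lstrip('%')] = text.strip()
    return records

# group_records(["*MOT: well go get it!",
#                "%spa: $IMP $REF $INS"])
#   -> [("MOT", "well go get it!", {"spa": "$IMP $REF $INS"})]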
This tier describes the actions of the speaker or the listener. Here is an example of
text accompanied by the speaker's actions:
*ROS: I do it!
%act: runs to toy box
The %act tier can also be used in conjunction with the 0 symbol when actions are
performed in place of speaking:
*ADA: 0.
%act: kicks the ball
This could also be coded as:
*ADA: 0 [% kicks the ball].
In this case the 0 on the main line is used to indicate that there is an action but no speech.
Or one can use the &= form, as in:
*ADA: &=kicks:ball .
The choice among these three forms depends on the extent to which the coder wants
to keep track of a particular type of dependent tier information.
This tier describes who talks to whom. Use the three-letter identifier given in the
participants header to identify the addressees.
*MOT: be quiet.
%add: ALI, BEA
In this example, Mother is telling Alice and Beatrice to “be quiet.”
This tier is used to provide an alternative possible transcription. If the transcription is
intended to provide an alternative for only one word, it may be better to use the main line
form of this coding tier in the form [=? text].
This tier is used for representing morphological categories in CoNLL format to allow
grammatical relations tagging using a CoNLL tagger.
This is the general purpose coding tier. It can be used for mixing codes into a single
tier for economy or ease of entry. Here is an example.
*MOT: you want Mommy to do it?
%cod: $MLU=6 $NMV=2 $RDE $EXP
This tier is used to code text cohesion devices.
This is the general purpose comment tier. One of its many uses is to note occurrence
of a particular construction type, as in this example:
*EVE: that's nasty (.) is it?
%com: note tag question
Notations on this line should usually be in common English words, rather than codes.
If special symbols and codes are included, they should be placed in quotation marks, so
that CHECK does not flag them as errors.
This tier is needed only for files that are reformatted from the SALT system by the
SALTIN command.
This line provides a fluent, non-morphemicized English translation for non-English
data.
*MAR: yo no tengo nada.
%eng: I don't have anything.
This tier codes additional information about errors that cannot be fully expressed on
the main line.
This tier is useful for specifying the deictic identity of objects or individuals. Brief
explanations can also appear on the main line, enclosed in square brackets and preceded
by the = sign and followed by a space.
This tier codes facial actions. Ekman & Friesen (1969, 1978) have developed a
complete and explicit system for the coding of facial actions. This system takes about 100
hours to learn to use and provides extremely detailed coding of the motions of particular
muscles in terms of facial action units. Kearney and McKenzie (1993) have developed
computational tools for the automatic interpretation of emotions using the system of
Ekman and Friesen.
This tier codes a “flowing” version of the transcript that is as free as possible of
transcription conventions and that reflects a minimal number of transcription decisions. Here
is an example of a %flo line:
*CHI: <I don’t> [//] I don’t wanna [: want to] look
in a [* the] badroom [* bedroom] or Bill's room.
%flo: I don't I don't wanna look in a badroom or Bill's room.
Most researchers would agree that the %flo line is easier to read than the *CHI line.
However, it gains readability by sacrificing precision and utility for computational analy
ses. The %flo line has no records of retracings; words are simply repeated. There is no
regularization to standard morphemes. Standard English orthography is used to give a
general impression of the nature of phonological errors. There is no need to enter this line
by hand, because the FLO command can enter it automatically.
This tier can be used to provide a “translation” of the child's utterance into the adult
language. Unlike the %eng tier, this tier does not have to be in English. It should use an
explanation in the target language. This tier differs from the %flo tier in that it is being
used not to simplify the form of the utterance but to explain what might otherwise be
unclear. Finally, this tier differs from the %exp tier in that it is not used to clarify deictic
reference or the general situation, but to provide a target language gloss of immature
learner forms.
This tier codes gestural and proxemic material. Some transcribers find it helpful to
distinguish between general activity that can be coded on the %act line and more
specifically gestural and proxemic activity, such as nodding or reaching, which can be
coded on the %gpx line.
This tier is used to code dependency structures with tagged grammatical relations
(Sagae, Davis, Lavie, MacWhinney, & Wintner, 2007; Sagae, Lavie, & MacWhinney,
2005; Sagae, MacWhinney, & Lavie, 2004).
This tier is used for training of the MEGRASP grammatical relations tagger. It has
the same form as the %gra tier.
This tier codes intonations, using standard language descriptions.
This tier is used in conjunction with the %pho tier to code the phonological form of
the adult target or model for each of the learner's phonological forms.
This tier codes morphemic segments by type and part of speech. Here is an example
of the %mor tier:
*MAR: I wanted a toy.
%mor: PRO|I&1S V|wantPAST DET|a&INDEF N|toy.
This tier codes paralinguistic behaviors such as coughing and crying.
Transcription on the %pho line should be done using the IPA symbols in Unicode.
Words on the main tier should align in a one-to-one fashion with forms on the
phonological tier. This alignment takes all forms produced into account and does not
exclude retraces or nonword forms. On the %pho line it is sometimes important to
describe several words as forming a single phonological group in order to describe liaison
and other assimilation effects within the group. To mark this, the Unicode characters for
U+2039 and U+203A, which appear as ‹ and ›, should be entered on both the main and
%pho lines using F2+< and F2+>.
Parents of deaf children often sign or gesture along with speech, as do the children
themselves. To transcribe this, researchers often place the spoken material on the main
line and the signed material on the %sin line. Words on the %sin tier can consist of any
alphanumeric characters and colons, as in forms such as g:point:toy. Like the %pho and
%mor tiers, the words on the %sin tier must be placed into one-to-one correspondence
with words on the main tier. To do this, it may be necessary to enter many “0” forms on
the %sin tier when a word is not matched by a sign or gesture. At other times, several
words on the main tier may align with a single gesture. To mark this grouping, you can
group the forms on the main line with two Unicode bracketing symbols. The beginning
of the group is marked by Unicode U+23A8 ⎨ and the close by Unicode U+23AC ⎬.
This tier describes situational information relevant only to the utterance. There is also
an @Situation header. Situational comments that relate more broadly to the file as a
whole or to a major section of the file should be placed in an @Situation header.
*EVE: what that?
*EVE: woof@o woof@o.
%sit: dog is barking
This tier is for speech act coding. Many researchers wish to transcribe their data with
reference to speech acts. Speech act codes describe the function of sentences in discourse.
Researchers often differ in their preferred method of coding speech acts, and many
systems for coding speech acts have been developed. A set of speech act codes adapted
from a more general system devised by Ninio and Wheeler is provided in the chapter on
speech act coding.
This tier is used for older data for which there is no possible linking of the transcript
to the media. It should not be confused with the millisecond accurate timing found in the
bullets inserted by sonic CHAT. The %tim tier is used just to mark large periods of time
during the course of taping. These readings are given relative to the time of the first
utterance in the file. The time of that utterance is taken to be time 00:00:00. Its absolute
time value can be given by the @Time Start header. Elapsed time from the beginning of
the file is given in hours:minutes:seconds. Thus, a %tim entry of 01:20:55 indicates the
passage of 1 hour, 20 minutes, and 55 seconds from time zero. If you only want to track
time in minutes and seconds, you can use the form minutes:seconds, as in 09:22 for 9
minutes and 22 seconds. None of the CLAN programs use the information encoded in
the %tim tier. It is just included for hand analyses.
*MOT: where are you?
%tim: 00:00:00
... (40 pages of transcript follow and then)
*EVE: that one.
%tim: 01:20:55
If there is a break in the interaction, it may be necessary to establish a new time zero.
This is done by inserting a new @Time Start header. You can also use this tier to mark
the beginning and end of a time period by using a form such as:
*MOT: where are you?
%tim: 04:20:23-04:21:01
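Although CLAN ignores the %tim tier, its values are simple to convert for hand analyses done in a spreadsheet or script. The sketch below (illustrative only, assuming well-formed values) converts the hours:minutes:seconds and minutes:seconds forms, as well as the start-end range form, into seconds.

def tim_to_seconds(value):
    """Convert a %tim value such as '01:20:55' or '09:22' to seconds."""
    seconds = 0.0
    for part in value.split(':'):
        seconds = seconds * 60 + float(part)
    return seconds

def tim_range(value):
    """Convert a range such as '04:20:23-04:21:01' to (start, end) in seconds."""
    start, _, end = value.partition('-')
    return tim_to_seconds(start), tim_to_seconds(end)

# tim_to_seconds("01:20:55")      -> 4855.0
# tim_range("04:20:23-04:21:01")  -> (15623.0, 15661.0)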
This is the training tier for the POST tagger. It has the same form as the %mor line.
If the comment refers to something that occurred immediately before the utterance in
the main line, you may use the symbol <bef>, as in this example:
*MOT: it is her turn.
%act: <bef> moves to the door
If a comment refers to something that occurred immediately after the utterance, you
may use the form <aft>. In this example, Mother opened the door after she spoke:
*MOT: it is her turn.
*MOT: go ahead.
%act: <aft> opens the door
If neither <bef> nor <aft> is coded, it is assumed that the material in the coding tier
occurs during the whole utterance or that the exact point of its occurrence during the
utterance is not important.
Although CHAT provides transcribers with the option of indicating the point of
events using the %com tier and <bef> and <aft> scoping, it may often be best to use the
@Comment header tier instead. The advantage of using the @Comment header is that it
indicates in a clearer manner the point at which an activity actually occurs. For example,
instead of the form:
*MOT: it is her turn.
%act: <bef> moves to the door
one could use the form:
@Comment: Mot moves to the door.
*MOT: it is her turn.
The third option provided by CHAT is to code comments in square brackets right on
the main line, as in this form:
*MOT: [^ Mot moves to the door] it is her turn.
Of these alternative forms, the second seems to be the best in this case.
Part 1: CHAT 96
When you want a particular dependent tier to refer to a particular word on the main
tier, you can use this additional code to mark the scope. For example, here the code
marks the fact that the mother's words 4 through 7 are imitated by the child.
*MOT: want to come sit in my lap?
%act: $sc=4-7 $IMIT
*CHI: sit in my lap.
12 CHAT-CA Transcription
CHAT also allows transcription that is more closely in accord with the requirements
of CA (Conversational Analysis) transcription. CA is a system devised by Sacks,
Schegloff, and Jefferson (Sacks, Schegloff, & Jefferson, 1974) for the purpose of
understanding the construction of conversational turns and sequencing. It is now used by
hundreds of researchers internationally to study conversational behavior. Recent
applications and formulations of this approach can be found in Ochs, Schegloff, and
Thompson (1996), as well as the related “GAT” formulation of Selting (1998). Workers
in this tradition find CA notation easier to use than CHAT, because the conventions of
this system provide a clearer mapping of features of conversational sequencing. On the
other hand, CA transcription has limits in terms of its ability to represent conventional
morphemes, orthography, and syntactic patterns. By supplementing CHAT transcription
on the word level with additional utterance level codes for CA, the strengths of both
systems can be maintained. To achieve this merger, some of the forms of both CHAT and
CA must be modified. To implement CA format, CHAT-CA uses these conventions:
1. The fact that a transcript is using CA notation is indicated by inserting the term
CA in the @Options header. Older corpora can be maintained in their original
non-CHAT format by entering the word “heritage” on the @Options header tier
before the @ID tiers.
2. Utterances and inter-TCU pauses are numbered by the automatic line numbering
function.
3. Line numbers can be turned on and off for viewing and printing by using CLAN
options. Line numbers are not stored by themselves in CHAT, although they are
encoded in the XML version of CHAT.
4. After the line number comes an asterisk and then the speaker ID code and a colon
and a tab, as in CHAT format.
5. Tabs are not used elsewhere.
6. CA Overlaps, as marked with the special symbols ⌈, ⌉, ⌊, and ⌋, are aligned
automatically by the INDENT program, so hand indentation is not needed.
7. To maintain proper alignment, CLAN uses a special fixed-width Unicode font.
8. CHAT requires obligatory utterance terminators. However, CA uses terminal
contours instead, as noted in the table below, and these are optional.
9. Instead of marking comments in double parentheses, CHAT uses the [% com]
notation. However, common sounds, gestures, and activities occurring at a point
in an utterance are marked using the &=gesture form.
10. CHAT uses the following forms for marking disfluencies, as further discussed in
the next chapter.
11. pairs of Unicode 21AB leftwards arrow with loop to mark initial segment
repetition as in ↫bbb↫boy
12. pairs of Unicode 2260 not-equal-to sign to mark blocked segments as in
ru≠bbb≠bber
13. the colon for marking drawls or extensions
14. the ^ symbol for marking a break inside a word
15. forms such as &um for marking filled pauses
16. silent pauses as marked by (.) or (0.6) etc.
17. [/] string for word or phrase repetition
18. string [//] string for retracing
19. +… for trailing off
In addition to these basic utterance-level CA forms, CHAT-CA requires the standard
CHAT headers such as these:
@Begin and @End. Using these guarantees that the file is complete.
@Comment: This is a useful general purpose field
@Bg, @Eg: These mark "gems" for later retrieval
@Participants: This field identifies the speakers.
%ges: dependent tiers such as %ges, %spa can be added as needed.
Gail Jefferson continually elaborated the coding of CA features through special marks
during her career. Her creation of new marks was limited, for many years, by what was
available on the typewriter. With the advent of Unicode, we are able to capture all of the
marks she had proposed along with others that she occasionally used. The following
table summarizes these marks of CHAT-CA.
   Character Name              Char   Function                    F1 +         Unicode
1  up-arrow                    ↑      shift to high pitch         up arrow     2191
2  down-arrow                  ↓      shift to low pitch          down arrow   2193
3  double arrow tilted up      ⇗      rising to high              1            21D7
4  single arrow tilted up      ↗      rising to mid               2            2197
5  level arrow                 →      level                       3            2192
6  single arrow tilted down    ↘      falling to mid              4            2198
7  double arrow down           ⇘      falling to low              5            21D8
8  infinity mark               ∞      unmarked ending             6            221E
9  double wavy equals          ≈      +≈ no break continuation    =            2248
The column marked F1 in the previous table gives methods for inserting the various
non-ASCII Unicode characters. For example, the smile voice symbol ☺ is inserted by
typing F1 and then the letter l. It must be used both before and after the stretch of material with
the smile or laughing voice. After row 32, the items are inserted using F2, instead of F1.
For the most recent version of this symbol set, please consult the current list on the web.
Of these various symbols, there are four that must be placed either at the beginning of
words or inside words. These include the arrows for pitch rise and fall, the inverted
question mark for inhalation, and the ≡ symbol for quick TCU internal uptake. The paired
symbols for intonational stretches such as louder, faster, and slower can be placed
anywhere, except inside comments. They must be used in pairs to mark the beginning
and end of the feature in question.
The triple wavy symbol with a plus (+ ≋) is used to mark a break in a TCU caused by
interruption from another speaker. Use of this symbol can improve readability and
overlap alignment. In this case the triple wavy without a plus can be placed at the end of
the last word of the first segment and then at the beginning of the continuation, where it is
joined with a plus sign and followed by a space, as in +≋ . The ≈ symbol is used in a
parallel way to mark a TCU continuation that is not forced by an interruption from
another speaker. It occurs at the end of the last word of the first segment and in the form
+≈ with a following space at the beginning of the following line. CA transcribers can also
use underlining to represent emphasis on a word or a part of a word. However, if text is
taken from a CHAT file to Word the underlining will be lost.
In general, CA marks must occur either inside words or at the beginnings or ends of
words. In most cases, they should not occur by themselves surrounded by spaces. The
exception to this is the utterance continuator mark +≋ which should be preceded by the
tab mark and followed by a space.
In addition to these features that are basic to CA, our implementation requires
transcribers to begin their transcript with an @Begin line and to end it with an @End
line. Comments can be added using the @Comment format, and transcribers should use
the @Participants header in this form:
@Participants: geo, mom, tim
This line uses only threeletter codes for participant names. By adding this line, it is
possible to have quicker entry of speaker codes inside the editor.
13 Disfluency Transcription
CHAT uses the following forms for marking disfluencies.
Stuttering-like disfluencies          Code                      Example               Notes
prolongation                          :                         s:paghetti            Place after prolonged segment
broken word                           ^                         spa^ghetti            Pause within word
blocking                              ≠                         ≠butter               A block before word onset
repeated segment                      ↫                         ↫r-r-r↫rabbit         The curly left arrow brackets the
                                                                like↫ike↫             repetition; iterations are marked
                                                                                      with hyphens
phonological fragment                 &+                        &+sn dog              Changes from “snake” to “dog”
other non-word strings                &                         &gara                 Word play etc.

Typical Disfluencies
whole word repetition                 follow word with [/]      butter [/] butter     Repeated word counts once
multiple whole word repetition        indicate number of        butter [x 7]          Indicates that the word “butter”
                                      repetitions in brackets                         was repeated seven times
phrase repetition                     <> [/]                    <that is a> [/]       < > is used to mark repeated
                                                                that is a dog.        material
word revision                         [//]                      a dog [//] beast      Revision counts once
phrase revision                       <> [//]                   <what did you> [//]   Revision counts once
                                                                how can you see it ?
pause                                 (.) or (..) or (…)        (.)                   Counts the number of short,
                                                                                      medium, long pauses
pause duration                        (2.4)                     (2.4)                 Adds up the time values, if marked
filled pause                          &                         &um, &you_know        Fillers with underscore count as
                                                                                      one word
The ≠ character to mark blocking is entered by typing F2 and =
The ↫ character to mark segment repetition is entered by typing F2 and /
Blocking of filled pauses is indicated in this way: &≠you_know
These disfluency types can be traced in FREQ and KWAL commands through the
search strings given in the files called fluencysep.cut and fluencycomb.cut in the
/lib/fluency folder in CLAN. They can be counted automatically using the FLUCALC
program.
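If CLAN is not available, rough counts of these marker types can also be obtained with a short script. The sketch below is only an illustration under the assumption that the markers appear exactly as shown in the table above; it is not equivalent to FLUCALC or to the search strings in the .cut files.

import re
from collections import Counter

# Rough patterns for a few of the marker types described above.
# Note: word-internal blocks written with paired ≠ signs are counted twice here.
PATTERNS = {
    'word repetition':        r'\[/\]',
    'revision':               r'\[//\]',
    'filled pause':           r'&(?:uh|um|you_know)\b',
    'phonological fragment':  r'&\+\S+',
    'pause':                  r'\((?:\.{1,3}|…)\)',
    'blocked word':           r'≠',
    'repeated segment':       r'↫[^↫]*↫',
}

def disfluency_counts(utterances):
    """Give rough per-type counts of disfluency markers over a list of main lines."""
    counts = Counter()
    for utt in utterances:
        for name, pattern in PATTERNS.items():
            n = len(re.findall(pattern, utt))
            if n:
                counts[name] += n
    return counts

# disfluency_counts(["and she &+s spilled <the the the> [/] &uh &uh the water on the floor."])
#   -> Counter({'filled pause': 2, 'word repetition': 1, 'phonological fragment': 1})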
Commas can be used as needed to mark phrasal junctions, but they are not used by
the programs and have no tight prosodic definition.
Fragments (phonological) get entered with the ampersand-plus symbol attached at
the beginning. So, for all incomplete words, use &+ followed by the graphemes that
capture the sounds produced.
*PAR: so now I can &+sp speak a little bit.
*PAR: and then &+sh &+s &+w we came home.
If you want to mark disfluencies more precisely, you should use the codes in the
preceding section on Disfluency coding.
Gestures can be captured in several ways. You can compose codes using parts of the
body to indicate head nods and shakes, for example, using the ampersand, the equal sign,
the body part, colon, and then the movement or its meaning. You can use up to two
colons for each gesture code and you can use more than one word after the colon if you
connect the words with an underscore symbol.
*PAR: &=head:no .
*PAR: &=hand:hello .
*PAR: see you later &=ges:wave .
*PAR: the woman &=ges:fishing fishing pole water &=casts:pole .
You can also use the %fac and %gpx codes for facial or bodily gestures that extend
throughout longer periods, including the whole sentence.
*PAR: she was fish [/] fish.
%gpx: raising her arm up and down
The various ways of marking Incomplete utterances are described in section 8.11 on
special utterance terminators.
Interjections (communicators) are transcribed using the standardized spellings encoded
into the MOR grammar. To see just the list of communicator forms, look in the files in
the /lex folder that begin with “co”.
Fillers are listed in the file cofil.txt in that same folder. There are just a few of these.
They are all entered in this format: &uh or &um.
*INV: how do you think your language is these days?
*PAR: well &uh &uh pretty good.
Laughs, sighs, and similar sounds are marked with &= codes at the point where they occur.
*PAR: well &=laughs tell you the truth, I can't say what I said.
You can put the laugh or sigh on its own line if it serves as the speaker's turn.
*PAR: &=laughs .
A list of these Simple Events appears in the CHAT manual and includes cough, groan,
sneeze, etc.
Neologisms can be marked by putting the @n symbols next to the neologism.
*PAR: oh yes, this is a little sakov@n that's all.
Overlapping speakers can be handled in several ways. The easiest is to use a lazy
overlap marking +< at the beginning of the utterance that overlapped the previous
utterance. This indicates that the second utterance overlapped the previous one, but it
doesn't indicate exactly which words were overlapped. There are other ways to handle
overlaps that you can learn about from the manual as you become more familiar with the
program.
*PAR: that's about.
*INV: +< what about that?
Paraphasias can be marked as errors with an asterisk inside square brackets. If you
know the intended target word, you can indicate it in square brackets next to the error. If
the error is a nonword, you can transcribe it in IPA symbols or you can transcribe it
orthographically. If IPA symbols are used, add @u to the end of the error. The CLAN
program will use these replacement words when creating the morphological tier. If two
colons are used, the morphological tier will use the error word and not the replacement
word.
*PAR: no dubs [: dogs] [*] allowed in the cemetery.
*PAR: the pɪnts@u [: prince] [*] wants to know who the slipper fits.
Pauses can be captured in the transcription by using periods inside parentheses: (.)
indicates an unfilled pause, (..) indicates a longer pause, and (…) indicates a very long
pause. The CHAT manual explains how to code the exact length of a pause in a section
entitled Pauses. Also, if you want to distinguish fluent from disfluent pauses, you can use
(.)d for the latter.
*PAR: I don't (.) know.
*PAR: (…) what do you (…) think?
If you want to be exact, you can code the length of the pauses and enter the minutes,
seconds, and parts of seconds within the parentheses. Minutes precede the colon, seconds
follow the colon, and parts of seconds are given after the period symbol. The following
examples code pauses lasting .5 seconds, 1 minute and 13.41 seconds, and 2 seconds,
respectively. If you are not coding minutes, you do not need the colon at all. Most likely,
the final example of 2 seconds illustrates the type of pause coding that would be most
relevant.
*PAR: I don't (0.5) know.
*PAR: (1:13.41) what do you (2.) think?
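When pause lengths have been coded, their total duration is easy to recover by parsing. The following Python sketch is illustrative only and assumes timed pauses written in the forms shown above, such as (0.5), (2.), or (1:13.41); untimed pauses such as (.) are ignored.

import re

TIMED_PAUSE = re.compile(r'\((\d+):(\d+(?:\.\d+)?)\)|\((\d+\.\d*)\)')

def pause_seconds(utterance):
    """Sum the durations of all timed pauses on a main line, in seconds."""
    total = 0.0
    for minutes, sec_after_min, seconds in TIMED_PAUSE.findall(utterance):
        if minutes:
            total += int(minutes) * 60 + float(sec_after_min)
        else:
            total += float(seconds)
    return total

# pause_seconds("I don't (0.5) know.")               -> 0.5
# pause_seconds("(1:13.41) what do you (2.) think?") -> 75.41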
Quoted material is likely to occur during story telling and similar activities. To mark
material as quoted, special symbols are used. The +"/. symbols are used at the end of the
sentence that precedes the quoted material. The +" symbols are used to begin the next
line, which contains a complete clause or sentence of quoted material.
*PAR: and so the prince found the slipper.
*PAR: and he said +"/.
*PAR: +" my_gosh I can find the lady who is fitting this slipper.
If the quoted material continues for more lines, use +" at the beginning of each quoted
line.
If the quote precedes the main clause, use +" at the beginning of the quote and then
use +"/. at the end of the main clause.
*PAR: +" my_gosh I can find the lady who is fitting this slipper.
*PAR: the prince said +".
Repetitions are called Retracing Without Correction. The material that is repeated is
enclosed in angle brackets (< >) and followed immediately by the square brackets ([/])
with one slash mark enclosed. If only one word has been repeated once, angle brackets
are not needed and CLAN will assume that the one word before the square brackets with
the slash was repeated. You do not need to use angle brackets or square brackets with the
slash mark when fillers (e.g., uh, um) are repeated.
*PAR: <it was> [/] it was so bad.
*PAR: and the [/] the window was open.
*PAR: and she &+s spilled <the the the> [/] &uh &uh the water on
the floor.
You can indicate several repetitions of a single word by using the square brackets and
inserting an x, a space, and the number of times the word was repeated.
*PAR: it's [x 4] &um a dog.
Revisions are called Retracing With Correction and occur when the speaker changes
something (usually the syntax) of an utterance but maintains the same idea. The material
being retraced is enclosed in angle brackets, followed immediately by the square brackets
with 2 slash marks enclosed. If only one word has been changed, angle brackets are not
needed and CLAN will assume that the one word before the square brackets with the
slash was revised. A change, or correction, should be something clearly identifiable that
changes the syntax but maintains the same idea of the phrase.
*PAR: well <Cinderella was a> [//] &uh Cinderella is a nice
girl.
*PAR: and then sometimes we [//] I was scared about the traffic.
Invited interruptions occur when one speaker prompts the other speaker to complete
an utterance. These are coded using the +… symbols for trailing off and the ++ symbols
for the other speaker's completion. This may be intentional (cuing) or unintentional.
*INV: how about &+ra +…
*PAR: ++ a radio.
*HEL: if Bill had known +…
*WIN: ++ he would have come.
Shortenings occur when a speaker drops sounds out of words. For example, a
speaker may leave the final "g" off of "running", saying "runnin" instead. In CHAT, this
shortened form should appear as runnin(g). Other examples that demonstrate sound
omissions are (be)cause, prob(ab)ly, (a)bout, (re)member, (ex)cept.
Unintelligible segments of utterances should be transcribed as xxx.
Untranscribed material can be indicated with the letters www. This symbol is used
on a main line to indicate material that a transcriber does not want to transcribe because it
is not relevant to the interaction of interest. This symbol must be followed by the %exp
line, explaining what was transpiring.
*PAR: www.
%exp: talking to spouse
*PAR: www.
%exp: looking through pictures
Utterance segmentation decisions can be challenging. See the guidelines in the first
section of the chapter on Utterances in this manual.
Also, to allow for marking of Hebrew and Arabic prefixes, the # sign is
allowed at the end of the prefix, which is then separated from the stem by a space,
as in we# tiqfoc.
When transcribing geminates, use double consonants or double vowels. This system is
expressed in the following two charts:
Vowels
IPA      Arabic   Name                        CHAT
iː       ي        ya                          ii
ɪ, i     ِ         kasra                       i
eː       ي        ya (ba’den)                 ee
e        -                                    e
aː       ا        alef madda emphatic         aa
a, ɑ     ا        short a                     a
æ        َ         fatHa                       ae
æː                alef madda non-emphatic     æ:
uː       و        waw, long                   uu
u        و        waw, short
ʊ        ُ         dame                        u
oː       و        waw (bantaloːn)             oo
o, ɔ     و        short o
ə                 not in Arabic               e
Consonants
IPA     Arabic   Name     CHAT
ʔ       إ        hamza    ʔ
b       ب        ba       b
p                         p
t       ت        ta       t
θ       ث        tha      tʰ
ʒ       چ        jim      j
ħ       ح        ḥa       ḥ
x       خ        xa       kʰ
χ                         qʰ
d       د        dal      d
ð       ذ        dhal     dʰ
r       ر        ra       r
z       ز        zen      z
s       س        sin      s
ʃ       ش        shin     sʰ
sˤ      ص        sad      s
dˤ      ض        dad      d
tˤ      ط        ṭa       ṭ
zˤ      ظ        ẓa       ẓ
ʕ       ع        ‘ayn     ʕ
ɣ       غ        ghayn    gʰ
f       ف        fa       f
q       ق        qaf      q
ɡ       ج        gim      g
k       ك        kaf      k
l       ل        lam      l
m       م        mim      m
n       ن        nun      n
h       ه        ha       h
w       و        waw      w
j       ي        ya       y
ʧ                         tsʰ
ʤ                         dj
v                         v
16 Specific Applications
The basic CHAT codes can be adapted to work with a variety of more specific
applications. In this chapter, we refer to four such applications to illustrate the adaptation of
the general codes to specific uses. A separate document, available from this server,
describes the BTS (Berkeley Transcription System) for sign language.
When codes cannot be adapted for specific projects, it may be necessary to modify
the underlying XML schema for CHAT. When this becomes necessary, please send
email to [email protected].
16.1 Code-Switching
Transcription is easiest when speakers avoid overlaps, speak in full utterances, and
use a single standard language throughout. However, the real world of conversational
interactions is seldom so simple and uniform. One particularly challenging type of
interaction involves code-switching between two or even three different languages. In
some cases, it may be possible to identify a default language and to mark a few words as
intrusions into the default language. In other cases, mixing and switching are more
intense.
CHAT relies on a system of interlaced marking for identifying the languages being
used in code-switched interactions.
1. The languages spoken by the various participants must be noted with the
@Languages header tier. See section 7.2 for the relevant ISO-639 codes. The first
language on this line is considered to be the default language until a switch is
marked.
2. Utterances that represent a switch to the second language are marked with
precodes, as in [- eng] for a switch to English. Here is an example:
*MOT: can you see?
*CHI: [- spa] no puedo.
3. Individual words that switch away from the default language to the second
language are marked with the @s terminator. If the @Languages header has
“spa, eng”, then the @s mark indicates a switch to English. If the @Languages
header has “eng, spa” then the @s indicates a switch to Spanish. If the switch is to
a language not included in the @Languages header, then the full form must be
used, as in word@s:por for a switch to a Portuguese word.
4. When the default language of the interaction changes, the change can be marked
with @New Language.
The @s special form marker code may also be used to explicitly mark the use of a
particular language, even if it is not included in the @Languages header. For example,
the code schlep@s:yid can be used to mark the inclusion of the Yiddish word “schlep” in
any text. The @s code can also be further elaborated to mark code-blended words. The
form well@s:eng&cym indicates that the word “well” could be either an English or a
Welsh word. The combination of a stem from one language with an inflection from
another can be marked using the plus sign as in swallowni@s:eng+hun for an English
stem with a Hungarian infinitival marking. All of these codes can be followed by a code
with the $ sign to explicitly mark the parts of speech. Thus, the form recordar@s$inf
indicates that this Spanish word is an infinitive. The marking of part of speech with the $
sign can also be used without the @s.
These techniques are all designed to facilitate the retrieval of material in one language
separately from the other without having to tag each and every word. However, if one
wants to see tags on every word, a transcript created using the above rules can be
reformatted using this command, in which the -l switch adds language tags to every
word:
kwal +d +t* +t@ +t% -l filename.cha
Relying further on the -l switch, it is possible to locate code-switches on the utterance
level in a transcript by using a COMBO command of this type for switches from English
to French:
combo +b2 -l +s"\**:^*s:eng^*^\**:^*s:fra" *.cha
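For quick analyses outside of CLAN, the same conventions can also be used to assign a language to each word directly in a script. The sketch below is a simplified illustration, not a replacement for the KWAL and COMBO commands above: it handles only bare @s and @s:lng markers on single words, it assumes a two-language @Languages header, and the example word ticket@s is hypothetical.

import re

AT_S = re.compile(r'(.+?)@s(?::([a-z]{3}(?:[&+][a-z]{3})?))?$')

def word_language(word, languages=('spa', 'eng')):
    """Return (bare_word, language_code) for one main-line word.

    languages -- the codes from the @Languages header, default language first.
    A bare @s marks a switch to the second language; @s:xxx names the language
    explicitly. The $ part-of-speech elaboration is not handled in this sketch.
    """
    stripped = word.rstrip('.!?')
    m = AT_S.match(stripped)
    if m:
        return m.group(1), m.group(2) or languages[1]
    return stripped, languages[0]

# word_language("schlep@s:yid")  -> ("schlep", "yid")
# word_language("ticket@s")      -> ("ticket", "eng")
# word_language("puedo.")        -> ("puedo", "spa")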
Problems similar to those involved in code-switching occur in studies of narratives
where a speaker may assume a variety of roles or voices. For example, a child may be
speaking either as the dragon in a story or as the narrator of the story or as herself. These
different roles are most easily coded by marking the six-character main line code with
forms such as *CHIDRG, *CHINAR, and *CHISEL for child-as-dragon,
child-as-narrator, and child-as-self.
The first @g marker indicates the first page of the book with the boy, the dog, and the
frog. The second @g marker indicates the second page of the book with the boy sleeping.
When using this lazy gem type of marking, it is assumed that the beginning of each new
gem is the end of the previous gem. Programs such as GEM and GEMLIST can then be used
to facilitate retrieval of information linked to particular pictures or stimuli.
Each written sentence should be transcribed on a separate line with the *TEX: field at
the beginning. Additional @Comment and @Situation fields can be added to add
descriptive details about the writing assignment and other relevant information.
For research projects that do not demand a high degree of accurate rendition of the
actual form of the written words, it is sufficient to transcribe the words on the main line
in normalized standard-language orthographic form. However, if the researcher wants to
track the development of punctuation and orthography, the normalized main line should
be supplemented with a %spe line. Here are some examples:
*TEX: Each of us wanted to get going home before the Steeler's
game let out .
%spe: etch of /us wanted too git goin home *,
be/fore the Stillers game let out 0.
In this example, the student had written “ofus” without a space and had incorrectly
placed a space in the middle of “before”. The slash at the beginning of a word marks an
omission and the internal slash marks an extra space. These two marks are used to
achieve one-to-one alignment between the main line and the %spe line. This alignment
can be used to facilitate the use of MODREP in the analysis of orthographic errors. It
will also be used in the future by programs that perform automatic comparisons between
the main line and the %spe line to diagnose error types.
The only purpose of the %spe line is to code word-level spelling errors, not to code
any higher level grammatical errors or word omissions. Also, the words on the main line
are all given in their standard target-language orthographic form. For clarity, final
punctuation on the main line is preceded by a space. If a punctuation mark is omitted, it is
coded with a zero. Forms that appear on the %spe line that have no role in the main line,
such as extraneous punctuation, are marked with an asterisk.
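Because the main line and the %spe line are kept in one-to-one alignment, spelling errors can be paired up mechanically. The sketch below is an illustration, not the planned comparison program: it walks the two lines in parallel, skips %spe forms marked with an asterisk, and reports each written form that differs from the normalized main-line word.

def spelling_errors(main_line, spe_line):
    """Pair normalized main-line words with %spe forms and return the mismatches.

    %spe forms beginning with * (extraneous material with no main-line
    counterpart) are skipped so that the two lines stay aligned.
    """
    targets = main_line.split()
    written = [w for w in spe_line.split() if not w.startswith('*')]
    return [(t, w) for t, w in zip(targets, written) if t != w]

# spelling_errors("Each of us wanted to get going home",
#                 "etch of /us wanted too git goin home")
#   -> [('Each', 'etch'), ('us', '/us'), ('to', 'too'), ('get', 'git'), ('going', 'goin')]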
These conventions focus on the writing of individual words. However, it may also be
necessary to note larger features of composition. When the student crosses off a series of
words and rewrites them, you can use the standard CHAT conventions for retracing with
scoping marked by angle brackets and the [//] symbol. If you want to mark page breaks,
you can use a header such as @Stim: Page 3. If you wish to mark a shift in ink, or
orthographic style, you can use a general @Comment field.
Alternatively, one can combine the codes in a hierarchical system, so that the
previous example would have only the code $dhs:yq. Choice of different forms for codes
depends on the goals of the analysis, the structure of the coding system, and the way the
codes interface with CLAN.
Users will often need to construct their own coding schemes. However, one scheme
that has received extensive attention is one proposed by Ninio & Wheeler (1986). Ninio,
Snow, Pan, & Rollins (1994) provided a simplified version of this system called INCA-A,
or Inventory of Communicative Acts Abridged. The next two sections give the
categories of interchange types and illocutionary forces in the proposed INCA-A system.
Speech Elicitations
CX Complete text, if so demanded.
EA Elicit onomatopoeic or animal sounds.
EI Elicit imitation of word or sentence by modelling or by explicit
command.
EC Elicit completion of word or sentence.
EX Elicit completion of rote-learned text.
RT Repeat or imitate other's utterance.
SC Complete statement or other utterance in compliance with request.
Commitments
FP Ask for permission to carry out act.
PA Permit hearer to perform act.
PD Promise.
PF Prohibit/forbid/protest hearer's performance of an act.
SI State intent to carry out act by speaker.
TD Threaten to do.
Declarations
DC Create a new state of affairs by declaration.
DP Declare make-believe reality.
ND Disagree with a declaration.
YD Agree to a declaration.
Markings
CM Commiserate, express sympathy for hearer's distress.
EM Exclaim in distress, pain.
EN Express positive emotion.
ES Express surprise.
MK Mark occurrence of event (thank, greet, apologize, congratulate,
etc.).
TO Mark transfer of object to hearer.
XA Exhibit attentiveness to hearer.
Statements
AP Agree with proposition or proposal expressed by previous speaker.
CN Count.
DW Disagree with proposition expressed by previous speaker.
Questions
AQ Aggravated question, expression of disapproval by restating a
question.
AA Answer in the affirmative to yes/no question.
AN Answer in the negative to yes/no question.
EQ Eliciting question (e.g., hmm?).
NA Intentionally nonsatisfying answer to question.
QA Answer a question with a wh-question.
QN Ask a product-question (wh-question).
RA Refuse to answer.
SA Answer a wh-question with a statement.
TA Answer a limited-alternative question.
TQ Ask a limited-alternative yes/no question.
YQ Ask a yes/no question.
YA Answer a question with a yes/no question.
Performances
PR Perform verbal move in game.
TX Read or recite written text aloud.
Evaluations
AB Approve of appropriate behavior.
CR Criticize or point out error in nonverbal act.
DS Disapprove, scold, protest disruptive behavior.
ED Exclaim in disapproval.
ET Express enthusiasm for hearer's performance.
PM Praise for motor acts, i.e. for nonverbal behavior.
Text editing
CT Correct, provide correct verbal form in place of erroneous one.
Vocalizations
YY Make a word-like utterance without clear function.
OO Unintelligible vocalization.
Certain other speech act codes that have been widely used in child language research
can be encountered in the CHILDES database. These general codes should not be
combined with the more detailed INCA-A codes. They include ELAB (Elaboration), among others.
18 Error Coding
18.1 Word level error codes
Errors at the word level are marked by placing the [*] symbol after the erroneous
word. If there is a replacement string, such as [: because], that should come before the
error code. When an error occurs in the initial part of a retracing, the [*] symbol is
placed after the error, but before the [/] mark.
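As a minimal invented illustration of these placement rules, assuming the real-word error wall for the target ball and a retracing to the correct form (the optional ret marking discussed later in this section is omitted here):
	*PAR:	the wall [: ball] [* p:w] [//] ball rolled off the table .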
To be considered a phonological error, the error must meet these criteria:
1. For one-syllable words, consisting of an onset (initial phoneme or phonemes) plus
vowel nucleus plus coda (final phoneme or phonemes), the error must match on 2
out of 3 of those elements (e.g., onset plus vowel nucleus OR vowel nucleus plus
coda OR onset plus coda). The part of the syllable that is in error may be a
substitution, addition, or omission. For one-syllable words with no onset (e.g.,
eat) or no coda (e.g., pay), the absence of the onset or coda in the error would also
count as a match.
2. For multisyllabic words, the error must have complete syllable matches on all but
one syllable, and the syllable with the error must meet the one-syllable word
match criteria stated above.
Note: If using other criteria for phonological error coding (e.g., overlap of > 50% of
phonemes between error production and target word), some of the n:k and s:ur errors
may qualify.
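As a worked illustration of criterion 2 (the utterance is invented): candle produced for the target sandal matches the second syllable completely, and in the first syllable the vowel nucleus and coda match while only the onset differs, so the error qualifies as a phonological error:
	*PAR:	she lost her candle [: sandal] [* p:w] at the beach .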
For errors with related words for known targets, one can add these additional distinctions:
s:r:prep wrong preposition, as in on for in or off for out
s:r:seg word that is a partial segment of the target, as in fire for fireman
s:r:der derivational error using a real word, as in assess for assessment or
humbleness for humility
Errors involving grammatical categories, such as number, case, definiteness, or gender,
are coded as [* s:r:gc]. These can be further coded using the relevant part of speech, such
as “art” for article or “pro” for pronoun, and “der” for derivation, as in these examples:
s:r:gc:art definite for indefinite, indefinite for definite, definite for zero
s:r:gc:pro his for her, your for yours, my for mine
18.1.3 Neologisms [* n]
n:k neologism, known target, does not meet phonological error criteria
n:uk neologism, unknown target
n:k:s neologism, known target, stereotypy (recurring nonword)
n:uk:s neologism, unknown target, stereotypy (recurring nonword)
n:k:der neologism, known target, as in integrativity for integration, or
foundament for foundation
18.1.4 Morphological errors [* m]
Missing regular forms, in which the base lemma appears with no suffix, are coded with
m:0, as in
m:0ing missing progressive suffix
m:03s *missing 3rd person singular suffix
m:0ed missing regular past suffix
m:0s *missing regular plural suffix
m:0's missing possessive suffix
m:0s' missing possessive plural suffix
However, the two codes marked above with * should usually be coded instead as
agreement errors, as noted below.
Substitutions of the base form for irregulars, with omission of the expected marking, are
coded with m:base:*, as in
m:base:s child for children, ox for oxen
m:base:ed come for came, bring for brought
m:base:en take for taken or freeze for frozen
m:base:er badder for worse
m:base:est baddest for worst
Substitutions of an irregular for the base form are coded with m:irr:* in this way:
m:irr:s children for child
m:irr:ed found for find
m:irr:en taken for take
Substitutions between past and perfective irregulars are coded with m:sub:* in this way:
m:sub:ed frozen for froze, seen for saw
m:sub:en froze for frozen, saw for seen
Overregularizations of irregulars are coded in this way:
m:=ed overregularized –ed past, as in seed for saw
m:=en overregularized –en perfective, as in taked for taken
m:=s overregularized –s plural, as in childs for children
Superfluous markings of regulars are coded in this way:
m:+ing superfluous progressive, as in running for run
m:+3s *superfluous 3rd person singular –s suffix, as in goes for go
m:+ed superfluous regular past, as in walked for walk
m:+en superfluous perfective, as in taken for take
m:+s *superfluous plural, as in gowns for gown
m:+'s superfluous possessive or plural possessive, as in John's for John.
However, the two codes marked above with * should usually be coded as agreement
errors, as noted below.
Double markings of regulars and irregulars are coded in this way:
m:++ing runninging
m:++3s wantses
m:++ed talkeded
m:++en changeded
m:++s sevenses
m:++'s boys's's
Double markings of irregulars are marked using the above codes plus a final :i
m:++ed:i tooked
m:++en:i brokened, takenen
m:++s:i feets
Agreement errors for irregulars are marked in this way:
m:vsg:a verb 3rd singular for unmarked:
has for have, is for are, was for were
m:vun:a verb unmarked for 3rd singular
have for has, are for is, were for was
When agreement errors involve regulars, the :a should be added to the basic code, as in
m:03s:a he want for he wants
m:+3s:a we wants for we want
m:0s:a noun singular for plural, as in two dog for two dogs
m:+s:a noun plural for singular, as in this dogs for this dog
Allomorphy errors in the stem or base are coded in this way:
m:allo knifes for knives, an for a
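A hedged illustration combining several of the codes above in invented child utterances:
	*CHI:	yesterday he taked [: took] [* m:=ed] my feets [: feet] [* m:++s:i] .
	*CHI:	the dog want [* m:03s:a] two cookie [* m:0s:a] .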
18.1.5 Dysfluencies [* d]
d:sw dysfluency within word, as in insuhside for inside
When the word produced is a real word, even though it is the wrong real word, the
target can be marked with a single colon or a double colon. The double colon allows
the MOR program to use the actual word produced rather than the target in its
analysis, but also allows programs such as FREQ to use either form when needed.
*PAR: they cut the &+l lock off the door and tall [: call] [* p:w] the paramedics .
OR
*PAR: they cut the &+l lock off the door and tall [:: call] [* p:w] the paramedics .
Multiple codes may be used if an error is, for example, both a semantic and
phonemic paraphasia, as in this example:
*PAR: it was singing [: ringing] [* s:r] [* p:w] in my ears .
Also, if the error is repeated, you can add “rep” to the error code; if the error is
revised (to another error or the correct word), add “ret” (retraced) to the error
code, as in this example:
*PAR: it’s a little dog [: cat] [* s:r-ret] [//] cat .
18.2 Utterance level error codes
Grammatical error – [+ gram] – includes agrammatic and paragrammatic utterances:
	telegraphic speech: speech in which content words (mainly nouns, verbs, and
	adjectives) are relatively preserved but many function words (articles,
	prepositions, conjunctions) are missing (adapted from Brookshire, 1997)
	utterances with frank grammatical errors (without requiring that each utterance be
	a complete sentence with a subject and predicate)
	utterances with errors in word order, syntactic structure, or grammatical
	morphology (Butterworth and Howard, 1987)
	utterance level grammatical errors as opposed to word level agreement errors or
	missing parts of speech
*PAR: one two bread . [+ gram]
*PAR: whatever I’m think up . [+ gram]
*PAR: is getting want to be wasn’t . [+ gram] [+ jar]
*PAR: when everything that we going out now . [+ gram] [+ jar]
Jargon – [+ jar] – mostly fluent and prosodically correct but largely meaningless
speech, containing paraphasias, neologisms, or unintelligible strings; resembles
English syntax and inflection (adapted from Kertesz, 2007)
*PAR: go and &+ha hack [* s:uk] the gets [* s:uk] be able gable [* s:uk] get &+su
sɪm@u [: x@n] [* n:uk] . [+ jar]
*PAR: get this care [* s:uk-ret] [//] kɛɹf@u [: x@n] [* n:uk] to eat here . [+ jar]
*PAR: and xxx . [+ jar]
Empty speech – [+ es] – speech that is syntactically correct but conveys little or no
overall meaning, often a result of substituting general words (e.g., thing, stuff) for
more specific words (Brookshire, 1997). Differentiating among “empty speech”,
“jargon”, and “grammatical error” codes may be challenging. In truth, all these
sentences may be meaningless in the conversational context. Briefly, empty speech
utterances should contain general, vague, unspecific referents; jargon utterances
should contain paraphasias and/or neologisms; and paragrammatic utterances (in the
grammatical error category) should have inappropriate juxtapositions of grammatical
elements.
*PAR: we got little things over here . [+ es]
*PAR: there was nothing in that one there . [+ es]
Perseveration – [+ per] – repetition of an utterance when it is no longer appropriate
(Brookshire, 1997)
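No sample utterance is given above for this code; a hedged, invented illustration might be a picture description in which an earlier utterance is repeated after the topic has moved on:
	*PAR:	the boy is climbing the tree .
	*PAR:	and the girl is reading a book .
	*PAR:	climbing the tree . [+ per]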
Circumlocution – [+ cir] – talking around words/concepts
*PAR: and through the help of <the whatever fairy or whoever the [x 3] what>
[//] the lady that is helping Cinderella &um she has <the chance to
check the> [//] the prince check the &+s &uh shoe . [+ cir] [+ gram]
References
Allen, G. D. (1988). The PHONASCII system. Journal of the International Phonetic
Association, 18, 9-25.
Ament, W. (1899). Die Entwicklung von Sprechen und Denken beim Kinde. Leipzig:
Ernst Wunderlich.
Augustine, S. (1952). The Confessions, original 397 A.D. (Vol. 18). Chicago:
Encyclopedia Britannica.
Bates, E., & MacWhinney, B. (1982). Functionalist approaches to grammar. In E.
Wanner & L. Gleitman (Eds.), Language acquisition: The state of the art (pp.
173-218). New York, NY: Cambridge University Press.
Bernstein Ratner, N., Rooney, B., & MacWhinney, B. (1996). Analysis of stuttering
using CHILDES and CLAN. Clinical Linguistics and Phonetics, 10(3), 169-188.
Bloom, P. (2000). How children learn the meanings of words. Cambridge, MA: MIT
Press.
Brown, R. (1973). A first language: The early stages. Cambridge, MA: Harvard.
Chafe, W. (Ed.) (1980). The Pear stories: Cognitive, cultural, and linguistic aspects of
narrative production. Norwood, NJ: Ablex.
Clark, E. (1987). The Principle of Contrast: A constraint on language acquisition. In B.
MacWhinney (Ed.), Mechanisms of Language Acquisition (pp. 1-34). Hillsdale,
NJ: Lawrence Erlbaum Associates.
Crystal, D. (1969). Prosodic systems and intonation in English. Cambridge: Cambridge
University Press.
Crystal, D. (1979). Prosodic development. In P. Fletcher & M. Garman (Eds.), Language
acquisition: Studies in first language development. New York, NY: Cambridge
University Press.
Darwin, C. (1877). A biographical sketch of an infant. Mind, 2, 292-294.
Edwards, J. (1992). Computer methods in child language research: four principles for the
use of archived data. Journal of Child Language, 19, 435-458.
Ekman, P., & Friesen, W. (1969). The repertoire of nonverbal behavior: Categories,
origins, usage, and coding. Semiotica, 1, 47-98.
Ekman, P., & Friesen, W. (1978). Facial action coding system: Investigator's guide. Palo
Alto, CA: Consulting Psychologists Press.
Fletcher, P. (1985). A child's learning of English. Oxford: Blackwell.
Goldman-Eisler, F. (1968). Psycholinguistics: Experiments in spontaneous speech. New
York, NY: Academic Press.
Gvozdev, A. N. (1949). Formirovaniye u rebenka grammaticheskogo stroya. Moscow:
Akademija Pedagogika Nauk RSFSR.
Halliday, M. (1966). Notes on transitivity and theme in English: Part 1. Journal of
Linguistics, 2, 37-71.
Halliday, M. (1967). Notes on transitivity and theme in English: Part 2. Journal of
Linguistics, 3, 177-274.
Halliday, M. (1968). Notes on transitivity and theme in English: Part 3. Journal of
Linguistics, 4, 153-308.
Jefferson, G. (1984). Transcript notation. In J. Atkinson & J. Heritage (Eds.), Structures
of social interaction: Studies in conversation analysis (pp. 134-162). Cambridge:
Cambridge University Press.
Kearney, G., & McKenzie, S. (1993). Machine interpretation of emotion: Design of
memory-based expert system for interpreting facial expressions in terms of
signaled emotions. Cognitive Science, 17, 589-622.
Kenyeres, E. (1926). A gyermek első szavai és a szófajok föllépése. Budapest:
Kisdednevelés.
Kenyeres, E. (1938). Comment une petite hongroise de sept ans apprend le français.
Archives de Psychologie, 26, 521-566.
Leopold, W. (1939). Speech development of a bilingual child: a linguist's record: Vol. 1.
Vocabulary growth in the first two years (Vol. 1). Evanston, IL: Northwestern
University Press.
Leopold, W. (1947). Speech development of a bilingual child: a linguist's record: Vol. 2.
Sound-learning in the first two years. Evanston, IL: Northwestern University
Press.
Leopold, W. (1949a). Speech development of a bilingual child: a linguist's record: Vol.
3. Grammar and general problems in the first two years. Evanston, IL:
Northwestern University Press.
Leopold, W. (1949b). Speech development of a bilingual child: a linguist's record: Vol.
4. Diary from age 2. Evanston, IL: Northwestern University Press.
LIPPS. (2000). The LIDES manual: A document for preparing and analysing language
interaction data. International Journal of Bilingualism, 4, 1-64.
Low, A. A. (1931). A case of agrammatism in the English language. Archives of
Neurology and Psychiatry, 25, 556-597.
MacWhinney, B. (1989). Competition and lexical categorization. In R. Corrigan, F.
Eckman, & M. Noonan (Eds.), Linguistic categorization (pp. 195-242).
Philadelphia, PA: Benjamins.
MacWhinney, B., & Osser, H. (1977). Verbal planning functions in children's speech.
Child Development, 48, 978-985.
Malvern, D., Richards, B., Chipere, N., & Durán, P. (2004). Lexical diversity and
language development. New York, NY: Palgrave Macmillan.
Miller, J., & Chapman, R. (1983). SALT: Systematic Analysis of Language Transcripts,
User's Manual. Madison, WI: University of Wisconsin Press.
Moerk, E. (1983). The mother of Eve as a first language teacher. Norwood, NJ:
Ablex.
Ninio, A., Snow, C. E., Pan, B., & Rollins, P. (1994). Classifying communicative acts in
children's interactions. Journal of Communication Disorders, 27, 157-188.
Ninio, A., & Wheeler, P. (1986). A manual for classifying verbal communicative acts in
mother-infant interaction. Transcript Analysis, 3, 1-83.
Ochs, E. (1979). Transcription as theory. In E. Ochs & B. Schieffelin (Eds.),
Developmental pragmatics (pp. 43-72). New York, NY: Academic.
Ochs, E., Schegloff, E. A., & Thompson, S. A. (1996). Interaction and grammar.
Cambridge: Cambridge University Press.
Parisse, C., & Le Normand, M.-T. (2000). Automatic disambiguation of the
morphosyntax in spoken language corpora. Behavior Research Methods,
Instruments, and Computers, 32, 468-481.
Parrish, M. (1996). Alan Lomax: Documenting folk music of the world. Sing Out!: The
Folk Song Magazine, 40, 30-39.
Pick, A. (1913). Die agrammatischen Sprachstörungen. Berlin: Springer-Verlag.
Preyer, W. (1882). Die Seele des Kindes. Leipzig: Grieben's.
Sacks, H., Schegloff, E., & Jefferson, G. (1974). A simplest systematics for the
organization of turn-taking for conversation. Language, 50, 696-735.
Sagae, K., Davis, E., Lavie, A., MacWhinney, B., & Wintner, S. (2007). High-accuracy
annotation and parsing of CHILDES transcripts. In Proceedings of the 45th Meeting
of the Association for Computational Linguistics (pp. 1044-1050). Prague: ACL.
Sagae, K., Lavie, A., & MacWhinney, B. (2005). Automatic measurement of syntactic
development in child language. In Proceedings of the 43rd Meeting of the
Association for Computational Linguistics (pp. 197-204). Ann Arbor, MI: ACL.
Sagae, K., MacWhinney, B., & Lavie, A. (2004). Adding syntactic annotations to
transcripts of parent-child dialogs. In LREC 2004 (pp. 1815-1818). Lisbon: LREC.
Selting, M., et al. (1998). Gesprächsanalytisches Transkriptionssystem (GAT).
Linguistische Berichte, 173, 91-122.
Slobin, D. (1977). Language change in childhood and in history. In J. Macnamara (Ed.),
Language learning and thought (pp. 185-214). New York, NY: Academic Press.
Sokolov, J. L., & Snow, C. (Eds.). (1994). Handbook of Research in Language
Development using CHILDES. Hillsdale, NJ: Erlbaum.
Stemberger, J. (1985). The lexicon in a model of language production. New York, NY:
Garland.
Stern, C., & Stern, W. (1907). Die Kindersprache. Leipzig: Barth.
Trager, G. (1958). Paralanguage: A first approximation. Studies in Linguistics, 13, 1-12.
Wernicke, C. (1874). Der aphasische Symptomenkomplex. Breslau: Cohn & Weigert.