Syntax and Its Explanation
Etymology
The word syntax comes from Ancient Greek roots: σύνταξις "coordination", which consists
of σύν syn, "together", and τάξις táxis, "ordering".
Topics
The field of syntax contains a number of topics that a syntactic theory is often designed to handle. The relation between the topics is treated differently in different theories, and some of them may not be considered distinct but instead derived from one another (e.g., word order can be seen as the result of movement rules derived from grammatical relations).
Grammatical relations
One description of a language considers the set of possible grammatical relations in a language or in general and how they behave in relation to one another in the morphosyntactic alignment of the language. The description of grammatical relations can also reflect transitivity, passivization, and head-dependent-marking or other agreement. Languages have different criteria for grammatical relations. For example, subjecthood criteria may have implications for how the subject is referred to from a relative clause or is coreferential with an element in a non-finite clause.[5]
Constituency
Constituency is the property of being a constituent, that is, of how words can work together to form a constituent (or phrase). Constituents are often moved as units, and a constituent can be the domain of agreement. Some languages allow discontinuous phrases, in which words belonging to the same constituent are not immediately adjacent but are broken up by other constituents. Constituents may be recursive, as they may consist of other constituents, potentially of the same type.
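As a rough illustration of this recursion (a sketch with invented category labels and class names, not a formal analysis), the following Python fragment represents constituents as labelled nodes whose children may themselves be constituents, so that a noun phrase can contain another noun phrase of the same type.

```python
from dataclasses import dataclass, field
from typing import List, Union

@dataclass
class Constituent:
    """A labelled node whose children are words or further constituents."""
    label: str                                      # e.g. "NP", "PP" (labels are illustrative)
    children: List[Union["Constituent", str]] = field(default_factory=list)

    def words(self) -> List[str]:
        """Collect, in order, the words spanned by this constituent."""
        out: List[str] = []
        for child in self.children:
            out.extend(child.words() if isinstance(child, Constituent) else [child])
        return out

# "the friend of the student": an NP that contains another NP of the same type.
inner_np = Constituent("NP", ["the", "student"])
outer_np = Constituent("NP", ["the", "friend", Constituent("PP", ["of", inner_np])])
print(outer_np.words())  # ['the', 'friend', 'of', 'the', 'student']
```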
Early history
The Aṣṭādhyāyī of Pāṇini, from c. 4th century BC in Ancient India, is often cited as an example of a premodern work that approaches the sophistication of a modern syntactic theory (works on grammar had been written long before modern syntax came about).[6] In the West, the school of thought that came to be known as "traditional grammar" began with the work of Dionysius Thrax.
For centuries, a framework known as grammaire générale, first expounded in 1660 by Antoine Arnauld in a book of the same title, dominated work in syntax: its basic premise was the assumption that language is a direct reflection of thought processes, and so there is a single most natural way to express a thought.[citation needed]
However, in the 19th century, with the development of historical-comparative linguistics, linguists
began to realize the sheer diversity of human language and to question fundamental assumptions
about the relationship between language and logic. It became apparent that there was no such thing
as the most natural way to express a thought and so logic could no longer be relied upon as a basis
for studying the structure of language.[citation needed]
The Port-Royal grammar modeled the study of syntax upon that of logic. (Indeed, large parts of Port-
Royal Logic were copied or adapted from the Grammaire générale.[7]) Syntactic categories were
identified with logical ones, and all sentences were analyzed in terms of "subject – copula –
predicate". Initially, that view was adopted even by the early comparative linguists such as Franz
Bopp.
The central role of syntax within theoretical linguistics became clear only in the 20th century, which
could reasonably be called the "century of syntactic theory" as far as linguistics is concerned. (For a
detailed and critical survey of the history of syntax in the last two centuries, see the monumental
work by Giorgio Graffi (2001).[8])
Theories
See also: Theory of language
There are a number of theoretical approaches to the discipline of syntax. One school of thought,
founded in the works of Derek Bickerton,[9] sees syntax as a branch of biology, since it conceives of
syntax as the study of linguistic knowledge as embodied in the human mind. Other linguists
(e.g., Gerald Gazdar) take a more Platonistic view, since they regard syntax as the study of an abstract formal system.[10] Yet others (e.g., Joseph Greenberg) consider syntax a taxonomical device for reaching broad generalizations across languages.
Syntacticians have attempted to explain the causes of word-order variation within individual
languages and cross-linguistically. Much of this work has been done within the framework of
generative grammar, which holds that syntax depends on a genetic endowment common to the
human species. In that framework and in others, linguistic typology and universals have been
primary explicanda.[11]
Alternative explanations, such as those offered by functional linguists, have been sought in language processing. It is suggested that the brain finds it easier to parse syntactic patterns that are either right- or left-branching but not mixed. The most widely held approach is the performance–grammar correspondence hypothesis of John A. Hawkins, who suggests that language is a non-innate adaptation to innate cognitive mechanisms. Cross-linguistic tendencies are considered to be based on language users' preference for grammars that are organized efficiently and on their avoidance of word orderings that cause processing difficulty. Some languages, however, exhibit regular inefficient patterning, such as the VO languages Chinese, with the adpositional phrase before the verb, and Finnish, with postpositions; but there are few other profoundly exceptional languages.[12] More recently, it has been suggested that the left- versus right-branching patterns are cross-linguistically related only to the placement of role-marking connectives (adpositions and subordinators), which links the phenomena to the semantic mapping of sentences.[13]
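As a loose illustration of the branching distinction just mentioned (an informal sketch, not part of Hawkins's proposal), the following Python fragment classifies a binary-branching structure, given as nested pairs of words, as consistently right-branching, consistently left-branching, or mixed.

```python
from typing import Tuple, Union

# A binary-branching structure: either a word or a pair of substructures.
Tree = Union[str, Tuple["Tree", "Tree"]]

def branching(tree: Tree) -> str:
    """Classify a tree as 'right', 'left', 'mixed', or 'flat' (no branching nodes)."""
    directions: set = set()

    def walk(node: Tree) -> None:
        if isinstance(node, str):
            return
        left, right = node
        if isinstance(right, tuple) and isinstance(left, str):
            directions.add("right")      # complex material in the right daughter
        elif isinstance(left, tuple) and isinstance(right, str):
            directions.add("left")       # complex material in the left daughter
        elif isinstance(left, tuple) and isinstance(right, tuple):
            directions.add("mixed")
        walk(left)
        walk(right)

    walk(tree)
    if not directions:
        return "flat"
    return directions.pop() if len(directions) == 1 else "mixed"

# Head-initial nesting ("in (the house (of (the mayor)))") is uniformly right-branching;
# the mirror-image, postpositional order is uniformly left-branching.
print(branching(("in", ("the house", ("of", "the mayor")))))      # right
print(branching(((("the mayor", "of"), "the house"), "in")))      # left
print(branching((("in", "the house"), ("of", "the mayor"))))      # mixed
```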
Categorial grammar
Main article: Categorial grammar
Categorial grammar is an approach in which constituents combine as function and argument, according to combinatory possibilities specified in their syntactic categories. For example, where other approaches might posit a rule that combines a noun phrase (NP) and a verb phrase (VP), categorial grammar posits a syntactic category NP and another, NP\S, read as "a category that searches to the left (indicated by \) for an NP (the element on the left) and outputs a sentence (the element on the right)". Thus, the syntactic category for an intransitive verb is a complex formula representing the fact that the verb acts as a function requiring an NP as an input and producing a sentence-level structure as an output. The complex category is notated as (NP\S) instead of V. The category of a transitive verb is defined as an element that requires two NPs (its subject and its direct object) to form a sentence. That is notated as ((NP\S)/NP), which means "a category that searches to the right (indicated by /) for an NP (the object) and generates a function (equivalent to the VP) which is (NP\S), which in turn represents a function that searches to the left for an NP and produces a sentence".
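The following minimal sketch (in Python, with invented class and helper names) encodes categories such as NP, NP\S, and (NP\S)/NP and combines them by forward and backward application; it is meant to illustrate the notation above, not to reproduce any particular categorial-grammar implementation.

```python
from dataclasses import dataclass
from typing import Union

@dataclass(frozen=True)
class Atom:
    """An atomic category such as NP or S."""
    name: str
    def __str__(self) -> str:
        return self.name

@dataclass(frozen=True)
class Functor:
    """A complex category that seeks an argument on one side and yields a result."""
    result: "Category"
    slash: str          # "/" seeks the argument to the right, "\\" seeks it to the left
    argument: "Category"
    def __str__(self) -> str:
        # Write the argument on the side where it is sought, as in the prose above:
        # NP\S seeks an NP to its left; (NP\S)/NP seeks an NP to its right.
        if self.slash == "\\":
            return f"({self.argument}\\{self.result})"
        return f"({self.result}/{self.argument})"

Category = Union[Atom, Functor]

NP, S = Atom("NP"), Atom("S")
INTRANSITIVE = Functor(S, "\\", NP)            # NP\S
TRANSITIVE = Functor(INTRANSITIVE, "/", NP)    # (NP\S)/NP

def combine(left: Category, right: Category) -> Category:
    """Combine two adjacent categories by forward or backward application."""
    if isinstance(left, Functor) and left.slash == "/" and left.argument == right:
        return left.result                     # forward application
    if isinstance(right, Functor) and right.slash == "\\" and right.argument == left:
        return right.result                    # backward application
    raise ValueError(f"cannot combine {left} and {right}")

# "Alice sees Bob": the transitive verb first takes its object, then its subject.
vp = combine(TRANSITIVE, NP)    # (NP\S)/NP  +  NP  ->  NP\S
s = combine(NP, vp)             # NP  +  NP\S      ->  S
print(vp, s)                    # prints: (NP\S) S
```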
Tree-adjoining grammar is a categorial grammar that adds partial tree structures to the categories.
Functional grammars
Main article: Functional theories of grammar
Functionalist models of grammar study the form–function interaction by performing a structural and a
functional analysis.
Cognitive grammar
Construction grammar (CxG)
Emergent grammar