Transformational Grammar of English

TRANSFORMATIONAL GRAMMAR

A. GENERAL AIMS

In linguistics, a transformational grammar, or transformational-generative grammar (TGG), is a generative grammar, especially of a natural language, that has been developed in the Chomskyan tradition. The term also refers to the Chomskyan tradition that gives rise to specific transformational grammars. Much current research in transformational grammar is inspired by Chomsky's Minimalist Program.

The aims of Transformational-Generative Grammar (TG for short) are quite different from those of structural linguistics. Instead of attending to a corpus and methods of analysis, TG focuses attention on the fact that all speakers of a natural language are able to form new sentences and understand utterances they have never heard before. In other words, knowing a language is not merely a matter of storing in one's head a long list of words or sentences, but of being able to produce sentences not heard before. TG assumes that the basis of this ability is knowledge of what may be called a grammar. The grammar of a language, therefore, consists of the rules (the formulas) that will generate sentences grammatically acceptable to native speakers. The aim of TG is to formulate these rules as precisely as possible, so that they generate all the possible sentences of the language and none of the impossible ones.

B. GRAMMATICAL THEORIES

In the 1960s, Chomsky introduced two central ideas relevant to the construction and evaluation of grammatical theories. The first was the distinction between competence and performance. Chomsky noted the obvious fact that people, when speaking in the real world, often make linguistic errors (e.g. starting a sentence and then abandoning it midway through). He argued that these errors in linguistic performance were irrelevant to the study of linguistic competence (the knowledge that allows people to construct and understand grammatical sentences). Consequently, the linguist can study an idealised version of language, greatly simplifying linguistic analysis.

The second idea related directly to the evaluation of theories of grammar. Chomsky made a distinction between grammars which achieved descriptive adequacy and those which went further and achieved explanatory adequacy. A descriptively adequate grammar for a particular language defines the (infinite) set of grammatical sentences in that language; that is, it describes the language in its entirety. A grammar which achieves explanatory adequacy has the additional property that it gives an insight into the underlying linguistic structures in the human mind; that is, it does not merely describe the grammar of a language, but makes predictions about how linguistic knowledge is mentally represented. For Chomsky, the nature of such mental representations is largely innate, so if a grammatical theory has explanatory adequacy it must be able to explain the various grammatical nuances of the languages of the world as relatively minor variations in the universal pattern of human language. Chomsky argued that, even though linguists were still a long way from constructing descriptively adequate grammars, progress in terms of descriptive adequacy would only come if linguists held explanatory adequacy as their goal. In other words, real insight into the structure of individual languages could only be gained through the comparative study of a wide range of languages, on the assumption that they are all cut from the same cloth.

C. COMPETENCE AND PERFORMANCE DISTINCTION

Chomsky makes a distinction between competence and performance. When he speaks of competence he refers to the speaker's implicit knowledge of his language (his knowledge of the rules). By performance he means the actual use of language in concrete situations. A person's actual utterances may be ungrammatical or incomplete because he is tired or excited or not paying full attention. The problem for the linguist is to describe the language competence of the speaker by observing his performance. The linguist is interested in competence because he is interested in what is possible in the whole language. Linguistic competence is not always reflected in actual speech. Our linguistic performance is peppered with 'ums' and 'ahs', false starts and sentence fragments. Nevertheless, when asked, we are still able to judge the difference between those utterances that live up to the rules of English and those that do not. Although we are probably not consciously aware of any of these rules, our unconscious mastery of them is revealed in our linguistic competence.

D. DEEP STRUCTURE AND SURFACE STRUCTURE

In 1957, Noam Chomsky published Syntactic Structures, in which he developed the idea that each sentence in a language has two levels of representation: a deep structure and a surface structure. The deep structure represented the core semantic relations of a sentence, and was mapped onto the surface structure (which followed the phonological form of the sentence very closely) via transformations. Chomsky believed that there would be considerable similarities between languages' deep structures, and that these structures would reveal properties, common to all languages, which were concealed by their surface structures. However, this was perhaps not the central motivation for introducing deep structure. Transformations had been proposed prior to the development of deep structure as a means of increasing the mathematical and descriptive power of context-free grammars. Similarly, deep structure was devised largely for technical reasons relating to early semantic theory. Chomsky emphasizes the importance of modern formal mathematical devices in the development of grammatical theory:

"But the fundamental reason for [the] inadequacy of traditional grammars is a more technical one. Although it was well understood that linguistic processes are in some sense 'creative', the technical devices for expressing a system of recursive processes were simply not available until much more recently. In fact, a real understanding of how a language can (in Humboldt's words) 'make infinite use of finite means' has developed only within the last thirty years, in the course of studies in the foundations of mathematics." (Aspects of the Theory of Syntax, p. 8)

Consider the following sentence pairs:

"The cat chased the mouse."



"The mouse was chased by the cat."



"Where did John drive?"



"John drove (where)." According to the transformational grammar, there is an abstract level of representation

that underlies the syntactical structures of each pair member. For instance, the forms first and TRANSFORMATIONAL GRAMMAR_Task.rtf arranged by Ach. Philip, S.Pd

3

second sentences correspond to "surface structures." The linguist Noam Chomsky proposed that these surface structures are derived from a common underlying grammatical representation, called their "deep structure." Within the theory, their deep structure is represented in the form of a heirarchical tree depicting the grammatical relationships between the various constituents that make up the sentence, such as the noun phrases* "the cat" and "the mouse," and the verb phrases "chased" and "was chased ." The application of certain transformation rules to this tree produces the surface structures seen above. A phrase structure tree is a form of representation of sentences in which nodes or elements are labelled by syntactic category (noun phrase (NP), verb phrase (VP), prepositional phrase (PP), etc.)
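A labelled tree of this kind can also be written down as a nested data structure. The short Python sketch below is only an illustration; the particular bracketing chosen for "The cat chased the mouse" is an assumption made for the example, not an analysis taken from any specific version of the theory.

# A minimal sketch: a phrase structure tree as nested tuples of the form
# (category label, child, child, ...). The bracketing is an illustrative assumption.
tree = ("S",
        ("NP", ("Det", "the"), ("N", "cat")),
        ("VP", ("V", "chased"),
               ("NP", ("Det", "the"), ("N", "mouse"))))

def terminal_yield(node):
    # Read the surface word string off the tree, left to right.
    if isinstance(node, str):
        return [node]
    _label, *children = node
    words = []
    for child in children:
        words.extend(terminal_yield(child))
    return words

print(" ".join(terminal_yield(tree)))   # -> the cat chased the mouse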

E. DEVELOPMENT OF BASIC CONCEPTS

Though transformations continue to be important in Chomsky's current theories, he has now abandoned the original notion of Deep Structure and Surface Structure. Initially, two additional levels of representation were introduced (LF, Logical Form, and PF, Phonetic Form), and then in the 1990s Chomsky sketched out a new program of research known as Minimalism, in which Deep Structure and Surface Structure no longer featured and PF and LF remained as the only levels of representation. To complicate the understanding of the development of Noam Chomsky's theories, the precise meanings of Deep Structure and Surface Structure have changed over time: by the 1970s, the two were normally referred to simply as D-Structure and S-Structure by Chomskyan linguists. In particular, the idea that the meaning of a sentence was determined by its Deep Structure (taken to its logical conclusions by the generative semanticists during the same period) was dropped for good by Chomskyan linguists when LF took over this role (previously, Chomsky and Ray Jackendoff had begun to argue that meaning was determined by both Deep and Surface Structure).

Terms such as "transformation" can give the impression that theories of transformational generative grammar are intended as a model for the processes through which the human mind constructs and understands sentences. Chomsky is clear that this is not in fact the case: a generative grammar models only the knowledge that underlies the human ability to speak and understand. One of the most important of Chomsky's ideas is that most of this knowledge is innate, with the result that a baby can have a large body of prior knowledge about the structure of language in general, and need only actually learn the idiosyncratic features of the language(s) it is exposed to.

Chomsky was not the first person to suggest that all languages had certain fundamental things in common (he quotes philosophers writing several centuries ago who had the same basic idea), but he helped to make the innateness theory respectable after a period dominated by more behaviorist attitudes towards language. Perhaps more significantly, he made concrete and technically sophisticated proposals about the structure of language, and made important proposals regarding how the success of grammatical theories should be evaluated. Chomsky goes so far as to suggest that a baby need not learn any actual rules specific to a particular language at all. Rather, all languages are presumed to follow the same set of rules, but the effects of these rules and the interactions between them can vary greatly depending on the values of certain universal linguistic parameters. This is a very strong assumption, and is one of the most subtle ways in which Chomsky's current theory of language differs from most others.

F. TRANSFORMATIONS

In structural linguistics the concern was largely to analyse sentences, i.e. to divide them into parts and state the functions of the parts. Such an analysis can show certain similarities and differences between sentences. TG points out that there are certain kinds of relationships between sentences. An example of this kind is the pair of active and passive sentences such as:

1. Cheng kicked the ball.

(Noun + Verb + Det + Noun)

2. The ball was kicked by Cheng.

(Det + Noun + Aux + Verb + Prep + Noun)

In IC analysis these two sentences would be described as two different sentences with different sequences of elements, as indicated. TG attempts to explain the native speaker's intuition that they are closely related. TG regards passive sentences as being formed from the active ones. What sort of changes must take place in this transformation, as it is called? Notice that the subject (Cheng) and object (the ball) in (1) have changed places in (2), where the ball is the subject and Cheng is the agent preceded by "by". The relationship between the active and passive sentences, and the changes that take place in the process of transforming the active sentence into the passive one, can be stated in an abstract way by using symbols. The use of symbols enables TG to state that all active sentences with the pattern of "Cheng kicked the ball" can be converted into a passive sentence like "The ball was kicked by Cheng". For example, we could use the following symbols to represent the elements in the active sentence, and the symbols would represent other sentences of like form:

NP1       V            NP2
Cheng     kicked       the ball
Mary      has eaten    the apple
Frodo     will chase   the Orcs

The change into the passive sentence is also represented as follows:

NP2        V                  by NP1
The ball   was kicked         by Cheng
An apple   has been eaten     by Mary
The Orcs   will be chased     by Frodo

I could add to the number of sentences in the first set indefinitely and they could all be transformed into the shape of those in the second set. The use of symbols is a neat way of making a general statement. NP1 and NP2 refer to the first and second noun phrases in the active sentence. V stands for verb. We can say that all sentences of the form NP1 – V – NP2 can be changed into NP2 – V – by – NP1. Our statement of the active–passive relationship is less precise than Chomsky's in Syntactic Structures (the rules in TG must be precise and explicit), but it does show us the essential principles.
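The correspondence between the two tables can be pictured as a simple mapping over such symbol triples. The Python sketch below is purely illustrative: the passive verb forms are supplied by hand rather than built up from Aux + be + en, which is exactly the further precision that Chomsky's rule (discussed next) provides.

# Illustrative mapping from active (NP1, V, NP2) triples to passive sentences.
# The passive verb forms are hand-supplied for this sketch, not derived by rule.
actives = [
    ("Cheng", "kicked", "the ball"),
    ("Mary", "has eaten", "the apple"),
    ("Frodo", "will chase", "the Orcs"),
]
passive_verb = {
    "kicked": "was kicked",
    "has eaten": "has been eaten",
    "will chase": "will be chased",
}
for np1, v, np2 in actives:
    subject = np2[0].upper() + np2[1:]     # capitalize the new subject
    print(f"{np1} {v} {np2}  ->  {subject} {passive_verb[v]} by {np1}")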


Chomsky states that sentences of the form NP1 – Aux – V – NP2 can be changed to those of the form NP2 – Aux + be + en – V – by – NP1. NP1, NP2, and V have the same meaning as in our statement. Aux refers to tense and all auxiliary verbs; be + en supplies the passive element (en stands for the past participle). The active sentence is basic; the passive sentence is a "transform" of it. In other words, the passive is derived from the active. Many languages contain active and passive sentences, but the changes in the process of transformation may be different.

There are also transformations that change two or more sentences into one; that is, they join sentences together. An example is: "The man who came to dinner is my brother." This sentence is derived from (1) "The man is my brother" and (2) "The man came to dinner." The transformation places the second sentence after "man" in the first sentence (this is known technically as "embedding") and then replaces "the man" in the second sentence (the embedded sentence) by "who". This transformation is possible only when the same noun phrase ("the man") occurs in both kernel sentences.

The usual usage of the term "transformation" in linguistics refers to a rule that takes an input, typically called the Deep Structure (in the Standard Theory) or D-structure (in the Extended Standard Theory or government and binding theory), and changes it in some restricted way to result in a Surface Structure (or S-structure). In TGG, Deep Structures were generated by a set of phrase structure rules. For example, a typical transformation in TG is the operation of subject-auxiliary inversion (SAI). This rule takes as its input a declarative sentence with an auxiliary, "John has eaten all the heirloom tomatoes.", and transforms it into "Has John eaten all the heirloom tomatoes?". In their original formulation (Chomsky 1957), these rules were stated as rules that held over strings of either terminals or constituent symbols or both:

X NP AUX Y  →  X AUX NP Y
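Read purely as a rule over strings of symbols, SAI permutes the NP and AUX terms and leaves the variables X and Y untouched. A toy Python sketch, assuming the input is a flat list of category labels:

# Toy sketch of subject-auxiliary inversion as a string-rewriting rule:
#   X  NP  AUX  Y   ->   X  AUX  NP  Y
# Real analyses operate over richer representations (and later over trees),
# so a flat list of category symbols is an assumption of this illustration.
def subject_aux_inversion(symbols):
    for i in range(len(symbols) - 1):
        if symbols[i] == "NP" and symbols[i + 1] == "AUX":
            return symbols[:i] + ["AUX", "NP"] + symbols[i + 2:]
    return symbols   # the rule does not apply

print(subject_aux_inversion(["NP", "AUX", "V", "NP"]))
# ['AUX', 'NP', 'V', 'NP'], cf. "John has eaten ..." -> "Has John eaten ...?"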

In the 1970s, by the time of the Extended Standard Theory, following the work of Joseph Emonds on structure preservation, transformations came to be viewed as holding over trees. By the end of government and binding theory in the late 1980s, transformations were no longer structure-changing operations at all; instead they added information to already existing trees by copying constituents.

The earliest conceptions of transformations were that they were construction-specific devices. For example, there was a transformation that turned active sentences into passive ones. A different transformation raised embedded subjects into main clause subject position in sentences such as "John seems to have gone"; and yet a third reordered arguments in the dative alternation. With the shift from rules to principles and constraints that was found in the 1970s, these construction-specific transformations morphed into general rules (all the examples just mentioned being instances of NP movement), which eventually changed into the single general rule of move alpha or Move.

Transformations actually come in two types: (i) the post-Deep Structure kind mentioned above, which are string or structure changing, and (ii) Generalized Transformations (GTs). Generalized transformations were originally proposed in the earliest forms of generative grammar (e.g. Chomsky 1957). They take small structures which are either atomic or generated by other rules, and combine them. For example, the generalized transformation of embedding would take the kernel "Dave said X" and the kernel "Dan likes smoking" and combine them into "Dave said Dan likes smoking". GTs are thus structure building rather than structure changing. In the Extended Standard Theory and government and binding theory, GTs were abandoned in favor of recursive phrase structure rules. However, they are still present in tree-adjoining grammar as the Substitution and Adjunction operations, and they have recently re-emerged in mainstream generative grammar in Minimalism as the operations Merge and Move.
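A generalized transformation of the embedding kind can be pictured as slotting one kernel into an open position in another. In the sketch below the placeholder symbol "X" marks that position; the marker and the word-list representation are assumptions of the example, not part of the original formulation.

# Illustrative sketch of a structure-building (generalized) transformation:
# substitute one kernel into the open slot "X" of another kernel.
def embed(matrix_kernel, embedded_kernel, slot="X"):
    result = []
    for item in matrix_kernel:
        if item == slot:
            result.extend(embedded_kernel)
        else:
            result.append(item)
    return result

matrix = ["Dave", "said", "X"]
embedded = ["Dan", "likes", "smoking"]
print(" ".join(embed(matrix, embedded)))   # -> Dave said Dan likes smoking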

G. PHRASE STRUCTURE RULES

Phrase-structure rules are a way to describe a given language's syntax. They are used to break a natural language sentence down into its constituent parts (also known as syntactic categories), namely phrasal categories and lexical categories (parts of speech). Phrasal categories include the noun phrase, verb phrase, and prepositional phrase; lexical categories include noun, verb, adjective, adverb, and many others. Phrase structure rules were commonly used in transformational grammar (TGG), although they were not an invention of TGG; rather, early versions of TGG added to phrase structure rules (the most obvious example being transformations; see the earlier sections for an overview of the development of TGG). A grammar which uses phrase structure rules is called a phrase structure grammar, except in computer science, where it is known as just a grammar, usually context-free. Phrase structure rules are usually of the form A → B C, meaning that the constituent A is separated into the two subconstituents B and C. Some examples are:

S → NP VP
NP → Det N

The first rule reads: an S consists of an NP followed by a VP. This means a sentence consists of a noun phrase followed by a verb phrase. The next one reads: a noun phrase consists of a determiner followed by a noun. The constituent symbols used in such rules include S, Det, NP, VP, AP, and PP. Associated with phrase structure rules is a famous example of a grammatically correct sentence, constructed by Noam Chomsky as an illustration that syntactically but not semantically correct sentences are possible. "Colorless green ideas sleep furiously" can be diagrammed as a phrase structure tree, where S represents a grammatical sentence.

The theory of antisymmetry proposed in the early 1990s by Richard Kayne is an attempt to derive phrase structure from a single axiom. A number of theories of grammar dispense with the notion of phrase structure rules and operate with the notion of schema instead. Here phrase structures are not derived from rules that combine words, but from the specification or instantiation of syntactic schemata or configurations, often expressing some kind of semantic content independently of the specific words that appear in them. This approach is essentially equivalent to a system of phrase structure rules combined with a noncompositional semantic theory, since grammatical formalisms based on rewriting rules are generally equivalent in power to those based on substitution into schemata. So, in this type of approach, instead of being derived from the application of a number of phrase structure rules, the sentence "colorless green ideas sleep furiously" would be generated by filling the words into the slots of a schema having the following structure:

(NP(ADJ N) VP(V) AP(ADV))

which would express the conceptual content:

X DOES Y IN THE MANNER OF Z

Though they are noncompositional, such models are monotonic. This approach is highly developed within Construction Grammar, and has had some influence in Head-Driven Phrase Structure Grammar and Lexical Functional Grammar.

Chomsky's system of transformational grammar, though it was developed on the basis of his work with Harris, differs from Harris's in a number of respects. It is Chomsky's system that has attracted the most attention and has received the most extensive exemplification and further development. As outlined in Syntactic Structures (1957), it comprised three sections, or components: the phrase-structure component, the transformational component, and the morphophonemic component. Each of these components consisted of a set of rules operating upon a certain "input" to yield a certain "output." The notion of phrase structure may be dealt with independently of its incorporation in the larger system. In the following system of rules, S stands for Sentence, NP for Noun Phrase, VP for Verb Phrase, Det for Determiner, Aux for Auxiliary (verb), N for Noun, and V for Verb stem.

(1) S → NP + VP
(2) VP → Verb + NP
(3) NP → Det + N
(4) Verb → Aux + V
(5) Det → the, a
(6) N → man, ball, ...
(7) Aux → will, can, ...
(8) V → hit, take, ...

This is a simple phrase-structure grammar. It generates and thereby defines as grammatical such sentences as "The man will hit the ball," and it assigns to each sentence that it generates a structural description. The kind of structural description assigned by a phrase-structure grammar is, in fact, a constituent structure analysis of the sentence. In these rules, the arrow can be interpreted as an instruction to rewrite (this is to be taken as a technical term) whatever symbol appears to the left of the arrow as the symbol or string of symbols that appears to the right of the arrow. For example, rule (2) rewrites the symbol VP as the string of symbols Verb + NP, and it thereby defines Verb + NP to be a construction of the type VP. Or, alternatively and equivalently, it says that constructions of the type VP may have as their immediate constituents constructions of the type Verb and NP (combined in that order). Rule (2) can be thought of as creating or being associated with the tree structure in Figure 3.

Rules (1)-(8) do not operate in isolation but constitute an integrated system. The symbol S (standing mnemonically for "sentence") is designated as the initial symbol. This information is not given in the rules (1)-(8), but it can be assumed either that it is given in a kind of protocol statement preceding the grammatical rules or that there is a universal convention according to which S is always the initial symbol. It is necessary to begin with a rule that has the initial symbol on the left. Thereafter any rule may be applied in any order until no further rule is applicable; in doing so, a derivation can be constructed of one of the sentences generated by the grammar. If the rules are applied in the following order: (1), (2), (3), (3), (4), (5), (5), (6), (6), (7), (8), then, assuming that "the" is selected on both applications of (5), "man" on one application of (6) and "ball" on the other, "will" on the application of (7), and "hit" on the application of (8), the following derivation of the sentence "The man will hit the ball" will have been constructed:

(i)    S
(ii)   NP + VP                               by rule (1)
(iii)  NP + Verb + NP                        by rule (2)
(iv)   Det + N + Verb + NP                   by rule (3)
(v)    Det + N + Verb + Det + N              by rule (3)
(vi)   Det + N + Aux + V + Det + N           by rule (4)
(vii)  the + N + Aux + V + Det + N           by rule (5)
(viii) the + N + Aux + V + the + N           by rule (5)
(ix)   the + man + Aux + V + the + N         by rule (6)
(x)    the + man + Aux + V + the + ball      by rule (6)
(xi)   the + man + will + V + the + ball     by rule (7)
(xii)  the + man + will + hit + the + ball   by rule (8)
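The derivation can be mimicked mechanically. The Python sketch below encodes rules (1)-(8) as rewrite rules and applies them in the order just described; the leftmost-rewriting control loop and the hand-picked lexical choices are assumptions made for the illustration, since the grammar itself says nothing about the order in which rules are applied.

# Sketch of the phrase-structure grammar (1)-(8) as rewrite rules. The order of
# application and the lexical choices are fixed by hand to reproduce the
# derivation in the text; the grammar itself imposes no such order.
rules = {
    1: ("S",    ["NP", "VP"]),
    2: ("VP",   ["Verb", "NP"]),
    3: ("NP",   ["Det", "N"]),
    4: ("Verb", ["Aux", "V"]),
    5: ("Det",  ["the"]),
    6: ("N",    ["man"]),      # rule (6) also offers "ball", etc.
    7: ("Aux",  ["will"]),
    8: ("V",    ["hit"]),
}

def rewrite(line, rule_no, choice=None):
    # Rewrite the leftmost occurrence of the rule's left-hand symbol.
    lhs, rhs = rules[rule_no]
    rhs = choice if choice is not None else rhs
    i = line.index(lhs)
    return line[:i] + rhs + line[i + 1:]

steps = [(1, None), (2, None), (3, None), (3, None), (4, None), (5, None),
         (5, None), (6, None), (6, ["ball"]), (7, None), (8, None)]

line = ["S"]
print(" + ".join(line))
for rule_no, choice in steps:
    line = rewrite(line, rule_no, choice)
    print(" + ".join(line), "   by rule (%d)" % rule_no)
# Final line printed: the + man + will + hit + the + ball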


Many other derivations of this sentence are possible, depending on the order in which the rules are applied. The important point is that all these different derivations are equivalent in that they can be reduced to the same tree diagram; namely, the one shown in Figure 4. If this is compared with the system of rules, it will be seen that each application of each rule creates or is associated with a portion (or subtree) of the tree. The tree diagram, or phrase marker, may now be considered as a structural description of the sentence "The man will hit the ball." It is a description of the constituent structure, or phrase structure, of the sentence, and it is assigned by the rules that generate the sentence.

It is important to interpret the term generate in a static, rather than a dynamic, sense. The statement that the grammar generates a particular sentence means that the sentence is one of the totality of sentences that the grammar defines to be grammatical or well formed. All the sentences are generated, as it were, simultaneously. The notion of generation must be interpreted as would be a mathematical formula containing variables. For example, in evaluating the formula 2y + 1 for different values of y, one does not say that the formula itself generates these various resultant values (3, when y = 1; 5, when y = 2; etc.) one after another or at different times; one says that the formula generates them all simultaneously or, better still perhaps, timelessly. The situation is similar for a generative grammar. Although one sentence rather than another can be derived on some particular occasion by making one choice rather than another at particular places in the grammar, the grammar must be thought of as generating all sentences statically or timelessly.

It has been noted that, whereas a phrase-structure grammar is one that consists entirely of phrase-structure rules, a transformational grammar (as formalized by Chomsky) includes both phrase-structure and transformational rules (as well as morphophonemic rules). The transformational rules depend upon the prior application of the phrase-structure rules and have the effect of converting, or transforming, one phrase marker into another. What is meant by this statement may be clarified first with reference to a purely abstract and very simple transformational grammar, in which the letters stand for constituents of a sentence (and S stands for "sentence"):

PS rules:
(1) S → A + B
(2) B → C + D
(3) A → a + b
(4) C → c + e + f
(5) D → d + g + h

T rule:
(6) A + C + D → D + A

The first five rules are phrase-structure rules (PS rules); rule (6) is a transformational rule (T rule). The output of rules (1)-(5) is the terminal string a + b + c + e + f + d + g + h, which has associated with it the structural description indicated by the phrase marker shown in Figure 5 (left). Rule (6) applies to this terminal string of the PS rules and the associated phrase marker. It has the effect of deleting C (and the constituents of C) and permuting A and D (together with their constituents). The result is the string of symbols d + g + h + a + b, with the associated phrase marker shown in Figure 5 (right). The phrase marker shown in Figure 5 (left) may be described as underlying, and the phrase marker shown in Figure 5 (right) as derived with respect to rule (6). One of the principal characteristics of a transformational rule is its transformation of an underlying phrase marker into a derived phrase marker in this way. Transformational rules, in contrast with phrase-structure rules, are also formally more heterogeneous and may have more than one symbol on the left-hand side of the arrow. The linguistic importance of these abstract considerations may be explained with reference to the relationship that holds in English between active and passive sentences. Chomsky's rule for relating active and passive sentences (as given in Syntactic Structures) is very similar, at first sight, to Harris's, discussed above. Chomsky's rule is:

NP1 – Aux – V – NP2  →  NP2 – Aux + be + en – V – by + NP1

This rule, called the passive transformation, presupposes and depends upon the prior application of a set of phrase-structure rules. For simplicity, the passive transformation may first be considered in relation to the set of terminal strings generated by the phrase-structure rules (1)-(8) given earlier. The string "the + man + will + hit + the + ball" (with its associated phrase marker, as shown in Figure 4) can be treated not as an actual sentence but as the structure underlying both the active sentence "The man will hit the ball" and the corresponding passive "The ball will be hit by the man." The passive transformation is applicable under the condition that the underlying, or "input," string is analyzable in terms of its phrase structure as NP - Aux - V - NP (the use of subscript numerals to distinguish the two NPs in the formulation of the rule is an informal device for indicating the operation of permutation). In the phrase marker in Figure 4 "the" + "man" are constituents of NP, "will" is a constituent of Aux, "hit" is a constituent of V, and "the" + "ball" are constituents of NP. The whole string is therefore analyzable in the appropriate sense, and the passive transformation converts it into the string "the + ball + will + be + en + hit + by + the + man." A subsequent transformational rule will permute "en + hit" to yield "hit + en," and one of the morphophonemic rules will then convert "hit + en" to "hit" (as "ride + en" will be converted to "ridden," "open + en" to "opened," and so on).

Every transformational rule has the effect of converting an underlying phrase marker into a derived phrase marker. The manner in which the transformational rules assign derived constituent structure to their input strings is one of the major theoretical problems in the formalization of transformational grammar. Here it can be assumed not only that "be + en" is attached to Aux and "by" to NP (as indicated by the plus signs in the rule as it has been formulated above) but also that the rest of the derived structure is as shown in Figure 6. The phrase marker in Figure 6 formalizes the fact, among others, that "the ball" is the subject of the passive sentence "The ball will be hit by the man," whereas "the man" is the subject of the corresponding active "The man will hit the ball" (cf. Figure 4).

Although the example above is a very simple one, and only a single transformational rule has been considered independently of other transformational rules in the same system, the passive transformation must operate, not only upon simple noun phrases like "the man" or "the ball," but upon noun phrases that contain adjectives ("the old man"), modifying phrases ("the man in the corner"), relative clauses ("the man who checked in last night"), and so forth. The incorporation, or embedding, of these other structures within the noun phrase will be brought about by the prior application of other transformational rules. It should also be clear that the phrase-structure rules require extension to allow for the various forms of the verb ("is hitting," "hit," "was hitting," "has hit," "has been hitting," etc.) and for the distinction of singular and plural.

It is important to note that, unlike Harris's, Chomsky's system of transformational grammar does not convert one sentence into another: the transformational rules operate upon the structures underlying sentences and not upon actual sentences. A further point is that even the simplest sentences (i.e., kernel sentences) require the application of at least some transformational rules. Corresponding active and passive sentences, affirmative and negative sentences, declarative and interrogative sentences, and so on are formally related by deriving them from the same underlying terminal string of the phrase-structure component. The difference between kernel sentences and nonkernel sentences in Syntactic Structures (in Chomsky's later system the category of kernel sentences is not given formal recognition at all) resides in the fact that kernel sentences are generated without the application of any optional transformations. Nonkernel sentences require the application of both optional and obligatory transformations, and they differ one from another in that a different selection of optional transformations is made.

H. MODIFICATIONS IN CHOMSKY'S GRAMMAR

Chomsky's system of transformational grammar was substantially modified in 1965. Perhaps the most important modification was the incorporation, within the system, of a semantic component, in addition to the syntactic component and phonological component. (The phonological component may be thought of as replacing the morphophonemic component of Syntactic Structures.) The rules of the syntactic component generate the sentences of the language and assign to each not one but two structural analyses: a deep structure analysis, as represented by the underlying phrase marker, and a surface structure analysis, as represented by the final derived phrase marker. The underlying phrase marker is assigned by rules of the base (roughly equivalent to the PS [phrase-structure] rules of the earlier system); the derived phrase marker is assigned by the transformational rules. The interrelationship of the four sets of rules is shown diagrammatically in Figure 7. The meaning of the sentence is derived (mainly, if not wholly) from the deep structure by means of the rules of semantic interpretation; the phonetic realization of the sentence is derived from its surface structure by means of the rules of the phonological component. The grammar ("grammar" is now to be understood as covering semantics and phonology, as well as syntax) is thus an integrated system of rules for relating the pronunciation of a sentence to its meaning.

The syntax, and more particularly the base, is at the "heart" of the system, as it were: it is the base component (as the arrows in the diagram indicate) that generates the infinite class of structures underlying the well-formed sentences of a language. These structures are then given a semantic and phonetic "interpretation" by the other components. The base consists of two parts: a set of categorial rules and a lexicon. Taken together, they fulfill a similar function to that fulfilled by the phrase-structure rules of the earlier system. But there are many differences of detail. Among the most important is that the lexicon (which may be thought of as a dictionary of the language cast in a particular form) lists, in principle, all the vocabulary words in the language and associates with each all the syntactic, semantic, and phonological information required for the correct operation of the rules. This information is represented in terms of what are called features. For example, the entry for "boy" might say that it has the syntactic features [+ Noun], [+ Count], [+ Common], [+ Animate], and [+ Human]. The categorial rules generate a set of phrase markers that have in them, as it were, a number of "slots" to be filled with items from the lexicon. With each such "slot" there is associated a set of features that define the kind of item that can fill the "slot." If a phrase marker is generated with a "slot" for the head of a noun phrase specified as requiring an animate noun (i.e., a noun having the feature [+ Animate]), the item "boy" would be recognized as being compatible with this specification and could be inserted in the "slot" by the rule of lexical substitution. Similarly, it could be inserted in "slots" specified as requiring a common noun, a human noun, or a countable noun, but it would be excluded from positions that require an abstract noun (e.g., "sincerity") or an uncountable noun (e.g., "water"). By drawing upon the syntactic information coded in feature notation in the lexicon, the categorial rules might permit such sentences as "The boy died," while excluding (and thereby defining as ungrammatical) such nonsentences as "The boy elapsed."
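A minimal sketch of feature-driven lexical insertion follows. The mini-lexicon and the selectional requirement placed on each verb's subject are invented for this illustration (in particular, treating "elapse" as requiring an abstract subject is an assumption of the example, not a claim from the text).

# Minimal sketch of lexical insertion driven by syntactic features. The
# mini-lexicon and the selectional requirements are invented for illustration.
lexicon = {
    "boy":       {"+Noun", "+Count", "+Common", "+Animate", "+Human"},
    "sincerity": {"+Noun", "+Abstract"},
    "water":     {"+Noun", "+Common"},          # uncountable, so no "+Count"
}

subject_requirements = {
    "died":    {"+Animate"},    # the subject slot demands an animate noun
    "elapsed": {"+Abstract"},   # illustrative assumption: demands an abstract noun
}

def can_insert(noun, verb):
    return subject_requirements[verb] <= lexicon[noun]

for noun, verb in [("boy", "died"), ("boy", "elapsed")]:
    verdict = "permitted" if can_insert(noun, verb) else "excluded as ungrammatical"
    print(f'"The {noun} {verb}." ->', verdict)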

One of the most controversial topics in the development of transformational grammar is the relationship between syntax and semantics. Scholars working in the field are now agreed that there is a considerable degree of interdependence between the two, and the problem is how to formalize this interdependence. One school of linguists, called generative semanticists, accept the general principles of transformational grammar but have challenged Chomsky's conception of deep structure as a separate and identifiable level of syntactic representation. In their opinion, the basic component of the grammar should consist of a set of rules for the generation of well-formed semantic representations. These would then be converted by a succession of transformational rules into strings of words with an assigned surface-structure syntactic analysis, there being no place in the passage from semantic representation to surface structure identifiable as Chomsky's deep structure. Chomsky himself has denied that there is any real difference between the two points of view and has maintained that the issue is purely one of notation. That this argument can be put forward by one party to the controversy and rejected by the other is perhaps a sufficient indication of the uncertainty of the evidence. Of greater importance than the overt issues, in so far as they are clear, is the fact that linguists are now studying much more intensively than they have in the past the complexities of the interdependence of syntax, on the one hand, and semantics and logic, on the other. Whether it will prove possible to handle all these complexities within a comprehensive generative grammar remains to be seen.

The role of the phonological component of a generative grammar of the type outlined by Chomsky is to assign a phonetic "interpretation" to the strings of words generated by the syntactic component. These strings of words are represented in a phonological notation (taken from the lexicon) and have been provided with a surface-structure analysis by the transformational rules (see Figure 7). The phonological elements out of which the word forms are composed are segments consisting of what are referred to technically as distinctive features (following the usage of the Prague school). For example, the word form "man," represented phonologically, is composed of three segments: the first consists of the features [+ consonantal], [+ bilabial], [+ nasal], etc.; the second of the features [+ vocalic], [+ front], [+ open], etc.; and the third of the features [+ consonantal], [+ alveolar], [+ nasal], etc. (These features should be taken as purely illustrative; there is some doubt about the definitive list of distinctive features.) Although these segments may be referred to as the "phonemes" /m/, /a/, and /n/, they should not be identified theoretically with the phonemes of structural linguistics. They are closer to what many American structural linguists called "morphophonemes" or the Prague school linguists labelled "archiphonemes," being unspecified for any feature that is contextually redundant or predictable. For instance, the first segment of the phonological representation of "man" will not include the feature [+ voice]; because nasal consonants are always phonetically voiced in this position in English, the feature [+ voice] can be added to the phonetic specification by a rule of the phonological component.

One further important aspect of generative phonology (i.e., phonology carried out within the framework of an integrated generative grammar) should be mentioned: its dependence upon syntax. Most American structural phonologists made it a point of principle that the phonemic analysis of an utterance should be carried out without regard to its grammatical structure. This principle was controversial among American linguists and was not generally accepted outside America. Not only has the principle been rejected by the generative grammarians, but they have made the phonological description of a language much more dependent upon its syntactic analysis than has any other school of linguists. They have claimed, for example, that the phonological rules that assign different degrees of stress to the vowels in English words and phrases, and alter the quality of the relatively unstressed vowel concomitantly, must make reference to the derived constituent structure of sentences and not merely to the form class of the individual words or the places in which the word boundaries occur.
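The underspecified feature bundles described for "man," and the redundancy rule that adds [+ voice] to nasal consonants, can be pictured as follows. The feature inventory is purely illustrative, as the text itself cautions.

# Toy sketch: underspecified segments for "man" and a redundancy rule that
# fills in the predictable feature [+voice] on nasal consonants.
man = [
    {"+consonantal", "+bilabial", "+nasal"},   # /m/
    {"+vocalic", "+front", "+open"},           # /a/
    {"+consonantal", "+alveolar", "+nasal"},   # /n/
]

def add_redundant_voice(segments):
    # Nasals are always phonetically voiced in this position in English, so
    # [+voice] is not stored in the lexicon; a phonological rule supplies it.
    return [seg | {"+voice"} if "+nasal" in seg else seg for seg in segments]

for segment in add_redundant_voice(man):
    print(sorted(segment))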


References

1. Boey, Lim Kiat. 1975. An Introduction to Linguistics for the Language Teacher. Singapore: Singapore University Press.
2. Yule, George. The Study of Language: An Introduction. Sydney: Cambridge University Press.
3. Jacobs, Roderick A., et al. English Transformational Grammar. New York: John Wiley and Sons.
4. Wikipedia, the free encyclopedia (online).

