Originally posted by dengjun on 2008-9-5 15:09
This passage was found through a web search. I have already discussed the views in it over the telephone with several professors. Any forum member who wants to get a clear grasp of the concepts of phonetics and phonemics would do well to study it carefully.
Phonetics and phonemics:
Phonetics, on the basis of the articulation of the vocal organs (articulatory phonetics) or of the sound (acoustic ...

Sorry, but this definition is not concrete. I understand the definition completely, and the spirit of what it says is correct; the trouble is that, because it was written by Taiwanese authors, two very important English terms in it were mistranslated. Please think about this carefully: how could "PHONEMICS" ever be split off from "PHONOLOGY"? "PHONEMICS" is an inseparable part of "PHONOLOGY"!
Now please refute my specific arguments concretely.
Phonemic distinctions or allophones
If two similar sounds do not belong to separate phonemes, they are called allophones of the same underlying phoneme. For instance, the voiceless stops (/p/, /t/, /k/) can be aspirated. In English, a voiceless stop at the beginning of a stressed syllable (but not after /s/) is aspirated, whereas after /s/ it is not. This can be felt by holding the fingers right in front of the lips and noticing the difference in breathiness when saying 'pin' versus 'spin'. Since no two English words are distinguished solely by an aspirated versus an unaspirated p, the aspirated [pʰ] (the [ʰ] means aspirated) and the unaspirated [p] are allophones of the same phoneme /p/ in English.
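To make the rule just described concrete, here is a minimal Python sketch of this allophony. It is my own illustration, not part of the quoted article, and it simplifies heavily: the input is assumed to be a single stressed syllable, and phonemes are plain strings.

VOICELESS_STOPS = {"p", "t", "k"}

def realize(syllable):
    """Surface allophones for one stressed syllable (a list of phonemes)."""
    surface = []
    for i, seg in enumerate(syllable):
        # the onset stop is either syllable-initial or right after /s/
        in_onset = i == 0 or (i == 1 and syllable[0] == "s")
        after_s = i > 0 and syllable[i - 1] == "s"
        if seg in VOICELESS_STOPS and in_onset and not after_s:
            surface.append(seg + "ʰ")  # aspirated allophone, e.g. [pʰ]
        else:
            surface.append(seg)        # unaspirated allophone
    return surface

print(realize(["p", "i", "n"]))       # ['pʰ', 'i', 'n']    ~ 'pin'
print(realize(["s", "p", "i", "n"]))  # ['s', 'p', 'i', 'n'] ~ 'spin'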
The /t/ sounds in the words 'tub', 'stub', 'but', 'butter', and 'button' are all pronounced differently in American English, yet they are all intuitively felt to be "the same sound"; they therefore constitute another example of allophones of the same phoneme in English. However, such an intuition could also be interpreted as a function of post-lexical recognition of the sounds: that is, all of them are identified as examples of English /t/ once the word itself has been recognized.
In English, for example, /p/ and /b/ are distinctive units of sound (i.e., they are phonemes; the difference between them is phonemic, or phonematic). This can be seen from minimal pairs such as "pin" and "bin", which mean different things but differ in only one sound. On the other hand, /p/ is often pronounced differently depending on its position relative to other sounds: the /p/ in "pin" is aspirated, while the same phoneme in "spin" is not. Yet linguists, invoking the intuitions of native speakers, still consider these different pronunciations to be the same "sound".
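The minimal-pair test itself is mechanical enough to state in code. Below is a short hypothetical Python sketch (the names are mine): two transcriptions of equal length that differ in exactly one segment, while meaning different things, show that the differing segments are separate phonemes.

def is_minimal_pair(a, b):
    """True if two phoneme sequences differ in exactly one segment."""
    if len(a) != len(b):
        return False
    return sum(x != y for x, y in zip(a, b)) == 1

pin = ["p", "i", "n"]
bin_word = ["b", "i", "n"]
spin = ["s", "p", "i", "n"]

print(is_minimal_pair(pin, bin_word))  # True: /p/ vs /b/ is phonemic
print(is_minimal_pair(pin, spin))      # False: lengths differ, not a minimal pair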
The findings and insights of research on speech perception and articulation complicate this idea of interchangeable allophones being perceived as the same phoneme, however attractive it might be for linguists who wish to rely on the intuitions of native speakers. First, interchanged allophones of the same phoneme can result in unrecognizable words. Second, actual speech, even at the word level, is highly co-articulated, so it is problematic to think that words can be spliced into simple segments without affecting speech perception. In other words, interchanging allophones is a nice idea for intuitive linguistics, but the idea cannot survive what co-articulation actually does to spoken sounds. Indeed, human speech perception is as robust and versatile as it is (working under all kinds of conditions) partly because it can cope with such co-articulation.
There are different methods for determining how allophones should fall categorically under a specified phoneme, and, counter-intuitively, the principle of phonetic similarity is not always among them. This tends to make the phoneme seem abstracted away from the phonetic realities of speech. It should be remembered that the grouping of allophones under phonemes for the purpose of linguistic analysis does not necessarily mean that the human brain processes a language the same way. On the other hand, it could be pointed out that some sort of analytic notion of a language beneath the word level is usual wherever the language is written alphabetically, so one could also speak of a phonology of reading and writing.
Development of the field
In ancient India, the Sanskrit grammarian Pāṇini (c. 520–460 BC), in his text on Sanskrit phonology, the Shiva Sutras, discusses something like the concepts of the phoneme, the morpheme and the root. The Shiva Sutras describe a phonemic notational system in the fourteen initial lines of the Aṣṭādhyāyī. The notational system introduces different clusters of phonemes that serve special roles in the morphology of Sanskrit and are referred to throughout the text. Pāṇini's grammar of Sanskrit had a significant influence on Ferdinand de Saussure, the father of modern structuralism, who was a professor of Sanskrit.
The Polish scholar Jan Baudouin de Courtenay (together with his former student Mikołaj Kruszewski) coined the word phoneme in 1876, and his work, though often unacknowledged, is considered the starting point of modern phonology. He worked not only on the theory of the phoneme but also on phonetic alternations (i.e., what is now called allophony and morphophonology). His influence on Ferdinand de Saussure was also significant.
Prince Nikolai Trubetzkoy's posthumously published work, the Principles of Phonology (1939), is considered the foundation of the Prague School of phonology. Directly influenced by Baudouin de Courtenay, Trubetzkoy is considered the founder of morphophonology, though morphophonology was first recognized by Baudouin de Courtenay. Trubetzkoy split phonology into phonemics and archiphonemics; the former has had more influence than the latter. Another important figure in the Prague School was Roman Jakobson, who was one of the most prominent linguists of the twentieth century.
In 1968 Noam Chomsky and Morris Halle published The Sound Pattern of English (SPE), the basis for Generative Phonology. In this view, phonological representations are sequences of segments made up of distinctive features. These features were an expansion of earlier work by Roman Jakobson, Gunnar Fant, and Morris Halle. The features describe aspects of articulation and perception, are drawn from a universally fixed set, and have the binary values + or -. There are at least two levels of representation: the underlying representation and the surface phonetic representation. Ordered phonological rules govern how the underlying representation is transformed into the actual pronunciation (the so-called surface form). An important consequence of the influence SPE had on phonological theory was the downplaying of the syllable and the emphasis on segments. Furthermore, the Generativists folded morphophonology into phonology, which both solved and created problems.
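As an illustration of the SPE architecture (not of any actual SPE rule set), here is a toy Python sketch in which an ordered list of rewrite rules maps an underlying form onto a surface form. Real SPE rules operate on matrices of distinctive features, not on letters, so the regular expressions below are stand-ins.

import re

# Two toy rules, applied in order: (1) aspirate a word-initial voiceless
# stop; (2) flap /t/ between vowels, roughly as in American English
# 'butter'. In SPE, the ordering of rules is itself part of the grammar.
RULES = [
    (r"^([ptk])", r"\1ʰ"),
    (r"(?<=[aeiou])t(?=[aeiou])", "ɾ"),
]

def derive(underlying):
    """Transform an underlying representation into its surface form."""
    form = underlying
    for pattern, replacement in RULES:  # ordered application
        form = re.sub(pattern, replacement, form)
    return form

print(derive("pin"))   # pʰin
print(derive("bata"))  # baɾa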
Natural Phonology was a theory based on the publications of its proponent David Stampe in 1969 and (more explicitly) in 1979. In this view, phonology is based on a set of universal phonological processes which interact with one another; which ones are active and which are suppressed is language-specific. Rather than acting on segments, phonological processes act on distinctive features within prosodic groups. Prosodic groups can be as small as a part of a syllable or as large as an entire utterance. Phonological processes are unordered with respect to each other and apply simultaneously (though the output of one process may be the input to another); a sketch of this follows below. The second most prominent Natural Phonologist is Stampe's wife, Patricia Donegan; there are many Natural Phonologists in Europe, and a few in the U.S., such as Geoffrey Pullum. The principles of Natural Phonology were extended to morphology by Wolfgang U. Dressler, who founded Natural Morphology.
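To show what "unordered processes whose outputs feed one another" could mean in practice, here is a rough Python sketch of my own, not Stampe's formalism: each process applies whenever its conditions are met, and application repeats until no process can change the form any further (a fixed point).

import re

# Two toy processes; their listing order is irrelevant, because the
# loop keeps re-running them until nothing changes.
PROCESSES = {
    "final devoicing": (r"([bdg])$",
                        lambda m: {"b": "p", "d": "t", "g": "k"}[m.group(1)]),
    "degemination": (r"([ptk])\1", r"\1"),
}

def apply_until_stable(form):
    """Apply every process repeatedly until a fixed point is reached."""
    changed = True
    while changed:
        changed = False
        for pattern, repl in PROCESSES.values():
            new_form = re.sub(pattern, repl, form)
            if new_form != form:
                form, changed = new_form, True
    return form

# Devoicing turns 'takg' into 'takk'; that output then feeds
# degemination, giving 'tak'.
print(apply_until_stable("takg"))  # tak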
In 1976 John Goldsmith introduced autosegmental phonology. Phonological phenomena are no longer seen as operating on one linear sequence of segments (phonemes or feature combinations), but rather as involving parallel sequences of features that reside on multiple tiers. Autosegmental phonology later evolved into Feature Geometry, which became the standard theory of representation for theories of phonological organization as different as Lexical Phonology and Optimality Theory.
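A hypothetical data-structure sketch in Python may make the tier idea clearer (the layout is my own illustration, not a standard notation): features such as tone live on their own tier, and association lines, here plain index pairs, link one autosegment to one or more segments.

# Parallel tiers with association lines. A single High tone is linked
# to both vowels of a two-syllable form, something a purely linear
# string of segments cannot represent.
segmental_tier = ["b", "a", "l", "a"]
tonal_tier = ["H"]

# association lines as (tone index, segment index) pairs
associations = [(0, 1), (0, 3)]

for tone_i, seg_i in associations:
    print(f"tone {tonal_tier[tone_i]} is linked to segment "
          f"{segmental_tier[seg_i]!r} at position {seg_i}")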
Government Phonology, which originated in the early 1980s as an attempt to unify theoretical notions of syntactic and phonological structures, is based on the notion that all languages necessarily follow a small set of principles and vary according to their selection of certain binary parameters. That is, all languages' phonological structures are essentially the same, but there is restricted variation that accounts for differences in surface realizations. Principles are held to be inviolable, though parameters may sometimes come into conflict. Prominent figures include Jonathan Kaye, Jean Lowenstamm, Jean-Roger Vergnaud, Monik Charette, John Harris, and many others.
In a course at the LSA summer institute in 1991, Alan Prince and Paul Smolensky developed Optimality Theory, an overall architecture for phonology according to which languages choose the pronunciation of a word that best satisfies a list of constraints ordered by importance: a lower-ranked constraint can be violated when the violation is necessary to obey a higher-ranked constraint. The approach was soon extended to morphology by John McCarthy and Alan Prince, and has become the dominant trend in phonology. Though this usually goes unacknowledged, Optimality Theory was strongly influenced by Natural Phonology; both view phonology in terms of constraints on speakers and their production, though these constraints are formalized in very different ways.
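The evaluation step just described is concrete enough to sketch in code. The toy Python below (the constraints and all names are mine, for illustration only) scores each candidate against constraints listed from highest- to lowest-ranked and picks the candidate whose violation profile is lexicographically smallest; this is how a lower-ranked violation is tolerated in the service of a higher-ranked constraint.

def no_coda(candidate):
    """*CODA (toy version): one violation if the word ends in a consonant."""
    return 0 if candidate[-1] in "aeiou" else 1

def max_io(candidate, underlying):
    """MAX-IO (toy version): one violation per deleted underlying segment."""
    return max(len(underlying) - len(candidate), 0)

def evaluate(underlying, candidates):
    """Pick the candidate with the best violation profile under the ranking."""
    ranking = [no_coda, lambda c: max_io(c, underlying)]  # highest rank first
    def profile(candidate):
        return tuple(constraint(candidate) for constraint in ranking)
    # Tuples compare lexicographically, mirroring strict constraint ranking.
    return min(candidates, key=profile)

# With *CODA ranked above MAX-IO, deleting the final /t/ of /pat/ wins:
# 'pa' violates MAX-IO once, but 'pat' violates the higher-ranked *CODA.
print(evaluate("pat", ["pat", "pa"]))  # pa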
Broadly speaking, Government Phonology (or its descendant, strict-CV phonology) has a greater following in the United Kingdom, whereas Optimality Theory is predominant in North America.
http://en.wikipedia.org/wiki/Phonetics Phonetics
http://en.wikipedia.org/wiki/Phonemics Phonemics
http://en.wikipedia.org/wiki/Phoneme Phoneme
[ This post was last edited by penkyamp on 2008-9-5 15:58 ]