22. Phonetics and Historical Phonology

JOHN J. OHALA

Subject: Linguistics » Historical Linguistics

Key-Topics: phonetics

DOI: 10.1111/b.9781405127479.2004.00024.x

Two of the most successful enterprises in linguistics over the past couple of centuries have been (i) in
historical linguistics, reconstruction of the prehistory of languages via the comparative method, and (ii) in
phonetics, the development of methods and theories for understanding the workings of speech, that is, how
it is produced, its acoustic structure, and how it is perceived. My purpose in this chapter is to demonstrate
that the comparative method can be refined and elaborated still more if it is integrated with modern
scientific phonetics. By incorporating phonetics it is possible to implement a research program that
genuinely constitutes “experimental historical phonology” (Ohala 1974).

1 Background

1.1 Taxonomic versus scientific phonetics

In speaking of the integration of phonetics into historical phonology it must be understood that "phonetics" refers to what I call "scientific phonetics,"¹ not "taxonomic phonetics." The latter is the traditional, almost exclusively articulatory phonetics which provides linguistics with the terminology and conceptual framework
for describing speech sounds and their natural classes. This descriptive system reached a high level of
refinement in the late nineteenth century through the efforts of phoneticians such as Alexander Melville Bell,
Otto Jespersen, Paul Passy, Henry Sweet, and Wilhelm Viëtor, and its basic structure has not changed very
much since. Scientific phonetics, on the other hand, has a very long tradition, dating at least from the time
of Galen, the second-century AD anatomist, with important contributions to the present time from other
anatomists, as well as physiologists, physicists, voice teachers, engineers, linguists, and others. It constantly
accumulates new data, methods, and theories on how speech works. Moreover, it tests these theories and
continually refines the evidence adduced in support of them. When the evidence fails to support proposed
theories, it abandons them, as is true of any mature discipline. It is scientific phonetics, not taxonomic
phonetics, that needs to be better joined with historical phonology.

There have, in fact, been many prior attempts to bring about this union. Prior to instrumental studies of
speech there were some conceptions of the workings of speech which were based on impressionistic
auditory or kinesthetic sensations and on direct visual inspection. Even at this stage of development in the
nineteenth century there were some applications of phonetics to historical phonology (Bindseil 1838; Rapp
1836; von Raumer 1863; Weymouth 1856). Some of the early attempts to synthesize speech (von Kempelen
1791; Willis 1830) inspired a few works attempting to explain sound change by reference to physical
properties of speech sounds, as they were understood at the time (Jacobi 1843; Key 1855).

Instrumental study of speech on live, intact speakers blossomed in the 1860s and 1870s.² It is noteworthy that one of the motivations for such research was, at its onset, the attempt to understand the mechanisms of
sound change. In 1876 Rosapelly declared optimistically (I translate) that:


From the point of view of linguists, these (physiological) studies seem to be of great
importance, since their science, whose precision grows from day to day, tends to take
experimental study as its point of departure. The comparative study of different languages and
the study of the successive transformations undergone by each of them in the course of its
development have, in fact, permitted the secure formulation of certain laws that one can call
physiological and which have presided over the evolution of language.

Within a couple of decades this program produced, among other works, Rousselot's 1891 dissertation, which
was an attempt to present the physiological basis of some of the sound changes that transformed late Latin
into the regional dialect spoken in his home town.

There is still much of value to be gleaned from such early instrumental phonetic studies. One example is E.
A. Meyer's (1896–7) early discovery of the perturbations of F0 on vowels following voiced and voiceless
consonants - one of the topics that still preoccupies phoneticians both for its value to an understanding of
speech production (Löfqvist et al. 1989) and for its relevance to the phonological development of distinctive
tone from the influence of consonants (Hombert et al. 1979).

In instrumental phonetics, the discovery of the magnitude and range of lawful variation in speech must rank
as one of the major findings of linguistic science, although its full significance for an understanding of
sound change seems not yet to be fully appreciated. Having said this, it must be admitted that much of this
early work in laboratory phonetics had obvious limitations: due to technological constraints it focused almost
exclusively on the articulatory aspect of speech and neglected the acoustic and perceptual aspects. As many
modern studies have shown and as will be emphasized in this chapter, a proper understanding of sound
change requires reference to these other domains. Perhaps the one aspect of early phonetically informed
studies of sound change from which we may still draw inspiration is the expressed belief that sound change
and phonological universals may profitably be studied in the laboratory.

1.2 Constraints of the discussion

The following discussion of the mechanisms of sound change will be constrained in two ways. First, I will for
the most part be concerned only with those sound changes that are independently manifested in similar
form in different languages. The practical effect of this is to filter out changes due to language-specific or
culture-specific factors, for example, the influence of writing, regularization of morphological paradigms,
borrowing, etc. What remains is the vast majority of sound changes that have occupied phonologists’
attention over the past two centuries and which one can assume are caused by the only factors that are
common to all languages at all periods of time: the physical phonetic properties of the speech production
and perception systems. Second, I will focus primarily on the initiation of sound change, that is, the factors
that lead to variant pronunciation norms in the first place, not the subsequent spread or transmission of a
novel norm through the speech community or through the lexicon. The factors influencing the spread of a
sound change are social and psychological and may very well involve language- and culture-specific factors.
(However, see Ohala 1995c for speculations on phonetic factors influencing some aspects of the spread of a
sound change.)

2 The Phonetic Basis of Sound Change

2.1 Sound change and synchronic phonetic variation

Detailed phonetic studies present us with two fundamental facts that force us to try to understand sound
change by looking carefully at the phonetics of speech production and speech perception. The first of these
is that there is a huge amount of variation in the way the “same” phonological unit is pronounced, whether
this unit is the phone, syllable, or word. The relatively short list of allophones given in conventional
phonemic descriptions of languages is just the "tip of the iceberg."³ Fine-grained instrumental analyses of
speech, especially recent acoustic studies, reveal that the variation is essentially infinite, though generally
showing lawful dependency with respect to the phonetic environment, speech-style, or characteristics of the
individual speaker (Lindblom 1963; Moon and Lindblom 1994; Sproat and Fujimura 1993; Sussman et al.
1991). Most of this variation is difficult to notice perceptually except through the use of controlled listening
tests (e.g., Ohala and Feder 1994). Even after a great deal of ingenious quantitative analysis (Bladon et al.
1984; Miller 1989; Peterson 1951; Syrdal and Gopal 1986) there does not yet exist a universally applicable

way to normalize this variation in vowels, that is, to extract the linguistically relevant “sames” posited for the
speech signal. Until such normalizations are understood, the validity of most posited phonological units
remains in doubt. It is this situation that I was referring to when I stated above that the wealth of phonetic
variation discovered by instrumental phonetics confronts linguistics with a problem that it has yet to deal
with.

The second fundamental fact that motivates us to look at phonetics for an understanding of sound change is
that a great deal of phonetic variation parallels sound change, that is, synchronic variation, including that
which we find in present-day speech, resembles diachronic variation. The synchronic variation can be found
both in speech production and in speech perception.

2.2 Variation in speech production

For example, the fundamental frequency of vowels is perturbed by the voicing of preceding consonants:
higher initial F0 being found after voiceless consonants and lower initial F0 after voiced ones (Meyer 1896–
7). This parallels the conditioning of new tones in a number of languages (Edkins 1864; Maspero 1912;
Hombert et al. 1979). Svantesson (1983) provides examples of this from two related dialects of Kammu, one
of which has preserved the voicing of initial stops and the other of which has lost the voicing but has
acquired a tonal distinction; see (1):

(1)

Southern Kammu   Northern Kammu   Translation
klaaŋ            kláaŋ            ‘eagle’
glaaŋ            klàaŋ            ‘stone’
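
To make the production side of this parallel concrete, the following sketch (in Python) shows the kind of comparison that underlies Meyer's observation; the onset-F0 values are invented for illustration, not measurements from Kammu or any other corpus.

# Toy sketch (not measured data): comparing vowel-onset F0 after voiced vs.
# voiceless initial stops, the kind of perturbation Meyer (1896-7) observed
# and that can be phonologized as tone. All numbers below are invented.

from statistics import mean

# Hypothetical onset-F0 measurements (Hz), grouped by the voicing of the
# preceding stop.
onset_f0 = {
    "after_voiceless": [128, 131, 127, 133, 130],
    "after_voiced":    [112, 115, 110, 117, 113],
}

for context, values in onset_f0.items():
    print(f"{context}: mean onset F0 = {mean(values):.1f} Hz")

# A listener who stops attributing this F0 difference to the consonant and
# instead treats it as a property of the vowel has, in effect, created a
# nascent tonal contrast (cf. the Kammu comparison in (1)).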

A second example is the fact that the intensity and the duration of the noise element in the release burst of
[t] are greater preceding the high close vowel [i] or the glide [j] than they are before other vowels (Olive et al.
1993: 286; Ohala 1989). This finds a parallel in the phonological histories of numerous languages, for
example, Tai (Li 1977) and Bantu (Guthrie 1967–71) as well as English, where stops develop affricated
releases before high, close vowels, as exemplified in (2):

(2)


Table 22.1 Data from Ikalanga showing that distinctive aspiration has developed on stops that appeared before the Proto-Bantu super-close vowels but not before the next lower vowels

A third example is the finding that voice onset following a voiceless stop release is longer before high, close
vowels than before low, open vowels (Halle and Smith 1952; Klatt 1975; Ohala 1981b). A diachronic parallel
to this is the development of distinctive aspiration on voiceless stops in Ikalanga (Mathangwane 1996). In
Ikalanga (and many other Bantu languages) distinctive aspiration on certain stops arose out of the height
neutralization of the quality of the two highest front and back vowels, as shown in table 22.1.

2.3 Variation in speech perception

Listeners occasionally make errors in perceiving speech. This is especially true when there is minimal higher-
level redundancy from pragmatics, semantics, syntax, and the lexicon. Such a situation is easily duplicated in
laboratory-based confusion studies where isolated nonsense syllables are presented to listeners for
identification. The results from one condition of one published study by Winitz et al. (1972) are given in table 22.2.
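
How such a table is obtained is straightforward and can be sketched in a few lines of Python; the response counts below are invented placeholders, not Winitz et al.'s data.

# Minimal sketch of how a confusion study is tabulated: raw identification
# responses are counted and converted to per-stimulus probabilities, as in
# table 22.2. The counts here are invented for illustration only.

from collections import Counter, defaultdict

# (stimulus, response) pairs from a hypothetical listening test.
responses = [
    ("pi", "p"), ("pi", "t"), ("pi", "t"), ("pi", "p"),
    ("ki", "k"), ("ki", "t"), ("ki", "t"), ("ki", "k"),
    ("pa", "p"), ("pa", "p"), ("pa", "p"), ("pa", "k"),
]

counts = defaultdict(Counter)
for stimulus, response in responses:
    counts[stimulus][response] += 1

for stimulus, resp_counts in counts.items():
    total = sum(resp_counts.values())
    probs = {r: n / total for r, n in sorted(resp_counts.items())}
    print(stimulus, probs)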

The confusions shown in table 22.2 parallel some common, well-documented sound changes, as given in table 22.3. The parallels include the pairs of sounds involved, the phonetic environment (especially, whether the stops are found in palatal or labial environments - where the palatalization or labialization is provided by secondary articulations or by adjacent vowels or glides), and, in some cases, even the asymmetry in the direction of the change (/pʲ/ > /t/ is attested but not */t/ > /pʲ/).⁴ Many other examples could be given (see Ohala 1981a, 1993a, 1995b).

To recapitulate:

(i) much variation can be found in speech production and speech perception;

(ii) much of this variation parallels sound change.


Table 22.2 Probabilities of identification of initial consonants as /p/, /t/, /k/ (in the columns) for the stimuli (in the rows)

Table 22.3 Examples of sound changes involving large changes in place of articulation

But these two facts immediately raise the question: could this synchronic variation actually be sound change
observed “on the hoof”? Logically this would be difficult to accept, because if this were the case then we
would find sound change progressing at a rate very much faster than we do - in fact, several orders of
magnitude faster than present evidence suggests. All of the sound changes that transformed Proto-Indo-
European over five or six millennia into the present-day Indo-European languages would be accomplished in
a day or less. Somehow pronunciation remains relatively stable over time in spite of the great variation seen
in everyday speech. But if present-day variation is not sound change, then how do we account for the
uncanny similarities between them?

2.4 Variation in speech production = sound change?

The beginnings of a resolution of this paradox come from experimental phonetics, specifically from studies
of speech perception. Several studies have shown that listeners’ judgments about what it is that they hear in
the speech signal are influenced by the context in which the sounds occur. Pickett and Decker (1960)
showed that listeners’ differentiation between topic and top pick is influenced by the rate at which the
sentence containing these utterances is spoken. Ladefoged and Broadbent (1957) showed that listeners
would identify the same vowel stimulus as /ɪ/ or /ε/ depending on the F1 values of vowels in a precursor
sentence.


Two studies are particularly relevant to an understanding of sound change. Mann and Repp (1981) showed
that listeners divide an /ʃ/ to /s/ continuum differently depending on the quality of the following vowel. Some stimuli that would be regarded as an /ʃ/ before the vowel /a/ are identified as /s/ before the vowel /u/. Presumably listeners are aware that a rounded vowel such as /u/ lowers the frequency of /s/ toward that of /ʃ/ and thus perceptually compensate or normalize for that effect. Similarly, Beddor et al. (1986)
looked at how listeners divided an /ε/ to /æ/ continuum under three conditions: when the vowels were oral
in an oral consonant context (εd-æd), when the vowels were nasalized in an oral consonant context (ε̃d-æ̃d), and when the vowels were nasalized and followed by a nasal consonant (ε̃n-æ̃n). They found that, in comparison to the /εd/ condition, listeners heard more /æ/ vowels in the continuum in the /ε̃d/ condition. This is to be expected given the kind of distortion of vowel quality created by nasalization. But most important, in the /ε̃n/ condition, the responses were in most cases virtually identical to those in the /εd/
condition. The implication is that when listeners have a contextual nasal consonant to “blame” for the
distortions on the vowel, they are able to factor out those distortions and normalize back to the speaker's
presumed intended vowel quality.
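
The logic of such context-driven compensation can be rendered schematically in code; the sketch below is a toy model in the spirit of Mann and Repp (1981), and every threshold and stimulus value in it is invented rather than taken from their experiment.

# Toy model of perceptual compensation: the /s/-/ʃ/ decision criterion (here,
# a spectral-centroid threshold in Hz) is shifted downward before rounded /u/,
# because listeners expect lip rounding to lower fricative frequencies.
# All thresholds and values are invented.

def classify_fricative(centroid_hz: float, following_vowel: str) -> str:
    boundary = 4500.0                # hypothetical neutral /s/-/ʃ/ boundary
    if following_vowel == "u":
        boundary -= 700.0            # compensate for expected rounding effect
    return "s" if centroid_hz >= boundary else "ʃ"

ambiguous = 4100.0                   # a stimulus near the category boundary
print(classify_fricative(ambiguous, "a"))   # -> ʃ (no compensation applied)
print(classify_fricative(ambiguous, "u"))   # -> s (boundary shifted by context)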

The ability of listeners to normalize variable speech presupposes long experience with variation in speech:
speakers produce variation in their speech but listeners learn how to factor it out or, more precisely, they
learn how to parse the nasalization to the nasal and dissociate it from the effects it has on the vowel.

The fact that listeners use context to adjust their perceptual criteria in recognizing the objects of speech
should not be surprising. This is a manifestation of the phenomenon known in psychology as “perceptual
constancy” and is well studied in other sensory domains. In vision we somehow manage to achieve constancy
in the perception of the size, shape, and color of objects seen even when there are remarkable variations in
those parameters as our eyes register them (Rock 1983).

Thus to return to the question posed above, we can give the following answer regarding the variation seen in
speech production: variation in the production domain does not by itself constitute sound change since
there is no change in the pronunciation norm;
the listener is able (somehow) to reconstruct the speaker's
intended pronunciation. I think we can maintain this position even in cases where speakers may deliberately
(if unconsciously) take articulatory “short cuts” but assume that listeners can nevertheless figure out what
they were aiming at. Similarly, in writing anyone who uses or even invents an abbr. for a wd assumes the
reader can figure out what was intended; no change in the spelling norm for the abbreviated word is
intended or taken.

Of course, occasionally the listener may fail to normalize or correct the contextually determined variations in
the speech signal. In such cases a new norm does develop and a sound change occurs. I have referred to
such cases as “mini-sound changes” - “mini” because at initiation such sound changes are limited to a given
listener and a given word or sound. This is why variations in production resemble sound change; they can
create ambiguity in the speech signal which the listener is unable to resolve. The listener, however, is the
final (unwitting) gatekeeper regarding which production variants become sound changes.⁵

2.5 Variation in speech perception = sound change?

In the case of variation in speech perception, we have to answer the question in the affirmative.
Misperceptions are potential sound changes because they may result in a changed pronunciation norm on
the part of listeners if their misperceptions are guides to their own pronunciation.

2.6 “Mini” sound changes

This conception of sound change, however, still does not fully answer the question: even if not all
production variation becomes sound change, the rate at which such mini-sound changes would occur would
still be very high. And perceptual errors probably also occur at a rather high rate. We are still left to wonder
at the discrepancy between the expected high rate of sound change and its actually observed slow rate. The
final resolution to the paradox is to recognize that most listeners’ errors eventually get corrected. Listeners
have more than one opportunity or more than one source from which to learn the pronunciation of words;
the probability of making the same error many times is no doubt quite low. Finally, even in cases where a
listener's error goes uncorrected, it may perish with the individual who made it, that is, other speakers may
not copy it and thus it would go unnoticed by historical linguistics. The norms for pronunciation are
distributed in the minds of all the speakers of the community; it is surely a rare occasion when one
individual's changed norm influences the rest of the population. Mini-sound changes become “maxi-” sound

changes at a very low rate and this is the reason that, by and large, pronunciation changes so slowly over
the centuries in spite of the variability that can be seen in the speech signal. Nevertheless, the sound
changes that have been documented in the histories of languages are drawn from a pool of synchronic
variation (Ohala 1989).
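
A back-of-the-envelope simulation helps make this rate argument concrete. The sketch below is purely illustrative: every probability is an assumed value chosen for exposition, not an empirical estimate.

# Why abundant "mini-sound changes" yield very few community-level ("maxi-")
# sound changes: a listener's misparse must also escape correction and then
# spread socially, and each step is improbable.

import random

random.seed(1)

P_MISPARSE = 0.01      # chance a listener mis-parses a token of a word
P_UNCORRECTED = 0.001  # chance that error is never corrected by later exposure
P_SPREADS = 0.0001     # chance one individual's changed norm spreads socially

def token_becomes_maxi_change() -> bool:
    return (random.random() < P_MISPARSE
            and random.random() < P_UNCORRECTED
            and random.random() < P_SPREADS)

tokens = 1_000_000     # tokens of a word heard across a community
maxi = sum(token_becomes_maxi_change() for _ in range(tokens))
print(f"mini-sound changes expected: ~{tokens * P_MISPARSE:.0f}")
print(f"maxi-sound changes observed in simulation: {maxi}")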

2.7 Assimilative and dissimilative sound changes

The preceding account of sound change, I believe, applies to the vast majority of common sound changes
considered to be assimilative, that is, where a previously phonetic, purely mechanical, coloring of a sound by
another contextual sound becomes independent of that context and is articulated in its own right. This is the
process commonly known as “phonologization” (Hyman 1976; Jakobson 1931). Examples are the
nasalization of vowels near nasal consonants, the palatalization of consonants near palatal vowels and
glides, the development of tones due to consonantal influence, vowel harmony, changes in vowel quality due
to nasalization, changes in vowel quality due to adjacent consonants, stop epenthesis, etc. However, still
unexplained is a large class of sound changes characterized as dissimilative, that is, where two sounds
sharing one or more phonetic features change such that they become less similar to one another (or, in
some cases, where one of the sounds disappears). To be sure, dissimilation is far less common than
assimilation - to the point that some phonologists seem unwilling to characterize it as a “natural” sound
change or at least on a par with the rest (Bloomfield 1933: 390; Hock 1986: ch. 6; Schane 1972) - but it is
nevertheless a well-documented type of sound change. Given the account above of assimilative sound
change, dissimilation would seem to present a problem: we can give phonetic reasons why a sound will take
on features of adjacent sounds, but why should a sound become less like adjacent ones?

In fact, we have reviewed above reasons why this might happen. In “normal” speech perception, when the
listener correctly figures out the pronunciation intended by the speaker, the listener has had to use some
cognitive strategies to normalize or correctly parse the variable signals received from the speaker.
Assimilative sound change occurs when the listener fails to make that correction. Dissimilative sound change
can happen when the listener inappropriately implements the correction or normalization. For example, in
Russian a low front [a] has become [ɑ] near palatal segments, for example, /stoj + ā/ > /stojā/ ‘standing’
(Darden 1970). Listeners probably expected that vowels would be fronted near palatal consonants and so
discounted this contextual effect and mistakenly created a pronunciation norm where the vowel was back.

There is laboratory evidence that listeners do this kind of “hypercorrection” on occasion. Beddor et al.
(1986) found that under certain conditions listeners identified more of the /ε̃n/-/æ̃n/ continuum as /ε̃n/ (vis-à-vis the /εd/-/æd/ continuum) when there was slight nasalization on the vowel. Further evidence for
hypercorrection was presented by Ohala and Busà (1995). Dissimilative sound changes, like the assimilative
ones, involve listener error; it is just a different kind of error.

2.8 Terminology

In other works (Ohala 1992a, 1993a) I have referred to the perceptual processes which normalize or “correct”
the kind of coloring that one speech sound imposes on another, that is, when the listener correctly deduces
the signal intended by the speaker, as “correction.” This applies to the vast majority of exchanges between
speaker and listener. The errors of perception, then, are of two types. One is “hypocorrection,” where the
listener fails to implement the corrective strategies and takes the contextually distorted speech signal at face
value. These errors underlie the vast majority of sound changes which are commonly labeled “assimilative”
(but they underlie other sound change types as well; see below). The dissimilative sound changes (and others based on listener expectations) are those that arise from listeners' inappropriately applying corrective processes; these errors I have termed "hypercorrection."
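
The three outcomes (correction, hypocorrection, hypercorrection) can be stated as simple bookkeeping over a contextual effect. The sketch below uses arbitrary units of vowel nasalization in the scenario discussed above (a vowel before a nasal consonant); the numbers are invented and the model is deliberately crude.

# "Correction," "hypocorrection," and "hypercorrection" as subtraction of
# whatever contextual coloring the listener attributes to context.

def perceive(signal: float, assumed_context_effect: float) -> float:
    """Listener subtracts the coloring they attribute to the context."""
    return max(signal - assumed_context_effect, 0.0)

INTENDED = 0.1          # speaker's intended (nearly oral) vowel nasalization
CONTEXT_EFFECT = 0.6    # nasalization mechanically spread from a following /n/
signal = INTENDED + CONTEXT_EFFECT   # what actually reaches the listener

print(f"correction:      {perceive(signal, CONTEXT_EFFECT):.1f}  (norm preserved)")
print(f"hypocorrection:  {perceive(signal, 0.0):.1f}  (signal taken at face value: assimilative change)")
print(f"hypercorrection: {perceive(signal, 2 * CONTEXT_EFFECT):.1f}  (too much attributed to context: dissimilative change)")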

2.9 Evidence supporting the claim that dissimilation is perceptual hypercorrection

There is evidence in favor of this account of dissimilation.

2.9.1 Which features are subject to dissimilation

First, it is possible to predict which features are and which features are not likely to be subject to
dissimilation. Dissimilation, especially dissimilation "at a distance," such as Grassmann's law, which affects
segments that are separated by others which are unaffected by the change, should occur primarily on
features whose acoustic-perceptual cues are known to spread over relatively long time intervals beyond the
immediate “hold” of the segment they are distinctive on. This includes aspiration, glottalization, retroflexion,

background image

12/11/2007 03:43 PM

22. Phonetics and Historical Phonology : The Handbook of Historical Linguistics : Blackwell Reference Online

Page 8 of 13

http://www.blackwellreference.com/subscriber/uid=532/tocnode?id=g9781405127479_chunk_g978140512747924

immediate “hold” of the segment they are distinctive on. This includes aspiration, glottalization, retroflexion,
palatalization, pharyngealization, labialization, etc. These are most likely to color adjacent segments and
require the listener to “undo” their effects. When the same feature occurs distinctively on two sites within a
word, the long-distance diffusion of its cues creates maximal ambiguity for the listener. Segments whose distinctive
features do not migrate substantially in time should not be subject to dissimilation. This includes the
features that cue stops, affricates, and voicing.⁶ Although there are apparent problem cases that need further discussion and examination, such as laterals and fricatives,⁷ my own survey of the historical phonology literature seems to support the above predictions (see Ohala 1981a, 1992a).

2.9.2 Preservation of the conditioning environment

Second, based on this account of sound change, different predictions can be made as regards the fate of the
conditioning environment in assimilative and dissimilative sound changes. In assimilative changes, the
conditioning environment may be lost; indeed, if the listener fails to detect the conditioning environment,
this is a transparently obvious reason for the listener to fail to take into account how that environment
might have influenced the target sound. Thus sound changes of the following sort are common:

(3)

This is not to say that all assimilative sound changes lose their conditioning environment. Vowel harmony is
a well-known example where the conditioning environment remains. All I am pointing out here is that the
environment may be lost in such changes. What is common to cases of hypocorrection where the
environment is lost and where it is not lost is that the listener fails to establish any causal link between the
conditioning environment and the conditioned variation.

In dissimilative sound changes, however, the environment may not be lost at the same time as the change
in the target segment or feature. The reason obviously is that the conditioning environment must be
detected by the listener in order that he or she blame that environment for what is thought to be a
distortion on an adjacent segment. Specifically, what is predicted not to happen is a hypothetical variant of
the dissimilation of the /w/ that was part of the historical development of the word for ‘five’ in various
Romance languages, in contrast to the normal retention of the /w/ in other cases as in (4):

(4)

As far as I am aware, this prediction is borne out; dissimilative sound changes invariably retain the
conditioning environment, at least in the earliest stages of their development.

2.9.3 Dissimilation doesn't produce novel segments

With assimilative sound changes it may be possible to create novel contrasts or sound sequences; for
example, when French and Hindi acquired distinctively nasal vowels after loss of adjacent nasal consonants,
the nasal vowels represented additions to the vowel inventory. With dissimilative sound changes, this seems
not to be the case. The result of dissimilative sound changes appears, in general, to be segment types or
sequences that were already present in the language. Dissimilative changes are thus “structure preserving.”
This follows from the fact that it is listeners’ normalization of what they imagine to be a distorted signal that

leads to dissimilation.

2.9.4 The domain of dissimilation is the word

There is additional evidence supporting my account of dissimilation based on the typical domain over which
dissimilation applies. As background for this it is useful to note that prior explanations for dissimilation have
often invoked the concept of "ease of articulation": for example, in Grassmann's law, where two aspirates in a word result in the first one becoming deaspirated, it has been claimed that speakers tried to avoid the cost of articulating two such physiologically costly segments in a row (Müller 1864; Ladefoged 1984). That explanation would be plausible except for two facts which don't fit. First, as mentioned above, it is usually the first of the two aspirates which undergoes deaspiration, whereas cost is a cumulative function and would be expected to be higher on the second. If articulatory cost were being trimmed, the motivation to do so would be expected to affect the second, not the first, aspirate. Second, if such physiological cost mattered,
then dissimilation would be expected to occur on any two sequential aspirates that occur in utterances, even
those which occur in separate successive words. This is not the case, however; Grassmann's law and other
dissimilations occur only within the domain of the word (or possibly, in lexicalized compounds). A word, by
definition, is a fixed collocation of speech sounds which, by their combination and permutation, signal
different meanings. For example, in the word ‘leg’ the vowel [ε] is permanently colored by the preceding [l]
and by the following [g]. This presents the listener with the maximum ambiguity as to what the intended
quality of the vowel is and, as it happens, this word is realized dialectally as [leĭg], where, apparently,
listeners parsed some of the [g] onglide as a diphthongal ending to the vowel. In contrast, elements that are
freely permutable offer listeners the opportunity to hear that sound in many different phonetic environments
and thus enable them to factor out the contextual distortions more easily. The fact that most sound
changes occur within words or within common phrases that may be lexicalized is an argument that sound
change - assimilative as well as dissimilative - is not related to physiological cost but is primarily a parsing
error on the part of the listener.

2.10 Other sound change types

So far I have tried to make the case that common assimilative and dissimilative sound changes arise from
listeners’ perceptual parsing errors. Is there any possibility that other types of sound change could likewise
be shown to be due to parsing errors? Much work still needs to be done, but I think it is at least plausible
that this is the case. It is certainly not possible yet to present anything like a complete argument on this
point, but I can at least share the reasons for my optimism.

2.10.1 Metathesis

Metathesis, the interchange of nearby speech sounds, comes in various forms. For example, Old English (OE)
clapse has given rise to Modern English (ModE) clasp; OE hros to ModE horse. Blevins and Garrett (1998)
have recently presented arguments that vowel-consonant metathesis comes about from listeners’ misparsing
of the speech signal in a way similar to that which I have posited for dissimilation. Consonant-consonant
metathesis of the sort clapse ~ clasp in a great many cases across diverse languages involves interchanges

between adjacent stops and some kind of noisy segment; cf. also Sanskrit hasti ‘elephant’ and Prakrit hatʰːi, where, it seems, the distinctive aspiration is the heir, after metathesis, to the earlier /s/. There may be a
psychoacoustic basis for this. Warren (forthcoming) presents evidence that, when presented with sequences
of speech-like sounds with typical speech sound durations, listeners cannot readily “unpack” the order of the
sounds but rather hear the sequence in a holistic way.

2.10.2 Epenthesis

A wide variety of consonant epenthesis, as in table 22.4, can readily be accounted for in terms of fortuitous overlap or coarticulation of two articulatory "valves" which are separately associated with the production of two adjacent segments. For example, in the case of [ls] > [lts], the first segment, [l], requires tongue-palate contact at the midline but no contact on at least one side; the second segment, [s], requires
the reverse: tongue-palate contact at both sides but not at the midline. In the transition between these two
segments, when coarticulation may occur, both of these contact patterns may overlap, thus creating a
complete stop. (See Ohala 1974, 1995a, 1997, forthcoming, for further details.)
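
The [ls] > [lts] case can be rendered as a small data-structure exercise; the feature inventory below is deliberately crude and invented solely to illustrate how gestural overlap yields complete closure.

# Toy representation of the [ls] > [lts] epenthesis described above: each
# segment is reduced to its tongue-palate contact requirements, and temporal
# overlap of the two patterns during the transition yields complete closure,
# i.e., an epenthetic [t].

L_CONTACT = {"midline"}                 # [l]: contact at midline, open at a side
S_CONTACT = {"left_side", "right_side"} # [s]: contact at the sides, open midline
FULL_CLOSURE = {"midline", "left_side", "right_side"}

transition = L_CONTACT | S_CONTACT      # coarticulatory overlap of both gestures

if transition >= FULL_CLOSURE:
    print("Overlap produces complete oral closure: [ls] is realized as [lts].")
else:
    print("No complete closure; no epenthetic stop.")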

2.10.3 Elision


Many forms of elision (apocope, syncope, procope) can plausibly be traced to some speech segments being
obscured by others. Browman and Goldstein (1988), for example, demonstrate how, in one speaker's
utterance of the phrase perfect memory, phonetically [ˈpʰɚfɛkt ˈmɛmɚi], the final /t/, although articulated,
was not evident in the acoustic signal because it was obscured by the overlap of the /k/ and /m/
articulations. Such cases present listeners with little or no evidence of the obscured segment and can thus
lead them to form a novel pronunciation norm. Similarly, it is not difficult to imagine that brief, weakly
articulated, vowels seemed ambiguous to listeners, consistent as well with no vowel or with simple release of
a preceding consonant. This would explain such cases as the loss of unstressed penultimate vowels in the
development from Late Latin to Early French, for example, pĕrdĕre > pęrdr ‘to lose,’ simŭlo > sẽmbl
‘seem’ (Pope 1934: 112).

Table 22.4 Types of epenthesis that can be explained phonetically

Environment of epenthesis     | Language        | Form showing epenthetic stop | Source
Nasal_oral obstruent          | English         | Thompson                     | Thom + son (proper name)
Oral obstruent_nasal          | Sanskrit        | viṣṭṇu-; biśtu               | viṣṇu- ‘Vishnu’
Nasal_oral non-obstruent      | Latin           | templum                      | *tem-lo ‘a section’
Homorganic lateral_fricative  | English         | [ɛlts]                       | else
Homorganic fricative_lateral  | Latin > Italian | [iskja] Ischia               | iskla < istla < isla ‘island’
Ejective < stop_[ʔ]           | Chumash         | k’ap ‘my house’              | k + ʔap 1st pers. + ‘house’
Labial nasal_coronal nasal    | Landais French  | fempne                       | femina ‘woman’

2.10.4 “Automatic” vowels near tap and trill /r/'s

Ohala and Kawasaki-Fukumori (1997) consider the case of the appearance and disappearance of vowels
between tap and trilled r's. For example, obstruent r's such as [ɾ, r, ʀ] consist of abrupt amplitude
modulations of a vocalic carrier signal (see Ladefoged and Maddieson 1996: 218). Without the carrier signal,
they could not exist. Often, when such r's are in clusters with other consonants, pre- or post-vocalic, a brief
part of the vocalic carrier signal will intervene between them. This so-called “automatic vowel” has been well
studied in Spanish (Gili Gaya 1921; Navarro Tomás 1918; Quilis 1970). It can happen that this brief vocalic
element is misparsed by listeners as a full, intended, vowel, not as a carrier of the r. Menéndez-Pidal (1926:

217–18) provides examples (dial.) such as corónica (< crónica, ‘chronicle’) and pᵉredicto (< predicto ‘I predict’). Similarly, when a full vowel becomes short and is flanked by an r and another consonant, it might
be parsed by listeners as this automatic vowel and thus discounted.

3 Discussion: The Implications of the Above Account of Sound Change

There are a number of implications of the above account of sound change:

• First, sound change, at least at its very initiation, is not teleological. It does not serve any purpose
at all. It does not improve speech in any way. It does not make speech easier to pronounce, easier to
hear, or easier to process or store in the speaker's brain. It is simply the result of an inadvertent error
on the part of the listener. Sound change thus is similar to manuscript copyists’ errors and
presumably entirely unintended. I leave unaddressed the separate question of whether, after its
initiation, the success of a sound change's transmission and spread may be influenced by teleological
factors (but see Lindblom et al. 1995 for a discussion of this issue). (See Ohala 1995c for a discussion
of phonetic factors in the spread and extension of a sound change.)

• Second, as a correlate of the above: the “change” aspect of sound change is not mentalistic and thus
is not part of either the speaker's or the listener's grammar. Language change results in grammar
change but it is not caused by the grammar, where “grammar” means the psychological representation
of language. There is, to be sure, much cognitive activity - teleology, in fact - in producing and
perceiving speech, but all the evidence we have suggests that this is directed toward preserving, not
replacing, pronunciation norms.


The theoretical literature on sound change contains many claims to the contrary: that sound change
improves communication, that it is implemented by altering the grammar, etc. (e.g., Jespersen 1894;
Kiparsky 1968; King 1969; Martinet 1952; Vennemann 1993; Lindblom et al. 1995). It is not my purpose to
attempt a detailed refutation of these arguments: I am simply presenting an alternative view and marshaling
evidence in support of it. But I proffer one comment on the arguments offered to show that language change
is directed toward some goal: any of several aspects of language can be cited as showing some
improvement due to a given change: the size of the phoneme inventory, the symmetry of the inventory (or
lack of it), the phonotactics, the canonical shape of syllables, morphemes, or words, the opacity of
morphologically related forms, the loss or addition of inflectional affixes, the structure of the lexicon, the
functional load of certain elements, etc., etc. With so many “degrees of freedom” to invoke, where is the
rigor in finding some area of alleged improvement following a specific change? What is the null hypothesis
which the improvement arguments are competing against? I suspect it is not possible to fail to find some
feature which one can subjectively evaluate as an “improvement” following a given sound change. But the
lack of rigor in marshaling the evidence makes such accounts less interesting.

In contrast to the subjectively based teleological accounts of sound change, the phonetically based account
of sound change presented here does offer the possibility of rigorous testing in the laboratory; see below:

• Third, this account identifies the listener as having the lead role in sound change. This is in contrast
with almost two centuries of speculation on the causes of sound change which focused on the
speaker. To be sure, the speaker is responsible for much of the phonetic variation seen in speech, but
it has been shown in speech perception studies that listeners are normally successful in parsing this
variation to its proper sources. Variability created by the speaker makes the speech signal ambiguous
to the listener, but it is the listener who inadvertently makes the error in (re)constructing the
pronunciation norm.

• Fourth, this account permits a full integration of the cumulative results of phonetic studies with
those of historical phonology. The remarkable parallels between synchronic and diachronic variation
are explained. One of the most important aspects of the comparative method is establishing likely
paths, that is, sound changes, between one posited state of a language and another. Phonetics can
assist in evaluating alternative paths. The benefits of this integration do not flow in just one direction,
that is, from phonetics into historical phonology. The cumulative results from historical phonology,
that is, descriptions of the historical development of numerous languages, represent, I think, a vast
treasury of data that, if interpreted properly, provides hints on the workings of speech (Ohala 1974,
1993b). Following these leads can benefit many areas of applied phonetics, from language teaching to
speech technology (synthesis and recognition of speech).

• Perhaps the most important aspect of this view of sound change is that it shows how sound change
can be studied in the laboratory. Fine-grained studies of the articulatory and acoustic details of
speech can show the source of variability and thus perceptual ambiguity in the speech signal. Speech
perception studies can show how listeners accommodate this variability. Listeners’ perceptual errors
constitute what I have called “mini-” sound changes. From such studies it may even be possible to
give a principled rank ordering of sound changes according to their likelihood.
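
As a minimal sketch of the rank-ordering idea just mentioned: if laboratory confusion probabilities are taken as proxies for the likelihood of mini-sound changes, candidate changes can simply be sorted by them. The probabilities below are invented placeholders, not measured values from any study.

# Ranking hypothetical sound changes by assumed misperception probability.

toy_confusion_probs = {
    ("kʲ", "tʃ"): 0.12,   # palatalized velar heard as affricate
    ("pʲ", "t"):  0.08,   # palatalized labial heard as coronal
    ("θ", "f"):   0.05,
    ("t", "pʲ"):  0.01,   # the reverse direction, much rarer
}

ranked = sorted(toy_confusion_probs.items(), key=lambda kv: kv[1], reverse=True)
for (source, target), p in ranked:
    print(f"{source} > {target}: assumed likelihood {p:.2f}")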

• In historical phonology there has long been a quest to answer this question: “[w]hy do changes in a
structural feature take place in a particular language at a given time, but not in other languages with
the same feature, or in the same language at other times?” (Weinreich et al. 1968: 102). I suggest
that as far as the initiation of sound change is concerned, this question may be unanswerable and not
worth pursuing. If sound change is equivalent to listeners’ errors, then the question reduces to “why
did listeners (or a listener) of a particular language misparse the speech signal at a given time but not
speakers of other languages, etc.?” Given inherent ambiguities in speech, there is some probability in
any and all languages at any and all times that certain misperceptions, that is, mini-sound changes,
will occur. It is rather a question of which of all the mini-sound changes that crop up constantly are
for some reason “selected” via psychological and social factors to be copied by other speakers. The
answer to the “why this language at this time?” question lies in the transmission, not the initiation, of
sound change.⁸ I think it will be extremely difficult to get a rigorous answer to this question for a specific sound change.

• Since the early days of generative linguistics, the grammars that speakers acquire as they learn a
language have been claimed to be simple. This was because grammatical rules are supposed to be

general and generality correlates with simplicity. Feature-counting or the quantitative measure of
what's general gave way in 1968 (Chomsky and Halle 1968) to a qualitative measure of generality,
namely, rules and the grammars that contained them were evaluated according to their degree of
(un)markedness or naturalness or expectedness. Unmarked or natural phonological processes such as
“the obstruents devoice” (versus the marked or unnatural process “obstruents voice”) were preferred.
Since that time many other devices have been introduced to ensure that grammars are natural. In phonology this typically means "phonetically natural." But it was simply decreed that grammars are simple or general, and similarly it was by decree that naturalness was made a desirable property of mental grammars. This conception needs re-examination. The question is:
can native speakers differentiate between phonetically natural and phonetically unnatural processes in
the sound patterns in their language?

The phonetically-based account of sound change given here provides a sufficient account of how natural
rules get into a language. But no one could seriously maintain that the native speaker is (i) aware of the
history of her or his language and (ii) aware of the physical processes (Boyle's law, fluid dynamics, etc.) that
govern these processes. So the phonetic primitives invoked in the modeling of these processes make no
pretenses of being psychological. The attempts by those who are interested in psychological phonological
grammars and in finding ways to represent phonological processes (the results of sound change) in
phonetically natural ways have been abysmal failures (Ohala 1995b). One possible solution to this is not to
put more phonetic sophistication into psychological grammars but rather to abandon phonetic naturalness
as a necessary feature of them.

In any case, I think it is time that the question of the site of phonetic naturalness in languages’ sound
patterns be re-examined. I take it as demonstrated that historical grammars of language should have
phonetic naturalness; it is not clear that psychological grammars need it.

ACKNOWLEDGEMENTS

For helpful comments on earlier drafts, for collaboration on some of the studies cited here, and for
bibliographic leads, I thank Mariscela Amador, Steve Greenberg, Haruko Kawasaki-Fukumori, Manjari Ohala,
Madelaine Plauché, and Maria-Josep Solé.

1 See Ohala (1991, 1996) for an elaboration of the term “scientific phonetics.”

2 But see Darwin (1803: 119).

3 Although it is difficult to prove, I have the impression that native speaker linguists report less allophonic
variation in their language than do linguists who are not native speakers of the language. If so, there is an
explanation for this which is also highly relevant to an understanding of sound change. I return to this point later.

4 Regarding the asymmetry in the direction of confusion and in the direction of many sound changes, see Ohala
(1983a, 1985a, 1997); Plauché et al. (1997).

5 The claim that sound change is due to listeners’ errors is hardly original; see Bredsdorff (1821); Passy (1890);
Anderson (1973); Allen (1951); Durand (1955); von Essen (1964); Jonasson (1971); among others. Where my
approach differs is that I claim that listeners’ errors constitute the main and the essential factor in sound change
(assuming sound change is taken as “new pronunciation norm”) and that I marshal phonetic evidence in support of
the claim.

6 “Voice” is included in this list because its primary cue is presence or absence of periodic excitation during the
voiced segment itself - the periodicity does not “spread.” For physiological, especially aerodynamic, reasons,
voicing may be assimilated by segments adjacent to other voiced segments, but it doesn't follow from this that the
perceptual cues for voicing have spread.

7 Laterality, per se, does not spread onto adjacent non-lateral segments and so would be expected not to be
subject to dissimilation. Nevertheless, it is well known to dissimilate, for example, in the case of the suffix -alis/-aris in Latin: universalis but militaris. However, the acoustic-perceptual cues for laterals include relatively long
spectral transitions and these probably account for their occasional dissimilation. Frication, like voicing, is
perceived via the relative periodicity of the speech signal in a very short time interval. It is predicted not to be
subject to dissimilation.

subject to dissimilation.

8 There may be phonetic factors at play in the spread of some sound changes; see Ohala (1995c).

Cite this article

OHALA, JOHN J. "Phonetics and Historical Phonology." The Handbook of Historical Linguistics. Joseph, Brian D. and
Richard D. Janda (eds). Blackwell Publishing, 2004. Blackwell Reference Online. 11 December 2007
<http://www.blackwellreference.com/subscriber/tocnode?id=g9781405127479_chunk_g978140512747924>

Bibliographic Details

The Handbook of Historical Linguistics

Edited by: Brian D. Joseph and Richard D. Janda
eISBN: 9781405127479
Print publication date: 2004

