different vowels (high sonority) and least apparent for CV strings with different stop consonants (low sonority). Strings with different nasal consonants and glides (moderate sonority) produced intermediate recency. In addition, lists of high vowel-contrasting syllables (high-moderate sonority) were tested against lists of low vowel-contrasting syllables (high sonority). Overall, results suggest that recall performance for the final positions of an auditory list is predicted by the sonority of the contrast segment. The results are consistent with findings in the categorical perception and auditory memory literatures, and support the notion of a common dimension underlying these well-known effects and the linguistic notion of sonority. The experiments provide additional evidence to support the view that memory for a sound does not differ according to its identity as a vowel or consonant, but instead is influenced by its acoustic properties. [Work supported by NICHD.]
5SP10. Musical duplex perception: Does perceptual dominance reflect general principles or specialized modules? Michael D. Hall and Richard E. Pastore (Dept. of Psychol., SUNY, Binghamton, NY 13901)
In a variant of duplex perception (DP), a phonetic module is claimed to take precedence over nonspeech processing, based upon maintained phonetic perception despite discontinued nonspeech perception of the critical stimulus component. Recent attempts to replicate these findings with nonspeech stimuli fail to meet proposed criteria for stimuli used in strong demonstrations of DP. The present musical experiment first established threshold intensities for detecting the sinusoidal, chord-distinguishing note in the context of constant-intensity piano notes. AX chord discrimination then followed, with the distinguishing notes varying in intensity. As with speech, complex perception of chords was maintained at intensities significantly below the detection threshold for component perception. Both the speech and music findings could demonstrate general principles of perception, with multiple-component stimuli that preserve a strong integrative relationship between components being more readily perceived as singular events, at less stimulus energy than is required for isolating the individual components. [Work supported by NSF.]
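[Editorial illustration, not part of the abstract: a minimal Python sketch of the kind of comparison the study implies, setting AX discrimination sensitivity (d') for the chord-distinguishing note at several intensities against a separately measured detection threshold. All intensity levels, trial counts, and response tallies are hypothetical placeholders.]

```python
# Sketch only: compare hypothetical AX discrimination performance at several
# note intensities against an assumed detection threshold for the same note.
from scipy.stats import norm

def d_prime(hits, false_alarms, n_signal, n_noise):
    """Yes/no d' from hit and false-alarm rates (a simplification for AX data),
    with a small correction to avoid infinite z-scores at proportions of 0 or 1."""
    h = min(max(hits / n_signal, 0.5 / n_signal), 1 - 0.5 / n_signal)
    f = min(max(false_alarms / n_noise, 0.5 / n_noise), 1 - 0.5 / n_noise)
    return norm.ppf(h) - norm.ppf(f)

# Hypothetical data: note intensity (dB) -> (hits, false alarms) out of 50 trials each.
ax_results = {40: (27, 24), 45: (38, 15), 50: (46, 6)}
detection_threshold_db = 47.0   # assumed threshold from the detection task

for level, (hits, fas) in sorted(ax_results.items()):
    dp = d_prime(hits, fas, 50, 50)
    below = "below" if level < detection_threshold_db else "at/above"
    print(f"{level} dB: d' = {dp:.2f} ({below} detection threshold)")
```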
5SP13. Perception of prominence in CV sequences by Estonian and English listeners. Ilse Lehiste (Dept. of Linguistics, Ohio State Univ., 205 Cunz Hall, Columbus, OH 43210) and Robert Allen Fox (Ohio State Univ., Columbus, OH 43210)
In previous work [I. Lehiste and R. A. Fox, J. Acoust. Soc. Am. Suppl. 1 87, S72 (1990)], the perception of “prominence” in sequences of noise tokens by native Estonian and American English listeners was examined while individual token duration and amplitude were manipulated independently. The present study used the same basic experimental paradigm but with synthetic CVs ([ba]) rather than noise bursts. The basic CV token was 400 ms in duration, with 40-ms formant transitions and a 360-ms steady-state vowel. In the experimental trials, however, one token in each sequence could be lengthened to 425, 450, 475, or 500 ms and/or one token (not necessarily the same token) could be increased in amplitude by 3 or 6 dB; these duration and amplitude changes were independent. Listeners were required to indicate which CV in the sequence was “more prominent.” Listening tests were given to 33 native speakers of English in Columbus, Ohio and to 40 native speakers of Estonian in Tallinn, Estonia. As in the earlier study, the responses showed that Estonian listeners were more sensitive to token duration in making their “prominence” decisions than were the English listeners. Differences between the two sets of results (obtained using noise versus speech stimuli) will be discussed.
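[Editorial illustration, not from the original study: a short Python sketch enumerating the independent duration and amplitude manipulations described above. The sequence length and the inclusion of “no change” trials are assumptions made for the example.]

```python
# Sketch only: enumerate (duration change, position) x (amplitude change, position)
# conditions, with the two manipulated positions varying independently.
from itertools import product

SEQUENCE_LENGTH = 5                         # assumed number of [ba] tokens per sequence
LENGTHENED_MS = [None, 425, 450, 475, 500]  # None = token keeps its 400-ms base duration
AMP_BOOST_DB = [None, 3, 6]                 # None = no amplitude increment

conditions = set()
for dur, amp in product(LENGTHENED_MS, AMP_BOOST_DB):
    # The lengthened token and the louder token need not be the same token,
    # so their positions within the sequence vary independently.
    for dur_pos, amp_pos in product(range(SEQUENCE_LENGTH), repeat=2):
        conditions.add((dur, dur_pos if dur is not None else None,
                        amp, amp_pos if amp is not None else None))

print(f"{len(conditions)} distinct trial conditions")
```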
5SP11. A psychological space for place of articulation. Xiao-Feng Li and Richard E. Pastore (Dept. of Psychol., SUNY, Binghamton, NY 13901)
A psychological space for place of articulation was explored using synthetic /ba/ and /da/ stimuli factorially varying in F2- and F3-onset frequencies. The data from a goodness judgment experiment were subjected to multidimensional scaling. Within the derived space of each phoneme category, the stimulus at the center (or closest to the center) was designated as the prototype. The results from a speeded classification task indicated that the response time to each phoneme is an ordinal function of the goodness measure. The heterogeneity of the resulting goodness spaces and response times in classification suggests that classification of place of articulation could be the consequence of the subject evaluating the membership function relative to the prototype. [Work supported by NSF.]
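[Editorial illustration, not the authors' analysis code: a minimal Python sketch that derives a low-dimensional space from goodness-based dissimilarities with multidimensional scaling, designates the stimulus closest to the centroid as the prototype, and checks whether response time is an ordinal function of goodness via a rank correlation. The ratings, response times, and dissimilarity measure are placeholders.]

```python
# Sketch only: MDS space, centroid-nearest prototype, and rank correlation
# between goodness and classification response time, on invented data.
import numpy as np
from sklearn.manifold import MDS
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_stimuli = 9                                   # hypothetical stimulus grid size
goodness = rng.uniform(1, 7, n_stimuli)         # placeholder mean goodness ratings
rt_ms = 900 - 60 * goodness + rng.normal(0, 20, n_stimuli)  # placeholder RTs

# Dissimilarity here is just the absolute goodness difference; a real analysis
# would build it from the full judgment data.
dissim = np.abs(goodness[:, None] - goodness[None, :])

coords = MDS(n_components=2, dissimilarity="precomputed",
             random_state=0).fit_transform(dissim)

centroid = coords.mean(axis=0)
prototype = int(np.argmin(np.linalg.norm(coords - centroid, axis=1)))

rho, p = spearmanr(goodness, rt_ms)
print(f"prototype stimulus index: {prototype}; Spearman rho(goodness, RT) = {rho:.2f}")
```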
SSP12. Interference for “new” versus “similar" vowels in Korean speakers of English. Islay Cowieand Sun-Ah Jun (Dept. of Linguistics, Ohio State Univ., 204 Cunz Hall, Columbus, OH 43210)
Flege (1986, 1987) proposed that although the degree of establishment of a “new” phonetic category in the L2 is proportional to the degree of experience in the L2, equivalence classification prevents adult L2 learners from establishing a phonetic category for “similar” L2 phones. This paper examines the similar English vowels /i, u, a/ and the new English vowels /ɪ, ʊ/ in productions by Korean-English bilinguals with different degrees of experience in English, and compares them with those of English monolinguals. Also, formant values of the Korean vowels were compared between these Korean-English bilinguals and monolingual Koreans (Seoul dialect) to see whether L2 acquisition affects their Korean production. Preliminary results partially support Flege’s hypothesis. Productions of the new English phones were closer to the English norm than those of the similar phone /a/, but only for highly experienced bilinguals. On the other hand, the monolingual norms for the similar phone /i/ are too close to be called merely similar, whereas those for /a/ seem acoustically too different to be categorized as similar. Thus Flege’s notion of a similar phone needs to be refined or restricted.
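[Editorial illustration only: a minimal Python sketch of one way such formant comparisons can be quantified, namely Euclidean distance from a speaker's vowel productions to monolingual norms in F1-F2 space. All formant values below are invented placeholders, not measurements from the study.]

```python
# Sketch only: distance of hypothetical bilingual vowel productions to
# hypothetical monolingual English and Korean norms in F1-F2 space (Hz).
import math

english_norms = {"i": (280, 2250), "a": (710, 1100)}   # placeholder English means
korean_norms  = {"i": (300, 2280), "a": (750, 1200)}   # placeholder Korean (Seoul) means
bilingual     = {"i": (295, 2265), "a": (735, 1170)}   # placeholder bilingual productions

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

for vowel, production in bilingual.items():
    d_en = dist(production, english_norms[vowel])
    d_ko = dist(production, korean_norms[vowel])
    print(f"/{vowel}/: distance to English norm = {d_en:.0f} Hz, to Korean norm = {d_ko:.0f} Hz")
```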
Discrimination latencies were measured for two species of birds tested using pairs of tokens drawn from three synthetic continua based on the /la/ and /ra/ contrast: full-formant syllables, their sinewave-speech analogs, and the critical F3 distinctions as isolated sinewaves. For the full-formant continuum, both species showed a marked improvement in discrimination near the /l/-/r/ boundary, whereas for the F3 continuum, neither species showed a peak near the phonetic boundary. These results are comparable to human discrimination of the same continua [Best et al., Percept. Psychophys. 45, 237-250 (1989)]. However, for the sinewave-speech continuum, budgerigars showed a performance peak mirroring that for the full-formant syllables, like humans who perceived the sinewave-speech stimuli as speech, while zebra finches showed a linear function mirroring their performance for the F3 sinewaves, like humans who perceived the sinewave speech as nonspeech. These data provide new evidence of species similarities and differences in the discrimination of speech and speechlike sounds that strengthen and refine previous findings of sensitivities of the vertebrate auditory system to several acoustic distinctions associated with speech sound categories. [Work supported by NIH Grant DC00198 to RJD.]
5SP15. Adjusting dysarthric speech timing using neural nets. Shirley M. Peters and H. Timothy Bunnell (Speech Processing Lab., Alfred I. duPont Inst., Wilmington, DE 19899)