cate that subjects adopt a postlexical response strategy when targets occur late in the stimulus word or when attention cannot be consistently focused on a particular target position. (Work supported by PHS R01 DC0011-15.)
2:45-3:00
Break
9SP8. The role of lexical status in the segmentation of fluent speech.
Anne S. Henly and Howard C. Nusbaum (Dept. of Psychol., Univ. of Chicago, 5848 S. University Ave., Chicago, IL 60637)
Theories of word recognition propose that listeners use lexical status to segment one word from another in fluent speech. Thus words must be recognized one at a time, in the order in which they were produced. This leads directly to the following predictions: (1) Words should be easier to identify following a word than following a nonword. (2) The lexical status of a syllable following a word should not affect the identification accuracy of that word. Subjects in the present experiment were asked to identify monosyllabic and trisyllabic target words presented in noise. Target words were presented with preceding word and nonword context syllables, as well as following word and nonword context syllables. Although the results confirm that listeners are able to use lexical status to facilitate segmentation, they also strongly suggest that listeners' use of lexical status is quite unlike the segmentation strategies proposed by most models of word recognition. (Research supported by NIDCD.)
9SP9. Listening to the sound of sentences. Howard C. Nusbaum, Kevin J. Broihier (Dept. of Psychol., Univ. of Chicago, 5848 S. University Ave., Chicago, IL 60637), and Judith C. Goodman (University of California, San Diego, CA)
Although intonation conveys a great deal of information relevant to understanding sentences, it is unknown how listeners actually use this information. How do listeners integrate information in the intonation of a sentence with information derived from a linguistic analysis of the words in the sentence? Syntactic information from the order of words in a sentence may be processed independently from syntactic information perceived from intonation. On the other hand, different sources of syntactic information may be treated as equivalent and integral. Subjects were instructed to judge whether the intonation of a sentence was declarative or interrogative. Statements and questions were produced in two forms: with a declarative intonation and with an interrogative intonation. In one condition, syntactic structure was constant for all trials and intonation varied. In a second condition, syntactic structure varied across trials, as did intonation. The results indicate that listeners are unable to ignore the syntactic structure of a sentence in judging the intonation of the sentence. Listeners treat different types of syntactic information as integral. These results suggest that perception of intonation is a direct and integral part of sentence understanding. (Research supported by NIDCD.)
9SP10. Some effects of text coherence on the comprehension of natural and synthetic speech. James V. Ralston, Scott E. Lively, and David B. Pisoni (Speech Res. Lab., Dept. of Psychol., Indiana Univ., Bloomington, IN 47405)
Subjects listened to naturally and synthetically produced (Votrax Type-n-Talk) passages of varying levels of difficulty in a sentence-by-sentence listening time task. Listeners controlled the intersentence interval while listening to passages presented in their normal sentence order or in a random sentence order. Subjects listening to Votrax speech had significantly longer intersentence response times in both the normally ordered and the randomly ordered sentence conditions. Furthermore, in a recognition test given after each passage, subjects' performance varied as a function of speech type, passage difficulty, and recognition question type. Subjects listening to synthetic speech responded more accurately to word recognition questions than to proposition recognition questions. Listeners who heard natural speech, in contrast, demonstrated better proposition recognition performance. The results indicate that listeners who heard synthetic speech attended more closely to the acoustic-phonetic input than to the propositions of the passages. The results are discussed in terms of a limited capacity attentional mechanism. (Work supported by NSF IRI 86-17847.)
9SP11. Context effects in the perception of personal information in the speech signal. John Mullennix (Dept. of Psychol., Wayne State Univ., Detroit, MI 48202), Keith Johnson (UCLA, Los Angeles, CA 90024), and Meral Topcu (Wayne State Univ., Detroit, MI 48202)
The speech signal contains linguistic, personal, and social information. Many studies have demonstrated that the perception of linguistic information is subject to context effects. This paper is a report of a study concerning context effects in the perception of personal information. When listeners were asked to identify the speaker of synthetic stimuli (the vowel /i/) in terms of male/female attributes, their responses were most affected by F0 and formant values with only a small effect of glottal waveform shape. The results of a perceptual anchoring study will be reported, in which listeners were asked again to identify the stimuli on the basis of speaker attributes, but with one endpoint of the synthetic continuum presented more often than any of the other stimuli. The results of this experiment will be discussed in terms of the hypothesis that listeners' perceptions of personal information in the speech signal are influenced by context. (Work supported by NIH.)
4:00
9SP12. On the perceptual differentiation of spontaneous and prepared speech. Robert E. Remez, Stefanie M. Berns, Jennifer S. Nutter, Jessica M. Lang, Lila Davachi (Dept. of Psychol., Barnard College, 3009 Broadway, New York, NY 10027), and Philip E. Rubin (Haskins Labs., 270 Crown St., New Haven, CT 06511)
Naive listeners are readily able to differentiate spontaneously produced speech from speech produced from text. The prior studies have employed lexically, syntactically, and thematically identical pairs of natural sentences extracted from brief fluent monologs ( <40 s in duration), finding relatively high levels of performance in tests of perceptual differentiation. To determine which attributes of the speech signal contribute to the perceptual differentiation of spontaneous and prepared