Article type: Research Article
Authors
1 Associate Professor, Department of English Translation, faculty member of Imam Khomeini International University, Qazvin, Iran
2 M.A. graduate in General Linguistics, Department of English Translation, Imam Khomeini International University, Qazvin, Iran.
Abstract
For spoken information transfer to succeed, the listener must recover the meaning of the spoken utterance, and thus the message. The message a listener receives is encoded in an acoustic signal produced by the physiological movements involved in speech production. The process of listening begins once this signal reaches the ear. After initial psychoacoustic processing of the input, the listener separates speech from other sensory input that may reach the ear (see Bregman, 1990, for a review). The acoustic signal is transmitted via the auditory nerves to the auditory cortex and is then converted into an abstract representation used to access the mental lexicon, the stored representations of words. The next processing stage is word recognition: at this stage, the listener has to segment the signal into meaningful discrete units. Once the words are recognized, subsequent processing stages are concerned with integration: listeners determine the syntactic and semantic properties of individual words and the syntactic and semantic relationships among them, and combine this knowledge with pragmatic and world knowledge to understand and interpret the utterance.
Listeners therefore employ a variety of linguistic cues, including phonological, morphological, syntactic, and semantic ones, to identify word boundaries during speech recognition. According to the Metrical Segmentation Hypothesis, when such linguistic information is unavailable, listeners rely on patterns of prosodic variation to identify word boundaries.
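The segmentation strategy the Metrical Segmentation Hypothesis describes can be sketched as a simple heuristic: hypothesize a word boundary before every strong (stressed) syllable. The sketch below is purely illustrative, not part of the study reported here; the function name, the hand-labelled syllable stream, and the strong/weak marks are invented for this example, and actual listeners (and computational models) operate on prosodic cues in the acoustic signal rather than on annotated strings.

```python
def metrical_segment(syllables):
    """Group a stream of (syllable, is_strong) pairs into candidate words,
    opening a new word at each strong (stressed) syllable, as the
    Metrical Segmentation Hypothesis suggests listeners do."""
    words, current = [], []
    for syl, strong in syllables:
        if strong and current:
            # A strong syllable is treated as a likely word onset,
            # so the syllables collected so far form a candidate word.
            words.append("".join(current))
            current = []
        current.append(syl)
    if current:
        words.append("".join(current))
    return words

# "conduct proper research" as a hand-annotated strong/weak syllable stream
# (stress marks are illustrative, not phonetically verified transcriptions)
stream = [("con", True), ("duct", False),
          ("pro", True), ("per", False),
          ("re", True), ("search", False)]
print(metrical_segment(stream))  # ['conduct', 'proper', 'research']
```

Note that the heuristic fails exactly where the hypothesis predicts listeners mis-segment: words beginning with a weak syllable (e.g. "a", "be-") are attached to the preceding word until other linguistic cues correct the parse.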