The Effect of Attention on Audiovisual Integration Processing
Posted: 2018-07-20 15:40
[Abstract]: Multisensory integration refers to the phenomenon in which information arriving through different sensory channels (visual, auditory, tactile, etc.), presented at the same time and place, is effectively combined by the individual into a unified, coherent percept. By merging information across sensory channels, multisensory integration reduces noise in the perceptual system and helps the individual perceive more effectively; behaviorally, this appears as faster and more accurate judgments of simultaneously presented bimodal information, a processing advantage known as the redundant signals effect. A review of earlier work shows that it focused mainly on the properties and mechanisms of multisensory integration itself; since vision and audition are the dominant sensory channels, the characteristics of audiovisual integration have been the chief focus. In recent years researchers have turned to the relationship between attention and audiovisual integration. Most existing studies, however, have asked only whether attended and unattended conditions differ in their effect on audiovisual integration, overlooking the fact that attention can be directed flexibly not only to locations and objects but also to sensory channels. The present research therefore examined how attention directed to different sensory channels affects audiovisual integration, and how the amount of attentional resources and fluctuations of attention modulate this effect. The dissertation comprises three studies with six experiments in total.

Study 1 used cue stimuli to direct participants' attention to different sensory channels (vision only, audition only, or both vision and audition) and examined whether attention directed to different channels affects audiovisual integration differently. Experiment 1 used shapes and brief tones as materials; participants made keypress responses to the target stimuli indicated by the cue. Participants responded fastest to audiovisual bimodal targets only when attending to vision and audition simultaneously (divided attention), i.e., a redundant signals effect emerged; no redundant signals effect appeared when attending only to vision or only to audition (selective attention). To test whether the redundant signals effect arose from integration of the visual and auditory components of the bimodal target, the cumulative distribution functions of response times were examined with race-model analysis, which showed that the effect did stem from such integration. That is, Experiment 1 showed that audiovisual integration occurs only under divided attention. Experiment 2 modified the materials of Experiment 1, using spoken Chinese monosyllabic words as auditory stimuli that were either semantically congruent or incongruent with the visual shapes, to examine whether attention directed to different channels affects audiovisual speech integration differently. Under selective attention, neither congruent nor incongruent audiovisual targets produced a redundant signals effect. Under divided attention, participants responded fastest to semantically congruent audiovisual targets, i.e., a redundant signals effect emerged for congruent targets, whereas incongruent targets showed no processing advantage. Race-model analysis showed that the effect arose from integration of the visual and auditory components of congruent targets. Thus Experiment 2 showed that only semantically congruent audiovisual targets are integrated, and only under divided attention; under selective attention no integration occurred regardless of semantic congruency.

Study 2 built on Study 1 to examine how attentional load affects audiovisual speech integration under divided attention, with two experiments addressing visual load (Experiment 3) and auditory load (Experiment 4). Experiment 3 found a redundant signals effect only in the absence of visual load; under visual load no such effect emerged. Race-model analysis showed the effect arose from integration of the visual and auditory components of semantically congruent targets. Thus, even under divided attention, audiovisual integration is modulated by visual attentional load: integration occurred only in the no-load condition. Experiment 4 found that participants responded fastest to audiovisual targets regardless of whether an auditory load was present, i.e., a redundant signals effect emerged both with and without auditory load, and the subsequent race-model analysis confirmed integration in both conditions. Thus audiovisual integration under divided attention is not modulated by auditory load. Taken together, Study 2 shows that visual and auditory attentional load affect audiovisual speech integration asymmetrically.

Study 3 used rhythmic audiovisual cues and, from the perspective of dynamic attending theory, examined how fluctuations of attention affect audiovisual speech integration under divided attention, along with the underlying neural mechanisms. Experiment 5 compared three conditions: targets in phase with the audiovisual rhythm (on-beat), out of phase (off-beat), and no rhythm (silence). Participants responded fastest to audiovisual targets only in the on-beat condition, i.e., a redundant signals effect emerged there and in neither of the other conditions. Race-model analysis showed the effect arose from integration of the targets' visual and auditory components. That is, audiovisual integration occurred only when the bimodal target fell on a peak of the attentional rhythm; in the other two conditions integration failed even though the visual and auditory components were presented simultaneously. Experiment 6, building on Experiment 5 with on-beat and silence conditions, used event-related potentials (ERPs), with their high temporal resolution, to examine the time course and neural mechanisms by which attentional fluctuations affect audiovisual speech integration. For unimodal auditory targets, N1 amplitudes at frontal and central sites were significantly larger on-beat than in silence, as were N1 amplitudes at midline and right-hemisphere electrodes; at Pz and P3, P2 amplitudes were larger on-beat. For unimodal visual targets, N1 amplitudes at anterior and occipital sites, and P2 amplitudes at frontal electrodes, were larger on-beat. Thus an N1 attention effect appeared for both visual and auditory targets, with larger amplitudes on-beat than in silence, indicating that participants could better direct attention to targets on the beat. Following the analysis approach of previous ERP studies of audiovisual integration, mean amplitudes were analyzed in 20 ms windows across 0-500 ms: the sum of the ERPs evoked by unimodal auditory and visual targets (A+V) was compared with the ERP evoked by audiovisual targets (AV). Only in the on-beat condition did a superadditive effect (AV greater than A+V) appear, over the right anterior scalp at 121-140 ms and the anterior central scalp at 141-160 ms. This indicates that, under the present conditions, audiovisual integration occurred only when the target fell on an attentional peak, and that the influence of the attentional peak was not sustained but arose 121-160 ms after target onset.

In sum, attention shapes audiovisual integration: integration occurred only when attention was directed to the visual and auditory channels simultaneously, for both simple stimuli and audiovisual speech. Second, under divided attention, visual and auditory attentional load affected audiovisual speech integration asymmetrically: integration was abolished under visual load but preserved under auditory load. Finally, fluctuations of attention under divided attention also mattered: integration occurred only when the target fell on an attentional peak, and this influence was not sustained but emerged 121-160 ms after target onset.
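The "race model" (competition model) analysis referred to above is, in the multisensory literature, typically a test of Miller's race-model inequality on the cumulative RT distributions: under a parallel race with no integration, F_AV(t) ≤ F_A(t) + F_V(t) for every t, so any violation indicates coactivation. The sketch below illustrates the standard computation; the reaction-time data and distribution parameters are simulated for illustration, not the dissertation's data.

```python
import numpy as np

def ecdf(rts, t_grid):
    """Empirical cumulative distribution of reaction times on a time grid."""
    rts = np.sort(np.asarray(rts, dtype=float))
    return np.searchsorted(rts, t_grid, side="right") / rts.size

def race_model_violation(rt_a, rt_v, rt_av, t_grid):
    """Miller's race-model inequality: with no integration,
    F_AV(t) <= F_A(t) + F_V(t) for all t.  Positive return values
    mark time points where the redundant-target CDF exceeds the
    bound, i.e. evidence of audiovisual coactivation."""
    bound = np.minimum(ecdf(rt_a, t_grid) + ecdf(rt_v, t_grid), 1.0)
    return ecdf(rt_av, t_grid) - bound

# Simulated RTs (ms): redundant targets markedly faster than unimodal.
rng = np.random.default_rng(0)
rt_a = rng.normal(420, 50, 200)    # auditory-only
rt_v = rng.normal(430, 50, 200)    # visual-only
rt_av = rng.normal(340, 40, 200)   # audiovisual

t_grid = np.arange(200, 600, 10)
viol = race_model_violation(rt_a, rt_v, rt_av, t_grid)
print("max violation:", viol.max())  # > 0 suggests integration
```

In practice the violation is tested at several quantiles of the RT distribution across participants; this sketch only computes the per-grid-point difference.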
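The window-by-window ERP comparison described in the abstract (mean amplitude in 20 ms bins over 0-500 ms, AV versus the summed unimodal responses A+V) can be sketched as below. The waveforms and the superadditive window are simulated for illustration only; real analyses would use averaged EEG epochs per electrode.

```python
import numpy as np

def bin_means(erp, times, win=20):
    """Mean amplitude in consecutive `win`-ms bins across 0-500 ms."""
    edges = np.arange(0, 500 + win, win)
    means = np.array([erp[(times >= lo) & (times < hi)].mean()
                      for lo, hi in zip(edges[:-1], edges[1:])])
    return means, edges

# Hypothetical averaged waveforms on a 1 ms grid (negative-going N1-like):
times = np.arange(0, 500)                             # ms after target onset
erp_a = -2.0 * np.exp(-((times - 150) / 40) ** 2)     # auditory-only ERP
erp_v = -1.5 * np.exp(-((times - 160) / 40) ** 2)     # visual-only ERP

# Simulate a superadditive AV response confined to 120-160 ms,
# mirroring the time window reported in the abstract:
erp_av = (erp_a + erp_v).copy()
erp_av[(times >= 120) & (times < 160)] *= 1.4

av_bins, edges = bin_means(erp_av, times)
sum_bins, _ = bin_means(erp_a + erp_v, times)

# Superadditivity: |AV| exceeds |A + V| within a bin (crude threshold).
for lo, hi, av, s in zip(edges[:-1], edges[1:], av_bins, sum_bins):
    if abs(av) > abs(s) + 0.1:
        print(f"{lo}-{hi} ms: AV exceeds A+V")   # flags the 120-160 ms bins
```

Statistical testing (e.g. point-wise t-tests per electrode cluster with correction for multiple comparisons) would follow the bin comparison; it is omitted here.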
[Degree-granting institution]: Tianjin Normal University
[Degree level]: Doctoral
[Year conferred]: 2016
[CLC number]: B842.3
Article no.: 2134002
Dissertation record: 顧吉有 (Gu Jiyou). The Effect of Attention on Audiovisual Integration Processing [D]. Tianjin Normal University, 2016.
Link: http://www.sikaile.net/shekelunwen/xinlixingwei/2134002.html