

Research on Key Issues of a Covert-Attention Audiovisual Dual-Channel Brain-Controlled Character Input System

Published: 2018-05-26 17:56

  Topic: covert attention + audiovisual combination; Source: Tianjin University, doctoral dissertation, 2015


【Abstract】: Brain-controlled character input systems (BCI spellers) can help severely disabled patients communicate autonomously through a virtual keyboard. However, the traditional overt-attention input paradigm depends heavily on the user's ability to shift gaze and is therefore unsuitable for severely disabled patients with limited eye-movement control. To address this problem, researchers have increasingly turned to new brain-computer interaction methods based on covert attention, with particularly high expectations placed on combined visual-auditory (dual-channel) stimulation, an area in which further research is urgently needed. Against this background, this thesis first designs three covert-attention stimulation paradigms that are independent of eye-movement control: a visual paradigm, an auditory paradigm, and a combined audiovisual paradigm. By analyzing the component characteristics of the event-related potentials (ERPs) evoked by the different paradigms, the spatiotemporal distribution of classification accuracy, and source-localization results, the thesis investigates the mechanisms of visual and auditory stimulation under different control conditions and the brain's response characteristics to the different paradigms; analysis of online character input speed and experimental task load demonstrates that the combined audiovisual paradigm is an efficient, low-workload character input paradigm. On this basis, covert-attention brain-controlled character input systems were designed and implemented under combined audiovisual paradigms with five different visual-auditory onset asynchronies. Analysis of the evoked ERPs shows that the early components of combined audiovisual stimulation are significantly related to both the visual and the auditory stimuli, whereas ERP latency is mainly determined by the onset time of the visual stimulus. Combined audiovisual paradigms whose onset asynchrony lies within a certain range (less than 100 ms) all achieve high classification accuracy, with no significant differences among them. This demonstrates the robustness of the classification algorithm used in the thesis, and also shows that high classification accuracy can be obtained as long as the visual-auditory onset asynchrony stays within this range and an appropriate classifier is used, without imposing stringent requirements on the hardware system or on stimulus-registration timing. Finally, the thesis proposes a novel covert-attention audiovisual dual-channel paradigm, the audiovisual parallel stimulation paradigm, in which visual and auditory stimuli are used in parallel to select target tasks. The feasibility of parallel audiovisual character input is analyzed from the ERP characteristics and the classification results: the parallel paradigm evokes stable ERPs with strong separability, and more than 70% of the subjects achieved a character input accuracy above 80%. In summary, this thesis designs and implements multiple covert-attention audiovisual stimulation paradigms, clarifies the respective roles of visual and auditory stimuli in combined audiovisual stimulation, and demonstrates the advantages of the combined audiovisual paradigm and the feasibility of the parallel audiovisual paradigm. The results are expected to find further application in brain-computer interface rehabilitation systems for severely paralyzed patients.
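The dissertation abstract does not include code; purely as an illustration of the kind of single-epoch target/non-target ERP classification it describes, the following minimal Python sketch feeds flattened channel-by-time epoch features to a shrinkage linear discriminant, a common choice for ERP-based spellers. The array shapes, the synthetic data, and all names here are assumptions for demonstration and are not taken from the thesis.

# Illustrative sketch only: generic target vs. non-target ERP classification,
# not the thesis's actual pipeline. Data are synthetic.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_epochs, n_channels, n_samples = 600, 8, 100      # assumed epoch dimensions
y = rng.integers(0, 2, size=n_epochs)              # 1 = attended (target) stimulus, 0 = non-target

# Synthetic EEG epochs: noise plus a small late positive deflection on target
# epochs, standing in for the ERP evoked by the attended stimulus.
X = rng.normal(0.0, 1.0, size=(n_epochs, n_channels, n_samples))
erp = np.exp(-0.5 * ((np.arange(n_samples) - 60) / 8.0) ** 2)
X[y == 1] += 0.5 * erp

# Flatten channel x time features and classify with a shrinkage LDA.
X_feat = X.reshape(n_epochs, -1)
clf = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto")
scores = cross_val_score(clf, X_feat, y, cv=5)
print(f"Cross-validated single-epoch accuracy: {scores.mean():.2f}")

In a real speller, such single-epoch scores would be accumulated over repeated stimulus presentations to decide which virtual-keyboard symbol the user is covertly attending to.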
【Degree-granting institution】: Tianjin University
【Degree level】: Doctoral
【Year awarded】: 2015
【Classification number】: R496;TH789




Document ID: 1938289


Link: http://www.sikaile.net/huliyixuelunwen/1938289.html


