Video-Based Human Action Recognition
Topic: motion-salient regions Entry point: low-rank matrix decomposition Source: North University of China, 2017 master's thesis Thesis type: degree thesis
【Abstract】: With the development of computers and the Internet, human action recognition based on computer vision has come to play an important role in practical applications in production and daily life, and has advanced considerably. Current research at home and abroad focuses on innovating and improving feature extraction and classifier design. In feature extraction in particular, methods use global features (such as motion energy images and motion history images) and local features (such as Harris-3D, Hessian, and HOG/HOF). Recently proposed trajectory-based methods extract feature points in the optical flow field, then track and describe those points to recognize human actions. These methods recognize actions well, but background and motion-related regions are difficult to separate, especially when the camera itself moves, so the resulting motion features and trajectory representations lose effectiveness. Building on this, the thesis makes the following contributions: (1) In salient-trajectory feature extraction, to preserve the temporal correlation of moving regions, the video is first divided along the time axis into non-overlapping segments; to preserve spatial correlation, it is then divided into non-overlapping blocks, and a motion matrix is constructed from optical flow information. In videos with relative camera motion, motion-related regions are typically irregular, so low-rank matrix decomposition splits the motion matrix into a low-rank part and a sparse residual part, from which motion-salient regions are obtained. (2) In complex-scene videos, the motion-related regions detected by the above method may include salient motion that belongs to the background. To remove background pixels, edge detection is combined with background differencing to obtain a complete moving-object template, which is then combined with the motion-salient regions to obtain the salient regions of the moving object. (3) In tracking, a median smoothing filter removes extreme values; points are then sampled in the optical flow field and kept only if they lie in a salient region, yielding salient feature points; finally, iterative tracking is completed in a manner similar to existing dense-trajectory tracking methods.
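The low-rank decomposition in step (1) can be illustrated with a minimal numpy sketch of robust PCA via the inexact augmented-Lagrangian scheme. This is not the thesis's actual implementation; the function names and parameter defaults are illustrative assumptions.

```python
import numpy as np

def soft_threshold(X, tau):
    # Elementwise shrinkage operator used for the sparse update.
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def rpca_decompose(M, lam=None, tol=1e-7, max_iter=200):
    """Split a motion matrix M into a low-rank part L (smooth camera /
    background motion) and a sparse residual S (salient motion) via an
    inexact augmented-Lagrangian robust-PCA iteration."""
    m, n = M.shape
    if lam is None:
        lam = 1.0 / np.sqrt(max(m, n))            # standard RPCA weight
    Y = M / max(np.linalg.norm(M, 2), np.abs(M).max() / lam)  # dual init
    mu = 1.25 / np.linalg.norm(M, 2)
    S = np.zeros_like(M)
    for _ in range(max_iter):
        # Low-rank update: singular-value thresholding of M - S + Y/mu.
        U, sig, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
        L = (U * np.maximum(sig - 1.0 / mu, 0.0)) @ Vt
        # Sparse update: elementwise shrinkage of the residual.
        S = soft_threshold(M - L + Y / mu, lam / mu)
        R = M - L - S
        Y += mu * R
        mu = min(mu * 1.5, 1e7)
        if np.linalg.norm(R) <= tol * np.linalg.norm(M):
            break
    return L, S
```

In the abstract's setting, M would collect per-block optical-flow measurements; blocks with large entries in S then mark the motion-salient regions.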
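Step (2), combining edge detection with background differencing to recover a complete moving-object template, can be sketched as follows. This is a simplified numpy stand-in (Sobel edges, 3x3 dilation), not the thesis's implementation; thresholds and names are illustrative.

```python
import numpy as np

def sobel_magnitude(img):
    # Sobel gradient magnitude: a cheap stand-in for a full edge detector.
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    H, W = img.shape
    p = np.pad(img.astype(float), 1, mode="edge")
    gx = np.zeros((H, W)); gy = np.zeros((H, W))
    for i in range(3):
        for j in range(3):
            win = p[i:i + H, j:j + W]
            gx += kx[i, j] * win
            gy += kx[j, i] * win      # ky is the transpose of kx
    return np.hypot(gx, gy)

def dilate(mask):
    # One step of 3x3 binary dilation.
    H, W = mask.shape
    p = np.pad(mask, 1)
    out = np.zeros_like(mask)
    for i in range(3):
        for j in range(3):
            out |= p[i:i + H, j:j + W]
    return out

def moving_object_template(frame, background, diff_thresh=20.0, edge_thresh=50.0):
    """Combine background differencing with edge detection: edge pixels
    adjacent to the differenced foreground fill in object boundaries that
    differencing alone can miss, giving a more complete object template."""
    fg = np.abs(frame.astype(float) - background.astype(float)) > diff_thresh
    edges = sobel_magnitude(frame) > edge_thresh
    return fg | (edges & dilate(fg))
```

Intersecting this template with the motion-salient regions from the low-rank step would then yield the salient regions of the moving object, as the abstract describes.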
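The tracking step (3), median filtering of the flow field followed by sampling only inside salient regions, can be sketched as below. The helper names and the 3x3 window are assumptions; a real dense-trajectory tracker would iterate this over many frames with interpolated flow.

```python
import numpy as np

def median_filter3(field):
    # 3x3 median filter: removes extreme (outlier) flow values.
    H, W = field.shape
    p = np.pad(field, 1, mode="edge")
    windows = np.stack([p[i:i + H, j:j + W]
                        for i in range(3) for j in range(3)])
    return np.median(windows, axis=0)

def sample_salient_points(flow_u, flow_v, saliency_mask, stride=5):
    """Sample grid points, keep those inside the salient region, and move
    each by its median-filtered flow vector -- one iteration of the
    dense-trajectory-style tracking step p_{t+1} = p_t + flow(p_t)."""
    u = median_filter3(flow_u)
    v = median_filter3(flow_v)
    points = []
    H, W = saliency_mask.shape
    for y in range(0, H, stride):
        for x in range(0, W, stride):
            if saliency_mask[y, x]:
                points.append((x + u[y, x], y + v[y, x]))
    return points
```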
【Degree-granting institution】: North University of China
【Degree level】: Master
【Year conferred】: 2017
【CLC number】: TP391.41