
Research on Automatic Video Object Segmentation Algorithms in Complex Scenes

Published: 2018-05-03 18:10

  Topic: video object segmentation + optical flow; Source: Master's thesis, University of Science and Technology of China, 2017


【Abstract】: With the steady upgrading of Internet infrastructure and the rapid spread of mobile devices, shooting and watching video has become increasingly convenient. Because of the richness and vividness of the information it carries, video is now one of the most important carriers of information in daily life. The ever-growing volume of video data creates a corresponding need to identify, retrieve, and understand video content, so reducing the difficulty of video understanding and distilling the key information in a video has become an important research topic in video processing. Because video object segmentation aims to extract salient foreground objects, it has wide application in video summarization, video retrieval, action analysis, and video semantic understanding. Most current video object segmentation algorithms are bottom-up: they extract and analyze low-level features such as color, edges, and motion to segment salient foreground objects. Traditional algorithms based on manual annotation can no longer meet the demands of today's large-scale video data, and the scenes and shooting conditions found in massive video collections are so complex and diverse that current automatic algorithms fail to remain robust in some complex scenes. To address these problems, this thesis proposes two automatic video object segmentation algorithms suited to different scenes. The main contributions are as follows:

1. Existing graph-cut-based algorithms are vulnerable to background noise and pixel mismatch, and their robustness degrades in some complex scenes. This thesis proposes an automatic video object segmentation algorithm based on the optical flow field and graph cuts that addresses these problems. Before segmenting the foreground object, the algorithm analyzes the global motion of the video to obtain prior knowledge of the foreground object, reducing the interference of background noise. To handle pixel mismatch, it introduces a dynamic position-model optimization mechanism that uses the foreground object's position model to strengthen the temporal continuity of the segmentation. Experiments show that the algorithm produces more accurate and robust segmentations in scenes with fast camera motion or irregular foreground motion.

2. In some complex scenes, existing algorithms based on object proposals often produce partially missing segmentations; the root causes are over-fragmented proposals and inaccurate temporal correspondences between proposals. This thesis proposes an improved proposal-based algorithm that temporally extends and merges the raw proposals, which both alleviates fragmentation and improves the temporal continuity of proposals across adjacent frames. To further improve the accuracy of the model's temporal correspondences, the algorithm incorporates additional image features into the model's edge weights. Experiments on several benchmark datasets show that, compared with existing algorithms of the same kind, the proposed algorithm is more resistant to background noise and yields more complete segmentations in scenes with complex backgrounds or water reflections.
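The first contribution's global-motion analysis can be illustrated with a minimal numpy sketch. This is not the thesis's implementation: it assumes a dense optical flow field is already available (e.g. from a Farnebäck-style estimator), approximates the dominant camera motion by the per-channel median of the flow, and treats pixels with unusually large residual motion as a foreground prior. The function name `motion_prior` and the threshold rule are illustrative choices.

```python
import numpy as np

def motion_prior(flow, k=2.0):
    """Rough foreground prior from a dense flow field of shape (H, W, 2).

    Subtracting the per-channel median removes the dominant (camera)
    motion; pixels whose residual flow magnitude is unusually large
    are flagged as likely foreground.
    """
    global_motion = np.median(flow.reshape(-1, 2), axis=0)
    residual_mag = np.linalg.norm(flow - global_motion, axis=2)
    threshold = residual_mag.mean() + k * residual_mag.std()
    return residual_mag > threshold

# Synthetic example: a uniform camera pan with one faster-moving patch.
flow = np.tile(np.array([1.0, 0.0]), (100, 100, 1))
flow[40:50, 40:50] = [5.0, 0.0]   # "foreground" patch
mask = motion_prior(flow)          # True exactly on the fast patch here
```

In a fuller pipeline, such a prior would serve as the data term of the graph-cut energy rather than as a final segmentation.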
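The second contribution's temporal extension of proposals rests on linking candidate masks across adjacent frames. The sketch below shows only that linking idea under simplifying assumptions (greedy matching by mask IoU with a fixed threshold `tau`); the thesis's actual model, its fragment-merging step, and its feature-based edge weights are not reproduced, and all names are illustrative.

```python
import numpy as np

def mask_iou(a, b):
    """Intersection-over-union of two boolean masks."""
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / union if union else 0.0

def link_proposals(frames, tau=0.5):
    """Greedily chain per-frame proposal masks into temporal tracks.

    frames: list of lists of boolean masks (one inner list per frame).
    A track is extended by the best-overlapping proposal in the next
    frame, provided its IoU with the track's last mask is at least tau.
    """
    tracks = [[p] for p in frames[0]]
    for proposals in frames[1:]:
        for track in tracks:
            scored = [(mask_iou(track[-1], p), p) for p in proposals]
            if scored:
                best_iou, best = max(scored, key=lambda s: s[0])
                if best_iou >= tau:
                    track.append(best)
    return tracks

# Two frames: a square that shifts right by 2 px, plus a distractor.
f0 = np.zeros((20, 20), bool); f0[0:10, 0:10] = True
f1 = np.zeros((20, 20), bool); f1[0:10, 2:12] = True
noise = np.zeros((20, 20), bool); noise[12:20, 12:20] = True
tracks = link_proposals([[f0], [f1, noise]])
```

Here the shifted square (IoU ≈ 0.67 with its predecessor) extends the track while the distractor is ignored; chains built this way give proposals the cross-frame continuity the abstract describes.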
【Degree-granting institution】: University of Science and Technology of China
【Degree level】: Master's
【Year conferred】: 2017
【CLC number】: TP391.41



Article No.: 1839541


Link: http://www.sikaile.net/shoufeilunwen/xixikjs/1839541.html


