Kinect-Based Traversable Region Recognition
[Abstract]: Effective and reliable identification of traversable regions is of great significance for mobile robot navigation. However, current research focuses mainly on the limited space that sensors can detect directly, and open environments remain a challenge. Traditional single sensors have their own limitations, such as the computational complexity and "short-sightedness" of stereo vision and the high cost of lidar, while multi-sensor combinations place demanding requirements on data synchronization between sensors. This paper proposes a self-supervised traversable region recognition method based on the Kinect sensor. In the close-range region that the Kinect can detect reliably, the ground and obstacles are identified and labeled according to a set of rules. These two kinds of recognition labels are projected into the corresponding RGB image space, combined visual and label features are extracted there, a classifier is trained, and the trained classifier is then used to classify the distant image space, yielding the traversability of the whole image space.

The thesis consists of two main parts. The first part addresses close-range obstacle recognition with the Kinect sensor: it introduces the Kinect hardware and software and the principle of depth-information acquisition, and, after calibration and registration with the RGB image, uses two-dimensional images combined with three-dimensional spatial coordinates to identify the ground and obstacles. The second part builds on this obstacle recognition: the close-range category labels are projected into the corresponding image space, which is then divided into blocks with a sliding window; combined visual features of color, texture, and geometry are extracted from each image block and paired with the block's category label to form the training data for the classifier used in this thesis, a Fuzzy ARTMAP network; finally, the distant image space is classified with the trained classifier to obtain the recognition result.

The effectiveness of the close-range Kinect detection algorithm and of the image-space detection of distant traversable regions is verified by experiments in both indoor and outdoor environments.
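The pipeline described in the abstract (self-supervised near-field labeling from registered depth, projection of labels into the RGB image, sliding-window block features, classifier training, and far-field classification) can be illustrated with a minimal sketch. Everything below is an assumption made for illustration only: the coordinate convention, the height and range thresholds, the feature set, and the use of scikit-learn's k-nearest-neighbors classifier as a stand-in for the thesis's Fuzzy ARTMAP, which is not available in common Python libraries.

"""Minimal sketch of the self-supervised labeling pipeline; all names,
thresholds, and the substitute classifier are illustrative assumptions,
not the thesis's actual implementation."""
import numpy as np
from sklearn.neighbors import KNeighborsClassifier  # stand-in for Fuzzy ARTMAP

GROUND, OBSTACLE, UNKNOWN = 0, 1, -1

def label_near_field(points, height_thresh=0.05, max_range=3.5):
    """Label registered 3-D points (H, W, 3) within the Kinect's reliable range.

    Assumes points[..., 1] is height above the ground plane (meters) and
    points[..., 2] is forward range. Points beyond max_range stay unlabeled
    so the image-space classifier can handle them later.
    """
    labels = np.full(points.shape[:2], UNKNOWN, dtype=np.int8)
    near = points[..., 2] < max_range
    labels[near & (np.abs(points[..., 1]) < height_thresh)] = GROUND
    labels[near & (np.abs(points[..., 1]) >= height_thresh)] = OBSTACLE
    return labels

def block_features(rgb_block):
    """Simple combined color/texture features for one sliding-window block."""
    # Color: per-channel mean and standard deviation.
    color = np.concatenate([rgb_block.mean(axis=(0, 1)),
                            rgb_block.std(axis=(0, 1))])
    # Texture: variance of horizontal/vertical intensity gradients.
    gray = rgb_block.mean(axis=2)
    tex = np.array([np.diff(gray, axis=0).var(), np.diff(gray, axis=1).var()])
    return np.concatenate([color, tex])

def sliding_blocks(h, w, size=32, stride=16):
    """Yield (row, col) origins of sliding-window blocks over an h x w image."""
    for r in range(0, h - size + 1, stride):
        for c in range(0, w - size + 1, stride):
            yield r, c

def train_and_classify(rgb, labels, size=32, stride=16):
    """Train on labeled (near-field) blocks, then classify the unlabeled ones."""
    X_train, y_train, X_query, query_pos = [], [], [], []
    for r, c in sliding_blocks(*rgb.shape[:2], size, stride):
        feat = block_features(rgb[r:r + size, c:c + size])
        block_labels = labels[r:r + size, c:c + size]
        known = block_labels[block_labels != UNKNOWN]
        if known.size > 0.8 * block_labels.size:   # confidently labeled block
            X_train.append(feat)
            y_train.append(np.bincount(known).argmax())  # majority label
        else:                                      # far-field / unlabeled block
            X_query.append(feat)
            query_pos.append((r, c))
    clf = KNeighborsClassifier(n_neighbors=5).fit(X_train, y_train)
    return clf, list(zip(query_pos, clf.predict(X_query)))

On a registered Kinect frame this would be used as clf, far_field = train_and_classify(rgb_image, label_near_field(point_cloud)); the 80 % coverage threshold simply gates which blocks are trusted as self-supervised training data, and the geometric features mentioned in the abstract are omitted here for brevity.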
[Degree-Granting Institution]: Hangzhou Dianzi University
[Degree Level]: Master's
[Year Conferred]: 2017
[CLC Number]: TP391.41; TP242