

Cross-Entropy-Based Random Weight Neural Networks

Published: 2018-01-07 08:24

Keywords: cross-entropy-based random weight neural networks. Source: Hebei University, 2017 master's thesis. Document type: degree thesis.


More related articles: extreme learning machine; overfitting; mean-squared-error loss function; cross-entropy


【Abstract】: In recent years, with the continuous progress of information technology and computer applications, society as a whole has entered the era of big data. How to use advanced data-analysis techniques to mine the needed information from massive data has therefore become a critical problem, and classification, as one of the main problems in data analysis, continues to attract attention. Guang-Bin Huang proposed a neural network with a simple structure, the extreme learning machine (ELM), which obtains its output weights from a matrix generalized inverse under the minimum-mean-squared-error principle; it has the advantages of short training time and high test accuracy. However, because ELM minimizes only the empirical error on the training data, it is prone to overfitting. The main work of this thesis is to replace the mean-squared-error loss function with the cross-entropy loss function and, in situations where the network overfits, to compare the test accuracy obtained under the two losses as a measure of their generalization ability. Specifically, overfitting is a common phenomenon in machine learning: the classifier classifies 100% of the training samples correctly but performs poorly on other data, because the learned function is excessively fine-grained and complex. In ELM, the minimum 2-norm optimal solution is found by computing the generalized inverse of the hidden-layer output matrix; but when the hidden-layer output matrix has far more columns than rows, i.e., when the number of hidden-layer nodes is very large, overfitting can occur. To address this problem, this thesis proposes cross-entropy-based random weight neural networks (CE-RWNNs), which replace the principle of mean-squared-error minimization with that of cross-entropy minimization. Experimental results show that the proposed CE-RWNNs can, to some extent, overcome the overfitting that arises in an ELM with many hidden-layer nodes.
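To make the contrast between the two training principles concrete, the following NumPy sketch fits the same fixed random hidden layer two ways: once with the ELM-style minimum-norm least-squares solution (Moore-Penrose pseudo-inverse, i.e., MSE minimization) and once with output weights fitted by gradient descent on a cross-entropy loss, in the spirit of CE-RWNN. The toy data, network sizes, and learning rate are illustrative assumptions, not the thesis's experimental setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary classification data (hypothetical stand-in for real datasets).
n, d, L = 200, 5, 50            # samples, input dimension, hidden nodes
X = rng.normal(size=(n, d))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)

# Random input weights and biases -- fixed, never trained (the RWNN/ELM idea).
W = rng.normal(size=(d, L))
b = rng.normal(size=L)
H = np.tanh(X @ W + b)          # hidden-layer output matrix, shape (n, L)

# Classical ELM: minimum-norm least-squares output weights via the
# Moore-Penrose generalized inverse (minimizes mean squared error).
beta_mse = np.linalg.pinv(H) @ y

# Cross-entropy alternative: same random hidden layer, but the output
# weights are fitted by gradient descent on the mean cross-entropy loss
# of a sigmoid output unit.
def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

beta_ce = np.zeros(L)
lr = 0.1
for _ in range(2000):
    p = sigmoid(H @ beta_ce)
    grad = H.T @ (p - y) / n    # gradient of the mean cross-entropy
    beta_ce -= lr * grad

# Training accuracy under each principle (thresholding at 0.5).
acc_mse = np.mean((H @ beta_mse > 0.5) == y)
acc_ce = np.mean((sigmoid(H @ beta_ce) > 0.5) == y)
print(acc_mse, acc_ce)
```

Both variants leave the input-to-hidden weights random, so training cost stays low; only the output-layer objective differs, which is exactly the axis along which the thesis compares generalization.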
【Degree-granting institution】: Hebei University
【Degree level】: Master's
【Year conferred】: 2017
【CLC number】: TP18


Article ID: 1391759



Article link: http://www.sikaile.net/kejilunwen/zidonghuakongzhilunwen/1391759.html


