

Applications of Greedy Algorithms in Sparse Learning

Published: 2018-01-28 11:43

Keywords: sparse learning; learning rate; greedy algorithm; conjugate gradient pursuit; partial complex vector gradient operator. Source: Hubei University, 2016 doctoral dissertation. Thesis type: degree thesis.


【Abstract】: Sparse representation is the most representative family of linear representation methods, with wide applications in signal processing, image processing, machine learning, computer vision, and other fields. Viewed from different angles, these methods can be grouped into different types. For example, according to the norm used in the sparsity constraint, they fall into five classes: 1) l0-norm minimization; 2) lp-norm minimization (0 < p < 1); 3) l1-norm minimization; 4) l2,1-norm minimization; 5) l2-norm minimization. According to algorithmic feasibility, they can also be divided into: 1) greedy-strategy approximation; 2) constrained optimization; 3) optimization strategies based on proximal algorithms; 4) sparse representation based on homotopy algorithms. Sparse representation under the l0 norm yields sparse solutions, but solving the corresponding optimization problem directly is NP-hard, and greedy approximation strategies were proposed precisely to address this difficulty. Compared with other regularization algorithms, greedy algorithms are simple to compute, easy to implement, and enjoy a large computational advantage. A greedy algorithm approximates the global optimum by searching for a local optimum at each iteration; the price paid for this computational convenience is uncertainty about the relationship between the estimator and the estimation target. Yet even though greedy algorithms do not suit every problem, they still produce globally optimal solutions for many special problems, and proving an algorithm's effectiveness in such cases is meaningful work. Building on the design ideas of greedy algorithms, this thesis proposes different greedy-type algorithms for two classes of learning problems and analyzes their statistical performance.

(1) Analysis of greedy algorithms for fixed-design Gaussian regression. For the fixed-design Gaussian regression setting, this thesis proposes a penalized empirical relaxed greedy algorithm and analyzes its effectiveness by establishing oracle inequalities. The main steps are: first, output a sequence of estimators via the empirical relaxed greedy algorithm; then, by penalizing the number of iterations and the capacity of the dictionary, select the optimal estimator. P. Massart established oracle-type inequalities for the lasso algorithm and obtained a rather general model-selection lemma. This thesis slightly generalizes that lemma and applies it in three settings (finite dictionary, countably infinite dictionary, uncountably infinite dictionary), obtaining oracle-type inequalities that characterize the algorithm's learning performance. The results show that the error bound is affected by the approximation error, the penalty term, and the noise level. In particular, when the target function belongs to the convex hull of the dictionary, the algorithm's convergence rate is O((?)).

(2) Analysis of greedy algorithms for sparsity-constrained optimization. For sparsity-constrained optimization problems whose objective is a general (non-square) loss function, this thesis proposes the sparse conjugate gradient method, the conjugate gradient pursuit method, and their variants, and analyzes their effectiveness. Yuan and Lu (2009) improved the traditional PRP formula and proposed a new conjugate direction; on this basis they established global convergence of the conjugate gradient method for unconstrained optimization, derived its convergence rate, and reported good experimental results. Inspired by this, for a broad class of sparsity-constrained optimization problems, this thesis builds three greedy-type algorithms that learn along conjugate gradient directions. The theoretical analysis shows that when the objective function satisfies restricted strong convexity and restricted smoothness conditions, all three algorithms converge at a linear rate.

(3) Analysis of greedy algorithms for complex-valued sparse signal reconstruction. This part considers the recovery of complex-valued sparse signals. Because the square loss function defined on complex variables is nowhere differentiable away from the origin, algorithms that learn along the gradient direction cannot be applied directly. D. H. Brandwood (1983) proposed the partial complex vector gradient operator, analogous to the gradient operator, and observed that the partial complex vector gradient direction of the objective function corresponds to its direction of greatest change. By learning along the partial complex gradient direction, this thesis proposes the sparse complex gradient algorithm, the complex gradient pursuit algorithm, and their variants. Analysis shows that when the observation matrix satisfies the D-RIP condition, the square loss function possesses properties very similar to restricted strong convexity and restricted strong smoothness; based on these properties, we obtain linear learning rates for the algorithms.
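As a minimal illustration of the greedy approximation strategy the abstract describes, here is a plain matching-pursuit sketch over a dictionary. This is my own simplified stand-in, not the thesis's penalized empirical relaxed greedy algorithm (which adds a relaxation step and a complexity penalty omitted here): each iteration greedily fits the current residual with the single best dictionary atom.

```python
import numpy as np

def matching_pursuit(y, D, n_iter=200):
    """Greedy approximation of y over the unit-norm columns of D.

    Each iteration picks the atom most correlated with the current
    residual and adds its best scalar multiple to the estimate.
    """
    f = np.zeros_like(y, dtype=float)
    for _ in range(n_iter):
        r = y - f                            # current residual
        j = int(np.argmax(np.abs(D.T @ r)))  # most correlated atom
        f = f + (D[:, j] @ r) * D[:, j]      # greedy one-dimensional update
    return f

# Hypothetical demo: y is a 3-sparse combination of 50 random atoms.
rng = np.random.default_rng(0)
D = rng.standard_normal((20, 50))
D /= np.linalg.norm(D, axis=0)               # normalize atoms to unit norm
y = 2.0 * D[:, 3] - 1.5 * D[:, 17] + D[:, 42]
f = matching_pursuit(y, D)
rel = np.linalg.norm(y - f) / np.linalg.norm(y)  # small relative residual
```

The residual norm is non-increasing by construction, which is the computational convenience the abstract mentions; the statistical analysis of how far such greedy iterates sit from the estimation target is exactly what the thesis's oracle inequalities address.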
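For the complex-valued setting, Brandwood's partial complex vector gradient of the square loss f(x) = ½||Ax − y||² points along Aᴴ(Ax − y), which sidesteps the non-differentiability of the loss as a function of the complex variable. The sketch below is my own illustrative complex-valued hard-thresholding loop along that direction, not the thesis's sparse complex gradient or complex gradient pursuit algorithms:

```python
import numpy as np

def complex_iht(A, y, s, n_iter=300):
    """Sparse recovery of a complex signal by descending along the
    partial complex (Wirtinger-type) gradient A^H (Ax - y) of the
    square loss, with a top-s magnitude hard threshold each step."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1], dtype=complex)
    for _ in range(n_iter):
        x = x + step * (A.conj().T @ (y - A @ x))  # negative complex gradient
        small = np.argsort(np.abs(x))[:-s]         # indices outside the top s
        x[small] = 0.0                             # hard-threshold projection
    return x

# Hypothetical demo: 3-sparse complex signal, complex Gaussian measurements.
rng = np.random.default_rng(2)
A = (rng.standard_normal((60, 100)) + 1j * rng.standard_normal((60, 100))) / np.sqrt(2)
x_true = np.zeros(100, dtype=complex)
x_true[[4, 33, 90]] = [1.0 + 1.0j, -2.0, 0.5j]
y = A @ x_true
x_hat = complex_iht(A, y, s=3)
```

When the observation matrix satisfies a D-RIP-type condition, the complex square loss behaves like a restricted strongly convex and smooth function, which is the property the thesis uses to obtain a linear learning rate for its complex pursuit algorithms.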

【Degree-granting institution】: Hubei University
【Degree level】: Doctoral
【Year conferred】: 2016
【CLC number】: TP301.6

【Related literature】

Related journal articles (top 10)

1 Chen Hong; Cai Jia; Zhang Yucheng. Consistency of greedy algorithms for multi-class classification [J]. Journal of Hubei University (Natural Science Edition), 2005, No. 4.

2 Yang Jie. A greedy-algorithm-based method for selecting slewing-angle schemes for satellite regional observation [J]. Journal of Guangxi Academy of Sciences, 2006, No. 2.

3 Shen Shikai; Wu Shaobing; Shen Haoru; Wang Fuyan; Guan Yanqing. A greedy algorithm for computing the shortest common superstring [J]. Computer Engineering and Design, 2007, No. 8.

4 Wang Jie; Gang Yijin; Li Fengguang; Wu Weiwei. Research on an improved greedy algorithm for burst-event detection in blogs [J]. Computer Engineering and Applications, 2008, No. 34.

5 Zhou Liuyang; Gao Heng; Liang Zhu. Practical applications of the greedy algorithm [J]. Silicon Valley, 2009, No. 2.


Article ID: 1470665



Link to this article: http://www.sikaile.net/shoufeilunwen/xxkjbs/1470665.html


