Research on Intrusion Detection Technologies Based on Neural Networks
Published: 2018-06-29 05:08
Topics: intrusion detection + neural networks; Source: Shandong University, 2016 doctoral dissertation
[Abstract]: With the ever-growing scale of the Internet, emerging network services increasingly shape people's daily lives, and network security has become a major public concern. Facing ever more sophisticated attacks, an intrusion detection system evaluates the security of computer systems and networks by analyzing, in real time, event information collected from systems, networks, and users. Intrusion detection in traditional environments has long been a research hotspot, and improving detection performance is of central importance. Meanwhile, cloud computing, as a new computing paradigm, has changed the traditional computer architecture, but its virtualization, distribution, and very large scale pose serious security challenges to computer systems, networks, and users, so studying intrusion detection for cloud environments is also of significant practical value. Neural networks, with their self-learning, associative memory, and high-speed parallel computation, have achieved remarkable results in many application domains, and their application to intrusion detection has drawn broad attention from researchers at home and abroad. This dissertation applies neural network theory to intrusion detection in both traditional and cloud environments. It first addresses the heavy central-node load and single-point-of-failure risk of traditional distributed intrusion detection systems by studying a fully distributed collaborative intrusion detection system that supports high-speed parallel computation, is easy to implement in hardware, and achieves high detection accuracy (Chapter 2). To remedy the general lack of active defense in traditional intrusion detection systems, it then studies an intrusion prediction system that forecasts imminent attacks before the target host or operating system is compromised (Chapter 3). Because traditional intrusion detection systems are limited in both detection rate and detection speed on massive intrusion data and can no longer meet the requirements of cloud environments, the dissertation further studies a self-learning, dynamically extensible network-based cloud intrusion detection system (Chapter 4). Finally, since virtualization is the core of cloud computing and virtual machines undergoing migration are exposed to viruses and attackers through system vulnerabilities or backdoors, which can cause abnormal migrations, the dissertation studies a virtual machine migration scheduling and monitoring system that safeguards the virtualized computing environment (Chapter 5). The main contributions are as follows:

(1) A fully distributed collaborative intrusion detection system based on discrete-time cellular neural networks (DTCNN) and state-controlled cellular neural networks (SCCNN). A multi-layer DTCNN detection model serves as the local node classifier, and a one-dimensional ring detection model based on an improved SCCNN serves as the global detector. Each local node detector independently detects intrusions in its own network segment and periodically exchanges detection messages with its neighboring nodes to form the global detector. For the template parameters of the local node detectors, a parameter selection algorithm based on an improved particle swarm optimization is proposed: a new fitness function constructed with an energy-function constraint prevents premature convergence and locates the optimal parameters. For the global detector, a template parameter solution method based on linear matrix inequalities is proposed so that the system reaches the desired stable output and realizes detection. Simulation results show that the system achieves a higher detection rate than other distributed intrusion detection systems.
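The abstract does not reproduce the algorithmic details of contribution (1). Purely as an illustration of the constrained particle swarm search described above, a minimal sketch follows; the fitness terms, the penalty form of the energy-function constraint, and all names (detection_error, energy_constraint, penalty_weight) are assumptions for illustration, not the dissertation's actual formulation.

```python
import numpy as np

# Hypothetical sketch of PSO-based template parameter selection for a
# cellular-neural-network detector. The detection-error and energy-constraint
# terms below are placeholders, not the dissertation's formulas.

def detection_error(template):
    # Placeholder: classification error of a detector using `template`.
    return np.sum((template - 0.5) ** 2)

def energy_constraint(template):
    # Placeholder penalty standing in for an energy-function stability constraint.
    return max(0.0, np.linalg.norm(template) - 1.0)

def fitness(template, penalty_weight=10.0):
    # Constrained fitness: detection error plus penalized constraint violation.
    return detection_error(template) + penalty_weight * energy_constraint(template)

def pso(dim=9, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    rng = np.random.default_rng(0)
    pos = rng.uniform(-1, 1, (n_particles, dim))
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_val = np.array([fitness(p) for p in pos])
    gbest = pbest[np.argmin(pbest_val)].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = pos + vel
        vals = np.array([fitness(p) for p in pos])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return gbest, pbest_val.min()

best_template, best_fit = pso()
```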
(2) An intrusion prediction model based on time series analysis improved with neural networks. To reduce the false alarm and missed alarm rates of intrusion prediction and improve prediction accuracy, a network intrusion prediction model based on an ARIMA model improved with a grey neural network is proposed: a BP network maps the solution of the grey prediction model's differential equation to construct a new grey neural network, which corrects the prediction residuals of the ARIMA-based intrusion prediction model. In addition, to improve prediction accuracy on multi-scale network traffic series, an intrusion prediction model based on wavelet decomposition and an improved minimum-complexity echo state network (IMCESN-WD) is proposed: the original traffic series is preprocessed by wavelet decomposition, a prediction model based on a minimum-complexity echo state network improved with the minimum mean square error and the error change rate is built for each decomposed sub-series, and the sub-series predictions are finally combined using weighting factors. Simulations confirm that these methods can assess the security state of a network by modeling its traffic data and give early warning of intrusions with high prediction accuracy.
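To make the residual-correction idea in contribution (2) concrete, the following is a minimal sketch under stated assumptions: an ARIMA baseline forecast is adjusted by a grey model fitted to its residuals. A plain GM(1,1) stands in for the dissertation's grey neural network, the series is synthetic, and the order (2, 1, 2) and the positivity shift are illustrative choices only.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

# Hypothetical sketch of residual-corrected intrusion prediction.

def gm11_forecast(x0, steps=1):
    # GM(1,1) grey forecast; requires a positive series (caller shifts if needed).
    x1 = np.cumsum(x0)
    z1 = 0.5 * (x1[:-1] + x1[1:])
    B = np.column_stack([-z1, np.ones(len(z1))])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
    n = len(x0)
    k = np.arange(n, n + steps)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    x1_prev = (x0[0] - b / a) * np.exp(-a * (k - 1)) + b / a
    return x1_hat - x1_prev          # back to the original (non-cumulated) scale

traffic = np.abs(np.random.default_rng(1).normal(100, 10, 200))  # toy traffic series

model = ARIMA(traffic, order=(2, 1, 2)).fit()
base_forecast = model.forecast(steps=1)[0]

residuals = model.resid
shift = abs(residuals.min()) + 1.0   # make residuals positive for GM(1,1)
correction = gm11_forecast(residuals + shift, 1)[0] - shift
corrected_forecast = base_forecast + correction
```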
(3) A cloud network intrusion detection system based on an improved growing self-organizing neural network. The system reduces the dimensionality of massive intrusion data with a MapReduce-based principal component analysis algorithm, performs dynamically updated detection on the reduced data with an improved growing self-organizing neural network, and uses a genetic algorithm to optimize the connection weights of the self-organizing subnets grown from the detection model, accelerating convergence of the detection network. Simulations show that the method supports real-time detection of massive intrusion data and extensible detection of new attack types, and that the detection algorithm is more effective and more scalable than comparable algorithms.
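As an illustration of the MapReduce-style dimensionality reduction step in contribution (3), a minimal sketch follows: each "map" step emits partial statistics for one block of records, a "reduce" step aggregates them into a covariance matrix, and the top principal components project the data. The block handling, feature count, and names (map_block, reduce_parts, pca_project) are assumptions for illustration, not the dissertation's implementation.

```python
import numpy as np

def map_block(X):
    # Partial statistics for one block of intrusion records.
    return X.shape[0], X.sum(axis=0), X.T @ X

def reduce_parts(parts):
    # Aggregate partial statistics and form the global covariance matrix.
    n = sum(p[0] for p in parts)
    s = sum(p[1] for p in parts)
    ss = sum(p[2] for p in parts)
    mean = s / n
    cov = ss / n - np.outer(mean, mean)
    return mean, cov

def pca_project(blocks, k):
    mean, cov = reduce_parts([map_block(b) for b in blocks])
    eigvals, eigvecs = np.linalg.eigh(cov)
    components = eigvecs[:, np.argsort(eigvals)[::-1][:k]]   # top-k directions
    return [(b - mean) @ components for b in blocks]

rng = np.random.default_rng(2)
blocks = [rng.normal(size=(1000, 41)) for _ in range(4)]     # toy blocks, 41 KDD-style features
reduced = pca_project(blocks, k=10)                          # reduced blocks fed to the detector
```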
(4) A virtual machine migration scheduling method based on an improved cellular neural network. The migration scheduling process is treated as equivalent to a traveling salesman problem: by improving the energy function of the cellular neural network, the equilibrium point of the output is made to coincide with the characteristic values expected of the real-time network, and the system reaches a stable state. Parameter relationships are derived from the local and global rules of migration scheduling, so that determining the network model parameters becomes a constrained optimization problem; a bubble-sort particle swarm optimization algorithm is then used to optimize the template parameters and avoid local optima during parameter solving. Simulations show that the method produces effective virtual machine migration scheduling strategies and reduces both migration duration and the volume of migrated data.
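The traveling-salesman encoding in contribution (4) can be illustrated with a simplified Hopfield-style energy, a common stand-in for the cellular-neural-network energy the dissertation actually modifies: constraint terms vanish only for valid schedules, and a cost term sums migration costs between consecutively scheduled virtual machines. The weights A and B, the cost matrix, and the function names are illustrative assumptions only.

```python
import numpy as np

# Hypothetical sketch: a migration schedule encoded as a permutation matrix
# V (row = virtual machine, column = time slot) and scored by an energy whose
# minimum corresponds to a valid, low-cost schedule.

def schedule_energy(V, D, A=500.0, B=1.0):
    rows = ((V.sum(axis=1) - 1.0) ** 2).sum()    # each VM scheduled exactly once
    cols = ((V.sum(axis=0) - 1.0) ** 2).sum()    # one VM migrated per slot
    n = V.shape[0]
    cost = 0.0
    for t in range(n):
        cost += V[:, t] @ D @ V[:, (t + 1) % n]  # cost between consecutive slots
    return 0.5 * A * (rows + cols) + 0.5 * B * cost

rng = np.random.default_rng(3)
n_vm = 5
D = rng.uniform(1, 10, (n_vm, n_vm))             # toy migration-cost matrix
D = (D + D.T) / 2
np.fill_diagonal(D, 0.0)

perm = rng.permutation(n_vm)
V = np.eye(n_vm)[perm]                           # a valid random schedule
print(schedule_energy(V, D))
```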
[Degree-granting institution]: Shandong University
[Degree level]: Doctoral
[Year conferred]: 2016
[CLC classification]: TP393.08; TP183
Article No.: 2080982
Link: http://www.sikaile.net/guanlilunwen/ydhl/2080982.html