

真實感人臉表情合成的關鍵技術研究 (Research on Key Techniques for Realistic Facial Expression Synthesis)

Published: 2018-09-08 17:38
[Abstract]: As an important branch of computer graphics, facial expression animation has long been a research hotspot pursued by a great number of researchers. The field has produced a wealth of results that are widely used in the film, television, advertising, and game industries; works such as King Kong, The Lord of the Rings, and Avatar rely on large amounts of computer-synthesized facial expression and have shown audiences the appeal of facial expression animation. As technology and the times advance, the demands on both the realism and the speed of synthesized expression animation keep rising, and the broad application prospects and technical feasibility of this field will continue to attract investment and attention.
This thesis reviews the state of the art in facial expression synthesis, classifies the existing methods, and analyzes their respective advantages and disadvantages in detail. On this basis, we study several key problems in realistic facial expression synthesis and propose a systematic solution covering the acquisition of facial motion data and the extraction of facial expressions, the synthesis of realistic facial expressions, and the editing of facial expressions. Specifically, the work of this thesis includes the following aspects:
A high-precision scheme for acquiring and extracting facial expressions is presented. For the facial motion data captured by an optical motion capture system, we use Radial Basis Function (RBF) interpolation to map the data into the coordinate system of the neutral face model, obtaining facial motion data in the space of that model. With the help of markers that are calibrated during data acquisition and unaffected by expression changes, we extract the performer's facial expression information and at the same time recover the corresponding rigid head motion.
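The following is a minimal sketch of how this acquisition step could look in code, assuming the markers are stored as NumPy arrays, using SciPy's RBFInterpolator with a thin-plate-spline kernel for the space mapping and a Kabsch/SVD fit for the rigid head motion; the kernel choice, marker layout, and all names are illustrative assumptions, not the thesis' exact procedure.

```python
# Hypothetical sketch: map captured markers into the neutral-model space with
# RBF interpolation, then recover per-frame rigid head motion from
# expression-invariant markers (Kabsch alignment). Shapes are assumptions.
import numpy as np
from scipy.interpolate import RBFInterpolator

def map_to_model_space(src_neutral, dst_neutral, frames):
    """src_neutral: (m, 3) markers in capture space at the neutral pose.
    dst_neutral: (m, 3) corresponding points on the neutral face model.
    frames: (T, m, 3) captured marker trajectories.
    Returns (T, m, 3) trajectories expressed in the model's coordinate system."""
    warp = RBFInterpolator(src_neutral, dst_neutral, kernel='thin_plate_spline')
    return np.stack([warp(f) for f in frames])

def rigid_head_motion(ref_rigid, cur_rigid):
    """Kabsch: best-fit rotation R and translation t taking the
    expression-invariant markers from the reference frame to the current one."""
    mu_a, mu_b = ref_rigid.mean(0), cur_rigid.mean(0)
    H = (ref_rigid - mu_a).T @ (cur_rigid - mu_b)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = mu_b - R @ mu_a
    return R, t
```

Removing the recovered rigid transform from every captured frame would then leave the non-rigid expression component used in the synthesis steps below.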
A Laplacian-based expression synthesis technique is proposed that preserves the detail features already present on the face model during deformation, ensuring the realism of the synthesized expression. For a given face model, we first compute the Laplacian coordinate of every vertex. During synthesis, the Laplacian coordinates of all vertices are kept unchanged; from the displacements of the expression feature points and a set of selected fixed points, the new positions of all other vertices are computed, yielding the new facial expression. Combined with the extracted rigid head motion, we obtain a target face model whose expression resembles the performer's and whose head pose matches it.
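A minimal sketch of such a Laplacian solve is given below, assuming uniform (graph) Laplacian weights and soft positional constraints solved in least squares with SciPy; the weighting scheme, constraint weight, and solver are assumptions and may differ from the thesis' implementation.

```python
# Hypothetical sketch of Laplacian-coordinate deformation with uniform weights:
# keep each vertex's differential (Laplacian) coordinate while soft-constraining
# the expression feature points and the chosen fixed points, then solve for all
# vertex positions in least squares.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import lsqr

def deform(V, faces, handle_idx, handle_pos, w=10.0):
    """V: (n, 3) rest vertices; faces: (m, 3) triangle indices.
    handle_idx / handle_pos: indices and target positions of the constrained
    vertices (displaced feature points plus the selected fixed points)."""
    n = len(V)
    A = sp.lil_matrix((n, n))
    for i, j, k in faces:                              # vertex adjacency from faces
        A[i, j] = A[j, i] = A[i, k] = A[k, i] = A[j, k] = A[k, j] = 1.0
    deg = np.asarray(A.sum(1)).ravel()
    L = sp.eye(n) - sp.diags(1.0 / np.maximum(deg, 1)) @ A.tocsr()  # uniform Laplacian
    delta = L @ V                                      # Laplacian coordinates to preserve
    C = sp.lil_matrix((len(handle_idx), n))            # constraint rows pinning handles
    for r, i in enumerate(handle_idx):
        C[r, i] = w
    M = sp.vstack([L, C.tocsr()]).tocsc()
    rhs = np.vstack([delta, w * np.asarray(handle_pos)])
    return np.column_stack([lsqr(M, rhs[:, c])[0] for c in range(3)])
```

The solve minimizes the deviation of the Laplacian coordinates from their rest values plus a weighted penalty on the handle positions, which is the standard least-squares form of this kind of detail-preserving deformation.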
A new facial expression synthesis method based on geodesic distance and RBF interpolation is proposed. Because of hole regions such as the mouth and eyes in the face model, the Euclidean distance can differ considerably from the geodesic distance measured along the surface, and directly applying conventional Euclidean-distance RBF interpolation tends to stretch these hole regions. This thesis introduces a rule for computing approximate geodesic distances from the expression feature points to the other vertices of the face model. Using geodesic distance to measure the mutual influence between vertices, combined with RBF interpolation, realistic facial expressions are synthesized.
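The sketch below illustrates one way to realize this idea, assuming the approximate geodesic distance is taken as shortest-path distance over the mesh edge graph (Dijkstra) and a Gaussian kernel of width sigma; both choices are illustrative and not the thesis' exact computation rule.

```python
# Hypothetical sketch: edge-graph Dijkstra as the approximate geodesic distance,
# plugged into a Gaussian RBF so displacements do not leak across the mouth or
# eye openings the way a Euclidean kernel can.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.csgraph import dijkstra

def geodesic_rbf(V, edges, feat_idx, feat_disp, sigma=0.05):
    """V: (n, 3) vertices; edges: (E, 2) unique mesh edges.
    feat_idx: (k,) feature-vertex indices; feat_disp: (k, 3) their displacements."""
    n = len(V)
    lengths = np.linalg.norm(V[edges[:, 0]] - V[edges[:, 1]], axis=1)
    G = sp.coo_matrix((lengths, (edges[:, 0], edges[:, 1])), shape=(n, n))
    D = dijkstra(G, directed=False, indices=feat_idx)    # (k, n) approx. geodesic distances
    phi = lambda d: np.exp(-(d / sigma) ** 2)             # Gaussian kernel on geodesic distance
    W = np.linalg.solve(phi(D[:, feat_idx]), feat_disp)   # (k, 3) RBF weights
    return phi(D).T @ W                                   # (n, 3) displacement field
```

Adding the returned displacement field to the rest vertices gives the synthesized expression; vertices on the far side of a hole receive little influence because their graph distance to the feature point is large even when their Euclidean distance is small.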
Expression editing is an important step in realistic facial expression animation, and this thesis proposes a spatio-temporal editing method for facial expression animation. We use Laplacian deformation to propagate the user's edits of the expression feature points over the whole face model in the spatial domain. At the same time, an edit made to a single frame is propagated in the temporal domain to the neighboring expressions of the given animation with a Gaussian decay. During editing, the user may specify the temporal propagation range, which provides local control over how far an edit affects the expression animation.
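The temporal side of this propagation can be sketched as follows, assuming the edit is given as a displacement of the feature points at one frame and that the Gaussian width and window size are user parameters; the spatial side would reuse a Laplacian solve such as the one sketched earlier. All names and defaults here are illustrative.

```python
# Hypothetical sketch: an edit applied at frame t0 falls off over neighbouring
# frames with a Gaussian weight, restricted to a user-chosen window.
import numpy as np

def temporal_weights(num_frames, t0, window, sigma):
    t = np.arange(num_frames)
    w = np.exp(-((t - t0) ** 2) / (2.0 * sigma ** 2))   # Gaussian decay in time
    w[np.abs(t - t0) > window] = 0.0                     # user-specified propagation range
    return w

def propagate_edit(feature_tracks, t0, edit_disp, window=15, sigma=5.0):
    """feature_tracks: (T, k, 3) feature-point positions per frame.
    edit_disp: (k, 3) displacement the user applied at frame t0."""
    w = temporal_weights(len(feature_tracks), t0, window, sigma)
    return feature_tracks + w[:, None, None] * edit_disp
```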
A facial expression editing technique based on two-dimensional deformation is proposed. During editing we preserve the shape and proportions of every triangle of the face model so that the total deformation is minimized; at the same time, based on observations of how facial expressions change, we constrain the total edge length of the outer facial contour to remain unchanged during deformation, so that natural and realistic expressions are synthesized. The technique can also be applied in the early stage of garment design to automatically compute a garment's shape under a new pose and preview it in different poses, which spares designers from repeatedly redrawing similar garments and serves as an aid for design work and for exchanging ideas, improving efficiency. We choose the human skeleton as the driving element of garment deformation: the control points of the deformation are obtained automatically from the skeleton in its initial pose, and their target positions are computed from the skeleton in the new pose, which drives the garment deformation into the new pose.
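A rough sketch of the kind of energy such a two-dimensional editing step minimizes is shown below: each triangle's rest-to-current map is compared with its closest similarity transform (rotation plus uniform scale), and the total length of the outer contour is softly constrained to its rest value. The weighting, the soft form of the contour constraint, and the absence of a solver are assumptions; the function only evaluates the energy that an optimizer (e.g. scipy.optimize) would minimize over the free vertices.

```python
# Hypothetical sketch of a 2-D shape-preserving editing energy: per-triangle
# deviation from the closest similarity transform, plus a penalty keeping the
# outer-contour length at its rest value.
import numpy as np

def similarity_energy(rest, cur, tris, contour, w_contour=1.0):
    """rest, cur: (n, 2) rest and current 2-D vertices; tris: (m, 3) indices;
    contour: ordered indices of the closed outer contour."""
    E = 0.0
    for a, b, c in tris:
        Jr = np.column_stack([rest[b] - rest[a], rest[c] - rest[a]])
        Jc = np.column_stack([cur[b] - cur[a], cur[c] - cur[a]])
        J = Jc @ np.linalg.inv(Jr)                    # rest -> current affine map
        p = 0.5 * (J[0, 0] + J[1, 1])                 # closest similarity transform
        q = 0.5 * (J[1, 0] - J[0, 1])
        S = np.array([[p, -q], [q, p]])
        E += np.sum((J - S) ** 2)                     # deviation from shape preservation
    def length(P):
        return sum(np.linalg.norm(P[i] - P[j]) for i, j in zip(contour, np.roll(contour, -1)))
    E += w_contour * (length(cur) - length(rest)) ** 2  # outer-contour length constraint
    return E
```

For the garment application, the constrained vertices would be the skeleton-derived control points rather than facial feature points, with the same energy driving the rest of the drawing.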
Each of the above methods was tested on multiple face models and achieved good experimental results. Finally, we summarize the research work of this thesis, analyze the remaining problems, and point out possible directions for future research.
【Degree-granting institution】: Zhejiang University (浙江大學)
【Degree level】: Doctoral (PhD)
【Year of award】: 2012
【CLC number】: TP391.41

