
Graduation Design (Thesis) Foreign Literature Translation

Title: Research on Signal Reconstruction Algorithms Based on Compressive Sensing
Department: School of Information Science and Engineering
Major and Class: Telecommunications 0702
Supervisor:             Supervisor's Title:
Student Name:           Student ID:

Translated Text

Compressive Sampling

Emmanuel J. Candès

Abstract: The conventional wisdom and common practice in acquiring and reconstructing images from frequency-domain data follow the basic principle of the Nyquist sampling theorem. This principle states that, in order to reconstruct an image, the number of Fourier samples to be acquired must match the desired resolution of the image, that is, the number of pixels in the image. This paper surveys an emerging theory called "compressive sampling" or "compressed sensing", which holds that this conventional view is inaccurate. Perhaps surprisingly, it is possible to reconstruct the original image or signal accurately, and sometimes even exactly, from a number of samples far smaller than the desired resolution of the image/signal.

There is no doubt that compressive sampling has far-reaching implications. For example, it suggests a possible new data-acquisition protocol that translates analog information into digital form with fewer sensors than was traditionally considered necessary. This new sampling theory may lead to procedures in which data are sampled and compressed simultaneously.

In this short overview we provide some of the key mathematical insights behind this new theory, and describe some of the interactions between compressive sampling and other fields such as statistics, information theory, coding theory, and theoretical computer science.

Keywords: compressive sampling, sparsity, uniform uncertainty principle, underdetermined systems of linear equations, minimum ℓ1-norm, linear programming, signal recovery, error correction.

1. Introduction

A central tenet of signal processing is the Nyquist/Shannon sampling theorem: the number of samples needed to reconstruct a signal without error is dictated by its bandwidth, the smallest interval containing the signal's effective spectrum. In the last two years or so, an alternative theory of "compressive sampling" has emerged. It shows that super-resolved signals and images can be reconstructed from far fewer data/measurements than is usually considered necessary. The purpose of this paper is to survey this new theory and to provide some of the key mathematical insights behind it. An appealing aspect of compressive sampling is that it has significant interactions and connections with several areas of applied science and engineering, such as statistics, information theory, coding theory, and theoretical computer science. We will try to explain these connections through a few selected examples.

From a broader and more general point of view, sparsity and compressibility play, and will continue to play, a fundamental role in many fields of science. Sparsity leads to efficient estimation; for example, the quality of the approximations obtained by thresholding or shrinkage algorithms depends on the sparsity of the signal we wish to estimate. Sparsity leads to efficient compression; for example, the precision of a transform coder depends on the sparsity of the signal we wish to encode [24]. Sparsity leads to dimensionality reduction and efficient modeling. What is new here is that sparsity becomes central to the data-acquisition process itself, and leads to efficient data-acquisition protocols.

In fact, compressive sampling suggests ways to translate analog data into already compressed digital form more economically [20], [7]. The key word here is "economically". It is well known that, because of their structural properties, typical signals can be compressed efficiently without much perceptual loss. For example, modern transform coders such as JPEG2000 exploit the fact that many signals have a sparse representation in some fixed basis, which means that the coder can store or transmit only a small number of adaptively chosen transform coefficients rather than all the signal samples. The way this typically works is that one acquires the full signal, computes the complete set of transform coefficients, encodes the largest coefficients, and discards all the others. This process of massive data acquisition followed by compression is extremely wasteful (think of a digital camera with millions of imaging sensors, the pixels, which in the end encodes the picture in only a few hundred kilobytes).
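The "acquire everything, transform, keep only the largest coefficients" workflow described above can be made concrete with a few lines of code. The sketch below is only an illustration (assuming NumPy and SciPy are available), with a DCT standing in for the wavelet transform used by coders such as JPEG2000; the test signal, the parameter values, and all variable names are ours, not the paper's.

```python
# Minimal illustration of transform coding: acquire the full signal, compute
# all transform coefficients, keep only the S largest, discard the rest.
import numpy as np
from scipy.fft import dct, idct

N, S = 1024, 32                          # full signal length, coefficients kept

t = np.linspace(0.0, 1.0, N)
signal = np.cos(2 * np.pi * 5 * t) + 0.5 * np.cos(2 * np.pi * 12 * t)

coeffs = dct(signal, norm="ortho")       # complete set of transform coefficients
keep = np.argsort(np.abs(coeffs))[-S:]   # indices of the S largest coefficients
compressed = np.zeros(N)
compressed[keep] = coeffs[keep]          # everything else is thrown away

approx = idct(compressed, norm="ortho")
rel_err = np.linalg.norm(approx - signal) / np.linalg.norm(signal)
print(f"kept {S}/{N} coefficients, relative error {rel_err:.2e}")
```

Note that all N samples and all N transform coefficients are computed before almost everything is discarded; compressive sampling, discussed next, aims to avoid this detour.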

This raises a fundamental question: since most signals are compressible, why spend so much effort acquiring all the data when we know that most of it will be discarded? Would it not be possible to acquire the data directly in compressed form, so that nothing needs to be thrown away? "Compressive sampling", also known as "compressed sensing" [20], shows that this is indeed possible.

This paper is by no means an exhaustive survey of the literature on compressive sampling. It is merely an account of the author's own work and thinking in this area, which also includes a fairly large number of references to other people's work and occasional discussions of the connections with those works. We have done our best to organize our ideas into a logical progression that connects with the early papers published on this subject. Before we begin, we would like to invite the interested reader to also consult the complementary survey of the field by Ronald DeVore [17], who is also working on this topic (Section 5).

2. Undersampled measurements

Consider the general problem of reconstructing a vector x ∈ R^N from linear measurements y about x of the form

y_k = \langle x, \varphi_k \rangle, \quad k = 1, \ldots, K, \qquad \text{or} \qquad y = \Phi x.   (2.1)

That is, we acquire information about the unknown signal by measuring x against K vectors \varphi_k ∈ R^N. We are interested in the "underdetermined" case K ≪ N, where we have far fewer measurements than unknown signal values. Problems of this type arise in countless applications. In radiology and biomedical imaging, for instance, the number of measurements one can collect about an image of interest is typically far smaller than the number of unknown pixels. In wideband radio-frequency signal analysis, because of current limitations in analog-to-digital converter technology, one may only be able to acquire a signal at a rate far below the Nyquist rate. Finally, gene-expression studies provide examples of the same kind: there one would like to infer the expression levels of thousands of genes from a small number of observations, typically in the tens.

At first glance, solving an underdetermined system of equations seems hopeless, since it is easy to write down examples for which it clearly cannot be done. But suppose now that the signal x is compressible, that is, it essentially depends on a number of degrees of freedom smaller than N. For example, suppose the signal is sparse, meaning that it can be described exactly or accurately as the superposition of a small number of vectors in some fixed basis. Under this premise the problem changes radically, making its solution possible. In fact, accurate and sometimes exact recovery of the signal is possible by solving a simple convex optimization problem.
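To see concretely why the underdetermined case is delicate, the small experiment below (our own illustration, assuming NumPy; the Gaussian measurement matrix is chosen purely for convenience) computes the classical minimum-energy (minimum ℓ2-norm) solution of y = Φx. It fits the measurements perfectly, yet it is dense and far from the sparse ground truth, which motivates the ℓ1-based recovery program of the next subsection.

```python
# An underdetermined system y = Phi x has infinitely many solutions; the
# classical minimum l2-norm solution fits the data but is not sparse and
# does not recover the sparse ground truth x.
import numpy as np

rng = np.random.default_rng(1)
N, K, S = 128, 40, 5                     # ambient dimension, measurements, sparsity

x = np.zeros(N)
x[rng.choice(N, S, replace=False)] = rng.standard_normal(S)   # S-sparse signal

Phi = rng.standard_normal((K, N)) / np.sqrt(K)                # K << N measurements
y = Phi @ x

x_l2 = np.linalg.pinv(Phi) @ y           # minimum l2-norm solution of Phi x = y

print("measurement residual:", np.linalg.norm(Phi @ x_l2 - y))        # ~ 0
print("distance to true x  :", np.linalg.norm(x_l2 - x))              # large
print("nonzero entries     :", np.count_nonzero(np.abs(x_l2) > 1e-8)) # ~ N
```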

2.1. A nonlinear sampling theorem. It is best to consider a concrete example first. Suppose that we have collected an incomplete set of frequency samples of a discrete signal x of length N. (To simplify the exposition we consider a typical one-dimensional problem; the theory extends easily to higher dimensions. For instance, we could be equally interested in reconstructing two- or three-dimensional objects from undersampled Fourier data.) Our goal is to reconstruct the full signal f given only K samples in the Fourier domain,

\hat{x}(\omega_k) = \frac{1}{\sqrt{N}} \sum_{t=0}^{N-1} x(t)\, e^{-i 2\pi \omega_k t / N},   (2.2)

where the "visible" frequencies \omega_k form a subset Ω (of size K) of the set of all frequencies {0, ..., N − 1}. Sensing an object by measuring selected frequency coefficients is the principle underlying magnetic resonance imaging, and it is common to many fields of science, including astrophysics. In the language of the general problem (2.1), the sensing matrix Φ is obtained by sampling K rows of the N × N discrete Fourier transform matrix.

We say that a vector x is S-sparse if its support {i : x_i ≠ 0} has cardinality less than or equal to S. Candès, Romberg and Tao [6] showed that the signal can then almost always be recovered exactly by solving the convex optimization problem (with \|x\|_{\ell_1} := \sum_i |x_i|)

(P1)   \min_{\tilde{x} \in \mathbb{R}^N} \|\tilde{x}\|_{\ell_1} \quad \text{subject to} \quad \Phi \tilde{x} = y.   (2.3)

Theorem 2.1 ([6]). Suppose that x is S-sparse and that we are given K Fourier coefficients whose frequencies are sampled uniformly at random. Suppose that the number of observations obeys

K \ge C \cdot S \log N.   (2.4)

Then x can be reconstructed exactly with overwhelming probability by minimizing the ℓ1-norm. Specifically, if the constant C in (2.4) is of the form 22(δ + 1), then the probability of success exceeds 1 − O(N^{−δ}).

The first conclusion is that no information is lost by measuring essentially any set of K frequency coefficients. The second is that the signal x can be recovered exactly by minimizing a convex functional that assumes nothing about the number of nonzero entries of x, their locations, or their amplitudes, all of which we assume to be completely unknown a priori.

Although this already seems a remarkable feat, one may still ask whether it is optimal, or whether the signal can be recovered from even fewer samples. The answer is that, in general, we cannot reconstruct S-sparse signals from fewer samples. There are examples showing that, no matter what method is used and however intractable it may be, the minimum number of samples needed for exact reconstruction must be on the order of S log N. The theorem is therefore tight, and ℓ1-minimization (the minimum ℓ1-norm solution) succeeds essentially as soon as any algorithm has a hope of succeeding.

The reader is certainly familiar with the Nyquist/Shannon sampling theory, and our result can be reformulated to establish a simple connection with it. By reversing the roles of time and frequency in the example above, Theorem 2.1 can be restated as a new nonlinear sampling theorem. Suppose that a signal has frequency support Ω of size B = |Ω|. If Ω is a contiguous set, we may regard B as the bandwidth of x; moreover, if the set Ω is known, then by the classical Nyquist/Shannon sampling theorem the signal x can be reconstructed perfectly from equally spaced time-domain samples taken at a rate determined by B, the reconstruction being a linear interpolation with a sinc kernel.

Now suppose that the set Ω, still of size B, is unknown and not necessarily contiguous. In this situation the Nyquist/Shannon theory is of little help: we would have to assume that the frequency support could be the entire domain, so that exact reconstruction would require all N time-domain samples. Theorem 2.1, however, asserts that far fewer samples suffice: solving (P1) recovers the signal x completely from on the order of B log N time-domain samples. Moreover, these samples need not be carefully chosen; almost any sample set of this size will do. We therefore obtain a nonlinear analogue of the Nyquist/Shannon theorem (nonlinear because the reconstruction procedure (P1) is nonlinear): a signal whose frequency support is arbitrary and unknown, of size B, can be reconstructed from about B log N samples chosen arbitrarily in the time domain.
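As a numerical companion to Theorem 2.1 and to the discussion above, the sketch below (our own illustration, assuming NumPy and SciPy; the problem sizes and the use of scipy.optimize.linprog are our choices, not the paper's) draws K Fourier coefficients of an S-sparse signal at random frequencies and solves (P1) as a linear program.

```python
# l1-minimization (basis pursuit) recovery of an S-sparse signal from K
# randomly chosen Fourier samples, in the spirit of Theorem 2.1.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
N, S, K = 256, 8, 80                     # signal length, sparsity, number of samples

x = np.zeros(N)
x[rng.choice(N, S, replace=False)] = rng.standard_normal(S)      # S-sparse signal

omega = rng.choice(N, K, replace=False)                          # random frequency set
F = np.exp(-2j * np.pi * np.outer(omega, np.arange(N)) / N) / np.sqrt(N)
y = F @ x                                                        # K Fourier samples

# Real-valued constraint system: stack real and imaginary parts of Phi x = y.
Phi = np.vstack([F.real, F.imag])
b = np.concatenate([y.real, y.imag])

# (P1) as a linear program: write x = u - v with u, v >= 0 and minimize sum(u + v).
c = np.ones(2 * N)
A_eq = np.hstack([Phi, -Phi])
res = linprog(c, A_eq=A_eq, b_eq=b, bounds=(0, None), method="highs")
x_hat = res.x[:N] - res.x[N:]

print("max reconstruction error:", np.max(np.abs(x_hat - x)))
```

Here K is roughly 1.8 · S log N, and in typical runs the program returns x up to solver tolerance; shrinking K well below S log N makes recovery fail, consistent with the optimality discussion above.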

Original Text

Compressive sampling

Emmanuel J. Candès*

(*The author is partially supported by an NSF grant CCF-515362. Proceedings of the International Congress of Mathematicians, Madrid, Spain, 2006. © 2006 European Mathematical Society.)

Abstract. Conventional wisdom and common practice in acquisition and reconstruction of images from frequency data follow the basic principle of the Nyquist density sampling theory. This principle states that to reconstruct an image, the number of Fourier samples we need to acquire must match the desired resolution of the image, the number of pixels in the image. This paper surveys an emerging theory which goes by the name of "compressive sampling" or "compressed sensing", and which says that this conventional wisdom is inaccurate. Perhaps surprisingly, it is possible to reconstruct images or signals of scientific interest accurately and sometimes even exactly from a number of samples which is far smaller than the desired resolution of the image/signal, e.g. the number of pixels in the image.

It is believed that compressive sampling has far-reaching implications. For example, it suggests the possibility of new data acquisition protocols that translate analog information into digital form with fewer sensors than what was considered necessary. This new sampling theory may come to underlie procedures for sampling and compressing data simultaneously.

In this short survey, we provide some of the key mathematical insights underlying this new theory, and explain some of the interactions between compressive sampling and other fields such as statistics, information theory, coding theory, and theoretical computer science.

Mathematics Subject Classification (2000). Primary 00A69, 41-02, 68P30; Secondary 62C65.

Keywords. Compressive sampling, sparsity, uniform uncertainty principle, underdetermined systems of linear equations, ℓ1-minimization, linear programming, signal recovery, error correction.

1. Introduction

One of the central tenets of signal processing is the Nyquist/Shannon sampling theory: the number of samples needed to reconstruct a signal without error is dictated by its bandwidth - the length of the shortest interval which contains the support of the spectrum of the signal under study. In the last two years or so, an alternative theory of "compressive sampling" has emerged which shows that super-resolved signals and images can be reconstructed from far fewer data/measurements than what is usually considered necessary. The purpose of this paper is to survey and provide some of the key mathematical insights underlying this new theory. An enchanting aspect of compressive sampling is that it has significant interactions and bearings on some fields in the applied sciences and engineering such as statistics, information theory, coding theory, theoretical computer science, and others as well. We will try to explain these connections via a few selected examples.

From a general viewpoint, sparsity and, more generally, compressibility have played and continue to play a fundamental role in many fields of science. Sparsity leads to efficient estimations; for example, the quality of estimation by thresholding or shrinkage algorithms depends on the sparsity of the signal we wish to estimate. Sparsity leads to efficient compression; for example, the precision of a transform coder depends on the sparsity of the signal we wish to encode [24]. Sparsity leads to dimensionality reduction and efficient modeling. The novelty here is that sparsity has bearings on the data acquisition process itself, and leads to efficient data acquisition protocols.

In fact, compressive sampling suggests ways to economically translate analog data into already compressed digital form [20], [7]. The key word here is "economically". Everybody knows that because typical signals have some structure, they can be compressed efficiently without much perceptual loss. For instance, modern transform coders such as JPEG2000 exploit the fact that many signals have a sparse representation in a fixed basis, meaning that one can store or transmit only a small number of adaptively chosen transform coefficients rather than all the signal samples. The way this typically works is that one acquires the full signal, computes the complete set of transform coefficients, encodes the largest coefficients and discards all the others. This process of massive data acquisition followed by compression is extremely wasteful (one can think about a digital camera which has millions of imaging sensors, the pixels, but eventually encodes the picture on a few hundred kilobytes). This raises a fundamental question: because most signals are compressible, why spend so much effort acquiring all the data when we know that most of it will be discarded? Wouldn't it be possible to acquire the data in already compressed form so that one does not need to throw away anything? Compressive sampling, also known as "compressed sensing" [20], shows that this is indeed possible.

This paper is by no means an exhaustive survey of the literature on compressive sampling. Rather this is merely an account of the author's own work and thinking in this area which also includes a fairly large number of references to other people's work and occasionally discusses connections with these works. We have done our best to organize the ideas into a logical progression starting with the early papers which launched this subject. Before we begin, we would like to invite the interested reader to also check the article [17] by Ronald DeVore - also in these proceedings - for a complementary survey of the field (Section 5).

2. Undersampled measurements

Consider the general problem of reconstructing a vector x ∈ R^N from linear measurements y about x of the form

y_k = \langle x, \varphi_k \rangle, \quad k = 1, \ldots, K, \qquad \text{or} \qquad y = \Phi x.   (2.1)

That is, we acquire information about the unknown signal by sensing x against K vectors \varphi_k ∈ R^N. We are interested in the "underdetermined" case K ≪ N, where we have many fewer measurements than unknown signal values. Problems of this type arise in a countless number of applications. In radiology and biomedical imaging for instance, one is typically able to collect far fewer measurements about an image of interest than the number of unknown pixels. In wideband radio frequency signal analysis, one may only be able to acquire a signal at a rate which is far lower than the Nyquist rate because of current limitations in Analog-to-Digital Converter technology. Finally, gene expression studies also provide examples of this kind. Here, one would like to infer the gene expression level of thousands of genes - that is, the dimension N of the vector x is in the thousands - from a low number of observations, typically in the tens.

At first glance, solving the underdetermined system of equations appears hopeless, as it is easy to make up examples for which it clearly cannot be done. But suppose now that the signal x is compressible, meaning that it essentially depends on a number of degrees of freedom which is smaller than N. For instance, suppose our signal is sparse in the sense that it can be written either exactly or accurately as a superposition of a small number of vectors in some fixed basis. Then this premise radically changes the problem, making the search for solutions feasible. In fact, accurate and sometimes exact recovery is possible by solving a simple convex optimization problem.

2.1. A nonlinear sampling theorem. It might be best to consider a concrete example first. Suppose here that one collects an incomplete set of frequency samples of a discrete signal x of length N. (To ease the exposition, we consider a model problem in one dimension. The theory extends easily to higher dimensions. For instance, we could be equally interested in the reconstruction of 2- or 3-dimensional objects from undersampled Fourier data.) The goal is to reconstruct the full signal f given only K samples in the Fourier domain

\hat{x}(\omega_k) = \frac{1}{\sqrt{N}} \sum_{t=0}^{N-1} x(t)\, e^{-i 2\pi \omega_k t / N},   (2.2)

where the "visible" frequencies \omega_k are a subset Ω (of size K) of the set of all frequencies {0, ..., N − 1}. Sensing an object by measuring selected frequency coefficients is the principle underlying Magnetic Resonance Imaging, and is common in many fields of science, including Astrophysics. In the language of the general problem (2.1), the sensing matrix Φ is obtained by sampling K rows of the N by N discrete Fourier transform matrix.
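In our own notation (not the paper's), this sensing matrix can be written explicitly: with \mathcal{F} denoting the N × N unitary DFT matrix and R_\Omega the operator that keeps only the rows indexed by \Omega,

\Phi = R_\Omega \mathcal{F}, \qquad \mathcal{F}_{\omega, t} = \frac{1}{\sqrt{N}}\, e^{-i 2\pi \omega t / N}, \quad \omega, t \in \{0, \ldots, N-1\},

so that (\Phi x)_k = \hat{x}(\omega_k) for every \omega_k \in \Omega, consistent with (2.2).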

We will say that a vector x is S-sparse if its support {i : x_i ≠ 0} is of cardinality less or equal to S. Then Candès, Romberg and Tao [6] showed that one could almost always recover the signal x exactly by solving the convex program (where \|\tilde{x}\|_{\ell_1} := \sum_i |\tilde{x}_i|)

(P1)   \min_{\tilde{x} \in \mathbb{R}^N} \|\tilde{x}\|_{\ell_1} \quad \text{subject to} \quad \Phi \tilde{x} = y.   (2.3)

(P1) can even be recast as a linear program [3], [15].
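One standard way to carry out this recast (our own sketch of the reduction; [3], [15] may proceed differently) is to split the variable into nonnegative parts, \tilde{x} = u − v with u, v ≥ 0, and solve

\min_{u, v \in \mathbb{R}^N,\; u, v \ge 0} \; \sum_{i=1}^{N} (u_i + v_i) \quad \text{subject to} \quad \Phi(u - v) = y,

which is a linear program in 2N variables. At any optimum u_i v_i = 0, hence u_i + v_i = |\tilde{x}_i|, and the optimal values of the two problems coincide; this is also the reduction used in the numerical sketch in the translated text above.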

Theorem 2.1 ([6]). Assume that x is S-sparse and that we are given K Fourier coefficients with frequencies selected uniformly at random. Suppose that the number of observations obeys

K \ge C \cdot S \log N.   (2.4)

Then minimizing ℓ1 reconstructs x exactly with overwhelming probability. In detail, if the constant C is of the form 22(δ + 1) in (2.4), then the probability of success exceeds 1 − O(N^{−δ}).

The first conclusion is that one suffers no information loss by measuring just about any set of K frequency coefficients. The second is that the signal x can be exactly recovered by minimizing a convex functional which does not assume any knowledge about the number of nonzero coordinates of x, their locations, and their amplitudes which we assume are all completely unknown a priori.

While this seems to be a great feat, one could still ask whether this is optimal, or whether one could do with even fewer samples. The answer is that in general, we cannot reconstruct S-sparse signals with fewer samples. There are examples for which the minimum number of samples needed for exact reconstruction by any method, no matter how intractable, must be about S log N. Hence, the theorem is tight and ℓ1-minimization succeeds nearly as soon as there is any hope to succeed by any algorithm.

The reader is certainly familiar with the Nyquist/Shannon sampling theory and one can reformulate our result to establish simple connections. By reversing the roles of time and frequency in the above example, we can recast Theorem 2.1 as a new nonlinear sampling theorem.
