Revision of 21:29, 18 November 2021: removed 21 bytes (no edit summary)
Compressed sensing relies on [[Lp space|L1]] techniques, which several other scientific fields have used historically.<ref name=":4">[http://2.bp.blogspot.com/_0ZCyAOBrUtA/TTwqLEeLvdI/AAAAAAAAEXI/7S0_SnWoC0E/s1600/l1-minimization.JPG List of L1 regularization ideas] from Vivek Goyal, Alyson Fletcher, Sundeep Rangan, [http://www.math.uiuc.edu/%7Elaugesen/imaha10/goyal_talk.pdf The Optimistic Bayesian: Replica Method Analysis of Compressed Sensing]</ref> In statistics, the [[least squares]] method was complemented by the [[Lp norm|<math>L^1</math>-norm]], which was introduced by [[Pierre-Simon Laplace|Laplace]]. Following the introduction of [[linear programming]] and [[George Dantzig|Dantzig]]'s [[simplex algorithm]], the <math>L^1</math>-norm was used in [[computational statistics]]. In statistical theory, the <math>L^1</math>-norm was used by [[George W. Brown (academic)|George W. Brown]] and later writers on [[median-unbiased estimator]]s. It was used by Peter J. Huber and others working on [[robust statistics]]. 
The <math>L^1</math>-norm was also used in signal processing, for example, in the 1970s, when seismologists constructed images of reflective layers within the earth based on data that did not seem to satisfy the [[Nyquist–Shannon sampling theorem|Nyquist–Shannon criterion]].<ref name=":5">{{Cite journal |doi = 10.1511/2009.79.276 |title = The Best Bits |year = 2009 |last1 = Hayes |first1 = Brian |journal = American Scientist |volume = 97 |issue = 4 |pages = 276 |url = https://semanticscholar.org/paper/7a86258ed58ad2917330128a7503f10f921eb51f }}</ref> It was used in [[matching pursuit]] in 1993, the [[Lasso regression|LASSO estimator]] by [[Robert Tibshirani]] in 1996<ref name=":6">{{Cite journal |url = http://www-stat.stanford.edu/~tibs/lasso.html |first = Robert |last = Tibshirani |title = Regression shrinkage and selection via the lasso |journal = [[Journal of the Royal Statistical Society, Series B]] |volume = 58 |issue = 1 |pages = 267–288 }}</ref> and [[basis pursuit]] in 1998.<ref name=":7">"Atomic decomposition by basis pursuit", by Scott Shaobing Chen, David L. Donoho, Michael A. Saunders. SIAM Journal on Scientific Computing</ref> There were theoretical results describing when these algorithms recovered sparse solutions, but the required type and number of measurements were sub-optimal and subsequently greatly improved by compressed sensing.{{citation needed|date=May 2013}}
At first glance, compressed sensing might seem to violate [[Nyquist–Shannon sampling theorem|the sampling theorem]], because compressed sensing depends on the [[Sparse matrix|sparsity]] of the signal in question and not its highest frequency. This is a misconception, because the sampling theorem guarantees perfect reconstruction given sufficient, not necessary, conditions. A sampling method fundamentally different from classical fixed-rate sampling cannot "violate" the sampling theorem. Sparse signals with high frequency components can be highly under-sampled using compressed sensing compared to classical fixed-rate sampling.<ref name=":8">{{Cite journal |url = http://www-stat.stanford.edu/~candes/papers/ExactRecovery.pdf |title = Robust Uncertainty Principles: Exact Signal Reconstruction from Highly Incomplete Fourier Information |year = 2006 |last1 = Candès |first1 = Emmanuel J. |last2 = Romberg |first2 = Justin K. |last3 = Tao |first3 = Terence |journal = IEEE Trans. Inf. Theory |volume = 52 |issue = 8 |pages = 489–509 |doi=10.1109/tit.2005.862083|arxiv = math/0409186 |citeseerx = 10.1.1.122.4429 }}</ref>
In order to choose a solution to such a system, one must impose extra constraints or conditions (such as smoothness) as appropriate. In compressed sensing, one adds the constraint of sparsity, allowing only solutions which have a small number of nonzero coefficients. Not all underdetermined systems of linear equations have a sparse solution. However, if there is a unique sparse solution to the underdetermined system, then the compressed sensing framework allows the recovery of that solution.
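The recovery of a unique sparse solution described above can be sketched numerically. The following is a minimal illustration (an assumed example, not from the article): the L1-minimization problem min ||x||₁ subject to Ax = b ([[basis pursuit]]) is cast as a linear program by splitting x into nonnegative parts, and solved with SciPy's `linprog`; the sizes `n`, `m`, `k` are arbitrary choices for the sketch.

```python
# A minimal sketch (assumed illustration): recovering a sparse vector from
# an underdetermined linear system by minimizing the L1 norm, cast as a
# linear program (basis pursuit: min ||x||_1 subject to Ax = b).
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, m, k = 128, 50, 5              # unknowns, measurements, nonzeros

x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = rng.normal(size=k)

A = rng.normal(size=(m, n))       # random Gaussian sensing matrix
b = A @ x_true                    # m < n: infinitely many solutions fit b

# Split x = u - v with u, v >= 0, so that ||x||_1 = sum(u) + sum(v).
c = np.ones(2 * n)
res = linprog(c, A_eq=np.hstack([A, -A]), b_eq=b, bounds=(0, None))
x_hat = res.x[:n] - res.x[n:]

# With these sizes, the L1 solution typically coincides with the sparse truth.
print(np.max(np.abs(x_hat - x_true)))
```

Among the infinitely many solutions of Ax = b, the linear program picks out the one of minimal L1 norm, which under suitable conditions on A is exactly the sparse ground truth.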
<br>
Compressed sensing takes advantage of the redundancy in many interesting signals—they are not pure noise. In particular, many signals are [[sparse matrix|sparse]], that is, they contain many coefficients close to or equal to zero, when represented in some domain.<ref name=":9">Candès, E.J., & Wakin, M.B., ''An Introduction To Compressive Sampling'', IEEE Signal Processing Magazine, V.21, March 2008 [http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=4472240&isnumber=4472102]</ref> This is the same insight used in many forms of [[lossy compression]].
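The transform-domain sparsity described above can be checked directly. The following is a minimal sketch (an assumed example, not from the article): a smooth test signal is transformed with the DCT, and almost all of its energy turns out to be concentrated in a few coefficients; the signal shape and the 99.9% threshold are arbitrary choices for the sketch.

```python
# A minimal sketch (assumed example): a smooth test signal is approximately
# sparse in the DCT domain -- a handful of coefficients carry nearly all of
# its energy, the same property that lossy codecs such as JPEG exploit.
import numpy as np
from scipy.fft import dct

t = np.linspace(0.0, 1.0, 256, endpoint=False)
signal = np.exp(-((t - 0.5) / 0.1) ** 2)       # smooth Gaussian bump

coeffs = dct(signal, norm='ortho')             # orthonormal DCT-II
energy = np.sort(np.abs(coeffs))[::-1] ** 2
frac = np.cumsum(energy) / energy.sum()

k = int(np.searchsorted(frac, 0.999)) + 1      # coefficients for 99.9% energy
print(f"{k} of {coeffs.size} DCT coefficients hold 99.9% of the energy")
```

Discarding the remaining near-zero coefficients is precisely what lossy compression does after sampling; compressed sensing instead exploits the same sparsity at acquisition time.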
The least-squares solution to such problems is to minimize the [[L2 norm|<math>L^2</math> norm]]—that is, minimize the amount of energy in the system. This is usually simple mathematically (involving only a [[matrix multiplication]] by the [[pseudo-inverse]] of the basis sampled in). However, this leads to poor results for many practical applications, for which the unknown coefficients have nonzero energy.
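The shortcoming of the minimum-energy solution can be seen in a small numerical sketch (an assumed example, not from the article): the pseudo-inverse solution of an underdetermined system fits the measurements exactly, yet spreads energy over nearly every coefficient instead of recovering the sparse ground truth; the sizes and support positions are arbitrary choices for the sketch.

```python
# A minimal sketch (assumed example): the minimum-energy (least L2-norm)
# solution of an underdetermined system, computed with the pseudo-inverse.
# It reproduces the measurements exactly but spreads energy across almost
# every coefficient, so it does not recover the sparse ground truth.
import numpy as np

rng = np.random.default_rng(1)
m, n = 16, 64
A = rng.normal(size=(m, n))

x_sparse = np.zeros(n)
x_sparse[[3, 17, 42]] = [1.0, -2.0, 0.5]       # sparse ground truth
b = A @ x_sparse

x_l2 = np.linalg.pinv(A) @ b                   # minimum L2-norm solution

print(np.allclose(A @ x_l2, b))                # fits the measurements
print(np.count_nonzero(np.abs(x_l2) > 1e-8))   # but is dense, not sparse
```

This is why compressed sensing replaces the <math>L^2</math> objective with the <math>L^1</math> norm, whose minimizer concentrates energy in few coefficients.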