# 压缩感知

This entry was preliminarily translated by Solitude.

**Compressed sensing** (also known as **compressive sensing**, **compressive sampling**, or **sparse sampling**) is a signal processing technique for efficiently acquiring and reconstructing a signal by finding solutions to underdetermined linear systems. This is based on the principle that, through optimization, the sparsity of a signal can be exploited to recover it from far fewer samples than required by the Nyquist–Shannon sampling theorem. There are two conditions under which recovery is possible.^{[1]} The first one is sparsity, which requires the signal to be sparse in some domain. The second one is incoherence, which is applied through the restricted isometry property, which is sufficient for sparse signals.^{[2]}^{[3]}


== Overview ==

A common goal of the engineering field of signal processing is to reconstruct a signal from a series of sampling measurements. In general, this task is impossible because there is no way to reconstruct a signal during the times that the signal is not measured. Nevertheless, with prior knowledge or assumptions about the signal, it turns out to be possible to perfectly reconstruct a signal from a series of measurements (acquiring this series of measurements is called sampling). Over time, engineers have improved their understanding of which assumptions are practical and how they can be generalized.


An early breakthrough in signal processing was the Nyquist–Shannon sampling theorem. It states that if a real signal's highest frequency is less than half of the sampling rate, then the signal can be reconstructed perfectly by means of sinc interpolation. The main idea is that with prior knowledge about constraints on the signal's frequencies, fewer samples are needed to reconstruct the signal.

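
As a concrete illustration, the Whittaker–Shannon (sinc) interpolation formula can be sketched in a few lines of Python; the 5 Hz test tone, the 50 Hz sampling rate, and the evaluation window are arbitrary choices for this sketch, not values from any reference:

```python
import numpy as np

# Reconstruct a bandlimited signal from its samples via sinc interpolation
# (Whittaker–Shannon formula). Signal: 5 Hz sine, sampled at 50 Hz,
# comfortably above the Nyquist rate of 10 Hz.
fs = 50.0                       # sampling rate (Hz)
T = 1.0 / fs
n = np.arange(0, 50)            # one second of samples
samples = np.sin(2 * np.pi * 5 * n * T)

def sinc_interp(t, samples, T):
    """Evaluate sum_n x[n] * sinc((t - n*T)/T) at times t."""
    k = np.arange(len(samples))
    return np.sum(samples[None, :] * np.sinc((t[:, None] - k[None, :] * T) / T), axis=1)

t = np.linspace(0.2, 0.8, 301)  # interior points, away from truncation edges
reconstructed = sinc_interp(t, samples, T)
exact = np.sin(2 * np.pi * 5 * t)
print(np.max(np.abs(reconstructed - exact)))  # small truncation error
```

The residual error comes only from truncating the (in principle infinite) sinc series to 50 samples.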

Around 2004, Emmanuel Candès, Justin Romberg, Terence Tao, and David Donoho proved that given knowledge about a signal's sparsity, the signal may be reconstructed with even fewer samples than the sampling theorem requires.^{[4]}^{[5]} This idea is the basis of compressed sensing.


== History ==

Compressed sensing relies on L1 techniques, which several other scientific fields have used historically.^{[6]} In statistics, the least squares method was complemented by the [math]\displaystyle{ L^1 }[/math]-norm, which was introduced by Laplace. Following the introduction of linear programming and Dantzig's simplex algorithm, the [math]\displaystyle{ L^1 }[/math]-norm was used in computational statistics. In statistical theory, the [math]\displaystyle{ L^1 }[/math]-norm was used by George W. Brown and later writers on median-unbiased estimators. It was used by Peter J. Huber and others working on robust statistics. The [math]\displaystyle{ L^1 }[/math]-norm was also used in signal processing, for example, in the 1970s, when seismologists constructed images of reflective layers within the earth based on data that did not seem to satisfy the Nyquist–Shannon criterion.^{[7]} It was used in matching pursuit in 1993, the LASSO estimator by Robert Tibshirani in 1996^{[8]} and basis pursuit in 1998.^{[9]} There were theoretical results describing when these algorithms recovered sparse solutions, but the required type and number of measurements were sub-optimal and subsequently greatly improved by compressed sensing.^{[citation needed]}


At first glance, compressed sensing might seem to violate the sampling theorem, because compressed sensing depends on the sparsity of the signal in question and not its highest frequency. This is a misconception, because the sampling theorem guarantees perfect reconstruction given sufficient, not necessary, conditions. A sampling method fundamentally different from classical fixed-rate sampling cannot "violate" the sampling theorem. Sparse signals with high frequency components can be highly under-sampled using compressed sensing compared to classical fixed-rate sampling.^{[10]}


== Method ==

=== Underdetermined linear system ===

An underdetermined system of linear equations has more unknowns than equations and generally has an infinite number of solutions. The figure below shows such an equation system [math]\displaystyle{ \mathbf{y}=D\mathbf{x} }[/math] where we want to find a solution for [math]\displaystyle{ \mathbf{x} }[/math].


Underdetermined linear equation system


In order to choose a solution to such a system, one must impose extra constraints or conditions (such as smoothness) as appropriate. In compressed sensing, one adds the constraint of sparsity, allowing only solutions which have a small number of nonzero coefficients. Not all underdetermined systems of linear equations have a sparse solution. However, if there is a unique sparse solution to the underdetermined system, then the compressed sensing framework allows the recovery of that solution.

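
To make the non-uniqueness concrete, the following Python sketch (the toy matrix [math]\displaystyle{ D }[/math] is an arbitrary example) starts from a sparse solution of [math]\displaystyle{ \mathbf{y}=D\mathbf{x} }[/math] and produces a second, non-sparse solution by adding a null-space vector of [math]\displaystyle{ D }[/math]:

```python
import numpy as np

# Toy underdetermined system: 2 equations, 4 unknowns (D is illustrative).
D = np.array([[1.0, 2.0, 0.0, 1.0],
              [0.0, 1.0, 1.0, 3.0]])
x_sparse = np.array([0.0, 2.0, 0.0, 0.0])   # a solution with one nonzero
y = D @ x_sparse                             # y = [4, 2]

# Any null-space vector of D can be added without changing y,
# so the system has infinitely many solutions.
_, _, Vt = np.linalg.svd(D)
null_vec = Vt[-1]                            # D @ null_vec ≈ 0
x_other = x_sparse + 5.0 * null_vec
print(np.allclose(D @ x_other, y))           # True: a second, non-sparse solution
```

Only the extra sparsity constraint singles out `x_sparse` among these infinitely many candidates.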

=== Solution / reconstruction method ===

Example of the retrieval of an unknown signal (gray line) from few measurements (black dots) using the knowledge that the signal is sparse in the Hermite polynomials basis (purple dots show the retrieved coefficients).

Compressed sensing takes advantage of the redundancy in many interesting signals—they are not pure noise. In particular, many signals are sparse, that is, they contain many coefficients close to or equal to zero, when represented in some domain.^{[11]} This is the same insight used in many forms of lossy compression.

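
For instance, a sum of two pure tones is dense in time but sparse in the Fourier domain; a quick Python check (the two frequencies are arbitrary choices) confirms that only four of 256 DFT coefficients are nonzero:

```python
import numpy as np

# A signal that is dense in time but sparse in the frequency domain:
# two sinusoids at integer frequencies over one period, so there is no
# spectral leakage and the DFT is exactly sparse.
N = 256
t = np.arange(N) / N
signal = np.sin(2 * np.pi * 10 * t) + 0.5 * np.sin(2 * np.pi * 37 * t)

coeffs = np.fft.fft(signal) / N
large = np.abs(coeffs) > 1e-6
print(np.count_nonzero(large))   # 4 of 256 coefficients (the ±10 and ±37 bins)
```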

Compressed sensing typically starts with taking a weighted linear combination of samples, also called compressive measurements, in a basis different from the basis in which the signal is known to be sparse. The results found by Emmanuel Candès, Justin Romberg, Terence Tao, and David Donoho showed that the number of these compressive measurements can be small and still contain nearly all the useful information. Therefore, the task of converting the image back into the intended domain involves solving an underdetermined matrix equation, since the number of compressive measurements taken is smaller than the number of pixels in the full image. However, adding the constraint that the initial signal is sparse enables one to solve this underdetermined system of linear equations.

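
A minimal sketch of the measurement step (the random Gaussian matrix and the problem sizes are illustrative assumptions, not prescribed by the text): each of the M measurements is a weighted linear combination of all N signal samples, with M far smaller than N:

```python
import numpy as np

# Compressive measurements: M random weighted combinations of an
# N-sample sparse signal, with far fewer measurements than samples.
rng = np.random.default_rng(1)
N, M = 256, 40
x = np.zeros(N)
x[[10, 97, 200]] = [1.0, -2.0, 0.5]             # sparse: 3 nonzeros out of 256
Phi = rng.standard_normal((M, N)) / np.sqrt(M)  # measurement matrix
y = Phi @ x                                     # 40 numbers summarize 256 samples
print(y.shape, np.count_nonzero(x))             # (40,) 3
```

Recovering `x` from `y` then means solving the underdetermined system `Phi @ x = y` under the sparsity constraint.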

The least-squares solution to such problems is to minimize the [math]\displaystyle{ L^2 }[/math] norm—that is, minimize the amount of energy in the system. This is usually simple mathematically (involving only a matrix multiplication by the pseudo-inverse of the basis sampled in). However, this leads to poor results for many practical applications, for which the unknown coefficients have nonzero energy.

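
The following sketch (toy sizes and a random matrix, chosen for illustration) shows why the pseudo-inverse is unsatisfactory here: its minimum-energy solution matches the measurements exactly but spreads nonzero energy over essentially every coefficient:

```python
import numpy as np

# The minimum-energy (least-squares) solution via the pseudo-inverse:
# it satisfies y = D x exactly, but it is not sparse.
rng = np.random.default_rng(2)
D = rng.standard_normal((20, 100))
x_true = np.zeros(100)
x_true[[5, 42, 77]] = [1.0, -1.0, 2.0]          # 3 nonzeros out of 100
y = D @ x_true

x_l2 = np.linalg.pinv(D) @ y                    # minimum L2-norm solution
print(np.allclose(D @ x_l2, y))                 # True: consistent with y
print(np.count_nonzero(np.abs(x_l2) > 1e-8))    # typically near 100: not sparse
```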

To enforce the sparsity constraint when solving for the underdetermined system of linear equations, one can minimize the number of nonzero components of the solution. The function counting the number of non-zero components of a vector was called the [math]\displaystyle{ L^0 }[/math] "norm" by David Donoho.

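
A tiny helper makes the definition concrete (the tolerance for treating a floating-point value as zero is an implementation choice of this sketch):

```python
import numpy as np

# The L0 "norm" just counts nonzero components. It is not a true norm:
# it is not homogeneous, since ||2x||_0 = ||x||_0.
def l0_norm(x, tol=1e-12):
    return int(np.count_nonzero(np.abs(x) > tol))

x = np.array([0.0, 3.0, 0.0, -1e-15, 2.5])
print(l0_norm(x))   # 2 (the 1e-15 entry is treated as numerical zero)
```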

Candès et al. proved that for many problems it is probable that the [math]\displaystyle{ L^1 }[/math] norm is equivalent to the [math]\displaystyle{ L^0 }[/math] norm, in a technical sense: this equivalence result allows one to solve the [math]\displaystyle{ L^1 }[/math] problem, which is easier than the [math]\displaystyle{ L^0 }[/math] problem. Finding the candidate with the smallest [math]\displaystyle{ L^1 }[/math] norm can be expressed relatively easily as a linear program, for which efficient solution methods already exist.^{[12]} When measurements may contain a finite amount of noise, basis pursuit denoising is preferred over linear programming, since it preserves sparsity in the face of noise and can be solved faster than an exact linear program.

Candès et al. proved that for many problems it is probable that the [math]\displaystyle{ L^1 }[/math] norm is equivalent to the [math]\displaystyle{ L^0 }[/math] norm, in a technical sense: This equivalence result allows one to solve the [math]\displaystyle{ L^1 }[/math] problem, which is easier than the [math]\displaystyle{ L^0 }[/math] problem. Finding the candidate with the smallest [math]\displaystyle{ L^1 }[/math] norm can be expressed relatively easily as a linear program, for which efficient solution methods already exist. When measurements may contain a finite amount of noise, basis pursuit denoising is preferred over linear programming, since it preserves sparsity in the face of noise and can be solved faster than an exact linear program.

Candès等证明了对于许多问题，数学 [math]\displaystyle{ L^1 }[/math]范数很可能等价于数学 [math]\displaystyle{ L^0 }[/math]范数，从技术上讲: 这一等效结果使人们能够解决 [math]\displaystyle{ L^1 }[/math]数学问题，这比数学 [math]\displaystyle{ L^0 }[/math]数学问题简单。求数学 [math]\displaystyle{ L^1 }[/math] 范数最小的候选项可以相对容易地表示为一个线性规划，对于这种规划已经存在有效的求解方法。当测量值可能包含有限数量的噪声时，基追踪去噪优于线性规划，因为它保留了面对噪声时的稀疏性，并且可以比精确的线性程序更快地求解线性规划。
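
The linear-programming formulation can be sketched with SciPy's general-purpose `linprog` solver: minimize [math]\displaystyle{ \|x\|_1 }[/math] subject to [math]\displaystyle{ Ax=y }[/math], recast over auxiliary variables [math]\displaystyle{ t }[/math] with [math]\displaystyle{ -t \le x \le t }[/math]. The problem sizes and random measurement matrix below are illustrative assumptions for this sketch:

```python
import numpy as np
from scipy.optimize import linprog

# Basis pursuit as a linear program over z = [x, t]:
#   minimize sum(t)  subject to  A x = y,  -t <= x <= t.
rng = np.random.default_rng(3)
N, M, K = 60, 25, 3                       # signal length, measurements, sparsity
x_true = np.zeros(N)
x_true[rng.choice(N, K, replace=False)] = rng.standard_normal(K)
A = rng.standard_normal((M, N)) / np.sqrt(M)
y = A @ x_true

c = np.concatenate([np.zeros(N), np.ones(N)])      # objective: sum of t
A_ub = np.block([[np.eye(N), -np.eye(N)],          #  x - t <= 0
                 [-np.eye(N), -np.eye(N)]])        # -x - t <= 0
b_ub = np.zeros(2 * N)
A_eq = np.hstack([A, np.zeros((M, N))])            # A x = y
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=y,
              bounds=[(None, None)] * N + [(0, None)] * N)
x_hat = res.x[:N]
print(np.max(np.abs(x_hat - x_true)))   # typically tiny: exact recovery here
```

With 25 measurements of a length-60 signal having 3 nonzeros, this regime comfortably satisfies the usual recovery conditions, so the minimizer coincides with the sparse signal.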

=== Total variation based CS reconstruction ===

==== Motivation and applications ====

===== Role of TV regularization =====

Total variation can be seen as a non-negative real-valued functional defined on the space of real-valued functions (for the case of functions of one variable) or on the space of integrable functions (for the case of functions of several variables). For signals, especially, total variation refers to the integral of the absolute gradient of the signal. In signal and image reconstruction, it is applied as total variation regularization where the underlying principle is that signals with excessive details have high total variation and that removing these details, while retaining important information such as edges, would reduce the total variation of the signal and make the signal subject closer to the original signal in the problem.

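
The discrete version of this definition is easy to state in code; the toy step signal below is an arbitrary example:

```python
import numpy as np

# Discrete total variation of a 1-D signal: the sum of absolute
# differences between neighbouring samples (the discrete analogue of
# integrating the absolute gradient).
def total_variation(x):
    return float(np.sum(np.abs(np.diff(x))))

step = np.array([0.0, 0.0, 1.0, 1.0, 1.0])            # one clean edge
noisy = step + np.array([0.0, 0.05, -0.05, 0.05, 0.0])
print(total_variation(step))    # 1.0: a single jump
print(total_variation(noisy))   # 1.1: fine detail raises the TV
```

This is why penalizing TV removes small oscillations while a single sharp edge survives: the edge contributes the same amount to the TV either way.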

For the purpose of signal and image reconstruction, [math]\displaystyle{ l_1 }[/math] minimization models are used. Other approaches also include the least-squares method, as discussed earlier in this article. These methods are extremely slow and return a not-so-perfect reconstruction of the signal. The current CS regularization models attempt to address this problem by incorporating sparsity priors of the original image, one of which is the total variation (TV). Conventional TV approaches are designed to give piece-wise constant solutions. Some of these include (as discussed ahead) constrained [math]\displaystyle{ l_1 }[/math]-minimization, which uses an iterative scheme. This method, though fast, subsequently leads to over-smoothing of edges, resulting in blurred image edges.^{[13]} TV methods with iterative re-weighting have been implemented to reduce the influence of large gradient value magnitudes in the images. This has been used in computed tomography (CT) reconstruction as a method known as edge-preserving total variation. However, as gradient magnitudes are used for estimation of relative penalty weights between the data fidelity and regularization terms, this method is neither robust to noise and artifacts nor accurate enough for CS image/signal reconstruction and, therefore, fails to preserve smaller structures.


Recent progress on this problem involves using an iteratively directional TV refinement for CS reconstruction.^{[14]} This method has two stages: the first stage estimates and refines the initial orientation field, which is defined as a noisy point-wise initial estimate, through edge-detection, of the given image. In the second stage, the CS reconstruction model is presented by utilizing a directional TV regularizer. More details about these TV-based approaches – iteratively reweighted [math]\displaystyle{ l_1 }[/math] minimization, edge-preserving TV, and an iterative model using directional orientation field and TV – are provided below.


==== Existing approaches ====

===== Iteratively reweighted [math]\displaystyle{ l_{1} }[/math] minimization =====

Iteratively reweighted [math]\displaystyle{ l_{1} }[/math] minimization method for CS

In the CS reconstruction models using constrained [math]\displaystyle{ l_{1} }[/math] minimization,^{[15]} larger coefficients are penalized heavily in the [math]\displaystyle{ l_{1} }[/math] norm. It was proposed to have a weighted formulation of [math]\displaystyle{ l_{1} }[/math] minimization designed to more democratically penalize nonzero coefficients. An iterative algorithm is used for constructing the appropriate weights.^{[16]} Each iteration requires solving one [math]\displaystyle{ l_{1} }[/math] minimization problem by finding the local minimum of a concave penalty function that more closely resembles the [math]\displaystyle{ l_{0} }[/math] norm. An additional parameter, usually chosen to avoid any sharp transitions in the penalty function curve, is introduced into the iterative equation to ensure stability, so that a zero estimate in one iteration does not necessarily lead to a zero estimate in the next iteration. The method essentially involves using the current solution for computing the weights to be used in the next iteration.

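
The scheme can be sketched as follows, assuming a basis-pursuit subroutine built on SciPy's `linprog`; the problem sizes, the number of reweighting passes, and the stabilizing parameter `eps` are illustrative choices, not values from the cited work. Each pass solves a weighted [math]\displaystyle{ l_{1} }[/math] problem and recomputes the weights from the current solution, with `eps` keeping a coefficient that hits zero in one pass from being locked at zero in the next:

```python
import numpy as np
from scipy.optimize import linprog

# Weighted basis pursuit: minimize sum_i w_i |x_i| subject to A x = y,
# as a linear program over z = [x, t] with -t <= x <= t.
def weighted_bp(A, y, w):
    M, N = A.shape
    c = np.concatenate([np.zeros(N), w])
    A_ub = np.block([[np.eye(N), -np.eye(N)],
                     [-np.eye(N), -np.eye(N)]])
    A_eq = np.hstack([A, np.zeros((M, N))])
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(2 * N), A_eq=A_eq, b_eq=y,
                  bounds=[(None, None)] * N + [(0, None)] * N)
    return res.x[:N]

rng = np.random.default_rng(4)
N, M = 50, 20
x_true = np.zeros(N)
x_true[[3, 20, 41]] = [1.0, -0.5, 2.0]
A = rng.standard_normal((M, N)) / np.sqrt(M)
y = A @ x_true

w, eps = np.ones(N), 0.1
for _ in range(4):                 # a few reweighting passes
    x = weighted_bp(A, y, w)
    w = 1.0 / (np.abs(x) + eps)    # small coefficients get large weights
print(np.max(np.abs(x - x_true)))
```

The first pass (uniform weights) is plain basis pursuit; subsequent passes penalize the surviving large coefficients less, mimicking the [math]\displaystyle{ l_{0} }[/math] count more closely.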

====== Advantages and disadvantages ======

Early iterations may find inaccurate sample estimates; however, this method will down-sample these at a later stage to give more weight to the smaller non-zero signal estimates. One of the disadvantages is the need for defining a valid starting point, as a global minimum might not be obtained every time due to the concavity of the function. Another disadvantage is that this method tends to uniformly penalize the image gradient irrespective of the underlying image structures. This causes over-smoothing of edges, especially those of low contrast regions, subsequently leading to loss of low contrast information. The advantages of this method include: reduction of the sampling rate for sparse signals; reconstruction of the image while being robust to the removal of noise and other artifacts; and use of very few iterations. This can also help in recovering images with sparse gradients.


In the figure shown below, **P1** refers to the first-step of the iterative reconstruction process, of the projection matrix **P** of the fan-beam geometry, which is constrained by the data fidelity term. This may contain noise and artifacts as no regularization is performed. The minimization of **P1** is solved through the conjugate gradient least squares method. **P2** refers to the second step of the iterative reconstruction process wherein it utilizes the edge-preserving total variation regularization term to remove noise and artifacts, and thus improve the quality of the reconstructed image/signal. The minimization of **P2** is done through a simple gradient descent method. Convergence is determined by testing, after each iteration, for image positivity, by checking if [math]\displaystyle{ f^{k-1} = 0 }[/math] for the case when [math]\displaystyle{ f^{k-1} \lt 0 }[/math] (Note that [math]\displaystyle{ f }[/math] refers to the different x-ray linear attenuation coefficients at different voxels of the patient image).

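
The positivity check can be sketched as a projection step inside the gradient descent for **P2** (the numeric values below are illustrative, not from the referenced method):

```python
import numpy as np

# After each gradient-descent update of the reconstruction f, negative
# values (non-physical attenuation coefficients) are clipped to zero.
def positivity_step(f, grad, step_size):
    f_new = f - step_size * grad      # simple gradient-descent update
    f_new[f_new < 0] = 0.0            # enforce f >= 0 voxel-wise
    return f_new

f = np.array([0.2, 0.05, 0.5])
grad = np.array([-0.1, 1.0, 0.2])     # illustrative gradient values
print(positivity_step(f, grad, 0.1))  # second entry is clipped to 0
```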

===== Edge-preserving total variation (TV) based compressed sensing^{[13]} =====

Flow diagram for the edge-preserving total variation method for compressed sensing

This is an iterative CT reconstruction algorithm with edge-preserving TV regularization to reconstruct CT images from highly undersampled data obtained at low-dose CT through low current levels (milliampere). In order to reduce the imaging dose, one of the approaches used is to reduce the number of x-ray projections acquired by the scanner detectors. However, this insufficient projection data, which is used to reconstruct the CT image, can cause streaking artifacts. Furthermore, using these insufficient projections in standard TV algorithms ends up making the problem under-determined, thus leading to infinitely many possible solutions. In this method, an additional penalty weighted function is assigned to the original TV norm. This allows for easier detection of sharp discontinuities in intensity in the images and thereby adapts the weight to store the recovered edge information during the process of signal/image reconstruction. The parameter [math]\displaystyle{ \sigma }[/math] controls the amount of smoothing applied to the pixels at the edges to differentiate them from the non-edge pixels. The value of [math]\displaystyle{ \sigma }[/math] is changed adaptively based on the values of the histogram of the gradient magnitude so that a certain percentage of pixels have gradient values larger than [math]\displaystyle{ \sigma }[/math]. The edge-preserving total variation term thus becomes sparser, and this speeds up the implementation. A two-step iteration process known as the forward-backward splitting algorithm is used.^{[17]} The optimization problem is split into two sub-problems, which are then solved with the conjugate gradient least squares method^{[18]} and the simple gradient descent method respectively. The method is stopped when the desired convergence has been achieved or if the maximum number of iterations is reached.

这是一种带有保边TV正则化的迭代CT重构算法，用于从低剂量CT（低电流水平，毫安级）获得的高度欠采样数据中重构CT图像。为了减少成像剂量，采用的方法之一是减少扫描仪探测器获得的X射线投影数量。然而，用于重建CT图像的投影数据不足会导致条纹伪影。此外，在标准TV算法中使用这些不充分的投影最终会使问题变为欠定问题，从而导致无限多可能的解。该方法在原始TV范数的基础上增加了一个加权惩罚函数。这样可以更容易地检测图像中强度的明显不连续性，从而在信号/图像重建过程中自适应地调整权重以保存恢复的边缘信息。参数[math]\displaystyle{ \sigma }[/math]控制应用于边缘像素的平滑量，以区分它们与非边缘像素。[math]\displaystyle{ \sigma }[/math]的值根据梯度幅值直方图自适应地改变，使一定百分比的像素具有大于[math]\displaystyle{ \sigma }[/math]的梯度值。因此，保边的全变分项变得更稀疏，从而加快了实现速度。该方法使用称为前向-后向分裂算法的两步迭代过程。优化问题被分为两个子问题，然后分别用共轭梯度最小二乘法和简单梯度下降法求解。当达到所需的收敛性或达到最大迭代次数时，算法停止。
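
The adaptive choice of [math]\displaystyle{ \sigma }[/math] from the gradient-magnitude histogram can be sketched as follows; the exponential weighting function and the edge fraction below are illustrative assumptions, not the exact forms used in the cited method:

```python
import numpy as np

def edge_preserving_tv(image, edge_fraction=0.1):
    """Weighted TV norm with an adaptively chosen threshold sigma (sketch).

    sigma is read off the gradient-magnitude histogram so that a fixed
    fraction of pixels (`edge_fraction`, an illustrative assumption) have
    gradients above it; those pixels get a smaller penalty weight, which
    preserves edges while flat regions are smoothed normally.
    """
    gy, gx = np.gradient(image.astype(float))
    grad_mag = np.hypot(gx, gy)
    # sigma such that roughly `edge_fraction` of pixels exceed it
    sigma = np.quantile(grad_mag, 1.0 - edge_fraction)
    # penalty weight: ~1 on flat pixels, small on edge pixels
    weights = np.exp(-(grad_mag / (sigma + 1e-12)) ** 2)
    return float(np.sum(weights * grad_mag)), float(sigma)

# a vertical step edge: sigma lands on the edge gradients
demo_img = np.zeros((8, 8))
demo_img[:, 4:] = 1.0
tv_value, sigma = edge_preserving_tv(demo_img)
```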

===== Advantages and disadvantages =====

优缺点

Some of the disadvantages of this method are the absence of smaller structures in the reconstructed image and degradation of image resolution. This edge preserving TV algorithm, however, requires fewer iterations than the conventional TV algorithm.^{[13]} Analyzing the horizontal and vertical intensity profiles of the reconstructed images, it can be seen that there are sharp jumps at edge points and negligible, minor fluctuation at non-edge points. Thus, this method leads to low relative error and higher correlation as compared to the TV method. It also effectively suppresses and removes any form of image noise and image artifacts such as streaking.

该方法的缺点包括重构图像中缺少较小的结构以及图像分辨率的降低。然而，这种保边TV算法比传统TV算法需要更少的迭代次数。通过分析重构图像的水平和垂直强度分布可以看出，图像在边缘点处存在明显的跳变，而在非边缘点处只有可忽略的微小波动。因此，与TV方法相比，该方法具有相对误差小、相关性高的特点。它还能有效地抑制和消除任何形式的图像噪声和图像伪影，如条纹。

##### Iterative model using a directional orientation field and directional total variation^{[14]}

使用方向定场和方向总变化的迭代模型

To prevent over-smoothing of edges and texture details and to obtain a reconstructed CS image which is accurate and robust to noise and artifacts, this method is used. First, an initial estimate of the noisy point-wise orientation field of the image [math]\displaystyle{ I }[/math], [math]\displaystyle{ \hat{d} }[/math], is obtained. This noisy orientation field is defined so that it can be refined at a later stage to reduce the noise influences in orientation field estimation. A coarse orientation field estimation is then introduced based on the structure tensor, which is formulated as:^{[19]} [math]\displaystyle{ J_\rho(\nabla I_{\sigma}) = G_\rho * (\nabla I_{\sigma} \otimes \nabla I_{\sigma}) = \begin{pmatrix}J_{11} & J_{12}\\J_{12} & J_{22}\end{pmatrix} }[/math]. Here, [math]\displaystyle{ J_\rho }[/math] refers to the structure tensor associated with the image pixel point (i,j) having standard deviation [math]\displaystyle{ \rho }[/math]. [math]\displaystyle{ G }[/math] refers to the Gaussian kernel [math]\displaystyle{ (0, \rho ^2) }[/math] with standard deviation [math]\displaystyle{ \rho }[/math]. [math]\displaystyle{ \sigma }[/math] refers to the manually defined parameter for the image [math]\displaystyle{ I }[/math] below which the edge detection is insensitive to noise. [math]\displaystyle{ \nabla I_{\sigma} }[/math] refers to the gradient of the image [math]\displaystyle{ I }[/math] and [math]\displaystyle{ (\nabla I_{\sigma} \otimes \nabla I_{\sigma}) }[/math] refers to the tensor product obtained by using this gradient.

为了防止边缘和纹理细节过度平滑，并获得对噪声和伪影精确且鲁棒的重构CS图像，采用了这种方法。首先，得到图像[math]\displaystyle{ I }[/math]的带噪逐点方向场的初始估计[math]\displaystyle{ \hat{d} }[/math]。定义这个带噪方向场是为了在后续阶段对其进行细化，以减少方向场估计中的噪声影响。然后基于结构张量引入粗略的方向场估计，其公式为：[math]\displaystyle{ J_\rho(\nabla I_{\sigma}) = G_\rho * (\nabla I_{\sigma} \otimes \nabla I_{\sigma}) = \begin{pmatrix}J_{11} & J_{12}\\J_{12} & J_{22}\end{pmatrix} }[/math]。这里，[math]\displaystyle{ J_\rho }[/math]指与图像像素点(i,j)相关、标准差为[math]\displaystyle{ \rho }[/math]的结构张量；[math]\displaystyle{ G }[/math]指标准差为[math]\displaystyle{ \rho }[/math]的高斯核[math]\displaystyle{ (0, \rho ^2) }[/math]；[math]\displaystyle{ \sigma }[/math]指为图像[math]\displaystyle{ I }[/math]手动定义的参数，低于该参数时边缘检测对噪声不敏感；[math]\displaystyle{ \nabla I_{\sigma} }[/math]表示图像[math]\displaystyle{ I }[/math]的梯度，[math]\displaystyle{ (\nabla I_{\sigma} \otimes \nabla I_{\sigma}) }[/math]指由该梯度得到的张量积。

The structure tensor obtained is convolved with a Gaussian kernel [math]\displaystyle{ G }[/math] to improve the accuracy of the orientation estimate, with [math]\displaystyle{ \sigma }[/math] set to high values to account for the unknown noise levels. For every pixel (i,j) in the image, the structure tensor J is a symmetric and positive semi-definite matrix. Convolving all the pixels in the image with [math]\displaystyle{ G }[/math] gives orthonormal eigenvectors ω and υ of the [math]\displaystyle{ J }[/math] matrix. ω points in the direction of the dominant orientation having the largest contrast, and υ points in the direction of the structure orientation having the smallest contrast. The orientation field coarse initial estimation [math]\displaystyle{ \hat{d} }[/math] is defined as [math]\displaystyle{ \hat{d} }[/math] = υ. This estimate is accurate at strong edges. However, at weak edges or in regions with noise, its reliability decreases.

为了提高方向估计的精度，将所得结构张量与高斯核[math]\displaystyle{ G }[/math]进行卷积，并将[math]\displaystyle{ \sigma }[/math]设置为较高的值，以考虑未知的噪声水平。对于图像中的每个像素(i,j)，结构张量J是一个对称的半正定矩阵。用[math]\displaystyle{ G }[/math]对图像中的所有像素进行卷积，可得到[math]\displaystyle{ J }[/math]矩阵的标准正交特征向量ω和υ。ω指向对比度最大的主方向，υ指向对比度最小的结构方向。方向场的粗略初始估计[math]\displaystyle{ \hat{d} }[/math]被定义为[math]\displaystyle{ \hat{d} }[/math] = υ。该估计在强边缘处是准确的，但在弱边缘或有噪声的区域，其可靠性会降低。
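
A minimal numpy-only sketch of this coarse estimate, assuming the standard closed form for the eigenvector angle of a 2×2 structure tensor (the function names and the separable Gaussian helper are illustrative):

```python
import numpy as np

def gaussian_blur(img, std):
    """Separable Gaussian smoothing implemented with numpy only (helper)."""
    radius = max(1, int(3 * std))
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2.0 * std**2))
    k /= k.sum()
    out = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, out)

def coarse_orientation(image, sigma=1.0, rho=2.0):
    """Coarse orientation field d_hat from the structure tensor J_rho (sketch)."""
    gy, gx = np.gradient(gaussian_blur(image.astype(float), sigma))
    # components of G_rho * (grad I  (x)  grad I)
    J11 = gaussian_blur(gx * gx, rho)
    J12 = gaussian_blur(gx * gy, rho)
    J22 = gaussian_blur(gy * gy, rho)
    # eigenvector angle of the dominant orientation omega; the estimate
    # d_hat = upsilon is the perpendicular direction of least contrast
    theta = 0.5 * np.arctan2(2.0 * J12, J11 - J22)
    return np.stack([-np.sin(theta), np.cos(theta)])

# demo: a vertical step edge yields unit orientation vectors everywhere
demo = np.zeros((16, 16))
demo[:, 8:] = 1.0
d = coarse_orientation(demo)
```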

To overcome this drawback, a refined orientation model is defined in which the data term reduces the effect of noise and improves accuracy while the second penalty term with the L2-norm is a fidelity term which ensures accuracy of initial coarse estimation.

为了克服这一缺点，定义了一个精炼的方向模型，其中数据项减少了噪声的影响，提高了精度，而具有L2范数的第二个惩罚项是保真度项，可确保初始粗略估计的准确性。

This orientation field is introduced into the directional total variation optimization model for CS reconstruction through the equation: [math]\displaystyle{ min_\Chi\lVert \nabla \Chi \bullet d \rVert _{1} + \frac{\lambda}{2}\ \lVert Y - \Phi\Chi \rVert ^2_{2} }[/math]. [math]\displaystyle{ \Chi }[/math] is the objective signal which needs to be recovered. Y is the corresponding measurement vector, d is the iteratively refined orientation field, and [math]\displaystyle{ \Phi }[/math] is the CS measurement matrix. This method undergoes a few iterations ultimately leading to convergence. [math]\displaystyle{ \hat{d} }[/math] is the approximate orientation field estimation of the reconstructed image [math]\displaystyle{ X^{k-1} }[/math] from the previous iteration (in order to check for convergence and the subsequent optical performance, the previous iteration is used). For the two vector fields represented by [math]\displaystyle{ \Chi }[/math] and [math]\displaystyle{ d }[/math], [math]\displaystyle{ \Chi \bullet d }[/math] refers to the multiplication of the respective horizontal and vertical vector elements of [math]\displaystyle{ \Chi }[/math] and [math]\displaystyle{ d }[/math] followed by their subsequent addition. These equations are reduced to a series of convex minimization problems which are then solved with a combination of variable splitting and augmented Lagrangian (FFT-based fast solver with a closed form solution) methods.^{[14]} The augmented Lagrangian method is considered equivalent to the split Bregman iteration, which ensures convergence of this method. The orientation field d is defined as being equal to [math]\displaystyle{ (d_{h}, d_{v}) }[/math], where [math]\displaystyle{ d_{h}, d_{v} }[/math] define the horizontal and vertical estimates of [math]\displaystyle{ d }[/math].

通过公式[math]\displaystyle{ min_\Chi\lVert \nabla \Chi \bullet d \rVert _{1} + \frac{\lambda}{2}\ \lVert Y - \Phi\Chi \rVert ^2_{2} }[/math]，将这个方向场引入CS重构的方向全变分优化模型。[math]\displaystyle{ \Chi }[/math]是需要恢复的目标信号，Y是对应的测量向量，d是迭代细化的方向场，[math]\displaystyle{ \Phi }[/math]是CS测量矩阵。该方法经过若干次迭代最终达到收敛。[math]\displaystyle{ \hat{d} }[/math]是由上一次迭代的重构图像[math]\displaystyle{ X^{k-1} }[/math]得到的方向场近似估计（为了检查收敛性和随后的光学性能，使用上一次迭代的结果）。对于由[math]\displaystyle{ \Chi }[/math]和[math]\displaystyle{ d }[/math]表示的两个向量场，[math]\displaystyle{ \Chi \bullet d }[/math]表示[math]\displaystyle{ \Chi }[/math]和[math]\displaystyle{ d }[/math]各自的水平和垂直向量元素相乘后再相加。这些方程被化简为一系列凸极小化问题，然后结合变量分裂和增广拉格朗日方法（基于FFT、具有闭式解的快速求解器）来求解。增广拉格朗日方法被认为等价于分裂Bregman迭代，从而保证了该方法的收敛性。方向场d被定义为[math]\displaystyle{ (d_{h}, d_{v}) }[/math]，其中[math]\displaystyle{ d_{h}, d_{v} }[/math]分别为[math]\displaystyle{ d }[/math]的水平和垂直估计。
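
As a concrete reading of the objective above, the sketch below evaluates the directional TV term plus the data-fidelity term; the function name and the assumption that Φ acts on the flattened image are illustrative:

```python
import numpy as np

def directional_tv_objective(X, Y, Phi, d, lam):
    """Evaluate ||(grad X) . d||_1 + (lam/2) * ||Y - Phi X||_2^2 (sketch).

    The dot product multiplies the horizontal and vertical gradient
    components of X by the matching components of the orientation field
    d = (d_h, d_v) and adds them, as described above.
    """
    gy, gx = np.gradient(X.astype(float))
    d_h, d_v = d
    directional_grad = gx * d_h + gy * d_v
    fidelity = 0.5 * lam * np.sum((Y - Phi @ X.ravel()) ** 2)
    return float(np.abs(directional_grad).sum() + fidelity)

# a constant image has zero gradient, so only the data term can contribute
X = np.ones((4, 4))
Phi = np.eye(16)
Y = Phi @ X.ravel()
obj = directional_tv_objective(X, Y, Phi, (np.zeros((4, 4)), np.ones((4, 4))), lam=1.0)
```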

Augmented Lagrangian method for orientation field and iterative directional field refinement models

方向场的增广拉格朗日方法及迭代方向场求精模型

The Augmented Lagrangian method for the orientation field, [math]\displaystyle{ min_\Chi\lVert \nabla \Chi \bullet d \rVert _{1} + \frac{\lambda}{2}\ \lVert Y - \Phi\Chi \rVert ^2_{2} }[/math], involves initializing [math]\displaystyle{ d_{h}, d_{v}, H, V }[/math] and then finding the approximate minimizer of [math]\displaystyle{ L_{1} }[/math] with respect to these variables. The Lagrangian multipliers are then updated, and the iterative process is stopped when convergence is achieved. For the iterative directional total variation refinement model, the augmented Lagrangian method involves initializing [math]\displaystyle{ \Chi, P, Q, \lambda_{P}, \lambda_{Q} }[/math].^{[20]}

方向场的增广拉格朗日方法，[math]\displaystyle{ min_\Chi\lVert \nabla \Chi \bullet d \rVert _{1} + \frac{\lambda}{2}\ \lVert Y - \Phi\Chi \rVert ^2_{2} }[/math]，包括初始化[math]\displaystyle{ d_{h}, d_{v}, H, V }[/math]，然后求出[math]\displaystyle{ L_{1} }[/math]关于这些变量的近似极小值。随后更新拉格朗日乘子，当达到收敛时停止迭代过程。对于迭代方向全变分细化模型，增广拉格朗日方法包括初始化[math]\displaystyle{ \Chi, P, Q, \lambda_{P}, \lambda_{Q} }[/math]。

Here, [math]\displaystyle{ H, V, P, Q }[/math] are newly introduced variables, where [math]\displaystyle{ H }[/math] = [math]\displaystyle{ \nabla d_{h} }[/math], [math]\displaystyle{ V }[/math] = [math]\displaystyle{ \nabla d_{v} }[/math], [math]\displaystyle{ P }[/math] = [math]\displaystyle{ \nabla \Chi }[/math], and [math]\displaystyle{ Q }[/math] = [math]\displaystyle{ P \bullet d }[/math]. [math]\displaystyle{ \lambda_{H}, \lambda_{V}, \lambda_{P}, \lambda_{Q} }[/math] are the Lagrangian multipliers for [math]\displaystyle{ H, V, P, Q }[/math]. For each iteration, the approximate minimizer of [math]\displaystyle{ L_{2} }[/math] with respect to the variables ([math]\displaystyle{ \Chi, P, Q }[/math]) is calculated. As in the field refinement model, the Lagrangian multipliers are updated, and the iterative process is stopped when convergence is achieved.

在这里，[math]\displaystyle{ H, V, P, Q }[/math]是新引入的变量，其中[math]\displaystyle{ H }[/math] = [math]\displaystyle{ \nabla d_{h} }[/math]，[math]\displaystyle{ V }[/math] = [math]\displaystyle{ \nabla d_{v} }[/math]，[math]\displaystyle{ P }[/math] = [math]\displaystyle{ \nabla \Chi }[/math]，[math]\displaystyle{ Q }[/math] = [math]\displaystyle{ P \bullet d }[/math]。[math]\displaystyle{ \lambda_{H}, \lambda_{V}, \lambda_{P}, \lambda_{Q} }[/math]是[math]\displaystyle{ H, V, P, Q }[/math]的拉格朗日乘子。对于每次迭代，计算[math]\displaystyle{ L_{2} }[/math]关于变量[math]\displaystyle{ \Chi, P, Q }[/math]的近似极小值。与场细化模型一样，更新拉格朗日乘子，当达到收敛时停止迭代过程。

For the orientation field refinement model, the Lagrangian multipliers are updated in the iterative process as follows:

对于方向场求精模型，拉格朗日乘数在迭代过程中进行如下更新：

[math]\displaystyle{ (\lambda_{H})^k = (\lambda_{H})^{k-1} + \gamma_{H}(H^k - \nabla (d_{h})^k) }[/math]

[math]\displaystyle{ (\lambda_{V})^k = (\lambda_{V})^{k-1} + \gamma_{V}(V^k - \nabla (d_{v})^k) }[/math]

For the iterative directional total variation refinement model, the Lagrangian multipliers are updated as follows:

对于迭代方向全变分精化模型，拉格朗日乘数更新如下:

[math]\displaystyle{ (\lambda_{P})^k = (\lambda_{P})^{k-1} + \gamma_{P}(P^k - \nabla (\Chi)^k) }[/math]

[math]\displaystyle{ (\lambda_{Q})^k = (\lambda_{Q})^{k-1} + \gamma_{Q}(Q^k - P^{k} \bullet d) }[/math]

Here, [math]\displaystyle{ \gamma_{H}, \gamma_{V}, \gamma_{P}, \gamma_{Q} }[/math] are positive constants.

在这里，[math]\displaystyle{ \gamma_{H}, \gamma_{V}, \gamma_{P}, \gamma_{Q} }[/math]都是正常数。
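
All four multiplier updates above share one form, which can be sketched as a single step; the function and variable names here are illustrative:

```python
import numpy as np

def multiplier_step(lmbd_prev, gamma, primal, target):
    """One augmented-Lagrangian multiplier update of the common form
    lambda^k = lambda^{k-1} + gamma * (primal^k - target^k),
    matching each of the four updates above (e.g. primal = P^k and
    target = grad(X)^k for lambda_P). gamma must be a positive constant."""
    if gamma <= 0:
        raise ValueError("gamma must be positive")
    return lmbd_prev + gamma * (primal - target)

# once the constraint P = grad(X) holds, the multiplier stops changing
P = np.ones(4)
grad_X = np.ones(4)
lmbd = multiplier_step(np.zeros(4), 0.5, P, grad_X)
```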

=====Advantages and disadvantages=====

优点和缺点

Based on peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM) metrics and known ground-truth images for testing performance, it is concluded that iterative directional total variation has a better reconstructed performance than the non-iterative methods in preserving edge and texture areas. The orientation field refinement model plays a major role in this improvement in performance as it increases the number of directionless pixels in the flat area while enhancing the orientation field consistency in the regions with edges.

基于峰值信噪比（PSNR）和结构相似性指数（SSIM）度量标准以及用于测试性能的已知真实图像，可以得出结论：迭代方向全变分方法在保持边缘和纹理区域方面比非迭代方法具有更好的重构性能。方向场细化模型在性能提升中起着重要作用，因为它增加了平坦区域中无方向像素的数量，同时提高了边缘区域中方向场的一致性。

==Applications==

应用领域

The field of compressive sensing is related to several topics in signal processing and computational mathematics, such as underdetermined linear systems, group testing, heavy hitters, sparse coding, multiplexing, sparse sampling, and finite rate of innovation. Its broad scope and generality have enabled several innovative CS-enhanced approaches in signal processing and compression, solution of inverse problems, design of radiating systems, radar and through-the-wall imaging, and antenna characterization.^{[21]} Imaging techniques having a strong affinity with compressive sensing include coded aperture and computational photography.

压缩感知领域涉及信号处理和计算数学中的多个主题，例如欠定线性系统、组测试（group testing）、频繁项（heavy hitters）、稀疏编码、多路复用、稀疏采样和有限新息率。它的广泛性和通用性使得信号处理与压缩、逆问题求解、辐射系统设计、雷达和穿墙成像以及天线表征等领域出现了多种创新的CS增强方法。与压缩感知密切相关的成像技术包括编码孔径和计算摄影。

Conventional CS reconstruction uses sparse signals (usually sampled at a rate less than the Nyquist sampling rate) for reconstruction through constrained [math]\displaystyle{ l_{1} }[/math] minimization. One of the earliest applications of such an approach was in reflection seismology, which used sparse reflected signals from band-limited data for tracking changes between sub-surface layers.^{[22]} When the LASSO model came into prominence in the 1990s as a statistical method for selection of sparse models,^{[23]} this method was further used in computational harmonic analysis for sparse signal representation from over-complete dictionaries. Some of the other applications include incoherent sampling of radar pulses. The work by *Boyd et al.*^{[15]} has applied the LASSO model (for selection of sparse models) to analog-to-digital converters (the current ones use a sampling rate higher than the Nyquist rate along with the quantized Shannon representation). This would involve a parallel architecture in which the polarity of the analog signal changes at a high rate, followed by digitizing the integral at the end of each time-interval to obtain the converted digital signal.

传统的CS重构使用稀疏信号（通常采样率小于Nyquist采样率），通过约束[math]\displaystyle{ l_{1} }[/math]最小化进行重构。这种方法最早的应用之一是在反射地震学中，它使用来自带限数据的稀疏反射信号来跟踪地下层之间的变化。当LASSO模型在20世纪90年代作为一种选择稀疏模型的统计方法进入人们的视野时，这种方法被进一步应用于计算调和分析中，用于从过完备字典中表示稀疏信号。其他一些应用包括雷达脉冲的非相干采样。Boyd等人的研究将LASSO模型（用于选择稀疏模型）应用于模数转换器（目前的转换器采用高于Nyquist采样率的采样率以及量化的Shannon表示）。这将涉及一种并行架构：模拟信号的极性以高速率变化，然后在每个时间间隔结束时将积分数字化，以获得转换后的数字信号。
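
As a concrete illustration of constrained [math]\displaystyle{ l_{1} }[/math] recovery, the sketch below solves the LASSO objective with the iterative soft-thresholding algorithm (ISTA); the dimensions, sparsity level, and regularization weight are arbitrary demo choices, not values from the cited works:

```python
import numpy as np

def ista(Phi, y, lam=0.05, iters=500):
    """min_x 0.5*||y - Phi x||^2 + lam*||x||_1 via iterative soft thresholding."""
    x = np.zeros(Phi.shape[1])
    step = 1.0 / np.linalg.norm(Phi, 2) ** 2  # 1/L, L = Lipschitz const of the gradient
    for _ in range(iters):
        z = x - step * (Phi.T @ (Phi @ x - y))  # gradient step on the data term
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft threshold
    return x

# 3-sparse signal of length 100 recovered from 40 random Gaussian measurements
rng = np.random.default_rng(0)
Phi = rng.standard_normal((40, 100)) / np.sqrt(40)
x_true = np.zeros(100)
x_true[[5, 37, 80]] = [1.0, -2.0, 1.5]
x_hat = ista(Phi, Phi @ x_true)
```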

===Photography===

摄影

Compressed sensing is used in a mobile phone camera sensor. The approach allows a reduction in image acquisition energy per image by as much as a factor of 15 at the cost of complex decompression algorithms; the computation may require an off-device implementation.^{[24]}

压缩感知被用于手机相机传感器中。该方法以复杂的解压缩算法为代价，可将每张图像的采集能耗降低多达15倍；相关计算可能需要在设备之外完成。

Compressed sensing is used in single-pixel cameras from Rice University.^{[25]} Bell Labs employed the technique in a lensless single-pixel camera that takes stills using repeated snapshots of randomly chosen apertures from a grid. Image quality improves with the number of snapshots, and generally requires a small fraction of the data of conventional imaging, while eliminating lens/focus-related aberrations.^{[26]}^{[27]}

莱斯大学的单像素相机使用了压缩感知。贝尔实验室在一个无透镜的单像素相机中使用了这项技术，该相机使用从网格中随机选择的光圈的重复快照来拍摄静态照片。成像质量随着快照数量的增加而提高，通常只需要传统成像数据的一小部分，同时消除了与镜头 / 焦点有关的像差。

===Holography===

全息摄影

Compressed sensing can be used to improve image reconstruction in holography by increasing the number of voxels one can infer from a single hologram.^{[28]}^{[29]}^{[30]} It is also used for image retrieval from undersampled measurements in optical^{[31]}^{[32]} and millimeter-wave^{[33]} holography.

通过增加可以从单个全息图推断出的体素数量，压缩感知可用于改善全息术中的图像重构。它也用于光学和毫米波全息术中从欠采样测量进行图像恢复。

===Facial recognition===

面部识别

Compressed sensing is being used in facial recognition applications.^{[34]}

压缩感知正被用于面部识别应用。

===Magnetic resonance imaging===

磁共振成像

Compressed sensing has been used^{[35]}^{[36]} to shorten magnetic resonance imaging scanning sessions on conventional hardware.^{[37]}^{[38]}^{[39]} Reconstruction methods include

压缩感知已经被用来缩短传统硬件上的磁共振成像扫描时间。重构方法包括

- ISTA
- FISTA
- SISTA
- ePRESS^{[40]}
- EWISTA^{[41]}
- EWISTARS^{[42]}, etc.

Compressed sensing addresses the issue of high scan time by enabling faster acquisition by measuring fewer Fourier coefficients. This produces a high-quality image with relatively lower scan time. Another application (also discussed ahead) is for CT reconstruction with fewer X-ray projections. Compressed sensing, in this case, removes the high spatial gradient parts – mainly, image noise and artifacts. This holds tremendous potential as one can obtain high-resolution CT images at low radiation doses (through lower current-mA settings).^{[43]}

压缩感知通过测量更少的傅里叶系数来实现更快的采集，从而解决了扫描时间过长的问题。这可以在相对较短的扫描时间内产生高质量图像。另一个应用（后文亦有讨论）是用较少的X射线投影进行CT重构。在这种情况下，压缩感知可消除高空间梯度部分，主要是图像噪声和伪影。这具有巨大的潜力，因为可以在低辐射剂量下（通过较低的毫安电流设置）获得高分辨率CT图像。
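
A toy model of measuring fewer Fourier coefficients: a random mask keeps only part of the spectrum, and proximal-gradient soft thresholding recovers the signal. It assumes, for simplicity, a signal sparse directly in the sample domain (practical CS-MRI uses a sparsifying transform such as wavelets); all sizes and parameters are illustrative:

```python
import numpy as np

def partial_fourier_recon(y, mask, lam=0.02, iters=200):
    """Recover a sample-sparse signal from undersampled Fourier data
    y = mask * FFT(x) by proximal gradient descent (sketch)."""
    n = mask.size
    x = np.zeros(n)
    step = 1.0 / n  # the operator mask*FFT has squared norm at most n
    for _ in range(iters):
        resid = mask * np.fft.fft(x) - y
        grad = np.real(np.fft.ifft(mask * resid)) * n  # adjoint: F^H = n * ifft
        z = x - step * grad
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)
    return x

# measure only 32 of 64 Fourier coefficients of a 3-sparse signal
rng = np.random.default_rng(1)
n = 64
x_true = np.zeros(n)
x_true[[3, 17, 40]] = [2.0, -1.5, 1.0]
mask = np.zeros(n)
mask[rng.choice(n, size=32, replace=False)] = 1.0
y = mask * np.fft.fft(x_true)
x_hat = partial_fourier_recon(y, mask)
```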

===Network tomography===

网络断层扫描

Compressed sensing has shown outstanding results in the application of network tomography to network management. Network delay estimation and network congestion detection can both be modeled as underdetermined systems of linear equations where the coefficient matrix is the network routing matrix. Moreover, in the Internet, network routing matrices usually satisfy the criterion for using compressed sensing.^{[44]}

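As a toy version of the delay-estimation setup described above, the sketch below builds a hypothetical 0/1 routing matrix (monitored paths × links), measures end-to-end path delays, and recovers the sparse vector of per-link congestion delays with orthogonal matching pursuit. The network size, paths-per-link density, and the choice of a greedy solver are illustrative assumptions, not a statement about any particular network.

```python
import numpy as np

rng = np.random.default_rng(1)
n_links, n_paths, k = 100, 40, 3     # links, monitored paths, congested links

# Hypothetical routing matrix: each monitored path traverses 10 random links
A = np.zeros((n_paths, n_links))
for p in range(n_paths):
    A[p, rng.choice(n_links, 10, replace=False)] = 1.0

# Sparse "extra delay" vector: only the k congested links add delay
d_true = np.zeros(n_links)
d_true[rng.choice(n_links, k, replace=False)] = rng.uniform(5.0, 10.0, k)
y = A @ d_true                        # end-to-end path delay measurements

col_norms = np.linalg.norm(A, axis=0)
col_norms[col_norms == 0] = 1.0       # guard against links no path traverses

# Orthogonal matching pursuit: greedily add the link that best explains the
# residual, then re-fit the delays by least squares on the selected links
support, r = [], y.copy()
for _ in range(k):
    scores = np.abs(A.T @ r) / col_norms
    scores[support] = -1.0            # never re-select a chosen link
    support.append(int(np.argmax(scores)))
    coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    r = y - A[:, support] @ coef

d_hat = np.zeros(n_links)
d_hat[support] = coef
print("estimated congested links:", sorted(support))
```

The key point is that 40 path measurements suffice to localize congestion among 100 links precisely because the unknown delay vector is sparse; a generic dense vector could not be recovered from an underdetermined system like this.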

===Shortwave-infrared cameras===


Commercial shortwave-infrared cameras based upon compressed sensing are available.^{[45]} These cameras have light sensitivity from 0.9 µm to 1.7 µm, which are wavelengths invisible to the human eye.



===Aperture synthesis in radio astronomy===

In the field of radio astronomy, compressed sensing has been proposed for deconvolving an interferometric image.^{[46]} In fact, the Högbom CLEAN algorithm, which has been in use for the deconvolution of radio images since 1974, is similar to compressed sensing's matching pursuit algorithm.


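The kinship between Högbom CLEAN and matching pursuit is easy to see in code. The toy 1D sketch below is illustrative only (the source positions, the Gaussian "dirty beam", and the loop gain are arbitrary choices): it deconvolves a "dirty image" by repeatedly finding the brightest residual peak and subtracting a scaled, shifted copy of the beam, which is exactly matching pursuit over a dictionary of shifted point-spread functions.

```python
import numpy as np

n = 128

# Hypothetical "sky": a few well-separated point sources (a sparse signal)
sky = np.zeros(n)
sky[[20, 50, 90]] = [3.0, 2.0, 1.5]

# Point-spread function ("dirty beam"), a Gaussian normalized to unit peak
psf = np.exp(-0.5 * (np.arange(-16, 17) / 3.0) ** 2)

def convolve_same(x, h):
    return np.convolve(x, h, mode="same")

dirty = convolve_same(sky, psf)       # the observed "dirty image"

# Högbom-CLEAN-style greedy loop = matching pursuit with shifted-PSF atoms
model, residual, gain = np.zeros(n), dirty.copy(), 0.2
for _ in range(300):
    p = int(np.argmax(np.abs(residual)))   # brightest residual peak
    amp = gain * residual[p]               # loop gain < 1 for stability
    model[p] += amp
    impulse = np.zeros(n)
    impulse[p] = amp
    residual -= convolve_same(impulse, psf)  # subtract scaled, shifted beam
    if np.max(np.abs(residual)) < 1e-3:
        break

print(np.flatnonzero(model > 0.5))    # recovered source positions: [20 50 90]
```

The accumulated delta components in `model` converge to the true point sources; the small loop gain mirrors CLEAN practice, trading iterations for robustness when the beam has sidelobes.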

===Transmission electron microscopy===


Compressed sensing combined with a moving aperture has been used to increase the acquisition rate of images in a transmission electron microscope.^{[47]} In scanning mode, compressive sensing combined with random scanning of the electron beam has enabled both faster acquisition and less electron dose, which allows for imaging of electron beam sensitive materials.^{[48]}


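The dose and speed gains of compressive scanning follow directly from the number of probe positions visited. The sketch below is only a back-of-the-envelope illustration (the frame size and sampling fraction are arbitrary assumptions): it draws a random scan mask and reports the resulting acquisition speed-up, which equals the dose reduction, since both scale with the number of measured positions.

```python
import numpy as np

rng = np.random.default_rng(3)
ny, nx, fraction = 256, 256, 0.2   # frame size; fraction of positions visited

# Random scan mask: the beam visits only ~20% of the pixel positions
mask = rng.random((ny, nx)) < fraction
measured = int(mask.sum())

# Acquisition time and total electron dose both scale with visited positions
speedup = (ny * nx) / measured
print(f"measured {measured} of {ny * nx} positions, "
      f"~{speedup:.1f}x faster acquisition and lower dose")
```

The unvisited pixels are then filled in by a sparse reconstruction, which is what makes the reduced dose usable for beam-sensitive materials.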

## See also

- Noiselet

- Sparse coding

- Low-density parity-check code

- Compressed sensing in speech signals


## References

- ↑ CS: Compressed Genotyping, DNA Sudoku – Harnessing high throughput sequencing for multiplexed specimen analysis.
- ↑ Donoho, David L. (2006). "For most large underdetermined systems of linear equations the minimal 1-norm solution is also the sparsest solution". *Communications on Pure and Applied Mathematics*. **59**(6): 797–829. doi:10.1002/cpa.20132.
- ↑ M. Davenport, "The Fundamentals of Compressive Sensing", SigView, April 12, 2013.
- ↑ Candès, Emmanuel J.; Romberg, Justin K.; Tao, Terence (2006). "Stable signal recovery from incomplete and inaccurate measurements". *Communications on Pure and Applied Mathematics*. **59**(8): 1207–1223. arXiv:math/0503066. doi:10.1002/cpa.20124.
- ↑ Donoho, D.L. (2006). "Compressed sensing". *IEEE Transactions on Information Theory*. **52**(4): 1289–1306. doi:10.1109/TIT.2006.871582.
- ↑ List of L1 regularization ideas from Vivek Goyal, Alyson Fletcher, Sundeep Rangan, "The Optimistic Bayesian: Replica Method Analysis of Compressed Sensing".
- ↑ Hayes, Brian (2009). "The Best Bits". *American Scientist*. **97**(4): 276. doi:10.1511/2009.79.276.
- ↑ Tibshirani, Robert. "Regression shrinkage and selection via the lasso". *Journal of the Royal Statistical Society, Series B*. **58**(1): 267–288.
- ↑ Chen, Scott Shaobing; Donoho, David L.; Saunders, Michael A. "Atomic decomposition by basis pursuit". *SIAM Journal on Scientific Computing*.
- ↑ Candès, Emmanuel J.; Romberg, Justin K.; Tao, Terence (2006). "Robust Uncertainty Principles: Exact Signal Reconstruction from Highly Incomplete Fourier Information". *IEEE Trans. Inf. Theory*. **52**(2): 489–509. arXiv:math/0409186. doi:10.1109/tit.2005.862083.
- ↑ Candès, E.J.; Wakin, M.B. "An Introduction To Compressive Sampling". *IEEE Signal Processing Magazine*, V.21, March 2008.
- ↑ L1-MAGIC is a collection of MATLAB routines.
- ↑ Tian, Z.; Jia, X.; Yuan, K.; Pan, T.; Jiang, S. B. (2011). "Low-dose CT reconstruction via edge preserving total variation regularization". *Phys Med Biol*. **56**(18): 5949–5967. arXiv:1009.2288. doi:10.1088/0031-9155/56/18/011. PMC 4026331. PMID 21860076.
- ↑ Xuan Fei; Zhihui Wei; Liang Xiao (2013). "Iterative Directional Total Variation Refinement for Compressive Sensing Image Reconstruction". *IEEE Signal Processing Letters*. **20**(11): 1070–1073. doi:10.1109/LSP.2013.2280571.
- ↑ Candès, E. J.; Wakin, M. B.; Boyd, S. P. (2008). "Enhancing sparsity by reweighted l1 minimization". *J. Fourier Anal. Applicat*. **14**(5–6): 877–905. arXiv:0711.1612. doi:10.1007/s00041-008-9045-x.
- ↑ Lange, K. *Optimization*, Springer Texts in Statistics. Springer, New York (2004).
- ↑ Combettes, P.; Wajs, V. (2005). "Signal recovery by proximal forward-backward splitting". *Multiscale Model Simul*. **4**(4): 1168–1200. doi:10.1137/050626090.
- ↑ Hestenes, M.; Stiefel, E. (1952). "Methods of conjugate gradients for solving linear systems". *Journal of Research of the National Bureau of Standards*. **49**(6): 409–436. doi:10.6028/jres.049.044.
- ↑ Brox, T.; Weickert, J.; Burgeth, B.; Mrázek, P. (2006). "Nonlinear structure tensors". *Image Vis. Comput*. **24**(1): 41–55. doi:10.1016/j.imavis.2005.09.010.
- ↑ Goldluecke, B.; Strekalovskiy, E.; Cremers, D. (2012). "The natural vectorial total variation which arises from geometric measure theory". *SIAM J. Imaging Sci*. **5**(2): 537–563. doi:10.1137/110823766.
- ↑ Andrea Massa; Paolo Rocca; Giacomo Oliveri (2015). "Compressive Sensing in Electromagnetics – A Review". *IEEE Antennas and Propagation Magazine*. **57**(1): 224–238. doi:10.1109/MAP.2015.2397092.
- ↑ Taylor, H.L.; Banks, S.C.; McCoy, J.F. (1979). "Deconvolution with the 1 norm". *Geophysics*. **44**(1): 39–52. doi:10.1190/1.1440921.
- ↑ Tibshirani, R. (1996). "Regression shrinkage and selection via the lasso". *J. R. Stat. Soc. B*. **58**(1): 267–288. doi:10.1111/j.2517-6161.1996.tb02080.x.
- ↑ David Schneider (March 2013). "New Camera Chip Captures Only What It Needs". *IEEE Spectrum*. Retrieved 2013-03-20.
- ↑ "Compressive Imaging: A New Single-Pixel Camera". *Rice DSP*. Archived from the original on 2010-06-05. Retrieved 2013-06-04.
- ↑ "Bell Labs Invents Lensless Camera". *MIT Technology Review*. 2013-05-25. Retrieved 2013-06-04.
- ↑ Gang Huang; Hong Jiang; Kim Matthews; Paul Wilford (2013). "Lensless Imaging by Compressive Sensing". 2013 IEEE International Conference on Image Processing. pp. 2101–2105. arXiv:1305.7181. doi:10.1109/ICIP.2013.6738433. ISBN 978-1-4799-2341-0.
- ↑ Brady, David; Choi, Kerkil; Marks, Daniel; Horisaki, Ryoichi; Lim, Sehoon (2009). "Compressive holography". *Optics Express*. **17**(15): 13040–13049. doi:10.1364/oe.17.013040. PMID 19654708.
- ↑ Rivenson, Y.; Stern, A.; Javidi, B. (2010). "Compressive Fresnel holography". *Journal of Display Technology*. **6**(10): 506–509. doi:10.1109/jdt.2010.2042276.
- ↑ Denis, Loïc; Lorenz, Dirk; Thiébaut, Éric; Fournier, Corinne; Trede, Dennis (2009). "Inline hologram reconstruction with sparsity constraints". *Opt. Lett*. **34**(22): 3475–3477. doi:10.1364/ol.34.003475. PMID 19927182.
- ↑ Marim, M.; Angelini, E.; Olivo-Marin, J. C.; Atlan, M. (2011). "Off-axis compressed holographic microscopy in low-light conditions". *Optics Letters*. **36**(1): 79–81. arXiv:1101.1735. doi:10.1364/ol.36.000079. PMID 21209693.
- ↑ Marim, M. M.; Atlan, M.; Angelini, E.; Olivo-Marin, J. C. (2010). "Compressed sensing with off-axis frequency-shifting holography". *Optics Letters*. **35**(6): 871–873. arXiv:1004.5305. doi:10.1364/ol.35.000871. PMID 20237627.
- ↑ Fernandez Cull, Christy; Wikner, David A.; Mait, Joseph N.; Mattheiss, Michael; Brady, David J. (2010). "Millimeter-wave compressive holography". *Appl. Opt*. **49**(19): E67–E82. doi:10.1364/ao.49.000e67. PMID 20648123.
- ↑ "Engineers Test Highly Accurate Face Recognition".
- ↑ Lustig, Michael (2007). "Sparse MRI: The application of compressed sensing for rapid MR imaging". *Magnetic Resonance in Medicine*. **58**(6): 1182–1195. doi:10.1002/mrm.21391.
- ↑ Lustig, M.; Donoho, D.L.; Santos, J.M.; Pauly, J.M. (2008). "Compressed Sensing MRI". *IEEE Signal Processing Magazine*. **25**(2): 72–82. doi:10.1109/MSP.2007.914728.
- ↑ Jordan Ellenberg (2010-03-04). "Fill in the Blanks: Using Math to Turn Lo-Res Datasets Into Hi-Res Samples". *Wired*. **18**(3). Retrieved 2013-06-04.
- ↑ "Why Compressed Sensing is NOT a CSI 'Enhance' technology ... yet!"
- ↑ "Surely You Must Be Joking Mr. Screenwriter".
- ↑ Zhang, Y.; Peterson, B. (2014). "Energy Preserved Sampling for Compressed Sensing MRI". *Computational and Mathematical Methods in Medicine*. **2014**: 546814. arXiv:1501.03915. doi:10.1155/2014/546814. PMC 4058219. PMID 24971155.
- ↑ Zhang, Y. (2015). "Exponential Wavelet Iterative Shrinkage Thresholding Algorithm for Compressed Sensing Magnetic Resonance Imaging". *Information Sciences*. **322**: 115–132. doi:10.1016/j.ins.2015.06.017.
- ↑ Zhang, Y.; Wang, S. (2015). "Exponential Wavelet Iterative Shrinkage Thresholding Algorithm with Random Shift for Compressed Sensing Magnetic Resonance Imaging". *IEEJ Transactions on Electrical and Electronic Engineering*. **10**(1): 116–117. doi:10.1002/tee.22059.
- ↑ Figueiredo, M.; Bioucas-Dias, J.M.; Nowak, R.D. (2007). "Majorization–minimization algorithms for wavelet-based image restoration". *IEEE Trans. Image Process*. **16**(12): 2980–2991. doi:10.1109/tip.2007.909318. PMID 18092597.
- ↑ "Network tomography via compressed sensing". http://www.ee.washington.edu/research/funlab/Publications/2010/CS-Tomo.pdf
- ↑ "InView web site". *inviewcorp.com*.
- ↑ "Compressed sensing imaging techniques for radio interferometry".
- ↑ Stevens, Andrew; Kovarik, Libor; Abellan, Patricia; Yuan, Xin; Carin, Lawrence; Browning, Nigel D. (2015). "Applying compressive sensing to TEM video: a substantial frame rate increase on any camera". *Advanced Structural and Chemical Imaging*. **1**(1). doi:10.1186/s40679-015-0009-3.
- ↑ Kovarik, L.; Stevens, A.; Liyu, A.; Browning, N. D. (2016). "Implementing an accurate and rapid sparse sampling approach for low-dose atomic resolution STEM imaging". *Applied Physics Letters*. **109**(16): 164102. doi:10.1063/1.4965720.

## Further reading

- "The Fundamentals of Compressive Sensing" Part 1, Part 2 and Part 3: video tutorial by Mark Davenport, Georgia Tech. at SigView, the IEEE Signal Processing Society Tutorial Library.

- Using Math to Turn Lo-Res Datasets Into Hi-Res Samples – Wired Magazine article

- Compressed Sensing Makes Every Pixel Count – article in the AMS *What's Happening in the Mathematical Sciences* series

Category:Information theory

Category:Signal estimation

Category:Linear algebra

Category:Mathematical optimization

This page was moved from wikipedia:en:Compressed sensing. Its edit history can be viewed at 压缩感知/edithistory