An underdetermined system of linear equations has more unknowns than equations and generally has infinitely many solutions. The figure below shows such an equation system <math> \mathbf{y}=D\mathbf{x} </math>, where we want to find a solution for <math> \mathbf{x} </math>.
 
[[File:Underdetermined equation system.png|alt=|thumb|Underdetermined system of linear equations]]
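As a minimal numerical illustration of the system shown in the figure (the dimensions, the random matrix <math>D</math> and the sparse test vector below are assumptions made for this sketch, not part of the article), the following NumPy snippet builds an underdetermined system and checks that adding any null-space vector to one solution yields another solution with the same measurements:

<syntaxhighlight lang="python">
# Minimal sketch: an underdetermined system y = D x (more unknowns than equations).
# The sizes and the random matrix are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
m, n = 3, 8                               # 3 equations, 8 unknowns
D = rng.standard_normal((m, n))           # measurement matrix
x_true = np.zeros(n)
x_true[[1, 5]] = [2.0, -1.5]              # a sparse "ground truth"
y = D @ x_true

# Minimum-norm (pseudoinverse) solution: just one of infinitely many solutions.
x_min_norm = np.linalg.pinv(D) @ y

# Any vector in the null space of D can be added without changing y.
null_basis = np.linalg.svd(D)[2][m:].T    # columns span null(D)
x_other = x_min_norm + null_basis @ rng.standard_normal(n - m)

print(np.allclose(D @ x_min_norm, y), np.allclose(D @ x_other, y))  # True True
</syntaxhighlight>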
In order to choose a solution to such a system, one must impose extra constraints or conditions (such as smoothness) as appropriate. In compressed sensing, one adds the constraint of sparsity, allowing only solutions which have a small number of nonzero coefficients. Not all underdetermined systems of linear equations have a sparse solution. However, if there is a unique sparse solution to the underdetermined system, then the compressed sensing framework allows the recovery of that solution.
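One standard way to impose the sparsity constraint is basis pursuit: minimize <math>\|\mathbf{x}\|_1</math> subject to <math>D\mathbf{x}=\mathbf{y}</math>, which becomes a linear program after splitting <math>\mathbf{x}=\mathbf{u}-\mathbf{v}</math> with <math>\mathbf{u},\mathbf{v}\ge 0</math>. The sketch below solves this with SciPy's linear-programming routine; the problem sizes, the Gaussian matrix and the helper name <code>basis_pursuit</code> are illustrative assumptions rather than anything specified in the article:

<syntaxhighlight lang="python">
# Sketch of sparse recovery by l1 minimization (basis pursuit); sizes are illustrative.
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(D, y):
    """Minimize ||x||_1 subject to D x = y via the LP split x = u - v, u, v >= 0."""
    m, n = D.shape
    c = np.ones(2 * n)                     # objective: sum(u) + sum(v) = ||x||_1
    A_eq = np.hstack([D, -D])              # D u - D v = y
    res = linprog(c, A_eq=A_eq, b_eq=y,
                  bounds=[(0, None)] * (2 * n), method="highs")
    u, v = res.x[:n], res.x[n:]
    return u - v

rng = np.random.default_rng(1)
m, n = 20, 50
D = rng.standard_normal((m, n))
x_true = np.zeros(n)
x_true[rng.choice(n, 3, replace=False)] = rng.standard_normal(3)   # 3-sparse signal
y = D @ x_true

x_hat = basis_pursuit(D, y)
print(np.max(np.abs(x_hat - x_true)))      # essentially zero when the unique sparse solution is recovered
</syntaxhighlight>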
 
For the purpose of signal and image reconstruction, <math>\ell_1</math> minimization models are used. Other approaches also include least squares, as discussed earlier in this article. These methods are extremely slow and return a relatively imperfect reconstruction of the signal. Current CS regularization models attempt to address this problem by incorporating sparsity priors of the original image, one of which is the total variation (TV). Conventional TV approaches are designed to give piecewise-constant solutions. Some of these (as discussed ahead) include constrained <math>\ell_1</math> minimization, which uses an iterative scheme. This method, though fast, leads to over-smoothing and therefore blurred image edges.<ref name="EPTV" /> TV methods with iterative re-weighting have been implemented to reduce the influence of large gradient magnitudes in the images. This has been used in [[Tomography|computed tomography]] (CT) reconstruction as a method known as edge-preserving total variation. However, because gradient magnitudes are used to estimate the relative penalty weights between the data-fidelity and regularization terms, this method is neither robust to noise and artifacts nor accurate enough for CS image/signal reconstruction, and it therefore fails to preserve smaller structures.
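To make the reweighting idea concrete, the sketch below reconstructs a piecewise-constant 1-D signal from random measurements by plain gradient descent on a smoothed, weighted TV objective, periodically lowering the penalty weight where the gradient is large so that edges are penalized less. The signal, measurement matrix, parameters and the simple solver are assumptions for illustration only, not the edge-preserving TV algorithm of the cited CT work:

<syntaxhighlight lang="python">
# Sketch of iteratively reweighted (edge-preserving) TV reconstruction in 1-D.
# All sizes, parameters and the plain gradient-descent solver are illustrative choices.
import numpy as np

rng = np.random.default_rng(2)
n, m = 100, 60
x_true = np.zeros(n)
x_true[20:45], x_true[60:80] = 1.0, -0.7            # piecewise-constant signal (4 edges)
A = rng.standard_normal((m, n)) / np.sqrt(m)        # random measurement matrix
y = A @ x_true + 0.01 * rng.standard_normal(m)      # noisy measurements

def diff_adjoint(v):
    """Adjoint of the forward finite difference (Dx)_i = x_{i+1} - x_i."""
    return np.concatenate([[-v[0]], v[:-1] - v[1:], [v[-1]]])

x = np.zeros(n)
lam, eps = 0.02, 1e-3
w = np.ones(n - 1)                                  # TV weights, refined between outer loops
for outer in range(5):
    # conservative step size from a Lipschitz bound: ||A||^2 + lam * ||D||^2 * (1/sqrt(eps))
    step = 1.0 / (np.linalg.norm(A, 2) ** 2 + lam * 4.0 * w.max() / np.sqrt(eps))
    for _ in range(300):                            # gradient descent on 0.5||Ax-y||^2 + lam*sum_i w_i*sqrt(g_i^2+eps)
        g = np.diff(x)
        grad = A.T @ (A @ x - y) + lam * diff_adjoint(w * g / np.sqrt(g ** 2 + eps))
        x -= step * grad
    # reweighting: small penalty where the gradient is large, so edges are preserved
    w = 1.0 / (np.abs(np.diff(x)) + 0.1)

print(np.linalg.norm(x - x_true) / np.linalg.norm(x_true))   # relative reconstruction error
</syntaxhighlight>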
 