=====Iteratively reweighted l1 minimization method for CS=====
In the CS reconstruction models using constrained <math>l_{1}</math> minimization,<ref name="Original source for IRLS">{{cite journal | last1 = Candes | first1 = E. J. | last2 = Wakin | first2 = M. B. | last3 = Boyd | first3 = S. P. | year = 2008 | title = Enhancing sparsity by reweighted l1 minimization | url = | journal = J. Fourier Anal. Applicat | volume = 14 | issue = 5–6| pages = 877–905 | doi=10.1007/s00041-008-9045-x| arxiv = 0711.1612 }}</ref> larger coefficients are penalized heavily in the <math>l_{1}</math> norm. A weighted formulation of <math>l_{1}</math> minimization was therefore proposed, designed to penalize nonzero coefficients more democratically. An iterative algorithm is used for constructing the appropriate weights.<ref name="Iteration">Lange, K.: Optimization, Springer Texts in Statistics. Springer, New York (2004)</ref> Each iteration requires solving one <math>l_{1}</math> minimization problem by finding the local minimum of a concave penalty function that more closely resembles the <math>l_{0}</math> norm. An additional parameter, introduced into the iterative equation to avoid sharp transitions in the penalty-function curve, ensures stability, so that a zero estimate in one iteration does not necessarily lead to a zero estimate in the next. The method essentially uses the current solution to compute the weights for the next iteration.
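The scheme above can be sketched in a few lines, assuming SciPy is available. Here the inner weighted <math>l_{1}</math> solve is cast as a linear program (a standard reformulation, not necessarily the solver used in the cited work), and `eps` plays the role of the stabilising parameter described in the text; problem sizes and constants are illustrative only.

```python
import numpy as np
from scipy.optimize import linprog

def weighted_l1_min(A, y, w):
    """Solve  min sum_i w_i |x_i|  s.t.  A x = y  as a linear program.

    Split x = u - v with u, v >= 0, so that |x_i| = u_i + v_i holds at
    the optimum (any overlap would only increase the objective)."""
    n = A.shape[1]
    c = np.concatenate([w, w])
    A_eq = np.hstack([A, -A])
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None), method="highs")
    return res.x[:n] - res.x[n:]

def reweighted_l1(A, y, n_iter=4, eps=0.1):
    """Reweighted l1 minimization in the style of Candes, Wakin and Boyd.

    Each iteration solves one weighted l1 problem; the weights for the
    next iteration come from the current solution.  eps keeps a
    coefficient that is zero in one iteration from being locked at zero
    in the next (it bounds the weights)."""
    w = np.ones(A.shape[1])             # first pass is plain l1 minimization
    for _ in range(n_iter):
        x = weighted_l1_min(A, y, w)
        w = 1.0 / (np.abs(x) + eps)     # large coefficients get small weights
    return x

# Demo: recover a 3-sparse signal from 40 random Gaussian measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100)
x_true[[5, 37, 80]] = [1.5, -2.0, 1.0]
x_hat = reweighted_l1(A, A @ x_true)
```

With noiseless measurements and a sufficiently sparse signal, the first (unweighted) pass already recovers the support, and the reweighting passes leave it intact while suppressing small spurious coefficients.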
In the figure shown below, '''P1''' refers to the first step of the iterative reconstruction process, involving the projection matrix '''P''' of the fan-beam geometry, which is constrained by the data-fidelity term. It may contain noise and artifacts, as no regularization is performed. The minimization of '''P1''' is solved through the conjugate gradient least squares method. '''P2''' refers to the second step of the iterative reconstruction process, which utilizes the edge-preserving total variation regularization term to remove noise and artifacts and thus improve the quality of the reconstructed image/signal. The minimization of '''P2''' is done through a simple gradient descent method. Convergence is determined by testing image positivity after each iteration, enforcing <math>f^{k-1} = 0</math> in the case when <math>f^{k-1} < 0</math> (note that <math>f</math> refers to the different x-ray linear attenuation coefficients at different voxels of the patient image).
      第146行: 第147行:  
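The two-step loop described above can be sketched as follows. This is a hedged illustration under simplifying assumptions: the real method operates on 2-D CT images with a fan-beam projection matrix, whereas here a 1-D signal, a random matrix standing in for '''P''', a smoothed 1-D total variation term, and illustrative step sizes are used.

```python
import numpy as np

def cgls(P, y, f0, n_iter=20):
    # P1 step: conjugate gradient least squares for min ||P f - y||^2
    f = f0.copy()
    r = y - P @ f
    s = P.T @ r
    p = s.copy()
    norm_s_old = s @ s
    for _ in range(n_iter):
        q = P @ p
        alpha = norm_s_old / (q @ q)
        f += alpha * p
        r -= alpha * q
        s = P.T @ r
        norm_s = s @ s
        p = s + (norm_s / norm_s_old) * p
        norm_s_old = norm_s
    return f

def tv_gradient_step(f, step=0.01, eps=1e-2, n_iter=5):
    # P2 step: gradient descent on the smoothed TV sum_i sqrt(d_i^2 + eps)
    for _ in range(n_iter):
        d = np.diff(f)
        g = d / np.sqrt(d * d + eps)   # derivative of sqrt(d^2 + eps)
        grad = np.zeros_like(f)
        grad[:-1] -= g                 # d/df_j of term i=j   is -g_j
        grad[1:] += g                  # d/df_j of term i=j-1 is +g_{j-1}
        f = f - step * grad
    return f

def reconstruct(P, y, n_outer=10):
    f = np.zeros(P.shape[1])
    for _ in range(n_outer):
        f = cgls(P, y, f)              # P1: fit the projection data
        f = tv_gradient_step(f)        # P2: edge-preserving TV smoothing
        f[f < 0] = 0.0                 # positivity: f = 0 wherever f < 0
    return f

# Demo on a piecewise-constant, nonnegative 1-D "attenuation" signal.
rng = np.random.default_rng(1)
P = rng.standard_normal((80, 50))
f_true = np.zeros(50)
f_true[10:25] = 1.0
f_true[30:40] = 2.0
f_hat = reconstruct(P, P @ f_true)
```

The positivity projection at the end of each outer iteration is the convergence/physicality test from the text: attenuation coefficients cannot be negative.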
=====Edge-preserving total variation (TV) based compressed sensing<ref name ="EPTV">{{cite journal | last1 = Tian | first1 = Z. | last2 = Jia | first2 = X. | last3 = Yuan | first3 = K. | last4 = Pan | first4 = T. | last5 = Jiang | first5 = S. B. | year = 2011 | title = Low-dose CT reconstruction via edge preserving total variation regularization | url = | journal = Phys Med Biol | volume = 56 | issue = 18| pages = 5949–5967 | doi=10.1088/0031-9155/56/18/011| pmid = 21860076 | pmc = 4026331 | arxiv = 1009.2288 | bibcode = 2011PMB....56.5949T }}</ref>=====
[[File:Edge preserving TV.png|link=Special:FilePath/Edge_preserving_TV.png|alt=|thumb|Flow diagram figure for the edge-preserving total variation method for compressed sensing]]
[[File:Augmented Lagrangian.png|link=Special:FilePath/Augmented_Lagrangian.png|alt=|thumb|Augmented Lagrangian method for the orientation field and iterative directional field refinement models]]
<math>(\lambda_{P})^k = (\lambda_{P})^{k-1} +  \gamma_{P}(P^k - \nabla (\Chi)^k)</math>
<math>(\lambda_{Q})^k = (\lambda_{Q})^{k-1} +  \gamma_{Q}(Q^k - P^{k} \bullet d)</math>
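Both updates have the classic augmented-Lagrangian shape: the multiplier moves by a step <math>\gamma</math> times the residual of the constraint it enforces (<math>P = \nabla\Chi</math> and <math>Q = P \bullet d</math> respectively). Since the full orientation-field model lies outside this excerpt, the same rule is illustrated below on a toy equality-constrained problem with a closed-form primal step; the problem and all constants are hypothetical.

```python
import numpy as np

def augmented_lagrangian_demo(gamma=1.0, n_iter=100):
    """Toy illustration of the multiplier updates shown above, applied to
        min ||x||^2   s.t.   a . x = 1,
    whose exact solution is x* = a / ||a||^2.  Each iteration minimizes
    the augmented Lagrangian in x, then performs the dual ascent step
        lam <- lam + gamma * (a . x - 1),
    i.e. lam moves by gamma times the constraint residual, exactly the
    shape of the lambda_P and lambda_Q updates in the text."""
    a = np.array([1.0, 2.0])
    lam = 0.0                           # Lagrange multiplier
    x = np.zeros(2)
    for _ in range(n_iter):
        # Primal step: min ||x||^2 + lam*(a.x - 1) + (gamma/2)*(a.x - 1)^2
        # has the closed form  (2 I + gamma a a^T) x = (gamma - lam) a.
        M = 2.0 * np.eye(2) + gamma * np.outer(a, a)
        x = np.linalg.solve(M, (gamma - lam) * a)
        # Dual step: multiplier += gamma * constraint residual.
        lam = lam + gamma * (a @ x - 1.0)
    return x

x = augmented_lagrangian_demo()
```

For this quadratic problem the multiplier error contracts by a constant factor per iteration, so the iterates converge to the constrained optimum <math>x^* = a/\lVert a \rVert^2 = (0.2,\,0.4)</math>.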
       