Structural similarity


SSIM is a perception-based model that considers image degradation as perceived change in structural information, while also incorporating important perceptual phenomena, including both luminance masking and contrast masking terms. The difference with respect to other techniques such as mean squared error (MSE) or peak signal-to-noise ratio (PSNR) is that those approaches estimate absolute errors. Structural information is the idea that pixels have strong inter-dependencies, especially when they are spatially close; these dependencies carry important information about the structure of the objects in the visual scene. Luminance masking is the phenomenon whereby image distortions tend to be less visible in bright regions, while contrast masking is the phenomenon whereby distortions become less visible where there is significant activity or "texture" in the image.

History

The predecessor of SSIM was called Universal Quality Index (UQI), or Wang–Bovik Index, which was developed by Zhou Wang and Alan Bovik in 2001. This evolved, through their collaboration with Hamid Sheikh and Eero Simoncelli, into the current version of SSIM, which was published in April 2004 in the IEEE Transactions on Image Processing.[1] In addition to defining the SSIM quality index, the paper provides a general context for developing and evaluating perceptual quality measures, including connections to human visual neurobiology and perception, and direct validation of the index against human subject ratings.

The basic model was developed in the Laboratory for Image and Video Engineering (LIVE) at The University of Texas at Austin and further developed jointly with the Laboratory for Computational Vision (LCV) at New York University. Further variants of the model have been developed in the Image and Visual Computing Laboratory at University of Waterloo and have been commercially marketed.

SSIM subsequently found strong adoption in the image processing community and in the television and social media industries. The 2004 SSIM paper has been cited over 20,000 times according to Google Scholar,[2] making it one of the highest cited papers in the image processing and video engineering fields. It was accorded the IEEE Signal Processing Society Best Paper Award for 2009.[3] It also received the IEEE Signal Processing Society Sustained Impact Award for 2016, indicative of a paper having an unusually high impact for at least 10 years following its publication. Because of its high adoption by the television industry, the authors of the original SSIM paper were each accorded a Primetime Engineering Emmy Award in 2015 by the Television Academy.

Algorithm

The SSIM index is calculated on various windows of an image. The measure between two windows [math]\displaystyle{ x }[/math] and [math]\displaystyle{ y }[/math] of common size [math]\displaystyle{ N\times N }[/math] is:[4]

[math]\displaystyle{ \hbox{SSIM}(x,y) = \frac{(2\mu_x\mu_y + c_1)(2\sigma_{xy} + c_2)}{(\mu_x^2 + \mu_y^2 + c_1)(\sigma_x^2 + \sigma_y^2 + c_2)} }[/math]

with:

  • [math]\displaystyle{ \mu_x }[/math] the average of [math]\displaystyle{ x }[/math];
  • [math]\displaystyle{ \mu_y }[/math] the average of [math]\displaystyle{ y }[/math];
  • [math]\displaystyle{ \sigma_x^2 }[/math] the variance of [math]\displaystyle{ x }[/math];
  • [math]\displaystyle{ \sigma_y^2 }[/math] the variance of [math]\displaystyle{ y }[/math];
  • [math]\displaystyle{ \sigma_{xy} }[/math] the covariance of [math]\displaystyle{ x }[/math] and [math]\displaystyle{ y }[/math];
  • [math]\displaystyle{ c_1 = (k_1L)^2 }[/math], [math]\displaystyle{ c_2 = (k_2L)^2 }[/math] two variables to stabilize the division with weak denominator;
  • [math]\displaystyle{ L }[/math] the dynamic range of the pixel-values (typically this is [math]\displaystyle{ 2^{\#bits\ per\ pixel}-1 }[/math]);
  • [math]\displaystyle{ k_1 = 0.01 }[/math] and [math]\displaystyle{ k_2 = 0.03 }[/math] by default.
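
As an illustration, the index can be evaluated directly on a pair of aligned windows. The following NumPy sketch uses plain, unweighted window statistics and an illustrative function name; the reference implementation instead uses an 11×11 Gaussian-weighted sliding window, as described under "Application of the formula" below.

<syntaxhighlight lang="python">
import numpy as np

def ssim_window(x, y, L=255, k1=0.01, k2=0.03):
    """SSIM between two equally sized windows x and y (illustrative sketch)."""
    x = np.asarray(x, dtype=np.float64)
    y = np.asarray(y, dtype=np.float64)
    c1 = (k1 * L) ** 2  # stabilizes the luminance term
    c2 = (k2 * L) ** 2  # stabilizes the contrast/structure term

    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()            # sigma_x^2, sigma_y^2
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()  # sigma_xy

    num = (2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)
    den = (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2)
    return num / den

# Example: an 11x11 window and a slightly noisy copy of it.
rng = np.random.default_rng(0)
x = rng.integers(0, 256, size=(11, 11))
y = np.clip(x + rng.normal(scale=10.0, size=(11, 11)), 0, 255)
print(ssim_window(x, y))   # close to, but below, 1.0; equals 1.0 for y == x
</syntaxhighlight>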

Formula components

The SSIM formula is based on three comparison measurements between the samples of [math]\displaystyle{ x }[/math] and [math]\displaystyle{ y }[/math]: luminance ([math]\displaystyle{ l }[/math]), contrast ([math]\displaystyle{ c }[/math]) and structure ([math]\displaystyle{ s }[/math]). The individual comparison functions are:[4]

[math]\displaystyle{ l(x,y)=\frac{2\mu_x\mu_y + c_1}{\mu^2_x + \mu^2_y + c_1} }[/math]

[math]\displaystyle{ c(x,y)=\frac{2\sigma_x\sigma_y + c_2}{\sigma_x^2 + \sigma_y^2 + c_2} }[/math]

[math]\displaystyle{ s(x,y)=\frac{\sigma_{xy} + c_3}{\sigma_x \sigma_y + c_3} }[/math]

with, in addition to above definitions:

  • [math]\displaystyle{ c_3 = c_2 / 2 }[/math]

SSIM is then a weighted combination of those comparative measures:

[math]\displaystyle{ \text{SSIM}(x,y) = l(x,y)^\alpha \cdot c(x,y)^\beta \cdot s(x,y)^\gamma }[/math]

Setting the weights [math]\displaystyle{ \alpha,\beta,\gamma }[/math] to 1, the formula can be reduced to the form shown above.
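
Concretely, with [math]\displaystyle{ \alpha = \beta = \gamma = 1 }[/math] and [math]\displaystyle{ c_3 = c_2/2 }[/math], the contrast and structure terms collapse into a single factor:

[math]\displaystyle{ c(x,y)\cdot s(x,y) = \frac{2\sigma_x\sigma_y + c_2}{\sigma_x^2 + \sigma_y^2 + c_2}\cdot\frac{\sigma_{xy} + c_2/2}{\sigma_x\sigma_y + c_2/2} = \frac{2\left(\sigma_x\sigma_y + c_2/2\right)}{\sigma_x^2 + \sigma_y^2 + c_2}\cdot\frac{\sigma_{xy} + c_2/2}{\sigma_x\sigma_y + c_2/2} = \frac{2\sigma_{xy} + c_2}{\sigma_x^2 + \sigma_y^2 + c_2} }[/math]

Multiplying this factor by [math]\displaystyle{ l(x,y) }[/math] recovers the single expression for SSIM given in the Algorithm section.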

Mathematical Properties

SSIM satisfies the identity of indiscernibles, and symmetry properties, but not the triangle inequality or non-negativity, and thus is not a distance function. However, under certain conditions, SSIM may be converted to a normalized root MSE measure, which is a distance function.[5] The square of such a function is not convex, but is locally convex and quasiconvex,[5] making SSIM a feasible target for optimization.

Application of the formula

In order to evaluate the image quality, this formula is usually applied only on luma, although it may also be applied on color (e.g., RGB) values or chromatic (e.g., YCbCr) values. The resultant SSIM index is a decimal value between −1 and 1; the value 1 is only reachable for two identical sets of data and therefore indicates perfect structural similarity, while a value of 0 indicates no structural similarity. For an image, it is typically calculated using a sliding Gaussian window of size 11×11 or a block window of size 8×8. The window can be displaced pixel-by-pixel on the image to create an SSIM quality map of the image. In the case of video quality assessment,[6] the authors propose to use only a subgroup of the possible windows to reduce the complexity of the calculation.
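
As an illustration, the structural_similarity function in scikit-image follows this sliding-window scheme and can also return the per-pixel quality map. The use of scikit-image and of its built-in test image are assumptions made only to keep the example runnable:

<syntaxhighlight lang="python">
import numpy as np
from skimage import data, img_as_float
from skimage.metrics import structural_similarity

# A reference image and a noisy copy, both grayscale floats in [0, 1].
ref = img_as_float(data.camera())
rng = np.random.default_rng(0)
degraded = np.clip(ref + rng.normal(scale=0.05, size=ref.shape), 0.0, 1.0)

# gaussian_weights=True with sigma=1.5 reproduces the 11x11 Gaussian window
# of the original paper; full=True additionally returns the SSIM quality map.
score, ssim_map = structural_similarity(
    ref, degraded,
    data_range=1.0,
    gaussian_weights=True, sigma=1.5, use_sample_covariance=False,
    full=True,
)
print(f"mean SSIM: {score:.3f}")   # 1.0 only when the two images are identical
</syntaxhighlight>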

Variants

Multi-Scale SSIM

A more advanced form of SSIM, called Multiscale SSIM (MS-SSIM)[4] is conducted over multiple scales through a process of multiple stages of sub-sampling, reminiscent of multiscale processing in the early vision system. It has been shown to perform equally well or better than SSIM on different subjective image and video databases.[4][7][8]
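
A simplified sketch of the idea, assuming scikit-image is available: it applies the full single-scale SSIM at each of five dyadic scales and combines the scores as a weighted geometric mean using the per-scale exponents reported for MS-SSIM. The published method applies the luminance comparison only at the coarsest scale, so this is an approximation rather than the reference algorithm.

<syntaxhighlight lang="python">
import numpy as np
from skimage.metrics import structural_similarity
from skimage.transform import downscale_local_mean

# Per-scale exponents reported for the five scales of MS-SSIM.
WEIGHTS = np.array([0.0448, 0.2856, 0.3001, 0.2363, 0.1333])

def ms_ssim_sketch(ref, dist, data_range=1.0):
    """Approximate MS-SSIM: full SSIM at each dyadic scale, combined as a
    weighted geometric mean (assumes per-scale scores are positive and that
    the images stay larger than the SSIM window at the coarsest scale)."""
    scores = []
    for _ in WEIGHTS:
        scores.append(structural_similarity(ref, dist, data_range=data_range))
        # 2x2 mean pooling acts as the low-pass filter before downsampling.
        ref = downscale_local_mean(ref, (2, 2))
        dist = downscale_local_mean(dist, (2, 2))
    return float(np.prod(np.array(scores) ** WEIGHTS))
</syntaxhighlight>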

Multi-component SSIM

Three-component SSIM (3-SSIM) is a form of SSIM that takes into account the fact that the human eye can see differences more precisely on textured or edge regions than on smooth regions.[9] The resulting metric is calculated as a weighted average of SSIM for three categories of regions: edges, textures, and smooth regions. The proposed weighting is 0.5 for edges and 0.25 each for the textured and smooth regions. The authors mention that a 1/0/0 weighting (ignoring everything but edge distortions) leads to results that are even closer to subjective ratings, which suggests that edge regions play a dominant role in image quality perception.
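
As a sketch, given mean SSIM values computed separately over the three region classes (how pixels are classified into edge, textured and smooth regions is application-dependent and not specified here), the combination is simply the stated weighted average; the function name is illustrative:

<syntaxhighlight lang="python">
def three_ssim(ssim_edges, ssim_textures, ssim_smooth,
               weights=(0.5, 0.25, 0.25)):
    """Weighted combination used by 3-SSIM (edge, texture, smooth weights)."""
    w_e, w_t, w_s = weights
    return w_e * ssim_edges + w_t * ssim_textures + w_s * ssim_smooth

print(three_ssim(0.92, 0.96, 0.99))   # 0.9475
</syntaxhighlight>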

The authors of 3-SSIM have also extended the model into four-component SSIM (4-SSIM), in which the edge regions are further subdivided into preserved and changed edges according to their distortion status. The proposed weighting is 0.25 for each of the four components.[10]

Structural Dissimilarity

Structural dissimilarity (DSSIM) may be derived from SSIM, though it does not constitute a distance function as the triangle inequality is not necessarily satisfied.

[math]\displaystyle{ \hbox{DSSIM}(x,y) = \frac{1 - \hbox{SSIM}(x, y)}{2} }[/math]
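
In code this is a one-line transformation of an SSIM score (or of every entry of an SSIM quality map), for example:

<syntaxhighlight lang="python">
def dssim(ssim_value):
    """Structural dissimilarity: 0 for identical inputs, approaching 1 as SSIM approaches -1."""
    return (1.0 - ssim_value) / 2.0
</syntaxhighlight>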

Video quality metrics and temporal variants

It is worth noting that the original version of SSIM was designed to measure the quality of still images. It does not contain any parameters directly related to temporal effects of human perception and human judgment.[7] A common practice is to calculate the average SSIM value over all frames in the video sequence. However, several temporal variants of SSIM have been developed.[11][6][12]

Complex Wavelet SSIM

The complex wavelet transform variant of SSIM (CW-SSIM) is designed to deal with issues of image scaling, translation and rotation. Instead of giving low scores to images affected by such distortions, CW-SSIM takes advantage of the complex wavelet transform and therefore yields higher scores for them. CW-SSIM is defined as follows:

[math]\displaystyle{ \text{CW-SSIM}(c_x,c_y)=\bigg(\frac{2 \sum_{i=1}^N |c_{x,i}||c_{y,i}|+K}{\sum_{i=1}^N |c_{x,i}|^2+\sum_{i=1}^N |c_{y,i}|^2+K}\bigg)\bigg(\frac{2|\sum_{i=1}^N c_{x,i}c_{y,i}^*|+K}{2\sum_{i=1}^N |c_{x,i}c_{y,i}^*|+K}\bigg) }[/math]

Where [math]\displaystyle{ c_x }[/math] is the complex wavelet transform of the signal [math]\displaystyle{ x }[/math] and [math]\displaystyle{ c_y }[/math] is the complex wavelet transform for the signal [math]\displaystyle{ y }[/math]. Additionally, [math]\displaystyle{ K }[/math] is a small positive number used for the purposes of function stability. Ideally, it should be zero. Like the SSIM, the CW-SSIM has a maximum value of 1. The maximum value of 1 indicates that the two signals are perfectly structurally similar while a value of 0 indicates no structural similarity.[13]
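
Given same-length arrays of complex wavelet coefficients for the two signals (the wavelet decomposition itself is assumed to be provided by some complex wavelet transform and is outside the scope of this sketch), the index can be evaluated directly; the function name is illustrative:

<syntaxhighlight lang="python">
import numpy as np

def cw_ssim_index(cx, cy, K=0.01):
    """CW-SSIM for two equally sized arrays of complex wavelet coefficients."""
    cx = np.asarray(cx, dtype=complex).ravel()
    cy = np.asarray(cy, dtype=complex).ravel()
    prod = cx * np.conj(cy)

    # First factor: consistency of the coefficient magnitudes.
    mag = (2 * np.sum(np.abs(cx) * np.abs(cy)) + K) / (
        np.sum(np.abs(cx) ** 2) + np.sum(np.abs(cy) ** 2) + K)
    # Second factor: consistency of the relative phase pattern.
    phase = (2 * np.abs(np.sum(prod)) + K) / (2 * np.sum(np.abs(prod)) + K)
    return float(mag * phase)
</syntaxhighlight>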

SSIMPLUS

The SSIMPLUS index is based on SSIM and is a commercially available tool.[14] It extends SSIM's capabilities, mainly to target video applications. It provides scores in the range of 0–100, linearly matched to human subjective ratings. It also allows adapting the scores to the intended viewing device, comparing video across different resolutions and contents.

According to its authors, SSIMPLUS achieves higher accuracy and higher speed than other image and video quality metrics. However, no independent evaluation of SSIMPLUS has been performed, as the algorithm itself is not publicly available.

cSSIM

In order to further investigate the standard discrete SSIM from a theoretical perspective, the continuous SSIM (cSSIM)[15] has been introduced and studied in the context of radial basis function interpolation.

Other simple modifications

The r* cross-correlation metric is based on the variance metrics of SSIM. It is defined as r*(x, y) = σxy/(σxσy) when σxσy ≠ 0, 1 when both standard deviations are zero, and 0 when only one of them is zero. It has found use in analyzing human response to contrast-detail phantoms.[16]
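
A minimal NumPy sketch of this piecewise definition (the function name is illustrative):

<syntaxhighlight lang="python">
import numpy as np

def r_star(x, y):
    """Cross-correlation component r* of SSIM for two equally sized windows."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    sx, sy = x.std(), y.std()
    if sx == 0.0 and sy == 0.0:
        return 1.0            # both windows are flat
    if sx == 0.0 or sy == 0.0:
        return 0.0            # exactly one window is flat
    cov = ((x - x.mean()) * (y - y.mean())).mean()
    return cov / (sx * sy)
</syntaxhighlight>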

SSIM has also been used on the gradient of images, making it "G-SSIM". G-SSIM is especially useful on blurred images.[17]

The modifications above can be combined. For example, 4-G-r* is a combination of 4-SSIM, G-SSIM, and r*. It is able to reflect radiologist preference for images much better than other SSIM variants tested.[18]

Application

SSIM has applications in a variety of problems. Some examples are:

  • Image Compression: In lossy image compression, information is deliberately discarded to decrease the storage space of images and video. The MSE is typically used in such compression schemes. According to its authors, using SSIM instead of MSE is suggested to produce better results for the decompressed images.[13]
  • Image Restoration: Image restoration focuses on solving the problem [math]\displaystyle{ y=h * x+n }[/math] where [math]\displaystyle{ y }[/math] is the blurry image that should be restored, [math]\displaystyle{ h }[/math] is the blur kernel, [math]\displaystyle{ n }[/math] is the additive noise and [math]\displaystyle{ x }[/math] is the original image we wish to recover. The traditional filter which is used to solve this problem is the Wiener Filter. However, the Wiener filter design is based on the MSE. Using an SSIM variant, specifically Stat-SSIM, is claimed to produce better visual results, according to the algorithm's authors.[13]
  • Pattern Recognition: Since SSIM mimics aspects of human perception, it could be used for recognizing patterns. When faced with issues like image scaling, translation and rotation, the algorithm's authors claim that it is better to use CW-SSIM,[19] which is insensitive to these variations and may be directly applied by template matching without using any training sample. Since data-driven pattern recognition approaches may produce better performance when a large amount of data is available for training, the authors suggest using CW-SSIM in data-driven approaches.[19]

Performance comparison

Due to its popularity, SSIM is often compared to other metrics, including more simple metrics such as MSE and PSNR, and other perceptual image and video quality metrics. SSIM has been repeatedly shown to significantly outperform MSE and its derivates in accuracy, including research by its own authors and others.[7][20][21][22][23][24]

A paper by Dosselmann and Yang claims that the performance of SSIM is "much closer to that of the MSE" than usually assumed. While they do not dispute the advantage of SSIM over MSE, they state an analytical and functional dependency between the two metrics.[8] According to their research, SSIM has been found to correlate as well as MSE-based methods on subjective databases other than the databases from SSIM's creators. As an example, they cite Reibman and Poole, who found that MSE outperformed SSIM on a database containing packet-loss–impaired video.[25] In another paper, an analytical link between PSNR and SSIM was identified.[26]

See also

  • Mean squared error
  • Peak signal-to-noise ratio
  • Video quality

References

  1. Wang, Zhou; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. (2004-04-01). "Image quality assessment: from error visibility to structural similarity". IEEE Transactions on Image Processing. 13 (4): 600–612. Bibcode:2004ITIP...13..600W. CiteSeerX 10.1.1.2.5689. doi:10.1109/TIP.2003.819861. ISSN 1057-7149. PMID 15376593.
  2. "Google Scholar". scholar.google.com. Retrieved 2019-07-04.
  3. "IEEE Signal Processing Society, Best Paper Award" (PDF).
  4. 4.0 4.1 4.2 4.3 Wang, Z.; Simoncelli, E.P.; Bovik, A.C. (2003-11-01). Multiscale structural similarity for image quality assessment. 2. pp. 1398–1402 Vol.2. doi:10.1109/ACSSC.2003.1292216. ISBN 978-0-7803-8104-9. 
  5. 5.0 5.1 Brunet, D.; Vass, J.; Vrscay, E. R.; Wang, Z. (April 2012). "On the mathematical properties of the structural similarity index" (PDF). IEEE Transactions on Image Processing. 21 (4): 2324–2328. Bibcode:2012ITIP...21.1488B. doi:10.1109/TIP.2011.2173206. PMID 22042163.
  6. 6.0 6.1 Wang, Z.; Lu, L.; Bovik, A. C. (February 2004). "Video quality assessment based on structural distortion measurement". Signal Processing: Image Communication. 19 (2): 121–132. CiteSeerX 10.1.1.2.6330. doi:10.1016/S0923-5965(03)00076-6.
  7. 7.0 7.1 7.2 Søgaard, Jacob; Krasula, Lukáš; Shahid, Muhammad; Temel, Dogancan; Brunnström, Kjell; Razaak, Manzoor (2016-02-14). "Applicability of Existing Objective Metrics of Perceptual Quality for Adaptive Video Streaming" (PDF). Electronic Imaging. 2016 (13): 1–7. doi:10.2352/issn.2470-1173.2016.13.iqsp-206.
  8. 8.0 8.1 Dosselmann, Richard; Yang, Xue Dong (2009-11-06). "A comprehensive assessment of the structural similarity index". Signal, Image and Video Processing. 5 (1): 81–91. doi:10.1007/s11760-009-0144-1. ISSN 1863-1703.
  9. Li, Chaofeng; Bovik, Alan Conrad (2010-01-01). "Content-weighted video quality assessment using a three-component image model". Journal of Electronic Imaging. 19 (1): 011003–011003–9. Bibcode:2010JEI....19a1003L. doi:10.1117/1.3267087. ISSN 1017-9909.
  10. Li, Chaofeng; Bovik, Alan C. (August 2010). "Content-partitioned structural similarity index for image quality assessment". Signal Processing: Image Communication. 25 (7): 517–526. doi:10.1016/j.image.2010.03.004.
  11. "Redirect page". www.compression.ru.
  12. Wang, Z.; Li, Q. (December 2007). "Video quality assessment using a statistical model of human visual speed perception" (PDF). Journal of the Optical Society of America A. 24 (12): B61–B69. Bibcode:2007JOSAA..24...61W. CiteSeerX 10.1.1.113.4177. doi:10.1364/JOSAA.24.000B61. PMID 18059915.
  13. 13.0 13.1 13.2 Zhou Wang; Bovik, A.C. (January 2009). "Mean squared error: Love it or leave it? A new look at Signal Fidelity Measures". IEEE Signal Processing Magazine. 26 (1): 98–117. Bibcode:2009ISPM...26...98W. doi:10.1109/msp.2008.930649. ISSN 1053-5888.
  14. Rehman, A.; Zeng, K.; Wang, Zhou (February 2015). Rogowitz, Bernice E; Pappas, Thrasyvoulos N; De Ridder, Huib (eds.). "Display device-adapted video quality-of-experience assessment" (PDF). IS&T-SPIE Electronic Imaging, Human Vision and Electronic Imaging XX. Human Vision and Electronic Imaging XX. 9394: 939406. Bibcode:2015SPIE.9394E..06R. doi:10.1117/12.2077917.
  15. Marchetti, F. (January 2021). "Convergence rate in terms of the continuous SSIM (cSSIM) index in RBF interpolation" (PDF). Dolom. Res. Notes Approx. 14: 27–32.
  16. Prieto, Gabriel; Guibelalde, Eduardo; Chevalier, Margarita; Turrero, Agustín (21 July 2011). "Use of the cross-correlation component of the multiscale structural similarity metric (R* metric) for the evaluation of medical images: R* metric for the evaluation of medical images". Medical Physics. 38 (8): 4512–4517. doi:10.1118/1.3605634.
  17. Chen, Guan-hao; Yang, Chun-ling; Xie, Sheng-li (October 2006). "Gradient-Based Structural Similarity for Image Quality Assessment". 2006 International Conference on Image Processing: 2929–2932. doi:10.1109/ICIP.2006.313132.
  18. Renieblas, Gabriel Prieto; Nogués, Agustín Turrero; González, Alberto Muñoz; Gómez-Leon, Nieves; del Castillo, Eduardo Guibelalde (26 July 2017). "Structural similarity index family for image quality assessment in radiological images". Journal of Medical Imaging. 4 (3): 035501. doi:10.1117/1.JMI.4.3.035501. PMC 5527267. PMID 28924574.
  19. 19.0 19.1 Gao, Y.; Rehman, A.; Wang, Z. (September 2011). CW-SSIM based image classification (PDF). IEEE International Conference on Image Processing (ICIP11).
  20. Zhang, Lin; Zhang, Lei; Mou, X.; Zhang, D. (September 2012). A comprehensive evaluation of full reference image quality assessment algorithms. pp. 1477–1480. doi:10.1109/icip.2012.6467150. ISBN 978-1-4673-2533-2. 
  21. Zhou Wang; Wang, Zhou; Li, Qiang (May 2011). "Information Content Weighting for Perceptual Image Quality Assessment". IEEE Transactions on Image Processing. 20 (5): 1185–1198. Bibcode:2011ITIP...20.1185W. doi:10.1109/tip.2010.2092435. PMID 21078577.
  22. Channappayya, S. S.; Bovik, A. C.; Caramanis, C.; Heath, R. W. (March 2008). SSIM-optimal linear image restoration. pp. 765–768. doi:10.1109/icassp.2008.4517722. ISBN 978-1-4244-1483-3. 
  23. Gore, Akshay; Gupta, Savita (2015-02-01). "Full reference image quality metrics for JPEG compressed images". AEU - International Journal of Electronics and Communications. 69 (2): 604–608. doi:10.1016/j.aeue.2014.09.002.
  24. Wang, Z.; Simoncelli, E. P. (September 2008). "Maximum differentiation (MAD) competition: a methodology for comparing computational models of perceptual quantities" (PDF). Journal of Vision. 8 (12): 8.1–13. doi:10.1167/8.12.8. PMC 4143340. PMID 18831621.
  25. Reibman, A. R.; Poole, D. (September 2007). Characterizing packet-loss impairments in compressed video. 5. pp. V – 77–V – 80. doi:10.1109/icip.2007.4379769. ISBN 978-1-4244-1436-9. 
  26. Hore, A.; Ziou, D. (August 2010). Image Quality Metrics: PSNR vs. SSIM. pp. 2366–2369. doi:10.1109/icpr.2010.579. ISBN 978-1-4244-7542-1. 

External links

  • Home page
  • Rust Implementation
  • C/C++ Implementation
  • DSSIM C++ Implementation
  • Chris Lomont's C# Implementation
  • qpsnr implementation (multi threaded C++)
  • Implementation in VQMT software
  • Implementation in Python
  • "Mystery Behind Similarity Measures MSE and SSIM", Gintautas Palubinskas, 2014


Category:Image processing
