| last1= White |first1= Jim | last2= Steingold | first2=Sam | last3= Fournelle | first3=Connie
| title = Performance Metrics for Group-Detection Algorithms
| conference = Interface 2004
| url = http://www.interfacesymposia.org/I04/I2004Proceedings/WhiteJim/WhiteJim.paper.pdf
}}</ref>

:<math>
C_{XY} = \frac{\operatorname{I}(X;Y)}{H(Y)}
~~~~\mbox{and}~~~~
C_{YX} = \frac{\operatorname{I}(X;Y)}{H(X)}.
</math>
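The following minimal Python sketch (an illustration added here, not taken from the cited sources; the joint table <code>p_xy</code> is an assumed toy example) computes <math>C_{XY}</math> and <math>C_{YX}</math> from a discrete joint probability distribution:

<syntaxhighlight lang="python">
import numpy as np

def entropy(p):
    """Shannon entropy (in bits) of a probability array, ignoring zero cells."""
    p = np.asarray(p).ravel()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

# Assumed toy joint distribution of two binary variables (rows: X, columns: Y).
p_xy = np.array([[0.4, 0.1],
                 [0.1, 0.4]])

H_x = entropy(p_xy.sum(axis=1))   # H(X) from the row marginal
H_y = entropy(p_xy.sum(axis=0))   # H(Y) from the column marginal
I = H_x + H_y - entropy(p_xy)     # I(X;Y) = H(X) + H(Y) - H(X,Y)

C_xy = I / H_y                    # coefficient C_XY = I(X;Y) / H(Y)
C_yx = I / H_x                    # coefficient C_YX = I(X;Y) / H(X)
print(C_xy, C_yx)                 # both lie in [0, 1]
</syntaxhighlight>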
The two coefficients have values in the range [0, 1], but they are not necessarily equal. In some cases a symmetric measure may be desired, such as the following ''redundancy'' measure:
:<math>R = \frac{\operatorname{I}(X;Y)}{H(X) + H(Y)}</math>
which attains a minimum of zero when the variables are independent, and a maximum value of
:<math>R_\max = \frac{\min\left\{H(X), H(Y)\right\}}{H(X) + H(Y)}</math>
Another symmetrical measure is the symmetric uncertainty, given by
:<math>U(X, Y) = 2R = 2\frac{\operatorname{I}(X;Y)}{H(X) + H(Y)}</math>
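Continuing the same hedged sketch (the helper <code>entropy</code> and toy table <code>p_xy</code> are illustrative assumptions, not from the sources), the redundancy measure, its maximum, and the symmetric uncertainty follow directly:

<syntaxhighlight lang="python">
import numpy as np

def entropy(p):
    """Shannon entropy (in bits) of a probability array, ignoring zero cells."""
    p = np.asarray(p).ravel()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

# Assumed toy joint distribution (rows: X, columns: Y).
p_xy = np.array([[0.4, 0.1],
                 [0.1, 0.4]])
H_x = entropy(p_xy.sum(axis=1))
H_y = entropy(p_xy.sum(axis=0))
I = H_x + H_y - entropy(p_xy)          # I(X;Y) = H(X) + H(Y) - H(X,Y)

R = I / (H_x + H_y)                    # symmetric redundancy measure
R_max = min(H_x, H_y) / (H_x + H_y)    # maximum attainable value of R
U = 2 * R                              # symmetric uncertainty U(X, Y)
print(R, R_max, U)                     # R is zero iff X and Y are independent
</syntaxhighlight>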
If we consider mutual information as a special case of the total correlation or the dual total correlation, the normalized versions are, respectively,
:<math>\frac{\operatorname{I}(X;Y)}{\min\left[ H(X),H(Y)\right]}</math> and <math>\frac{\operatorname{I}(X;Y)}{H(X,Y)} \; .</math>
This normalized version, also known as the '''Information Quality Ratio (IQR)''', quantifies the amount of information of a variable based on another variable against total uncertainty:<ref name=DRWijaya>{{Cite journal| last1= Wijaya |first1= Dedy Rahman | last2= Sarno| first2=Riyanarto| last3= Zulaika | first3=Enny| title = Information Quality Ratio as a novel metric for mother wavelet selection| journal = Chemometrics and Intelligent Laboratory Systems| volume = 160| pages = 59–71| doi = 10.1016/j.chemolab.2016.11.012|year= 2017 }}</ref>
:<math>IQR(X, Y) = \operatorname{E}[\operatorname{I}(X;Y)]
= \frac{\operatorname{I}(X;Y)}{H(X, Y)}
= \frac{\sum_{x \in X} \sum_{y \in Y} p(x, y) \log {p(x)p(y)}}{\sum_{x \in X} \sum_{y \in Y} p(x, y) \log {p(x, y)}} - 1</math>
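Both the minimum-entropy normalization above and the IQR reduce to simple ratios of entropies, as in this hedged Python sketch (the helper <code>entropy</code> and toy table <code>p_xy</code> are assumptions for illustration, not from the cited source); the IQR is computed via the identity <math>\operatorname{I}(X;Y)/H(X,Y)</math> rather than the explicit double sum:

<syntaxhighlight lang="python">
import numpy as np

def entropy(p):
    """Shannon entropy (in bits) of a probability array, ignoring zero cells."""
    p = np.asarray(p).ravel()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

# Assumed toy joint distribution (rows: X, columns: Y).
p_xy = np.array([[0.4, 0.1],
                 [0.1, 0.4]])
H_x = entropy(p_xy.sum(axis=1))
H_y = entropy(p_xy.sum(axis=0))
H_xy = entropy(p_xy)                 # joint entropy H(X, Y)
I = H_x + H_y - H_xy                 # I(X;Y) = H(X) + H(Y) - H(X,Y)

nmi_min = I / min(H_x, H_y)          # total-correlation-style normalization
iqr = I / H_xy                       # IQR: 0 if independent, 1 if mutually determined
print(nmi_min, iqr)
</syntaxhighlight>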
There is a normalization<ref name="strehl-jmlr02">{{cite journal| title = Cluster Ensembles – A Knowledge Reuse Framework for Combining Multiple Partitions| journal = The Journal of Machine Learning Research| pages = 583–617 | volume = 3 | year = 2003| last1 = Strehl | first1 = Alexander | last2 = Ghosh | first2 = Joydeep| doi=10.1162/153244303321897735| url=http://www.jmlr.org/papers/volume3/strehl02a/strehl02a.pdf}}</ref> which derives from first thinking of mutual information as an analogue to [[covariance]] (thus [[Entropy (information theory)|Shannon entropy]] is analogous to [[variance]]). The normalized mutual information is then calculated akin to the [[Pearson product-moment correlation coefficient|Pearson correlation coefficient]],
:<math>
\frac{\operatorname{I}(X;Y)}{\sqrt{H(X)H(Y)}}\; .
</math>
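A final hedged sketch of this covariance-style normalization, under the same assumed toy distribution (again an illustration, not part of the cited source):

<syntaxhighlight lang="python">
import numpy as np

def entropy(p):
    """Shannon entropy (in bits) of a probability array, ignoring zero cells."""
    p = np.asarray(p).ravel()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

# Assumed toy joint distribution (rows: X, columns: Y).
p_xy = np.array([[0.4, 0.1],
                 [0.1, 0.4]])
H_x = entropy(p_xy.sum(axis=1))
H_y = entropy(p_xy.sum(axis=0))
I = H_x + H_y - entropy(p_xy)

# Analogue of the Pearson correlation coefficient: entropy plays the
# role of variance, mutual information the role of covariance.
nmi = I / np.sqrt(H_x * H_y)
print(nmi)
</syntaxhighlight>

For empirical label vectors rather than a known joint table, scikit-learn's <code>normalized_mutual_info_score</code> with <code>average_method='geometric'</code> should compute the same geometric-mean normalization from observed counts.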
=== Weighted variants ===