each event or object specified by <math>(x, y)</math> is weighted by the corresponding probability <math>p(x, y)</math>. This assumes that all objects or events are equivalent apart from their probability of occurrence. However, in some applications certain objects or events may be more significant than others, or certain patterns of association may be more semantically important than others.
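
To make this weighting concrete, here is a minimal sketch (not part of the original article) that computes <math>\operatorname{I}(X;Y)</math> directly from a joint distribution given as a table; the function name and example tables are illustrative choices:

<syntaxhighlight lang="python">
# Minimal sketch: standard mutual information from a joint table,
# where each co-occurrence (x, y) is weighted only by its probability p(x, y).
from math import log2

def mutual_information(p_xy):
    """I(X;Y) = sum over (x,y) of p(x,y) * log2(p(x,y) / (p(x) * p(y)))."""
    p_x, p_y = {}, {}
    for (x, y), p in p_xy.items():  # accumulate the marginals p(x) and p(y)
        p_x[x] = p_x.get(x, 0.0) + p
        p_y[y] = p_y.get(y, 0.0) + p
    return sum(p * log2(p / (p_x[x] * p_y[y]))
               for (x, y), p in p_xy.items() if p > 0)

# The two deterministic mappings discussed below, each uniform over 3 pairs:
identity = {(1, 1): 1/3, (2, 2): 1/3, (3, 3): 1/3}
permuted = {(1, 3): 1/3, (2, 1): 1/3, (3, 2): 1/3}
print(mutual_information(identity))  # log2(3) ≈ 1.585 bits
print(mutual_information(permuted))  # identical: ≈ 1.585 bits
</syntaxhighlight>

Both mappings give exactly <math>\log_2 3</math> bits, which is what motivates the weighted variant introduced below.
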
For example, the deterministic mapping <math>\{(1,1),(2,2),(3,3)\}</math> may be viewed as stronger than the deterministic mapping <math>\{(1,3),(2,1),(3,2)\}</math>, although these relationships would yield the same mutual information. This is because the mutual information is not sensitive at all to any inherent ordering in the variable values, and is therefore not sensitive to the form of the relational mapping between the associated variables. If it is desired that the former relation—showing agreement on all variable values—be judged stronger than the latter relation, then it is possible to use the following weighted mutual information.
:<math> \operatorname{I}(X;Y) = \sum_{y \in Y} \sum_{x \in X} w(x,y) \, p(x,y) \log \frac{p(x,y)}{p(x)\,p(y)} </math>
which places a weight <math>w(x,y)</math> on the probability of each variable value co-occurrence, <math>p(x,y)</math>. This allows certain probabilities to carry more or less significance than others, thereby allowing the quantification of relevant holistic or Prägnanz factors. In the above example, using larger relative weights for <math>w(1,1)</math>, <math>w(2,2)</math>, and <math>w(3,3)</math> would have the effect of assessing greater informativeness for the relation <math>\{(1,1),(2,2),(3,3)\}</math> than for the relation <math>\{(1,3),(2,1),(3,2)\}</math>, which may be desirable in some cases of pattern recognition and the like. This weighted mutual information is a form of weighted KL-divergence, which is known to take negative values for some inputs, and there are examples where the weighted mutual information also takes negative values.
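
As an illustration of the formula above (the weight table is an arbitrary choice, not one prescribed by the article), the following sketch weights the agreement cells more heavily and also shows how the weighted sum can become negative:

<syntaxhighlight lang="python">
# Minimal sketch of the weighted mutual information defined above.
# Missing weights default to 1.0, which recovers ordinary mutual information.
from math import log2

def weighted_mutual_information(p_xy, w):
    """Sum over (x,y) of w(x,y) * p(x,y) * log2(p(x,y) / (p(x) * p(y)))."""
    p_x, p_y = {}, {}
    for (x, y), p in p_xy.items():  # accumulate the marginals p(x) and p(y)
        p_x[x] = p_x.get(x, 0.0) + p
        p_y[y] = p_y.get(y, 0.0) + p
    return sum(w.get((x, y), 1.0) * p * log2(p / (p_x[x] * p_y[y]))
               for (x, y), p in p_xy.items() if p > 0)

identity = {(1, 1): 1/3, (2, 2): 1/3, (3, 3): 1/3}
permuted = {(1, 3): 1/3, (2, 1): 1/3, (3, 2): 1/3}
# Illustrative weights: emphasize the "agreement" cells (1,1), (2,2), (3,3).
w = {(1, 1): 2.0, (2, 2): 2.0, (3, 3): 2.0}
print(weighted_mutual_information(identity, w))  # 2 * log2(3) ≈ 3.170
print(weighted_mutual_information(permuted, w))  # log2(3) ≈ 1.585

# Pointwise terms log2(p(x,y)/(p(x)p(y))) can be negative; a large enough
# weight on such a cell drives the whole weighted sum below zero:
p = {(0, 0): 0.5, (0, 1): 0.25, (1, 0): 0.25}
print(weighted_mutual_information(p, {(0, 0): 10.0}))  # ≈ -0.642
</syntaxhighlight>

Setting every weight to <math>1</math> recovers the ordinary mutual information, so the unweighted quantity is the special case <math>w(x,y)=1</math>.
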
=== Adjusted mutual information ===