:<math>p(x,y)\approx \sum_w p^\prime (x,w) p^{\prime\prime}(w,y)</math>
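How the factors <math>p^\prime</math> and <math>p^{\prime\prime}</math> are obtained is left open here; one common choice is nonnegative matrix factorization. The following is a minimal sketch, assuming NumPy and Lee–Seung multiplicative updates (the function name <code>low_rank_factorize</code> and the example table are illustrative, not prescribed by the text):

<syntaxhighlight lang="python">
# Sketch: fit p(x,y) ≈ sum_w p'(x,w) p''(w,y) by nonnegative matrix
# factorization. The algorithm choice (Lee–Seung multiplicative updates)
# is an assumption; any nonnegative factorization would serve.
import numpy as np

def low_rank_factorize(p, rank, steps=2000, eps=1e-12):
    """Return nonnegative factors (p1, p2) with p1 @ p2 ≈ p.

    p    : (|X|, |Y|) joint probability table, entries summing to 1.
    rank : number of values the latent process W may take.
    """
    rng = np.random.default_rng(0)
    nx, ny = p.shape
    p1 = rng.random((nx, rank))   # plays the role of p'(x, w)
    p2 = rng.random((rank, ny))   # plays the role of p''(w, y)
    for _ in range(steps):
        # Multiplicative updates for the squared-error objective.
        p1 *= (p @ p2.T) / (p1 @ p2 @ p2.T + eps)
        p2 *= (p1.T @ p) / (p1.T @ p1 @ p2 + eps)
    return p1, p2

# Illustrative 3x3 joint distribution.
p = np.array([[0.10, 0.05, 0.05],
              [0.05, 0.20, 0.05],
              [0.05, 0.05, 0.40]])
p1, p2 = low_rank_factorize(p, rank=2)
print(np.round(p1 @ p2, 3))       # approximate reconstruction of p
</syntaxhighlight>

The factors returned this way are nonnegative but not individually normalized as probability tables; only their product approximates the joint distribution.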
Alternatively, one might be interested in knowing how much more information <math>p(x,y)</math> carries over its factorization. In such a case, the excess information that the full distribution <math>p(x,y)</math> carries over the matrix factorization is given by the Kullback–Leibler divergence
:<math>\operatorname{I}_{LRMA} = \sum_{y \in \mathcal{Y}} \sum_{x \in \mathcal{X}} {p(x,y) \log{ \left(\frac{p(x,y)}{\sum_w p^\prime (x,w) p^{\prime\prime}(w,y)} \right) }}.</math>
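Written out numerically, this divergence is a single sum over the joint table. Below is a minimal sketch, assuming NumPy (the helper <code>excess_information</code> and the example factors are illustrative):

<syntaxhighlight lang="python">
# Sketch: evaluate I_LRMA = sum_{x,y} p(x,y) log( p(x,y) / (p1 @ p2)(x,y) ).
import numpy as np

def excess_information(p, p1, p2, eps=1e-12):
    """Kullback–Leibler divergence of p from its factorization p1 @ p2."""
    q = p1 @ p2                  # factorized approximation of p
    mask = p > 0                 # by convention 0 log 0 = 0
    return float(np.sum(p[mask] * np.log(p[mask] / (q[mask] + eps))))

# Illustrative 2x2 joint table and a factorization that happens to be exact.
p  = np.array([[0.4, 0.1],
               [0.1, 0.4]])
p1 = np.array([[0.8, 0.2],
               [0.2, 0.8]])      # plays the role of p'(x, w)
p2 = np.array([[0.5, 0.0],
               [0.0, 0.5]])      # plays the role of p''(w, y)
print(excess_information(p, p1, p2))   # ≈ 0: no excess information
</syntaxhighlight>

When the factorization is itself a probability table, this quantity is nonnegative and vanishes exactly when the factorization reproduces <math>p(x,y)</math>, as in the example above.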
      
The conventional definition of the mutual information is recovered in the extreme case that the process <math>W</math> has only one value for <math>w</math>.
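A minimal numerical check of this reduction, assuming NumPy and taking the single-<math>w</math> factors to be the marginals (an assumption of this sketch; the choice of factors is not fixed above):

<syntaxhighlight lang="python">
# Sketch: with |W| = 1 the factorization is an outer product p'(x) p''(y).
# Taking the factors to be the marginals (an assumption of this check),
# I_LRMA coincides with the conventional mutual information I(X;Y).
import numpy as np

p  = np.array([[0.4, 0.1],          # illustrative joint table p(x,y)
               [0.1, 0.4]])
px = p.sum(axis=1, keepdims=True)   # marginal p(x), shape (2, 1)
py = p.sum(axis=0, keepdims=True)   # marginal p(y), shape (1, 2)

q = px @ py                         # rank-one factorization p'(x) p''(y)
i_lrma = np.sum(p * np.log(p / q))  # excess information with |W| = 1

# Conventional mutual information via entropies: I = H(X) + H(Y) - H(X,Y).
hx  = -np.sum(px * np.log(px))
hy  = -np.sum(py * np.log(py))
hxy = -np.sum(p * np.log(p))
print(i_lrma, hx + hy - hxy)        # both ≈ 0.1927 nats
</syntaxhighlight>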