21 bytes removed · 27 October 2020 (Tue) 18:23
1. For every discrete memoryless channel, the channel capacity is defined in terms of the mutual information <math>I(X; Y)</math>,
 
<math>\ C = \sup_{p_X} I(X;Y)</math>
 
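For a concrete channel this supremum can be evaluated numerically. Below is a minimal sketch assuming a binary symmetric channel (BSC) with crossover probability p; the function names `h2`, `mutual_information`, and `bsc_capacity` are illustrative, not from the source:

```python
import math

def h2(p):
    """Binary entropy function H2(p) in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def mutual_information(px, channel):
    """I(X;Y) in bits for input distribution px and channel matrix p(y|x)."""
    ny = len(channel[0])
    py = [sum(px[x] * channel[x][y] for x in range(len(px))) for y in range(ny)]
    mi = 0.0
    for x in range(len(px)):
        for y in range(ny):
            pxy = px[x] * channel[x][y]
            if pxy > 0 and py[y] > 0:
                mi += pxy * math.log2(pxy / (px[x] * py[y]))
    return mi

def bsc_capacity(p, steps=1000):
    """Approximate C = sup_{p_X} I(X;Y) for a BSC by grid search over p_X."""
    channel = [[1 - p, p], [p, 1 - p]]
    return max(mutual_information([q, 1 - q], channel)
               for q in (i / steps for i in range(1, steps)))
```

For the BSC the supremum is attained at the uniform input distribution, so the grid search recovers the closed form C = 1 − H2(p).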
has the following property.  For any <math>\epsilon>0</math> and <math>R<C</math>, for large enough <math>N</math>, there exists a code of length <math>N</math> and rate <math>\geq R</math> and a decoding algorithm, such that the maximal probability of block error is <math>\leq \epsilon</math>.
 
<math>R(p_b) = \frac{C}{1-H_2(p_b)} .</math>
 
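This trade-off is easy to explore numerically. A small sketch, assuming the channel capacity C is already known; the helper names are illustrative:

```python
import math

def h2(p):
    """Binary entropy function H2(p) in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def max_rate(C, pb):
    """R(p_b) = C / (1 - H2(p_b)): the highest achievable rate when a
    residual bit-error probability of pb is acceptable (0 <= pb < 1/2)."""
    return C / (1 - h2(pb))
```

Tolerating residual errors buys rate above C: with C = 0.5, allowing pb = 0.1 raises the achievable rate to about 0.94.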
This particular proof of achievability follows the style of proofs that make use of the asymptotic equipartition property (AEP).  Another style can be found in information theory texts using error exponents.
 
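The AEP itself is easy to observe empirically: for an i.i.d. source, −(1/n) log2 p(X_1 … X_n) concentrates around the entropy H(X) as n grows. A minimal sketch (the function name is illustrative):

```python
import math
import random

def empirical_entropy_rate(p_one=0.3, n=100_000, seed=0):
    """Draw n i.i.d. Bernoulli(p_one) symbols and return -(1/n) log2 of the
    probability of the realized sequence; by the AEP this approaches H(X)."""
    rng = random.Random(seed)
    log_p = 0.0
    for _ in range(n):
        x = 1 if rng.random() < p_one else 0
        log_p += math.log2(p_one if x == 1 else 1 - p_one)
    return -log_p / n

# H(X) = H2(0.3) ≈ 0.8813 bits; the returned value should be close for large n.
```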
Both types of proofs make use of a random coding argument where the codebook used across a channel is randomly constructed - this serves to make the analysis simpler while still proving the existence of a code satisfying a desired low probability of error at any data rate below the channel capacity.
 
  <math>A_\varepsilon^{(n)} = \{(x^n, y^n) \in \mathcal X^n \times \mathcal Y^n </math>
 
This code is revealed to the sender and receiver.  It is also assumed that one knows the transition matrix <math>p(y|x)</math> for the channel being used.
 
    
#A message W is chosen according to the uniform distribution on the set of codewords.  That is, <math>Pr(W = w) = 2^{-nR}, w = 1, 2, \dots, 2^{nR}</math>.
 
Sending these codewords across the channel, we receive <math>Y_1^n</math>, and decode to some source sequence if there exists exactly one codeword that is jointly typical with Y.  If there are no jointly typical codewords, or if there is more than one, an error is declared.  An error also occurs if the decoded codeword does not match the original codeword.  This is called typical set decoding.
 
Sending these codewords across the channel, we receive <math>Y_1^n</math>, and decode to some source sequence if there exists exactly 1 codeword that is jointly typical with Y.  If there are no jointly typical codewords, or if there are more than one, an error is declared.  An error also occurs if a decoded codeword doesn't match the original codeword.  This is called typical set decoding.
   −
通过信道发送这些码字,我们接收到 y _ (1 ^ n) </math > ,如果存在一个与 y 相同的码字,我们就解码到某个源序列。如果没有共同的典型代码字,或者有多个代码字,则声明错误。如果解码的码字与原始码字不匹配,也会发生错误。这就是所谓的典型集合译码。
+
通过信道发送这些码字,我们接收到 y _ (1 ^ n) </math > ,并解码到某个源序列,如果存在正好与 y 共同典型的一个码字。如果没有共同的典型代码字,或者有多个代码字,则声明错误。如果解码的码字与原始码字不匹配,也会发生错误。这就是所谓的典型集合译码。
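The random-codebook generation and decoding rule above can be sketched end to end. The snippet below assumes a binary symmetric channel with uniform inputs and simplifies the joint-typicality test to a check on the empirical crossover fraction (a special case of the general three-condition definition); all function names are illustrative:

```python
import random

def random_codebook(n, R, seed=0):
    """Draw 2^(nR) binary codewords of length n i.i.d. uniform — an
    illustrative stand-in for drawing from the capacity-achieving p(x)."""
    rng = random.Random(seed)
    M = 2 ** int(n * R)  # number of messages / codewords
    return [tuple(rng.randrange(2) for _ in range(n)) for _ in range(M)]

def jointly_typical(x, y, p, eps):
    """BSC shortcut: with uniform inputs, (x, y) lie in the jointly typical
    set iff the empirical flip fraction is within eps of the crossover p."""
    flips = sum(a != b for a, b in zip(x, y)) / len(x)
    return abs(flips - p) < eps

def typical_set_decode(y, codebook, p, eps):
    """Return the index of the unique jointly typical codeword, or None
    (a declared error) if there are zero or several candidates."""
    hits = [w for w, x in enumerate(codebook) if jointly_typical(x, y, p, eps)]
    return hits[0] if len(hits) == 1 else None
```

When zero or multiple codewords pass the typicality test, the decoder returns `None`, i.e. declares an error, matching the rule in the text.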
      第381行: 第381行:  
P(\text{error}) & {} = P(\text{error}|W=1) \le P(E_1^c) + \sum_{i=2}^{2^{nR}}P(E_i) \\
 
    
& {} \le P(E_1^c) + (2^{nR}-1)2^{-n(I(X;Y)-3\varepsilon)} \\
 
<math>\le H(W|Y^n) + I(X^n(W);Y^{n})</math> since X is a function of W
 
    
#<math>\le 1 + P_e^{(n)}nR + I(X^n(W);Y^n)</math> by the use of [[Fano's Inequality]]
 
<math>\le 1 + P_e^{(n)}nR + nC</math> by the fact that capacity is the maximized mutual information.
 
<math>\le 1 + P_e^{(n)}nR + nC</math> by the fact that capacity is maximized mutual information.
   −
通过容量是最大化的互信息这一事实 < math > le 1 + p _ e ^ {(n)} nR + nC </math > 。
+
由于容量是最大化的互信息,因此[ math > le 1 + p _ e ^ {(n)} nR + nC。
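Rearranging the chain of inequalities above, <math>nR \le 1 + P_e^{(n)}nR + nC</math>, gives an explicit lower bound on the block-error probability, <math>P_e^{(n)} \ge 1 - C/R - 1/(nR)</math>, which is bounded away from zero when <math>R > C</math>. A small illustrative helper (the name is ours, not the source's):

```python
def weak_converse_error_bound(n, R, C):
    """Lower bound P_e >= 1 - C/R - 1/(nR) implied by Fano's inequality:
    positive (errors unavoidable) once R > C and n is large."""
    return 1 - C / R - 1 / (n * R)
```

For R < C the bound is vacuous (negative), consistent with achievability below capacity.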
      第507行: 第507行:  
for some finite positive constant <math>A</math>. While the weak converse states that the error probability is bounded away from zero as <math>n</math> goes to infinity, the strong converse states that the error goes to 1. Thus, <math>C</math> is a sharp threshold between perfectly reliable and completely unreliable communication.
 
     