{{short description|Limit on data transfer rate}}
{{Information theory}}
{{redirect|Shannon's theorem|text=Shannon's name is also associated with the [[sampling theorem]]}}
 
In [[information theory]], the '''noisy-channel coding theorem''' (sometimes '''Shannon's theorem''' or '''Shannon's limit''') establishes that for any given degree of noise contamination of a communication channel, it is possible to communicate discrete data (digital information) nearly error-free up to a computable maximum rate through the channel. This result was presented by [[Claude Shannon]] in 1948 and was based in part on earlier work and ideas of [[Harry Nyquist]] and [[Ralph Hartley]].
The Shannon limit or Shannon capacity of a communication channel refers to the maximum rate of error-free data that can theoretically be transferred over the channel if the link is subject to random data transmission errors, for a particular noise level.  It was first described by Shannon (1948), and shortly after published in a book by Claude Elwood Shannon and Warren Weaver in 1949 entitled ''The Mathematical Theory of Communication''.  This founded the modern discipline of information theory.
            
== Overview ==
Stated by Claude Shannon in 1948, the theorem describes the maximum possible efficiency of error-correcting methods versus levels of noise interference and data corruption. Shannon's theorem has wide-ranging applications in both communications and data storage. This theorem is of foundational importance to the modern field of information theory. Shannon only gave an outline of the proof; the first rigorous proof for the discrete case was given by Amiel Feinstein in 1954.
The Shannon theorem states that given a noisy channel with channel capacity C and information transmitted at a rate R, then if <math>R < C</math> there exist codes that allow the probability of error at the receiver to be made arbitrarily small. This means that, theoretically, it is possible to transmit information nearly without error at any rate below a limiting rate, C.
The converse is also important. If <math>R > C</math>, an arbitrarily small probability of error is not achievable. All codes will have a probability of error greater than a certain positive minimal level, and this level increases as the rate increases. So, information cannot be guaranteed to be transmitted reliably across a channel at rates beyond the channel capacity.  The theorem does not address the rare situation in which rate and capacity are equal.
The channel capacity <math>C</math> can be calculated from the physical properties of a channel; for a band-limited channel with Gaussian noise, using the Shannon–Hartley theorem.
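For the band-limited Gaussian case, the Shannon–Hartley formula <math>C = B \log_2(1 + S/N)</math> can be evaluated directly. The following sketch uses illustrative numbers (a 3000 Hz channel at 30 dB SNR, chosen here for demonstration and not taken from the article):

```python
import math

def shannon_hartley_capacity(bandwidth_hz: float, snr_linear: float) -> float:
    """Channel capacity in bits/s of a band-limited AWGN channel."""
    return bandwidth_hz * math.log2(1 + snr_linear)

# Illustrative telephone-line-like channel: 3000 Hz bandwidth, 30 dB SNR.
snr = 10 ** (30 / 10)          # 30 dB expressed as a linear power ratio
c = shannon_hartley_capacity(3000, snr)
print(round(c))                # ≈ 29902 bits/s
```

Note how the capacity grows only logarithmically in the SNR but linearly in the bandwidth.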
Simple schemes such as "send the message 3 times and use a best 2 out of 3 voting scheme if the copies differ" are inefficient error-correction methods, unable to asymptotically guarantee that a block of data can be communicated free of error.  Advanced techniques such as Reed–Solomon codes and, more recently,  low-density parity-check (LDPC) codes and turbo codes, come much closer to reaching the theoretical Shannon limit, but at a cost of high computational complexity. Using these highly efficient codes and with the computing power in today's digital signal processors, it is now possible to reach very close to the Shannon limit. In fact, it was shown that LDPC codes can reach within 0.0045&nbsp;dB of the Shannon limit (for binary Additive white Gaussian noise (AWGN) channels, with very long block lengths).
 
Simple schemes such as "send the message 3 times and use a best 2 out of 3 voting scheme if the copies differ" are inefficient error-correction methods, unable to asymptotically guarantee that a block of data can be communicated free of error.  Advanced techniques such as Reed–Solomon codes and, more recently,  low-density parity-check (LDPC) codes and turbo codes, come much closer to reaching the theoretical Shannon limit, but at a cost of high computational complexity. Using these highly efficient codes and with the computing power in today's digital signal processors, it is now possible to reach very close to the Shannon limit. In fact, it was shown that LDPC codes can reach within 0.0045&nbsp;dB of the Shannon limit (for binary Additive white Gaussian noise (AWGN) channels, with very long block lengths).
   −
简单的方案,例如”发送信息3次,如果副本不同,使用3个投票方案中最好的2个” ,都是低效的纠错方法,不能渐近地保证一个数据块的通信没有错误。诸如 Reed-Solomon 码以及最近的低密度奇偶校验(LDPC)码和 turbo 码等先进技术,更接近于达到理论上的 Shannon 极限,但代价是高计算复杂度。使用这些高效率的代码和当今数字信号处理器的计算能力,现在可以非常接近香农限制。事实上,已经证明 LDPC 码可以达到香农极限的0.0045 dB 以内(对于二进制加性高斯白噪声(AWGN)信道,具有很长的块长度)。
+
简单的方案,例如”发送信息3次,如果副本不同,使用3个投票方案中最好的2个” ,都是低效的纠错方法,不能保证一组数据能够通信无误。诸如 Reed-Solomon 码以及最近的低密度奇偶校验(LDPC)码和 turbo 码等先进技术,更接近于达到理论上的 Shannon 极限,但代价是高计算复杂度。使用这些高效率的代码和当今数字信号处理器的计算能力,现在可以非常接近香农限制。事实上,已经证明 LDPC 码可以达到香农极限的0.0045 dB 以内(对于二进制加性高斯白噪声(AWGN)信道,具有很长的块长度)。
 
  −
 
  −
 
        −
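As a concrete illustration of how far simple repetition falls short, the sketch below (with an assumed binary symmetric channel and an illustrative crossover probability of 0.1, not values from the article) compares the 3-fold repetition scheme's residual error and rate against the channel's capacity:

```python
import math

def h2(p: float) -> float:
    """Binary entropy function in bits."""
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

p = 0.1                                  # assumed BSC crossover probability
# 3-fold repetition with majority vote fails iff >= 2 of the 3 bits flip.
p_majority = 3 * p**2 * (1 - p) + p**3
capacity = 1 - h2(p)                     # BSC capacity, bits per channel use

print(f"residual error: {p_majority:.4f}")           # 0.0280 - better than 0.1, but not -> 0
print(f"rate used: {1/3:.3f} vs capacity: {capacity:.3f}")
```

Repeating further drives the error down only as the rate goes to zero, whereas the theorem promises vanishing error at any fixed rate below capacity.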
== Mathematical statement ==
      
The basic mathematical model for a communication system is the following:
Channel model
A message W is transmitted through a noisy channel by using encoding and decoding functions. An encoder maps W into a pre-defined sequence of channel symbols of length n. In its most basic model, the channel distorts each of these symbols independently of the others. The output of the channel –the received sequence– is fed into a decoder which maps the sequence into an estimate of the message. In this setting, the probability of error is defined as:
::<math> P_e = \text{Pr}\left\{ \hat{W} \neq W \right\}. </math>
:1. For every discrete memoryless channel, the [[channel capacity]] is defined in terms of the mutual information <math>I(X; Y)</math>,
    +
<math>\ C = \sup_{p_X} I(X;Y)</math>
has the following property.  For any <math>\epsilon>0</math> and <math>R<C</math>, for large enough <math>N</math>, there exists a code of length <math>N</math> and rate <math>\geq R</math> and a decoding algorithm, such that the maximal probability of block error is <math>\leq \epsilon</math>.
2. If a probability of bit error <math>p_b</math> is acceptable, rates up to <math>R(p_b)</math> are achievable, where
<math>R(p_b) = \frac{C}{1-H_2(p_b)} .</math>
and <math>  H_2(p_b)</math> is the binary entropy function
<math>H_2(p_b)=- \left[ p_b \log_2 {p_b} + (1-p_b) \log_2 ({1-p_b}) \right]</math>
3. For any <math>p_b</math>, rates greater than <math>R(p_b)</math> are not achievable.
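The quantities in this statement can be made concrete for a binary symmetric channel. The sketch below (an illustration with an assumed crossover probability of 0.11 and tolerated bit-error probability of 0.01, not values from the article) computes the capacity <math>C = \sup_{p_X} I(X;Y)</math> by a grid search over input distributions, and the lifted rate <math>R(p_b)</math> from statement 2:

```python
import math

def h2(q: float) -> float:
    """Binary entropy function H_2(q) in bits."""
    if q in (0.0, 1.0):
        return 0.0
    return -(q * math.log2(q) + (1 - q) * math.log2(1 - q))

def mutual_information_bsc(p_x: float, eps: float) -> float:
    """I(X;Y) = H(Y) - H(Y|X) for a BSC with crossover eps
    and input distribution P(X=1) = p_x."""
    p_y1 = p_x * (1 - eps) + (1 - p_x) * eps
    return h2(p_y1) - h2(eps)

eps = 0.11
# C is the supremum of I(X;Y) over input distributions; a grid search
# suffices here (for the BSC the supremum is attained at p_x = 0.5).
capacity = max(mutual_information_bsc(p / 1000, eps) for p in range(1001))

# Statement 2: tolerating bit-error probability p_b lifts the achievable
# rate to R(p_b) = C / (1 - H_2(p_b)), slightly above C.
p_b = 0.01
rate = capacity / (1 - h2(p_b))
print(f"C ≈ {capacity:.4f} bits/use, R(p_b) ≈ {rate:.4f}")
```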
== Outline of proof ==
As with several other major results in information theory, the proof of the noisy channel coding theorem includes an achievability result and a matching converse result.  These two components serve to bound, in this case, the set of possible rates at which one can communicate over a noisy channel, and matching serves to show that these bounds are tight bounds.
===Achievability for discrete memoryless channels===
This particular proof of achievability follows the style of proofs that make use of the asymptotic equipartition property (AEP). Another style of proof can be found in information theory texts using error exponents.
Both types of proofs make use of a random coding argument where the codebook used across a channel is randomly constructed; this serves to make the analysis simpler while still proving the existence of a code satisfying a desired low probability of error at any data rate below the channel capacity.
By an AEP-related argument, given a channel, length <math>n</math> strings of source symbols <math>X_1^{n}</math>, and length <math>n</math> strings of channel outputs <math>Y_1^{n}</math>, we can define a jointly typical set by the following:
  <math>A_\varepsilon^{(n)} = \{(x^n, y^n) \in \mathcal X^n \times \mathcal Y^n </math>

<math>2^{-n(H(X)+\varepsilon)} \le p(X_1^n) \le 2^{-n(H(X) - \varepsilon)}</math>

<math>2^{-n(H(Y) + \varepsilon)} \le p(Y_1^n) \le 2^{-n(H(Y)-\varepsilon)}</math>

<math>{2^{-n(H(X,Y) + \varepsilon)}}\le p(X_1^n, Y_1^n) \le 2^{-n(H(X,Y) -\varepsilon)} \}</math>
We say that two sequences <math>{X_1^n}</math> and <math>Y_1^n</math> are jointly typical if they lie in the jointly typical set defined above.
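The single-sequence conditions in this definition can be checked empirically: for long i.i.d. sequences the per-symbol log-probability concentrates around the entropy, so almost every drawn sequence is typical. A small sketch, assuming a Bernoulli(0.3) source (parameters chosen for illustration, not from the article):

```python
import math
import random

random.seed(0)
p, n, eps = 0.3, 2000, 0.05
h = -(p * math.log2(p) + (1 - p) * math.log2(1 - p))  # H(X) ≈ 0.8813 bits

def empirical_rate(xs):
    """-(1/n) log2 p(x^n) for an i.i.d. Bernoulli(p) sequence xs."""
    logp = sum(math.log2(p if x else 1 - p) for x in xs)
    return -logp / len(xs)

# By the AEP, the per-symbol log-probability of a long i.i.d. sequence is
# within eps of H(X) with high probability - the defining property of the
# typical set above.
rates = [empirical_rate([random.random() < p for _ in range(n)]) for _ in range(200)]
typical = sum(abs(r - h) < eps for r in rates)
print(f"{typical}/200 sampled sequences are eps-typical")
```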
#In the style of the random coding argument, we randomly generate <math> 2^{nR} </math> codewords of length n from a probability distribution Q.
#This code is revealed to the sender and receiver.  It is also assumed that one knows the transition matrix <math>p(y|x)</math> for the channel being used.
#A message W is chosen according to the uniform distribution on the set of codewords.  That is, <math>Pr(W = w) = 2^{-nR}, w = 1, 2, \dots, 2^{nR}</math>.
#The message W is sent across the channel.
#The receiver receives a sequence according to <math>P(y^n|x^n(w))= \prod_{i = 1}^np(y_i|x_i(w))</math>
#Sending these codewords across the channel, we receive <math>Y_1^n</math>, and decode to some source sequence if there exists exactly 1 codeword that is jointly typical with Y.  If there are no jointly typical codewords, or if there are more than one, an error is declared.  An error also occurs if a decoded codeword doesn't match the original codeword.  This is called ''typical set decoding''.
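The encode–transmit–decode loop above can be mimicked with a toy simulation. The sketch below uses illustrative parameters and minimum-Hamming-distance decoding as a practical stand-in for typical-set decoding on a binary symmetric channel (all values are assumptions for demonstration):

```python
import random

random.seed(1)
n, R = 15, 0.2             # block length and rate: 2^{nR} = 8 codewords
eps = 0.05                 # assumed BSC crossover probability
M = 2 ** round(n * R)      # number of codewords

# Step 1-2: a random codebook, known to both sender and receiver.
codebook = [[random.randint(0, 1) for _ in range(n)] for _ in range(M)]

def transmit(codeword):
    # BSC: flip each symbol independently with probability eps.
    return [b ^ (random.random() < eps) for b in codeword]

def decode(received):
    # Minimum-distance decoding (stand-in for typical-set decoding).
    return min(range(M),
               key=lambda i: sum(a != b for a, b in zip(codebook[i], received)))

trials = 2000
errors = sum(decode(transmit(codebook[w])) != w
             for w in (random.randrange(M) for _ in range(trials)))
print(f"estimated block error rate: {errors / trials:.3f}")
```

Even this crude random code decodes correctly most of the time; the theorem's content is that, as n grows with R held below C, codes exist whose error probability vanishes.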
The probability of error of this scheme is divided into two parts:

#First, error can occur if no jointly typical X sequences are found for a received Y sequence.
#Second, error can occur if an incorrect X sequence is jointly typical with a received Y sequence.
*By the randomness of the code construction, we can assume that the average probability of error averaged over all codes does not depend on the index sent.  Thus, without loss of generality, we can assume ''W'' = 1.
*From the joint AEP, we know that the probability that no jointly typical X exists goes to 0 as n grows large.  We can bound this error probability by <math>\varepsilon</math>.
*Also from the joint AEP, we know the probability that a particular <math>X_1^{n}(i)</math> and the <math>Y_1^n</math> resulting from ''W'' = 1 are jointly typical is <math>\le 2^{-n(I(X;Y) - 3\varepsilon)}</math>.
Define: <math>E_i = \{(X_1^n(i), Y_1^n) \in A_\varepsilon^{(n)}\}, i = 1, 2, \dots, 2^{nR}</math>
as the event that message i is jointly typical with the sequence received when message 1 is sent.
  <math>
\begin{align}
P(\text{error}) & {} = P(\text{error}|W=1) \le P(E_1^c) + \sum_{i=2}^{2^{nR}}P(E_i) \\
& {} \le P(E_1^c) + (2^{nR}-1)2^{-n(I(X;Y)-3\varepsilon)} \\
& {} \le \varepsilon + 2^{-n(I(X;Y)-R-3\varepsilon)}.
\end{align}
</math>
We can observe that as <math>n</math> goes to infinity, if <math>R < I(X;Y)</math> for the channel, the probability of error will go to 0.
=== Weak converse for discrete memoryless channels===
Suppose a code of <math>2^{nR}</math> codewords.  Let W be drawn uniformly over this set as an index.  Let <math>X^n</math> and <math>Y^n</math> be the transmitted codewords and received codewords, respectively.
#<math>nR = H(W) = H(W|Y^n) + I(W;Y^n)</math> using identities involving entropy and mutual information
#<math>\le H(W|Y^n) + I(X^n(W);Y^{n})</math> since X is a function of W
#<math>\le 1 + P_e^{(n)}nR + I(X^n(W);Y^n)</math> by the use of [[Fano's Inequality]]
#<math>\le 1 + P_e^{(n)}nR + nC</math> by the fact that capacity is maximized mutual information.
The result of these steps is that <math> P_e^{(n)} \ge 1 - \frac{1}{nR} - \frac{C}{R} </math>.  As the block length <math>n</math> goes to infinity, we obtain that <math> P_e^{(n)}</math> is bounded away from 0 if R is greater than C; we can get arbitrarily low probabilities of error only if R is less than C.
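Plugging illustrative numbers into this bound (the values of C, R, and n below are invented for demonstration) shows the error floor that appears as soon as R exceeds C:

```python
# Weak-converse lower bound P_e >= 1 - 1/(nR) - C/R when signalling at
# rate R over a channel of capacity C; illustrative values.
C, R, n = 0.5, 0.6, 1000
p_e_lower = 1 - 1 / (n * R) - C / R
print(f"P_e >= {p_e_lower:.4f}")        # a strictly positive floor, since R > C

# As n grows, the bound tends to 1 - C/R, which stays bounded away from 0.
limit = 1 - C / R
print(f"limit as n -> infinity: {limit:.4f}")
```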
=== Strong converse for discrete memoryless channels ===
 
A strong converse theorem, proven by Wolfowitz in 1957, states that,

<math>
  P_e \geq 1- \frac{4A}{n(R-C)^2} - e^{-\frac{n(R-C)}{2}}
</math>
for some finite positive constant <math>A</math>. While the weak converse states that the error probability is bounded away from zero as <math>n</math> goes to infinity, the strong converse states that the error goes to 1. Thus, <math>C</math> is a sharp threshold between perfectly reliable and completely unreliable communication.
 
for some finite positive constant <math>A</math>. While the weak converse states that the error probability is bounded away from zero as <math>n</math> goes to infinity, the strong converse states that the error goes to 1. Thus, <math>C</math> is a sharp threshold between perfectly reliable and completely unreliable communication.
   −
有限正常数 a / 数学。当数学 n / math 趋于无穷时,弱逆表示错误概率远离零,而强逆表示错误趋于1。因此,数学 c / math 是完全可靠和完全不可靠的通信之间的一个尖锐的门槛。
+
为了某个有限的正常数。当弱逆表示错误概率远离零是有界的时候,强逆表示错误概率为1。因此,c </math > 是完全可靠和完全不可靠的通信之间的一个尖锐的门槛。
== Channel coding theorem for non-stationary memoryless channels==
 
非平稳无记忆信道的信道编码定理
      
We assume that the channel is memoryless, but its transition probabilities change with time, in a fashion known at the transmitter as well as the receiver.
 
我们假设信道是无记忆的,但其转移概率随时间变化,且变化方式对发射机和接收机都是已知的。
 
信道容量由下式给出
        第683行: 第533行:  
<math>   
 
    
C=\lim \inf \max_{p^{(X_1)},p^{(X_2)},...}\frac{1}{n}\sum_{i=1}^nI(X_i;Y_i).
 
    
</math>
 
<math>
 
    
C=\lim \inf \frac{1}{n}\sum_{i=1}^n C_i
 
    
</math>
 
where <math>C_i</math> is the capacity of the ith channel.
 
其中 <math>C_i</math> 是第 i 个信道的容量。
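作为示意(信道序列为假设的例子),下面的 Python 片段对一串转移概率已知、随时间变化的二元对称信道(BSC)计算每个时刻的容量 <math>C_i = 1 - H(p_i)</math> 及其平均值 <math>\frac{1}{n}\sum_{i=1}^n C_i</math>:

```python
import math

def binary_entropy(p):
    """二元熵 H(p)(单位:比特);约定 H(0) = H(1) = 0。"""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1.0 - p) * math.log2(1.0 - p)

def bsc_capacity(p):
    """交叉概率为 p 的二元对称信道的容量 1 - H(p)。"""
    return 1.0 - binary_entropy(p)

def average_capacity(crossover_probs):
    """已知信道序列的逐字母平均容量 (1/n) * sum(C_i)。"""
    return sum(bsc_capacity(p) for p in crossover_probs) / len(crossover_probs)

# 噪声水平在两个已知值之间交替的时变信道
probs = [0.1 if i % 2 == 0 else 0.2 for i in range(1000)]
print(average_capacity(probs))
```
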
 
 
            
=== Outline of the proof===
 
证明大纲
      
The proof runs through in almost the same way as that of channel coding theorem. Achievability follows from random coding with each symbol chosen randomly from the capacity achieving distribution for that particular channel. Typicality arguments use the definition of typical sets for non-stationary sources defined in the [[asymptotic equipartition property]] article.
 
证明过程与信道编码定理的证明几乎相同。可达性由随机编码得到:每个符号都从该时刻信道的容量达成分布中随机选取。典型性论证使用渐近均分性质一文中为非平稳信源定义的典型集。
 
The technicality of lim inf comes into play when <math>\frac{1}{n}\sum_{i=1}^n C_i</math> does not converge.
 
当 <math>\frac{1}{n}\sum_{i=1}^n C_i</math> 不收敛时,就需要用到下极限(lim inf)这一技术性处理。
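作为补充示意(块状交替的容量序列为假设的构造),当逐字母容量按长度倍增的块在两个值之间交替时,运行平均 <math>\frac{1}{n}\sum_{i=1}^n C_i</math> 会持续振荡而不收敛,此时应取其下极限:

```python
def block_capacities(num_blocks, low=0.2, high=0.8):
    """构造逐字母容量序列:取值在 high 与 low 之间交替,
    块长依次为 1, 2, 4, 8, ...,使得运行平均不收敛。"""
    caps, value, length = [], high, 1
    for _ in range(num_blocks):
        caps.extend([value] * length)
        value = low if value == high else high
        length *= 2
    return caps

caps = block_capacities(16)
running, total = [], 0.0
for i, c in enumerate(caps, start=1):
    total += c
    running.append(total / i)

# 运行平均的尾部在约 0.4 与 0.6 之间持续振荡;
# 其下确界逼近 lim inf(此例中约为 0.4)。
tail = running[len(running) // 2:]
print(min(tail), max(tail))
```
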
==See also==
      
参见
      
* [[Asymptotic equipartition property]] (AEP)
* [[Fano's inequality]]
* [[Rate–distortion theory]]
* [[Shannon's source coding theorem]]
* [[Shannon–Hartley theorem]]
* [[Turbo code]]

==Notes==
      
注释
      
{{reflist}}
 
==References==
      
参考资料
      
* [[Thomas M. Cover|Cover T. M.]], Thomas J. A., ''Elements of Information Theory'', [[John Wiley & Sons]], 1991. {{ISBN|0-471-06259-6}}
* [[Fano|Fano, R. A.]], ''Transmission of Information; A Statistical Theory of Communications'', [[MIT Press]], 1961. {{ISBN|0-262-06001-9}}
* [[Amiel Feinstein|Feinstein, Amiel]], "A new basic theorem of information theory", ''[[IEEE Transactions on Information Theory]]'', 4(4): 2–22, 1954.
* [[David J.C. MacKay|MacKay, David J. C.]], ''[http://www.inference.phy.cam.ac.uk/mackay/itila/book.html Information Theory, Inference, and Learning Algorithms]'', [[Cambridge University Press]], 2003. {{ISBN|0-521-64298-1}} [free online]
* [[Claude E. Shannon|Shannon, C. E.]], [https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=6773024 ''A Mathematical Theory of Communication'']. ''The Bell System Technical Journal'' 27,3: 379–423, 1948.
* [[Claude E. Shannon|Shannon, C. E.]], [http://cm.bell-labs.com/cm/ms/what/shannonday/paper.html ''A Mathematical Theory of Communication''] Urbana, IL: University of Illinois Press, 1948 (reprinted 1998).
* [[Wolfowitz|Wolfowitz, J.]], "[https://projecteuclid.org/download/pdf_1/euclid.ijm/1255380682 The coding of messages subject to chance errors]", ''Illinois J. Math.'', 1: 591–606, 1957.

==External links==
      
外部链接
      
* [http://www.cs.miami.edu/home/burt/learning/Csc524.142/LarsTelektronikk02.pdf On Shannon and Shannon's law]
* [http://cnx.org/content/m10180/latest/ Shannon's Noisy Channel Coding Theorem]

{{DEFAULTSORT:Noisy-Channel Coding Theorem}}
 
      
[[Category:Information theory]]
 
[[Category:Theorems in discrete mathematics]]
    
[[Category:Telecommunication theory]]
 