It is common in information theory to speak of the "rate" or "entropy" of a language; this is appropriate, for example, when the source of information is English prose. The rate of a source of information is related to its redundancy and to how well it can be compressed.
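As an illustrative sketch (not part of the article; the function name is hypothetical), the per-character entropy of a sample string can be estimated from empirical symbol frequencies; more redundant, more compressible text scores lower:

```python
from collections import Counter
from math import log2

def entropy_rate(text: str) -> float:
    """Empirical per-character entropy (bits/char) of a sample text."""
    counts = Counter(text)
    n = len(text)
    return -sum((c / n) * log2(c / n) for c in counts.values())

# A source drawn from few symbols is redundant and compresses well;
# one drawn uniformly from many symbols does not.
low = entropy_rate("aaaaabbbbb")   # two equiprobable symbols: 1 bit/char
high = entropy_rate("abcdefghij")  # ten equiprobable symbols: log2(10) bits/char
```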
Communication over a channel (such as an ethernet cable) is the primary motivation of information theory. However, such channels often fail to produce an exact reconstruction of a signal; noise, periods of silence, and other forms of signal corruption often degrade quality.
Here ''X'' represents the space of messages transmitted, and ''Y'' the space of messages received during a unit time over our channel. Let {{math|''p''(''y''{{pipe}}''x'')}} be the conditional probability distribution function of ''Y'' given ''X''. We will consider {{math|''p''(''y''{{pipe}}''x'')}} to be an inherent fixed property of our communications channel (representing the nature of the noise of our channel). Then the joint distribution of ''X'' and ''Y'' is completely determined by our channel and by our choice of {{math|''f''(''x'')}}, the marginal distribution of messages we choose to send over the channel. Under these constraints, we would like to maximize the rate of information, or the signal, we can communicate over the channel. The appropriate measure for this is the mutual information, and this maximum mutual information is called the '''channel capacity''' and is given by:
:<math> C = \max_{f} I(X;Y).\! </math>
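As a concrete sketch (not part of the original article; all names are hypothetical), consider a binary symmetric channel with crossover probability ''p''. Maximizing the mutual information over input distributions numerically recovers the textbook capacity {{math|''C'' {{=}} 1 − ''H''<sub>b</sub>(''p'')}}, attained by the uniform input:

```python
from math import log2

def mutual_information(q: float, p: float) -> float:
    """I(X;Y) for a binary symmetric channel: X ~ Bernoulli(q),
    crossover probability p. Uses I(X;Y) = H(Y) - H(Y|X)."""
    def h(x: float) -> float:  # binary entropy in bits
        return 0.0 if x in (0.0, 1.0) else -x * log2(x) - (1 - x) * log2(1 - x)
    py1 = q * (1 - p) + (1 - q) * p   # P(Y = 1)
    return h(py1) - h(p)              # H(Y|X) = h(p) for every input symbol

p = 0.1
# Maximize over a grid of input distributions f = Bernoulli(q).
capacity = max(mutual_information(q, p) for q in (i / 1000 for i in range(1001)))
# capacity = 1 - h(0.1), about 0.531 bits per channel use, at q = 0.5
```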
This capacity has the following property related to communicating at information rate ''R'' (where ''R'' is usually bits per symbol). For any information rate ''R'' < ''C'' and coding error ε > 0, for large enough ''N'', there exists a code of length ''N'' and rate ≥ ''R'' and a decoding algorithm, such that the maximal probability of block error is ≤ ε; that is, it is always possible to transmit with arbitrarily small block error. In addition, for any rate ''R'' > ''C'', it is impossible to transmit with arbitrarily small block error.
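As a toy illustration (not from the article; the function name is hypothetical), a length-''N'' repetition code over a binary symmetric channel with crossover probability ''p'' < 1/2 drives the block-error probability to zero as ''N'' grows. Its rate is only 1/''N'', whereas the theorem guarantees vanishing error at any fixed rate below ''C''; the sketch only shows that coding can trade length for reliability:

```python
from math import comb

def repetition_block_error(n: int, p: float) -> float:
    """Exact block-error probability of an n-fold repetition code over a
    binary symmetric channel with crossover probability p, under
    majority-vote decoding (n odd)."""
    # Decoding fails exactly when more than half of the n copies flip.
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

# Block error shrinks as the code length grows (at the cost of rate 1/n).
errs = [repetition_block_error(n, 0.1) for n in (1, 3, 5, 11)]
```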
 
====Capacity of particular channel models====
* A continuous-time analog communications channel subject to [[Gaussian noise]] — see [[Shannon–Hartley theorem]].
::[[File:Binary erasure channel.svg]]
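For the binary erasure channel depicted above, the capacity has a simple closed form (a standard result, stated here for context rather than taken from this excerpt): with erasure probability ''p''<sub>''e''</sub>, a uniform input achieves the maximum, giving

:<math> C = \max_{f} I(X;Y) = 1 - p_e. </math>

Intuitively, a fraction ''p''<sub>''e''</sub> of the transmitted bits is lost, and the receiver knows exactly which positions were erased.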
==Applications to other fields==