Hartley then combined the above quantification with Nyquist's observation that the number of independent pulses that could be put through a channel of bandwidth <math>B</math> hertz was <math>2B</math> pulses per second, to arrive at his quantitative measure for achievable line rate.

Hartley's law is sometimes quoted as just a proportionality between the analog bandwidth, <math>B</math>, in hertz, and what today is called the digital bandwidth, <math>R</math>, in bit/s.<ref>{{cite book | title = Introduction to Telecommunications | edition = 2nd| author = Anu A. Gokhale | publisher = Thomson Delmar Learning | year = 2004 | isbn = 1-4018-5648-9 | url = https://books.google.com/books?id=QowmxWAOEtYC&pg=PA37&dq=%22hartley%27s+law%22+proportional }}</ref> Other times it is quoted in this more quantitative form, as an achievable line rate of <math>R</math> bits per second:<ref>{{cite book | title = Telecommunications Engineering | author = John Dunlop and D. Geoffrey Smith | publisher = CRC Press | year = 1998 | url = https://books.google.com/books?id=-kyPyn3Dst8C&pg=RA4-PA30&dq=%22hartley%27s+law%22 | isbn = 0-7487-4044-9 }}</ref>
<math>R \leq 2B \log_2(M)</math>
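As a sketch of how this line-rate formula is applied (an illustration added here with hypothetical numbers, not an example from the original article):

```python
import math

def hartley_rate(bandwidth_hz: float, levels: int) -> float:
    """Hartley's achievable line rate R = 2B * log2(M) in bit/s."""
    return 2 * bandwidth_hz * math.log2(levels)

# Hypothetical example: a 3000 Hz channel with M = 4 distinguishable levels
rate = hartley_rate(3000, 4)  # 2 * 3000 * log2(4) = 12000 bit/s
```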
Hartley did not work out exactly how the number <math>M</math> should depend on the noise statistics of the channel, or how the communication could be made reliable even when individual symbol pulses could not be reliably distinguished to <math>M</math> levels; with Gaussian noise statistics, system designers had to choose a very conservative value of <math>M</math> to achieve a low error rate.
The concept of an error-free capacity awaited Claude Shannon, who built on Hartley's observations about a logarithmic measure of information and Nyquist's observations about the effect of bandwidth limitations.
Hartley's rate result can be viewed as the capacity of an errorless <math>M</math>-ary channel of <math>2B</math> symbols per second. Some authors refer to it as a capacity. But such an errorless channel is an idealization, and if <math>M</math> is chosen small enough to make the noisy channel nearly errorless, the result is necessarily less than the Shannon capacity of the noisy channel of bandwidth <math>B</math>, which is the Hartley–Shannon result that followed later.
      
===Noisy channel coding theorem and capacity===
[[Claude Shannon]]'s development of information theory during World War II provided the next big step in understanding how much information could be reliably communicated through noisy channels. Building on Hartley's foundation, Shannon's noisy channel coding theorem (1948) describes the maximum possible efficiency of error-correcting methods versus levels of noise interference and data corruption.<ref>{{cite book | author = [[Claude E. Shannon|C. E. Shannon]] | title = The Mathematical Theory of Communication | publisher = Urbana, IL:University of Illinois Press | origyear = 1949| year = 1998}}</ref><ref>{{cite journal | author = [[Claude E. Shannon|C. E. Shannon]] | title = Communication in the presence of noise | url = http://www.stanford.edu/class/ee104/shannonpaper.pdf | format = PDF | journal = [[Proceedings of the Institute of Radio Engineers]] | volume = 37 | issue = 1 | pages = 10–21 | date = January 1949 | url-status = dead | archiveurl = https://web.archive.org/web/20100208112344/http://www.stanford.edu/class/ee104/shannonpaper.pdf | archivedate = 2010-02-08 }}</ref> The proof of the theorem shows that a randomly constructed error-correcting code is essentially as good as the best possible code; the theorem is proved through the statistics of such random codes.
Shannon's theorem shows how to compute a channel capacity from a statistical description of a channel, and establishes that, given a noisy channel with capacity <math>C</math> and information transmitted at a line rate <math>R</math>, then if
      
<math>R < C</math>
there exists a coding technique which allows the probability of error at the receiver to be made arbitrarily small. This means that, theoretically, it is possible to transmit information nearly without error up to a limit of nearly <math>C</math> bits per second.
    
The converse of the above inequality is equally important:
<math>C < R</math>
the probability of error at the receiver increases without bound as the rate is increased, so no useful information can be transmitted beyond the channel capacity. The theorem does not address the rare situation in which rate and capacity are equal.
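To make the role of the capacity <math>C</math> concrete, here is a small sketch (added here, not from the original article; the channel parameters are hypothetical) that evaluates the Shannon–Hartley capacity <math>C = B \log_2(1 + S/N)</math> for a voice-grade telephone line:

```python
import math

def shannon_capacity(bandwidth_hz: float, snr_linear: float) -> float:
    """Shannon capacity C = B * log2(1 + S/N) in bit/s, for a linear (not dB) SNR."""
    return bandwidth_hz * math.log2(1 + snr_linear)

# Hypothetical voice-grade line: B = 3000 Hz, SNR = 30 dB (a power ratio of 1000)
snr = 10 ** (30 / 10)
capacity = shannon_capacity(3000, snr)  # roughly 29.9 kbit/s
```

Below this rate, the theorem guarantees that some coding scheme achieves an arbitrarily small error probability; above it, none does.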
The Shannon–Hartley theorem establishes what that channel capacity is for a finite-bandwidth continuous-time channel subject to Gaussian noise. It connects Hartley's result with Shannon's channel capacity theorem in a form that is equivalent to specifying the <math>M</math> in Hartley's line rate formula in terms of a signal-to-noise ratio, but achieving reliability through error-correction coding rather than through reliably distinguishable pulse levels.

If there were such a thing as a noise-free analog channel, one could transmit unlimited amounts of error-free data over it per unit of time (note: an infinite-bandwidth analog channel cannot transmit unlimited amounts of error-free data without infinite signal power). Real channels, however, are subject to limitations imposed by both finite bandwidth and nonzero noise.
Bandwidth and noise affect the rate at which information can be transmitted over an analog channel. Bandwidth limitations alone do not impose a cap on the maximum information rate, because it is still possible for the signal to take on an indefinitely large number of different voltage levels on each symbol pulse, with each slightly different level being assigned a different meaning or bit sequence. Taking into account both noise and bandwidth limitations, however, there is a limit to the amount of information that can be transferred by a signal of bounded power, even when sophisticated multi-level encoding techniques are used.
 
In the channel considered by the Shannon–Hartley theorem, noise and signal are combined by addition. That is, the receiver measures a signal that is equal to the sum of the signal encoding the desired information and a continuous random variable that represents the noise. This addition creates uncertainty as to the original signal's value. If the receiver has some information about the random process that generates the noise, one can in principle recover the information in the original signal by considering all possible states of the noise process. In the case of the Shannon–Hartley theorem, the noise is assumed to be generated by a Gaussian process with a known variance. Since the variance of a Gaussian process is equivalent to its power, it is conventional to call this variance the noise power.
 
Such a channel is called the additive white Gaussian noise (AWGN) channel, because Gaussian noise is added to the signal; "white" means equal amounts of noise at all frequencies within the channel bandwidth. Such noise can arise both from random sources of energy and also from coding and measurement error at the sender and receiver, respectively. Since sums of independent Gaussian random variables are themselves Gaussian random variables, this conveniently simplifies analysis, if one assumes that such error sources are also Gaussian and independent.
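As a quick numerical illustration of the last point (a sketch added here, not from the original article), the variance of a sum of independent Gaussian noise sources equals the sum of their variances, which is why independent noise powers simply add in the analysis:

```python
import random
import statistics

random.seed(42)  # reproducible sketch

# Two independent Gaussian noise sources with powers (variances) 1.0 and 4.0
n1 = [random.gauss(0.0, 1.0) for _ in range(100_000)]
n2 = [random.gauss(0.0, 2.0) for _ in range(100_000)]

# The combined noise is again Gaussian, with power close to 1.0 + 4.0 = 5.0
combined_power = statistics.pvariance([x + y for x, y in zip(n1, n2)])
```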
      
==Implications of the theorem==
===Comparison of Shannon's capacity to Hartley's law===
Comparing the channel capacity to the information rate from Hartley's law, we can find the effective number of distinguishable levels <math>M</math>:<ref>{{cite book | title = An Introduction to Information Theory: symbols, signals & noise | author = John Robinson Pierce | url = https://archive.org/details/introductiontoin00john | url-access = registration | quote = information intitle:theory inauthor:pierce. | publisher = Courier Dover Publications | year = 1980 | isbn = 0-486-24061-4 }}</ref>
    
<math>2B\log_2{M} = B\log_2\left(1+\frac{S}{N}\right)</math>

<math>M = \sqrt{1 + \frac{S}{N}}</math>
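As a sketch (added here with hypothetical numbers, not an example from the original article), the effective number of distinguishable levels implied by a given SNR follows directly from the last equation:

```python
import math

def effective_levels(snr_linear: float) -> float:
    """Effective number of distinguishable levels M = sqrt(1 + S/N)."""
    return math.sqrt(1 + snr_linear)

# Hypothetical example: a linear SNR of 15 gives M = sqrt(16) = 4 levels,
# so Hartley's rate 2B*log2(M) matches the Shannon capacity B*log2(1 + S/N).
m = effective_levels(15)
```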
The square root effectively converts the power ratio back to a voltage ratio, so the number of levels is approximately proportional to the ratio of signal RMS amplitude to noise standard deviation.
      
==Frequency-dependent (colored noise) case==
In the simple version above, the signal and noise are fully uncorrelated, in which case <math>S+N</math> is the total power of the received signal and noise together. A generalization of the above equation for the case where the additive noise is not white (or the <math>S/N</math> is not constant with frequency over the bandwidth) is obtained by treating the channel as many narrow, independent Gaussian channels in parallel:
    
<math>C = \int_{0}^{B}\log_2\left(1 + \frac{S(f)}{N(f)}\right)df</math>

where:
* <math>C</math> is the channel capacity in bits per second
* <math>B</math> is the bandwidth of the channel in hertz
* <math>S(f)</math> is the signal power spectrum
* <math>N(f)</math> is the noise power spectrum
* <math>f</math> is frequency in hertz
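The integral can be evaluated numerically. Below is a sketch (added here, not from the original article) using a simple midpoint Riemann sum for a hypothetical channel whose noise density rises with frequency while the signal density is flat:

```python
import math

def colored_noise_capacity(s_of_f, n_of_f, bandwidth_hz, steps=10_000):
    """Approximate C = integral from 0 to B of log2(1 + S(f)/N(f)) df
    with a midpoint Riemann sum over `steps` narrow sub-channels."""
    df = bandwidth_hz / steps
    return sum(
        math.log2(1 + s_of_f((i + 0.5) * df) / n_of_f((i + 0.5) * df)) * df
        for i in range(steps)
    )

# Hypothetical channel: flat signal density, noise density doubling across the band
cap = colored_noise_capacity(lambda f: 1.0, lambda f: 1e-3 * (1 + f / 3000), 3000)
```

With a constant <math>S(f)/N(f)</math> this reduces to the flat-spectrum formula <math>B\log_2(1+S/N)</math>, which makes a convenient sanity check.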
Note: the theorem only applies to Gaussian stationary process noise. This formula's way of introducing frequency-dependent noise cannot describe all continuous-time noise processes. For example, consider a noise process consisting of adding a random wave whose amplitude is 1 or −1 at any point in time, and a channel that adds such a wave to the source signal. Such a wave's frequency components are highly dependent. Though such a noise may have a high power, it is fairly easy to transmit a continuous signal with much less power than one would need if the underlying noise were a sum of independent noises in each frequency band.
==Approximations==
For large or small and constant signal-to-noise ratios, the capacity formula can be approximated:
      
===Bandwidth-limited case===
When the SNR is large (<math>S/N \gg 1</math>), the logarithm is approximated by

<math>\log_2\left(1 + \frac{S}{N}\right) \approx \log_2\frac{S}{N},</math>

giving the bandwidth-limited approximation <math>C \approx B\log_2\frac{S}{N}</math>.

===Power-limited case===

When the SNR is small (<math>S/N \ll 1</math>), applying the approximation <math>\log_2(1+x) \approx 1.44\,x</math> gives
<math>C \approx 1.44\cdot B \cdot \frac{S}{N}</math>
In this low-SNR approximation, capacity is independent of bandwidth if the noise is white, of spectral density <math>N_0</math> watts per hertz, in which case the total noise power is <math>N = B \cdot N_0</math>:
      
<math>C \approx 1.44 \cdot \frac{S}{N_0}</math>
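As a numerical sketch (added here with hypothetical values, not an example from the original article), the low-SNR approximation can be checked against the exact capacity formula; once the total noise scales as <math>N = B\cdot N_0</math>, the result barely depends on the bandwidth:

```python
import math

def exact_capacity(b_hz: float, s_watts: float, n0: float) -> float:
    """Exact capacity C = B * log2(1 + S/(B*N0)) for white noise of density N0."""
    return b_hz * math.log2(1 + s_watts / (b_hz * n0))

def low_snr_capacity(s_watts: float, n0: float) -> float:
    """Low-SNR approximation C ~= 1.44 * S/N0, independent of bandwidth."""
    return 1.44 * s_watts / n0

# Hypothetical power-limited link: S = 1e-3 W, N0 = 1e-6 W/Hz
approx = low_snr_capacity(1e-3, 1e-6)       # about 1440 bit/s
wide = exact_capacity(1e6, 1e-3, 1e-6)      # S/N = 1e-3 here, so near the approximation
```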