{{short description|Establishes the limits to possible data compression}}
In information theory, the '''source coding theorem''' (Shannon 1948)<ref name="Shannon"/> informally states that (MacKay 2003, pg. 81,<ref name="MacKay"/> Cover 2006, Chapter 5<ref name="Cover"/>):

<blockquote>{{mvar|N}} [[Independent and identically distributed random variables|i.i.d.]] random variables each with [[Entropy (information theory)|entropy]] {{math|''H''(''X'')}} can be compressed into more than {{math|''N&thinsp;H''(''X'')}} [[bit]]s with negligible risk of information loss, as {{math|''N'' → ∞}}; but conversely, if they are compressed into fewer than {{math|''N&thinsp;H''(''X'')}} bits it is virtually certain that information will be lost.</blockquote>
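
For example, with a hypothetical four-symbol source (the distribution below is made up for illustration), a short Python sketch computes {{math|''H''(''X'')}} and the resulting {{math|''N&thinsp;H''(''X'')}} compression limit:

<syntaxhighlight lang="python">
import math

# Hypothetical i.i.d. source over a four-symbol alphabet.
pmf = {"a": 0.5, "b": 0.25, "c": 0.125, "d": 0.125}

# Shannon entropy H(X) in bits per symbol.
H = -sum(p * math.log2(p) for p in pmf.values())

N = 10**6  # number of i.i.d. source symbols
print(f"H(X) = {H} bits/symbol")                       # 1.75 for this pmf
print(f"Compression limit N*H(X) = {N * H:,.0f} bits")
</syntaxhighlight>
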
=== Source coding theorem for symbol codes ===

Let {{math|Σ<sub>1</sub>, Σ<sub>2</sub>}} denote two finite alphabets and let {{math|Σ{{su|b=1|p=∗}}}} and {{math|Σ{{su|b=2|p=∗}}}} denote the [[Kleene star|set of all finite words]] from those alphabets (respectively).

Suppose that {{mvar|X}} is a random variable taking values in {{math|Σ<sub>1</sub>}} and let {{math|&thinsp;''f''&thinsp;}} be a [[Variable-length code#Uniquely decodable codes|uniquely decodable]] code from {{math|Σ{{su|b=1|p=∗}}}} to {{math|Σ{{su|b=2|p=∗}}}} where {{math|{{!}}Σ<sub>2</sub>{{!}} {{=}} ''a''}}. Let {{mvar|S}} denote the random variable given by the length of codeword {{math|&thinsp;''f''&thinsp;(''X'')}}.

If {{math|&thinsp;''f''&thinsp;}} is optimal in the sense that it has the minimal expected word length for {{mvar|X}}, then (Shannon 1948):

:<math> \frac{H(X)}{\log_2 a} \leq \mathbb{E}[S] < \frac{H(X)}{\log_2 a} +1 </math>

where <math>\mathbb{E}</math> denotes the [[expected value]] operator.
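
For a binary code alphabet ({{math|''a'' {{=}} 2}}) an optimal {{math|&thinsp;''f''&thinsp;}} can be constructed with Huffman coding; the Python sketch below (the four-symbol distribution is made up for illustration) builds optimal codeword lengths and checks the bound numerically:

<syntaxhighlight lang="python">
import heapq
import itertools
import math

def huffman_lengths(pmf):
    """Optimal binary codeword lengths for a probability mass function."""
    tiebreak = itertools.count()  # keeps heap entries comparable
    heap = [(p, next(tiebreak), {sym: 0}) for sym, p in pmf.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        p1, _, left = heapq.heappop(heap)
        p2, _, right = heapq.heappop(heap)
        # Merging two subtrees pushes every leaf one level deeper.
        merged = {s: depth + 1 for s, depth in {**left, **right}.items()}
        heapq.heappush(heap, (p1 + p2, next(tiebreak), merged))
    return heap[0][2]

pmf = {"a": 0.4, "b": 0.3, "c": 0.2, "d": 0.1}     # hypothetical source
H = -sum(p * math.log2(p) for p in pmf.values())    # entropy H(X), bits
lengths = huffman_lengths(pmf)
ES = sum(pmf[s] * lengths[s] for s in pmf)          # expected codeword length E[S]
assert H <= ES < H + 1                              # the bound above, with a = 2
print(f"H(X) = {H:.4f} bits, E[S] = {ES:.4f} bits")
</syntaxhighlight>
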
== Proof: Source coding theorem ==

Given {{mvar|X}} is an [[independent identically-distributed random variables|i.i.d.]] source, its [[time series]] {{math|''X''<sub>1</sub>, ..., ''X<sub>n</sub>''}} is i.i.d. with [[Entropy_(information_theory)|entropy]] {{math|''H''(''X'')}} in the discrete-valued case and [[differential entropy]] in the continuous-valued case. The source coding theorem states that for any {{math|''ε'' > 0}}, i.e. for any [[information theory#Rate|rate]] {{math|''H''(''X'') + ''ε''}} larger than the [[entropy]] of the source, there is large enough {{mvar|n}} and an encoder that takes {{mvar|n}} i.i.d. repetitions of the source, {{math|''X''<sup>1:''n''</sup>}}, and maps it to {{math|''n''(''H''(''X'') + ''ε'')}} binary bits such that the source symbols {{math|''X''<sup>1:''n''</sup>}} are recoverable from the binary bits with probability of at least {{math|1 − ''ε''}}.

'''Proof of Achievability.''' Fix some {{math|''ε'' > 0}}, and let

:<math>p(x_1, \ldots, x_n) = \Pr \left[X_1 = x_1, \cdots, X_n = x_n \right].</math>

The typical set, {{math|''A''{{su|b=''n''|p=''ε''}}}}, is defined as follows:

:<math>A_n^\varepsilon =\left\{(x_1, \cdots, x_n) \ : \ \left|-\frac{1}{n} \log p(x_1, \cdots, x_n) - H_n(X)\right| < \varepsilon \right\}.</math>

The [[Asymptotic equipartition property#AEP for discrete-time i.i.d. sources|Asymptotic Equipartition Property]] (AEP) shows that for large enough {{mvar|n}}, the probability that a sequence generated by the source lies in the typical set, {{math|''A''{{su|b=''n''|p=''ε''}}}}, as defined approaches one. In particular, for sufficiently large {{mvar|n}}, <math>P((X_1,X_2,\cdots,X_n) \in A_n^\varepsilon)</math> can be made arbitrarily close to 1, and specifically, greater than <math>1-\varepsilon</math> (see the [[Asymptotic equipartition property#AEP for discrete-time i.i.d. sources|AEP]] for a proof).
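
This concentration is easy to observe numerically. The Monte Carlo sketch below (source distribution, {{mvar|n}}, {{mvar|ε}} and the number of trials are arbitrary illustrative choices) estimates <math>P((X_1,X_2,\cdots,X_n) \in A_n^\varepsilon)</math>:

<syntaxhighlight lang="python">
import math
import random

random.seed(0)
pmf = {"a": 0.5, "b": 0.25, "c": 0.125, "d": 0.125}  # hypothetical source
symbols, probs = zip(*pmf.items())
H = -sum(p * math.log2(p) for p in probs)            # H(X) = 1.75 bits

def in_typical_set(seq, eps):
    """Check |-(1/n) log2 p(seq) - H(X)| < eps."""
    log_p = sum(math.log2(pmf[s]) for s in seq)
    return abs(-log_p / len(seq) - H) < eps

n, eps, trials = 2000, 0.05, 1000
hits = sum(
    in_typical_set(random.choices(symbols, probs, k=n), eps)
    for _ in range(trials)
)
print(f"Estimated P(X^n in A_n^eps) = {hits / trials:.3f}")  # -> 1 as n grows
</syntaxhighlight>
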

The definition of typical sets implies that those sequences that lie in the typical set satisfy:

:<math>2^{-n(H(X)+\varepsilon)} \leq p \left (x_1, \cdots, x_n \right ) \leq 2^{-n(H(X)-\varepsilon)}</math>

Note that:

*The probability of a sequence <math>(X_1,X_2,\cdots X_n)</math> being drawn from {{math|''A''{{su|b=''n''|p=''ε''}}}} is greater than {{math|1 − ''ε''}}.
*<math>\left| A_n^\varepsilon \right| \leq 2^{n(H(X)+\varepsilon)}</math>, which follows from the left hand side (lower bound) for <math> p(x_1,x_2,\cdots x_n)</math>.
*<math>\left| A_n^\varepsilon \right| \geq (1-\varepsilon) 2^{n(H(X)-\varepsilon)}</math>, which follows from the upper bound for <math> p(x_1,x_2,\cdots x_n)</math> and the lower bound on the total probability of the whole set {{math|''A''{{su|b=''n''|p=''ε''}}}}.

Since <math>\left| A_n^\varepsilon \right| \leq 2^{n(H(X)+\varepsilon)}</math>, <math>n(H(X)+\varepsilon)</math> bits are enough to point to any string in this set.

The encoding algorithm: The encoder checks if the input sequence lies within the typical set; if yes, it outputs the index of the input sequence within the typical set; if not, the encoder outputs an arbitrary {{math|''n''(''H''(''X'') + ''ε'')}} digit number. As long as the input sequence lies within the typical set (with probability at least {{math|1 − ''ε''}}), the encoder doesn't make any error. So, the probability of error of the encoder is bounded above by {{mvar|ε}}.
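
A toy Python version of this encoder, enumerating the typical set exhaustively (so only feasible for very small {{mvar|n}}; the Bernoulli source parameters are made up), makes the construction concrete:

<syntaxhighlight lang="python">
import math
from itertools import product

pmf = {0: 0.9, 1: 0.1}                      # hypothetical binary source
H = -sum(p * math.log2(p) for p in pmf.values())
n, eps = 10, 0.3

def is_typical(seq):
    log_p = sum(math.log2(pmf[s]) for s in seq)
    return abs(-log_p / n - H) < eps

# Enumerate the typical set once (exponential in n: toy sizes only).
typical = sorted(seq for seq in product(pmf, repeat=n) if is_typical(seq))
index = {seq: i for i, seq in enumerate(typical)}
bits = math.ceil(n * (H + eps))             # n(H(X) + eps) bits per block

def encode(seq):
    # Typical sequences get their index; anything else gets an
    # arbitrary codeword (this is the encoder's error event).
    return format(index.get(seq, 0), f"0{bits}b")

def decode(codeword):
    return typical[int(codeword, 2)]

seq = (0, 0, 1, 0, 0, 0, 0, 0, 0, 0)
assert decode(encode(seq)) == seq           # lossless whenever seq is typical
print(len(typical), "typical sequences,", bits, "bits per block")
</syntaxhighlight>
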

'''Proof of Converse.''' The converse is proved by showing that any set of size smaller than {{math|''A''{{su|b=''n''|p=''ε''}}}} (in the sense of exponent) would cover a set of probability bounded away from {{math|1}}.
== Proof: Source coding theorem for symbol codes ==

For {{math|1 ≤ ''i'' ≤ ''n''}} let {{math|''s<sub>i</sub>''}} denote the word length of each possible {{math|''x<sub>i</sub>''}}. Define <math>q_i = a^{-s_i}/C</math>, where {{mvar|C}} is chosen so that {{math|''q''<sub>1</sub> + ... + ''q<sub>n</sub>'' {{=}} 1}}. Then

:<math>\begin{align}
H(X) &=    -\sum_{i=1}^n p_i \log_2 p_i \\
     &\leq -\sum_{i=1}^n p_i \log_2 q_i \\
     &=    -\sum_{i=1}^n p_i \log_2 a^{-s_i} + \sum_{i=1}^n p_i \log_2 C \\
     &=    -\sum_{i=1}^n p_i \log_2 a^{-s_i} + \log_2 C \\
     &\leq -\sum_{i=1}^n - s_i p_i \log_2 a \\
     &\leq  \mathbb{E} S \log_2 a \\
\end{align}</math>

where the second line follows from [[Gibbs' inequality]] and the fifth line follows from [[Kraft's inequality]]:

:<math>C = \sum_{i=1}^n a^{-s_i} \leq 1</math>

so {{math|log ''C'' ≤ 0}}.

For the second inequality we may set

:<math>s_i = \lceil - \log_a p_i \rceil </math>

so that

:<math> - \log_a p_i \leq s_i < -\log_a p_i + 1 </math>

and so

:<math> a^{-s_i} \leq p_i</math>

and

:<math> \sum  a^{-s_i} \leq \sum p_i = 1</math>

and so by Kraft's inequality there exists a prefix-free code having those word lengths. Thus the minimal {{mvar|S}} satisfies

:<math>\begin{align}
\mathbb{E} S & = \sum p_i s_i \\
             & < \sum p_i \left( -\log_a p_i +1 \right) \\
             & = \sum - p_i \frac{\log_2 p_i}{\log_2 a} +1 \\
             & = \frac{H(X)}{\log_2 a} +1 \\
\end{align}</math>
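
The choice {{math|''s<sub>i</sub>'' {{=}} ⌈−log<sub>''a''</sub>&thinsp;''p<sub>i</sub>''⌉}} made above is easy to check numerically; a minimal Python sketch, with a made-up distribution and {{math|''a'' {{=}} 2}}:

<syntaxhighlight lang="python">
import math

a = 2
pmf = [0.35, 0.25, 0.2, 0.12, 0.08]           # hypothetical p_i

# Shannon code lengths: s_i = ceil(-log_a p_i)
s = [math.ceil(-math.log(p, a)) for p in pmf]

kraft = sum(a**-si for si in s)               # Kraft sum, must be <= 1
H = -sum(p * math.log2(p) for p in pmf)       # entropy H(X), bits
ES = sum(p * si for p, si in zip(pmf, s))     # expected length of this code

assert kraft <= 1                             # a prefix-free code exists
assert H / math.log2(a) <= ES < H / math.log2(a) + 1
print(f"lengths = {s}, Kraft sum = {kraft:.3f}, E[S] = {ES:.2f}, H(X) = {H:.2f}")
</syntaxhighlight>
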
==Extension to non-stationary independent sources ==

=== Fixed Rate lossless source coding for discrete time non-stationary independent sources===

Define the typical set {{math|''A''{{su|b=''n''|p=''ε''}}}} as:

:<math>A_n^\varepsilon = \left \{x_1^n \ : \ \left|-\frac{1}{n} \log p \left (X_1, \cdots, X_n \right ) - \overline{H_n}(X)\right| < \varepsilon \right \}.</math>

Then, for given {{math|''δ'' > 0}}, for {{mvar|n}} large enough, {{math|Pr(''A''{{su|b=''n''|p=''ε''}}) > 1 − ''δ''}}. Now we just encode the sequences in the typical set, and usual methods in source coding show that the cardinality of this set is smaller than <math>2^{n(\overline{H_n}(X)+\varepsilon)}</math>. Thus, on average, {{math|{{overline|''H<sub>n</sub>''}}(''X'') + ''ε''}} bits suffice for encoding with probability greater than {{math|1 − ''δ''}}, where {{mvar|ε}} and {{mvar|δ}} can be made arbitrarily small by making {{mvar|n}} larger.
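
A minimal Python sketch, assuming independent Bernoulli symbols with made-up time-varying parameters, illustrates the concentration of <math>-\tfrac{1}{n}\log p</math> around <math>\overline{H_n}(X)</math> that this construction relies on:

<syntaxhighlight lang="python">
import math
import random

random.seed(1)
n = 3000
# Independent but non-identically distributed Bernoulli(p_i) symbols.
p = [0.1 + 0.8 * i / (n - 1) for i in range(n)]    # made-up, time-varying

def h2(q):
    """Binary entropy in bits."""
    return -q * math.log2(q) - (1 - q) * math.log2(1 - q)

H_bar = sum(h2(q) for q in p) / n                  # average entropy

x = [1 if random.random() < q else 0 for q in p]   # one realization
log_p = sum(math.log2(q if xi else 1 - q) for q, xi in zip(p, x))
print(f"H_bar = {H_bar:.4f}, -1/n log2 p(x) = {-log_p / n:.4f}")
</syntaxhighlight>
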
==See also==

* [[Channel coding]]
* [[noisy channel coding theorem|Noisy Channel Coding Theorem]]
* [[Error exponent]]
* [[Asymptotic equipartition property|Asymptotic Equipartition Property]] (AEP)
==References==

{{Reflist|refs=
<ref name="Shannon">[[C.E. Shannon]], "[http://plan9.bell-labs.com/cm/ms/what/shannonday/shannon1948.pdf A Mathematical Theory of Communication]", ''[[Bell System Technical Journal]]'', vol. 27, pp.&nbsp;379–423, 623–656, July, October, 1948</ref>
<ref name="MacKay">David J. C. MacKay. ''[http://www.inference.phy.cam.ac.uk/mackay/itila/book.html Information Theory, Inference, and Learning Algorithms]'' Cambridge: Cambridge University Press, 2003. {{ISBN|0-521-64298-1}}</ref>
<ref name="Cover">{{cite book | last = Cover | first = Thomas M. | title = Elements of Information Theory | chapter = Chapter 5: Data Compression | year = 2006 | publisher = John Wiley & Sons | isbn = 0-471-24195-4 }}</ref>}}

[[Category:Information theory]]
[[Category:Coding theory]]
[[Category:Data compression]]
[[Category:Presentation layer protocols]]
[[Category:Mathematical theorems in theoretical computer science]]
[[Category:Articles containing proofs]]