− | In information theory, the source coding theorem (Shannon, 1948)<ref name="Shannon"/> informally states that (MacKay 2003, pg. 81,<ref name="MacKay"/> Cover 2006, Chapter 5<ref name="Cover"/>): | + | In information theory, the source coding theorem (Shannon, 1948)<ref name="Shannon">C. E. Shannon, "A Mathematical Theory of Communication", Bell System Technical Journal, vol. 27, pp. 379–423, 623–656, July, October 1948.</ref> informally states that (MacKay 2003, pg. 81,<ref name="MacKay">David J. C. MacKay. Information Theory, Inference, and Learning Algorithms. Cambridge: Cambridge University Press, 2003. ISBN 0-521-64298-1.</ref> Cover 2006, Chapter 5<ref name="Cover">Cover, Thomas M. (2006). "Chapter 5: Data Compression". Elements of Information Theory. John Wiley & Sons. ISBN 0-471-24195-4.</ref>): |
| <blockquote>{{mvar|N}} [[independent and identically distributed random variables|i.i.d.]] random variables each with [[entropy (information theory)|entropy]] {{math|''H''(''X'')}} can be compressed into more than {{math|''N H''(''X'')}} [[bit]]s with negligible risk of information loss, as {{math|''N'' → ∞}}; but conversely, if they are compressed into fewer than {{math|''N H''(''X'')}} bits, it is virtually certain that information will be lost.</blockquote> | | <blockquote>{{mvar|N}} [[independent and identically distributed random variables|i.i.d.]] random variables each with [[entropy (information theory)|entropy]] {{math|''H''(''X'')}} can be compressed into more than {{math|''N H''(''X'')}} [[bit]]s with negligible risk of information loss, as {{math|''N'' → ∞}}; but conversely, if they are compressed into fewer than {{math|''N H''(''X'')}} bits, it is virtually certain that information will be lost.</blockquote> |
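To make the bound concrete, the following is a minimal numerical sketch (an illustration added here, not drawn from the cited sources): it generates {{mvar|N}} i.i.d. Bernoulli(0.1) symbols, computes the entropy limit {{math|''N H''(''X'')}}, and compares it with the output size of Python's general-purpose zlib compressor, which is assumed here only as a convenient stand-in for a near-optimal source code.

<syntaxhighlight lang="python">
import math
import random
import zlib

# Illustration only: zlib is a stand-in compressor, not the optimal code the
# theorem promises; its output should exceed N*H(X) but stay well below the
# raw encoding of one byte per symbol.
p, N = 0.1, 100_000
H = -(p * math.log2(p) + (1 - p) * math.log2(1 - p))  # entropy H(X) of a Bernoulli(p) symbol

random.seed(0)
symbols = bytes(random.random() < p for _ in range(N))  # one byte per i.i.d. symbol
compressed_bits = len(zlib.compress(symbols, 9)) * 8

print(f"entropy bound N*H(X): {N * H:,.0f} bits")
print(f"zlib output:          {compressed_bits:,} bits")
print(f"raw size:             {8 * N:,} bits")
</syntaxhighlight>

Per the theorem, no lossless scheme can do better than about {{math|''N H''(''X'')}} bits on average as {{math|''N'' → ∞}}; the gap between a practical compressor's output and that bound is overhead, not room for further lossless savings.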