{{#seo:
|keywords=信息论,信息时代,正规科学,控制论,计算机科学
|description=信息论,信息时代,正规科学,控制论,计算机科学
}}

'''<font color="#ff8000">信息论 Information Theory</font>'''研究的是信息的量化、存储与传播。信息论最初是由'''<font color="#ff8000">克劳德·香农 Claude Shannon</font>'''在1948年的一篇题为'''<font color="#ff8000">《一种通信的数学理论(A Mathematical Theory of Communication)》</font>'''的里程碑式论文中提出的,其目的是找到信号处理和通信操作(如数据压缩)的基本限制。信息论对于旅行者号深空探测任务的成功、光盘的发明、移动电话的可行性、互联网的发展、语言学和人类感知的研究、对黑洞的理解以及许多其他领域的研究都是至关重要的。

该领域是数学、统计学、计算机科学、物理学、神经生物学、信息工程和电气工程的交叉学科。这一理论也在其他领域得到了应用,比如推论统计学、自然语言处理、密码学、神经生物学<ref name="Spikes">{{cite book|title=Spikes: Exploring the Neural Code|author1=F. Rieke|author2=D. Warland|author3=R Ruyter van Steveninck|author4=W Bialek|publisher=The MIT press|year=1997|isbn=978-0262681087}}</ref>、人类视觉<ref>{{Cite journal|last1=Delgado-Bonal|first1=Alfonso|last2=Martín-Torres|first2=Javier|date=2016-11-03|title=Human vision is determined based on information theory|journal=Scientific Reports|language=En|volume=6|issue=1|pages=36038|bibcode=2016NatSR...636038D|doi=10.1038/srep36038|issn=2045-2322|pmc=5093619|pmid=27808236}}</ref>、分子编码的进化<ref>{{cite journal|last1=cf|last2=Huelsenbeck|first2=J. P.|last3=Ronquist|first3=F.|last4=Nielsen|first4=R.|last5=Bollback|first5=J. P.|year=2001|title=Bayesian inference of phylogeny and its impact on evolutionary biology|url=|journal=Science|volume=294|issue=5550|pages=2310–2314|bibcode=2001Sci...294.2310H|doi=10.1126/science.1065889|pmid=11743192|s2cid=2138288}}</ref>和功能(生物信息学)、统计学中的模型选择<ref>Burnham, K. P. and Anderson D. R. (2002) ''Model Selection and Multimodel Inference: A Practical Information-Theoretic Approach, Second Edition'' (Springer Science, New York) {{ISBN|978-0-387-95364-9}}.</ref>、热物理学<ref>{{cite journal|last1=Jaynes|first1=E. T.|year=1957|title=Information Theory and Statistical Mechanics|url=http://bayes.wustl.edu/|journal=Phys. Rev.|volume=106|issue=4|page=620|bibcode=1957PhRv..106..620J|doi=10.1103/physrev.106.620}}</ref>、量子计算、语言学、剽窃检测<ref>{{cite journal|last1=Bennett|first1=Charles H.|last2=Li|first2=Ming|last3=Ma|first3=Bin|year=2003|title=Chain Letters and Evolutionary Histories|url=http://sciamdigital.com/index.cfm?fa=Products.ViewIssuePreview&ARTICLEID_CHAR=08B64096-0772-4904-9D48227D5C9FAC75|journal=Scientific American|volume=288|issue=6|pages=76–81|bibcode=2003SciAm.288f..76B|doi=10.1038/scientificamerican0603-76|pmid=12764940|access-date=2008-03-11|archive-url=https://web.archive.org/web/20071007041539/http://www.sciamdigital.com/index.cfm?fa=Products.ViewIssuePreview&ARTICLEID_CHAR=08B64096-0772-4904-9D48227D5C9FAC75|archive-date=2007-10-07|url-status=dead}}</ref>、模式识别和异常检测<ref>{{Cite web|url=http://aicanderson2.home.comcast.net/~aicanderson2/home.pdf|title=Some background on why people in the empirical sciences may want to better understand the information-theoretic methods|author=David R. Anderson|date=November 1, 2003|archiveurl=https://web.archive.org/web/20110723045720/http://aicanderson2.home.comcast.net/~aicanderson2/home.pdf|archivedate=July 23, 2011|url-status=dead|accessdate=2010-06-23}}</ref>。

信息论的重要分支包括信源编码、算法复杂性理论、算法信息论、信息理论安全性、灰色系统理论和信息度量。

信息论基本课题的应用包括无损数据压缩(例如ZIP文件)、有损数据压缩(例如MP3和JPEG格式),以及信道编码(例如用于DSL)。信息论还被用于信息检索、情报收集、赌博,甚至音乐创作。

信息论中的一个关键度量是熵。熵量化了一个随机变量的值或者一个随机过程的结果所包含的不确定性。例如,识别一次公平抛硬币的结果(有两个同样可能的结果)所提供的信息(较低的熵)少于识别抛一次骰子的结果(有六个同样可能的结果)。信息论中的其他一些重要指标有:互信息、信道容量、误差指数和相对熵。
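
下面用一段简短的 Python 代码(仅为示意,只假设使用标准库 math)按定义验证这一例子:公平硬币的熵为1比特,公平骰子的熵约为2.58比特,等可能结果越多,不确定性越大。

<syntaxhighlight lang="python">
import math

def entropy_bits(probs):
    """按定义计算离散分布的香农熵(单位:比特)。"""
    return -sum(p * math.log2(p) for p in probs if p > 0)

coin = [1/2, 1/2]          # 公平硬币:两个等可能结果
die = [1/6] * 6            # 公平骰子:六个等可能结果

print(entropy_bits(coin))  # 1.0 比特
print(entropy_bits(die))   # 约 2.585 比特,不确定性更大
</syntaxhighlight>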

==概览==

信息论主要研究信息的传递、处理、提取和利用。抽象地说,信息可以被看作是对不确定性的消解。1948年,克劳德·香农在论文《一种通信的数学理论》中把这个抽象概念具体化:在这篇论文中,“信息”被看作一组可能的消息,目标是把这些消息通过带有噪声的信道发送出去,并让接收者在信道噪声的影响下仍能以较低的错误概率重构这些消息。香农的主要结论,即有噪信道编码定理表明:在信道被多次使用的极限情形下,渐近可达的信息传输速率等于信道容量,而信道容量仅取决于消息所经过的信道本身的统计特性。<ref name="Spikes" />(译注:当信道的信息传输率不超过信道容量时,采用合适的编码方法可以实现任意高的传输可靠性,但若信息传输率超过了信道容量,就不可能实现可靠的传输。)
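
作为“信道容量仅取决于信道统计特性”的一个简单示意(并非香农论文中的内容,只是本文补充的假设性例子),下面的 Python 片段计算二元对称信道的容量 <math>C = 1 - H_\mathrm{b}(p)</math>,其中 ''p'' 是比特翻转概率:

<syntaxhighlight lang="python">
import math

def binary_entropy(p):
    """二值熵函数 H_b(p),单位为比特。"""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def bsc_capacity(p):
    """二元对称信道容量 C = 1 - H_b(p),只依赖于翻转概率 p。"""
    return 1.0 - binary_entropy(p)

for p in (0.0, 0.1, 0.5):
    print(p, bsc_capacity(p))  # p=0 时 C=1;p=0.5 时 C=0,信道完全无用
</syntaxhighlight>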

信息论与一系列纯科学和应用科学密切相关。在过去半个多世纪里,世界各地以各种名目对这些学科进行了研究,并将其转化为工程实践,例如自适应系统、预期系统、人工智能、复杂系统、复杂性科学、控制论、信息学、机器学习,以及各类系统科学。信息论是一个广博而深邃的数学理论,也具有同样广泛而深入的应用,其中编码理论是至关重要的领域。

编码理论研究的是寻找被称为“编码”的具体方法,以提高效率,并把有噪信道上数据通信的错误率降低到接近信道容量的水平。这些编码可大致分为数据压缩(信源编码)和纠错(信道编码)两类技术。对于后者,在香农的工作证明了这种方法在理论上可行之后,人们又花了很多年才找到真正达到理论极限的方法。

第三类信息论编码是密码算法(包括编码和密码)。编码理论和信息论的概念、方法和结果在密码学和密码分析中得到了广泛应用。历史上的一个应用实例可参见“班 ban(单位)”。

==历史背景==

1948年7月和10月,克劳德·E·香农在《贝尔系统技术期刊》上发表了经典论文《一种通信的数学理论》,这就是建立信息论学科并立即引起全世界关注的里程碑事件。

在这篇论文之前,贝尔实验室已经发展出一些有限的信息论思想,它们都隐含地假设所有事件等概率发生。Harry Nyquist 在1924年发表的论文《影响电报速度的某些因素(Certain Factors Affecting Telegraph Speed)》中包含一个理论章节,量化了“情报”以及通信系统传输情报的“线路速度”,并给出了关系式 {{math|1=''W'' = ''K'' log ''m''}}(可与玻尔兹曼常数的形式相对照),其中 ''W'' 是情报传输的速度,''m'' 是每个时间步可供选择的不同电压电平数,''K'' 是常数。Ralph Hartley 在1928年发表的论文《信息的传输(Transmission of Information)》中,把“信息(information)”一词用作一个可测量的量,反映接收者将某一符号序列与其他任何序列区分开来的能力,从而将信息量化为 {{math|1=''H'' = log ''S''<sup>''n''</sup> = ''n'' log ''S''}},其中 ''S'' 是可用符号的数量,''n'' 是一次传输中符号的数量。因此信息的单位是十进制数字,为了纪念 Hartley,这个单位后来有时被称为哈特利(hartley),作为信息的单位、尺度或度量。1940年,阿兰·图灵在对二战时期德国“恩尼格玛(Enigma)”密码破译工作的统计分析中使用了类似的思想。
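
下面用一小段 Python 代码(其中的消息长度和字母表大小均为假设的示例数值)演示 Hartley 的度量 {{math|1=''H'' = ''n'' log ''S''}}:

<syntaxhighlight lang="python">
import math

def hartley_information(n_symbols, alphabet_size, base=10):
    """Hartley 度量 H = n·log(S):长度为 n、字母表大小为 S 的序列的信息量。
    base=10 时单位为十进制数字(即后来所称的哈特利)。"""
    return n_symbols * math.log(alphabet_size, base)

# 假设一条电报消息由 20 个符号组成,每个符号取自 26 个字母
print(hartley_information(20, 26))          # ≈ 28.3 哈特利(十进制数字)
print(hartley_information(20, 26, base=2))  # ≈ 94.0 比特(换用以 2 为底)
</syntaxhighlight>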

信息论背后的许多数学工具(涉及不同概率的事件)都是由路德维希·玻尔兹曼和约西亚·威拉德·吉布斯为热力学领域发展起来的。信息论的熵与热力学熵之间的联系(其中包括 Rolf Landauer 在20世纪60年代做出的重要贡献),在“热力学与信息论中的熵”这一主题下有专门探讨。

香农的这篇革命性的、开创性的论文,早在1944年底便已基本在贝尔实验室完成。在这篇论文中,香农把通信视为信息论所依据的一个统计过程,首次提出了通信的定性与定量模型。论文开篇便提出了以下论断:

:“通信的基本问题,是在一点上精确地或近似地复现在另一点上所选取的消息。”

与此相关的一些想法包括:

==信息的度量==

信息论建立在概率论和统计学的基础上,经常涉及对与随机变量相关的分布的信息量的度量。重要的信息量包括:熵(对单个随机变量所含信息的度量)和互信息(对两个随机变量之间共有信息的度量)。前者是随机变量概率分布的属性,它给出了对服从该分布的独立样本所生成数据进行可靠压缩的速率极限;后者是两个随机变量联合分布的属性,当信道的统计特性由该联合分布确定时,它是长块长度极限下通过有噪信道进行可靠通信的最大速率。

在下列公式中,对数底数的选择决定了信息熵的单位。信息的常见单位是比特(基于二进制对数)。其他单位包括 nat(基于自然对数)和十进制数字(基于常用对数)。

下文中,按惯例将 {{math|1=''p'' = 0}} 时的表达式{{math|''p'' log ''p''}}的值视为等于零,因为<math>\lim_{p \rightarrow 0+} p \log p = 0</math>适用于任何对数底。
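
下面的 Python 片段(仅作示意)按上述约定实现了可选对数底的熵计算:{{math|1=''p'' = 0}} 的项按 0 处理,不同的底对应比特、奈特等不同单位:

<syntaxhighlight lang="python">
import math

def entropy(probs, base=2):
    """离散分布的熵;对数底 base 决定单位(2→比特,e→奈特,10→哈特利)。
    按惯例,p=0 的项取 0·log 0 = 0,因此直接跳过。"""
    return -sum(p * math.log(p, base) for p in probs if p > 0)

p = [0.5, 0.25, 0.25, 0.0]           # 含零概率项的分布
h_bits = entropy(p, base=2)          # 1.5 比特
h_nats = entropy(p, base=math.e)     # 约 1.0397 奈特
print(h_bits, h_nats, h_nats / math.log(2))  # 最后一项换算回比特,应等于 1.5
</syntaxhighlight>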

===信源的熵===

基于每个待传输的源符号的概率质量函数,'''<font color="#ff8000">香农熵 Shannon Entropy</font>''' {{math|''H''}}(单位为比特/符号)由下式给出:
:<math>H = - \sum_{i} p_i \log_2 (p_i)</math>

其中{{math|''p<sub>i</sub>''}}是源符号第{{math|''i''}}个可能取值出现的概率。由于使用以2为底的对数,该式给出的熵以“比特”(每符号)为单位;为了纪念香农,这种以2为底度量的熵的单位有时也被称为“香农(shannon)”。熵也经常用自然对数(以[[E (mathematical constant)|{{mvar|e}}]]为底,{{mvar|e}}是欧拉数)来计算,此时熵以奈特(nat)/符号为单位,有时还能避免在公式中引入额外常数,从而简化分析。其他底数也是可行的,但不常用:例如,以{{math|1=2<sup>8</sup> = 256}}为底的对数给出以字节/符号为单位的度量,以10为底的对数给出以十进制数字(或哈特利)/符号为单位的度量。

直观地看,离散型随机变量{{math|''X''}}的熵{{math|''H<sub>X</sub>''}}度量的是:在仅知道{{math|''X''}}的分布时,与{{math|''X''}}的取值相关联的不确定性的大小。

当一个信源发出由{{math|''N''}}个相互[[独立同分布]](iid)的符号组成的序列时,其熵为{{math|''N'' ⋅ ''H''}}比特(每条含{{math|''N''}}个符号的消息)。
如果源数据符号是同分布但不独立的,则长度为{{math|''N''}}的消息的熵将小于{{math|''N'' ⋅ ''H''}}。

[[File:Binary entropy plot.svg|thumbnail|right|200px|伯努利试验的熵作为成功概率的函数,通常称为{{em|二值熵函数}}{{math|''H''<sub>b</sub>(''p'')}}。当两个可能结果等概率出现时(如抛一枚均匀硬币),每次试验的熵取得最大值1比特。]]

如果一个人发送了1000比特(0和1),而接收者在传输之前就已经知道这串比特序列中每一位的取值,那么显然这次通信没有传递任何信息(译注:如果你要告诉我一个我已经知道的消息,那么本次通信没有传递任何信息)。但是,如果每个比特独立且等可能地为0或1,则本次通信传输了1000香农的信息(通常称为“比特”)。在这两个极端之间,信息可以按以下方式量化。如果𝕏是{{math|''X''}}所有可能消息{{math|{''x''<sub>1</sub>, ..., ''x''<sub>''n''</sub>}}}的集合,且{{math|''p''(''x'')}}是某个<math>x \in \mathbb X</math>出现的概率,那么{{math|''X''}}的熵{{math|''H''}}定义如下:<ref name = Reza>{{cite book | title = An Introduction to Information Theory | author = Fazlollah M. Reza | publisher = Dover Publications, Inc., New York | origyear = 1961| year = 1994 | isbn = 0-486-68210-2 | url = https://books.google.com/books?id=RtzpRAiX6OgC&pg=PA8&dq=intitle:%22An+Introduction+to+Information+Theory%22++%22entropy+of+a+simple+source%22}}</ref>
:<math> H(X) = \mathbb{E}_{X} [I(x)] = -\sum_{x \in \mathbb{X}} p(x) \log p(x).</math>

(其中{{math|''I''(''x'')}}是[[自信息]],即单条消息的熵贡献,{{math|𝔼<sub>''X''</sub>}}表示期望值。)熵的一个性质是:当且仅当消息空间中所有消息等概率出现,即{{math|1=''p''(''x'') = 1/''n''}}(此时最难预测)时,熵取得最大值{{math|1=''H''(''X'') = log ''n''}}。
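
下面的示意代码(消息空间大小 {{math|1=''n'' = 4}} 为假设)把熵写成自信息 {{math|1=''I''(''x'') = -log ''p''(''x'')}} 的期望,并验证等概率分布使熵达到最大值 {{math|log ''n''}}:

<syntaxhighlight lang="python">
import math

def self_information(p):
    """单条消息的自信息 I(x) = -log2 p(x),单位为比特。"""
    return -math.log2(p)

def entropy(probs):
    """熵 H(X) = E[I(x)] = Σ p(x)·I(x)。"""
    return sum(p * self_information(p) for p in probs if p > 0)

uniform = [0.25] * 4              # 等概率:最难预测
skewed = [0.7, 0.1, 0.1, 0.1]     # 偏斜分布:更容易预测
print(entropy(uniform), math.log2(4))  # 两者都是 2.0,即 H = log n
print(entropy(skewed))                 # 约 1.357,小于 2.0
</syntaxhighlight>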

对于只有两种可能取值的随机变量,其信息熵的特殊情形就是二值熵函数(通常取以2为底的对数,因此以香农(Sh)为单位):
:<math>H_{\mathrm{b}}(p) = - p \log_2 p - (1-p)\log_2 (1-p).</math>
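
下面的 Python 片段(仅作示意)直接实现二值熵函数,可以看到它关于 {{math|1=''p'' = 0.5}} 对称,并在该点取得最大值 1 比特:

<syntaxhighlight lang="python">
import math

def binary_entropy(p):
    """二值熵函数 H_b(p) = -p·log2(p) - (1-p)·log2(1-p),单位为香农(比特)。"""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

for p in (0.1, 0.3, 0.5, 0.7, 0.9):
    print(f"p={p}: H_b={binary_entropy(p):.4f}")
# H_b(0.1)=H_b(0.9)≈0.4690,H_b(0.3)=H_b(0.7)≈0.8813,H_b(0.5)=1.0000
</syntaxhighlight>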

===联合熵===

两个离散随机变量{{math|''X''}}和{{math|''Y''}}的'''<font color="#ff8000">联合熵 Joint Entropy</font>'''就是二元组{{math|(''X'', ''Y'')}}的熵。这意味着:若{{math|''X''}}和{{math|''Y''}}相互独立,则它们的联合熵就是各自熵的总和。

例如:如果{{math|(''X'', ''Y'')}}代表棋子的位置({{math|''X''}} 表示行和{{math|''Y''}}表示列),那么棋子所在位置的熵就是棋子行、列的联合熵。

:<math>H(X, Y) = \mathbb{E}_{X,Y} [-\log p(x,y)] = - \sum_{x, y} p(x, y) \log p(x, y) \,</math>
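
下面用一个小的联合分布(数值为假设)示意联合熵的计算,并验证当 ''X'' 与 ''Y'' 独立时有 {{math|1=''H''(''X'', ''Y'') = ''H''(''X'') + ''H''(''Y'')}}:

<syntaxhighlight lang="python">
import math

def entropy(probs):
    return -sum(p * math.log2(p) for p in probs if p > 0)

# 独立情形:p(x, y) = p(x)·p(y)
p_x = [0.5, 0.5]
p_y = [0.25, 0.75]
joint_indep = [px * py for px in p_x for py in p_y]

h_joint = entropy(joint_indep)        # 联合熵 H(X, Y)
print(h_joint)                        # ≈ 1.8113
print(entropy(p_x) + entropy(p_y))    # H(X) + H(Y),与上面相等
</syntaxhighlight>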

尽管符号相似,注意联合熵与交叉熵不能混淆。

===条件熵(含糊度)===

在给定随机变量{{math|''Y''}}的条件下,{{math|''X''}}的'''<font color="#ff8000">条件熵 Conditional Entropy</font>'''(或条件不确定性,也称为{{math|''X''}}关于{{math|''Y''}}的含糊度)是在{{math|''Y''}}上取平均的条件熵:<ref name=Ash>{{cite book | title = Information Theory | author = Robert B. Ash | publisher = Dover Publications, Inc. | origyear = 1965| year = 1990 | isbn = 0-486-66521-6 | url = https://books.google.com/books?id=ngZhvUfF0UIC&pg=PA16&dq=intitle:information+intitle:theory+inauthor:ash+conditional+uncertainty}}</ref>
:<math> H(X|Y) = \mathbb E_Y [H(X|y)] = -\sum_{y \in Y} p(y) \sum_{x \in X} p(x|y) \log p(x|y) = -\sum_{x,y} p(x,y) \log p(x|y).</math>

由于熵既可以以一个随机变量为条件,也可以以该随机变量取某个特定值为条件,因此应注意不要混淆条件熵的这两种定义(前者更为常用)。这种条件熵的一个基本性质是:

:<math>H(X|Y) = H(X,Y) - H(Y).\,</math>
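
下面的 Python 示例(联合分布的数值为假设)按定义计算条件熵,并用上面的性质 H(X|Y) = H(X,Y) - H(Y) 做交叉验证:

<syntaxhighlight lang="python">
import math

# 联合分布 p(x, y),以字典表示(数值仅为示例)
p_xy = {('x0', 'y0'): 0.4, ('x0', 'y1'): 0.1,
        ('x1', 'y0'): 0.1, ('x1', 'y1'): 0.4}

def entropy(probs):
    return -sum(p * math.log2(p) for p in probs if p > 0)

# 边缘分布 p(y)
p_y = {}
for (x, y), p in p_xy.items():
    p_y[y] = p_y.get(y, 0.0) + p

# 按定义计算 H(X|Y) = -Σ p(x,y)·log p(x|y)
h_x_given_y = -sum(p * math.log2(p / p_y[y]) for (x, y), p in p_xy.items() if p > 0)

h_xy = entropy(p_xy.values())   # 联合熵 H(X, Y)
h_y = entropy(p_y.values())     # 边缘熵 H(Y)
print(h_x_given_y, h_xy - h_y)  # 两种算法结果一致,约 0.7219
</syntaxhighlight>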

===互信息(转移信息)===

'''<font color="#ff8000">互信息 Mutual Information</font>'''度量的是通过观察一个随机变量所能获得的关于另一个随机变量的信息量。在通信中,它可以用来最大化发送信号与接收信号之间共享的信息量,因而十分重要。{{math|''X''}}相对于{{math|''Y''}}的互信息由以下公式给出:
:<math>I(X;Y) = \mathbb{E}_{X,Y} [SI(x,y)] = \sum_{x,y} p(x,y) \log \frac{p(x,y)}{p(x)\, p(y)}</math>

其中{{math|SI}}(Specific mutual Information,特定互信息)是点互信息(pointwise mutual information)。

互信息的一个基本属性是:
: <math>I(X;Y) = H(X) - H(X|Y).\,</math>

也就是说,在编码''X''的过程中,知道''Y''比不知道''Y''平均节省{{math|''I''(''X''; ''Y'')}}比特。

互信息是对称的:
: <math>I(X;Y) = I(Y;X) = H(X) + H(Y) - H(X,Y).\,</math>
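
下面的示意代码(沿用一个假设的联合分布)按定义计算互信息,并验证 I(X;Y) = H(X) + H(Y) - H(X,Y):

<syntaxhighlight lang="python">
import math

p_xy = {('x0', 'y0'): 0.4, ('x0', 'y1'): 0.1,
        ('x1', 'y0'): 0.1, ('x1', 'y1'): 0.4}

def entropy(probs):
    return -sum(p * math.log2(p) for p in probs if p > 0)

# 边缘分布
p_x, p_y = {}, {}
for (x, y), p in p_xy.items():
    p_x[x] = p_x.get(x, 0.0) + p
    p_y[y] = p_y.get(y, 0.0) + p

# 按定义:I(X;Y) = Σ p(x,y)·log[ p(x,y) / (p(x)p(y)) ]
mi = sum(p * math.log2(p / (p_x[x] * p_y[y])) for (x, y), p in p_xy.items() if p > 0)

h_x, h_y, h_xy = entropy(p_x.values()), entropy(p_y.values()), entropy(p_xy.values())
print(mi)                # ≈ 0.2781
print(h_x + h_y - h_xy)  # 与上式相等,体现互信息的对称性
</syntaxhighlight>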

互信息可以表示为:在给定''Y''取值的条件下''X''的后验概率分布,与''X''的先验分布之间的平均 Kullback–Leibler 散度(信息增益):
: <math>I(X;Y) = \mathbb E_{p(y)} [D_{\mathrm{KL}}( p(X|Y=y) \| p(X) )].</math>

换句话说,它度量的是:在得知''Y''的取值后,''X''上的概率分布平均会发生多大的变化。这个量也常被重新表述为从边缘分布之积到实际联合分布的散度:
: <math>I(X; Y) = D_{\mathrm{KL}}(p(X,Y) \| p(X)p(Y)).</math>

互信息与列联表中的似然比检验、多项分布以及皮尔森卡方检验密切相关:互信息可以视为评估一对变量之间独立性的统计量,并且具有明确指定的渐近分布。

===Kullback-Leibler 散度(信息增益)===

'''<font color="#ff8000">Kullback-Leibler 散度</font>'''(或信息散度、相对熵、信息增益)是比较两个分布的一种方法:一个是“真实的”概率分布''p(X)'',另一个是任意的概率分布''q(X)''。如果我们假设数据服从分布''q(X)''并据此进行压缩,而数据的真实分布其实是''p(X)'',那么 Kullback-Leibler 散度就是压缩每个数据平均需要额外付出的比特数。其定义为:
:<math>D_{\mathrm{KL}}(p(X) \| q(X)) = \sum_{x \in X} -p(x) \log {q(x)} \, - \, \sum_{x \in X} -p(x) \log {p(x)} = \sum_{x \in X} p(x) \log \frac{p(x)}{q(x)}.</math>

尽管KL散度有时被用作一种“距离度量”,但它并不是真正的度量(metric),因为它不具有对称性,也不满足三角不等式(因此它是一种半拟度量)。

对KL散度的另一种解释是:由先验知识与真实情况的偏差所带来的“不必要的惊讶”。假设将从一个离散集合中按概率分布''p(x)''随机抽取一个数''X''。如果Alice知道真实分布''p(x)'',而Bob所相信的(即他的先验)分布是''q(x)'',那么平均而言,在看到''X''的取值时,Bob会比Alice更惊讶。KL散度就是Bob的(主观)惊讶程度减去Alice的惊讶程度之后的(客观)期望值(若对数以2为底,则以比特为单位)。通过这种方式,Bob的先验“错得有多离谱”,就可以用它预期让他产生多少“不必要的惊讶”来量化。
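
下面的 Python 片段(分布 ''p''、''q'' 的数值为假设)按定义计算 KL 散度:它给出的正是用 ''q'' 去编码真实分布 ''p'' 时平均每个数据多付出的比特数,同时也可以看到两个方向的散度并不相等:

<syntaxhighlight lang="python">
import math

def kl_divergence(p, q):
    """D_KL(p||q) = Σ p(x)·log2[p(x)/q(x)],单位为比特;要求 p(x)>0 时 q(x)>0。"""
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

p = [0.5, 0.25, 0.25]   # “真实”分布 p(X)
q = [1/3, 1/3, 1/3]     # 假设的编码分布 q(X)

print(kl_divergence(p, q))  # ≈ 0.0850:用 q 压缩服从 p 的数据,平均每个数据多花约 0.085 比特
print(kl_divergence(q, p))  # ≈ 0.0817:两个方向不相等,KL 散度不是对称的度量
</syntaxhighlight>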

===其他度量===

信息论中其他重要的量包括'''<font color="#ff8000">瑞尼熵 Rényi Entropy</font>'''(熵的一种推广)、微分熵(把信息量推广到连续分布)以及条件互信息。

==编码理论==

[[File:CDSCRATCHES.jpg|thumb|right|在可读CD的表面上显示划痕的图片。音乐和数据CD使用纠错编码进行编码,因此即使它们有轻微的划痕,也可以通过错误检测和纠正来对CD进行读取。]]

'''<font color="#ff8000">编码理论 Coding Theory</font>'''是信息论最重要、最直接的应用之一,可以细分为'''<font color="#ff8000">信源编码理论 Source Coding Theory</font>'''和'''<font color="#ff8000">信道编码理论 Channel Coding Theory</font>'''。信息论使用统计学来量化描述数据所需的比特数,也就是源的信息熵。

* Error-correcting codes (channel coding): while data compression removes as much redundancy as possible, an error-correcting code adds just the kind of redundancy (that is, error correction) needed to transmit the data efficiently and faithfully across a noisy channel.
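
The source-coding half of this division can be illustrated numerically. The sketch below is only an illustration under arbitrary assumptions: the sample byte string and the choice of zlib as a general-purpose compressor are not taken from the article. It estimates the zeroth-order entropy of the data and compares it with the rate the compressor actually achieves.

<syntaxhighlight lang="python">
import math
import zlib
from collections import Counter

# An arbitrary, repetitive sample; any byte sequence would do for the illustration.
data = (b"information theory quantifies the number of bits needed "
        b"to describe the data " * 50)

# Empirical zeroth-order entropy of the source, in bits per byte.
counts = Counter(data)
n = len(data)
entropy = -sum(c / n * math.log2(c / n) for c in counts.values())

compressed = zlib.compress(data, 9)
rate = 8 * len(compressed) / n   # bits actually spent per input byte

print(f"zeroth-order entropy : {entropy:.3f} bits/byte")
print(f"zlib compressed rate : {rate:.3f} bits/byte")
# zlib beats the zeroth-order figure here because the sample is highly
# repetitive: the compressor exploits higher-order structure that a
# single-symbol entropy estimate ignores.
</syntaxhighlight>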

This division of coding theory into compression and transmission is justified by the information transmission theorems, or source–channel separation theorems that justify the use of bits as the universal currency for information in many contexts. However, these theorems only hold in the situation where one transmitting user wishes to communicate to one receiving user. In scenarios with more than one transmitter (the multiple-access channel), more than one receiver (the [[broadcast channel]]) or intermediary "helpers" (the [[relay channel]]), or more general [[computer network|networks]], compression followed by transmission may no longer be optimal. [[Network information theory]] refers to these multi-agent communication models.

===Source theory===

Any process that generates successive messages can be considered a {{em|[[Communication source|source]]}} of information. A memoryless source is one in which each message is an [[Independent identically distributed random variables|independent identically distributed random variable]], whereas the properties of [[ergodic theory|ergodicity]] and [[stationary process|stationarity]] impose less restrictive constraints. All such sources are [[stochastic process|stochastic]]. These terms are well studied in their own right outside information theory.

====Rate====

Information ''[[Entropy rate|rate]]'' is the average entropy per symbol. For memoryless sources, this is merely the entropy of each symbol, while, in the case of a stationary stochastic process, it is

:<math>r = \lim_{n \to \infty} H(X_n|X_{n-1},X_{n-2},X_{n-3}, \ldots);</math>

that is, the conditional entropy of a symbol given all the previous symbols generated. For the more general case of a process that is not necessarily stationary, the ''average rate'' is

:<math>r = \lim_{n \to \infty} \frac{1}{n} H(X_1, X_2, \dots X_n);</math>

that is, the limit of the joint entropy per symbol. For stationary sources, these two expressions give the same result.<ref>{{cite book | title = Digital Compression for Multimedia: Principles and Standards | author = Jerry D. Gibson | publisher = Morgan Kaufmann | year = 1998 | url = https://books.google.com/books?id=aqQ2Ry6spu0C&pg=PA56&dq=entropy-rate+conditional#PPA57,M1 | isbn = 1-55860-369-7 }}</ref>
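
For a stationary Markov source the agreement between the two expressions can be checked directly. The sketch below is illustrative; the two-state transition matrix is an arbitrary example. It computes the conditional-entropy form of the rate and shows that the per-symbol joint entropy approaches it as the block length grows.

<syntaxhighlight lang="python">
import numpy as np

def h(p_vec):
    """Shannon entropy in bits of a probability vector."""
    p_vec = np.asarray(p_vec, dtype=float)
    p_vec = p_vec[p_vec > 0]
    return float(-(p_vec * np.log2(p_vec)).sum())

# Transition matrix of a two-state Markov source (rows sum to 1); made up.
P = np.array([[0.9, 0.1],
              [0.4, 0.6]])

# Stationary distribution pi, satisfying pi = pi P.
evals, evecs = np.linalg.eig(P.T)
pi = np.real(evecs[:, np.argmin(np.abs(evals - 1.0))])
pi = pi / pi.sum()

# Conditional-entropy form of the rate: r = H(X_n | X_{n-1}) = sum_i pi_i H(P[i, :]).
rate = sum(pi[i] * h(P[i]) for i in range(len(pi)))

# For a stationary Markov chain the chain rule gives
# H(X_1, ..., X_n) = H(X_1) + (n - 1) * H(X_2 | X_1).
for n in (1, 10, 100, 1000):
    avg = (h(pi) + (n - 1) * rate) / n
    print(f"n = {n:5d}   (1/n) H(X_1..X_n) = {avg:.4f} bits/symbol")

print(f"entropy rate r = {rate:.4f} bits/symbol")
</syntaxhighlight>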

It is common in information theory to speak of the "rate" or "entropy" of a language. This is appropriate, for example, when the source of information is English prose. The rate of a source of information is related to its redundancy and how well it can be compressed, the subject of {{em|source coding}}.

===Channel capacity===

{{Main|Channel capacity}}

Communication over a channel, such as an [[ethernet]] cable, is the primary motivation of information theory. However, such channels often fail to produce exact reconstruction of a signal; noise, periods of silence, and other forms of signal corruption often degrade quality.

Consider the communications process over a discrete channel. A simple model of the process is shown below:

[[File:Channel model.svg|center|800px|Channel model]]

Here ''X'' represents the space of messages transmitted, and ''Y'' the space of messages received during a unit time over our channel. Let {{math|''p''(''y''{{pipe}}''x'')}} be the [[conditional probability]] distribution function of ''Y'' given ''X''. We will consider {{math|''p''(''y''{{pipe}}''x'')}} to be an inherent fixed property of our communications channel (representing the nature of the ''[[Signal noise|noise]]'' of our channel). Then the joint distribution of ''X'' and ''Y'' is completely determined by our channel and by our choice of {{math|''f''(''x'')}}, the marginal distribution of messages we choose to send over the channel. Under these constraints, we would like to maximize the rate of information, or the ''[[Signal (electrical engineering)|signal]]'', we can communicate over the channel. The appropriate measure for this is the mutual information, and this maximum mutual information is called the {{em|channel capacity}} and is given by:

:<math> C = \max_{f} I(X;Y).\! </math>

This capacity has the following property related to communicating at information rate ''R'' (where ''R'' is usually bits per symbol). For any information rate ''R < C'' and coding error ε > 0, for large enough ''N'', there exists a code of length ''N'' and rate ≥ R and a decoding algorithm, such that the maximal probability of block error is ≤ ε; that is, it is always possible to transmit with arbitrarily small block error. In addition, for any rate ''R > C'', it is impossible to transmit with arbitrarily small block error.

''[[Channel code|Channel coding]]'' is concerned with finding such nearly optimal codes that can be used to transmit data over a noisy channel with a small coding error at a rate near the channel capacity.
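
A simple way to see what channel coding is up against is to simulate the crudest possible code. The sketch below is illustrative only: it sends each bit through a binary symmetric channel with an arbitrarily chosen crossover probability and repeats it an odd number of times with majority-vote decoding. The error rate falls, but only because the rate 1/''n'' shrinks toward zero, whereas the noisy-channel coding theorem promises codes whose rate stays close to the capacity ''C'' while the error still becomes arbitrarily small.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
p = 0.1          # crossover probability of the binary symmetric channel (arbitrary)
trials = 20000   # Monte Carlo trials per repetition factor

for n in (1, 3, 5, 11):                       # each bit is repeated n times; rate = 1/n
    bits = rng.integers(0, 2, trials)
    flips = rng.random((trials, n)) < p       # which transmitted copies get flipped
    received = bits[:, None] ^ flips
    decoded = (received.sum(axis=1) * 2 > n).astype(int)   # majority vote
    error = np.mean(decoded != bits)
    print(f"rate 1/{n:<2d}  block error rate = {error:.4f}")
</syntaxhighlight>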

* A [[binary erasure channel]] (BEC) with erasure probability ''p'' is a binary-input, ternary-output channel. The possible channel outputs are 0, 1, and a third symbol 'e' called an erasure, which represents complete loss of information about an input bit. The capacity of the BEC is {{nowrap|1 − ''p''}} bits per channel use (a numerical check of this value appears below).
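
Capacities such as this one can be recovered numerically from the definition <math>C = \max_{f} I(X;Y)</math> above. The sketch below is a brute-force illustration rather than a practical method (the Blahut-Arimoto algorithm is the standard tool), and the probability 0.1 is an arbitrary choice. It maximizes the mutual information over binary input distributions by grid search and compares the result with the closed forms: 1 − ''H''<sub>b</sub>(''p'') for a binary symmetric channel with crossover probability ''p'' (a standard result) and 1 − ''p'' for the BEC.

<syntaxhighlight lang="python">
import numpy as np

def entropy(p_vec):
    p_vec = np.asarray(p_vec, dtype=float)
    p_vec = p_vec[p_vec > 0]
    return float(-(p_vec * np.log2(p_vec)).sum())

def mutual_information(px, channel):
    """I(X;Y) for an input distribution px and a channel matrix p(y|x)."""
    px = np.asarray(px, dtype=float)
    channel = np.asarray(channel, dtype=float)
    py = px @ channel                                   # output marginal
    h_y_given_x = sum(px[i] * entropy(channel[i]) for i in range(len(px)))
    return entropy(py) - h_y_given_x                    # I = H(Y) - H(Y|X)

def capacity_grid(channel, grid=2001):
    """Maximize I(X;Y) over binary input distributions by brute-force search."""
    return max(mutual_information([a, 1.0 - a], channel)
               for a in np.linspace(0.0, 1.0, grid))

p = 0.1  # crossover / erasure probability, chosen arbitrarily

bsc = [[1 - p, p],          # binary symmetric channel: outputs {0, 1}
       [p, 1 - p]]
bec = [[1 - p, p, 0.0],     # binary erasure channel: outputs {0, e, 1}
       [0.0, p, 1 - p]]

hb = entropy([p, 1 - p])    # binary entropy function H_b(p)
print(f"BSC: numeric {capacity_grid(bsc):.4f}   closed form {1 - hb:.4f}")
print(f"BEC: numeric {capacity_grid(bec):.4f}   closed form {1 - p:.4f}")
</syntaxhighlight>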

===Intelligence use and security applications===

Information theoretic concepts apply to cryptography and cryptanalysis. Turing's information unit, the [[Ban (unit)|ban]], was used in the [[Ultra]] project, breaking the German [[Enigma machine]] code and hastening the [[Victory in Europe Day|end of World War II in Europe]]. Shannon himself defined an important concept now called the [[unicity distance]]. Based on the redundancy of the [[plaintext]], it attempts to give a minimum amount of [[ciphertext]] necessary to ensure unique decipherability.
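
The order of magnitude of the unicity distance is easy to estimate. The sketch below uses common textbook figures rather than values from this article: a simple substitution cipher as the keyspace and roughly 1.5 bits per letter as the entropy of English. Under those assumptions, a few dozen ciphertext letters already suffice in principle to pin down the key.

<syntaxhighlight lang="python">
import math

# Unicity distance U ~ H(K) / D: key entropy divided by plaintext redundancy
# per character. The figures below are common textbook estimates, not values
# taken from this article.
key_entropy = math.log2(math.factorial(26))   # simple substitution cipher: 26! keys
bits_if_random = math.log2(26)                # about 4.70 bits per letter if letters were random
entropy_of_english = 1.5                      # rough per-letter entropy of English (assumed)
redundancy = bits_if_random - entropy_of_english

print(f"H(K) = {key_entropy:.1f} bits, redundancy D = {redundancy:.2f} bits/letter")
print(f"unicity distance = {key_entropy / redundancy:.0f} letters of ciphertext (roughly)")
</syntaxhighlight>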

Information theory leads us to believe it is much more difficult to keep secrets than it might first appear. A [[brute force attack]] can break systems based on [[public-key cryptography|asymmetric key algorithms]] or on most commonly used methods of [[symmetric-key algorithm|symmetric key algorithms]] (sometimes called secret key algorithms), such as [[block cipher]]s. The security of all such methods currently comes from the assumption that no known attack can break them in a practical amount of time.

[[Information theoretic security]] refers to methods such as the [[one-time pad]] that are not vulnerable to such brute force attacks. In such cases, the positive conditional mutual information between the plaintext and ciphertext (conditioned on the [[key (cryptography)|key]]) can ensure proper transmission, while the unconditional mutual information between the plaintext and ciphertext remains zero, resulting in absolutely secure communications. In other words, an eavesdropper would not be able to improve his or her guess of the plaintext by gaining knowledge of the ciphertext but not of the key. However, as in any other cryptographic system, care must be used to correctly apply even information-theoretically secure methods; the [[Venona project]] was able to crack the one-time pads of the Soviet Union due to their improper reuse of key material.
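
The claim about mutual information can be checked exactly for a single bit. In the sketch below (a toy model; the biased message distribution is an arbitrary choice), the key bit is uniform and independent of the message, and the ciphertext is their XOR, a one-bit one-time pad. The computation gives a mutual information of zero between message and ciphertext, while the conditional mutual information given the key equals the message entropy.

<syntaxhighlight lang="python">
import itertools
import math

def H(dist):
    """Entropy in bits of a dictionary mapping outcomes to probabilities."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

# Message bit M with an arbitrary bias; key bit K uniform and independent of M.
p_m = {0: 0.8, 1: 0.2}
p_k = {0: 0.5, 1: 0.5}

# Joint distribution of (M, K, C) with C = M XOR K (a one-bit one-time pad).
joint = {(m, k, m ^ k): p_m[m] * p_k[k]
         for m, k in itertools.product(p_m, p_k)}

def marginal(indices):
    """Marginal distribution over the given coordinate positions of (M, K, C)."""
    out = {}
    for outcome, prob in joint.items():
        key = tuple(outcome[i] for i in indices)
        out[key] = out.get(key, 0.0) + prob
    return out

H_M, H_K, H_C = H(marginal((0,))), H(marginal((1,))), H(marginal((2,)))
I_MC = H_M + H_C - H(marginal((0, 2)))                  # I(M;C)
# I(M;C|K) = H(M|K) + H(C|K) - H(M,C|K), each term via the chain rule.
I_MC_given_K = (H(marginal((0, 1))) - H_K) \
             + (H(marginal((1, 2))) - H_K) \
             - (H(joint) - H_K)

print(f"H(M)     = {H_M:.4f} bits")
print(f"I(M;C)   = {I_MC:.4f} bits   # the ciphertext alone reveals nothing")
print(f"I(M;C|K) = {I_MC_given_K:.4f} bits   # with the key, C determines M")
</syntaxhighlight>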

===Pseudorandom number generation===

[[Pseudorandom number generator]]s are widely available in computer language libraries and application programs. They are, almost universally, unsuited to cryptographic use as they do not evade the deterministic nature of modern computer equipment and software. A class of improved random number generators is termed [[cryptographically secure pseudorandom number generator]]s, but even they require [[random seed]]s external to the software to work as intended. These can be obtained via [[Extractor (mathematics)|extractors]], if done carefully. The measure of sufficient randomness in extractors is [[min-entropy]], a value related to Shannon entropy through [[Rényi entropy]]; Rényi entropy is also used in evaluating randomness in cryptographic systems. Although related, the distinctions among these measures mean that a random variable with high Shannon entropy is not necessarily satisfactory for use in an extractor and so for cryptography uses.
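
The distinction between the measures can be seen on a deliberately skewed distribution. The sketch below is illustrative; the distribution is invented for the example. Its Shannon entropy is several bits, yet its min-entropy, which is what an extractor can rely on, is less than one bit.

<syntaxhighlight lang="python">
import math

# A skewed source: one very likely outcome plus many rare ones.
# The numbers are arbitrary, chosen only to make the contrast visible.
n_rare = 1 << 16
p_top = 0.6
dist = [p_top] + [(1.0 - p_top) / n_rare] * n_rare

shannon = -sum(p * math.log2(p) for p in dist)
min_entropy = -math.log2(max(dist))   # set by the single most likely outcome

print(f"Shannon entropy : {shannon:.2f} bits")
print(f"min-entropy     : {min_entropy:.2f} bits")
# An adversary guessing the most likely outcome succeeds 60% of the time,
# so the source offers well under one bit of min-entropy per sample even
# though its Shannon entropy is large.
</syntaxhighlight>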

===Seismic exploration===

One early commercial application of information theory was in the field of seismic oil exploration. Work in this field made it possible to strip off and separate the unwanted noise from the desired seismic signal. Information theory and [[digital signal processing]] offer a major improvement of resolution and image clarity over previous analog methods.<ref>{{cite journal|doi=10.1002/smj.4250020202 | volume=2 | issue=2 | title=The corporation and innovation | year=1981 | journal=Strategic Management Journal | pages=97–118 | last1 = Haggerty | first1 = Patrick E.}}</ref>

===Semiotics===

[[Semiotics|Semioticians]] [[:nl:Doede Nauta|Doede Nauta]] and [[Winfried Nöth]] both considered [[Charles Sanders Peirce]] as having created a theory of information in his works on semiotics.<ref name="Nauta 1972">{{cite book |ref=harv |last1=Nauta |first1=Doede |title=The Meaning of Information |date=1972 |publisher=Mouton |location=The Hague |isbn=9789027919960}}</ref>{{rp|171}}<ref name="Nöth 2012">{{cite journal |ref=harv |last1=Nöth |first1=Winfried |title=Charles S. Peirce's theory of information: a theory of the growth of symbols and of knowledge |journal=Cybernetics and Human Knowing |date=January 2012 |volume=19 |issue=1–2 |pages=137–161 |url=https://edisciplinas.usp.br/mod/resource/view.php?id=2311849}}</ref>{{rp|137}} Nauta defined semiotic information theory as the study of "the internal processes of coding, filtering, and information processing."<ref name="Nauta 1972"/>{{rp|91}}

Concepts from information theory such as redundancy and code control have been used by semioticians such as [[Umberto Eco]] and [[:it:Ferruccio Rossi-Landi|Ferruccio Rossi-Landi]] to explain ideology as a form of message transmission whereby a dominant social class emits its message by using signs that exhibit a high degree of redundancy such that only one message is decoded among a selection of competing ones.<ref>Nöth, Winfried (1981). "[https://kobra.uni-kassel.de/bitstream/handle/123456789/2014122246977/semi_2004_002.pdf?sequence=1&isAllowed=y Semiotics of ideology]". ''Semiotica'', Issue 148.</ref>

===Other applications===

Information theory also has applications in [[Gambling and information theory|gambling]], [[black hole information paradox|black holes]], and [[bioinformatics]].

==See also==
* [[算法概率|Algorithmic probability]]

==Editor's recommendations==
[[File:Last1.png|400px|thumb|right|[https://swarma.org/?p=13364 用神经学习模型计算海量实际网络中的节点中心性度量 | 论文速递1篇|集智俱乐部]]]
===Recommended articles from Swarma===
====[https://swarma.org/?p=21423 计算美学前沿速递:用信息论“重新发现”风景画艺术史]====
A central question in the study of art is whether paintings from different periods and schools share a similar organizational structure. In a paper published in PNAS in October 2020, researchers used information theory and network analysis to examine 14,912 Western landscape paintings by 1,476 painters from 61 countries, and confirmed this hypothesis.
<br/><br/>

====[https://swarma.org/?p=21423 计算美学前沿速递:用信息论“重新发现”风景画艺术史]====
In an ecosystem of plants and herbivorous insects, species adapt to one another through their interactions and form a mutually influencing co-adaptive system. A recent study in Science examines the evolutionary arms race of plant-insect coevolution from the perspective of odor information, which is instructive for studying communication mechanisms inside ecological networks. A commentary in the same issue offers a fresh answer to this question; the recommended article is a translation of that commentary.


<br/>
----
本中文词条由 参与编译, [[用户:Qige96|Qige96]]、[[用户:Ricky|Ricky]] 审校,[[用户:不是海绵宝宝|不是海绵宝宝]]编辑,欢迎在讨论页面留言。

'''本词条内容源自wikipedia及公开资料,遵守 CC3.0协议。'''
[[分类: 信息时代]] [[分类: 正规科学]] [[分类: 控制论]] [[分类: 计算机科学]]