第216行:
The mechanism for a recursively self-improving set of algorithms differs from an increase in raw computation speed in two ways. First, it does not require external influence: machines designing faster hardware would still require humans to create the improved hardware, or to program factories appropriately.{{citation needed|date=July 2017}} An AI rewriting its own source code could do so while contained in an [[AI box]].

递归自我改进算法集的机制在两个方面不同于原始计算速度的提高。首先，它不需要外部影响：设计更快硬件的机器仍然需要人类来制造改进后的硬件，或者对工厂进行适当的编程。而AI可以既身处一个AI盒（AI box）里面，又同时改写自己的源代码。

Second, as with [[Vernor Vinge]]’s conception of the singularity, it is much harder to predict the outcome. While speed increases seem to be only a quantitative difference from human intelligence, actual algorithm improvements would be qualitatively different. [[Eliezer Yudkowsky]] compares it to the changes that human intelligence brought: humans changed the world thousands of times more rapidly than evolution had done, and in totally different ways. Similarly, the evolution of life was a massive departure and acceleration from the previous geological rates of change, and improved intelligence could cause change to be as different again.<ref name="yudkowsky">{{cite web|author=Eliezer S. Yudkowsky |url=http://yudkowsky.net/singularity/power |title=Power of Intelligence |publisher=Yudkowsky |accessdate=2011-09-09}}</ref>

第349行:
According to Eliezer Yudkowsky, a significant problem in AI safety is that unfriendly artificial intelligence is likely to be much easier to create than friendly AI. While both require large advances in recursive optimisation process design, friendly AI also requires the ability to make goal structures invariant under self-improvement (or the AI could transform itself into something unfriendly) and a goal structure that aligns with human values and does not automatically destroy the human race. An unfriendly AI, on the other hand, can optimize for an arbitrary goal structure, which does not need to be invariant under self-modification. Bill Hibbard (2014) proposes an AI design that avoids several dangers including self-delusion, unintended instrumental actions, and corruption of the reward generator.[84] He also discusses social impacts of AI and testing AI. His 2001 book Super-Intelligent Machines advocates the need for public education about AI and public control over AI. It also proposed a simple design that was vulnerable to corruption of the reward generator.

按照[[Eliezer Yudkowsky]]的观点，人工智能安全的一个重要问题是，不友好的人工智能可能比友好的人工智能更容易创建。虽然两者都需要在递归优化过程的设计上取得重大进展，但友好的人工智能还需要有能力使目标结构在自我改进过程中保持不变（否则人工智能可能把自己转变成不友好的东西），并需要一个与人类价值观相一致且不会自动毁灭人类的目标结构。另一方面，不友好的人工智能可以针对任意的目标结构进行优化，而该目标结构不需要在自我修改过程中保持不变。Bill Hibbard（2014）提出了一种人工智能设计，可以避免自我欺骗、无意的工具性行为和奖励生成器被破坏等多种危险。他还讨论了人工智能的社会影响以及对人工智能的测试。他在2001年出版的《超级智能机器》（Super-Intelligent Machines）一书中提倡对公众进行人工智能教育，并让公众对人工智能实施控制。该书还提出了一个简单的设计，但该设计容易受到奖励生成器被破坏的影响。

===Next step of sociobiological evolution 社会生物进化的下一步===
第371行:
In addition, some argue that we are already in the midst of a [[The Major Transitions in Evolution|major evolutionary transition]] that merges technology, biology, and society. Digital technology has infiltrated the fabric of human society to a degree of indisputable and often life-sustaining dependence.

此外，有人认为，我们已经处在一次融合了技术、生物与社会的进化巨变（major evolutionary transition）之中。数字技术已经无可争辩地渗透到人类社会的结构中，而且生命的维持常常依赖数字技术。

第472行:
Beyond merely extending the operational life of the physical body, [[Jaron Lanier]] argues for a form of immortality called "Digital Ascension" that involves "people dying in the flesh and being uploaded into a computer and remaining conscious".<ref>{{cite book |title = You Are Not a Gadget: A Manifesto |last = Lanier |first = Jaron |author-link = Jaron Lanier |publisher = [[Alfred A. Knopf]] |year = 2010 |isbn = 978-0307269645 |location = New York, NY |page = [https://archive.org/details/isbn_9780307269645/page/26 26] |url-access = registration |url = https://archive.org/details/isbn_9780307269645 }}</ref>

除了仅仅延长物质身体的运行寿命之外，[[Jaron Lanier]]还主张一种称为“数字提升”（Digital Ascension）的不朽形式，即“人在肉体层面死亡，意识被上传到电脑里并保持有意识”。

第484行:
An early description of the idea was made in [[John Wood Campbell Jr.]]'s 1932 short story "The Last Evolution".

1932年[[约翰·伍德·坎贝尔]]的短篇小说《最后的进化》（The Last Evolution）对这一想法作了早期的描述。

In his 1958 obituary for [[John von Neumann]], Ulam recalled a conversation with von Neumann about the "ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue."<ref name=mathematical/>